By better understanding user context and intent, Sensor Platforms' FreeMotion Library of algorithmic software now enables context-aware applications on mobile devices to proactively engage with the user, rather than merely interact. The context-aware capability of the FreeMotion Library will be in beta this quarter and in production early next year.

Context awareness results from a layer of sophisticated algorithms that interpret sensor data to infer higher-level information: whether the device is in motion, how it is being carried, the posture of the user, and the mode of transportation, such as train, auto, or airplane. These algorithms distill a context, for example "the user is walking," into a set of characteristic features in the sensor data. The presence of these and other features tells the context-detection algorithm whether the "walking" context is valid. More than one context is usually valid at the same time; for example, a user could be walking with his phone in his pocket.

Context awareness, as presented in the examples above, builds on, but requires more than, sensor fusion: combining the outputs of two or more sensors recording a common event so that the fused result captures the event better than any single sensor input alone.

Because these sensors must work continuously in the background on mobile devices, preserving battery life has to be a core principle of any context-detection architecture. To address that, a proprietary layered framework conserves the sensor and computation power required to understand user contexts. The context-aware framework thus preserves battery life and, according to the company, can actually help prolong it by allowing more aggressive system power management.
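The feature-based detection described above, in which characteristic features in sensor data validate one or more simultaneous contexts, can be sketched roughly as follows. This is an illustrative approximation, not Sensor Platforms' actual FreeMotion API; the feature names, thresholds, and context labels are assumptions made for the example.

```python
def extract_features(accel_samples):
    """Compute simple characteristic features from accelerometer magnitudes.

    These two features (signal variance and zero-crossing count) are
    hypothetical stand-ins for the richer features a real library would use.
    """
    n = len(accel_samples)
    mean = sum(accel_samples) / n
    variance = sum((x - mean) ** 2 for x in accel_samples) / n
    # Zero crossings around the mean serve as a crude step-rhythm proxy.
    crossings = sum(
        1 for a, b in zip(accel_samples, accel_samples[1:])
        if (a - mean) * (b - mean) < 0
    )
    return {"variance": variance, "crossings": crossings}


def detect_contexts(features):
    """Return every context whose characteristic features are present.

    Note that more than one context can be valid at the same time, e.g.
    both "walking" and "in_motion".  Thresholds here are illustrative.
    """
    contexts = set()
    if features["variance"] > 0.5:
        contexts.add("in_motion")
        if features["crossings"] >= 4:  # rhythmic motion suggests steps
            contexts.add("walking")
    if features["variance"] <= 0.05:
        contexts.add("stationary")
    return contexts
```

A rhythmic, high-variance signal would report both "walking" and "in_motion", while a flat signal would report only "stationary", matching the point that several contexts can hold at once.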
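The layered, power-conserving framework mentioned above is proprietary, but the general idea of gating expensive processing behind a cheap always-on check can be sketched as follows. All function names and thresholds here are hypothetical illustrations, not the company's implementation: a low-rate, low-power motion check runs continuously, and the high-rate sensing and heavy feature extraction run only when that check reports motion.

```python
def cheap_motion_check(accel_magnitude_delta):
    """Layer 1: a low-rate accelerometer check that is always on and
    very cheap to run.  The 0.1 threshold is an illustrative assumption."""
    return accel_magnitude_delta > 0.1


def expensive_classification(read_high_rate_sensors):
    """Layer 2: high-rate sensor reads plus heavier feature extraction.
    Only invoked when the cheap layer has already detected motion."""
    samples = read_high_rate_sensors()
    # A crude peak-to-peak test stands in for real classification logic.
    return "walking" if max(samples) - min(samples) > 1.0 else "in_motion"


def classify(accel_magnitude_delta, read_high_rate_sensors):
    """Run the expensive layer only when the cheap layer reports motion,
    so high-rate sensing and computation are skipped while stationary."""
    if not cheap_motion_check(accel_magnitude_delta):
        return "stationary"  # expensive sensors never powered up
    return expensive_classification(read_high_rate_sensors)
```

The power saving comes from the gating itself: while the device sits still, the high-rate sensor path is never exercised, which is one concrete way a context framework can leave room for more aggressive system power management.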