
Beyond sensor fusion: Making smartphones smarter

Mon, 12/17/2012 - 4:09pm
Ian Chen, Executive Vice President, Sensor Platforms, Inc., www.sensorplatforms.com

Sensor fusion is now integrated into most smartphones and tablets, enabling many mobile apps. But consumers want more: they want their mobile devices to be even smarter without having to learn any new interfaces themselves. This article provides an overview of a new class of sensor applications that go beyond sensor fusion, using sensor data to interpret user contexts and thus open new possibilities for smart electronics.

In our most recent survey of the app stores, we found that smartphone and tablet users have downloaded many hundreds of different mobile apps that use MEMS sensors: apps that let users monitor their motions, find their compass headings, and play games by moving their devices. These apps share a common theme: they demand the active involvement of their users. However, we found that average users interact with their smartphones only about 6 percent of their waking day. The opportunity to make smartphones smarter, on the other hand, lies in using sensors 100 percent of the time.

Sensors measure more than just user motions. They capture muscle resonances, breathing rhythms, magnetic interference, subtle pressure changes from HVAC systems, and many other events of which users are unaware. Advanced algorithms that go beyond merely fusing these data can interpret identifying features in these signals to learn about the user’s context.

For example, the biomechanical rhythms of a human are quite distinct from the vibrations and movements of a car. Consequently, context-aware algorithms can distinguish whether a phone is being carried on a person, or is somewhere else, from the characteristics of the motion patterns that sensors record. Users who have misplaced their phones somewhere in their house know that phone finder apps that merely return the GPS location of the house are not very helpful. With context awareness, the smartphone can inform the phone finder that the user last had the phone at eight o’clock Saturday morning, sitting down and using the crossword puzzle app on the phone – or that the phone fell out of his pocket at three o’clock in the afternoon while he was sitting in a car.
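
As a simple illustration, the Python sketch below shows how basic accelerometer statistics might separate a phone carried on a person from one left stationary or riding in a vehicle. The thresholds and labels are invented for illustration; this is not Sensor Platforms’ actual algorithm.

import numpy as np

def carry_state(accel_window, fs=50.0):
    # accel_window: N x 3 array of accelerometer samples (m/s^2)
    mag = np.linalg.norm(accel_window, axis=1)
    mag = mag - mag.mean()                 # remove gravity / DC offset
    if mag.std() < 0.05:                   # hypothetical stillness threshold
        return "stationary"
    spectrum = np.abs(np.fft.rfft(mag))
    freqs = np.fft.rfftfreq(mag.size, d=1.0 / fs)
    peak = freqs[spectrum[1:].argmax() + 1]  # dominant non-DC frequency
    # Human gait concentrates energy near 1-2 Hz; vehicle vibration is
    # lower in amplitude and spread across other frequencies.
    return "on_person" if 0.5 <= peak <= 3.0 else "in_vehicle"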

Context awareness goes far beyond today’s sensor fusion algorithms, not least because it requires some of a device’s sensors to be continuously active. Implementing context awareness in a mobile device therefore requires a power-conscious architectural foundation.

One software library, for example, creates a layered framework to conserve the sensor and computation power required to understand user contexts (see Figure 1).

Computation for context detection occurs primarily in two stages: feature extraction and context classification. A feature is a piece of sensor data that contains special characteristics, hidden among everything the raw sensor signals record. Feature extraction distills the raw sensor data, removing the extraneous signals to uncover these characteristic patterns. Context classifiers then take in the extracted features and determine whether the combination of features present indicates a particular context.
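
A minimal sketch of these two stages might look like the following; the feature set and classification rules here are illustrative stand-ins, not the library’s actual implementation.

import numpy as np

def extract_features(window):
    # Distill a raw 3-axis sensor window into a handful of characteristics.
    mag = np.linalg.norm(window, axis=1)
    return {"mean": mag.mean(), "std": mag.std(), "range": np.ptp(mag)}

def classify_context(features):
    # Map the combination of features present to a context label.
    if features["std"] < 0.05:
        return "device_at_rest"
    if features["std"] > 1.0 and features["range"] > 3.0:
        return "user_walking"
    return "user_handling_device"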

The greater the variety of contexts a device is aware of, the more features need to be extracted. Computation costs power, always a scarce resource on mobile platforms. To conserve power in this framework, a robust software library can provide an efficient algorithm that takes a broad, high-level view of incoming sensor signals and identifies significant changes in the sensor data that suggest the user context may be changing. While the context is stable, only the small subset of feature extractors needed to verify the current context has to be active. Only when the context appears to be changing does the algorithm call on additional computing resources to identify the new context.
 
User contexts change infrequently – a walking user, for example, changes context only when he stops walking.
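
This power-saving structure can be sketched as a two-tier loop, in which a cheap change detector runs on every sensor window and the expensive classification path is invoked only when it fires. The function names here are hypothetical.

def context_engine(sensor_stream, change_suspected, classify):
    # Two-tier loop: a lightweight monitor inspects every incoming window;
    # the costly feature-extraction/classification path runs only when
    # that monitor flags a possible change of context.
    context = None
    for window in sensor_stream:
        if context is None or change_suspected(window, context):
            context = classify(window)   # expensive path, rarely taken
        yield context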

Context-aware apps need an application programming interface (API) that transparently handles which sensors must be engaged for a given context, along with the motion kinematics and biomechanics involved, allowing developers to focus on usability. For example, a power management app may want to know when a user is sitting so that it can turn off GPS refreshes, without having to be concerned with sensor configurations.
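
A hypothetical API along these lines (all names invented for illustration) might let an app express that intent directly: the app states which contexts it cares about, and the framework handles duty-cycling the sensors.

class ContextManager:
    # Hypothetical context API: apps subscribe to context events; the
    # framework beneath this layer decides which sensors to run, at what
    # rates, and which feature extractors to keep active.
    def __init__(self):
        self._subscribers = {}

    def subscribe(self, context_name, callback):
        self._subscribers.setdefault(context_name, []).append(callback)

    def _publish(self, context_name):    # invoked by the framework
        for cb in self._subscribers.get(context_name, []):
            cb(context_name)

# A power management app reasons about contexts, never sensor registers:
ctx = ContextManager()
ctx.subscribe("user_sitting", lambda c: print("pausing GPS refreshes"))
ctx.subscribe("user_walking", lambda c: print("resuming GPS refreshes"))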

The MEMS Industry Group (http://www.memsindustrygroup.org/) recently reported a massive proliferation of MEMS devices across a broad range of applications: from mobile handsets and tablets to health/medical monitors, automotive safety systems, the smart grid, gaming, and robotics. Making all these devices context-aware promises new and interesting collaborations among them, so that these smart devices can better support people’s lives.
