
Design Talk: Software

Mon, 11/16/2009 - 6:09am

Mobile Virtualization – Mobile Design Engineers’ and Developers’ Friend

By Robert C. McCammon, Vice President of Product Management, Open Kernel Labs

Today's mobile phones boast computing capabilities once found in mainframe computers and workstations. Mobile CPU clocks run at hundreds of MHz, and mobile 32-bit processors access gigabytes of memory. Additionally, mobile network connections stream data at broadband speeds, and mobile versions of enterprise platforms such as Linux and Windows run shrink-wrap applications.
So it should surprise no one that today's mobile phones can also host mobile virtualization platforms with a range of accompanying benefits.

Mobile phone virtualization gives design engineers a powerful new tool to address a variety of device development challenges. It builds in security, helps extend application longevity, and lets handset OEMs consolidate hardware and software by enabling multiple OSes to run on a single ARM core. Mobile virtualization also enables integrators, mobile network operators, and other ecosystem participants to sustain legacy code, support trusted computing, and enable other use cases.

Despite obvious similarities between enterprise/desktop virtualization and its mobile counterpart, mobile phone use cases present key differences. Although mobile device specifications today match those of blades and white-box computers, and mobile devices boast open, rich OSes capable of running and managing freestanding applications, mobile development practices still dictate tight vertical integration and control, in contrast to the looser horizontal approaches employed in the enterprise market.

In addition to reliability and performance, mobile platform requirements include:
• The need for a seamless, consumer-grade experience versus normal IT rollout hiccups
• Limited memory, storage, and CPU horsepower in fixed-function systems versus easy upgrades to blades, servers, and desktops
• Challenges to re-flashing fielded hardware versus periodic updates/upgrades by IT staff
• Self-management by end users versus access to IT teams and call centers
• Real-time response and other real-world, mission-critical performance requirements

But mobile phone virtualization can provide a unique opportunity for developers to ditch their dual roles as systems programmers and applications programmers, by freeing them from the drudgery of tasks such as legacy migration, cross-platform device support, and integrating unreliable and untrusted code.

Legacy migration
Porting and supporting Real-Time OS (RTOS) code on new platforms is hard to perform in-house, and expensive and tricky to outsource. Mobile virtualization turns this endless migration chore into a modular integration task: developers and integrators can preserve legacy code intact, in situ on its legacy RTOS, inside a Virtual Machine (VM).
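To make the idea concrete, the sketch below shows how a legacy RTOS guest and a rich-OS guest might be described to a mobile hypervisor. The hv_guest_cfg structure and its fields are purely illustrative assumptions, not the configuration interface of any particular product:

/* Hypothetical sketch: describing guest VMs to a mobile hypervisor.
 * The hv_guest_cfg type is invented for illustration only. */
#include <stdio.h>

typedef struct {
    const char *name;     /* guest label                     */
    const char *image;    /* kernel/RTOS image to boot       */
    unsigned    ram_mb;   /* RAM carved out for this guest   */
    unsigned    priority; /* scheduling priority on the core */
} hv_guest_cfg;

int main(void)
{
    /* The legacy stack stays intact on its original RTOS... */
    hv_guest_cfg legacy = { "rtos-baseband", "legacy_rtos.img", 16, 10 };
    /* ...while the open, rich OS hosts user applications on the same core. */
    hv_guest_cfg apps   = { "linux-apps",    "linux.img",       96,  5 };

    printf("guest %s: %u MB, prio %u\n", legacy.name, legacy.ram_mb, legacy.priority);
    printf("guest %s: %u MB, prio %u\n", apps.name, apps.ram_mb, apps.priority);
    return 0;
}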

Cross-platform device support
Mobile developers are often required to write the same driver over and over again for different OSes. Mobile virtualization can enable a true write-once, run-anywhere experience for cross-platform device support by letting applications in one guest OS leverage tested and working drivers resident in another VM.
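As a rough illustration of how driver sharing can work, the self-contained sketch below has a stub driver in one guest forward read requests to the VM that owns the real, tested driver. The drv_msg layout and vchan_* transport are assumptions standing in for whatever inter-VM channel a given hypervisor actually provides; here they simply simulate a reply from a fake device:

/* Hypothetical sketch: a stub driver in one guest forwarding I/O to a
 * tested driver living in another VM. vchan_send()/vchan_recv() are
 * invented stand-ins for a real inter-VM transport. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

enum { DRV_READ = 1 };

typedef struct {
    uint32_t op;      /* request type            */
    uint32_t offset;  /* device offset           */
    uint32_t len;     /* payload length in bytes */
} drv_msg;

/* Simulated transport: the "driver VM" answers from a fake device buffer. */
static uint8_t fake_device[256];
static drv_msg pending;

static int vchan_send(const drv_msg *req) { pending = *req; return 0; }
static int vchan_recv(void *out, uint32_t len)
{
    memcpy(out, fake_device + pending.offset, len);  /* driver VM replies */
    return (int)len;
}

/* The guest OS sees an ordinary driver entry point; the tested driver
 * in the other VM does the real work. */
int virt_dev_read(uint32_t offset, void *out, uint32_t len)
{
    drv_msg req = { DRV_READ, offset, len };
    if (vchan_send(&req) < 0) return -1;
    return vchan_recv(out, len);
}

int main(void)
{
    uint8_t buf[4];
    memcpy(fake_device + 8, "OK!", 4);
    virt_dev_read(8, buf, sizeof buf);
    printf("read: %s\n", (char *)buf);
    return 0;
}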

Integrating unreliable and untrusted code
Developers and software engineering managers confront the buy-versus-build conundrum on every project. Widely available open source software hasn't made the decision any easier; rather, it has added more choices and variables to the mix. Using software from independent software vendors, open source projects, and other third parties presents developers with three unavoidable risks: the cost and effort to integrate the code; its maturity, reliability, and fault resilience; and its susceptibility to exploits and security threats, by itself or in tandem with other system components.

Mobile virtualization helps developers meet this triple threat and keep projects on schedule by easing integration of third-party code as well as legacy code. It enhances both system reliability and trustworthiness by isolating questionable code within separate VMs run by a smaller, more manageable trusted computing base. When and if third-party code misbehaves, its impact is limited to the VM containing it. Watchdogs can restart guest environments, and trusted code can shut down or throttle offending VMs in the face of runaway execution or denial-of-service attacks.
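A minimal sketch of that watchdog pattern follows, assuming hypothetical hv_query/hv_restart/hv_throttle hooks in place of a real hypervisor's control interface:

/* Hypothetical sketch: a watchdog in the trusted computing base that
 * restarts or throttles a misbehaving guest VM. The hv_* hooks are
 * invented placeholders; each hypervisor exposes its own controls. */
#include <stdio.h>

typedef enum { VM_OK, VM_HUNG, VM_RUNAWAY } vm_state;

static vm_state hv_query(int vm)    { (void)vm; return VM_HUNG; } /* simulated */
static void     hv_restart(int vm)  { printf("restarting VM %d\n", vm); }
static void     hv_throttle(int vm) { printf("throttling VM %d\n", vm); }

static void watchdog_tick(int vm)
{
    switch (hv_query(vm)) {
    case VM_HUNG:    hv_restart(vm);  break;  /* guest stopped responding  */
    case VM_RUNAWAY: hv_throttle(vm); break;  /* cap CPU to contain a DoS  */
    case VM_OK:      break;                   /* fault stays inside the VM */
    }
}

int main(void) { watchdog_tick(1); return 0; }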

Balancing transparency and control
Mobile phone virtualization will probably never be as transparent as virtualization is in enterprise IT. Mobile device development will always demand greater control, both in-house and through distribution channels. But virtualization does empower developers to save time and effort, and get back to their primary job – making mobile devices more practical, user-friendly, and successful in the global marketplace.

 

Future-proof Your OS Selection

By Kerry Johnson, Director of Product Management, QNX Software Systems, kjohnson@qnx.com

The view of an embedded device as a fixed-function, standalone unit is changing dramatically. Devices are becoming network connected, with support for remote control, status monitoring, and in-field software upgrades. Devices currently thought of as simple are becoming complex — witness the rise of intelligent refrigerators and washing machines.

When determining how best to add software intelligence to their products, embedded developers need to consider the merits of moving from a software solution built entirely in-house to one that uses an off-the-shelf operating system (OS). As part of this process, developers must look beyond current requirements. Demands for human machine interfaces, data storage, wireless networking, and other capabilities are evolving quickly, creating the need for flexible system designs — and for OSs that make such designs easier to achieve.

To put this decision in context, let us consider some key trends.
• Increasing device complexity — As devices become connected to one another and to the Cloud, support for interfaces such as Bluetooth, WiFi, and Zigbee introduces new levels of software complexity. To delay obsolescence, a device may also support in-field software upgrades. This, in turn, requires more software.
• Ease of use — Software complexity cannot be passed along to the end user. Devices must remain easy to use, or customers will leave them on the shelf. In mission-critical environments, meanwhile, there is little or no room for human errors caused by complex user interfaces. To make the user experience more compelling and intuitive, new user interface and graphics technologies are finding their way into embedded devices.
• Increasing processing power through multicore processors — Connectivity, software intelligence, and improved graphics combine to tax the embedded processor. Multicore processors in their various forms are becoming the common way to achieve more computing capacity within an embedded power budget. Software that leverages parallel processing serves as a key enabling technology for these new devices.
• Safety and security — As devices grow more connected and complex, so do concerns for safety and security. These concerns go beyond simple network security and often require tamper-proof software designs. System reliability serves as a key indicator of device safety. Recently, more attention has been given to certifications in these areas.

The trends described above illustrate the importance of building systems that can evolve, adapt, and scale. Although deterministic real-time response and low OS overhead remain important, it has become equally important to consider a range of technologies.

User interface and graphics
The Apple iPhone demonstrated that a small form factor device can support a rich, intuitive user interface complete with multi-touch screens, gestures, smooth screen transitions, and animations. Economically, it isn’t feasible to create these appealing, easy-to-use features without support from the OS. The OS must provide hardware-accelerated 2D and 3D graphics capabilities and high-level user interface design tools. Also, to create a sophisticated user interface, developers must often combine multiple graphics technologies such as Adobe Flash, native graphics applications, HTML content, and video on the same display. The OS must provide facilities to layer these multiple technologies seamlessly.

As always, developers should look for standard interfaces to ensure portability. The Khronos Group defines a set of graphics standards for OpenGL ES (3D graphics with hardware acceleration), OpenVG (2D graphics with hardware acceleration), and OpenKODE (display composition).
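For illustration, here is a minimal C sketch that brings up a display connection through EGL, the Khronos interface that fronts both OpenGL ES and OpenVG. Native window and rendering-context creation are platform-specific and omitted:

/* Minimal EGL bring-up sketch (link with -lEGL). This only initializes
 * the display and picks a config; binding a context to a native window
 * surface depends on the platform and is left out. */
#include <EGL/egl.h>
#include <stdio.h>

int main(void)
{
    EGLDisplay dpy = eglGetDisplay(EGL_DEFAULT_DISPLAY);
    EGLint major, minor;
    if (dpy == EGL_NO_DISPLAY || !eglInitialize(dpy, &major, &minor)) {
        fprintf(stderr, "EGL init failed\n");
        return 1;
    }
    printf("EGL %d.%d\n", (int)major, (int)minor);

    /* Ask for a config usable by OpenGL ES 2.0 (hardware-accelerated 3D);
     * substituting EGL_OPENVG_BIT would request accelerated 2D instead. */
    const EGLint attribs[] = {
        EGL_RENDERABLE_TYPE, EGL_OPENGL_ES2_BIT,
        EGL_RED_SIZE, 8, EGL_GREEN_SIZE, 8, EGL_BLUE_SIZE, 8,
        EGL_NONE
    };
    EGLConfig cfg;
    EGLint n;
    eglChooseConfig(dpy, attribs, &cfg, 1, &n);

    /* A context and window surface would be created and bound here. */
    eglTerminate(dpy);
    return 0;
}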

Multicore processing
Multicore processors come in a variety of shapes and sizes. Common variants include: 1) a single general-purpose processor with a DSP, 2) standalone dual- or quad-core general-purpose processors, and 3) multiple general-purpose processors and multiple DSP accelerators.

When selecting an OS, it is important to keep the target application in mind. For example, if the target is a low-cost media player, the OS should provide a clear way to handle general-purpose functions (such as the user interface) and a framework for handling DSP-accelerated codecs. If, however, the target is a high-end medical imaging system that requires pure computing capacity, the OS should support parallel processing through symmetric multiprocessing (SMP).
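As a simple illustration of the SMP case, the sketch below splits a pixel-processing loop across POSIX threads; on an SMP-capable OS the threads can be scheduled onto separate cores (compile with -pthread):

/* SMP-style parallelism with POSIX threads: a filter loop divided into
 * slices, one per thread. The doubling "filter" is a stand-in for real
 * per-pixel work. */
#include <pthread.h>
#include <stdio.h>

#define PIXELS  1024
#define THREADS 4

static int image[PIXELS];

typedef struct { int start, end; } slice;

static void *filter(void *arg)
{
    slice *s = (slice *)arg;
    for (int i = s->start; i < s->end; i++)
        image[i] = image[i] * 2 + 1;   /* stand-in for real image work */
    return NULL;
}

int main(void)
{
    pthread_t tid[THREADS];
    slice parts[THREADS];
    int chunk = PIXELS / THREADS;

    for (int t = 0; t < THREADS; t++) {
        parts[t] = (slice){ t * chunk, (t + 1) * chunk };
        pthread_create(&tid[t], NULL, filter, &parts[t]);
    }
    for (int t = 0; t < THREADS; t++)
        pthread_join(tid[t], NULL);    /* wait for all slices to finish */

    printf("done: image[0]=%d\n", image[0]);
    return 0;
}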

Safety and security
Some applications demand system-level safety or security certification. The Common Criteria for Information Technology Security Evaluation defines a set of security requirements that can be applied to embedded devices. Likewise, the International Electrotechnical Commission's IEC 61508 defines a number of Safety Integrity Levels, such as SIL 3, that prescribe techniques and measures to prevent systematic failures (bugs) from being designed into the device or system. The U.S. Food and Drug Administration (FDA) also requires certification of medical devices. A certifiable OS can greatly reduce the effort required to achieve such certifications. Even if certification isn't an immediate requirement, it can be important to keep your options open: an OS that can be upgraded to a certifiable version without API changes is a real benefit.

Architecture
Future-proofing an OS decision also comes down to architecture. To keep pace with evolving requirements, an OS must provide the modularity and clear separation of responsibilities that allow new or updated services to be plugged in or swapped out as required. The flexibility afforded by such modularity not only simplifies life for the developer but can also build customer satisfaction.
