
Displays from the big screen: Motion sensor systems from Hollywood

Tue, 07/15/2014 - 2:48pm
Gus Breiland, Customer Service Engineer Manager, Proto Labs

Motion-sensor systems have altered the perception of common meeting spaces

Imagine being surrounded by the imagery of your dreams on a panoramic display. Now imagine being able to manipulate these images by pointing directly at them, like a conductor leading an orchestra of pixels. It sounds like something straight out of a science fiction film — which is exactly what it is.

Minority Report, a Steven Spielberg-directed sci-fi thriller set in 2054, follows a detective played by Tom Cruise who uses a sophisticated “spatial operating environment” to prevent future crimes. The system allows him to rapidly locate, organize and scrutinize multiple video sequences on a large, immersive display.

To help Spielberg create a cinematic vision rooted in realistic technology, the film hired MIT Ph.D. John Underkoffler as its chief science and technology advisor. A computer scientist with a love of cinema, Underkoffler invented a gestural command alphabet and a visual language that could, in theory, be built with existing sensing and visualization technologies. To instruct the actors, he produced detailed training materials, as if the product already existed.

But as the film’s simulated interface became increasingly influential, Underkoffler decided to devote himself to building a working system. He founded Oblong Industries and started developing g-speak, a next-generation computing platform that supports motion-based input devices, flexible device networking, and the display of data across multiple screens addressed by their locations in physical space.
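
That last idea, addressing screens by where they sit in the room rather than as isolated framebuffers, is concrete enough to sketch. The following Python fragment is a minimal illustration and not g-speak’s actual API: it models a display as a rectangle placed in room coordinates, so a pointing ray expressed in the same frame can be resolved to a pixel on a particular screen.

    import numpy as np

    # Minimal sketch (not g-speak's real API): a screen is a rectangle
    # placed in room coordinates, so a pointing ray can be resolved to
    # a pixel on a particular display.
    class Screen:
        def __init__(self, top_left, right, down, size_m, resolution):
            self.top_left = np.asarray(top_left, float)  # corner position (m)
            self.right = np.asarray(right, float)        # unit vector along width
            self.down = np.asarray(down, float)          # unit vector along height
            self.size_m = size_m                         # (width, height) in meters
            self.resolution = resolution                 # (pixels_x, pixels_y)

        def ray_to_pixel(self, ray_origin, ray_dir):
            """Intersect a pointing ray with this screen; return a pixel or None."""
            o = np.asarray(ray_origin, float)
            d = np.asarray(ray_dir, float)
            normal = np.cross(self.right, self.down)
            denom = np.dot(d, normal)
            if abs(denom) < 1e-9:
                return None                          # ray parallel to the screen
            t = np.dot(self.top_left - o, normal) / denom
            if t < 0:
                return None                          # screen is behind the pointer
            offset = o + t * d - self.top_left
            u = np.dot(offset, self.right) / self.size_m[0]
            v = np.dot(offset, self.down) / self.size_m[1]
            if not (0.0 <= u <= 1.0 and 0.0 <= v <= 1.0):
                return None                          # ray misses this screen
            return int(u * self.resolution[0]), int(v * self.resolution[1])

    # A 2 m x 1.2 m front-wall panel whose top edge sits 2.2 m above the floor:
    panel = Screen(top_left=(1.0, 2.2, 0.0), right=(1, 0, 0), down=(0, -1, 0),
                   size_m=(2.0, 1.2), resolution=(1920, 1080))
    print(panel.ray_to_pixel(ray_origin=(2.0, 1.5, 3.0), ray_dir=(0, 0, -1)))  # (960, 630)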

The first wave of g-speak systems used motion-capture cameras to track users’ hand positions as they navigated large projected displays. Each finger and hand had to be identified uniquely. To achieve this, Oblong’s VP of Hardware Engineering, Paul Yarin, used custom injection-molded plates to attach motion-capture markers to fabric gloves.

Though the large-scale gesture-tracking systems were originally used by organizations with advanced visualization needs (defense, oil and gas exploration, and research institutions), Oblong soon recognized that conference rooms, where people congregate daily to meet, brainstorm, present and implement ideas, could be another viable application. The company used the g-speak platform as the basis for Mezzanine, a display product that brings the technology into corporate and educational settings.


The system runs on a server appliance installed in one or more conference rooms, letting users in different locations collaborate wirelessly across multiple devices and screens. A triptych of flat-panel monitors is typically affixed at the front of the room, with additional monitors mounted peripherally to the left and right that operate as virtual corkboards. Meeting attendees can shift and manipulate documents, photos, video and other data around the screens. They can access material stored on the Mezzanine server, and they can take snapshots of whiteboards and save them for later. Anyone in a meeting can also upload data in real time to be accessed during the meeting.

But meetings typically require attendees to write or type at some point, and glove-based control is cumbersome when pens, laptops and smartphones are in use. Drawing on its experience with the glove, Oblong developed a motion-tracked “wand” designed specifically for conference rooms. The goal was an interface as intuitive, responsive and precise as the technology that preceded it. The wand projects a cursor, similar to a laser pointer, wherever it’s aimed. Held vertically, it becomes a zoom control; rotated while pointing, it accesses special selection modes. The wand’s design allows it to emulate some of the human hand’s flexibility.
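
The article does not spell out how the wand’s firmware maps orientation to behavior, but the modes described above suggest a simple classifier over pitch and roll. Here is a hypothetical Python sketch; the thresholds and mode names are illustrative, not Oblong’s.

    import math

    # Hypothetical orientation-to-mode mapping for a pointing wand, based
    # only on the behaviors described in the article. Thresholds and mode
    # names are illustrative, not Oblong's.
    def wand_mode(pitch_deg, roll_deg):
        """pitch: elevation above horizontal; roll: rotation about the long axis."""
        if pitch_deg > 70.0:
            return "zoom"        # held nearly vertical: zoom control
        if abs(roll_deg) > 45.0:
            return "select"      # rotated while pointing: selection mode
        return "point"           # default: project a cursor where the wand aims

    # Pitch could come from the wand's inertial sensors, e.g. from the
    # gravity vector an accelerometer reports at rest (sign conventions
    # vary by sensor; the wand's long axis is assumed to be x here):
    def pitch_from_accel(ax, ay, az):
        return math.degrees(math.atan2(ax, math.hypot(ay, az)))

    print(wand_mode(80.0, 0.0))   # zoom
    print(wand_mode(10.0, 60.0))  # select
    print(wand_mode(10.0, 5.0))   # point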


The wand is based on a hybrid ultrasonic-inertial positioning technology. Ultrasonic emitters mounted on the ceiling act as position beacons that the wand detects with tiny microphones, and the wand reports its position to the system over a 900 MHz or 2.4 GHz radio link.
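
To make the ranging idea concrete: each beacon’s chirp reaches the wand after a time of flight proportional to its distance, and three or more such ranges pin down the wand’s position. Below is a hypothetical Python sketch of the trilateration step for emitters mounted on a flat ceiling; Oblong’s actual algorithm, which also fuses the inertial data, is not described in the article.

    import numpy as np

    SPEED_OF_SOUND = 343.0  # m/s in room-temperature air

    def trilaterate_ceiling(emitters_xy, tofs, ceiling_z):
        """Position a receiver below >= 3 coplanar ceiling beacons.

        emitters_xy: (n, 2) horizontal emitter positions on the ceiling plane.
        tofs: measured times of flight in seconds, one per emitter.
        """
        bxy = np.asarray(emitters_xy, float)
        r = SPEED_OF_SOUND * np.asarray(tofs, float)    # slant ranges (m)
        # The vertical offset is identical for every beacon, so differencing
        # the squared-range equations eliminates both |x|^2 and the height:
        #   2 (b_i - b_0) . (x, y) = |b_i|^2 - |b_0|^2 - r_i^2 + r_0^2
        A = 2.0 * (bxy[1:] - bxy[0])
        b = (np.sum(bxy[1:] ** 2, axis=1) - np.sum(bxy[0] ** 2)
             - r[1:] ** 2 + r[0] ** 2)
        xy, *_ = np.linalg.lstsq(A, b, rcond=None)
        # Recover height from any one slant range (the wand hangs below the ceiling):
        dz = np.sqrt(max(r[0] ** 2 - np.sum((xy - bxy[0]) ** 2), 0.0))
        return np.array([xy[0], xy[1], ceiling_z - dz])

    # Four emitters on a 3 m ceiling; simulate chirps heard at (1.2, 0.8, 1.1):
    emitters = [(0.0, 0.0), (4.0, 0.0), (0.0, 4.0), (4.0, 4.0)]
    wand = np.array([1.2, 0.8, 1.1])
    tofs = [np.linalg.norm(wand - np.array([ex, ey, 3.0])) / SPEED_OF_SOUND
            for ex, ey in emitters]
    print(trilaterate_ceiling(emitters, tofs, ceiling_z=3.0))  # ~[1.2 0.8 1.1]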

Proto Labs, a technology-based manufacturer of prototype and low-volume plastic and metal parts, CNC machined and injection molded several of the wand’s components. Early prototype units were machined from polycarbonate blocks, and production parts were later produced via thermoplastic injection molding.

The wand’s shell consists of two clear halves that are painted white, creating a semi-translucent enclosure that exposes some of the underlying molded features. The case was designed to open and close easily for access to the circuit boards and lithium-ion battery inside. Its Santoprene (TPE) control buttons, along with the ABS and nylon enclosures for the ceiling-mounted emitters, were also injection molded at Proto Labs.

Remote-collaboration capabilities are geared toward multinational corporations, but integration into systems at universities and health care institutions is also realistic. Professors could integrate the system into their classrooms or lecture halls, toggling between a presentation, real-time notes, instructive videos and a live chat, for example. Any real-time notes could then be saved and sent out to students after each lesson. Likewise, medical centers could broadcast seminars to a global audience, integrating case studies, videos of procedural demonstrations, live feeds from a panel of doctors and more.

If this technology is able to provide us with a modern-day glimpse of a Spielbergian future, it makes you wonder what technologies will be at our fingertips 50 years from now.
