Project Tango: From gesture sensing to machine vision
Project Tango was announced in February 2014. It is a special project from Google's ATAP (Advanced Technology and Projects group, formerly part of Motorola), developed in partnership with Movidius, a Silicon Valley startup. When Google sold Motorola Mobility to Lenovo, it held on to ATAP. Project Tango is focused on bringing machine vision to smart mobile devices. The project has distributed 200 handsets to app developers, and it has been reported that Google will offer tablet PCs with the technology (about 4,000 units) this June, possibly to be announced at Google I/O.
The Project Tango smartphone has specifications similar to other handsets on the market, including Qualcomm's Snapdragon 8974 (2.3GHz) processor, 2GB LPDDR3, 64GB flash, inertial MEMS sensors, a touch user interface (Synaptics S3202), and a 5" display.
What sets it apart is its camera modules: in addition to the front camera (selfie cam), there are two cameras on the rear, one with a 4MP sensor and the other with a fisheye lens. Sunny Optical (China) provides the camera modules, using image sensing chips from OmniVision. The 4MP camera serves both regular RGB imaging (video and photos) and infrared imaging (for depth sensing, to build a range image). There is also an infrared LED near the fisheye lens. The most critical chips are a PrimeSense PS1200 SoC and a Movidius Myriad 1 computer vision co-processor; the former enables depth sensing using PrimeSense's structured-light technology, and the latter is used to build 3D models.
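To illustrate what a depth-sensing camera's output enables, the sketch below back-projects a range image (a per-pixel depth map) into a 3D point cloud using the standard pinhole camera model. This is a generic illustration, not Tango's actual pipeline, and the intrinsic parameters (`fx`, `fy`, `cx`, `cy`) are made-up values for demonstration.

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project a depth map (meters) into 3D points using the
    pinhole model: X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy."""
    h, w = depth.shape
    # Pixel coordinate grids: u varies across columns, v across rows
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    # Stack into an (H*W, 3) array and drop invalid (zero-depth) pixels
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]

# Toy example: a flat 4x4 depth map, every pixel 1 m from the camera
depth = np.ones((4, 4))
cloud = depth_to_point_cloud(depth, fx=500.0, fy=500.0, cx=2.0, cy=2.0)
print(cloud.shape)  # (16, 3)
```

Accumulating such point clouds from many viewpoints, with the device's pose tracked by its inertial sensors, is the basic ingredient of the 3D model building the Myriad 1 co-processor accelerates.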
It is interesting to see PrimeSense's technology used in a Google product. When Microsoft first released the Kinect in 2010, it adopted PrimeSense's structured light (combined with its own technologies) for gesture sensing on the Xbox 360, instead of the time-of-flight technology from 3DV and Canesta, both of which Microsoft had acquired. Apple acquired PrimeSense in November 2013, yet before any adoption in an Apple product, the technology is appearing in a Google device. Project Tango's leader, Johnny Lee, worked on the Microsoft Kinect team, which could be one reason Google chose to adopt PrimeSense's technology before Apple did.
Software development kits have been released to developers, and many creative applications are envisioned for Project Tango technology, such as replacing guide dogs for blind people. End users could use machine vision as a 3D scanning kit to easily build 3D models. Home appliances such as air conditioners could sense how many people are in a room and adjust the temperature appropriately. As Intel has promoted with its "Perceptual Computing" concept, we may be seeing devices empowered by intelligence and understanding, and the user interface is the key to such intelligence, making these devices smarter.
For the past few years, industry leaders have been deploying advanced user interface technologies, and gesture sensing has been a critical approach. We have seen brands adopt three major technologies: time of flight by Microsoft (Kinect for Xbox), stereo vision by Sony (PS Camera for PS4), and structured light by Google (Project Tango) and Apple.
Source: DisplaySearch Gesture Sensing Control for Smart Devices Report