Creating a New Natural User-Interface Paradigm
Readers of my columns in Stantum's newsletters may have spotted a guilty penchant for proselytizing the combination of multi-touch and stylus input. The motivation behind this crusade is to pave the way for a new natural user-interface paradigm that is both bi-manual and multi-modal.
Bi-manual means that the interface leverages our natural ability to use both hands to execute complex tasks. If we are born two-handed, there is probably a good reason. Multi-modal interfaces let us choose the input technique that seems most appropriate for a given task or context: the tip of a finger to flip the pages of a book, a stylus to draw in it or annotate it. Though YouTube already hosts dozens of popular videos showcasing the art of finger-painting on an iPad, we can safely assert that in our daily lives we won't revert to a practice Neanderthals gave up about 45,000 years ago.
Ambidexterity and multi-modality are the two pillars of Stantum's core project – making the use of touch-enabled devices more creative and productive. Among the many fields of application, there is one where we see a truly soaring need for both – augmented textbooks.
Unlike their printed predecessors, electronic textbooks can enrich traditional educational material with embedded annotation tools, advanced bookmarking and search features, didactic videos and animations, and interactive assessments and exercises that can be completed directly in the book and then stored instantly in the cloud.
You may think these applications stand no chance of replacing printed textbooks anytime soon. In fact, this revolution is already under way in various countries around the globe – in South Korea, for instance, the government recently confirmed its plan to replace all printed textbooks nationwide with electronic counterparts by 2015.
A year ago, at SID 2010, we introduced a new generation of touch panels capable of handling full multi-touch finger input and high-resolution stylus input simultaneously. Unlike existing solutions, this touch panel did not need a dedicated electronic stylus and could track fingers and stylus at the same time – by contrast, alternative solutions that combine a capacitive sensor with an electromagnetic stylus must alternate between the two input techniques.
The only limitation of our system at the time was that it could not accurately discriminate between different types of contact (stylus, finger, palm, etc.). We believe this smart contact detection is critical to delivering ambidexterity and multi-modality. This is why Stantum showcased at SID 2011 the first generation of touch panels that deliver, at each acquisition frame, not only all the contact points but also their type (stylus, finger, palm, etc.). The new paradigm we've been dreaming about for years has never been so close!
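To make the idea concrete, here is a minimal sketch of what consuming such typed, per-frame contact data might look like from an application's point of view. All names and structures below are hypothetical illustrations, not Stantum's actual API: the point is simply that once each contact arrives already labeled by type, the application can route stylus contacts to inking, finger contacts to gestures, and ignore a resting palm.

```python
from dataclasses import dataclass
from enum import Enum
from typing import List, Tuple

class ContactType(Enum):
    FINGER = "finger"
    STYLUS = "stylus"
    PALM = "palm"

@dataclass
class Contact:
    x: float              # panel coordinates (units are illustrative)
    y: float
    kind: ContactType     # classified per acquisition frame by the controller

@dataclass
class Frame:
    timestamp_ms: int
    contacts: List[Contact]

def split_inputs(frame: Frame) -> Tuple[List[Contact], List[Contact]]:
    """Route stylus contacts to inking and fingers to gestures; drop palms."""
    stylus = [c for c in frame.contacts if c.kind is ContactType.STYLUS]
    fingers = [c for c in frame.contacts if c.kind is ContactType.FINGER]
    return stylus, fingers

# Example: one frame containing a resting palm, one finger, and the pen tip.
frame = Frame(0, [
    Contact(12.0, 80.5, ContactType.PALM),
    Contact(40.2, 22.1, ContactType.FINGER),
    Contact(55.7, 30.9, ContactType.STYLUS),
])
stylus, fingers = split_inputs(frame)
```

Without the per-contact type, the application would have to guess which blob is the palm and which is the pen; with it, palm rejection and bi-manual routing become a one-line filter.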