Imagine communicating with a robot using just gestures. Or a digital assistant that goes beyond mere instructions. With the rise of voice and conversational interfaces, everyone is looking forward to the next wave of technologies that will make the user experience more immersive and frictionless. Here are some exciting new possibilities.

Voiceprint

Voiceprint technology, an advancement of voice technology, evaluates the set of unique characteristics that make up an individual's voice. Using newer machine learning techniques and large troves of recorded voices, it can identify people by their voices alone. Researchers at Carnegie Mellon are now working on techniques to reconstruct a 3D version of someone's face using only their voiceprint. In smart homes, voiceprints can be used to unlock doors and connect with digital assistants.
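To make the idea concrete, here is a minimal sketch (not any vendor's actual implementation) of how a voiceprint match might work: each enrolled user is represented by an embedding vector produced by some speaker-recognition model, and a new recording is identified by comparing its embedding against the enrolled ones. The random vectors below merely stand in for real model output, and the names and threshold are purely illustrative.

```python
import numpy as np
from typing import Dict, Optional

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two voice embeddings."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def identify_speaker(sample_embedding: np.ndarray,
                     enrolled: Dict[str, np.ndarray],
                     threshold: float = 0.75) -> Optional[str]:
    """Return the enrolled name whose voiceprint best matches the sample,
    or None if no match clears the similarity threshold."""
    best_name, best_score = None, threshold
    for name, voiceprint in enrolled.items():
        score = cosine_similarity(sample_embedding, voiceprint)
        if score > best_score:
            best_name, best_score = name, score
    return best_name

# Placeholder embeddings stand in for the output of a real speaker-embedding model.
rng = np.random.default_rng(0)
enrolled = {"alice": rng.normal(size=128), "bob": rng.normal(size=128)}
sample = enrolled["alice"] + rng.normal(scale=0.1, size=128)  # noisy re-recording of Alice
print(identify_speaker(sample, enrolled))  # -> "alice"
```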

Thought communication

This has been a dream goal of AI researchers and science fiction fans alike. The idea that humans can connect with and operate machines simply by thought has always fascinated the scientific community. Researchers are already exploring this with regard to helping patients suffering from stroke and paralysis. Researchers at Duke University have built a suit that enabled a patient with complete paralysis of his lower body to take a few steps and kick a soccer ball. They are also working on a Brainet that will connect the brains of a group of mammals to harness and direct their combined neural activity. The big names in tech are actively backing this direction as well: Elon Musk's Neuralink hopes to push research further in this field and commercialize human-machine interface technologies.

Gesture interface

This was previously seen only in the movies, as in Minority Report, where Tom Cruise interacts with a computer system using a pair of sensor-fitted gloves. Today, motion-sensing technology is already in play in gaming devices like the Wii. Now Apple is hinting at a gesture-based interface for its Macs in the near future. The company recently acquired a patent for a system in which a user's hand, arm, and facial movements are tracked and translated into on-screen actions. Apple fans and tech experts are speculating that Apple could combine this motion-sensing technology with its existing facial-scanning technology to make the Mac user experience more frictionless.

There are also desktop controllers by Leap Motion that allow users to control their computers using finger and hand gestures. Microsoft's Kinect has already delivered a natural user interface (NUI) for gaming. Some of the latest drones can be flown and directed to take photos using gestures. Google has also announced advances in gesture interface technology through Project Soli, a radar-based chip that will allow users to gesture above a device, moving their fingers and hands in ways that simulate operating physical controls.
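As a rough illustration of how tracked motion becomes an interface command, the toy sketch below classifies a stream of hand positions as a swipe and maps it to an on-screen action. Real systems such as Soli or Kinect work from radar or depth data with machine-learned classifiers; the thresholds, gesture names, and actions here are purely illustrative.

```python
from typing import List, Optional, Tuple

def classify_swipe(track: List[Tuple[float, float]],
                   min_distance: float = 0.2) -> Optional[str]:
    """Classify a tracked hand path (normalized (x, y) points in 0..1)
    as a horizontal or vertical swipe, or None if movement is too small."""
    if len(track) < 2:
        return None
    dx = track[-1][0] - track[0][0]
    dy = track[-1][1] - track[0][1]
    if max(abs(dx), abs(dy)) < min_distance:
        return None
    if abs(dx) > abs(dy):
        return "swipe_right" if dx > 0 else "swipe_left"
    return "swipe_down" if dy > 0 else "swipe_up"

# Map recognized gestures to on-screen actions.
ACTIONS = {
    "swipe_left": "previous track",
    "swipe_right": "next track",
    "swipe_up": "volume up",
    "swipe_down": "volume down",
}

path = [(0.2, 0.5), (0.4, 0.52), (0.7, 0.5)]   # hand moving left to right
gesture = classify_swipe(path)
print(gesture, "->", ACTIONS.get(gesture, "no action"))  # swipe_right -> next track
```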

Ultrasound & Haptics

Ultrasound and haptics are also pushing the envelope in gesture tracking by offering mid-air touch using ultrasound. One example is the category of automotive assistants that guide you on a car journey. An array of speakers sculpts the acoustic field near the device, much like augmented reality does for vision. If you're listening to music on a digital device while driving and need to adjust the volume, you can hold your hand out and get a projection of the volume dial on your hand. The sound waves are manipulated to change the kind of vibration your hand feels, so clicks, dials, shapes, and textures can be created. It feels like operating a regular physical interface, but the interface is flexible and invisible.
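The core trick is the classic phased-array idea: drive each speaker with a phase offset proportional to its distance from the target point, so that all the waves arrive in phase and reinforce each other there, creating a pressure point the hand can feel. The sketch below shows that calculation for a hypothetical 4x4 transducer grid; the frequency and geometry are illustrative, not taken from any shipping product.

```python
import numpy as np

SPEED_OF_SOUND = 343.0   # m/s in air
FREQUENCY = 40_000.0     # Hz, a common ultrasonic transducer frequency

def focus_phases(transducer_positions: np.ndarray, focal_point: np.ndarray) -> np.ndarray:
    """Phase offsets (radians) for each transducer so the emitted waves
    arrive in phase at the focal point and interfere constructively."""
    distances = np.linalg.norm(transducer_positions - focal_point, axis=1)
    wavelength = SPEED_OF_SOUND / FREQUENCY
    # Advance each element by its path delay so all waves line up at the focus.
    return (2 * np.pi * distances / wavelength) % (2 * np.pi)

# A 4x4 grid of transducers, 1 cm apart, focusing 15 cm above the array's center.
xs, ys = np.meshgrid(np.arange(4) * 0.01, np.arange(4) * 0.01)
positions = np.column_stack([xs.ravel(), ys.ravel(), np.zeros(16)])
focus = np.array([0.015, 0.015, 0.15])
print(np.round(focus_phases(positions, focus), 2))
```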

Smart touch surfaces

You will no longer need to limit your touch interactions to your phone, watch, or tablet. Every surface within reach could potentially become a smart touch interface. Researchers are now working on ways to turn surfaces like desks, walls, and even your arms into smart touch interfaces, using techniques ranging from laser projection to conductive sensing. An example is the Wall++ project at Carnegie Mellon University, which makes use of conductive paint and low-cost electronics to transform regular walls into smart infrastructure that can detect human touch. This is particularly useful given that the smart home is now a reality in many countries.
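A toy version of the sensing side might look like the sketch below: treat the painted electrodes as a grid of capacitance readings and report the cell that rises furthest above its baseline when a hand approaches. This is only an illustration of the principle, not the actual Wall++ signal-processing pipeline; the grid size and threshold are made up.

```python
import numpy as np
from typing import Optional, Tuple

def detect_touch(readings: np.ndarray, baseline: np.ndarray,
                 threshold: float = 5.0) -> Optional[Tuple[int, int]]:
    """Given a grid of capacitance readings from electrodes painted on a wall,
    return the (row, col) of the strongest touch above baseline, or None."""
    delta = readings - baseline
    if delta.max() < threshold:
        return None
    row, col = np.unravel_index(np.argmax(delta), delta.shape)
    return int(row), int(col)

# Toy 4x4 electrode grid: a touch raises capacitance near row 2, column 1.
baseline = np.full((4, 4), 100.0)
readings = baseline.copy()
readings[2, 1] += 12.0
print(detect_touch(readings, baseline))  # -> (2, 1)
```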

AI-led Digital Assistants

Advances in NLP and AI will bring in a new generation of digital assistants with the ability to learn from your previous interactions, allowing them to understand your preferences and guide you toward more appropriate choices. For example, you could ask your digital assistant to take you to a restaurant on your commute home, and it will lead you to one serving your favorite cuisine.
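A heavily simplified sketch of that kind of preference learning appears below: the assistant simply counts past cuisine choices and picks the nearby restaurant that matches the favorite. Real assistants use far richer models of context and preference; every class, name, and value here is hypothetical.

```python
from collections import Counter
from typing import Dict

class RestaurantAssistant:
    """Toy assistant that learns a cuisine preference from past choices."""

    def __init__(self) -> None:
        self.history: Counter = Counter()

    def record_visit(self, cuisine: str) -> None:
        """Remember one more visit to a restaurant of this cuisine."""
        self.history[cuisine] += 1

    def recommend(self, nearby: Dict[str, str]) -> str:
        """Pick the nearby restaurant (name -> cuisine) that best matches
        the user's most frequently chosen cuisine."""
        if not self.history:
            return next(iter(nearby))          # no history yet: any option
        favorite, _ = self.history.most_common(1)[0]
        for name, cuisine in nearby.items():
            if cuisine == favorite:
                return name
        return next(iter(nearby))              # fall back if no match nearby

assistant = RestaurantAssistant()
for cuisine in ["thai", "italian", "thai", "thai", "mexican"]:
    assistant.record_visit(cuisine)

nearby = {"Casa Roma": "italian", "Bangkok Garden": "thai", "Taco Stop": "mexican"}
print(assistant.recommend(nearby))  # -> "Bangkok Garden"
```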

The big players in the tech industry are moving towards more intuitive and integrated technology. At the same time, they are looking to create an environment that is more meaningfully connected to the user's real world and space, so that the tech experience feels more natural. The ultimate goal, of course, is Zero UI, or touchless technology, and it is a goal that comes closer with every new step in user interface technology.