Smart speakers have taken human-digital interaction a huge step forward, but the user experience must become more intuitive to deliver on the promise of the smart home.
As these devices become increasingly ubiquitous, they give easy access to digital information in the cloud: 90% of people use them for music and 80% for real-time information like weather and traffic. At the same time, the number of other smart devices available for the home has grown rapidly, with everything from toilets to ovens now sporting an internet connection. Smart devices have the potential to improve productivity, safety, and enjoyment in our lives, but interacting with these physical devices through a voice user interface (VUI) is still a frustrating experience.
One reason is that today’s VUI lacks much of the richness we take for granted in everyday interactions with other people. An improved UI would understand context, such as our gestures, body language, facial expressions, and physical surroundings. It would go beyond speech-to-text and extract meaning from our voice, including things like emotion and identity. The ultimate goal, really, is “Zero UI”: an interface so natural you don’t realize it’s an interface.
We’ve been developing solutions to these problems, incorporating new sensor technologies and more capable artificial intelligence to make human-digital interactions more intuitive, both for our clients and through technology demonstrations. One example is a home assistant we call Gerard. Gerard integrates machine vision, voice and gesture recognition, and 3D mapping, which together let it understand much more than the words you say. For example, Gerard knows when you are looking at it as you talk, so an awkward wake word isn’t necessary. Gerard also understands what objects are around you, where you are, and where it is, so you can gesture to objects and say things like “turn on that light,” or even ask “where did I leave my keys?”
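To make the idea concrete, here is a minimal sketch of how gaze-based wake and deictic (“that light”) resolution might work, assuming a perception stack that already produces gaze vectors, hand poses, and a 3D map of known objects. This is not Gerard’s actual implementation; every name and threshold below is hypothetical.

```python
import numpy as np
from dataclasses import dataclass

@dataclass
class TrackedObject:
    name: str
    position: np.ndarray  # (x, y, z) in the room's 3D map, metres

def is_addressing_device(gaze_dir, device_dir, threshold_deg=15.0):
    """Treat the user as addressing the device when their gaze vector
    points at it within a small angular tolerance (replaces a wake word)."""
    cos_a = np.dot(gaze_dir, device_dir) / (
        np.linalg.norm(gaze_dir) * np.linalg.norm(device_dir))
    return np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0))) <= threshold_deg

def resolve_deictic_target(hand_pos, point_dir, objects, max_angle_deg=20.0):
    """Resolve 'that light' by finding the mapped object closest to the
    ray cast from the user's hand along the pointing direction."""
    point_dir = point_dir / np.linalg.norm(point_dir)
    best, best_angle = None, max_angle_deg
    for obj in objects:
        to_obj = obj.position - hand_pos
        cos_a = np.dot(point_dir, to_obj) / np.linalg.norm(to_obj)
        angle = np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0)))
        if angle < best_angle:
            best, best_angle = obj, angle
    return best

# Gaze roughly toward the device, so no wake word is needed
print(is_addressing_device(np.array([0.9, 0.1, 0.0]),
                           np.array([1.0, 0.0, 0.0])))  # -> True

# User points toward the floor lamp while saying "turn on that light"
room = [TrackedObject("floor_lamp", np.array([2.0, 0.0, 1.5])),
        TrackedObject("ceiling_light", np.array([0.0, 2.5, 2.0]))]
target = resolve_deictic_target(np.array([0.0, 0.0, 1.2]),
                                np.array([1.0, 0.0, 0.15]), room)
print(target.name if target else "no target")  # -> floor_lamp
```

A real system would fuse these signals over time and handle uncertainty in the perception outputs, which is where much of the engineering effort lies.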
It’s possible now to bring interactions like these to many devices around the home beyond smart speakers. Nest executed this nicely with its thermostat, which “wakes up” when you approach it, without requiring any touch or voice input. But the company has also seen the challenges of Zero UI: its first-generation smoke and CO detector had a gesture interface for silencing the alarm, which proved unreliable and was eventually disabled. This shows that solving these problems isn’t as easy as just adding sensors to a device. It requires a full-system approach, which, in Nest’s case, meant algorithms that worked reliably in a challenging environment. We have experience developing gesture-based algorithms and know how hard it can be to make them work reliably across diverse populations and use cases.
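Nest’s implementation isn’t public, but the reliability point generalizes: even something as simple as proximity wake needs hysteresis and a dwell requirement so the display doesn’t flicker on sensor noise. A sketch, with thresholds chosen purely for illustration:

```python
class ProximityWake:
    """Wake a display when someone approaches, with hysteresis and a
    dwell requirement so noisy readings don't cause flickering.
    All thresholds are illustrative, not taken from any real product."""

    WAKE_CM = 100   # wake when someone is closer than this
    SLEEP_CM = 150  # sleep only once they are farther than this
    DWELL_S = 0.3   # a reading must persist this long before we act

    def __init__(self):
        self.awake = False
        self._pending_since = None

    def update(self, distance_cm: float, now: float) -> bool:
        crossing = (distance_cm < self.WAKE_CM) if not self.awake \
                   else (distance_cm > self.SLEEP_CM)
        if not crossing:
            self._pending_since = None   # reading agrees with current state
        elif self._pending_since is None:
            self._pending_since = now    # start the dwell timer
        elif now - self._pending_since >= self.DWELL_S:
            self.awake = not self.awake  # state change confirmed
            self._pending_since = None
        return self.awake

# Simulated readings: someone walks up and lingers
pw = ProximityWake()
for t, d in [(0.0, 200), (1.0, 80), (1.2, 85), (1.4, 82)]:
    print(t, pw.update(d, t))  # False, False, False, True
```

The gap between the wake and sleep thresholds is what prevents oscillation when someone stands right at the boundary, and the dwell timer filters out momentary glitches, small ideas, but exactly the kind of system-level detail that separates a delightful Zero UI from a buggy one.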
This kind of systems engineering approach is what Synapse and Cambridge Consultants specialize in, and developing a device that benefits from natural interactions like these can draw on many of our technical capabilities. If integrating natural UI is on your product roadmap, or you’d like to discuss advances in intuitive user interfaces, please get in touch!