
In the days before the smartphone, sending text and email from a standard phone keypad was an exercise in itself. It demanded real finger dexterity, and even with the advent of predictive text, typing long messages was a chore. Accessing functions and features was also challenging. When touch arrived on the iPhone (which wasn't really new technology in itself, just everything already available packaged into a superb interface), there was no going back.
Touch is a natural human interface. The first thing an infant does with their hands is touch, feel and grasp. When I watched my then 3-year-old interact with a tablet, touch came naturally. The next step, though, is voice. Whenever we need something, we talk, we ask. Have you ever tried to do something in a bank app or website and spent ages just finding the right function? Why can't we simply say exactly what we want?

I’ve had Amazon’s Echo in my home for a while now. While it offers less functionality outside the US, it has become indispensable in my daily routine. My Echo sits in the open area between my dining room and kitchen, and it’s handy to ask Alexa for the weather and traffic, set a timer, or play my Spotify playlists. It’s also useful when your hands are full prepping dinner and you need to know what 375 °F is in Celsius. This only reinforces my belief that voice will be the CX interface to watch. Apple and Microsoft have already brought the voice control capabilities of Siri and Cortana to their computers, so why not other interfaces?
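As an aside, the conversion Alexa performs for that kitchen question is simple arithmetic. A minimal sketch in Python (the function name and rounding choice are my own, not anything Amazon publishes):

```python
def fahrenheit_to_celsius(f: float) -> float:
    """Convert a Fahrenheit temperature to Celsius: C = (F - 32) * 5/9."""
    return (f - 32) * 5 / 9

# A 375 °F oven setting is roughly 190.6 °C.
print(round(fahrenheit_to_celsius(375), 1))  # 190.6
```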
Yes, it’s very Star Trek, but I can’t wait for it. For starters, it would save a user from wading through three levels of IVR to get something done. Better yet, pair voice with an AI chatbot and you can dispense with the agent entirely for trivial requests.