This piece was originally published at Forbes.com.
Voice interactions with digital devices are not new. Dragon NaturallySpeaking has been around since the late 1990s, and speech-to-text in some capacity has been on almost every device available since then. What has changed are the integrations, the capabilities and, of course, the accuracy. Alexa is now able to order almost anything from Amazon, Siri can send messages and set reminders, and Cortana opens desktop applications and sends emails. The Google Assistant, for which I develop apps at Bottle Rocket, aims to provide a lot of this same functionality, with a few enhancements along the way.
NOW IS THE TIME
The number of voice interactions is growing exponentially, and the opportunities for companies to get in front of users are following suit. As accuracy and capabilities grow, so does consumer demand across every platform. When people look at new Internet of Things (IoT) devices, they’re expecting integration with Alexa or Google Home. It’s becoming more common for family members to ask their favorite voice assistant to play the music they want, instead of loading up their music app and searching for it manually. Even children are learning that talking to the assistant can result in faster answers than a browser search, even if the results are the same. If you want to create a voice-based app for your company, now is the time to start working on it.
CHOOSE YOUR PATH
When people think about your company, what do they think about as the primary interaction? This is a good place to start when building a voice-based app. If you’re a national food chain that focuses on delivery, your voice app needs to let people order food. If you’re not sure what people might want, ask your support channels about the users they connect with. They will likely know the top three requests off the top of their heads.
Besides responding to user requests, you can also use a voice app to educate customers about other products and services you have. Maybe you want people to think about larger catering orders for their office, not just family-size orders. You can mention that as one option in the conversation, much like you would present it as an option in a smartphone app.
You’ll want to meet users’ basic expectations about your brand, but that doesn’t mean you can’t offer new ideas that help guide users to new areas or experiences. Regardless of which way the user goes, you’ll need to help them finish the task at hand or offer a way out if they feel like they’ve gone too far. If they get stuck, you can provide options on how to answer the current question, but you also should let them exit a conversation or start over if they decide they really don’t want to do something.
HOW MACHINE LEARNING FITS IN
Google has doubled down on using machine learning in all of its products, and its assistant backend, Dialogflow, is no exception. The best example is how machine learning handles the triggering phrases that start various actions. You enter many examples, such as “start an order,” “place an order” and “I want to order,” and Dialogflow builds models from these entries to recognize the ordering activity. Then, even when someone says “Make an order,” the model will determine that the user probably wants to start the ordering activity. This means that users don’t have to spend as much time learning how to use the app.
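To make the idea concrete, here is a toy sketch of phrase-to-intent matching. Dialogflow’s real matching uses trained ML models; this illustration just scores word overlap between an utterance and a few hand-entered training phrases (the intent names and phrases are invented for the example).

```javascript
// Hypothetical training phrases, keyed by an invented intent name.
const intents = {
  'order.start': ['start an order', 'place an order', 'I want to order'],
  'order.status': ['where is my order', 'track my order'],
};

// Split a phrase into a set of lowercase words.
function tokenize(phrase) {
  return new Set(phrase.toLowerCase().split(/\W+/).filter(Boolean));
}

// Return the intent whose training phrases share the most words with the
// input, or null when nothing overlaps at all (fallback territory).
function matchIntent(utterance) {
  const words = tokenize(utterance);
  let best = null;
  let bestScore = 0;
  for (const [intent, phrases] of Object.entries(intents)) {
    for (const phrase of phrases) {
      let score = 0;
      for (const w of tokenize(phrase)) {
        if (words.has(w)) score++;
      }
      if (score > bestScore) {
        bestScore = score;
        best = intent;
      }
    }
  }
  return best;
}

console.log(matchIntent('Make an order')); // → "order.start"
```

“Make an order” was never entered as a training phrase, yet it still resolves to the ordering intent — which is the generalization the model gives you, here approximated crudely by word overlap.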
But what if the user says something that doesn’t match any specific action? Your app won’t know what to do. This happens all the time in normal conversation — we’re just used to dealing with it, and voice apps have contingencies for this. Dialogflow calls these “fallback” intents. Maybe your Assistant should say “I’m sorry, I didn’t catch that.” It could also provide a list of valid options for the question it asked — for example, the list of toppings you were expecting the user to pick from. To make it more natural, you can vary these responses, so the user doesn’t hear “I’m sorry, please repeat that” over and over.
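A minimal sketch of that varied-fallback idea, assuming an invented `fallbackResponse` helper and a hypothetical toppings question (Dialogflow lets you configure multiple fallback responses directly; this just shows the logic):

```javascript
// Rotating reprompts so the user doesn't hear the same line twice in a row.
const fallbackPrompts = [
  "I'm sorry, I didn't catch that.",
  "Sorry, could you say that again?",
  "Hmm, I didn't understand.",
];

// Hypothetical valid options for the question currently being asked.
const toppingOptions = ['pepperoni', 'mushrooms', 'extra cheese'];

// Build the fallback reply for the Nth failed attempt (0-indexed).
function fallbackResponse(attempt) {
  const prompt = fallbackPrompts[attempt % fallbackPrompts.length];
  // After the first miss, restate the valid choices to help the user along.
  const hint = attempt >= 1
    ? ` You can pick from: ${toppingOptions.join(', ')}.`
    : '';
  return prompt + hint;
}
```

Tracking the attempt count per question also gives you a natural exit point: after two or three fallbacks, offer to start over or end the conversation, as discussed above.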
Finally, all of the phrases people say to your app that aren’t understood are saved and sorted, so that later you can decide what to do with them. You can help train the model on new phrases you want to trigger existing actions, or you might create an action that explains to the user why the app can’t handle a common request you’re seeing. Beyond that, you can see what people are asking for and use that to build your future roadmap.
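Triaging those unmatched phrases mostly means finding the ones that recur. A small sketch, assuming you’ve exported the raw utterances from your logs (Dialogflow’s console provides its own review tooling; `topUnmatched` is an invented helper):

```javascript
// Count unmatched utterances (case- and whitespace-insensitive) and return
// the n most frequent as [utterance, count] pairs, most common first.
function topUnmatched(utterances, n) {
  const counts = new Map();
  for (const u of utterances) {
    const key = u.trim().toLowerCase();
    counts.set(key, (counts.get(key) || 0) + 1);
  }
  return [...counts.entries()]
    .sort((a, b) => b[1] - a[1])
    .slice(0, n);
}
```

The top of that list tells you which new training phrases to add to existing actions — and which entirely new requests might belong on your roadmap.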
A MORE PERSONAL INTERACTION
When it comes to voice interfaces, there is no wrong input, only unexpected input. People are going to say random things, and you have to be prepared for it. I’ve learned that the tools have gotten a lot better in the past year, and we’re seeing even more improvement on the horizon. Voice apps can help users accomplish the tasks they want quickly, but they can also be a tool to educate users on what’s possible.
People are looking for ways to have a more personal experience with technology, one that feels custom-tailored to their unique needs. Voice interactions can provide this if you really listen to the customer and take advantage of the interactions you have across all your users. Your company or brand now has a very personal way to talk to your customers, so think about the personality you want to present and how you can help them, and you’re likely to end up as a trusted advisor.