Artificial Intelligence (AI) adoption has skyrocketed over the last 18 months, and Gartner places chatbots just one step away from the Slope of Enlightenment on its AI Hype Cycle. At the same time, AI technologies are maturing to accelerate business growth and build engineering trust. Together with Conversation Design, Conversational AI is transforming customer experience, customer support, and digital customer service for an onscreen world.
From mobile-first experience to Conversational AI multimodality in customer interaction
“Mobile-first experience” has been the number-one goal in IT companies' strategies since Google introduced the concept back in 2010. Now, in 2022, it's time for companies to expand on that approach and think about multimodality.
To determine if multimodal experiences are best for your users, you need to ask yourself the following questions:
- Do your users have access to multimodal devices?
- How valuable would a multimodal experience be for those users?
- What natural conversations are your users having?
- What are they looking for? And how could a bot help them achieve it?
The mobile world demonstrated how flexible and scalable a company's offerings can be, and virtual assistants follow the same path. But not everyone has a multimodal assistant in their household, and enterprise adoption is still in its infancy.
Featured resources: Free guide to Conversation Design and How to Approach It.
Multimodal Conversation Design is exciting because it marries voice and chat, letting each modality fill gaps the other leaves. For example, today's voice technology is still limited, such as the challenges around understanding certain accents. Multimodal technology can address this pain point by offering visuals for the user to lean on instead of relying on the voice experience alone. This makes the experience more accessible to all users.
During a consultation for the automotive industry, when we looked at English support it became very clear that cultural context is extremely important to consider even within one language: US English, UK English, Australian English, and so on. The name for a car part in US English can differ from the name in UK English, so you really need to customize your language model for each locale.
Conversational AI starts with a stable, well-trained language model as its foundation; from there, you look outward at context: which channels are interesting, and which modalities can best surface the brand or user experience. Language is the biggest factor in Conversational AI; once you start building a conversation, you will probably encounter dialects or even multiple languages within one country. Check out our investigation of the different names for soft drinks in the United States in a recent post, Dialect Diversity in Conversation Design.
It’s essential for conversation design teams to understand how end users talk about the products, services, and topics the virtual assistant will need to know. Always collect sample dialogs from a diverse, representative sample of the bot’s end users to ensure the system understands all the different types of jargon and phrasing.
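As a minimal sketch of how dialect variation might be handled in an NLU pipeline, a synonym table can map regional terms to one canonical value before intent matching. The vocabulary and function below are illustrative assumptions, not taken from any specific framework:

```python
# Map regional/dialect terms to a canonical value before intent matching.
# The vocabulary is illustrative: soft-drink names vary across US dialects,
# and car-part names vary between UK and US English.
DIALECT_SYNONYMS = {
    "soda": "soft_drink",
    "pop": "soft_drink",
    "fizzy drink": "soft_drink",
    "boot": "trunk",            # UK -> US car-part naming
    "bonnet": "hood",
    "windscreen": "windshield",
}

def normalize_utterance(utterance: str) -> str:
    """Replace known dialect terms with canonical values so one
    language model can serve users across regions."""
    normalized = utterance.lower()
    # Replace longer terms first so "fizzy drink" wins over any shorter overlap.
    for term in sorted(DIALECT_SYNONYMS, key=len, reverse=True):
        normalized = normalized.replace(term, DIALECT_SYNONYMS[term])
    return normalized

print(normalize_utterance("Is there a scratch on the bonnet?"))
# -> "is there a scratch on the hood?"
```

In production, frameworks typically express this as entity synonyms in training data rather than string replacement, but the principle is the same: diverse sample dialogs feed the synonym table.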
Best use cases for Multimodal Conversational AI Assistants
A great multimodal experience is one that feels seamless, switching contexts easily. A good example is an agent that lets you book a self-driving vehicle through a text box, then talks to you by voice once you are inside the vehicle. Check out more Multimodal Conversation Design Use Cases and opportunities for enterprises.
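One way to sketch that kind of seamless hand-off is a channel-agnostic session store, so the state a user builds up in chat is available when the voice channel picks up. The class and field names here are hypothetical, not from any specific platform:

```python
from dataclasses import dataclass, field

@dataclass
class Session:
    """Conversation state shared across channels for one user."""
    user_id: str
    slots: dict = field(default_factory=dict)  # collected booking details
    channel: str = "chat"

class SessionStore:
    """In-memory store keyed by user; a real system would persist this."""
    def __init__(self):
        self._sessions = {}

    def get(self, user_id: str) -> Session:
        return self._sessions.setdefault(user_id, Session(user_id))

    def switch_channel(self, user_id: str, channel: str) -> Session:
        # The same slots survive the hand-off, e.g. from chat to voice.
        session = self.get(user_id)
        session.channel = channel
        return session

store = SessionStore()
chat = store.get("rider-42")
chat.slots["destination"] = "airport"       # captured via the text box
voice = store.switch_channel("rider-42", "voice")
print(voice.slots["destination"])           # the voice agent already knows it
```

The design point is that the session, not the channel, owns the conversation state, so each channel only renders it in its own modality.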
The Future of Multimodal Conversation Designed Experiences
In the not-so-distant future, every time a brand launches a conversational experience, it will span multiple channels, each specially designed for that channel. Brands need to invest in offering automation to their customers across multiple voice and chat channels, creating more accessible solutions. By opening more entryways for users to self-serve, a company's ROI will only increase.
Want to Reduce Customer Support Costs?
We analyze your customer pain points and address them with automation.