Gartner just released its latest Hype Cycle for AI.
Right there in the trough of disillusionment are chatbots.
What went wrong? Everyone was so excited about the possibilities!
I think it’s a general issue with AI. Or at least, with our perception of AI.
Humans evolved to create and maintain highly complex social groups. To do this, our brains try to understand what other humans are thinking, so we can anticipate and provide for each other’s needs. Or anticipate and avoid an attack.
But we over-generalize. We fall in love with cats that probably don’t love us back. We complain that a dog is naughty when it’s just responding to the stimuli we introduce into its environment.
We over-generalize with pets whose DNA, biology, and brains are surprisingly similar to our own.
Imagine just how easy it is for us to over-generalize with AI that appears to do things like we do.
Like chatbots, which ask and answer questions.
But chatbots share almost none of the underlying biology (infrastructure), DNA (programming), or experience (data) that makes us who we are.
And that’s the issue. We can’t use our intuition to understand what AI is capable of. When Google shows off an AI chatbot that pretends to be a paper airplane and appears to chat convincingly with its human creators (I wrote about it here), it’s easy to imagine those capabilities will extend to the chatbot we deploy in our contact center.
But Google’s chatbot isn’t intelligent like us.
It’s not flexible, helpful, and resourceful, like us.
It’s just a computer program harnessing billions of examples of conversation to say something plausible. It might be good at chatting about folding paper and flying. But it doesn’t know your business needs. Nor your legacy processes. Nor to whom it should turn for help if it gets stuck. Unless you provide it with very detailed instructions.
And that’s the issue. An IVR, voice, or chatbot platform plus data does not equal the capabilities of a human agent.
Nowhere near it.
You have to provide the instructions. That’s what Conversation Designers do. They teach robots to talk.
The latest AI algorithms make this easier. You don’t have to teach the bots everything. Conversational AI platforms have general models of language built in to help IVRs and chatbots understand what a customer says. But only at the surface level. You have to provide them with lots of examples to fill in the gaps.
You also have to provide very precise instructions on how to behave based on what they think a customer said. And meant.
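To make that concrete, here’s a toy sketch of what “providing examples” amounts to. The intent names and phrases are hypothetical, and the fuzzy string matcher stands in for a platform’s far more sophisticated language model — but the shape is the same: a handful of example phrases per intent, and anything outside them falls into a gap.

```python
from difflib import SequenceMatcher

# Hypothetical intents and example phrases — illustrative only,
# not taken from any real platform.
INTENTS = {
    "check_balance": [
        "what's my account balance",
        "how much money do i have",
        "show me my balance",
    ],
    "reset_password": [
        "i forgot my password",
        "help me reset my password",
        "i can't log in",
    ],
}

def classify(utterance, threshold=0.5):
    """Return the closest-matching intent, or None if no example is close enough."""
    best_intent, best_score = None, 0.0
    for intent, examples in INTENTS.items():
        for example in examples:
            # Crude similarity between the utterance and each training example.
            score = SequenceMatcher(None, utterance.lower(), example).ratio()
            if score > best_score:
                best_intent, best_score = intent, score
    return best_intent if best_score >= threshold else None

# A phrasing close to a training example matches...
print(classify("what is my account balance"))  # check_balance
# ...but anything outside the examples falls into a gap.
print(classify("my flight is delayed"))        # None
```

The second query is the important one: a perfectly reasonable customer utterance that simply wasn’t anticipated gets no answer at all. Filling those gaps — and deciding what the bot should do when it still can’t match — is the work.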
The platforms try to make that easier with nice-looking graphical interfaces for building your bots. But they provide an illusion of simplicity. Just because you can quickly build a bot doesn’t mean you can build a bot that does what your business needs. There are lots of gaps and corner cases that can catch you out.
Conversation Designers, and the teams that support them, need to understand what Conversational AI platforms are capable of, and work around those constraints to build a bot that customers will engage with, and understand. And that will engage with and understand them. But only in certain, very narrow, situations.