Automated chatbots don’t chat, they debate?

Vanita Tanna, head of Experience and Service Design at Rufus Leonard, explores our relationship with chatbot technology.

We have a love-hate relationship with automated chatbots. We love to play with them but get frustrated when the chatbot can’t answer our questions or fulfil our requests. Those who create chatbots thrive on the challenge of making the chatbot as human as possible, but find writing the several thousand conditional permutations needed to achieve this very arduous.

This mutual desire to make interacting with an automated chatbot human-like has propelled the powering of bots through natural language processing (NLP). Yet the machine learning behind the chatbot is built on decision-tree logic: an exchange of sequences of predetermined inputs and outputs. Natural conversation is not like this.
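
To make that concrete, here is a minimal sketch of what decision-tree chatbot logic looks like in practice: predetermined inputs mapped to predetermined outputs, one turn at a time. The tree contents and function names are invented for illustration, not taken from any real chatbot framework.

```python
# Every possible exchange is a predetermined input mapped to a
# predetermined output, with follow-up branches nested underneath.

TREE = {
    "prompt": "Hi! Do you want to check an order or ask a question?",
    "branches": {
        "check order": {
            "prompt": "Please type your order number.",
            "branches": {},  # a real tree would branch again here
        },
        "ask question": {
            "prompt": "What would you like to know?",
            "branches": {},
        },
    },
}

def run(node):
    """Walk the tree one turn at a time: bot output, then user input."""
    while True:
        print("BOT:", node["prompt"])
        if not node["branches"]:
            break
        reply = input("YOU: ").strip().lower()
        if reply in node["branches"]:
            node = node["branches"][reply]
        else:
            # Anything outside the predetermined inputs dead-ends:
            # this is the brittleness described above.
            print("BOT: Sorry, I didn't understand that.")

if __name__ == "__main__":
    run(TREE)
```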

Distorting the patterns of natural conversation

As in natural conversation, humans and automated chatbots take turns at speaking, i.e. exchanging messages. But in natural talk, irrespective of content, and even in the absence of visual cues and tone of voice, there is also meaning in talk that overlaps, in the silences that occur between turns, and in efforts to repair misunderstandings. These actions are constituents of what it means to take a turn and are inherent to natural conversation. They help to propel conversations forward and reveal the intent behind them as the conversation unfolds.

A move from chatbots to debate-bots?

When it comes to interactional patterns, these chatbots have more in common with the back-and-forth of pre-allocated turns in debates (Sacks et al, 1974) than with natural conversation. If the job of an automated chatbot is to ‘go fetch’, do we even need social niceties such as “Hi”? By humanising the chatbot, are we setting users up with the wrong expectations? Or is this just service with a smile?

The idea, then, that chatbots might not be chatting raises an almost philosophical question. Why do bots have to be so human-like, and in what capacity? What is it that we are actually automating when we automate chatbots?

Taking turns

To provide more accurate and relevant responses, automated chatbots are leveraging NLP in a way similar to Google’s conversational search. This does result in more intelligent processing, but it still largely relies on both the human and the bot taking well-organised turns at speaking, i.e. sending one message, or one well-constructed multi-stack, at a time. Both chatbots and humans are able to send multi-stacks, but the human multi-stack is far more disruptive than the chatbot’s. We may render the chatbot’s next response irrelevant or inappropriate, resulting in a poorly paired action, such as a mismatched Q&A, which leaves us as users feeling frustrated.

Because natural conversation is not composed of paired actions neatly produced one after the other, we need to start analysing chatbot interaction as a whole, as opposed to sets of single paired exchanges. We also need interventions so that, when humans create these disruptive multi-stacks, we can better hear the ‘conversation’.
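
As a sketch of what such an intervention might look like, the snippet below buffers a rapid burst of user messages (a multi-stack) into a single turn before the bot replies, so the burst is heard as one utterance rather than several. The quiet-window threshold and message format are assumptions made purely for illustration.

```python
QUIET_WINDOW = 2.0  # seconds of silence that ends the user's turn (arbitrary)

def collect_turns(incoming):
    """Group (timestamp, text) messages into one turn per quiet window."""
    turn, last_ts = [], None
    for ts, text in incoming:
        if last_ts is not None and ts - last_ts > QUIET_WINDOW:
            yield " ".join(turn)  # the previous burst counts as one turn
            turn = []
        turn.append(text)
        last_ts = ts
    if turn:
        yield " ".join(turn)

# Three messages sent within a second read as one question, not three.
burst = [(0.0, "hi"), (0.4, "where is my order"), (0.9, "number 1234?"),
         (5.0, "thanks")]
for turn in collect_turns(burst):
    print("TURN:", turn)
```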

The meaning of silence

One way to manage human-chatbot interaction is through animation. Animation in the form of loaders, ‘…is typing’ or ‘please wait’ indicators offers a two-way benefit to both bot and user, helping to maintain transparency and to keep the conversation moving forwards. It also helps to reduce problematic human multi-stacks by staking a claim to the turn in the conversation.
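
A rough sketch of how a bot might stake that claim: emit a typing indicator while the reply is being composed, so the user is encouraged to wait rather than pile on further messages. The print statement stands in for a transport call; real messaging platforms expose their own typing events.

```python
import threading
import time

def show_typing(stop_event, interval=1.0):
    # Periodically claim the turn until the reply is ready.
    while not stop_event.is_set():
        print("[bot is typing…]")   # stand-in for a platform typing event
        stop_event.wait(interval)

def reply_with_typing(compose_reply):
    stop = threading.Event()
    t = threading.Thread(target=show_typing, args=(stop,))
    t.start()
    try:
        reply = compose_reply()     # the slow NLP call happens here
    finally:
        stop.set()
        t.join()
    print("BOT:", reply)

def slow_compose():
    time.sleep(3)                   # simulate processing delay
    return "Your order shipped yesterday."

reply_with_typing(slow_compose)
```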

As Adrian Zumbrunnen, a designer at Google, stated, “without animation, there’s no conversation”, and this is true to a large extent. In the same way that we associate temporal delay with affirmative or negative responses, could a bot deduce the same meaning from our response lag? Understanding whether we have low technical ability, or whether we are using the affordance of the chat interface to buy time to think, are interesting challenges to meet when sensing intent to repair.
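
Purely as a thought experiment, a bot could treat response lag as a weak signal, as in the sketch below. The thresholds and interpretations are invented for illustration; this is not an established technique.

```python
def read_lag(seconds):
    # Invented heuristic: map response lag to a tentative interpretation.
    if seconds < 2:
        return "prompt reply - likely confident"
    if seconds < 15:
        return "short pause - possibly thinking, or typing slowly"
    return "long silence - may be stuck; consider offering a repair prompt"

for lag in (1.2, 8.0, 40.0):
    print(f"{lag:>5.1f}s ->", read_lag(lag))
```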

Do bots have to be human?

Google doesn’t pretend to be human, even though search term analysis shows that we ask it very human questions. Furthermore, when Google shows irrelevant results or presents a non-match, we blame ourselves for not getting the search term right. We are more forgiving because ‘Googling’ isn’t set up as conversation; it’s set up as retrieval. The fact that we empathise more with human-looking or human-feeling bots, and less with mechanical ones, is not necessarily a disadvantage when it comes to interaction – a bot’s degree of humanness helps to set user expectations.

The wrong interface?

The decision trees powering automated chatbots, which are built on sequences of conditional control statements, share similarities with adventure game dialogues.
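
The parallel can be made concrete: the sketch below plays the same conditional branches as an adventure game dialogue, presenting the available choices up front rather than asking the user to guess which free-text inputs the bot understands. The scene names and labels are invented for illustration.

```python
# Each scene is (text, [(choice label, next scene), ...]).
SCENES = {
    "start": ("You reach support. What do you do?",
              [("Track a parcel", "track"), ("Report a fault", "fault")]),
    "track": ("The parcel is out for delivery. What next?",
              [("Done", "end")]),
    "fault": ("A fault report has been opened. What next?",
              [("Done", "end")]),
    "end":   ("Thanks, goodbye!", []),
}

def play(scene="start"):
    while True:
        text, choices = SCENES[scene]
        print(text)
        if not choices:
            return
        for i, (label, _) in enumerate(choices, 1):
            print(f"  {i}. {label}")
        pick = input("> ")
        if pick.isdigit() and 1 <= int(pick) <= len(choices):
            scene = choices[int(pick) - 1][1]

play()
```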

Bandersnatch, the instalment of Netflix’s Black Mirror series released late last year, is a “Choose Your Own Adventure” film that explores free will. Netflix even developed “state tracking”, which saves our trail of choices so that they can be deployed later in the narrative. Bandersnatch is a branching narrative made up of 250 segments, written in Twine (a decision-tree tool for writing interactive fiction that is also used to design decision trees for chatbots). Could it be, then, that like film, chatbots can take advantage of the familiarity and user interface (UI) characteristics of adventure game interfaces?
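
In the general sense described here, state tracking is just a recorded trail of choices that later branches can consult. The sketch below illustrates the idea (it is not Netflix’s actual implementation); the example choices loosely echo the film’s early decisions.

```python
# Record every choice as (key, value) so later branches can refer back.
state = {"choices": []}

def choose(key, value):
    state["choices"].append((key, value))

def recall(key):
    """Return the most recent choice made for this key, if any."""
    for k, v in reversed(state["choices"]):
        if k == key:
            return v
    return None

choose("cereal", "Frosties")
choose("music", "Thompson Twins")

# Much later in the branching narrative, an earlier choice resurfaces:
if recall("cereal") == "Frosties":
    print("An advert for the cereal you picked plays on the television.")
```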

The current chatbot interface offers a limited screen size and a limited single-message length, and usually sits in the bottom right-hand corner of the screen. Whilst appreciating the UI inheritance from old-school messenger services like ICQ, why have we not yet gone full screen for automated chatbots?

From forecasting to managing

That natural conversation gets distorted by human-chatbot interaction may help to explain why, despite the intelligence of NLP, interactions with automated chatbots are still met with mixed feelings. Partly because of the UI affordance, and partly because of our learned behaviour of what it means to participate in natural conversation, there is meaning in silence, in turn overlap and in conversational repair.

Automated chatbot responses don’t necessarily need to be human — they just need to be managed in a very human way.

Written by Vanita Tanna, head of Experience and Service Design at Rufus Leonard
