Teaching machines to talk

By the age of six, most children can already understand not only spoken language but also its subtle complexities, such as ambiguity, novel words and non-literal expressions.

But while children are able to understand the nuances of spoken language, technology’s ability to detect the same linguistic intricacies is largely still in its infancy.

Making contact – Why we need ‘talking robots’

In customer care, for example, resolving issues over the phone, even with a human on the other end, can be a painful and laborious process. For convenience, many people are turning to digital self-service channels to look for answers, and the proportion of customer traffic going through 'non-voice' digital channels is growing rapidly.

At the same time, businesses are undoubtedly beginning to value self-service as part of their care mix. Online tools such as virtual agents, help guides, forums and even social channels such as Twitter reduce demand on their more expensive contact centres.


However, many self-service technologies simply aren't the silver bullet they are often perceived to be: a large proportion of customers still find these tools less than helpful.

The reason is that user-generated content such as blogs, forum posts and micro-posts does not follow universal, unambiguous structures. It can be difficult to analyse without the right understanding of the context of the situation, the culture in question, and even the specific topic or people involved.

As customers, and with them brands, move more and more of their interactions to digital channels, it is increasingly important for brands to understand what is being said so that they can interact with their customers appropriately through these new media.

Understanding humans

When people come across ambiguous language, they are able to infer meaning from the implicit context and draw on personal knowledge and understanding. Computers, however, don't benefit from the subtleties of human experience and learning – they don't have our 'common sense'.

Computers work from a logical standpoint, taking everything they come across literally. So while it's easy for humans to understand free-form language that does not strictly obey the traditional rules, it's surprisingly difficult for machines.

With an estimated 80% or more of information held in this kind of unstructured language, it is an important challenge to overcome – especially as businesses become increasingly reliant on automation.

Natural language processing (NLP), a branch of artificial intelligence, is improving technology's ability to understand how humans use language. It is the first step towards a future where people can interact with computers using natural, familiar expressions.

NLP requires systems to convert the words and phrases people use into formal semantic representations, so that meaning can be extracted, understood and acted upon. It also requires analysis of the underlying linguistic structures and relationships, from grammatical rules and explicit concepts to implicit meanings, logic and discourse context.
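To make this concrete, here is a minimal sketch of that first step, assuming the open-source spaCy library and its small English model (neither is named in this article): a free-form customer message is converted into a structured view of its grammatical relationships and explicit concepts.

# A minimal sketch, assuming spaCy and its "en_core_web_sm" English model.
import spacy

nlp = spacy.load("en_core_web_sm")  # pre-trained English pipeline
doc = nlp("My broadband has been down since Tuesday and I want a refund.")

# Grammatical structure: each word, its syntactic role and the word it depends on.
for token in doc:
    print(f"{token.text:12} {token.dep_:10} -> {token.head.text}")

# Explicit concepts: named entities such as dates, organisations or amounts.
for ent in doc.ents:
    print(ent.text, ent.label_)

Representations like this are still a long way from full meaning, but they give downstream systems something precise and structured to reason over.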

Individual words and sentences can have multiple meanings, and a single concept can be expressed in many different ways. Just as a person learning a foreign language struggles with this, the ambiguity that can arise when interpreting even a single sentence poses a significant challenge to computers performing NLP.
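As a small illustration of that ambiguity, the sketch below (assuming NLTK and its WordNet data, neither of which is mentioned in this article) lists the dictionary senses attached to a single surface word; a human picks the right one from context without noticing, while a machine has to disambiguate it explicitly.

# A small illustration, assuming the NLTK library and its WordNet corpus.
import nltk
nltk.download("wordnet", quiet=True)  # fetch the WordNet data if it is missing
from nltk.corpus import wordnet as wn

# One surface word, many possible senses: a fee, an accusation, a battery charge...
for synset in wn.synsets("charge"):
    print(synset.name(), "-", synset.definition())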

To understand our world, computers require information to be what human language is not – namely, precise, unambiguous and highly structured.

Rise of the virtual agent

Thanks to research undertaken by scientists at centres such as the Palo Alto Research Center (PARC) and the Xerox Research Centre Europe (XRCE), progress is being made in the automation of natural dialogue in the customer care sphere.

With NLP and machine learning, computers can now understand complex informal linguistic expressions to improve their online conversations with humans. This has many highly useful applications, ranging from fact checking to automated customer issue handling.

In customer care specifically, this is creating opportunities for customers to interact directly with 'virtual agents', for example, without having to change their communication patterns and preferences.


By monitoring how 'live agents' in the contact centre diagnose and solve customer problems, the virtual agent uses machine learning to grow in confidence and learn the best way to manage customer interactions, improving through experience.
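A simplified sketch of that learning loop might look like the following, assuming scikit-learn and a hypothetical handful of labelled live-agent transcripts (the article names neither the tooling nor the data): resolved conversations, tagged with the action the agent took, train a classifier that can then suggest a resolution, with a confidence score, for new customer messages.

# A simplified sketch, assuming scikit-learn; the transcripts and labels are hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Resolved live-agent conversations, labelled with the action the agent took.
transcripts = [
    "I can't log in to my account, it says my password is wrong",
    "I was billed twice this month and want one charge refunded",
    "My order still hasn't arrived and the tracking page is blank",
    "How do I reset the router after a firmware update?",
]
resolutions = ["account_reset", "billing_refund", "order_tracking", "tech_support"]

# Bag-of-words features feeding a simple linear classifier.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(transcripts, resolutions)

# The virtual agent can now suggest a resolution, and how confident it is, for a new message.
new_message = ["I think I've been charged for the same order twice"]
print(model.predict(new_message), model.predict_proba(new_message).max())

In practice the training set would be thousands of real conversations, and the confidence score would decide whether the virtual agent answers directly or hands the customer over to a live agent.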

Through this technology, digital customer care channels will be able to accurately and quickly understand what customers are saying. And with a more accurate understanding of customer needs and expectations, businesses will be able to respond more appropriately to meet them.

As this technology advances, tomorrow's digital self-service experience won't be impersonal or inaccurate. With the knowledge of thousands of live agents, tomorrow’s virtual agent will be able to make decisions based on experience and data in a fraction of a second.

 

Nick Gyles, CTO, WDS, a Xerox company
