Voice Risk Analysis technology – Secrets and lies

On 4 March 1996, the biggest shopping centre in the Israeli city of Tel Aviv was hit by a suicide bombing that killed and injured dozens of people. Like other attacks, the bombing added fuel to the public debate over effective security measures. There were the inevitable calls for extra checkpoints, better intelligence on terrorist groups and more body searches.

But others were thinking laterally. One Israeli technologist, Amir Liberman, investigated a means of identifying terrorists before they had a chance to strike. His answer sounded more like science fiction: he set about developing a software-based ‘lie detector’ that could pick up signs of deception by analysing the subtleties of speech – whether the voice was captured face-to-face or coming in over the telephone.

Seven years on, and there is nothing fanciful about his application of voice risk analysis (VRA) technology. Techniques similar to VRA have been around since the 1960s and have been applied with varying degrees of success. But Liberman argues his approach delivers what those earlier efforts were looking for.

Unsurprisingly, he will not reveal the extent to which his version of VRA technology is used by the Israeli security forces, but in recent months his software has been finding an enthusiastic audience well beyond national security circles. A select group of businesses in the UK have found that it can make key processes much more effective by helping them tell when customers or employees are lying.

Most notably, several of the biggest names in the UK insurance sector are using the software to try to cut the losses that result from fraud, which, according to the Association of British Insurers (ABI), costs the industry £2.2 billion annually. Lying is endemic, the ABI argues, with its research showing that 7% of all motorists have made fraudulent claims and one in six people are willing to commit insurance fraud if they think they can get away with it.

Those kinds of numbers convinced Highway Insurance, a Lloyd’s of London syndicate, that it needed a more powerful – if somewhat controversial – means of fighting customer fraud. Since July 2002, says Michael Lawrence, marketing and special projects manager, the company’s motor vehicle claims department has been using a voice analysis package of software and services from Liberman’s company, Nemesysco, and its UK distributor Digilog, in an effort to identify the ‘stress points’ that occur in customer conversations.

Heavy hints

Unlike earlier attempts to develop a computerised lie detector, which sought to identify ‘micro-tremors’ (stress-related responses in the muscles that produce the human voice), the Nemesysco product does not measure stress itself. “Stress is not an indication of a lie,” Liberman says. “Under investigation, people will be stressed. What we care about is cognitive stress – how certain the person is [about the incident], how willing they are to share information, and how often they go into extreme emotional states.”

Liberman says VRA analyses the different layers and frequencies of speech using a sophisticated algorithm – a closely guarded equation – which can detect if the subject is excited, confused, stressed, concentrating, anticipating a response or unwilling to share information.
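How that analysis works in detail is a trade secret, so any code can only gesture at the general shape of segment-based voice analysis. The Python fragment below is a toy illustration only – the features (signal energy, zero-crossing rate), thresholds and state labels are all invented for the example and bear no relation to Nemesysco’s closely guarded equation:

    import numpy as np

    def segment_features(samples, rate, seg_ms=250):
        """Split audio into fixed-length segments and compute simple features."""
        seg_len = int(rate * seg_ms / 1000)
        feats = []
        for i in range(len(samples) // seg_len):
            seg = samples[i * seg_len:(i + 1) * seg_len].astype(float)
            feats.append({
                "energy": float(np.mean(seg ** 2)),              # loudness proxy
                "zcr": int(np.sum(np.diff(np.sign(seg)) != 0)),  # rough pitch/noisiness proxy
            })
        return feats

    def label_segments(feats, baseline):
        """Flag segments that deviate sharply from the caller's own baseline."""
        labels = []
        for f in feats:
            if f["energy"] > 3 * baseline["energy"]:
                labels.append("extreme emotion")    # hypothetical mapping
            elif f["zcr"] > 2 * baseline["zcr"]:
                labels.append("cognitive stress")   # hypothetical mapping
            else:
                labels.append("normal")
        return labels

    rate = 8000  # telephone-quality sampling rate
    audio = np.random.default_rng(0).normal(scale=0.1, size=rate * 5)  # stand-in for a real call
    feats = segment_features(audio, rate)
    baseline = {k: float(np.median([f[k] for f in feats])) for k in ("energy", "zcr")}
    print(label_segments(feats, baseline))  # mostly 'normal' for this synthetic audio

Crude as it is, the sketch captures the idea Liberman describes: the system does not look for stress in absolute terms, but for sharp deviations from the speaker’s own baseline.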

But that is not enough to ensure a successful result. The system needs to be implemented as part of an overall fraud prevention strategy that includes training call centre agents in cognitive interviewing and evidence gathering techniques. For example, if a customer calls with a household burglary claim, they might embark on opportunist fraud by adding items to the claim that were not actually stolen. The system will flag up abnormal ‘stress’ levels in the voice associated with the extra items and enable staff to focus on these details.
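To make that per-item flagging concrete, a hypothetical fragment might look like the following; the item names, labels and output format are invented, since the real product’s interface is not public:

    def items_to_probe(claim_items):
        """claim_items: list of (item, voice_label) pairs from the analyser."""
        return [item for item, label in claim_items if label != "normal"]

    claim = [
        ("television", "normal"),
        ("laptop", "normal"),
        ("jewellery", "cognitive stress"),  # possible opportunist addition
    ]
    print(items_to_probe(claim))  # ['jewellery']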

On identifying an apparent lie, staff enter into a dialogue that gives the customer the opportunity to scale down or drop the claim. The objective is to get customers to withdraw all or part of a claim without the company having to conduct further (expensive) investigation. In a burglary claim, for example, the insurer will focus the conversation on the items flagged as possibly ‘stolen’, suggesting that the claimant may want to see whether those items might still turn up, so to speak, while cleaning up after the burglary.

There is no direct accusation, but the claimants, who will have been warned that the conversation is subject to voice analysis, are made aware that the company is suspicious and are given a legitimate escape route. If the customer persists with the claim, and the company still thinks fraud is likely, it will escalate the investigation and gather evidence in a more direct way.

Stream of lies

To date, the UK insurance companies that have tested the package have reported highly favourable outcomes. Digilog managing director Kerry Furber, who first implemented the Nemesysco system while working at Highway, recalls its initial success: “The very first customer that went through the system actually found his ‘stolen’ car in the neighbourhood after we suggested that he had another look.”

Of the thousands of claims that Highway put through the system during the trial period, one in 12 was rejected either outright or after further investigation. Typically, insurance companies reject only 1% of claims on fraud grounds.

The benefits are not just in spotting problem claims, but also in speeding up the processing of legitimate ones.

Highway’s payback has not gone unnoticed elsewhere in the finance industry. More than 20 companies in the UK are currently considering or trialling the software, including Esure, Groupama, Fortis and Churchill Insurance. The biggest name to sign up to date has been HBOS (Halifax Bank of Scotland), which in late 2003 embarked on a three-month pilot of the software in its general household claims division.

Another high-profile name, direct car insurer Admiral, has been testing the Nemesysco system since May 2003 through an outsourced agreement with loss adjuster Brownsword. Admiral claims that the use of the system has led to a 25% reduction in overall motor theft claims. It now plans to continue using the technology after the trial ends. Outsourcing the claims process to companies using VRA is proving popular. Brownsword is also administering accounts at Cox (part of Equity Red Star), which went live in November 2003.

Love detectives

Despite such results, there is still some scepticism. Some argue that the large-scale withdrawal of claims has less to do with the application of sophisticated technology, and more to do with the fact that people are told they will be subjected to a fraud check. In other words, the insurers would have seen similar results if they had merely mounted an elaborate bluff.

Other sceptics question how effective such ‘lie-detector’ software is in a business setting – not least because of how Nemesysco’s technology is being applied elsewhere. Alongside the applications for VRA in call centres, HR, police investigations and airport security, the company is also marketing a version of its software as a ‘love detection’ service. Worse, a Korean company is selling a bootlegged version of the Nemesysco ‘love detector’ software over the Internet.

And even advocates of the technology accept that it is not infallible. When one of the most famous apparent lies of recent years – President Clinton’s denial of an affair with his intern, Monica Lewinsky – was run through a VRA system, the results were inconclusive. What is more, the American Polygraph Association ran a series of independent research projects on VRA software between 1994 and 2001. The results raised doubts about the validity of measuring voice stress as a means of detecting deception.

Others are concerned about claims that the technology is able to measure lies. “Lies are not something that you can measure,” says Bill Trueman, a director of Absolute Customer Management, an insurance fraud and claims management service that uses cognitive interviewing techniques to weed out fraudsters. “You can measure emotion or stress, but this may not indicate a lie.”

Civil liberties groups including Liberty question the effect that VRA software could have on individuals who need to make a genuine claim, especially when they are either worried about the idea of taking a lie detector test or do not know what voice risk analysis technology means. Then there is the thorny issue of whether customers have the right to know that a VRA test is being carried out – and if they should be given the opportunity to opt out of the test.

Despite being opposed to the technology in principle, Liberty says businesses that are determined to apply it should be as open about its use as possible. The group points to the case of Direct Line, which was exposed in the press in 2002 for using VRA techniques without informing its customers. After the revelation, the company stopped using the technology altogether.

Campaigners say that customers should always be given the chance to opt out of a VRA test. Most insurers have accepted this convention. But others are holding out against such practices, arguing that it makes the technology ineffective.

Even those insurers that give customers the chance to opt out of a test often do so with strings attached. “Insurance companies are shrewd,” says Raphael Rahav, general manager of polygraph and investigation at ISC Global, an independent investigation and security agency. “They tell customers that if they take the test they will be paid much quicker. If they refuse then the company will ‘play’ with them – saying the claim could take up to a year to get paid, and even then payment is not guaranteed. They can prolong the pain.”

What’s more, it is not clear whether insurers might view an opt-out as evidence of possible fraud, and order an investigation into the claim anyway. But these and other worries are overblown, say proponents. Businesses are just exploring the potential of lie detection software, says Highway’s Lawrence. The issues of trust for customers and employees have barely been explored, he says. And he has no qualms about that. “Those who are genuine have nothing to fear,” he says.


The art of deception

The polygraph is still the investigator’s standard technology for lie detection. It uses a variety of physiological indicators to determine stress levels, including breathing, sweating and blood pressure. A series of control questions, to which the answers are undeniably true, is followed by a set of questions that might elicit false answers.

In comparison, voice risk analysis (VRA) software uses algorithms to detect changing voice patterns in unstructured conversations. It analyses seven different parts of the voice: ‘textual’ (the spoken word); ‘identifying’ (sounds that are unique to individuals); ‘intonation’ (individual expressions); ‘emotional’ (uncontrolled excitement and emotion); ‘cognitive’ (an uncontrolled conflict with the words spoken); ‘psychological’ (stress); and ‘physiological’ (the awareness and condition of the individual).

When it comes to the application of VRA technology, speech analysis company Digilog says there are several steps to the screening process (a sketch of the overall triage flow follows the list):

  • Trained call centre agents converse with claimants, covering personal and case details, to create a truth baseline. Operators use ‘conversation management’ skills designed to maximise empathy with the customer. They also apply psychological techniques that draw on the principle that genuine customers recall claim details much more easily than fraudsters. Variations from the baseline are identified by the system and flagged as potentially problematic (‘high risk’).

  • Claims evaluated as genuine are ‘fast-tracked’ to settlement. Claims flagged as ‘high risk’ are passed on to specialist operators for further investigation.

  • Agents are looking for information to enable them to repudiate claims. The aim is to unnerve fraudsters and encourage them to drop or change the claim – not to accuse them directly.

  • All remaining cases are investigated by company agents. The aim is to gather all information necessary to reject the claim – information that, unlike lie detector evidence, is admissible in court.
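Taken together, the steps amount to a triage pipeline: settle quickly, or escalate. The Python sketch below captures only that routing logic; the deviation scores and the ‘high risk’ threshold are invented for illustration and do not reflect how any real insurer calibrates its system:

    from dataclasses import dataclass, field

    HIGH_RISK = 0.7  # hypothetical cut-off; real systems use calibrated scores

    @dataclass
    class Claim:
        claim_id: str
        deviation_scores: list = field(default_factory=list)  # per-answer deviation from the truth baseline

        def risk(self):
            return max(self.deviation_scores, default=0.0)

    def triage(claims):
        """Route each claim: fast-track to settlement, or escalate to specialist operators."""
        routed = {"fast_track": [], "specialist_review": []}
        for c in claims:
            bucket = "specialist_review" if c.risk() >= HIGH_RISK else "fast_track"
            routed[bucket].append(c.claim_id)
        return routed

    calls = [Claim("A100", [0.2, 0.3]), Claim("A101", [0.1, 0.9])]
    print(triage(calls))  # {'fast_track': ['A100'], 'specialist_review': ['A101']}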
