Biometric diversity

Biometric technology is an industry characterised by “disappointment and broken dreams”.

This stark analysis, voiced by one of the evangelists of biometric technology – David McIntosh, the former chairman of the Intellect Association for Biometrics – underlines just how troubled the history of biometrics has been. Even those with a keen interest in promoting it cannot deny that false promises, clunky technology and intrusive approaches have done little for its cause – particularly within the private sector.

Where biometric technology has flourished, it has been at the hands of government officials as part of large-scale government-mandated implementations, focusing on areas such as border control and crime prevention: passports, visa programmes, the ever-controversial UK ID card scheme and massive criminal databases are, it seems, regarded as the technology’s ‘killer app’.

This association has not, it is now evident, been to its benefit. Indeed, the public rows over data privacy and civil liberties that seem invariably to follow the unveiling of such schemes have served to politicise the technology almost into obscurity.

Far from the public gaze, however, IT executives have been quietly undertaking select biometric implementations – at least according to insiders such as Will McMeechan, director of the European Biometrics Forum. “In the UK, the popular press is particularly anti-biometric, so there are things being done quietly so as not to raise too much attention,” he suggests.

One of the UK’s biggest casino groups, for example, is now deploying facial recognition in order to welcome its customers as they step through the door. Likewise, many UK banks are implementing voice recognition to authenticate telephone customers.

The trend can also be witnessed globally. Businesses in Saudi Arabia are increasingly using the technology to identify high-value customers immediately. Meanwhile, the use of biometrics at ATMs and the point of sale is also emerging – particularly in Asia – in order to deliver smoother services while simultaneously ensuring strong security. With such examples, says McMeechan, biometrics will break free from its negative image as an agent of control.

“To a large extent, the biometric market has been its own worst enemy in concentrating on security,” he observes. Rather, the peripheral advantages, such as loyalty schemes, increased convenience and the ability to access multiple services, should become a chief focus for the technology developers and marketers, he argues.

But fulfilling such tasks requires biometric technologies more inventive than those that have already made their name and reputation. Iris recognition, fingerprint scanning and the use of facial characteristics have now, by all accounts, reached a level of maturity that makes them highly suitable – if not always acceptable – for access and control implementations. But while well known, these are often intrusive and require client-side hardware.

In some instances, it would be preferable for biometrics to operate in a non-intrusive, seamless way – for example at a distance or without the subject’s knowledge or co-operation. In other cases, biometric authentication that can be operated over the web or over the phone, using established hardware, would be much more appropriate and cost effective. “The point of biometrics,” argues Professor Mark Nixon, head of the Electronics and Computer Science department at Southampton University, “is to look for the unique advantage. There will never be a panacea, but you should not expect one.”

Rather, he counsels, understanding how particular biometrics – however esoteric they might at first appear – can be applied effectively in particular contexts should be a critical goal. This philosophy is already evident at the highest levels. Andrew Tilbrook, director of the Defence Technology Centre, which is co-ordinated and funded by US contractor General Dynamics and the Ministry of Defence, tracks the development of a range of biometrics as part of his research programme. “The importance is that no one biometric is going to answer every problem: [the successful application] will be a suitable combination of biometrics, thereby eliminating the risk of ambiguity,” he told Information Age.

For example, by combining a thumbprint reader (with an error rate of one in 10,000) with voice recognition and facial recognition software (both with a one in 1,000 error rate), an overall error rate of one in 10 billion can be achieved. Increasingly, this ‘multi-modal’ use of biometrics will come to dominate the application of the technology. Palm-vein mapping, for instance, combined with smart cards, is already replacing PINs in Asia. Meanwhile, keystroke biometrics are being used in conjunction with user ID information and passwords to allow users to experience ‘frictionless’ online transactions, while offering a strong but invisible authentication control.
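Assuming the three checks fail independently – the premise behind that figure – the combined error rate is simply the product of the individual rates. A minimal sketch of the arithmetic:

```python
# Combining independent biometric error rates: for all three modalities to
# wrongly accept the same impostor, the probabilities multiply.
thumbprint = 1 / 10_000  # thumbprint reader: one in 10,000
voice = 1 / 1_000        # voice recognition: one in 1,000
face = 1 / 1_000         # facial recognition: one in 1,000

combined = thumbprint * voice * face
print(f"one in {1 / combined:,.0f}")  # one in 10,000,000,000
```

The independence assumption is the key caveat: correlated failure modes – poor lighting degrading several captures at once, say – would weaken the multiplication.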

In this model, future users will be better able to calibrate their risk profiles and customise their applications to suit both their existing hardware and the application context. As the technology showcases below highlight, there is no shortage of creative behavioural and biological biometric technologies by which to achieve this end – from analysing the way someone walks to the unique electrical signal generated by their heart. However, the user community, and technology executives, must be prepared to make the leap.

Walk the walk: Gait analysis

The works of Shakespeare are, by anyone’s measure, an unlikely starting-place when developing cutting-edge technology. But when, in 1994, Professor Mark Nixon, head of the Electronics and Computer Science department at Southampton University, pioneered the idea that ‘human gait’ – the way in which individuals walk – could serve as a unique, identifying characteristic, the Bard’s writing offered several supporting examples of the theory. In fact, says Nixon, Shakespeare’s protagonists are said, on several occasions, to recognise characters by their gait. The hypothesis, Nixon went on to learn, had also been explored in psychological, historical, medical and general literature.

Armed with the theoretical evidence, Nixon and his team at Southampton embarked on developing the technological capability that would transform theory into an automated reality using two techniques. The first is the silhouette-based model, in which an understanding of the unique shape of an individual’s body – including body height, weight, width, shape and part-proportion – is derived from their silhouette when walking. This information, described as ‘static cue’, is captured using computer vision techniques and translated into a unique set of numbers. The second technique focuses on ‘dynamic cue’, that is, the motion of an individual characterised by thigh swings, lower-leg swings, stride, the angle of both the lower-leg and thigh, and style of arm swing. All such measurements change while walking, meaning that a sequence of numbers describing this change can be generated and used to create a profile. “This is different for different people,” explains Nixon. “All you want is a set of numbers describing this difference.”
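The idea of reducing a walker to ‘a set of numbers describing this difference’ can be pictured with a toy sketch – the feature names, values and threshold below are invented for illustration, not Southampton’s actual model:

```python
import math

# Illustrative sketch: a gait signature as a numeric feature vector, with
# similarity between two captures measured by Euclidean distance.

def gait_signature(static_cues, dynamic_cues):
    """Concatenate static cues (e.g. body height, width ratio) with dynamic
    cues (e.g. thigh swing, lower-leg swing, stride) into one vector."""
    return list(static_cues) + list(dynamic_cues)

def distance(sig_a, sig_b):
    """Euclidean distance between two gait signatures."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(sig_a, sig_b)))

# Hypothetical enrolled profile vs. a fresh capture of the same walker
enrolled = gait_signature([1.78, 0.45], [32.0, 18.5, 1.12])
probe    = gait_signature([1.77, 0.46], [31.4, 18.9, 1.10])

match = distance(enrolled, probe) < 1.0  # threshold is illustrative
print(match)
```

A real system would extract these cues automatically from video using computer vision techniques, and learn the matching threshold from data.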

Nixon’s work – supported by research at MIT and the University of Maryland – has been largely funded by General Dynamics and DARPA, the US Department of Defense’s central research agency, as part of its ‘Human Identification at a Distance’ programme. Indeed, the unique advantage of gait analysis, argues Nixon, is its capacity to identify an individual without their knowledge or co-operation, at a distance or even on video. The biometric’s credibility has no doubt been boosted by its applications in murder investigations, having been used to identify the killer of Swedish foreign minister Anna Lindh after her assassination in September 2003.

Nonetheless, the technology, according to David McIntosh, deputy chair of the Intellect Association for Biometrics, needs to mature for at least another decade before it will be seen in full-scale public sector or commercial implementations. But Nixon is phlegmatic. “We’ve only been working on it for 14 years. We’ve had to design new techniques and new technologies along the way,” he points out. Currently, he adds, the team is addressing the issue of ‘covariants’ – factors that affect or disrupt the identification process such as unusual footwear, long-coats or injury. On this issue, says Nixon, “The jury is still out.”

Type cast: Keystroke biometrics

Despite its name, keystroke biometrics – or keystroke dynamics – owes less to the modern keyboard than it does to one of the oldest means of long-distance communication – Morse code. Since 1844, when the first message was transmitted using Samuel Morse’s innovation, individuals have had their own characteristic tapping rhythm. During the Second World War, military intelligence was able to distinguish ally from enemy using a tapping rhythm methodology known as ‘The Fist of the Sender’. This was developed into a computer security technology by the US National Science Foundation and, later, the National Bureau of Standards.

Principal heir to this research is Washington-based technology company BioPassword, which acquired several keystroke dynamics patents in 2002 and remains one of the technology’s leading proponents. A behavioural biometric, keystroke dynamics is predicated on the twin principles that an individual’s typing style is both idiosyncratic and consistent. In order to create a profile, the user is asked to type a passage of a certain length, or type the same word up to ten times. Measurements, relating to a range of variables which determine the user’s typing manner and rhythm, are then recorded. These include the ‘dwell time’ (the time a key is held down) and the ‘flight time’ (the time between releasing one key and pressing the next). BioPassword processes this data through a neural algorithm, which then determines a primary pattern for future comparison.
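The dwell-time and flight-time measurements can be sketched from a stream of key events; the timings below are invented, and this is not BioPassword’s algorithm:

```python
# Illustrative extraction of dwell and flight times from key events.
# Each event is (key, action, timestamp in milliseconds).
events = [
    ("p", "down", 0),   ("p", "up", 95),
    ("a", "down", 180), ("a", "up", 260),
    ("s", "down", 390), ("s", "up", 470),
]

downs = {}          # time each key was pressed
dwells, flights = [], []
last_up = None
for key, action, t in events:
    if action == "down":
        downs[key] = t
        if last_up is not None:
            # flight time: gap between the previous key-up and this key-down
            flights.append(t - last_up)
    else:
        # dwell time: how long this key was held
        dwells.append(t - downs.pop(key))
        last_up = t

print(dwells)   # [95, 80, 80]
print(flights)  # [85, 130]
```

A profile would then be built from the statistics of many such sequences, against which fresh typing samples are compared.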

Similar technology has been pioneered at the University of Regensburg’s research centre in Germany, using a range of parameters determined by psychometric features such as left- or right-handedness and finger placement on the keys. Other suppliers have emerged, marking the biometric out as a true commercial proposition. The technology’s unique advantage is its remote use, and it is deployed by several US banks for their online services, as well as in internal access control systems – unknown to end-users. As such, it is among the least intrusive biometrics currently in commercial deployment, requiring no additional client-side hardware.

Some scepticism remains, however, as to its robustness, which many experts argue is only truly viable in a ‘stress-free’ environment. It is best applied in a multi-factor authentication set-up, as deployed by BioPassword, explains CEO Mark Upson. “We build a profile around a word phrase, working off a combination of the user’s keystroke rhythm and their user ID and login details.” Used in this configuration, the company claims a 99% accuracy rate. Its application, however, is necessarily confined to computer-based activities and services.

You’re so vein: Vascular pattern recognition

An Eastern innovation, vascular pattern recognition – more widely understood as vein mapping and focused around veins in the hand – has enjoyed widespread take-up in Asia, especially Japan. It is no coincidence that Japanese giants Hitachi and Fujitsu are the chief pioneers (and patent-holders) of this technology, but it has also interested European technology companies, including systems integrator Accenture, which has installed the respective vendors’ technology in its R&D labs in France.

Finger-vein mapping uses infrared cameras to scan the random pattern of the hand’s blood vessels. The information is processed using an algorithm similar to that used for biometric fingerprints. The scanner operates irrespective of skin condition (important among some Asian ethnic groups where the thinness of skin makes fingerprint recognition unreliable), and can even scan the hand through thin material, claim the developers. “Because it scans the inside of your hand, the condition of your hand is irrelevant. Also, your ID is not visible to others,” says Ken Ashida, strategic planning director at Fujitsu. Nor is the technology affected by ageing, race or gender.

Fujitsu focuses on palm-vein recognition, Hitachi on finger-veins – although it remains unclear if there is any technical, as opposed to practical, advantage to be gained by this differentiation. Fujitsu’s Ashida claims, unsurprisingly, that palm-vein mapping is more secure because the palm pattern is more complicated, and therefore less prone to error. Both techniques enjoy low error rates overall, with false rejection rates of around 0.01% and false acceptance rates of less than 0.0001%.

The infrared scanner used is technically contactless, which, in theory, eliminates the hygiene problems of fingerprint scanning that have made fingerprint biometrics so unpopular in Japan. In practice, however, finger-vein mapping can require contact, while Ashida concedes that palm-vein mapping requires the hand to be carefully positioned. The technology has nonetheless been widely applied for access to Japanese residences, while in the corporate realm major banks – including Mizuho Bank – are now using the biometric for authentication at ATMs, where the information relating to an individual’s vein pattern is stored in the chip of a smart card, against which the ATM scan is verified.

A Japanese library has begun using the technology for book sign-outs, while Fujitsu has just announced that it is to deploy the technology in a Scottish school, in order to automate canteen payments. Evidently, the biometric has already made the step from security control to a service facilitator.

Speak easy: Voice recognition

After its more established peer, facial recognition, voice recognition – the biometric by which an individual is identified according to a range of features unique to their voice – is the most intuitive biometric technology yet developed. Deprived of sight, humans regularly identify one another – chiefly, of course, over the phone – by the character of the voice. Unlike the majority of other biometrics, however, the very action of recording an individual’s voice necessarily serves to degrade the integrity of the information captured.

“When you start recording, you are already destroying certain features that are individualistic,” explains Hermann Künzel, professor of phonetics at the University of Marburg, Germany, who has been working closely with Agnitio, a voice recognition software company based in Madrid. “This is different from a fingerprint, for example, where, unless you put only half your thumb on the reader, you’ll have the full information required. This is the crucial point in voice biometrics: you have to make the best of the subset of parameters you are left with [after a voice has been recorded].”

Traditional voice identification operates according to a range of parameters such as average vocal pitch, variation in frequency and accent. Modern voice biometrics, however, have been developed according to a much more robust set of physiological parameters: the source (the individual’s throat), the pitch of speech and “what happens to the sound before it leaves an individual’s lips and nose”, explains Künzel.

This last parameter is the most fundamental, relating to a set of internal cavities “that are so special in terms of their dimensions and the internal skin”, says Künzel, that the noise generated is entirely distinct from the general noise created by the larynx. With these acoustic parameters, first a spectrum and then a sub-spectrum are created. “You are then left with 19 mathematical reflection coefficients of the reverberations that characterise your internal body parts,” Künzel elucidates.
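Reflection coefficients of this kind are the standard output of linear predictive analysis of speech; the Levinson-Durbin sketch below is an assumption about the class of method involved, not Künzel’s actual pipeline:

```python
import random

def reflection_coefficients(r, order):
    """Levinson-Durbin recursion: derive reflection coefficients from an
    autocorrelation sequence r[0..order]."""
    a = [0.0] * (order + 1)   # prediction coefficients
    e = r[0]                  # prediction error energy
    ks = []
    for i in range(1, order + 1):
        acc = r[i] + sum(a[j] * r[i - j] for j in range(1, i))
        k = -acc / e          # i-th reflection coefficient
        ks.append(k)
        prev = a[:]
        a[i] = k
        for j in range(1, i):
            a[j] = prev[j] + k * prev[i - j]
        e *= 1 - k * k
    return ks

# A stand-in 'voice' signal and its autocorrelation sequence
random.seed(0)
x = [random.gauss(0, 1) for _ in range(400)]
r = [sum(x[n] * x[n + lag] for n in range(len(x) - lag)) for lag in range(20)]

ks = reflection_coefficients(r, 19)
print(len(ks))  # 19 coefficients, as in the speaker model described above
```

For a valid autocorrelation sequence, each coefficient lies between -1 and 1 – a property that makes the representation numerically well behaved for comparison between speakers.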

Global law enforcement agencies, in particular the FBI, have been developing the technology since 2000 and have carried out extensive research into the covariants, such as vocal ageing, ill-health and imitation. Very bad ambient noise, says Künzel, is one of the most problematic covariants. “But worse than technical noise would be other voices. That would make it very difficult for a computer to identify your voice.” Under low-noise conditions, however, the biometric fares very well. Indeed, the FBI’s research appears to have paid off. In one high-profile example seen in August 2007, the leader of a Colombian drug cartel, who had undergone extensive plastic surgery, was identified and arrested using voice information captured over the phone.

In this way, voice recognition biometrics, like gait analysis, can be used without the subject’s knowledge or co-operation. Similarly, it can be deployed – like keystroke biometrics – remotely, using established hardware. For this reason, many banks, including ABN Amro, which has implemented an offering from a company called VoiceVault, have begun to deploy the technology to authenticate telephone banking sessions. In this instance, voice recognition acts as the second authentication factor when speaking a password. Similar applications focusing on call centre-based transactions requiring authentication promise to serve as the technology’s primary commercial niche.

Global reach

Because the parameters by which modern voice recognition functions are physiological, the biometric is language independent, meaning the same software can be rolled out globally. But what about the voice being impacted by the common cold or a throat virus? Agnitio claims that this is not a problem because the software creates a ‘rolling profile’. “It is not comparing one recent questionable example to one reference sample,” explains Künzel. “You have a profile that is updated every time you use it. This allows the software to keep track of minor alterations.”
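One way to picture such a rolling profile – purely illustrative, with an invented weighting factor rather than anything Agnitio has published – is an exponentially weighted update of the stored feature vector:

```python
ALPHA = 0.1  # how strongly the newest sample updates the profile (assumed)

def update_profile(profile, new_sample, alpha=ALPHA):
    """Blend the newest feature vector into the stored profile, so the
    template drifts gradually with the voice it describes."""
    return [(1 - alpha) * p + alpha * s for p, s in zip(profile, new_sample)]

profile = [0.50, 0.30, 0.80]   # enrolled voice features (illustrative)
sample  = [0.54, 0.28, 0.82]   # features extracted from today's call

profile = update_profile(profile, sample)
print(profile)  # nudged slightly toward the new sample
```

Each successful authentication moves the template a small step toward the latest capture, which is how minor, gradual changes – a cold, ageing – can be tracked without re-enrolment.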

Further reading

Nationwide moves to two-factor

New security device uses skin to conduct ID data

The truth about biometrics – The adoption of biometrics is at a tipping point after decades of failed trials and mistrust.
