In the mid-1990s, Hewlett-Packard researcher Phil Kuekes set out to prove that a supercomputer could be built from unreliable, even defective, components. The system he built, the Teramac, had 220,000 hardware defects and yet could perform certain tasks 100 times faster than contemporary workstations.
Kuekes’s point was proven, but he could not have foreseen that the project would lay the foundations for a genuine paradigm shift in computing.
More than two decades earlier, Berkeley professor Leon Chua had proposed a fourth fundamental component of electrical circuits.
Chua said that besides the resistor, the capacitor and the inductor, there should be something called a ‘memristor’, which could increase resistance when current flowed in one direction but reduce it when current flowed the opposite way.
The idea of a memristor suggested the possibility of computers that ‘remember’ data without power. This could have both pragmatic applications, such as laptops that do not lose live data when the battery runs out, and far-reaching consequences: systems could use the features of a memristor to mimic the learning and memory capabilities of the human brain.
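The behaviour described above can be illustrated with a toy model. This is a deliberately simplified sketch, not HP’s actual device physics: it assumes resistance drifts linearly with the current applied, is clamped between two hypothetical bounds, and, crucially, stays put when no current flows at all (the ‘memory’ property).

```python
class ToyMemristor:
    """A toy model of memristor behaviour (illustrative only, not a real device model)."""

    def __init__(self, resistance=100.0, r_min=10.0, r_max=1000.0, k=50.0):
        self.resistance = resistance          # current resistance state (ohms)
        self.r_min, self.r_max = r_min, r_max # assumed physical limits
        self.k = k                            # assumed drift rate per unit charge

    def apply_current(self, current, dt=1.0):
        # Current in one direction lowers resistance; the reverse raises it.
        # Zero current leaves the state untouched -- the device 'remembers'.
        self.resistance -= self.k * current * dt
        self.resistance = max(self.r_min, min(self.r_max, self.resistance))
        return self.resistance


m = ToyMemristor()
m.apply_current(+1.0)   # resistance drops from 100 to 50
m.apply_current(0.0)    # no current: the state persists unchanged
m.apply_current(-1.0)   # reverse current restores resistance to 100
```

Because the state persists with no current flowing, a memory cell built this way would retain its contents with the power off, which is the property the laptop example above relies on.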
In 2008, nearly 40 years after Chua’s idea, researchers at HP Labs managed to build a functioning memristor. Stan Williams had been working on nanoscale electronic components for computers, and it was during discussions with Kuekes over pizza and beer that he realised that the Teramac’s approach of distributing workloads across unreliable components could be applied at the nano-scale.
This realisation led to a working prototype of a memristor, and now commercial products based on memristors are expected in the next two years. The technology could be revolutionary, and was made possible by a 20-year-old project to build a computer from dodgy components.