What is the backstory of Darktrace and its technology?
The company was founded in 2013 by people from GCHQ and MI5, along with mathematicians from the University of Cambridge – specialists in unsupervised learning.
The legacy approach to cyber security was to build walls around the outside of the network, and there were a lot of different approaches to securing the perimeter with the thought that you could keep the bad guys out.
The reality, and especially coming from the intelligence community, is that attackers can get into any network, given a number of approaches and an amount of time. Therefore, we turned the problem around to shine a light on what's going on inside the network and to find threats early, as people start to manoeuvre through the network in unusual ways.
We basically created an approach that worked very much like the human immune system, in that you can use machine learning to develop a sense of self – a normal pattern of life for every user and device in the network – and based on that find out when things are not normal.
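The "pattern of life" idea can be illustrated with a toy sketch (hypothetical class and parameter names – this is not Darktrace's actual algorithm): model each device's normal behaviour as a running baseline of some metric, and flag observations that deviate from it by several standard deviations.

```python
from statistics import mean, stdev

class DeviceBaseline:
    """Toy 'pattern of life' model: learns a per-device baseline of a
    single metric (e.g. bytes sent per minute) and flags outliers.
    Illustrative only; a real system would model many metrics jointly."""

    def __init__(self, window=100, threshold=3.0):
        self.window = window        # how many recent samples define "normal"
        self.threshold = threshold  # deviation (in std devs) deemed anomalous
        self.samples = []

    def observe(self, value):
        """Record a sample; return True if it looks anomalous vs the baseline."""
        anomalous = False
        if len(self.samples) >= 10:  # need some history before judging
            mu, sigma = mean(self.samples), stdev(self.samples)
            if sigma > 0 and abs(value - mu) > self.threshold * sigma:
                anomalous = True
        self.samples.append(value)
        self.samples = self.samples[-self.window:]  # keep a sliding window
        return anomalous

# Hypothetical usage: steady traffic from one device, then a sudden spike.
baseline = DeviceBaseline()
for v in [100, 105, 98, 102, 99, 101, 103, 97, 100, 104, 102]:
    baseline.observe(v)
print(baseline.observe(100))   # within the learned pattern of life
print(baseline.observe(5000))  # large deviation gets flagged
```

The key property, mirroring the immune-system analogy, is that nothing is labelled in advance: the model learns only from what each device actually does, so anything sufficiently "not self" stands out.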
How unique is Darktrace’s AI technology?
It is unique, and the world leaders in the unsupervised machine learning approach are based in the UK.
If you look at machine learning, there are two schools of thought. The most common is to use data sets to train the machine learning. For example, in cyber security you would show the computer all of the malware that has ever existed and use that as a training set. Then you'd ask the computer to find everything that looks kind of like that.
The reality is that there are a lot of unknown threats out there, and there is much besides malware – insider threats, IoT attacks and all kinds of unknown threats that those types of training methods would miss.
The other thing is, no matter how well you train something in a lab, it’s different from the real world. And if you look at how companies operate, even two global banks operate entirely differently from one another – their network architectures are completely different. So it really takes the unsupervised approach that Darktrace uses, which means we have no prior knowledge – we just use machine learning to learn in real time from network traffic.
This approach of learning in the real world is a difficult problem to solve, especially when it comes to cyber, but Darktrace has been able to crack the code and get it to work. We’ve deployed it now more than 1,500 times, and it’s working in everything from large global banks and airlines to e-commerce.
How important is AI in the fight against cybercrime and cyber espionage?
If you take a look back at where cyber threats started, we heard about credit cards being stolen and websites being defaced. After that, we started to evolve into what I call trust attacks – attacks carried out not just for monetary gain but to erode society's trust.
The attack on the Democratic Party’s servers in the US was exactly that: something to try to get society to lose trust in democracy. There are other examples that we’ve seen in financial systems and legal markets.
I think this trust attack route was the next phase, and we'll see them for some time to come. However, beyond that, about four months ago with a client in India, Darktrace saw a novel attack that, once it got inside the network, used AI to try to learn how that network and its users behaved, and tried to blend into the background of the noisy network. Had we not used our own machine learning to spot it quickly, it would never have been detected.
It’s almost impossible for humans to spot these AI attacks, so it’s rapidly becoming an arms race of machines against machines – AI versus AI.
I think we’ll especially see that in state-sponsored attacks, where you can imagine countries using mathematicians and AI specialists to create these kinds of attacks. If the attackers start using AI, the logical way – probably the only way – to defend against that is for the good guys to use AI to get ahead.
How will the growing importance of AI influence the role of humans in cyber security?
Humans are incredibly important in the cyber security world. Machine learning and AI technology is going to be used to detect the threats automatically, but you need humans to understand the business context of that and determine how worrying the threat is and what should be done about it.
After AI starts detecting the threats – which is what Darktrace's Enterprise Immune System does, automatically detecting and visualising those threats for humans to decide what to do about them – the next phase will be to automatically take action.
What we see right now is people wanting to have the AI make a suggestion about what a company should do regarding the attacks it has identified. Most importantly, what we see at the moment is the AI being used to slow down the attack, giving humans time to catch up and make decisions about what to do, how to respond, what the business context is and what the implications would be.
What has been the secret of Darktrace’s fast growth? How have you scaled the company over the past three years?
It’s been a great growth success story. In our three short years we’ve grown to 330 employees, with our headquarters in Cambridge and offices around the world. We have also raised a tremendous amount of money, including our Series C round back in July led by KKR, a global growth investor. To date, we have raised $91 million.
We now have a valuation of just under $500 million. Part of the reason for that is that we have over 350 large global customers and 1,500 deployments of the Darktrace appliance. We have now reached a total contract value through our SaaS subscription model of more than $100 million, which is virtually unheard of in a B2B business in this amount of time.
We’ve attracted investors because we’ve found a true market need – in our case cyber defence – and demonstrated how to apply AI to it. Another reason for the success is how we’ve got machine learning to work at scale without needing an army of consultants. That is really the huge win on the technology front – getting AI to work without humans having to manually support or tune it.