
Can AI and blockchain be used in the fight against deepfakes?

Deepfakes are hitting the news with increasing frequency. What is the fix? Can AI and blockchain be used in the fight against deepfakes?

Most of us have heard of phishing: we may get an email, supposedly from the CEO of the company we work for, demanding we transfer some money. As it's the boss, and we don't always react calmly when the boss aggressively demands something, we may well comply. These days, though, more people are aware of the danger and are likely to check the authenticity of such an email. Suppose, however, we get a phone call apparently from the boss, complete with the familiar cadences of the boss's voice; we are far less likely to be suspicious. In a variation of the deepfake, it has now been reported that AI was used to scam an organisation out of money by impersonating the voice of a company's chief executive. So what is the answer? Can AI or blockchain be used in the battle against deepfakes? Or does it boil down to staff training? Information Age spoke to three experts.

According to a report in the Washington Post, criminals used AI-powered software to scam a UK energy company out of $220,000 (£194,000). The CEO of the company thought he recognised the voice of the chief executive of the parent company, and duly transferred the money he was asked to transfer.

Another recent example of a deepfake was less serious in itself, but carried wider implications. A YouTube creator going by the name of Ctrl Shift deepfaked a scene from the AMC TV series Better Call Saul with the voices of Donald Trump and his son-in-law Jared Kushner.

How can organisations respond to the threat of deepfakes?

According to Dr Alexander Adam, Data Scientist at Faculty, it’s much harder for AI to create deepfake audio than video.

He explained: “The human ear is sensitive to sound waves extending over an impressively large spectrum of frequencies, so generating human-quality speech requires the algorithm to correctly predict the sound wave thousands of times per second. By comparison, the human eye can only perceive data at around 30 frames per second. This means that, in general, small inaccuracies in a video deepfake are less noticeable than in its audio counterpart.”


Training staff to spot deepfakes

As for the fix, Jake Moore, cybersecurity specialist at ESET, puts the emphasis on training staff.

He said: “We will see a huge rise in machine-learned cybercrimes in the near future. We have already seen Deepfake videos imitating celebrities and public figures, but to create convincing materials, cyber-criminals use footage that is already available in the public domain. As computing power increases, we are starting to see this become even easier to create, which paints a scary picture ahead.

“To help reduce these types of risks, companies should start by raising awareness and educating their employees, then introduce a second layer of protection and verification, one that would be hard to spoof, like a single-use password generator (OTP devices). Two-factor authentication is a powerful, inexpensive and simple technique to add an extra layer of security to protect your money from going into a rogue account.

“Before you know it, deepfake will be more convincing than ever, therefore companies need to consider investing in deepfake detecting software sooner rather than later. However, counter software is never developed that fast, so companies should focus on training their employees rather than just rely on software.”
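Moore's mention of single-use password generators points at one concrete control. A standard algorithm behind many OTP hardware tokens is HOTP (RFC 4226); the sketch below is an illustrative Python implementation of that algorithm, not a description of any particular vendor's device:

```python
import hashlib
import hmac
import struct

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """RFC 4226 HMAC-based one-time password."""
    msg = struct.pack(">Q", counter)            # counter as 8-byte big-endian
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                  # dynamic truncation (RFC 4226 §5.3)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 4226 Appendix D test secret; the codes below match the RFC's test vectors
secret = b"12345678901234567890"
print(hotp(secret, 0))  # 755224
print(hotp(secret, 1))  # 287082
```

Because each code is valid only once, a scammer who has merely imitated a voice cannot supply the expected value, which is the "second layer of verification" Moore describes.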


The blockchain response to deepfakes

Kevin Gannon, blockchain tech lead and solutions architect at PwC, said: “When it comes to the area of deepfakes, emerging technology like blockchain can come to the fore to provide some level of security, approval and validation. Blockchain has typically been touted as a visibility and transparency play, where once something is done, the who and when become apparent; but it can go further.

“When a user who has a digital identity wants to do something, they could be prompted for proof of their identity before access to something (like funds) is granted. From another angle, the actual authenticity of video and audio files can be proven via a blockchain application, where the hash of certain files (supposed proofs) can be compared against the originals. It is not a silver bullet, though, and as always, adopting and applying the technology in the right way is key. From a security perspective, more open data mechanisms (like a public ledger) have an increased attack surface, so inherent protection cannot simply be assumed.

“But enhancing security protocols around the approvals process, where smart contracts could also come into play, can strengthen such processes. In addition, at a more technical level, applying multi-sig (multiple-signature) transactions means that even if one identity is compromised, more than one identity is needed to provide ultimate approval.”
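Gannon's two suggestions, comparing a file's hash against a record made at publication time and requiring multiple approvals, can be sketched in a few lines of Python. This is a simplified illustration: the ledger itself is out of scope, so `ledger_digest` merely stands in for a digest that would be recorded on-chain, and the signer names are hypothetical:

```python
import hashlib
import hmac

def file_hash(data: bytes) -> str:
    """Digest of a media file; the same digest would be anchored on-chain."""
    return hashlib.sha256(data).hexdigest()

def is_authentic(candidate: bytes, onchain_digest: str) -> bool:
    """Compare a received file against the digest recorded on the ledger."""
    return hmac.compare_digest(file_hash(candidate), onchain_digest)

def approve(approvals: set, authorised: set, threshold: int = 2) -> bool:
    """Multi-sig style check: a transfer needs `threshold` distinct
    authorised identities, so one compromised identity is not enough."""
    return len(approvals & authorised) >= threshold

# Digest recorded when the genuine file was published
original = b"...original audio bytes..."
ledger_digest = file_hash(original)

# A doctored file no longer matches the on-chain record
tampered = b"...deepfaked audio bytes..."
print(is_authentic(original, ledger_digest))   # True
print(is_authentic(tampered, ledger_digest))   # False

# 2-of-3 approval: one spoofed "CEO" voice call is not sufficient
signers = {"ceo", "cfo", "auditor"}
print(approve({"cfo"}, signers))               # False
print(approve({"cfo", "auditor"}, signers))    # True
```

The hash check only proves a file matches what was originally registered; it says nothing about a file that was never registered, which is one reason Gannon calls the approach no silver bullet.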


AI and deepfakes

As for how AI can be used to combat deepfakes, we return to Dr Alexander Adam. He said: “Machine learning algorithms are great at recognising patterns in large amounts of data. ML can provide a way to distinguish fake audio from real audio using classification techniques, which work by showing an algorithm large amounts of deepfake and real audio and teaching it to recognise the difference in (for example) the frequency composition of the two. For example, by using image classification on the audio spectrograms you can teach an ML model to ‘spot the difference’. However, as far as I am aware no out-of-the-box solution exists yet.

“In part, this may be because audio deepfakes haven’t been regarded as being as much of a threat as video deepfakes. Audio deepfakes are not pitch-perfect, and you should be able to tell the difference if one is tailored to a specific person that you know. That said, interference across phone lines or staged ‘outside’ background noise could probably be used to mask a lot of this. And as there has been so much high-profile media attention on deepfake videos, the public are perhaps less aware of the potential risks of audio deepfakes. So, if you have a reason to be suspicious, you should always validate that it is who you think it is.

“However, we expect that the creation and use of audio deepfake for malicious purposes will increase in the coming years and become more sophisticated. This is because there is a better understanding of machine learning models and how to transfer what was used on one model to another person and train it quickly. But, it’s worth noting that as the generation of deepfake content gets better, typically so do the detection methods.”
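Adam's ‘spot the difference’ approach can be illustrated with a toy example. The sketch below is an assumption-laden simplification, not a production detector: it stands in “real” and “fake” audio with sine waves of different frequency compositions, computes a magnitude spectrogram with numpy, and learns a threshold on a single spectral-centroid feature. A real system would use far richer spectrogram features and a trained classifier such as a convolutional network:

```python
import numpy as np

def spectrogram(signal, frame=256):
    """Magnitude spectrogram: |rFFT| of non-overlapping frames."""
    n = len(signal) // frame
    frames = signal[: n * frame].reshape(n, frame)
    return np.abs(np.fft.rfft(frames, axis=1))

def spectral_centroid(spec):
    """Average frequency-bin 'centre of mass': one crude summary
    of a clip's frequency composition."""
    bins = np.arange(spec.shape[1])
    return float(np.mean((spec * bins).sum(axis=1) / (spec.sum(axis=1) + 1e-9)))

rng = np.random.default_rng(0)
t = np.arange(4096) / 16000.0          # ~0.25 s of audio at 16 kHz

def make(freq):
    """Toy 'audio clip': a sine wave plus a little noise."""
    return np.sin(2 * np.pi * freq * t) + 0.01 * rng.standard_normal(t.size)

real = [make(rng.uniform(200, 400)) for _ in range(20)]    # "real": low frequencies
fake = [make(rng.uniform(2000, 4000)) for _ in range(20)]  # "fake": shifted composition

# "Training": place a threshold between the two classes' mean centroids
f_real = [spectral_centroid(spectrogram(s)) for s in real]
f_fake = [spectral_centroid(spectrogram(s)) for s in fake]
threshold = (np.mean(f_real) + np.mean(f_fake)) / 2

def predict(signal):
    return "fake" if spectral_centroid(spectrogram(signal)) > threshold else "real"

print(predict(make(300)), predict(make(3000)))  # real fake
```

The point of the toy is only that a frequency-domain representation turns the audio problem into something an image-style classifier can attack, which is exactly the spectrogram trick Adam describes.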

