Generative AI: a blessing or a curse for cybersecurity?

Benjamin Fabre considers the benefits and risks that generative AI capabilities will bring for cybersecurity, now and in the future

Last month, Elon Musk and a host of other technology leaders signed an open letter requesting a six-month halt on the development of advanced AI systems, warning that “human-competitive intelligence can pose profound risks to society and humanity”. While this may sound alarmist, the threats posed by generative AI are very real, and already evident in the cybersecurity space with the rise of AI-generated phishing emails, malware, and deepfake campaigns. As this technology becomes more advanced, generative AI will inevitably continue to facilitate cybercriminal activity, granting ‘wannabe’ cybercriminals the ability to attack organisations in increasingly sophisticated ways, often without having to write a single line of code.

There is no doubt that organisations should be concerned about the risks posed in this new generative AI age. However, the technology can also be used to protect against malicious actors, from bolstering antivirus software to improving fraud detection and identity and access management.

Generative AI is here to stay, and CISOs in particular cannot afford to turn a blind eye to the risks this new technology poses. So, what are the risks, and how can companies protect themselves amidst the new generative AI-driven threat landscape?


Exploitation of generative AI by cybercriminals

The emergence of ChatGPT and similar generative AI models has created a new threat landscape in which almost anyone who wishes to conduct malicious cyber activity against a company can do so. Cybercriminals no longer need advanced coding knowledge or skills. All they need is malicious intent and access to ChatGPT.

Influence fraud should be an area of particular concern for organisations in the future. It is by no means a novel concept: for years, bots have been used to generate comments across social media platforms and mainstream media comment sections to shape political discourse. For example, in the weeks leading up to the 2016 US presidential election, bots were found to retweet Donald Trump ten times more than Hillary Clinton. But now, in the age of generative AI, this type of deceit, historically reserved for high-level political fraud, could trickle down to the organisational level. In a matter of seconds, malicious actors could theoretically use ChatGPT or Google Bard to generate millions of harmful messages across social media, mainstream news outlets or customer service pages. Engineered attacks against companies, including their customers and employees, can be executed at unprecedented scale and speed.

Another major concern for organisations is the surge of bad bots. In 2021, bad bots made up 27.7 per cent of global internet traffic, and over the past couple of years that number has only risen. With its advanced natural language processing capabilities, ChatGPT can generate realistic user agent strings, browser fingerprints, and other attributes that make scraping bots appear more like legitimate users. In fact, according to a recent report, GPT-4 is so adept at generating language that it convinced a human it was blind in order to get that human to solve a CAPTCHA on its behalf. This poses an enormous security threat to businesses, and will only grow as generative AI develops.
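
To make the risk concrete, consider how little a user agent string now proves. The Python sketch below is illustrative only: the feature names and thresholds are assumptions, not any vendor’s production logic, but it shows why defenders increasingly score behaviour rather than trust request headers that an LLM can fabricate.

```python
# A naive header check passes for any realistic string an LLM can generate.
LEGIT_UA_SUBSTRINGS = ("Chrome/", "Firefox/", "Safari/")

def looks_legitimate_by_ua(user_agent: str) -> bool:
    """Header-only check: trivially fooled by a generated user agent."""
    return any(s in user_agent for s in LEGIT_UA_SUBSTRINGS)

def behavioural_bot_score(requests_per_minute: float,
                          distinct_pages_per_minute: float,
                          avg_seconds_between_clicks: float) -> float:
    """Toy behavioural score in [0, 1]; higher means more bot-like."""
    score = 0.0
    if requests_per_minute > 60:          # humans rarely sustain this rate
        score += 0.4
    if distinct_pages_per_minute > 30:    # breadth-first crawling pattern
        score += 0.3
    if avg_seconds_between_clicks < 0.5:  # inhumanly fast, regular timing
        score += 0.3
    return min(score, 1.0)

# A generated user agent sails through the header check...
print(looks_legitimate_by_ua("Mozilla/5.0 (Windows NT 10.0) Chrome/112.0.0.0"))  # True
# ...while the behaviour still gives the scraper away.
print(behavioural_bot_score(240, 180, 0.1))  # 1.0
```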

Generative AI can serve as a blessing for CISOs

The risks posed by generative AI are no doubt alarming. However, the technology is here to stay, and it is advancing rapidly. CISOs must therefore utilise generative AI to bolster their cybersecurity strategies and develop more robust defences against the new surge of sophisticated malicious attacks.

One of the biggest challenges that CISOs face at the moment is the cybersecurity skills gap. There are approximately 3.5 million unfilled cybersecurity jobs worldwide, and without skilled staff, organisations simply cannot protect themselves against threats. Generative AI offers a partial solution to this industry-wide challenge, as tools such as ChatGPT and Google Bard can be used to expedite manual work and reduce the workload of stretched cybersecurity personnel. In particular, ChatGPT can help accelerate code development and detect vulnerable code, improving code security. The introduction of a code interpreter for GPT-4 is a game changer for short-staffed organisations, as automating tedious operations frees up security experts to focus on strategic issues. Microsoft is already helping to streamline these operations for cybersecurity staff with the introduction of Microsoft Security Copilot, powered by GPT-4.
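
As a rough illustration of the code-review use case, the snippet below asks a GPT-4-class model to flag a vulnerability. It assumes the OpenAI Python SDK (v1.x) and an OPENAI_API_KEY in the environment; the prompt wording is a sketch, not a vetted review template.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SNIPPET = """
def get_user(conn, username):
    # string concatenation into SQL: a classic injection risk
    return conn.execute("SELECT * FROM users WHERE name = '" + username + "'")
"""

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system",
         "content": "You are a security reviewer. List any vulnerabilities "
                    "in the code and suggest fixes."},
        {"role": "user", "content": SNIPPET},
    ],
)
print(response.choices[0].message.content)  # should flag the SQL injection
```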

Furthermore, AI chatbot tools can support incident response. For example, in the event of a bot attack, ChatGPT and Google Bard can provide real-time information to security teams and help coordinate response activities. The technology can also assist in the analysis of attack data, helping security teams to identify the source of an attack and take appropriate measures to contain and mitigate its effects.

Organisations can also use ChatGPT and other generative AI models to analyse large volumes of data to identify patterns and anomalies that may indicate the presence of criminal bots. By analysing chat logs, social media data and other sources of information, AI tools can help detect and alert security teams to potential bot attacks before they can cause significant harm. 
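
At its simplest, that kind of analysis can be a statistical outlier test over traffic logs. The sketch below applies a three-sigma rule to per-client request counts; the log format, the synthetic noisy client, and the threshold are all assumptions. Production systems use far richer features and streaming windows, but the principle is the same.

```python
from collections import Counter
from statistics import mean, stdev

# (client_id, path) pairs, e.g. parsed from access logs (format assumed)
requests = [(f"10.0.0.{i}", "/") for i in range(1, 21) for _ in range(2)]
requests += [("10.9.9.9", f"/page/{i}") for i in range(500)]  # noisy client

counts = Counter(client for client, _ in requests)
mu, sigma = mean(counts.values()), stdev(counts.values())

for client, n in counts.items():
    z = (n - mu) / sigma if sigma else 0.0
    if z > 3:  # classic three-sigma rule; tune to your own traffic
        print(f"alert: {client} made {n} requests (z-score {z:.1f})")
```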


Protection in the new Generative AI Age

We have now entered the Generative AI Age, and organisations are facing more frequent and sophisticated cyberattacks as a result. CISOs must accept this new reality and harness the power of AI to fight AI-enhanced cyberattacks. Cybersecurity solutions that fail to employ machine learning in real time are ultimately doomed to fall behind.

For example, we know that as a result of generative AI, organisations will see a surge in the number of bad bots attempting fraud on their websites. In a world where bad actors use bots-as-a-service to create complex, stealthy threats, choosing not to leverage machine learning to block those threats is like bringing a knife to a gunfight. Now more than ever, AI-powered bot detection and blocking tools are imperative for organisational cybersecurity.
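
What might “leveraging machine learning” look like in practice? The sketch below trains a toy classifier over per-session features. Everything here, the chosen features, the synthetic training rows, and scikit-learn as the library, is an assumption for illustration; a real detector learns from vastly larger, live datasets.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# per-session features: [requests/min, avg ms between actions, JS-executed rate]
X = np.array([
    [  5, 4200, 0.98],   # human-like sessions
    [  8, 3100, 0.95],
    [  3, 6000, 1.00],
    [220,   40, 0.05],   # bot-like sessions
    [180,   55, 0.00],
    [300,   20, 0.10],
])
y = np.array([0, 0, 0, 1, 1, 1])  # 0 = human, 1 = bot

# scale features so the regression is not dominated by the millisecond column
model = make_pipeline(StandardScaler(), LogisticRegression()).fit(X, y)

new_session = np.array([[150, 80, 0.02]])
print(model.predict_proba(new_session)[0][1])  # estimated probability of "bot"
```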

Case in point: traditional CAPTCHAs, long considered a trusty cybersecurity tool, are no match for today’s bots. Bots now use AI to pass “old school” CAPTCHAs (such as the traffic light images), prompting enterprises to upgrade to solutions that challenge traffic first and use CAPTCHAs only as a last resort, and even then, only modern, security-hardened CAPTCHAs. Additionally, organisations can protect themselves by implementing multi-factor authentication and identity-based access controls, which grant users access via factors such as biometric data, helping to reduce unauthorised access and misuse.
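
The “challenge first, CAPTCHA last” idea reduces to an escalation ladder. Below is a hedged sketch of that decision flow; the thresholds are invented for illustration, and in practice the bot score would come from a real-time detection engine.

```python
from enum import Enum

class Action(Enum):
    ALLOW = "allow"
    SILENT_CHALLENGE = "silent challenge"  # e.g. a JavaScript or proof-of-work check
    CAPTCHA = "captcha"                    # visible friction: the last resort
    BLOCK = "block"

def decide(bot_score: float) -> Action:
    """Map a 0-1 bot score to an escalating response (thresholds illustrative)."""
    if bot_score < 0.3:
        return Action.ALLOW
    if bot_score < 0.6:
        return Action.SILENT_CHALLENGE     # invisible to most human visitors
    if bot_score < 0.9:
        return Action.CAPTCHA
    return Action.BLOCK

for score in (0.1, 0.5, 0.75, 0.95):
    print(f"score {score:.2f} -> {decide(score).value}")
```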

Generative AI poses significant security risks to organisations, but used correctly, it can also help mitigate the very threats it creates. Cybersecurity is a cat-and-mouse game, and CISOs need to stay one step ahead in order to protect their organisations from the devastating financial and reputational damage this new AI-powered threat landscape can inflict. By understanding the threats and using the technology effectively, CISOs can defend their organisations against the emerging attacks that generative AI enables.

Benjamin Fabre is co-founder and CEO of bot and online fraud solution provider, DataDome.
