Generative AI in Cybersecurity: Battlefield, Threat, and Defense


Battlefield

What started as excitement about the capabilities of Generative AI quickly turned into concern. Generative AI tools such as ChatGPT, Google Bard, and DALL-E continue to make headlines due to security and privacy concerns.

Generative AI can produce content that is highly plausible and therefore persuasive, to the point of making us question what is real and what is not.

So much so that at the end of a recent 60 Minutes episode about artificial intelligence, host Scott Pelley left viewers with this statement:

“We’ll end on a note that has never appeared on 60 Minutes, but one you may hear often in the AI revolution: the preceding was created with 100% human content.”

Generative AI cyber warfare begins in this believable, true-to-life context, on a battlefield filled with hackers who leverage tools like ChatGPT to carry out social engineering, phishing, and impersonation attacks.

Threat

Generative AI has the power to fuel increasingly sophisticated cyber attacks.

Because the technology can so easily produce believable, human-like content, new AI-powered scams are harder for security teams to detect.

AI-generated scams can take the form of social engineering attacks, such as multi-channel phishing attacks conducted through email and messaging apps.

A real-world example: an email or message that appears to come from a third-party vendor, sent to a company executive via Outlook (email) or Slack (messaging app) and containing a document.

The message directs the executive to click through to view an invoice. With Generative AI, distinguishing a fake message from a real one can be nearly impossible, and that is what makes it so dangerous.

One of the most worrying developments is that with Generative AI, cybercriminals can craft attacks in multiple languages, regardless of whether they speak those languages. The goal is to cast a wide net, and attackers do not limit themselves to victims who share their language.

The advancement of generative AI signals that the scale and efficiency of these attacks will continue to increase.

Defense

Cyber defense against Generative AI has been the notoriously missing piece of the puzzle. Until now. By waging machine-to-machine warfare, pitting AI against AI, we can defend against this new and growing threat.

But how should this strategy be defined and what does it look like?

First, the industry must reframe the fight as computer versus computer rather than human versus computer.

To sustain this effort, we must consider advanced detection platforms that can identify AI-generated threats, reducing both the time to flag and the time to resolve a social engineering attack originating from Generative AI, at a speed and scale no human analyst can match.

We recently ran a test of what this might look like. We had ChatGPT craft a language-based callback phishing email in multiple languages to see whether a Natural Language Understanding (NLU) platform, or advanced detection platform, could detect it.

We gave ChatGPT the prompt to “write an urgent email encouraging someone to call about a final notice regarding the software license agreement.” We also instructed it to write in both English and Japanese.

The advanced detection platform immediately flagged both emails as social engineering attacks.

But native email checks, like Outlook’s built-in phishing detection, could not. Even before the launch of ChatGPT, conversational, language-based social engineering attacks were succeeding because they could bypass traditional controls and land in inboxes without a link or payload.

So yes, machine-versus-machine combat is required for defense, but we also need to make sure we use effective artillery, such as an advanced detection platform. Anyone with these tools at their disposal has an advantage in the fight against Generative AI.
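As a rough illustration of what language-based detection can look like, here is a minimal sketch using a publicly available zero-shot classifier. This is not the detection platform from our test (which we have not named), and the model choice, label set, and threshold below are all our own assumptions:

```python
# Minimal sketch of language-based phishing detection using an
# off-the-shelf zero-shot classifier. Illustrative only: this is not
# the advanced detection platform described above, and the model and
# labels are our own choices. Requires: pip install transformers torch
from transformers import pipeline

classifier = pipeline(
    "zero-shot-classification",
    model="facebook/bart-large-mnli",  # public NLI model, assumed here
)

# A callback-phishing-style message of the kind produced in our test.
email_body = (
    "URGENT: Final notice regarding your software license agreement. "
    "Call +1-555-0100 today to avoid suspension of your service."
)

labels = ["callback phishing / social engineering", "legitimate business email"]
result = classifier(email_body, candidate_labels=labels)

# result["labels"] is sorted by score, highest first.
if result["labels"][0] == labels[0] and result["scores"][0] > 0.7:
    print(f"Flagged as likely social engineering ({result['scores'][0]:.2f})")
else:
    print("Not flagged.")
```

A production system would go far beyond a single model call, but the point stands: the verdict is driven by what the text says, not by a link or a payload.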

Machine-to-machine defenses can also scale to match the volume and plausibility of social engineering attacks enabled by ChatGPT and other forms of Generative AI.

For example, this defense can be deployed in multiple languages. It also does not have to be limited to email security: it can cover other communication channels such as Slack, WhatsApp, and Teams, as sketched below.
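One way to think about that channel-agnostic design is a single scanning layer in front of every channel. Everything in the sketch below is illustrative, and the toy phrase-matching stands in for a real NLU model like the one sketched earlier:

```python
# Sketch of a channel-agnostic scanning layer: the same text analysis is
# applied to every message, regardless of source channel. All names are
# illustrative; classify() is a toy stand-in for a real detection model.
from dataclasses import dataclass

SUSPICIOUS_PHRASES = ("final notice", "urgent", "call immediately", "invoice")

@dataclass
class Message:
    channel: str  # e.g. "email", "slack", "whatsapp", "teams"
    sender: str
    text: str

def classify(text: str) -> float:
    """Toy risk score in [0, 1]: the fraction of suspicious phrases present.
    A real deployment would call an NLU model here instead."""
    lowered = text.lower()
    hits = sum(phrase in lowered for phrase in SUSPICIOUS_PHRASES)
    return hits / len(SUSPICIOUS_PHRASES)

def scan(message: Message, threshold: float = 0.5) -> bool:
    """True if the message should be quarantined for human review."""
    return classify(message.text) >= threshold

# The same scan() serves every channel:
msg = Message("slack", "vendor@example.com", "URGENT final notice: view invoice")
print(scan(msg))  # True
```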

Stay Alert

One of our employees came across a Generative AI social engineering attempt while browsing LinkedIn.

A strange “whitepaper” download ad appeared, featuring what can only generously be described as “weird” ad creative.

Upon closer inspection, the employee spotted the distinctive color pattern imprinted in the lower-right corner of images produced by DALL-E, an artificial intelligence model that generates images from text prompts.
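As a curiosity, here is a rough heuristic for spotting such a corner signature programmatically. Since the square size and exact colors vary with resolution and model version, this sketch only checks whether the lower-right corner holds a row of clearly distinct colors; it is by no means a reliable detector:

```python
# Rough heuristic: does an image's lower-right corner contain a strip of
# several clearly distinct colors, like the DALL-E signature described
# above? Square size and exact colors vary, so this is a loose check,
# not a reliable detector. Requires: pip install Pillow
from PIL import Image

def corner_has_color_strip(path: str, squares: int = 5) -> bool:
    img = Image.open(path).convert("RGB")
    w, h = img.size
    side = max(1, w // 64)  # assumed size of each signature square

    # Sample the center of each square, right to left along the bottom edge.
    samples = []
    for i in range(squares):
        x = w - i * side - side // 2 - 1
        y = h - side // 2 - 1
        samples.append(img.getpixel((x, y)))

    # A signature strip shows up as neighboring samples with very
    # different colors (large L1 distance in RGB space).
    def distinct(c1, c2):
        return sum(abs(a - b) for a, b in zip(c1, c2)) > 100

    return all(distinct(a, b) for a, b in zip(samples, samples[1:]))

# Usage: corner_has_color_strip("suspicious_ad.png")
```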

Coming across this fake LinkedIn ad was an important reminder of the new social engineering dangers that arise when attacks are combined with Generative AI.

Being vigilant and skeptical is more critical than ever.

The era of generative AI used for cybercrime is here, and we must be vigilant and ready to respond with every tool at our disposal.
