How can threat actors use generative artificial intelligence?

Pierluigi Paganini
December 2, 2024

Generative Artificial Intelligence (GAI) is rapidly revolutionizing various industries, including cybersecurity, enabling the creation of realistic and personalized content.

The same capabilities that make generative artificial intelligence a powerful tool for progress also make it a significant threat in the cyber domain. The use of GAI by malicious actors is becoming increasingly common, allowing them to carry out a wide range of cyberattacks. From generating deepfakes to amplifying phishing campaigns, GAI is becoming a tool for large-scale cyberattacks.

GAI has attracted the attention of researchers and investors due to its cross-industry transformation potential. Unfortunately, abuse by malicious actors is changing the cyber threat landscape. Among the most worrying applications of generative artificial intelligence is the creation of deepfakes and disinformation campaigns, which are already proving effective and dangerous.

Deepfakes are media content – such as videos, images or audio – created using GAI to realistically manipulate faces, voices or even entire events. The increasing sophistication of these technologies has made it harder than ever to distinguish real content from fake. This makes deepfakes a powerful weapon for attackers involved in disinformation campaigns, fraud, or data breaches.

A study presented in 2019 by the Massachusetts Institute of Technology (MIT) found that AI-generated deepfakes can deceive people up to 60% of the time. Given advances in AI since then, it’s likely that this percentage has increased, making deepfakes an even greater threat. Attackers can use them to fabricate events, impersonate influential figures, or create scenarios that manipulate public opinion.

The use of generative artificial intelligence in disinformation campaigns is no longer a hypothesis. According to a report from the Microsoft Threat Analysis Center (MTAC), Chinese threat actors are using GAI to conduct influence operations against other countries, including the United States and Taiwan. By generating AI-driven content such as provocative memes, videos and audio, these actors aim to exacerbate social divisions and influence voter behavior.

For example, these campaigns use fake social media accounts to post questions and comments about controversial internal issues in the US. The data collected through these operations can provide insights into voter demographics and potentially influence election results. Microsoft experts expect China’s use of AI-generated content to expand to influence elections in countries such as India, South Korea and the United States.

[Image: China’s AI-driven influence operations targeting Taiwan (source: Microsoft MTAC)]

GAI is also a boon for attackers looking for financial gain. By automating the creation of phishing emails, malicious actors can scale their campaigns and create highly personalized and convincing messages that are more likely to deceive victims.

An example of this abuse is the creation of fraudulent social media profiles using GAI. In 2022, the Federal Bureau of Investigation (FBI) warned of a rise in fake profiles aimed at financially exploiting victims. GAI allows attackers to generate not only realistic text, but also photos, videos and audio that make these profiles appear real.

Additionally, platforms such as FraudGPT and WormGPT, launched in mid-2023, offer tools specifically designed for phishing and Business Email Compromise (BEC) attacks. For a monthly fee, attackers can access sophisticated services that automate the creation of fraudulent emails, increasing the efficiency of their scams.

Another area of concern is the use of GAI to develop malicious code. By automating the generation of malware variants, attackers can bypass the detection mechanisms of major anti-malware engines. This makes it easier for them to carry out large-scale attacks with minimal effort.
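To see why automated variant generation undermines signature-based detection, consider how exact-match signatures behave. The sketch below is purely illustrative and uses harmless placeholder bytes: flipping a single byte in a file produces a completely different SHA-256 digest, so an engine that matches on known hashes treats each machine-generated variant as an unknown file.

```python
import hashlib

# Two harmless placeholder byte strings standing in for file contents;
# they differ by a single byte, as an automated variant generator might produce.
variant_a = b"example payload \x00"
variant_b = b"example payload \x01"

# A hash-based signature engine sees two unrelated files:
# the one-byte change alters the entire digest.
print(hashlib.sha256(variant_a).hexdigest())
print(hashlib.sha256(variant_b).hexdigest())
```

This fragility of exact signatures is one reason defenders increasingly rely on behavioral and heuristic detection rather than hash matching alone.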

One of the most alarming aspects of GAI is its potential to automate complex attack processes. This includes creating tools for offensive purposes, such as malware or scripts designed to exploit vulnerabilities. GAI models can refine these tools to bypass security measures, making attacks more sophisticated and harder to detect.

While the malicious use of GAI is still in its early stages, it is gaining traction among cybercriminals and state-sponsored actors. The increasing accessibility of GAI through “as-a-service” models will only accelerate its adoption. These services enable attackers with minimal technical expertise to carry out sophisticated attacks, democratizing cybercrime.

For example, the effects of GAI are already visible in disinformation campaigns. When it comes to phishing and financial fraud, the use of tools like FraudGPT shows how attackers can scale their operations. Automating malware development is another worrying trend as it lowers the barrier to entry for cybercrime.

Leading security companies as well as major GAI providers such as OpenAI, Google and Microsoft are actively working on solutions to mitigate these new threats. Efforts include developing robust detection mechanisms for deepfakes, improving anti-phishing tools, and creating safeguards to prevent misuse of GAI platforms.
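As a rough illustration of what a classic anti-phishing heuristic looks like, and of its limits against GAI-written messages, here is a minimal keyword-scoring sketch. The phrases, weights, and threshold are invented for this example and do not reflect any named vendor’s product.

```python
# A toy phishing heuristic. The indicator phrases, weights and threshold
# below are illustrative assumptions, not any real product's logic.
SUSPICIOUS_PHRASES = {
    "verify your account": 2.0,
    "urgent action required": 2.0,
    "click the link below": 1.5,
    "your password will expire": 1.5,
}

def phishing_score(email_text: str) -> float:
    """Sum the weights of suspicious phrases found in the message body."""
    text = email_text.lower()
    return sum(w for phrase, w in SUSPICIOUS_PHRASES.items() if phrase in text)

sample = "Urgent action required: please verify your account within 24 hours."
print(phishing_score(sample) >= 3.0)  # True: the message is flagged
```

Part of the problem described in this article is that GAI-written phishing tends to avoid exactly these telltale phrases, producing fluent, personalized messages that slip past keyword heuristics and push defenders toward machine-learning and behavioral signals instead.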

However, due to rapid technological advances, attackers are always one step ahead. As GAI becomes more sophisticated and accessible, the challenges for defenders increase.

Generative artificial intelligence is a double-edged sword. Although it offers tremendous opportunities for innovation and progress, it also poses significant risks when weaponized by malicious actors. The ability to create realistic and personalized content has already transformed the cyber threat landscape, ushering in a new era of attacks ranging from deepfakes to large-scale phishing campaigns.

As technology evolves, so will its misuse. It is essential for governments, businesses and individuals to recognize the potential dangers of GAI and take proactive measures to address them. Through collaboration and innovation, we can realize the benefits of GAI while mitigating its risks. This is how we ensure that this powerful tool serves humanity rather than harming it.

Follow me on Twitter: @securityaffairs and Facebook and Mastodon

Pierluigi Paganini

(SecurityAffairs – hacking, generative artificial intelligence)


