
The Cybersecurity Lair™ • April 16, 2024

Latest News | The Rise of AI in Cybercrime

Evidence Suggests Hacking Group Utilised Chatbot for Malicious Campaign

Proofpoint security researchers have uncovered evidence suggesting that hacking group TA547 utilised an AI chatbot to help craft the PowerShell script used in a recent phishing campaign to deliver malware.


Proofpoint security researchers investigating a phishing campaign by hacking group TA547 uncovered evidence pointing to the use of an AI chatbot, such as ChatGPT, Gemini, or Copilot, to refine the attack. The phishing emails distributed the Rhadamanthys malware via a PowerShell script that exhibited characteristics typically associated with AI-generated content. Although the evidence that TA547 utilised an AI chatbot is not conclusive, the incident highlights the potential for cybercriminals to leverage freely available large language models to enhance their attacks.

Highlights:


  • Proofpoint researchers discovered evidence suggesting the use of an AI chatbot by hacking group TA547 to refine a malware attack.
  • The malware attack involved phishing emails distributing the Rhadamanthys malware via a PowerShell script.
  • Characteristics in the PowerShell script, such as grammatically correct and hyper-specific comments, indicate possible AI-generated content (a rough illustration of this tell follows this list).
  • Such comments are atypical in code written by human hackers, further suggesting the involvement of AI.
  • The incident underscores the potential for cybercriminals to utilise freely available large language models to enhance their attacks.
  • Both Microsoft and OpenAI previously warned about state-sponsored hackers using generative AI for cyberattacks.
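
To make the "hyper-specific comments" tell concrete, below is a minimal triage sketch in Python. It is not Proofpoint's methodology: the comment-density threshold, the function names, and the sample snippet are illustrative assumptions only. The idea is simply that LLM output tends to annotate nearly every line, something human-written attack scripts rarely do.

    import re

    # Hypothetical triage heuristic (an illustrative assumption, not Proofpoint's
    # published method): LLM-generated scripts often annotate nearly every line,
    # while hand-written attack scripts are usually sparsely commented.
    COMMENT_RE = re.compile(r"^\s*#")  # PowerShell line comments also begin with '#'

    def comment_density(script_text: str) -> float:
        """Return the fraction of non-blank lines that are comment lines."""
        lines = [ln for ln in script_text.splitlines() if ln.strip()]
        if not lines:
            return 0.0
        return sum(1 for ln in lines if COMMENT_RE.match(ln)) / len(lines)

    def possibly_llm_generated(script_text: str, threshold: float = 0.4) -> bool:
        """Flag scripts where an unusually high share of lines are comments.
        The 0.4 threshold is an assumed, illustrative value."""
        return comment_density(script_text) >= threshold

    # A benign, made-up PowerShell-style snippet showing the stylistic tell:
    sample = """\
    # Create a web client object to download the remote file
    $client = New-Object System.Net.WebClient
    # Save the downloaded file to a temporary location on disk
    $client.DownloadFile($url, $dest)
    """
    print(f"Comment density: {comment_density(sample):.2f}")  # 0.50
    print(possibly_llm_generated(sample))                     # True

A single stylistic signal like this proves nothing on its own, which is why the researchers describe their finding as suggestive rather than conclusive.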


You might also like: 101 | Artificial Intelligence | The Dark Side of AI


AI chatbots offer nefarious actors a concerning means of enhancing their malware or ransomware campaigns. By leveraging AI chatbots like ChatGPT, Gemini, or Copilot, cybercriminals can streamline the process of crafting malicious code and refining attack techniques. These chatbots, powered by advanced natural language processing models, can generate complex and realistic-sounding scripts, making it harder for security measures to detect and mitigate attacks. AI chatbots could also assist in developing sophisticated social engineering tactics, such as crafting convincing phishing emails or deceptive messages, to lure unsuspecting victims into clicking on malicious links or downloading infected files. As AI technology continues to advance and become more accessible, cybersecurity professionals must remain vigilant and adapt their defences to counter the evolving threat landscape posed by AI-driven cybercrime.


Takeaways to help avoid such issues:

  • Enhance cybersecurity awareness and vigilance among employees to recognise and report phishing attempts.
  • Regularly update and patch software to mitigate vulnerabilities that could be exploited by malware attacks.
  • Employ robust email filtering and security solutions to detect and prevent phishing emails from reaching employees' inboxes (a minimal sketch of one such filtering rule follows this list).
  • Conduct thorough investigations and analysis of suspicious activities or malware incidents to identify potential AI involvement.
  • Collaborate with security researchers and industry experts to stay informed about emerging threats and attack techniques, including those leveraging AI.
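
To illustrate the email-filtering takeaway above, here is a minimal, hypothetical Python sketch of one classic rule such filters apply: flagging links whose visible text shows one domain while the underlying href points to another. It is a sketch under stated assumptions, not a production filter; real secure email gateways layer many additional signals, such as sender reputation, attachment sandboxing, and URL rewriting. The example domains are made up.

    from html.parser import HTMLParser
    from urllib.parse import urlparse

    class LinkExtractor(HTMLParser):
        """Collect (href, visible text) pairs for every anchor tag in an HTML body."""
        def __init__(self):
            super().__init__()
            self.links = []
            self._href = None
            self._text = []

        def handle_starttag(self, tag, attrs):
            if tag == "a":
                self._href = dict(attrs).get("href")
                self._text = []

        def handle_data(self, data):
            if self._href is not None:
                self._text.append(data)

        def handle_endtag(self, tag):
            if tag == "a" and self._href:
                self.links.append((self._href, "".join(self._text).strip()))
                self._href = None

    def suspicious_links(html_body: str) -> list[str]:
        """Flag anchors whose visible text looks like a URL on one domain
        while the actual href points somewhere else, a classic phishing tell."""
        parser = LinkExtractor()
        parser.feed(html_body)
        flags = []
        for href, text in parser.links:
            href_domain = urlparse(href).netloc.lower()
            if text.startswith("http"):
                text_domain = urlparse(text).netloc.lower()
                if text_domain and text_domain != href_domain:
                    flags.append(f"text shows {text_domain} but link goes to {href_domain}")
        return flags

    body = '<a href="http://phish.example.net/reset">https://secure-bank.example.com/reset</a>'
    print(suspicious_links(body))
    # ['text shows secure-bank.example.com but link goes to phish.example.net']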


Sources and further reading:


Kan, M. (2024, April 10). An AI chatbot may have helped create this malware attack. PCMag. https://www.pcmag.com/news/an-ai-chatbot-may-have-helped-create-this-malware-attack


