Proofpoint security researchers have found evidence suggesting that the hacking group TA547 used an AI chatbot to help craft a malware attack delivered through a phishing scheme.
While investigating a TA547 phishing campaign, Proofpoint researchers uncovered signs that an AI chatbot, such as ChatGPT, Gemini, or Copilot, may have been used to refine the attack. The phishing emails aimed to deliver the Rhadamanthys information stealer via a PowerShell script whose code exhibited characteristics typically associated with AI-generated content. Although the evidence is not conclusive, the incident highlights how cybercriminals could leverage freely available large language models to enhance their attacks.
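According to the report, the suspicion rested in part on the script containing a grammatically correct comment above each component, a pattern rarely seen in hand-written malware. As a minimal sketch of how a defender might turn that observation into a crude triage signal, the Python snippet below computes a comment-to-code ratio for a script; the metric, function names, and 0.9 threshold are illustrative assumptions, not Proofpoint's actual detection logic.

```python
# Illustrative sketch only (not Proofpoint's method): flag scripts whose
# comment-to-code ratio is unusually high, echoing the report that the
# TA547 PowerShell loader carried a grammatically correct comment above
# almost every statement. The metric and the 0.9 cutoff are assumptions.

def comment_to_code_ratio(script_text: str) -> float:
    """Ratio of '#' comment lines to code lines in a PowerShell-style script."""
    lines = [ln.strip() for ln in script_text.splitlines() if ln.strip()]
    comments = sum(1 for ln in lines if ln.startswith("#"))
    code = len(lines) - comments
    return comments / code if code else 0.0

def looks_machine_annotated(script_text: str, threshold: float = 0.9) -> bool:
    """Heuristically flag scripts with a comment for nearly every statement."""
    return comment_to_code_ratio(script_text) >= threshold

# A benign stand-in resembling the commenting pattern described in the report.
sample = """\
# Create a web client to fetch the remote file
$client = New-Object System.Net.WebClient
# Save the file to the local temp directory
$client.DownloadFile($url, $path)
# Launch the downloaded file
Start-Process $path
"""

print(comment_to_code_ratio(sample))    # 1.0: one comment per statement
print(looks_machine_annotated(sample))  # True under the assumed threshold
```

A heuristic like this would of course flag well-documented legitimate scripts too, so it could only ever serve as one weak signal among many, not a reliable detector of AI-generated malware.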
Highlights:
- Proofpoint linked a TA547 phishing campaign to a PowerShell script that delivered the Rhadamanthys information stealer.
- The script's unusually thorough, grammatically correct comments suggested it may have been generated or polished by an AI chatbot.
- The use of an AI chatbot is suspected rather than confirmed, but the case illustrates how accessible LLMs could aid cybercriminals.
AI chatbots present a worrying opportunity for nefarious actors seeking to enhance their malware or ransomware campaigns. By leveraging chatbots like ChatGPT, Gemini, or Copilot, cybercriminals can streamline the writing of malicious code and the refinement of attack techniques. Because these tools are powered by advanced natural language models, they can generate complex, realistic-looking scripts that are harder for security measures to detect and mitigate. They could also assist with sophisticated social engineering, such as crafting convincing phishing emails or deceptive messages that lure unsuspecting victims into clicking malicious links or downloading infected files. As AI technology becomes more capable and more accessible, cybersecurity professionals must remain vigilant and adapt their defences to the evolving threat landscape posed by AI-driven cybercrime.
Takeaways to avoid such issues:
- Treat unexpected email attachments and embedded links as suspect, and verify the sender through a separate channel before opening them.
- Restrict or monitor PowerShell execution on endpoints, since this campaign relied on a PowerShell script to deliver its payload.
- Keep email filtering and endpoint protection up to date, and remember that clean, well-commented code can still be malicious.
Source and further reading:
Kan, M. (2024, April 10). An AI chatbot may have helped create this malware attack. PCMag. https://www.pcmag.com/news/an-ai-chatbot-may-have-helped-create-this-malware-attack