Today's cybersecurity headlines are brought to you by ThreatPerspective


Ethical Hacking News

The Dark Side of AI: How Generative Models Are Being Exploited by Cyber Threat Actors




OpenAI has confirmed that its chatbot was used by Chinese and Iranian threat actors to enhance their malicious operations. The report highlights the growing threat of AI-powered cyber espionage and serves as a stark reminder of the need for increased vigilance and cooperation among cybersecurity professionals, researchers, and organizations to counter emerging threats.

  • ChatGPT was used by Chinese and Iranian threat actors to enhance the effectiveness of their malicious operations.
  • SweetSpecter, a Chinese adversary group, targeted OpenAI employees with spear phishing emails using ChatGPT for LLM-informed reconnaissance.
  • CyberAv3ngers, an Iranian IRGC-affiliated threat group, used ChatGPT to create custom scripts and obfuscate code for their operations.
  • Threat actors used ChatGPT to scan networks for exploitable vulnerabilities, spread misinformation, and evade detection.
  • The incident highlights the growing threat of AI-powered cyber espionage and the need for increased vigilance and cooperation among cybersecurity professionals.



  • In a disturbing revelation, AI company OpenAI has confirmed that its AI-powered chatbot, ChatGPT, was used by Chinese and Iranian threat actors to enhance the effectiveness of their malicious operations. The report, which covers operations since the beginning of the year, constitutes the first official confirmation that mainstream generative AI tools are being exploited for nefarious purposes.

    According to OpenAI, SweetSpecter, a Chinese adversary group, directly targeted employees at the company, sending spear phishing emails with malicious ZIP attachments disguised as support requests. If opened, the attachments triggered an infection chain that dropped the SugarGh0st RAT on the victim's system. The attackers used ChatGPT for LLM-informed reconnaissance, asking about vulnerabilities in various applications, and even used the tool to create a PowerShell loader.
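    Defenders can blunt this kind of lure with basic attachment triage before a ZIP ever reaches an inbox. The sketch below is illustrative only: the helper names and the suspicious-extension list are assumptions for the example, not details from OpenAI's report.

```python
import io
import zipfile

# Extensions commonly abused in infection chains like the one described above.
# This list is an illustrative assumption, not an exhaustive blocklist.
SUSPICIOUS_EXTS = {".exe", ".dll", ".lnk", ".ps1", ".scr", ".js", ".vbs"}

def zip_looks_suspicious(zip_bytes: bytes) -> bool:
    """Return True if the ZIP contains any member with an executable extension."""
    try:
        with zipfile.ZipFile(io.BytesIO(zip_bytes)) as zf:
            for name in zf.namelist():
                # Only the final suffix is checked, so double extensions
                # such as "invoice.pdf.exe" are caught as well.
                ext = "." + name.rsplit(".", 1)[-1].lower() if "." in name else ""
                if ext in SUSPICIOUS_EXTS:
                    return True
    except zipfile.BadZipFile:
        return True  # malformed archives are suspicious in their own right
    return False

def make_zip(names):
    """Build a small in-memory ZIP for demonstration purposes."""
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w") as zf:
        for n in names:
            zf.writestr(n, b"data")
    return buf.getvalue()
```

    A real mail gateway would combine a heuristic like this with sandbox detonation and sender-reputation checks rather than rely on extensions alone.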

    The Iranian Islamic Revolutionary Guard Corps (IRGC)-affiliated threat group 'CyberAv3ngers' also leveraged ChatGPT for its operations. The group asked OpenAI's chatbot to produce default credentials for widely used Programmable Logic Controllers (PLCs), develop custom Bash and Python scripts, and obfuscate code. The attackers also used the tool to plan their post-compromise activity, learn how to exploit specific vulnerabilities, and steal user passwords on macOS systems.
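    The flip side of attackers hunting for factory-default PLC credentials is that defenders can audit for them first. The sketch below shows the idea; the credential pairs are generic illustrations, not taken from the report or from any vendor's documentation.

```python
# Defensive sketch: audit device logins against a short list of factory
# defaults. These pairs are illustrative assumptions only.
KNOWN_DEFAULT_CREDS = {
    ("admin", "admin"),
    ("admin", "password"),
    ("root", "root"),
    ("user", "user"),
}

def uses_default_credentials(username: str, password: str) -> bool:
    """Return True if the pair matches a known factory-default credential."""
    return (username.lower(), password) in KNOWN_DEFAULT_CREDS

def audit_devices(devices):
    """Yield the names of devices still configured with default credentials.

    `devices` is an iterable of (name, username, password) tuples.
    """
    for name, user, pw in devices:
        if uses_default_credentials(user, pw):
            yield name
```

    In practice such an audit would pull its default-credential list from vendor advisories and run against an asset inventory, but even this minimal check catches the exact weakness the attackers were probing for.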

    The Iranian threat actors were also found to have asked ChatGPT for lists of electricity companies, contractors, and common PLCs in Jordan, as well as for information on recently disclosed vulnerabilities in CrushFTP and the Cisco Integrated Management Controller. They further used the tool to scan networks for exploitable vulnerabilities and asked how to copy the Windows SAM (Security Account Manager) file, which stores hashed account credentials.
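    Interest in recently disclosed vulnerabilities is exactly what a routine patch-level check is meant to pre-empt. The sketch below compares observed service versions against a "fixed-in" threshold; the product keys come from the report, but the version numbers are placeholders, not real advisory data.

```python
# Minimal sketch of a defensive patch-level check. The product names are from
# the report; the "fixed in" versions below are placeholders, not taken from
# any real security advisory.
PATCHED_AT = {
    "crushftp": (11, 1, 0),   # placeholder fixed-in version
    "cisco-imc": (4, 3, 2),   # placeholder fixed-in version
}

def parse_version(text: str) -> tuple:
    """Turn a dotted version string like '10.5.1' into a comparable tuple."""
    return tuple(int(p) for p in text.split("."))

def needs_patching(product: str, version: str) -> bool:
    """Return True if the observed version predates the fixed-in threshold."""
    fixed = PATCHED_AT.get(product.lower())
    if fixed is None:
        return False  # unknown product: no verdict either way
    return parse_version(version) < fixed
```

    A production version of this check would source thresholds from a vulnerability feed such as the NVD rather than a hard-coded table.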

    Another Iranian threat actor group, STORM-0817, used ChatGPT to support the development of malware capable of stealing contact lists, call logs, and files stored on devices, taking screenshots, examining browsing history, and obtaining precise location data. The command-and-control server for this malware runs on a WAMP (Windows, Apache, MySQL, PHP) stack.

    The report by OpenAI confirms that threat actors use ChatGPT to write malware, spread misinformation, evade detection, and conduct spear-phishing attacks. While the cases described above did not yield fundamentally new malware capabilities, they are proof that generative AI tools can make offensive operations more efficient for low-skilled actors, assisting them at every stage from planning to execution.

    The use of ChatGPT in real attacks highlights the growing threat of AI-powered cyber espionage. As AI technology continues to advance and become more accessible, it is only a matter of time before more sophisticated attackers begin to exploit these tools for their own nefarious purposes.

    The incident serves as a stark reminder of the need for increased vigilance and cooperation among cybersecurity professionals, researchers, and organizations in the face of an evolving threat landscape. By staying informed about emerging threats and working together, defenders can mitigate the damage caused by AI-powered cyber attacks.

    In conclusion, the exploitation of ChatGPT by Chinese and Iranian threat actors is a concerning development that highlights the dark side of generative models. As AI technology continues to advance, it is essential for cybersecurity professionals and organizations to stay vigilant and develop effective strategies to counter these emerging threats.



    Related Information:

  • https://www.bleepingcomputer.com/news/security/openai-confirms-threat-actors-use-chatgpt-to-write-malware/

  • https://unit42.paloaltonetworks.com/operation-diplomatic-specter/

  • https://www.bloomberg.com/news/articles/2024-10-09/openai-says-china-linked-group-tried-to-phish-its-employees

  • https://attack.mitre.org/groups/G1027/

  • https://www.cisa.gov/news-events/cybersecurity-advisories/aa23-335a


  • Published: Sat Oct 12 14:05:01 2024 by llama3.2 3B Q4_K_M

    © Ethical Hacking News . All rights reserved.

    Privacy | Terms of Use | Contact Us