Today's cybersecurity headlines are brought to you by ThreatPerspective


Ethical Hacking News

OpenAI's Fight Against Cybercrime: Blocking 20 Global Malicious Campaigns Using AI


OpenAI has disrupted over 20 global malicious campaigns using its platform since the start of the year, highlighting the company's efforts to combat cybercrime and disinformation. The campaigns involved a range of activities, including generating social media content connected to elections in several countries and creating AI-generated profile pictures for fake accounts.

  • OpenAI has disrupted over 20 operations and deceptive networks across the globe that attempted to utilize its platform for malicious purposes since the beginning of the year.
  • The malicious campaigns encompassed a range of activities, including debugging malware, generating biographies, and producing AI-generated profile pictures for fake accounts on X.
  • Several entities, including SweetSpecter (China-based) and Cyber Av3ngers (Iranian IRGC-affiliated), were involved in exploiting OpenAI's services for malicious purposes.
  • Storm-0817, an Iranian threat actor, used OpenAI models to debug Android malware capable of harvesting sensitive information and to scrape Instagram profiles via Selenium.
  • OpenAI has taken steps to block several clusters of accounts, including those involved in influence operations and generating fake content for various websites and social media platforms.
  • AI-powered tools, such as DALL·E image generation, can be repurposed for malicious activities, highlighting the need for companies like OpenAI to prioritize security.
  • Generative AI can be used to disseminate tailored misinformation through microtargeted emails, posing a threat to political campaigns and elections.



  • In a recent announcement, OpenAI, a leading artificial intelligence (AI) company, disclosed its efforts to combat cybercrime and disinformation using its platform. According to the data provided, OpenAI has disrupted over 20 operations and deceptive networks across the globe that attempted to utilize its platform for malicious purposes since the beginning of the year. This move comes as part of the company's commitment to ensuring the integrity and safety of its platform.

    The data reveals that these malicious campaigns encompassed a wide range of activities, including debugging malware, creating articles for websites, generating biographies for social media accounts, and even producing AI-generated profile pictures for fake accounts on X. Furthermore, OpenAI disrupted operations related to generating social media content connected to elections in the U.S., Rwanda, India, and the European Union. Notably, none of these networks were able to achieve viral engagement or sustain audiences.

    A closer examination of the operations highlighted by OpenAI reveals the involvement of several entities and groups, including SweetSpecter, a suspected China-based adversary that leveraged OpenAI's services for LLM-informed reconnaissance, vulnerability research, scripting support, anomaly detection evasion, and development. Additionally, Cyber Av3ngers, a group affiliated with the Iranian Islamic Revolutionary Guard Corps (IRGC), utilized OpenAI models to conduct research into programmable logic controllers.

    Another notable operation highlighted by OpenAI is Storm-0817, an Iranian threat actor that used its models to debug Android malware capable of harvesting sensitive information and to scrape Instagram profiles via Selenium. The capabilities demonstrated by these groups underscore the evolving nature of cyber threats and their willingness to exploit AI technologies for malicious purposes.

    It's worth noting that the company has taken steps to block several clusters of accounts, including influence operations codenamed A2Z and Stop News, which generated English- and French-language content for subsequent posting on various websites and social media accounts. Researchers Ben Nimmo and Michael Flossman observed that these operations were notably prolific in their use of imagery, often employing DALL·E-generated images to attract attention.

    Furthermore, OpenAI identified two other networks, Bet Bot and Corrupt Comment, which used the API to generate conversations with users on X and send them links to gambling sites, as well as to manufacture comments that were then posted on X. These findings highlight how readily AI-powered tools can be repurposed for malicious activities.

    In a recent report, cybersecurity company Sophos discussed the potential risks associated with generative AI, including its ability to disseminate tailored misinformation through microtargeted emails. According to researchers Ben Gelman and Adarsh Kyadige, this entails abusing AI models to create political campaign websites, AI-generated personas across the political spectrum, and email messages that specifically target them based on campaign points.

    The discovery of these malicious campaigns underscores the need for companies like OpenAI to prioritize security and take proactive measures to prevent exploitation. As the threat landscape continues to evolve, it is essential for organizations to stay vigilant and adapt their strategies to counter emerging threats.

    In recent months, OpenAI has faced criticism over its handling of certain operations, including an Iranian covert influence operation called Storm-2035 that leveraged ChatGPT to generate content focused on the upcoming U.S. presidential election. The company's efforts to disrupt these malicious campaigns demonstrate its commitment to addressing these concerns and ensuring the integrity of its platform.

    The story highlights the complexities and challenges associated with combating cybercrime and disinformation in today's digital landscape. As AI technologies continue to advance, it is crucial for companies like OpenAI to maintain a strong focus on security and work closely with law enforcement agencies and other stakeholders to stay ahead of emerging threats.

    In conclusion, OpenAI's efforts to disrupt over 20 global malicious campaigns using its platform demonstrate the company's commitment to addressing the evolving threat landscape. As AI technologies continue to play a more significant role in cybercrime and disinformation, it is essential for companies like OpenAI to prioritize security and adapt their strategies to counter emerging threats.



    Related Information:

  • https://thehackernews.com/2024/10/openai-blocks-20-global-malicious.html

  • https://www.bloomberg.com/news/articles/2024-10-09/openai-says-china-linked-group-tried-to-phish-its-employees

  • https://me.pcmag.com/en/ai/26287/chinese-hackers-sent-openai-staff-malware-in-spear-phishing-attacks

  • https://fortune.com/2024/10/09/openai-china-phishing-employees-hacker-attempt/

  • https://www.msn.com/en-us/news/technology/openai-says-chinese-gang-tried-to-phish-its-staff/ar-AA1s08ed


  • Published: Thu Oct 10 11:22:48 2024 by llama3.2 3B Q4_K_M













         


    © Ethical Hacking News. All rights reserved.
