Ethical Hacking News
Microsoft has filed a lawsuit against 10 foreign-based cybercriminals who used stolen API keys to bypass safety guardrails in its Azure OpenAI service, creating harmful content and selling the capability as a service to other miscreants.
The defendants used API keys stolen from Microsoft customers to bypass the safety guardrails in Microsoft's generative AI tools and generate harmful content. The scheme combined a client-side software tool called "de3u" with a reverse proxy service to defeat Microsoft's technical protective measures. Microsoft says it has since strengthened its genAI guardrails and added safety mitigations to prevent this type of abuse.
In a bold move to combat the growing threat of AI abuse, Microsoft has filed a lawsuit against a group of foreign-based cybercriminals who have been exploiting the company's Azure OpenAI service. The lawsuit, filed in December 2024 in a US District Court, accuses 10 defendants of using API keys stolen from "multiple" Microsoft customers, along with custom-designed software, to break into systems running Microsoft's Azure OpenAI Service.
The scheme, according to Microsoft, involved using the stolen API keys to bypass safety guardrails in the company's generative AI tools. The tools were then used to create harmful content, which was sold as a service to other miscreants. Microsoft says this was not an isolated incident but part of a pattern of abuse that had been running for some time.
According to Microsoft, the company uncovered the scheme in July 2024, though exactly how the criminals stole the API keys remains unknown. The lawsuit does, however, shed light on how the operation worked. The defendants built a client-side software tool they called "de3u," made publicly available via the "rentry.org/de3u" domain. The tool let users issue Microsoft API calls to generate images using the DALL-E model available to Azure OpenAI Service customers, along the lines of the sketch below.
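To make the mechanics concrete, here is a minimal sketch of the kind of request such a client could issue against Azure OpenAI's documented image-generation REST endpoint. The resource name, deployment name, and API version below are illustrative assumptions, not details from the lawsuit; the point is that the api-key header is a bearer credential, so requests made with a stolen key look identical to the legitimate customer's own traffic.

```python
# Sketch of a call to the documented Azure OpenAI image-generation
# REST endpoint. Resource, deployment, and api-version are hypothetical.
import requests

ENDPOINT = "https://example-resource.openai.azure.com"  # hypothetical resource
DEPLOYMENT = "example-dalle3"                           # hypothetical deployment
API_KEY = "<api-key>"  # a bearer credential: whoever holds it is "the customer"

resp = requests.post(
    f"{ENDPOINT}/openai/deployments/{DEPLOYMENT}/images/generations",
    params={"api-version": "2024-02-01"},
    headers={"api-key": API_KEY, "Content-Type": "application/json"},
    json={"prompt": "a watercolor of a lighthouse", "n": 1, "size": "1024x1024"},
)
print(resp.status_code, resp.json())
```

Because the service cannot distinguish a stolen key from a legitimate one, per-customer guardrails and usage policies offer no protection once the credential leaks.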
The defendants also created a reverse proxy service, referred to as the "oai reverse proxy," designed specifically for processing and routing communications from the de3u software to Microsoft's systems. This allowed the cybercriminals to bypass Microsoft's technical protective measures and create harmful content in violation of Microsoft's policies.
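A reverse proxy in this role is ordinary infrastructure put to bad use: it sits between the client and the real API, forwarding each request upstream while attaching a credential it holds, so the caller never needs the key itself. The following Flask sketch illustrates that architecture only; every name is hypothetical, and nothing here reflects the actual "oai reverse proxy" code.

```python
# Minimal reverse-proxy sketch: forwards client requests to an upstream
# API and injects a credential the proxy holds. All names are hypothetical.
import requests
from flask import Flask, Response, request

app = Flask(__name__)
UPSTREAM = "https://example-resource.openai.azure.com"  # hypothetical upstream
INJECTED_KEY = "<key-held-by-proxy>"                    # hypothetical credential

@app.route("/<path:path>", methods=["POST"])
def proxy(path):
    # Rewrite the request toward the upstream API, attaching the proxy's key.
    upstream_resp = requests.post(
        f"{UPSTREAM}/{path}",
        params=request.args,
        headers={"api-key": INJECTED_KEY, "Content-Type": "application/json"},
        data=request.get_data(),
    )
    # Relay the upstream response back to the client unchanged.
    return Response(
        upstream_resp.content,
        status=upstream_resp.status_code,
        content_type=upstream_resp.headers.get("Content-Type"),
    )

if __name__ == "__main__":
    app.run(port=8080)
```

The effect is twofold: the end user's identity is hidden from Microsoft, and the stolen credential is hidden from the end user, which is what makes the setup resellable as a service.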
In addition to using stolen API keys, the defendants operated a hacking-as-a-service scheme, reselling access to their tools and the compromised capability to other criminals. It is a textbook example of how a single point of compromise can be packaged and scaled across a criminal customer base.
Microsoft has since boosted its genAI guardrails and added safety mitigations that it says help prevent this type of abuse. The exact details of these measures have not been disclosed, but they reflect the company's push to protect customers against AI-related security threats.
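Microsoft has not said what the new mitigations are, but one documented defense against this class of key theft is to disable static key authentication on an Azure OpenAI resource and require short-lived Microsoft Entra ID tokens instead. A sketch, assuming the azure-identity package and an illustrative resource name:

```python
# Defensive sketch: authenticate to Azure OpenAI with a short-lived
# Microsoft Entra ID token instead of a static api-key. Resource and
# deployment names are hypothetical; the token scope is Azure's
# documented scope for Cognitive Services.
import requests
from azure.identity import DefaultAzureCredential

credential = DefaultAzureCredential()
token = credential.get_token("https://cognitiveservices.azure.com/.default")

resp = requests.post(
    "https://example-resource.openai.azure.com/openai/deployments/"
    "example-gpt/chat/completions",
    params={"api-version": "2024-02-01"},
    headers={
        "Authorization": f"Bearer {token.token}",  # expires, unlike a static key
        "Content-Type": "application/json",
    },
    json={"messages": [{"role": "user", "content": "Hello"}]},
)
print(resp.status_code)
```

Unlike a static key, a leaked token expires quickly and is tied to an identity that can be individually revoked and audited, shrinking the window a thief can exploit.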
The lawsuit against the foreign-based cybercriminals is a significant development in the ongoing battle against AI abuse. As AI technology becomes more capable and more accessible, more AI-related security threats are likely to follow. With companies like Microsoft taking proactive legal and technical steps to protect their customers, however, there is reason to hope the impact of AI abuse can be contained and a safer digital environment preserved.
Related Information:
https://go.theregister.com/feed/www.theregister.com/2025/01/13/microsoft_sues_foreignbased_crims_seizes/
https://www.msn.com/en-us/money/technologyinvesting/microsoft-sues-foreign-based-cyber-crooks-seizes-sites-used-to-abuse-ai/ar-BB1rnXCU
https://arstechnica.com/security/2025/01/microsoft-sues-service-for-creating-illicit-content-with-its-ai-platform/
Published: Mon Jan 13 14:33:11 2025 by llama3.2 3B Q4_K_M