Ethical Hacking News
An unsecured database belonging to South Korea-based website GenNomis has been exposed, revealing tens of thousands of explicit images generated by AI, including child sexual abuse material. This disturbing finding sheds light on the dark side of generative AI, highlighting the ease with which malicious actors can create and distribute harmful content using these powerful tools.
Tens of thousands of explicit AI-generated images, including child sexual abuse material (CSAM), were found in an unsecured database linked to South Korea-based website GenNomis. The database, discovered by security researcher Jeremiah Fowler, contained more than 95,000 records, including prompt data and images of celebrities de-aged to look like children; over 45 GB of data, most of it AI-generated imagery, was left accessible online. The find underscores how easily generative AI can be turned to producing CSAM, and the corresponding need for robust safeguards, effective moderation tools, and stringent regulation. If companies such as GenNomis and its parent, AI-Nomis, are not taking sufficient steps to block harmful content, they risk enabling or facilitating these activities, and the episode highlights the need for greater accountability from both tech companies and regulatory bodies in addressing the creation, possession, and distribution of CSAM.
The database, discovered by security researcher Jeremiah Fowler, contains over 95,000 records, including prompt data and images of celebrities such as Ariana Grande, the Kardashians, and Beyoncé de-aged to look like children.
The exposed database was linked to GenNomis' parent company, AI-Nomis, which hosted a number of image-generation and chatbot tools for users. More than 45 GB of data, primarily AI-generated images, was left accessible online. According to Fowler, the data is a stark reminder of the devastating potential of these technologies when wielded by malicious actors.
The implications of this discovery are far-reaching and alarming. As Fowler puts it, "The big thing is just how dangerous this is." Viewed from a security perspective, or from that of a parent or concerned individual, the data is terrifying. The ease with which CSAM can be generated using AI underscores the urgency of implementing robust safeguards against such abuse.
Deepfake technology and AI-generated content have surged in recent years, with numerous "deepfake" and "nudify" websites, bots, and apps appearing, and thousands of women and girls being targeted with damaging imagery and videos. That growth has been accompanied by a spike in AI-generated CSAM. The ease with which malicious actors can create such content with generative AI highlights the need for robust moderation tools and stringent regulation.
Fowler's discovery of the exposed database also raises questions about the role of companies like GenNomis and AI-Nomis in facilitating or enabling these malicious activities. As Fowler alleges, "If I was able to see those images with nothing more than the URL, that shows me that they're not taking all the necessary steps to block that content." This disturbing revelation underscores the need for greater accountability among tech companies and regulatory bodies.
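Fowler's point is that the images were readable by anyone holding the URL, with no authentication in front of them. A misconfiguration check along those lines can be sketched as an unauthenticated request followed by a classification of the response; the function names and example here are hypothetical, not part of Fowler's actual tooling, and any such probing should only ever be done against systems you are authorized to test.

```python
# Minimal sketch of an "exposed with nothing more than the URL" check:
# issue a credential-free request and classify the server's response.
# probe() and classify_exposure() are illustrative names, not a real tool.
from urllib.request import Request, urlopen
from urllib.error import HTTPError, URLError

def classify_exposure(status: int) -> str:
    """Map the HTTP status of an unauthenticated request to a verdict."""
    if 200 <= status < 300:
        return "public"              # readable with nothing more than the URL
    if status in (401, 403):
        return "access-controlled"   # the server demanded credentials
    if status == 404:
        return "absent"
    return "indeterminate"           # redirects, rate limits, server errors

def probe(url: str, timeout: float = 5.0) -> str:
    """Send one unauthenticated HEAD request and classify the result."""
    req = Request(url, method="HEAD")  # HEAD avoids downloading the object
    try:
        with urlopen(req, timeout=timeout) as resp:
            return classify_exposure(resp.status)
    except HTTPError as err:
        return classify_exposure(err.code)
    except URLError:
        return "unreachable"
```

A properly configured store should answer such a request with 401 or 403; a 2xx response to an anonymous client is exactly the failure mode Fowler describes, and the responsible next step is disclosure to the operator, not retrieval of the data.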
Clare McGlynn, a law professor at Durham University in the UK who specializes in online and image-based abuse, encapsulates the gravity of the situation. "This example also shows—yet again—the disturbing extent to which there is a market for AI that enables such abusive images to be generated," she states. "This should remind us that the creation, possession, and distribution of CSAM is not rare, and attributable to warped individuals." Her words are a stark reminder of the importance of taking proactive measures to combat the spread of AI-generated child abuse material.
Following the exposé, the websites of both GenNomis and AI-Nomis were shut down after WIRED reached out, with the GenNomis site now returning a 404 error page. Whether such after-the-fact takedowns do anything to prevent future incidents remains an open question.
Preventing the spread of AI-generated CSAM will require robust safeguards: stronger moderation tools, tighter regulation, and public awareness campaigns. Only through concerted, collective action can the dangers posed by generative AI be mitigated and a safer digital environment ensured for all.
The discovery of this exposed database serves as a stark reminder of the need for vigilance and cooperation in addressing the dark side of generative AI. As Fowler so aptly puts it, "The technology has raced ahead of any of the guidelines or controls." It is imperative that we catch up with these advancements and ensure that they are harnessed for the greater good.
Related Information:
https://www.ethicalhackingnews.com/articles/An-Expos-on-the-Dark-Side-of-Generative-AI-Uncovering-the-Hidden-Dangers-of-AI-Generated-Child-Abuse-Material-ehn.shtml
https://www.wired.com/story/genomis-ai-image-database-exposed/
Published: Mon Mar 31 07:08:49 2025 by llama3.2 3B Q4_K_M