South Korean AI startup shuts down, disappears after database exposed deepfake porn images

That breeze coming from the south of the peninsula is an AI startup in the wind…

The explosive growth of AI-generated explicit content has reached a disturbing milestone: South Korean company GenNomis has shut down after a researcher discovered an unsecured database containing thousands of non-consensual pornographic deepfakes. The incident highlights the dangerous intersection of accessible generative AI and inadequate regulation, with serious harm falling particularly on women, who make up the vast majority of victims of these digital violations.

The big picture: South Korean AI startup GenNomis abruptly deleted its entire online presence after a researcher discovered tens of thousands of AI-generated pornographic images in an unsecured database.

  • The company’s software, called Nudify, had created explicit images depicting celebrities, politicians, random women, and children.
  • Just hours after being contacted by Wired for comment, both GenNomis and its parent company AI-Nomis disappeared from the web entirely.

Behind the discovery: Cybersecurity researcher Jeremiah Fowler found the explicit image cache and immediately sent a responsible disclosure notice to the company.

  • The company initially restricted public access to the database before completely shutting down operations.
  • This incident represents part of a much broader problem as generative AI makes creating convincing deepfakes increasingly accessible.

Why this matters: Non-consensual deepfake pornography creates severe real-world harm for victims while raising urgent questions about AI regulation and accountability.

  • Women constitute the vast majority of deepfake pornography victims, facing digital violations committed without their consent.
  • These materials have been weaponized to damage reputations, cause job loss, facilitate extortion, and even generate child sexual abuse materials.

Beyond pornography: The deepfake problem extends far beyond explicit content, creating multiple societal threats.

  • Non-pornographic deepfakes have contributed to significant increases in financial fraud and cybercrimes.
  • The technology also enables the creation and spread of convincing misinformation, further complicating digital literacy challenges.
