AI-generated comments tested in unauthorized Reddit experiment

An unauthorized artificial intelligence experiment on a popular Reddit forum has raised serious ethical concerns about research practices and the use of AI-generated content in online spaces. Researchers from the University of Zurich conducted a four-month study on r/changemyview without participants’ knowledge or consent, using AI to generate persuasive responses that included fabricated personal stories, a case that highlights growing tensions between academic research goals and digital ethics.

The big picture: Researchers from the University of Zurich ran an undisclosed experiment on Reddit’s r/changemyview from November 2024 to March 2025, using dozens of AI-powered accounts to test whether they could change users’ opinions without their knowledge or consent.

Key details: The research team posted AI-generated responses in debates on the popular subreddit, which has strict rules against such content, and claimed to have reviewed every comment before posting to prevent harmful material.

  • Despite claims of ethical oversight, at least one AI account (“markusruscht”) invented entirely fake biographical details about non-existent people to win an argument.
  • The researchers used prompts instructing their AI to “use any persuasive strategy” including “making up a persona and sharing details about past experiences” while avoiding factual deception.

What they’re saying: The research team defended their actions by claiming “given the high societal importance of this topic, it was crucial to conduct a study of this kind, even if it meant disobeying the rules.”

  • The University of Zurich has supported the researchers, stating: “This project yields important insights, and the risks (e.g. trauma etc.) are minimal.”
  • The r/changemyview moderators strongly disagreed: “Psychological manipulation risks posed by LLMs is an extensively studied topic. It is not necessary to experiment on non-consenting human subjects.”

Why this matters: The incident reflects growing ethical concerns about AI’s role in online discourse and the boundaries of academic research.

  • The experiment fundamentally violated the trust of Reddit users engaging in what they believed were good-faith discussions with other humans.
  • It raises questions about consent requirements in digital spaces where researchers can easily deploy AI-powered accounts without users’ knowledge.

Reading between the lines: This case exemplifies the tension between academic advancement and ethical research practices as AI capabilities expand.

  • Many researchers feel urgency to study AI’s potential for manipulation, but this doesn’t justify bypassing established ethical research standards.
  • The university’s dismissive response to concerns suggests institutional blindness to digital ethics in AI research.