AI voice cloning risks exposed by Consumer Reports: Descript more secure than ElevenLabs

Voice cloning technology has rapidly advanced to a concerning level of realism, requiring only seconds of audio to create convincing replicas of someone’s voice. While this technology enables legitimate applications like audiobooks and marketing, it simultaneously creates serious vulnerabilities for fraud and scams. A new Consumer Reports investigation reveals alarming gaps in safeguards across leading voice cloning platforms, highlighting the urgent need for stronger protection mechanisms to prevent malicious exploitation of this increasingly accessible technology.

The big picture: Consumer Reports evaluated six major voice cloning tools and found most lack adequate technical safeguards to prevent unauthorized voice cloning.

  • Only two of the six platforms tested—Descript and Resemble AI—implemented meaningful technical barriers against non-consensual voice cloning.
  • The remaining four platforms (ElevenLabs, Speechify, PlayHT, and Lovo) relied primarily on simple checkbox confirmations that users had the legal right to clone voices, without technical verification.

How the safeguards work: The two companies with stronger protections take distinctly different approaches to verification.

  • Descript requires users to read and record a consent statement, using that specific audio to generate the voice clone (a rough sketch of this kind of check follows this list).
  • Resemble AI ensures the first voice clone created is based on audio recorded in real-time, making it more difficult to use pre-recorded samples without permission.
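
To make the consent-statement idea concrete, here is a minimal sketch of how such a check might work, assuming a separate speech-to-text step (stubbed out below) has already transcribed the user's recording. The statement wording, function names, and matching rule are all illustrative assumptions, not Descript's actual implementation.

```python
import re

# Hypothetical consent statement; the wording Descript actually uses may differ.
CONSENT_STATEMENT = "I consent to the creation of a synthetic copy of my voice"

def normalize(text: str) -> list[str]:
    """Lowercase, drop punctuation, and split into words."""
    return re.sub(r"[^a-z\s]", "", text.lower()).split()

def transcribe(audio_path: str) -> str:
    """Stub for speech-to-text; a real system would call an ASR model here."""
    raise NotImplementedError("plug in a real transcription backend")

def consent_matches(transcript: str, expected: str = CONSENT_STATEMENT,
                    threshold: float = 0.9) -> bool:
    """Accept the recording if most of the expected words appear in order."""
    want, got = normalize(expected), normalize(transcript)
    matched = 0
    for word in got:  # simple in-order subsequence scan
        if matched < len(want) and word == want[matched]:
            matched += 1
    return matched / len(want) >= threshold

# Example transcripts, skipping the stubbed ASR step:
print(consent_matches("I consent to the creation of a synthetic copy of my voice"))  # True
print(consent_matches("Read me chapter one of the audiobook"))                        # False
```

A real pipeline would also need to confirm that the consent recording and the cloning samples come from the same speaker; Resemble AI's real-time recording requirement attacks that problem from a different angle.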

Why this matters: Voice cloning scams have become increasingly common, with criminals impersonating loved ones to extract money from victims.

  • A typical attack involves cloning a family member’s voice, then contacting relatives claiming to be in an emergency situation requiring immediate financial assistance.
  • The emotional manipulation and audio authenticity make these scams particularly effective at bypassing normal skepticism.

Recommended protections: Consumer Reports outlined several measures voice cloning companies should implement to prevent misuse.

  • Collecting customers’ payment information to enable tracing of fraudulent content
  • Developing robust mechanisms to verify voice ownership before cloning
  • Creating better detection tools to identify AI-generated audio
  • Preventing cloning of public figures and influential individuals
  • Prohibiting audio containing common scam phrases (illustrated in the sketch after this list)
  • Moving from self-service models to supervised voice cloning processes
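
As one example of the scam-phrase idea, a platform could screen the text or transcript behind a generation request against a blocklist before synthesizing any audio. The sketch below is a toy illustration; the phrases and matching rule are assumptions, not Consumer Reports' recommendations verbatim or any vendor's actual policy.

```python
# Hypothetical blocklist of phrases common in voice-cloning scams.
SCAM_PHRASES = [
    "wire the money",
    "gift cards",
    "don't tell anyone",
    "i'm in jail",
    "send it right now",
]

def flagged_phrases(transcript: str) -> list[str]:
    """Return any blocklisted phrases found in the request text."""
    text = transcript.lower()
    return [phrase for phrase in SCAM_PHRASES if phrase in text]

request = "Grandma, I'm in jail and need you to wire the money tonight."
hits = flagged_phrases(request)
if hits:
    print(f"Blocked: request contains scam phrases {hits}")
```

Literal matching like this is easy to evade with rephrasing, which is part of why Consumer Reports also calls for human-supervised cloning rather than purely self-service flows.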

Practical advice: If you receive an urgent money request that seems to be from a family member, verify through alternative channels before taking action.

  • Use another device to directly contact the person allegedly making the request to confirm its legitimacy.
  • Be especially cautious of messages conveying urgency or emotional distress, as these are common tactics in voice cloning scams.
