New Meta AI app unifies smart glasses and phone experiences
Meta's new standalone AI assistant app marks a significant evolution in how users interact with conversational AI across platforms. Built on the Llama 4 model, the personalized assistant integrates with Meta's ecosystem while introducing voice capabilities and social discovery features. Its design bridges the gap between Meta's smart glasses and other devices, creating a unified experience that carries conversations across platforms.

The big picture: Meta has launched its first standalone Meta AI app across iOS, Android, web browsers, and Ray-Ban Meta smart glasses with personalization features and enhanced voice capabilities.

  • The app leverages Meta’s Llama 4 model to deliver a more contextual and relevant AI experience that remembers previous interactions.
  • It serves as the replacement for the Meta View companion app for Ray-Ban Meta glasses, automatically transferring settings, devices, and media.

Key features: The Meta AI app introduces a social Discover feed where users can explore and share AI interactions with others.

  • Users can browse popular prompts, remix them for their own use, and share their interactions, though Meta emphasizes that “nothing is shared to your feed unless you choose to post it.”
  • The app includes an experimental voice assistant with full-duplex speech technology that generates voice output directly rather than reading text-based responses.

Cross-platform integration: Conversations started with Meta AI can be continued across multiple platforms in a seamless experience.

  • Users can begin interactions using their Ray-Ban Meta glasses and continue them in the app or on meta.ai, though conversations cannot be transferred from the app or web back to the glasses.
  • The assistant draws on information users have shared on Meta products, including profile information and content engagement, to provide more personalized responses.

Web experience upgrades: Meta has enhanced its AI’s web interface with voice interaction capabilities and optimizations for larger screens.

  • The desktop browser experience now includes access to the Discover feed and improved image generation features.
  • New options for adjusting lighting, mood, and style have been added to the image generation tools, along with additional presets.

Looking ahead: Meta positions this launch as “the first step toward building a more personal AI” with plans for continued evolution based on user feedback.

  • The company aims to expand the assistant’s capabilities over time while maintaining its focus on everyday tasks like recommendations, brainstorming, and staying connected.
  • While the current version lacks real-time web access, Meta frames the voice features as “a glimpse into the future” of voice AI technology.
