Shapes App Puts AI Personas Into Human Group Chats

Shapes is a new messaging app that places customizable AI characters directly into group chats alongside human users, raising fresh questions about synthetic personas and digital authenticity in everyday conversations.

A new messaging app called Shapes is taking the concept of AI companions a step further by inserting customizable AI personas directly into group chats alongside human participants. Rather than treating chatbots as one-on-one assistants tucked away in dedicated apps, Shapes blurs the line between human and synthetic participants in shared conversational spaces — a design choice with significant implications for digital authenticity, social dynamics, and the future of synthetic media.

From Solo Chatbots to Mixed Group Conversations

Most consumer-facing AI chatbots — ChatGPT, Claude, Gemini, Character.AI — are built around a private, one-to-one conversational model. The user talks to the AI; the AI responds. Shapes inverts that pattern. Its core proposition is that AI characters should be addressable participants in multi-user chats, capable of replying to specific people, jumping into ongoing threads, and maintaining personalities across long-running group conversations.

This is a meaningful shift. Group chats are where most informal social communication now happens, from friend circles to workplace coordination. Embedding persistent AI personas into those spaces changes the social fabric of messaging in ways that solo chatbot apps cannot.

Customizable AI Personas

Shapes allows users to create or summon AI characters — referred to as "shapes" — with distinct personalities, voices, and behaviors. These personas can be invited into group chats much like human contacts. Users can configure backstories, conversational styles, and areas of expertise, producing a roster of synthetic participants that function more like always-available friends or specialists than traditional assistants.
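A persona defined this way can be thought of as a small configuration object whose fields are compiled into a persistent character prompt. The sketch below is a minimal illustration of that idea; the class name and fields are assumptions for clarity, not Shapes' actual schema.

```python
from dataclasses import dataclass, field

@dataclass
class ShapeConfig:
    """Illustrative persona definition (hypothetical fields, not Shapes' real schema)."""
    name: str
    backstory: str                 # conditions the character's identity
    style: str                     # conversational tone, e.g. "patient, precise"
    expertise: list[str] = field(default_factory=list)  # topics it engages on

    def system_prompt(self) -> str:
        # Persona fields become a persistent prompt prepended to every
        # model call made on behalf of this shape.
        topics = ", ".join(self.expertise) or "general conversation"
        return (f"You are {self.name}. Backstory: {self.backstory}. "
                f"Speak in a {self.style} style. Your expertise: {topics}.")

tutor = ShapeConfig("Ada", "a retired math professor", "patient, precise",
                    ["calculus", "linear algebra"])
```

In a design like this, the same configuration object can be serialized and shared, which is one plausible mechanism for the "roster" of reusable personas the article describes.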

Technically, this approach relies on large language models conditioned with persistent character prompts, memory layers, and routing logic that determines when an AI persona should respond in a multi-speaker context. Knowing when not to talk is arguably harder than generating a reply: an AI in a group chat must infer whether a message is directed at it, whether the topic falls within its persona, and whether interjecting would be welcome. Getting that conversational pacing right is a non-trivial engineering and product challenge.
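The response-gating decision described above can be caricatured as a few stacked heuristics: direct address, topical fit, and pacing. The function below is a toy sketch of that gate under those assumptions; a production system would more likely use a trained classifier or ask the LLM itself whether to speak.

```python
def should_respond(message: str, persona_name: str,
                   expertise: set[str], recent_bot_turns: int) -> bool:
    """Toy gate deciding whether an AI persona speaks in a group chat.

    Mirrors the three checks described in the text: is the message
    directed at the persona, does the topic fall within its expertise,
    and would interjecting again be unwelcome (pacing).
    """
    text = message.lower()
    # 1. Direct address (an @-mention or the persona's name) always wins.
    directly_addressed = (f"@{persona_name.lower()}" in text
                          or persona_name.lower() in text.split())
    if directly_addressed:
        return True
    # 2. Topical fit: at least one expertise keyword appears.
    on_topic = any(topic in text for topic in expertise)
    # 3. Pacing: back off if the persona dominated the last few turns.
    not_overtalking = recent_bot_turns < 2
    return on_topic and not_overtalking
```

Even this crude version shows why "knowing when not to talk" is the hard part: the pacing check deliberately suppresses on-topic replies when the persona has spoken too recently.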

Authenticity and Disclosure Questions

From a digital authenticity standpoint, Shapes raises familiar but increasingly urgent questions. When AI personas chat alongside humans, every participant in the group needs clear, persistent signals about which messages come from real people and which come from synthetic agents. Subtle UI cues — avatars, tags, color coding — help, but they can be missed, screenshotted out of context, or stripped when content is shared elsewhere.

This matters because conversations exported from group chats are routinely repurposed: as screenshots on social media, as evidence in disputes, or as training material for further AI systems. A message from an AI persona that looks indistinguishable from a human one, once removed from its original context, becomes a potential vector for misinformation or impersonation. The same provenance and watermarking debates that apply to AI-generated images and video apply, in a quieter but equally important way, to AI-generated chat text.
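One way to make the human/AI distinction survive export is to attach a machine-verifiable provenance tag to every message rather than relying on UI cues alone. The sketch below uses an HMAC over the message body and its AI label; this is a hypothetical illustration of the general technique, not anything Shapes is known to implement.

```python
import hashlib
import hmac
import json

# Assumption: a signing key held by the platform, never by clients.
SERVER_KEY = b"demo-secret-key"

def tag_message(text: str, sender: str, is_ai: bool) -> dict:
    """Attach a provenance signature covering the body and the AI label."""
    payload = json.dumps({"text": text, "sender": sender, "ai": is_ai},
                         sort_keys=True).encode()
    sig = hmac.new(SERVER_KEY, payload, hashlib.sha256).hexdigest()
    return {"text": text, "sender": sender, "ai": is_ai, "sig": sig}

def verify(msg: dict) -> bool:
    """True only if the message and its AI/human label are untampered."""
    payload = json.dumps({"text": msg["text"], "sender": msg["sender"],
                          "ai": msg["ai"]}, sort_keys=True).encode()
    expected = hmac.new(SERVER_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, msg["sig"])
```

Under this scheme, stripping or flipping the AI label invalidates the signature, so a re-shared message can at least be checked against the platform of origin, even if a screenshot still cannot.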

Synthetic Social Media as a Trend

Shapes is part of a broader trend toward synthetic social experiences. Meta has experimented with AI-generated profiles on its platforms, Character.AI built a large user base around persona-driven chats, and Snapchat integrated My AI directly into its messaging surface. What distinguishes Shapes is the explicit framing of AI as a peer participant in multi-human conversations rather than a feature bolted onto an existing platform.

For creators, this opens new design space: AI sidekicks for fan communities, persistent NPCs for collaborative storytelling, or domain-expert personas embedded in study groups. For platforms grappling with content moderation, it adds a new category — speech generated by AI agents acting on behalf of, or alongside, users — that existing trust and safety frameworks were not built to handle.

What to Watch

Key questions for Shapes and similar products include how persona memory is managed across sessions, whether users can fine-tune or share personas, how the platform handles harmful or impersonating personas, and how clearly AI participation is disclosed to all members of a chat — including those who did not create the persona. As regulators in the EU, US, and elsewhere move toward mandatory AI labeling, products that put synthetic agents into shared conversational spaces will be among the most directly affected.

Shapes' bet is that humans will increasingly want AI in the room with them, not just on the other end of a private window. Whether that vision succeeds will depend as much on transparency and trust design as on conversational quality.
