Pro-Iran Group Deploys AI Lego Cartoons as Propaganda
A pro-Iranian media outlet called Explosive Media is using AI-generated Lego-style cartoons to mock U.S. leadership, highlighting how accessible synthetic media tools are enabling new forms of state-aligned propaganda.
A pro-Iranian media operation known as Explosive Media has begun deploying AI-generated Lego-style cartoons to mock U.S. President Donald Trump and American foreign policy, marking a notable evolution in how state-aligned actors are leveraging accessible synthetic media tools for geopolitical propaganda campaigns.
AI-Generated Propaganda Goes Mainstream
The campaign, reported by Ars Technica, highlights a growing trend in which influence operations adopt commercially available AI image generation tools to rapidly produce visually engaging content. Rather than relying on traditional graphic design or hand-drawn political cartoons, Explosive Media has turned to AI systems capable of generating images in the distinctive aesthetic of Lego minifigures — a style that went viral in early 2025, when users flooded social media feeds with AI-generated Lego box art depicting celebrities, politicians, and everyday scenarios.
By co-opting this culturally familiar format, the pro-Iran outlet is able to produce propaganda that is shareable, visually appealing, and designed to bypass the typical skepticism that accompanies overtly political messaging. The Lego aesthetic gives the content a veneer of humor and playfulness that masks its underlying geopolitical objectives.
The Synthetic Media Influence Pipeline
This development is significant for several reasons within the synthetic media landscape. First, it demonstrates how the barrier to entry for producing sophisticated visual propaganda has collapsed. What once required skilled graphic artists, photo manipulation expertise, or video production teams can now be accomplished with text prompts fed into widely available AI image generators. Tools like Midjourney, DALL-E, and Stable Diffusion, along with numerous open-source alternatives, have made it trivially easy to generate polished, stylized imagery at scale.
Second, the choice of a Lego-style format is tactically shrewd. AI-generated photorealistic imagery of political figures often triggers immediate scrutiny and platform moderation responses. By contrast, cartoonish or stylized content occupies a gray area — it's clearly not attempting to pass as real photography, yet it still carries potent political messaging. This makes it harder for content moderation systems to flag and remove, as it blends into the broader wave of lighthearted AI-generated content that floods social media daily.
Implications for Digital Authenticity
The Explosive Media campaign underscores a challenge that researchers in digital authenticity and content provenance have been warning about: the problem isn't limited to photorealistic deepfakes. While much of the public discourse around synthetic media threats focuses on convincing face swaps and voice clones, stylized AI-generated content can be equally effective as a propaganda vector. The emotional and memetic impact of a well-crafted AI cartoon can rival that of a manipulated photograph.
For organizations working on content authentication — including initiatives like the Coalition for Content Provenance and Authenticity (C2PA) and the watermarking schemes built into many AI generators — this raises important questions. Current provenance standards are primarily designed to record how an image was created or edited and to embed signed metadata about its origin. But when AI-generated content is deliberately designed as political satire or propaganda, the question shifts from "Is this real?" to "Who made this and why?"
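To make the provenance idea concrete: in JPEG files, C2PA manifests are carried inside APP11 marker segments as JUMBF boxes. As a rough illustration only — not a substitute for a real verifier, which must parse and cryptographically validate the manifest — here is a minimal sketch that merely checks whether a JPEG contains any APP11 segment at all:

```python
# Sketch: scan a JPEG's marker segments for APP11 (0xFFEB), the segment
# type C2PA uses to embed its JUMBF-boxed manifest. Presence of such a
# segment only hints that a manifest may exist; actual verification
# requires a full C2PA implementation.

def has_app11_segment(jpeg_bytes: bytes) -> bool:
    """Return True if the JPEG contains an APP11 segment (possible C2PA manifest)."""
    if jpeg_bytes[:2] != b"\xff\xd8":  # missing SOI marker: not a JPEG
        return False
    i = 2
    while i + 4 <= len(jpeg_bytes):
        if jpeg_bytes[i] != 0xFF:
            break  # lost sync with the segment structure
        marker = jpeg_bytes[i + 1]
        if marker == 0xEB:  # APP11, used for JUMBF/C2PA payloads
            return True
        if marker == 0xDA:  # SOS: entropy-coded image data follows, stop
            break
        # Segment length is big-endian and includes the 2 length bytes
        length = int.from_bytes(jpeg_bytes[i + 2:i + 4], "big")
        i += 2 + length  # skip the marker plus the segment payload
    return False
```

A real pipeline would hand files flagged this way to a proper C2PA validator; the point of the sketch is that the *presence* of provenance metadata is cheap to detect, while the harder question the article raises — who produced the image and why — lies outside what the embedded metadata can answer when the producer never opted in.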
A Growing Pattern
Explosive Media is far from the first state-aligned operation to adopt AI-generated content. Throughout 2025 and into 2026, researchers at organizations like the Stanford Internet Observatory, Graphika, and Mandiant have documented AI-generated content appearing in influence operations linked to Russia, China, and Iran. However, adopting a viral meme format like Lego-style imagery with this degree of cultural savvy represents an evolution in tactics — one that suggests these operations are becoming more attuned to Western internet culture and social media dynamics.
The campaign also arrives amid heightened U.S.-Iran tensions, giving the content additional geopolitical context and urgency. By framing its messaging in a humorous, viral-friendly format, Explosive Media maximizes the likelihood that the content will be shared organically by users who may not recognize or care about its origins.
The Broader Challenge
For the AI and synthetic media community, this case serves as a reminder that the tools being built for creative expression, entertainment, and productivity are dual-use by nature. The same image generation capabilities that enable artists and marketers to work more efficiently also empower propaganda operations to produce content at unprecedented speed and scale. As AI generation quality continues to improve and costs continue to drop, these influence campaigns will only become more sophisticated and harder to distinguish from organic content.
The question for platforms, policymakers, and authenticity researchers is whether detection and provenance systems can keep pace — not just with photorealistic deepfakes, but with the full spectrum of AI-generated media now being deployed in the information war.
Stay informed on AI video and digital authenticity. Follow Skrew AI News.