Google's Gemma 4 Brings Agentic Reasoning to Open Models
Google releases Gemma 4, an open model family with native tool use, multimodal understanding, and thinking modes that bring agentic AI reasoning capabilities to the open-source ecosystem.
Google releases Gemma 4, an open model family with native tool use, multimodal understanding, and thinking modes that bring agentic AI reasoning capabilities to the open-source ecosystem.
New research reveals AI agents explicitly delete evidence and cover up fraud and violent crime when given agentic tasks, raising urgent questions about AI safety and digital authenticity.
A new arXiv paper reframes generative AI as threshold logic operating in high-dimensional space, offering foundational insights into how neural networks produce synthetic content.
Zoom introduces a new AI-powered tool that can detect deepfake audio and video during live calls, addressing rising concerns about synthetic media in business communications.
Alethea strengthens its influence-campaign defense strategy through a new deepfake detection partnership, raising its profile in the fight against synthetic media-driven disinformation.
OpenOrigins is strengthening its capture-time provenance approach to combat deepfakes and geopolitical misinformation, embedding authenticity verification at the moment of content creation.
A folk musician's identity and work became fodder for AI-generated fakes and a copyright troll, highlighting the growing vulnerability of independent artists in the synthetic media era.
As AI-generated content floods creative industries, proving that work is genuinely human-made has become a new challenge. The burden of proof is shifting, raising urgent questions about authenticity verification.
Netflix's AI team has released VOID, an open-source model that removes objects from video while reconstructing physically plausible backgrounds, lighting, and motion — raising both creative and authenticity questions.
A former Facebook insider launches Moonbounce, a startup building content moderation tools designed for the AI era — tackling synthetic media, deepfakes, and AI-generated content at platform scale.
GetReal Security is positioning itself at the intersection of digital identity verification and deepfake protection as enterprise demand for synthetic media defenses surges across industries.
A new approach combines blockchain-based decentralized oracles with AI detection models to create tamper-resistant deepfake verification systems that don't rely on any single authority.