Zoom Launches Real-Time Deepfake Detection for Calls
Zoom introduces a new AI-powered tool that can detect deepfake audio and video during live calls, addressing rising concerns about synthetic media in business communications.
Alethea strengthens its influence-campaign defense strategy through a new deepfake detection partnership, raising its profile in the fight against synthetic media-driven disinformation.
OpenOrigins is strengthening its capture-time provenance approach to combat deepfakes and geopolitical misinformation, embedding authenticity verification at the moment of content creation.
A folk musician's identity and work became fodder for AI-generated fakes and a copyright troll, highlighting the growing vulnerability of independent artists in the synthetic media era.
As AI-generated content floods creative industries, proving that work is genuinely human-made has become a new challenge. The burden of proof is shifting, raising urgent questions about authenticity verification.
Netflix's AI team has released VOID, an open-source model that removes objects from video while reconstructing physically plausible backgrounds, lighting, and motion — raising both creative and authenticity questions.
A former Facebook insider launches Moonbounce, a startup building content moderation tools designed for the AI era — tackling synthetic media, deepfakes, and AI-generated content at platform scale.
GetReal Security is positioning itself at the intersection of digital identity verification and deepfake protection as enterprise demand for synthetic media defenses surges across industries.
A new approach combines blockchain-based decentralized oracles with AI detection models to create tamper-resistant deepfake verification systems that don't rely on any single authority.
New research shows that using LLMs to impersonate an author's writing style does not successfully evade existing authorship verification methods, reinforcing the robustness of stylometric detection techniques.
New research reveals LLMs rely on shallow surface-level patterns rather than true logical reasoning, with surface heuristics systematically overriding implicit constraints even in advanced models.
At RSAC 2025, enterprise security leaders shared strategies for combating deepfake attacks targeting organizations, from real-time detection tools to zero-trust verification protocols.