OpenAI
OpenAI Seeks New Head of Preparedness for AI Safety
OpenAI is hiring a new Head of Preparedness to lead efforts assessing and mitigating risks from frontier AI models, including potential misuse in synthetic media generation.
AI Regulation
China's Cyberspace Administration proposes comprehensive rules targeting AI systems that simulate human appearance, voice, and behavior, with major implications for synthetic media and deepfake technology.
Voice Cloning
Honor adds free AI-powered voice cloning detection to Magic8 Pro, targeting the growing threat of synthetic voice scam calls. The feature represents a shift toward consumer-level deepfake protection.
AI Video
Major studios embraced AI tools for film and TV production in 2025, but the creative and commercial outcomes remain questionable as the industry grapples with synthetic media integration.
Deepfake Detection
FaceOff Technologies unveils new AI-powered detection platform targeting deepfakes and synthetic fraud, aiming to strengthen digital trust infrastructure for enterprises.
OpenAI
SoftBank is racing to finalize a historic $22.5 billion investment in OpenAI before year-end, which would be the largest single funding round in AI history.
Deepfake Detection
Gartner positions Reality Defender as a leading deepfake detection solution as enterprises face mounting synthetic media fraud risks across video, audio, and image authentication.
Deepfake Detection
Google launches new detection technology to identify AI-altered videos as deepfake concerns grow. The tool aims to help users verify video authenticity amid rising synthetic media threats.
Deepfake Detection
LexisNexis launches enhanced IDVerse platform combining AI-powered document authentication with advanced deepfake detection to combat synthetic identity fraud in financial services verification.
Multi-Modal AI
New research introduces MMGR, a framework that enables AI models to perform generative reasoning across multiple modalities including text, images, and video.
Deepfake
Cyber insurance provider Coalition adds deepfake-specific coverage to policies, signaling growing recognition of synthetic media fraud risks in enterprise security.
AI Security
New research reveals how anyone with API access can clone AI models and strip away safety guardrails, creating unregulated copies capable of generating harmful content.