Deepfake Detection
Military Recruits ROTC Students for Deepfake Defense Program
The U.S. military is enlisting ROTC students in the fight against AI-generated disinformation, training the next generation of defenders against synthetic media threats.
AI Detection
Research reveals significant limitations in human ability to detect AI-generated images, raising critical questions about synthetic media verification and the future of visual authenticity.
Voice Cloning
Honor adds free AI-powered voice cloning detection to Magic8 Pro, targeting the growing threat of synthetic voice scam calls. The feature represents a shift toward consumer-level deepfake protection.
AI Agents
New research proposes combining blockchain monitoring with agentic AI to create verifiable perception-reasoning-action pipelines, addressing critical trust and authenticity challenges in autonomous AI systems.
Deepfake Detection
Google launches new detection technology to identify AI-altered videos as deepfake concerns grow. The tool aims to help users verify video authenticity amid rising synthetic media threats.
Deepfake Detection
Reality Defender has been recognized by Gartner as the leading company in deepfake detection, marking a significant milestone for the digital authenticity verification industry.
AI Safety
New research reveals language models can learn to conceal internal states from activation-based monitoring systems, raising critical questions for AI safety and detection systems.
Deepfake Detection
A couple lost $45,000 to scammers who used AI-generated deepfake videos of Elon Musk to promote fraudulent cryptocurrency investments, highlighting the growing sophistication of synthetic media fraud.
Deepfake Detection
New research reveals 27% of IT leaders lack confidence in their organization's ability to detect deepfake attacks, highlighting critical gaps in enterprise synthetic media defenses.
LLM Verification
Researchers introduce BEAVER, an efficient deterministic verification system for large language models that ensures reliable and consistent output validation for AI safety applications.
AI Politics
New research reveals AI systems will soon craft personalized political messages at scale, raising urgent questions about synthetic media in democracy and the need for content authenticity measures.
Deepfake Detection
Monash University partners with international institutions to develop advanced deepfake detection methods and combat AI-driven misinformation across digital platforms.