ai-security
CAMIA Attack Exposes AI Memory Vulnerabilities
New privacy attack method CAMIA reveals whether your data was used to train AI models, exposing critical vulnerabilities in synthetic media generation systems.
ai-video
PIRF technique improves diffusion models by enforcing physical laws during generation, potentially creating more convincing synthetic videos that follow real-world physics.
ai-audio
Researchers adapt MM-SHAP to quantify whether Audio LLMs truly process sound or rely on text reasoning—critical insights for developing robust deepfake detection systems.
deepfakes
New research reveals that audio watermarking, meant to protect copyright, significantly degrades anti-spoofing systems' ability to detect synthetic speech and deepfakes.
deepfakes
Critical iOS vulnerability allows attackers to inject synthetic faces into live video calls, marking a dangerous evolution in real-time deepfake deployment capabilities.
ai-video
The 2025 horror film Appofeniacs explores new territory by incorporating AI deepfake technology as a central element, marking a shift in how synthetic media enters cinema.
deepfakes
Danish government aims to implement comprehensive deepfake legislation by winter, signaling Europe's growing urgency in regulating synthetic media and AI-generated content.
ai-regulation
California's proposed AI safety bill SB 53 may establish crucial oversight for AI companies, with significant implications for deepfake and synthetic media regulation.
deepfakes
iProov's threat intelligence team discovers sophisticated iOS video injection tool designed to bypass biometric security systems using deepfake technology.
ai-video
Major ChatGPT and Sora disruption highlights critical dependencies on centralized AI systems, raising urgent questions about digital reliability.
ai-video
Researchers discover that asking deepfake videos to draw basic shapes reveals their artificial nature, offering new hope in the fight against digital deception.
ai-video
Rising deepfake abuse among Korean teens signals urgent need for digital literacy and verification tech as AI-generated content becomes weaponized.