ai-safety
AI Models Can't Truly Forget: Memory Regeneration Study
New research reveals that AI image generators can regenerate supposedly 'unlearned' harmful content through adversarial prompts, posing challenges for deepfake prevention.
Deepfakes
Researchers develop constrained adversarial training that prevents overly pessimistic defenses, improving real-world detection of synthetic media and deepfakes.
ai-research
New framework reveals how AI models internally structure concepts through hierarchical trees, offering insights into how deepfakes and synthetic media are generated at the neural level.
ai-video
OpenAI's Sora introduces new safeguards letting users restrict how AI-generated versions of themselves appear in videos, addressing deepfake concerns.
ai-tools
Build your own ChatGPT-style chatbots, image generation engines, and voice translators with these advanced generative AI projects featuring complete code, tutorials, and live demos.
ai-infrastructure
New standardized communication protocols for AI agents could revolutionize how synthetic media systems collaborate on generation, detection, and authentication.
ai-video
DeepSeek-R1 and o3-mini showcase how reinforcement learning and smart reasoning are replacing brute-force scaling, with major implications for synthetic media.
Deepfakes
Researchers develop new Bayesian approximation methods to quantify uncertainty in deepfake detection models, addressing critical reliability gaps in current systems.
ai-detection
New 4B-parameter model achieves 84% accuracy in detecting AI hallucinations, matching larger models while using half the resources, a key technology for deepfake detection.
ai-research
ToolBrain introduces flexible reinforcement learning for training AI agents to use tools effectively, with implications for future content generation and verification systems.
ai-video
Researchers introduce VIRTUE, a visual-interactive embedding model that can understand specific image regions through user prompts, advancing capabilities for synthetic media.
Deepfakes
New research reveals frontier AI models achieve only 28% accuracy distinguishing truth from manipulation in high-stakes environments, with critical implications for deepfake detection.