Defenders Build Deepfakes to Train Better Detectors
Researchers are fighting fire with fire: generating synthetic deepfakes at scale to train more robust detection models capable of keeping pace with rapidly evolving generative AI threats.
Canva's AI 2.0 update introduces prompt-based editing across its design platform, bringing generative image and video tools to more than 200 million users in a bid to reshape creative workflows.
MIT Technology Review examines why meaningful human oversight of autonomous AI weapons systems may be impossible, as machine-speed decision cycles outpace human cognition and create only the illusion of control.
UCSD and Together AI introduce Parcae, a stable looped transformer architecture that achieves the quality of models twice its size, potentially reshaping how efficient AI systems are built and deployed.
New research reveals a troubling disconnect in large language models: they can produce logically valid reasoning chains yet arrive at incorrect final answers, raising questions about AI reliability and trust.
A new research paper proposes bi-predictability as a real-time signal for detecting compromised or manipulated LLM interactions, offering a lightweight approach to monitoring conversational integrity without access to model internals.
Real-time deepfakes and voice cloning are turning video conferencing into a new attack surface. Here's how AI-powered impersonation exploits trust in virtual meetings and what security gaps remain.
Adobe is integrating Claude Code-style agentic AI into Creative Cloud, enabling AI-driven creative workflows that could reshape how professionals produce and manipulate visual media at scale.
A deep dive into context engineering techniques for AI agents, exploring how LLM summarization, token masking, and memory systems help manage the context window to build more capable AI systems.
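One of the techniques the piece covers, summarizing older conversation turns to stay within the context window, can be sketched in a few lines. This is a minimal illustration under stated assumptions: the token budget, the rough 4-characters-per-token heuristic, and the `summarize()` placeholder are all hypothetical stand-ins, not any specific framework's API (a real system would call an LLM to produce the summary).

```python
def estimate_tokens(text: str) -> int:
    # Rough heuristic: ~4 characters per token for English text.
    return max(1, len(text) // 4)

def summarize(turns: list[str]) -> str:
    # Placeholder for an LLM summarization call; here we just truncate.
    return "SUMMARY: " + " | ".join(t[:20] for t in turns)

def compact_context(history: list[str], budget: int = 50) -> list[str]:
    """Keep recent turns verbatim; fold older ones into one summary."""
    if sum(estimate_tokens(t) for t in history) <= budget:
        return history
    kept = list(history)
    dropped = []
    # Drop oldest turns into the summary until the remainder fits.
    while kept and sum(estimate_tokens(t) for t in kept) > budget:
        dropped.append(kept.pop(0))
    return [summarize(dropped)] + kept
```

The same loop structure applies to the article's other techniques (token masking, external memory): each trades verbatim history for a cheaper representation once the budget is exceeded.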
From single-agent loops to multi-agent orchestration, a comprehensive overview of every major AI agent architecture pattern driving autonomous systems today.
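The simplest pattern in that overview, the single-agent loop, can be sketched as: the model proposes an action, a tool executes it, and the observation is fed back until the model signals completion. The toy model and tool registry below are illustrative assumptions, not a real framework's interface.

```python
def run_agent(model, tools: dict, task: str, max_steps: int = 5) -> str:
    """Single-agent loop: act, observe, repeat until 'finish'."""
    observation = task
    for _ in range(max_steps):
        action, arg = model(observation)   # model picks the next step
        if action == "finish":
            return arg                     # final answer
        result = tools[action](arg)        # execute the chosen tool
        observation = f"{action}({arg}) -> {result}"
    return "step limit reached"

# Toy model: route arithmetic to a calculator tool, then finish.
def toy_model(observation: str):
    if "->" in observation:
        return "finish", observation.split("-> ")[1]
    return "add", observation

tools = {"add": lambda arg: str(sum(int(x) for x in arg.split("+")))}
```

Multi-agent orchestration layers on top of this: each agent is itself such a loop, and an orchestrator routes tasks and observations between them.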
The UK government's Mythos testing framework evaluates AI models for real-world cybersecurity risks, separating genuine threats from speculation and informing policy on AI-powered attacks.
Meta's evolving approach to Llama's open licensing reveals a strategic pivot that could reshape the open-source AI landscape and impact developers building on foundation models.