Study: Humans Outperform AI at Detecting Deepfake Videos
New research reveals a surprising detection gap: while machines excel at spotting deepfake images, humans consistently outperform AI systems when identifying synthetic videos.
How modern AI models process massive context without quadratic memory explosion: Sparse Attention, Linear Attention, State Space Models, and Memory-Augmented transformers explained.
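One of those techniques, linear attention, can be sketched briefly. Standard attention materializes an n-by-n score matrix, so memory grows quadratically with sequence length; linear attention applies a kernel feature map to queries and keys and reorders the matrix products so only d-by-d intermediates are ever formed. The snippet below is a minimal NumPy sketch, assuming the elu(x)+1 feature map; it is an illustration of the idea, not any specific model's implementation.

```python
import numpy as np

def linear_attention(Q, K, V):
    """O(n*d) memory attention via kernelized queries/keys (illustrative sketch)."""
    # Feature map elu(x) + 1: keeps values positive so the normalizer is valid.
    phi = lambda x: np.where(x > 0, x + 1.0, np.exp(x))
    Qp, Kp = phi(Q), phi(K)
    # Associativity trick: compute (K^T V) first -> a (d, d) matrix,
    # so the (n, n) attention matrix is never materialized.
    KV = Kp.T @ V                      # (d, d)
    Z = Qp @ Kp.sum(axis=0)            # (n,) per-row normalizer
    return (Qp @ KV) / Z[:, None]      # (n, d)

n, d = 1024, 64
rng = np.random.default_rng(0)
Q, K, V = (rng.standard_normal((n, d)) for _ in range(3))
out = linear_attention(Q, K, V)
print(out.shape)  # (1024, 64)
```

The key design choice is computing `K^T V` before multiplying by `Q`, which swaps quadratic memory for memory linear in sequence length.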
New research introduces SO-LoRA, combining sparse and orthogonal low-rank adaptation to enable efficient multi-task LLM fine-tuning over wireless networks with reduced interference.
New research introduces CREDIT, a certified framework for verifying deep neural network ownership and defending against model extraction attacks through provable security guarantees.
Federal judge dismisses Elon Musk's xAI trade secrets claims against OpenAI, marking a significant legal victory with implications for AI industry talent movement and competitive dynamics.
Enterprises face a growing threat from deepfake-enabled interview fraud, where bad actors use real-time face swapping and voice cloning to impersonate candidates during remote hiring processes.
New research goes beyond behavioral analysis to trace the internal mechanisms LLMs use when weighing competing reward signals, offering insights into AI decision-making at the circuit level.
Researchers introduce a new benchmark for evaluating how general LLM agents perform when given additional compute resources at inference time, addressing a critical gap in agent evaluation.
New research proposes combining ML-assisted sampling with LLM labeling to measure policy-violating content at scale, offering a methodological breakthrough for detecting synthetic media and deepfakes.
Anthropic launches secondary share sale worth up to $6B, valuing the Claude maker at $61B as it competes with OpenAI for AI dominance and attracts major backing from Google and Salesforce.
Moving beyond simple accuracy, these five metrics—task success rate, tool usage accuracy, context coherence, response latency, and safety compliance—reveal what truly matters when assessing AI agents.
Rising deepfake fraud incidents are creating new investment opportunities in cybersecurity ETFs as detection and authentication technologies become critical enterprise priorities.