Building a Decentralized Oracle for Deepfake Detection
A new approach combines blockchain-based decentralized oracles with AI detection models to create tamper-resistant deepfake verification systems that don't rely on any single authority.
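The core idea can be illustrated with a minimal sketch: several independent oracle nodes each run their own detection model and report a score, and the verdict comes from a robust aggregate of those reports rather than from any single node. Everything below is hypothetical illustration, not the actual system's API; the `NodeReport` type, the quorum parameter, and the median-based aggregation are assumptions chosen to show why a dishonest minority cannot flip the result.

```python
from dataclasses import dataclass
from statistics import median

@dataclass
class NodeReport:
    """A verdict from one oracle node (on-chain signing omitted in this sketch)."""
    node_id: str
    fake_probability: float  # output of the node's local detection model

def aggregate_verdict(reports: list[NodeReport],
                      quorum: int = 3,
                      threshold: float = 0.5) -> str:
    """Combine independent node reports into a single verdict.

    The median is robust to outliers: as long as a majority of nodes
    are honest, a dishonest minority cannot shift the aggregate score
    past the threshold on its own.
    """
    if len(reports) < quorum:
        return "inconclusive"  # not enough independent attestations
    score = median(r.fake_probability for r in reports)
    return "likely-fake" if score >= threshold else "likely-authentic"

reports = [
    NodeReport("node-a", 0.92),
    NodeReport("node-b", 0.88),
    NodeReport("node-c", 0.10),  # an outlier (or dishonest) node
]
print(aggregate_verdict(reports))  # median is 0.88, so: likely-fake
```

In a real deployment the reports would be signed and posted on-chain, with staking or slashing to discourage dishonest nodes; the sketch only captures the aggregation step that removes reliance on a single authority.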
New research shows that using LLMs to impersonate an author's writing style does not successfully evade existing authorship verification methods, reinforcing the robustness of stylometric detection techniques.
New research reveals LLMs rely on shallow surface-level patterns rather than true logical reasoning, with surface heuristics systematically overriding implicit constraints even in advanced models.
At RSAC 2025, enterprise security leaders shared strategies for combating deepfake attacks targeting organizations, from real-time detection tools to zero-trust verification protocols.
China's latest Five-Year Plan outlines sweeping AI deployment goals across industries, setting the stage for accelerated competition in generative AI, synthetic media, and digital content technologies.
A new research paper proposes decision-centric design for LLM systems, shifting focus from model accuracy to downstream decision quality — with implications for how AI pipelines are architected.
New research examines why safety alignment in large AI models remains fundamentally fragile, with implications for content guardrails meant to prevent deepfake and synthetic media generation.
Orange Business integrates deepfake detection into its enterprise security portfolio, signaling growing demand for AI-powered authenticity tools in corporate communications and fraud prevention.
At RSA Conference 2025, CISOs revealed how they're restructuring security operations to combat deepfake attacks targeting enterprise authentication and communications.
Modulate launches Velma Deepfake Detect, a tool focused on identifying AI-generated synthetic voices in real time, addressing growing concerns about voice cloning fraud and audio deepfakes.
Liquid AI releases a 350M parameter model trained on 28 trillion tokens with scaled reinforcement learning, challenging assumptions about what compact models can achieve.
A new framework called OneComp promises to compress generative AI models with a single line of code, potentially making diffusion and video generation models far more deployable at the edge.