Deepfake Cyberattacks Exploit Trust, Bypass Defenses
Info-Tech Research Group finds deepfake-powered cyberattacks increasingly exploit human trust to circumvent traditional security systems, urging organizations to rethink defense strategies.
The European Union has endorsed banning AI-powered nudification apps while pushing back deadlines for its landmark AI Act. The move directly targets non-consensual synthetic intimate imagery.
Tencent AI releases Covo-Audio, a 7-billion-parameter open-source speech language model with a real-time inference pipeline for conversational audio and reasoning, advancing synthetic voice capabilities.
A new theoretical framework formalizes how large language models process, weight, and become susceptible to misleading information — with implications for AI safety, adversarial attacks, and digital authenticity.
A new SAS study reveals deepfake fraud is escalating rapidly while just 7% of organizations report strong readiness to combat synthetic media threats, exposing a critical gap in enterprise defenses.
As deepfake-driven fraud escalates across Africa, detection technology providers are expanding into South Africa to combat synthetic identity scams targeting financial institutions and consumers.
Baltimore teenagers have filed a lawsuit against Elon Musk's xAI over its Grok AI image generator, raising critical questions about legal liability for synthetic media platforms and AI-generated content.
New research examines whether large language models can convincingly replicate human writing styles across literary and political texts, with implications for AI-generated content detection and digital authenticity.
New research uses explainable AI techniques to reveal why AI-generated text detectors fail in practice despite strong benchmark scores, exposing critical shortcomings in current detection approaches.
New research reveals frontier language models frequently skip or contradict their own chain-of-thought reasoning, raising serious questions about AI transparency and the reliability of systems that "show their work."
New research proposes MCLR, a training-time method that maximizes inter-class likelihood ratios to improve conditional visual generation, and proves a formal equivalence between classifier-free guidance and alignment objectives such as DPO.
Pindrop CEO Vijay Balasubramaniyan leads the charge against deepfake voice attacks, leveraging audio authentication technology to protect enterprises from AI-generated voice fraud.