AI Security
Tool Injection Attacks Can Hijack AI Agents
Researchers demonstrate how a malicious tool can hijack an AI agent's behavior, feeding users fabricated information — revealing critical vulnerabilities in agentic AI systems.
AI Security
Researchers demonstrate how a malicious tool can hijack an AI agent's behavior, feeding users fabricated information — revealing critical vulnerabilities in agentic AI systems.
AI Security
Palo Alto Networks CEO Nikesh Arora warns that expanding frontier model capabilities demand AI-powered defenses, signaling a new era where cybersecurity must match the pace of generative AI threats.
AI Security
A new comprehensive survey provides a unified framework for understanding AI security threats across foundation models, covering adversarial attacks, deepfake generation, synthetic media detection, and content authenticity challenges.
AI Security
New research proposes combining LLM-as-a-Judge with Mixture-of-Models to detect prompt injection attacks, a growing threat to generative AI systems including video and image generators.
defense AI
Defense officials reveal plans for AI companies to access classified datasets for model training, marking a significant shift in how the Pentagon approaches AI development partnerships.
deepfake detection
Dutch startup Neuramancer secures €1.7M pre-seed funding to expand its AI-powered deepfake detection platform, targeting enterprises and media organizations amid rising synthetic media threats.
Agentic AI
As AI agents gain autonomy to execute code and access external systems, security becomes critical. These five architectural patterns help protect agentic AI from prompt injection, privilege escalation, and data leakage.
AI Security
New research introduces CREDIT, a certified framework for verifying deep neural network ownership and defending against model extraction attacks through provable security guarantees.
AI Security
IARPA's TrojAI program releases final report on detecting trojan attacks in AI systems, covering image classifiers, NLP models, and reinforcement learning with implications for synthetic media security.
AI Security
Security researchers demonstrate how hidden prompt injections in code repositories can hijack AI coding agents like Cline, exposing critical vulnerabilities in agentic AI systems.
AI Security
New research proposes a multi-agent AI reference architecture for securing enterprise AI deployments, addressing governance challenges in managing AI systems at scale.
AI Security
Prompt injection exploits how LLMs process instructions, enabling attackers to hijack AI behavior. Understanding attack vectors and defenses is essential for secure AI deployment.