LLM Security
Special Token Attacks: The 96% LLM Jailbreak Exploit
Security researchers uncover how special tokens in LLM architectures create hidden attack surfaces, enabling jailbreak success rates as high as 96% across major models.
World Models
Meta AI chief Yann LeCun is betting $3.5 billion that world models—not language models—will achieve true machine intelligence. This architectural pivot could reshape AI video generation and physical simulation.
xAI
Malaysia has lifted its ban on Elon Musk's Grok AI chatbot following a compliance review, marking a significant development in how Southeast Asian nations regulate generative AI platforms.
Deepfakes
Cybercriminals are leveraging AI-generated deepfakes alongside fake security alerts and evolving malware to compromise users. Here's how these threats work and how to protect yourself.
LLM Research
Researchers introduce S-RLS, a novel method for continuous LLM knowledge updates that avoids catastrophic forgetting through soft memory preservation instead of rigid constraints.
LLM Detection
A new arXiv paper examines whether current LLM detectors can be trusted, revealing critical limitations in AI-generated text detection that impact digital authenticity efforts.
NLP
New research exposes systematic sentiment bias in NLP transformers, showing that AI language models struggle to maintain a neutral tone in business communications and raising concerns for automated content generation.
AI Video Generation
New research argues current AI video generators like Sora lack true physical understanding. The paper proposes a shift from pattern-matching to physics-grounded world models for reliable simulation.
LLM Research
Researchers propose methods to measure and eliminate hallucination risks in large language models, shifting from generative to consultative AI for high-stakes legal applications.
Neuro-Symbolic AI
New research proposes tensor network mathematics to unify neural networks with symbolic AI, potentially enabling more interpretable and reasoning-capable AI systems.
Synthetic Data
Researchers analyze why Empirical Risk Minimization fails when models are trained on synthetic data, revealing fundamental barriers that affect AI video generation and deepfake systems.
Generative AI
New research explores how generative models can iteratively improve their own training datasets, potentially enhancing quality across AI video, image synthesis, and synthetic media generation.