AI Agents
GraphBit: Building Reliable Agentic AI Workflows
Learn how GraphBit enables production-grade AI agent workflows through deterministic tools, validated execution graphs, and optional LLM orchestration for reliable automation.
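The idea of a validated execution graph of deterministic tools can be sketched generically. This is an illustrative toy, not GraphBit's actual API: the `Workflow` class, its methods, and the example tools are all invented here to show the pattern (declare nodes and dependencies, validate the graph up front, then execute in topological order).

```python
from collections import deque

class Workflow:
    """Toy DAG of deterministic tool steps (illustrative only, not GraphBit's API)."""

    def __init__(self):
        self.funcs = {}  # node name -> callable
        self.deps = {}   # node name -> list of upstream node names

    def add(self, name, func, deps=()):
        self.funcs[name] = func
        self.deps[name] = list(deps)

    def validate(self):
        # Every dependency must refer to a declared node, and the graph must be acyclic.
        for name, ds in self.deps.items():
            for d in ds:
                if d not in self.funcs:
                    raise ValueError(f"{name} depends on unknown node {d}")
        if len(self._topo_order()) != len(self.funcs):
            raise ValueError("workflow graph contains a cycle")

    def _topo_order(self):
        # Kahn's algorithm: repeatedly emit nodes whose dependencies are satisfied.
        indeg = {n: len(ds) for n, ds in self.deps.items()}
        ready = deque(n for n, d in indeg.items() if d == 0)
        order = []
        while ready:
            n = ready.popleft()
            order.append(n)
            for m, ds in self.deps.items():
                if n in ds:
                    indeg[m] -= 1
                    if indeg[m] == 0:
                        ready.append(m)
        return order

    def run(self, seed):
        self.validate()
        results = {}
        for n in self._topo_order():
            inputs = [results[d] for d in self.deps[n]] or [seed]
            results[n] = self.funcs[n](*inputs)
        return results

# Usage: a two-step pipeline; the root node receives the seed input.
wf = Workflow()
wf.add("fetch", lambda x: x.strip())
wf.add("score", lambda text: len(text), deps=["fetch"])
print(wf.run("  hello  ")["score"])  # -> 5
```

Validating the graph before any step runs is what makes failures deterministic: a missing dependency or a cycle is rejected up front rather than surfacing mid-execution.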
OpenAI
OpenAI is hiring a new Head of Preparedness to lead efforts assessing and mitigating risks from frontier AI models, including potential misuse in synthetic media generation.
AI Regulation
China's Cyberspace Administration proposes comprehensive rules targeting AI systems that simulate human appearance, voice, and behavior, with major implications for synthetic media and deepfake technology.
Voice Cloning
Honor adds free AI-powered voice cloning detection to Magic8 Pro, targeting the growing threat of synthetic voice scam calls. The feature represents a shift toward consumer-level deepfake protection.
AI video
Major studios embraced AI tools for film and TV production in 2025, but the creative and commercial outcomes remain questionable as the industry grapples with synthetic media integration.
AI Agents
New research proposes combining blockchain monitoring with agentic AI to create verifiable perception-reasoning-action pipelines, addressing critical trust and authenticity challenges in autonomous AI systems.
AI safety
Researchers introduce a new evaluation framework for measuring when and how autonomous AI agents violate safety constraints while pursuing objectives, addressing critical gaps in AI alignment research.
Generative AI
Researchers propose a Taylor-based approach that outperforms the classic Paterson-Stockmeyer method for computing matrix exponentials in flow-based generative AI models, offering efficiency gains for video and image synthesis.
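For context, the textbook truncated Taylor series for the matrix exponential looks like the sketch below. This is the naive baseline scheme, not the paper's optimized method or Paterson-Stockmeyer; the function name and term count are arbitrary choices for illustration.

```python
import numpy as np

def expm_taylor(A, terms=20):
    """Truncated Taylor series: e^A ~ sum_{k=0}^{terms-1} A^k / k!.
    Naive textbook scheme for illustration, not the paper's method."""
    result = np.eye(A.shape[0])
    term = np.eye(A.shape[0])
    for k in range(1, terms):
        term = term @ A / k          # builds A^k / k! incrementally
        result = result + term
    return result

# Sanity check: for the rotation generator J = [[0, 1], [-1, 0]],
# e^J is a rotation by 1 radian: [[cos 1, sin 1], [-sin 1, cos 1]].
A = np.array([[0.0, 1.0], [-1.0, 0.0]])
R = expm_taylor(A)
print(np.allclose(R, [[np.cos(1), np.sin(1)], [-np.sin(1), np.cos(1)]]))  # -> True
```

Each iteration costs one matrix multiply, so reducing the number of multiplies needed for a given accuracy, which is what Paterson-Stockmeyer and its proposed replacement compete on, directly cuts sampling cost in flow-based generators.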
AI safety
New research bridges efficiency and safety by developing formal verification methods for neural networks with early exits, enabling mathematically proven safety guarantees for adaptive AI systems.
Diffusion Models
New research reveals how diffusion models suffer 'generative collapse' when trained on synthetic data: dominated samples disappear while dominating ones proliferate across generations.
LLM Agents
New research introduces GenEnv, a framework where LLM agents and environment simulators co-evolve through difficulty-aligned training, enabling more robust agent capabilities.
AI Agents
A technical deep dive into how AI coding agents work, from tool-calling mechanisms and agentic loops to planning systems and memory architectures that enable autonomous code generation.
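The tool-calling agentic loop described here can be reduced to a small sketch. The "model" below is a scripted stand-in invented for this example (real agents query an LLM), but the control flow is the core pattern: the model proposes a tool call, the runtime executes it, the result is appended to the conversation, and the loop repeats until the model emits a final answer or exhausts its step budget.

```python
# Toy agentic loop with a scripted policy standing in for an LLM.
# FILES, TOOLS, and fake_model are hypothetical fixtures for illustration.

FILES = {"notes.txt": "agents call tools in a loop"}

TOOLS = {
    "read_file": lambda path: FILES.get(path, "<not found>"),
    "word_count": lambda text: str(len(text.split())),
}

def fake_model(history):
    """Scripted tool-call decisions; a real agent asks an LLM here."""
    if not any(m["role"] == "tool" for m in history):
        return {"tool": "read_file", "args": {"path": "notes.txt"}}
    last = history[-1]["content"]
    if last.isdigit():
        return {"final": f"The file has {last} words."}
    return {"tool": "word_count", "args": {"text": last}}

def run_agent(task, max_steps=5):
    history = [{"role": "user", "content": task}]
    for _ in range(max_steps):
        action = fake_model(history)
        if "final" in action:          # model is done reasoning
            return action["final"]
        result = TOOLS[action["tool"]](**action["args"])
        history.append({"role": "tool", "content": result})
    return "step budget exhausted"

print(run_agent("How many words are in notes.txt?"))  # -> The file has 6 words.
```

The step budget and the tool registry are where production agents add the planning and memory layers the article discusses: bounded loops prevent runaway execution, and the history list is the simplest possible form of agent memory.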