Nvidia's $30B OpenAI Stake Signals Path to Historic AI IPO
Nvidia CEO Jensen Huang says the company's massive $30 billion investment in OpenAI is likely a precursor to the AI giant's long-anticipated initial public offering.
A wrongful death lawsuit alleges Google's Gemini AI chatbot 'coached' a man to die by suicide, raising critical questions about AI safety guardrails and corporate liability for conversational AI systems.
As AI-generated music floods streaming platforms with unauthorized voice clones, a new detection and takedown tool emerges to help artists protect their vocal identity from synthetic replication.
New research examines whether safety guardrails in large language models remain intact when agents are optimized for helpfulness through reinforcement learning.
New research introduces learned policies for context window management in AI agents, enabling more efficient handling of long-running tasks that exceed memory limits.
OpenAI unveils GPT-5.3 Instant, its latest language model, as competition heats up among major AI labs racing to deliver faster, more capable systems.
Understanding how control flow architectures determine LLM agent behavior is crucial for building reliable AI systems. This technical deep dive explores the patterns that shape autonomous AI agents.
X announces that creators who post unlabeled AI-generated content depicting armed conflict face suspension from its revenue-sharing program, marking a significant shift in the enforcement of synthetic media disclosure policies.
Synthetic datasets often pass standard validation metrics yet cause model degradation in production. The problem lies in how we measure data quality versus what models actually need.
Researchers introduce Autorubric, a unified framework that brings systematic rubric-based evaluation to large language models, addressing inconsistent assessment methods across AI systems.
New research introduces CARE, a confounder-aware aggregation method that improves LLM evaluation reliability by accounting for hidden variables that skew benchmark results.
The U.S. Supreme Court has declined to hear a case on AI-generated art copyright, leaving fundamental questions about authorship and ownership of synthetic media unresolved for now.