Anthropic Secures $30B Series G at $380B Valuation
Anthropic raises $30 billion in Series G funding, reaching a $380 billion valuation. The massive investment signals continued confidence in frontier AI development.
The European Commission opens formal proceedings against X over Grok AI's generation of explicit deepfake images, marking a significant regulatory action under the Digital Services Act.
ByteDance launches Seedance 2.0, a next-generation AI model that generates video clips from text, images, audio, and video inputs, expanding multimodal capabilities in synthetic media.
New research introduces risk-equalized differentially private synthetic data that protects outliers by controlling record-level influence, addressing critical privacy gaps in AI training data.
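The paper's exact mechanism isn't detailed here, but the core idea of controlling record-level influence can be illustrated with a generic clip-and-noise step; every name and bound below is illustrative rather than drawn from the paper:

```python
import numpy as np

def release_sum_with_equalized_influence(records, clip_bound, epsilon, delta=1e-5):
    """Illustrative sketch: cap every record's L2 contribution at a shared bound
    so outliers cannot dominate, then add Gaussian noise calibrated to that bound."""
    clipped = []
    for x in records:
        norm = np.linalg.norm(x)
        scale = min(1.0, clip_bound / (norm + 1e-12))  # outliers get scaled down hardest
        clipped.append(np.asarray(x, dtype=float) * scale)
    total = np.sum(clipped, axis=0)
    # Gaussian mechanism: sensitivity of the clipped sum is clip_bound
    sigma = clip_bound * np.sqrt(2.0 * np.log(1.25 / delta)) / epsilon
    return total + np.random.normal(0.0, sigma, size=total.shape)
```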
New research introduces a principled approach to removing harmful concepts from generative AI models using tempering and classifier guidance, with major implications for synthetic media safety.
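The paper's specific tempering and guidance scheme isn't reproduced here; the sketch below shows the generic form of classifier guidance used to steer a diffusion sampler away from a flagged concept, with `score_model`, `concept_classifier`, and the temperature factor all assumed for illustration:

```python
import torch

def concept_averse_score(score_model, concept_classifier, x_t, t,
                         guidance_scale=2.0, temperature=0.9):
    """Illustrative sketch: subtract the gradient of the classifier's
    log-probability that x_t contains the unwanted concept from a
    tempered model score, pushing samples away from that concept."""
    x_t = x_t.detach().requires_grad_(True)
    score = temperature * score_model(x_t, t)          # tempered base score (assumption)
    logits = concept_classifier(x_t, t)                # class 1 = concept present (assumption)
    log_p_concept = logits.log_softmax(dim=-1)[..., 1]
    grad = torch.autograd.grad(log_p_concept.sum(), x_t)[0]
    return score - guidance_scale * grad               # negative classifier guidance
```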
New research introduces PRISM, a differentially private synthetic data framework using structure-aware budget allocation to optimize prediction accuracy while maintaining privacy guarantees.
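PRISM's actual allocation rule isn't given here, but the general pattern of structure-aware budget splitting under sequential composition looks roughly like this; the importance weights and the Laplace measurement step are assumptions for illustration:

```python
import numpy as np

def allocate_budget(query_weights, total_epsilon):
    """Illustrative sketch: split a total privacy budget across measured
    statistics in proportion to their assumed importance for downstream
    prediction; the shares sum to total_epsilon (sequential composition)."""
    w = np.asarray(query_weights, dtype=float)
    return total_epsilon * w / w.sum()

def noisy_counts(true_counts, epsilon_shares):
    """Answer each counting query with Laplace noise scaled to its share
    (sensitivity 1 per count)."""
    return [c + np.random.laplace(0.0, 1.0 / eps)
            for c, eps in zip(true_counts, epsilon_shares)]
```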
Google's new Agent2Agent protocol establishes a standard for AI agents to communicate and collaborate across platforms, enabling complex multi-agent workflows for enterprise applications.
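As a rough sense of what cross-platform agent communication looks like, the sketch below discovers a remote agent's capability card and sends it a task over JSON-RPC; the endpoint URL is hypothetical, and the well-known path, method name, and message shape follow the publicly described A2A draft but should be treated as assumptions rather than a verified client:

```python
import json
import urllib.request

AGENT_BASE = "https://agent.example.com"  # hypothetical remote agent

def fetch_agent_card(base_url):
    """Discover the remote agent's advertised skills and endpoints."""
    with urllib.request.urlopen(f"{base_url}/.well-known/agent.json") as resp:
        return json.load(resp)

def send_task(base_url, task_id, text):
    """Send a task as a JSON-RPC 2.0 request to the remote agent."""
    payload = {
        "jsonrpc": "2.0",
        "id": 1,
        "method": "tasks/send",
        "params": {
            "id": task_id,
            "message": {"role": "user", "parts": [{"type": "text", "text": text}]},
        },
    }
    req = urllib.request.Request(
        base_url,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```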
TikTok parent ByteDance is reportedly developing proprietary AI chips and is in talks with Samsung over manufacturing, signaling major vertical integration in AI infrastructure.
New research examines how AI communities are splitting over approaches to human control of autonomous agents, finding significant divergence in oversight philosophies that could shape the future of AI governance.
New research introduces a reference-free evaluation framework that uses multiple independent LLMs to assess AI outputs, aligning more closely with human judgments than single-judge approaches.
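The framework's aggregation rule isn't specified here; a minimal sketch of the multi-judge idea, assuming each judge is a callable wrapping one LLM that returns a numeric score, might look like:

```python
from statistics import median

def panel_verdict(candidate_output, judges, rubric):
    """Illustrative sketch: several independent LLM judges score the same
    output against a rubric, and the panel verdict is the median score,
    which is more robust to any single judge's bias than one model alone."""
    scores = []
    for judge in judges:  # each judge: prompt string -> numeric score as text
        prompt = f"{rubric}\n\nCandidate output:\n{candidate_output}\n\nScore 1-10:"
        scores.append(float(judge(prompt)))
    return {"scores": scores, "verdict": median(scores)}
```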
New research introduces PABU, a framework that helps LLM agents track their progress and update beliefs more efficiently, reducing computational waste in multi-step reasoning tasks.
New research introduces ELPO, a training method that teaches LLMs to learn from irrecoverable errors in tool-integrated reasoning chains, improving agent capabilities.