ByteDance Developing Custom AI Chips with Samsung
TikTok parent ByteDance is reportedly developing proprietary AI chips and in talks with Samsung for manufacturing, signaling major vertical integration in AI infrastructure.
New research examines how AI communities are diverging on approaches to human control of autonomous agents, finding significant differences in oversight philosophy that could shape the future of AI governance.
New research introduces a reference-free evaluation framework that uses multiple independent LLMs to assess AI outputs, achieving closer alignment with human judgments than single-judge approaches.
New research introduces PABU, a framework that helps LLM agents track their progress and update beliefs more efficiently, reducing computational waste in multi-step reasoning tasks.
New research introduces ELPO, a training method that teaches LLMs to learn from irrecoverable errors in tool-integrated reasoning chains, improving agent capabilities.
PAN 2026 announces five research challenges targeting generative AI detection, text watermarking, multi-author analysis, plagiarism detection, and reasoning trajectory identification.
Understanding LLM parameters is key to grasping how AI models generate text, images, and video. Learn what weights and biases actually do and why model scale matters.
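A toy illustration of what those parameters do: each "neuron" combines its inputs through learned weights and a bias. This is a minimal, hypothetical sketch (the `neuron` function and its values are invented for illustration); real LLMs apply billions of such parameters in matrix form.

```python
# Hypothetical minimal sketch of a single neuron: weights scale each
# input, and the bias shifts the result. These are the "parameters"
# adjusted during training.
def neuron(inputs, weights, bias):
    # Weighted sum of inputs plus bias: the core parameterized operation.
    return sum(x * w for x, w in zip(inputs, weights)) + bias

# Two learned weights (one per input) plus one bias term:
# 0.5*1.0 + (-0.25)*2.0 + 0.1 = 0.1
print(neuron([1.0, 2.0], [0.5, -0.25], 0.1))
```

Model scale, in these terms, is simply the total count of such weights and biases across all layers.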
From chain-of-thought reasoning to self-consistency sampling, these seven prompt engineering techniques can dramatically improve how large language models respond to complex queries.
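One of those techniques, self-consistency sampling, can be sketched in a few lines: sample several reasoning chains for the same question, extract each chain's final answer, and take a majority vote. The mock chains and the `extract_answer` heuristic below are invented stand-ins for real temperature-sampled LLM completions.

```python
from collections import Counter

# Pre-written strings stand in for temperature-sampled LLM reasoning chains.
MOCK_CHAINS = [
    "15 + 27 = 42, so the answer is 42",
    "Adding tens then ones gives 42",
    "15 + 27... I get 41",  # one chain reasons incorrectly
    "The sum is 42",
    "Carrying the 1 yields 42",
]

def extract_answer(chain):
    # Toy extraction heuristic: treat the chain's last token as its answer.
    return chain.rstrip(".").split()[-1]

def self_consistency(chains):
    # Majority vote over final answers marginalizes out the reasoning
    # errors made by any single chain.
    votes = Counter(extract_answer(c) for c in chains)
    return votes.most_common(1)[0][0]

print(self_consistency(MOCK_CHAINS))  # → 42
```

The single faulty chain is outvoted, which is the intuition behind why self-consistency improves on taking one greedy chain-of-thought completion.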
New research uncovers systematic shortcuts in LLM-based evaluation systems, revealing how AI judges may rely on superficial patterns rather than genuine quality assessment.
New research reveals that LLM judges favor summaries with high lexical overlap with the source text, overlooking genuinely good abstractive summaries that humans prefer.
New research shows that requiring LLMs to think step-by-step before responding can backfire in conversational settings, making AI agents appear cold and disengaged to users.
New research equips large language models with directional multi-talker speech capabilities, enabling AI to understand who is speaking and from where in complex audio environments.