LLM
Proactive Memory Extraction Advances LLM Agent Capabilities
New research proposes proactive memory extraction for LLM agents, moving beyond static summarization to enable more dynamic knowledge retention and recall in autonomous AI systems.
LLM
New research introduces an evaluation-driven multi-agent workflow that automatically optimizes prompt instructions for improved LLM instruction-following performance.
digital twins
A new survey traces how Digital Twin AI evolves from LLMs to world models, enabling AI systems to simulate and predict physical environments with increasing fidelity.
LLM
New research introduces cognitive artifacts that maintain coherence across extended LLM conversations, addressing the fundamental challenge of context degradation in long interactions.
LLM
Quantization-aware fine-tuning techniques like QLoRA can cut a large language model's memory footprint by roughly 75% (4-bit instead of 16-bit weights) while preserving performance, enabling efficient AI deployment on consumer hardware.
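The 75% figure follows directly from bit-width arithmetic: storing weights in 4 bits instead of 16. A minimal sketch with a hypothetical 7B-parameter model (weight memory only; this is back-of-the-envelope math, not the QLoRA implementation):

```python
def model_memory_gb(n_params: float, bits_per_weight: int) -> float:
    """Memory footprint of the model weights alone, in gigabytes."""
    return n_params * bits_per_weight / 8 / 1e9

n = 7e9                          # hypothetical 7B-parameter model
fp16 = model_memory_gb(n, 16)    # 14.0 GB in half precision
int4 = model_memory_gb(n, 4)     # 3.5 GB with 4-bit quantization

reduction = 1 - int4 / fp16      # 0.75, i.e. the 75% figure
```

Activations, KV cache, and (for QLoRA) the small set of trainable adapter weights add overhead on top of this, which is why real deployments land near, rather than exactly at, the 75% savings.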
LLM
New research introduces HaluNet, a framework using multi-granular uncertainty modeling to efficiently detect hallucinations in LLM question answering systems.
LLM
New research introduces entropy-based adaptive speculation that detects reasoning phases in LLMs, dynamically adjusting decoding strategies to improve both speed and output quality.
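The core signal is the Shannon entropy of the next-token distribution: low entropy means the model is confident and a draft model can speculate many tokens ahead; high entropy suggests an uncertain "reasoning" phase where speculation is likely wasted. A toy sketch of the idea (the threshold and draft lengths are illustrative assumptions, not the paper's values):

```python
from math import log2

def entropy(probs):
    """Shannon entropy (bits) of a next-token probability distribution."""
    return -sum(p * log2(p) for p in probs if p > 0)

def speculation_length(probs, max_draft=8, threshold=2.0):
    """Speculate aggressively when the model is confident (low entropy),
    conservatively when it is uncertain (high entropy).
    max_draft and threshold are hypothetical tuning knobs."""
    return max_draft if entropy(probs) < threshold else 1

confident = [0.97, 0.01, 0.01, 0.01]   # peaked distribution, low entropy
uncertain = [0.25, 0.25, 0.25, 0.25]   # flat distribution, 2.0 bits
```

With these numbers, the confident distribution triggers a full 8-token draft, while the flat one falls back to ordinary one-token decoding.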
LLM
New research introduces STED and Consistency Scoring, a systematic framework for measuring how reliably large language models produce structured outputs—critical for production AI systems.
speech recognition
New research introduces a verification-based approach to correct speech recognition errors while minimizing LLM hallucinations through structured multi-stage processing.
LLM
Researchers propose efficient Shapley value approximation using language model arithmetic to determine which training data samples matter most for LLM fine-tuning.
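A Shapley value credits each training sample with its average marginal contribution to model quality across all orderings of the data. Exact computation enumerates every permutation, which is what makes approximation necessary at scale. A toy exact version, with a made-up utility table standing in for "accuracy after fine-tuning on this subset" (in reality each lookup would require a training run):

```python
from itertools import permutations
from math import factorial

# Hypothetical utility: model accuracy when fine-tuned on a subset of samples.
utility = {
    frozenset(): 0.50,
    frozenset({"a"}): 0.70, frozenset({"b"}): 0.60, frozenset({"c"}): 0.55,
    frozenset({"a", "b"}): 0.80, frozenset({"a", "c"}): 0.72,
    frozenset({"b", "c"}): 0.62, frozenset({"a", "b", "c"}): 0.82,
}

def shapley(samples):
    """Exact Shapley values: average marginal gain over all n! orderings."""
    totals = {s: 0.0 for s in samples}
    for order in permutations(samples):
        coalition = frozenset()
        for s in order:
            totals[s] += utility[coalition | {s}] - utility[coalition]
            coalition = coalition | {s}
    return {s: t / factorial(len(samples)) for s, t in totals.items()}

phi = shapley(["a", "b", "c"])
```

The values sum to the total gain (0.82 − 0.50 = 0.32), and sample "a" earns the largest share, matching its consistently larger marginal contributions in the table.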
LLM
New research demonstrates LLMs can design complete neural network architectures for image captioning under strict API constraints, opening new possibilities for automated AI system design.
embeddings
The famous equation 'King - Man + Woman = Queen' shows how word embeddings capture semantic relationships as directions in vector space, a representational idea that underpins modern large language models.
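The analogy works because vector offsets encode relations: subtracting "man" from "king" isolates a royalty direction, and adding "woman" lands near "queen". A self-contained sketch with hand-built 4-dimensional toy vectors (real embeddings learn these directions from data; the components here are chosen for illustration):

```python
from math import sqrt

# Toy embeddings: dims roughly encode (royal, male, female, common).
vocab = {
    "king":  (1.0, 1.0, 0.0, 0.0),
    "queen": (1.0, 0.0, 1.0, 0.0),
    "man":   (0.0, 1.0, 0.0, 1.0),
    "woman": (0.0, 0.0, 1.0, 1.0),
}

def add(u, v): return tuple(a + b for a, b in zip(u, v))
def sub(u, v): return tuple(a - b for a, b in zip(u, v))

def cos(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v)))

def nearest(vec, exclude=()):
    """Vocab word most similar to vec, skipping the query words."""
    return max((w for w in vocab if w not in exclude),
               key=lambda w: cos(vocab[w], vec))

target = add(sub(vocab["king"], vocab["man"]), vocab["woman"])
answer = nearest(target, exclude={"king", "man", "woman"})  # "queen"
```

Excluding the query words mirrors standard practice in analogy evaluation, since the nearest neighbor of the raw offset is often one of the inputs themselves.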