Adversarial Attacks
Neural Uncertainty Principle Links Adversarial Attacks to LLM Hallucination
A new theoretical framework unifies adversarial vulnerability in neural networks with LLM hallucination, proposing that both arise from a fundamental uncertainty trade-off in learned representations.