LLM Security
LLM Poisoning: How Corrupted Training Data Compromises AI
Data poisoning attacks targeting large language models can manipulate outputs by corrupting training datasets. Understanding these vulnerabilities is critical for maintaining AI system integrity and authenticity.