AI Security
Open Framework Detects Attack Patterns in Multi-Agent AI Systems
New research introduces an open framework for training security models that detect temporal attack patterns in multi-agent AI workflows through trace-based analysis.
AI Security
New research introduces an open framework for training security models that detect temporal attack patterns in multi-agent AI workflows through trace-based analysis.
Open Source
Arcee launches Trinity family of open-source AI models under Apache 2.0 license, featuring 8B, 20B, and 70B parameter variants. Models claim competitive performance against proprietary alternatives with full commercial freedom.
LLM Security
Researchers introduce AdversariaLLM, a modular framework for evaluating large language model vulnerabilities. The open-source toolbox standardizes adversarial testing methodologies for AI security research.