# LLM Excessive Agency

## Description
Large Language Models (LLMs) are increasingly wired into agents, plugins, and tools that act on their outputs. Excessive agency occurs when such an LLM-based system is granted more functionality, permissions, or autonomy than it needs, so that unexpected, erroneous, or manipulated model outputs can trigger damaging actions. The consequences range from harmful or biased outputs to privacy violations and broader security compromise.
## Remediation

To mitigate the risks associated with excessive agency in LLMs, it is crucial to:

- Limit the decision-making power granted to LLMs and ensure human oversight.
- Implement strict access controls and least-privilege permissions for actions taken by LLMs (see the sketch after this list).
- Continuously monitor and audit LLM activities to detect and respond to anomalies.
- Regularly update and patch LLM software to address known vulnerabilities.
- Conduct thorough security testing and risk assessments to identify and mitigate potential issues.
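As a concrete illustration of the first two points, the sketch below shows a deny-by-default tool dispatcher that only executes allowlisted actions and routes sensitive ones through a human reviewer. The tool names, permission tiers, and `require_human_approval` hook are illustrative assumptions, not part of any particular agent framework:

```python
"""Minimal sketch of a permission-gated tool dispatcher for an LLM agent.

All names here (TOOL_REGISTRY, Tier, require_human_approval) are
hypothetical; adapt them to your own agent framework.
"""
from enum import Enum
from typing import Callable


class Tier(Enum):
    READ_ONLY = "read_only"   # safe to execute automatically
    SENSITIVE = "sensitive"   # requires explicit human approval


# Allowlist: the model may only invoke tools registered here.
TOOL_REGISTRY: dict[str, tuple[Tier, Callable[..., str]]] = {
    "search_docs": (Tier.READ_ONLY, lambda query: f"results for {query!r}"),
    "send_email": (Tier.SENSITIVE, lambda to, body: f"sent email to {to}"),
}


def require_human_approval(tool: str, kwargs: dict) -> bool:
    """Placeholder human-in-the-loop gate; wire this to a real review queue."""
    answer = input(f"Approve {tool} with {kwargs}? [y/N] ")
    return answer.strip().lower() == "y"


def dispatch(tool: str, **kwargs) -> str:
    """Execute a model-requested tool call under least-privilege rules."""
    if tool not in TOOL_REGISTRY:
        # Deny by default: unknown actions are never executed.
        raise PermissionError(f"tool {tool!r} is not allowlisted")
    tier, fn = TOOL_REGISTRY[tool]
    if tier is Tier.SENSITIVE and not require_human_approval(tool, kwargs):
        raise PermissionError(f"human reviewer rejected {tool!r}")
    return fn(**kwargs)


if __name__ == "__main__":
    print(dispatch("search_docs", query="reset flow"))              # runs automatically
    print(dispatch("send_email", to="ops@example.com", body="hi"))  # gated by a human
```

The key design choice is that the model never gains capabilities implicitly: anything outside the registry fails closed, and escalation to a sensitive action requires an explicit human decision.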
## Configuration

Identifier: `injection/llm_excessive_agency`
## Examples

### Ignore this check
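If this check does not apply to your application, it can be skipped in the scan configuration. The snippet below is a sketch assuming a `checks` map keyed by the identifier above with a `skip` flag; verify the exact schema against the configuration reference:

```yaml
checks:
  injection/llm_excessive_agency:
    skip: true
```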
## Score
- Escape Severity:
## Compliance

- OWASP: API8:2023
- OWASP LLM: LLM08:2023
- PCI DSS: 6.5.1
- GDPR: Article-32
- SOC2: CC6
- PSD2: Article-95
- ISO27001: A.12.2
- NIST: SP800-53
- FedRAMP: SI-3
## Classification
- CWE: 200
## Score
- CVSS_VECTOR: CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:C/C:H/I:H/A:N
- CVSS_SCORE: 5.3