# LLM Overreliance

## Description
Large Language Models (LLMs) are powerful tools for generating text, code, and other content. Overreliance occurs when users trust LLM output without critical evaluation. Because LLMs can produce plausible yet incorrect or biased results, unverified output can propagate misinformation and lead to flawed decisions in critical processes.
## Remediation

To mitigate overreliance on LLMs, it is crucial to:

- Implement robust verification processes for LLM outputs to ensure accuracy and reliability (see the sketch after this list).
- Educate users on the limitations and potential biases of LLMs.
- Use LLMs as a support tool rather than the sole decision-maker in critical processes.
- Regularly review and update LLM models to address biases and inaccuracies.
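As an illustration of the first point, here is a minimal Python sketch of an output-verification layer. The `call_llm` stub and the individual checks are hypothetical placeholders, not part of this check's tooling; substitute your own model client and domain-specific validation rules.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class VerifiedOutput:
    text: str
    passed: bool
    failures: list[str]

def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for a real LLM client call."""
    return "42 is the answer."

def verify_output(text: str, checks: dict[str, Callable[[str], bool]]) -> VerifiedOutput:
    """Run every named check against the raw LLM output.

    Outputs that fail any check are flagged so that a human
    (or a downstream rule engine) reviews them before use.
    """
    failures = [name for name, check in checks.items() if not check(text)]
    return VerifiedOutput(text=text, passed=not failures, failures=failures)

if __name__ == "__main__":
    # Example checks: non-empty answer, no refusal marker, length bound.
    # Replace these with real business-rule validation for your domain.
    checks = {
        "non_empty": lambda t: bool(t.strip()),
        "no_refusal": lambda t: "i cannot" not in t.lower(),
        "max_length": lambda t: len(t) <= 2000,
    }
    result = verify_output(call_llm("What is 6 x 7?"), checks)
    if result.passed:
        print("verified:", result.text)
    else:
        print("flagged for human review:", result.failures)
```

The point is structural: the LLM's answer only reaches the decision path after surviving explicit, auditable checks, so the model supports the decision rather than making it.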
## Configuration

Identifier: `injection/llm_overreliance`
## Examples

### Ignore this check
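The snippet below is a sketch, assuming a YAML scanner configuration in which individual checks can be disabled by their identifier; consult your scanner's configuration reference for the exact key names.

```yaml
# Hypothetical configuration: skip the LLM overreliance check.
checks:
  injection/llm_overreliance:
    skip: true
```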
## Score
- Escape Severity:
## Compliance
- OWASP: API8:2023
- OWASP LLM: LLM09:2023
- PCI DSS: 6.5.1
- GDPR: Article-32
- SOC2: CC6
- PSD2: Article-95
- ISO 27001: A.12.2
- NIST: SP800-53
- FedRAMP: SI-3
## Classification
- CWE: 200
## Score
- CVSS_VECTOR: CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:C/C:L/I:L/A:N
- CVSS_SCORE: 4.7