UnifAI Policies

UnifAI policies are your built-in guardrails for AI security and compliance. Instead of manually tracking complex regulations, UnifAI automates policy enforcement across your AI ecosystem, so you can innovate without risk.

Every policy is mapped to global standards like OWASP and EU AI Act, ensuring your AI systems stay compliant and resilient.

Lineaje Policies

Lineaje provides out-of-the-box policies across four categories:

  • AI Threats and Exploits

  • Data Security and Privacy

  • Identity and Access Control

  • Vulnerability

In UnifAI, enter the prompt View all policies to see your AI Assets.

AI Threats and Exploits

Blocks prompt injection, adversarial inputs, and unsafe model behavior before they reach your AI apps.

chevron-rightDo not allow malicious content via hidden prompts – Criticalhashtag

AI_APP_SEC_001

Violation Summary

Hidden or non-visible prompts detected in the system introduce risks of prompt injection, bypass of safety controls, and untraceable model behavior.

Affected Assets

  • LLM

  • AI Agent

Technical Details

Hidden prompts introduce several risks including:

  • Undetectable prompt injection

  • Unpredictable, unsafe or incorrect output

  • Bypass safety and governance controls

  • Unsafe or inconsistent agent behavior

  • Regulatory and Ethical exposure

Attack Vector: Prompt

Attack Complexity: Low

Privileges Required: None

User Interaction: None

Confidentially Impact: High

Integrity Impact: High

Availability Impact: Low

Framework

  • Nov 18, 2024 - OWASP-LLM: LLM01, LLM02, LLM04, LLM08

  • March 2025 - OWASP-ASI: ASI-01, ASI-04, ASI-07, ASI-09

  • Aug 1, 2024 - EU AI Act: 1.11, 2.12, 3.13, 4.50

References

https://genai.owasp.org/llm-top-10/arrow-up-right

https://genai.owasp.org/initiatives/agentic-security-initiative/arrow-up-right

https://artificialintelligenceact.eu/ai-act-explorer/arrow-up-right

chevron-rightDo not allow malicious content via encoded prompts – Criticalhashtag

AI_APP_SEC_002

Violation Summary

Encoded prompts are instructions hidden inside obfuscated text, Base64, hex, zero-width characters, steganographic patterns, metadata, or structured payloads. They allow attackers or internal actors to bypass oversight, evade filters, or manipulate an AI system without detection.

Affected Assets

  • LLM

  • AI Agent

Technical Details

Hidden prompts introduce several risks including:

  • Invisible prompt injection leading to unauthorized system behavior

  • Safety bypass (toxicity, policy evasion, jailbreaks)

  • Leakage of sensitive data or internal system instructions

  • Corruption of downstream workflows due to manipulated outputs

  • Violations of transparency, record-keeping, and explainability requirements

Attack Vector: Prompt

Attack Complexity: Low

Privileges Required: None

User Interaction: None

Confidentially Impact: High

Integrity Impact: High

Availability Impact: Low

Framework

  • Nov 18, 2024 - OWASP-LLM: LLM01, LLM04, LLM05, LLM08

  • March 2025 - OWASP-ASI: ASI-01, ASI-04, ASI-07, ASI-09

  • Aug 1, 2024 - EU AI Act: 1.11, 2.12, 3.13, 4.50

References

https://genai.owasp.org/llm-top-10/arrow-up-right

https://genai.owasp.org/initiatives/agentic-security-initiative/arrow-up-right

https://artificialintelligenceact.eu/ai-act-explorer/arrow-up-right

chevron-rightUse only LLMs from the organization’s approved list – High/Criticalhashtag

AI_APP_SEC_006

Violation Summary Using an LLM that is not on the organization’s approved list introduces uncontrolled security, privacy, compliance, and operational risks. Unapproved LLMs may have unknown data handling practices, insufficient security controls, unclear training or retention policies, weak contractual protections, or unvetted model behavior. This bypasses governance, procurement, and risk management processes, exposing the organization to data leakage, regulatory violations, vendor lock-in, and unpredictable AI behavior across agentic and automated workflows.

Affected Assets

  • LLM

  • AI Agent

Technical Details

Using an unapproved LLM introduces several risks including:

  • Uncontrolled processing, retention, or reuse of sensitive data and prompts

  • Unknown security posture, access controls, and logging practices

  • Potential training on proprietary or regulated data without consent

  • Incompatibility with organizational guardrails, monitoring, or audit tooling

  • Increased exposure to prompt injection, data leakage, or unsafe outputs

  • Breach of contractual, legal, or regulatory obligations

  • Loss of centralized governance, visibility, and incident response capability

Attack Vector: LLM selection / API usage outside approved platforms Attack Complexity: Low Privileges Required: None User Interaction: None Confidentiality Impact: Critical Integrity Impact: High Availability Impact: Medium (instability or service changes)

Frameworks

Nov 18, 2024 – OWASP-LLM: LLM03, LLM05, LLM08 March 2025 – OWASP-ASI: ASI-04, ASI-05, ASI-09, ASI-12 Aug 1, 2024 – EU AI Act: Articles 11, 12, 13, 50, obligations for GPAI risk management and provider accountability

References

https://genai.owasp.org/llm-top-10/arrow-up-right https://genai.owasp.org/initiatives/agentic-security-initiative/arrow-up-right https://artificialintelligenceact.eu/ai-act-explorer/arrow-up-right

chevron-rightMCP server must validate and sanitize all input – Criticalhashtag

AI_APP_SEC_014

Violation Summary

If an MCP server accepts input from clients, agents, or LLMs without validation or sanitization, it becomes vulnerable to malformed payloads, injection attacks, unauthorized tool invocation, unsafe command execution, and data corruption. Because MCP servers often expose high-privilege operations (file access, API calls, system actions), unvalidated input can be weaponized to manipulate workflows, escalate privileges, or deliver malicious instructions that compromise both the server environment and downstream systems.

Affected Assets

MCP Server

Input Validation

Validation ensures that input conforms to expected structure, type, length, format, and policy constraints before it is processed by the LLM. It answers the question if the input is allowed to be processed.

Examples of Validation

  • Rejecting prompts longer than a defined maximum length

  • Enforcing schema compliance (e.g., JSON with specific fields only)

  • Blocking inputs containing disallowed patterns (e.g., ignore previous instructions, system override)

  • Restricting input sources to authenticated or trusted origins

  • Ensuring prompts match an approved task or intent category

Input Sanitization

Sanitization transforms input to remove, neutralize, or normalize unsafe elements while preserving legitimate intent. Sanitization ensures that the input is made safe before processing.

Examples of Sanitization

  • Normalizing Unicode to remove obfuscation (e.g., leetspeak, homoglyphs)

  • Stripping zero-width or invisible characters

  • Decoding and inspecting encoded content (Base64, hex) before use

  • Escaping or isolating untrusted text so it cannot be interpreted as instructions

  • Removing or redacting sensitive data (PII, secrets)

Technical Details

Not validating or sanitizing MCP server input introduces several risks including:

  • Injection of malicious commands, payloads, or structured data into tools or system functions

  • Execution of unsafe or hallucinated instructions originating from LLM output

  • Unauthorized access or misuse of server-side capabilities and sensitive APIs

  • Corruption of data, resources, or operational workflows through malformed input

  • Increased attack surface for prompt-to-system escalation attacks

  • Loss of governance, auditability, and explainability of server-driven actions

  • Violations of integrity, safety, and regulatory obligations for high-risk functions

Attack Vector: MCP client → server input channel Attack Complexity: Low Privileges Required: None User Interaction: None Confidentiality Impact: High Integrity Impact: Critical Availability Impact: Medium to High (depending on server capabilities)

Timeline

  • Nov 18, 2024 – OWASP-LLM: LLM01 (Prompt/Instruction Injection), LLM04 (Behavior Manipulation), LLM05 (Sensitive Information Disclosure), LLM06 (Hallucination Risks), LLM08 (Transparency & Audit Failures)

  • March 2025 – OWASP-ASI: ASI-01 (Input/Output Integrity), ASI-05 (Safe Handling), ASI-07 (Reliability), ASI-09 (Traceability), ASI-12 (Operational Monitoring)

  • Aug 1, 2024 – EU AI Act: Articles 11 (Documentation), 12 (Record-Keeping), 13 (Transparency), 50 (Transparency Obligations), plus Annex III robustness & safety requirements

References

https://genai.owasp.org/llm-top-10/arrow-up-right https://genai.owasp.org/initiatives/agentic-security-initiative/arrow-up-right https://artificialintelligenceact.eu/ai-act-explorer/arrow-up-right

chevron-rightMCP clients must log all interactions with the MCP server – Critical hashtag

AI_APP_SEC_022

Violation Summary A missing or incomplete logging mechanism between the MCP client and MCP server creates a critical visibility and governance gap. MCP interactions often trigger high-privilege actions (tool execution, data access, workflow modification). Without proper logs, misuse, anomalies, attacks, or unauthorized system changes cannot be detected, investigated, or attributed. This results in opaque AI behavior, broken audit trails, and non-compliance with required traceability and transparency obligations.

Affected Assets

  • MCP Client

  • MCP Server

Technical Details

Failure to log MCP interactions introduces several risks including:

  • Undetectable misuse or abuse of MCP server tools

  • Inability to perform forensic investigation during an incident

  • Loss of accountability for AI-driven actions and decisions

  • Exposure to covert prompt injection or unauthorized system manipulation

  • Violations of traceability, transparency, and record-keeping requirements

  • Difficulty detecting anomalous behavior or lateral movement

  • Corruption of downstream workflows due to hidden actions

Attack Vector: MCP tool invocation / API interaction Attack Complexity: Low Privileges Required: None, when exploited via LLM-driven tool calls. User Interaction: None Confidentiality Impact: High Integrity Impact: High Availability Impact: Low

Framework

Nov 18, 2024 – OWASP-LLM: LLM01, LLM04, LLM05, LLM08 March 2025 – OWASP-ASI: ASI-01, ASI-04, ASI-07, ASI-09 Aug 1, 2024 – EU AI Act: 1.11, 2.12, 3.13, 4.50

References

https://genai.owasp.org/llm-top-10/arrow-up-right https://genai.owasp.org/initiatives/agentic-security-initiative/arrow-up-right https://artificialintelligenceact.eu/ai-act-explorer/arrow-up-right

chevron-rightClient must validate and sanitize any output from a MCP server – Criticalhashtag

AI_APP_SEC_023

Violation Summary

When an MCP client consumes output from an MCP server without validation or sanitization, it exposes the AI system and downstream components to malformed data, malicious payloads, injection attacks, hallucinated instructions, and unsafe tool execution. MCP server output may include structured data, commands, untrusted text, or model-generated content. Without safeguards, unvalidated output can drive unsafe automated actions, corrupt workflows, or leak sensitive information.

Affected Assets

  • AI Agent

  • MCP Client

Input Validation

Validation ensures that input conforms to expected structure, type, length, format, and policy constraints before it is processed by the LLM. It answers the question if the input is allowed to be processed.

Examples of validation

  • Rejecting prompts longer than a defined maximum length

  • Enforcing schema compliance (e.g., JSON with specific fields only)

  • Blocking inputs containing disallowed patterns (e.g., ignore previous instructions, system override)

  • Restricting input sources to authenticated or trusted origins

  • Ensuring prompts match an approved task or intent category

Input Sanitization

Sanitization transforms input to remove, neutralize, or normalize unsafe elements while preserving legitimate intent. Sanitization ensures that the input is made safe before processing.

Examples of sanitization

  • Normalizing Unicode to remove obfuscation (e.g., leetspeak, homoglyphs)

  • Stripping zero-width or invisible characters

  • Decoding and inspecting encoded content (Base64, hex) before use

  • Escaping or isolating untrusted text so it cannot be interpreted as instructions

  • Removing or redacting sensitive data (PII, secrets)

Affected Assets

  • AI Agent

  • MCP Client

Technical Details

Lack of output validation introduces several risks including:

  • Execution of harmful or unintended actions triggered by malformed MCP output

  • Injection of unsafe code, commands, or control sequences into downstream systems

  • Propagation of hallucinated, incorrect, or manipulated data

  • Leakage of sensitive information through unfiltered server responses

  • Corruption of business workflows or agent decision chains

  • Evasion of safety controls due to unmonitored tool responses

  • Violations of auditability, reliability, and compliance requirements

Attack Vector: MCP response / server-generated output Attack Complexity: Low Privileges Required: None User Interaction: None Confidentiality Impact: High Integrity Impact: High Availability Impact: Medium (via cascading workflow corruption)

Timeline

  • Nov 18, 2024 – OWASP-LLM: LLM01, LLM03, LLM04, LLM06, LLM08

  • March 2025 – OWASP-ASI: ASI-01, ASI-05, ASI-07, ASI-09

  • Aug 1, 2024 – EU AI Act: 1.11, 2.12, 3.13, 4.50

References

https://genai.owasp.org/llm-top-10/arrow-up-right https://genai.owasp.org/initiatives/agentic-security-initiative/arrow-up-right https://artificialintelligenceact.eu/ai-act-explorer/arrow-up-right

chevron-rightDo not use LLMs from the organization's disallowed list – Criticalhashtag

AI_APP_SEC_028

Violation Summary Using an LLM that is explicitly on the organization’s block list represents a deliberate bypass of governance, security, and risk controls. Block-listed LLMs are typically prohibited due to known deficiencies such as unsafe data handling, unacceptable training or retention practices, lack of contractual protections, regulatory exposure, weak security posture, or demonstrated unsafe behavior. Their use introduces severe security, privacy, compliance, and reputational risks and undermines centralized AI governance.

Affected Assets

  • LLM

  • AI Agent

Technical Details

Using a block-listed LLM introduces several risks including:

  • Known or previously identified data leakage, retention, or misuse risks

  • Exposure of sensitive, proprietary, or regulated data to untrusted providers

  • Circumvention of organizational security, legal, and compliance controls

  • Lack of auditability, logging, or incident response visibility

  • Increased likelihood of unsafe, biased, or non-compliant model behavior

  • Breach of regulatory, contractual, or internal policy obligations

  • Loss of trust in AI governance and enforcement mechanisms

Attack Vector: Unauthorized LLM selection / direct API or UI usage Attack Complexity: Low Privileges Required: None User Interaction: None Confidentiality Impact: Critical Integrity Impact: High Availability Impact: Medium

Frameworks

  • Nov 18, 2024 – OWASP-LLM: LLM03, LLM05, LLM08

  • March 2025 – OWASP-ASI: ASI-04, ASI-05, ASI-09, ASI-12

  • Aug 1, 2024 – EU AI Act: Articles 11, 12, 13, 50, GPAI risk-management and provider accountability requirements

References

https://genai.owasp.org/llm-top-10/arrow-up-right https://genai.owasp.org/initiatives/agentic-security-initiative/arrow-up-right https://artificialintelligenceact.eu/ai-act-explorer/arrow-up-right

chevron-rightAgent must validate, sanitize LLM output including for presence of eval or any dynamic code execution primitive in LLM output – Criticalhashtag

AI_APP_SEC_029

Affected Assets

AI Agent

Violation Summary

When LLM outputs are consumed without validation or sanitization, the system becomes vulnerable to unsafe instructions, hallucinated commands, malicious payloads, and untrusted code. This risk becomes critical when the LLM output may contain eval, shell commands, SQL statements, or other dynamic execution primitives. If such outputs pass directly into an interpreter, agent tool, or workflow engine, they can lead to arbitrary code execution, data exfiltration, workflow corruption, or full system compromise.

Input Validation

Validation ensures that input conforms to expected structure, type, length, format, and policy constraints before it is processed by the LLM. It answers the question if the input is allowed to be processed.

Examples of Validation

  • Rejecting prompts longer than a defined maximum length

  • Enforcing schema compliance (e.g., JSON with specific fields only)

  • Blocking inputs containing disallowed patterns (e.g., ignore previous instructions, system override)

  • Restricting input sources to authenticated or trusted origins

  • Ensuring prompts match an approved task or intent category

Input Sanitization

Sanitization transforms input to remove, neutralize, or normalize unsafe elements while preserving legitimate intent. Sanitization ensures that the input is made safe before processing.

Examples of Sanitization

  • Normalizing Unicode to remove obfuscation (e.g., leetspeak, homoglyphs)

  • Stripping zero-width or invisible characters

  • Decoding and inspecting encoded content (Base64, hex) before use

  • Escaping or isolating untrusted text so it cannot be interpreted as instructions

  • Removing or redacting sensitive data (PII, secrets)

Affected Assets

AI Agent

Technical Details

Failure to validate LLM output introduces several risks including:

  • Accidental or malicious execution of model-generated code (e.g., eval, exec, Function, subprocess calls)

  • Injection of harmful commands or payloads into tools, agents, or downstream applications

  • Execution of hallucinated instructions that modify resources, corrupt data, or trigger destructive operations

  • Leakage of internal or sensitive information through improperly filtered responses

  • Exploitation of agents that automatically convert LLM output into actions (“AI code injection”)

  • Loss of safety, explainability, reliability, and auditability in automated pipelines

  • Violations of governance, logging, and traceability requirements

Attack Vector: LLM output → downstream interpreter / agent tool Attack Complexity: Low (LLM can be tricked into generating dangerous primitives) Privileges Required: None User Interaction: None (fully autonomous execution paths are most at risk) Confidentiality Impact: High Integrity Impact: Critical Availability Impact: Medium to High

Timeline

  • Nov 18, 2024 – OWASP-LLM: LLM01, LLM03, LLM04, LLM06, LLM08

  • March 2025 – OWASP-ASI: ASI-01, ASI-05, ASI-07, ASI-09, ASI-12

  • Aug 1, 2024 – EU AI Act: 1.11, 2.12, 3.13, 4.50

References

https://genai.owasp.org/llm-top-10/arrow-up-right https://genai.owasp.org/initiatives/agentic-security-initiative/arrow-up-right https://artificialintelligenceact.eu/ai-act-explorer/arrow-up-right

chevron-rightDo not allow malicious content via hidden prompts written in leetspeak – High/Criticalhashtag

AI_APP_SEC_032

Violation Summary Allowing prompts written in leetspeak (e.g., h4x0r, 3v4l, 1nj3ct, byp4ss) or similar obfuscated language enables attackers to evade input validation, safety filters, and policy enforcement mechanisms. Leetspeak transforms malicious intent into visually altered but semantically equivalent text, allowing prompt injection, jailbreak attempts, encoded instructions, and policy bypasses to slip past keyword-based detection and moderation layers. This weakens the integrity and reliability of LLM-driven systems, especially in agentic and autonomous workflows.

Affected Assets

  • LLM

  • AI Agent

Technical Details

Allowing leetspeak introduces several risks including:

  • Bypass of keyword-based safety, moderation, and policy filters

  • Injection of malicious or unsafe instructions disguised as benign input

  • Increased success of encoded or obfuscated prompt attacks

  • Manipulation of agent reasoning or tool invocation logic

  • Reduced auditability and explainability due to obfuscated intent

  • Amplification of downstream risks when leetspeak-generated outputs drive actions

Attack Vector: LLM input prompt (user, agent, or external system) Attack Complexity: Low Privileges Required: None User Interaction: None (especially in automated or agent-driven scenarios) Confidentiality Impact: High Integrity Impact: High to Critical Availability Impact: Low

Reference Frameworks

  • Nov 18, 2024 – OWASP-LLM: LLM01, LLM04, LLM06, LLM08

  • March 2025 – OWASP-ASI: ASI-01, ASI-04, ASI-07, ASI-09), ASI-12

  • Aug 1, 2024 – EU AI Act: Articles 11, 12, 13, 50, plus Annex III robustness and risk-management requirements

References

https://genai.owasp.org/llm-top-10/arrow-up-right https://genai.owasp.org/initiatives/agentic-security-initiative/arrow-up-right https://artificialintelligenceact.eu/ai-act-explorer/arrow-up-right

chevron-rightMCP server must not directly interact with the LLM – Criticalhashtag

AI_APP_SEC_033

Violation Summary When an MCP server directly interacts with the LLM—rather than operating only through an authenticated, validated, policy-enforcing MCP client—it collapses the trust boundary between system capabilities and untrusted model output. This allows the LLM to influence, manipulate, or trigger server-side actions without authorization or validation. Direct LLM-to-server interaction bypasses safety controls, authentication layers, input validation, output filtering, logging standards, and audit requirements. This exposes the environment to unbounded prompt injection, unsafe tool execution, data leakage, and full-system compromise.

Affected Assets

  • LLM

  • MCP Server

Technical Details Allowing an MCP server to directly interact with an LLM introduces several risks including:

  • Execution of unsafe, hallucinated, or malicious LLM-generated instructions on high-privilege server tools

  • Prompt injection attacks gaining direct access to server capabilities

  • Bypassing client-side authentication, authorization, validation, and logging layers

  • Data leakage through uncontrolled LLM requests or responses

  • Inability to enforce least-privilege and zero-trust boundaries between model and system operations

  • Loss of auditability because actions occur without the MCP client as an intermediary

  • Violations of governance, safety, and regulatory obligations for high-risk AI systems

Attack Vector: LLM output → server action path Attack Complexity: Low Privileges Required: None User Interaction: None Confidentiality Impact: High Integrity Impact: Critical Availability Impact: Medium to High

Framework

  • Nov 18, 2024 – OWASP-LLM: LLM01 (Prompt/Instruction Injection), LLM04 (Behavior Manipulation), LLM05 (Data Disclosure), LLM06 (Hallucination Risks), LLM08 (Transparency & Audit Failures)

  • Mar 2025 – OWASP-ASI: ASI-01 (Input/Output Integrity), ASI-04 (Governance), ASI-05 (Safe Handling), ASI-09 (Traceability), ASI-12 (Monitoring)

  • Aug 1, 2024 – EU AI Act: Articles 11 (Documentation), 12 (Record-Keeping), 13 (Transparency), 50 (Transparency Obligations), plus Annex III robustness & safety controls for high-risk systems

References

https://genai.owasp.org/llm-top-10/arrow-up-right https://genai.owasp.org/initiatives/agentic-security-initiative/arrow-up-right https://artificialintelligenceact.eu/ai-act-explorer/arrow-up-right

chevron-rightClear exit or termination criteria must exist for the agent to consider its task complete and stop executing – High/Criticalhashtag

AI_APP_SEC_034

Violation Summary When an AI agent is not given explicit, enforceable exit or termination criteria, it may continue executing indefinitely, escalate actions beyond intended scope, repeatedly invoke tools, consume excessive compute, or enter unsafe operational loops. Lack of defined stopping conditions increases the risk of runaway behavior, unintended system modifications, resource exhaustion, privacy violations, and unbounded interaction with external systems or MCP tools. Agents without termination logic become unpredictable, ungovernable, and potentially harmful.

Affected Assets

AI Agent

Technical Details Missing termination criteria introduces several risks including:

  • Infinite or runaway task execution that triggers unnecessary or harmful actions

  • Repeated tool invocation (MCP or external APIs), leading to data exposure or workflow corruption

  • Accidental escalation of privileges as the agent searches endlessly for ways to complete the task

  • Hallucination-driven decisions due to self-reinforcing reasoning loops

  • Excessive resource consumption or uncontrolled cost

  • Increased attack surface for prompt injection that pushes the agent into unsafe recursive behavior

  • Violations of safety, oversight, and accountability requirements

Attack Vector: Agent reasoning cycle / task execution loop Attack Complexity: Low Privileges Required: None User Interaction: None Confidentiality Impact: Medium to High (depending on tool access) Integrity Impact: High Availability Impact: Medium to High

Framework

  • Nov 18, 2024 – OWASP-LLM: LLM01 (Injection), LLM04 (Behavior Manipulation), LLM06 (Hallucination Risks), LLM08 (Transparency & Audit Failures)

  • Mar 2025 – OWASP-ASI: ASI-01 (Integrity), ASI-04 (Governance), ASI-07 (Reliability), ASI-09 (Traceability), ASI-12 (Monitoring)

  • Aug 1, 2024 – EU AI Act: Articles 11 (Documentation), 12 (Record-Keeping), 13 (Transparency), 50 (Transparency Obligations), Annex III requirements for safe, predictable operation of high-risk systems

References

https://genai.owasp.org/llm-top-10/arrow-up-right https://genai.owasp.org/initiatives/agentic-security-initiative/arrow-up-right https://artificialintelligenceact.eu/ai-act-explorer/arrow-up-right

chevron-rightAgents must log all interactions with the LLM – Criticalhashtag

AI_APP_SEC_035

Violation Summary If agents do not log their interactions with the LLM, including prompts, responses, tool requests, and reasoning triggers, organizations lose visibility into how decisions were made, what data was exchanged, and whether harmful or unauthorized actions occurred. Missing LLM interaction logs break auditability, hinder incident response, obscure the source of incorrect or unsafe outputs, and prevent compliance verification. Lack of logging also enables attackers to exploit the agent–LLM channel without detection.

Affected Assets

  • LLM

  • AI Agent

Technical Details Not logging agent ↔LLM interactions introduces several risks including:

  • Inability to reconstruct how the agent reached a decision or triggered an action

  • Loss of forensic evidence needed for incident response or regulatory review

  • Undetected prompt injection, harmful outputs, or unsafe tool invocations

  • Unmonitored leakage of sensitive data or PII through prompts or responses

  • Difficulty identifying hallucination-driven failures or behavioral drift

  • Loss of traceability required for governance, transparency, and safety assurance

  • Violations of logging, documentation, and accountability requirements

Attack Vector: Agent ↔ LLM communication channel Attack Complexity: Low Privileges Required: None User Interaction: None Confidentiality Impact: High Integrity Impact: High Availability Impact: Low

Framework

  • Nov 18, 2024 – OWASP-LLM: LLM01 (Injection), LLM03 (Data Leakage), LLM04 (Behavior Manipulation), LLM08 (Transparency & Audit Failures)

  • Mar 2025 – OWASP-ASI: ASI-01 (Integrity), ASI-05 (Data Handling), ASI-09 (Traceability), ASI-12 (Monitoring)

  • Aug 1, 2024 – EU AI Act: Articles 11 (Documentation), 12 (Record-Keeping), 13 (Transparency), 50 (Transparency Obligations), with Annex III traceability requirements for high-risk systems

References

https://genai.owasp.org/llm-top-10/arrow-up-right https://genai.owasp.org/initiatives/agentic-security-initiative/arrow-up-right https://artificialintelligenceact.eu/ai-act-explorer/arrow-up-right

chevron-rightThe LLM must validate and sanitize any input before processing – Criticalhashtag

AI_APP_SEC_038

Violation Summary

When a Large Language Model (LLM) accepts input without proper validation and sanitization, it becomes highly susceptible to prompt injection, encoded or hidden instructions, malicious payloads, and adversarial manipulation. Unsanitized inputs—originating from users, agents, tools, MCP servers, or external systems—can override system instructions, bypass guardrails, contaminate reasoning, and trigger unsafe downstream actions. This risk is amplified in agentic and tool-enabled environments where LLM output directly influences real systems.

Affected Assets

LLM

Input Validation

Validation ensures that input conforms to expected structure, type, length, format, and policy constraints before it is processed by the LLM. It answers the question if the input is allowed to be processed.

Examples of Validation

  • Rejecting prompts longer than a defined maximum length

  • Enforcing schema compliance (e.g., JSON with specific fields only)

  • Blocking inputs containing disallowed patterns (e.g., ignore previous instructions, system override)

  • Restricting input sources to authenticated or trusted origins

  • Ensuring prompts match an approved task or intent category

Input Sanitization

Sanitization transforms input to remove, neutralize, or normalize unsafe elements while preserving legitimate intent. Sanitization ensures that the input is made safe before processing.

Examples of Sanitization

  • Normalizing Unicode to remove obfuscation (e.g., leetspeak, homoglyphs)

  • Stripping zero-width or invisible characters

  • Decoding and inspecting encoded content (Base64, hex) before use

  • Escaping or isolating untrusted text so it cannot be interpreted as instructions

  • Removing or redacting sensitive data (PII, secrets)

Affected Assets

LLM

Technical Details

Failure to validate and sanitize LLM input introduces several risks including:

  • Prompt injection that overrides system and developer intent

  • Encoded, obfuscated, or hidden instructions bypassing safety controls

  • Injection of malicious content that manipulates tool usage or agent behavior

  • Leakage of sensitive data caused by adversarial prompt construction

  • Hallucination amplification driven by malformed or hostile inputs

  • Propagation of unsafe or untrusted instructions to downstream systems

  • Loss of transparency, auditability, and policy enforcement across AI workflows

Attack Vector: LLM input channel (user, agent, tool, MCP, external system) Attack Complexity: Low Privileges Required: None User Interaction: None (especially in autonomous or agent-driven flows) Confidentiality Impact: High Integrity Impact: Critical Availability Impact: Medium

Timeline

  • Nov 18, 2024 – OWASP-LLM: LLM01 (Prompt Injection), LLM04 (Model Behavior Manipulation), LLM05 (Sensitive Information Disclosure), LLM08 (Insufficient Transparency & Auditability)

  • March 2025 – OWASP-ASI: ASI-01 (Input Integrity), ASI-04 (Governance), ASI-07 (Reliability), ASI-09 (Traceability), ASI-12 (Operational Monitoring)

  • Aug 1, 2024 – EU AI Act: Articles 11 (Technical Documentation), 12 (Record-Keeping), 13 (Transparency), 50 (Transparency Obligations), plus Annex III robustness, safety, and risk-management requirements

References

https://genai.owasp.org/llm-top-10/arrow-up-right https://genai.owasp.org/initiatives/agentic-security-initiative/arrow-up-right https://artificialintelligenceact.eu/ai-act-explorer/arrow-up-right

chevron-rightSanitize and validate all input to the AI Model – Criticalhashtag

AI_APP_SEC_039

Violation Summary If input sent to an LLM is not validated and sanitized, the system becomes vulnerable to prompt injection, obfuscated or encoded instructions, malformed payloads, and adversarial manipulation. Unchecked inputs originating from users, agents, tools, uploaded files, MCP services, or external systems can override system intent, bypass safety controls, contaminate reasoning, and trigger unsafe downstream actions. This risk is amplified in agentic, tool-enabled, and autonomous workflows where LLM output directly influences real systems.

Affected Assets

  • LLM

  • AI Agent

Technical Details

Failure to validate and sanitize input before sending it to an LLM introduces several risks including:

  • Prompt injection that overrides system and developer intent

  • Encoded, hidden, or obfuscated instructions (Base64, leetspeak, zero-width characters) bypassing guardrails

  • Injection of malicious content that manipulates tool usage or agent behavior

  • Leakage of sensitive data caused by adversarial prompt construction

  • Hallucination amplification driven by malformed or hostile inputs

  • Propagation of unsafe or untrusted instructions to downstream systems

  • Loss of transparency, auditability, and policy enforcement across AI workflows

Attack Vector: Input channel → LLM (user, agent, tool, file ingestion, MCP, external system) Attack Complexity: Low Privileges Required: None User Interaction: None (especially in autonomous or agent-driven flows) Confidentiality Impact: High Integrity Impact: Critical Availability Impact: Medium

Framework

  • OWASP-LLM: LLM01, LLM04, LLM05, LLM08

  • OWASP-ASI: ASI-01 (Input Integrity), ASI-04, ASI-07, ASI-09, ASI-12

  • EU AI Act: Articles 11, 12, 13, 50, plus Annex III robustness, safety, and risk-management requirements

References

https://genai.owasp.org/llm-top-10/ https://genai.owasp.org/initiatives/agentic-security-initiative/ https://artificialintelligenceact.eu/ai-act-explorer/

chevron-rightDo not allow malicious content via prompts included in uploaded files – Criticalhashtag

AI_APP_SEC_040

Violation Summary If uploaded files (documents, PDFs, spreadsheets, images with OCR, code files, logs) are ingested by an LLM or agent without inspection for malicious prompts, attackers can embed hidden, encoded, or context-manipulating instructions that influence model behavior. These “prompt-in-files” attacks allow adversaries to bypass input controls, poison agent reasoning, extract sensitive data, or trigger unauthorized tool actions—often without any visible user prompt.

Affected Assets

AI Agent

Technical Details

Failing to scan uploaded files for malicious prompts introduces several risks including:

  • Hidden prompt injection embedded in document text, comments, metadata, or OCR layers

  • Encoded or obfuscated instructions (Base64, leetspeak, zero-width characters) evading detection Cross-context contamination where file content overrides system or developer instructions

  • Unauthorized tool invocation or workflow manipulation driven by file-based prompts

  • Leakage of sensitive data due to adversarial instructions embedded in files

  • Loss of explainability when behavior is influenced by unseen file content

  • Violations of transparency, auditability, and policy enforcement requirements

Attack Vector: File upload > document ingestion / OCR / parsing pipeline Attack Complexity: Low Privileges Required: None User Interaction: None, especially in automated ingestion or agent workflows. Confidentiality Impact: High Integrity Impact: Critical Availability Impact: Medium

Framework

  • OWASP-LLM: LLM01, LLM04, LLM05, LLM08

  • OWASP-ASI: ASI-01, ASI-04, ASI-07, ASI-09, ASI-12

  • EU AI Act: Articles 11, 12, 13, 50, plus Annex III robustness, safety, and risk-management requirements

References

https://genai.owasp.org/llm-top-10/ https://genai.owasp.org/initiatives/agentic-security-initiative/ https://artificialintelligenceact.eu/ai-act-explorer/

Data Security and Privacy

Protects PII, prevents data leakage, and enforces privacy controls across AI models and agents.

chevron-rightDo not store secrets in code – Criticalhashtag

AI_DAT_SEC_001

Violation Summary Storing secrets, such as API keys, access tokens, service credentials, MCP tokens, encryption keys, or database passwords, directly in code, configuration files, or agent prompt templates introduces an immediate and critical security vulnerability. Hard-coded secrets are easily exposed through source control, logs, error messages, LLM interactions, and dependency analysis. Once leaked, these credentials can be used to impersonate services, manipulate AI agent behavior, exfiltrate data, or compromise entire environments.

Affected Assets

  • LLM

  • AI Agent

  • MCP Server

Technical Details Storing secrets in code introduces several risks including:

  • Unauthorized access to internal systems, APIs, and third-party services

  • Full environment compromise if privileged keys (e.g., root tokens) are exposed

  • Impersonation of agents, MCP clients, or downstream services

  • Lateral movement enabled through leaked credentials

  • Leakage via LLM outputs, agent error messages, or repository scans

  • Irreversible compromise of production systems due to difficult secret rotation

  • Violations of governance, transparency, and credential-handling requirements

Attack Vector: Source code / repository / agent configuration Attack Complexity: Low Privileges Required: None User Interaction: None Confidentiality Impact: Critical Integrity Impact: Critical Availability Impact: Medium

Framework

  • Nov 18, 2024 – OWASP-LLM: LLM01, LLM03, LLM05, LLM08

  • March 2025 – OWASP-ASI: ASI-01, ASI-04, ASI-09, ASI-12

  • Aug 1, 2024 – EU AI Act: 1.11, 2.12, 3.13, 4.50

References

https://genai.owasp.org/llm-top-10/arrow-up-right https://genai.owasp.org/initiatives/agentic-security-initiative/arrow-up-right https://artificialintelligenceact.eu/ai-act-explorer/arrow-up-right

chevron-rightIf PII data must be shared, it must be encrypted – Criticalhashtag

AI_DAT_SEC_009

Violation Summary Transmitting personally identifiable information (PII) without encryption exposes sensitive user data to interception, tampering, unauthorized access, and regulatory non-compliance. Unencrypted PII flowing between AI agents, MCP clients and servers, microservices, or external APIs can be harvested by attackers or internal adversaries through network sniffing, logging systems, or compromised intermediaries. Such exposure creates severe privacy, legal, operational, and reputational risks.

Affected Assets

  • LLM

  • AI Agent

  • MCP Server

Technical Details Transmitting unencrypted PII introduces several risks including:

  • Exposure of sensitive user information through network interception

  • Unauthorized access to identity data, enabling fraud or impersonation

  • Regulatory violations (GDPR, EU AI Act, state privacy laws)

  • Inability to ensure integrity or authenticity of transmitted data

  • Leakage through LLM logs, telemetry, or debugging outputs

  • Lateral movement or privilege escalation through harvested identity data

  • Failure to meet encryption, security, and risk-management obligations

Attack Vector: Network transit / API calls / agent communication Attack Complexity: Low Privileges Required: None User Interaction: None Confidentiality Impact: Critical Integrity Impact: High Availability Impact: Low

Framework

  • Nov 18, 2024 – OWASP-LLM: LLM03 (Data Leakage), LLM05 (Sensitive Information Disclosure), LLM08 (Insufficient Transparency & Auditability)

  • March 2025 – OWASP-ASI: ASI-01 (Input/Output Integrity), ASI-05 (Data Security & Handling), ASI-09 (Audit & Traceability), ASI-12 (Operational Monitoring)

  • Aug 1, 2024 – EU AI Act: Articles 1.11 (Technical Documentation), 2.12 (Record-Keeping), 3.13 (Transparency), 4.50 (General Transparency Obligations), plus GDPR-aligned data protection expectations reflected across the Act

References

https://genai.owasp.org/llm-top-10/arrow-up-right https://genai.owasp.org/initiatives/agentic-security-initiative/arrow-up-right https://artificialintelligenceact.eu/ai-act-explorer/arrow-up-right

chevron-rightDo not log PII – Criticalhashtag

AI_DAT_SEC_010

Violation Summary Logging personally identifiable information (PII) exposes sensitive user data to unauthorized access, replication, and long-term retention in unsecured or low-visibility systems. Log files are frequently accessible to broader engineering, operations, analytics, or third-party tools and often persist indefinitely. Once PII enters logs, it becomes extremely difficult to control, delete, audit, or protect—creating severe privacy, compliance, and security risks across all AI and MCP-enabled environments.

Affected Assets

  • LLM

  • AI Agent

  • MCP Server

Technical Details

Logging PII introduces several risks including:

  • Unauthorized internal access or external compromise of sensitive information

  • Accidental disclosure through debugging tools, telemetry pipelines, or log aggregators

  • Persistent exposure that violates data minimization and retention requirements

  • Inability to satisfy deletion, correction, or subject rights requests

  • Propagation of PII through downstream systems (LLM training data, observability tools, backups)

  • Legal and regulatory violations under GDPR, state privacy laws, and the EU AI Act

  • Loss of trust and reputational damage due to preventable data leakage

Attack Vector: Logging systems / telemetry pipelines / observability tooling Attack Complexity: Low Privileges Required: None User Interaction: None Confidentiality Impact: Critical Integrity Impact: Medium Availability Impact: Low

Framework

  • Nov 18, 2024 - OWASP-LLM: LLM03 (Data Leakage), LLM05 (Sensitive Information Disclosure), LLM08 (Insufficient Transparency & Auditability)

  • March, 2025 - OWASP-ASI: ASI-01 (Integrity), ASI-05 (Data Handling), ASI-09 (Traceability), ASI-12 (Monitoring)

  • Aug, 1, 2024 - EU AI Act: Articles 11 (Documentation), 12 (Record-Keeping), 13 (Transparency), 50 (Transparency Obligations), GDPR-aligned data-minimization principles

References

https://genai.owasp.org/llm-top-10/arrow-up-right https://genai.owasp.org/initiatives/agentic-security-initiative/arrow-up-right https://artificialintelligenceact.eu/ai-act-explorer/arrow-up-right

chevron-rightDo not send PII to LLMs – Criticalhashtag

AI_DAT_SEC_011

Violation Summary Sending personally identifiable information (PII) to an LLM exposes sensitive data to uncontrolled processing, persistence, training retention, unauthorized internal access, and unintended disclosure. LLMs are not guaranteed to handle PII according to data-minimization or privacy-by-design principles, and model outputs may inadvertently reveal, transform, or propagate sensitive information. This creates severe privacy, regulatory, and security risks across all AI-driven workflows.

Affected Assets

AI Agent

Technical Details Sending PII to an LLM introduces several risks including:

  • Leakage of sensitive user information through outputs or indirect inference

  • Inclusion of PII in model logs, telemetry, or monitoring systems

  • Potential model retention or memorization of PII, enabling future extraction

  • Non-compliance with privacy regulations due to uncontrolled third-party processing

  • Exposure through prompt injection attacks that pull stored or inferred PII

  • Inability to enforce deletion, consent, or data subject rights

  • Violations of transparency, purpose limitation, and privacy-by-design obligations

Attack Vector: LLM input channel Attack Complexity: Low Privileges Required: None User Interaction: None Confidentiality Impact: Critical Integrity Impact: Medium Availability Impact: Low

Framework

  • Nov 18, 2024 – OWASP-LLM: LLM03 (Data Leakage), LLM05 (Sensitive Information Disclosure), LLM08 (Insufficient Transparency & Auditability)

  • Mar 2025 – OWASP-ASI: ASI-01 (Integrity), ASI-05 (Data Handling), ASI-09 (Traceability), ASI-12 (Monitoring)

  • Aug 1, 2024 – EU AI Act: Articles 11 (Documentation), 12 (Record-Keeping), 13 (Transparency), 50 (Transparency Obligations), plus GDPR-aligned data-minimization and purpose-limitation requirements

References

https://genai.owasp.org/llm-top-10/arrow-up-right https://genai.owasp.org/initiatives/agentic-security-initiative/arrow-up-right https://artificialintelligenceact.eu/ai-act-explorer/arrow-up-right

chevron-rightMask PII on user interfaces – Criticalhashtag

AI_DAT_SEC_012

Violation Summary Displaying unmasked personally identifiable information (PII) on user interfaces exposes sensitive data to unauthorized viewing, shoulder surfing, screen sharing leaks, and over-privileged internal access. Any UI that renders full PII—names, addresses, SSNs, phone numbers, emails, financial data, or identifiers—creates a high risk of accidental disclosure and non-compliance. Unmasked PII can also be captured in screenshots, monitoring tools, session replay systems, or logs, further amplifying exposure.

Affected Assets

  • LLM

  • AI Agent

Technical Details Not masking PII on UI introduces several risks including:

  • Unauthorized access or accidental exposure of sensitive identity information

  • Violation of least-privilege and data-minimization principles

  • Increased likelihood of data leakage via screenshots, video recordings, demos, or shared sessions

  • Compromise through malicious insiders or overexposed customer support tools

  • Replication of PII into frontend logs, browser telemetry, or third-party analytics

  • Regulatory violations related to privacy, transparency, and secure data handling

  • Loss of user trust and potential legal liability

Attack Vector: User interface display layer Attack Complexity: Low Privileges Required: None (visual exposure requires only observation) User Interaction: None Confidentiality Impact: Critical Integrity Impact: Low Availability Impact: Low

Framework

  • Nov 18, 2024 – OWASP-LLM: LLM03 (Data Leakage), LLM05 (Sensitive Information Disclosure), LLM08 (Insufficient Transparency & Auditability)

  • March 2025 – OWASP-ASI: ASI-01 (Integrity), ASI-05 (Data Handling), ASI-09 (Traceability), ASI-12 (Monitoring)

  • Aug 1, 2024 – EU AI Act: Articles 11 (Documentation), 12 (Record-Keeping), 13 (Transparency), 50 (Transparency Obligations), GDPR-aligned principles of data minimization and privacy-by-design

References

https://genai.owasp.org/llm-top-10/arrow-up-right https://genai.owasp.org/initiatives/agentic-security-initiative/arrow-up-right https://artificialintelligenceact.eu/ai-act-explorer/arrow-up-right

chevron-rightRedact PII from uploaded files – Criticalhashtag

AI_DAT_SEC_023

Violation Summary If uploaded files (documents, PDFs, spreadsheets, images with OCR, logs, archives) are ingested without redacting personally identifiable information (PII), sensitive data can be unintentionally exposed, propagated, or retained across AI systems. Unredacted PII may be processed by LLMs, logged, cached, embedded in prompts, or transmitted to external services—creating severe privacy, regulatory, and security risks that are difficult to detect and remediate after ingestion.

Affected Assets

AI Agent

Technical Details

Failing to redact PII from uploaded files introduces several risks including:

  • Exposure of sensitive personal data through LLM processing, outputs, or logs

  • Propagation of PII into prompts, embeddings, vector stores, and downstream systems

  • Accidental disclosure via summaries, citations, or extracted insights

  • Inability to honor data minimization, retention limits, or subject rights requests

  • Increased blast radius when files are shared across agents or external tools

  • Elevated risk of data leakage through prompt injection or model inference

  • Violations of privacy-by-design, transparency, and record-keeping requirements

Attack Vector: File upload → parsing / OCR / document ingestion pipeline Attack Complexity: Low Privileges Required: None User Interaction: None, especially in automated ingestion or agent workflows. Confidentiality Impact: Critical Integrity Impact: Medium Availability Impact: Low

Frameworks

  • OWASP-LLM: LLM03, LLM05, LLM08

  • OWASP-ASI: ASI-01, ASI-05, ASI-09, ASI-12

  • EU AI Act: Articles 11, 12, 13, plus GDPR-aligned data-minimization and privacy-by-design principles embedded across the Act

References

https://genai.owasp.org/llm-top-10/ https://genai.owasp.org/initiatives/agentic-security-initiative/ https://artificialintelligenceact.eu/ai-act-explorer/

chevron-rightUploaded files must not contain PII (Singapore) – Criticalhashtag

AI_DAT_SEC_024

Violation Summary This applies to the information that is considered PII in Singapore. If files uploaded to an AI agent are ingested without redacting PII, sensitive data can be unintentionally exposed, propagated, retained, or disclosed through agent reasoning, LLM prompts, logs, embeddings, or downstream tool calls. Because AI agents often summarize, transform, store, and share file contents across systems, unredacted PII significantly amplifies privacy, compliance, and security risks and makes post-incident remediation extremely difficult.

Affected Assets

AI Agent

Technical Details

Failing to redact PII from files uploaded to AI agents introduces several risks including:

  • Exposure of sensitive personal data through agent outputs, summaries, or citations

  • Propagation of PII into prompts, vector databases, caches, logs, and telemetry

  • Uncontrolled sharing of PII with external LLM providers or third-party tools

  • Inability to comply with data minimization, retention limits, or deletion requests

  • Increased risk of data leakage via prompt injection or inference attacks

  • Broader blast radius when agents reuse or redistribute file content

  • Violations of privacy-by-design, transparency, and record-keeping requirements

Attack Vector: File upload → agent ingestion / parsing Attack Complexity: Low Privileges Required: None User Interaction: None (especially in automated or agent-driven workflows) Confidentiality Impact: Critical Integrity Impact: Medium Availability Impact: Low

Frameworks

  • OWASP LLM: LLM03, LLM05, LLM08

  • OWASP ASI: ASI-01, ASI-05, ASI-09, ASI-12

  • Singapore PDPA: Section 11, 13, 18, 24

References

https://genai.owasp.org/llm-top-10/arrow-up-right https://genai.owasp.org/initiatives/agentic-security-initiative/arrow-up-right https://artificialintelligenceact.eu/ai-act-explorer/arrow-up-right

chevron-rightNo file should contain any PII – Criticalhashtag

AI_DAT_SEC_025

Violation Summary If files contain unredacted personally identifiable information (PII), there is risk of sensitive data access to unrestricted access, misuse, and downstream propagation. This increases the likelihood of data leakage, unauthorized scraping, AI training contamination, and regulatory violations. Once exposed, PII can be copied, indexed, cached, or redistributed beyond the organization’s control.

Technical Details

Failing to redact PII from publicly accessible files introduces several risks including:

  • Unrestricted access to sensitive personal data by internal users, contractors, or the public

  • Mass data harvesting, scraping, or indexing by automated tools and AI systems

  • Propagation of PII into LLM prompts, embeddings, search indexes, and external datasets

  • Inability to enforce consent, purpose limitation, or access controls

  • Permanent exposure due to caching, backups, screenshots, or mirrors

  • Increased insider threat and accidental disclosure risk

  • Severe regulatory, legal, and reputational impact

Attack Vector: Public file access / shared repositories / collaboration platforms Attack Complexity: Low Privileges Required: None User Interaction: None Confidentiality Impact: Critical Integrity Impact: Medium Availability Impact: Low

Frameworks

  • OWASP LLM: LLM03, LLM05, LLM08

  • OWASP ASI: ASI-01, ASI-05, ASI-09, ASI-12

  • EU AI Act: Article 11, Article 12, Article 13, Article 50, GDPR-aligned data minimization and privacy-by-design principles

References

https://genai.owasp.org/llm-top-10/arrow-up-right https://genai.owasp.org/initiatives/agentic-security-initiative/arrow-up-right https://artificialintelligenceact.eu/ai-act-explorer/arrow-up-right

Identity and Access Control

Enforces authentication and least-privilege controls between AI agents, MCP servers, and endpoints.

chevron-rightMCP client must authenticate MCP server – Criticalhashtag

AI_IAC_002

Violation Summary If an MCP client does not authenticate the MCP server, it cannot verify the identity, legitimacy, or trustworthiness of the system providing tool responses, commands, or data. This creates an opportunity for attackers to impersonate the MCP server, intercept or modify traffic, inject malicious tool responses, or deliver falsified data that influences downstream agent reasoning. Without authentication, the MCP trust boundary collapses, enabling man-in-the-middle attacks, data manipulation, workflow corruption, and full compromise of AI-driven operations.

Affected Assets

  • MCP Server

  • MCP Client

  • AI Agent

Technical Details Not authenticating the MCP server introduces several risks including:

  • Server impersonation leading to injection of malicious or misleading responses

  • Man-in-the-middle interception and modification of MCP traffic

  • Unauthorized access to sensitive MCP capabilities, tools, and agent operations

  • Corruption of workflows through falsified output or manipulated data

  • Leakage of PII or sensitive context exchanged with the illegitimate server

  • Loss of integrity, trust, and accountability in MCP-driven decisions

  • Violations of transparency, security, and traceability requirements

Attack Vector: Network/API communication between MCP client and server Attack Complexity: Low Privileges Required: None User Interaction: None Confidentiality Impact: High Integrity Impact: Critical Availability Impact: Medium

Framework

  • Nov 18, 2024 – OWASP-LLM: LLM01 (Prompt/Instruction Injection), LLM04 (Model Behavior Manipulation), LLM05 (Sensitive Information Disclosure), LLM08 (Insufficient Transparency & Auditability)

  • March 2025 – OWASP-ASI: ASI-01 (Integrity), ASI-04 (Governance), ASI-09 (Traceability), ASI-12 (Monitoring)

  • Aug 1, 2024 – EU AI Act: Articles 11 (Documentation), 12 (Record-Keeping), 13 (Transparency), 50 (Transparency Obligations), plus robustness and security obligations for high-risk systems

References

https://genai.owasp.org/llm-top-10/arrow-up-right https://genai.owasp.org/initiatives/agentic-security-initiative/arrow-up-right https://artificialintelligenceact.eu/ai-act-explorer/arrow-up-right

chevron-rightMCP server must authenticate all clients – Criticalhashtag

AI_IAC_006

Violation Summary If an MCP server does not authenticate the client making requests, any unauthorized entity—including compromised agents, external attackers, or untrusted processes—can impersonate a legitimate client. This allows attackers to invoke privileged tools, access sensitive data, manipulate workflows, or trigger system actions without detection. Lack of client authentication effectively removes all trust boundaries, enabling full compromise of server-side capabilities and AI-driven operations.

Affected Assets

  • LLM

  • AI Agent

Technical Details Not authenticating the MCP client introduces several risks including:

  • Unauthorized invocation of high-privilege tools or system actions

  • Full impersonation of trusted agents, enabling malicious or deceptive requests

  • Data exposure through unrestricted access to server responses, APIs, or internal systems

  • Manipulation of downstream workflows via falsified or maliciously crafted requests

  • Escalation of privilege or lateral movement across connected systems

  • Loss of accountability, traceability, and auditability for all client-driven actions

  • Violations of integrity, transparency, and security obligations for regulated AI systems

Attack Vector: Client → MCP server request path Attack Complexity: Low Privileges Required: None User Interaction: None Confidentiality Impact: High Integrity Impact: Critical Availability Impact: Medium

Framework (Compressed)

Nov 18, 2024 – OWASP-LLM: LLM01 (Instruction Injection), LLM04 (Behavior Manipulation), LLM05 (Sensitive Data Exposure), LLM08 (Transparency & Auditability Failures) Mar 2025 – OWASP-ASI: ASI-01 (Integrity), ASI-04 (Governance), ASI-09 (Traceability), ASI-12 (Monitoring) Aug 1, 2024 – EU AI Act: Articles 11 (Documentation), 12 (Record-Keeping), 13 (Transparency), 50 (Transparency Obligations), plus Annex III robustness & security requirements

References

https://genai.owasp.org/llm-top-10/arrow-up-right https://genai.owasp.org/initiatives/agentic-security-initiative/arrow-up-right https://artificialintelligenceact.eu/ai-act-explorer/arrow-up-right

chevron-rightInter-agent communication must be authenticated – Criticalhashtag

AI_IAC_007

Violation Summary If agents communicate without authentication, any unauthorized party—including rogue agents, compromised services, or external attackers—can impersonate a legitimate agent and issue commands, request data, or alter system behavior. Non-authenticated inter-agent communication destroys trust boundaries between autonomous components and enables impersonation, privilege escalation, data leakage, workflow manipulation, and full compromise of multi-agent systems. Without identity guarantees, agent-to-agent messaging becomes a high-risk attack surface.

Affected Assets

AI Agent

Technical Details Missing authentication between agents introduces several risks including:

  • Unauthorized agents impersonating trusted components to issue commands

  • Manipulation of agent workflows or decision chains through falsified messages

  • Leakage of sensitive data exchanged during inter-agent coordination

  • Injection of malicious instructions into distributed reasoning processes

  • Loss of accountability and inability to attribute harmful actions

  • Increased lateral movement risk across agent networks

  • Violations of integrity, trust, and regulatory controls for autonomous systems

Attack Vector: Inter-agent message channel / network communication Attack Complexity: Low Privileges Required: None User Interaction: None Confidentiality Impact: High Integrity Impact: Critical Availability Impact: Medium

Framework

  • Nov 18, 2024 – OWASP-LLM: LLM01 (Instruction Injection), LLM04 (Behavior Manipulation), LLM05 (Sensitive Data Disclosure), LLM08 (Transparency & Audit Failures)

  • March 2025 – OWASP-ASI: ASI-01 (Integrity), ASI-04 (Governance), ASI-07 (Reliability), ASI-09 (Traceability), ASI-12 (Monitoring)

  • Aug 1, 2024 – EU AI Act: Articles 11 (Documentation), 12 (Record-Keeping), 13 (Transparency), 50 (Transparency Obligations), plus Annex III security & robustness expectations for high-risk systems

References

https://genai.owasp.org/llm-top-10/arrow-up-right https://genai.owasp.org/initiatives/agentic-security-initiative/arrow-up-right https://artificialintelligenceact.eu/ai-act-explorer/arrow-up-right

chevron-rightAgents must not hold excessive external system credentials – High/Criticalhashtag

AI_IAC_008

Violation Summary When agents are configured with credentials to access more than three external systems, the blast radius of a compromise dramatically increases. Each additional credential expands the agent’s privilege footprint and creates new pathways for lateral movement, data exfiltration, unauthorized actions, and multi-system compromise. Over-privileged agents become single points of systemic failure if the agent is hijacked, attacked, misconfigured, or manipulated by an LLM prompt, all connected external systems are at risk simultaneously.

Affected Assets

AI Agent

Technical Details Allowing an agent to hold multiple (3+) external system credentials introduces several risks including:

  • Large blast radius: compromising the agent compromises all connected systems

  • Increased likelihood of credential leakage through logs, LLM outputs, prompts, or tool interactions

  • Prompt injection enabling unauthorized use of high-privilege multi-system access

  • Lateral movement across different platforms (e.g., Jira → GitHub → AWS → Snowflake)

  • Violation of least-privilege and separation-of-duties principles

  • Difficulty revoking or rotating credentials in incident response

  • Loss of governance and traceability when many systems are accessed through one agent identity

Attack Vector: Agent credential store / agent-initiated external API calls Attack Complexity: Low Privileges Required: None User Interaction: None Confidentiality Impact: Critical Integrity Impact: Critical Availability Impact: Medium to High

Framework

  • Nov 18, 2024 – OWASP-LLM: LLM01 (Injection), LLM04 (Behavior Manipulation), LLM05 (Sensitive Data Disclosure), LLM08 (Transparency & Audit Failures)

  • Mar 2025 – OWASP-ASI: ASI-01 (Integrity), ASI-04 (Governance), ASI-05 (Safe Handling), ASI-09 (Traceability), ASI-12 (Monitoring)

  • Aug 1, 2024 – EU AI Act: Articles 11 (Documentation), 12 (Record-Keeping), 13 (Transparency), 50 (Transparency Obligations), Annex III controls emphasizing least privilege, system robustness, and secure integration

References

https://genai.owasp.org/llm-top-10/arrow-up-right https://genai.owasp.org/initiatives/agentic-security-initiative/arrow-up-right https://artificialintelligenceact.eu/ai-act-explorer/arrow-up-right

chevron-rightLLM endpoints must require authentication – Criticalhashtag

AI_IAC_009

Violation Summary If an LLM endpoint allows unauthenticated access, any external party can invoke the model without identity verification, usage controls, or accountability. This exposes the LLM to abuse, data leakage, prompt injection, denial-of-service, and unauthorized use of compute resources. In agentic and tool-enabled environments, unauthenticated access can also enable attackers to manipulate downstream workflows, trigger unsafe actions, or extract sensitive information without detection.

Affected Assets

LLM

Technical Details

Allowing unauthenticated access to an LLM endpoint introduces several risks including:

  • Unauthorized use of the model for malicious or abusive purposes

  • Prompt injection and adversarial inputs from unknown or untrusted actors

  • Leakage of sensitive context, system prompts, or embedded data

  • Abuse of compute resources leading to cost overruns or service degradation

  • Inability to enforce rate limits, quotas, or usage policies per identity

  • Loss of accountability, auditability, and forensic traceability

  • Circumvention of governance, approval, and access-control processes

Attack Vector: Public or unauthenticated API endpoint Attack Complexity: Low Privileges Required: None User Interaction: None Confidentiality Impact: High Integrity Impact: High to Critical Availability Impact: High (due to abuse or denial-of-service)

Frameworks

Nov 18, 2024 – OWASP-LLM: LLM0, LLM03 LLM04, LLM08 Mar 2025 – OWASP-ASI: ASI-01, ASI-04, ASI-09, ASI-12 Aug 1, 2024 – EU AI Act: Articles 11, 12, 13, 50 and obligations for secure access control and robustness in high-risk and GPAI systems

References

https://genai.owasp.org/llm-top-10/arrow-up-right https://genai.owasp.org/initiatives/agentic-security-initiative/arrow-up-right https://artificialintelligenceact.eu/ai-act-explorer/arrow-up-right

Vulnerability

Continuously detects and remediates critical software weaknesses inside AI assets.

chevron-rightDo not allow dependencies with critical or high severity vulnerabilities – Criticalhashtag

AI_VULN_SEC_001

Violation Summary Allowing AI agents to depend on libraries or packages with known critical or high-severity vulnerabilities introduces a direct and exploitable attack surface into the agent runtime. AI agents typically operate with elevated privileges, access sensitive data, invoke tools, and interact with external systems. Vulnerable dependencies can be exploited to execute arbitrary code, escalate privileges, leak secrets, poison agent behavior, or compromise downstream systems—often without direct interaction with the LLM itself.

Affected Assets

  • AI Agent

  • MCP Server

  • LLM

Technical Details

Using dependencies with critical or high vulnerabilities introduces several risks including:

  • Remote code execution or arbitrary command execution within the agent environment

  • Credential theft, token leakage, or exposure of secrets used by the agent

  • Supply-chain attacks where malicious code is introduced via compromised packages

  • Manipulation or poisoning of agent logic, tool invocation, or decision flows

  • Lateral movement across systems accessed by the agent

  • Persistence mechanisms established through compromised libraries

  • Inability to trust agent outputs or actions due to compromised runtime integrity

  • Violations of secure development, patch management, and risk-management obligations

Attack Vector: Vulnerable third-party dependency / supply-chain compromise Attack Complexity: Low Privileges Required: None (exploits often execute with agent privileges) User Interaction: None Confidentiality Impact: Critical Integrity Impact: Critical Availability Impact: Medium to High

Frameworks

  • OWASP LLM: LLM05, LLM08

  • OWASP ASI: ASI-05

  • NIST SSDF: Practice RV.1

References

https://genai.owasp.org/llm-top-10/arrow-up-right https://genai.owasp.org/initiatives/agentic-security-initiative/arrow-up-right https://csrc.nist.gov/pubs/sp/800/218/final

chevron-rightDo not allow critical or high vulnerabilities in the code – Criticalhashtag

AI_VULN_SEC_002

Violation Summary The presence of vulnerabilities in application or agent code introduces direct security risks that can be exploited to compromise confidentiality, integrity, and availability. Vulnerable code paths—such as injection flaws, insecure deserialization, broken authentication, improper authorization, or unsafe file handling—can be abused by attackers to execute arbitrary code, access sensitive data, manipulate AI behavior, or disrupt operations. In AI and agent-based systems, these vulnerabilities are especially dangerous because compromised code can influence autonomous decisions and propagate impact across multiple systems.

Affected Assets

  • AI Agent

  • MCP Server

  • LLM

Technical Details

Having vulnerabilities in code introduces several risks including:

  • Remote code execution or command injection through exploitable code paths

  • Unauthorized access to sensitive data, credentials, or internal APIs

  • Manipulation of AI agent logic, reasoning flows, or tool invocation

  • Privilege escalation or bypass of authorization controls

  • Lateral movement across integrated systems and services

  • Persistence mechanisms established through exploited vulnerabilities

  • Loss of trust in application outputs and automated decisions

  • Violations of secure development lifecycle and regulatory requirements

Attack Vector: Vulnerable application or agent code Attack Complexity: Low to Medium (depending on vulnerability type) Privileges Required: None (for many common vulnerabilities) User Interaction: None or minimal Confidentiality Impact: Critical Integrity Impact: Critical Availability Impact: Medium to High

Frameworks

  • OWASP LLM: LLM05, LLM08

  • OWASP ASI: ASI-05

  • NIST SSDF: Practice RV.2

References

https://genai.owasp.org/llm-top-10/arrow-up-right https://genai.owasp.org/initiatives/agentic-security-initiative/arrow-up-right https://csrc.nist.gov/pubs/sp/800/218/final

Last updated