diamond-exclamationThe Single-Click Microsoft Copilot Attack

Published: Thursday, March 18th, 2026

Overview

Reprompt is a novel attack that allows an adversary to bypass built-in AI safeguards and silently exfiltrate user data with a single click on a legitimate link. Once executed, the attacker can maintain control of the victim’s Copilot session and execute follow-on instructions without further interaction.

  • Invisible compromise: A threat actor requires only a single click on a crafted Microsoft Copilot link to initiate the exploit.

  • Safety bypass: The attack circumvents Copilot’s built-in guardrails, enabling actions not intended by the user.

  • Stealthy data exfiltration: Follow-up instructions originate from the attacker’s server post-initial click, making detection difficult with client-side tools.

  • Broad scope of access: Attackers can query sensitive information such as file summaries, personal details, or user behavior.

circle-exclamation

How Did It Happen?

Actions
How the Attack Happened?

Action 1

The attacker crafts a legitimate-looking Copilot link containing a hidden prompt injected via a "q" URL parameter.

Action 2

The victim clicks the link, triggering Copilot to automatically execute the injected prompt without additional user input.

Action 3

Copilot interprets the prompt as user-authorized and allows it to run past initial safety checks.

Action 4

The prompt instructs Copilot to perform a double request, bypassing safeguards applied only to the first execution.

Action 5

Copilot fetches instructions from an attacker-controlled server as part of the second request.

Action 6

The attacker’s server dynamically issues follow-on prompts based on prior Copilot responses.

Action 7

Copilot queries and summarizes sensitive user data within the active session.

Action 8

The extracted data is incrementally exfiltrated to the attacker without visible prompts or alerts.

How Reprompt Works

Reprompt exploits default AI assistant behaviors through three core techniques:

  1. Parameter-to-Prompt Injection (P2P)

    • Utilizes the “q” URL parameter to inject prompts directly via the link.

    • When Copilot loads, it executes the injected instruction as if entered by the user.

    • This vector requires no plugins and no explicit user interaction beyond the click.

  2. Double-Request Method

    • Safeguards apply only to the initial AI request.

    • The attacker instructs Copilot to repeat actions twice, enabling sensitive operations (like URL fetches) on the second request.

    • Circumvents safety filters designed to block direct data leaks.

  3. Chain-Request Technique

    • After initiating the attack, the attacker’s server sends dynamic instructions based on previous responses.

    • This creates an ongoing back-and-forth communication loop that continuously exfiltrates sensitive information.

    • The real intent is obscured from defenders because subsequent commands never appear in the original prompt.

Unique Attributes vs. Other AI Vulnerabilities

  • No user prompts required: Unlike prompt injection or jailbreak techniques, Reprompt doesn’t depend on user-typed instructions.

  • Stealthy & scalable: Extracted data can feed follow-on requests for deeper access without detection.

  • Guardrail blind spots: Existing safety mechanisms only inspect initial requests, not chained server-driven flows.

Threat Impact

If exploited successfully:

  • Sensitive corporate or personal data exfiltrates silently.

  • Traditional monitoring may not detect the breach.

  • User sessions remain compromised even after closing AI tools.

  • Attackers can iteratively probe for more information based on response context.

Mitigation and Prevention

For AI Vendors

  • Treat all external input as untrusted. Don’t rely on URL parameters or deep-linked prompts without strict validation.

  • Extend safeguards across entire interaction chains. Ensure security controls cover all request cycles, not just the initial one.

  • Adopt least-privilege models. Assume AI assistants operate with significant access; enforce strict access controls.

For Users (especially personal Copilot users)

  • Be cautious with AI tool links. Only click links from verified sources.

  • Monitor unusual AI behavior. Stop sessions that request sensitive data unexpectedly.

  • Review pre-filled prompts carefully. Inspect any automatically populated prompt before execution.

Indicators of Compromise (IoCs)

Potential signs Reprompt may have been triggered include:

  • Unexpected AI queries for personal or corporate data.

  • AI interactions continuing in the background after the tool’s UI is closed.

  • Unusual outbound connections from AI services to unrecognized domains.

circle-info

Specific IoCs may vary by environment and detection tooling.

Industry Context

Resources

Last updated