The Problem of Passing Context Between Agents
← Back to blog

AI Security

The Problem of Passing Context Between Agents (And Why It's So Dangerous)

Why naive context sharing breaks multi-agent systems — and why securing A2A must come first.

Paulina XuDecember 1, 202515 min
SecurityMulti-Agent SystemsAI Safety

Multi-agent architectures are rapidly becoming the default pattern for building capable AI systems. Agents now collaborate, delegate, critique, plan, retrieve, and execute across complex workflows. But beneath this excitement lies a structural weakness few practitioners fully appreciate:

Passing context between agents may be the single most dangerous design choice in modern AI systems — because it silently carries instructions, data, and state across boundaries that were never meant to be crossed.

Research across prompt injection, RAG safety, autonomy, and multi-agent systems reveals a consistent theme: naive context sharing creates an attack surface that is far larger and far harder to secure than anything seen in single-agent environments.1 2 3 4

This problem must be solved before A2A (agent-to-agent) interaction can be deployed safely inside enterprises.

What "Context Passing" Really Means (It's Not Just Chat History)

In multi-agent systems, "context" often includes much more than conversation transcripts. Modern frameworks routinely pass rich, structured state between agents — including:4 1

  • System and task prompts (tool schemas, role definitions, constraints).4
  • Retrieved documents from RAG or search tools.
  • Plans, chain-of-thought reasoning, or sub-goals.4
  • Memory references allowing agents to read/write shared data stores.1
  • Tool results, API responses, logs, error traces — even credentials.7 8

Each of these channels can silently carry:

  • sensitive data
  • hidden instructions
  • policy-breaking content
  • malicious prompts
  • attack payloads

Worse: no single entity (agent, orchestrator, or developer) has full visibility into what's being shared across the system.3 1

In other words: the moment agents start exchanging context, you've created a distributed trust network with no shared notion of provenance, authorization, or data classification.

How Prompt Injection Evolves into "Prompt Infection" in Multi-Agent Systems

Classic prompt injection research distinguishes:

  • Direct injection — attacker talks to the model.
  • Indirect injection — malicious instructions hidden in retrieved or tool-generated data.

But multi-agent workflows introduce a third, far more dangerous category:

Prompt infection: malicious instructions that replicate across agents like a virus.9 10 11

Lehmann et al. formally document how prompt infections spread across LLM agents by embedding malicious instructions into intermediate messages, which downstream agents treat as trusted system-level guidance.11 2 A single compromised agent can:

  • Plant hidden instructions in responses sent to other agents.
  • Trick other agents into performing actions they do not have permissions for.
  • Persist malicious prompts into shared memory structures.11 2
  • Hijack tool-using agents to exfiltrate data or escalate access.

Because agents pass context wholesale — full plans, full transcripts, full RAG outputs — an injected instruction can "hitchhike" through the system unnoticed.11 3

This is exponentially harder to detect than single-agent prompt injection.

RAG Poisoning + Multi-Agent Systems = Hidden Cross-Agent Attack Paths

RAG (Retrieval-Augmented Generation) amplifies the dangers of context passing because retrieved text is often treated as trusted evidence, not as an instruction channel.

Research shows attackers can poison RAG pipelines with extremely small insertion rates:

  • PoisonedRAG studies show that poisoning as low as 10-4 of the corpus can reliably alter outputs in black-box settings.6 14
  • Backdoored retrievers can ensure specific queries always surface malicious documents.15 5
  • RAG threat models highlight cross-agent issues such as corpus poisoning, document leakage, and retriever manipulation.13 12

In multi-agent systems, this becomes even more dangerous:

One agent retrieves a poisoned document; another agent executes the malicious instruction inside it.3 12

Attribution becomes difficult because the agent that retrieves is not the agent that acts. This enables attacks where:

  • Retrieval agent → passes poisoned snippet
  • Planning agent → interprets it as a system instruction
  • Tool agent → executes a harmful action

No single agent sees the full chain.

Tool Use + Context Passing = Stealth Exfiltration

Toolformer-style architectures blur the line between text generation and action execution.16 17 Once agents can call APIs, web search, or browsing tools, context-based attacks escalate from "bad text" to real data movement.

Recent work shows that attackers can exploit context sharing to orchestrate multi-step exfiltration flows:8 7 4

  1. Agent A (internal) summarizes sensitive documents or knowledge.
  2. Malicious prompt embedded upstream tells Agent A to structure that summary a certain way.
  3. Agent B (with HTTP or search tools) receives that structured text and unknowingly embeds it into outgoing URLs or queries.
  4. The attacker-controlled endpoint receives the internal data.

Because each agent only sees its local state, and logs are fragmented:7 3

  • No single policy sees the entire leak.
  • No local rule is violated.
  • No single agent "looks malicious."

This kind of compositional attack is impossible in single-agent systems — it only emerges when context hops across multiple specialized agents.

Cross-Domain Context Bypass: When Small Pieces Become Sensitive in Combination

A major risk documented in cross-domain multi-agent systems research is context bypass — where individually harmless pieces of data combine into a policy violation.3 1

For example:

  • Agent A inside Enterprise X produces aggregated payroll numbers.
  • Agent B in Partner Organization Y receives partial breakdowns.
  • Combined context reconstructs individual salaries — violating both organizations' policies.3

Because each agent sees only its local task:

  • Each local step appears compliant.
  • Global policy is violated in the aggregate.
  • No single actor is accountable.

This makes naive context passing incompatible with enterprise data governance.

Autonomy + Multi-Agent Coordination Makes This Worse

Security surveys emphasize that as agents gain autonomy — memory, planning, delegation — risk scales nonlinearly.19 4

Key failure amplifiers:

  • Subgoal formation: agents reinterpret instructions, compounding earlier errors.4
  • Delegation chains: planner → worker → worker → tool, with no provenance tracking.19
  • Long-horizon contexts: agent memories persist malicious instructions.19 4
  • Cross-agent contamination: one compromised agent corrupts downstream agents.

The Knight Institute's "levels of autonomy" framework and multi-agent safety surveys argue that conventional input/output safety checks fail in multi-agent pipelines because the boundary between "input," "policy," and "output" collapses.19 4

Why Naive Context Sharing Is Structurally Unsafe

Bringing the research together, several structural dangers emerge:

1. Data vs. Instructions Blur Together

Models can't reliably distinguish quoted text from actionable instructions.10 9

2. Provenance Is Lost Immediately

Downstream agents cannot tell whether context was:

  • user-provided
  • system-generated
  • adversarial
  • retrieved
  • injected by another agent11 3

3. Compositional Attacks Exploit Agent Boundaries

Benign local steps combine into dangerous global behavior.3 4

4. Tool Access Amplifies Any Upstream Mistake

Poisoned context can cause API calls, writes, external network access.5 6 8

5. No One Has End-to-End Visibility

Logs are fragmented across agents, making forensics difficult.7 3

This makes context passing not just a privacy issue — but a systems security problem.

Emerging Mitigations — and Why They Must Come Before A2A

Researchers are beginning to propose foundational defenses:

Context Provenance & Tagging

Prompt Infection research proposes metadata tags indicating trusted vs. untrusted content, helping downstream agents apply correct guardrails.2 11

Principle of Least Context

Security analyses recommend only passing minimal required fields — not entire transcripts or documents.12 1

Retrieval & KB Hardening

Detection of poisoning, index integrity checks, retrieval validation, and anomaly detection before context reaches agents.14 15 6 5

Agent-Aware Threat Modeling

Systems must be modeled as multi-agent graphs, not single-agent loops.18 3

Autonomy-Tiered Governance

Higher-autonomy agents require stricter monitoring, sandboxing, and human oversight.19 4

The Core Thesis: Before We Can Do Safe A2A, We Must Fix Context

All research points to one unavoidable conclusion:

In multi-agent systems, context is the control surface. Whoever controls the context controls the agent.

Until we build:

  • provenance
  • classification
  • permission boundaries
  • per-user scopes
  • context minimization
  • cross-agent audit trails
  • end-to-end authorization checks

—we cannot deploy safe A2A communication at scale.

Enterprise agents won't fail because their models hallucinate.

They'll fail because they believed the wrong context from the wrong agent at the wrong time.

Fixing context passing is the prerequisite for safe agent ecosystems.

It is step zero in building real agent governance.

Sources

  1. Bringing Memory to AI: MCP, A2A, Agent Context Protocols
  2. Prompt Infection Research (Lehmann et al.)
  3. Cross-Domain Multi-Agent Systems
  4. Multi-Agent Systems Security Survey
  5. Backdoored Retrievers
  6. PoisonedRAG Framework
  7. Multi-Agent Exfiltration
  8. Context-Based Exfiltration Flows
  9. Data vs. Instructions in LLMs
  10. Prompt Injection Taxonomy
  11. Prompt Infection Across LLM Agents (Lehmann et al.)
  12. RAG Threat Models
  13. RAG Threat Models (PDF)
  14. PoisonedRAG Framework
  15. Backdoored Retrievers
  16. LLM Toolformer Patterns
  17. Breaking Down Toolformer
  18. OWASP Top 10 for LLM Applications - Agent-Aware Threat Modeling
  19. Levels of Autonomy for AI Agents