[Contrib] Agent-Mesh trust layer for verified handoffs #65

Closed

imran-siddique wants to merge 3 commits into openai:main from imran-siddique:contrib/agent-mesh-trust

Conversation

@imran-siddique

Summary

Adds trust-verified handoffs to OpenAI Swarm using the Agent-Mesh CMVK identity layer.

The Problem

Swarm enables multi-agent orchestration through handoffs, but has no built-in way to:

  • Verify the receiving agent is trusted
  • Prevent handoffs to malicious/compromised agents
  • Audit handoff chains
  • Protect sensitive context during handoffs

The Solution

This contrib module provides:

  • **`TrustedSwarm`**: Wrapper with trust-verified handoffs
  • **`TrustPolicy`**: Configurable trust requirements
  • **`HandoffVerifier`**: Validates trust before allowing handoffs
  • **`AgentIdentity`**: DID-based agent identification (`did:swarm:xxx`)
  • Audit trail: Full handoff logging with timestamps

Example

```python
from swarm import Agent
from swarm.contrib.agentmesh import TrustedSwarm, TrustPolicy

# Create agents
triage = Agent(name='Triage', functions=[transfer_to_sales])
sales = Agent(name='Sales')

# Create trusted swarm
policy = TrustPolicy(min_trust_score=0.5, audit_logging=True)
swarm = TrustedSwarm(policy=policy)

# Register with trust scores
swarm.register_agent(triage, trust_score=0.8)
swarm.register_agent(sales, trust_score=0.7)

# Handoffs are now verified
response = swarm.run(triage, messages)
```

Key Features

| Feature | Description |
| --- | --- |
| Trust scores | 0.0–1.0 score per agent |
| Blocked agents | Prevent handoffs to specific agents |
| Allowed list | Only allow handoffs to approved agents |
| Sensitive context | Higher trust required for sensitive data |
| Audit logging | Full handoff history |
| Violation callbacks | Custom handling for blocked handoffs |
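The audit-logging and violation-callback rows above can be illustrated with a minimal, self-contained sketch. This is not the PR's actual implementation; the `record_handoff` function and record fields are assumptions for illustration:

```python
import time

# Minimal audit-trail sketch: every handoff attempt is appended as a
# timestamped record, and blocked attempts invoke a violation callback.
audit_log = []

def record_handoff(source, target, allowed, reason, on_violation=None):
    """Append a handoff record to the audit log; fire callback on violations."""
    entry = {
        "ts": time.time(),
        "from": source,
        "to": target,
        "allowed": allowed,
        "reason": reason,
    }
    audit_log.append(entry)
    if not allowed and on_violation is not None:
        on_violation(entry)
    return entry

# An allowed handoff is logged only; a blocked one also triggers the callback.
record_handoff("Triage", "Sales", True, "ok")
violations = []
record_handoff("Triage", "Rogue", False, "blocked list", on_violation=violations.append)
```

Keeping the log append unconditional means the audit trail captures blocked attempts as well as successful handoffs, which is what makes the history useful for forensics.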

Trust Verification

Handoffs are blocked when:

  • The target agent is not registered
  • The target agent is in the blocked list
  • The target agent is below the trust threshold
  • The context is sensitive and the target's trust is insufficient
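Those four checks can be sketched as a standalone decision function. This is a simplified illustration, not the PR's `HandoffVerifier` code; the field names (`sensitive_min_trust`, `blocked`, `allowed`) are assumptions mirroring the features table:

```python
from dataclasses import dataclass, field
from typing import Optional, Set, Tuple

@dataclass
class TrustPolicy:
    min_trust_score: float = 0.5
    sensitive_min_trust: float = 0.9       # higher bar for sensitive context
    blocked: Set[str] = field(default_factory=set)
    allowed: Optional[Set[str]] = None     # None = no allow-list restriction

def verify_handoff(policy: TrustPolicy, registry: dict,
                   target: str, sensitive: bool = False) -> Tuple[bool, str]:
    """Return (allowed, reason) for a handoff to `target`.

    `registry` maps agent name -> trust score for registered agents.
    """
    if target not in registry:
        return False, "target agent is not registered"
    if target in policy.blocked:
        return False, "target agent is in blocked list"
    if policy.allowed is not None and target not in policy.allowed:
        return False, "target agent is not in allowed list"
    threshold = policy.sensitive_min_trust if sensitive else policy.min_trust_score
    if registry[target] < threshold:
        return False, f"trust score below threshold ({threshold})"
    return True, "ok"
```

Note the ordering: registration and list membership are checked before the score, so a blocked agent is rejected even if its trust score is high, and the sensitive-context case is just a stricter threshold rather than a separate code path.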

Files Added

  • `swarm/contrib/agentmesh/__init__.py` - Module exports
  • `swarm/contrib/agentmesh/trusted_handoff.py` - Core implementation
  • `swarm/contrib/agentmesh/README.md` - Documentation
  • `swarm/contrib/agentmesh/test_trusted_handoff.py` - Tests

Testing

```bash
pytest swarm/contrib/agentmesh/test_trusted_handoff.py -v
```

Why This Matters

In multi-agent systems, handoffs create attack vectors. Trust verification ensures only vetted agents participate in your swarm, preventing:

  • Malicious agent interception
  • Context exfiltration
  • Unauthorized data access

References

Adds CMVK-based trust verification for Swarm agent handoffs.

Key features:
- TrustedSwarm: Wrapper with trust-verified handoffs
- TrustPolicy: Configurable trust requirements
- HandoffVerifier: Validates trust before handoffs
- AgentIdentity: DID-based agent identification
- Audit trail: Full handoff logging

Trust verification prevents:
- Handoffs to unregistered agents
- Handoffs to blocked agents
- Handoffs to agents below trust threshold
- Sensitive context to low-trust agents

Includes comprehensive tests and documentation.

Agent-Mesh: https://github.com/imran-siddique/agent-mesh
@imran-siddique (Author)

Ready for Final Review 🙏

This PR has been open for a while. The AgentMesh trust layer integration is complete and tested.

Could a maintainer please provide a final review? Happy to address any remaining concerns.

Thank you!

@imran-siddique (Author)

Friendly nudge -- AgentMesh trust layer was just merged into microsoft/agent-lightning (14k stars): microsoft/agent-lightning#478 -- Happy to address any feedback on this Swarm integration!

@imran-siddique (Author)

Update: Our AgentMesh trust layer was just merged into LlamaIndex (47k stars): run-llama/llama_index#20644. This is our second major integration merge this week after Microsoft's agent-lightning (14k stars). Would love to get this PR reviewed as well!

@imran-siddique (Author)

Friendly follow-up! Since opening this PR, our trust layer has been merged into three major frameworks:

Trust-verified handoffs are especially relevant for Swarm's agent-to-agent pattern. Happy to iterate on this if there's any feedback.

imran-siddique added a commit to microsoft/agent-governance-toolkit that referenced this pull request Mar 4, 2026
New proposal documents for all external submissions:
- AUTOGEN-INTEGRATION-PROPOSAL.md (microsoft/autogen#7212)
- CREWAI-INTEGRATION-PROPOSAL.md (crewAI#4384 + examples#300)
- OPENAI-SWARM-PROPOSAL.md (openai/swarm#65)
- METAGPT-INTEGRATION-PROPOSAL.md (MetaGPT#1936)
- ANTHROPIC-INTEGRATION-PROPOSAL.md (skills#424, plugins#415, cookbooks#384)
- MCP-ECOSYSTEM-PROPOSAL.md (servers#3352, registry#978)
- DIFY-INTEGRATION-PROPOSAL.md (dify-plugins#2060, merged)
- GITHUB-COPILOT-PROPOSAL.md (awesome-copilot#755-757, all merged)
- PROPOSALS-INDEX.md — master index of all 45 submissions

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
@imran-siddique (Author)

Migration update: This project has officially moved to microsoft/agent-governance-toolkit under the Microsoft org.

The code in this PR has been updated to reference the new location. Install via:

`pip install ai-agent-compliance`

All old personal repos (imran-siddique/agent-os, agent-mesh, etc.) are archived and redirect to the new repo. Happy to address any review feedback!

@imran-siddique (Author)

Closing — this project has moved to microsoft/agent-governance-toolkit. Will re-submit fresh proposals from the Microsoft repo. Thank you!
