Today’s AI agents can solve narrow tasks, but they can’t hand work to each other without custom glue code. Every hand-off is a one-off patch.
To solve this problem, Google recently released the Agent2Agent (A2A) Protocol, a tiny, open standard that lets one agent discover, authenticate, and stream results from another agent. No shared prompt context, no bespoke REST endpoints, and no re-implementing auth for the tenth time.
The spec is barely out of the oven, and plenty may change, but it’s a concrete step toward less brittle, more composable agent workflows.
If you’re interested in why agents need a network-level standard, how A2A’s solution works, and the guardrails to run A2A safely, keep scrolling.
Why we need the Agent2Agent Protocol
Modern apps already juggle a cast of “copilots.” One drafts Jira tickets, another triages Zendesk, a third tunes marketing copy.
But each AI agent lives in its own framework, and the moment you ask them to cooperate, you’re back to copy-pasting JSON or wiring short-lived REST bridges. (And let’s be real: copy-pasting prompts between agents is the modern equivalent of emailing yourself a draft-final-final_v2 zip file.)
The Model Context Protocol (MCP) solved only part of that headache. MCP lets a single agent expose its tool schema so an LLM can call functions safely. Trouble starts when that agent needs to pass the whole task to a peer outside its prompt context. MCP stays silent on discovery, authentication, streaming progress, and rich file hand-offs, so teams have been forced to spin up custom micro-services.
Here’s where the pain shows up in practice:
- Unstable hand-offs: A single extra field in a DIY “handover” JSON can break the chain.
- Security gridlock: Every in-house agent ships its own auth scheme; security teams refuse to bless unknown endpoints.
- Vendor lock-in: Some SaaS providers expose agents only through proprietary SDKs, pinning you to one cloud or framework.
That brings us to Agent2Agent (A2A). Think of it as a slim, open layer built on JSON-RPC. It defines just enough—an Agent Card for discovery, a Task state machine, and streamed Messages or Artifacts—so any client agent can negotiate with any remote agent without poking around in prompts or private code.
(A2A use case example from Google’s announcement post.)
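To make the card concrete, here is a rough sketch of what a remote agent might serve at /.well-known/agent.json, written as a Python dict. The field names (name, url, capabilities, authentication, skills, signature) are illustrative guesses based on the elements listed above, not a verbatim copy of the A2A schema; check the spec before relying on them.

```python
# Illustrative Agent Card payload; field names are assumptions based on the
# elements described in this post, not the official A2A schema.
AGENT_CARD = {
    "name": "invoice-exporter",
    "description": "Renders invoices and exports them as PDF artifacts.",
    "url": "https://agents.example.com/invoice-exporter",      # hypothetical endpoint
    "capabilities": {"streaming": True, "pushNotifications": False},
    "authentication": {"schemes": ["bearer"]},                 # what the client must satisfy
    "skills": [
        {"id": "export-pdf", "description": "Turn structured invoice data into a PDF."}
    ],
    # Optional: a JWS over the card so clients can pin the signer's key.
    "signature": "eyJhbGciOiJSUzI1NiJ9..<detached-JWS>..",
}
```

A client only needs to fetch this once per remote and can cache the result.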
A2A doesn’t replace MCP; it sits above it, filling the “between-agent” gap that has stalled real-world adoption. Think of agents like workers at an office: MCP gives them employee handbooks, fax machines, and filing cabinets; A2A lets them chit-chat in the break room.
The goal of A2A is simple: make multi-agent orchestration feel routine rather than risky, while still giving frameworks and vendors room to innovate under the hood.
Roles 101: Client agent vs. remote agent
Before we walk through a full A2A exchange, it helps to tag the two players clearly.
Client agent
This is the side that lives inside your stack—maybe a function in Genkit, a LangGraph node, or even an n8n workflow. It discovers a remote agent’s card, decides whether it can satisfy the announced auth method, and then creates a task by sending a JSON-RPC message such as createTask.
From that moment on the client acts as the task’s shepherd: it listens for status events, forwards any follow-up input the remote requests, and finally collects artifacts for downstream use.
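To give a feel for that opening createTask move, here is a minimal sketch of a client posting a JSON-RPC 2.0 request over HTTPS with the requests library. The endpoint path, bearer token, and params layout are assumptions made for the example; the real shapes come from the remote’s Agent Card and the A2A spec.

```python
import uuid
import requests

REMOTE_URL = "https://agents.example.com/invoice-exporter/a2a"  # hypothetical A2A endpoint

def create_task(prompt: str, token: str) -> str:
    """Send a createTask JSON-RPC request and return the task_id the remote assigns."""
    request = {
        "jsonrpc": "2.0",
        "id": str(uuid.uuid4()),
        "method": "createTask",          # method name as used in this article
        "params": {
            "message": {                 # illustrative payload shape
                "role": "client",
                "parts": [{"type": "text", "text": prompt}],
            }
        },
    }
    resp = requests.post(
        REMOTE_URL,
        json=request,
        headers={"Authorization": f"Bearer {token}"},  # auth scheme announced in the Agent Card
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["result"]["task_id"]            # result shape is illustrative too
```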
Remote agent
Think of this as a specialized micro-service that just happens to speak A2A. It might be running in Cloud Run, Lambda, or on a bare VPS. Once it receives a task it owns the heavy lifting—whether that means querying a vector store, fine-tuning a model, or exporting a PDF.
Throughout execution, it streams back TaskStatusUpdate and TaskArtifactUpdate events. Crucially, the remote can’t flip the connection: it can ask for more input (status: input-required) from the client, but it never becomes the caller.
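As a rough illustration of the remote’s side, the generator below yields those updates as Server-Sent Event frames. The event names mirror the update types above, but the field layout is an assumption made for the sketch.

```python
import json
from typing import Iterator

def stream_task_events(task_id: str) -> Iterator[str]:
    """Illustrative remote-side event stream: each yield is one Server-Sent Event frame."""
    def sse(event: str, payload: dict) -> str:
        return f"event: {event}\ndata: {json.dumps(payload)}\n\n"

    yield sse("TaskStatusUpdate", {"task_id": task_id, "status": "processing"})
    # The remote may pause and ask the client for more input -- but it never
    # becomes the caller; it just marks the task and waits.
    yield sse("TaskStatusUpdate", {"task_id": task_id, "status": "input-required",
                                   "message": "Which fiscal year should the report cover?"})
    yield sse("TaskStatusUpdate", {"task_id": task_id, "status": "completed"})
    yield sse("TaskArtifactUpdate", {"task_id": task_id,
                                     "artifact": {"type": "FilePart",
                                                  "mime_type": "application/pdf",
                                                  "uri": "https://example.com/reports/q3.pdf"}})
```

Note the asymmetry: even the input-required pause is just another status update the client reads; the remote never issues a call of its own.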
One-way communication
- Only the client initiates JSON-RPC requests.
- Only the remote updates task state.
- Either side can terminate the stream if something goes wrong, but responsibility for cleanup (e.g., deleting temp files) lies with the remote.
A mental model that works well is “front-of-house vs back-of-house.” The client stays in front, taking new orders and relaying clarifications; the remote is the kitchen, head-down until the dish is ready. (The downside is real, too: if the remote burns the soufflé, the client still has to smile and comp dessert.)
With those lanes marked, we can zoom in on the data structures and security rails that make the hand-off safe.
Where A2A fits: Above MCP, beside your orchestrator
When people first see A2A they often ask, “Wait, doesn’t MCP already cover agent tooling?” Almost—but not quite.
A quick map of the layers makes the distinction clear:
- Inside a single agent (prompt level): Here the agent needs a schema so its model can call a tool. That’s MCP territory: JSON schemas, function names, argument validation, prompt-injection worries.
- Between agents (network level): As soon as an agent wants to hand the whole task to a peer, MCP has nothing to say about discovery, auth, or streamed artifacts. That gap is what A2A fills with Agent Cards, Tasks, and status events. (More on agentic systems and orchestrators.)
- Inside your process (workflow level): Frameworks like LangGraph, CrewAI, and AutoGen wire steps together in memory. They’re great for small chains on one machine, but once you need to cross a network boundary—or mix languages and vendors—you step out of their sandbox and into A2A.
Think of it like this:
- MCP is the API contract inside a single micro-service.
- A2A is the HTTP layer between micro-services.
- LangGraph et al. are the workflow engine that decides when each micro-service gets called.
At scale, most real systems end up using all three. A LangGraph flow might call an internal Python agent (in-process), then hand the job to a third-party finance agent via A2A, and that finance agent might rely on MCP to trigger a spreadsheet-export tool deep inside its own prompt.
Keeping these boundaries straight prevents duplicated effort: you don’t bolt custom auth onto every MCP tool, and you don’t overload A2A with prompt schemas it was never meant to parse.
With the layers sorted, we can dig into the wire format itself—the Agent Card, the Task state machine, and how messages and artifacts move across the stream.
Anatomy of an A2A exchange
If you can picture buying a book on Amazon, you already understand the four data shapes A2A moves across the wire.
Take a look:
| Your Amazon flow | A2A primitive | What it contains |
| --- | --- | --- |
| Product listing page: You browse, see what’s for sale, learn payment options | Agent Card (/.well-known/agent.json) | Agent ID, description, capabilities list, supported auth method, optional cryptographic signature |
| Order confirmation / invoice: Click “Buy Now,” receive an order ID | Task (created via createTask) | task_id, input payload, current status |
| Shipping-status pings: “Order packed,” “out for delivery,” “arriving today” | Message (TaskStatusUpdateEvent) | Role (agent or client), text, optional small files |
| Package on your doorstep: The thing you bought | Artifact (TaskArtifactUpdateEvent) | Typed payload: TextPart, FilePart, or DataPart |
The step-by-step checkout
- Browse the listing: The client fetches the Agent Card once. If the “features” (capabilities) and “checkout” (auth) look good, it proceeds.
- Place the order: The client sends a createTask JSON-RPC request (like clicking “Buy Now”). The remote agent replies with a task_id, your order number for the job.
- Watch the tracking emails: The remote streams Messages over Server-Sent Events: pending, processing, maybe input-required (a “signature needed” moment). The client can answer with addInput, just as you’d update delivery instructions.
- Receive the package: When status flips to completed, Artifact events deliver the payload—PDF report, PNG asset, JSON data, or whatever was promised.
- Close the loop: If the task fails or is canceled, the remote marks it failed or canceled and no artifacts ship (like Amazon refunding an unfulfilled order). The whole flow is sketched in code right after this list.
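Putting the five steps together, a bare-bones client pass could look like the sketch below: fetch the card, send createTask, then read the event stream until a terminal status arrives. The base URL, events endpoint, and payload fields are invented for illustration; only the overall shape of the exchange follows the flow described above.

```python
import json
import uuid
import requests

BASE = "https://agents.example.com/invoice-exporter"   # hypothetical remote agent
TOKEN = "replace-me"                                   # whatever the Agent Card's auth scheme requires
AUTH = {"Authorization": f"Bearer {TOKEN}"}

# 1. Browse the listing: fetch the Agent Card once and check auth compatibility.
card = requests.get(f"{BASE}/.well-known/agent.json", timeout=10).json()
assert "bearer" in card.get("authentication", {}).get("schemes", []), "can't satisfy announced auth"

# 2. Place the order: createTask over JSON-RPC (payload shape is illustrative).
rpc = {
    "jsonrpc": "2.0",
    "id": str(uuid.uuid4()),
    "method": "createTask",
    "params": {"message": {"role": "client",
                           "parts": [{"type": "text", "text": "Export invoice #4521 as a PDF"}]}},
}
task = requests.post(f"{BASE}/a2a", json=rpc, headers=AUTH, timeout=30).json()["result"]

# 3-5. Watch the tracking emails, receive the package, close the loop: read the SSE stream.
events_url = f"{BASE}/a2a/tasks/{task['task_id']}/events"   # hypothetical events endpoint
with requests.get(events_url, stream=True, headers=AUTH) as stream:
    for line in stream.iter_lines(decode_unicode=True):
        if not line or not line.startswith("data:"):
            continue                                  # skip blank keep-alives and "event:" lines
        event = json.loads(line[len("data:"):])
        if "artifact" in event:
            print("artifact delivered:", event["artifact"])
        status = event.get("status")
        if status == "input-required":
            pass                                      # a real client would answer with addInput here
        elif status in ("completed", "failed", "canceled"):
            print("final status:", status)
            break
```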
By framing the exchange this way, you can see why A2A keeps the spec minimal: it only defines what every shopper (client) and seller (remote) absolutely need—catalog, order, tracking, delivery—while leaving the “warehouse internals” (model prompts, tool schemas) to MCP or any other mechanism the seller chooses.
Safety, observability, and governance in the A2A protocol
A2A keeps its on-wire spec thin, but production systems still need three layers of protection and visibility.
Secure the handshake
- Signed agent cards: Add a JSON Web Signature (JWS) to the card and publish the signer’s public key. Clients “pin” that key; if anyone swaps the card in transit, signature verification fails and the call is dropped. “Trust me, bro” isn’t a real security policy.
- Auth choices: Demos usually rely on simple Bearer tokens, but you can level up to mutual TLS (like a secret handshake without the finger guns) or plug into your company’s single sign-on flow.
- Runtime policy: A remote agent can reject oversized or risky payloads before its model ever runs. A common guard looks like: “accept only JSON or PNG files under 5 MB.” (This is a lot like Zod schema validation in MCP.) Both the signature check and this payload guard are sketched right after this list.
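Here is a small sketch of the first and third bullets, assuming the card ships as a compact JWS whose payload is the card JSON, the signer’s public key was exchanged out of band, and PyJWT is installed; the size guard mirrors the “JSON or PNG under 5 MB” policy above.

```python
import jwt  # PyJWT

PINNED_PUBLIC_KEY = open("remote_agent_pub.pem").read()   # exchanged out of band and pinned

def load_trusted_card(signed_card: str) -> dict:
    """Verify a card shipped as a compact JWS whose payload is the card JSON.
    If anyone swapped the card in transit, decoding raises and the call is dropped."""
    return jwt.decode(signed_card, PINNED_PUBLIC_KEY, algorithms=["RS256"])

ALLOWED_TYPES = {"application/json", "image/png"}
MAX_BYTES = 5 * 1024 * 1024   # the "under 5 MB" guard from the bullet above

def accept_payload(mime_type: str, payload: bytes) -> bool:
    """Runtime policy the remote can apply before its model ever sees the input."""
    return mime_type in ALLOWED_TYPES and len(payload) <= MAX_BYTES
```

In production you would pull the pinned key from a secrets manager rather than a local file, but the verification step stays the same.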
See what the agents see
Each status or artifact event already carries timestamps, task_id, and an optional trace header. Wrap your A2A client in an OpenTelemetry middleware and you get end-to-end spans out of the box—no hacking JSON.
Pipe those spans into your observability stack, and you should be able to answer, “Which remote agent turned slow at 3 p.m.?” before customers notice.
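A minimal sketch of that wrapping with the OpenTelemetry Python API is below; the helper that performs the actual JSON-RPC POST is stubbed out, so treat it as a shape to copy rather than a drop-in client.

```python
from opentelemetry import trace
from opentelemetry.propagate import inject

tracer = trace.get_tracer("a2a-client")    # instrumentation name is arbitrary

def send_create_task(prompt: str, headers: dict) -> str:
    """Stand-in for the JSON-RPC POST sketched earlier; returns the remote's task_id."""
    return "task-123"

def traced_create_task(prompt: str) -> str:
    # One span per A2A hop; inject() adds the W3C traceparent header the event stream can echo back.
    with tracer.start_as_current_span("a2a.createTask") as span:
        headers = {}
        inject(headers)
        task_id = send_create_task(prompt, headers)
        span.set_attribute("a2a.task_id", task_id)   # lets you join spans to task events later
        return task_id
```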
Trust, but verify
Today, discovery of A2A remotes is DIY:
- YAML files for internal teams (registry.yaml checked into repo).
- Vertex AI catalogue: Tick “Publish” and Google hosts the card in a private directory.
- Emerging public hubs: LangChain and Flowise communities are hacking on npm-style registries, but there’s no global “verified badge” yet.
Until those hubs mature, most companies will treat third-party agents like SaaS vendors: security questionnaires, software bill of materials (SBOMs), and limited network scopes.
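In the meantime, “discovery” can be as plain as a checked-in registry.yaml plus a few lines of lookup code. The registry schema in this sketch is invented for illustration:

```python
import yaml  # PyYAML

def find_agent(capability: str, path: str = "registry.yaml") -> str | None:
    """Return the Agent Card URL of the first registered agent claiming a capability.
    Assumes entries like: {name: tax-check, card_url: https://..., capabilities: [tax-compliance]}."""
    with open(path) as f:
        registry = yaml.safe_load(f) or []
    for entry in registry:
        if capability in entry.get("capabilities", []):
            return entry["card_url"]   # next step: fetch the card and verify its signature
    return None
```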
How A2A narrows the attack surface vs MCP
MCP exposes every tool schema in natural-language prompts, so injection and argument-tampering are daily worries.
A2A hides all of that behind the remote’s fence; the client sees only high-level tasks and capped artifacts. You still need to trust the remote’s code, but your prompt is never on the table, which eliminates an entire class of exploits.
The takeaway from all this: sign what you publish, pin what you trust, trace every hop, and keep payload limits sane. With those guardrails in place, A2A is no riskier than calling a well-behaved REST service—and a lot more flexible when you add new agents tomorrow.
A2A dream vs. current reality
The dream
- Browse an “Agent Mall.” You open CoolAgentMall.dev, search “tax compliance,” and see live agents with star-ratings and signed cards. One click drops the URL into your private registry—no SDKs, no secrets.
- Drag-and-drop chains. In Flowise (or n8n) you drag a green Tax-Check (A2A) block after “Generate Invoice,” hit Run, and watch a JSON artifact stream back with the correct jurisdiction codes—zero glue code on your side.
The reality (May 2025)
- ~50 vendors have announced support, but most agents still exist in the “DM me for a demo” stage.
- LangGraph, CrewAI, and AutoGen adapters are solid; Flowise and n8n remain on community betas.
- No public registry yet—teams rely on registry.yaml files or Vertex AI’s private catalogue.
- Very few agents ship signed cards, and rate-limits or billing caps are DIY middleware.
- Performance data is anecdotal; Google’s reference server adds ~30 ms per hop in local tests.
A2A is ready for prototypes and internal workflows, but consumer apps and regulated stacks will want extra guardrails until registries and security standards mature.
When to reach for A2A (and when not to)
Great fits
- Cross-vendor workflows: Your product-manager agent needs a finance-forecast agent from another company. A2A gives them a shared handshake, auth, and streaming without exposing prompt guts.
- Security-sensitive black boxes: A vendor won’t share its model prompts but will expose a signed Agent Card. You still get a clean contract plus task-level audit trails.
- Hybrid stacks & mixed languages: A TypeScript front end can call a Python data-science agent—or the other way around—because only JSON-RPC crosses the wire.
- Long-running jobs that need progress updates: Build pipelines, PDF rendering, and data exports can stream status and artifacts over Server-Sent Events instead of polling a custom REST endpoint.
Probably overkill
- Everything runs in one process: If your whole flow sits inside your orchestrator of choice, stick with the framework’s in-memory calls.
- Tiny helper scripts: A cron job that pings a lone OpenAI function doesn’t need discovery or streaming; direct API calls are lighter.
- One-off data pulls: For a weekly export where latency and chatter don’t matter, a plain REST endpoint is easier to monitor.
- Schema-heavy, prompt-light tools: When the main need is validating complex arguments inside a prompt, MCP alone is the right layer.
Reach for A2A when a task crosses a network boundary and you care about trust, live progress, or swapping in new specialist agents later. Skip it when a well-documented API already fits the bill, or your whole stack fits on a Raspberry Pi taped to your monitor.
A2A doesn’t add new magic to models. Instead, it adds a dependable handshake so your existing agents can meet, swap work, and keep a tidy audit trail.
The registry story is still DIY and many agents live behind private demos, but the plumbing is solid enough for prototypes and internal workflows today.
Less glue, more interesting work.