The Future of AI Agents Is Event-Driven

AI agents are set to transform enterprise operations with autonomous problem-solving, adaptive workflows, and scalability. But the real challenge isn’t building better models.

Agents need access to data, tools, and the ability to share information across systems, with their outputs available for use by multiple services — including other agents. This isn’t an AI problem; it’s an infrastructure and data interoperability problem. It requires more than stitching together chains of commands; it demands an event-driven architecture (EDA) powered by streams of data.

As HubSpot CTO Dharmesh Shah put it, “Agents are the new apps.” Meeting this potential requires investing in the right design patterns from the start. This article explores why EDA is the key to scaling agents and unlocking their full potential in modern enterprise systems.

To fully understand why EDA is essential for the next wave of AI, we must first look at how AI has evolved to this point.

The Evolution of AI

AI has progressed through two distinct waves and is now entering a third. The first two waves unlocked new possibilities but also suffered critical limitations.

The First Wave of AI: Predictive Models

The first wave of AI revolved around traditional machine learning, focusing on predictive capabilities for narrowly defined tasks.

[Image: The traditional machine learning workflow]

Building these models required significant expertise, as they were crafted specifically for individual use cases. They were domain-specific, with their domain specificity embedded in the training data, making them rigid and tough to repurpose. Adapting a model to a new domain often meant starting from scratch — an approach that lacked scalability and slowed adoption.

The Second Wave of AI: Generative Models

Generative AI, driven by deep learning, marked a turning point.

Instead of being confined to single domains, these generative models were trained on vast, diverse datasets, giving them the ability to generalize across a variety of contexts. They could generate text, images, and even videos, opening up exciting new applications. However, this wave came with its own challenges.

Generative models are fixed in time — unable to incorporate new or dynamic information — and are difficult to adapt. Fine-tuning can address domain-specific needs, but it’s expensive and error-prone. Fine-tuning requires vast data, significant computational resources, and ML expertise, making it impractical for many situations. Additionally, since LLMs are trained on publicly available data, they don’t have access to domain-specific information, limiting their ability to accurately respond to questions that require context.

For example, suppose you ask a generative model to recommend an insurance policy tailored to a user’s personal health history, location, and financial goals.

[Image: Simple prompt and response with an LLM]

In this scenario, you prompt the LLM and it generates a response. Clearly the model can’t deliver accurate recommendations because it lacks access to the relevant user data. Without it, the response will either be generic or flat-out wrong.
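
To make this concrete, here is a minimal sketch of that bare prompt-and-response flow, with a hypothetical call_llm helper standing in for whichever LLM API you use; note that nothing in the prompt carries the user’s actual data.

```python
def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for an LLM API call; stubbed so the sketch runs."""
    return "<generic policy advice>"

# A bare prompt: the model only sees what we type, not the user's actual records.
prompt = (
    "Recommend an insurance policy tailored to this user's personal health history, "
    "location, and financial goals."
)
print(call_llm(prompt))  # with no user data in the prompt, the answer can only be generic
```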

Compound AI Bridges the Gap

To overcome these limitations, Compound AI systems integrate generative models with other components like programmatic logic, data retrieval mechanisms, and validation layers. This modular design allows AI to combine tools, fetch relevant data, and tailor outputs in a way that static models cannot.

For instance, in the insurance recommendation example (a minimal code sketch follows the list):

  • A retrieval mechanism pulls the user’s health and financial data from a secure database.
  • This data is added to the context provided to the LLM during prompt assembly.
  • The LLM uses the assembled prompt to generate an accurate response.
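
A minimal sketch of that retrieve-then-prompt flow, assuming a hypothetical fetch_user_profile lookup against your own secure store and a stubbed call_llm helper:

```python
def call_llm(prompt: str) -> str:
    """Stub for an LLM call; swap in your provider's client."""
    return "<policy recommendation grounded in the profile>"

def fetch_user_profile(user_id: str) -> dict:
    """Hypothetical retrieval step: pull health and financial data from a secure store."""
    return {"age": 42, "location": "Austin", "conditions": ["asthma"], "monthly_budget": 150}

def recommend_policy(user_id: str) -> str:
    profile = fetch_user_profile(user_id)   # 1. retrieve the relevant data
    prompt = (                              # 2. assemble the prompt with that context
        "Using the customer profile below, recommend an insurance policy "
        "and briefly explain the trade-offs.\n"
        f"Profile: {profile}"
    )
    return call_llm(prompt)                 # 3. generate a grounded response

print(recommend_policy("user-123"))
```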

[Image: Simple RAG architecture]

This process, known as Retrieval-Augmented Generation (RAG), bridges the gap between static AI and real-world needs by dynamically incorporating relevant data into the model’s workflow.

While RAG effectively handles tasks like this, it relies on fixed workflows, meaning every interaction and execution path must be pre-defined. This rigidity makes it impractical to handle more complex or dynamic tasks where workflows cannot be exhaustively encoded. Encoding all possible execution paths manually is labor-intensive and ultimately limiting.

The limitations of fixed-flow architectures have led to the rise of the third wave of AI: agentic systems.

The Rise of Agentic AI

While AI has come a long way, we’re hitting the limits of fixed systems and even LLMs.

Google’s Gemini is reportedly failing to meet internal expectations despite being trained on a larger set of data. Similar results have been reported for OpenAI’s next-generation Orion model.

Salesforce CEO Marc Benioff recently said on The Wall Street Journal’s “Future of Everything” podcast that we’ve reached the upper limits of what LLMs can do. He believes the future lies with autonomous agents — systems that can think, adapt, and act independently — rather than models like GPT-4.

Agents bring something new to the table: dynamic, context-driven workflows. Unlike fixed paths, agentic systems figure out the next steps on the fly, adapting to the situation at hand. That makes them ideal for tackling the kinds of unpredictable, interconnected problems businesses face today.

[Image: Control logic, programmatic versus agentic]
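
One way to picture the difference is the sketch below: the programmatic handler hard-codes every branch, while the agentic handler asks a stubbed, hypothetical call_llm to choose the next step from the current context.

```python
def call_llm(prompt: str) -> str:
    """Stub: a real model would pick the next step based on the full context."""
    return "escalate_to_human"

# Programmatic control: every path is spelled out in advance.
def handle_ticket_programmatic(ticket: dict) -> str:
    if ticket["type"] == "refund":
        return "run_refund_workflow"
    elif ticket["type"] == "bug":
        return "create_engineering_issue"
    return "send_generic_reply"

# Agentic control: the model decides the next step at runtime.
def handle_ticket_agentic(ticket: dict) -> str:
    options = ["run_refund_workflow", "create_engineering_issue",
               "escalate_to_human", "send_generic_reply"]
    return call_llm(f"Ticket: {ticket}\nPick the best next step from {options}.")

ticket = {"type": "billing", "text": "I was charged twice and my card is now blocked."}
print(handle_ticket_programmatic(ticket))  # falls through to a generic reply
print(handle_ticket_agentic(ticket))       # the model can choose escalation from context
```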

Agents flip traditional control logic on its head.

Instead of rigid programs dictating every move, agents use LLMs to drive decisions. They can reason, use tools, and access memory — all dynamically. This flexibility allows for workflows that evolve in real time, making agents far more powerful than anything built on fixed logic.

[Image: Agent architecture (inspired by https://arxiv.org/pdf/2304.03442)]

How Design Patterns Shape Smarter Agents

AI agents derive their strength not only from their core abilities but also from the design patterns that structure their workflows and interactions. These patterns allow agents to tackle complex problems, adapt to changing environments, and collaborate effectively.

Let’s cover some of the common design patterns that enable effective agents.

Reflection: Improvement through Self-Evaluation

Reflection allows agents to evaluate their own decisions and improve their output before taking action or providing a final response. This capability enables agents to catch and correct mistakes, refine their reasoning, and ensure higher-quality outcomes.

[Image: Reflection design pattern for agents]
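
A minimal sketch of the reflection loop, again with a stubbed call_llm: the agent drafts an answer, asks the model to critique it, and revises until the critique passes or a retry budget runs out.

```python
def call_llm(prompt: str) -> str:
    """Stub for an LLM call; replace with a real client."""
    return "OK"

def reflect_and_answer(task: str, max_rounds: int = 3) -> str:
    draft = call_llm(f"Answer the following task:\n{task}")
    for _ in range(max_rounds):
        critique = call_llm(
            "Critique the draft below for factual errors and missing steps. "
            f"Reply 'OK' if it is acceptable.\nTask: {task}\nDraft: {draft}"
        )
        if critique.strip().upper().startswith("OK"):
            break  # self-evaluation passed
        draft = call_llm(
            f"Revise the draft to address this critique.\nCritique: {critique}\nDraft: {draft}"
        )
    return draft

print(reflect_and_answer("Summarize the refund policy for plan X."))
```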

Tool Use Expands Agent Capabilities

Interfacing with external tools extends an agent’s functionality, allowing it to perform tasks like retrieving data, automating processes, or executing deterministic workflows. This is particularly valuable for operations requiring strict accuracy, such as mathematical calculations or database queries, where precision is non-negotiable. Tool use bridges the gap between flexible decision-making and predictable, reliable execution.

[Image: Tool use design pattern for agents]
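
A minimal sketch of the tool-use pattern, with two illustrative tools and a stubbed model decision; real frameworks negotiate tool calls through structured outputs, but the dispatch idea is the same.

```python
import json

def call_llm(prompt: str) -> str:
    """Stub: a real model would return a JSON tool call chosen from the registry."""
    return json.dumps({"tool": "sum_invoices", "args": {"amounts": [120.5, 99.9]}})

# Deterministic tools the agent can call when precision is non-negotiable.
TOOLS = {
    "sum_invoices": lambda amounts: sum(amounts),
    "lookup_customer": lambda customer_id: {"id": customer_id, "tier": "gold"},
}

def run_agent(task: str):
    decision = json.loads(call_llm(
        f"Task: {task}\nAvailable tools: {list(TOOLS)}\n"
        'Reply with JSON: {"tool": <name>, "args": {...}}'
    ))
    tool = TOOLS[decision["tool"]]
    return tool(**decision["args"])  # exact, deterministic execution

print(run_agent("What is the total of the outstanding invoices?"))
```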

Planning Turns Goals Into Actions

Agents with planning capabilities can break down high-level objectives into actionable steps, organizing tasks in a logical sequence. This design pattern is crucial for solving multi-step problems or managing workflows with dependencies.

[Image: Planning design pattern for agents]
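
A minimal sketch of the planning pattern: the stubbed model is asked for an ordered list of steps, and the agent executes them in sequence. The step strings here are illustrative.

```python
import json

def call_llm(prompt: str) -> str:
    """Stub: a real planner model would return the steps as a JSON array."""
    return json.dumps([
        "Pull last quarter's sales figures",
        "Segment customers by churn risk",
        "Draft a retention email for the high-risk segment",
    ])

def plan(goal: str) -> list[str]:
    raw = call_llm(f"Break this goal into ordered, concrete steps as JSON: {goal}")
    return json.loads(raw)

def execute(step: str) -> str:
    return f"done: {step}"  # placeholder for tool calls, sub-agents, etc.

goal = "Reduce customer churn next quarter"
for i, step in enumerate(plan(goal), start=1):
    print(i, execute(step))
```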

Multi-Agent Collaboration: Modular Thinking

Multi-agent systems take a modular approach to problem-solving by assigning specific tasks to specialized agents. This approach offers flexibility: you can use smaller language models (SLMs) for task-specific agents to improve efficiency and simplify memory management. The modular design reduces complexity for individual agents by keeping their context focused on their specific tasks.

A related technique is Mixture-of-Experts (MoE), which employs specialized submodels, or “experts,” within a single framework. Like multi-agent collaboration, MoE dynamically routes tasks to the most relevant expert, optimizing computational resources and enhancing performance. Both approaches emphasize modularity and specialization — whether through multiple agents working independently or through task-specific routing in a unified model.

Just like in traditional system design, breaking problems into modular components makes them easier to maintain, scale, and adapt. Through collaboration, these specialized agents share information, divide responsibilities, and coordinate actions to tackle complex challenges more effectively.

[Image: Multi-agent collaboration design pattern for agents]
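
A minimal sketch of multi-agent collaboration: a researcher agent and a writer agent, each wrapping its own stubbed model call (in practice these could be smaller, task-specific models), coordinated by a simple orchestrator.

```python
def call_llm(prompt: str, model: str = "small-model") -> str:
    """Stub; each agent could point at a different, task-specific model (SLM)."""
    return f"[{model}] response to: {prompt[:40]}..."

class Agent:
    def __init__(self, name: str, system_prompt: str, model: str):
        self.name, self.system_prompt, self.model = name, system_prompt, model

    def run(self, task: str) -> str:
        return call_llm(f"{self.system_prompt}\n\nTask: {task}", model=self.model)

researcher = Agent("researcher", "You gather facts and cite sources.", "research-slm")
writer = Agent("writer", "You turn notes into a customer-facing summary.", "writing-slm")

def orchestrate(task: str) -> str:
    notes = researcher.run(task)                           # specialized step 1
    return writer.run(f"Write up these notes:\n{notes}")   # specialized step 2

print(orchestrate("Compare our top three insurance plans"))
```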

In short, agents don’t just execute workflows; they reshape how we think about them. They’re the next step in building scalable, adaptable AI systems — moving past the constraints of traditional architectures and the current limitations of LLMs.

Agentic RAG: Adaptive and Context-Aware Retrieval

Agentic RAG evolves RAG by making it more dynamic and context-driven. Instead of relying on fixed workflows, agents can determine in real time what data they need, where to find it, and how to refine their queries based on the task at hand. This flexibility makes agentic RAG well-suited for handling complex, multi-step workflows that require responsiveness and adaptability.

For instance, an agent creating a marketing strategy might start by pulling customer data from a CRM, use APIs to gather market trends, and refine its approach as new information emerges. By retaining context through memory and iterating on its queries, the agent produces more accurate and relevant outputs. Agentic RAG brings together retrieval, reasoning, and action.

[Image: Agentic RAG design pattern]
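
A minimal sketch of agentic RAG: rather than one fixed retrieve-then-generate pass, the agent loops, letting a stubbed model decide which source to query next or when to finish, accumulating context as it goes. The source names are illustrative.

```python
import json

def call_llm(prompt: str) -> str:
    """Stub: a real model would pick the next action based on the gathered context."""
    return json.dumps({"action": "finish", "answer": "Draft strategy based on CRM + trends."})

# Hypothetical retrieval tools the agent can choose between at runtime.
SOURCES = {
    "crm": lambda query: f"CRM records for '{query}'",
    "market_trends": lambda query: f"Trend report for '{query}'",
}

def agentic_rag(task: str, max_steps: int = 5) -> str:
    context: list[str] = []
    for _ in range(max_steps):
        decision = json.loads(call_llm(
            f"Task: {task}\nContext so far: {context}\nSources: {list(SOURCES)}\n"
            'Reply {"action": "retrieve", "source": <name>, "query": <text>} '
            'or {"action": "finish", "answer": <text>}'
        ))
        if decision["action"] == "finish":
            return decision["answer"]
        context.append(SOURCES[decision["source"]](decision["query"]))  # refine iteratively
    return "No answer within the step budget"

print(agentic_rag("Create a marketing strategy for segment A"))
```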

The Challenges with Scaling Intelligent Agents

Scaling agents — whether a single agent or a collaborative system — hinges on their ability to access and share data effortlessly. Agents need to gather information from multiple sources, including other agents, tools, and external systems, to make decisions and take action.

[Image: Single agent dependencies]

Connecting agents to the tools and data they need is fundamentally a distributed systems problem. This complexity mirrors the challenges faced in designing microservices, where components must communicate efficiently without creating bottlenecks or rigid dependencies.

Like microservices, agents must communicate efficiently and ensure their outputs are useful across the broader system. And like any service, their outputs shouldn’t just loop back into the AI application — they should flow into other critical systems like data warehouses, CRMs, CDPs, and customer success platforms.

Sure, you could connect agents and tools through RPC and APIs, but that’s a recipe for tightly coupled systems. Tight coupling makes it harder to scale, adapt, or support multiple consumers of the same data. Agents need flexibility. Their outputs must seamlessly feed into other agents, services, and platforms without locking everything into rigid dependencies.

What’s the solution?

Loose coupling through an event-driven architecture. It’s the backbone that allows agents to share information, act in real time, and integrate with the broader ecosystem — without the headaches of tight coupling.

Event-Driven Architectures: A Primer

In the early days, software systems were monoliths. Everything lived in a single, tightly integrated codebase. While simple to build, monoliths became a nightmare as they grew.

Scaling was a blunt instrument: you had to scale the entire application, even if only one part needed it. This inefficiency led to bloated systems and brittle architectures that couldn’t handle growth.

Microservices changed this.

By breaking applications into smaller, independently deployable components, teams could scale and update specific parts without touching the whole system. But this created a new challenge: how do all these smaller services communicate effectively?

If we connect services through direct RPC or API calls, we create a giant mess of interdependencies. If one service goes down, it impacts all nodes along the connected path.

[Image: Tightly-coupled microservices]

EDA solved the problem.

Instead of tightly coupled, synchronous communication, EDA enables components to communicate asynchronously through events. Services don’t wait on each other — they react to what’s happening in real time.

[Image: Event-Driven Architecture]
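
A toy, in-memory event bus illustrates the shift: producers publish events, and any number of subscribers react independently, without knowing who else is listening. A production system would use a durable, distributed log such as Kafka rather than a Python dict, and consumers would run asynchronously.

```python
from collections import defaultdict
from typing import Callable

# topic -> list of handlers; a stand-in for a durable event log
subscribers: dict[str, list[Callable[[dict], None]]] = defaultdict(list)

def subscribe(topic: str, handler: Callable[[dict], None]) -> None:
    subscribers[topic].append(handler)

def publish(topic: str, event: dict) -> None:
    # every subscriber reacts to the event; the producer never calls them directly
    for handler in subscribers[topic]:
        handler(event)

# Two independent consumers of the same event; neither knows the other exists.
subscribe("policy.recommended", lambda e: print("CRM sink stored", e["customer_id"]))
subscribe("policy.recommended", lambda e: print("follow-up agent notified for", e["customer_id"]))

publish("policy.recommended", {"customer_id": "user-123", "policy": "Plan B"})
```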

This approach made systems more resilient and adaptable, allowing them to handle the complexity of modern workflows. It wasn’t just a technical breakthrough; it was a survival strategy for systems under pressure.

The Rise and Fall of Early Social Giants

The rise and fall of early social networks like Friendster underscores the importance of scalable architecture. Friendster captured a massive user base early on, but its systems couldn’t handle the demand. Performance issues drove users away, and the platform ultimately failed.

On the flip side, Facebook thrived not just because of its features but because it invested in scalable infrastructure. It didn’t crumble under the weight of success — it rose to dominate.

Today, we risk seeing a similar story play out with AI agents.

Like early social networks, agents will experience rapid growth and adoption. Building agents isn’t enough. The real question is whether your architecture can handle the complexity of distributed data, tool integrations, and multi-agent collaboration. Without the right foundation, your agent stack could fall apart just like the early casualties of social media.

The Future is Event-Driven Agents

The future of AI isn’t just about building smarter agents — it’s about creating systems that can evolve and scale as the technology advances. With the AI stack and underlying models changing rapidly, rigid designs quickly become barriers to innovation. To keep pace, we need architectures that prioritize flexibility, adaptability, and seamless integration. EDA is the foundation for this future, enabling agents to thrive in dynamic environments while remaining resilient and scalable.

Agents as Microservices with Informational Dependencies

Agents are similar to microservices: they’re autonomous, decoupled, and capable of handling tasks independently. But agents go further.

While microservices typically process discrete operations, agents rely on shared, context-rich information to reason, make decisions, and collaborate. This creates unique demands for managing dependencies and ensuring real-time data flows.

For instance, an agent might pull customer data from a CRM, analyze live analytics, and use external tools — all while sharing updates with other agents. These interactions require a system where agents can work independently but still exchange critical information fluidly.

EDA solves this challenge by acting as a “central nervous system” for data. It allows agents to broadcast events asynchronously, ensuring that information flows dynamically without creating rigid dependencies. This decoupling lets agents operate autonomously while integrating seamlessly into broader workflows and systems.

[Image: An event-driven architecture for AI agents]
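
As a sketch, an agent’s output can be wrapped in a small, self-describing event that any downstream consumer (another agent, a CRM sink, an analytics job) can pick up. The envelope fields here are illustrative, not a standard.

```python
import json
import uuid
from datetime import datetime, timezone

def make_agent_event(agent: str, event_type: str, payload: dict) -> str:
    """Wrap an agent's output in a self-describing envelope (illustrative schema)."""
    return json.dumps({
        "event_id": str(uuid.uuid4()),
        "type": event_type,                      # e.g. "lead.scored", "policy.recommended"
        "producer": agent,
        "occurred_at": datetime.now(timezone.utc).isoformat(),
        "payload": payload,
    })

event = make_agent_event(
    agent="policy-recommender",
    event_type="policy.recommended",
    payload={"customer_id": "user-123", "policy": "Plan B", "confidence": 0.82},
)
print(event)  # published to a topic; CRM, analytics, and other agents consume it independently
```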

Decoupling While Keeping Context Intact

Building flexible systems doesn’t mean sacrificing context. Traditional, tightly coupled designs often bind workflows to specific pipelines or technologies, forcing teams to navigate bottlenecks and dependencies. Changes in one part of the stack ripple through the system, slowing innovation and scaling efforts.

EDA eliminates these constraints. By decoupling workflows and enabling asynchronous communication, EDA allows different parts of the stack — agents, data sources, tools, and application layers — to function independently.

Take today’s AI stack, for example. MLOps teams manage pipelines like RAG, data scientists select models, and application developers build the interface and backend. A tightly coupled design forces all these teams into unnecessary interdependencies, slowing delivery and making it harder to adapt as new tools and techniques emerge.

In contrast, an event-driven system ensures that workflows stay loosely coupled, allowing each team to innovate independently.

Application layers don’t need to understand the AI’s internals — they simply consume results when needed. This decoupling also ensures AI insights don’t remain siloed. Outputs from agents can seamlessly integrate into CRMs, CDPs, analytics tools, and more, creating a unified, adaptable ecosystem.

Scaling Agents with Event-Driven Architecture

EDA is the backbone of this transition to agentic systems.

Its ability to decouple workflows while enabling real-time communication ensures that agents can operate efficiently at scale. Platforms like Kafka exemplify the advantages of EDA in an agent-driven system (a minimal sketch follows the list):

  • Horizontal Scalability: Kafka’s distributed design supports the addition of new agents or consumers without bottlenecks, ensuring the system grows effortlessly.
  • Low Latency: Real-time event processing enables agents to respond instantly to changes, ensuring fast and reliable workflows.
  • Loose Coupling: By communicating through Kafka topics rather than direct dependencies, agents remain independent and scalable.
  • Event Persistence: Durable message storage guarantees that no data is lost in transit, which is critical for high-reliability workflows.
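
A minimal sketch of an agent publishing its output to a Kafka topic and another service consuming it, assuming the kafka-python client and a broker at localhost:9092; the topic name and payload are illustrative.

```python
import json
from kafka import KafkaConsumer, KafkaProducer  # pip install kafka-python

# Producer side: an agent emits its result as an event instead of calling consumers directly.
producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)
producer.send("agent.policy.recommended", {"customer_id": "user-123", "policy": "Plan B"})
producer.flush()

# Consumer side (typically a separate process): another agent or service reacts to the event.
consumer = KafkaConsumer(
    "agent.policy.recommended",
    bootstrap_servers="localhost:9092",
    group_id="followup-agent",
    auto_offset_reset="earliest",
    value_deserializer=lambda b: json.loads(b.decode("utf-8")),
)
for message in consumer:
    print("follow-up agent received:", message.value)
```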

[Image: Agents as event producers and consumers on a real-time streaming platform]

Data streaming enables the continuous flow of data throughout a business. A central nervous system acts as the unified backbone for real-time data flow, seamlessly connecting disparate systems, applications, and data sources to enable efficient agent communication and decision-making.

This architecture is a natural fit for frameworks like Anthropic’s Model Context Protocol (MCP).

MCP provides a universal standard for integrating AI systems with external tools, data sources, and applications, ensuring secure and seamless access to up-to-date information. By simplifying these connections, MCP reduces development effort while enabling context-aware decision-making.

EDA addresses many of the challenges MCP aims to solve. MCP requires seamless access to diverse data sources, real-time responsiveness, and scalability to support complex multi-agent workflows. By decoupling systems and enabling asynchronous communication, EDA simplifies integration and ensures agents can consume and produce events without rigid dependencies.

Event-Driven Agents Will Define the Future of AI

The AI landscape is evolving rapidly, and architectures must evolve with it.

And businesses are ready. A Forum Ventures survey found that 48% of senior IT leaders are prepared to integrate AI agents into operations, with 33% saying they’re very prepared. This shows a clear demand for systems that can scale and handle complexity.

EDA is the key to building agent systems that are flexible, resilient, and scalable. It decouples components, enables real-time workflows, and ensures agents can integrate seamlessly into broader ecosystems.

Those who adopt EDA won’t just survive — they’ll gain a competitive edge in this new wave of AI innovation. The rest? They risk being left behind, casualties of their own inability to scale.

Original article: https://seanfalconer.medium.com/the-future-of-ai-agents-is-event-driven-9e25124060d6
