IBM has announced a draft of a new Agent Communication Protocol (ACP) designed to standardise how AI agents interact and collaborate. The protocol and its architecture are currently in the ‘alpha’ stage.
AI agent systems today rely on a patchwork of communication standards, which adds a layer of complexity to integration. ACP addresses this by standardising interactions tailored specifically for agents. “ACP simplifies integration and promotes effective collaboration across agent-based ecosystems,” said the company.
IBM’s offering is similar to Anthropic’s Model Context Protocol (MCP), released last year. However, IBM describes ACP as an extension of MCP.
“MCP currently provides essential mechanisms for sharing context with models and agents, such as tools, resources, and prompts. ACP leverages these capabilities while explicitly adding the concept of agents as primary participants,” the company added.
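For a rough sense of what those context-sharing primitives look like in practice, the sketch below defines a single tool using the MCP Python SDK’s FastMCP helper. The server name and tool logic are invented for illustration, and the closing comment reflects ACP’s stated direction rather than any published API.

```python
# Minimal MCP server exposing one tool -- the kind of context-sharing
# primitive IBM says ACP builds on. Server name and tool are illustrative.
from mcp.server.fastmcp import FastMCP

server = FastMCP("demo-context-server")

@server.tool()
def first_sentence(text: str) -> str:
    """Return the first sentence of the given text."""
    return text.split(".")[0] + "."

# MCP stops at primitives like tools, resources and prompts; ACP's stated
# addition is agents as primary participants, for which IBM has not yet
# published a concrete API.
if __name__ == "__main__":
    server.run()  # serves the tool over stdio by default
```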
IBM also said that MCP is recognised as an interim solution, and that its original design, which focuses on context sharing, makes it an ‘imperfect fit’ for ACP’s agent-communication requirements. “ACP plans to diverge from MCP during its Alpha phase, addressing this misalignment by establishing a standalone standard specifically optimised for robust agent interactions,” read the documentation.
ACP is said to prioritise building useful features for AI agents first, with standardisation to follow once those features have proven their value and gained adoption.
ACP is part of BeeAI, IBM’s research project dedicated to AI agents. The company is currently documenting ACP’s architecture and is inviting developers to discuss the ideas being implemented on GitHub.
When Anthropic released MCP last year, a wide range of MCP servers quickly became available, utilising APIs from some of today’s most popular apps, including Spotify, Google Maps, Todoist and Brave.
There’s also a website that offers a much more user-friendly interface for exploring the available MCP servers. Several developers on X have recently been using MCP to build servers and use cases that connect LLMs with data sources and external tools.
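For a sense of how developers wire these servers up, the sketch below shows one plausible client-side pattern using the MCP Python SDK: launch a local server over stdio and list the tools it exposes. The server script name is a placeholder.

```python
# Hypothetical MCP client: spawn a local server over stdio, initialise a
# session, and print the tools it advertises. "demo_server.py" stands in
# for any MCP server script.
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

params = StdioServerParameters(command="python", args=["demo_server.py"])

async def main() -> None:
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            result = await session.list_tools()
            for tool in result.tools:
                print(f"{tool.name}: {tool.description}")

asyncio.run(main())
```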