Model Context Protocol, agents.json, and llms.txt: standardizing communication between large language models and tools.

Modern LLMs are getting much better at answering questions and helping with everyday tasks. They have also become much better at hiding shortcomings such as hallucinations. One way to improve large language models is to provide them with context, or to equip them with "tools" and tell them how to use them.

Many frameworks can help you equip language models with tools, providing them with data or functions to improve their answers. Problems arise when you change either your tools or your LLMs and must redo all the work to integrate the new tool with the new LLM. It would be best if you could reuse the connection on either side, allowing you to be more flexible with your choices.

Since large language models have attracted a lot of attention and seen their usage spread over the last few years, a need for standardization has arisen.

Anthropic, the company behind the Claude models, proposed a standard called the Model Context Protocol (MCP) in late 2024. Since its introduction, the standard has been adopted by both Google and OpenAI, and it is steadily making its way toward becoming the standard for connecting resources to large language models. Implementing MCP for your resources ensures that Anthropic, OpenAI, and Google language models will be able to use them.

Other solutions have been introduced as well, such as agents.json and llms.txt. Like MCP, these help developers make their solutions available to large language models. All these standards are fairly new and come with some drawbacks, but they are a step in the right direction, and they are somewhat complementary. Implementing all of them for your solutions might prove time-consuming but could be worth the effort.

We are now going to detail each of these solutions and see how you can make your solutions or APIs available to language models, whether to improve local agents or to spread their usage through LLMs. Implementing standard connectors for data and tools will help you develop local agents that can evolve easily and are not dependent on any particular solution.

Model Context Protocol

The Model Context Protocol was introduced by Anthropic in late 2024 in an effort to standardize communication between large language models and resources. It already received its first revision at the end of March 2025. The objective is to be the USB-C equivalent for connections between tools and LLMs.

Figure 1 – General architecture showing a server connected to several clients – https://modelcontextprotocol.io/introduction

The protocol is a client-server protocol. The MCP client sits on the language model's side, inside the host application: it establishes connections to MCP servers and may use those connections to access the resources behind each server in order to answer the queries it receives.

The protocol is based on JSON-RPC 2.0 messages to send requests and receive answers. Connections between the client and the server are stateful, which is not ideal and will hopefully change in a future version of the standard. This is not really a problem for a local agent using local resources, but if you provide a service over the internet, maintaining thousands of connections might prove challenging.
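To make the wire format concrete, here is a minimal sketch of a JSON-RPC 2.0 exchange as an MCP client might perform it. The method name `tools/list` follows the MCP specification; the exact tool payload shown is illustrative, not taken from a real server.

```python
import json

# A JSON-RPC 2.0 request asking the server which tools it exposes.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/list",
    "params": {},
}

# A matching response from the server, advertising one tool together
# with a JSON-Schema description of its inputs.
response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "tools": [
            {
                "name": "add",
                "description": "Add two integers",
                "inputSchema": {
                    "type": "object",
                    "properties": {
                        "a": {"type": "integer"},
                        "b": {"type": "integer"},
                    },
                    "required": ["a", "b"],
                },
            }
        ]
    },
}

# Requests and responses are correlated by their id field.
assert response["id"] == request["id"]
print(json.dumps(request))
```

Because every message is plain JSON, any language with a JSON library can implement either side of the protocol.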

MCP servers can expose three kinds of capabilities to the client:

  • Resources should be used when you want to expose data to the LLM to add to its context. The data can take many forms, ranging from files to API endpoints, and many formats: it need not be text, and can be binary data such as PDF, audio, or video files.
  • Tools are a way to equip the language model with functions it can call that potentially have an impact on the real world. Using tools, LLMs can call APIs, update databases, or simply perform arithmetic operations with a guaranteed correct result.
  • Prompts are a way for servers to provide the client and/or user with reusable prompts, so they have useful prompts at their disposal without having to type them.
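The tool mechanism can be sketched in plain Python. This is a hypothetical, minimal tool registry, similar in spirit to what an MCP server exposes: each tool carries a name, a description the model reads, and a JSON-Schema-style input description; the `add` tool and the registry layout are assumptions for illustration.

```python
def add(a: int, b: int) -> int:
    """Add two integers with a guaranteed correct result."""
    return a + b

# Each entry pairs a machine-readable input schema with a handler.
TOOLS = {
    "add": {
        "description": "Add two integers",
        "input_schema": {
            "type": "object",
            "properties": {"a": {"type": "integer"}, "b": {"type": "integer"}},
            "required": ["a", "b"],
        },
        "handler": add,
    }
}

def call_tool(name: str, arguments: dict):
    # The model chooses the tool name and its arguments; we only dispatch.
    return TOOLS[name]["handler"](**arguments)

print(call_tool("add", {"a": 2, "b": 3}))  # → 5
```

A real MCP server adds the JSON-RPC transport around this pattern, but the core idea is the same: descriptions for the model, handlers for the side effects.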

There is also a philosophical difference between Resources, Tools, and Prompts in how they should be used. Resources are supposed to be application-controlled, meaning that different clients may not surface them in the same way. "Claude Desktop currently requires users to explicitly select resources before they can be used", but this might not be the case for every client, especially one you write yourself.

Tools are designed to be model-controlled, meaning the model can use them as it sees fit. This is essentially what you already have when you build agents today: you provide functions together with their descriptions to the language model, and it decides whether it needs a particular function to answer a query.

Prompts are designed to be user-controlled, meaning they are supposed to be exposed to the end user, who explicitly chooses which one to use and how.

By building a Model Context Protocol server, you have several options for exposing data and functions to large language models, and you only have to do it once per protocol version. If you want to equip your client with several tools and/or resources, you only have to provide a description of the resources with their URIs to access them through any large language model, even if you decide to switch models at some point during the lifetime of your project.

While it simplifies a lot of things, MCP is still not the be-all and end-all. Maintaining stateful connections might prove problematic if you want to equip your language models with many resources. Also, since resource and tool descriptions must fit in the model's context, connecting an unbounded number of tools remains a limitation, even as context windows keep growing.

When exposing tools and data, you should always be careful about how they are used. Using MCP does not exempt you from requiring authentication and guarding against attacks on your system. A request is not harmless just because it comes from an AI.

Agents.json

Agents.json is a standard that describes an API to a large language model so that it knows how to use it efficiently. It is built on top of OpenAPI, which is the standard for describing what API endpoints can do and how they should be used.

Adding agents.json to your API is quite straightforward, and you do not need to modify anything that already exists. You simply add an agents.json file at a specific location, /.well-known/agents.json, for it to be discovered by an LLM. The language model will then be able to translate any natural-language query into an API call.
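Because the file lives at a well-known path, discovery amounts to joining the site root with that path. A small sketch, using a hypothetical example domain:

```python
from urllib.parse import urljoin

def agents_json_url(site: str) -> str:
    # The leading slash makes the path absolute, so any existing
    # path on the site URL is replaced by the well-known location.
    return urljoin(site, "/.well-known/agents.json")

print(agents_json_url("https://api.example.com/v1/"))
# → https://api.example.com/.well-known/agents.json
```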

Figure 2- agents.json schema description – https://github.com/wild-card-ai/agents-json

The specification file is in JSON format and extends OpenAPI. The full schema can be found at https://docs.wild-card.ai/agentsjson/schema. When writing the specification file, you describe flows, which are series of one or more API calls. Flows contain actions, which are the actual API calls, and links, which map API route parameters to the LLM's inputs or to other calls' parameters, so the model knows how to chain them properly.
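The flow/action/link structure can be sketched as follows. This is an illustrative fragment built in Python; the endpoint names, the `search_then_fetch` flow, and the exact field names are assumptions for illustration, so check the published schema for the authoritative shape.

```python
import json

# One flow chaining two hypothetical API calls: a search followed by
# fetching the first result. The link wires the output of "search"
# into the input of "fetch".
agents_spec = {
    "agentsJson": "1.0",
    "baseUrl": "https://api.example.com",
    "flows": [
        {
            "id": "search_then_fetch",
            "title": "Search items, then fetch the first result",
            "actions": [
                {"id": "search", "operationId": "searchItems"},
                {"id": "fetch", "operationId": "getItem"},
            ],
            "links": [
                {
                    "origin": {"actionId": "search", "fieldPath": "results[0].id"},
                    "target": {"actionId": "fetch", "fieldPath": "parameters.itemId"},
                }
            ],
        }
    ],
}

print(json.dumps(agents_spec, indent=2))
```

The key idea is that links spare the model from guessing how to thread values between calls: the chaining logic is declared once in the file.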

Since it relies heavily on OpenAPI, it is easiest to set up if you already have an OpenAPI specification for your API; otherwise it can prove a bit cumbersome. The main differences between agents.json and the MCP protocol are that agents.json only covers exposing APIs to large language models, but it does not require a stateful connection to work. The project keeps a registry of all publicly available agents.json files if you want to add them to your agent. There are not many at the moment, but hopefully the number will grow.

Llms.txt

Llms.txt is a proposal to help LLMs better understand websites. The idea is to add an llms.txt file for language models to your website, much as you would add a robots.txt file for crawlers.

The file uses Markdown, as it is easily understood by language models. It contains a description of the website and, optionally, a list of URLs of interest to the language model. The proposal also suggests providing a Markdown version of each URL of interest, served from the same URL with an additional .md extension. The idea behind these per-URL Markdown files is to help the LLM fit the webpage into its context by removing all boilerplate content.
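A minimal llms.txt might look like the sketch below, following the format described at llmstxt.org: an H1 with the site name, a blockquote summary, and sections listing the URLs of interest. The project name and URLs here are placeholders.

```markdown
# Example Project

> Example Project is a hypothetical library for parsing widgets.

## Docs

- [Quickstart](https://example.com/docs/quickstart.md): install and first steps
- [API reference](https://example.com/docs/api.md): full function list

## Optional

- [Changelog](https://example.com/changelog.md)
```

Sections under "Optional" signal content the model can skip when context is tight.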

Figure 3 – Format for llms.txt files – https://llmstxt.org

Helping LLMs find relevant information on your website may lead these models to select your site as a source for their users. If a language model cannot understand what your website is about or where the information lives, it will not surface that information in its answers.

Llmstxt.org maintains two repositories of websites that have llms.txt files if you want to run some tests with your favourite model.

As large language models see their usage spread, making information available to them matters if you want it to reach end users. LLMs are becoming an interface of choice for many tasks, so making your tools and APIs available to them is essential if you want them to be part of end users' workflows.

Adding an llms.txt file to your website and duplicating URLs for language models does not come with any real drawbacks: it does not open any vulnerability or grant any unwanted access. You can generate the Markdown files automatically from your content, so it scales easily.

There are many ways to expose your data, APIs, tools, and so on to LLMs, and recently a lot of effort has gone into standardizing communication between large language models and resources. The Model Context Protocol is a very promising approach that has been adopted by all major LLM providers and can expose many different kinds of resources to language models. It is not the only standard under revision at the moment. Agents.json and llms.txt are less ambitious standards, each addressing only one kind of resource, but they are still very useful.

Making your resources available to large language models is a new way to reach users who prefer language models as their interface for accessing data and tools.

Keeping an eye on emerging standards is important to make sure your solutions are available through as many channels as possible.

Resources