How to Use MCP to Integrate External Tools in Your Mendix Chatbot

Key Takeaways
- MCP isn’t just for exposing tools; it’s for consuming them too. The new MCP Client lets your Mendix app access tools from other apps, empowering the LLM to decide how and when to use them in conversations.
- No need to reinvent the wheel. You can reuse existing tools and prompts—locally or from open-source MCP servers like GitHub, Slack, or Google Drive—and plug them into your chat experience without custom integrations.
- Low code makes AI orchestration feel effortless. Thanks to modules like GenAI Commons, Conversational UI, and MCP Client, wiring up external tools to your chatbot is just a few microflows away.
In my last blog, I showed how you can expose your Mendix microflows as tools that Large Language Models (LLMs) like Claude can discover and invoke via the Model Context Protocol (MCP). That opened the door for AI assistants to tap directly into your Mendix logic. Now, we’re flipping things around.
With the new MCP Client module, your Mendix app can act as a consumer of external MCP servers, meaning your app can discover and invoke prompts and tools running elsewhere. This makes it easy to plug powerful external logic or AI services into your Mendix apps without creating any custom integrations.
What Is the MCP Client Module?
The MCP Client module lets your Mendix app connect to any MCP-compliant server, whether that’s another Mendix app, a high-code-based tool service, or an agent hosted in the cloud.
That means:
- Reuse logic across apps — have one Mendix app provide tools, and another consume them
- Connect to third-party AI services—no REST or SDK wrangling required
- Chain tools and prompts into full GenAI workflows—all from low code
In short, Mendix can now speak both MCP server and client, making it well suited to building powerful AI-enhanced applications. It’s like plugging your Mendix app into a shared toolbox of AI-powered capabilities.
Here’s how that works at a high level for our example: a Mendix app acts as an MCP Client, connected to an external MCP Server (which could be another Mendix app). When the user interacts with the chatbot UI, the client dynamically discovers tools from the server and includes them in the request to the LLM. If the model chooses to invoke one, the client passes the call to the server, gets the result, and continues the conversation.
Please note that while the tools in this example handle data retrieval, MCP tools can just as easily trigger actions, update records, or perform any other operation your app or external system supports.
Everything runs over standard MCP HTTPS calls, and no custom integrations are required.
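The flow above can be sketched as a simple loop. This is an illustrative stand-in, not the Mendix module internals: the MCP server and the LLM here are in-memory toys, and all names are made up for the example.

```python
# Illustrative sketch of the MCP client/LLM loop: discover tools, offer them
# to the model, proxy any tool calls to the server, repeat until done.

def discover_tools(server):
    """Ask the MCP server which tools it exposes (MCP 'tools/list')."""
    return server["tools"]

def call_tool(server, name, arguments):
    """Forward a tool call to the MCP server (MCP 'tools/call')."""
    return server["handlers"][name](**arguments)

def chat_turn(server, llm, user_message):
    tools = discover_tools(server)         # 1. discover tools dynamically
    response = llm(user_message, tools)    # 2. send message + tools to the LLM
    while response.get("tool_call"):       # 3. proxy tool calls until the model answers
        call = response["tool_call"]
        result = call_tool(server, call["name"], call["arguments"])
        response = llm(result, tools)
    return response["text"]

# A toy MCP server exposing one tool (names are hypothetical)...
ticket_server = {
    "tools": [{"name": "count_open_tickets", "description": "Count open tickets"}],
    "handlers": {"count_open_tickets": lambda: 7},
}

# ...and a toy "LLM" that calls the tool once, then answers.
def make_toy_llm():
    state = {"called": False}
    def toy_llm(message, tools):
        if not state["called"]:
            state["called"] = True
            return {"tool_call": {"name": "count_open_tickets", "arguments": {}}}
        return {"text": f"There are {message} open tickets."}
    return toy_llm

print(chat_turn(ticket_server, make_toy_llm(), "How many tickets are open?"))
# prints: There are 7 open tickets.
```

In the real setup, `discover_tools` and `call_tool` are MCP HTTPS calls handled by the MCP Client module, and the loop is driven by the GenAI connector.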

Note: The latest version of the GenAI Showcase App shows a step-by-step guide on how to connect an MCP Client in your Mendix app to an external server to consume tools and then use them in your Mendix chatbot.
Connect to an MCP Server from your Mendix app
Prerequisites
You can choose to start from scratch or extend an existing application. If you start from scratch, the Blank GenAI App, which already includes all necessary GenAI modules, is the best starting point. If you are extending an existing app, make sure to install the MCP Client, GenAI Commons, Conversational UI, and your favorite GenAI connector module from the Marketplace. You also need access to an MCP Server (running in a Mendix app or externally); the MCP Server example in the GenAI Showcase App works well for this.
Create your chatbot
If you don’t have a chatbot in your application yet, this section guides you through creating one step by step. Even if you have already implemented one, it’s best to follow these steps and make the changes accordingly.
Step one
The MCP Client module ships with a couple of example microflows that help you get started quickly. Go to the module and copy the ChatContext_MCPClient_ActionMicroflow microflow into your own module.
Step two
Since the microflow is excluded by default, you’ll need to include it first.
Step three
Add a data view to your page with a microflow as data source. Create a microflow named DS_ChatContext_Create:
- Inside that microflow, first retrieve one DeployedModel from the database (or write a custom retrieve to select the right LLM).
- Next, add the New Chat action from the toolbox. Select the retrieved model and, for the action microflow, choose the previously included action microflow. You can leave the system prompt and provider name inputs empty, since they are optional.
- Finally, return the ChatContext object at the end of the microflow.
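The steps above can be sketched as follows. This is a rough Python analogue of the microflow, not the GenAI Commons API: the classes are simplified stand-ins, and the single-model database is an assumption for the example.

```python
# Python stand-in for the DS_ChatContext_Create microflow described above.
from dataclasses import dataclass, field

@dataclass
class DeployedModel:          # stand-in for the GenAI Commons DeployedModel entity
    name: str

@dataclass
class ChatContext:            # stand-in for the Conversational UI ChatContext entity
    model: DeployedModel
    action_microflow: str
    history: list = field(default_factory=list)

DATABASE = [DeployedModel("my-configured-llm")]   # assumption: one configured model

def ds_chat_context_create() -> ChatContext:
    model = DATABASE[0]   # step 1: retrieve one DeployedModel from the database
    # step 2: "New Chat" with the retrieved model and the copied action microflow;
    # system prompt and provider name are optional and left empty here
    return ChatContext(model=model,
                       action_microflow="ChatContext_MCPClient_ActionMicroflow")
    # step 3: the returned ChatContext feeds the data view on the page

print(ds_chat_context_create().model.name)
```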
Step four
Add a button inside the dataview to open page ConversationalUI_FullScreenChat (or a different ConversationalUI page) from the ConversationalUI module.
Ensure that your user has the module role User from ConversationalUI assigned. Your chat is now ready to be used.
But wait: how does Mendix know which MCP server to use?
How does this technically work?
Registering tools with the request
Let me first explain what you just blindly copied to your own module:
- The ChatContext_MCPClient_ActionMicroflow makes sure that a message typed by a user is correctly sent to the right model, with all its important configurations and context (such as the conversation history).
- Inside the action microflow, an MCPServerConfiguration object is retrieved from the database. Feel free to adjust this, retrieve a different MCPServerConfiguration object, and pass it to the Add all tools from MCP server action on the request. This action connects to your MCP Server, discovers all exposed tools, and adds them to the request that is sent to the LLM.
- For each message sent to the model, the MCP tools are known, and the model can choose to call them.
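On the wire, the Add all tools from MCP server action amounts to an MCP tools/list call (MCP uses JSON-RPC 2.0), whose results are merged into the tool list sent to the LLM. The server response below is made up for illustration; the tool names echo the showcase ticket system.

```python
# MCP tool discovery: a JSON-RPC 2.0 'tools/list' request...
tools_list_request = {"jsonrpc": "2.0", "id": 1, "method": "tools/list"}

# ...and a response an MCP server *might* return (illustrative example).
tools_list_response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "tools": [
            {
                "name": "count_open_bug_tickets",
                "description": "Returns the number of open bug tickets",
                "inputSchema": {"type": "object", "properties": {}},
            },
            {
                "name": "count_open_feature_tickets",
                "description": "Returns the number of open feature tickets",
                "inputSchema": {"type": "object", "properties": {}},
            },
        ]
    },
}

# Every discovered tool is attached to the request sent to the LLM, so the
# model can decide per message whether to call one.
llm_request = {
    "messages": [{"role": "user", "content": "How many tickets are open?"}],
    "tools": tools_list_response["result"]["tools"],
}
print(len(llm_request["tools"]))
```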
Tool invocation and proxying
How can your Mendix app call a tool in another app? It starts with the MCP tools being registered with a request along with their input arguments. When the model returns a tool call for an MCP tool, the MCPClient_ToolMicroflow is executed. Using the MCP Client’s Call Tool action, the microflow forwards the request to the MCP Server, acting as a middleman. The server’s response is then sent back to the LLM, which can process the user’s request—either by replying directly or using other tools if needed.
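The proxy step, expressed in JSON-RPC terms: the model's tool call becomes an MCP tools/call request, and the server's result is fed back to the LLM as a tool result. The translation functions and all names below are illustrative, not the module's actual implementation.

```python
def to_mcp_tool_call(llm_tool_call, request_id):
    """Translate an LLM tool call into an MCP 'tools/call' request."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {
            "name": llm_tool_call["name"],
            "arguments": llm_tool_call["arguments"],
        },
    }

def to_llm_tool_result(mcp_response):
    """Turn the MCP server's response into a tool-result message for the LLM."""
    text = mcp_response["result"]["content"][0]["text"]
    return {"role": "tool", "content": text}

# The model asked for a tool call (hypothetical tool name)...
llm_tool_call = {"name": "count_open_bug_tickets", "arguments": {}}
mcp_request = to_mcp_tool_call(llm_tool_call, request_id=2)

# ...the server replied (MCP tool results carry a 'content' list)...
mcp_response = {"jsonrpc": "2.0", "id": 2,
                "result": {"content": [{"type": "text", "text": "4"}]}}

# ...and the result goes back into the conversation for the LLM to use.
print(to_llm_tool_result(mcp_response))
# prints: {'role': 'tool', 'content': '4'}
```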
As you can see, no custom integrations or changes are needed to make this work for any MCP server. Just plug and play to enrich your chat experience with powerful tools that you don’t need to manage in the same application.
Establish the MCP connection
Now let’s wire it up. Your app needs to know where the MCP server lives.
First, we need to allow administrators to set up and manage MCP server configurations, so assign the MCP Client Administrator module role to your administrator role. Second, add the MCPServerConfiguration_Overview page from the MCP Client module to your navigation. Then run the app, log in as administrator, and navigate to this page. From there, you can create your first MCP server configuration and save it to the database.
If you need to authenticate by passing custom HTTP headers, you can create a get credentials microflow. This microflow cannot have any input parameters and needs to return a list of System.HttpHeader. You can use the Config: Create Http Header and add to list toolbox action to do that. You can then select this microflow as the Get Credentials microflow when creating the MCP Server Configuration as an administrator at runtime. Take a look at the GetCredentials_EXAMPLE microflow for a simplified example of how to create a Get Credentials microflow.
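In Python terms, a get credentials microflow boils down to this: no input parameters, a list of headers out. The HttpHeader class is a stand-in for System.HttpHeader, and the header names and values are placeholders; keep real secrets in constants or encrypted settings, never hard-coded.

```python
from dataclasses import dataclass

@dataclass
class HttpHeader:          # stand-in for Mendix System.HttpHeader
    Name: str
    Value: str

def get_credentials() -> list:
    """No input parameters; returns a list of HTTP headers (placeholders)."""
    headers = []
    # Equivalent of the "Config: Create Http Header and add to list" action:
    headers.append(HttpHeader(Name="Authorization", Value="Bearer <token>"))
    headers.append(HttpHeader(Name="X-Api-Key", Value="<api-key>"))
    return headers

for h in get_credentials():
    print(h.Name)
```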
Don’t have an MCP Server at hand? This might be the moment to revisit the previous blog post about exposing logic from your Mendix app via MCP. You can build your own MCP Server or reuse the example in the GenAI Showcase App. When running the showcase app locally, the endpoint likely looks like this: http://localhost:8080/mcp-ticketsystem
Rerun your application and ask a question that can be answered through the available tools. If you’re using the showcase MCP server, a question like “How many tickets are open?” will likely trigger two tool calls that check how many bug and feature tickets are open; the model can then use this information to calculate the total number of open tickets.
What’s next?
Now that your Mendix apps can both expose and consume tools via MCP, you’re ready to build more advanced agentic workflows—chaining applications together, offloading tasks to AI, or integrating with internal microservices across teams. There’s really no excuse anymore: you can create a seamless AI-powered landscape across your enterprise, using low-code.
Even better, there’s a growing ecosystem of open-source MCP servers available for third-party services like GitHub, Slack, or Google Drive. You can self-host these and let your Mendix app plug right in. So—what external service will your Mendix app connect to first?
Frequently Asked Questions
What is Model Context Protocol, or MCP?
An open protocol that standardizes how LLMs connect to apps autonomously. Just as USB-C provides a universal port for peripherals, MCP provides a universal interface for LLMs to “plug in” to tools and other resources.
Why does MCP matter?
MCP allows for tool discovery without having to preconfigure everything at design time. Your agent can easily connect to various MCP servers without implementing custom API integrations. Suddenly, every service available through MCP can be integrated into your agentic app.
Why use Mendix with MCP?
Integrating both is a match made in heaven: you can easily build powerful microflows in Mendix and start an MCP server from within your app to make those microflows available to agentic AI systems, such as another Mendix app using the MCP Client module. This makes it easy to power agentic workflows that connect your Mendix logic to external AI agents.
When should I use the MCP Client vs. Server module?
Use the MCP Client module when the LLM in your Mendix app needs to integrate with tools outside of your application. For example, to connect to another service or app that exposes tools via MCP. Use the MCP Server module when you want to expose your own Mendix microflows as tools that can be discovered and called by external MCP clients (like another Mendix app or Claude Desktop).