Architecture

Unlocking MCP

I’d been holding off on MCP for a v1 release. Every time I looked into it I ended up down a rabbit hole of sandboxing .. VMs, Docker, Firecracker, gVisor, Seatbelt, jails .. the works. On the way down I came across this fantastic article by Luis Cardoso: Sandboxes for AI. It’s such a great mental model. Well worth a read.

MCP (Model Context Protocol) is an open standard that lets AI agents connect to external tools and services. Traditionally it meant installing untrusted libraries and code on the local system. Anyhow, what short-circuited the whole process was discovering Remote MCP. Yes, it’s a thing! No longer do we need local containers or sandboxing. It all happens on the vendor side, and a growing number of SaaS providers now host their own MCP servers .. Klaviyo, Mixpanel, Stripe, and more every week. The idea is elegantly simple .. instead of me writing a Stripe integration, Stripe publishes an MCP server that exposes its tools over a standard protocol. Agents connect over HTTP, auth by API key or OAuth, discover what’s available, and start using it.
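To make the "discover and use" step concrete, here’s a minimal sketch of the message shapes involved. MCP messages are JSON-RPC 2.0 per the spec; the endpoint URL, API key, tool name and arguments below are all invented placeholders, not any vendor’s real API.

```python
import json

# Hypothetical placeholders .. swap in a real Remote MCP endpoint and credential.
SERVER_URL = "https://mcp.example.com/mcp"
API_KEY = "sk-example"

def make_request(method, params=None, req_id=1):
    """Build a JSON-RPC 2.0 request body for an MCP method."""
    body = {"jsonrpc": "2.0", "id": req_id, "method": method}
    if params is not None:
        body["params"] = params
    return body

# Step 1: discover what tools the server exposes.
list_tools = make_request("tools/list")

# Step 2: call one of them (name and arguments are illustrative).
call_tool = make_request(
    "tools/call",
    params={"name": "create_invoice", "arguments": {"amount": 4200}},
    req_id=2,
)

print(json.dumps(list_tools))
```

Both bodies would be POSTed to the server with the credential in a header; the point is that the same two methods work against any vendor’s server.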

One protocol, hundreds of services. Gold.

One Client to Rule Them All

So it’s in, and you can now add Remote MCP servers just like any other integration from the Workspace Settings modal.

And the tuning to make it sing

One thing that became really obvious during testing (thanks PostHog) was that bolting on more and more MCP servers really blows out the context window. To combat that we have two new features: 1) progressive disclosure; and 2) improved MCP prompt generation.

Progressive Disclosure

Progressive disclosure protects the context window by putting only enough info in the prompt to let the LLM know where to look for more. It does cost an extra tool call .. a query for ‘How do I use this MCP server?’ .. but if the prompt is on point it’s no big deal, and it’s a better option than the alternative.
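The shape of the idea, as a sketch: the prompt carries one summary line per server, and the full tool list only materialises when the LLM asks for it. The server names, summaries, and tool names here are invented examples, not the real registry.

```python
# Hypothetical registry: cheap summaries up front, full tool lists behind a call.
SERVERS = {
    "stripe": {
        "summary": "Payments: invoices, charges, refunds.",
        "tools": ["create_invoice", "list_charges", "refund_charge"],
    },
    "posthog": {
        "summary": "Product analytics: events, insights, feature flags.",
        "tools": ["query_events", "list_insights", "get_feature_flag"],
    },
}

def build_prompt():
    """What goes in the context window: one line per server, no tool schemas."""
    lines = ["Available MCP servers (call describe_server(name) for details):"]
    for name, info in SERVERS.items():
        lines.append(f"- {name}: {info['summary']}")
    return "\n".join(lines)

def describe_server(name):
    """The extra tool call: expand one server into its full tool list on demand."""
    info = SERVERS[name]
    return f"{name} tools: {', '.join(info['tools'])}"

print(build_prompt())
print(describe_server("posthog"))
```

The prompt stays a few lines long no matter how many servers are bolted on; only the server the LLM actually wants gets expanded.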

MCP Prompt generation

This is the blurb that tells the LLM what this MCP server is all about. Too much and we waste context (or get useless tool calls); too little and the tool never gets called .. Ay, caramba!

This is now easier as well, with an MCP call to fetch the tool list and a Prompt Generator to boil that down into something palatable. The test run against the PostHog MCP server returned 52 tools and 19k of text!! The Prompt Generator summarised that nicely down to 1,700 bytes (about 425 tokens). Pretty good for 52 tools.
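A sketch of the kind of boiling-down involved: collapse each tool’s verbose entry into a name-plus-gist line and drop the JSON schemas, then estimate tokens with the rough 4-bytes-per-token heuristic (which is how 1,700 bytes works out to about 425 tokens). The tool entries below are invented, not PostHog’s actual list, and the real Prompt Generator is LLM-driven rather than this mechanical truncation.

```python
# Invented examples of the verbose per-tool entries a tools/list returns.
raw_tools = [
    {
        "name": "query_events",
        "description": "Run a query over captured events and return rows. Supports filters.",
        "inputSchema": {"type": "object", "properties": {"query": {"type": "string"}}},
    },
    {
        "name": "list_insights",
        "description": "List saved insights in the current project, newest first.",
        "inputSchema": {"type": "object", "properties": {"limit": {"type": "integer"}}},
    },
]

def summarise(tools):
    """Keep the tool name and the first sentence of its description; drop schemas."""
    lines = []
    for t in tools:
        gist = t["description"].split(". ")[0].rstrip(".")
        lines.append(f"{t['name']}: {gist}.")
    return "\n".join(lines)

def approx_tokens(text):
    """Rough heuristic: ~4 bytes per token for English prose."""
    return len(text.encode("utf-8")) // 4

print(summarise(raw_tools))
print(approx_tokens("x" * 1700))  # 425
```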

No New Tables

I’m quietly pleased about this one. No new database tables. The existing integrations, credentials and manifest infrastructure handles everything. An MCP server is just another integration with provider type “mcp”. Credentials are encrypted the same way. The agent manifest references them the same way. Feels good.
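Roughly what that looks like, as a sketch .. the field names are illustrative, not the actual schema, and the endpoint shown is just an example:

```python
from dataclasses import dataclass

@dataclass
class Integration:
    name: str
    provider: str        # "mcp" marks a Remote MCP server; no new table needed
    credential_ref: str  # same encrypted-credential store as any other integration
    endpoint: str = ""   # where the MCP server lives, for provider == "mcp"

def is_mcp(integration):
    """An MCP server is just an integration with provider type 'mcp'."""
    return integration.provider == "mcp"

# Hypothetical record .. rides the existing integrations infrastructure as-is.
stripe_mcp = Integration(
    name="Stripe",
    provider="mcp",
    credential_ref="cred_123",
    endpoint="https://mcp.example.com/stripe",
)

print(is_mcp(stripe_mcp))  # True
```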

What’s Next

Of course one thing leads to another. If only I could somehow join JSON responses from different MCPs instead of relying on the LLM to get it right…