If you’ve seen an explosion of mentions and references to MCP seemingly out of nowhere, you are not alone. A quick Google Trends search shows interest is at an all-time high, and although it has yet to be formally established as ‘the’ standard for LLMs to connect to applications and services, it’s certainly heading in that direction.

And looking at the most popular tool directory (punkpeye/awesome-mcp-servers), the number of MCP servers has grown from 129 at the start of the year to over 420 today. In the next 5 minutes I’ll run you through what you, as an engineer, need to know about all things MCP.
What is MCP?
Think of MCP as GraphQL for your APIs when working with LLMs. Rather than building custom integrations for every API your LLM application needs (like Slack, GitHub, AWS), MCP, introduced by Anthropic, provides a standardised way to connect LLM clients to various services through one intermediary layer known as an MCP server.
MCP consists of two main components:
- MCP Server: Implements functionality specific to interacting with an external API and exposes this through the MCP protocol.
- MCP Client: An SDK your LLM application (for example, Cursor, VS Code extensions, or cloud-based tools) can use to communicate with MCP servers.
Once a particular API provider exposes an MCP-compatible server, any LLM app with an MCP client can integrate with it seamlessly, regardless of the underlying API details. This eliminates the need for developers to repeatedly build and maintain custom integration logic.
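To make this concrete, here is roughly what a minimal MCP server looks like, closely following the quickstart pattern from Anthropic's official TypeScript SDK (@modelcontextprotocol/sdk). Treat it as a sketch: exact class and method names can shift between SDK versions, and the "add" tool is purely illustrative.

```typescript
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

// Declare the server and the metadata it reports during initialisation.
const server = new McpServer({ name: "demo-server", version: "1.0.0" });

// Expose one illustrative tool; a real server would wrap an external API here.
server.tool(
  "add",
  { a: z.number(), b: z.number() },
  async ({ a, b }) => ({
    content: [{ type: "text", text: String(a + b) }],
  })
);

// Serve over stdio so any MCP client (Cursor, Claude Desktop, etc.) can spawn it.
const transport = new StdioServerTransport();
await server.connect(transport);
```

Any MCP-capable client can now discover and call the tool without knowing anything about how it is implemented.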
How Does MCP Work?
MCP uses JSON-RPC for communication over standard input/output (stdio) or HTTP, and there's work underway toward secure remote transport over the internet (more on that later).
The MCP protocol has built-in "discovery" functionality similar to GraphQL's introspection. The MCP client (the LLM application) sends an initialisation request to the MCP server, and in response the server lists its capabilities, which generally fall into three categories (sketched below):
- Prompts: Templated, reusable prompt snippets for your LLM, improving the consistency and quality of interactions.
- Resources: Read-only access to data sources (like files or database queries), providing valuable context to your LLM interactions.
- Tools: Specialised functions enabling actions with side effects (like creating GitHub issues or deploying infrastructure).
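On the wire, this discovery phase is just a couple of JSON-RPC 2.0 exchanges. The shapes below are a simplified sketch of the initialize and list messages; the protocolVersion string and clientInfo values are illustrative, so consult the MCP specification for the exact, version-specific fields.

```typescript
// Client -> server: initialisation request (simplified).
const initializeRequest = {
  jsonrpc: "2.0",
  id: 1,
  method: "initialize",
  params: {
    protocolVersion: "2024-11-05", // the spec revision the client speaks
    clientInfo: { name: "my-llm-app", version: "0.1.0" },
    capabilities: {},
  },
};

// Client -> server: after initialisation, enumerate each capability type.
const listToolsRequest = { jsonrpc: "2.0", id: 2, method: "tools/list" };
const listResourcesRequest = { jsonrpc: "2.0", id: 3, method: "resources/list" };
const listPromptsRequest = { jsonrpc: "2.0", id: 4, method: "prompts/list" };
```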
Why use it?
MCP has several advantages:
- Simplified Integrations: Quickly connect your LLM apps to various APIs without writing repetitive custom integrations.
- Increased Efficiency: Leverage existing MCP server implementations to access diverse functionalities instantly.
- Standardised Approach: Offers a consistent API communication method across different LLM deployments and applications.
- Improved Prompting: Utilise shared prompt templates for consistently high-quality LLM outputs.
- Enhanced LLM Capabilities: Extend your LLM workflows by integrating actionable tools that perform real-life tasks.
Servers for engineers
Here’s a list to get you started:
- alexei-led/k8s-mcp-server: Lets AI assistants securely run Kubernetes CLI commands (kubectl, helm, istioctl, argocd) in a controlled Docker environment. Great for quickly spinning up or tearing down clusters, deployments, and other infra tasks. https://github.com/alexei-led/k8s-mcp-server
- flux159/mcp-server-kubernetes: A TypeScript-based Kubernetes MCP server for managing pods, deployments, and services. Offers a straightforward, standardised interface for cluster operations. https://github.com/flux159/mcp-server-kubernetes
- rohitg00/kubectl-mcp-server: Another clean Kubernetes-focused MCP implementation that unifies cluster interactions and bridges them into AI workflows. https://github.com/rohitg00/kubectl-mcp-server
- nwiizo/tfmcp: A Terraform MCP server implemented in Rust, enabling AI assistants to inspect, plan, and apply Terraform configurations. Ideal for infrastructure-as-code workflows and multi-cloud setups. https://github.com/nwiizo/tfmcp
- QuantGeekDev/docker-mcp: A Docker-focused MCP server for container management and operations. Lets AI agents handle container creation, removal, and inspection, which is useful for DevOps pipelines. https://github.com/QuantGeekDev/docker-mcp
- grafana/mcp-grafana: Integrates with Grafana, allowing searching of dashboards and querying data sources for metrics and logs. Perfect for troubleshooting, incident response, and real-time analytics. https://github.com/grafana/mcp-grafana
- sapientpants/sonarqube-mcp-server: Integrates with SonarQube for code quality metrics, security checks, and scanning results. Helps maintain code health and manage technical debt within DevOps workflows. https://github.com/sapientpants/sonarqube-mcp-server
- modelcontextprotocol/server-git: A straightforward local Git integration that gives AI assistants the ability to read, search, analyse, and manage Git repositories. Useful for code review, version control workflows, and automation. https://github.com/modelcontextprotocol/server-git
Considerations
Today
- Security: Like all external integrations, MCP has inherent supply-chain risks. Always use trusted providers for MCP servers. Companies may prefer MCP implementations from reputable sources, like Anthropic. In the future, developments could include formalised package management and digital signatures for enhanced security.
- Adoption Stage: MCP is still early in its journey, but it shows significant promise for simplifying LLM integrations and no sign of slowing down.
In the future
- Too many tools: There will also need to be an established approach for handling the scenario of too many tools. A recent research paper demonstrated a solution named CoTools that improves performance and interpretability in tool selection compared to existing methods. The core idea is to leverage the semantic understanding in the LLM's hidden states to decide when, and which, tools to invoke, even if those tools were not part of the model's training. Here at Overmind, we have implemented a similar solution for our assistant and are very pleased with the results: in our case, we needed semantic matching across a 120+ tool toolset (see the sketch below).
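For illustration, the gist of that kind of semantic tool selection is to embed every tool description once, embed the incoming request, and only surface the top-scoring tools to the model. The sketch below is not the CoTools algorithm itself (which operates on the LLM's hidden states) but a pragmatic approximation; the embed function is a hypothetical callback backed by whatever embedding model you use.

```typescript
type EmbedFn = (text: string) => Promise<number[]>;

interface Tool {
  name: string;
  description: string;
  vector?: number[]; // cached embedding of the description
}

// Standard cosine similarity between two equal-length vectors.
function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// Rank the full toolset against a request and keep only the top k candidates,
// so the model never has to consider 120+ tool definitions at once.
async function selectTools(
  tools: Tool[],
  request: string,
  embed: EmbedFn,
  k = 5
): Promise<Tool[]> {
  const queryVec = await embed(request);
  for (const tool of tools) {
    tool.vector ??= await embed(tool.description); // embed each description once
  }
  return [...tools]
    .sort((x, y) => cosine(y.vector!, queryVec) - cosine(x.vector!, queryVec))
    .slice(0, k);
}
```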
What's Next for MCP?
Cloudflare has just released a remote implementation of the Model Context Protocol (MCP) server, addressing technical limitations inherent to locally hosted deployments. This represents a significant architectural shift for MCP infrastructure. The remote server design eliminates local configuration overhead, providing programmatic access to MCP functionality through Cloudflare's network. Integration with authentication providers (Auth0, Stytch, WorkOS) standardises secure access control mechanisms that previously required custom implementation. Context is persistent between sessions, meaning the server preserves interaction state and enables stateful processing across discrete operations.
Given MCP's open-source foundation, this remote server implementation potentially accelerates protocol adoption by reducing deployment complexity. The architectural approach focuses on enabling autonomous agent execution of web-based tasks rather than simply generating instructional output, positioning MCP as an integration layer between language models and external API services.
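If you want to experiment with a remote server from the client side, the TypeScript SDK ships an SSE transport that can speak to hosted endpoints. A rough sketch, assuming a hypothetical endpoint URL and leaving authentication out entirely:

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { SSEClientTransport } from "@modelcontextprotocol/sdk/client/sse.js";

// Hypothetical remote MCP endpoint; real URLs and auth flows depend on the host.
const transport = new SSEClientTransport(new URL("https://mcp.example.com/sse"));

const client = new Client(
  { name: "my-agent", version: "0.1.0" },
  { capabilities: {} }
);

await client.connect(transport);

// Same discovery flow as with a local server: list the tools on offer.
const { tools } = await client.listTools();
console.log(tools.map((t) => t.name));
```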