Model Context Protocol (MCP) - An Introduction

By K A R S H
June 13, 2025
Tech
Last Updated: December 11, 2025

The Model Context Protocol (MCP) is an open, standardized framework that enables Large Language Models (LLMs) to connect with external tools and data sources, allowing AI agents to perform complex tasks beyond their original training data. Developed by Anthropic, MCP provides a standardized, two-way communication channel, similar to an API, that lets AI systems access real-time information and execute actions by interacting with services, applications, and databases.

What is the purpose?

MCP adds a layer of logic around LLMs, through which they become capable of executing deterministic tasks. Instead of generating ad-hoc code or fetching resources unpredictably, the LLM analyses your prompt and runs the commands you have written, whether for code execution or for context.

The 3-Layer Architecture

MCP has 3 primary layers:

  • MCP Host
  • MCP Client
  • MCP Server

The MCP Host is the user-facing chat application; this can be GitHub Copilot in VS Code or the Claude Desktop app. You can think of an MCP Host as an orchestrator: it creates and manages the MCP clients. An MCP Client maintains a channel to a single MCP server and fetches context/metadata from that server for the Host. The MCP Server is where the logic is defined and run; it provides this metadata to the client as context for the LLM.
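The three roles above can be sketched in a few lines of plain Python. This is a minimal, dependency-free illustration of the orchestration, not real SDK code: the names MCPHost, MCPClient, and MCPServer here are invented stand-ins, and the transport between client and server is reduced to direct method calls.

```python
class MCPServer:
    """Where the logic lives; exposes metadata about it."""
    def __init__(self, name, tools):
        self.name = name
        self._tools = tools  # tool name -> callable

    def list_tools(self):
        return sorted(self._tools)

    def call_tool(self, tool, args):
        return self._tools[tool](**args)


class MCPClient:
    """Maintains a channel to exactly one server."""
    def __init__(self, server):
        self.server = server

    def fetch_metadata(self):
        return {"server": self.server.name, "tools": self.server.list_tools()}


class MCPHost:
    """User-facing app; orchestrates one client per configured server."""
    def __init__(self, servers):
        self.clients = [MCPClient(s) for s in servers]

    def gather_context(self):
        # Metadata from every client is merged into the LLM's context.
        return [c.fetch_metadata() for c in self.clients]


weather = MCPServer("weather", {"get_temp": lambda city: f"22C in {city}"})
host = MCPHost([weather])
print(host.gather_context())
# [{'server': 'weather', 'tools': ['get_temp']}]
```

The key structural point is the fan-out: one Host, many Clients, and each Client bound to exactly one Server.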

The 3 Pillars of MCP Servers

There are 3 main constructs currently stable in MCP Servers:

  • Prompts: Servers can expose prompt templates to clients in a standardized way. This lets you control how outputs are requested from an LLM.
  • Resources: Static data that you can provide to the LLM through the MCP server. This can be a file, a database schema, and so on.
  • Tools: Logic blocks defined in the MCP server that are executed when the LLM issues a call. You can put any application logic inside them to query, modify, and send a structured response back to the LLM.

Whenever we talk about MCP, we primarily focus on MCP Servers, and within servers, tools are the hot topic. Tools give LLMs a deterministic capability: they execute programs that can be tested and verified.

For more info: What is Model Context Protocol?

Workflow or Lifecycle of an MCP agent

In simple terms, you can think of MCP + LLMs as an AI wrapper around a REST API. The AI augments the execution of the logic you define and presents the output in natural language.

Here's how it works when a user prompts in a Chat Host:

  1. The Host (GitHub Copilot in VS Code or Claude Desktop) initializes the clients and connects them to their servers.
  2. Each Client sends a "tools/list" request to its MCP server to get the metadata of all the tools, resources, and prompts defined there.
  3. When the user sends a prompt, this MCP metadata is sent to the LLM along with it.
  4. The LLM issues a tool call back to the client, naming the tool and supplying its input data.
  5. The MCP client sends a "tools/call" request to the MCP server.
  6. The MCP server executes the tool and returns a structured output or an error.
  7. The Host feeds this output back into the LLM.
  8. The LLM parses the output and returns a natural-language answer to the Host.
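The middle of this lifecycle can be simulated with JSON-RPC-shaped messages. "tools/list" and "tools/call" are the real MCP method names mentioned above, but the toy server, the get_time tool, and the in-process transport below are simplified stand-ins for illustration only.

```python
import json

def handle(request):
    """A toy MCP server dispatching on the JSON-RPC method name."""
    tools = {"get_time": lambda tz: f"12:00 in {tz}"}
    method = request["method"]
    if method == "tools/list":
        result = {"tools": [{"name": n} for n in tools]}
    elif method == "tools/call":
        p = request["params"]
        try:
            result = {"content": tools[p["name"]](**p["arguments"])}
        except Exception as e:
            # Step 6: errors are returned in a structured form too.
            return {"jsonrpc": "2.0", "id": request["id"], "error": str(e)}
    return {"jsonrpc": "2.0", "id": request["id"], "result": result}

# Step 2: the client asks the server for its tool catalogue.
listing = handle({"jsonrpc": "2.0", "id": 1, "method": "tools/list"})

# Steps 4-6: the LLM picks a tool; the client issues tools/call.
reply = handle({"jsonrpc": "2.0", "id": 2, "method": "tools/call",
                "params": {"name": "get_time", "arguments": {"tz": "UTC"}}})

print(json.dumps(listing["result"]))  # {"tools": [{"name": "get_time"}]}
print(reply["result"]["content"])     # 12:00 in UTC
```

The structured result from step 6 is what the Host hands back to the LLM in step 7, which then turns it into the natural-language answer of step 8.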

In recent news, Anthropic is donating the Model Context Protocol and establishing the Agentic AI Foundation, which makes this a very exciting time for MCP. This article offers a basic mental model for thinking about MCPs; please let me know if you have any suggestions.