
What is model context protocol? A practical guide to MCP

Your AI agents can connect to ChatGPT, but can they actually access your enterprise data management systems when they need to answer a question like "Which accounts churned last quarter?" Even if you’re riding the wave of agentic technologies in business intelligence and data analytics, your organization might be hitting the same barrier that many others do: building custom integrations for every single data source, API, and business system your AI needs to touch.

Model context protocol is designed to solve the endless cycle of one-off connections that slow down your AI projects and limit what your agents can actually accomplish. Here's how MCP works as a universal standard for AI connections, plus the practical steps you need to implement it safely in your own data environment without compromising security or governance.

What is the model context protocol (MCP)?

Model Context Protocol (MCP) is an open standard that defines how AI applications connect to external data sources, tools, and workflows. Think of it as USB-C for AI: one universal adapter that replaces the need to build countless one-off integrations.

Instead of writing custom code or spinning up new ETL pipelines every time you want your AI to access a database, content repository, or business system, MCP creates a single, secure bridge. This standardized connection layer allows your AI agents to interact with any approved data source through one consistent protocol, reducing integration complexity and accelerating time-to-value for your AI initiatives. 

How MCP works (the 30-second architecture)

MCP operates on a simple but powerful architectural pattern that defines how different components communicate. This structure standardizes interactions to make agent behavior consistent and secure, whether your AI is running locally or connecting to remote services.

Host → Client → Server

The architecture consists of three main roles:

| Component | What it does | Real-world example |
| --- | --- | --- |
| Host | The AI application where you're actually working. This is the interface you interact with directly—it's what displays results and accepts your requests. | Your ChatGPT interface, Claude desktop app, or a custom analytics dashboard where you type questions like "Show me Q4 revenue by region." |
| Client | The middleman that translates requests between your AI application and external systems. It handles the technical details of establishing connections, formatting messages, and managing the back-and-forth communication. | When you ask your AI assistant to pull sales data, the client takes that request, figures out which database to connect to, formats the query properly, and sends it along—then brings the response back to display in your interface. |
| Server | The component that sits in front of your actual data sources and business systems. It exposes what's available (like specific databases, APIs, or tools) and defines how the AI can interact with them safely. | An MCP server connected to your Snowflake data warehouse that says "Here are the tables you can query, here's how to search them, and here are the actions you're allowed to perform" without giving the AI direct, unrestricted access to everything. |

So, how do these systems work together? 

  1. The host receives your question. 

  2. The client routes the request to the appropriate server.

  3. The server safely accesses your data source and returns the answer through the same chain. 

This three-part structure keeps your AI connected to real business systems while maintaining security and control at every step.
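The host → client → server chain can be sketched in a few lines of plain Python. This is an illustration only, not the real MCP SDK: the class names, the `query` method, and the `revenue_q4` table are all invented to show how a request flows down the chain and the response comes back.

```python
# Toy sketch of the Host -> Client -> Server chain (illustration only).

class Server:
    """Sits in front of a data source and exposes only approved operations."""
    def __init__(self, tables):
        self.tables = tables  # the only data the AI is allowed to see

    def handle(self, request):
        if request["method"] == "query" and request["table"] in self.tables:
            return {"rows": self.tables[request["table"]]}
        return {"error": "table not exposed by this server"}

class Client:
    """Routes host requests to the right server and carries responses back."""
    def __init__(self, servers):
        self.servers = servers  # name -> Server

    def route(self, server_name, request):
        return self.servers[server_name].handle(request)

class Host:
    """The AI application the user actually interacts with."""
    def __init__(self, client):
        self.client = client

    def ask(self, question):
        # A real host would let the model choose the server and build the request.
        request = {"method": "query", "table": "revenue_q4"}
        return self.client.route("warehouse", request)

server = Server(tables={"revenue_q4": [{"region": "EMEA", "revenue": 1_200_000}]})
host = Host(Client(servers={"warehouse": server}))
print(host.ask("Show me Q4 revenue by region"))
```

Note that the server, not the AI, decides what is reachable: asking for a table it doesn't expose returns an error rather than data.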

JSON-RPC + transports

At its core, MCP uses a lightweight messaging format called JSON-RPC to send requests and responses between components. Think of it like a standardized language that lets different parts of your system talk to each other without confusion.

The protocol supports different "transports"—basically, different ways of moving messages between components. If everything's running on the same machine, it uses a simple local connection. If you need to connect across a network, it uses standard web protocols (the same ones your browser uses). This flexibility means MCP works whether you're testing on your laptop or running agentic analytics across your entire cloud infrastructure.
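Concretely, each message is a small JSON-RPC 2.0 envelope, regardless of which transport carries it. A minimal sketch (the `tools/call` method name follows the MCP spec's naming; the tool name and arguments here are made up):

```python
import json

# A minimal MCP-style JSON-RPC 2.0 exchange. The payload details are
# illustrative; only the envelope shape (jsonrpc, id, method, params) matters.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {"name": "run_query", "arguments": {"sql": "SELECT 1"}},
}
wire = json.dumps(request)  # what actually travels over the transport

response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {"content": [{"type": "text", "text": "1"}]},
}

# The "id" lets the client match each response to its request,
# even when several calls are in flight at once.
assert json.loads(wire)["id"] == response["id"]
```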

The key advantage: MCP doesn't lock you into one specific infrastructure pattern. The protocol remains consistent regardless of your deployment model—from local database connections on a single server to distributed systems that span multiple cloud environments and remote APIs.

See how ThoughtSpot's agentic analytics platform helps you build intelligent experiences with governed, instant insights. Start your free trial

The 3 primitives you'll see everywhere

MCP is built around three fundamental building blocks—called "primitives" in the lingo of agentic AI—that define how an AI agent can understand context, take action, and follow guided instructions.

Think of primitives as the basic, reusable components that everything else is built on top of. These three primitives work together to create a complete interaction framework.

The 3 core primitives explained

Here's how each primitive works in practice:

| Primitive | What it does | Real-world example |
| --- | --- | --- |
| Resources (context) | Represents any piece of context the AI might need to answer questions or complete tasks. Each resource is identified by a unique URI, and your host application controls how these resources are presented to users or the AI model for selection. | Your customer database schema, a product catalog file, or your company's sales methodology document. When your AI needs to understand "What fields are in our customer table?" it accesses that schema as a Resource. |
| Tools (actions) | Specific actions the AI model can invoke to interact with your systems. For any action that could be risky or alter data, you can build in a human approval step to maintain control and safety. | Running a SQL query against your data warehouse, calling a CRM API to update a contact record, or generating a visualization from query results. High-risk actions like "update customer status" require explicit human confirmation before executing. |
| Prompts (guided workflows) | User-controlled templates for completing specific tasks. These act as discoverable commands with predefined arguments, guiding both you and the AI through a structured workflow while preventing the AI from going off-script. | A "Monthly Revenue Analysis" prompt that automatically pulls the right metrics, applies consistent filters, and formats results the same way every time—ensuring your team gets standardized insights without reinventing the analysis each month. |

How they work together: Resources provide the context your AI needs, Tools define what actions it can take, and Prompts create repeatable workflows that combine both. This three-part framework gives you the flexibility to build powerful AI experiences while maintaining governance and control.
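As a rough sketch, you can picture the three primitives as entries in a registry the server exposes. This toy Python version invents all the names (the `schema://crm/customers` URI, the `run_query` tool, the prompt template); real MCP SDKs register these with decorators and JSON schemas rather than plain dicts.

```python
# Toy registry showing how the three primitives fit together (illustration only).

resources = {  # Resources: context the model can read, keyed by URI
    "schema://crm/customers": {"fields": ["id", "name", "churned_at"]},
}

def run_query(sql: str) -> list:
    """Tool: an action the model may invoke (read-only in this sketch)."""
    return [{"churned_accounts": 42}]  # stand-in for a real warehouse call

tools = {"run_query": run_query}

prompts = {  # Prompts: user-selected templates with predefined arguments
    "monthly_revenue_analysis": "Pull revenue for {month}, grouped by region.",
}

# A workflow combines all three: read context, then act through a tool.
schema = resources["schema://crm/customers"]
assert "churned_at" in schema["fields"]
result = run_query("SELECT count(*) FROM customers WHERE churned_at IS NOT NULL")
```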

A "read → decide → act" pattern for analytics

When applying model context protocol to data and analytics use cases, a "read, decide, act" pattern provides a safe and effective framework. This approach helps your AI agents operate on trusted information and take actions within governed boundaries.

Read (trusted context)

Before your AI can make intelligent decisions, it needs access to a reliable source of truth. That's where MCP Resources come in: trusted datasets, metric definitions, and schemas that your AI can read and reference.

This approach requires a governed data context layer to ensure the information your AI accesses remains both accurate and secure. An agentic semantic layer serves as this foundation by defining your business logic and maintaining a consistent context for your AI model. Unlike static dashboards that grow stale or outdated data extracts that lose relevance, a live semantic layer delivers governed, real-time data that your AI agents can confidently trust and act on.

Decide (repeatable analysis)

Once your AI has access to trusted data, it needs to analyze that information through a repeatable, structured process. This is where MCP Prompts deliver value: pre-built workflows like "Investigate last week's KPI drop" that enforce specific inputs and consistent output formats. By supplying a clear business context at this stage, you ensure every analysis follows the same logical framework.

The result? Your AI stays focused on the task at hand rather than wandering into tangential analysis. Modern data teams use this approach to scale structured decision-making across their entire organization, turning one-off analyses into repeatable workflows that anyone can execute.

Act (safe automation)

The final step is where your AI takes action based on its analysis. For read-only operations like generating a SQL query or pulling a report, the AI can proceed automatically. But for any operation that writes or changes data—like updating a customer record or modifying a database entry—you build in a human approval gate before executing.

This "read, decide, act" pattern requires an MCP server that sits between your AI agents and your data systems. The server exposes your governed data as Resources, defines which Tools the AI can invoke, and enforces security policies at every step. It translates natural language requests into safe, structured operations while maintaining full audit trails and access controls.
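The approval gate at the "act" step can be sketched in a few lines. Everything here (the tool names, the `approve` callback) is hypothetical; the point is simply that read-only tools pass through automatically while write operations wait for a human.

```python
# Sketch of an "act" step with a human approval gate (names are invented).

READ_ONLY = {"run_query", "generate_report"}

def act(tool_name, run_tool, approve):
    """Run a tool, gating anything that isn't read-only behind approval."""
    if tool_name in READ_ONLY:
        return run_tool()                 # safe: proceed automatically
    if approve(f"Allow '{tool_name}' to modify data?"):
        return run_tool()                 # a human confirmed the write
    return {"status": "rejected"}         # the gate blocked the write

# A read proceeds without asking; a write consults the approver.
print(act("run_query", lambda: {"rows": []}, approve=lambda msg: False))
print(act("update_customer", lambda: {"ok": True}, approve=lambda msg: False))
```

In a real deployment the `approve` callback would surface a confirmation dialog in the host application and log the decision to your audit trail.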

Platforms like ThoughtSpot's Agentic MCP Server implement this architecture by connecting AI agents to a governed semantic layer. This allows agents like ChatGPT or Gemini to interact with your business data using natural language while ensuring every query runs against trusted, real-time information with proper security and reasoning transparency.

Your essential MCP implementation checklist 

Getting started with MCP doesn't require a massive, all-hands overhaul. The key is proving value with one focused implementation while building the security and governance foundation you'll need to scale.

1. Secure your connection points first

Treat MCP servers as part of your attack surface from day one. Implement proper LLM security controls, use OAuth or robust access management for remote connections, and follow transport security guidance like origin validation. These foundational controls let you expand confidently later without retrofitting security.

2. Start with one high-impact workflow

Choose a single use case that delivers clear business value. Connect one data source, define one Resource, one Tool, and one Prompt that support that workflow. For example, start with a focused analytics use case like "Analyze customer churn trends" that connects to your data warehouse and provides immediate insights to your sales team.

3. Validate before you scale

Build a "golden prompts" test set to validate functionality and track data quality metrics before expanding to additional systems. Test your MCP server with real user questions, verify that responses are accurate and secure, and document any edge cases or limitations.
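A golden-prompts check can be as simple as a table of known questions and expected answers, run against your server on every change. In this sketch, `ask_server` is a placeholder for your real MCP round-trip, and the questions and expected answers are invented.

```python
# Minimal "golden prompts" validation harness (ask_server is a stand-in
# for whatever client call your MCP setup actually uses).

GOLDEN_PROMPTS = [
    ("Which accounts churned last quarter?", "42 accounts"),
    ("Show me Q4 revenue by region", "EMEA: $1.2M"),
]

def ask_server(question):
    canned = dict(GOLDEN_PROMPTS)  # stand-in for a real MCP round-trip
    return canned[question]

def validate():
    """Return the list of (question, expected, actual) mismatches."""
    return [(q, expected, ask_server(q))
            for q, expected in GOLDEN_PROMPTS
            if ask_server(q) != expected]

print("failures:", validate())  # an empty list means the server passed
```

Running this in CI before each promotion gives you a cheap regression net: any change to the server's Resources, Tools, or Prompts that alters a known answer fails the build.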

💡 Pro tip: Rather than trying to connect everything at once, focus on one high-impact workflow that demonstrates clear ROI. This approach lets you refine your security model and user experience before scaling across your organization.

ThoughtSpot MCP: A new foundation for agentic analytics

ThoughtSpot's Agentic MCP Server connects your AI agents directly to a governed semantic layer, giving them instant access to trusted business data without building custom integrations. Your agents can query live data, generate insights, and take action—all while maintaining the security controls and audit trails you need for enterprise-level analytics.

Ready to see how MCP can transform your AI analytics workflow? Explore ThoughtSpot's Agentic MCP Server and start connecting your AI agents to governed data. 

Just getting started with agentic analytics? Check out the 2026 Gartner Market Guide for Agentic Analytics, packed with insights for business leaders who are building the foundations of their future data workflows. 

Model context protocol FAQs

Is MCP the same as Retrieval-Augmented Generation (RAG)?

No, MCP is not the same as RAG, and it doesn't replace vector search. MCP is a protocol that can use RAG as one of its components, providing a standardized way for an AI agent to access a vector database or other knowledge sources. Think of MCP as the communication standard, while RAG is a specific technique that can work within that standard.

How is the model context protocol different from function calling APIs?

Function calling APIs are proprietary features of specific large language models like GPT-4 or Claude. MCP is an open-source, model-agnostic protocol that allows any AI platform to connect with any external system or data source, providing a universal standard that works across different AI platforms.

Can MCP run in air-gapped or on-premises environments?

Yes, MCP can run in air-gapped or on-premises environments. The main change is using local transports like stdio instead of web-based ones and making sure all components (host, client, and server) are deployed within your private network. This makes it a potentially good fit if your company has strict data sovereignty requirements.
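For local setups like this, the stdio transport frames each JSON-RPC message as one line of JSON on the server process's stdin/stdout. A sketch of that framing, with a `StringIO` standing in for the real process pipes (`resources/list` is a method name from the MCP spec; the rest is illustrative):

```python
import io
import json

# Sketch of stdio-transport framing: one JSON-RPC message per line.

def send(pipe, message):
    pipe.write(json.dumps(message) + "\n")  # newline-delimited JSON

def recv(pipe):
    return json.loads(pipe.readline())

pipe = io.StringIO()  # stands in for the server process's stdin/stdout
send(pipe, {"jsonrpc": "2.0", "id": 7, "method": "resources/list"})
pipe.seek(0)
print(recv(pipe)["method"])
```

Because nothing here touches the network, the same host, client, and server code runs unchanged inside a private, air-gapped environment.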

How should you version and promote MCP servers across development environments?

Treat MCP servers like any other software service. Use version control for your server code, and establish a promotion pipeline to move tested versions from development to staging and production. This gives you stability and consistency as you scale your MCP implementation.

What's the best way to validate an MCP server before production deployment?

Create a comprehensive test suite that includes your "golden prompts," which are test cases that verify the server correctly exposes Resources, executes Tools safely, and handles Prompts as expected. This testing helps make the AI's behavior predictable and reliable in production environments.