When AI assistants started showing up in developer tools, two distinct approaches emerged for connecting those assistants to external data and functionality: the Model Context Protocol (MCP) and code extensions. MCP is an open standard, introduced by Anthropic in late 2024, that defines a universal way for AI models to communicate with external tools, data sources, and services through a structured client-server architecture. Code extensions, by contrast, are plugin-style additions built directly into a specific application or IDE, like VS Code extensions, that add functionality within that tool's own ecosystem using that tool's proprietary APIs.
Think of it this way: if you are building a house, a code extension is like a custom fixture designed to fit one specific room in one specific house. MCP is more like a standardized electrical outlet that any appliance from any manufacturer can plug into. A data team using Claude or another MCP-compatible AI assistant could connect it to their SQL database, their internal documentation, and their analytics platform all through a single MCP server, without writing custom integration code for each one. That same MCP server works regardless of which AI model sits on the other end.
The choice between MCP and code extensions has real consequences for how quickly your team can move and how much technical debt you accumulate. Code extensions are fast to adopt if you are already inside the tool they were built for, but they create tight coupling between your AI workflows and a single vendor's ecosystem. If that vendor changes their API or you want to switch AI models, you are often back to square one. MCP, because it is model-agnostic and tool-agnostic, means the integration work you do today is not wasted when your stack evolves. For data and analytics teams specifically, where the toolchain tends to be complex and constantly shifting, that portability is not a minor convenience. It is a meaningful reduction in integration overhead.
Identify the integration goal: decide whether you need functionality scoped to one specific tool (a strong case for a code extension) or cross-tool, cross-model connectivity (a strong case for MCP).
For MCP, build or deploy an MCP server that exposes your data sources, APIs, or tools as structured resources and functions the AI model can call.
Connect an MCP-compatible AI client (like Claude Desktop or a custom agent) to the MCP server using the standardized JSON-RPC-based protocol that MCP defines.
For code extensions, install or build the extension within the target application using that application's native extension API, which scopes all functionality to that environment.
Define the permissions and context boundaries: MCP uses explicit capability declarations so the AI model knows exactly what it can access, while code extensions rely on the host application's permission model.
Test the integration by issuing real queries or commands and verifying that the AI receives accurate, up-to-date context from the connected source.
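To make the client-server steps above concrete, here is a minimal sketch of the JSON-RPC 2.0 messages an MCP client and server exchange. The `tools/list` and `tools/call` method names follow the MCP specification; the `query_warehouse` tool name and its schema are made-up examples for illustration, not part of the protocol.

```python
import json

# 1. The client asks the server which tools it exposes.
list_request = {"jsonrpc": "2.0", "id": 1, "method": "tools/list"}

# 2. The server replies with structured tool declarations, each with a
#    JSON Schema describing the tool's inputs.
list_response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "tools": [
            {
                "name": "query_warehouse",  # hypothetical example tool
                "description": "Run a read-only SQL query",
                "inputSchema": {
                    "type": "object",
                    "properties": {"sql": {"type": "string"}},
                    "required": ["sql"],
                },
            }
        ]
    },
}

# 3. The model invokes a declared tool by name, with arguments that the
#    server can validate against the declared schema.
call_request = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {
        "name": "query_warehouse",
        "arguments": {"sql": "SELECT COUNT(*) FROM portfolios"},
    },
}

# The payloads round-trip as plain JSON, which is all any MCP-compatible
# client needs to speak.
wire_message = json.dumps(call_request)
```

Because every MCP client and server exchanges these same shapes, the server never needs to know which AI model is on the other end, which is where the protocol's portability comes from.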
A financial services company wants its AI assistant to pull live portfolio data from its internal database, cross-reference it with market data from an external API, and surface results inside whatever tool the analyst happens to be using that day. Building a separate code extension for each tool would require maintaining multiple codebases. Instead, the team builds a single MCP server that exposes the portfolio database and the market data API. Any MCP-compatible AI client can now access both sources, and the team estimates it cuts integration maintenance time by roughly 60% compared to managing individual extensions.
A software development team at a mid-sized SaaS company relies heavily on VS Code and has standardized on GitHub Copilot. They need the AI to understand their internal component library and coding conventions. A VS Code extension is the right fit here: it integrates directly into the editor, surfaces inline suggestions based on the proprietary component library, and works within the workflow the team already uses every day. The extension does not need to work outside VS Code, so the tight coupling is a feature, not a liability.
A data engineering team building an agentic analytics workflow needs their AI model to query a cloud data warehouse, read from a metadata catalog, and write results back to a dashboard platform. Because they want the flexibility to swap AI models as the market matures, they build the integrations as MCP servers. When they later migrate from one AI provider to another, the MCP servers require no changes. The team avoids what would have been an estimated two to three weeks of re-integration work.
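A rough sketch of why that migration cost nothing on the data side: the server-side tool handler is what stays untouched when the AI provider changes. In this illustrative example (SQLite stands in for the cloud warehouse, and `query_warehouse` is a hypothetical tool name, not a real SDK API), the handler has no knowledge of which model calls it.

```python
import sqlite3

# SQLite stands in for the cloud data warehouse in this sketch.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE results (metric TEXT, value REAL)")
conn.execute("INSERT INTO results VALUES ('daily_active_users', 1523)")

def query_warehouse(sql: str) -> list:
    """Hypothetical MCP tool handler: run a read-only query, return rows.

    Every MCP-compatible client invokes this the same way (via a
    tools/call request), so swapping AI providers never touches it.
    """
    if not sql.lstrip().lower().startswith("select"):
        raise ValueError("read-only tool: SELECT statements only")
    return conn.execute(sql).fetchall()

# Two different AI clients would issue the identical tools/call request;
# the handler neither knows nor cares which model is on the other end.
rows = query_warehouse("SELECT metric, value FROM results")
```

The read-only guard is a deliberate design choice: because MCP tools are invoked by a model, constraining what a tool can do at the handler level is simpler to audit than trusting the model to behave.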
Portability across models and tools: MCP integrations are not tied to a single AI model or application. If you build an MCP server for your data warehouse today, it works with any MCP-compatible model tomorrow. A team that invested in MCP-based integrations during an early AI pilot was able to switch AI providers without touching their data connectivity layer.
Reduced integration overhead: Code extensions require you to learn and maintain each host application's proprietary extension API. MCP standardizes that interface, so a team that knows MCP can connect to dozens of data sources without learning a new API for each one. In practice, this means fewer engineers spending time on plumbing and more time on the actual analytics work.
Scoped, auditable context: MCP's architecture requires explicit declaration of what resources and tools the AI can access. This makes it easier to audit what context an AI model is operating with, which matters a lot in regulated industries like finance and healthcare where data access controls are non-negotiable.
Deep tool-specific integration with code extensions: When your workflow lives entirely inside one application, a code extension delivers a tighter, more native experience than MCP can. A well-built VS Code extension can intercept editor events, modify the UI, and respond to user actions in ways that a general-purpose MCP server simply is not designed to do.
Faster time to value in constrained environments: If your team uses one tool, one AI model, and has no plans to change either, a code extension gets you to a working integration faster. There is no server infrastructure to stand up, no protocol to learn, and no cross-tool coordination to manage.
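The auditability point above can be sketched in a few lines. This is an illustrative model of MCP-style explicit scoping, not the SDK's actual API: the server declares exactly which tools exist, and anything outside that declaration is refused before any handler runs, so the set of things the model can reach is enumerable at a glance. The tool names here are hypothetical.

```python
# The server's explicit declaration: the complete, auditable list of
# what the AI model can invoke (hypothetical tool names).
DECLARED_TOOLS = {"query_warehouse", "read_metadata_catalog"}

def dispatch(tool_name: str, handlers: dict):
    """Refuse any call that was never declared, before running anything."""
    if tool_name not in DECLARED_TOOLS:
        raise PermissionError(f"tool not declared: {tool_name}")
    return handlers[tool_name]()

handlers = {
    "query_warehouse": lambda: "portfolio rows...",
    "read_metadata_catalog": lambda: "table metadata...",
}

result = dispatch("query_warehouse", handlers)  # declared, so allowed

try:
    dispatch("delete_table", handlers)  # never declared, so refused
except PermissionError as exc:
    refusal = str(exc)
```

For a compliance review, the question "what can the model touch?" reduces to reading `DECLARED_TOOLS`, which is the property that matters in regulated industries.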
ThoughtSpot has been watching the MCP ecosystem closely because the protocol aligns with how modern analytics workflows actually operate: across multiple tools, multiple data sources, and increasingly, multiple AI models. Spotter, ThoughtSpot's AI-powered analytics assistant, is built to work where your data lives, and MCP represents a meaningful step toward making AI-driven data access genuinely interoperable rather than locked into any single vendor's walled garden. For teams building with ThoughtSpot Embedded or exploring agentic analytics use cases, the distinction between MCP and code extensions is not academic. It shapes how flexible and maintainable your AI-connected data workflows will be as both the AI landscape and your own stack continue to evolve.
MCP and code extensions both connect AI models to external functionality, but MCP does it through an open, model-agnostic standard built for portability, while code extensions deliver deep, tool-specific integration within a single application's ecosystem.