What Is MCP (Model Context Protocol)?

TL;DR

MCP (Model Context Protocol) is an open standard that lets AI agents connect to external tools through a unified interface. Build one MCP server, and it works with every MCP-compatible client.

But be deliberate about what tools return — anything in the context window can be extracted via prompt injection.

If you’ve been following the AI tooling space, you’ve probably seen “MCP” mentioned alongside tools like Claude Code, OpenClaw, and Cursor. But what is it, and why should you care?

MCP in one sentence

The Model Context Protocol (MCP) is an open standard that lets AI agents connect to external tools, APIs, and data sources through a unified interface.

How it works

MCP follows a client-server model:

  • MCP client — the AI tool you’re using (Claude Code, OpenClaw, Cursor, Windsurf, and others)
  • MCP server — a small program that exposes tools to the AI client

When a client connects to a server, it discovers what tools are available. During a conversation, the AI can call these tools as needed — fetching data, performing actions, or interacting with external services.
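Under the hood, MCP messages are JSON-RPC 2.0. Here is a minimal sketch of the discovery-and-call exchange; tools/list and tools/call are the method names from the MCP specification, while the get_weather tool itself is a hypothetical example, not a real server:

```python
import json

# The client asks the server which tools it exposes.
discover_request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/list",
}

# The server replies with tool names, descriptions, and input schemas.
discover_response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "tools": [
            {
                "name": "get_weather",
                "description": "Fetch current weather for a city",
                "inputSchema": {
                    "type": "object",
                    "properties": {"city": {"type": "string"}},
                    "required": ["city"],
                },
            }
        ]
    },
}

# Mid-conversation, the model invokes a tool via tools/call.
call_request = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {"name": "get_weather", "arguments": {"city": "Prague"}},
}

print(json.dumps(call_request, indent=2))
```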

A concrete example: a password manager

Take passwd-mcp, an MCP server that connects AI agents to a team password vault.

When the server starts, it registers these read-only tools:

  • passwd_login — Authenticate with Google OAuth
  • list_secrets — Search the vault by name or type
  • get_secret — View secret details (credentials redacted with ••••••••)
  • get_totp_code — Generate a current six-digit two-factor code
  • get_current_user — View the authenticated user profile

The AI client discovers these tools automatically via MCP’s tools/list method.

From that point, the agent can call any of them mid-conversation.
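The discovery-then-call loop boils down to a name-to-handler map on the server side. Here is a minimal Python sketch of that shape; the handlers are stubs with made-up data, not the actual passwd-mcp implementation:

```python
REDACTED = "••••••••"

def list_secrets(query: str = "") -> list[dict]:
    # Stub: a real server would query the vault API with the search term.
    return [{"name": "staging-db", "type": "database"}]

def get_secret(name: str) -> dict:
    # Stub: sensitive fields are masked before the result ever reaches
    # the model's context window.
    return {"name": name, "username": "deploy", "password": REDACTED}

# The name -> handler map backing tools/list and tools/call.
TOOLS = {
    "list_secrets": list_secrets,
    "get_secret": get_secret,
}

def dispatch(tool_name: str, arguments: dict) -> object:
    return TOOLS[tool_name](**arguments)

print(dispatch("get_secret", {"name": "staging-db"}))
```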

You ask:

“What credentials do we have for staging?”

The agent calls list_secrets, then get_secret to inspect the entry.

Sensitive fields like passwords are replaced with:

••••••••
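A redaction step like this fits in a few lines. Note that the set of field names treated as sensitive below is an assumption for illustration, not passwd-mcp's actual list:

```python
REDACTED = "••••••••"
# Assumed sensitive field names; the real passwd-mcp list may differ.
SENSITIVE_FIELDS = {"password", "secret", "token", "private_key"}

def redact(entry: dict) -> dict:
    """Return a copy of a vault entry with sensitive values masked."""
    return {
        key: (REDACTED if key in SENSITIVE_FIELDS else value)
        for key, value in entry.items()
    }

entry = {"name": "staging-db", "username": "deploy", "password": "hunter2"}
print(redact(entry))
```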

No special prompt engineering. No custom plugins per AI tool.

The same MCP server works in Claude Code, Cursor, Windsurf, and other MCP-compatible clients.

MCP security: why it matters what tools return

MCP is powerful, but it introduces a risk that’s easy to overlook: everything an MCP tool returns enters the AI’s context window.

That context may be:

  • extracted via prompt injection
  • logged by the client
  • cached by the model provider
  • included in training pipelines

For sensitive data like credentials, this is a serious problem.

The safe pattern is redaction in MCP plus out-of-band injection for actual credential usage.

How passwd-mcp handles this

passwd-mcp follows a strict separation model.

MCP is used for discovery and metadata.

The agent can:

  • browse credentials
  • search the vault
  • generate TOTP codes
  • inspect metadata

Sensitive fields are replaced with:

••••••••

When an agent also needs to use credentials, it can optionally run the agent CLI (@passwd/passwd-agent-cli) using exec --inject.

This injects credentials into a subprocess environment without returning the raw values to the AI.
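The injection step can be sketched in Python. The fetch_secret helper below is hypothetical, standing in for the authenticated vault call that @passwd/passwd-agent-cli performs out of band:

```python
import os
import subprocess

def fetch_secret(name: str) -> str:
    # Hypothetical stand-in for an authenticated vault call; the raw
    # value is retrieved out of band, never through MCP.
    return "s3cr3t"

def exec_with_injection(cmd: list[str], secret_name: str, env_var: str) -> int:
    """Run cmd with the secret injected into its environment only."""
    env = os.environ.copy()
    env[env_var] = fetch_secret(secret_name)  # never echoed back to the model
    return subprocess.run(cmd, env=env).returncode

# e.g. exec_with_injection(["./deploy.sh"], "staging-db", "DB_PASSWORD")
```

The subprocess sees the credential in its environment; the conversation transcript only ever sees the command and its exit code.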

What using an MCP server looks like

Here is an example workflow.

  1. You say: “Deploy the API to staging. The database credentials are in the vault.”
  2. Claude Code calls list_secrets.
  3. The MCP server queries the Passwd API.
  4. The vault returns matching secrets.
  5. Claude Code calls get_secret.
The secret is returned with credentials redacted:

••••••••

If the agent needs to use the credential, it can optionally run the agent CLI with exec --inject, injecting the value into a subprocess environment so it never appears in the conversation.

Getting started

If you want to try MCP with a password manager, passwd-mcp connects any MCP-compatible AI agent to your team’s Passwd vault.
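Registering a server in a client usually means adding it to a config file. Here is a sketch of the mcpServers shape used by Claude Desktop and Claude Code; the command and package name below are assumptions for illustration, so check the passwd-mcp documentation for the actual invocation:

```json
{
  "mcpServers": {
    "passwd": {
      "command": "npx",
      "args": ["-y", "passwd-mcp"]
    }
  }
}
```

Once the client restarts, it performs tools/list against the server and the vault tools appear alongside the agent’s built-in capabilities.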