What is MCP?

The Model Context Protocol (MCP) is a widely adopted protocol that standardises how applications provide context to large language models. The protocol lets you plug an AI model into different data sources and tools, such as Spike. MCP enables a client (for example, ChatGPT, Claude, or your own hosted model) to connect to one or more remote servers and exchange messages over a JSON-RPC-based protocol. In the MCP architecture:
  • MCP host – an AI application (your own, or one proxied through a third party such as OpenAI) that manages one or more MCP client connections to servers;
  • MCP client – a component that maintains a dedicated connection to a server and requests context or tool execution;
  • MCP server – a program that exposes tools, context and resources for AI models.
MCP servers may run locally (using the stdio transport) or remotely over streamable HTTP. Remote servers, such as the one described here, use HTTP POST requests with server-sent events (SSE) for streaming; authentication tokens (the same Bearer tokens used for the API) are carried in HTTP headers.
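On the wire, a streamable-HTTP exchange is plain JSON-RPC carried in the POST body. As an illustration (the exact capability fields depend on the client; the client name below is a placeholder), the `initialize` request that opens a session might look like:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "initialize",
  "params": {
    "protocolVersion": "2025-03-26",
    "capabilities": {},
    "clientInfo": { "name": "example-client", "version": "1.0.0" }
  }
}
```

The `Authorization: Bearer` header travels alongside this body on every POST; the server's reply arrives either as a single JSON response or as an SSE stream.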

Spike’s ready‑to‑use MCP server

Spike hosts a remote MCP server that makes health and fitness data available to AI models. This server is available at:
https://app-api.spikeapi.com/v3/mcp
The server exposes tools that mirror the Spike API. The underlying APIs aggregate metrics (steps, calories, distances, heart‑rate metrics, sleep, etc.) by day using the user’s local timezone. Daily statistics sum or average measurements within each day so that you can analyse trends over time. Below is a visual representation of how the client application processes a user prompt, triggers a tool request to Spike’s hosted MCP server, and then translates that into API calls against Spike’s REST API.
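To make the tool-request step concrete: after initialising, a client invokes a tool with a JSON-RPC `tools/call` message. The argument names below (`metric_types` and the date range) are assumptions for this sketch, not the documented schema — the server's `tools/list` response is the authoritative source for each tool's parameters:

```json
{
  "jsonrpc": "2.0",
  "id": 2,
  "method": "tools/call",
  "params": {
    "name": "query_statistics_daily",
    "arguments": {
      "metric_types": ["steps"],
      "from_date": "2024-06-01",
      "to_date": "2024-06-07"
    }
  }
}
```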

Configuration for ChatGPT (OpenAI)

To make Spike’s MCP server available to ChatGPT, add it to your tool configuration. ChatGPT declares external tools in a config.json file: add an entry of type "mcp" with the server URL and an authentication header. You can also restrict which tools are allowed and specify whether user approval is required.
{
  "tools": [
    {
      "type": "mcp",
      "server_label": "spike-health-data",
      "server_url": "https://app-api.spikeapi.com/v3/mcp",
      "headers": {
        "Authorization": "Bearer <SPIKE_ACCESS_TOKEN>"
      },
      "server_description": "Health and fitness data analysis server providing daily statistics from connected wearables and health devices.",
      "allowed_tools": ["query_statistics_daily"],
      "require_approval": "never"
    }
  ]
}
Place this configuration inside the tools array of your ChatGPT project’s config.json file. Replace <SPIKE_ACCESS_TOKEN> with the token you generate for each individual application user. After reloading the project, ChatGPT will list spike‑health‑data in the Tools panel. Use the tool by writing natural language prompts (e.g. “Get my step count for last week”).
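If you call the OpenAI API directly rather than configuring a ChatGPT project, the same entry can be passed per-request as a tool. The sketch below only assembles the tool definition in Python (the model name and prompt are placeholders, and the commented-out call additionally requires a valid OpenAI key and Spike token):

```python
import os

# Tool definition mirroring the config.json entry above.
spike_mcp_tool = {
    "type": "mcp",
    "server_label": "spike-health-data",
    "server_url": "https://app-api.spikeapi.com/v3/mcp",
    "headers": {
        # Token assumed to come from the environment; never hard-code it.
        "Authorization": f"Bearer {os.environ.get('SPIKE_ACCESS_TOKEN', '<SPIKE_ACCESS_TOKEN>')}"
    },
    "allowed_tools": ["query_statistics_daily"],
    "require_approval": "never",
}

# With the official openai package installed and OPENAI_API_KEY set,
# the tool is supplied on each request (uncomment to actually call):
# from openai import OpenAI
# client = OpenAI()
# response = client.responses.create(
#     model="gpt-4.1",                       # placeholder model name
#     tools=[spike_mcp_tool],
#     input="Get my step count for last week",
# )
# print(response.output_text)
```

Passing the tool per-request (instead of in a project file) lets you inject a different per-user Spike token on every call.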

Configuration for Claude

Claude clients store MCP server definitions in a .mcp.json file. The example below uses the HTTP transport, supplies a bearer token via environment variable, and names the server spike-health-data. When using environment variables, Claude will expand ${VAR} when reading the file.
{
  "mcpServers": {
    "spike-health-data": {
      "type": "http",
      "url": "https://app-api.spikeapi.com/v3/mcp",
      "headers": {
        "Authorization": "Bearer ${SPIKE_ACCESS_TOKEN}"
      }
    }
  }
}
Store this file at the appropriate scope (local, project, or user) depending on your use case. Claude Code will prompt you for approval before connecting to a project‑scoped server. You can also add the server via the command line:
claude mcp add --transport http spike-health-data https://app-api.spikeapi.com/v3/mcp --header Authorization="Bearer $SPIKE_ACCESS_TOKEN"

Summary

The Model Context Protocol provides a standard way for AI assistants to access external data and tools. Spike leverages MCP to expose its health‑data API as a remote server, allowing your application to retrieve relevant data about your users’ health and body metrics with just a few lines of configuration. By adding the ready‑to‑use server to your own application or to your ChatGPT / Claude environment, you can start querying wearable and health data immediately and build richer AI‑powered experiences.