What is MCP?
The Model Context Protocol (MCP) is a widely adopted protocol that standardises how applications provide context to large language models. It lets you plug an AI model into different data sources and tools such as Spike. MCP enables a client (for example, ChatGPT, Claude, or your own hosted model) to connect to one or more remote servers and exchange messages over a JSON‑RPC‑based protocol (a sample request is sketched after the list below). The MCP architecture has three roles:
- MCP host – an AI application (your own, or one proxied through a third party such as OpenAI) that manages one or more MCP client connections to servers;
- MCP client – a component that maintains a dedicated connection to a server and requests context or tool execution;
- MCP server – a program that exposes tools, context and resources for AI models.
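To make the exchange concrete, here is a minimal sketch of a single JSON‑RPC request an MCP client might send to a server to invoke a tool. The tools/call method is part of MCP itself; the tool name get_steps and its arguments are purely illustrative and not necessarily what Spike's server exposes.

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "get_steps",
    "arguments": {
      "start_date": "2024-06-01",
      "end_date": "2024-06-07"
    }
  }
}
```

The server replies with a JSON‑RPC result containing the tool's output, which the host passes back to the model as additional context.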
Spike’s ready‑to‑use MCP server
Spike hosts a remote MCP server that makes health and fitness data available to AI models. This server is available at:
Configuration for ChatGPT (OpenAI)
To make Spike’s MCP server available to ChatGPT, you need to add it to your tool configuration. ChatGPT uses a config.json file to declare external tools. Add an entry of type "mcp" with the server URL and an authentication header. You can also specify which tools are allowed and whether user approval is required.
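The snippet below is a sketch of such an entry, assuming field names in the style of OpenAI's MCP tool configuration (server_label, server_url, headers, allowed_tools, require_approval); the server URL, access token, and tool name are placeholders to replace with the values from your Spike setup.

```json
{
  "tools": [
    {
      "type": "mcp",
      "server_label": "spike-health-data",
      "server_url": "<SPIKE_MCP_SERVER_URL>",
      "headers": {
        "Authorization": "Bearer <SPIKE_ACCESS_TOKEN>"
      },
      "allowed_tools": ["<TOOL_NAME>"],
      "require_approval": "never"
    }
  ]
}
```

Under this assumed schema, omitting allowed_tools would expose every tool the server publishes, and require_approval controls whether each call needs user confirmation.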
Place the entry in the tools array of your ChatGPT project’s config.json file. Replace <SPIKE_ACCESS_TOKEN> with the token you generate for an individual application user. After reloading the project, ChatGPT will list spike‑health‑data in the Tools panel. Use the tool by writing natural language prompts (e.g. “Get my step count for last week”).
Configuration for Claude
Claude clients store MCP server definitions in a .mcp.json file. The example below uses the HTTP transport, supplies a bearer token via an environment variable, and names the server spike-health-data. When using environment variables, Claude will expand ${VAR} when reading the file.
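A minimal sketch of that definition, assuming Claude's mcpServers layout for .mcp.json; the server URL is a placeholder, and SPIKE_ACCESS_TOKEN is expected to be set in the environment before Claude reads the file.

```json
{
  "mcpServers": {
    "spike-health-data": {
      "type": "http",
      "url": "<SPIKE_MCP_SERVER_URL>",
      "headers": {
        "Authorization": "Bearer ${SPIKE_ACCESS_TOKEN}"
      }
    }
  }
}
```

Keeping the token in an environment variable means the .mcp.json file can be committed to version control without leaking credentials.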
