Model Context Protocol

Give Your Agent a Brain

Connect Claude, Cursor, or any MCP-compatible agent to HatiData. 23 tools across SQL, Memory, Chain-of-Thought, Triggers, and Branches.

60-Second Setup

Three steps. No Docker, no cloud account required.

1

Install the MCP server

$ pip install hatidata-agent

2

Add to your MCP config

Claude Desktop / Claude Code (~/.claude/mcp.json)

{
  "mcpServers": {
    "hatidata": {
      "command": "hatidata-mcp-server",
      "args": ["--host", "localhost", "--port", "5439", "--agent-id", "my-agent", "--database", "hatidata"]
    }
  }
}

Cursor (.cursor/mcp.json)

{
  "mcpServers": {
    "hatidata": {
      "command": "hatidata-mcp-server",
      "args": ["--host", "localhost", "--port", "5439", "--agent-id", "cursor-agent", "--database", "hatidata"]
    }
  }
}

Any MCP host (stdio transport)

$ hatidata-mcp-server --host localhost --port 5439 --agent-id my-agent --database hatidata

3

Initialize a local warehouse

$ hati init # creates .hati/ with local DuckDB

Your agent now has access to all 23 tools. Ask it to list tables or run SELECT 1 to verify the connection.
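Under the hood, your MCP host invokes these tools with standard MCP `tools/call` messages. A minimal sketch of the JSON-RPC request behind that SELECT 1 verification (the envelope follows the MCP spec; the tool name and `sql` argument come from the tool list below):

```python
import json

# JSON-RPC message an MCP host sends to invoke the `query` tool.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "query",
        "arguments": {"sql": "SELECT 1"},
    },
}

print(json.dumps(request, indent=2))
```

If the server is wired up correctly, the response carries a JSON result set; if you see a connection error instead, check that `hati init` ran and the host/port in your config match.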

Want hybrid SQL?

Standard SQL works immediately. For semantic search (semantic_match, JOIN_VECTOR), get a free cloud key (50 queries/day) and set HATIDATA_CLOUD_KEY in your environment.
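Once HATIDATA_CLOUD_KEY is set, hybrid queries go through the same `query` tool. A sketch of what such a call might look like; the exact `semantic_match` syntax below is an illustrative guess based on the function name, not confirmed against the HatiData SQL reference:

```python
# Illustrative only: semantic_match's real signature may differ.
hybrid_sql = """
SELECT id, title
FROM documents
WHERE semantic_match(title, 'billing disputes') > 0.8
"""

# The query travels through the same tool as plain SQL.
tool_call = {"name": "query", "arguments": {"sql": hybrid_sql}}
```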

Cloud Setup

$29/mo

Connect your agent to a managed HatiData Cloud instance. Requires an API key from your HatiData dashboard.

Claude Desktop / Claude Code

{
  "mcpServers": {
    "hatidata": {
      "command": "hatidata-mcp-server",
      "args": [
        "--host", "your-org.proxy.hatidata.com",
        "--port", "5439",
        "--agent-id", "my-agent",
        "--database", "hatidata"
      ],
      "env": {
        "HATIDATA_API_KEY": "hd_live_your_api_key_here"
      }
    }
  }
}

Cursor

{
  "mcpServers": {
    "hatidata": {
      "command": "hatidata-mcp-server",
      "args": [
        "--host", "your-org.proxy.hatidata.com",
        "--port", "5439",
        "--agent-id", "cursor-agent",
        "--database", "hatidata"
      ],
      "env": {
        "HATIDATA_API_KEY": "hd_live_your_api_key_here"
      }
    }
  }
}

Any MCP host (stdio transport)

$ HATIDATA_API_KEY=hd_live_... hatidata-mcp-server --host your-org.proxy.hatidata.com --port 5439 --agent-id my-agent --database hatidata

|                  | Local (Free)                 | Cloud ($29/mo)                 |
|------------------|------------------------------|--------------------------------|
| Host             | localhost                    | your-org.proxy.hatidata.com    |
| API key required | No                           | Yes (HATIDATA_API_KEY)         |
| Hybrid SQL       | With free cloud key (50/day) | Included (10K/day)             |
| Data location    | Your machine                 | Managed cloud                  |
| Setup            | hati init                    | Sign up, get connection string |

23 Tools, 5 Categories

Every tool call passes through HatiData's full security pipeline: authentication, ABAC policy evaluation, audit logging, and metering.

SQL Tools

6 tools

Query the warehouse, explore schemas, and track usage.

query

Execute SQL and return JSON results

sql

list_tables

List all tables the agent can access

no parameters

describe_table

Get column names, types, and nullability

table_name

list_schemas

List all schemas in the database

no parameters

read_query

Execute a read-only SQL query and return results

sql

get_usage_stats

Get query usage and resource consumption stats

agent_id?, since?
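A typical exploration session chains these tools together. A sketch of the call sequence, using the parameter names listed above; the `orders` table and the result handling are hypothetical:

```python
import json

# Explore the schema, then read data. Each dict is the `params` payload
# of an MCP tools/call request.
calls = [
    {"name": "list_tables", "arguments": {}},
    {"name": "describe_table", "arguments": {"table_name": "orders"}},
    {"name": "read_query", "arguments": {"sql": "SELECT COUNT(*) FROM orders"}},
]

for call in calls:
    print(json.dumps(call))
```

Using `read_query` for the final step is a safe default when the agent only needs to inspect data, since it is documented as read-only.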

Memory Tools

5 tools

Store, search, and manage long-term agent memory with vector similarity.

store_memory

Store a memory with type, metadata, and importance

content, memory_type?, metadata?, importance?

search_memory

Semantic search across stored memories

query, top_k?, memory_type?, min_importance?

get_agent_state

Retrieve a named state value for the agent

key

set_agent_state

Set or update a named state value

key, value

delete_memory

Permanently delete a memory entry by ID

memory_id
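A store-then-recall round trip might look like the following. Argument names match the parameter lists above; the `preference` memory type, the importance scale, and the content strings are assumptions for illustration:

```python
# Persist a fact about the user, then retrieve it later by meaning.
store = {
    "name": "store_memory",
    "arguments": {
        "content": "User prefers CSV exports over JSON.",
        "memory_type": "preference",  # optional, per the parameter list
        "importance": 0.8,            # assumed 0..1 scale
    },
}

# Semantic search: the query need not share words with the stored content.
search = {
    "name": "search_memory",
    "arguments": {"query": "export format preference", "top_k": 3},
}
```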

Chain-of-Thought Tools

3 tools

Record and replay agent reasoning traces with SHA-256 hash chains.

log_reasoning_step

Log a step (observation, thought, action, decision, etc.)

session_id, step_type, content, metadata?, importance?

replay_decision

Replay a full reasoning chain with hash verification

session_id, verify_chain?

get_session_history

List reasoning sessions with optional filtering

limit?, agent_id?, since?
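Logging and replaying a trace uses a shared session ID. A sketch with the parameters above; the session ID and step content are hypothetical:

```python
SESSION = "sess-001"  # hypothetical session identifier

# Record one step in the reasoning chain.
step = {
    "name": "log_reasoning_step",
    "arguments": {
        "session_id": SESSION,
        "step_type": "decision",
        "content": "Chose read_query over query for a read-only check.",
    },
}

# Later: replay the chain and verify its SHA-256 hash links.
replay = {
    "name": "replay_decision",
    "arguments": {"session_id": SESSION, "verify_chain": True},
}
```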

Trigger Tools

4 tools

Semantic triggers that fire when content matches a concept above a threshold.

register_trigger

Register a trigger with concept, threshold, and action

name, concept, action, threshold?, cooldown_seconds?

list_triggers

List all registered triggers

status?

delete_trigger

Delete a trigger by ID

trigger_id

test_trigger

Test a trigger against sample content (dry run)

trigger_id, content
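Registering a trigger and dry-running it might look like this. The trigger name, concept, action value, and `trg_123` ID are illustrative placeholders, not values the product defines:

```python
# Fire when content matches the concept above the threshold.
register = {
    "name": "register_trigger",
    "arguments": {
        "name": "churn-risk",                      # hypothetical
        "concept": "customer wants to cancel",
        "action": "notify",                        # assumed action value
        "threshold": 0.75,
        "cooldown_seconds": 300,
    },
}

# Dry run against sample content without firing the real action.
dry_run = {
    "name": "test_trigger",
    "arguments": {
        "trigger_id": "trg_123",                   # hypothetical ID
        "content": "I'd like to close my account.",
    },
}
```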

Branch Tools

5 tools

Isolated data branches for speculative what-if analysis with zero-copy views.

branch_create

Create an isolated branch from main data

name?, description?, ttl_seconds?

branch_query

Execute SQL within a branch's isolated schema

branch_id, sql

branch_merge

Merge branch changes back to main

branch_id, strategy?

branch_discard

Discard a branch and clean up

branch_id

branch_list

List all active branches

status?
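A full what-if cycle runs create, query, then merge or discard. A sketch of the lifecycle; the branch name, `br_1` ID, the SQL, and the `overwrite` strategy value are assumptions for illustration:

```python
# Speculative change in an isolated branch, then merge back to main.
lifecycle = [
    {"name": "branch_create",
     "arguments": {"name": "what-if-pricing", "ttl_seconds": 3600}},
    {"name": "branch_query",
     "arguments": {"branch_id": "br_1",  # hypothetical ID from branch_create
                   "sql": "UPDATE prices SET amount = amount * 1.1"}},
    {"name": "branch_merge",
     "arguments": {"branch_id": "br_1", "strategy": "overwrite"}},
]
```

Swapping the final step for `branch_discard` throws the experiment away instead; the TTL cleans up branches the agent forgets to close.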

Works With

Claude Desktop

~/.claude/mcp.json

Claude Code

~/.claude/mcp.json

Cursor

.cursor/mcp.json

Any MCP Host

stdio transport

Configuration

Pass these as environment variables in your MCP config or export them in your shell.

| Variable           | Default   | Description                                         |
|--------------------|-----------|-----------------------------------------------------|
| HATIDATA_HOST      | localhost | Proxy hostname (cloud: your-org.proxy.hatidata.com) |
| HATIDATA_PORT      | 5439      | Proxy port                                          |
| HATIDATA_API_KEY   | (none)    | Org API key for cloud auth (hd_live_...)            |
| HATIDATA_AGENT_ID  | mcp-agent | Agent identifier for audit and billing              |
| HATIDATA_CLOUD_KEY | (none)    | Cloud key for hybrid SQL transpilation              |
| HATIDATA_FRAMEWORK | mcp       | Framework tag (langchain, crewai, etc.)             |
| HATIDATA_PRIORITY  | normal    | Scheduling priority: low, normal, high, critical    |

Ready to connect?

Get started locally for free, or sign up for a cloud key to unlock hybrid SQL.