CrewAI · Beginner · 10 min

CrewAI + HatiData: Multi-Agent Shared Memory

Build a CrewAI crew where agents share persistent memory through HatiData. One agent's discoveries become another's context.

What You'll Build

A CrewAI research crew with shared persistent memory. Agents write findings to a common memory pool and retrieve each other's discoveries via semantic search.

Prerequisites

  • pip install crewai hatidata-agent
  • hati init
  • An OpenAI API key

Architecture

┌──────────────┐    ┌──────────────┐    ┌──────────────┐
│  CrewAI      │───▶│  HatiData    │───▶│   Engine     │
│  Crew        │    │ Shared Memory│    │  + Vectors   │
└──────────────┘    └──────────────┘    └──────────────┘
    Agent A writes ──▶ Agent B reads (shared namespace)

Key Concepts

  • Shared memory namespaces: multiple agents write to the same namespace and discover each other's findings through semantic search
  • Agent attribution: every memory records which agent_id created it, enabling provenance tracking across the crew
  • Cross-agent discovery: semantic_match() finds relevant memories regardless of which agent stored them — knowledge flows automatically
  • Persistent across sessions: crew memories survive restarts, so long-running research projects accumulate knowledge over days or weeks
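The shared-namespace pattern behind these concepts can be sketched with a minimal in-memory mock (plain Python, not the HatiData SDK): every write is tagged with its author's agent_id, and any agent searching the namespace sees all writes, whoever made them.

```python
from collections import defaultdict

# Minimal in-memory mock of a shared memory namespace (illustration only,
# not the HatiData SDK): writes carry agent attribution, reads see all agents.
namespaces = defaultdict(list)

def store_memory(agent_id, namespace, content):
    namespaces[namespace].append({"agent_id": agent_id, "content": content})

def search(namespace, keyword):
    # Stand-in for semantic_match(): plain substring matching.
    return [m for m in namespaces[namespace]
            if keyword.lower() in m["content"].lower()]

# Agent A writes, Agent B finds it — attribution preserved.
store_memory("researcher", "market-research", "APAC cloud market growing 34% YoY")
hits = search("market-research", "apac")
print(f"[{hits[0]['agent_id']}] {hits[0]['content']}")
# [researcher] APAC cloud market growing 34% YoY
```

The real system replaces the substring match with embedding-based semantic search and persists memories to disk, but the read/write contract is the same.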

Step-by-Step Implementation

1

Install Dependencies

Install CrewAI and the HatiData agent SDK.

Bash
pip install crewai hatidata-agent
hati init

Note: CrewAI orchestrates multi-agent workflows. HatiData provides the shared memory layer.

2

Set Up Shared Memory for a Crew

Create a shared memory namespace that all agents in the crew can read and write to.

Python
from hatidata_agent import HatiDataAgent

# Researcher agent stores a finding
researcher = HatiDataAgent(host="localhost", port=5439, agent_id="researcher")
researcher.execute("""
    SELECT store_memory(
        'APAC cloud infrastructure market growing 34% YoY, led by Indonesia and Vietnam',
        'market-research'
    )
""")

# Analyst agent discovers the finding via semantic search
analyst = HatiDataAgent(host="localhost", port=5439, agent_id="analyst")
findings = analyst.query("""
    SELECT agent_id, content, created_at
    FROM _hatidata_memory.memories
    WHERE namespace = 'market-research'
      AND semantic_match(embedding, 'growth markets in Asia Pacific', 0.65)
    ORDER BY semantic_rank(embedding, 'growth markets in Asia Pacific') DESC
    LIMIT 5
""")

print(f"Analyst found {len(findings)} shared finding(s):")
for f in findings:
    print(f"  [{f['agent_id']}] {f['content']}")
Expected Output
Analyst found 1 shared finding(s):
  [researcher] APAC cloud infrastructure market growing 34% YoY, led by Indonesia and Vietnam
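The 0.65 passed to semantic_match() is a similarity threshold: memories whose embeddings score below it against the query embedding are filtered out. A toy illustration of that thresholding, using made-up 3-dimensional vectors (real embeddings come from an embedding model and have hundreds of dimensions):

```python
import math

def cosine(a, b):
    # Cosine similarity between two vectors: dot product over norms.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

query = [0.9, 0.1, 0.0]                      # toy embedding of the search query
memories = {
    "APAC growth finding": [0.8, 0.3, 0.1],  # semantically close to the query
    "Unrelated HR policy": [0.0, 0.2, 0.9],  # semantically far from the query
}

THRESHOLD = 0.65
matches = {name: cosine(query, emb)
           for name, emb in memories.items()
           if cosine(query, emb) >= THRESHOLD}
print(sorted(matches))  # only the APAC finding clears the threshold
```

Lowering the threshold widens recall at the cost of pulling in loosely related memories; raising it keeps results tight but may miss paraphrased findings.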
3

Run the Full Crew

Define a CrewAI crew with specialized agents, exposing HatiData reads and writes as CrewAI tools so the researcher can store findings and the analyst can retrieve them.

Python
from crewai import Agent, Task, Crew
from crewai.tools import tool
from hatidata_agent import HatiDataAgent

hati = HatiDataAgent(host="localhost", port=5439, agent_id="crew-lead")

@tool("store_finding")
def store_finding(content: str) -> str:
    """Store a research finding in the crew's shared memory namespace."""
    # In production, parameterize the query instead of interpolating strings.
    hati.execute(f"SELECT store_memory('{content}', 'market-research')")
    return "Finding stored in shared memory."

@tool("search_findings")
def search_findings(query: str) -> str:
    """Search the shared namespace for findings from any agent."""
    rows = hati.query(f"""
        SELECT agent_id, content
        FROM _hatidata_memory.memories
        WHERE namespace = 'market-research'
          AND semantic_match(embedding, '{query}', 0.65)
        LIMIT 5
    """)
    return "\n".join(f"[{r['agent_id']}] {r['content']}" for r in rows)

researcher = Agent(
    role="Market Researcher",
    goal="Find emerging trends in cloud infrastructure",
    backstory="You research technology market trends.",
    tools=[store_finding],
)

analyst = Agent(
    role="Strategy Analyst",
    goal="Synthesize research into actionable recommendations",
    backstory="You analyze research findings and create strategic briefs.",
    tools=[search_findings],
)

research_task = Task(
    description="Research APAC cloud market trends and store findings in shared memory",
    expected_output="A set of findings written to the shared memory namespace",
    agent=researcher,
)

analysis_task = Task(
    description="Query shared memory for research findings and create a strategy brief",
    expected_output="A short strategy brief grounded in the shared findings",
    agent=analyst,
)

crew = Crew(agents=[researcher, analyst], tasks=[research_task, analysis_task])
result = crew.kickoff()
print(result)
Expected Output
Strategy brief: APAC presents the highest growth opportunity at 34% YoY...

Ready to build?

Install HatiData locally and start building with CrewAI in minutes.
