CrewAI + HatiData: Shared Memory for Multi-Agent Teams
The Multi-Agent Memory Challenge
CrewAI excels at orchestrating multiple agents that work together on complex tasks. A research crew might include a data analyst, a fact checker, and a report writer. A customer success crew might include a triage agent, a technical support agent, and an escalation agent.
But each agent in a CrewAI crew has its own context window. When the data analyst discovers an important insight, the report writer does not automatically know about it. When the triage agent classifies a ticket, the technical support agent must be explicitly told the classification. Information flows through the task output chain, but there is no persistent, searchable knowledge base that all agents share.
HatiData provides that shared knowledge base. Each agent in a crew can store memories into shared namespaces and retrieve relevant memories from those namespaces. The result is a crew whose agents build on each other's knowledge, and that knowledge accumulates across executions rather than existing only within a single task run.
Setting Up Shared Memory
The crewai-hatidata package integrates HatiData into CrewAI's agent and tool system:
```shell
pip install crewai-hatidata
```

Namespace Design for Crews
The key architectural decision is namespace design. Each namespace represents a knowledge domain that specific agents can access. A typical crew uses three types of namespaces:
1. Agent-private namespaces — Each agent has its own namespace for personal knowledge (e.g., analyst/private, writer/private)
2. Shared crew namespace — All agents in the crew share a common namespace for cross-agent knowledge (e.g., research-crew/shared)
3. Domain namespaces — Subject-specific namespaces that multiple crews can access (e.g., company/products, company/customers)
```python
from crewai import Agent, Task, Crew
from crewai_hatidata import HatiDataMemory, HatiDataTools

# Shared memory configuration
shared_memory = HatiDataMemory(
    url="http://localhost:5439",
    api_key="hd_live_crew_key",
    namespace="research-crew/shared",
)

# Agent-specific tools with access to shared + private namespaces
analyst_tools = HatiDataTools(
    url="http://localhost:5439",
    api_key="hd_live_analyst_key",
    namespaces=["research-crew/shared", "analyst/private", "company/data"],
)

writer_tools = HatiDataTools(
    url="http://localhost:5439",
    api_key="hd_live_writer_key",
    namespaces=["research-crew/shared", "writer/private", "company/brand"],
)
```

Building a Research Crew
Here is a complete example of a research crew where agents share memory through HatiData:
```python
from crewai import Agent, Task, Crew, Process
from crewai_hatidata import HatiDataMemory, HatiDataTools

# Initialize shared memory
memory = HatiDataMemory(
    url="http://localhost:5439",
    api_key="hd_live_crew_key",
    namespace="research-crew/shared",
)

# Data Analyst Agent
analyst = Agent(
    role="Data Analyst",
    goal="Analyze data and extract key insights",
    backstory="You are an expert data analyst who finds patterns in complex datasets.",
    tools=HatiDataTools(
        url="http://localhost:5439",
        api_key="hd_live_analyst_key",
        namespaces=["research-crew/shared", "company/data"],
    ).get_tools(),
    memory=True,
)

# Fact Checker Agent
checker = Agent(
    role="Fact Checker",
    goal="Verify claims and ensure accuracy",
    backstory="You verify every claim against source data and flag inaccuracies.",
    tools=HatiDataTools(
        url="http://localhost:5439",
        api_key="hd_live_checker_key",
        namespaces=["research-crew/shared", "company/data"],
    ).get_tools(),
    memory=True,
)

# Report Writer Agent
writer = Agent(
    role="Report Writer",
    goal="Synthesize findings into clear, actionable reports",
    backstory="You write compelling reports that translate data insights into business recommendations.",
    tools=HatiDataTools(
        url="http://localhost:5439",
        api_key="hd_live_writer_key",
        namespaces=["research-crew/shared", "company/brand"],
    ).get_tools(),
    memory=True,
)
```

Task Definitions with Memory
```python
analysis_task = Task(
    description="Analyze Q4 customer churn data and identify the top 3 drivers",
    expected_output="A structured analysis with data-backed churn drivers",
    agent=analyst,
)

verification_task = Task(
    description="Verify the analyst's churn drivers against raw data sources",
    expected_output="Verified findings with confidence scores",
    agent=checker,
    context=[analysis_task],
)

report_task = Task(
    description="Write an executive summary of verified churn findings with recommendations",
    expected_output="A polished executive summary suitable for C-level audience",
    agent=writer,
    context=[verification_task],
)

crew = Crew(
    agents=[analyst, checker, writer],
    tasks=[analysis_task, verification_task, report_task],
    process=Process.sequential,
    memory=True,
)

result = crew.kickoff()
```

How Memory Flows Between Agents
When the crew executes, memory flows in two directions:
Explicit: Task Context Chain
CrewAI's task context chain passes outputs from upstream tasks to downstream tasks. The analyst's output becomes input context for the fact checker, and the fact checker's output becomes input context for the report writer. This is standard CrewAI behavior.
Persistent: Shared Memory via HatiData
In addition to the task context chain, each agent stores key findings in the shared namespace as it works. The analyst stores "Q4 churn driven primarily by pricing dissatisfaction in mid-market segment." The fact checker stores "Pricing churn claim verified: 67% of churned mid-market accounts cited pricing in exit surveys." The writer retrieves both when composing the executive summary.
The critical difference is that these memories persist beyond the current crew execution. When the same crew runs next quarter, the agents can retrieve Q4's findings and compare them against Q1 data. Over time, the shared namespace accumulates a rich history of the crew's collective research.
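This cross-execution persistence can be sketched with a toy in-memory store; SharedNamespaceStore and its keyword-based retrieve are illustrative stand-ins, not HatiData's actual API:

```python
from collections import defaultdict

class SharedNamespaceStore:
    """Toy stand-in for a shared HatiData namespace (illustrative only)."""

    def __init__(self):
        self._memories = defaultdict(list)  # namespace -> list of (agent, text)

    def store(self, namespace, agent, text):
        self._memories[namespace].append((agent, text))

    def retrieve(self, namespace, keyword):
        # Naive keyword match; the real service would use semantic search.
        return [t for _, t in self._memories[namespace] if keyword.lower() in t.lower()]

store = SharedNamespaceStore()

# Execution 1 (Q4): agents write findings into the shared namespace
store.store("research-crew/shared", "analyst", "Q4 churn: 67% pricing, 22% support, 11% features")
store.store("research-crew/shared", "checker", "Q4 pricing churn verified against exit surveys")

# Execution 2 (Q1): the analyst pulls last quarter's findings for comparison
q4_findings = store.retrieve("research-crew/shared", "q4 churn")
```

The point is only that the store outlives any single crew execution: whatever was written in run 1 is still retrievable in run 2.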
```
Execution 1 (Q4 Analysis):
  Analyst stores: "Q4 churn: 67% pricing, 22% support, 11% features"
  Checker stores: "Pricing churn verified against exit surveys"
  Writer stores:  "Executive summary delivered, recommended 15% mid-market discount"

Execution 2 (Q1 Analysis):
  Analyst retrieves Q4 findings for comparison
  Analyst stores: "Q1 churn: 45% pricing (improved), 35% support (worsened)"
  Checker cross-references Q4 recommendations with Q1 results
  Writer notes improvement in pricing churn and emerging support issue
```

The Four CrewAI Tools
The HatiDataTools class provides four tools designed for CrewAI agent use:
query
Executes SQL queries against the agent's data warehouse. The agent writes natural SQL to retrieve data, filter by conditions, aggregate results, and join tables.
```python
# Agent uses this tool to ask:
# "Query the customer table to find all enterprise accounts that churned in Q4"
# Tool translates to SQL and returns structured results
```

list_tables
Returns all available tables with row counts and column summaries. Agents use this at the start of a task to understand what data is available.
describe
Returns the full schema for a specific table. Agents use this after identifying a relevant table to understand its structure before writing queries.
context_search
Performs semantic search across table descriptions, column names, and stored memories. This is the bridge between natural language and structured data — the agent describes what it needs, and context_search finds the relevant tables and memories.
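A minimal sketch of the idea behind context_search, using plain token overlap in place of real embeddings (the catalog contents and scoring function are invented for illustration):

```python
# Hypothetical table descriptions an agent could search over.
catalog = {
    "customers": "Customer accounts with segment, plan, and churn date",
    "exit_surveys": "Free-text exit survey responses from churned accounts",
    "invoices": "Billing history and payments per account",
}

def context_search(query, catalog, limit=2):
    """Rank tables by token overlap with the query (stand-in for semantic search)."""
    query_tokens = set(query.lower().split())

    def score(table):
        return len(query_tokens & set(catalog[table].lower().split()))

    return sorted(catalog, key=score, reverse=True)[:limit]

matches = context_search("why did churned accounts leave", catalog)
```

A real semantic layer would match "leave" to "churn" as well; the sketch only shows the shape of the interface: natural language in, ranked tables and memories out.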
Team Coordination Patterns
Pattern 1: Knowledge Accumulation
Each execution of a crew adds to the shared knowledge base. Over time, agents develop institutional memory — they know what worked in the past, what failed, and what the organization cares about. This pattern is ideal for recurring analysis tasks (weekly reports, quarterly reviews, ongoing monitoring).
Pattern 2: Specialist Handoff
A triage agent classifies incoming requests and stores the classification in shared memory. A specialist agent retrieves the classification and relevant context to handle the request. A follow-up agent checks shared memory for the resolution and sends a summary. Each agent operates independently but coordinates through shared memory.
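The handoff reduces to a few lines; the shared dict below stands in for the crew's shared namespace, and all names and the classification rule are illustrative:

```python
# A plain dict standing in for the shared namespace.
shared = {}

def triage_agent(ticket_id, text):
    # Classify the ticket and record the classification for downstream agents.
    category = "billing" if "invoice" in text.lower() else "technical"
    shared[(ticket_id, "category")] = category

def specialist_agent(ticket_id):
    # Retrieve the stored classification instead of re-deriving it.
    category = shared[(ticket_id, "category")]
    shared[(ticket_id, "resolution")] = f"resolved as {category} issue"

def followup_agent(ticket_id):
    # Check shared memory for the resolution and summarize it.
    return f"Summary for {ticket_id}: {shared[(ticket_id, 'resolution')]}"

triage_agent("T-1001", "My invoice shows a duplicate charge")
specialist_agent("T-1001")
summary = followup_agent("T-1001")
```

Note that no agent calls another directly; each reads and writes the shared store, which is what lets them run as independent CrewAI tasks.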
Pattern 3: Collaborative Research
Multiple agents explore different aspects of a question simultaneously. Each stores its findings in the shared namespace. A synthesis agent retrieves all findings and composes a unified answer. This pattern parallelizes research while maintaining coherence through the shared memory layer.
Pattern 4: Quality Assurance
A primary agent completes a task and stores its output in shared memory. A QA agent retrieves the output, evaluates it against quality criteria, and stores its assessment. If the assessment indicates issues, the primary agent retrieves the feedback and revises. This pattern creates a quality improvement loop powered by persistent memory.
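A stripped-down version of that loop, with a trivial "quality criterion" invented for illustration:

```python
shared = {}

def primary_agent():
    # Produce a draft, revising it with any stored QA feedback.
    feedback = shared.get("qa_feedback", "")
    draft = "Churn summary"
    if "cite sources" in feedback:
        draft += " (sources: exit surveys, billing data)"
    shared["draft"] = draft

def qa_agent():
    # Evaluate the stored draft; store feedback if it fails.
    if "sources" not in shared["draft"]:
        shared["qa_feedback"] = "cite sources"
        return False
    return True

primary_agent()
if not qa_agent():   # first draft fails QA...
    primary_agent()  # ...primary retrieves the feedback and revises
passed = qa_agent()
```

Because the feedback persists in shared memory, the revision step works even if the two agents run in separate task executions.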
Production Considerations
API Key Per Agent
In production, each agent should have its own API key with appropriate scope and namespace access. This enables:
- Per-agent quota tracking and cost attribution
- Least-privilege access (the writer does not need access to raw data tables)
- Audit trail that identifies which agent performed each operation
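One way to organize this is a single credential map consulted when constructing each agent's tools; the keys and namespace lists below mirror the earlier examples but are otherwise illustrative:

```python
# Per-agent credentials: each agent gets its own key and only the
# namespaces its role requires (least privilege).
AGENT_CREDENTIALS = {
    "analyst": {
        "api_key": "hd_live_analyst_key",
        "namespaces": ["research-crew/shared", "analyst/private", "company/data"],
    },
    "checker": {
        "api_key": "hd_live_checker_key",
        "namespaces": ["research-crew/shared", "company/data"],
    },
    "writer": {
        # The writer gets brand guidelines but no raw data tables.
        "api_key": "hd_live_writer_key",
        "namespaces": ["research-crew/shared", "writer/private", "company/brand"],
    },
}

def namespaces_for(agent_name):
    return AGENT_CREDENTIALS[agent_name]["namespaces"]
```

Keeping the map in one place makes the least-privilege boundaries easy to audit at a glance.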
Memory Cleanup
Long-running crews accumulate memories. Implement a periodic cleanup task that archives old memories or removes duplicates. HatiData's retention policies can automate this — set a TTL on the shared namespace and memories are automatically expired.
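If you manage retention yourself instead of (or alongside) a namespace TTL, the cleanup task reduces to an age filter; the memory dicts and field names here are assumptions for illustration:

```python
import time

DAY = 86_400  # seconds

def expire_memories(memories, ttl_seconds, now=None):
    """Keep only memories stored within the last ttl_seconds."""
    now = time.time() if now is None else now
    return [m for m in memories if now - m["stored_at"] <= ttl_seconds]

memories = [
    {"text": "Q4 churn findings", "stored_at": 0 * DAY},
    {"text": "Q1 churn findings", "stored_at": 90 * DAY},
]

# Pretend "now" is day 200 with a 180-day retention window.
kept = expire_memories(memories, ttl_seconds=180 * DAY, now=200 * DAY)
```

Archiving instead of deleting is the same loop with the dropped memories written somewhere cold first.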
Error Handling
If HatiData is unavailable, the HatiDataMemory class degrades gracefully — agents continue operating without persistent memory, using only the task context chain. When HatiData comes back, new memories are stored normally. This ensures that a memory infrastructure issue does not block the entire crew.
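The degradation behavior can be emulated with a small wrapper; GracefulMemory and the flaky backend are illustrative, not the package's real classes:

```python
class GracefulMemory:
    """Wrap a memory backend so outages never raise into the agent loop."""

    def __init__(self, backend):
        self.backend = backend

    def store(self, text):
        try:
            self.backend.store(text)
            return True
        except ConnectionError:
            # Backend down: the crew keeps running on task context alone.
            return False

class FlakyBackend:
    def __init__(self):
        self.up = False
        self.stored = []

    def store(self, text):
        if not self.up:
            raise ConnectionError("memory service unreachable")
        self.stored.append(text)

backend = FlakyBackend()
memory = GracefulMemory(backend)

ok_during_outage = memory.store("finding A")   # silently skipped
backend.up = True
ok_after_recovery = memory.store("finding B")  # stored normally
```

The key property is that a store failure returns a status instead of raising, so the crew's control flow is untouched by a memory outage.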
Next Steps
The CrewAI integration is designed for multi-agent teams that need shared, persistent knowledge. For single-agent persistent memory, see the LangChain integration. For semantic trigger-based coordination between crews, see the semantic triggers cookbook. For production deployment with access controls, see the governance guide.