LangChain · Beginner · 5 min
LangChain + HatiData: Persistent Memory in 5 Minutes
Give your LangChain agent persistent memory that survives restarts. Store and retrieve conversation context with semantic search.
What You'll Build
A LangChain agent with persistent memory backed by HatiData. The agent remembers past conversations and retrieves relevant context using semantic search.
Prerequisites
- pip install langchain hatidata-agent
- hati init
- An OpenAI API key
Architecture
┌──────────────┐ ┌──────────────┐ ┌──────────────┐
│ LangChain │───▶│ HatiData │───▶│ Engine │
│ Agent │ │ Memory API │ │ + Vectors │
└──────────────┘ └──────────────┘ └──────────────┘
Persistent memory via SQL + vector search
Key Concepts
- Persistent memory: agent memories survive process restarts because they're stored in HatiData's query engine, not in-process RAM
- Semantic search: store_memory() auto-generates embeddings, and semantic_match() retrieves contextually relevant memories at query time
- Namespace isolation: memories are scoped by agent_id and optional namespace, so multiple agents can share a HatiData instance without conflicts
- SQL-native: all memory operations use standard SQL — no proprietary APIs, no vendor lock-in
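To build intuition for what semantic_match() and semantic_rank() are doing, here is a toy, pure-Python sketch of threshold-filtered cosine-similarity ranking over embeddings. The vectors and the cosine scoring are illustrative assumptions — HatiData's actual embedding model and scoring internals aren't covered in this cookbook.

```python
import math

def cosine(a, b):
    # Cosine similarity: 1.0 = same direction, near 0.0 = unrelated.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Toy 3-d "embeddings" standing in for the real vectors HatiData stores.
memories = {
    "user prefers concise answers": [0.9, 0.1, 0.0],
    "deploy runs every Friday":     [0.0, 0.2, 0.9],
}
query = [0.8, 0.2, 0.1]  # stand-in embedding of the incoming question

# Conceptually: semantic_match(embedding, query, 0.7) keeps rows whose
# similarity clears the threshold; semantic_rank orders the survivors.
matches = sorted(
    ((text, cosine(vec, query)) for text, vec in memories.items()
     if cosine(vec, query) >= 0.7),
    key=lambda pair: pair[1],
    reverse=True,
)
print(matches[0][0])  # → "user prefers concise answers"
```

The unrelated memory scores well below the 0.7 threshold and is filtered out before ranking, which is why tuning that threshold matters in the queries below.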
Step-by-Step Implementation
1
Install Dependencies
Install LangChain and the HatiData agent SDK.
Bash
pip install langchain langchain-openai hatidata-agent
hati init
Note: hati init creates a local HatiData instance on port 5439 with vector search enabled.
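Before wiring up the agent, you can sanity-check that the local instance is listening. This probe uses only the Python standard library; the helper name is ours, and port 5439 comes from the note above.

```python
import socket

def hatidata_reachable(host="localhost", port=5439, timeout=1.0):
    # Quick TCP probe of the port that `hati init` starts HatiData on.
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

print("HatiData reachable:", hatidata_reachable())
```

If this prints False, re-run hati init before continuing to the next step.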
2
Configure Persistent Memory
Create a LangChain agent that stores and retrieves memories from HatiData using semantic search.
Python
from langchain_openai import ChatOpenAI
from hatidata_agent import HatiDataAgent
# Connect to local HatiData
agent = HatiDataAgent(host="localhost", port=5439, agent_id="langchain-agent")
# Store a memory
agent.execute("""
SELECT store_memory(
'user prefers concise answers and dislikes jargon',
'user-preferences'
)
""")
# Retrieve relevant memories
memories = agent.query("""
SELECT content, created_at
FROM _hatidata_memory.memories
WHERE semantic_match(embedding, 'how should I respond to this user', 0.7)
ORDER BY semantic_rank(embedding, 'how should I respond to this user') DESC
LIMIT 3
""")
print(f"Found {len(memories)} relevant memories")
for m in memories:
    print(f"  - {m['content']}")
Expected Output
Found 1 relevant memories
- user prefers concise answers and dislikes jargon
3
Run Your Agent with Memory
Build a conversational agent that automatically stores new interactions and recalls past context.
Python
from langchain_openai import ChatOpenAI
from hatidata_agent import HatiDataAgent
llm = ChatOpenAI(model="gpt-4o")
agent = HatiDataAgent(host="localhost", port=5439, agent_id="langchain-agent")
def chat_with_memory(user_input: str) -> str:
    # Escape single quotes before interpolating into SQL literals,
    # so quoted user input can't break (or inject into) the query.
    safe_input = user_input.replace("'", "''")
    # Retrieve relevant past context
    memories = agent.query(f"""
        SELECT content FROM _hatidata_memory.memories
        WHERE semantic_match(embedding, '{safe_input}', 0.65)
        ORDER BY semantic_rank(embedding, '{safe_input}') DESC
        LIMIT 3
    """)
    context = "\n".join(m["content"] for m in memories)
    prompt = f"Context from memory:\n{context}\n\nUser: {user_input}"
    response = llm.invoke(prompt)
    # Store this interaction as a new memory
    safe_response = response.content[:200].replace("'", "''")
    agent.execute(f"""
        SELECT store_memory(
            'User asked: {safe_input}. Response: {safe_response}',
            'conversations'
        )
    """)
    return response.content
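The prompt assembly inside chat_with_memory is plain string formatting, so it can be factored out and tested without a running HatiData instance or an OpenAI key. The helper name below is illustrative, not part of either SDK.

```python
def build_prompt(memories: list[dict], user_input: str) -> str:
    # Mirrors the context-assembly step in chat_with_memory above:
    # joined memory contents, then the user's message.
    context = "\n".join(m["content"] for m in memories)
    return f"Context from memory:\n{context}\n\nUser: {user_input}"

prompt = build_prompt(
    [{"content": "user prefers concise answers and dislikes jargon"}],
    "What's our refund policy?",
)
print(prompt)
```

Keeping this step pure makes it easy to unit-test how retrieved memories shape the final prompt.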
print(chat_with_memory("What's our refund policy?"))
Expected Output
Based on your previous interactions, here's a concise summary of the refund policy...
Related Use Case
Operations
Customer Support
Agents That Remember Every Customer