Vercel AI SDK · Intermediate · 20 min

Vercel AI SDK + HatiData: Streaming + Memory

Build a Next.js chat interface with streaming responses from Vercel AI SDK and persistent conversation memory backed by HatiData.

What You'll Build

A Next.js chat interface using Vercel AI SDK with HatiData-backed persistent memory and streaming responses.

Prerequisites

  • Node.js 18+
  • pnpm or npm
  • OpenAI API key
  • HatiData running locally (`hati init`)

Architecture

┌──────────────┐    ┌──────────────┐    ┌──────────────┐
│  Next.js     │───▶│  Vercel AI   │───▶│   OpenAI     │
│  Frontend    │    │  SDK Stream  │    │   GPT-4o     │
└──────────────┘    └──────────────┘    └──────────────┘
       │
       ▼
┌──────────────┐    ┌──────────────┐
│  HatiData    │───▶│   Engine     │
│  Memory API  │    │  + Vectors   │
└──────────────┘    └──────────────┘

Key Concepts

  • Streaming + memory: retrieve memory context before streaming starts, then persist the full response after streaming completes
  • Postgres wire protocol: HatiData's proxy speaks Postgres, so any pg/postgres client library works out of the box
  • onFinish callback: store conversation turns after streaming completes so memory persistence adds zero latency to the user experience
  • Cross-session memory: conversations persist in HatiData's engine + vector index, surviving server restarts and browser refreshes

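The retrieve-before / persist-after ordering in these concepts can be sketched as one orchestration function. This is illustrative only: `retrieveMemories`, `streamAnswer`, and `storeMemory` are stand-ins for the HatiData queries and AI SDK calls shown in the steps below, injected here so the lifecycle is visible on its own.

```typescript
// Sketch of the streaming + memory lifecycle. The three injected
// functions stand in for real HatiData queries and the AI SDK stream.
type Deps = {
  retrieveMemories: (query: string) => Promise<string[]>;
  streamAnswer: (
    context: string,
    onFinish: (text: string) => Promise<void>
  ) => Promise<string>;
  storeMemory: (entry: string) => Promise<void>;
};

export async function handleTurn(
  userMessage: string,
  deps: Deps
): Promise<string> {
  // 1. Retrieve relevant memories BEFORE streaming starts.
  const memories = await deps.retrieveMemories(userMessage);
  const context = memories.join("\n");

  // 2. Stream the answer; 3. persist the turn AFTER the stream completes,
  // so storage adds no latency to the user-visible response.
  return deps.streamAnswer(context, async (text) => {
    await deps.storeMemory(`User: ${userMessage}\nAssistant: ${text}`);
  });
}
```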
Step-by-Step Implementation

1. Set Up Next.js Project

Create a new Next.js project with the Vercel AI SDK and HatiData client.

Bash
npx create-next-app@latest chat-app --typescript --tailwind --app
cd chat-app
pnpm add ai @ai-sdk/openai pg
Expected Output
Creating a new Next.js app in /chat-app...
Success! Created chat-app

added 3 packages

Note: The pg package provides a PostgreSQL client that connects to HatiData's Postgres-compatible proxy on port 5439.
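Top-level `connect()`, as used in the next step, works in App Router route modules, but a memoized connect is safer under dev-server hot reloads: it guarantees exactly one connection attempt no matter how many requests race in. A sketch; `once` is an illustrative helper, not part of `pg` or HatiData.

```typescript
// Memoize an async initializer so concurrent callers share one attempt.
export function once<T>(init: () => Promise<T>): () => Promise<T> {
  let p: Promise<T> | null = null;
  return () => (p ??= init());
}

// Usage with pg (assumes HatiData's proxy on localhost:5439):
// const getHati = once(async () => {
//   const client = new Client({
//     host: "localhost", port: 5439, user: "admin", database: "main",
//   });
//   await client.connect();
//   return client;
// });
```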

2. Create the API Route Handler

Build a streaming route handler that retrieves memory context before generating responses.

TypeScript
// app/api/chat/route.ts
import { openai } from "@ai-sdk/openai";
import { streamText } from "ai";
import { Client } from "pg";

const hati = new Client({
  host: "localhost",
  port: 5439,
  user: "admin",
  database: "main",
});
await hati.connect();

export async function POST(req: Request) {
  const { messages } = await req.json();
  const lastMessage = messages[messages.length - 1].content;

  // Retrieve relevant memories
  const { rows } = await hati.query(
    `SELECT content FROM _hatidata_memory.memories
     WHERE namespace = 'chat-history'
       AND semantic_match(embedding, $1, 0.6)
     ORDER BY semantic_rank(embedding, $1) DESC
     LIMIT 3`,
    [lastMessage]
  );

  const context = rows.map((r: { content: string }) => r.content).join("\n");

  const result = streamText({
    model: openai("gpt-4o"),
    system: `You are a helpful assistant. Context from memory:\n${context}`,
    messages,
  });

  return result.toDataStreamResponse();
}
Expected Output
// API route created at app/api/chat/route.ts

Note: HatiData's proxy speaks Postgres wire protocol, so any pg client works. Memory retrieval happens before streaming starts.
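The query above returns up to three rows; a small helper can join and cap the retrieved context so an unusually long memory can never blow up the system prompt. A sketch only: the 2,000-character budget is an arbitrary illustrative choice, not a HatiData limit.

```typescript
// Join retrieved memory rows into a prompt context, most relevant first,
// stopping before an overall character budget is exceeded.
export function buildContext(
  rows: { content: string }[],
  maxChars = 2000
): string {
  const parts: string[] = [];
  let used = 0;
  for (const { content } of rows) {
    if (used + content.length > maxChars) break;
    parts.push(content);
    used += content.length + 1; // +1 for the joining newline
  }
  return parts.join("\n");
}
```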

3. Build the Chat Interface

Create a streaming chat UI component using the Vercel AI SDK useChat hook.

TypeScript
// app/page.tsx
"use client";
import { useChat } from "ai/react";

export default function Chat() {
  const { messages, input, handleInputChange, handleSubmit, isLoading } =
    useChat();

  return (
    <div className="max-w-2xl mx-auto p-4">
      <h1 className="text-2xl font-bold mb-4">Chat with Memory</h1>
      <div className="space-y-4 mb-4">
        {messages.map((m) => (
          <div
            key={m.id}
            className={`p-3 rounded ${
              m.role === "user" ? "bg-blue-900" : "bg-gray-800"
            }`}
          >
            <strong>{m.role === "user" ? "You" : "AI"}:</strong> {m.content}
          </div>
        ))}
      </div>
      <form onSubmit={handleSubmit} className="flex gap-2">
        <input
          value={input}
          onChange={handleInputChange}
          placeholder="Ask something..."
          className="flex-1 p-2 rounded bg-gray-800 text-white"
        />
        <button
          type="submit"
          disabled={isLoading}
          className="px-4 py-2 bg-amber-500 rounded"
        >
          Send
        </button>
      </form>
    </div>
  );
}
Expected Output
// Chat UI component created at app/page.tsx

Note: useChat() handles streaming automatically. Each token appears in real-time as the model generates it.
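useChat also accepts a `body` option whose fields are merged into every request, which is one way to scope memories per browser session instead of sharing the single global 'chat-history' namespace. A sketch; `memoryNamespace` and the `sessionId` field are illustrative assumptions, not part of this cookbook's schema.

```typescript
// Derive a per-session namespace for stored memories (illustrative).
export function memoryNamespace(sessionId: string): string {
  return `chat-history:${sessionId.trim().toLowerCase()}`;
}

// Client side (sketch): send the id with every chat request.
// const { messages, input, handleInputChange, handleSubmit } =
//   useChat({ body: { sessionId } });
//
// Server side (sketch): read sessionId from the request body and use
// memoryNamespace(sessionId) in place of the fixed 'chat-history'.
```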

4. Persist Conversations to Memory

Add a callback to store each conversation turn in HatiData for future context retrieval.

TypeScript
// Add to app/api/chat/route.ts — after streaming completes

export async function POST(req: Request) {
  const { messages } = await req.json();
  const lastMessage = messages[messages.length - 1].content;

  // ... memory retrieval from the previous step, which builds `context` ...

  const result = streamText({
    model: openai("gpt-4o"),
    system: `You are a helpful assistant. Context from memory:\n${context}`,
    messages,
    onFinish: async ({ text }) => {
      // Store the conversation turn as a memory
      await hati.query(
        `SELECT store_memory($1, 'chat-history')`,
        [`User: ${lastMessage}\nAssistant: ${text.slice(0, 500)}`]
      );
    },
  });

  return result.toDataStreamResponse();
}
Expected Output
// Memory persistence callback added to streaming handler

Note: onFinish fires after the stream completes, so memory storage does not add latency to the streaming response.
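Because onFinish runs after the response has already been sent, a storage failure there cannot be surfaced to the user, so it is worth formatting and isolating the write in one place. A sketch: `formatMemoryEntry` and the 500-character cap mirror the snippet above, while `persistSafely` is an illustrative wrapper, not part of the AI SDK or HatiData.

```typescript
// Build the stored memory entry, truncating the assistant reply so a very
// long answer doesn't dominate future semantic matches.
export function formatMemoryEntry(
  user: string,
  assistant: string,
  cap = 500
): string {
  return `User: ${user}\nAssistant: ${assistant.slice(0, cap)}`;
}

// In onFinish, swallow (but log) storage errors: the stream already
// succeeded, so failing the request now would help no one.
export async function persistSafely(
  store: (entry: string) => Promise<void>,
  entry: string
): Promise<void> {
  try {
    await store(entry);
  } catch (err) {
    console.error("memory persistence failed:", err);
  }
}
```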

5. Test the Full Flow

Run the dev server and test the chat interface with persistent memory.

Bash
# Start HatiData and the Next.js dev server
hati init  # if not already running
pnpm dev

# Test in browser at http://localhost:3000
# Conversation 1: "My name is Alice and I work on the data team"
# Close and reopen the browser
# Conversation 2: "What team do I work on?"
# The AI remembers: "You work on the data team, Alice"
Expected Output
  ▲ Next.js 15.1.0
  - Local: http://localhost:3000

# Browser test:
You: My name is Alice and I work on the data team
AI: Nice to meet you, Alice! How can I help the data team today?

# New session:
You: What team do I work on?
AI: You work on the data team, Alice.

Note: Memory persists across browser sessions because it is stored in HatiData, not in browser state or server RAM.
