# Quickstart
## Installation

```sh
npm install vitamem openai
```

## Step 1: Initialize Vitamem
The unified config API handles LLM and storage setup in a single call. No manual adapter boilerplate required.
```ts
import { createVitamem } from "vitamem";

const mem = await createVitamem({
  provider: "openai",
  apiKey: process.env.OPENAI_API_KEY!,
  storage: "ephemeral",
});
```

That’s it — three lines to a working memory system. Vitamem creates the OpenAI adapter internally, using `gpt-5.4-mini` for chat and `text-embedding-3-small` for embeddings.
## Step 2: Start a conversation
```ts
// Create a thread for a user
const thread = await mem.createThread({ userId: "user-123" });
console.log(thread.state); // 'active'
```
```ts
// Chat -- messages are stored, LLM responds naturally
const { reply } = await mem.chat({
  threadId: thread.id,
  message: "I prefer dark mode, use TypeScript, and deploy on Vercel.",
});
console.log(reply); // AI response
```
```ts
// Continue the conversation
await mem.chat({
  threadId: thread.id,
  message: "I'm also working on a React Native mobile app right now.",
});
```

## Step 3: Extract memories when the session ends
When the user’s session goes quiet, trigger the dormant transition to extract and embed memories:
```ts
// Transitions thread to dormant and runs the embedding pipeline:
// 1. Extracts facts from messages via LLM
// 2. Embeds each fact as a vector
// 3. Deduplicates against existing memories
// 4. Saves new unique memories
await mem.triggerDormantTransition(thread.id);
```
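Deciding when a session has "gone quiet" is left to your application (an assumption on my part; check your session infrastructure for a built-in signal). A minimal sketch of a per-thread inactivity timer whose callback would invoke `mem.triggerDormantTransition(threadId)` — the `makeQuietTimer` helper is illustrative, not part of Vitamem:

```typescript
// Per-thread inactivity scheduler: call touch() on every incoming
// message; onQuiet fires once a thread has been silent for quietMs.
function makeQuietTimer(onQuiet: (threadId: string) => void, quietMs: number) {
  const timers = new Map<string, ReturnType<typeof setTimeout>>();
  return {
    touch(threadId: string) {
      clearTimeout(timers.get(threadId)); // reset the countdown
      timers.set(threadId, setTimeout(() => onQuiet(threadId), quietMs));
    },
  };
}

// Usage (hypothetical): create one timer for the app --
//   const quiet = makeQuietTimer((id) => mem.triggerDormantTransition(id), 15 * 60 * 1000);
// then call quiet.touch(thread.id) after every mem.chat() call.
```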
```ts
const dormantThread = await mem.getThread(thread.id);
console.log(dormantThread.state); // 'dormant'
```

## Step 4: Retrieve memories for the next session
When the user returns, retrieve relevant memories to enrich the conversation:
```ts
const memories = await mem.retrieve({
  userId: "user-123",
  query: "tools and preferences",
  limit: 5,
});
```
```ts
for (const m of memories) {
  console.log(`${m.content} (${m.source}, score: ${m.score.toFixed(3)})`);
}
// "Prefers TypeScript over JavaScript" (confirmed, score: 0.962)
// "Deploys on Vercel" (confirmed, score: 0.941)
// "Working on a React Native mobile app" (confirmed, score: 0.887)
```

You can now inject these memories into the system prompt of the new conversation so the AI responds with full context.
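One way to do that injection is a small formatting helper. Everything here other than the retrieved fields (`content`, `source`, `score`) is illustrative, not part of Vitamem's API:

```typescript
// Shape of a retrieved memory, matching the fields used above.
interface Memory {
  content: string;
  source: string;
  score: number;
}

// Render memories as bullet points inside a system prompt.
function buildSystemPrompt(memories: Memory[]): string {
  return [
    "You are a helpful assistant.",
    "Known facts about this user from previous sessions:",
    ...memories.map((m) => `- ${m.content}`),
  ].join("\n");
}

const prompt = buildSystemPrompt([
  { content: "Prefers TypeScript over JavaScript", source: "confirmed", score: 0.962 },
  { content: "Deploys on Vercel", source: "confirmed", score: 0.941 },
]);
```

You would then pass `prompt` as the system message when opening the returning user's next conversation.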
## Step 5: Or use `autoRetrieve`
Instead of manually retrieving and injecting memories, enable `autoRetrieve` and Vitamem handles it for you:
```ts
const mem = await createVitamem({
  provider: "openai",
  apiKey: process.env.OPENAI_API_KEY!,
  storage: "ephemeral",
  autoRetrieve: true,
});
```
```ts
// Memories from previous sessions are automatically injected
// into the system prompt on every chat() call.
// newThread: a fresh thread created for the returning user
const { reply, memories } = await mem.chat({
  threadId: newThread.id,
  message: "What tools do I use?",
});
```
```ts
console.log(memories);
// [{ content: "Prefers TypeScript over JavaScript", source: "confirmed", score: 0.95 }]
```

## Step 6: Stream responses
For real-time UIs, use `chatStream()` to receive tokens as they are generated:
```ts
const { stream, thread: streamThread } = await mem.chatStream({
  threadId: thread.id,
  message: "What were my preferences again?",
});
```
```ts
for await (const chunk of stream) {
  process.stdout.write(chunk);
}
```

See Streaming Output for SSE integration and more patterns.
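For the SSE case, the stream is just an async iterable of text chunks, so forwarding it to a browser takes only a small adapter. `pipeToSSE` and the response shape are illustrative, not a Vitamem API; a real route handler would also set the `Content-Type: text/event-stream` header first:

```typescript
// Forward an async iterable of text chunks (like `stream` above) to an
// HTTP response as Server-Sent Events: each chunk becomes one
// "data: ..." frame, terminated by a blank line.
async function pipeToSSE(
  res: { write(s: string): unknown; end(): unknown },
  chunks: AsyncIterable<string>,
): Promise<void> {
  for await (const chunk of chunks) {
    res.write(`data: ${JSON.stringify(chunk)}\n\n`);
  }
  res.write("data: [DONE]\n\n"); // conventional end-of-stream sentinel
  res.end();
}
```

In an Express or Node `http` handler you would call `pipeToSSE(res, stream)` after setting the SSE headers.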
## Production: Use Supabase
For production, swap `"ephemeral"` for `"supabase"`:
```ts
const mem = await createVitamem({
  provider: "openai",
  apiKey: process.env.OPENAI_API_KEY!,
  storage: "supabase",
  supabaseUrl: process.env.SUPABASE_URL!,
  supabaseKey: process.env.SUPABASE_KEY!,
});
```

See the Supabase setup for the required SQL schema.
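Since only the storage fields differ between development and production, one pattern (illustrative; field names taken from the two configs above) is to branch on the environment:

```typescript
// Choose the storage backend from the environment; the Supabase
// fields are only needed in production.
const isProd = process.env.NODE_ENV === "production";

const storageConfig = isProd
  ? {
      storage: "supabase" as const,
      supabaseUrl: process.env.SUPABASE_URL!,
      supabaseKey: process.env.SUPABASE_KEY!,
    }
  : { storage: "ephemeral" as const };

// Then spread it into the call:
//   const mem = await createVitamem({ provider: "openai", apiKey, ...storageConfig });
```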
## Optional: Enable Phase 1 Features
Vitamem includes several optional features that improve memory quality and retrieval. Add any of these to your config:
```ts
const mem = await createVitamem({
  provider: "openai",
  apiKey: process.env.OPENAI_API_KEY!,
  storage: "ephemeral",
  enableReflection: true, // Validate extracted facts with a second LLM pass
  forgetting: {
    forgettingHalfLifeMs: 90 * 86400000, // Older unretrieved memories decay
    minRetrievalScore: 0.1,
  },
  prioritySignaling: true, // Add [CRITICAL]/[IMPORTANT]/[INFO] markers to context
});
```

These are all optional and independent — enable whichever ones fit your use case. See Reflection, Active Forgetting, and Memory Formatting for details.
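Vitamem's exact decay function isn't shown here, so treat this as a mental model only: a 90-day half-life means a memory's effective retrieval score halves for every 90 days since it was last retrieved, and memories falling below `minRetrievalScore` stop being returned. A sketch, where `decayedScore` is an illustrative helper:

```typescript
// Exponential half-life decay: the score halves every halfLifeMs of inactivity.
function decayedScore(
  baseScore: number,
  lastRetrievedAt: number,
  now: number,
  halfLifeMs: number,
): number {
  return baseScore * Math.pow(0.5, (now - lastRetrievedAt) / halfLifeMs);
}

const halfLife = 90 * 86400000; // 90 days, as in the config above
const now = Date.now();
console.log(decayedScore(0.9, now - halfLife, now, halfLife)); // 0.45: one half-life elapsed
```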
## Next Steps
- Installation — provider setup, peer dependencies, and TypeScript config
- Thread Lifecycle — understand how state transitions work
- Health Companion Guide — health-specific profiles, rules, and auto-pinning
- API Reference — full function and type documentation