# Migration Guide

## Migrating from Mem0

### Conceptual mapping

| Mem0 concept | Cognitive Memory equivalent |
|---|---|
| `MemoryClient` | `CognitiveMemory` / `SyncCognitiveMemory` |
| `client.add()` | `mem.add()` or `mem.extract_and_store()` |
| `client.search()` | `mem.search()` |
| `client.update()` | Handled automatically by conflict detection |
| `client.delete()` | `await adapter.delete(memory_id)` |
| Memory categories | `MemoryCategory` enum (episodic, semantic, procedural, core) |
| No decay | Ebbinghaus decay with floors |
### Step-by-step

#### 1. Replace the client

```python
# Before (Mem0)
from mem0 import Memory

m = Memory()
m.add("User likes coffee", user_id="user-1")
results = m.search("coffee", user_id="user-1")
```

```python
# After (Cognitive Memory)
from cognitive_memory import SyncCognitiveMemory

mem = SyncCognitiveMemory()
mem.add("User likes coffee", category="semantic", importance=0.5)
results = mem.search("coffee")
```

#### 2. Replace conversation ingestion
```python
# Before (Mem0)
m.add(conversation_text, user_id="user-1")
```

```python
# After (Cognitive Memory)
mem.extract_and_store(conversation_text, session_id="s1")
```

Cognitive Memory's extraction is more thorough: it extracts every discrete fact rather than summarizing. You may get more memories per conversation; this is intentional.
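To make "every discrete fact" concrete, here is a rough illustration of the kind of output extraction produces. The example conversation and the exact facts below are assumptions for the sketch, not actual extractor output.

```python
# Illustrative only: the extracted facts are assumptions about what the
# LLM extractor would produce, not real library output.
conversation_text = (
    "User: I moved to Lisbon last month. I work remotely as a data "
    "engineer and I'm trying to cut down on caffeine."
)

# A summarizer might collapse this into one blended memory; extraction
# instead stores each discrete fact as its own memory, roughly like:
extracted = [
    "User moved to Lisbon last month",
    "User works remotely as a data engineer",
    "User is trying to cut down on caffeine",
]
```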
#### 3. Handle search results

```python
# Before (Mem0)
results = m.search("coffee", user_id="user-1")
for r in results:
    print(r["memory"])
```

```python
# After (Cognitive Memory)
results = mem.search("coffee")
for r in results:
    print(r.memory.content)
    print(f"  score={r.combined_score:.3f}")
    print(f"  retention={r.retention_score:.3f}")
```

#### 4. Data migration
To migrate existing Mem0 data into Cognitive Memory:

```python
from cognitive_memory import SyncCognitiveMemory, MemoryCategory
from datetime import datetime

mem = SyncCognitiveMemory()

# Export from Mem0
mem0_memories = m.get_all(user_id="user-1")

# Import into Cognitive Memory
for m0 in mem0_memories:
    mem.add(
        content=m0["memory"],
        category=MemoryCategory.SEMANTIC,  # default; adjust as needed
        importance=0.5,
        timestamp=datetime.fromisoformat(m0.get("created_at", datetime.now().isoformat())),
    )
```

### What you gain
- Temporal decay: old memories naturally deprioritize
- Multi-hop: associative linking connects related facts
- Core protection: critical facts get a 0.60 retention floor
- Deep recall: consolidated details remain accessible
### What changes

- Search results include retention scores; you may want to filter by `combined_score` instead of raw similarity
- Memories decay over time, so you'll see fewer old results (this is a feature)
- Conflict detection automatically handles updates and contradictions
## Migrating from FadeMem

### Conceptual mapping

| FadeMem concept | Cognitive Memory equivalent |
|---|---|
| Forgetting curve | Ebbinghaus decay with `R(m) = max(floor, exp(-dt / (S * B * beta_c)))` |
| Decay-weighted retrieval | `score = sim * R^alpha` with configurable `alpha` |
| No strengthening | Two-tier retrieval boosting |
| No associations | Synaptic tagging + co-retrieval strengthening |
| No consolidation | LLM compression of fading clusters |
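The forgetting-curve row above can be turned into a runnable sketch. The formulas are taken verbatim from the table; the parameter values (stability `S`, boost `B`, category multiplier `beta_c`, and the floor) are illustrative assumptions, not library defaults.

```python
import math

def retention(dt_days: float, S: float = 7.0, B: float = 1.0,
              beta_c: float = 1.0, floor: float = 0.05) -> float:
    """R(m) = max(floor, exp(-dt / (S * B * beta_c))), as in the table above."""
    return max(floor, math.exp(-dt_days / (S * B * beta_c)))

def score(sim: float, R: float, alpha: float = 1.0) -> float:
    """Decay-weighted retrieval: score = sim * R^alpha."""
    return sim * (R ** alpha)

fresh = retention(dt_days=0.0)    # exp(0) = 1.0: no decay yet
stale = retention(dt_days=90.0)   # deep into the curve: clamped at the floor
```

The floor is what keeps old memories retrievable at all: without it, `exp(-dt/...)` drives retention toward zero and the memory effectively vanishes from ranking.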
### Key differences

FadeMem applies decay to retrieval scoring but doesn't strengthen memories on retrieval or create associations. To migrate:

- Replace your FadeMem memory store with a Cognitive Memory instance
- Re-ingest your conversation history using `extract_and_store()`
- The decay, strengthening, and association mechanisms activate automatically

You'll see immediate improvement on multi-hop queries due to the association graph.
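The association mechanism FadeMem lacks can be pictured with a toy graph: memories retrieved together get their shared edge strengthened, which is what later enables multi-hop lookups. The data structure and the fixed increment below are illustrative assumptions, not the library's internals.

```python
from collections import defaultdict
from itertools import combinations

# Edge weight between each unordered pair of memory ids.
edges: defaultdict = defaultdict(float)

def strengthen(retrieved_ids: list[str], increment: float = 0.1) -> None:
    """Co-retrieval strengthening: every pair retrieved together gets a boost."""
    for a, b in combinations(sorted(retrieved_ids), 2):
        edges[(a, b)] += increment

strengthen(["m1", "m2"])          # m1-m2 edge appears
strengthen(["m1", "m2", "m3"])    # m1-m2 strengthened; m1-m3, m2-m3 appear
```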
## Migrating from naive RAG

### What to change

| RAG component | Cognitive Memory replacement |
|---|---|
| Text splitter / chunker | `extract_and_store()` (LLM extraction) |
| Vector database (Pinecone, Weaviate, etc.) | Cognitive Memory adapter (Postgres, etc.) |
| Similarity search | `mem.search()` with decay-weighted scoring |
| Metadata filters | Memory categories + importance thresholds |
### Step-by-step

#### 1. Replace chunking with extraction

```python
# Before (RAG)
chunks = text_splitter.split(conversation)
embeddings = embed_batch(chunks)
vector_db.upsert(chunks, embeddings)
```

```python
# After (Cognitive Memory)
mem.extract_and_store(conversation, session_id="s1")
```

This is the biggest change. Instead of blindly chunking text, Cognitive Memory uses an LLM to extract discrete facts. Each memory is a single, precise fact rather than a text fragment.
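A quick way to see the difference: a fixed-width splitter can cut a fact across a chunk boundary, so that no single chunk embeds the complete statement. The splitter and chunk size below are a deliberately naive sketch, not any particular library's splitter.

```python
def split_fixed(text: str, size: int) -> list[str]:
    """Naive fixed-width chunker, the kind extraction replaces."""
    return [text[i:i + size] for i in range(0, len(text), size)]

conversation = "User said they are allergic to peanuts and love hiking."
chunks = split_fixed(conversation, 30)

# "allergic to peanuts" straddles the boundary: neither chunk carries the
# complete fact, so neither embedding represents it well. LLM extraction
# would instead store the fact whole, e.g. "User is allergic to peanuts."
```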
#### 2. Replace similarity search

```python
# Before (RAG)
results = vector_db.query(embed(query), top_k=5)
context = "\n".join(r.text for r in results)
```

```python
# After (Cognitive Memory)
results = mem.search(query, top_k=10)
context = "\n".join(r.memory.content for r in results)
```

#### 3. Remove manual metadata management
RAG systems often use metadata for filtering (timestamps, categories, importance). Cognitive Memory handles all of this automatically through its decay model, categories, and importance scoring.
### What you gain

- No more chunk boundary problems
- Automatic importance scoring and decay
- Multi-hop reasoning through associations
- Conflict detection for contradictory information
- Consolidation of fading information
### Migration tips

- Start with the InMemory adapter for testing, then switch to Postgres for production
- Use `embedder="hash"` for fast offline testing during migration
- Set `run_maintenance_during_ingestion=False` for bulk re-ingestion, then call `tick()` once at the end
- Compare retrieval quality by running the same queries against both systems during the transition period