# Migration Guide

## Migrating from Mem0

| Mem0 concept | Cognitive Memory equivalent |
| --- | --- |
| `MemoryClient` | `CognitiveMemory` / `SyncCognitiveMemory` |
| `client.add()` | `mem.add()` or `mem.extract_and_store()` |
| `client.search()` | `mem.search()` |
| `client.update()` | Handled automatically by conflict detection |
| `client.delete()` | `await adapter.delete(memory_id)` |
| Memory categories | `MemoryCategory` enum (episodic, semantic, procedural, core) |
| No decay | Ebbinghaus decay with floors |
```python
# Before (Mem0)
from mem0 import Memory

m = Memory()
m.add("User likes coffee", user_id="user-1")
results = m.search("coffee", user_id="user-1")
```

```python
# After (Cognitive Memory)
from cognitive_memory import SyncCognitiveMemory

mem = SyncCognitiveMemory()
mem.add("User likes coffee", category="semantic", importance=0.5)
results = mem.search("coffee")
```
```python
# Before (Mem0)
m.add(conversation_text, user_id="user-1")
```

```python
# After (Cognitive Memory)
mem.extract_and_store(conversation_text, session_id="s1")
```

Cognitive Memory’s extraction is more thorough — it extracts every discrete fact rather than summarizing. You may get more memories per conversation, which is intentional.

```python
# Before (Mem0)
results = m.search("coffee", user_id="user-1")
for r in results:
    print(r["memory"])
```

```python
# After (Cognitive Memory)
results = mem.search("coffee")
for r in results:
    print(r.memory.content)
    print(f"  score={r.combined_score:.3f}")
    print(f"  retention={r.retention_score:.3f}")
```

To migrate existing Mem0 data into Cognitive Memory:

```python
from cognitive_memory import SyncCognitiveMemory, MemoryCategory
from datetime import datetime

mem = SyncCognitiveMemory()

# Export from Mem0
mem0_memories = m.get_all(user_id="user-1")

# Import into Cognitive Memory
for m0 in mem0_memories:
    mem.add(
        content=m0["memory"],
        category=MemoryCategory.SEMANTIC,  # default; adjust as needed
        importance=0.5,
        timestamp=datetime.fromisoformat(m0.get("created_at", datetime.now().isoformat())),
    )
```
What you gain:

- Temporal decay: old memories naturally deprioritize
- Multi-hop: associative linking connects related facts
- Core protection: critical facts get a 0.60 retention floor
- Deep recall: consolidated details remain accessible

What changes:

- Search results include retention scores; consider filtering by `combined_score` instead of raw similarity
- Memories decay over time, so you will see fewer old results (this is a feature)
- Conflict detection automatically handles updates and contradictions
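Filtering by `combined_score` happens client-side on the search results. A minimal sketch, using plain `(content, combined_score)` tuples as stand-ins for the result objects (the 0.4 cutoff is an arbitrary illustration, not a library default):

```python
# Stand-in search results; the real objects expose
# r.memory.content and r.combined_score.
results = [
    ("User likes coffee", 0.82),
    ("User visited Paris in 2019", 0.35),
    ("User prefers tea in the evening", 0.55),
]

MIN_SCORE = 0.4  # arbitrary cutoff for illustration

# Keep only results whose decay-weighted score clears the threshold.
strong = [content for content, score in results if score >= MIN_SCORE]
print(strong)
```

Because `combined_score` already folds retention into similarity, a single threshold drops both weak matches and heavily decayed memories at once.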

## Migrating from FadeMem

| FadeMem concept | Cognitive Memory equivalent |
| --- | --- |
| Forgetting curve | Ebbinghaus decay with `R(m) = max(floor, exp(-dt / (S * B * beta_c)))` |
| Decay-weighted retrieval | `score = sim * R^alpha` with configurable `alpha` |
| No strengthening | Two-tier retrieval boosting |
| No associations | Synaptic tagging + co-retrieval strengthening |
| No consolidation | LLM compression of fading clusters |
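The retention and scoring formulas above can be sketched in plain Python. The values of `S`, `B`, `beta_c`, `alpha`, and the 0.05 floor below are illustrative placeholders, not library defaults:

```python
import math

def retention(dt_hours, S=24.0, B=1.0, beta_c=1.0, floor=0.05):
    """Ebbinghaus decay with a floor: R(m) = max(floor, exp(-dt / (S*B*beta_c)))."""
    return max(floor, math.exp(-dt_hours / (S * B * beta_c)))

def decay_weighted_score(sim, dt_hours, alpha=1.0, **kw):
    """Decay-weighted retrieval: score = sim * R^alpha."""
    return sim * retention(dt_hours, **kw) ** alpha

fresh = decay_weighted_score(0.9, dt_hours=1)    # recent memory
stale = decay_weighted_score(0.9, dt_hours=240)  # ten-day-old memory
assert fresh > stale              # older memories deprioritize
assert retention(10_000) == 0.05  # the floor stops decay at a minimum
```

The floor is what distinguishes this from a pure forgetting curve: no memory's retention ever reaches zero, and core memories get a higher floor (0.60) than the rest.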

FadeMem applies decay to retrieval scoring but doesn’t strengthen memories on retrieval or create associations. To migrate:

  1. Replace your FadeMem memory store with a Cognitive Memory instance
  2. Re-ingest your conversation history using extract_and_store()
  3. The decay, strengthening, and association mechanisms activate automatically

You’ll see immediate improvement on multi-hop queries due to the association graph.
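The multi-hop gain comes from following association links at retrieval time: a query that matches one memory also surfaces the facts linked to it. A toy one-hop expansion over a hypothetical link table (illustrative only, not the library's internals):

```python
# Hypothetical association graph: memory id -> linked memory ids.
associations = {
    "m1": ["m2"],  # "likes coffee" -> "buys beans at Blue Bottle"
    "m2": ["m3"],  # -> "Blue Bottle is near the office"
    "m3": [],
}

def expand_one_hop(hits):
    """Add the direct associates of each hit, preserving order, no duplicates."""
    seen, out = set(), []
    for mid in hits:
        for m in [mid, *associations.get(mid, [])]:
            if m not in seen:
                seen.add(m)
                out.append(m)
    return out

# A query matching only m1 also surfaces the linked fact m2:
assert expand_one_hop(["m1"]) == ["m1", "m2"]
```

FadeMem has no such link table, so a query like "where does the user buy coffee?" can only succeed if a single memory happens to match it directly.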


## Migrating from RAG

| RAG component | Cognitive Memory replacement |
| --- | --- |
| Text splitter / chunker | `extract_and_store()` (LLM extraction) |
| Vector database (Pinecone, Weaviate, etc.) | Cognitive Memory adapter (Postgres, etc.) |
| Similarity search | `mem.search()` with decay-weighted scoring |
| Metadata filters | Memory categories + importance thresholds |
```python
# Before (RAG)
chunks = text_splitter.split(conversation)
embeddings = embed_batch(chunks)
vector_db.upsert(chunks, embeddings)
```

```python
# After (Cognitive Memory)
mem.extract_and_store(conversation, session_id="s1")
```

This is the biggest change. Instead of blindly chunking text, Cognitive Memory uses an LLM to extract discrete facts. Each memory is a single, precise fact rather than a text fragment.

```python
# Before (RAG)
results = vector_db.query(embed(query), top_k=5)
context = "\n".join(r.text for r in results)
```

```python
# After (Cognitive Memory)
results = mem.search(query, top_k=10)
context = "\n".join(r.memory.content for r in results)
```

RAG systems often use metadata for filtering (timestamps, categories, importance). Cognitive Memory handles all of this automatically through its decay model, categories, and importance scoring.

What you gain:

- No more chunk boundary problems
- Automatic importance scoring and decay
- Multi-hop reasoning through associations
- Conflict detection for contradictory information
- Consolidation of fading information

Migration tips:

- Start with the InMemory adapter for testing, then switch to Postgres for production
- Use `embedder="hash"` for fast offline testing during migration
- Set `run_maintenance_during_ingestion=False` for bulk re-ingestion, then call `tick()` once at the end
- Compare retrieval quality by running the same queries against both systems during the transition period
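The bulk re-ingestion tip amounts to "defer maintenance until the batch is done." A toy stand-in class makes the call pattern concrete (`run_maintenance_during_ingestion` and `tick()` are names from this guide; the class itself is illustrative, not the real library):

```python
class BulkIngestDemo:
    """Toy stand-in showing why maintenance is deferred during bulk loads."""

    def __init__(self, run_maintenance_during_ingestion=True):
        self.run_maintenance_during_ingestion = run_maintenance_during_ingestion
        self.memories = []
        self.maintenance_runs = 0

    def add(self, content):
        self.memories.append(content)
        if self.run_maintenance_during_ingestion:
            self.tick()  # a decay/consolidation pass per insert: slow in bulk

    def tick(self):
        self.maintenance_runs += 1  # one decay + consolidation pass

mem = BulkIngestDemo(run_maintenance_during_ingestion=False)
for text in ["fact one", "fact two", "fact three"]:
    mem.add(text)
mem.tick()  # single maintenance pass after the whole batch
assert mem.maintenance_runs == 1
```

With the default setting, the same three inserts would trigger three maintenance passes; for a large Mem0 or RAG export, that overhead dominates the migration time.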