47.1% multi-hop accuracy — 66% ahead of Mem0

Cognitive Memory

Human-like memory for AI agents. Your agent remembers what matters, forgets what doesn't, and strengthens memories through use — just like a brain.
pip install cognitive-memory

Most memory systems treat memories as static documents in a vector database. Search, retrieve, done. But human memory is nothing like that. Memories fade over time, strengthen with repeated recall, form associative links, and consolidate into denser representations as they age.

Cognitive Memory brings these dynamics to AI agents. The result: agents that naturally prioritize recent and frequently-accessed information, surface associated context through spreading activation, and gracefully forget irrelevant details instead of drowning in noise.

Pipeline: Conversation → Extraction → Embedding → Storage → Decay → Retrieval

Ebbinghaus Decay

Memories fade on a forgetting curve. Episodic memories (events) decay in ~30 days. Semantic memories (facts) last ~90 days. Procedural memories (skills) never decay. Core memories have a 0.60 retention floor — they dim but never disappear.
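A minimal sketch of what such a forgetting curve could look like, assuming simple exponential decay R(t) = exp(-t / stability) with the timescales and floor quoted above; the library's actual formula and constants may differ.

```python
import math

# Illustrative decay timescales from the description above (not the
# library's internals). Procedural memories never decay.
STABILITY_DAYS = {"episodic": 30, "semantic": 90}
CORE_FLOOR = 0.60

def retention(memory_type: str, days_since_encoding: float, is_core: bool = False) -> float:
    """Ebbinghaus-style retention in [0, 1] as a function of age."""
    if memory_type == "procedural":
        return 1.0
    r = math.exp(-days_since_encoding / STABILITY_DAYS[memory_type])
    if is_core:
        r = max(r, CORE_FLOOR)  # core memories dim but never disappear
    return r

print(round(retention("episodic", 30), 3))        # e^-1 ≈ 0.368
print(retention("semantic", 365, is_core=True))   # clamped to the 0.60 floor
```

The core-memory floor is just a clamp: retention follows the same curve until it hits 0.60 and then stays there.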

Retrieval Strengthening

Every time a memory is retrieved, its stability increases. Spaced repetition matters: retrieving after a longer gap produces a bigger stability boost. Memories that are accessed frequently become harder to forget.
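One way to model that spacing effect is a stability boost that grows with the gap since the last retrieval. The function name and constants below are illustrative assumptions, not the library's implementation.

```python
import math

def strengthen(stability_days: float, days_since_last_retrieval: float) -> float:
    """Return the new stability after a retrieval event.

    Longer gaps give a bigger boost (spaced repetition), saturating
    logarithmically so rapid re-access doesn't inflate stability.
    """
    boost = 1.0 + 0.2 * math.log1p(days_since_last_retrieval)
    return stability_days * boost

s = 30.0
s = strengthen(s, 1)    # quick re-access: small boost
s = strengthen(s, 14)   # spaced recall: larger boost
print(round(s, 1))
```

Because stability feeds back into the decay curve, frequently retrieved memories fall off more slowly, which is exactly the "harder to forget" behaviour described above.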

Associative Linking

Memories encoded together form synaptic tags. When one is recalled, associated memories are activated and returned alongside it. Links strengthen through co-retrieval and decay without use.
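A toy version of that link graph, assuming co-retrieval simply increments a weighted edge between two memory IDs (the names and weighting are illustrative, not the library's API):

```python
from collections import defaultdict

# memory id -> {neighbour id -> link strength}
links: dict = defaultdict(lambda: defaultdict(float))

def co_retrieve(a: str, b: str) -> None:
    """Strengthen the synaptic tag between two memories recalled together."""
    links[a][b] += 1.0
    links[b][a] += 1.0

def associated(memory_id: str, top_k: int = 3) -> list:
    """Spreading activation: return the strongest-linked neighbours."""
    neighbours = links[memory_id]
    return sorted(neighbours, key=neighbours.get, reverse=True)[:top_k]

co_retrieve("allergy:peanuts", "city:portland")
co_retrieve("allergy:peanuts", "city:portland")
co_retrieve("allergy:peanuts", "job:designer")
print(associated("allergy:peanuts"))  # strongest link first
```

Decay of unused links would be the mirror image: periodically multiply every edge weight by a factor below 1 and prune edges that fall under a threshold.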

Tiered Storage

Hot memories live in vector search. Cold memories are accessible by ID only. Stubs are archived summaries. Memories migrate between tiers based on their retention scores.
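Tier migration can be pictured as a threshold function over the retention score; the cutoffs below are illustrative assumptions, not the library's actual values.

```python
def tier_for(retention_score: float) -> str:
    """Map a retention score to a storage tier (illustrative thresholds)."""
    if retention_score >= 0.5:
        return "hot"    # indexed for vector search
    if retention_score >= 0.1:
        return "cold"   # accessible by ID only
    return "stub"       # archived summary

for score in (0.9, 0.3, 0.05):
    print(score, "->", tier_for(score))
```

Because retention changes with decay and retrieval, a memory can migrate down to cold storage and later climb back to hot if it starts being recalled again.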

Figure: retention (1.0 → 0) over 0–180 days. Episodic (45d) and semantic (120d) curves decay toward zero; core memories hold the 0.60 retention floor.
from cognitive_memory import SyncCognitiveMemory

mem = SyncCognitiveMemory()

# Ingest from conversation
mem.extract_and_store(
    "User: I'm allergic to peanuts and I just moved to Portland.",
    session_id="s1",
)

# Search with decay-weighted scoring
results = mem.search("what is the user allergic to?")
for r in results:
    print(f"{r.memory.content} (score={r.combined_score:.3f})")
Figure: three-stage retrieval pipeline. Stage 1: embedding recall (top_k × 3 candidates). Stage 2: temporal scoring (score = sim × R^0.3). Stage 3: LLM re-ranking (gpt-4o-mini scores 1–10).
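The staged scoring can be sketched as follows, with a precomputed similarity dict standing in for embedding recall and the LLM re-ranker stubbed out; stage names follow the pipeline description, the internals are assumptions.

```python
def retrieve(similarities: dict, retentions: dict, top_k: int = 2) -> list:
    """Three-stage retrieval sketch over {memory_id: value} dicts."""
    # Stage 1: embedding recall pulls top_k x 3 candidates by similarity.
    candidates = sorted(similarities, key=similarities.get, reverse=True)[: top_k * 3]
    # Stage 2: temporal scoring, score = sim * R^0.3, so faded memories
    # are down-weighted but not erased.
    scored = {m: similarities[m] * retentions[m] ** 0.3 for m in candidates}
    # Stage 3: an LLM re-ranker (e.g. gpt-4o-mini scoring 1-10) would
    # reorder the survivors; here we simply take the top_k by stage-2 score.
    return sorted(scored, key=scored.get, reverse=True)[:top_k]

sims = {"m1": 0.9, "m2": 0.85, "m3": 0.4}
rets = {"m1": 0.05, "m2": 0.9, "m3": 0.9}
# The stale memory m1 falls behind the fresher, less similar m3.
print(retrieve(sims, rets))  # ['m2', 'm3']
```

The mild exponent (R^0.3) is what keeps temporal scoring a tiebreaker rather than a hard filter: a very similar but faded memory is demoted, not discarded, before the re-ranker sees it.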

On the LoCoMo long-conversation benchmark (10 conversations, 1,540 questions), Cognitive Memory reaches 47.1% multi-hop accuracy.

Multi-hop questions require reasoning across multiple stored facts. This is where decay-weighted retrieval and associative linking shine — our multi-hop score beats Mem0 by 66% and FadeMem by 60%.

Python SDK

Full async API with sync wrapper. OpenAI embeddings out of the box, or bring your own. Python quickstart

TypeScript SDK

First-class TypeScript with adapters for Convex, Postgres, JSONL, and more. TypeScript quickstart

Pluggable Adapters

InMemory for testing, SQLite for local dev, Postgres with pgvector for production, Redis for ephemeral, Convex for serverless. Adapter overview

Concepts Deep-Dive

Understand the science: Ebbinghaus curves, synaptic tagging, memory consolidation, and more. Concepts overview