InMemory Adapter

The InMemory adapter stores all memories in Python dicts or JavaScript Maps. It requires zero configuration, has no external dependencies, and provides the fastest possible read/write performance.

Data is lost when the process exits. This is intentional — use it for testing, prototyping, and single-session scripts.

from cognitive_memory import CognitiveMemory
from cognitive_memory.adapters.memory import InMemoryAdapter
# Default — InMemoryAdapter is used automatically
mem = CognitiveMemory()
# Explicit
mem = CognitiveMemory(adapter=InMemoryAdapter())

The Python adapter uses three dicts for tiered storage:

class InMemoryAdapter(MemoryAdapter):
    def __init__(self):
        self.hot: dict[str, Memory] = {}    # vector-searchable
        self.cold: dict[str, Memory] = {}   # ID-only access
        self.stubs: dict[str, Memory] = {}  # archived summaries

Vector search is brute-force cosine similarity over all hot memories. This is O(n) per query but fast enough for collections up to ~50,000 memories.
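Conceptually, that brute-force search looks like the following standalone sketch. Note the simplifications: `hot` here is a plain dict of ID → embedding rather than full Memory objects, and `cosine` / `vector_search` are illustrative names, not the adapter's actual API.

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    # Cosine similarity between two dense vectors; 0.0 if either is a zero vector.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def vector_search(hot: dict[str, list[float]], query: list[float], k: int = 5) -> list[str]:
    # Score every hot memory against the query, then keep the top k — O(n) per query.
    scored = sorted(
        ((cosine(vec, query), mem_id) for mem_id, vec in hot.items()),
        reverse=True,
    )
    return [mem_id for _, mem_id in scored[:k]]
```

The linear scan is the whole design: no index to build or keep consistent, at the cost of scaling linearly with the hot-tier size.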

Tier migration is handled by three adapter methods:

  • migrate_to_cold(): Moves a memory from hot to cold and sets is_cold=True
  • migrate_to_hot(): Moves a memory from cold back to hot and resets its cold flags
  • convert_to_stub(): Creates a lightweight stub in stubs and removes the memory from hot/cold
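A minimal stand-in for that tier-migration logic might look like this. The `Memory` fields and `TieredStore` class are hypothetical simplifications; the real adapter's Memory objects carry more state.

```python
from dataclasses import dataclass

@dataclass
class Memory:
    id: str
    content: str
    is_cold: bool = False

class TieredStore:
    """Hypothetical stand-in mirroring the adapter's three-dict layout."""

    def __init__(self):
        self.hot: dict[str, Memory] = {}
        self.cold: dict[str, Memory] = {}
        self.stubs: dict[str, Memory] = {}

    def migrate_to_cold(self, memory_id: str) -> None:
        mem = self.hot.pop(memory_id)  # O(1) dict move
        mem.is_cold = True
        self.cold[memory_id] = mem

    def migrate_to_hot(self, memory_id: str) -> None:
        mem = self.cold.pop(memory_id)
        mem.is_cold = False
        self.hot[memory_id] = mem

    def convert_to_stub(self, memory_id: str, summary: str) -> None:
        # Drop the full memory from whichever tier holds it; keep only a summary stub.
        mem = self.hot.pop(memory_id, None) or self.cold.pop(memory_id, None)
        if mem is not None:
            self.stubs[memory_id] = Memory(id=memory_id, content=summary)
```

Because every operation is a dict insert, pop, or lookup, all three migrations stay O(1).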

Association links are stored directly on Memory.associations dicts. The adapter delegates link management to the engine, which modifies Memory objects in-place.
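In-place link management can be pictured like this sketch. The `link` helper and the `associations` mapping of target ID → strength are assumptions for illustration, not the engine's actual API.

```python
class Memory:
    """Simplified: only the fields needed to show link storage."""

    def __init__(self, id: str):
        self.id = id
        self.associations: dict[str, float] = {}  # target memory ID -> link strength

def link(a: Memory, b: Memory, strength: float = 1.0) -> None:
    # No separate link table in the adapter: links live on the Memory objects,
    # so mutating them here is all the "persistence" the in-memory store needs.
    a.associations[b.id] = strength
    b.associations[a.id] = strength
```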

The InMemory adapter supports search_lexical / searchLexical via a built-in BM25 tokenizer. When hybrid search is enabled in the config, keyword queries are scored using BM25 over tokenized memory content and combined with vector similarity results. No external dependencies are required.
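The scoring follows the standard BM25 formula. A self-contained sketch, using a naive whitespace tokenizer (not necessarily the adapter's exact one) and illustrative function names:

```python
import math
from collections import Counter

def tokenize(text: str) -> list[str]:
    return text.lower().split()

def bm25_scores(docs: dict[str, str], query: str,
                k1: float = 1.5, b: float = 0.75) -> dict[str, float]:
    # docs: memory ID -> content. Returns a BM25 score per document for the query.
    tokens = {doc_id: tokenize(text) for doc_id, text in docs.items()}
    n = len(docs)
    avgdl = sum(len(t) for t in tokens.values()) / n
    query_terms = tokenize(query)
    # Document frequency of each query term across the collection.
    df = {q: sum(1 for t in tokens.values() if q in t) for q in query_terms}
    scores: dict[str, float] = {}
    for doc_id, terms in tokens.items():
        tf = Counter(terms)
        dl = len(terms)
        score = 0.0
        for q in query_terms:
            if df[q] == 0:
                continue
            idf = math.log((n - df[q] + 0.5) / (df[q] + 0.5) + 1)
            score += idf * tf[q] * (k1 + 1) / (tf[q] + k1 * (1 - b + b * dl / avgdl))
        scores[doc_id] = score
    return scores
```

In hybrid mode these keyword scores would be merged with the vector-similarity ranking; how the two are weighted is controlled by the config.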

Use the InMemory adapter for:

  • Unit tests and integration tests
  • Quick prototyping and experimentation
  • Single-session scripts (data processing, one-off analysis)
  • Benchmarks and evaluation runs
  • Anywhere persistence isn’t needed

Avoid it for:

  • Production deployments (data is lost on restart)
  • Multi-process or multi-server scenarios (no shared state)
  • Large memory collections (> 50,000 memories; search becomes slow)
  • Any scenario requiring durability
Operation          Complexity   Notes
Create             O(1)         Dict insertion
Get by ID          O(1)         Dict lookup
Vector search      O(n)         Scans all hot memories
Migrate hot/cold   O(1)         Dict move
Clear              O(1)         Dict clear

The adapter is fast enough for most use cases. If in-memory storage is your bottleneck, your workload has likely outgrown it — switch to the Postgres or Redis adapter instead.