
Custom Adapters

Write a custom adapter when:

  • Your storage backend isn’t covered by the built-in adapters
  • You need specialized behavior (encryption, multi-region, custom caching)
  • You’re integrating with an existing system that has its own storage layer

Both Python and TypeScript SDKs define an abstract MemoryAdapter class. Your custom adapter must implement every abstract method.

from cognitive_memory.adapters.base import MemoryAdapter
from cognitive_memory.types import Memory
from datetime import datetime
from typing import Any, Optional


class MyAdapter(MemoryAdapter):
    # CRUD
    async def create(self, memory: Memory) -> None: ...
    async def get(self, memory_id: str) -> Optional[Memory]: ...
    async def get_batch(self, memory_ids: list[str]) -> list[Memory]: ...
    async def update(self, memory: Memory) -> None: ...
    async def delete(self, memory_id: str) -> None: ...
    async def delete_batch(self, memory_ids: list[str]) -> None: ...

    # Vector search
    async def search_similar(
        self,
        query_embedding: list[float],
        top_k: int = 10,
        include_superseded: bool = False,
    ) -> list[tuple[Memory, float]]: ...

    # Tiered storage
    async def migrate_to_cold(self, memory_id: str, cold_since: datetime) -> None: ...
    async def migrate_to_hot(self, memory_id: str) -> None: ...
    async def convert_to_stub(self, memory_id: str, stub_content: str) -> None: ...

    # Association links
    async def create_or_strengthen_link(
        self, source_id: str, target_id: str, weight: float,
    ) -> None: ...
    async def get_linked_memories(
        self, memory_id: str, min_weight: float = 0.3,
    ) -> list[tuple[Memory, float]]: ...
    async def delete_link(self, source_id: str, target_id: str) -> None: ...

    # Consolidation helpers
    async def find_fading(self, threshold: float, exclude_core: bool = True) -> list[Memory]: ...
    async def find_stable(self, min_stability: float, min_access_count: int) -> list[Memory]: ...
    async def mark_superseded(self, memory_ids: list[str], summary_id: str) -> None: ...

    # Traversal
    async def all_active(self) -> list[Memory]: ...
    async def all_hot(self) -> list[Memory]: ...
    async def all_cold(self) -> list[Memory]: ...

    # Counts
    async def hot_count(self) -> int: ...
    async def cold_count(self) -> int: ...
    async def stub_count(self) -> int: ...
    async def total_count(self) -> int: ...

    # Batch operations
    async def batch_update(self, memories: list[Memory]) -> None: ...
    async def update_retention_scores(self, updates: dict[str, float]) -> None: ...

    # Transactions
    async def transaction(self, callback) -> Any: ...

    # Reset
    async def clear(self) -> None: ...

Custom adapters may optionally implement search_lexical (Python) / searchLexical (TypeScript) to support hybrid retrieval. The default implementation returns an empty list, so hybrid search gracefully falls back to vector-only if not implemented.

# Python
async def search_lexical(self, query_text: str, top_k: int = 10) -> list[tuple[Memory, float]]:
    # Implement BM25, full-text search, or any keyword-based retrieval
    ...

// TypeScript
async searchLexical(queryText: string, topK?: number): Promise<ScoredMemory[]> {
  // Implement keyword-based retrieval
}

search_similar() is the most performance-critical method. It must:

  1. Compute cosine similarity between the query embedding and all hot memories
  2. Optionally include superseded memories (when include_superseded=True)
  3. Exclude stubs
  4. Return results sorted by similarity score, descending
  5. Truncate to top_k

If your backend supports native vector search (pgvector, Pinecone, etc.), use it. Otherwise, fall back to brute-force:

import math

# Minimal cosine-similarity helper (use numpy or your vector library in practice)
def cosine_similarity(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

async def search_similar(self, query_embedding, top_k=10, include_superseded=False):
    results = []
    for mem in await self.all_hot():
        if mem.embedding is None or mem.is_stub:
            continue
        if mem.is_superseded and not include_superseded:
            continue
        sim = cosine_similarity(query_embedding, mem.embedding)
        results.append((mem, sim))
    results.sort(key=lambda x: x[1], reverse=True)
    return results[:top_k]
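With a native backend such as pgvector, the filtering and sorting above can be pushed into a single query. A sketch, assuming a hypothetical memories table with tier, is_stub, and is_superseded columns (<=> is pgvector's cosine-distance operator):

```sql
-- $1 = query embedding, $2 = top_k, $3 = include_superseded
SELECT id, 1 - (embedding <=> $1) AS similarity  -- distance -> similarity
FROM memories
WHERE tier = 'hot'
  AND NOT is_stub
  AND (NOT is_superseded OR $3)
ORDER BY embedding <=> $1
LIMIT $2;
```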

Your adapter must support three tiers: hot, cold, and stub. The engine calls migrate_to_cold(), migrate_to_hot(), and convert_to_stub() to move memories between tiers. How you implement tiers is up to you:

  • Separate tables/collections (recommended for databases)
  • A tier column with an enum value
  • Separate key prefixes (for Redis)
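As an illustration of the tier-column approach, here is a minimal in-memory sketch. The Tier enum, Record dataclass, and _store dict are hypothetical scaffolding, not part of the SDK; only the three method signatures come from the adapter contract:

```python
from dataclasses import dataclass
from datetime import datetime
from enum import Enum
from typing import Optional


class Tier(Enum):
    HOT = "hot"
    COLD = "cold"
    STUB = "stub"


@dataclass
class Record:
    content: str
    tier: Tier = Tier.HOT
    cold_since: Optional[datetime] = None


class InMemoryTiers:
    """Toy tier store: one dict of records, each carrying a tier column."""

    def __init__(self):
        self._store: dict[str, Record] = {}

    async def migrate_to_cold(self, memory_id: str, cold_since: datetime) -> None:
        rec = self._store[memory_id]
        rec.tier = Tier.COLD
        rec.cold_since = cold_since

    async def migrate_to_hot(self, memory_id: str) -> None:
        rec = self._store[memory_id]
        rec.tier = Tier.HOT
        rec.cold_since = None

    async def convert_to_stub(self, memory_id: str, stub_content: str) -> None:
        rec = self._store[memory_id]
        rec.tier = Tier.STUB
        rec.content = stub_content  # replace full content with the summary stub
```

A database-backed adapter would update the tier column in SQL instead, but the state transitions are the same.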

If your backend supports transactions, implement them. If not, provide a passthrough:

async def transaction(self, callback):
    return await callback(self)  # no-op for non-transactional backends
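For backends with real transactions, delegate to the driver (for example, asyncpg's async with conn.transaction():). Even a non-transactional in-memory adapter can approximate the contract by snapshotting state and restoring it on error. A sketch, where self._store is a hypothetical dict holding the adapter's memories:

```python
import copy


class SnapshotTransactions:
    """Approximate transactions for an in-memory adapter:
    deep-copy the store before the callback, restore it if the callback raises."""

    def __init__(self):
        self._store: dict[str, dict] = {}

    async def transaction(self, callback):
        snapshot = copy.deepcopy(self._store)
        try:
            return await callback(self)
        except Exception:
            self._store = snapshot  # roll back every change made by the callback
            raise
```

Deep-copying the whole store is O(n) per transaction, so this is only reasonable for small in-memory adapters; real databases do this far more cheaply.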

Use the built-in test suite to verify your adapter implements the contract correctly:

from cognitive_memory.tests.test_adapters import AdapterTestSuite

class TestMyAdapter(AdapterTestSuite):
    def create_adapter(self):
        return MyAdapter(...)

This runs a comprehensive set of tests covering CRUD, vector search, tiered storage, associations, and consolidation.