Memory plugin for OpenClaw

Give AI a
Brain-like Memory

Memok is a persistent memory system for OpenClaw—auto-save, intelligent recall, and overnight dream optimization—so your agents keep getting stronger the more you use them.

Memok is model-agnostic: the same durable, associative context for any LLM, frontier or local. Compact models gain long-horizon judgment and continuity that used to require a far larger model. Small weights, big-brain behavior.

100K+
Memory Sentences
10M+
Word-Sentence Links
24h
Dream Cycle

Why Memok exists

Agents forget by default: every fresh session throws away hard-won context. Vector stacks help, but they often mean new vendors, opaque scores, and heavy embedding pipelines—painful for teams that must stay local or ship fast on small models.

Memok started from a simple goal: durable, explainable memory that installs with your agent, respects privacy, and keeps improving while you sleep, so builders can move from “cool demo” to “production copilot” without a memory-platform tax.

Problem → durable memory on your stack → measurable continuity for agents. No separate memory SaaS required.

We do not publish tagged releases; changes land on the default branch. Bugs → Issues; broader design threads → Discussions.

Community voices

We highlight public write-ups in GitHub Discussions—deployment notes, before/after patterns, and teaching ideas. This page does not host paid testimonials; your thread can be the first.

Share in Discussions →

Built in the open

Memok moves forward with contributors on the memok-ai repository—there is no separate vendor roster here. Read code, open issues, or land a focused PR.

View contributor graph →

Core Features

Not just storage—memory that works like a brain: selective encoding, associative recall, and sleep-time consolidation. The same powerful memory layer for every model; lightweight agents can punch far above their parameter count.

💾

Auto-Save

Automatically extracts core content after each conversation round and stores it in a local SQLite database. No manual operation needed—conversation is memory.

🎯

Smart Recall

Based on random word sampling plus weight association, Memok automatically injects relevant memory candidates before each round. The AI decides which to use.

🌙

Dream Optimization

Automatically runs the dreaming-pipeline at night: weight decay, orphan cleanup, and memory merging, making memories more precise over time.

Brain-Like Memory, Grounded in Science

Memok is not a generic vector store with a catchy name. Its pipeline is deliberately aligned with cognitive mechanisms—selective encoding, associative retrieval, consolidation during “offline” processing, and controlled forgetting—so behavior stays interpretable and defensible. That design depth is the moat: fewer moving parts than embedding stacks, but a clearer path from theory to shipped code.

Scientific anchors (inspired-by, not medical claims)

  • Encoding & salience — each round distills core sentences and keywords, similar to gist-based memory rather than raw log dumps.
  • Associative recall — weighted word–sentence graphs echo spreading activation: recall is explainable paths, not opaque cosine scores.
  • Consolidation (“dream”) — nightly pruning, merging, and decay resemble sleep-dependent memory reorganization in the literature.
  • Forgetting as a feature — low-value links are trimmed so the agent does not drown in stale context—analogous to adaptive forgetting.
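The associative-recall bullet above can be condensed into a tiny sketch of spreading activation. The word-sentence links and weights here are invented for illustration, not Memok's actual schema:

```python
# Sketch of weight-based associative recall (illustrative data, not Memok's API).
# Query words "activate" sentences through weighted word-sentence links;
# sentences accumulate activation and the top scorers become recall candidates.
from collections import defaultdict

# word -> [(sentence_id, link_weight), ...]
links = {
    "react": [(1, 3.0), (2, 1.0)],
    "hooks": [(1, 2.0)],
    "ssr":   [(2, 2.5), (3, 1.0)],
}

def recall(query_words, top_k=2):
    activation = defaultdict(float)
    for word in query_words:
        for sent_id, weight in links.get(word, []):
            activation[sent_id] += weight          # spreading activation step
    return sorted(activation, key=activation.get, reverse=True)[:top_k]

print(recall(["react", "hooks"]))  # [1, 2]
```

Because every candidate arrives with the path (word, link weight) that activated it, recall is auditable: you can print why a sentence surfaced instead of trusting an opaque similarity score.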

Technical whitepaper

We publish a detailed walkthrough of Memok's algorithms: graph construction, sampling-based recall, weight updates, dream pipelines, and complexity trade-offs versus dense vector indexes. Use it for security reviews, academic citations, or internal architecture boards.

Who Memok Is For

Beyond solo AI developers: teams shipping agents inside real organizations and classrooms.

🛠️

AI builders & integrators

Plugin authors, platform engineers, and OpenClaw operators who need durable memory without running a separate vector SaaS.

🏢

Enterprise & internal AI

Copilots on CRM, support, ops, and knowledge bases—where conversations must compound week over week, not reset every ticket.

🎓

Education & research

Reproducible local memory for labs, teaching assistants, and human–AI interaction studies—inspectable graphs instead of black-box retrieval.

🔒

Regulated & IP-heavy teams

Legal, finance, healthcare-adjacent workflows, and R&D groups that cannot ship transcripts to third-party embedding APIs by default.

Small models, outsized judgment

Frontier models are not the only path to “smart” agents. Memok gives resource-constrained teams the same persistent, associative memory layer: your 7B or sub-10B local model can reason with continuity, user preference, and institutional context that used to require a much larger bill. Spend budget on governance and UX—not only on parameter count.

Privacy-first, on your metal

Memories live in SQLite on your machine or VPC—no mandatory upload of conversation text to a vector cloud. That is a structural advantage for data-sensitive industries: easier DPIAs, clearer data residency, and simpler air-gapped or sovereign-cloud deployments. Pair with your existing KMS, backup, and access policies instead of negotiating yet another vendor subprocessors list.

  • Default path keeps embeddings out of the loop—smaller attack surface than full RAG stacks.
  • Works offline once installed; ideal for labs, edge sites, and classified-style environments (subject to your own compliance review).

Scenarios we design for

Composite journeys—illustrative of how memory behaves, not paid endorsements.

Replace names and numbers with your own pilot; the mechanics stay the same.

🎧

Customer support copilot

Each ticket inherits prior symptoms, SKU quirks, and policy exceptions the team already resolved. The model sees candidate memories before replying—fewer repeated questions, faster first-contact resolution.

📚

Teaching & tutoring assistant

Concepts a student struggled with last week resurface when related topics appear. Progress is encoded as sentences + links, not a giant chat log dump—easier to audit for instructors.

🏛️

Enterprise knowledge from dialogue

Post-mortems, runbooks, and sales calls accumulate as structured memory instead of dying in Slack threads. Recall stays on VPC-local SQLite for sensitive sectors.

Benchmarks

Real-world performance data from production use

<50ms
Recall Latency
SQLite local query
~200ms
Save Speed
Per conversation round
~15MB
Storage
For 1000+ sentences + links

vs Traditional Vector Databases

| Feature | Memok | Pinecone/Weaviate |
| --- | --- | --- |
| Deployment Cost | Free (SQLite) | $25-200/month |
| Recall Method | Word association + weights | Vector similarity only |
| Explainability | Knows why | Black box |
| Cold Start | Works immediately | Needs large dataset |
| Privacy | Data stays local | Uploaded to cloud |
🧠

True Associative Recall

Unlike vector DBs that only find 'similar' items, Memok can associate across topics. Mention 'React' and it might recall 'hooks', 'bundler', 'SSR': contextually related but not semantically identical.

Self-Optimizing

The dream function automatically cleans 20-30% of low-value memories nightly. Memories that get used gain +1 weight, making recall more accurate over time. In our dogfooding, the effective memory ratio improved from 60% to 85%+ after three days.

🎯

Zero Config

No embedding models to tune, no dimensions to set, no indexes to build. Install and it works. One command setup, memories start flowing immediately.

How we validate (and what is still missing)

Numbers on this page come from long-running internal dogfood instances. They are useful for order-of-magnitude thinking—not a substitute for your own benchmarks on your hardware, privacy model, and prompt mix. We are publishing a reproducible harness (datasets + scripts) so third parties can rerun comparisons without trusting our prose alone.

Reference hardware (indicative)

| Profile | Setup | Recall p50 (local SQLite) |
| --- | --- | --- |
| Developer laptop | 8–16 GB RAM, SSD, single-user | <50 ms typical candidate fetch |
| Small VPC | 2 vCPU, co-located with OpenClaw | Same ballpark; dominated by disk + graph size |
| Workstation | 32 GB RAM, NVMe, multi-plugin host | Headroom for larger graphs; still no remote vector hop |

Task-shaped comparison (vector DB vs Memok)

  • Warm-handoff support — Vector search finds semantically similar tickets; Memok surfaces sequences of decisions (“refund + store credit”) tied to the same account story even when wording diverges.
  • Runbook Q&A — Pure cosine can miss procedural glue (“after step 3, roll back feature flag X”). Memok keeps those steps as linked sentences with rising weights when reused.
  • Small-model pairing — In dogfooding, nightly dream passes improved effective recall quality (human-rated usefulness) from ~60% to 85%+ after a few days—without swapping the underlying LLM. Your mileage will vary; treat it as motivation to run the upcoming public harness.
“Not the perfect vector search, but the most brain-like memory system—with forgetting, association, reinforcement, and zero cost, zero ops.”

How It Works

Mimicking human memory mechanisms: Encode → Store → Recall → Consolidate

1

Dialogue Encoding

After each OpenClaw conversation, memok automatically extracts core sentences (core_idea) and keywords (core_words).

// Auto-triggered, no intervention needed
Conversation → Extract Core → Store in SQLite
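The shape of this step can be sketched with a hypothetical `encode_round` helper; the real plugin delegates gist extraction to an LLM, whereas this sketch uses a first-sentence gist and a stopword filter purely to show the stored record's fields (which are assumptions here):

```python
# Illustrative distillation of one conversation round into core_idea + core_words.
# encode_round, the stopword list, and the field names are stand-ins for
# whatever the plugin's LLM-based extractor actually produces.
import re

STOPWORDS = {"the", "a", "is", "to", "and", "of", "in", "it", "for"}

def encode_round(user_msg: str, assistant_msg: str) -> dict:
    # Take the assistant's first sentence as the gist (core_idea).
    core_idea = assistant_msg.split(".")[0].strip() + "."
    # Keep distinctive lowercase tokens from both sides as core_words.
    words = re.findall(r"[a-z0-9]+", (user_msg + " " + assistant_msg).lower())
    core_words = sorted({w for w in words if w not in STOPWORDS and len(w) > 2})
    return {"core_idea": core_idea, "core_words": core_words}

rec = encode_round("How do I cache builds?",
                   "Enable the persistent cache in config. It cuts rebuild time.")
print(rec["core_idea"])   # Enable the persistent cache in config.
```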
2

Memory Storage

Hierarchical data storage: words (raw), normal_words (normalized concepts), sentences, links (associations).

sentences: 1,176 entries
normal_words: 1,548 concepts  
sentence_to_normal_link: 110,956 associations
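The four-table layout above could look roughly like this in SQLite; column names and defaults are assumptions for illustration, not Memok's actual schema:

```python
# Minimal sketch of the hierarchical storage: words, normal_words, sentences,
# and the word-sentence link table. Column names/defaults are assumed.
import sqlite3

con = sqlite3.connect(":memory:")  # the real DB is a single local file
con.executescript("""
CREATE TABLE words        (id INTEGER PRIMARY KEY, surface TEXT);
CREATE TABLE normal_words (id INTEGER PRIMARY KEY, concept TEXT UNIQUE);
CREATE TABLE sentences    (id INTEGER PRIMARY KEY, core_idea TEXT,
                           weight INTEGER DEFAULT 0, duration INTEGER DEFAULT 30);
CREATE TABLE sentence_to_normal_link (
    sentence_id    INTEGER REFERENCES sentences(id),
    normal_word_id INTEGER REFERENCES normal_words(id),
    weight         REAL DEFAULT 1.0
);
""")
con.execute("INSERT INTO sentences (core_idea) VALUES (?)",
            ("memok v2 pipeline 8 min faster than v1",))
print(con.execute("SELECT COUNT(*) FROM sentences").fetchone()[0])  # 1
```

A single-file schema like this is why deployment cost stays at zero: backups, inspection, and migration are all ordinary SQLite operations.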
3

Smart Recall

Before each conversation, Memok randomly samples 20% of the vocabulary plus the query words, recalls associated sentences, and injects them into the system context.

// Auto-injected candidate memories
(memok) Below are candidate memories attached to each round...
- [id=123] memok v2 Pipeline 8 min faster than v1
- [id=456] Dream function design: sentence + word cleanup
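The sampling step might be sketched like this; `candidate_memories`, the vocabulary, and the word-to-sentence map are hypothetical stand-ins:

```python
# Sketch of random-sample recall: a 20% slice of the vocabulary plus the query's
# own words seed the search; linked sentence ids become candidate memories that
# the LLM is free to use or ignore.
import random

vocab = ["react", "hooks", "ssr", "bundler", "sqlite", "cron"]
word_to_sentences = {"react": [123], "sqlite": [456], "cron": [456]}

def candidate_memories(query_words, sample_ratio=0.2, seed=None):
    rng = random.Random(seed)
    k = max(1, int(len(vocab) * sample_ratio))
    probe = set(rng.sample(vocab, k)) | set(query_words)   # random slice + query
    ids = {sid for w in probe for sid in word_to_sentences.get(w, [])}
    return sorted(ids)

print(candidate_memories(["sqlite"]))  # always contains 456; may add others
```

The random slice is what enables cross-topic surprise: a sentence can surface even when no query word points at it directly.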
4

Usage Feedback

If AI uses a memory, it calls memok_report_used_memory_ids to report it. That memory's weight increases by 1, making it more likely to be recalled.

// AI auto-reports
memok_report_used_memory_ids([123, 456])
→ Sentences 123, 456 weight + 1
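The feedback loop reduces to a single UPDATE per reported id; table and column names here are assumed for illustration:

```python
# Sketch of the usage-feedback step: reporting used ids bumps each sentence's
# weight by 1, so reused memories are more likely to be recalled next time.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE sentences (id INTEGER PRIMARY KEY, weight INTEGER DEFAULT 0)")
con.executemany("INSERT INTO sentences (id) VALUES (?)", [(123,), (456,), (789,)])

def report_used_memory_ids(used_ids):
    # Reinforcement: +1 weight for every memory the AI says it actually used.
    con.executemany("UPDATE sentences SET weight = weight + 1 WHERE id = ?",
                    [(i,) for i in used_ids])

report_used_memory_ids([123, 456])
print(con.execute("SELECT id, weight FROM sentences ORDER BY id").fetchall())
# [(123, 1), (456, 1), (789, 0)]
```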
5

Dream Organization

Runs automatically each early morning: duration decay, low-weight sentence cleanup, orphan-word deletion, and memory merging.

# Auto-executes daily at 03:00
predream-decay: duration -1 for all records
dreaming-pipeline: sample words → story → merge → cleanup
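The decay-and-cleanup portion of the dream pass can be sketched as three statements; thresholds, table names, and columns are assumptions, and the merge/story steps are omitted:

```python
# Sketch of the nightly dream pass: decay every record's duration, drop
# sentences whose duration and weight have both hit zero, then delete
# links orphaned by that removal. Thresholds are illustrative.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE sentences (id INTEGER PRIMARY KEY, weight INTEGER, duration INTEGER);
CREATE TABLE links (sentence_id INTEGER, word_id INTEGER);
""")
con.executemany("INSERT INTO sentences VALUES (?, ?, ?)",
                [(1, 5, 10), (2, 0, 1), (3, 0, 3)])
con.executemany("INSERT INTO links VALUES (?, ?)", [(1, 7), (2, 8)])

def dream_pass():
    con.execute("UPDATE sentences SET duration = duration - 1")           # predream decay
    con.execute("DELETE FROM sentences WHERE duration <= 0 AND weight <= 0")
    con.execute("DELETE FROM links WHERE sentence_id NOT IN (SELECT id FROM sentences)")

dream_pass()
print(con.execute("SELECT id FROM sentences ORDER BY id").fetchall())  # [(1,), (3,)]
```

Sentence 2 (zero weight, expired duration) is forgotten along with its link, while reinforced or still-fresh memories survive: forgetting as a feature, in three lines of SQL.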

Architecture

Lightweight, local, scalable

SQLite
Local Storage

Single-file database, no deployment needed. Supports 1000+ user scale.

OpenClaw
Plugin Integration

Native plugin API, seamlessly integrated into conversation flow.

LLM API
Model Agnostic

Supports DeepSeek, Moonshot, OpenAI, or any compatible API.

Croner
Scheduled Tasks

In-process scheduler, no system crontab required.

Community & ecosystem

Ship with the upstream repo: issues for bugs, Discussions for design questions, stars for signal. Templates and integration guides land there first.

Install OpenClaw Plugin

Install memok as an OpenClaw plugin—one-click script or manual clone. Start using it the moment you install—no long setup before you can feel memory working.

Method 1: One-Click Script (Recommended)

bash
# Linux / macOS
bash <(curl -fsSL https://raw.githubusercontent.com/galaxy8691/memok-ai/main/scripts/install-linux-macos.sh)

# Windows PowerShell
irm https://raw.githubusercontent.com/galaxy8691/memok-ai/main/scripts/install-windows.ps1 | iex

Method 2: Manual (OpenClaw plugin from repo)

bash
git clone https://github.com/galaxy8691/memok-ai.git
openclaw plugins install ./memok-ai
openclaw memok setup    # Interactive config for LLM and dream schedule
openclaw gateway restart

Learn, troubleshoot, go deeper

Video walkthroughs and a hosted sandbox are on the roadmap—today the fastest path is install + GitHub docs. Use the FAQ while we expand formal guides.

Video walkthrough

Recorded install + recall walkthrough is planned. Watch the repo and Discussions for the first drop.

Coming soon

Hosted playground

A read-only sandbox to feel recall before production install is on the roadmap—no public URL yet.

Roadmap

Commercial inquiries—describe your deployment and compliance needs in GitHub Discussions. This is not a commitment to enterprise SLAs until explicitly offered.

Install succeeded but no memories appear—what should I check?

Confirm OpenClaw is calling the plugin hooks, verify SQLite path permissions, and run one manual conversation round with logging enabled. See GitHub issues for known gateway edge cases.

Can I run without outbound calls to embedding APIs?

Yes—that is the default path. Memok builds associations from extracted words and graph weights; you only need whatever LLM endpoint you already configured for OpenClaw.

How do I back up or migrate memory?

Point backups at the SQLite file and configuration directory your setup uses. Treat it like any other stateful service: snapshot before upgrades.
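For SQLite specifically, Python's online backup API gives a consistent snapshot even while the database is being written; the in-memory connections below are stand-ins for wherever your configuration actually points:

```python
# Sketch of a consistent hot backup via sqlite3's online backup API.
# In practice the source is your live memory DB file and the destination
# a dated snapshot file; :memory: is used here only to keep this runnable.
import sqlite3

src = sqlite3.connect(":memory:")          # stand-in for the live memory DB
src.execute("CREATE TABLE sentences (id INTEGER PRIMARY KEY, core_idea TEXT)")
src.execute("INSERT INTO sentences (core_idea) VALUES ('example memory')")
src.commit()

dst = sqlite3.connect(":memory:")          # stand-in for the snapshot target
src.backup(dst)                            # safe even while the DB is in use
print(dst.execute("SELECT COUNT(*) FROM sentences").fetchone()[0])  # 1
```

Remember to snapshot the configuration directory alongside the database so a restore brings back schedules and LLM settings, not just memories.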

Where do I request integrations (Slack, Teams, custom CRM)?

Open a Discussion with your workflow sketch. We prioritize integrations that map cleanly to the sentence/link model.

Roadmap & transparency

Open source and local-first—the community edition keeps evolving. Priorities shift with maintainer bandwidth and user signal—watch the repo and join Discussions to vote with concrete use cases.

Feature requests & bug tracker →

Completed
Community v1.0
OpenClaw plugin, auto-save, smart recall, dream optimization ✓
In Progress
Community Improvements
README docs, install scripts, example configs, integration guides for common stacks
Planned
Reproducible benchmark harness
Public scripts + datasets so teams can rerun Memok vs vector baselines on their own hardware
Planned
Onboarding video & hosted playground
Recorded setup walkthrough; optional read-only sandbox to feel recall before installing in production
Planned
Security pack for enterprises
Threat model summary, data flow diagrams, and DPA-friendly language (not a substitute for your counsel)

Security & compliance posture

Memok does not ship a substitute for your legal review. What we do provide is a small, inspectable surface: local SQLite, explicit recall candidates, and no mandatory third-party vector upload. Encryption at rest follows your disk / volume policy; transport security follows how you already terminate TLS to OpenClaw and your LLM provider.

  • Access control is your OS / container IAM plus OpenClaw configuration—rotate credentials the same way you do today.
  • We do not claim SOC 2 or HIPAA certification for the plugin itself; regulated teams should pair Memok with their existing controls and paperwork.
  • Privacy policy & data handling statements will live alongside the upstream repository so they version with the code.

Follow the repo for security advisories →

Let AI Truly Remember You

No more cold starts or amnesia loops. Give OpenClaw persistent, agent-grade memory—so any model, especially smaller ones, can reason with continuity and context like a much larger brain.