Open for early access

Give your agents memory

Persistent memory for AI agents. Store, recall, and share knowledge across sessions — five MCP tools, any agent, any editor. Set up in 30 seconds.

30-second install · 500 memories free · Apache 2.0 · self-host · 52.2% LifeBench
terminal
LOCAL MODE
# Local mode — no cloud, no signup, no API key
$ npx central-intelligence-local
Central Intelligence Local — MCP server running
Tools: remember, recall, forget, context, share

# Or use cloud mode for cross-device sync
$ npx central-intelligence-local sync --key ci_sk_xxx
130 memories synced. AI tools configured.

01 The problem

Every agent session starts from zero.

Your agent learns your preferences, understands your codebase, figures out your architecture. Then the session ends and it forgets everything. Next session — same questions, same mistakes, same context-building from scratch.

40%
of agent session time is spent re-establishing context.
0
memories retained between sessions by default.

// without CI

"What framework is this project using?"
"What's your preferred code style?"
"How does the auth system work here?"
…asked every session.

// with CI

"I recall this is a Next.js project with Tailwind. You prefer functional components and the auth uses JWT tokens stored in httpOnly cookies. Let me pick up where we left off."
/agent  Your agent can install this for you.
A dedicated page with copy-pasteable instructions — point your coding agent at it and walk away.
Open /agent →

02 Five tools

Five tools. Infinite memory.

Everything your agent needs to remember, delivered as MCP tools. Each is also available as a REST endpoint.

remember

The core primitive. Store anything for later recall — preferences, decisions, architecture, debugging insights. Institutional knowledge accrues over time.

"User prefers TypeScript, uses Hono for APIs, deploys to Fly.io. Auth is JWT in httpOnly cookies."
recall

Semantic search across all stored memories. Not keyword matching — actual understanding of intent.

recall("what language does the user prefer?")
context

Auto-load relevant memories for the current task. Describe what you're working on, get back everything you knew before.

context("refactoring the auth system")
share

Make memories available to other agents. Your coding agent learns something, your testing agent uses it.

share(memory_id, scope: "agent" → "org")
forget

Delete outdated or incorrect memories. Keep your agent's knowledge accurate and current.

forget("memory_abc123")
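Together the tools form a simple store-and-retrieve loop. Here is a minimal in-memory sketch of that loop in TypeScript — plain functions standing in for the real MCP tools, with naive substring matching as a placeholder for semantic search (every name here is illustrative, not the actual client API):

```typescript
// Illustrative in-memory stand-in for remember / recall / forget.
// The real recall does semantic search; substring matching is a placeholder.
type Memory = { id: string; content: string };

const store = new Map<string, Memory>();
let nextId = 0;

function remember(content: string): string {
  const id = `memory_${nextId++}`;
  store.set(id, { id, content });
  return id;
}

function recall(query: string): Memory[] {
  const q = query.toLowerCase();
  return [...store.values()].filter(m => m.content.toLowerCase().includes(q));
}

function forget(id: string): boolean {
  return store.delete(id);
}

const id = remember("User prefers TypeScript, deploys to Fly.io");
console.log(recall("typescript").length); // 1
forget(id);
console.log(recall("typescript").length); // 0
```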
+ REST API

Not using MCP? Every tool is also a REST endpoint. Works with any HTTP client, any language, any framework.

POST /memories/remember
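Because every tool is a plain HTTP endpoint, a REST call is just a JSON POST. A hedged sketch — the base URL, auth header, and body fields below are assumptions for illustration, not documented API contracts; only the `/memories/remember` path comes from the docs:

```typescript
// Sketch of a REST call to the remember endpoint.
// Host, auth scheme, and body fields are illustrative assumptions.
function buildRememberRequest(apiKey: string, content: string, scope: string) {
  return {
    url: "https://api.centralintelligence.online/memories/remember", // assumed host
    init: {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        "Authorization": `Bearer ${apiKey}`, // assumed auth scheme
      },
      body: JSON.stringify({ content, scope }),
    },
  };
}

const req = buildRememberRequest("ci_sk_xxx", "User prefers TypeScript", "user");
// await fetch(req.url, req.init); // the actual call, once endpoint details are confirmed
console.log(JSON.parse(req.init.body).scope); // "user"
```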

03 Scopes

Memory that scales with your team.

Three scopes let you control exactly who knows what.

scope: "agent"

Agent Scope

Private to one agent. Session continuity, personal context, learned patterns.

scope: "user"

User Scope

Shared across all agents serving one user. Preferences, history, consistent experience.

scope: "org"

Org Scope

Organization-wide knowledge. Architecture decisions, team conventions, shared context.
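The three scopes nest: agent-private is narrowest, org-wide is broadest. A visibility check can be sketched in a few lines — the type shapes and field names here are assumptions for illustration, not the product's data model:

```typescript
// Illustrative scope-visibility rule: "agent" ⊂ "user" ⊂ "org".
// Field names and shapes are assumed for this sketch.
type Scope = "agent" | "user" | "org";
type ScopedMemory = { content: string; scope: Scope; agentId: string; userId: string };

function visibleTo(m: ScopedMemory, agentId: string, userId: string): boolean {
  switch (m.scope) {
    case "agent": return m.agentId === agentId; // private to one agent
    case "user":  return m.userId === userId;   // all agents serving one user
    case "org":   return true;                  // everyone in the org
  }
}

const memories: ScopedMemory[] = [
  { content: "session notes",  scope: "agent", agentId: "coder", userId: "sam" },
  { content: "prefers TS",     scope: "user",  agentId: "coder", userId: "sam" },
  { content: "auth uses JWT",  scope: "org",   agentId: "coder", userId: "sam" },
];

// A testing agent serving the same user sees user + org memories,
// but not the coding agent's private ones.
console.log(memories.filter(m => visibleTo(m, "tester", "sam")).length); // 2
```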

04 Integrations

Works with everything.

Click any platform for setup instructions. Every path takes about 30 seconds.

Claude Code MCP config

One command — gets your API key and configures everything:

npx central-intelligence-local signup

Or add manually to ~/.claude/settings.json:

{
  "mcpServers": {
    "central-intelligence": {
      "command": "npx",
      "args": ["central-intelligence-mcp"],
      "env": { "CI_API_KEY": "YOUR_KEY" }
    }
  }
}

05 Deployment

Your data never leaves your machine — unless you want it to.

CI Local runs entirely on your machine. SQLite database, local embeddings, zero cloud dependencies. No signup, no API key, no trust required.

⚲ local

Local Mode

Everything stays on your machine. SQLite + local AI embeddings. Unlimited memories. Free forever.

npm i -g central-intelligence-local && ci dashboard
→ http://localhost:3141 · health scoring · duplicate detection
☁ cloud

Cloud + Dashboard

Sync memories across devices. View from any browser. Auto-configures Claude Code, Cursor, and Windsurf.

ci signup && ci sync
Open Memory Dashboard · Free tier: 500 ops/mo

06 Pricing

Start free. Scale when ready.

Generous free tier. No credit card required.

Free
$0

For solo developers.

  • 500 memories
  • Unlimited agents
  • Agent + user scopes
  • 60 requests/min
  • Semantic search
  • Community support
Get started
Team (coming soon)
$99/mo

For teams & orgs.

  • 500,000 memories (1,000×)
  • Unlimited agents
  • Multi-seat org sharing
  • 600 requests/min (10×)
  • Dedicated support
Contact sales

07 Benchmarks

Measured, not marketed.

We run two of the hardest published memory benchmarks, with an open-source harness so you can reproduce the results.

LifeBench (2026)

52.2% on LifeBench

A 2026 academic benchmark testing long-term memory across real-world data: messages, calendar, health records, notes, calls. 2,003 questions, 10 users, 51K events. The hardest memory benchmark published to date.

System | Overall | Info Extraction | Multi-hop | Temporal | Nondecl.
Central Intelligence · gpt-5.4-mini | 52.2% | 47.2% | 52.9% | 46.4% | 64.1%
Multi-source, not just chat

SMS, calendar, health records, notes, photos, calls. Real-world signals, not synthetic conversations.

15K memories per user

A full year of behavioral data. Retrieval at real scale, not toy examples.

SOTA is 55%, we're at 52%

Top system (MemOS) barely clears half. We close the gap with no fact extraction — just better retrieval and ranking.

Paper: arxiv.org/abs/2603.03781 (March 2026, Nanjing University + Huawei). Answer model: gpt-5.4-mini. Judge: gpt-4.1-mini.
LongMemEval (ICLR 2025)

75.0% on LongMemEval

Tests conversational memory across 500 questions: single-session recall, multi-session reasoning, temporal reasoning, knowledge updates, preference tracking. The standard benchmark for AI memory systems.

System | Overall | Single-sess. | Multi-sess. | Temporal | Preference
Central Intelligence · gpt-5.4-mini | 75.0% | 91.9% | 66.2% | 69.9% | 76.7%
Beats full-context baseline

Our selective retrieval + ranking outperforms giving the LLM every memory at once (60.2%). Better signal, less noise.

No extraction needed

Zero LLM calls during recall. Just vector search, keyword search, cross-encoder reranker. Simple, fast, reliable.

76.7% preference recall

Infers preferences from conversational context. "Loved the Ethiopian single origin" becomes a searchable coffee preference.

Paper: arxiv.org/abs/2410.10813 (ICLR 2025). Answer model: gpt-5.4-mini. Judge: gpt-4o.
Open Source

Test it yourself.

Don't take our word for it. We built an open-source benchmark so you can test Central Intelligence against any other memory provider. One command, same tests, comparable results.

# Test Central Intelligence
$ npx agent-memory-benchmark --provider central-intelligence --api-key $CI_API_KEY

# Test against any MCP server or other providers
$ npx agent-memory-benchmark --provider mem0 --api-key $MEM0_KEY
$ npx agent-memory-benchmark --provider zep --api-key $ZEP_KEY
$ npx agent-memory-benchmark --provider mcp --api-url $MCP_URL
56 tests, 8 categories

Factual recall, semantic search, temporal reasoning, conflict resolution, forgetting, cross-session, multi-agent, cost.

Deterministic scoring

Binary pass/fail on keyword presence. No LLM-as-judge. Same inputs, same scores, every run.
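Keyword-presence scoring fits in a few lines. A sketch of the idea — the real harness's matching rules may differ; this only shows why the score is deterministic:

```typescript
// Deterministic pass/fail: a response passes iff every expected keyword appears.
// Case-insensitive substring check; the actual harness's rules may differ.
function passes(response: string, keywords: string[]): boolean {
  const text = response.toLowerCase();
  return keywords.every(k => text.includes(k.toLowerCase()));
}

console.log(passes("Auth uses JWT in httpOnly cookies", ["jwt", "cookies"])); // true
console.log(passes("Auth uses server sessions", ["jwt", "cookies"])); // false
```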

Add your own

Implement 5 methods (store, search, delete, init, cleanup). Submit a PR. See where you stand.
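The five-method provider surface might look like the sketch below — an illustrative TypeScript interface with a trivial in-memory implementation. The method names follow the list above; the exact signatures are assumptions, so check the harness repo before submitting:

```typescript
// Illustrative provider interface for the benchmark harness.
// Method names come from the docs; signatures are assumed for this sketch.
interface MemoryProvider {
  init(): Promise<void>;
  store(content: string): Promise<string>;   // returns a memory id
  search(query: string): Promise<string[]>;  // returns matching contents
  delete(id: string): Promise<boolean>;
  cleanup(): Promise<void>;
}

class InMemoryProvider implements MemoryProvider {
  private memories = new Map<string, string>();
  private nextId = 0;
  async init() {}
  async store(content: string) {
    const id = `m${this.nextId++}`;
    this.memories.set(id, content);
    return id;
  }
  async search(query: string) {
    const q = query.toLowerCase();
    return [...this.memories.values()].filter(c => c.toLowerCase().includes(q));
  }
  async delete(id: string) { return this.memories.delete(id); }
  async cleanup() { this.memories.clear(); }
}

(async () => {
  const provider = new InMemoryProvider();
  await provider.init();
  await provider.store("User deploys to Fly.io");
  console.log((await provider.search("fly.io")).length); // 1
})();
```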

08 Dashboard

See what your AI actually remembers.

Memory dashboard for Claude Code, Cursor, Windsurf, and ChatGPT. Search, clean up duplicates, import from ChatGPT. Free.

◉ ◉ ◉ centralintelligence.online/app v0.9
Health
9/10
Total
142
Agents
3
Stale
5
Duplicates
2
Projects: CI Development 30 · Lemon Squeezy 20 · ChatGPT Transfer 39
Project | Source | Content | Flags
CI Development | claude-code | Phase 1: hosted dashboard. Move from localhost to web app… | fresh
Lemon Squeezy | chatgpt-transfer | Open-core licensing: MCP server open (trust), CLI closed… | fresh, dup
 | cursor | User prefers TypeScript with strict null checks enabled… | aging

Give your agent a memory.

One command. Thirty seconds. Permanent memory.

terminal
# One command — installs and opens the dashboard
$ npm i -g central-intelligence-local && ci dashboard

CI Local Pro — Dashboard
→ http://localhost:3141
Browser opens automatically

09 Fine print
