Persistent memory for AI agents. Store, recall, and share knowledge across sessions — five MCP tools, any agent, any editor. Set up in 30 seconds.
```shell
# Local mode — no cloud, no signup, no API key
$ npx central-intelligence-local

Central Intelligence Local — MCP server running
Tools: remember, recall, forget, context, share

# Or use cloud mode for cross-device sync
$ npx central-intelligence-local sync --key ci_sk_xxx

130 memories synced. AI tools configured.
```
01 The problem
Your agent learns your preferences, understands your codebase, figures out your architecture. Then the session ends and it forgets everything. Next session — same questions, same mistakes, same context-building from scratch.
02 Five tools
Everything your agent needs to remember, delivered as MCP tools. Each also available as a REST endpoint.
**remember**: The core primitive. Store anything for later recall: preferences, decisions, architecture, debugging insights. Institutional knowledge accrues over time.

**recall**: Semantic search across all stored memories. Not keyword matching, but actual understanding of intent.

**context**: Auto-loads relevant memories for the current task. Describe what you're working on and get back everything you knew before.

**share**: Makes memories available to other agents. Your coding agent learns something; your testing agent uses it.

**forget**: Deletes outdated or incorrect memories, keeping your agent's knowledge accurate and current.
Not using MCP? Every tool is also a REST endpoint. Works with any HTTP client, any language, any framework.
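For non-MCP clients, a recall call is just an authenticated JSON POST. The sketch below builds one such request without sending it; the endpoint path (`/v1/recall`), the base URL, and the field names are illustrative assumptions, not the documented API.

```typescript
// Hypothetical REST call builder for the recall tool. Path, host, and
// body fields are assumptions -- check the real API reference.
type RecallRequest = {
  query: string;                                // natural-language query
  scope?: "agent" | "user" | "organization";    // optional scope filter
  limit?: number;                               // max memories to return
};

function buildRecallCall(apiKey: string, req: RecallRequest) {
  return {
    url: "https://api.example.com/v1/recall",   // placeholder base URL
    init: {
      method: "POST",
      headers: {
        "Authorization": `Bearer ${apiKey}`,
        "Content-Type": "application/json",
      },
      body: JSON.stringify(req),
    },
  };
}

// Any HTTP client can send this, e.g. in Node:
//   const { url, init } = buildRecallCall(process.env.CI_API_KEY!, { query: "deploy steps" });
//   const res = await fetch(url, init);
```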
03 Scopes
Three scopes let you control exactly who knows what.
Private to one agent. Session continuity, personal context, learned patterns.
Shared across all agents serving one user. Preferences, history, consistent experience.
Organization-wide knowledge. Architecture decisions, team conventions, shared context.
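The three scopes reduce to a simple visibility rule. Here is a minimal sketch of that rule; the field names (`agentId`, `userId`, `orgId`) are illustrative assumptions, not the actual schema.

```typescript
// Visibility rule for the three scopes described above.
type Scope = "agent" | "user" | "organization";

interface Memory {
  content: string;
  scope: Scope;
  agentId: string;   // owning agent
  userId: string;    // owning user
  orgId: string;     // owning organization
}

interface Caller { agentId: string; userId: string; orgId: string; }

function visibleTo(m: Memory, c: Caller): boolean {
  switch (m.scope) {
    case "agent":        return m.agentId === c.agentId; // private to one agent
    case "user":         return m.userId === c.userId;   // all agents of one user
    case "organization": return m.orgId === c.orgId;     // everyone in the org
  }
}
```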
04 Integrations
Click any platform for setup instructions. Every path takes about 30 seconds.
One command — gets your API key and configures everything:
```shell
npx central-intelligence-local signup
```
Or add manually to ~/.claude/settings.json:
```json
{
  "mcpServers": {
    "central-intelligence": {
      "command": "npx",
      "args": ["central-intelligence-mcp"],
      "env": { "CI_API_KEY": "YOUR_KEY" }
    }
  }
}
```

05 Deployment
CI Local runs entirely on your machine. SQLite database, local embeddings, zero cloud dependencies. No signup, no API key, no trust required.
Everything stays on your machine. SQLite + local AI embeddings. Unlimited memories. Free forever.
Sync memories across devices. View from any browser. Auto-configures Claude Code, Cursor, and Windsurf.
06 Pricing
Generous free tier. No credit card required.
For solo developers.
For power users.
For teams & orgs.
07 Benchmarks
Two of the hardest memory benchmarks published. Open-source harness if you want to reproduce.
A 2026 academic benchmark testing long-term memory across real-world data: messages, calendar, health records, notes, calls. 2,003 questions, 10 users, 51K events. The hardest memory benchmark published to date.
| System | Overall | Info Extraction | Multi-hop | Temporal | Nondecl. |
|---|---|---|---|---|---|
| Central Intelligence · gpt-5.4-mini | 52.2% | 47.2% | 52.9% | 46.4% | 64.1% |
SMS, calendar, health records, notes, photos, calls. Real-world signals, not synthetic conversations.
A full year of behavioral data. Retrieval at real scale, not toy examples.
The top system (MemOS) barely clears half. We close the gap with no fact extraction, just better retrieval and ranking.
Tests conversational memory across 500 questions: single-session recall, multi-session reasoning, temporal reasoning, knowledge updates, preference tracking. The standard benchmark for AI memory systems.
| System | Overall | Single-sess. | Multi-sess. | Temporal | Preference |
|---|---|---|---|---|---|
| Central Intelligence · gpt-5.4-mini | 75.0% | 91.9% | 66.2% | 69.9% | 76.7% |
Our selective retrieval + ranking outperforms giving the LLM every memory at once (60.2%). Better signal, less noise.
Zero LLM calls during recall. Just vector search, keyword search, cross-encoder reranker. Simple, fast, reliable.
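A toy sketch of that LLM-free recall path: blend vector similarity with keyword overlap, then take the top-k. This is not the production pipeline; the fixed weighted sum here stands in for the cross-encoder reranker, and the weights are made up.

```typescript
// Hybrid recall sketch: vector score + keyword score, no LLM calls.
function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) { dot += a[i] * b[i]; na += a[i] ** 2; nb += b[i] ** 2; }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

function keywordOverlap(query: string, text: string): number {
  const q = new Set(query.toLowerCase().split(/\W+/).filter(Boolean));
  const t = new Set(text.toLowerCase().split(/\W+/).filter(Boolean));
  let hits = 0;
  for (const w of q) if (t.has(w)) hits++;
  return q.size ? hits / q.size : 0;
}

interface Doc { text: string; embedding: number[]; }

function recall(queryText: string, queryVec: number[], docs: Doc[], k = 3): Doc[] {
  return docs
    .map(d => ({
      d,
      // Illustrative 70/30 blend; the real system reranks with a cross-encoder.
      score: 0.7 * cosine(queryVec, d.embedding) + 0.3 * keywordOverlap(queryText, d.text),
    }))
    .sort((x, y) => y.score - x.score)
    .slice(0, k)
    .map(x => x.d);
}
```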
Infers preferences from conversational context. "Loved the Ethiopian single origin" becomes a searchable coffee preference.
Don't take our word for it. We built an open-source benchmark so you can test Central Intelligence against any other memory provider. One command, same tests, comparable results.
```shell
# Test Central Intelligence
$ npx agent-memory-benchmark --provider central-intelligence --api-key $CI_API_KEY

# Test against any MCP server or other providers
$ npx agent-memory-benchmark --provider mem0 --api-key $MEM0_KEY
$ npx agent-memory-benchmark --provider zep --api-key $ZEP_KEY
$ npx agent-memory-benchmark --provider mcp --api-url $MCP_URL
```
Factual recall, semantic search, temporal reasoning, conflict resolution, forgetting, cross-session, multi-agent, cost.
Binary pass/fail on keyword presence. No LLM-as-judge. Same inputs, same scores, every run.
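That deterministic grading scheme fits in a few lines. A minimal sketch, assuming a response passes iff every expected keyword appears case-insensitively:

```typescript
// Deterministic grader: no LLM judge, so identical inputs always
// produce identical scores.
function passes(response: string, keywords: string[]): boolean {
  const r = response.toLowerCase();
  return keywords.every(k => r.includes(k.toLowerCase()));
}

function score(results: { response: string; keywords: string[] }[]): number {
  const passed = results.filter(r => passes(r.response, r.keywords)).length;
  return results.length ? passed / results.length : 0;
}
```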
Implement 5 methods (store, search, delete, init, cleanup). Submit a PR. See where you stand.
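The five-method contract might look roughly like this; the exact names and signatures in `agent-memory-benchmark` may differ, so check the repo before submitting a PR. The in-memory implementation below is only a stand-in for testing the harness wiring.

```typescript
// Hypothetical provider contract for the benchmark harness.
interface MemoryProvider {
  init(): Promise<void>;
  store(content: string, tags?: string[]): Promise<string>;  // returns an id
  search(query: string, limit?: number): Promise<string[]>;
  delete(id: string): Promise<void>;
  cleanup(): Promise<void>;
}

// Trivial in-memory reference implementation (substring search only).
class InMemoryProvider implements MemoryProvider {
  private mem = new Map<string, string>();
  private nextId = 0;
  async init() {}
  async store(content: string) {
    const id = String(this.nextId++);
    this.mem.set(id, content);
    return id;
  }
  async search(query: string, limit = 10) {
    const q = query.toLowerCase();
    return [...this.mem.values()].filter(c => c.toLowerCase().includes(q)).slice(0, limit);
  }
  async delete(id: string) { this.mem.delete(id); }
  async cleanup() { this.mem.clear(); }
}
```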
08 Dashboard
Memory dashboard for Claude Code, Cursor, Windsurf, and ChatGPT. Search, clean up duplicates, import from ChatGPT. Free.
| Project | Source | Content | Flags |
|---|---|---|---|
| CI Development | claude-code | Phase 1: hosted dashboard. Move from localhost to web app… | fresh |
| Lemon Squeezy | chatgpt-transfer | Open-core licensing: MCP server open (trust), CLI closed… | fresh, dup |
| — | cursor | User prefers TypeScript with strict null checks enabled… | aging |
One command. Ten seconds. Permanent memory.
```shell
# One command — installs and opens the dashboard
$ npm i -g central-intelligence-local && ci dashboard

CI Local Pro — Dashboard → http://localhost:3141
Browser opens automatically
```
09 Fine print
Memories your agents create (content, tags, embeddings), your API key hash, and basic usage metadata. Nothing personal beyond what you provide at signup.
Solely to provide the service — storing, searching, and returning memories your agents request. We do not sell, share, or train models on your content. Embeddings are generated via OpenAI's API.
PostgreSQL on Fly.io. API keys hashed. TLS for all communication. Rate limiting, input validation, SQL injection protection.
Delete memories with the forget tool. Full-key deletion: contact us. Soft-deleted memories are purged within 30 days.
Any lawful purpose. You're responsible for what your agents store. No illegal content, malware, or rights violations. No circumventing rate limits or interfering with the service.
Keep your key secure. Don't share publicly. Revoke a compromised key via DELETE /keys/revoke. We may suspend abused keys.
We aim for high availability but do not guarantee uptime. The service is "as is." We are not liable for data loss or damages.
CI is open source under Apache 2.0. You may self-host the entire stack. These terms apply only to the hosted service.