The thinking layer between you and everything you know.
Mnema is a neural notebook that quietly maps every note, link, and half-formed idea into a living graph — then surfaces what you need before you ask.
01 / what it is
Three primitives. One continuously learning mind.
Capture
Type, paste, dictate, or clip from anywhere. Mnema embeds every entry into a 1024-dim vector the moment it lands.
Connect
A background process discovers semantic links between every note in your archive and re-weights them as you write.
Recall
Ask in plain language. Mnema returns a cited synthesis from your own writing — never a model's hallucination.
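Mnema's internals aren't public, but the capture → connect → recall loop above can be illustrated with a toy sketch. Everything here is an illustrative stand-in, not Mnema's actual model: the trigram-hashing `embed` function, the 64-dim `DIM`, and the sample notes are all assumptions made for the example.

```python
import hashlib
import math

DIM = 64  # stand-in for Mnema's 1024-dim space (illustrative only)

def embed(text: str) -> list[float]:
    """Toy embedding: hash character trigrams into a fixed-size unit vector.
    A real system would use a learned sentence-embedding model."""
    vec = [0.0] * DIM
    for i in range(len(text) - 2):
        h = int(hashlib.md5(text[i:i + 3].encode()).hexdigest(), 16)
        vec[h % DIM] += 1.0
    norm = math.sqrt(sum(x * x for x in vec)) or 1.0
    return [x / norm for x in vec]

def cosine(a: list[float], b: list[float]) -> float:
    # Vectors are unit-normalized, so the dot product is the cosine.
    return sum(x * y for x, y in zip(a, b))

notes: dict[str, list[float]] = {}  # note text -> vector, filled on capture

def capture(text: str) -> None:
    """Capture: embed the entry the moment it lands."""
    notes[text] = embed(text)

def recall(query: str, k: int = 2) -> list[str]:
    """Recall: return the k most semantically similar notes as sources."""
    qv = embed(query)
    ranked = sorted(notes, key=lambda n: cosine(qv, notes[n]), reverse=True)
    return ranked[:k]

capture("attention residue compounds across context switches")
capture("batching small tasks into one block cuts fatigue")
capture("grocery list: eggs, coffee")
print(recall("what did I learn about attention residue?"))
```

The design point the sketch preserves: recall ranks the user's own notes by similarity and returns them as citable sources, rather than generating free-form text.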
02 / how it learns you
A graph that rewires itself every time you write.
Every keystroke updates the embedding space. New nodes attach themselves to neighbors automatically; old, ignored ones quietly fade. The graph you see is always the one you actually use.
- 01
Embed on capture
Local 4-bit quantized model generates a vector in under 40 ms.
- 02
Cluster overnight
A nightly pass re-clusters the graph and resurfaces forgotten threads.
- 03
Cite on recall
Every synthesized answer links back to the source notes — always.
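The fade-and-re-cluster behavior described above can be sketched as a tiny nightly maintenance pass. This is a toy model: the `DECAY` rate, `LINK_THRESHOLD`, pruning cutoff, and sample notes are assumptions for illustration, not Mnema's actual parameters.

```python
# Toy nightly pass: fade links the user ignored, re-link notes whose
# vectors have drifted close together. All constants are illustrative.

DECAY = 0.9           # untouched edges lose 10% weight per night
LINK_THRESHOLD = 0.8  # cosine similarity needed to (re)create an edge
PRUNE_BELOW = 0.05    # edges weaker than this are quietly dropped

def cosine(a: list[float], b: list[float]) -> float:
    num = sum(x * y for x, y in zip(a, b))
    den = (sum(x * x for x in a) ** 0.5) * (sum(y * y for y in b) ** 0.5)
    return num / den if den else 0.0

def nightly_pass(vectors, edges, touched):
    """vectors: node -> embedding; edges: (a, b) -> weight;
    touched: nodes the user wrote to since the last pass."""
    # 1. Fade edges whose endpoints were both ignored.
    for pair in list(edges):
        if not (set(pair) & touched):
            edges[pair] *= DECAY
            if edges[pair] < PRUNE_BELOW:
                del edges[pair]
    # 2. Re-cluster: attach semantically close nodes to each other.
    nodes = list(vectors)
    for i, a in enumerate(nodes):
        for b in nodes[i + 1:]:
            sim = cosine(vectors[a], vectors[b])
            if sim >= LINK_THRESHOLD:
                edges[(a, b)] = max(edges.get((a, b), 0.0), sim)
    return edges

vectors = {"deep-work": [1.0, 0.1], "focus": [0.9, 0.2], "recipes": [0.0, 1.0]}
edges = {("deep-work", "recipes"): 0.2}
edges = nightly_pass(vectors, edges, touched={"focus"})
print(sorted(edges))  # the stale link fades; deep-work <-> focus appears
```

The two phases mirror the list above: ignored structure decays gradually instead of being deleted outright, while the similarity sweep is what can resurface a forgotten thread by wiring it back to notes the user is actively touching.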
ask:mnema
● ready
> what did i conclude about attention residue last quarter?
Across 7 notes between Jan–Mar, you concluded that attention residue compounds across context-switches, and that batching small tasks into a single 90-minute block reduced your end-of-day fatigue by roughly 40%.
03 / pricing
Built for one mind.
Priced like it.
Synapse
most chosen
- Unlimited notes
- Live graph view
- Nightly re-clustering
- Cited recall
- API access
Cortex
- Everything in Synapse
- Team graph (up to 5 seats)
- Custom embedding model
- SOC-2 + audit log