System 3: Collective Intelligence in the Multiplayer AI Era
The winners of the next decade won't just build better AI tools.
Sentra is an independent research team studying how memory, reinforcement, and architectural primitives shape what an AI system can actually do. Our thesis: a 50× smaller model in the right architecture can match a frontier model in the wrong one. We publish papers, build products on top of them, and keep the two tightly coupled.
Sambarta Ray Barman · Andrey Starenky · Sophia Bodnar · Nikhil Narasimhan · Ashwin Gopinath
Semantic memory filesystem.
Drop a folder. It becomes a brain. Lattice indexes local files and your cloud connectors (Drive, Notion, Dropbox, Slack, GitHub) into a unified, ACL-faithful knowledge graph. Every entity is a real directory; every relationship a symlink; every connector writes back into the same folder. Built on the Semantic Memory Filesystem research line.
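The "entity = directory, relationship = symlink" idea can be made concrete with a few lines of plain filesystem code. This is a minimal sketch of the layout in that spirit, not Lattice's actual schema: the names (`entities`, `alice`, `acme`, `works_at`) are illustrative assumptions.

```python
import os
import tempfile

# Hypothetical layout: every entity is a real directory,
# every relationship a symlink into another entity's directory.
root = tempfile.mkdtemp()
alice = os.path.join(root, "entities", "alice")
acme = os.path.join(root, "entities", "acme")
os.makedirs(alice)
os.makedirs(acme)

# The relationship "alice works_at acme" is just a symlink.
os.symlink(acme, os.path.join(alice, "works_at"), target_is_directory=True)

# Any standard tool can traverse the graph: following the
# symlink resolves the relationship to the target entity.
assert os.path.realpath(os.path.join(alice, "works_at")) == os.path.realpath(acme)
```

Because the graph is ordinary directories and symlinks, `ls`, `find`, and OS-level permissions (the ACL-faithfulness mentioned above) work on it without any custom tooling.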
Memory failure physics. Deterministic substrates. Optimization & alignment. World models. Each paper extends the thesis that structured negative feedback and careful architectural design matter more than model size.
Forgetting and false recall emerge from the geometry itself when memories are stored as vectors. The system exhibits power-law forgetting matching human cognitive patterns (exponent b ≈ 0.460 vs. human b ≈ 0.5), without any biologically motivated mechanism - the failure modes are intrinsic to high-dimensional representation.
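A power-law forgetting curve with the reported exponent can be sketched directly. This assumes the standard parametrisation R(t) = c · t^(−b); the scale constant `c` and the function name are illustrative, not the paper's definition.

```python
def recall_probability(t: float, b: float = 0.460, c: float = 1.0) -> float:
    """Power-law forgetting curve R(t) = c * t**(-b).

    b = 0.460 is the exponent reported above for vector memory;
    human data give b ~ 0.5. The scale c is an illustrative assumption.
    """
    return c * t ** (-b)

# Recall decays slowly but without bound: doubling the delay
# multiplies recall by 2**(-b), at every timescale.
assert recall_probability(1.0) == 1.0
assert recall_probability(16.0) < recall_probability(4.0) < recall_probability(1.0)
```

The diagnostic property of a power law, as opposed to exponential decay, is scale invariance: the ratio R(2t)/R(t) is the same constant 2^(−b) regardless of t.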
Establishes the No-Escape Theorem: interference-driven forgetting and false recall cannot be eliminated within semantic memory systems. We formalise the trade-offs between coverage, fidelity, and capacity, and show that no choice of embedding model or retrieval strategy escapes them.
If high-dimensional embeddings are intrinsically lossy (Act I), the substrate has to do work the embedding can't. We propose the Semantic Memory Filesystem - every entity a real directory, every relationship a symlink, every index a derived view. Deterministic, ACL-faithful, and the substrate Lattice runs on.
A formalism for using runtime monitor signals as the reinforcement substrate. Failures aren't noise to be smoothed away - they're the densest available supervision signal. We define the Monitor MDP and show how it changes what an agent can learn from a single deployment.
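The core move - rewards drawn from runtime monitors rather than a task objective - can be sketched as a thin wrapper over an environment step. This is a toy illustration of the idea, assuming the paper's Monitor MDP couples transitions with monitor penalties; the class and field names here are hypothetical, not the paper's formalism.

```python
from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class MonitorMDP:
    """Toy sketch: the reward channel is the runtime monitors' verdict.

    step_env: (state, action) -> next_state, the underlying dynamics.
    monitors: each returns a penalty >= 0; failures are the supervision.
    """
    step_env: Callable[[Any, Any], Any]
    monitors: list  # list[Callable[[state, action, next_state], float]]

    def step(self, state: Any, action: Any) -> tuple:
        next_state = self.step_env(state, action)
        # Reward is minus the summed monitor penalties: zero on a
        # clean transition, negative whenever a monitor fires.
        reward = -sum(m(state, action, next_state) for m in self.monitors)
        return next_state, reward

# Toy environment: state is a counter, action an increment,
# with one monitor enforcing the constraint state <= 3.
mdp = MonitorMDP(
    step_env=lambda s, a: s + a,
    monitors=[lambda s, a, ns: 1.0 if ns > 3 else 0.0],
)
state, r = mdp.step(2, 1)   # next state 3: within bounds, reward 0
state, r = mdp.step(state, 2)  # next state 5: monitor fires, reward -1
```

Every monitor firing during a deployment becomes a labelled transition, which is what makes a single run a dense training signal rather than an anecdote.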
Alignment without preference data. We show that pure negative feedback - penalties for crossing constraints, with no positive reward shaping - is sufficient to produce substantively aligned behaviour, and that it generalises further than RLHF along several axes.
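The defining property of the objective - no positive term at all, only penalties - fits in a few lines. A minimal sketch, assuming per-constraint penalty weights; the function name, the violation labels, and the default weight of 1.0 are illustrative assumptions, not the paper's setup.

```python
def penalty_only_reward(violations: list, weights: dict = None) -> float:
    """Penalty-only objective: 0.0 when no constraint is crossed,
    a weighted negative term per violation, and no positive
    shaping term anywhere. Labels and weights are illustrative.
    """
    weights = weights or {}
    return -sum(weights.get(v, 1.0) for v in violations)

# Clean behaviour earns nothing; it merely avoids loss.
assert penalty_only_reward([]) == 0.0
assert penalty_only_reward(["pii_leak"], {"pii_leak": 5.0}) == -5.0
```

The design choice is that the optimum is any policy that never crosses a constraint, with nothing pulling the agent toward a particular preferred behaviour - which is the sense in which no preference data is needed.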
Full writeups, methodology, and additional reading at sentra.app/research.
Long-form pieces from Nano Thoughts by @ashwingop on the work behind the papers and products.
The winners of the next decade won't just build better AI tools.
The AI/tech community seems to have collectively realized what biology knew all along: memory is a graph, not a database.
Sentra's Northstar. What it means for a company to develop an emergent mind of its own - and what infrastructure has to exist for that to happen.
Only 4 out of 128 dimensions in the KV cache carry meaningful signal.
Full archive at nanothoughts.substack.com.
We don't run a newsletter. The list is for ship announcements only - when a paper drops, when a product opens access, when something we're working on makes the leap. That's the entire promise. No spam, no drip campaign.
For everything else - hello@sentra.build.