Sentra

About

The intelligence is in the architecture.

Sentra is an independent research team studying how memory, reinforcement, and architectural primitives shape what an AI system can actually do. Our thesis: a 50× smaller model in the right architecture can match a frontier model in the wrong one. We publish papers, build products on top of them, and keep the two close together.

Sambarta Ray Barman · Andrey Starenky · Sophia Bodnar · Nikhil Narasimhan · Ashwin Gopinath

Products

What we've shipped.

Lattice

Semantic memory filesystem.

Drop a folder. It becomes a brain. Lattice indexes local files and content from your cloud connectors (Drive, Notion, Dropbox, Slack, GitHub) into a unified, ACL-faithful knowledge graph. Every entity is a real directory; every relationship a symlink; every connector writes back into the same folder. Built on the Semantic Memory Filesystem research line.
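A minimal sketch of that layout, assuming nothing about Lattice's actual on-disk format; the entity names, paths, and relationship label below are hypothetical and only illustrate the "entity = directory, relationship = symlink" idea.

    # Illustrative sketch only: a toy entity/relationship layout on a real
    # filesystem. Names and paths are hypothetical, not Lattice's format.
    from pathlib import Path

    root = Path("knowledge")

    # Every entity is a real directory.
    acme = root / "entities" / "acme-corp"
    contract = root / "entities" / "acme-contract-2026"
    for entity in (acme, contract):
        entity.mkdir(parents=True, exist_ok=True)

    # Every relationship is a symlink between entity directories.
    link = acme / "has-document" / "acme-contract-2026"
    link.parent.mkdir(parents=True, exist_ok=True)
    if not link.exists():
        link.symlink_to(contract.resolve(), target_is_directory=True)

    # A connector writes new material back into the same folders, so
    # ordinary tools (ls, grep, rsync, filesystem ACLs) still apply.
    (contract / "source.txt").write_text("synced from a connector\n")

Because the graph is just directories and symlinks, existing permission models and backup tooling work on it unchanged; that is the point of keeping the substrate deterministic.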

One email when Lattice ships hosted access.

Research

Four acts.

Memory failure physics. Deterministic substrates. Optimization & alignment. World models. Each paper extends the thesis that structured negative feedback and careful architectural design matter more than model size.

  • The Geometry of Forgetting: How High-Dimensional Embeddings Reproduce Human Memory Phenomena

    S. Ray Barman · A. Starenky · S. Bodnar · N. Narasimhan · A. Gopinath · March 2026 · arXiv preprint

    Forgetting and false recall emerge from the geometry itself when memories are stored as vectors. The system exhibits power-law forgetting matching human cognitive patterns (exponent b ≈ 0.460 vs. human b ≈ 0.5), without any biologically motivated mechanism - the failure modes are intrinsic to high-dimensional representation. (A small illustrative sketch of the forgetting curve follows this list.)

  • The Price of Meaning: Impossibility Theorems for Semantic Memory Systems

    S. Ray Barman · A. Starenky · S. Bodnar · N. Narasimhan · A. Gopinath · March 2026 · arXiv preprint

    Establishes the No-Escape Theorem: interference-driven forgetting and false recall cannot be eliminated within semantic memory systems. We formalise the trade-offs between coverage, fidelity, and capacity, and show that no choice of embedding model or retrieval strategy escapes them.

  • Semantic Memory Filesystem: Deterministic Organizational Memory Through Filesystem Primitives

    S. Ray Barman · A. Starenky · S. Bodnar · N. Narasimhan · A. Gopinath · April 2026 · arXiv preprint

    If high-dimensional embeddings are intrinsically lossy (Act I), the substrate has to do work the embedding can't. We propose the Semantic Memory Filesystem - every entity a real directory, every relationship a symlink, every index a derived view. Deterministic, ACL-faithful, and the substrate Lattice runs on.

  • Operational Reinforcement: Monitor MDPs for Structured Failure Feedback

    S. Ray Barman · A. Starenky · S. Bodnar · N. Narasimhan · A. Gopinath · 2026 · under review

    A formalism for using runtime monitor signals as the reinforcement substrate. Failures aren't noise to be smoothed away - they're the densest available supervision signal. We define the Monitor MDP and show how it changes what an agent can learn from a single deployment.

  • Avoidance Learning: Substantive Alignment Through Pure Negative Feedback

    S. Ray Barman · A. Starenky · S. Bodnar · N. Narasimhan · A. Gopinath · 2026 · under review

    Alignment without preference data. We show that pure negative feedback - penalties for crossing constraints, with no positive reward shaping - is sufficient to produce substantively aligned behaviour, and that it generalises further than RLHF along several axes. (A generic sketch of penalty-only feedback also follows this list.)
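For readers skimming Act I, a minimal sketch of what a power-law forgetting curve with the quoted exponents looks like. The exponents are the ones from the abstract above; the retention scale and time points are illustrative, not data from the paper.

    # Illustrative only: power-law retention R(t) = R0 * t**(-b).
    # b ~= 0.460 is the reported system exponent, b ~= 0.5 the human
    # reference value. R0 and the time points are made up for display.
    def retention(t, b, r0=1.0):
        return r0 * t ** (-b)

    for t in (1, 10, 100, 1000):
        system = retention(t, b=0.460)
        human = retention(t, b=0.5)
        print(f"t={t:>5}  system={system:.3f}  human={human:.3f}")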
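And for the avoidance-learning result, a generic sketch of what penalty-only feedback means in code. The constraint names and step format are hypothetical; this is the shape of the reward signal, not the paper's formalism.

    # Generic sketch of penalty-only feedback: reward is 0 unless a
    # constraint is crossed, in which case a fixed penalty applies.
    # Constraint checks and the step dict are hypothetical examples.
    def avoidance_reward(step, constraints, penalty=-1.0):
        violations = [name for name, hit in constraints(step).items() if hit]
        return penalty * len(violations), violations

    def toy_constraints(step):
        return {
            "left_safe_region": step.get("position", 0) > 10,
            "exceeded_budget": step.get("spend", 0) > 100,
        }

    reward, hit = avoidance_reward({"position": 12, "spend": 40}, toy_constraints)
    print(reward, hit)  # -1.0 ['left_safe_region']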

Full writeups, methodology, and additional reading at sentra.app/research.

Stay in the loop

One email when something ships.

We don't run a newsletter. The list is for ship announcements only - when a paper drops, when a product opens access, when something we're working on makes the leap. That's the entire promise. No spam, no drip campaign.

For everything else - hello@sentra.build.