Independent AI research lab (Sweden) focused on epistemic AI architecture, mechanistic interpretability, and trustworthy AI systems.
AI systems should represent what they know, what they do not know, and when they should stop.
- Epistemic AI architecture: uncertainty representation and decision boundaries
- Mechanistic interpretability: internal geometry, state-candidate misalignment, and intervention boundaries
- Trust verification infrastructure: auditability and uncertainty-gated inference
- Governance alignment: methods compatible with regulated and safety-critical contexts
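As a concrete illustration of uncertainty-gated inference, the sketch below shows one common pattern: gate a prediction on the entropy of the model's predictive distribution and abstain when uncertainty is too high. This is a minimal, hypothetical example (the function names and threshold are ours, not the lab's implementation):

```python
import math

def predictive_entropy(probs):
    """Shannon entropy (in nats) of a predictive distribution."""
    return -sum(p * math.log(p) for p in probs if p > 0.0)

def gated_predict(probs, max_entropy=0.5):
    """Return the argmax label only when entropy is below the gate;
    otherwise return None, i.e. the system stops rather than guessing."""
    if predictive_entropy(probs) > max_entropy:
        return None
    return max(range(len(probs)), key=lambda i: probs[i])

# A confident distribution passes the gate; a near-uniform one abstains.
confident = gated_predict([0.95, 0.03, 0.02])  # returns 0
uncertain = gated_predict([0.40, 0.35, 0.25])  # returns None
```

The gate threshold is a design choice: in regulated or safety-critical contexts it would be calibrated against audit requirements rather than fixed by hand.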
- Repository: Mechanistic-Interpretability
- Public entry point: GitHub Pages front door
- Current evidence level: supported in the active GPT-2 Small setup
- Claim boundary: cross-model generalization is not yet established
- Research index: research-index
- Publications: Papers
- Mechanistic findings surface: MI findings README
- Reproducibility guide: MI REPRODUCIBILITY.md
A temporary support path is available while payment pages are being finalized.
We are open to:
- research collaboration
- compute / cloud credit support
- institutional dialogue
- aligned partnerships for trustworthy AI and interpretability research
Contact: bjorn@base76.se
Research-first. Evidence-labeled. Claim-boundary explicit.