
Base76 Research Lab

Independent AI research lab (Sweden) focused on epistemic AI architecture, mechanistic interpretability, and trustworthy AI systems.

AI systems should represent what they know, what they do not know, and when they should stop.

Links: Web · ORCID · MI Front Door · Latest MI Release

What We Study

  • Epistemic AI architecture: uncertainty representation and decision boundaries
  • Mechanistic interpretability: internal geometry, state-candidate misalignment, and intervention boundaries
  • Trust verification infrastructure: auditability and uncertainty-gated inference
  • Governance alignment: methods compatible with regulated and safety-critical contexts
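The "uncertainty-gated inference" idea above can be sketched in a few lines: a model's output distribution passes through a gate that only releases a prediction when the distribution is confident enough, and abstains otherwise. This is an illustrative toy, not the lab's actual implementation; the entropy threshold and function names are hypothetical.

```python
# Illustrative sketch of uncertainty-gated inference (hypothetical API):
# the gate returns a prediction only when the output distribution clears
# a confidence threshold, otherwise it abstains and defers.
import math

def entropy(probs):
    """Shannon entropy of a probability distribution, in nats."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def uncertainty_gate(probs, max_entropy=0.5):
    """Return the argmax label when the distribution is confident enough,
    otherwise abstain (None) so a fallback or human can take over."""
    if entropy(probs) > max_entropy:
        return None  # abstain: the system "should stop"
    return max(range(len(probs)), key=lambda i: probs[i])

print(uncertainty_gate([0.97, 0.02, 0.01]))  # confident -> 0
print(uncertainty_gate([0.40, 0.35, 0.25]))  # uncertain -> None
```

The design choice is that abstention is an explicit output value rather than a low-confidence guess, which is what makes the decision boundary auditable.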

Current Focus

  • Mechanistic Interpretability
  • Epistemic Systems
  • Research Entry Points

Open Collaboration And Support

A temporary support path is available while payment pages are being finalized.

We are open to:

  • research collaboration
  • compute / cloud credit support
  • institutional dialogue
  • aligned partnerships for trustworthy AI and interpretability research

Contact: bjorn@base76.se


Research-first. Evidence-labeled. Claim-boundary explicit.

Pinned

  1. Mechanistic-Interpretability (Public, Python)

     Mechanistic interpretability research: residual-state geometry, sparse autoencoders, and hallucination-prone regime analysis in transformer models (GPT-2 Small).
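The sparse-autoencoder approach mentioned in the pinned repository can be sketched as follows: an overcomplete ReLU encoder maps a residual-stream activation to sparse feature activations, a linear decoder reconstructs it, and training balances reconstruction error against an L1 sparsity penalty. This is a minimal forward-pass sketch with hypothetical dimensions and coefficients, not the repository's actual code.

```python
# Minimal sketch of a sparse autoencoder of the kind used to decompose
# transformer residual-stream activations (illustrative; dimensions and
# the sparsity coefficient are hypothetical).
import numpy as np

rng = np.random.default_rng(0)
d_model, d_hidden = 8, 32          # residual width, overcomplete dictionary

W_enc = rng.normal(0, 0.1, (d_model, d_hidden))
b_enc = np.zeros(d_hidden)
W_dec = rng.normal(0, 0.1, (d_hidden, d_model))
b_dec = np.zeros(d_model)

def sae_forward(x):
    """Encode with ReLU (sparse feature activations), then reconstruct."""
    f = np.maximum(x @ W_enc + b_enc, 0.0)   # sparse features
    x_hat = f @ W_dec + b_dec                # linear reconstruction
    return f, x_hat

x = rng.normal(size=d_model)                 # stand-in residual activation
f, x_hat = sae_forward(x)
recon = np.mean((x - x_hat) ** 2)            # reconstruction term
sparsity = np.mean(np.abs(f))                # L1 sparsity penalty
loss = recon + 1e-3 * sparsity               # training objective (sketch)
```

The overcomplete hidden layer (here 32 features for an 8-dimensional residual) is what lets individual features correspond to individual directions in activation space.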