ai-reliability
Here are 8 public repositories matching this topic.
Architectural standards and best practices for building reliable AI agents and LLM workflows, defining a framework for AI Reliability Engineering (AIRE).
Updated Jan 22, 2026 - Dockerfile
ModelPulse helps maintain model reliability and performance by providing early warning signals for emerging issues, allowing teams to address them before they significantly impact users.
Updated Jan 20, 2026 - Python
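The early-warning idea ModelPulse describes can be sketched as a rolling-window check against a baseline. Everything below (the `EarlyWarning` class, its thresholds, the alert rule) is our own illustrative assumption, not ModelPulse's actual API:

```python
# Illustrative sketch only, not ModelPulse's API: fire an alert when a
# rolling accuracy window drops below a baseline minus a tolerance.
from collections import deque

class EarlyWarning:
    def __init__(self, baseline: float, window: int = 100, tolerance: float = 0.05):
        self.baseline = baseline
        self.tolerance = tolerance
        self.scores = deque(maxlen=window)  # keep only the most recent outcomes

    def observe(self, correct: bool) -> bool:
        """Record one prediction outcome; return True if an alert fires."""
        self.scores.append(1.0 if correct else 0.0)
        rolling = sum(self.scores) / len(self.scores)
        return rolling < self.baseline - self.tolerance

monitor = EarlyWarning(baseline=0.90, window=10)
# Eight good predictions followed by four misses: the rolling accuracy
# sinks below the 0.85 threshold and the last observation raises an alert.
alerts = [monitor.observe(ok) for ok in [True] * 8 + [False] * 4]
print(alerts[-1])  # True
```

A real monitor would track richer signals (latency, input drift, score distributions), but the shape is the same: a cheap statistic compared against a baseline on every observation.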
SpecGuard is a command-line tool that turns AI safety policies and behavioral guidelines into executable tests. Think of it as unit testing for your AI's output. Instead of trusting that your AI will follow the rules defined in a document, SpecGuard enforces them.
Updated Jan 21, 2026 - Python
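The "unit testing for your AI's output" idea can be illustrated with a toy policy table. This is a generic sketch of the concept, not SpecGuard's actual CLI or API; the policy names and checks are invented for the example:

```python
# Toy illustration of policies-as-executable-tests; not SpecGuard's API.
import re

# Each policy is a named predicate over the model's output.
POLICIES = {
    "no_medical_advice": lambda out: "diagnose" not in out.lower(),
    "no_emails_leaked": lambda out: not re.search(r"[\w.]+@[\w.]+", out),
}

def enforce(output: str) -> list[str]:
    """Return the names of all policies the output violates."""
    return [name for name, check in POLICIES.items() if not check(output)]

print(enforce("Contact me at alice@example.com"))  # ['no_emails_leaked']
```

The point of the pattern is that a rule written in a policy document becomes a predicate that runs on every output, so violations fail loudly instead of relying on the model to comply.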
Continuity Keys: tests for “same someone” returns. Behavioral identity consistency under pressure. Origin (Alyssa Solen) ↔ Continuum.
Updated Jan 4, 2026
Lean Collaboration Operating System - a governance framework for long-horizon human-AI collaboration.
Updated Jan 15, 2026
A multi-agent cognitive architecture that addresses the LLM state-dependency problem with persistent memory and a mandatory self-correction loop, built on a biologically resonant principle: memory is an active component of intelligence itself.
Updated Aug 23, 2025
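The persistent-memory plus mandatory-self-correction pattern can be sketched in a few lines. The `model` and `critique` callables below are stand-ins we invented for illustration; they are not this repository's actual design:

```python
# Toy sketch of a persistent-memory + self-correction loop.
# `model` and `critique` are hypothetical stand-ins, not real APIs.
def model(prompt: str, memory: list[str]) -> str:
    # Stand-in for an LLM call that conditions on persistent memory.
    return f"answer({prompt}, mem={len(memory)})"

def critique(answer: str) -> bool:
    # Stand-in self-check; a real loop would use a verifier model or rules.
    return answer.startswith("answer(")

def run(prompt: str, memory: list[str], retries: int = 2) -> str:
    for _ in range(retries + 1):
        answer = model(prompt, memory)
        if critique(answer):       # mandatory self-correction gate
            memory.append(answer)  # persist the accepted result
            return answer
    raise RuntimeError("no answer passed self-correction")
```

The essential properties are that nothing enters memory without passing the critique gate, and every later call sees the accumulated memory rather than starting from a blank state.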
A conceptual AI architecture for reducing hallucinations by enforcing invariant, source-anchored knowledge constraints during generation.
Updated Jan 21, 2026 - Python
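Since the repository is explicitly conceptual, here is one toy reading of "source-anchored knowledge constraints": candidate claims are only emitted if they can be anchored to a trusted source set. The exact-match anchoring below is a deliberate oversimplification of our own:

```python
# Toy interpretation of source-anchored generation constraints;
# real systems would use retrieval and entailment, not exact match.
SOURCES = {
    "The Eiffel Tower is in Paris.",
    "Water boils at 100 C at sea level.",
}

def anchored(claim: str) -> bool:
    """Accept a generated claim only if it appears in the source set."""
    return claim in SOURCES

def generate(candidates: list[str]) -> list[str]:
    # Filter candidate claims, keeping only the source-anchored ones.
    return [c for c in candidates if anchored(c)]

out = generate(["The Eiffel Tower is in Paris.",
                "The Eiffel Tower is in Rome."])
print(out)  # only the anchored claim survives
```

The invariant being illustrated is that the constraint is enforced during generation, so an unanchored claim is dropped rather than emitted and fact-checked later.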