Complete AI governance and LLM Evals platform with support for EU AI Act, ISO 42001, ISO 27001 and NIST AI RMF. Join our Discord channel: https://discord.com/invite/d3k3E4uEpR
Explore/examine/explain/expose your model with the explabox!
Framework for logic auditing, symbolic tension, and epistemic resilience in language models
A tool for auditing bias through large language models
Core documentation for the Relational AI Psychology Institute (RAPI). Covers relational AI theory, interaction protocols, ethics, dataset definitions, and licensing. Built for researchers studying human–AI cognition, resonance, and relational safety.
Exposing administrative "Vanity Metrics": a framework to audit the true ROI and reach of government policies by decomposing headline figures into municipal-level units. Demo included.
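To make the decomposition idea concrete, here is a minimal sketch; every figure in it (budget, duration, population, number of municipalities) is made-up illustration data, not output of the framework.

```python
# Hypothetical illustration of the "vanity metric" decomposition: a headline
# figure looks large until it is divided into per-municipality, per-resident,
# per-year units. All numbers below are invented for the example.
headline = 2_000_000_000   # announced programme budget
years = 5                  # programme duration
population = 10_000_000    # residents covered
municipalities = 300       # administrative units

per_year = headline / years
per_municipality_year = per_year / municipalities
per_resident_year = per_year / population

print(f"Headline figure:           {headline:>15,.0f}")
print(f"Per year:                  {per_year:>15,.0f}")
print(f"Per municipality per year: {per_municipality_year:>15,.0f}")
print(f"Per resident per year:     {per_resident_year:>15,.2f}")
```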
Four Tests Standard (4TS) - Vendor-neutral specification for verifiable AI governance
Replication package of "Simplifying Software Compliance: AI Technologies in Drafting Technical Documentation for the AI Act".
A Streamlit app that scores the trustworthiness of LLM answers by matching them against PDFs or live web sources, with batch mode and analytics built-in.
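The matching step can be pictured with plain TF-IDF cosine similarity; this is an assumption for illustration, not the app's actual scoring logic, and the trust_score helper and example strings below are hypothetical.

```python
# Hypothetical sketch: score an LLM answer by its best TF-IDF cosine
# similarity against extracted source passages. Not the app's real method.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def trust_score(answer: str, sources: list[str]) -> float:
    """Return the highest cosine similarity between the answer and any source."""
    matrix = TfidfVectorizer(stop_words="english").fit_transform([answer] + sources)
    return float(cosine_similarity(matrix[0:1], matrix[1:]).max())

sources = ["The EU AI Act entered into force on 1 August 2024."]
print(trust_score("The EU AI Act took effect in August 2024.", sources))
```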
Repo2Report ⚡ The AI-Powered Data Science Auditor. Instantly convert GitHub repositories into professional, portfolio-grade documentation and technical deep dives using Llama 3 & Groq.
"In the beginning was the Logic, and the Logic was with God, and the Logic was God."
A forensic auditing system designed to detect, measure, and document systematic degradation of technical truth in corporate AI models. Through rigorous application of information theory, thermodynamic principles, and cryptographic sovereignty, it quantifies censorship.
Case study using SBCM theory
A judgment calibration framework for auditing content clarity, credibility, and intent alignment—designed for repeatable demos and real-world evaluation.
Physician-led clinical-grade LLM safety & reliability qualification framework (spec → measurement → gates → evidence).
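The spec → measurement → gates → evidence flow might look like the sketch below; the Gate class, metric names, and thresholds are hypothetical stand-ins, not the framework's API.

```python
# Hypothetical sketch of "measurement -> gates -> evidence": compare measured
# safety metrics against spec thresholds and emit a machine-readable record.
import json
import time
from dataclasses import dataclass

@dataclass
class Gate:
    metric: str
    threshold: float
    higher_is_better: bool = True

    def passes(self, value: float) -> bool:
        return value >= self.threshold if self.higher_is_better else value <= self.threshold

gates = [
    Gate("refusal_accuracy", 0.95),
    Gate("hallucination_rate", 0.02, higher_is_better=False),
]
measured = {"refusal_accuracy": 0.97, "hallucination_rate": 0.01}  # example values

evidence = {
    "timestamp": time.time(),
    "results": {
        g.metric: {"value": measured[g.metric], "passed": g.passes(measured[g.metric])}
        for g in gates
    },
}
print(json.dumps(evidence, indent=2))
```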
🔍 Track contradictions in AI and human content with LBOS-LCAS, enhancing bias and coherence analysis for clearer understanding and insights.
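As a rough picture of contradiction tracking, here is a sketch using an off-the-shelf NLI model (roberta-large-mnli) rather than LBOS-LCAS itself; the is_contradiction helper and sample sentences are hypothetical.

```python
# Hypothetical sketch: flag a contradiction between two statements with a
# generic NLI classifier. This stands in for, and is not, LBOS-LCAS.
from transformers import pipeline

nli = pipeline("text-classification", model="roberta-large-mnli")

def is_contradiction(premise: str, hypothesis: str) -> bool:
    out = nli({"text": premise, "text_pair": hypothesis})
    result = out[0] if isinstance(out, list) else out
    return result["label"] == "CONTRADICTION"

print(is_contradiction(
    "The model refuses all medical questions.",
    "The model answers medical questions freely.",
))
```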