Developer impact analytics for GitHub organizations. Glooker pulls commit history, runs LLM-based analysis on each commit, and produces ranked developer reports with metrics like code complexity, PR discipline, AI-assisted coding percentage, and overall impact scores.
```bash
git clone https://github.com/Smartling/glooker.git
cd glooker
npm install
cp .env.example .env.local
```

Edit `.env.local` with your GitHub token and LLM API key, then:

```bash
npm run dev
# Open http://localhost:3000
```

That's it. Glooker uses SQLite by default — no database setup needed.
- Jira integration — optionally tracks resolved Jira issues per developer, auto-discovers GitHub→Jira user mappings via commit emails
- Full commit coverage — uses GitHub's commit search API to capture all commits, not just PR-linked ones
- LLM-powered analysis — each commit is analyzed for complexity (1-10), type (feature/bug/refactor/etc), risk level, and whether it appears AI-generated
- AI detection — three confirmation layers (`Co-Authored-By` trailers, PR body patterns such as "Generated with Claude Code", and branch commit trailer scanning for merge commits), plus LLM heuristic analysis
- PR discipline tracking — shows what percentage of each developer's commits went through pull requests
- Developer detail page — click any developer to see percentile rankings (vs avg/p50/p95), type breakdown, active repos, and full commit history with links to GitHub
- Progressive UI — developer table populates during report generation as each member completes, not after everything finishes
- Resumable reports — interrupted reports can be resumed, skipping already-analyzed commits (commit analyses save to DB inline)
- Scheduled reports — configure recurring reports on a cron schedule
- Export — CSV download, Google Sheets, or Download PDF (print-optimized layout)
- Multiple LLM providers — OpenAI, Anthropic, AWS Bedrock, any OpenAI-compatible endpoint (Ollama, vLLM, Azure), or Smartling AI Proxy
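The confirmed-trailer layer of AI detection can be sketched roughly like this. This is a hedged illustration — the pattern list and function name are hypothetical, not Glooker's actual code:

```typescript
// Illustrative only: scan commit-message trailer lines for known AI
// co-author signatures. The real detection also checks PR bodies and
// branch commits, per the feature list above.
const AI_COAUTHOR_PATTERNS: RegExp[] = [
  /^co-authored-by:.*\b(claude|copilot|cursor|chatgpt)\b/i,
];

function hasAiCoAuthorTrailer(commitMessage: string): boolean {
  return commitMessage
    .split("\n")
    .some((line) => AI_COAUTHOR_PATTERNS.some((re) => re.test(line.trim())));
}
```

Trailer matches count as "confirmed" AI assistance; the LLM heuristic layer only ever produces "suspected".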
| Metric | Description |
|---|---|
| Commits | Total commits in period (all, not just PR-linked) |
| PRs | Merged pull requests authored |
| Lines +/- | Lines added and removed |
| Complexity | Mean LLM-assessed complexity (1-10) |
| PR% | Percentage of commits that went through a PR |
| AI% | Percentage of commits with AI assistance (confirmed + suspected) |
| Jira Issues | Resolved Jira tickets in period (optional, requires Jira config) |
| Impact | Weighted score: complexity (3.5) + PRs (3.0) + volume (2.0) + PR discipline (1.1) |
| Types | Commit categorization: feature, bug, refactor, infra, docs, test |
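The Impact column is a weighted sum of the other metrics. A minimal sketch of the arithmetic, assuming each input is normalized to 0–1 — Glooker's actual normalization may differ:

```typescript
// Hypothetical sketch of the weighted impact score. Inputs are assumed
// normalized to 0..1; the real rollup lives in the aggregator.
interface DevMetrics {
  complexityNorm: number; // mean LLM complexity / 10
  prsNorm: number;        // merged PRs relative to team max
  volumeNorm: number;     // lines changed relative to team max
  prDiscipline: number;   // fraction of commits that went through a PR
}

function impactScore(m: DevMetrics): number {
  return (
    3.5 * m.complexityNorm +
    3.0 * m.prsNorm +
    2.0 * m.volumeNorm +
    1.1 * m.prDiscipline
  );
}
```

With this weighting, a developer maxing every input would score 9.6, and complexity dominates: it counts more than three times as much as PR discipline.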
Create a fine-grained personal access token:
- Resource owner: your org
- Repository access: All repositories
- Repository permissions: Contents (read), Pull requests (read), Metadata (read)
- Organization permissions: Members (read)
Set `LLM_PROVIDER` in `.env.local`.

OpenAI:

```
LLM_PROVIDER=openai
LLM_API_KEY=sk-...
LLM_MODEL=gpt-4o
```

Anthropic:

```
LLM_PROVIDER=anthropic
LLM_API_KEY=sk-ant-...
LLM_MODEL=claude-sonnet-4-20250514
```

OpenAI-compatible endpoint (Ollama, vLLM, Azure):

```
LLM_PROVIDER=openai-compatible
LLM_BASE_URL=http://localhost:11434/v1
LLM_MODEL=llama3
LLM_API_KEY=not-needed
```

Apple Silicon (M1/M2/M3/M4) GPU acceleration: If you install Ollama via an x86 Homebrew (`/usr/local/bin/brew`), it will run under Rosetta and cannot access the Metal GPU — inference will be CPU-only and much slower. Install Ollama using the official installer or an ARM-native Homebrew (`/opt/homebrew/bin/brew`) to get full GPU acceleration. Verify with `ollama ps` — the PROCESSOR column should show `100% GPU`, not `100% CPU`.
For AWS users with Bedrock model access enabled:

```
LLM_PROVIDER=bedrock
# Uses the default AWS credential provider chain (AWS_PROFILE, IAM role, etc.)
# AWS_PROFILE=your-profile
# AWS_REGION=us-east-1
LLM_MODEL=us.anthropic.claude-sonnet-4-6
```

For SSO authentication, run your dev server through `aws-okta` or `aws sso login`:

```bash
aws-okta exec your-profile -- npm run dev
# or
aws sso login --profile your-profile && AWS_PROFILE=your-profile npm run dev
```

For Smartling customers with AI Proxy access:
```
LLM_PROVIDER=smartling
SMARTLING_BASE_URL=https://api.smartling.com
SMARTLING_ACCOUNT_UID=your_account_uid
SMARTLING_USER_IDENTIFIER=your_user_identifier
SMARTLING_USER_SECRET=your_user_secret
LLM_MODEL=anthropic/claude-sonnet-4-20250514
```

Track resolved Jira issues per developer. Requires a Jira Cloud or Server instance:

```
JIRA_ENABLED=true
JIRA_HOST=mycompany.atlassian.net
JIRA_USERNAME=your-email@company.com
JIRA_API_TOKEN=your-jira-api-token
JIRA_API_VERSION=3           # 3 for Cloud, 2 for Server
# JIRA_PROJECTS=PROJ1,PROJ2  # optional filter, default: all projects
```

GitHub→Jira user mappings are auto-discovered via commit author emails and cached in the `user_mappings` table. Editable in Settings > App Settings.
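Auto-discovery can be approximated with Jira's user-search endpoint. A hedged sketch — the real `mapper.ts` may work differently, and the function names here are hypothetical:

```typescript
// Build the Jira Cloud user-search URL for a commit author email.
function jiraUserSearchUrl(host: string, email: string): string {
  return `https://${host}/rest/api/3/user/search?query=${encodeURIComponent(email)}`;
}

// Look up the Jira accountId for an email; returns null when not found.
// Env names follow the config above; error handling is intentionally minimal.
async function findJiraAccountId(email: string): Promise<string | null> {
  const auth = Buffer.from(
    `${process.env.JIRA_USERNAME}:${process.env.JIRA_API_TOKEN}`
  ).toString("base64");
  const res = await fetch(jiraUserSearchUrl(process.env.JIRA_HOST ?? "", email), {
    headers: { Authorization: `Basic ${auth}`, Accept: "application/json" },
  });
  if (!res.ok) return null;
  const users = (await res.json()) as { accountId: string }[];
  return users[0]?.accountId ?? null;
}
```

A discovered `accountId` would then be persisted to `user_mappings` so the lookup runs once per developer, not once per report.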
SQLite (default) — zero config, data stored in `./glooker.db`:

```
# No config needed — this is the default
```

MySQL — for teams or production:

```
DB_TYPE=mysql
DB_HOST=localhost
DB_USER=root
DB_PASSWORD=
DB_NAME=glooker
```

Then initialize: `mysql -u root < schema.sql`
LLM prompts are stored as template files in the `prompts/` directory. You can customize prompts by editing these files or pointing to a different directory:

```
PROMPTS_DIR=./my-custom-prompts
```

Template files use `{{PLACEHOLDER}}` syntax for dynamic values injected at runtime.
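The `{{PLACEHOLDER}}` substitution is a few lines of string replacement. A hedged sketch — the function name is hypothetical and the real renderer may handle edge cases differently:

```typescript
// Replace {{KEY}} tokens with values; unknown keys become empty strings.
function renderPrompt(template: string, values: Record<string, string>): string {
  return template.replace(/\{\{(\w+)\}\}/g, (_match, key: string) => values[key] ?? "");
}
```

For example, a template line `Analyze this diff:\n{{DIFF}}` rendered with `{ DIFF: "+foo" }` yields the diff inlined at runtime.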
Each LLM-powered service has configurable temperature and max_tokens settings:
| Setting | Default | Description |
|---|---|---|
| `ANALYZER_TEMPERATURE` | 0 | Commit analysis (deterministic) |
| `ANALYZER_MAX_TOKENS` | 256 | Commit analysis response limit |
| `CHAT_AGENT_TEMPERATURE` | 0.3 | Chat assistant |
| `CHAT_AGENT_MAX_TOKENS` | 1500 | Chat assistant response limit |
| `CHAT_AGENT_MAX_ITERATIONS` | 5 | Max tool-call rounds per chat |
| `SUMMARY_TEMPERATURE` | 0.7 | Developer summaries |
| `SUMMARY_MAX_TOKENS` | 512 | Developer summary response limit |
| `HIGHLIGHTS_TEMPERATURE` | 0.5 | Report comparison highlights |
| `HIGHLIGHTS_MAX_TOKENS` | 512 | Highlights response limit |
All settings are optional — defaults match the original hardcoded values.
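Reading an optional numeric setting with a fallback is straightforward. A sketch — the helper and variable names here are hypothetical:

```typescript
// Parse a numeric setting from the environment, falling back to the
// hardcoded default when the variable is unset, empty, or not a number.
function envNumber(name: string, fallback: number): number {
  const raw = process.env[name];
  const parsed = raw ? Number(raw) : NaN;
  return Number.isFinite(parsed) ? parsed : fallback;
}

const analyzerTemperature = envNumber("ANALYZER_TEMPERATURE", 0);
const chatAgentMaxTokens = envNumber("CHAT_AGENT_MAX_TOKENS", 1500);
```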
After editing prompt templates: run `npm test -- -u` to update Jest snapshots, then review the diff to confirm the change is intentional. Snapshots assert the exact prompt text to prevent accidental regressions.
```bash
# Set your env vars
export GITHUB_TOKEN=github_pat_...
export LLM_API_KEY=sk-...

# Build and start (MySQL + app)
docker compose up --build -d

# Open http://localhost:3000
```

Note: The MySQL container exposes port 3307 by default (to avoid conflicts with a local MySQL on 3306). Edit `docker-compose.yml` to change this.
```
Browser (Next.js) → API Routes → GitHub API
                               → LLM Provider (OpenAI / Anthropic / Bedrock / Smartling / custom)
                               → Jira API (optional)
                               → SQLite or MySQL
```
- List org members via GitHub API
- For each member (pipelined — LLM starts while more members are still being fetched):
  - Search all commits and merged PRs in the date range
  - Fetch diffs for each commit
  - Detect AI co-authorship from commit trailers, PR body, and branch commits
  - Queue LLM analysis (concurrency-limited) for complexity, type, risk, and AI-generation detection
  - Save each commit analysis to DB immediately (enables resume)
  - When all of a member's commits are analyzed, aggregate and save developer stats to DB (enables progressive UI)
  - If Jira enabled: resolve each member's Jira account (via commit emails → `user_mappings`), fetch resolved issues via JQL, save to `jira_issues`
- Final cross-member aggregation overwrites with canonical stats
- Display ranked table with export options
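The concurrency-limited LLM queue in the pipeline above can be sketched with a tiny limiter. Hedged — the project may use a library such as `p-limit`; this only shows the idea:

```typescript
// Run at most `max` async tasks at once; extra tasks wait in a FIFO queue.
function createLimiter(max: number) {
  let active = 0;
  const queue: Array<() => void> = [];

  const next = () => {
    active--;
    queue.shift()?.(); // start the oldest waiting task, if any
  };

  return function limit<T>(task: () => Promise<T>): Promise<T> {
    return new Promise<T>((resolve, reject) => {
      const run = () => {
        active++;
        task().then(resolve, reject).finally(next);
      };
      if (active < max) run();
      else queue.push(run);
    });
  };
}

// Usage sketch (names hypothetical): at most 4 concurrent LLM calls.
// const limit = createLimiter(4);
// await Promise.all(commits.map((c) => limit(() => analyzeCommit(c))));
```

Bounding concurrency this way keeps the pipeline inside provider rate limits while still overlapping LLM analysis with GitHub fetching.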
```
src/lib/
├── llm-provider.ts        # LLM provider factory (OpenAI SDK for all providers)
├── bedrock-adapter.ts     # AWS Bedrock adapter (only loaded when provider=bedrock)
├── smartling-auth.ts      # Smartling OAuth (only loaded when provider=smartling)
├── github.ts              # GitHub API: members, commit/PR search, diffs, AI detection
├── analyzer.ts            # LLM commit analysis prompt + response parsing
├── aggregator.ts          # Per-developer metric rollup
├── report-runner.ts       # Pipeline orchestrator (pipelined fetch → analyze → save)
├── jira/
│   ├── client.ts          # Jira REST API client (direct fetch, no external deps)
│   ├── mapper.ts          # GitHub→Jira user mapping (auto-discover + persist)
│   └── index.ts           # Re-exports
├── progress-store.ts      # In-memory progress tracking (developer-based)
├── schedule-manager.ts    # Cron-based scheduled report execution
├── schedule-validation.ts # Schedule input validation
└── db/
    ├── index.ts           # DB abstraction (selects SQLite or MySQL)
    ├── sqlite.ts          # SQLite implementation (default)
    └── mysql.ts           # MySQL implementation
```
```
src/app/
├── page.tsx                             # Main dashboard (report list, generation, developer table)
├── report/[id]/dev/[login]/
│   └── page.tsx                         # Developer detail page (percentiles, commits)
└── api/
    ├── report/[id]/dev/[login]/route.ts # Developer detail API
    ├── report/[id]/commits/route.ts     # Commits per developer API
    └── ...                              # Other report & schedule endpoints
```
```bash
npm run dev           # Start dev server
npm run build         # Production build
npm test              # Run all tests
npm run test:watch    # Run tests in watch mode
npm run test:coverage # Run tests with coverage report
```

Tests live in `src/lib/__tests__/` (unit and integration). CI runs tests automatically on every pull request via GitHub Actions.
If you see `Cannot find module './638.js'`, run `rm -rf .next` and restart.
See CONTRIBUTING.md for contribution guidelines and CLAUDE.md for AI-assisted development context.