A proactive, AI-powered multi-cluster Kubernetes dashboard that adapts to how you work.
Your clusters, your way - AI that learns how you work
KubeStellar Console (kc) is a web-based dashboard for managing multiple Kubernetes clusters. Unlike traditional dashboards that show static views, kc uses AI to observe how you work and automatically restructures itself to surface the most relevant information.
- Multi-cluster Overview: See all your clusters in one place - OpenShift, GKE, EKS, kind, or any Kubernetes distribution
- Personalized Dashboard: Answer a few questions during onboarding, and Console creates a dashboard tailored to your role
- Proactive AI: AI analyzes your behavior patterns and suggests card swaps when your focus changes
- Real-time Updates: WebSocket-powered live event streaming from all clusters
- Card Swap Mechanism: Dashboard cards auto-swap based on context, with snooze/expedite/cancel controls
- App-Centric View: Focus on applications, not just resources - see app health across all clusters
- Alert Notifications: Multi-channel alert delivery via Slack, Email, and webhooks with Grafana-style notification routing
When you first sign in with GitHub, Console asks 5-10 questions about your role and preferences:
- What's your primary role? (SRE, DevOps, Platform Engineer, Developer...)
- Which layer do you focus on? (Infrastructure, Platform, Application...)
- Do you use GitOps?
- Do you manage GPU workloads?
Based on your answers, Console generates an initial dashboard with relevant cards.
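As a rough illustration of how onboarding answers might map to an initial card set, here is a sketch in Go. The card names come from the card-type table later in this README, but the selection rules are hypothetical, not Console's actual logic:

```go
package main

import "fmt"

// OnboardingAnswers captures the kinds of questions asked at first sign-in.
type OnboardingAnswers struct {
	Role       string // e.g. "SRE", "DevOps", "Platform Engineer", "Developer"
	UsesGitOps bool
	HasGPUs    bool
}

// initialCards returns a starter card set for the generated dashboard.
// The rules here are illustrative only.
func initialCards(a OnboardingAnswers) []string {
	cards := []string{"Cluster Health", "Event Stream"} // shown to everyone
	switch a.Role {
	case "SRE":
		cards = append(cards, "Pod Issues", "Resource Capacity")
	case "Developer":
		cards = append(cards, "App Status", "Deployment Progress")
	default:
		cards = append(cards, "App Status")
	}
	if a.UsesGitOps {
		cards = append(cards, "GitOps Drift")
	}
	if a.HasGPUs {
		cards = append(cards, "Resource Capacity")
	}
	return cards
}

func main() {
	fmt.Println(initialCards(OnboardingAnswers{Role: "SRE", UsesGitOps: true}))
}
```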
Console tracks which cards you interact with most:
- Which cards you hover over and expand
- How long you focus on different information
- What actions you take
When Claude detects a shift in your focus, it suggests swapping dashboard cards.
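One simple way to turn interaction signals like these into a swap suggestion is per-card scoring. The sketch below is hypothetical; Console's real analysis is done by Claude, not by a fixed formula:

```go
package main

import "fmt"

// Interaction holds counters the dashboard could collect per card.
type Interaction struct {
	Hovers    int
	Expands   int
	FocusSecs int
	Actions   int
}

// score weights deeper engagement (actions, expands) above passive hovers.
func score(i Interaction) float64 {
	return float64(i.Hovers)*0.5 + float64(i.Expands)*2 +
		float64(i.FocusSecs)*0.1 + float64(i.Actions)*3
}

// suggestSwap proposes replacing the lowest-scoring visible card with the
// highest-scoring hidden one, when the gap is large enough to matter.
func suggestSwap(visible, hidden map[string]Interaction) (out, in string, ok bool) {
	low, high := "", ""
	for name, i := range visible {
		if low == "" || score(i) < score(visible[low]) {
			low = name
		}
	}
	for name, i := range hidden {
		if high == "" || score(i) > score(hidden[high]) {
			high = name
		}
	}
	if low != "" && high != "" && score(hidden[high]) > 2*score(visible[low]) {
		return low, high, true
	}
	return "", "", false
}

func main() {
	visible := map[string]Interaction{"RBAC Overview": {Hovers: 1}}
	hidden := map[string]Interaction{"Pod Issues": {Expands: 4, Actions: 2}}
	out, in, ok := suggestSwap(visible, hidden)
	fmt.Println(out, in, ok) // RBAC Overview Pod Issues true
}
```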
Console uses the kubestellar-ops and kubestellar-deploy MCP servers to fetch data from your clusters. This means it works with any clusters in your kubeconfig.
The kc-agent is a local agent that runs on your machine and bridges the browser-based console to your local kubeconfig and Claude Code CLI. This allows the hosted console to access your clusters without exposing your kubeconfig over the internet.
```bash
brew tap kubestellar/tap
brew install --head kc-agent
```

```bash
# Start the agent (runs on localhost:8585)
kc-agent

# Or run as a background service
brew services start kubestellar/tap/kc-agent
```

The agent supports the following environment variables:
| Variable | Description | Default |
|---|---|---|
| `KC_ALLOWED_ORIGINS` | Comma-separated list of allowed origins for CORS | localhost only |
| `KC_AGENT_TOKEN` | Optional shared secret for authentication | (none) |
If you're running the console on a custom domain, add it to the allowed origins:
```bash
# Single origin
KC_ALLOWED_ORIGINS="https://my-console.example.com" kc-agent

# Multiple origins
KC_ALLOWED_ORIGINS="https://console1.example.com,https://console2.example.com" kc-agent
```

To persist the configuration when running as a brew service, add to your shell profile (`~/.zshrc` or `~/.bashrc`):

```bash
export KC_ALLOWED_ORIGINS="https://my-console.example.com"
```

Then restart the service:

```bash
brew services restart kubestellar/tap/kc-agent
```

The agent implements several security measures:
- Origin Validation: Only allows connections from configured origins (localhost by default)
- Localhost Only: Binds to `127.0.0.1`, so it is not reachable from other machines
- Optional Token Auth: Can require a shared secret via `KC_AGENT_TOKEN`
- Command Allowlist: Only permits safe kubectl commands (get, describe, logs, etc.)
| Card Type | Description | Data Source |
|---|---|---|
| Cluster Health | Availability graph per cluster | get_cluster_health |
| App Status | Multi-cluster app health | get_app_status |
| Event Stream | Live event feed | get_events |
| Deployment Progress | Rollout status | get_app_status |
| Pod Issues | CrashLoopBackOff, OOMKilled | find_pod_issues |
| Deployment Issues | Stuck rollouts | find_deployment_issues |
| Top Pods | By CPU/memory/restarts | get_pods |
| Resource Capacity | CPU/memory/GPU utilization | list_cluster_capabilities |
| GitOps Drift | Out of sync clusters | detect_drift |
| Security Issues | Privileged, root, host | check_security_issues |
| RBAC Overview | Permission summary | get_roles |
| Policy Violations | OPA Gatekeeper | list_ownership_violations |
| Upgrade Status | Cluster upgrades | get_upgrade_status |
One command. No dependencies. Just curl.
```bash
curl -sSL https://raw.githubusercontent.com/kubestellar/console/main/start.sh | bash
```

This downloads the console and kc-agent binaries, starts both, and opens your browser at http://localhost:8080, typically in under 45 seconds.
Optional: Enable GitHub OAuth login
1. Create a GitHub OAuth App with:
   - Homepage URL: `http://localhost:8080`
   - Callback URL: `http://localhost:8080/auth/github/callback`
2. Create a `.env` file next to the binaries:

   ```
   GITHUB_CLIENT_ID=your-client-id
   GITHUB_CLIENT_SECRET=your-client-secret
   ```

3. Restart:

   ```bash
   curl -sSL https://raw.githubusercontent.com/kubestellar/console/main/start.sh | bash
   ```
One command. Requires helm and kubectl.
```bash
curl -sSL https://raw.githubusercontent.com/kubestellar/console/main/deploy.sh | bash
```

Options:
| Flag | Description |
|---|---|
| `--context, -c <name>` | Kubernetes context (default: current) |
| `--namespace, -n <name>` | Namespace (default: `kubestellar-console`) |
| `--openshift` | Enable OpenShift Route |
| `--ingress <host>` | Enable Ingress with hostname |
| `--github-oauth` | Prompt for GitHub OAuth credentials |
| `--uninstall` | Remove the console |
Examples:
```bash
# Deploy to a specific cluster
curl -sSL .../deploy.sh | bash -s -- --context my-cluster

# Deploy with OpenShift Route
curl -sSL .../deploy.sh | bash -s -- --openshift

# Deploy with Ingress
curl -sSL .../deploy.sh | bash -s -- --ingress console.example.com

# Deploy with GitHub OAuth
GITHUB_CLIENT_ID=xxx GITHUB_CLIENT_SECRET=yyy \
  curl -sSL .../deploy.sh | bash

# Uninstall
curl -sSL .../deploy.sh | bash -s -- --uninstall
```

Or install manually with Helm; see Kubernetes Deployment (Helm) below.
For AI-powered operations, install Claude Code and the KubeStellar plugins:
```bash
# Install from Claude Code Marketplace
claude plugins install kubestellar-ops
claude plugins install kubestellar-deploy
```

Or via Homebrew (source: homebrew-tap):

```bash
brew tap kubestellar/tap
brew install kubestellar-ops kubestellar-deploy
```

Prerequisites: Go 1.24+, Node.js 20+
- Clone the repository

  ```bash
  git clone https://github.com/kubestellar/console.git
  cd console
  ```

- Start in dev mode (no OAuth required)

  ```bash
  ./start-dev.sh
  ```

  Opens frontend at http://localhost:5174, backend at http://localhost:8080. Uses a mock dev-user account.
- Or start with GitHub OAuth

  Create a GitHub OAuth App:
  - Homepage URL: `http://localhost:5174`
  - Callback URL: `http://localhost:8080/auth/github/callback`

  ```bash
  # Create .env with your credentials
  cat > .env << EOF
  GITHUB_CLIENT_ID=your-client-id
  GITHUB_CLIENT_SECRET=your-client-secret
  EOF
  ./startup-oauth.sh
  ```

- Build the image
  ```bash
  docker build -t kubestellar/console:latest .
  ```

- Run the container

  ```bash
  docker run -d \
    -p 8080:8080 \
    -e GITHUB_CLIENT_ID=your_client_id \
    -e GITHUB_CLIENT_SECRET=your_client_secret \
    -e CLAUDE_API_KEY=your_claude_api_key \
    -v ~/.kube:/root/.kube:ro \
    kubestellar/console:latest
  ```

- Add the Helm repository

  ```bash
  helm repo add kubestellar-console https://kubestellar.github.io/console
  helm repo update
  ```

- Create a secret for credentials
  ```bash
  kubectl create namespace kubestellar-console
  kubectl create secret generic console-secrets \
    --namespace kubestellar-console \
    --from-literal=github-client-id=your_client_id \
    --from-literal=github-client-secret=your_client_secret \
    --from-literal=claude-api-key=your_claude_api_key
  ```

- Install the chart
  ```bash
  helm install kc kubestellar-console/kubestellar-console \
    --namespace kubestellar-console \
    --set ingress.enabled=true \
    --set ingress.host=console.your-domain.com
  ```

  For OpenShift, install with the OpenShift values file:

  ```bash
  helm install kc kubestellar-console/kubestellar-console \
    --namespace kubestellar-console \
    --create-namespace \
    -f deploy/helm/kubestellar-console/values-openshift.yaml \
    --set github.clientId=$GITHUB_CLIENT_ID \
    --set github.clientSecret=$GITHUB_CLIENT_SECRET
  ```

| Variable | Description | Default |
|---|---|---|
| `PORT` | Server port | `8080` |
| `DEV_MODE` | Enable dev mode (CORS, hot reload) | `false` |
| `DATABASE_PATH` | SQLite database path | `./data/console.db` |
| `GITHUB_CLIENT_ID` | GitHub OAuth client ID | (required) |
| `GITHUB_CLIENT_SECRET` | GitHub OAuth client secret | (required) |
| `JWT_SECRET` | JWT signing secret | (auto-generated) |
| `FRONTEND_URL` | Frontend URL for redirects | `http://localhost:5174` |
| `CLAUDE_API_KEY` | Claude API key for AI features | (optional) |
See `deploy/helm/kubestellar-console/values.yaml` for all available options.
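Reading this configuration typically reduces to environment lookups with the defaults from the table. A minimal sketch follows; the variable names and defaults are from the table above, while the `Config` struct and helper are hypothetical, not the console's actual types:

```go
package main

import (
	"fmt"
	"os"
)

// getenv returns the value of key, or def when the variable is unset.
func getenv(key, def string) string {
	if v := os.Getenv(key); v != "" {
		return v
	}
	return def
}

// Config mirrors a subset of the server settings listed in the table above.
type Config struct {
	Port         string
	DevMode      bool
	DatabasePath string
	FrontendURL  string
}

func load() Config {
	return Config{
		Port:         getenv("PORT", "8080"),
		DevMode:      getenv("DEV_MODE", "false") == "true",
		DatabasePath: getenv("DATABASE_PATH", "./data/console.db"),
		FrontendURL:  getenv("FRONTEND_URL", "http://localhost:5174"),
	}
}

func main() {
	fmt.Printf("%+v\n", load())
}
```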
```
console/
├── cmd/console/          # Entry point
├── pkg/
│   ├── api/              # HTTP/WS server
│   │   ├── handlers/     # Request handlers
│   │   └── middleware/   # Auth, logging
│   ├── mcp/              # MCP bridge layer
│   ├── claude/           # Claude AI integration
│   ├── models/           # Data models
│   └── store/            # Database layer
├── web/                  # React frontend
│   ├── src/
│   │   ├── components/   # React components
│   │   ├── hooks/        # Custom hooks
│   │   └── lib/          # Utilities
│   └── ...
└── deploy/
    ├── helm/             # Helm chart
    └── docker/           # Dockerfile
```
```bash
# Backend tests
go test ./...

# Frontend tests
cd web && npm test
```

```bash
# Backend
go build -o console ./cmd/console

# Frontend
cd web && npm run build
```

GitHub OAuth is required for authentication. Follow these steps carefully:
1. Go to GitHub → Settings → Developer settings → OAuth Apps → New OAuth App
2. Fill in the application details:
   - Application name: `KubeStellar Console` (or your preferred name)
   - Homepage URL: `http://localhost:5174` (for development)
   - Authorization callback URL: `http://localhost:8080/auth/github/callback`
3. Click Register application
4. Copy the Client ID (shown immediately)
5. Click Generate a new client secret and copy it immediately (you won't see it again)
| Environment | Homepage URL | Callback URL |
|---|---|---|
| Local dev | `http://localhost:5174` | `http://localhost:8080/auth/github/callback` |
| Docker | Your host URL | `http://your-host:8080/auth/github/callback` |
| Kubernetes | Your ingress URL | `https://console.your-domain.com/auth/github/callback` |
| OpenShift | Your route URL | `https://console-namespace.apps.cluster.com/auth/github/callback` |
When deploying with Helm, provide GitHub credentials via values or secrets:
```bash
# Option 1: Via --set flags
helm install kc kubestellar-console/kubestellar-console \
  --namespace kubestellar-console \
  --set github.clientId=$GITHUB_CLIENT_ID \
  --set github.clientSecret=$GITHUB_CLIENT_SECRET

# Option 2: Via values file
cat > my-values.yaml <<EOF
github:
  clientId: "your-client-id"
  clientSecret: "your-client-secret"
EOF
helm install kc kubestellar-console/kubestellar-console \
  --namespace kubestellar-console \
  -f my-values.yaml

# Option 3: Via existing secret
kubectl create secret generic github-oauth \
  --namespace kubestellar-console \
  --from-literal=client-id=$GITHUB_CLIENT_ID \
  --from-literal=client-secret=$GITHUB_CLIENT_SECRET
helm install kc kubestellar-console/kubestellar-console \
  --namespace kubestellar-console \
  --set github.existingSecret=github-oauth
```

Symptom: Clicking "Sign in with GitHub" shows a 404 or blank page.
Cause: The GitHub OAuth Client ID is not configured or not being read by the backend.
Solutions:
1. Verify environment variables are set:

   ```bash
   echo $GITHUB_CLIENT_ID  # Should show your client ID
   ```

2. Pass environment variables inline when starting:

   ```bash
   GITHUB_CLIENT_ID=xxx GITHUB_CLIENT_SECRET=yyy ./console
   ```

3. Check the backend logs for OAuth configuration errors
Symptom: After login, you see "dev-user" instead of your actual GitHub username.
Cause: `DEV_MODE=true` bypasses OAuth and uses a mock user.
Solution: Set `DEV_MODE=false` for real GitHub authentication:

```bash
DEV_MODE=false GITHUB_CLIENT_ID=xxx GITHUB_CLIENT_SECRET=yyy ./console
```

Symptom: GitHub shows "The redirect_uri does not match" error.
Solution: Ensure the callback URL in your GitHub OAuth App exactly matches:
- Development: `http://localhost:8080/auth/github/callback`
- Production: `https://your-domain.com/auth/github/callback`
Symptom: Log shows `MCP bridge failed to start: failed to start MCP clients`
Cause: kubestellar-ops or kubestellar-deploy plugins are not installed.
Solution:
```bash
# Option 1: Install from Claude Code Marketplace (recommended)
claude plugins install kubestellar-ops
claude plugins install kubestellar-deploy

# Option 2: Install via Homebrew
brew tap kubestellar/tap
brew install kubestellar-ops kubestellar-deploy

# Verify installation
which kubestellar-ops kubestellar-deploy
```

Note: The console will still function without MCP tools, but cluster data will not be available.
Symptom: Browser console shows CORS errors.
Solution: Ensure `FRONTEND_URL` is correctly configured in your environment:

```bash
FRONTEND_URL=http://localhost:5174 ./console
```

Symptom: "Failed to resolve import" or "Outdated Optimize Dep"
Solution:
```bash
cd web
rm -rf node_modules/.vite
npm run dev
```

- Check the GitHub Issues for known problems
- Join the KubeStellar Slack for community support
- Phase 1: Foundation - Backend, auth, basic dashboard
- Phase 2: Core Dashboard - Card grid, real-time updates
- Phase 3: Onboarding & Personalization
- Phase 4: Claude AI Integration
- Phase 5: Polish & Deploy
- Alert Notifications Setup - Configure Slack and Email alert delivery
- Contributing Guide - Guidelines for contributing to the project
Contributions are welcome! Please read our Contributing Guide before submitting a PR.
Apache License 2.0 - see LICENSE for details.
- console - AI-powered kubectl plugins (MCP servers)
- claude-plugins - Claude Code marketplace plugins for Kubernetes
- homebrew-tap - Homebrew formulae for KubeStellar tools
- KubeStellar - Multi-cluster configuration management
- KubeFlex - Lightweight Kubernetes control planes