Popcorn is an ultra-fast browser isolation service. It is designed to scale globally on Kubernetes, providing sub-second access to "warm" browser instances.
- Pool Manager: Node.js service that assigns idle pods to users.
- Browser Node: Custom Docker image running Chromium with Neko.
- Agones (Fleets):
  - We use Agones to manage the lifecycle of browser sessions.
  - Fleets maintain a set of warm GameServers.
  - GameServers are "allocated" (marked as Busy) when a user connects.
- Gateway: OpenResty (Nginx) for sticky session routing.
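The Pool Manager's core job described above, handing a warm GameServer to an incoming user, can be sketched as a small function. This is a hypothetical illustration, not the actual service code: the object shape loosely mirrors Agones' `Ready`/`Allocated` states, and `allocateBrowser` is an invented name. In production the allocation would be done atomically through an Agones `GameServerAllocation`, not by mutating local state.

```javascript
// Hypothetical sketch of the Pool Manager's allocation step.

/**
 * Pick the first warm (Ready) server from the pool, mark it
 * Allocated, and bind it to the user. Returns null when the
 * pool has no warm capacity left.
 */
function allocateBrowser(pool, userId) {
  const server = pool.find((gs) => gs.state === "Ready");
  if (!server) return null; // pool exhausted: caller should queue or scale up
  server.state = "Allocated";
  server.userId = userId;
  return { userId, address: `${server.address}:${server.port}` };
}

// Example pool of browser GameServers.
const pool = [
  { name: "browser-abc", state: "Allocated", address: "10.0.0.5", port: 7777, userId: "u1" },
  { name: "browser-def", state: "Ready", address: "10.0.0.6", port: 7777, userId: null },
];

console.log(allocateBrowser(pool, "u2")); // binds u2 to browser-def
console.log(allocateBrowser(pool, "u3")); // null: no warm servers left
```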
We use Kind (Kubernetes in Docker) to replicate the production environment locally.
Use the Makefile to control the local environment.
- Build Images & Start Local Cluster:

  ```sh
  make build
  make up
  ```

  This starts a Kind cluster named `popcorn` and installs Agones.

  ```sh
  kubectl apply -k kustomize/dev  # example, you need to provide your own manifests
  ```
- Connect:

  ```sh
  make connect
  ```

  Forwards port 8080 to the gateway. Access at `http://localhost:8080`.
- Reset:

  ```sh
  make clean
  ```
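The Makefile targets referenced above might look roughly like the following sketch. The recipe bodies are assumptions, not the repository's actual Makefile: the cluster name `popcorn` comes from the text above, the Agones install follows Agones' documented Helm procedure, and the image names and service paths are placeholders.

```makefile
# Hypothetical sketch of the local-dev targets; the real Makefile may differ.

CLUSTER ?= popcorn

build:    ## Build the service images (paths/tags are placeholders)
	docker build -t popcorn/pool-manager services/pool-manager
	docker build -t popcorn/gateway services/gateway

up:       ## Create the Kind cluster and install Agones via Helm
	kind create cluster --name $(CLUSTER)
	helm repo add agones https://agones.dev/chart/stable
	helm install agones agones/agones --namespace agones-system --create-namespace

connect:  ## Forward the gateway to localhost:8080
	kubectl port-forward svc/gateway 8080:80

clean:    ## Tear the cluster down
	kind delete cluster --name $(CLUSTER)
```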
Popcorn is designed to be deployed on any standard Kubernetes cluster.
- Build and push your own Docker images.
- Deploy the core services (`pool-manager`, `gateway`, `ttl-controller`) using standard Kubernetes manifests or Helm charts.
- Install Agones on your cluster and configure your browser-node fleets.
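A browser-node fleet is declared to Agones as a `Fleet` resource, which keeps the configured number of warm GameServers ready for allocation. A minimal sketch might look like this; the image name, port, and replica count are placeholders, not values from this repo:

```yaml
apiVersion: agones.dev/v1
kind: Fleet
metadata:
  name: browser-node
spec:
  replicas: 10                  # number of warm browsers to keep ready
  template:                     # GameServer template
    spec:
      ports:
        - name: webrtc
          containerPort: 8080   # placeholder port
      template:                 # Pod template
        spec:
          containers:
            - name: browser
              image: your-registry/browser-node:latest  # placeholder image
```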
- SEV-SNP Hardware Attestation: Popcorn instances run within an AMD cryptographic enclave. Attestation proofs dynamically bind the actively running container digests (e.g., `nekobrowser`) to a nonce, guaranteeing the exact codebase is running securely. Proofs can be fetched and verified using the tools in `scripts/attestation/`.
- Isolation: Every session runs in a dedicated ephemeral pod.
- Network: WebRTC traffic is routed via a private TURN server (Coturn) or internal ClusterIPs.
- Clone the repo.
- Run `make up` to stand up the local dev stack.
- Make changes to `services/`.
- Run `make build` and deploy to test.