A modular AI ecosystem focused on frame-based image generation, training, and visualization.
- FrameForge (Training): AI training, dataset preparation, and orchestration within the Frame ecosystem.
- FrameView (Viewing): visualization, inspection, and analysis of generated frames and training results.
- FrameCreate (Generating): the generative image AI of the Frame ecosystem.
Work in Progress
FrameCreate is the generative core of the FrameFamily. It gives you a clean, fast image generator with a calm UI, model control, and a clear history of every output.
Notice: only SDXL-based models are supported right now, and embeddings are not wired up yet.
FrameCreate is built for creative, synthetic, and stylized content. Using it to depict real individuals without their consent is explicitly discouraged.
Support and Questions -> Discord
https://discord.gg/TB5DHMNa5J
- One place to generate, manage models, and review results.
- A clear, uncluttered workflow that stays consistent with FrameFamily.
- Fast queue handling so the machine stays focused on generation.
- Built to stay fully open and self-hosted.
- Generate images with live preview and stop running jobs when needed.
- Manage base models, LoRAs, and VAEs in one place.
- Stack up to three LoRAs and control the strength of each.
- Browse history with metadata, reuse prompts, and delete what you do not need.
- Use preset styles and wildcard prompts to speed up prompting.
- Set default sampling and live preview settings in System.
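LoRA stacking as described above can be pictured as additive weight deltas, each scaled by its own strength. The sketch below is a generic illustration of that idea, not FrameCreate's actual code; the function and variable names are made up:

```python
def apply_loras(base_weight: float, loras: list[tuple[float, float]]) -> float:
    """Combine a base weight with stacked LoRA deltas.

    Each entry is (delta, strength); deltas add on top of the base,
    scaled per adapter, and at most three adapters are applied.
    """
    for delta, strength in loras[:3]:
        base_weight += strength * delta
    return base_weight

# Two LoRAs at different strengths on a base weight of 1.0:
w = apply_loras(1.0, [(0.5, 0.8), (0.2, 1.0)])  # 1.0 + 0.8*0.5 + 1.0*0.2
```

Because the adapters combine additively, lowering one LoRA's strength tones down only its contribution without disturbing the others.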
./scripts/setup.sh

Then open the Web UI at http://localhost:5174.
The setup script installs dependencies, prepares the database, runs migrations, and enables systemd services.
What you need: Node.js + npm, Python 3, and Postgres. A GPU is recommended for generation.
- Run the setup command above.
- Open the web UI.
- Drop your models into the storage/ folders (see below).
- Use the Model Manager to rescan.
- Generate your first image.
FrameCreate stores everything it needs under the storage/ folder. You can drop your models there and FrameCreate will find them.
storage/
  models/      # base checkpoints (.safetensors)
  loras/       # LoRA adapters (.safetensors)
  vaes/        # VAE weights
  embeddings/  # text embeddings
  outputs/     # generated images
  thumbnails/  # UI thumbnails
  wildcards/   # prompt wildcard lists (.txt)
Tip: After adding models, open the Model Manager and click Rescan.
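A rescan essentially walks these folders looking for model files. A minimal sketch of that idea (hypothetical, not the Model Manager's real code; folder names match the layout above):

```python
from pathlib import Path

def scan_storage(storage: Path) -> dict[str, list[str]]:
    """List model files under storage/, similar to what a rescan picks up."""
    subdirs = {
        "models": "*.safetensors",  # base checkpoints
        "loras": "*.safetensors",   # LoRA adapters
        "vaes": "*.safetensors",    # VAE weights
    }
    found = {}
    for name, pattern in subdirs.items():
        folder = storage / name
        # Missing folders simply yield an empty list instead of an error.
        found[name] = sorted(p.name for p in folder.glob(pattern)) if folder.is_dir() else []
    return found
```

Dropping a new .safetensors file into the right subfolder and rescanning is all it takes; no registry or config edit is involved.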
Drop a text file into storage/wildcards/. Each line is one option. Use it in your prompt like __colors__.
Example:
storage/wildcards/colors.txt:
  red
  blue
  green
- Prompt:
a __colors__ car
Each image in a series uses the next line from the file; when the batch exceeds the list, values cycle from the top. Lines without letters are ignored.
Optional: send wildcard_strategy in the job request (sequential, cycle, random) to control selection.
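The selection rules above can be sketched in a few lines. This is only an illustration of the described behavior (sequential stepping, wrap-around, letter-less lines ignored, optional random strategy), not FrameCreate's implementation; the function names are made up:

```python
import random

def load_wildcard(lines):
    # Lines without any letters (blank lines, separators) are ignored.
    return [ln.strip() for ln in lines if any(ch.isalpha() for ch in ln)]

def expand(prompt, name, values, index, strategy="cycle"):
    """Replace __name__ in the prompt with one value from the list."""
    token = f"__{name}__"
    if strategy == "random":
        choice = random.choice(values)
    else:
        # "sequential" and "cycle" both step through the list in order;
        # the modulo wraps back to the top when the batch is longer.
        choice = values[index % len(values)]
    return prompt.replace(token, choice)

colors = load_wildcard(["red", "", "blue", "---", "green"])
prompts = [expand("a __colors__ car", "colors", colors, i) for i in range(4)]
# image 4 wraps back to the first value, "red"
```

With a three-entry list and a batch of four, the fourth image reuses the first value, matching the cycling described above.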
If you want to change ports, database settings, or runtime options, edit .env. You can start from .env.example.
License: MIT