Set Up Your Own Private AI Image Generator with Docker and Open WebUI

Last updated: 2026-05-05 22:38:34 · Cloud Computing

Introduction

Imagine having the power of a DALL-E-like image generator running entirely on your own machine—no cloud fees, no privacy worries, no annoying content filters. With Docker Model Runner and Open WebUI, this is not just a dream but a simple setup you can complete in minutes. This guide walks you through pulling an image generation model, connecting it to a polished chat interface, and generating images locally. You'll gain full control over your AI workflows while keeping your data private. Let's get started.

Set Up Your Own Private AI Image Generator with Docker and Open WebUI
Source: www.docker.com

What You Need

  • Docker Desktop (macOS or Windows) or Docker Engine (Linux) – latest version installed
  • At least 8 GB of free RAM for a small image model; 16 GB or more recommended for better performance
  • A GPU is optional but highly recommended: NVIDIA (CUDA) on Windows/Linux, Apple Silicon (MPS) on Mac, or CPU fallback (slower)
  • Basic command-line familiarity – you should be comfortable running terminal commands
  • Internet connection – needed only for initial model download (after that, everything runs offline)

To verify Docker is ready, run: docker model version. If it returns version info without errors, you're set.

How This All Connects: The Big Picture

Before diving into steps, understand the architecture: Docker Model Runner acts as a control plane that downloads image generation models (packaged in DDUF format), manages inference backends, and exposes a fully OpenAI-compatible API—including the critical POST /v1/images/generations endpoint. Open WebUI, a feature-rich chat interface, is pre-configured to talk to that endpoint. The result: you type a prompt in a beautiful chat window, and images appear as if by magic, all running locally.
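Because the API is OpenAI-compatible, any HTTP client can talk to it. Here's a minimal sketch of how a request to that images endpoint is shaped, using only the Python standard library. The host and port (`http://localhost:12434`) are an assumption for illustration – substitute whatever address your Docker Model Runner instance actually listens on.

```python
import json
import urllib.request


def build_image_request(base_url, model, prompt, size="1024x1024", n=1):
    """Build a POST request for the OpenAI-compatible images endpoint.

    The payload fields (model, prompt, size, n) follow the OpenAI
    images API convention that Docker Model Runner mirrors.
    """
    payload = {"model": model, "prompt": prompt, "size": size, "n": n}
    return urllib.request.Request(
        url=f"{base_url}/v1/images/generations",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )


# Example request (not sent; urllib.request.urlopen(req) would send it):
req = build_image_request(
    "http://localhost:12434",  # assumed address -- check your setup
    "stable-diffusion",
    "a dragon wearing a business suit in a corporate boardroom",
)
print(req.full_url)  # → http://localhost:12434/v1/images/generations
```

Open WebUI builds exactly this kind of request for you behind the scenes; the sketch is only to show there's no magic in the wire format.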

Step-by-Step Guide

Step 1: Pull an Image Generation Model

Docker Model Runner uses the DDUF (Diffusers Unified Format) to package diffusion models as OCI artifacts on Docker Hub. This single-file format bundles the text encoder, VAE, UNet/DiT, and scheduler config into one portable artifact.

  1. Open a terminal and run:
    docker model pull stable-diffusion
  2. Wait for the download to complete – the model size is around 7 GB, so grab a coffee.
  3. Confirm the model is ready by inspecting it:
    docker model inspect stable-diffusion
    You should see output similar to:
    {
      "id": "sha256:5f60862074a4c585126288d08555e5ad9ef65044bf490ff3a64855fc84d06823",
      "tags": ["docker.io/ai/stable-diffusion:latest"],
      "created": 1768470632,
      "config": {
        "format": "diffusers",
        "architecture": "diffusers",
        "size": "6.94GB",
        "diffusers": {
          "dduf_file": "stable-diffusion-xl-base-1.0-FP16.dduf",
          "layout": "dduf"
        }
      }
    }

Tip: If you have limited disk space, you can specify a different model using docker model pull <model-name>. Check Docker Hub for available alternatives.
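Since the inspect output is plain JSON, it's easy to consume in scripts. A small sketch using the (abridged) output shown above – in a real script you would capture the command's stdout, for example with `subprocess`, rather than pasting it in:

```python
import json

# Abridged JSON as printed by `docker model inspect stable-diffusion`:
inspect_output = """
{
  "tags": ["docker.io/ai/stable-diffusion:latest"],
  "config": {
    "format": "diffusers",
    "size": "6.94GB",
    "diffusers": {
      "dduf_file": "stable-diffusion-xl-base-1.0-FP16.dduf",
      "layout": "dduf"
    }
  }
}
"""

meta = json.loads(inspect_output)
cfg = meta["config"]
print(f'{meta["tags"][0]}: {cfg["format"]}, {cfg["size"]}')
# → docker.io/ai/stable-diffusion:latest: diffusers, 6.94GB
```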

Step 2: Launch Open WebUI

Here's the magic part: Docker Model Runner has a built-in launch command that automatically wires up Open WebUI against your local inference endpoint. No manual configuration needed.

  1. Run this single command:
    docker model launch openwebui
  2. Wait for the container to start – you'll see logs indicating the web interface is ready at http://localhost:8080 (or a different port if 8080 is busy).
  3. Open your browser and navigate to that URL. You should see the Open WebUI login/registration page.
  4. Create an account (local, no email needed) and log in.

That's it! Open WebUI is now connected to Docker Model Runner's API. You can start a new chat and use the image generation feature by typing your prompt (e.g., "a dragon wearing a business suit in a corporate boardroom").

Step 3: Generate Your First Image

Once Open WebUI is running, image generation is as simple as typing a prompt.

  1. In the chat interface, select the image generation mode (usually a toggle or button in the input area).
  2. Enter a descriptive prompt – be creative! For example: "a cyberpunk cat riding a hoverboard through a neon-lit city, photorealistic".
  3. Adjust optional parameters like image size (e.g., 1024x1024), number of images, negative prompt, etc., if the interface exposes them.
  4. Click the generate button and watch the magic happen. The model runs locally, so no data leaves your machine.
  5. View and download the generated images directly from the chat. Each image is stored locally in your Docker volume.

Note: The first generation may be slower because the model loads into memory. Subsequent generations will be faster.
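If you ever script against the endpoint directly instead of going through Open WebUI, note that OpenAI-style image APIs conventionally return images base64-encoded under `data[i].b64_json`. The sketch below assumes that response shape – verify it against your backend's actual output before relying on it:

```python
import base64
from pathlib import Path


def save_images(response, out_dir="."):
    """Decode base64 images from an OpenAI-style response dict.

    Assumes the shape {"data": [{"b64_json": "..."}]} used by the
    OpenAI images API convention; field names may differ per backend.
    """
    paths = []
    for i, item in enumerate(response.get("data", [])):
        path = Path(out_dir) / f"image_{i}.png"
        path.write_bytes(base64.b64decode(item["b64_json"]))
        paths.append(path)
    return paths


# Demo with a tiny fake payload (a real response carries PNG bytes):
fake = {"data": [{"b64_json": base64.b64encode(b"not-a-real-png").decode()}]}
print(save_images(fake))  # one decoded file: image_0.png
```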


Step 4: Manage Your Models and Resources

You can pull additional models and switch between them easily.

  1. List all locally available models:
    docker model list
  2. Switch to another model (e.g., a faster version or a different style) by using the Open WebUI settings or by restarting the launch command with a different model name: docker model launch openwebui --model <model-name> (check the CLI documentation for exact syntax).
  3. Remove an unused model to free up disk space:
    docker model rm <model-name>
  4. Monitor GPU usage with nvidia-smi (Linux/Windows) or powermetrics (macOS) to ensure your system isn't overwhelmed.
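When scripting cleanup decisions, note that model sizes appear as human-readable strings like "6.94GB" in the inspect output. A small illustrative helper (not part of the Docker CLI) to convert those to byte counts:

```python
import re

UNITS = {"KB": 1e3, "MB": 1e6, "GB": 1e9, "TB": 1e12}


def size_to_bytes(size):
    """Convert a size string like '6.94GB' (as shown by
    `docker model inspect`) to a byte count.

    Illustrative helper for scripting -- not a Docker command.
    """
    m = re.fullmatch(r"([\d.]+)\s*([KMGT]B)", size.strip(), re.IGNORECASE)
    if not m:
        raise ValueError(f"unrecognized size: {size!r}")
    return int(round(float(m.group(1)) * UNITS[m.group(2).upper()]))


print(size_to_bytes("6.94GB"))  # → 6940000000
```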

Step 5: Customize the Experience

Open WebUI offers many settings to tailor the interface and generation behaviour.

  1. Change the theme under Settings – Appearance: light, dark, or a custom color.
  2. Enable conversation history to revisit past prompts and generations.
  3. Set default generation parameters (e.g., always applying a negative prompt such as “ugly, blurry, low quality”).
  4. Integrate with other Docker services – since Open WebUI runs in a container, you can add it to a Docker Compose stack with other AI tools.
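As a starting point for such a stack, here is a minimal Compose sketch. It uses the Open WebUI image and data path from the project's standard setup and bind-mounts the data directory to the host so chats and generated images persist; the Model Runner URL and port are placeholders you should adapt to your own environment:

```yaml
services:
  open-webui:
    image: ghcr.io/open-webui/open-webui:main
    ports:
      - "8080:8080"
    environment:
      # Point Open WebUI at the local Model Runner API; the host and
      # port here are assumptions -- match them to your own setup.
      OPENAI_API_BASE_URL: "http://host.docker.internal:12434/v1"
    volumes:
      # Bind mount so conversations and generated images survive
      # container restarts and are easy to back up from the host.
      - ./open-webui-data:/app/backend/data
```

This also covers the backup concern mentioned in the troubleshooting tips: with the bind mount in place, your creations live in `./open-webui-data` on the host.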

Tips & Troubleshooting

  • Performance: If generations are slow, close other memory-hungry apps. Use docker stats to see container resource usage.
  • GPU not detected? Ensure Docker is configured to use your GPU. For NVIDIA, install the NVIDIA Container Toolkit. For Apple Silicon, Docker Desktop should automatically use Metal.
  • Storage space: DDUF models can be several GB. Regularly clean unused models with docker model prune (removes all cached models you no longer need).
  • Safety filters: By default, some models come with built-in NSFW filters. You can disable them (at your own risk) by passing environment variables during launch – check the model's documentation.
  • Backup your creations: Generated images are stored inside the Open WebUI container volume. To persist them outside, configure a bind mount or copy them out manually.
  • Update: Keep Docker Model Runner and Open WebUI up to date: docker model update and docker pull ghcr.io/open-webui/open-webui:main (or use the launch command which always pulls the latest).
  • Community models: Explore custom models on Docker Hub or create your own DDUF packages from Hugging Face checkpoints.
  • Remember: All processing is local – no internet required after the initial download. Perfect for sensitive projects or offline tinkering.

Enjoy your private, uncensored, always-available image generator. You've just built your own AI art studio, and it's all yours.