After showing you how to install Ollama and Open WebUI on the Gigabyte AI TOP ATOM in my previous posts, now comes something for everyone who wants to experiment with AI-generated images and short AI-generated videos: ComfyUI – a powerful, node-based interface for creating AI images with diffusion models like Stable Diffusion, SDXL, or Flux.

In this post, I’ll show you how I installed ComfyUI on my Gigabyte AI TOP ATOM and configured it so that it’s accessible across the entire network. ComfyUI takes full advantage of the GPU performance of the Blackwell architecture and allows you to create complex workflows for image generation – all locally on your own server. Since the system is based on the same platform as the NVIDIA DGX Spark, the official NVIDIA playbooks work just as reliably here.

The exciting question for me is whether I can really install custom workflows or if I will hit the limits of the ARM architecture used by NVIDIA.

GIGABYTE AI TOP ATOM - ComfyUI wanmove

The Basic Idea: Node-based image generation for your own local network

Before I dive into the technical details, one important point: ComfyUI is a web application that runs directly on the Gigabyte AI TOP ATOM and uses the GPU performance of the Blackwell architecture to generate images with Stable Diffusion models. Unlike simple image generation tools, ComfyUI uses a node-based system: each step of image generation (loading a model, entering text, setting sampling parameters) is represented as a node that you connect to one another to create complex workflows.

What’s special about it: These workflows are saved as JSON files, so you can version, share, and reproduce them. This makes ComfyUI particularly interesting for anyone who wants to work seriously with AI image generation – whether for creative projects, prototyping, or even commercial applications.
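
Because workflows are plain JSON, they can be inspected and diffed with standard tools. As a rough sketch, a workflow exported in ComfyUI's API format maps node IDs to objects with a class_type and its inputs; the node IDs, class names, and checkpoint filename below are illustrative, not taken from a real export:

```python
import json

# Illustrative workflow fragment in ComfyUI's API-export format:
# node IDs map to a class_type plus that node's inputs.
workflow_json = """
{
  "1": {"class_type": "CheckpointLoaderSimple",
        "inputs": {"ckpt_name": "v1-5-pruned-emaonly-fp16.safetensors"}},
  "2": {"class_type": "CLIPTextEncode",
        "inputs": {"text": "a photo of a cat", "clip": ["1", 1]}}
}
"""

workflow = json.loads(workflow_json)

# List every node and its type - handy when reviewing versioned workflows.
for node_id, node in sorted(workflow.items()):
    print(f"Node {node_id}: {node['class_type']}")
```

This is also why sharing works so well: a collaborator loads the same JSON and gets the identical graph, node for node.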

The ComfyUI Marketplace, a browser for custom workflows integrated directly into ComfyUI, makes it all the more interesting, as it lets you learn and experiment very quickly.

<Insert image of browser>

What you need for this:

  • A Gigabyte AI TOP ATOM, ASUS Ascent, MSI EdgeXpert (or NVIDIA DGX Spark) connected to the network

  • A connected monitor or terminal access to the AI TOP ATOM

  • A computer on the same network with a modern browser (optional but recommended)

  • Basic knowledge of terminal commands and Python

  • At least 20 GB of free storage space for models and dependencies, but more is better.

  • The IP address of your AI TOP ATOM in the network (found with ip addr or hostname -I)

Phase 1: Check System Requirements

For the rest of my instructions, I am assuming that you are sitting directly in front of the AI TOP ATOM with a monitor, keyboard, and mouse connected. First, I’ll check if all necessary system requirements are met. To do this, I open a terminal on my AI TOP ATOM and run the following commands.

The following command shows you if Python 3 is installed and which version is running.

Command: python3 --version

You should see Python 3.8 or higher. Next, I check if pip is available:

Command: pip3 --version

Now I check if the CUDA Toolkit is installed:

Command: nvcc --version

And finally, I check if the GPU is recognized:

Command: nvidia-smi

You should now see the GPU information, similar to my previous Ollama blog post. If any of these commands fail, you must install the corresponding components first.
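
If you prefer a single overview to four separate commands, the checks above can be bundled into a short Python sketch. It only reports what is present on the PATH; the binary names are the same tools checked individually above:

```python
import shutil
import sys

def check_requirements():
    """Return a dict mapping each requirement to True/False."""
    return {
        # ComfyUI needs a reasonably recent Python 3.
        "python>=3.8": sys.version_info >= (3, 8),
        # These binaries must be on the PATH for the later phases.
        "pip3": shutil.which("pip3") is not None,
        "git": shutil.which("git") is not None,
        "nvcc": shutil.which("nvcc") is not None,
        "nvidia-smi": shutil.which("nvidia-smi") is not None,
    }

for name, ok in check_requirements().items():
    print(f"{'OK     ' if ok else 'MISSING'} {name}")
```

Anything reported as MISSING needs to be installed before continuing with Phase 2.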

GIGABYTE AI TOP ATOM - NVIDIA-SMI

Phase 2: Clone the ComfyUI Repository

Now I download the ComfyUI source code from the official GitHub repository:

Command: git clone https://github.com/comfyanonymous/ComfyUI.git

After cloning, I switch to the ComfyUI directory:

Command: cd ComfyUI/

The repository contains all necessary files for ComfyUI, including web interface components and model handling libraries.

Phase 3: Create a Python Virtual Environment

To avoid conflicts with system packages, I create an isolated Python virtual environment for ComfyUI. This is a best practice and also makes it easier to remove ComfyUI later if necessary.

If you are not already in the ComfyUI/ folder, switch to the folder.

Command: cd ComfyUI/

Now I create the virtual environment:

Command: python3 -m venv comfyui-env

Now I activate the virtual environment:

Command: source comfyui-env/bin/activate

You should now see (comfyui-env) in the terminal prompt – this indicates that the virtual environment is active. If not, check if the command was executed correctly.

GIGABYTE AI TOP ATOM - preparation

Phase 4: Install PyTorch with CUDA Support

ComfyUI requires PyTorch with CUDA support to use the GPU. For the Blackwell architecture on the AI TOP ATOM, I install PyTorch with CUDA 13.0 support:

Command: pip3 install torch torchvision --index-url https://download.pytorch.org/whl/cu130

This installation can take a few minutes as PyTorch is a large package. I just let the download run and do something else in the meantime. The installation is specifically optimized for CUDA 13.0, which works perfectly with the Blackwell architecture.

GIGABYTE AI TOP ATOM - torch installation

Phase 5: Install ComfyUI Dependencies

Now I install all necessary Python packages for ComfyUI. The repository contains a requirements.txt file with all required dependencies:

Command: pip install -r requirements.txt

Command: pip install -r manager_requirements.txt

This installation can also take a few minutes as many packages need to be downloaded. I just let the process run and wait until all packages are installed.

Phase 6: Download Stable Diffusion Checkpoint

To be able to generate images, ComfyUI needs a model. I download the Stable Diffusion 1.5 model, which is a good entry point. First, I switch to the checkpoints directory:

Command: cd models/checkpoints/

Now I download the model from Hugging Face:

Command: wget https://huggingface.co/Comfy-Org/stable-diffusion-v1-5-archive/resolve/main/v1-5-pruned-emaonly-fp16.safetensors

The model is about 2 GB in size and can take several minutes depending on internet speed. You will see a progress bar during the download. After the download, I switch back to the main directory:

Command: cd ../../

Important Note: Make sure enough storage space is available on your AI TOP ATOM. If the download aborts because the disk is full, free up space or switch to a smaller model; you can always add more models later.
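
To check in advance whether enough space is free, the Python standard library can query the disk directly; the 20 GB threshold in this sketch simply mirrors the requirements list above:

```python
import shutil

def free_gb(path="."):
    """Return the free disk space at `path` in gigabytes."""
    usage = shutil.disk_usage(path)
    return usage.free / (1024 ** 3)

gb = free_gb(".")
print(f"Free space: {gb:.1f} GB")
if gb < 20:
    print("Warning: less than 20 GB free - model downloads may fail.")
```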

Phase 7: Start ComfyUI Server

Now comes the crucial step: I start the ComfyUI server so that it is accessible across the entire network. To do this, I use the parameter --listen 0.0.0.0 so that ComfyUI listens on all network interfaces:

Command: python main.py --listen 0.0.0.0

The server starts now and binds to port 8188. You should see output that looks something like this:

Starting server
To see the GUI go to: http://0.0.0.0:8188

The server is now running in the foreground. Let it run and open a new terminal window if you want to execute further commands.

Here is an image of the ComfyUI web interface as it should look now after the fresh installation. I have already generated the image on the right at the end of the workflow.

Please open the following URL directly in a browser on the AI TOP ATOM.

URL: http://localhost:8188

GIGABYTE AI TOP ATOM - ComfyUI Web-Interface

To stop the server, press:

Command: Ctrl+C

Phase 8: Test Installation and Configure Network Access

First, I check whether the server is running locally. To do this, open the following URL in a browser on the AI TOP ATOM:

URL: http://localhost:8188

You should see the ComfyUI interface load, which confirms that the web server is running.

Now I check the IP address of my AI TOP ATOM in the network:

Command: hostname -I

I make a note of the IP address (e.g., 192.168.2.100). If a firewall is active, I must open port 8188:

Command: sudo ufw allow 8188

Now I open a browser on another computer in the network and navigate to http://<IP-Address-AI-TOP-ATOM>:8188 (replace <IP-Address-AI-TOP-ATOM> with the IP address of your AI TOP ATOM). The ComfyUI interface should open.

Important Note: On the first start, it may take a few seconds for the page to load. ComfyUI is initializing and loading the necessary components.
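
If the interface does not open from another computer, it helps to distinguish a firewall problem from a server problem by testing whether port 8188 is reachable at all. A small sketch using only the Python standard library (replace the host with the IP address of your AI TOP ATOM):

```python
import socket

def is_port_open(host, port, timeout=3.0):
    """Try a TCP connection; True means something is listening there."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example: check ComfyUI's default port on the local machine.
print(is_port_open("127.0.0.1", 8188))
```

If this prints False from a remote machine but True on the AI TOP ATOM itself, the firewall (or the missing --listen 0.0.0.0 flag) is the likely culprit.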

Phase 9: Test Your First Image Generation

When the ComfyUI interface is loaded, you will see a node-based interface. By default, a simple workflow should already be loaded. To generate your first image:

  1. Click the “Queue Prompt” button (or press Ctrl+Enter)

  2. The model will be loaded and image generation will start

  3. You will see the progress in real-time

  4. After 30-60 seconds, the first image should be ready

In a separate terminal, you can monitor GPU usage with nvidia-smi to see how the Blackwell GPU performs the image generation.
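
Besides the browser UI, ComfyUI also exposes an HTTP API on the same port, so generations can be queued from scripts. The sketch below only builds the request; actually sending it (the commented-out urlopen) assumes the server from Phase 7 is running and that the workflow dict matches one exported from the UI in API format, so treat the node details here as illustrative:

```python
import json
import urllib.request

def build_prompt_request(workflow, server="127.0.0.1:8188"):
    """Wrap a workflow dict in the JSON body the /prompt endpoint expects."""
    body = json.dumps({"prompt": workflow}).encode("utf-8")
    return urllib.request.Request(
        f"http://{server}/prompt",
        data=body,
        headers={"Content-Type": "application/json"},
    )

# Illustrative single-node workflow fragment (API-export format).
workflow = {
    "1": {"class_type": "CheckpointLoaderSimple",
          "inputs": {"ckpt_name": "v1-5-pruned-emaonly-fp16.safetensors"}},
}

req = build_prompt_request(workflow)
print(req.full_url)
# With the server running, this line would queue the workflow:
# urllib.request.urlopen(req)
```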

Try Out Other Models

The beauty of ComfyUI is the wide range of available models. You can download more models from Hugging Face or other sources. Popular models include:

  • Stable Diffusion XL (SDXL) – Higher resolution and better quality

  • Flux – Very high-quality results, optimized for modern GPUs

  • Stable Diffusion 2.1 – Improved version of Stable Diffusion

  • Custom Models – Many community models with special styles

To add more models, simply download them into the models/checkpoints/ directory. ComfyUI will automatically recognize them on the next start.
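
To see at a glance which models ComfyUI will pick up, the checkpoints directory can be listed with a short sketch. The default path assumes you are in the ComfyUI folder, and the extensions match the note in the troubleshooting section below:

```python
from pathlib import Path

def list_checkpoints(directory="models/checkpoints"):
    """Return the checkpoint files ComfyUI would recognize, sorted by name."""
    exts = {".safetensors", ".ckpt"}
    path = Path(directory)
    if not path.is_dir():
        return []
    return sorted(p.name for p in path.iterdir() if p.suffix in exts)

for name in list_checkpoints():
    print(name)
```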

Troubleshooting: Common Problems and Solutions

In my time with ComfyUI on the AI TOP ATOM, I have encountered some typical problems. Here are the most common ones and how I solved them:

  • PyTorch CUDA not available: Check if PyTorch was installed correctly with CUDA support. Run python -c "import torch; print(torch.cuda.is_available())" – it should return True.

  • Model download fails: Check your internet connection and available storage space. You can check storage space with df -h.

  • Web interface not reachable: Check if the firewall is blocking port 8188. You can open the port with sudo ufw allow 8188. Also check if both computers are on the same network.

  • Out of GPU Memory error: The model might be too large for the available GPU memory. Try a smaller model or check GPU usage with nvidia-smi. On the DGX Spark platform with Unified Memory Architecture, you can manually clear the buffer cache if memory problems occur:

    • Command: sudo sh -c 'sync; echo 3 > /proc/sys/vm/drop_caches'

  • Virtual Environment not active: Make sure the virtual environment is activated. The prompt should show (comfyui-env). If not, run source comfyui-env/bin/activate.

  • Model not found: Check if the model is in the correct directory (models/checkpoints/). The file should have the extension .safetensors or .ckpt.

Run ComfyUI in the Background (for testing only)

If you want to keep ComfyUI running after you close the terminal, you can run it in a session with screen or tmux; for a permanent setup, use the systemd service described in the next section. A simple solution for testing is screen:

Command: screen -S comfyui

Then start ComfyUI as usual. To leave the session (without stopping ComfyUI), press Ctrl+A followed by D. To return to the session:

Command: screen -r comfyui

Set Up ComfyUI as a Systemd Service (recommended)

For a professional setup that starts ComfyUI automatically after every reboot, set up a systemd service. First, determine the full path to your ComfyUI directory and your username:

Command: pwd

Note the path (e.g., /home/<username>/ComfyUI). Now create the systemd service file:

Command: sudo nano /etc/systemd/system/comfyui.service

Insert the following content, replacing username with your actual username and /home/username/ComfyUI with the actual path to your ComfyUI directory:

[Unit]
Description=ComfyUI Service
After=network.target

[Service]
Type=simple
User=username
WorkingDirectory=/home/username/ComfyUI
Environment="PATH=/home/username/ComfyUI/comfyui-env/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
ExecStart=/home/username/ComfyUI/comfyui-env/bin/python /home/username/ComfyUI/main.py --listen 0.0.0.0 --enable-manager
Restart=always
RestartSec=10

[Install]
WantedBy=multi-user.target

Save the file with Ctrl+O, confirm with Enter, and exit the editor with Ctrl+X.

Now reload the systemd configuration and activate the service:

Command: sudo systemctl daemon-reload

Command: sudo systemctl enable comfyui

Start the service:

Command: sudo systemctl start comfyui

Check the status of the service:

Command: sudo systemctl status comfyui

You should see that the service is active and running. If there are errors, check the logs with:

Command: sudo journalctl -u comfyui -f

The service will now start automatically after every reboot. To stop the service manually:

Command: sudo systemctl stop comfyui

To deactivate the service (will no longer start automatically after reboot):

Command: sudo systemctl disable comfyui

To completely remove the service:

Command: sudo systemctl stop comfyui

Command: sudo systemctl disable comfyui

Command: sudo rm /etc/systemd/system/comfyui.service

Command: sudo systemctl daemon-reload

Rollback: Remove ComfyUI Again

If you want to completely remove ComfyUI from the AI TOP ATOM, execute the following commands on the system:

First, stop the server with Ctrl+C (if it is still running) and deactivate the virtual environment:

Command: deactivate

Then switch to the parent directory and remove the ComfyUI directory. Since the virtual environment comfyui-env/ was created inside it, it is deleted along with everything else:

Command: cd ..

Command: rm -rf ComfyUI/

Important Note: These commands remove all ComfyUI files and also all downloaded models. Make sure you really want to remove everything before you run these commands.

Also remember to remove the ComfyUI systemd service, if you set one up, as described in the section above.

Summary & Conclusion

The installation of ComfyUI on the Gigabyte AI TOP ATOM is surprisingly straightforward thanks to compatibility with NVIDIA DGX Spark playbooks. In about 30-45 minutes, I have a fully functional image generation solution running that is accessible across the entire network.

What particularly excites me: The performance of the Blackwell GPU is fully utilized, and the node-based interface allows for creating complex workflows for image generation. This makes ComfyUI especially interesting for anyone who wants to work seriously with AI image generation – whether for creative projects, prototyping, or even commercial applications.

I also find it particularly practical that the workflows are saved as JSON files. This allows workflows to be versioned, shared, and reproduced – just like code. This makes ComfyUI a powerful tool for everyone who wants to work professionally with AI image generation.

For teams or creative projects, this is a perfect solution: a central server with full GPU power that everyone can access via a browser. No local installations needed, no complex configurations – just open the IP address in the browser and get started.

If you have questions or encounter problems, feel free to check the official NVIDIA DGX Spark documentation, the ComfyUI documentation, or the ComfyUI Wiki. The community is very helpful, and most problems can be solved quickly.

Next Step: Expand and Optimize Workflows

You have now successfully installed ComfyUI and exposed it to the network. The basic installation works, but that is just the beginning. ComfyUI offers a huge selection of custom nodes and extensions that make your workflows even more powerful.

In the next step, you can look into custom nodes that offer additional functions like Face Restoration, Upscaling, ControlNet, or LoRA support. The ComfyUI community is constantly developing new nodes and workflows that you can use directly.

Good luck experimenting with ComfyUI on your Gigabyte AI TOP ATOM. I am excited to see what creative projects and workflows you develop with it! Let me and my readers know here in the comments.