Comfy UI on DGX Spark

I took a brief break from Nanochat to try out some other workloads on the Spark. NVIDIA has a nice startup guide for the Spark here:

https://build.nvidia.com/spark

The first step is to verify prerequisites. Running the suggested checks from the ComfyUI guide at the link above, I found that I don't have the NVIDIA CUDA toolkit installed. That's rectified by opening a terminal session to the Spark from NVIDIA Sync and running the command below (the other prerequisites were preinstalled for me, but you may want to check them against the list in the guide):

sudo apt install nvidia-cuda-toolkit
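A quick way to confirm the toolkit actually landed is to ask the CUDA compiler for its version:

nvcc --version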

After that, I pulled down the Comfy repository. I did this before the venv steps because I want the virtual environment in the ComfyUI directory, and pulling down the repository creates it.

git clone https://github.com/comfyanonymous/ComfyUI.git
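The rest of the setup happens inside that directory, so I change into it before creating the virtual environment:

cd ComfyUI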

Then they have me set up a virtual environment using venv: 

python3 -m venv comfyui-env
source comfyui-env/bin/activate

I'm not going off script on this one to use uv, but I probably will at some point in the future. I've heard anecdotally that ComfyUI supports uv, but that's for another article. One interesting detail: the guide installs torch before ComfyUI's requirements. This suggests it's important to pre-empt the torch install that Comfy's requirements would otherwise trigger, and to already have a build that satisfies them. I did decide to go a bit off script and install torch with support for CUDA 13.0 instead of the 12.9 suggested in the docs:

pip3 install torch torchvision --index-url https://download.pytorch.org/whl/cu130

After that, I ran my script from the last article (adjusted to not use uv) to check the torch version, and everything looked good:

Torch Version: 2.9.0+cu130
CUDA Available: True
CUDA Version: 13.0
CUDA Device: NVIDIA GB10
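For reference, that kind of check is only a few lines of Python; a minimal sketch that reproduces the output above looks like this:

import torch

# Report the installed torch build and the CUDA device it sees
print(f"Torch Version: {torch.__version__}")
print(f"CUDA Available: {torch.cuda.is_available()}")
print(f"CUDA Version: {torch.version.cuda}")
print(f"CUDA Device: {torch.cuda.get_device_name(0)}")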

Now installing the requirements:

pip install -r requirements.txt
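Comfy's requirements reference torch as well (presumably why the guide has you install it first), so it's worth making sure this step didn't quietly swap in a different build. A one-liner, rather than rerunning the whole script, does the job:

python3 -c "import torch; print(torch.__version__, torch.cuda.is_available())"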

I verified the torch version again after this because I'm paranoid. It was unmodified. Good. I skipped downloading a model, since I know Comfy will prompt me for the models needed by a given workflow (at least for its built-in workflow templates). So the next thing to do is start ComfyUI:

python main.py --listen 0.0.0.0

Now, from my desktop, browsing to the Spark's hostname on port 8188 (ComfyUI's default port; the --listen 0.0.0.0 flag is what makes it reachable from other machines) does the trick. I see ComfyUI!


Now I'll select a workflow that needs a bit of VRAM to test it out. I'm going with image generation using the Qwen Image Text to Image template. Sure enough, loading this workflow prompts me to install the models it needs.


But now a fun bit. Hitting Download will download the models to my PC, but I need them on the Spark. So, barring a better way, I'm going with the Copy URL button and using curl to download each model into the appropriate directory from an SSH session on the Spark. For this workflow:

curl -L -o models/vae/qwen_image_vae.safetensors https://huggingface.co/Comfy-Org/Qwen-Image_ComfyUI/resolve/main/split_files/vae/qwen_image_vae.safetensors

curl -L -o models/text_encoders/qwen_2.5_vl_7b_fp8_scaled.safetensors https://huggingface.co/Comfy-Org/Qwen-Image_ComfyUI/resolve/main/split_files/text_encoders/qwen_2.5_vl_7b_fp8_scaled.safetensors

curl -L -o models/loras/Qwen-Image-Lightning-8steps-V1.0.safetensors https://huggingface.co/lightx2v/Qwen-Image-Lightning/resolve/main/Qwen-Image-Lightning-8steps-V1.0.safetensors

curl -L -o models/diffusion_models/qwen_image_fp8_e4m3fn.safetensors https://huggingface.co/Comfy-Org/Qwen-Image_ComfyUI/resolve/main/split_files/diffusion_models/qwen_image_fp8_e4m3fn.safetensors
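If you'd rather script this than paste four curl commands, a small Python sketch (my own alternative, using the same URLs and ComfyUI's standard model subdirectories, run from the ComfyUI root) could fetch them all in one go:

import os
import urllib.request

# (URL, local path) pairs copied from the workflow's model-download dialog
MODELS = [
    ("https://huggingface.co/Comfy-Org/Qwen-Image_ComfyUI/resolve/main/split_files/vae/qwen_image_vae.safetensors",
     "models/vae/qwen_image_vae.safetensors"),
    ("https://huggingface.co/Comfy-Org/Qwen-Image_ComfyUI/resolve/main/split_files/text_encoders/qwen_2.5_vl_7b_fp8_scaled.safetensors",
     "models/text_encoders/qwen_2.5_vl_7b_fp8_scaled.safetensors"),
    ("https://huggingface.co/lightx2v/Qwen-Image-Lightning/resolve/main/Qwen-Image-Lightning-8steps-V1.0.safetensors",
     "models/loras/Qwen-Image-Lightning-8steps-V1.0.safetensors"),
    ("https://huggingface.co/Comfy-Org/Qwen-Image_ComfyUI/resolve/main/split_files/diffusion_models/qwen_image_fp8_e4m3fn.safetensors",
     "models/diffusion_models/qwen_image_fp8_e4m3fn.safetensors"),
]

for url, dest in MODELS:
    # Make sure the target model directory exists, then stream the file down
    os.makedirs(os.path.dirname(dest), exist_ok=True)
    print(f"Downloading {dest}...")
    urllib.request.urlretrieve(url, dest)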

With all the models placed in the appropriate directories, I can now dismiss the dialog, enter a text prompt, and hit Run:



I saw around 80% GPU utilization and memory usage up to 120 GB during generation, which took about two minutes. Not particularly fast, but it does work. Right-clicking the resulting image and selecting "Save Image" downloads it to my desktop (not to the Spark):


While the actual DGX Spark looks a lot nicer, it doesn't have the Apple logo or the tiny people. That's all for now!




