ComfyUI on DGX Spark
I took a brief break from Nanochat to try out some other workloads on the Spark. NVIDIA has a nice getting-started guide for the Spark here:
https://build.nvidia.com/spark
The first step is to verify prerequisites. Running the suggested steps in the ComfyUI guide from the link above, I found that I didn't have the NVIDIA CUDA toolkit installed. That's rectified by opening a terminal session to the Spark from NVIDIA Sync and running the following (the other prerequisites were preinstalled for me, but you may want to check them against the guide):
sudo apt install nvidia-cuda-toolkit
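A quick way to confirm the toolkit actually landed is to ask the compiler for its version (this is just my own sanity check, not a step from the guide):

nvcc --version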
After that, I pulled down the ComfyUI repository. I did this before the venv steps because I want the virtual environment to live in the ComfyUI directory, and cloning the repository is what creates it.
git clone https://github.com/comfyanonymous/ComfyUI.git
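Since I want everything inside the checkout, I change into the new directory before running the venv steps:

cd ComfyUI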
Then the guide has me set up a virtual environment using venv:
python3 -m venv comfyui-env
source comfyui-env/bin/activate
pip3 install torch torchvision --index-url https://download.pytorch.org/whl/cu130
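At this point I like to confirm that the CUDA-enabled build of torch is the one that got installed. A one-liner like this (my own check, not part of the guide) prints the version and whether CUDA is visible:

python3 -c "import torch; print(torch.__version__, torch.cuda.is_available())"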
Now installing the requirements:
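The ComfyUI repo ships a requirements.txt at its root, so with the venv still active and from inside the ComfyUI directory, this should be all it takes:

pip3 install -r requirements.txt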
I verified the torch version again after this because I'm paranoid. It was unmodified. Good. I skipped downloading a model, since I know Comfy will prompt me for the models I need for a given workflow (at least for its built-in workflow templates). So the next thing to do is start ComfyUI:
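From the ComfyUI directory with the venv active, launching is just main.py. I'm adding --listen (my assumption about the right flag for this setup) so the UI is reachable from my PC's browser rather than only from localhost on the Spark; the default port is 8188.

python3 main.py --listen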
But now for a fun bit. Hitting Download will download the models to my PC, but I need them on the Spark. So, barring a better way, I'm going with the Copy URL button and using curl to download the models into the appropriate directories from an SSH shell to the Spark. For this workflow:
curl -L -o models/text_encoders/qwen_2.5_vl_7b_fp8_scaled.safetensors https://...
I saw 80% GPU utilization and memory usage up to 120 GB during generation, which took about two minutes. Not particularly fast, but it works. Right-clicking the resulting image and selecting "Save Image" downloads it to my desktop (not to the Spark).