Use CPU if no CUDA device is detected #132
Conversation
@philippschw how did the duration of the run change? I feel like it would take way longer, if it works at all.
@JustinGuese, it works, but you are also right: it takes over an hour to make a single inference call on my MacBook Pro (2.9 GHz 6-core Intel i9, 16 GB RAM).
That can't be right. I have an i7-1260P with 8 actual cores. A 512x512 image with 50 steps runs inference in about 12 minutes on my branch.
Yeah, anyway, CPU time is way too long. Out of curiosity: if I provided a "hosted" Stable Diffusion, would you actually pay for it, or rather use the CPU for cost reasons? The server would cost me ~600€/month, so I would need a lot of interest/people for it to be worth it.
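For reference, the fallback described in the PR title usually amounts to a one-line device check. A minimal sketch of the logic (illustration only, not the PR's actual code; in PyTorch the real check would be `torch.cuda.is_available()`):

```python
def pick_device(cuda_available: bool) -> str:
    """Return the device to run inference on.

    Falls back to CPU when no CUDA device is detected, which is the
    behaviour this PR adds. `cuda_available` stands in for the result
    of torch.cuda.is_available() so the sketch stays self-contained.
    """
    return "cuda" if cuda_available else "cpu"


# On a machine without a GPU, inference would run on "cpu" --
# slowly, as the timings reported above show.
device = pick_device(cuda_available=False)
```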
Fails on Mac; not sure if this PR should include an alternative environment.yaml?
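A Mac/CPU-only variant would mostly mean dropping the CUDA-specific packages from the conda environment. A hypothetical sketch (the names and pins below are assumptions, not the repo's actual file; real pins would mirror the project's environment.yaml minus CUDA):

```yaml
# environment-cpu.yaml -- hypothetical CPU-only variant
name: ldm-cpu
channels:
  - pytorch
  - defaults
dependencies:
  - python=3.8
  - pytorch        # CPU build; no cudatoolkit dependency
  - torchvision
  - pip
```

Users on machines without an NVIDIA GPU could then create the environment from this file instead of the default one.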
Sorry, didn't realize there is one.
There is a bug: an undefined variable.
Thanks @jpdeleon, I updated the code. |
```diff
 parser.add_argument(
-    "--init_img",
+    "--init-img",
```
This is a non-additive interface change. I'd recommend against it, as it impacts anyone writing scripts against the original code. It's also beyond the intent of the commit.
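If backward compatibility is the concern, argparse can register both spellings as aliases of one option, so the rename becomes additive. A hypothetical sketch, not the PR's actual code:

```python
import argparse

parser = argparse.ArgumentParser()
# Both option strings map to the same destination, so existing scripts
# that pass --init_img keep working alongside the new --init-img.
parser.add_argument(
    "--init-img", "--init_img",
    dest="init_img",
    type=str,
    help="path to the input image",
)

old_style = parser.parse_args(["--init_img", "sketch.png"])
new_style = parser.parse_args(["--init-img", "sketch.png"])
```

Both invocations populate `args.init_img`, which keeps the original scripted interface intact.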
Similar for me here.