Replies: 5 comments 36 replies
-
The article says 9.9x faster, but that speedup does not apply to every AMD GPU.
-
After running the above, when I launch webui-user.bat with --olive --backend directml --skip-install, it gives the PyTorch Lightning error again:
If I remove --skip-install, two errors appear:
and the DLL registration failure:
When trying to manually register the DLL with RegSvr32, it shows the following error:
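For anyone following along: the flags mentioned above are normally set via COMMANDLINE_ARGS in webui-user.bat rather than typed on the command line each time. A minimal sketch of that file, assuming the stock webui-user.bat layout of the DirectML fork (paths and flag spellings as used in this thread):

```shell
:: webui-user.bat -- minimal sketch (assumed stock layout of the DirectML fork)
@echo off

:: Flags discussed above. Leave --skip-install off for the first launch so
:: missing packages (e.g. pytorch-lightning) get installed; add it back
:: afterwards to speed up startup.
set COMMANDLINE_ARGS=--olive --backend directml

call webui.bat
```

This is a config fragment, not a fix for the DLL registration failure itself; that error usually points at a missing or blocked native dependency rather than a launch-flag problem.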
-
How do I set all of this up? Update: I got this instead.
-
To get the speedup on an AMD GPU, are you using the latest driver from:
-
Can confirm!! Successfully getting ~18 iterations/second at 512x512 with 1.5-based models once they are converted to Olive. See the comment in the About Olive Support (#149) discussion, and the important comment from lshqqytiger above mine. I was configuring the Optimize Checkpoint tab wrong, AND I thought I needed to double-convert a model in both the Optimize Checkpoint and Optimize ONNX Model tabs.
-
AMD published an article discussing how to generate Olive-optimized models that run 18x faster... has anybody tested this yet?
https://community.amd.com/t5/gaming/how-to-running-optimized-automatic1111-stable-diffusion-webui-on/ba-p/625585
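For those asking how to try it: the linked article boils down to cloning the DirectML fork and launching with the ONNX/Olive flags. A rough sketch under those assumptions (the repo URL is lshqqytiger's fork as referenced elsewhere in this thread; flag names are the ones used by commenters above, so check the article for the exact current steps):

```shell
:: Rough setup outline, assuming lshqqytiger's DirectML fork of the webui
git clone https://github.com/lshqqytiger/stable-diffusion-webui-directml.git
cd stable-diffusion-webui-directml

:: Edit webui-user.bat so COMMANDLINE_ARGS includes the DirectML/Olive flags
:: discussed above (e.g. --olive --backend directml), then launch:
webui-user.bat
```

After launch, models still need to be converted on the Olive optimization tab before the speedup applies, per the confirmation comment below.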