Hugging Face Diffusers can correctly load LoRA now | by Andrew Zhu | Jul, 2023

Using the latest Diffusers Monkey-Patching function to load LoRA produces exactly the same result as A1111

Pulling the latest code from Hugging Face’s Diffusers repository, I found that the newest code related to LoRA loading has been updated and can now do Monkey-Patching LoRA loading.

To install the latest Diffusers:

pip install -U git+

As recently as yesterday, my tests showed the LoRA loading function generating slightly faulty results. This article discusses how to use the latest LoRA loader from the Diffusers package.
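Here is a minimal sketch of what using the new loader looks like. The checkpoint name and the LoRA file name below are placeholders for your own model and LoRA file; the `scale` value plays the role of the LoRA weight α.

```python
# Sketch: loading a LoRA with the updated Diffusers loader (circa Jul 2023).
# "my_lora.safetensors" is a placeholder for your own LoRA file.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Monkey-patching loader: the checkpoint weights themselves stay untouched.
pipe.load_lora_weights(".", weight_name="my_lora.safetensors")

# The LoRA weight (alpha) is supplied per generation call, so it can be
# changed between calls without touching the model.
image = pipe(
    "a photo of an astronaut riding a horse",
    cross_attention_kwargs={"scale": 0.5},
).images[0]
```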

Load LoRA and update the Stable Diffusion model weight

For a long time, programmers using Diffusers couldn’t load a LoRA in an easy way. To load a LoRA into a checkpoint model and produce the same result as A1111’s Stable Diffusion Webui, we needed additional custom code to load the weights, as I provided in this article.

The solution provided in that article works well and is fast, but it requires additional management of the LoRA alpha weight: we need a variable to remember the current LoRA weight α. This is because the LoRA loading code simply multiplies the A and B matrices from the LoRA together:

ΔW = B · A

And then merges the result into the main checkpoint model weight W:

W′ = W + α · (B · A)

To remove the LoRA weights, we need to apply the same update with a negative -α, or recreate the pipeline.
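The merge-and-unmerge bookkeeping described above can be sketched in a few lines. This is a toy NumPy illustration of the idea, not Diffusers code; the shapes and function name are made up for the example.

```python
import numpy as np

def merge_lora(W, A, B, alpha):
    """Merge a LoRA update into a checkpoint weight: W' = W + alpha * (B @ A).

    W: (out, in) checkpoint weight; B: (out, r); A: (r, in); r is the LoRA rank.
    Calling again with -alpha undoes a previous merge of the same magnitude.
    """
    return W + alpha * (B @ A)

# Toy shapes: out=4, in=3, rank r=2 (illustrative only).
rng = np.random.default_rng(0)
W = rng.standard_normal((4, 3))
A = rng.standard_normal((2, 3))
B = rng.standard_normal((4, 2))

W_merged = merge_lora(W, A, B, alpha=0.8)
# This is the bookkeeping burden: we must remember alpha=0.8 ourselves
# to be able to subtract the LoRA back out later.
W_restored = merge_lora(W_merged, A, B, alpha=-0.8)
```

Note that the restored weights only match the originals up to floating-point error, which is another reason recreating the pipeline is sometimes the cleaner reset.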

The Monkey-Patching way to load LoRA

Another way to use LoRA is to patch the code that executes the module forward process, bringing in the LoRA weights at the time the text embeddings and attention scores are calculated.

This is the approach Diffusers’ LoraLoaderMixin takes to LoRA loading. The good part of this approach is that no model weight is updated, so we can easily reset the LoRA and provide a new α to define the LoRA weight.
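The patching idea can be shown with a toy layer. This is a simplified sketch of the technique, not the actual LoraLoaderMixin implementation: the `Linear` class, function names, and shapes below are all invented for illustration.

```python
import numpy as np

class Linear:
    """Minimal stand-in for a frozen model layer (not Diffusers code)."""
    def __init__(self, W):
        self.W = W  # (out, in) weight, never modified by the patch

    def forward(self, x):
        return x @ self.W.T

def patch_lora(layer, A, B, alpha):
    """Monkey-patch forward so the LoRA delta is added at call time.

    layer.W is untouched; changing alpha just means re-patching.
    """
    original_forward = layer.forward

    def forward_with_lora(x):
        # Equivalent to using W + alpha * (B @ A), computed on the fly.
        return original_forward(x) + alpha * (x @ A.T @ B.T)

    layer._original_forward = original_forward  # keep a handle to reset
    layer.forward = forward_with_lora

def unpatch_lora(layer):
    """Reset to the unpatched forward; no weight arithmetic needed."""
    layer.forward = layer._original_forward

# Demo with toy shapes: out=4, in=3, rank r=2.
rng = np.random.default_rng(0)
layer = Linear(rng.standard_normal((4, 3)))
A = rng.standard_normal((2, 3))
B = rng.standard_normal((4, 2))
x = rng.standard_normal((1, 3))

base_out = layer.forward(x)
patch_lora(layer, A, B, alpha=0.8)
lora_out = layer.forward(x)
unpatch_lora(layer)
reset_out = layer.forward(x)
```

Because the stored weight never changes, resetting is exact: removing the LoRA is just restoring the original forward function, with no -α subtraction and no floating-point drift.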
