


Hi, so I've got it working on a 7900 XTX but idk if I did something wrong.

I'm running AbyssOrangeMix2_sfw (tried both safetensors and ckpt) with its vae.pt. I can't get anything higher than 640x512 (if I'm lucky, more often 512x512) to run fast on the first run; trying higher resolutions gives me either a runtime error or a random error (VRAM-related I think, but I couldn't reproduce it), and sometimes it works but at 10 s/it or slower. A second run with the same prompt can be as slow as 11 s/it (it starts at 2 s/it and gets slower over time).

Seems like when it's slow my GPU is drawing around 200-250 W with the core clock maxed out around 3000 MHz; when it's fast it's running at 350 W and 2600 MHz.

Runtime error here (trying to run 1024x1024):

  File "G:\SD\WINAMD\stable-diffusion-webui-directml-master\modules\call_queue.py", line 56, in f
  File "G:\SD\WINAMD\stable-diffusion-webui-directml-master\modules\call_queue.py", line 37, in f
  File "G:\SD\WINAMD\stable-diffusion-webui-directml-master\modules\txt2img.py", line 56, in txt2img
  File "G:\SD\WINAMD\stable-diffusion-webui-directml-master\modules\processing.py", line 486, in process_images
  File "G:\SD\WINAMD\stable-diffusion-webui-directml-master\modules\processing.py", line 628, in process_images_inner
    samples_ddim = p.sample(conditioning=c, unconditional_conditioning=uc, seeds=seeds, subseeds=subseeds, subseed_strength=p.subseed_strength, prompts=prompts)
  File "G:\SD\WINAMD\stable-diffusion-webui-directml-master\modules\processing.py", line 828, in sample
    samples = self.sampler.sample(self, x, conditioning, unconditional_conditioning, image_conditioning=self.txt2img_image_conditioning(x))
  File "G:\SD\WINAMD\stable-diffusion-webui-directml-master\modules\sd_samplers_kdiffusion.py", line 323, in sample
    samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args=...))
  File "G:\SD\WINAMD\stable-diffusion-webui-directml-master\venv\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
  File "G:\SD\WINAMD\stable-diffusion-webui-directml-master\repositories\k-diffusion\k_diffusion\external.py", line 112, in forward
    eps = self.get_eps(input * c_in, self.sigma_to_t(sigma), **kwargs)
  File "G:\SD\WINAMD\stable-diffusion-webui-directml-master\repositories\k-diffusion\k_diffusion\external.py", line 138, in get_eps
    return self.inner_model.apply_model(*args, **kwargs)
  File "G:\SD\WINAMD\stable-diffusion-webui-directml-master\modules\sd_hijack_utils.py", line 17, in <lambda>
    setattr(resolved_obj, func_path, lambda *args, **kwargs: self(*args, **kwargs))
  File "G:\SD\WINAMD\stable-diffusion-webui-directml-master\modules\sd_hijack_utils.py", line 28, in __call__
  File "G:\SD\WINAMD\stable-diffusion-webui-directml-master\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 858, in apply_model
  File "G:\SD\WINAMD\stable-diffusion-webui-directml-master\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 1335, in forward

I've had 4 people test this on RX 6000 series cards, and 2 of them had to copy these GitHub repos into the same folder as the other 2 to get it working (one of them had to be named stable-diffusion-stability-ai), because the install would error out for them and fail to complete. Once they did that they were able to run the webui with no issues.

Based on the feedback I received, there appears to be a memory leak using the above method for the people that had to do it that way: the bigger the batch size, the more it appears to cause a memory leak with each subsequent image after the first. It seems that for all 4 people, anything more than a batch size of 1 maxes out their GPU memory or causes issues. Idk if the other people have that issue, the ones that used the normal method, I mean the people that didn't have to do the method I described here. I will update with more info once they message back.
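If anyone wants to sanity-check that per-image growth, below is a minimal sketch of the kind of logging I'd use. It assumes psutil is installed; check_batch and the generate_image callable you pass in are placeholders, not anything from webui itself. It only measures host RAM for the process, so dedicated GPU memory still has to be watched separately (Task Manager or a vendor tool).

```python
# Minimal sketch (not from the webui codebase): log resident memory after each
# image to see whether usage keeps climbing with batch size. psutil is assumed
# to be installed; check_batch and the generate_image callable are placeholders.
# This only measures host RAM for the current process - dedicated GPU memory
# still needs to be watched in Task Manager or a vendor tool.
import gc
import psutil

def check_batch(generate_image, batch_size: int) -> None:
    proc = psutil.Process()                      # current process
    baseline = proc.memory_info().rss
    for i in range(batch_size):
        generate_image()                         # your actual per-image call
        gc.collect()                             # rule out not-yet-collected garbage
        delta_mb = (proc.memory_info().rss - baseline) / (1024 ** 2)
        print(f"image {i + 1}: {delta_mb:+.1f} MB over baseline")
```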

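For the failed-install case above, here's a rough sketch of doing the clone by hand into webui's repositories folder, under the exact folder name the launcher expects. The URL is the one stock webui's launch.py uses for that repo; treat it as an assumption and check your fork's launch.py, and adjust WEBUI_ROOT to your install path.

```python
# Rough sketch of manually cloning the repo that failed to install into webui's
# repositories folder, under the exact folder name the launcher looks for.
# Assumptions: the upstream URL below matches what your fork's launch.py uses,
# and WEBUI_ROOT points at your install.
import subprocess
from pathlib import Path

WEBUI_ROOT = Path(r"G:\SD\WINAMD\stable-diffusion-webui-directml-master")
REPOS = WEBUI_ROOT / "repositories"

target = REPOS / "stable-diffusion-stability-ai"   # name must match exactly
url = "https://github.com/Stability-AI/stablediffusion.git"

if not target.exists():
    REPOS.mkdir(parents=True, exist_ok=True)
    subprocess.run(["git", "clone", url, str(target)], check=True)
else:
    print(f"{target} already exists, nothing to do")
```

The same pattern works for whichever other repos the launcher fails to fetch; their expected folder names and URLs are listed in launch.py.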