
[AI Painting] Building an AI Painting Platform with stable-diffusion-webui

This article describes how to build an AI painting platform with stable-diffusion-webui. I hope it is helpful; if anything is wrong or incomplete, corrections and suggestions are welcome.

I. Installing and Configuring Anaconda

Download the installer from the official site https://www.anaconda.com/, install it, and add Anaconda to your PATH environment variable.


Open a command prompt and create a Python virtual environment with the following commands:

conda create -n novelai python=3.10.6
E:\workspace\02_Python\novalai>conda info --envs
# conda environments:
#
base                  *  D:\anaconda3
novelai                  D:\anaconda3\envs\novelai
conda activate novelai

II. Installing CUDA

My GPU is an NVIDIA card, so NVIDIA's developer tools are needed. Go to https://developer.nvidia.com/ and download the CUDA installer that matches your system.


Run the installer and follow the prompts to complete the installation.

After installation, enter the following command in a command window; if it prints the CUDA version, the installation succeeded:

C:\Users\yefuf>nvcc -V
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2022 NVIDIA Corporation
Built on Wed_Sep_21_10:41:10_Pacific_Daylight_Time_2022
Cuda compilation tools, release 11.8, V11.8.89
Build cuda_11.8.r11.8/compiler.31833905_0

III. Installing PyTorch

Go to https://pytorch.org/ and choose the build that matches your machine. Pay particular attention to the compute platform selection: open NVIDIA Control Panel -> Help -> System Information -> Components to check the CUDA version your driver supports; the CUDA platform you select on the PyTorch site must be no newer than that version.

Once the options are selected, the site generates the corresponding install command.

Copy the command and run it in a command window:

conda install pytorch torchvision torchaudio pytorch-cuda=11.6 -c pytorch -c nvidia

If the CUDA version recommended on the PyTorch site is newer than what your GPU driver supports, find the driver version that matches that CUDA release and upgrade the driver first; the mapping between CUDA releases and driver versions is documented at https://docs.nvidia.com/cuda/cuda-toolkit-release-notes/index.html
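
After the install finishes, a quick Python check confirms that PyTorch can see the GPU and which CUDA build it targets (output will of course vary with your setup):

import torch

print(torch.__version__)          # installed PyTorch version
print(torch.version.cuda)         # CUDA version this PyTorch build targets
print(torch.cuda.is_available())  # True if PyTorch can use the GPU
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))  # e.g. the NVIDIA card model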

IV. Installing Git

Go to the Git site https://git-scm.com/, download the installer, and install it.

V. Setting Up stable-diffusion-webui

Go to the project page https://github.com/AUTOMATIC1111/stable-diffusion-webui and clone it with Git:

git clone https://github.com/AUTOMATIC1111/stable-diffusion-webui.git
Cloning into 'stable-diffusion-webui'...
remote: Enumerating objects: 10475, done.
remote: Counting objects: 100% (299/299), done.
remote: Compressing objects: 100% (199/199), done.
remote: Total 10475 (delta 178), reused 199 (delta 100), pack-reused 10176
Receiving objects: 100% (10475/10475), 23.48 MiB | 195.00 KiB/s, done.
Resolving deltas: 100% (7312/7312), done.

Clone the extension repositories as well:

git clone https://github.com/AUTOMATIC1111/stable-diffusion-webui-aesthetic-gradients "extensions/aesthetic-gradients"
Cloning into 'extensions/aesthetic-gradients'...
remote: Enumerating objects: 21, done.
remote: Counting objects: 100% (21/21), done.
remote: Compressing objects: 100% (12/12), done.
remote: Total 21 (delta 3), reused 18 (delta 3), pack-reused 0
Receiving objects: 100% (21/21), 1.09 MiB | 1.34 MiB/s, done.
Resolving deltas: 100% (3/3), done.
git clone https://github.com/yfszzx/stable-diffusion-webui-images-browser "extensions/images-browser"
Cloning into 'extensions/images-browser'...
remote: Enumerating objects: 118, done.
remote: Counting objects: 100% (118/118), done.
remote: Compressing objects: 100% (70/70), done.
remote: Total 118 (delta 42), reused 65 (delta 24), pack-reused 0
Receiving objects: 100% (118/118), 33.01 KiB | 476.00 KiB/s, done.
Resolving deltas: 100% (42/42), done.

After cloning, the aesthetic-gradients and images-browser folders appear under the extensions directory.

Download a model (see https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Dependencies) and place the downloaded .ckpt file in the models/Stable-diffusion folder. The models are large, so a download manager is recommended.
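
Because the checkpoint is several gigabytes, it is easy to end up with a truncated download. A minimal sanity check is to hash the file and compare against the value published wherever you downloaded the model (the path below is an assumption; adjust it to your checkpoint):

import hashlib

# Hypothetical path -- point this at the checkpoint you actually downloaded.
path = "models/Stable-diffusion/sd-v1-4.ckpt"

h = hashlib.sha256()
with open(path, "rb") as f:
    # Read in 8 MB chunks so multi-GB files don't exhaust memory.
    for chunk in iter(lambda: f.read(8 * 1024 * 1024), b""):
        h.update(chunk)
print(h.hexdigest())  # compare with the hash on the model's download page
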
Install the Python dependencies the project requires:

pip install -r requirements.txt

Once the dependencies are installed, run the command below. If all goes well, after the model finishes loading the browser opens http://127.0.0.1:7860/ showing the platform home page:

python launch.py --autolaunch

Go to the platform's Settings page, set the language to Chinese, and restart the program; the UI then displays in Chinese.

In the UI, enter a positive prompt (features you want in the picture) and a negative prompt (features you don't want), then click Generate to start painting automatically.

An image generated from prompts like those is shown above; because the random seed differs between runs, results will vary.
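
If you prefer to script generation instead of clicking through the UI, the webui also exposes an HTTP API when launched with the --api flag (it appears in the options list later in this article). A minimal sketch, assuming the default address and a late-2022 build; the endpoint name and payload fields are taken from that era's API and may differ in other versions:

import base64
import requests

# Assumes the webui was started with:  python launch.py --api
url = "http://127.0.0.1:7860/sdapi/v1/txt2img"
payload = {
    "prompt": "a watercolor landscape, mountains, sunrise",  # positive prompt
    "negative_prompt": "lowres, blurry",                     # negative prompt
    "steps": 20,
}
resp = requests.post(url, json=payload, timeout=600)
resp.raise_for_status()
# The response carries the generated images as base64-encoded PNGs.
with open("output.png", "wb") as f:
    f.write(base64.b64decode(resp.json()["images"][0]))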

VI. How to Write Prompts

For prompt ideas, I recommend the 元素法典 collection at https://docs.qq.com/doc/DWHl3am5Zb05QbGVs, a community-curated set of prompts together with the images they produce, for reference.

VII. Possible Problems

1. GitHub is unreachable or slow

This is usually a DNS resolution problem. Adding entries to the local hosts file bypasses DNS lookup and speeds up access.

Visit https://www.ipaddress.com/ and look up github.com and github.global.ssl.fastly.net separately to get the IP address for each domain.
Open the system hosts file and add the IP-to-domain mappings to it.
The added entries look like this:

140.82.114.4	github.com
199.232.5.194	github.global.ssl.fastly.net

Finally, run ipconfig /flushdns to flush the DNS cache.
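
To confirm the entries took effect, you can resolve both domains from Python and check the printed addresses against the hosts file above:

import socket

# Should print the IPs configured in the hosts file.
for host in ("github.com", "github.global.ssl.fastly.net"):
    print(host, "->", socket.gethostbyname(host))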

2. pip installs are slow or often fail

By default pip uses the official overseas package index, which can be very slow; switching to a domestic mirror usually helps. Commonly used mirrors:

Aliyun      https://mirrors.aliyun.com/pypi/simple/
USTC        https://pypi.mirrors.ustc.edu.cn/simple/
Douban      https://pypi.douban.com/simple/
Tsinghua    https://pypi.tuna.tsinghua.edu.cn/simple/

To use a mirror for a single install, run pip install -i <mirror URL> <package>, for example: pip install -i https://mirrors.aliyun.com/pypi/simple numpy

You can also make a mirror the default by adding a pip configuration file: create a pip.ini file in your user directory.
Write the following into the file:

[global]
timeout = 60000
index-url = https://pypi.tuna.tsinghua.edu.cn/simple
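
Recent versions of pip can also write this default for you from the command line, which avoids editing the file by hand:

pip config set global.index-url https://pypi.tuna.tsinghua.edu.cn/simple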

3. Installing CLIP fails with "Connection was aborted, errno 10053"

The error output:

(novelai) E:\workspace\02_Python\novalai\stable-diffusion-webui>python launch.py
Python 3.10.6 | packaged by conda-forge | (main, Oct 24 2022, 16:02:16) [MSC v.1916 64 bit (AMD64)]
Commit hash: b8f2dfed3c0085f1df359b9dc5b3841ddc2196f0
Installing clip
Traceback (most recent call last):
  File "E:\workspace\02_Python\novalai\stable-diffusion-webui\launch.py", line 251, in <module>
    prepare_enviroment()
  File "E:\workspace\02_Python\novalai\stable-diffusion-webui\launch.py", line 178, in prepare_enviroment
    run_pip(f"install {clip_package}", "clip")
  File "E:\workspace\02_Python\novalai\stable-diffusion-webui\launch.py", line 63, in run_pip
    return run(f'"{python}" -m pip {args} --prefer-binary{index_url_line}', desc=f"Installing {desc}", errdesc=f"Couldn't install {desc}")
  File "E:\workspace\02_Python\novalai\stable-diffusion-webui\launch.py", line 34, in run
    raise RuntimeError(message)
RuntimeError: Couldn't install clip.
Command: "D:\anaconda3\envs\novelai\python.exe" -m pip install git+https://github.com/openai/CLIP.git@d50d76daa670286dd6cacf3bcd80b5e4823fc8e1 --prefer-binary
Error code: 1
stdout: Looking in indexes: https://pypi.tuna.tsinghua.edu.cn/simple
Collecting git+https://github.com/openai/CLIP.git@d50d76daa670286dd6cacf3bcd80b5e4823fc8e1
  Cloning https://github.com/openai/CLIP.git (to revision d50d76daa670286dd6cacf3bcd80b5e4823fc8e1) to c:\users\yefuf\appdata\local\temp\pip-req-build-f8w7kbzg

stderr:   Running command git clone --filter=blob:none --quiet https://github.com/openai/CLIP.git 'C:\Users\yefuf\AppData\Local\Temp\pip-req-build-f8w7kbzg'
  fatal: unable to access 'https://github.com/openai/CLIP.git/': OpenSSL SSL_read: Connection was aborted, errno 10053
  error: subprocess-exited-with-error

  git clone --filter=blob:none --quiet https://github.com/openai/CLIP.git 'C:\Users\yefuf\AppData\Local\Temp\pip-req-build-f8w7kbzg' did not run successfully.
  exit code: 128

  See above for output.

  note: This error originates from a subprocess, and is likely not a problem with pip.
error: subprocess-exited-with-error

git clone --filter=blob:none --quiet https://github.com/openai/CLIP.git 'C:\Users\yefuf\AppData\Local\Temp\pip-req-build-f8w7kbzg' did not run successfully.
exit code: 128

See above for output.

note: This error originates from a subprocess, and is likely not a problem with pip.

Visiting the CLIP project's GitHub page shows that the package can be installed directly with the following commands, which works around the failure:

pip install ftfy regex tqdm
pip install git+https://github.com/openai/CLIP.git
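
If the manual install succeeds, a quick import confirms the package is usable (available_models() is part of the openai/CLIP API):

import clip

# Prints the model identifiers the package can download, e.g. ['RN50', ..., 'ViT-B/32'].
print(clip.available_models())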

4. Launch fails with "Connection was reset in connection to github.com"

The error output:

(novelai) E:\workspace\02_Python\novalai\stable-diffusion-webui>python launch.py
Python 3.10.6 | packaged by conda-forge | (main, Oct 24 2022, 16:02:16) [MSC v.1916 64 bit (AMD64)]
Commit hash: b8f2dfed3c0085f1df359b9dc5b3841ddc2196f0
Cloning Stable Diffusion into repositories\stable-diffusion...
Cloning Taming Transformers into repositories\taming-transformers...
Traceback (most recent call last):
  File "E:\workspace\02_Python\novalai\stable-diffusion-webui\launch.py", line 251, in <module>
    prepare_enviroment()
  File "E:\workspace\02_Python\novalai\stable-diffusion-webui\launch.py", line 201, in prepare_enviroment
    git_clone(taming_transformers_repo, repo_dir('taming-transformers'), "Taming Transformers", taming_transformers_commit_hash)
  File "E:\workspace\02_Python\novalai\stable-diffusion-webui\launch.py", line 85, in git_clone
    run(f'"{git}" clone "{url}" "{dir}"', f"Cloning {name} into {dir}...", f"Couldn't clone {name}")
  File "E:\workspace\02_Python\novalai\stable-diffusion-webui\launch.py", line 34, in run
    raise RuntimeError(message)
RuntimeError: Couldn't clone Taming Transformers.
Command: "git" clone "https://github.com/CompVis/taming-transformers.git" "repositories\taming-transformers"
Error code: 128
stdout: <empty>
stderr: Cloning into 'repositories\taming-transformers'...
fatal: unable to access 'https://github.com/CompVis/taming-transformers.git/': OpenSSL SSL_connect: Connection was reset in connection to github.com:443

Entering the following commands in a command window and relaunching can help, but in practice cloning still fails fairly often even after this:

git config --global http.postBuffer 524288000
git config --global http.sslVerify "false"

Looking at the code in launch.py, the program checks its dependency repositories at startup and clones any that are missing:

def prepare_enviroment():
    torch_command = os.environ.get('TORCH_COMMAND', "pip install torch==1.12.1+cu113 torchvision==0.13.1+cu113 --extra-index-url https://download.pytorch.org/whl/cu113")
    requirements_file = os.environ.get('REQS_FILE', "requirements_versions.txt")
    commandline_args = os.environ.get('COMMANDLINE_ARGS', "")

    gfpgan_package = os.environ.get('GFPGAN_PACKAGE', "git+https://github.com/TencentARC/GFPGAN.git@8d2447a2d918f8eba5a4a01463fd48e45126a379")
    clip_package = os.environ.get('CLIP_PACKAGE', "git+https://github.com/openai/CLIP.git@d50d76daa670286dd6cacf3bcd80b5e4823fc8e1")
    deepdanbooru_package = os.environ.get('DEEPDANBOORU_PACKAGE', "git+https://github.com/KichangKim/DeepDanbooru.git@d91a2963bf87c6a770d74894667e9ffa9f6de7ff")

    xformers_windows_package = os.environ.get('XFORMERS_WINDOWS_PACKAGE', 'https://github.com/C43H66N12O12S2/stable-diffusion-webui/releases/download/f/xformers-0.0.14.dev0-cp310-cp310-win_amd64.whl')

    stable_diffusion_repo = os.environ.get('STABLE_DIFFUSION_REPO', "https://github.com/CompVis/stable-diffusion.git")
    taming_transformers_repo = os.environ.get('TAMING_REANSFORMERS_REPO', "https://github.com/CompVis/taming-transformers.git")
    k_diffusion_repo = os.environ.get('K_DIFFUSION_REPO', 'https://github.com/crowsonkb/k-diffusion.git')
    codeformer_repo = os.environ.get('CODEFORMET_REPO', 'https://github.com/sczhou/CodeFormer.git')
    blip_repo = os.environ.get('BLIP_REPO', 'https://github.com/salesforce/BLIP.git')

    stable_diffusion_commit_hash = os.environ.get('STABLE_DIFFUSION_COMMIT_HASH', "69ae4b35e0a0f6ee1af8bb9a5d0016ccb27e36dc")
    taming_transformers_commit_hash = os.environ.get('TAMING_TRANSFORMERS_COMMIT_HASH', "24268930bf1dce879235a7fddd0b2355b84d7ea6")
    k_diffusion_commit_hash = os.environ.get('K_DIFFUSION_COMMIT_HASH', "f4e99857772fc3a126ba886aadf795a332774878")
    codeformer_commit_hash = os.environ.get('CODEFORMER_COMMIT_HASH', "c5b4593074ba6214284d6acd5f1719b6c5d739af")
    blip_commit_hash = os.environ.get('BLIP_COMMIT_HASH', "48211a1594f1321b00f14c9f7a5b4813144b2fb9")

So we can open Git Bash and run the clone commands ourselves, fetching the repositories in advance:

git clone https://github.com/CompVis/taming-transformers.git "repositories\taming-transformers"

git clone https://github.com/crowsonkb/k-diffusion.git "repositories\k-diffusion"

git clone https://github.com/sczhou/CodeFormer.git "repositories\CodeFormer"

git clone https://github.com/salesforce/BLIP.git "repositories\BLIP"

After cloning, the four project folders appear under the repositories directory.
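
Note that launch.py also checks out the specific commit hashes listed in prepare_enviroment() above; if that automatic fetch fails as well, you can check them out by hand, shown here for taming-transformers as an example (repeat with the matching hash for the others):

git -C "repositories\taming-transformers" checkout 24268930bf1dce879235a7fddd0b2355b84d7ea6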

5. Launch fails with "CUDA out of memory"

The error output:

(novelai) E:\workspace\02_Python\novalai\stable-diffusion-webui>python launch.py
Python 3.10.6 | packaged by conda-forge | (main, Oct 24 2022, 16:02:16) [MSC v.1916 64 bit (AMD64)]
Commit hash: b8f2dfed3c0085f1df359b9dc5b3841ddc2196f0
Fetching updates for BLIP...
Checking out commit for BLIP with hash: 48211a1594f1321b00f14c9f7a5b4813144b2fb9...
Installing requirements for CodeFormer
Installing requirements for Web UI
Launching Web UI with arguments:
Moving sd-v1-4.ckpt from E:\workspace\02_Python\novalai\stable-diffusion-webui\models to E:\workspace\02_Python\novalai\stable-diffusion-webui\models\Stable-diffusion.
LatentDiffusion: Running in eps-prediction mode
DiffusionWrapper has 859.52 M params.
making attention of type 'vanilla' with 512 in_channels
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
making attention of type 'vanilla' with 512 in_channels
Downloading: 100%|██████████████████████████████████████████████████████████████████| 939k/939k [00:00<00:00, 1.26MB/s]
Downloading: 100%|███████████████████████████████████████████████████████████████████| 512k/512k [00:01<00:00, 344kB/s]
Downloading: 100%|████████████████████████████████████████████████████████████████████████████| 389/389 [00:00<?, ?B/s]
Downloading: 100%|████████████████████████████████████████████████████████████████████████████| 905/905 [00:00<?, ?B/s]
Downloading: 100%|████████████████████████████████████████████████████████████████████████| 4.41k/4.41k [00:00<?, ?B/s]
Downloading: 100%|████████████████████████████████████████████████████████████████| 1.59G/1.59G [03:56<00:00, 7.23MB/s]
Loading weights [7460a6fa] from E:\workspace\02_Python\novalai\stable-diffusion-webui\models\Stable-diffusion\sd-v1-4.ckpt
Global Step: 470000
Traceback (most recent call last):
  File "E:\workspace\02_Python\novalai\stable-diffusion-webui\launch.py", line 252, in <module>
    start()
  File "E:\workspace\02_Python\novalai\stable-diffusion-webui\launch.py", line 247, in start
    webui.webui()
  File "E:\workspace\02_Python\novalai\stable-diffusion-webui\webui.py", line 148, in webui
    initialize()
  File "E:\workspace\02_Python\novalai\stable-diffusion-webui\webui.py", line 83, in initialize
    modules.sd_models.load_model()
  File "E:\workspace\02_Python\novalai\stable-diffusion-webui\modules\sd_models.py", line 252, in load_model
    sd_model.to(shared.device)
  File "D:\anaconda3\envs\novelai\lib\site-packages\pytorch_lightning\core\mixins\device_dtype_mixin.py", line 113, in to
    return super().to(*args, **kwargs)
  File "D:\anaconda3\envs\novelai\lib\site-packages\torch\nn\modules\module.py", line 987, in to
    return self._apply(convert)
  File "D:\anaconda3\envs\novelai\lib\site-packages\torch\nn\modules\module.py", line 639, in _apply
    module._apply(fn)
  File "D:\anaconda3\envs\novelai\lib\site-packages\torch\nn\modules\module.py", line 639, in _apply
    module._apply(fn)
  File "D:\anaconda3\envs\novelai\lib\site-packages\torch\nn\modules\module.py", line 639, in _apply
    module._apply(fn)
  [Previous line repeated 2 more times]
  File "D:\anaconda3\envs\novelai\lib\site-packages\torch\nn\modules\module.py", line 662, in _apply
    param_applied = fn(param)
  File "D:\anaconda3\envs\novelai\lib\site-packages\torch\nn\modules\module.py", line 985, in convert
    return t.to(device, dtype if t.is_floating_point() or t.is_complex() else None, non_blocking)
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 20.00 MiB (GPU 0; 2.00 GiB total capacity; 1.68 GiB already allocated; 0 bytes free; 1.72 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation.  See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF

Following the hint in the error message, I first tried changing the PyTorch allocator settings with the command below, but the error persisted:

set PYTORCH_CUDA_ALLOC_CONF=garbage_collection_threshold:0.6,max_split_size_mb:128

I then tried adding with torch.no_grad() so that no memory would be allocated for gradient tracking during the load; still the same error.
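
For reference, that attempt amounts to wrapping the model-to-device step in a no-grad context. A minimal sketch of the pattern with a stand-in module, not the project's actual code:

import torch

model = torch.nn.Linear(4, 4)  # stand-in for the diffusion model
device = "cuda" if torch.cuda.is_available() else "cpu"

with torch.no_grad():
    # Operations inside this block are not tracked by autograd,
    # so no graph state is recorded while moving the model.
    model = model.to(device)
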
Since the message points to GPU memory exhaustion, first check the card's VRAM via Control Panel -> Administrative Tools -> System Information.
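
You can also read the total VRAM directly from PyTorch:

import torch

if torch.cuda.is_available():
    props = torch.cuda.get_device_properties(0)
    # total_memory is in bytes; a 2 GB card like mine prints roughly 2.0 here.
    print(props.name, round(props.total_memory / 1024**3, 1), "GB")
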
The project recommends at least 4 GB of VRAM, and my card has only 2 GB, so the GPU clearly falls short. Checking the project's command-line options, however, shows it supports CPU computation via the --use-cpu option:

(novelai) E:\workspace\02_Python\novalai\stable-diffusion-webui>python launch.py -h
Python 3.10.6 | packaged by conda-forge | (main, Oct 24 2022, 16:02:16) [MSC v.1916 64 bit (AMD64)]
Commit hash: b8f2dfed3c0085f1df359b9dc5b3841ddc2196f0
Installing requirements for Web UI
Launching Web UI with arguments: -h
usage: launch.py [-h] [--config CONFIG] [--ckpt CKPT] [--ckpt-dir CKPT_DIR] [--gfpgan-dir GFPGAN_DIR]
                 [--gfpgan-model GFPGAN_MODEL] [--no-half] [--no-half-vae] [--no-progressbar-hiding]
                 [--max-batch-count MAX_BATCH_COUNT] [--embeddings-dir EMBEDDINGS_DIR]
                 [--hypernetwork-dir HYPERNETWORK_DIR] [--localizations-dir LOCALIZATIONS_DIR] [--allow-code]
                 [--medvram] [--lowvram] [--lowram] [--always-batch-cond-uncond] [--unload-gfpgan]
                 [--precision {full,autocast}] [--share] [--ngrok NGROK] [--ngrok-region NGROK_REGION]
                 [--enable-insecure-extension-access] [--codeformer-models-path CODEFORMER_MODELS_PATH]
                 [--gfpgan-models-path GFPGAN_MODELS_PATH] [--esrgan-models-path ESRGAN_MODELS_PATH]
                 [--bsrgan-models-path BSRGAN_MODELS_PATH] [--realesrgan-models-path REALESRGAN_MODELS_PATH]
                 [--scunet-models-path SCUNET_MODELS_PATH] [--swinir-models-path SWINIR_MODELS_PATH]
                 [--ldsr-models-path LDSR_MODELS_PATH] [--clip-models-path CLIP_MODELS_PATH] [--xformers]
                 [--force-enable-xformers] [--deepdanbooru] [--opt-split-attention] [--opt-split-attention-invokeai]
                 [--opt-split-attention-v1] [--disable-opt-split-attention]
                 [--use-cpu {all,sd,interrogate,gfpgan,swinir,esrgan,scunet,codeformer} [{all,sd,interrogate,gfpgan,swinir,esrgan,scunet,codeformer} ...]]
                 [--listen] [--port PORT] [--show-negative-prompt] [--ui-config-file UI_CONFIG_FILE]
                 [--hide-ui-dir-config] [--freeze-settings] [--ui-settings-file UI_SETTINGS_FILE] [--gradio-debug]
                 [--gradio-auth GRADIO_AUTH] [--gradio-img2img-tool {color-sketch,editor}] [--opt-channelslast]
                 [--styles-file STYLES_FILE] [--autolaunch] [--theme THEME] [--use-textbox-seed]
                 [--disable-console-progressbars] [--enable-console-prompts] [--vae-path VAE_PATH]
                 [--disable-safe-unpickle] [--api] [--nowebui] [--ui-debug-mode] [--device-id DEVICE_ID]
                 [--administrator] [--cors-allow-origins CORS_ALLOW_ORIGINS] [--tls-keyfile TLS_KEYFILE]
                 [--tls-certfile TLS_CERTFILE] [--server-name SERVER_NAME]

options:
  -h, --help            show this help message and exit
  --config CONFIG       path to config which constructs model
  --ckpt CKPT           path to checkpoint of stable diffusion model; if specified, this checkpoint will be added to
                        the list of checkpoints and loaded
  --ckpt-dir CKPT_DIR   Path to directory with stable diffusion checkpoints
  --gfpgan-dir GFPGAN_DIR
                        GFPGAN directory
  --gfpgan-model GFPGAN_MODEL
                        GFPGAN model file name
  --no-half             do not switch the model to 16-bit floats
  --no-half-vae         do not switch the VAE model to 16-bit floats
  --no-progressbar-hiding
                        do not hide progressbar in gradio UI (we hide it because it slows down ML if you have hardware
                        acceleration in browser)
  --max-batch-count MAX_BATCH_COUNT
                        maximum batch count value for the UI
  --embeddings-dir EMBEDDINGS_DIR
                        embeddings directory for textual inversion (default: embeddings)
  --hypernetwork-dir HYPERNETWORK_DIR
                        hypernetwork directory
  --localizations-dir LOCALIZATIONS_DIR
                        localizations directory
  --allow-code          allow custom script execution from webui
  --medvram             enable stable diffusion model optimizations for sacrificing a little speed for low VRM usage
  --lowvram             enable stable diffusion model optimizations for sacrificing a lot of speed for very low VRM
                        usage
  --lowram              load stable diffusion checkpoint weights to VRAM instead of RAM
  --always-batch-cond-uncond
                        disables cond/uncond batching that is enabled to save memory with --medvram or --lowvram
  --unload-gfpgan       does not do anything.
  --precision {full,autocast}
                        evaluate at this precision
  --share               use share=True for gradio and make the UI accessible through their site
  --ngrok NGROK         ngrok authtoken, alternative to gradio --share
  --ngrok-region NGROK_REGION
                        The region in which ngrok should start.
  --enable-insecure-extension-access
                        enable extensions tab regardless of other options
  --codeformer-models-path CODEFORMER_MODELS_PATH
                        Path to directory with codeformer model file(s).
  --gfpgan-models-path GFPGAN_MODELS_PATH
                        Path to directory with GFPGAN model file(s).
  --esrgan-models-path ESRGAN_MODELS_PATH
                        Path to directory with ESRGAN model file(s).
  --bsrgan-models-path BSRGAN_MODELS_PATH
                        Path to directory with BSRGAN model file(s).
  --realesrgan-models-path REALESRGAN_MODELS_PATH
                        Path to directory with RealESRGAN model file(s).
  --scunet-models-path SCUNET_MODELS_PATH
                        Path to directory with ScuNET model file(s).
  --swinir-models-path SWINIR_MODELS_PATH
                        Path to directory with SwinIR model file(s).
  --ldsr-models-path LDSR_MODELS_PATH
                        Path to directory with LDSR model file(s).
  --clip-models-path CLIP_MODELS_PATH
                        Path to directory with CLIP model file(s).
  --xformers            enable xformers for cross attention layers
  --force-enable-xformers
                        enable xformers for cross attention layers regardless of whether the checking code thinks you
                        can run it; do not make bug reports if this fails to work
  --deepdanbooru        enable deepdanbooru interrogator
  --opt-split-attention
                        force-enables Doggettx's cross-attention layer optimization. By default, it's on for torch
                        cuda.
  --opt-split-attention-invokeai
                        force-enables InvokeAI's cross-attention layer optimization. By default, it's on when cuda is
                        unavailable.
  --opt-split-attention-v1
                        enable older version of split attention optimization that does not consume all the VRAM it can
                        find
  --disable-opt-split-attention
                        force-disables cross-attention layer optimization
  --use-cpu {all,sd,interrogate,gfpgan,swinir,esrgan,scunet,codeformer} [{all,sd,interrogate,gfpgan,swinir,esrgan,scunet,codeformer} ...]
                        use CPU as torch device for specified modules
  --listen              launch gradio with 0.0.0.0 as server name, allowing to respond to network requests
  --port PORT           launch gradio with given server port, you need root/admin rights for ports < 1024, defaults to
                        7860 if available
  --show-negative-prompt
                        does not do anything
  --ui-config-file UI_CONFIG_FILE
                        filename to use for ui configuration
  --hide-ui-dir-config  hide directory configuration from webui
  --freeze-settings     disable editing settings
  --ui-settings-file UI_SETTINGS_FILE
                        filename to use for ui settings
  --gradio-debug        launch gradio with --debug option
  --gradio-auth GRADIO_AUTH
                        set gradio authentication like "username:password"; or comma-delimit multiple like
                        "u1:p1,u2:p2,u3:p3"
  --gradio-img2img-tool {color-sketch,editor}
                        gradio image uploader tool: can be either editor for ctopping, or color-sketch for drawing
  --opt-channelslast    change memory type for stable diffusion to channels last
  --styles-file STYLES_FILE
                        filename to use for styles
  --autolaunch          open the webui URL in the system's default browser upon launch
  --theme THEME         launches the UI with light or dark theme
  --use-textbox-seed    use textbox for seeds in UI (no up/down, but possible to input long seeds)
  --disable-console-progressbars
                        do not output progressbars to console
  --enable-console-prompts
                        print prompts to console when generating with txt2img and img2img
  --vae-path VAE_PATH   Path to Variational Autoencoders model
  --disable-safe-unpickle
                        disable checking pytorch models for malicious code
  --api                 use api=True to launch the api with the webui
  --nowebui             use api=True to launch the api instead of the webui
  --ui-debug-mode       Don't load model to quickly launch UI
  --device-id DEVICE_ID
                        Select the default CUDA device to use (export CUDA_VISIBLE_DEVICES=0,1,etc might be needed
                        before)
  --administrator       Administrator rights
  --cors-allow-origins CORS_ALLOW_ORIGINS
                        Allowed CORS origins
  --tls-keyfile TLS_KEYFILE
                        Partially enables TLS, requires --tls-certfile to fully function
  --tls-certfile TLS_CERTFILE
                        Partially enables TLS, requires --tls-keyfile to fully function
  --server-name SERVER_NAME
                        Sets hostname of server

I then tried the following arguments: --use-cpu all runs every module on the CPU, while --lowram and --always-batch-cond-uncond select low-memory behavior. The program launches successfully:

(novelai) E:\workspace\02_Python\novalai\stable-diffusion-webui>python launch.py --lowram --always-batch-cond-uncond --use-cpu all
Python 3.10.6 | packaged by conda-forge | (main, Oct 24 2022, 16:02:16) [MSC v.1916 64 bit (AMD64)]
Commit hash: b8f2dfed3c0085f1df359b9dc5b3841ddc2196f0
Installing requirements for Web UI
Launching Web UI with arguments: --lowram --always-batch-cond-uncond --use-cpu all
Warning: caught exception 'Expected a cuda device, but got: cpu', memory monitor disabled
LatentDiffusion: Running in eps-prediction mode
DiffusionWrapper has 859.52 M params.
making attention of type 'vanilla' with 512 in_channels
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
making attention of type 'vanilla' with 512 in_channels
Loading weights [7460a6fa] from E:\workspace\02_Python\novalai\stable-diffusion-webui\models\Stable-diffusion\sd-v1-4.ckpt
Global Step: 470000
Applying cross attention optimization (Doggettx).
Model loaded.
Loaded a total of 0 textual inversion embeddings.
Embeddings:
Running on local URL:  http://127.0.0.1:7860

To create a public link, set `share=True` in `launch()`.

However, generation then fails with RuntimeError: "LayerNormKernelImpl" not implemented for 'Half'. And if you follow the fix commonly suggested online, replacing half() with float() throughout the project, you run into device-mismatch errors instead.

Traceback (most recent call last):
  File "E:\workspace\02_Python\novalai\stable-diffusion-webui\modules\ui.py", line 185, in f
    res = list(func(*args, **kwargs))
  File "E:\workspace\02_Python\novalai\stable-diffusion-webui\webui.py", line 57, in f
    res = func(*args, **kwargs)
  File "E:\workspace\02_Python\novalai\stable-diffusion-webui\modules\txt2img.py", line 48, in txt2img
    processed = process_images(p)
  File "E:\workspace\02_Python\novalai\stable-diffusion-webui\modules\processing.py", line 423, in process_images
    res = process_images_inner(p)
  File "E:\workspace\02_Python\novalai\stable-diffusion-webui\modules\processing.py", line 508, in process_images_inner
    uc = prompt_parser.get_learned_conditioning(shared.sd_model, len(prompts) * [p.negative_prompt], p.steps)
  File "E:\workspace\02_Python\novalai\stable-diffusion-webui\modules\prompt_parser.py", line 138, in get_learned_conditioning
    conds = model.get_learned_conditioning(texts)
  File "E:\workspace\02_Python\novalai\stable-diffusion-webui\repositories\stable-diffusion\ldm\models\diffusion\ddpm.py", line 558, in get_learned_conditioning
    c = self.cond_stage_model(c)
  File "D:\anaconda3\envs\novelai\lib\site-packages\torch\nn\modules\module.py", line 1190, in _call_impl
    return forward_call(*input, **kwargs)
  File "E:\workspace\02_Python\novalai\stable-diffusion-webui\modules\sd_hijack.py", line 338, in forward
    z1 = self.process_tokens(tokens, multipliers)
  File "E:\workspace\02_Python\novalai\stable-diffusion-webui\extensions\aesthetic-gradients\aesthetic_clip.py", line 202, in __call__
    z = self.process_tokens(remade_batch_tokens, multipliers)
  File "E:\workspace\02_Python\novalai\stable-diffusion-webui\modules\sd_hijack.py", line 353, in process_tokens
    outputs = self.wrapped.transformer(input_ids=tokens, output_hidden_states=-opts.CLIP_stop_at_last_layers)
  File "D:\anaconda3\envs\novelai\lib\site-packages\torch\nn\modules\module.py", line 1190, in _call_impl
    return forward_call(*input, **kwargs)
  File "D:\anaconda3\envs\novelai\lib\site-packages\transformers\models\clip\modeling_clip.py", line 722, in forward
    return self.text_model(
  File "D:\anaconda3\envs\novelai\lib\site-packages\torch\nn\modules\module.py", line 1190, in _call_impl
    return forward_call(*input, **kwargs)
  File "D:\anaconda3\envs\novelai\lib\site-packages\transformers\models\clip\modeling_clip.py", line 643, in forward
    encoder_outputs = self.encoder(
  File "D:\anaconda3\envs\novelai\lib\site-packages\torch\nn\modules\module.py", line 1190, in _call_impl
    return forward_call(*input, **kwargs)
  File "D:\anaconda3\envs\novelai\lib\site-packages\transformers\models\clip\modeling_clip.py", line 574, in forward
    layer_outputs = encoder_layer(
  File "D:\anaconda3\envs\novelai\lib\site-packages\torch\nn\modules\module.py", line 1190, in _call_impl
    return forward_call(*input, **kwargs)
  File "D:\anaconda3\envs\novelai\lib\site-packages\transformers\models\clip\modeling_clip.py", line 316, in forward
    hidden_states = self.layer_norm1(hidden_states)
  File "D:\anaconda3\envs\novelai\lib\site-packages\torch\nn\modules\module.py", line 1190, in _call_impl
    return forward_call(*input, **kwargs)
  File "D:\anaconda3\envs\novelai\lib\site-packages\torch\nn\modules\normalization.py", line 190, in forward
    return F.layer_norm(
  File "D:\anaconda3\envs\novelai\lib\site-packages\torch\nn\functional.py", line 2515, in layer_norm
    return torch.layer_norm(input, normalized_shape, weight, bias, eps, torch.backends.cudnn.enabled)
RuntimeError: "LayerNormKernelImpl" not implemented for 'Half'

Since --use-cpu can target individual modules, I tried running some modules on the CPU and keeping the rest on the GPU within the available VRAM. With the final arguments below, the project generates images successfully.

This setup is very slow, however: a typical image takes about 5-6 minutes, and heavier settings can take hours. If possible, upgrade your GPU or rent a cloud server for much better results.


(novelai) E:\workspace\02_Python\novalai\stable-diffusion-webui>python launch.py --lowram --always-batch-cond-uncond  --precision full --no-half --opt-split-attention-v1 --use-cpu sd --autolaunch
Python 3.10.6 | packaged by conda-forge | (main, Oct 24 2022, 16:02:16) [MSC v.1916 64 bit (AMD64)]
Commit hash: b8f2dfed3c0085f1df359b9dc5b3841ddc2196f0
Installing requirements for Web UI
Launching Web UI with arguments: --lowram --always-batch-cond-uncond --precision full --no-half --opt-split-attention-v1 --use-cpu sd
Warning: caught exception 'Expected a cuda device, but got: cpu', memory monitor disabled
LatentDiffusion: Running in eps-prediction mode
DiffusionWrapper has 859.52 M params.
making attention of type 'vanilla' with 512 in_channels
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
making attention of type 'vanilla' with 512 in_channels
Loading weights [7460a6fa] from E:\workspace\02_Python\novalai\stable-diffusion-webui\models\Stable-diffusion\sd-v1-4.ckpt
Global Step: 470000
Applying v1 cross attention optimization.
Model loaded.
Loaded a total of 0 textual inversion embeddings.
Embeddings:
Running on local URL:  http://127.0.0.1:7860

To create a public link, set `share=True` in `launch()`.
100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 20/20 [06:30<00:00, 19.50s/it]
Total progress: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████| 20/20 [06:10<00:00, 18.51s/it]
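
For reference, the progress bar above matches that estimate: at roughly 19.5 s per sampling step and 20 steps per image, a single image takes about 20 × 19.5 s ≈ 390 s, or around 6.5 minutes.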


Author: 墨葉扶風(fēng) (http://blog.csdn.net/yefufeng)
