Note: if you have no GPU, or only a weak one, do not attempt a local install; it will not work in the end. You can still try Stable Diffusion on one of these two sites instead:
(free, slightly weaker results)
Stable Diffusion - a Hugging Face Space by Stability AI: https://huggingface.co/spaces/stabilityai/stable-diffusion
or (paid, but better results)
DreamStudio by Stability AI: https://beta.dreamstudio.ai/dream
This article mainly follows the usage instructions in the GitHub repository.
1. Download the source from GitHub
GitHub - CompVis/stable-diffusion: A latent text-to-image diffusion model: https://github.com/CompVis/stable-diffusion
git clone https://github.com/CompVis/stable-diffusion
2. Download the model weights
Download from: https://huggingface.co/CompVis
Download the version you need (the -original versions are recommended). For example, for the stable-diffusion-v-1-1-original version, click stable-diffusion-v-1-1-original:
Then click sd-v1-1.ckpt to download it (these checkpoint files are typically several GB):
3. Create the ldm environment with Anaconda
Readers unfamiliar with Anaconda can refer to other tutorials; it is not covered again here.
Open a terminal (preferably in the stable-diffusion directory) and run:
conda env create -f environment.yaml
conda activate ldm
If you already have an ldm environment, you can instead update its packages by running:
conda install pytorch torchvision -c pytorch
pip install transformers==4.19.2 diffusers invisible-watermark
pip install -e .
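After installation, a quick sanity check can confirm that the key packages are importable from the activated environment. This is a minimal sketch, not part of the official instructions; the module list is an assumption based on the packages installed above:

```python
import importlib.util

def check_env(modules):
    """Return the subset of modules that cannot be found in the current environment."""
    return [m for m in modules if importlib.util.find_spec(m) is None]

# Packages installed in the steps above (list is illustrative, not exhaustive)
required = ["torch", "torchvision", "transformers", "diffusers"]
missing = check_env(required)
if missing:
    print("Missing packages:", ", ".join(missing))
else:
    print("All core packages found.")
```

If anything is reported missing, re-run the pip/conda commands above inside the ldm environment before proceeding.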
4. Set the model weights location
On Linux, run:
mkdir -p models/ldm/stable-diffusion-v1/
ln -s <path/to/model.ckpt> models/ldm/stable-diffusion-v1/model.ckpt
where <path/to/model.ckpt> is the path where you saved the downloaded weights.
If you prefer not to use the command line, or you are on Windows, you can instead manually create a stable-diffusion-v1 folder under models/ldm, rename the downloaded weights file to model.ckpt, and place it in the stable-diffusion-v1 directory.
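For readers who prefer to script this step, here is a cross-platform sketch (paths are illustrative): it creates the expected folder and symlinks the checkpoint, falling back to a plain copy where symlinks require elevated privileges, as is common on Windows:

```python
import shutil
from pathlib import Path

def install_checkpoint(ckpt_path, repo_root="."):
    """Link (or copy) the downloaded checkpoint to where txt2img.py expects it."""
    target_dir = Path(repo_root) / "models" / "ldm" / "stable-diffusion-v1"
    target_dir.mkdir(parents=True, exist_ok=True)  # mkdir -p equivalent
    target = target_dir / "model.ckpt"
    if target.exists():
        return target
    try:
        target.symlink_to(Path(ckpt_path).resolve())  # ln -s equivalent
    except OSError:
        # Symlink creation may need admin rights on Windows; copy instead
        shutil.copy2(ckpt_path, target)
    return target
```

Call it with the path to your downloaded .ckpt file from inside the stable-diffusion directory, e.g. `install_checkpoint("~/Downloads/sd-v1-1.ckpt")` (expand the path first).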
5. Run Stable Diffusion
In the stable-diffusion directory, run the following (make sure the ldm environment is activated):
python scripts/txt2img.py --prompt "a photograph of an astronaut riding a horse" --plms
其中 --prompt 后面的即是描述性文字,可以根據(jù)需要更改(不要想著生成一些不好的東西,沒有用的)。
一些可以調(diào)節(jié)的參數(shù)如下:文章來源:http://www.zghlxwxcb.cn/news/detail-413138.html
usage: txt2img.py [-h] [--prompt [PROMPT]] [--outdir [OUTDIR]] [--skip_grid] [--skip_save] [--ddim_steps DDIM_STEPS] [--plms] [--laion400m] [--fixed_code] [--ddim_eta DDIM_ETA]
[--n_iter N_ITER] [--H H] [--W W] [--C C] [--f F] [--n_samples N_SAMPLES] [--n_rows N_ROWS] [--scale SCALE] [--from-file FROM_FILE] [--config CONFIG] [--ckpt CKPT]
[--seed SEED] [--precision {full,autocast}]
optional arguments:
-h, --help show this help message and exit
--prompt [PROMPT] the prompt to render
--outdir [OUTDIR] dir to write results to
--skip_grid do not save a grid, only individual samples. Helpful when evaluating lots of samples
--skip_save do not save individual samples. For speed measurements.
--ddim_steps DDIM_STEPS
number of ddim sampling steps
--plms use plms sampling
--laion400m uses the LAION400M model
--fixed_code if enabled, uses the same starting code across samples
--ddim_eta DDIM_ETA ddim eta (eta=0.0 corresponds to deterministic sampling)
--n_iter N_ITER sample this often
--H H image height, in pixel space
--W W image width, in pixel space
--C C latent channels
--f F downsampling factor
--n_samples N_SAMPLES
how many samples to produce for each given prompt. A.k.a. batch size
--n_rows N_ROWS rows in the grid (default: n_samples)
--scale SCALE unconditional guidance scale: eps = eps(x, empty) + scale * (eps(x, cond) - eps(x, empty))
--from-file FROM_FILE
if specified, load prompts from this file
--config CONFIG path to config which constructs model
--ckpt CKPT path to checkpoint of model
--seed SEED the seed (for reproducible sampling)
--precision {full,autocast}
evaluate at this precision
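The --H, --W, --C, and --f flags are related: the model denoises in latent space, so the sampler works on a tensor of C channels at (H/f) by (W/f) resolution. With the v1 defaults (H=W=512, C=4, f=8), the latent is 4 by 64 by 64, which is also why H and W should be multiples of the downsampling factor. A small illustration of that arithmetic (the function name is mine, not part of the repository):

```python
def latent_shape(H=512, W=512, C=4, f=8):
    """Shape of the latent tensor the sampler works in, given pixel-space H and W."""
    if H % f or W % f:
        raise ValueError("H and W must be multiples of the downsampling factor f")
    return (C, H // f, W // f)

print(latent_shape())              # default 512x512 -> (4, 64, 64)
print(latent_shape(H=384, W=640))  # non-square sizes work if divisible by f
```

Larger H and W therefore grow the latent quadratically, which is the main driver of GPU memory use; lowering --n_samples is the usual first fix for out-of-memory errors.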
You can also generate an image from another image by running:
python scripts/img2img.py --prompt "A fantasy landscape, trending on artstation" --init-img <path-to-img.jpg> --strength 0.8
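The --strength flag (0.0 to 1.0) controls how far the result may drift from the input image: img2img encodes the image, adds noise, and then runs roughly strength * ddim_steps denoising steps. A rough sketch of that relationship (this mirrors the t_enc computation in the script as I understand it; exact behavior may differ between versions):

```python
def denoising_steps(strength, ddim_steps=50):
    """Approximate number of denoising steps img2img actually runs on the input."""
    if not 0.0 <= strength <= 1.0:
        raise ValueError("strength must be in [0, 1]")
    return int(strength * ddim_steps)

print(denoising_steps(0.8))  # 40 of 50 steps: large departure from the input image
print(denoising_steps(0.5))  # 25 of 50 steps: stays closer to the input image
```

In practice, values around 0.7 to 0.8 restyle the image heavily, while values below about 0.4 mostly preserve the original composition.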
到了這里,關(guān)于Stable Diffusion 本地部署的文章就介紹完了。如果您還想了解更多內(nèi)容,請(qǐng)?jiān)谟疑辖撬阉鱐OY模板網(wǎng)以前的文章或繼續(xù)瀏覽下面的相關(guān)文章,希望大家以后多多支持TOY模板網(wǎng)!