Table of Contents
Preface
Prerequisites
Git
Python 3.9
CMake
Downloading the Models
Merging the Models
Deploying the Model
Preface
Perhaps you, like me, want to try deploying a large language model but are held back by hardware costs. Fortunately, the community has produced plenty of quantized models, so ordinary users can have a taste too. The model in this guide can be deployed on a laptop; just make sure your machine has at least 16 GB of RAM.
Open-source repository: GitHub - ymcui/Chinese-LLaMA-Alpaca: 中文LLaMA&Alpaca大語言模型+本地CPU部署 (Chinese LLaMA & Alpaca LLMs)
The repository provides guides for Linux and macOS; if you're on an Apple M1, you can also refer to this article:
https://gist.github.com/cedrickchee/e8d4cb0c4b1df6cc47ce8b18457ebde0
Prerequisites
You'll want a proxy, or downloads are likely to fail; it took me an entire day just to fetch a model, so brace yourself.
We first need to install the following on the machine:
- Git
- Python 3.9 (we'll create this environment with Anaconda3)
- CMake (plus MinGW if your machine lacks a C/C++ build environment)
Git
Download: Git - Downloading Package
Run the downloaded installer and just keep clicking Next...
In a cmd window, run the following; if a version number is printed, Git installed successfully:
git -v
Python 3.9
I use Anaconda3 to manage Python here. What is Anaconda3?
If you know Docker, the concept carries over: Docker can create many containers whose environments may or may not match each other, and Anaconda3 likewise creates many Python environments with different versions that never conflict; you simply switch to whichever one you want to use.
Anaconda3 download: Anaconda | Anaconda Distribution
Installation is straightforward: keep clicking Next, then click Finish to close the installer.
In a cmd window, run the following; a version number means it installed successfully:
conda -V
Next, create a Python 3.9 environment by running this in cmd:
conda create --name py39 python=3.9 -y
py39 after --name is the environment's name; call it whatever you like, you'll need it when switching environments.
python=3.9 pins the Python version.
-y skips the manual confirmation prompt.
List the available environments:
conda info -e
Activate/switch to an environment:
conda activate py39
Swap in the name of whichever environment you want to use.
Once inside the environment, you can run Python commands, for example:
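A minimal example (assuming the py39 environment activated cleanly; your exact 3.9.x patch number may differ):

python --version
python -c "print('hello from py39')"

The first command prints the interpreter version; the second runs a one-line script.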
To leave the environment, run:
conda deactivate
After I deactivate the environment and then check the Python version, Windows tells me it "is not recognized as an internal or external command, operable program or batch file", like this:
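A sketch of that session (my own illustration; the exact message depends on your Windows locale and on whether the installer put a base Python on PATH):

(py39) C:\> conda deactivate
C:\> python --version
'python' is not recognized as an internal or external command, operable program or batch file.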
CMake
CMake is a build tool; we need it to compile llama.cpp, which we'll use to quantize the model. An unquantized model won't run on an ordinary PC. If quantization feels abstract, you can loosely think of it as compression; that's not technically accurate, but it helps build intuition.
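To make the idea concrete, here is a toy sketch of the principle (my own illustration; llama.cpp's real q4_0 format is more involved): weights get rounded to a small set of integer levels plus a scale factor, trading a little accuracy for a much smaller file.

import numpy as np

weights = np.random.randn(8).astype(np.float32)  # stand-in for a block of model weights
scale = np.abs(weights).max() / 7                # 4-bit signed integers span [-8, 7]
q = np.clip(np.round(weights / scale), -8, 7)    # each weight becomes one of 16 levels
recovered = q * scale                            # dequantized values used at inference
print(np.abs(weights - recovered).max())         # small but nonzero: quantization is lossy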
Before installing CMake, we need MinGW so the build won't fail to find a compiler. Press Win+R, run powershell,
and install scoop, a package manager we'll use to download and install MinGW:
This step may fail if you're not behind a proxy.
iex "& {$(irm get.scoop.sh)} -RunAsAdmin"
Once scoop is in place, run these two commands to add its buckets:
scoop bucket add extras
scoop bucket add main
Then install MinGW:
scoop install mingw
MinGW is now installed. If you hit an error, leave a comment and I'll reply when I see it.
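To confirm the toolchain landed on your PATH, a quick check of my own (not a required step):

gcc --version
g++ --version

If both print version banners, the compilers are ready.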
Next, install CMake.
Download: Download | CMake
Installation is once again a matter of clicking through; hit Finish when it's done.
Downloading the Models
We need two downloads: the original LLaMA model and the Chinese-extended model; we'll merge the two in a later step.
- Original model (proxy required): https://ipfs.io/ipfs/Qmb9y5GCkTG7ZzbBWMu2BXwMkzyCKcUjtEKPpgdZ7GEFKm/
- Backup: nyanko7/LLaMA-7B at main
- Chinese-extended model:
I recommend creating a new folder on the D: drive and doing the download inside it. In the window you open there, run these commands:
git lfs install
git clone https://huggingface.co/ziqingyang/chinese-alpaca-lora-7b
These may keep failing because of network issues... just keep retrying. If you run into anything else, leave a comment and I'll reply when I see it.
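If git clone simply refuses to finish, one workaround I can suggest (my own idea, not from the original repo's instructions) is downloading through the huggingface_hub Python package, which resumes interrupted transfers; recent versions support the local_dir argument:

# pip install huggingface_hub first; run this inside your download folder
from huggingface_hub import snapshot_download

snapshot_download(repo_id="ziqingyang/chinese-alpaca-lora-7b",
                  local_dir="chinese-alpaca-lora-7b")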
Merging the Models
Finally made it to this part. Phew!
Open a cmd window inside the directory where you downloaded the models.
First, let me explain what's inside the two directories involved here.
The chinese-alpaca-lora-7b directory usually needs no changes after downloading; its layout is:
chinese-alpaca-lora-7b/
        - adapter_config.json
        - adapter_model.bin
        - special_tokens_map.json
        - tokenizer_config.json
        - tokenizer.model

Next is the path_to_original_llama_root_dir directory. You have to create this folder yourself, keeping the name exactly as shown; its layout is:
path_to_original_llama_root_dir/
        - 7B/                        # a folder named 7B
                - checklist.chk
                - consolidated.00.pth
                - params.json
                - tokenizer_checklist.chk
        - tokenizer.model
Arrange your downloaded files to match the layout above.
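Before moving on, a few lines of Python can double-check the layout (my own convenience sketch, not part of the original steps); run it from the download folder:

import os

expected = [
    "chinese-alpaca-lora-7b/adapter_model.bin",
    "chinese-alpaca-lora-7b/tokenizer.model",
    "path_to_original_llama_root_dir/7B/consolidated.00.pth",
    "path_to_original_llama_root_dir/7B/params.json",
    "path_to_original_llama_root_dir/tokenizer.model",
]
for p in expected:
    # Print OK for files that exist, MISSING for anything misplaced
    print(("OK      " if os.path.exists(p) else "MISSING ") + p)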
With the window open, activate the Python environment first (the Anaconda3 setup from earlier):
# If you don't remember which environments exist, list them first
conda info -e
# Then activate the one you need; mine is named py39
conda activate py39
Once switched, run these commands to install the dependencies:
pip install git+https://github.com/huggingface/transformers
pip install sentencepiece==0.1.97
pip install peft==0.2.0
Each successful install ends with a "Successfully installed ..." message.
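A quick one-liner of my own to confirm the three libraries actually import:

python -c "import transformers, peft, sentencepiece; print(transformers.__version__, peft.__version__)"

It should print the transformers and peft versions without raising an ImportError.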
Next, we convert the original weights to HF format with the convert_llama_weights_to_hf.py script provided by the latest transformers.
Create a file named convert_llama_weights_to_hf.py in the directory, open it with Notepad, and paste in the following code.
Note: I've copied the script here for convenience, but it may have been updated since; it's best to grab the latest version from:
transformers/convert_llama_weights_to_hf.py at main · huggingface/transformers · GitHub
# Copyright 2022 EleutherAI and The HuggingFace Inc. team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import argparse
import gc
import json
import math
import os
import shutil
import warnings

import torch

from transformers import LlamaConfig, LlamaForCausalLM, LlamaTokenizer


try:
    from transformers import LlamaTokenizerFast
except ImportError as e:
    warnings.warn(e)
    warnings.warn(
        "The converted tokenizer will be the `slow` tokenizer. To use the fast, update your `tokenizers` library and re-run the tokenizer conversion"
    )
    LlamaTokenizerFast = None

"""
Sample usage:

```
python src/transformers/models/llama/convert_llama_weights_to_hf.py \
    --input_dir /path/to/downloaded/llama/weights --model_size 7B --output_dir /output/path
```

Thereafter, models can be loaded via:

```py
from transformers import LlamaForCausalLM, LlamaTokenizer

model = LlamaForCausalLM.from_pretrained("/output/path")
tokenizer = LlamaTokenizer.from_pretrained("/output/path")
```

Important note: you need to be able to host the whole model in RAM to execute this script (even if the biggest versions
come in several checkpoints they each contain a part of each weight of the model, so we need to load them all in RAM).
"""

INTERMEDIATE_SIZE_MAP = {
    "7B": 11008,
    "13B": 13824,
    "30B": 17920,
    "65B": 22016,
}
NUM_SHARDS = {
    "7B": 1,
    "13B": 2,
    "30B": 4,
    "65B": 8,
}


def compute_intermediate_size(n):
    return int(math.ceil(n * 8 / 3) + 255) // 256 * 256


def read_json(path):
    with open(path, "r") as f:
        return json.load(f)


def write_json(text, path):
    with open(path, "w") as f:
        json.dump(text, f)


def write_model(model_path, input_base_path, model_size):
    os.makedirs(model_path, exist_ok=True)
    tmp_model_path = os.path.join(model_path, "tmp")
    os.makedirs(tmp_model_path, exist_ok=True)

    params = read_json(os.path.join(input_base_path, "params.json"))
    num_shards = NUM_SHARDS[model_size]
    n_layers = params["n_layers"]
    n_heads = params["n_heads"]
    n_heads_per_shard = n_heads // num_shards
    dim = params["dim"]
    dims_per_head = dim // n_heads
    base = 10000.0
    inv_freq = 1.0 / (base ** (torch.arange(0, dims_per_head, 2).float() / dims_per_head))

    # permute for sliced rotary
    def permute(w):
        return w.view(n_heads, dim // n_heads // 2, 2, dim).transpose(1, 2).reshape(dim, dim)

    print(f"Fetching all parameters from the checkpoint at {input_base_path}.")
    # Load weights
    if model_size == "7B":
        # Not shared
        # (The sharded implementation would also work, but this is simpler.)
        loaded = torch.load(os.path.join(input_base_path, "consolidated.00.pth"), map_location="cpu")
    else:
        # Sharded
        loaded = [
            torch.load(os.path.join(input_base_path, f"consolidated.{i:02d}.pth"), map_location="cpu")
            for i in range(num_shards)
        ]
    param_count = 0
    index_dict = {"weight_map": {}}
    for layer_i in range(n_layers):
        filename = f"pytorch_model-{layer_i + 1}-of-{n_layers + 1}.bin"
        if model_size == "7B":
            # Unsharded
            state_dict = {
                f"model.layers.{layer_i}.self_attn.q_proj.weight": permute(
                    loaded[f"layers.{layer_i}.attention.wq.weight"]
                ),
                f"model.layers.{layer_i}.self_attn.k_proj.weight": permute(
                    loaded[f"layers.{layer_i}.attention.wk.weight"]
                ),
                f"model.layers.{layer_i}.self_attn.v_proj.weight": loaded[f"layers.{layer_i}.attention.wv.weight"],
                f"model.layers.{layer_i}.self_attn.o_proj.weight": loaded[f"layers.{layer_i}.attention.wo.weight"],
                f"model.layers.{layer_i}.mlp.gate_proj.weight": loaded[f"layers.{layer_i}.feed_forward.w1.weight"],
                f"model.layers.{layer_i}.mlp.down_proj.weight": loaded[f"layers.{layer_i}.feed_forward.w2.weight"],
                f"model.layers.{layer_i}.mlp.up_proj.weight": loaded[f"layers.{layer_i}.feed_forward.w3.weight"],
                f"model.layers.{layer_i}.input_layernorm.weight": loaded[f"layers.{layer_i}.attention_norm.weight"],
                f"model.layers.{layer_i}.post_attention_layernorm.weight": loaded[f"layers.{layer_i}.ffn_norm.weight"],
            }
        else:
            # Sharded
            # Note that in the 13B checkpoint, not cloning the two following weights will result in the checkpoint
            # becoming 37GB instead of 26GB for some reason.
            state_dict = {
                f"model.layers.{layer_i}.input_layernorm.weight": loaded[0][
                    f"layers.{layer_i}.attention_norm.weight"
                ].clone(),
                f"model.layers.{layer_i}.post_attention_layernorm.weight": loaded[0][
                    f"layers.{layer_i}.ffn_norm.weight"
                ].clone(),
            }
            state_dict[f"model.layers.{layer_i}.self_attn.q_proj.weight"] = permute(
                torch.cat(
                    [
                        loaded[i][f"layers.{layer_i}.attention.wq.weight"].view(n_heads_per_shard, dims_per_head, dim)
                        for i in range(num_shards)
                    ],
                    dim=0,
                ).reshape(dim, dim)
            )
            state_dict[f"model.layers.{layer_i}.self_attn.k_proj.weight"] = permute(
                torch.cat(
                    [
                        loaded[i][f"layers.{layer_i}.attention.wk.weight"].view(n_heads_per_shard, dims_per_head, dim)
                        for i in range(num_shards)
                    ],
                    dim=0,
                ).reshape(dim, dim)
            )
            state_dict[f"model.layers.{layer_i}.self_attn.v_proj.weight"] = torch.cat(
                [
                    loaded[i][f"layers.{layer_i}.attention.wv.weight"].view(n_heads_per_shard, dims_per_head, dim)
                    for i in range(num_shards)
                ],
                dim=0,
            ).reshape(dim, dim)

            state_dict[f"model.layers.{layer_i}.self_attn.o_proj.weight"] = torch.cat(
                [loaded[i][f"layers.{layer_i}.attention.wo.weight"] for i in range(num_shards)], dim=1
            )
            state_dict[f"model.layers.{layer_i}.mlp.gate_proj.weight"] = torch.cat(
                [loaded[i][f"layers.{layer_i}.feed_forward.w1.weight"] for i in range(num_shards)], dim=0
            )
            state_dict[f"model.layers.{layer_i}.mlp.down_proj.weight"] = torch.cat(
                [loaded[i][f"layers.{layer_i}.feed_forward.w2.weight"] for i in range(num_shards)], dim=1
            )
            state_dict[f"model.layers.{layer_i}.mlp.up_proj.weight"] = torch.cat(
                [loaded[i][f"layers.{layer_i}.feed_forward.w3.weight"] for i in range(num_shards)], dim=0
            )

        state_dict[f"model.layers.{layer_i}.self_attn.rotary_emb.inv_freq"] = inv_freq
        for k, v in state_dict.items():
            index_dict["weight_map"][k] = filename
            param_count += v.numel()
        torch.save(state_dict, os.path.join(tmp_model_path, filename))

    filename = f"pytorch_model-{n_layers + 1}-of-{n_layers + 1}.bin"
    if model_size == "7B":
        # Unsharded
        state_dict = {
            "model.embed_tokens.weight": loaded["tok_embeddings.weight"],
            "model.norm.weight": loaded["norm.weight"],
            "lm_head.weight": loaded["output.weight"],
        }
    else:
        state_dict = {
            "model.norm.weight": loaded[0]["norm.weight"],
            "model.embed_tokens.weight": torch.cat(
                [loaded[i]["tok_embeddings.weight"] for i in range(num_shards)], dim=1
            ),
            "lm_head.weight": torch.cat([loaded[i]["output.weight"] for i in range(num_shards)], dim=0),
        }

    for k, v in state_dict.items():
        index_dict["weight_map"][k] = filename
        param_count += v.numel()
    torch.save(state_dict, os.path.join(tmp_model_path, filename))

    # Write configs
    index_dict["metadata"] = {"total_size": param_count * 2}
    write_json(index_dict, os.path.join(tmp_model_path, "pytorch_model.bin.index.json"))

    config = LlamaConfig(
        hidden_size=dim,
        intermediate_size=compute_intermediate_size(dim),
        num_attention_heads=params["n_heads"],
        num_hidden_layers=params["n_layers"],
        rms_norm_eps=params["norm_eps"],
    )
    config.save_pretrained(tmp_model_path)

    # Make space so we can load the model properly now.
    del state_dict
    del loaded
    gc.collect()

    print("Loading the checkpoint in a Llama model.")
    model = LlamaForCausalLM.from_pretrained(tmp_model_path, torch_dtype=torch.float16, low_cpu_mem_usage=True)
    # Avoid saving this as part of the config.
    del model.config._name_or_path

    print("Saving in the Transformers format.")
    model.save_pretrained(model_path)
    shutil.rmtree(tmp_model_path)


def write_tokenizer(tokenizer_path, input_tokenizer_path):
    # Initialize the tokenizer based on the `spm` model
    tokenizer_class = LlamaTokenizer if LlamaTokenizerFast is None else LlamaTokenizerFast
    print(f"Saving a {tokenizer_class} to {tokenizer_path}")
    tokenizer = tokenizer_class(input_tokenizer_path)
    tokenizer.save_pretrained(tokenizer_path)


def main():
    parser = argparse.ArgumentParser()
    parser.add_argument(
        "--input_dir",
        help="Location of LLaMA weights, which contains tokenizer.model and model folders",
    )
    parser.add_argument(
        "--model_size",
        choices=["7B", "13B", "30B", "65B", "tokenizer_only"],
    )
    parser.add_argument(
        "--output_dir",
        help="Location to write HF model and tokenizer",
    )
    args = parser.parse_args()
    if args.model_size != "tokenizer_only":
        write_model(
            model_path=args.output_dir,
            input_base_path=os.path.join(args.input_dir, args.model_size),
            model_size=args.model_size,
        )
    spm_path = os.path.join(args.input_dir, "tokenizer.model")
    write_tokenizer(args.output_dir, spm_path)


if __name__ == "__main__":
    main()
Run this in the cmd window (if you're using Anaconda, activate your environment first):
python convert_llama_weights_to_hf.py --input_dir path_to_original_llama_root_dir --model_size 7B --output_dir path_to_original_llama_hf_dir
After a long wait...
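If all went well, path_to_original_llama_hf_dir should now hold the HF-format model, roughly like this (my reconstruction; the shard count and exact file names vary with the transformers version):

path_to_original_llama_hf_dir/
        - config.json
        - pytorch_model-00001-of-00002.bin        # weight shards
        - pytorch_model-00002-of-00002.bin
        - pytorch_model.bin.index.json
        - special_tokens_map.json
        - tokenizer_config.json
        - tokenizer.model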
Next, merge the weights and export them in PyTorch format (.pth files) using the merge_llama_with_chinese_lora.py script.
Create a file named merge_llama_with_chinese_lora.py in the directory, open it with Notepad, and paste in the following code.
Note: again, this copy may be out of date; grab the latest version from:
Chinese-LLaMA-Alpaca/merge_llama_with_chinese_lora.py at main · ymcui/Chinese-LLaMA-Alpaca · GitHub
"""
Borrowed and modified from https://github.com/tloen/alpaca-lora
"""
import argparse
import os
import json
import gc
import torch
import transformers
import peft
from peft import PeftModel
parser = argparse.ArgumentParser()
parser.add_argument('--base_model',default=None,required=True,type=str,help="Please specify a base_model")
parser.add_argument('--lora_model',default=None,required=True,type=str,help="Please specify a lora_model")
# deprecated; the script infers the model size from the checkpoint
parser.add_argument('--model_size',default='7B',type=str,help="Size of the LLaMA model",choices=['7B','13B'])
parser.add_argument('--offload_dir',default=None,type=str,help="(Optional) Please specify a temp folder for offloading (useful for low-RAM machines). Default None (disable offload).")
parser.add_argument('--output_dir',default='./',type=str)
args = parser.parse_args()
assert (
"LlamaTokenizer" in transformers._import_structure["models.llama"]
), "LLaMA is now in HuggingFace's main branch.\nPlease reinstall it: pip uninstall transformers && pip install git+https://github.com/huggingface/transformers.git"
from transformers import LlamaTokenizer, LlamaForCausalLM
BASE_MODEL = args.base_model
LORA_MODEL = args.lora_model
output_dir = args.output_dir
assert (
BASE_MODEL
), "Please specify a BASE_MODEL in the script, e.g. 'decapoda-research/llama-7b-hf'"
tokenizer = LlamaTokenizer.from_pretrained(LORA_MODEL)
if args.offload_dir is not None:
# Load with offloading, which is useful for low-RAM machines.
# Note that if you have enough RAM, please use original method instead, as it is faster.
base_model = LlamaForCausalLM.from_pretrained(
BASE_MODEL,
load_in_8bit=False,
torch_dtype=torch.float16,
offload_folder=args.offload_dir,
offload_state_dict=True,
low_cpu_mem_usage=True,
device_map={"": "cpu"},
)
else:
# Original method without offloading
base_model = LlamaForCausalLM.from_pretrained(
BASE_MODEL,
load_in_8bit=False,
torch_dtype=torch.float16,
device_map={"": "cpu"},
)
base_model.resize_token_embeddings(len(tokenizer))
assert base_model.get_input_embeddings().weight.size(0) == len(tokenizer)
tokenizer.save_pretrained(output_dir)
print(f"Extended vocabulary size: {len(tokenizer)}")
first_weight = base_model.model.layers[0].self_attn.q_proj.weight
first_weight_old = first_weight.clone()
## infer the model size from the checkpoint
emb_to_model_size = {
4096 : '7B',
5120 : '13B',
6656 : '30B',
8192 : '65B',
}
embedding_size = base_model.get_input_embeddings().weight.size(1)
model_size = emb_to_model_size[embedding_size]
print(f"Loading LoRA for {model_size} model")
lora_model = PeftModel.from_pretrained(
base_model,
LORA_MODEL,
device_map={"": "cpu"},
torch_dtype=torch.float16,
)
assert torch.allclose(first_weight_old, first_weight)
# merge weights
print(f"Peft version: {peft.__version__}")
print(f"Merging model")
if peft.__version__ > '0.2.0':
# merge weights - new merging method from peft
lora_model = lora_model.merge_and_unload()
else:
# merge weights
for layer in lora_model.base_model.model.model.layers:
if hasattr(layer.self_attn.q_proj,'merge_weights'):
layer.self_attn.q_proj.merge_weights = True
if hasattr(layer.self_attn.v_proj,'merge_weights'):
layer.self_attn.v_proj.merge_weights = True
if hasattr(layer.self_attn.k_proj,'merge_weights'):
layer.self_attn.k_proj.merge_weights = True
if hasattr(layer.self_attn.o_proj,'merge_weights'):
layer.self_attn.o_proj.merge_weights = True
if hasattr(layer.mlp.gate_proj,'merge_weights'):
layer.mlp.gate_proj.merge_weights = True
if hasattr(layer.mlp.down_proj,'merge_weights'):
layer.mlp.down_proj.merge_weights = True
if hasattr(layer.mlp.up_proj,'merge_weights'):
layer.mlp.up_proj.merge_weights = True
lora_model.train(False)
# did we do anything?
assert not torch.allclose(first_weight_old, first_weight)
lora_model_sd = lora_model.state_dict()
del lora_model, base_model
num_shards_of_models = {'7B': 1, '13B': 2}
params_of_models = {
'7B':
{
"dim": 4096,
"multiple_of": 256,
"n_heads": 32,
"n_layers": 32,
"norm_eps": 1e-06,
"vocab_size": -1,
},
'13B':
{
"dim": 5120,
"multiple_of": 256,
"n_heads": 40,
"n_layers": 40,
"norm_eps": 1e-06,
"vocab_size": -1,
},
}
params = params_of_models[model_size]
num_shards = num_shards_of_models[model_size]
n_layers = params["n_layers"]
n_heads = params["n_heads"]
dim = params["dim"]
dims_per_head = dim // n_heads
base = 10000.0
inv_freq = 1.0 / (base ** (torch.arange(0, dims_per_head, 2).float() / dims_per_head))
def permute(w):
return (
w.view(n_heads, dim // n_heads // 2, 2, dim).transpose(1, 2).reshape(dim, dim)
)
def unpermute(w):
return (
w.view(n_heads, 2, dim // n_heads // 2, dim).transpose(1, 2).reshape(dim, dim)
)
def translate_state_dict_key(k):
k = k.replace("base_model.model.", "")
if k == "model.embed_tokens.weight":
return "tok_embeddings.weight"
elif k == "model.norm.weight":
return "norm.weight"
elif k == "lm_head.weight":
return "output.weight"
elif k.startswith("model.layers."):
layer = k.split(".")[2]
if k.endswith(".self_attn.q_proj.weight"):
return f"layers.{layer}.attention.wq.weight"
elif k.endswith(".self_attn.k_proj.weight"):
return f"layers.{layer}.attention.wk.weight"
elif k.endswith(".self_attn.v_proj.weight"):
return f"layers.{layer}.attention.wv.weight"
elif k.endswith(".self_attn.o_proj.weight"):
return f"layers.{layer}.attention.wo.weight"
elif k.endswith(".mlp.gate_proj.weight"):
return f"layers.{layer}.feed_forward.w1.weight"
elif k.endswith(".mlp.down_proj.weight"):
return f"layers.{layer}.feed_forward.w2.weight"
elif k.endswith(".mlp.up_proj.weight"):
return f"layers.{layer}.feed_forward.w3.weight"
elif k.endswith(".input_layernorm.weight"):
return f"layers.{layer}.attention_norm.weight"
elif k.endswith(".post_attention_layernorm.weight"):
return f"layers.{layer}.ffn_norm.weight"
elif k.endswith("rotary_emb.inv_freq") or "lora" in k:
return None
else:
print(layer, k)
raise NotImplementedError
else:
print(k)
raise NotImplementedError
def save_shards(lora_model_sd, num_shards: int):
# Add the no_grad context manager
with torch.no_grad():
if num_shards == 1:
new_state_dict = {}
for k, v in lora_model_sd.items():
new_k = translate_state_dict_key(k)
if new_k is not None:
if "wq" in new_k or "wk" in new_k:
new_state_dict[new_k] = unpermute(v)
else:
new_state_dict[new_k] = v
os.makedirs(output_dir, exist_ok=True)
print(f"Saving shard 1 of {num_shards} into {output_dir}/consolidated.00.pth")
torch.save(new_state_dict, output_dir + "/consolidated.00.pth")
with open(output_dir + "/params.json", "w") as f:
json.dump(params, f)
else:
new_state_dicts = [dict() for _ in range(num_shards)]
for k in list(lora_model_sd.keys()):
v = lora_model_sd[k]
new_k = translate_state_dict_key(k)
if new_k is not None:
if new_k=='tok_embeddings.weight':
print(f"Processing {new_k}")
assert v.size(1)%num_shards==0
splits = v.split(v.size(1)//num_shards,dim=1)
elif new_k=='output.weight':
print(f"Processing {new_k}")
splits = v.split(v.size(0)//num_shards,dim=0)
elif new_k=='norm.weight':
print(f"Processing {new_k}")
splits = [v] * num_shards
elif 'ffn_norm.weight' in new_k:
print(f"Processing {new_k}")
splits = [v] * num_shards
elif 'attention_norm.weight' in new_k:
print(f"Processing {new_k}")
splits = [v] * num_shards
elif 'w1.weight' in new_k:
print(f"Processing {new_k}")
splits = v.split(v.size(0)//num_shards,dim=0)
elif 'w2.weight' in new_k:
print(f"Processing {new_k}")
splits = v.split(v.size(1)//num_shards,dim=1)
elif 'w3.weight' in new_k:
print(f"Processing {new_k}")
splits = v.split(v.size(0)//num_shards,dim=0)
elif 'wo.weight' in new_k:
print(f"Processing {new_k}")
splits = v.split(v.size(1)//num_shards,dim=1)
elif 'wv.weight' in new_k:
print(f"Processing {new_k}")
splits = v.split(v.size(0)//num_shards,dim=0)
elif "wq.weight" in new_k or "wk.weight" in new_k:
print(f"Processing {new_k}")
v = unpermute(v)
splits = v.split(v.size(0)//num_shards,dim=0)
else:
print(f"Unexpected key {new_k}")
raise ValueError
for sd,split in zip(new_state_dicts,splits):
sd[new_k] = split.clone()
del split
del splits
del lora_model_sd[k],v
gc.collect() # Effectively enforce garbage collection
os.makedirs(output_dir, exist_ok=True)
for i,new_state_dict in enumerate(new_state_dicts):
print(f"Saving shard {i+1} of {num_shards} into {output_dir}/consolidated.0{i}.pth")
torch.save(new_state_dict, output_dir + f"/consolidated.0{i}.pth")
with open(output_dir + "/params.json", "w") as f:
print(f"Saving params.json into {output_dir}/params.json")
json.dump(params, f)
save_shards(lora_model_sd=lora_model_sd, num_shards=num_shards)
Run the command (if you're using Anaconda, activate your environment first):
python merge_llama_with_chinese_lora.py --base_model path_to_original_llama_hf_dir --lora_model chinese-alpaca-lora-7b --output_dir path_to_output_dir
Parameter reference (an example with --offload_dir follows this list):
- --base_model: directory holding the HF-format LLaMA weights and config files (converted in the step above)
- --lora_model: directory of the Chinese-extended model
- --output_dir: directory where the full merged weights are saved; defaults to ./
- --offload_dir (optional): a temporary offload folder that low-RAM users should specify
For more detail, see the original repository: GitHub - ymcui/Chinese-LLaMA-Alpaca: 中文LLaMA&Alpaca大語言模型+本地CPU/GPU部署 (Chinese LLaMA & Alpaca LLMs)
At this point the model is merged; the full weights are in the path_to_output_dir folder.
Now on to deployment.
Deploying the Model
We first need to download llama.cpp to quantize the model. Run:
git clone https://github.com/ggerganov/llama.cpp
This clones the source code into a llama.cpp folder.
Now for the key part. In the same window, enter the freshly cloned llama.cpp directory:
cd llama.cpp
If you installed MinGW via scoop as described above, use the following commands (otherwise, skip ahead):
cmake . -G "MinGW Makefiles"
cmake --build . --config Release
When these finish, llama.cpp's bin directory should contain the built executables (including main.exe and quantize.exe, which we'll use below).
If you installed MinGW from an installer package instead, use:
mkdir build
cd build
cmake ..
cmake --build . --config Release
When these finish, the same executables should be under build → Release → bin.
Don't run both sets of commands; pick the one that matches how you installed MinGW!
If the files aren't there, something went wrong: it's almost always either the dependency download step or the compile step. I spent a long time fumbling around here.
Next, create a zh-models folder inside llama.cpp in preparation for generating the quantized model.
The zh-models layout should be:
zh-models/
        - 7B/                        # a folder named 7B
                - consolidated.00.pth
                - params.json
        - tokenizer.model

Move consolidated.00.pth and params.json from the path_to_output_dir folder into the positions shown above, and place tokenizer.model from path_to_output_dir at the same level as the 7B folder.
Now convert the .pth weights to ggml FP16 format; the output will be written to zh-models/7B/ggml-model-f16.bin:
python convert-pth-to-ggml.py zh-models/7B/ 1
The trailing 1 selects f16 as the output type.
Then quantize the FP16 model down to 4 bits; the quantized model will be written to zh-models/7B/ggml-model-q4_0.bin:
D:\llama\llama.cpp\bin\quantize.exe ./zh-models/7B/ggml-model-f16.bin ./zh-models/7B/ggml-model-q4_0.bin 2
quantize.exe lives in the bin directory; adjust the path for your machine. The trailing 2 selects the q4_0 quantization type.
That's quantization done, so we can deploy and see the results. If your machine is powerful enough, deploy the f16 model; otherwise use q4_0...
D:\llama\llama.cpp\bin\main.exe -m zh-models/7B/ggml-model-q4_0.bin --color -f prompts/alpaca.txt -ins -c 2048 --temp 0.2 -n 256 --repeat_penalty 1.3
Type your prompt after the > prompt. Press Ctrl+C to interrupt generation, and end a line with \ to enter multi-line input.
Common parameters (run D:\llama\llama.cpp\bin\main.exe -h for the full list):
-ins  run in a ChatGPT-style interactive chat mode
-f  specify the prompt template; load prompts/alpaca.txt for Alpaca models
-c  context length; larger values let the model use more conversation history (default: 512)
-n  maximum length of the generated reply (default: 128)
-b  batch size (default: 8); can be raised moderately
-t  number of threads (default: 4); can be raised moderately
--repeat_penalty  how strongly repeated text in the reply is penalized
--temp  temperature; lower values make replies less random, higher values more random
--top_p, --top_k  sampling parameters for decoding

To deploy the f16 model instead, just point the -m argument at zh-models/7B/ggml-model-f16.bin.
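For example, the same run against the f16 model:

D:\llama\llama.cpp\bin\main.exe -m zh-models/7B/ggml-model-f16.bin --color -f prompts/alpaca.txt -ins -c 2048 --temp 0.2 -n 256 --repeat_penalty 1.3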
And with that, the model is up and chatting.
Finally done writing!
References:
- GitHub - ymcui/Chinese-LLaMA-Alpaca: 中文LLaMA&Alpaca大語言模型+本地CPU/GPU部署 (Chinese LLaMA & Alpaca LLMs)
If this post helped, likes, bookmarks, and comments are all appreciated; they're what keep me writing!