Installing PEFT
Since LoRA and AdaLoRA are both integrated into PEFT, installing PEFT is a prerequisite for using them.
Method 1: PyPI
To install 🤗 PEFT from PyPI:
pip install peft
Method 2: Source
New features that haven’t been released yet are added every day, which also means there may be some bugs. To try them out, install from the GitHub repository:
pip install git+https://github.com/huggingface/peft
If you’re working on contributing to the library or wish to play with the source code and see live results as you run the code, an editable version can be installed from a locally-cloned version of the repository:
git clone https://github.com/huggingface/peft
cd peft
pip install -e .
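Whichever method you use, a quick sanity check confirms the install worked and that both the LoRA and AdaLoRA configs ship with the package (a minimal sketch, assuming a recent peft release):

# Verify the installation: import the package and the two adapter configs.
import peft
from peft import LoraConfig, AdaLoraConfig  # both LoRA and AdaLoRA live in peft

print(peft.__version__)  # prints the installed version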
Using LoRA:
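Before the reading list, here is a minimal sketch of attaching a LoRA adapter to a causal LM with PEFT. The base checkpoint and the hyperparameters (r, lora_alpha, target_modules) are illustrative choices, not prescriptions:

from transformers import AutoModelForCausalLM
from peft import LoraConfig, TaskType, get_peft_model

# Load a small base model (illustrative; swap in your own checkpoint).
model = AutoModelForCausalLM.from_pretrained("bigscience/bloomz-560m")

# LoRA hyperparameters: rank r, scaling factor lora_alpha,
# and which submodules receive the low-rank adapters.
config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=8,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["query_key_value"],  # attention projection in BLOOM-style models
)

# Wrap the base model; only the injected low-rank matrices are trainable.
model = get_peft_model(model, config)
model.print_trainable_parameters()  # typically well under 1% of total parameters

After training, model.save_pretrained(output_dir) writes only the small adapter weights, which can later be re-attached to the frozen base model.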
Recommended reading:
HuggingFace official documentation
HuggingFace officially endorsed article (on Zhihu)
HuggingFace official blog: PEFT: Parameter-Efficient Fine-Tuning of Billion-Scale Models on Low-Resource Hardware
Hands-on video tutorial: Boost Fine-Tuning Performance of LLM: Optimal Architecture w/ PEFT LoRA Adapter-Tuning on Your GPU
Newly published in late August: Efficient Fine-Tuning with LoRA: A Guide to Optimal Parameter Selection for Large Language Models
Parameter-efficient fine-tuning of large models: an introduction to the PEFT framework
A summary of fine-tuning (finetune) methods for large models: LoRA, Adapter, Prefix-tuning, P-tuning, Prompt-tuning
A walkthrough of the PEFT source code: Prefix Tuning / LoRA / P-Tuning / Prompt Tuning
After several days of study, I found these articles to be the most valuable, so I have collected them here; work through them and you should have it down, with no need to hunt for resources yourself. If you find this useful, please like and bookmark.