Training transformer models on an AMD GPU under Windows. For the installation procedure, see the earlier post: Windows下用amd顯卡訓練 : Pytorch-directml 重大升級,改為pytorch插件形式,兼容更好 (znsoft's blog on CSDN).
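For context, the plugin is published on PyPI as torch-directml; a minimal smoke test, assuming the install succeeded and a DirectML-capable GPU is present:

pip install torch-directml

Then, in Python:

import torch
import torch_directml

dml = torch_directml.device()     # handle to the default DirectML device
x = torch.ones(2, 2).to(dml)      # allocate a tensor on the AMD GPU
print((x * 2).cpu())              # run a simple op there and pull the result back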
import os

# Probe for the torch_directml plugin; fall back to the CPU when it is absent.
try:
    import torch_directml
    found_directml = True
except ImportError:
    found_directml = False
import torch
from transformers import RobertaTokenizer, RobertaForMaskedLM
DIR="E:/transformers"
MODEL_NAME="microsoft/codebert-base"
from transformers import AutoTokenizer, AutoModel
# Use the DirectML device (the AMD GPU) when available, otherwise the CPU.
if found_directml:
    device = torch_directml.device()
else:
    device = torch.device("cpu")
# Optional: extract CodeBERT context embeddings for an (NL, code) pair.
# from transformers import AutoTokenizer, AutoModel
# tokenizer = AutoTokenizer.from_pretrained(DIR + os.sep + MODEL_NAME)
# model = AutoModel.from_pretrained(DIR + os.sep + MODEL_NAME).to(device)
# nl_tokens = tokenizer.tokenize("return maximum value")
# code_tokens = tokenizer.tokenize("def max(a,b): if a>b: return a else return b")
# tokens = [tokenizer.cls_token] + nl_tokens + [tokenizer.sep_token] + code_tokens + [tokenizer.eos_token]
# tokens_ids = tokenizer.convert_tokens_to_ids(tokens)
# tokens_ids = torch.tensor(tokens_ids)[None, :]
# tokens_ids = tokens_ids.to(device)           # .to() is not in-place; reassign
# context_embeddings = model(tokens_ids)[0]    # last hidden states
# print(context_embeddings)
MODEL_NAME="microsoft/codebert-base-mlm"
model = RobertaForMaskedLM.from_pretrained(DIR+os.sep+MODEL_NAME)
tokenizer = RobertaTokenizer.from_pretrained(DIR+os.sep+MODEL_NAME)
model.to(device)
CODE = "if (x is not None) <mask> (x>1)"
code=tokenizer(CODE)
#.to(device)
input_ids=torch.tensor([code["input_ids"]]).to(device)
attention_mask=torch.tensor([code["attention_mask"]]).to(device)
# Run the forward pass repeatedly so GPU utilization is easy to observe.
for i in range(1000):
    out = model(input_ids=input_ids, attention_mask=attention_mask)
    print(out)
Note that calling pipeline directly can fail; this appears to be a compatibility problem between pipeline and the DirectML device. Writing the equivalent code yourself, as above, avoids pipeline entirely, and AMD GPU utilization climbs as expected.
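Since pipeline is avoided, the fill-mask step it would normally perform can be replicated by hand: find the <mask> position, read the logits at that position, and decode the highest-scoring token ids. A minimal sketch reusing the model, tokenizer, code, input_ids, and attention_mask defined above (the choice of k=5 candidates is arbitrary):

# Locate the <mask> token in the encoded input (plain Python, no GPU ops needed).
mask_pos = code["input_ids"].index(tokenizer.mask_token_id)
out = model(input_ids=input_ids, attention_mask=attention_mask)
logits = out.logits[0, mask_pos].float().cpu()  # scores for the masked slot, back on the CPU
top = torch.topk(logits, k=5)                   # five highest-scoring vocabulary ids
for token_id, score in zip(top.indices.tolist(), top.values.tolist()):
    print(repr(tokenizer.decode([token_id])), float(score))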