Official tutorial: NLP From Scratch: Classifying Names with a Character-Level RNN — PyTorch Tutorials 2.0.1+cu117 documentation
Classifying Names with a Character-Level RNN
We will be building and training a basic character-level Recurrent Neural Network (RNN) to classify words. This tutorial, along with two other "from scratch" Natural Language Processing (NLP) tutorials, NLP From Scratch: Generating Names with a Character-Level RNN and NLP From Scratch: Translation with a Sequence to Sequence Network and Attention, shows how to preprocess data to build NLP models. In particular, these tutorials do not use many of the convenience functions of torchtext, so you can see how preprocessing for NLP modeling works at a low level.
A character-level RNN reads a word as a series of characters, outputting a prediction and a "hidden state" at each step and feeding the previous hidden state into the next step. We take the final prediction to be the output, i.e. which class the word belongs to.
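In pseudocode, the per-step recurrence looks roughly like the following sketch (the names init_hidden, word, and rnn here are illustrative placeholders, not the tutorial's actual code):
# A minimal sketch of the character-level recurrence (illustrative names).
# The hidden state starts at zeros; each character updates it, and the
# prediction after the last character is the class score for the whole word.
hidden = init_hidden()                # e.g. a zero tensor
for char_tensor in word:              # one one-hot tensor per character
    output, hidden = rnn(char_tensor, hidden)
# 'output' after the final character is the word-level prediction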
Specifically, we will train on a few thousand surnames from 18 languages of origin, and predict which language a name comes from based on its spelling:
$ python predict.py Hinton
(-0.47) Scottish
(-1.52) English
(-3.57) Irish
$ python predict.py Schmidhuber
(-0.19) German
(-2.48) Czech
(-2.68) Dutch
Recommended Preparation
Before starting this tutorial it is recommended that you have PyTorch installed, and have a basic understanding of the Python programming language and Tensors:
- PyTorch for installation instructions
- Deep Learning with PyTorch: A 60 Minute Blitz to get started with PyTorch in general and learn the basics of Tensors
- Learning PyTorch with Examples for a broad overview
- PyTorch for Former Torch Users if you are a former Lua Torch user
It would also be useful to know about RNNs and how they work:
- The Unreasonable Effectiveness of Recurrent Neural Networks shows a bunch of real life examples
- Understanding LSTM Networks is about LSTMs specifically but also informative about RNNs in general
Preparing the Data
Download the data from here (https://download.pytorch.org/tutorial/data.zip) and extract it to the current directory.
Included in the data/names directory are 18 text files named "[Language].txt". Each file contains a bunch of names, one name per line, mostly romanized (but we still need to convert from Unicode to ASCII).
We'll end up with a dictionary of lists of names per language, {language: [names ...]}. The generic variables "category" and "line" (for language and name in our case) are used for later extensibility.
from io import open
import glob
import os
def findFiles(path): return glob.glob(path)
print(findFiles('data/names/*.txt'))
import unicodedata
import string
all_letters = string.ascii_letters + " .,;'"
n_letters = len(all_letters)
# Turn a Unicode string to plain ASCII, thanks to https://stackoverflow.com/a/518232/2809427
def unicodeToAscii(s):
    return ''.join(
        c for c in unicodedata.normalize('NFD', s)
        if unicodedata.category(c) != 'Mn'
        and c in all_letters
    )
print(unicodeToAscii('?lusàrski'))
# Build the category_lines dictionary, a list of names per language
category_lines = {}
all_categories = []
# Read a file and split into lines
def readLines(filename):
    lines = open(filename, encoding='utf-8').read().strip().split('\n')
    return [unicodeToAscii(line) for line in lines]
for filename in findFiles('data/names/*.txt'):
    category = os.path.splitext(os.path.basename(filename))[0]
    all_categories.append(category)
    lines = readLines(filename)
    category_lines[category] = lines
n_categories = len(all_categories)
Output:
['data/names/Arabic.txt', 'data/names/Chinese.txt', 'data/names/Czech.txt', 'data/names/Dutch.txt', 'data/names/English.txt', 'data/names/French.txt', 'data/names/German.txt', 'data/names/Greek.txt', 'data/names/Irish.txt', 'data/names/Italian.txt', 'data/names/Japanese.txt', 'data/names/Korean.txt', 'data/names/Polish.txt', 'data/names/Portuguese.txt', 'data/names/Russian.txt', 'data/names/Scottish.txt', 'data/names/Spanish.txt', 'data/names/Vietnamese.txt']
Slusarski
Now we have category_lines, a dictionary mapping each category (language) to a list of lines (names). We also kept track of all_categories (just a list of languages) and n_categories for later reference.
print(category_lines['Italian'][:5])
Output:
['Abandonato', 'Abatangelo', 'Abatantuono', 'Abate', 'Abategiovanni']
Turning Names into Tensors
Now that we have all the names organized, we need to turn them into Tensors to make any use of them.
To represent a single letter, we use a "one-hot vector" of size <1 x n_letters>. A one-hot vector is filled with 0s, except for a 1 at the index of the current letter, e.g. "b" = <0 1 0 0 0 ...>.
To make a word we join a bunch of those into a 2D matrix <line_length x 1 x n_letters>.
That extra 1 dimension is because PyTorch assumes everything is in batches; we're just using a batch size of 1 here.
import torch
# Find letter index from all_letters, e.g. "a" = 0
def letterToIndex(letter):
    return all_letters.find(letter)
# Just for demonstration, turn a letter into a <1 x n_letters> Tensor
def letterToTensor(letter):
    tensor = torch.zeros(1, n_letters)
    tensor[0][letterToIndex(letter)] = 1
    return tensor
# Turn a line into a <line_length x 1 x n_letters>,
# or an array of one-hot letter vectors
def lineToTensor(line):
    tensor = torch.zeros(len(line), 1, n_letters)
    for li, letter in enumerate(line):
        tensor[li][0][letterToIndex(letter)] = 1
    return tensor
print(letterToTensor('J'))
print(lineToTensor('Jones').size())
Output:
tensor([[0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 1.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0.]])
torch.Size([5, 1, 57])
Creating the Network
Before autograd, creating a recurrent neural network in Torch involved cloning the parameters of a layer over several timesteps. The layers held hidden state and gradients, which are now entirely handled by the graph itself. This means you can implement an RNN in a very "pure" way, as regular feed-forward layers.
This RNN module (mostly copied from the PyTorch for Torch users tutorial) is just 2 linear layers which operate on an input and hidden state, with a LogSoftmax layer after the output.
import torch.nn as nn
class RNN(nn.Module):
    def __init__(self, input_size, hidden_size, output_size):
        super(RNN, self).__init__()
        self.hidden_size = hidden_size
        self.i2h = nn.Linear(input_size + hidden_size, hidden_size)
        self.h2o = nn.Linear(hidden_size, output_size)
        self.softmax = nn.LogSoftmax(dim=1)

    def forward(self, input, hidden):
        combined = torch.cat((input, hidden), 1)
        hidden = self.i2h(combined)
        output = self.h2o(hidden)
        output = self.softmax(output)
        return output, hidden

    def initHidden(self):
        return torch.zeros(1, self.hidden_size)
n_hidden = 128
rnn = RNN(n_letters, n_hidden, n_categories)
To run a step of this network we need to pass an input (in our case, the Tensor for the current letter) and a previous hidden state (which we initialize as zeros at first). We'll get back the output (a probability of each language) and a next hidden state (which we keep for the next step).
input = letterToTensor('A')
hidden = torch.zeros(1, n_hidden)
output, next_hidden = rnn(input, hidden)
For the sake of efficiency we don't want to be creating a new Tensor for every step, so we will use lineToTensor instead of letterToTensor and use slices. This could be further optimized by precomputing batches of Tensors.
input = lineToTensor('Albert')
hidden = torch.zeros(1, n_hidden)
output, next_hidden = rnn(input[0], hidden)
print(output)
Output:
tensor([[-2.9083, -2.9270, -2.9167, -2.9590, -2.9108, -2.8332, -2.8906, -2.8325,
-2.8521, -2.9279, -2.8452, -2.8754, -2.8565, -2.9733, -2.9201, -2.8233,
-2.9298, -2.8624]], grad_fn=<LogSoftmaxBackward0>)
As you can see, the output is a <1 x n_categories> Tensor, where every item is the likelihood of that category (higher is more likely).
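Because the final layer is nn.LogSoftmax, these values are log-probabilities. If you want ordinary probabilities you can exponentiate them; a quick sanity check (my addition, not part of the tutorial's pipeline):
# Log-probabilities exponentiate to probabilities, so the row sums to 1
probs = torch.exp(output)
print(probs.sum())  # expect a value very close to 1.0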
Training
Preparing for Training
Before going into training we should write a few helper functions. The first is to interpret the output of the network, which we know to be a likelihood of each category. We can use Tensor.topk to get the index of the greatest value:
def categoryFromOutput(output):
    top_n, top_i = output.topk(1)
    category_i = top_i[0].item()
    return all_categories[category_i], category_i
print(categoryFromOutput(output))
Output:
('Scottish', 15)
We will also want a quick way to get a training example (a name and its language):
import random
def randomChoice(l):
    return l[random.randint(0, len(l) - 1)]

def randomTrainingExample():
    category = randomChoice(all_categories)
    line = randomChoice(category_lines[category])
    category_tensor = torch.tensor([all_categories.index(category)], dtype=torch.long)
    line_tensor = lineToTensor(line)
    return category, line, category_tensor, line_tensor
for i in range(10):
    category, line, category_tensor, line_tensor = randomTrainingExample()
    print('category =', category, '/ line =', line)
Output:
category = Chinese / line = Hou
category = Scottish / line = Mckay
category = Arabic / line = Cham
category = Russian / line = V'Yurkov
category = Irish / line = O'Keeffe
category = French / line = Belrose
category = Spanish / line = Silva
category = Japanese / line = Fuchida
category = Greek / line = Tsahalis
category = Korean / line = Chang
Training the Network
Now all it takes to train this network is to show it a bunch of examples, have it make guesses, and tell it when it's wrong.
For the loss function, nn.NLLLoss is appropriate, since the last layer of the RNN is nn.LogSoftmax.
criterion = nn.NLLLoss()
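As an aside, nn.NLLLoss applied to nn.LogSoftmax outputs is equivalent to nn.CrossEntropyLoss applied to raw scores; the following sanity check (my addition, with an arbitrary target index) illustrates the relationship:
# Sanity check: NLLLoss(LogSoftmax(x)) == CrossEntropyLoss(x)
scores = torch.randn(1, n_categories)         # raw, unnormalized scores
target = torch.tensor([3])                    # an arbitrary class index
log_probs = nn.LogSoftmax(dim=1)(scores)
print(nn.NLLLoss()(log_probs, target))        # prints the same value...
print(nn.CrossEntropyLoss()(scores, target))  # ...as this line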
Each loop of training will:
- Create input and target tensors
- Create a zeroed initial hidden state
- Read each letter in and
  - Keep the hidden state for the next letter
- Compare the final output to the target
- Back-propagate
- Return the output and loss
learning_rate = 0.005 # If you set this too high, it might explode. If too low, it might not learn
def train(category_tensor, line_tensor):
    hidden = rnn.initHidden()
    rnn.zero_grad()
    for i in range(line_tensor.size()[0]):
        output, hidden = rnn(line_tensor[i], hidden)
    loss = criterion(output, category_tensor)
    loss.backward()
    # Add parameters' gradients to their values, multiplied by learning rate
    for p in rnn.parameters():
        p.data.add_(p.grad.data, alpha=-learning_rate)
    return output, loss.item()
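The parameter update at the end of train() is plain SGD written by hand. An equivalent formulation, sketched here as an alternative rather than the tutorial's method, lets torch.optim manage the update:
import torch.optim as optim

# Equivalent training step using an optimizer instead of the manual update
optimizer = optim.SGD(rnn.parameters(), lr=learning_rate)

def train_with_optimizer(category_tensor, line_tensor):
    hidden = rnn.initHidden()
    optimizer.zero_grad()
    for i in range(line_tensor.size()[0]):
        output, hidden = rnn(line_tensor[i], hidden)
    loss = criterion(output, category_tensor)
    loss.backward()
    optimizer.step()  # applies p -= lr * p.grad for every parameter
    return output, loss.item()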
Now we just have to run that with a bunch of examples. Since the train function returns both the output and the loss, we can print its guesses and also keep track of the loss for plotting. Since there are thousands of examples, we print only every print_every examples and take an average of the loss.
import time
import math
n_iters = 100000
print_every = 5000
plot_every = 1000
# Keep track of losses for plotting
current_loss = 0
all_losses = []
def timeSince(since):
    now = time.time()
    s = now - since
    m = math.floor(s / 60)
    s -= m * 60
    return '%dm %ds' % (m, s)
start = time.time()
for iter in range(1, n_iters + 1):
    category, line, category_tensor, line_tensor = randomTrainingExample()
    output, loss = train(category_tensor, line_tensor)
    current_loss += loss

    # Print ``iter`` number, loss, name and guess
    if iter % print_every == 0:
        guess, guess_i = categoryFromOutput(output)
        correct = '?' if guess == category else '? (%s)' % category
        print('%d %d%% (%s) %.4f %s / %s %s' % (iter, iter / n_iters * 100, timeSince(start), loss, line, guess, correct))

    # Add current loss avg to list of losses
    if iter % plot_every == 0:
        all_losses.append(current_loss / plot_every)
        current_loss = 0
Output:
5000 5% (0m 33s) 2.6379 Horigome / Japanese ?
10000 10% (1m 5s) 2.0172 Miazga / Japanese ? (Polish)
15000 15% (1m 39s) 0.2680 Yukhvidov / Russian ?
20000 20% (2m 12s) 1.8239 Mclaughlin / Irish ? (Scottish)
25000 25% (2m 45s) 0.6978 Banh / Vietnamese ?
30000 30% (3m 18s) 1.7433 Machado / Japanese ? (Portuguese)
35000 35% (3m 51s) 0.0340 Fotopoulos / Greek ?
40000 40% (4m 23s) 1.4637 Quirke / Irish ?
45000 45% (4m 57s) 1.9018 Reier / French ? (German)
50000 50% (5m 30s) 0.9174 Hou / Chinese ?
55000 55% (6m 2s) 1.0506 Duan / Vietnamese ? (Chinese)
60000 60% (6m 35s) 0.9617 Giang / Vietnamese ?
65000 65% (7m 9s) 2.4557 Cober / German ? (Czech)
70000 70% (7m 42s) 0.8502 Mateus / Portuguese ?
75000 75% (8m 14s) 0.2750 Hamilton / Scottish ?
80000 80% (8m 47s) 0.7515 Maessen / Dutch ?
85000 85% (9m 20s) 0.0912 Gan / Chinese ?
90000 90% (9m 53s) 0.1190 Bellomi / Italian ?
95000 95% (10m 26s) 0.0137 Vozgov / Russian ?
100000 100% (10m 59s) 0.7808 Tong / Vietnamese ?
Plotting the Results
Plotting the historical loss from all_losses shows the network learning:
import matplotlib.pyplot as plt
import matplotlib.ticker as ticker
plt.figure()
plt.plot(all_losses)
Output:
[<matplotlib.lines.Line2D object at 0x7f16606095a0>]
Evaluating the Results
To see how well the network performs on different categories, we will create a confusion matrix, indicating for every actual language (rows) which language the network guesses (columns). To calculate the confusion matrix, a bunch of samples are run through the network with evaluate(), which is the same as train() minus the backprop.
# Keep track of correct guesses in a confusion matrix
confusion = torch.zeros(n_categories, n_categories)
n_confusion = 10000
# Just return an output given a line
def evaluate(line_tensor):
    hidden = rnn.initHidden()
    for i in range(line_tensor.size()[0]):
        output, hidden = rnn(line_tensor[i], hidden)
    return output
# Go through a bunch of examples and record which are correctly guessed
for i in range(n_confusion):
    category, line, category_tensor, line_tensor = randomTrainingExample()
    output = evaluate(line_tensor)
    guess, guess_i = categoryFromOutput(output)
    category_i = all_categories.index(category)
    confusion[category_i][guess_i] += 1
# Normalize by dividing every row by its sum
for i in range(n_categories):
    confusion[i] = confusion[i] / confusion[i].sum()
# Set up plot
fig = plt.figure()
ax = fig.add_subplot(111)
cax = ax.matshow(confusion.numpy())
fig.colorbar(cax)
# Set up axes
ax.set_xticklabels([''] + all_categories, rotation=90)
ax.set_yticklabels([''] + all_categories)
# Force label at every tick
ax.xaxis.set_major_locator(ticker.MultipleLocator(1))
ax.yaxis.set_major_locator(ticker.MultipleLocator(1))
# sphinx_gallery_thumbnail_number = 2
plt.show()
Output:
/var/lib/jenkins/workspace/intermediate_source/char_rnn_classification_tutorial.py:445: UserWarning:
FixedFormatter should only be used together with FixedLocator
/var/lib/jenkins/workspace/intermediate_source/char_rnn_classification_tutorial.py:446: UserWarning:
FixedFormatter should only be used together with FixedLocator
You can pick out bright spots off the main axis that show which languages the network guesses incorrectly, e.g. Chinese for Korean, and Spanish for Italian. It seems to do very well with Greek, and very poorly with English (perhaps because of overlap with other languages).
Running on User Input
def predict(input_line, n_predictions=3):
    print('\n> %s' % input_line)
    with torch.no_grad():
        output = evaluate(lineToTensor(input_line))

        # Get top N categories
        topv, topi = output.topk(n_predictions, 1, True)
        predictions = []

        for i in range(n_predictions):
            value = topv[0][i].item()
            category_index = topi[0][i].item()
            print('(%.2f) %s' % (value, all_categories[category_index]))
            predictions.append([value, all_categories[category_index]])
predict('Dovesky')
predict('Jackson')
predict('Satoshi')
Output:
> Dovesky
(-0.57) Czech
(-0.97) Russian
(-3.43) English
> Jackson
(-1.02) Scottish
(-1.49) Russian
(-1.96) English
> Satoshi
(-0.42) Japanese
(-1.70) Polish
(-2.74) Italian
The final versions of the scripts in the Practical PyTorch repo split the above code into a few files:
- data.py (loads files)
- model.py (defines the RNN)
- train.py (runs training)
- predict.py (runs predict() with command line arguments)
- server.py (serves predictions as a JSON API with bottle.py)
Run train.py to train and save the network.
Run predict.py with a name to view predictions:
$ python predict.py Hazaki
(-0.42) Japanese
(-1.39) Polish
(-3.51) Czech
Run server.py and visit http://localhost:5533/Yourname to get JSON output of predictions.
Exercises
Try with a different dataset of line -> category, for example:
- Any word -> language
- First name -> gender
- Character name -> writer
- Page title -> blog or subreddit
Get better results with a bigger and/or better shaped network:
- Add more linear layers
- Try the nn.LSTM and nn.GRU layers (see the sketch after this list)
- Combine multiple of these RNNs as a higher level network
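As a starting point for the nn.LSTM exercise, here is a minimal sketch (my own addition, untested against this exact pipeline) that classifies a whole <line_length x 1 x n_letters> tensor in one call:
# A minimal sketch of an LSTM-based classifier for the same task.
# It consumes a whole <line_length x 1 x n_letters> tensor at once.
class LSTMClassifier(nn.Module):
    def __init__(self, input_size, hidden_size, output_size):
        super(LSTMClassifier, self).__init__()
        self.lstm = nn.LSTM(input_size, hidden_size)
        self.h2o = nn.Linear(hidden_size, output_size)
        self.softmax = nn.LogSoftmax(dim=1)

    def forward(self, line_tensor):
        # outputs: <line_length x 1 x hidden_size>; keep only the last step
        outputs, _ = self.lstm(line_tensor)
        return self.softmax(self.h2o(outputs[-1]))

lstm_rnn = LSTMClassifier(n_letters, n_hidden, n_categories)
print(lstm_rnn(lineToTensor('Albert')).size())  # expect torch.Size([1, n_categories])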