This project implements acoustic-model and language-model training for speech recognition on deep learning frameworks. The acoustic models include CNN-CTC, GRU-CTC, and CNN-RNN-CTC; the language models include Transformer and CBHG; four datasets are used: st-cmd, primewords, Aishell, and thchs30.
A mini speech recognition system has already been trained for this project. Clone the project, download the thchs30 dataset, unpack it into data/, and run test.py; barring surprises it will recognize speech, with output like:
the 0 th example.
decoded pinyin: lv4 shi4 yang2 chun1 yan1 jing3 da4 kuai4 wen2 zhang1 de di3 se4 si4 yue4 de lin2 luan2 geng4 shi4 lv4 de2 xian1 huo2 xiu4 mei4 shi1 yi4 ang4 ran2
reference pinyin: lv4 shi4 yang2 chun1 yan1 jing3 da4 kuai4 wen2 zhang1 de di3 se4 si4 yue4 de lin2 luan2 geng4 shi4 lv4 de2 xian1 huo2 xiu4 mei4 shi1 yi4 ang4 ran2
reference hanzi: 綠是陽(yáng)春煙景大塊文章的底色四月的林巒更是綠得鮮活秀媚詩(shī)意盎然
recognized hanzi: 綠是陽(yáng)春煙景大塊文章的底色四月的林巒更是綠得鮮活秀媚詩(shī)意盎然
To build your own model instead, delete the existing checkpoints, reconfigure the parameters, and retrain; the concrete workflow is described at the end of this page.
2. Acoustic Model
The acoustic models are built with CTC loss; CNN-CTC, GRU-CTC, FSMN and similar variants live under model_speech, all written in Keras.
Paper: http://www.infocomm-journal.com/dxkx/CN/article/openArticlePDFabs.jsp?id=166970
Tutorial: https://blog.csdn.net/chinatelecom08/article/details/85013535
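The acoustic front end feeds 200-dimensional spectrogram frames into these models (the `(None, None, 200, 1)` input shape in the model summaries further down). As a rough, hypothetical numpy sketch of how such features can be framed from raw audio — the frame length, hop size, and window here are illustrative choices, not necessarily the project's exact front end:

```python
import numpy as np

def spectrogram_frames(wav, frame_len=400, hop=160, n_bins=200):
    """Split a waveform into overlapping windowed frames and take
    log-magnitude FFT bins (illustrative parameters)."""
    n_frames = 1 + (len(wav) - frame_len) // hop
    window = np.hamming(frame_len)
    frames = np.stack([wav[i*hop : i*hop+frame_len] * window
                       for i in range(n_frames)])
    spec = np.abs(np.fft.rfft(frames, n=frame_len, axis=1))[:, :n_bins]
    return np.log(spec + 1e-8)  # log compression, small floor for stability

wav = np.random.randn(16000)        # one second of fake 16 kHz audio
feats = spectrogram_frames(wav)     # shape (98, 200)
```

Features in this shape can then be reshaped to `(1, n_frames, 200, 1)` before being fed to the `the_inputs` layer.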
3. Language Model
A language model based on the self-attention (Transformer) architecture has been added as model_language\transformer.py; this architecture has been shown to have stronger language-modeling power than earlier frameworks.
Paper: https://arxiv.org/abs/1706.03762
Tutorial: https://blog.csdn.net/chinatelecom08/article/details/85051817
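The core operation of the Transformer is scaled dot-product self-attention, softmax(QK^T / sqrt(d_k)) V. A minimal numpy sketch of that single operation (toy shapes, no multi-head splitting or masking):

```python
import numpy as np

def scaled_dot_attention(Q, K, V):
    """softmax(Q K^T / sqrt(d_k)) V -- the core of self-attention."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)               # (T, T) pairwise similarities
    scores -= scores.max(axis=-1, keepdims=True)  # shift for numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # row-wise softmax
    return weights @ V                            # weighted sum of values

rng = np.random.default_rng(0)
x = rng.normal(size=(5, 8))          # 5 tokens, model dimension 8
out = scaled_dot_attention(x, x, x)  # self-attention: Q = K = V = x
```

In the real model Q, K, and V are learned linear projections of the input and the operation is repeated per head; this sketch only shows the attention kernel itself.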
A language model based on the CBHG structure, model_language\cbhg.py: this module was previously used in Google's speech synthesis and is ported into this project as a neural-network language model.
Reference implementation: https://github.com/crownpku/Somiao-Pinyin
Tutorial: https://blog.csdn.net/chinatelecom08/article/details/85048019
4. Datasets
Four datasets are used: st-cmd, primewords, Aishell, and thchs30, about 430 hours of speech in total. Download links: http://www.openslr.org/resources.php
| Name | train | dev | test |
| --- | --- | --- | --- |
| aishell | 120098 | 14326 | 7176 |
| primewords | 40783 | 5046 | 5073 |
| thchs-30 | 10000 | 893 | 2495 |
| st-cmd | 10000 | 600 | 2000 |
The data labels are organized under the data directory; primewords and st-cmd are not yet split into training and test sets there.
To use all four datasets, unpack them into one common directory and point the datapath setting in utils.py at it.
The data-related parameters in utils.py are:
data_type: train, test, or dev
data_path: path to the unpacked data
thchs30, aishell, prime, stcmd: whether to use each dataset
batch_size: batch size
data_length: I set this to a small number to get quick feedback while experimenting; for normal use set it to None
shuffle: whether to shuffle the training order; set to True for normal training
def data_hparams():
    params = tf.contrib.training.HParams(
        # vocab
        data_type='train',
        data_path='data/',
        thchs30=True,
        aishell=True,
        prime=False,
        stcmd=False,
        batch_size=1,
        data_length=None,
        shuffle=False)
    return params
5. Configuration
Use the train.py file to train the models.
The acoustic model can be cnn-ctc or gru-ctc; just change the import path:
from model_speech.cnn_ctc import Am, am_hparams
from model_speech.gru_ctc import Am, am_hparams
The language model can be transformer or cbhg:
from model_language.transformer import Lm, lm_hparams
from model_language.cbhg import Lm, lm_hparams
Recognition
Use test.py to check the recognition quality. The model selection must match the one used for training.
A simple example
1. Acoustic model training
The train.py file:
import os
import tensorflow as tf
from utils import get_data, data_hparams
# Prepare the training data.
data_args = data_hparams()
data_args.data_length = 10
train_data = get_data(data_args)

# 1. Acoustic model training -----------------------------------
from model_speech.cnn_ctc import Am, am_hparams
am_args = am_hparams()
am_args.vocab_size = len(train_data.am_vocab)
am = Am(am_args)
if os.path.exists('logs_am/model.h5'):
    print('load acoustic model...')
    am.ctc_model.load_weights('logs_am/model.h5')

epochs = 10
batch_num = len(train_data.wav_lst) // train_data.batch_size
for k in range(epochs):
    print('this is the', k+1, 'th epoch of training')
    # shuffle(shuffle_list)
    batch = train_data.get_am_batch()
    am.ctc_model.fit_generator(batch, steps_per_epoch=batch_num, epochs=1)
am.ctc_model.save_weights('logs_am/model.h5')
get source list...
load thchs_train.txt data...
100%|████████████████████████████████████████████████████████████████████████| 10000/10000 [00:00<00:00, 236865.96it/s]
load aishell_train.txt data...
100%|██████████████████████████████████████████████████████████████████████| 120098/120098 [00:00<00:00, 260863.15it/s]
make am vocab...
100%|████████████████████████████████████████████████████████████████████████████████| 10/10 [00:00<00:00, 9986.44it/s]
make lm pinyin vocab...
100%|████████████████████████████████████████████████████████████████████████████████| 10/10 [00:00<00:00, 9946.18it/s]
make lm hanzi vocab...
100%|████████████████████████████████████████████████████████████████████████████████| 10/10 [00:00<00:00, 9950.90it/s]
Using TensorFlow backend.
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
the_inputs (InputLayer) (None, None, 200, 1) 0
_________________________________________________________________
conv2d_1 (Conv2D) (None, None, 200, 32) 320
_________________________________________________________________
batch_normalization_1 (Batch (None, None, 200, 32) 128
_________________________________________________________________
conv2d_2 (Conv2D) (None, None, 200, 32) 9248
_________________________________________________________________
batch_normalization_2 (Batch (None, None, 200, 32) 128
_________________________________________________________________
max_pooling2d_1 (MaxPooling2 (None, None, 100, 32) 0
_________________________________________________________________
conv2d_3 (Conv2D) (None, None, 100, 64) 18496
_________________________________________________________________
batch_normalization_3 (Batch (None, None, 100, 64) 256
_________________________________________________________________
conv2d_4 (Conv2D) (None, None, 100, 64) 36928
_________________________________________________________________
batch_normalization_4 (Batch (None, None, 100, 64) 256
_________________________________________________________________
max_pooling2d_2 (MaxPooling2 (None, None, 50, 64) 0
_________________________________________________________________
conv2d_5 (Conv2D) (None, None, 50, 128) 73856
_________________________________________________________________
batch_normalization_5 (Batch (None, None, 50, 128) 512
_________________________________________________________________
conv2d_6 (Conv2D) (None, None, 50, 128) 147584
_________________________________________________________________
batch_normalization_6 (Batch (None, None, 50, 128) 512
_________________________________________________________________
max_pooling2d_3 (MaxPooling2 (None, None, 25, 128) 0
_________________________________________________________________
conv2d_7 (Conv2D) (None, None, 25, 128) 147584
_________________________________________________________________
batch_normalization_7 (Batch (None, None, 25, 128) 512
_________________________________________________________________
conv2d_8 (Conv2D) (None, None, 25, 128) 147584
_________________________________________________________________
batch_normalization_8 (Batch (None, None, 25, 128) 512
_________________________________________________________________
conv2d_9 (Conv2D) (None, None, 25, 128) 147584
_________________________________________________________________
batch_normalization_9 (Batch (None, None, 25, 128) 512
_________________________________________________________________
conv2d_10 (Conv2D) (None, None, 25, 128) 147584
_________________________________________________________________
batch_normalization_10 (Batc (None, None, 25, 128) 512
_________________________________________________________________
reshape_1 (Reshape) (None, None, 3200) 0
_________________________________________________________________
dense_1 (Dense) (None, None, 256) 819456
_________________________________________________________________
dense_2 (Dense) (None, None, 230) 59110
=================================================================
Total params: 1,759,174
Trainable params: 1,757,254
Non-trainable params: 1,920
_________________________________________________________________
load acoustic model...
2. Language model training
# 2. Language model training -------------------------------------------
from model_language.transformer import Lm, lm_hparams
lm_args = lm_hparams()
lm_args.input_vocab_size = len(train_data.pny_vocab)
lm_args.label_vocab_size = len(train_data.han_vocab)
lm = Lm(lm_args)

epochs = 10
with lm.graph.as_default():
    saver = tf.train.Saver()
with tf.Session(graph=lm.graph) as sess:
    merged = tf.summary.merge_all()
    sess.run(tf.global_variables_initializer())
    if os.path.exists('logs_lm/model.meta'):
        print('loading language model...')
        saver.restore(sess, 'logs_lm/model')
    writer = tf.summary.FileWriter('logs_lm/tensorboard', tf.get_default_graph())
    for k in range(epochs):
        total_loss = 0
        batch = train_data.get_lm_batch()
        for i in range(batch_num):
            input_batch, label_batch = next(batch)
            feed = {lm.x: input_batch, lm.y: label_batch}
            cost, _ = sess.run([lm.mean_loss, lm.train_op], feed_dict=feed)
            total_loss += cost
            if (k * batch_num + i) % 10 == 0:
                rs = sess.run(merged, feed_dict=feed)
                writer.add_summary(rs, k * batch_num + i)
        if (k + 1) % 5 == 0:
            print('epochs', k+1, ': average loss = ', total_loss / batch_num)
    saver.save(sess, 'logs_lm/model')
    writer.close()
loading language model...
INFO:tensorflow:Restoring parameters from logs_lm/model
3. Model testing
Combine the acoustic model and the language model.
The test.py file:
Define the decoder
import os
import tensorflow as tf
import numpy as np
from keras import backend as K
# Define the CTC decoder ------------------------------------
def decode_ctc(num_result, num2word):
    result = num_result[:, :, :]
    in_len = np.zeros((1), dtype=np.int32)
    in_len[0] = result.shape[1]
    r = K.ctc_decode(result, in_len, greedy=True, beam_width=10, top_paths=1)
    r1 = K.get_value(r[0][0])
    r1 = r1[0]
    text = []
    for i in r1:
        text.append(num2word[i])
    return r1, text
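`K.ctc_decode` with `greedy=True` performs best-path decoding: take the argmax class per frame, merge consecutive repeats, and drop the blank (Keras/TensorFlow treat the last class index as the blank). A small numpy sketch of that collapse rule, independent of the project's vocabularies:

```python
import numpy as np

def greedy_ctc_decode(probs, blank):
    """Best-path CTC: frame-wise argmax, collapse repeats, remove blanks."""
    best_path = probs.argmax(axis=-1)     # one label index per frame
    decoded = []
    prev = None
    for idx in best_path:
        if idx != prev and idx != blank:  # skip repeated labels and blank frames
            decoded.append(int(idx))
        prev = idx
    return decoded

# 6 frames, 4 classes (class 3 = blank): path [1,1,3,2,2,3] collapses to [1, 2]
probs = np.eye(4)[[1, 1, 3, 2, 2, 3]]
print(greedy_ctc_decode(probs, blank=3))  # [1, 2]
```

This is the same rule `decode_ctc` above delegates to the Keras backend; beam search (`greedy=False`) would instead keep the `top_paths` most probable label sequences.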
Prepare the test data
# 0. Prepare the decoding vocabularies; they must match training.
#    They could also be saved to disk and read back directly.
from utils import get_data, data_hparams
data_args = data_hparams()
data_args.data_length = 10  # comment this line out when retraining from scratch
train_data = get_data(data_args)

# 3. Prepare the test data; it need not match the training data.
#    Select it via data_args.data_type. This should normally be 'test';
#    'train' is used here because the demo model is tiny -- on 'test' it
#    shows no effect and produces unseen words.
data_args.data_type = 'train'
test_data = get_data(data_args)
am_batch = test_data.get_am_batch()
lm_batch = test_data.get_lm_batch()
get source list...
load thchs_train.txt data...
100%|████████████████████████████████████████████████████████████████████████| 10000/10000 [00:00<00:00, 226097.06it/s]
load aishell_train.txt data...
100%|██████████████████████████████████████████████████████████████████████| 120098/120098 [00:00<00:00, 226827.96it/s]
make am vocab...
100%|████████████████████████████████████████████████████████████████████████████████| 10/10 [00:00<00:00, 9950.90it/s]
make lm pinyin vocab...
100%|██████████████████████████████████████████████████████████████████████████████████████████| 10/10 [00:00<?, ?it/s]
make lm hanzi vocab...
100%|████████████████████████████████████████████████████████████████████████████████| 10/10 [00:00<00:00, 9953.26it/s]
Load the acoustic and language models
# 1. Acoustic model -----------------------------------
from model_speech.cnn_ctc import Am, am_hparams
am_args = am_hparams()
am_args.vocab_size = len(train_data.am_vocab)
am = Am(am_args)
print('loading acoustic model...')
am.ctc_model.load_weights('logs_am/model.h5')

# 2. Language model -------------------------------------------
from model_language.transformer import Lm, lm_hparams
lm_args = lm_hparams()
lm_args.input_vocab_size = len(train_data.pny_vocab)
lm_args.label_vocab_size = len(train_data.han_vocab)
print('loading language model...')
lm = Lm(lm_args)
sess = tf.Session(graph=lm.graph)
with lm.graph.as_default():
    saver = tf.train.Saver()
with sess.as_default():
    saver.restore(sess, 'logs_lm/model')
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
the_inputs (InputLayer) (None, None, 200, 1) 0
_________________________________________________________________
conv2d_11 (Conv2D) (None, None, 200, 32) 320
_________________________________________________________________
batch_normalization_11 (Batc (None, None, 200, 32) 128
_________________________________________________________________
conv2d_12 (Conv2D) (None, None, 200, 32) 9248
_________________________________________________________________
batch_normalization_12 (Batc (None, None, 200, 32) 128
_________________________________________________________________
max_pooling2d_4 (MaxPooling2 (None, None, 100, 32) 0
_________________________________________________________________
conv2d_13 (Conv2D) (None, None, 100, 64) 18496
_________________________________________________________________
batch_normalization_13 (Batc (None, None, 100, 64) 256
_________________________________________________________________
conv2d_14 (Conv2D) (None, None, 100, 64) 36928
_________________________________________________________________
batch_normalization_14 (Batc (None, None, 100, 64) 256
_________________________________________________________________
max_pooling2d_5 (MaxPooling2 (None, None, 50, 64) 0
_________________________________________________________________
conv2d_15 (Conv2D) (None, None, 50, 128) 73856
_________________________________________________________________
batch_normalization_15 (Batc (None, None, 50, 128) 512
_________________________________________________________________
conv2d_16 (Conv2D) (None, None, 50, 128) 147584
_________________________________________________________________
batch_normalization_16 (Batc (None, None, 50, 128) 512
_________________________________________________________________
max_pooling2d_6 (MaxPooling2 (None, None, 25, 128) 0
_________________________________________________________________
conv2d_17 (Conv2D) (None, None, 25, 128) 147584
_________________________________________________________________
batch_normalization_17 (Batc (None, None, 25, 128) 512
_________________________________________________________________
conv2d_18 (Conv2D) (None, None, 25, 128) 147584
_________________________________________________________________
batch_normalization_18 (Batc (None, None, 25, 128) 512
_________________________________________________________________
conv2d_19 (Conv2D) (None, None, 25, 128) 147584
_________________________________________________________________
batch_normalization_19 (Batc (None, None, 25, 128) 512
_________________________________________________________________
conv2d_20 (Conv2D) (None, None, 25, 128) 147584
_________________________________________________________________
batch_normalization_20 (Batc (None, None, 25, 128) 512
_________________________________________________________________
reshape_2 (Reshape) (None, None, 3200) 0
_________________________________________________________________
dense_3 (Dense) (None, None, 256) 819456
_________________________________________________________________
dense_4 (Dense) (None, None, 230) 59110
=================================================================
Total params: 1,759,174
Trainable params: 1,757,254
Non-trainable params: 1,920
_________________________________________________________________
loading acoustic model...
loading language model...
INFO:tensorflow:Restoring parameters from logs_lm/model
Run the speech recognition system
for i in range(5):
    print('\n the ', i, 'th example.')
    # Feed one batch through the trained acoustic model.
    inputs, outputs = next(am_batch)
    x = inputs['the_inputs']
    y = inputs['the_labels'][0]
    result = am.model.predict(x, steps=1)
    # Convert the numeric output into pinyin text.
    _, text = decode_ctc(result, train_data.am_vocab)
    text = ' '.join(text)
    print('decoded pinyin:', text)
    print('reference pinyin:', ' '.join([train_data.am_vocab[int(i)] for i in y]))
    with sess.as_default():
        _, y = next(lm_batch)
        text = text.strip('\n').split(' ')
        x = np.array([train_data.pny_vocab.index(pny) for pny in text])
        x = x.reshape(1, -1)
        preds = sess.run(lm.preds, {lm.x: x})
        got = ''.join(train_data.han_vocab[idx] for idx in preds[0])
        print('reference hanzi:', ''.join(train_data.han_vocab[idx] for idx in y[0]))
        print('recognized hanzi:', got)
sess.close()
the 0 th example.
decoded pinyin: lv4 shi4 yang2 chun1 yan1 jing3 da4 kuai4 wen2 zhang1 de di3 se4 si4 yue4 de lin2 luan2 geng4 shi4 lv4 de2 xian1 huo2 xiu4 mei4 shi1 yi4 ang4 ran2
reference pinyin: lv4 shi4 yang2 chun1 yan1 jing3 da4 kuai4 wen2 zhang1 de di3 se4 si4 yue4 de lin2 luan2 geng4 shi4 lv4 de2 xian1 huo2 xiu4 mei4 shi1 yi4 ang4 ran2
reference hanzi: 綠是陽(yáng)春煙景大塊文章的底色四月的林巒更是綠得鮮活秀媚詩(shī)意盎然
recognized hanzi: 綠是陽(yáng)春煙景大塊文章的底色四月的林巒更是綠得鮮活秀媚詩(shī)意盎然
the 1 th example.
decoded pinyin: ta1 jin3 ping2 yao1 bu4 de li4 liang4 zai4 yong3 dao4 shang4 xia4 fan1 teng2 yong3 dong4 she2 xing2 zhuang4 ru2 hai3 tun2 yi4 zhi2 yi3 yi1 tou2 de you1 shi4 ling3 xian1
reference pinyin: ta1 jin3 ping2 yao1 bu4 de li4 liang4 zai4 yong3 dao4 shang4 xia4 fan1 teng2 yong3 dong4 she2 xing2 zhuang4 ru2 hai3 tun2 yi4 zhi2 yi3 yi1 tou2 de you1 shi4 ling3 xian1
reference hanzi: 他僅憑腰部的力量在泳道上下翻騰蛹動(dòng)蛇行狀如海豚一直以一頭的優(yōu)勢(shì)領(lǐng)先
recognized hanzi: 他僅憑腰部的力量在泳道上下翻騰蛹動(dòng)蛇行狀如海豚一直以一頭的優(yōu)勢(shì)領(lǐng)先
the 2 th example.
decoded pinyin: pao4 yan3 da3 hao3 le zha4 yao4 zen3 me zhuang1 yue4 zheng4 cai2 yao3 le yao3 ya2 shu1 di4 tuo1 qu4 yi1 fu2 guang1 bang3 zi chong1 jin4 le shui3 cuan4 dong4
reference pinyin: pao4 yan3 da3 hao3 le zha4 yao4 zen3 me zhuang1 yue4 zheng4 cai2 yao3 le yao3 ya2 shu1 di4 tuo1 qu4 yi1 fu2 guang1 bang3 zi chong1 jin4 le shui3 cuan4 dong4
reference hanzi: 炮眼打好了炸藥怎么裝岳正才咬了咬牙倏地脫去衣服光膀子沖進(jìn)了水竄洞
recognized hanzi: 炮眼打好了炸藥怎么裝岳正才咬了咬牙倏地脫去衣服光膀子沖進(jìn)了水竄洞
the 3 th example.
decoded pinyin: ke3 shei2 zhi1 wen2 wan2 hou4 ta1 yi1 zhao4 jing4 zi zhi1 jian4 zuo3 xia4 yan3 jian3 de xian4 you4 cu1 you4 hei1 yu3 you4 ce4 ming2 xian3 bu2 dui4 cheng1
reference pinyin: ke3 shei2 zhi1 wen2 wan2 hou4 ta1 yi1 zhao4 jing4 zi zhi1 jian4 zuo3 xia4 yan3 jian3 de xian4 you4 cu1 you4 hei1 yu3 you4 ce4 ming2 xian3 bu2 dui4 cheng1
reference hanzi: 可誰(shuí)知紋完后她一照鏡子只見(jiàn)左下眼瞼的線(xiàn)又粗又黑與右側(cè)明顯不對(duì)稱(chēng)
recognized hanzi: 可誰(shuí)知紋完后她一照鏡子知見(jiàn)左下眼瞼的線(xiàn)右粗右黑與右側(cè)明顯不對(duì)稱(chēng)
the 4 th example.
decoded pinyin: yi1 jin4 men2 wo3 bei4 jing1 dai1 le zhe4 hu4 ming2 jiao4 pang2 ji2 de lao3 nong2 shi4 kang4 mei3 yuan2 chao2 fu4 shang1 hui2 xiang1 de lao3 bing1 qi1 zi3 chang2 nian2 you3 bing4 jia1 tu2 si4 bi4 yi1 pin2 ru2 xi3
reference pinyin: yi1 jin4 men2 wo3 bei4 jing1 dai1 le zhe4 hu4 ming2 jiao4 pang2 ji2 de lao3 nong2 shi4 kang4 mei3 yuan2 chao2 fu4 shang1 hui2 xiang1 de lao3 bing1 qi1 zi3 chang2 nian2 you3 bing4 jia1 tu2 si4 bi4 yi1 pin2 ru2 xi3
reference hanzi: 一進(jìn)門(mén)我被驚呆了這戶(hù)名叫龐吉的老農(nóng)是抗美援朝負(fù)傷回鄉(xiāng)的老兵妻子長(zhǎng)年有病家徒四壁一貧如洗
recognized hanzi: 一進(jìn)門(mén)我被驚呆了這戶(hù)名叫龐吉的老農(nóng)是抗美援朝負(fù)傷回鄉(xiāng)的老兵妻子長(zhǎng)年有病家徒四壁一貧如洗
Complete code: https://download.csdn.net/download/qq_38735017/87387313