WhisperX is an excellent open-source Python speech recognition library.
The notes below document deploying WhisperX on Windows 10.
1. Install a Python environment on the operating system
2. Install the CUDA environment
3. Install Anaconda or Miniconda
4. Download and install ffmpeg
Download the release-builds package, as shown in the figure below.
Extract the archive to a directory of your choice, then add its bin directory to the system PATH: This PC -> Advanced system settings -> Environment Variables -> Path.
After configuring, open a cmd window and run:
ffmpeg
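If the PATH entry is correct, the command prints ffmpeg's version banner and usage text. A quick cross-platform way to confirm the same thing from Python (a small sketch, not part of the original setup) is `shutil.which`, which resolves an executable name against PATH:

```python
import shutil

# Look up the ffmpeg executable on the current PATH.
# Returns the full path (e.g. ...\ffmpeg.exe on Windows) if found, else None.
ffmpeg_path = shutil.which("ffmpeg")

if ffmpeg_path:
    print("ffmpeg found at:", ffmpeg_path)
else:
    print("ffmpeg not found - check your Path environment variable")
```

WhisperX calls ffmpeg when decoding audio, so this check is worth doing before moving on.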
5. Create a conda virtual environment at a specified location
conda create --prefix=D:\Projects\LiimouDemo\WhisperX\Code\whisperX\whisperXVenv python=3.10
6. Activate the virtual environment
conda activate D:\Projects\LiimouDemo\WhisperX\Code\whisperX\whisperXVenv
7. Install the WhisperX library
pip install git+https://github.com/m-bain/whisperx.git
8. Upgrade the WhisperX library
pip install git+https://github.com/m-bain/whisperx.git --upgrade
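To confirm the install (or check which version the upgrade left you with), you can query package metadata from inside the virtual environment. A small sketch using the standard library; "whisperx" here is the distribution name used by the pip command above:

```python
from importlib import metadata

def package_version(name: str) -> str:
    """Return the installed version of a distribution, or a marker if absent."""
    try:
        return metadata.version(name)
    except metadata.PackageNotFoundError:
        return "not installed"

print("whisperx:", package_version("whisperx"))
```

If this prints "not installed", double-check that the conda environment you installed into is the one that is currently activated.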
9. Use it in Python
import time

import whisperx

device = "cuda"
audio_file = "data/test.mp3"
batch_size = 16  # reduce if low on GPU mem
compute_type = "float16"  # change to "int8" if low on GPU mem (may reduce accuracy)
# compute_type = "int8"

print('Loading model...')
start = time.time()
# 1. Transcribe with original whisper (batched)
model = whisperx.load_model("large-v2", device, compute_type=compute_type)
# model = whisperx.load_model("small", device, compute_type=compute_type)
end = time.time()
print('Model load time:', end - start, 's')

start = time.time()
audio = whisperx.load_audio(audio_file)
result = model.transcribe(audio, batch_size=batch_size)
print(result["segments"][0]["text"])  # before alignment
end = time.time()
print('Transcription time:', end - start, 's')
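Note that the snippet above prints only the first segment. WhisperX returns the transcript as a list of segments, each a dict with start, end, and text keys, so for the full text you join them. A sketch on a hand-made result dict in that shape (the values are invented for illustration):

```python
# A sample result in the shape returned by model.transcribe()
# (segment values here are made up for illustration).
result = {
    "segments": [
        {"start": 0.0, "end": 2.5, "text": "Hello there."},
        {"start": 2.5, "end": 5.0, "text": "This is a test."},
    ],
    "language": "en",
}

# Join every segment's text into one transcript string.
full_text = " ".join(seg["text"].strip() for seg in result["segments"])
print(full_text)  # Hello there. This is a test.
```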
Wrapping the code above in a class lets you call loadModel() once at initialization and then simply call asr(path) for each recognition.
import time

import whisperx
import zhconv
from whisperx.asr import FasterWhisperPipeline


class WhisperXTool:
    device = "cuda"
    audio_file = "data/test.mp3"
    batch_size = 16  # reduce if low on GPU mem
    compute_type = "float16"  # change to "int8" if low on GPU mem (may reduce accuracy)

    fast_model: FasterWhisperPipeline

    def loadModel(self):
        # Load the batched faster-whisper pipeline once, up front
        self.fast_model = whisperx.load_model("large-v2", self.device, compute_type=self.compute_type)
        print("Model loaded")

    def asr(self, filePath: str):
        start = time.time()
        audio = whisperx.load_audio(filePath)
        result = self.fast_model.transcribe(audio, batch_size=self.batch_size)
        s = result["segments"][0]["text"]
        s1 = zhconv.convert(s, 'zh-cn')  # convert Traditional to Simplified Chinese
        print(s1)
        end = time.time()
        print('Transcription time:', end - start, 's')
        return s1
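The point of the wrapper is that the expensive load_model() call happens once, and every asr() call reuses self.fast_model. The pattern can be demonstrated without a GPU or whisperx installed, using a stand-in model (DummyModel and AsrTool below are illustrative names, not part of WhisperX):

```python
class DummyModel:
    """Stand-in for the FasterWhisperPipeline; transcribe() just echoes its input."""
    def transcribe(self, audio, batch_size=16):
        return {"segments": [{"text": audio}]}

class AsrTool:
    model = None
    load_count = 0

    def loadModel(self):
        # Real code would call whisperx.load_model(...) here; this counts calls
        # to show the expensive step runs only once.
        self.load_count += 1
        self.model = DummyModel()

    def asr(self, path: str):
        result = self.model.transcribe(path)
        return result["segments"][0]["text"]

tool = AsrTool()
tool.loadModel()             # slow step, done once at startup
print(tool.asr("a.mp3"))     # a.mp3
print(tool.asr("b.mp3"))     # b.mp3
print(tool.load_count)       # 1
```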
zhconv is a library for converting between Simplified and Traditional Chinese; install it with:
pip install zhconv
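zhconv.convert(text, 'zh-cn') maps Traditional characters to their Simplified forms using built-in conversion tables. Conceptually it is a character-level mapping, which a toy version can illustrate (real conversion also handles multi-character phrases; the table here is a tiny hand-picked sample, not zhconv's):

```python
# Tiny illustrative Traditional -> Simplified table (zhconv ships full tables).
T2S = {"識": "识", "別": "别", "時": "时", "間": "间", "開": "开", "載": "载"}

def to_simplified(text: str) -> str:
    # Replace each character that has a table entry; pass others through unchanged.
    return "".join(T2S.get(ch, ch) for ch in text)

print(to_simplified("識別時間"))  # 识别时间
```

Whisper models sometimes emit Traditional Chinese for Mandarin audio, which is why the asr() method above normalizes the output to Simplified.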
That concludes these notes on deploying whisperX speech recognition locally.