base_model:
- antony66/whisper-large-v3-russian
- bond005/whisper-large-v3-ru-podlodka
language:
- ru
library_name: transformers
tags:
- automatic-speech-recognition
- whisper
- russian
- mergekit
- merge
datasets:
- mozilla-foundation/common_voice_17_0
- bond005/taiga_speech_v2
- bond005/podlodka_speech
- bond005/rulibrispeech
metrics:
- wer
new_version: Apel-sin/whisper-large-v3-russian-ties-podlodka-v1.2
Model details
This model is a merge of the base models above, created with the TIES merge method. The merge configuration:
Merge method: ties
Parameters:
  Density: 0.85
  Encoder weights:
    - 0.65
    - 0.35
  Decoder weights:
    - 0.6
    - 0.4
Models:
  Model A: "/mnt/cloud/llm/whisper/whisper-large-v3-russian"
  Model B: "/mnt/cloud/llm/whisper/whisper-large-v3-ru-podlodka"
Output directory: "/mnt/cloud/llm/whisper/whisper-large-v3-russian-ties-podlodka"
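The exact merge script is not published here. Purely as an illustration, the sketch below shows how a TIES-style merge with the density and per-component weights from the configuration above could be written in plain PyTorch. The base checkpoint (openai/whisper-large-v3), the helper name ties_merge_tensor, and the encoder/decoder split by parameter name are assumptions made for this example, not the actual pipeline behind this model.

import torch
from transformers import WhisperForConditionalGeneration

def ties_merge_tensor(base, tensors, weights, density):
    # TIES: trim each task vector, elect a sign per parameter, then average the agreeing entries.
    deltas = [t.float() - base.float() for t in tensors]          # task vectors relative to the base
    trimmed = []
    for d in deltas:
        k = max(1, int(density * d.numel()))                      # keep the top-`density` fraction by magnitude
        thresh = d.abs().flatten().topk(k).values[-1]
        trimmed.append(torch.where(d.abs() >= thresh, d, torch.zeros_like(d)))
    stacked = torch.stack([w * d for w, d in zip(weights, trimmed)])
    sign = torch.sign(stacked.sum(dim=0))                         # elected sign per parameter
    agree = (torch.sign(stacked) == sign) & (stacked != 0)
    merged = (stacked * agree).sum(dim=0) / agree.sum(dim=0).clamp(min=1)
    return (base.float() + merged).to(base.dtype)

# Assumed base checkpoint for the task vectors; the real merge may have used a different one.
base = WhisperForConditionalGeneration.from_pretrained("openai/whisper-large-v3")
model_a = WhisperForConditionalGeneration.from_pretrained("antony66/whisper-large-v3-russian")
model_b = WhisperForConditionalGeneration.from_pretrained("bond005/whisper-large-v3-ru-podlodka")

sd_base, sd_a, sd_b = base.state_dict(), model_a.state_dict(), model_b.state_dict()
merged_sd = {}
for name, p_base in sd_base.items():
    # Decoder weights for decoder tensors, encoder weights for everything else (an assumption).
    weights = [0.6, 0.4] if ".decoder." in name else [0.65, 0.35]
    merged_sd[name] = ties_merge_tensor(p_base, [sd_a[name], sd_b[name]], weights, density=0.85)

base.load_state_dict(merged_sd)
base.save_pretrained("whisper-large-v3-russian-ties-podlodka")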
Simple API service
The model can be used with an open-source, OpenAI-compatible API server: https://github.com/kreolsky/whisper-api-server/
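For a quick sanity check against such a server, a client call might look like the sketch below. It assumes the server exposes an OpenAI-style /v1/audio/transcriptions route on localhost:8000; the host, port, route, and field names are assumptions, so check the server's own README for the actual values.

import requests

# Hypothetical endpoint and fields; adjust to the server's actual configuration.
with open("audio.wav", "rb") as f:
    resp = requests.post(
        "http://localhost:8000/v1/audio/transcriptions",
        files={"file": ("audio.wav", f, "audio/wav")},
        data={"language": "ru"},
    )
print(resp.json()["text"])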
Usage
For phone call recordings, it is strongly recommended to preprocess the audio and normalize the volume first, for example:
sox recording.wav -r 8000 recording_normalized.wav norm -0.5 compand 0.3,1 -90,-90,-70,-50,-40,-15,0,0 -7 0 0.15
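If you prefer to keep the whole flow in Python, the same sox invocation can be wrapped with subprocess, as in the sketch below (it assumes sox is installed and on PATH; the file names are only illustrative).

import subprocess

def normalize_for_telephony(src: str, dst: str) -> None:
    # Set an 8 kHz output rate, normalize the level, and apply the compander settings from the command above.
    subprocess.run(
        [
            "sox", src, "-r", "8000", dst,
            "norm", "-0.5",
            "compand", "0.3,1", "-90,-90,-70,-50,-40,-15,0,0", "-7", "0", "0.15",
        ],
        check=True,
    )

normalize_for_telephony("recording.wav", "recording_normalized.wav")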
Example ASR code:
import torch
from transformers import WhisperForConditionalGeneration, WhisperProcessor, pipeline

torch_dtype = torch.bfloat16  # set your preferred dtype here

# Pick the best available device.
device = 'cpu'
if torch.cuda.is_available():
    device = 'cuda'
elif torch.backends.mps.is_available():
    device = 'mps'
    # Workaround for MPS: make torch.distributed report that it is not initialized.
    setattr(torch.distributed, "is_initialized", lambda: False)

device = torch.device(device)

# Load the model weights and the matching processor (tokenizer + feature extractor).
whisper = WhisperForConditionalGeneration.from_pretrained(
    "antony66/whisper-large-v3-russian", torch_dtype=torch_dtype, low_cpu_mem_usage=True, use_safetensors=True,
)
processor = WhisperProcessor.from_pretrained("antony66/whisper-large-v3-russian")

# Build the ASR pipeline with 30-second chunking for long recordings.
asr_pipeline = pipeline(
    "automatic-speech-recognition",
    model=whisper,
    tokenizer=processor.tokenizer,
    feature_extractor=processor.feature_extractor,
    max_new_tokens=256,
    chunk_length_s=30,
    batch_size=16,
    return_timestamps=True,
    torch_dtype=torch_dtype,
    device=device,
)

# Read the preprocessed recording into memory.
from io import BytesIO
wav = BytesIO()
with open('recording_normalized.wav', 'rb') as f:
    wav.write(f.read())
wav.seek(0)

# Transcribe; timestamps are disabled for this call.
asr = asr_pipeline(wav, generate_kwargs={"language": "russian", "max_new_tokens": 256}, return_timestamps=False)

print(asr['text'])
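The pipeline above is constructed with return_timestamps=True, but the final call switches timestamps off. If segment timestamps are needed, the same pipeline can be called with them enabled and the chunks field read from the result; the snippet below reuses the asr_pipeline object and the preprocessed file from above.

# Optional: segment-level timestamps (the ASR pipeline then returns a "chunks" list alongside the text).
asr = asr_pipeline("recording_normalized.wav", generate_kwargs={"language": "russian", "max_new_tokens": 256}, return_timestamps=True)
for chunk in asr["chunks"]:
    start, end = chunk["timestamp"]  # end may be None for the last chunk
    print(f"[{start} - {end}] {chunk['text']}")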
Work in progress
This is a development version; the goal is to improve recognition quality for phone call speech. If you have a good dataset or suggestions for improvement, please get in touch; your contribution will be greatly appreciated.