🚀 Whisper Large v3 for Speech Fluency Classification
This model classifies speech fluency: it determines whether speech is fluent and, for disfluent speech, identifies the type of disfluency, making it a useful building block for speech analysis.
🚀 Quick Start
Model Description
This model performs speech fluency classification, as described in the paper Vox-Profile: A Speech Foundation Model Benchmark for Characterizing Diverse Speaker and Speech Traits (https://arxiv.org/pdf/2505.14648).
The model first predicts, over 3-second windows with a 1-second stride, whether each segment of the speech belongs to one of the following classes:

```python
["fluent", "disfluent"]
```
If a segment is detected as disfluent, the model further predicts the disfluency type, among:

```python
[
    "block",
    "prolongation",
    "sound repetition",
    "word repetition",
    "interjection"
]
```
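The 3-second-window, 1-second-stride scheme determines how many per-window predictions an utterance produces. A minimal sketch of that arithmetic (the helper name `num_windows` is ours, for illustration only; it mirrors the segmentation used in the prediction code below):

```python
# Window arithmetic: 3 s windows, 1 s stride, 16 kHz audio.
SAMPLE_RATE = 16000

def num_windows(num_samples: int) -> int:
    # Number of full 3 s windows that fit with a 1 s stride
    n = (num_samples - 3 * SAMPLE_RATE) // SAMPLE_RATE + 1
    return max(n, 1)  # clips shorter than 3 s still yield one window

print(num_windows(10 * SAMPLE_RATE))  # a 10 s clip -> 8 windows
```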
How to Use This Model
Download the repository

```shell
git clone git@github.com:tiantiaf0627/vox-profile-release.git
```
Install the package

```shell
conda create -n vox_profile python=3.8
cd vox-profile-release
pip install -e .
```
Load the model

```python
import torch
import torch.nn.functional as F
from src.model.fluency.whisper_fluency import WhisperWrapper

# Select GPU when available, otherwise fall back to CPU
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Load the fluency model weights from the Hugging Face Hub
model = WhisperWrapper.from_pretrained("tiantiaf/whisper-large-v3-speech-flow").to(device)
model.eval()
```
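The prediction code below assumes 16 kHz mono audio of at least one 3 s window. A minimal sketch of zero-padding a shorter clip to that minimum (the `prepare` helper is ours, not part of the repository; actual audio loading, e.g. via torchaudio, is up to you):

```python
import torch
import torch.nn.functional as F

SAMPLE_RATE = 16000
MIN_SAMPLES = 3 * SAMPLE_RATE  # the model slices audio into 3 s windows

def prepare(waveform: torch.Tensor) -> torch.Tensor:
    """Zero-pad a [1, T] 16 kHz mono waveform to at least 3 s."""
    if waveform.shape[1] < MIN_SAMPLES:
        waveform = F.pad(waveform, (0, MIN_SAMPLES - waveform.shape[1]))
    return waveform

clip = torch.zeros([1, SAMPLE_RATE])  # a 1 s silent clip
print(prepare(clip).shape)  # torch.Size([1, 48000])
```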
Prediction

```python
# Dummy input: 10 s of silence at 16 kHz (replace with your own 16 kHz mono audio)
audio_data = torch.zeros([1, 16000 * 10]).float().to(device)

# Slice the audio into 3 s windows with a 1 s stride
audio_segment = (audio_data.shape[1] - 3 * 16000) // 16000 + 1
if audio_segment < 1:
    audio_segment = 1

input_audio = list()
input_audio_length = list()
for idx in range(audio_segment):
    segment = audio_data[0, 16000 * idx : 16000 * idx + 3 * 16000]
    input_audio.append(segment)
    input_audio_length.append(torch.tensor(len(segment)))
input_audio = torch.stack(input_audio, dim=0)
input_audio_length = torch.stack(input_audio_length, dim=0)

# Per-window fluency logits and multi-label disfluency-type logits
fluency_outputs, disfluency_type_outputs = model(input_audio, length=input_audio_length)

fluency_prob = F.softmax(fluency_outputs, dim=1).detach().cpu().numpy().astype(float).tolist()
# Disfluency types are multi-label, so apply a sigmoid and threshold at 0.7
disfluency_type_prob = torch.sigmoid(disfluency_type_outputs)
disfluency_type_predictions = (disfluency_type_prob > 0.7).int().detach().cpu().numpy().tolist()
disfluency_type_prob = disfluency_type_prob.detach().cpu().numpy().astype(float).tolist()
```
Aggregate the predictions

```python
# Disfluency-type labels, in the order listed above
disfluency_type_labels = [
    "block",
    "prolongation",
    "sound repetition",
    "word repetition",
    "interjection",
]

utterance_fluency_list = list()
utterance_disfluency_list = list()
for audio_idx in range(audio_segment):
    disfluency_type = list()
    if fluency_prob[audio_idx][0] > 0.5:
        utterance_fluency_list.append("fluent")
    else:
        utterance_fluency_list.append("disfluent")
    # Keep every disfluency type whose probability cleared the 0.7 threshold
    predictions = disfluency_type_predictions[audio_idx]
    for label_idx in range(len(predictions)):
        if predictions[label_idx] == 1:
            disfluency_type.append(disfluency_type_labels[label_idx])
    utterance_disfluency_list.append(disfluency_type)

print(utterance_fluency_list)
print(utterance_disfluency_list)
```
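The per-window lists above can be collapsed into a single utterance-level call. A minimal sketch under a simple rule of our own (any disfluent window marks the whole utterance as disfluent, and its types are the union across windows; the `aggregate` helper is illustrative, not part of the repository):

```python
def aggregate(fluency_list, disfluency_list):
    """Collapse per-window labels into one utterance-level (label, types) pair."""
    is_disfluent = any(label == "disfluent" for label in fluency_list)
    # Union of disfluency types over all windows, sorted for stable output
    types = sorted({t for window in disfluency_list for t in window})
    return ("disfluent" if is_disfluent else "fluent"), types

label, types = aggregate(
    ["fluent", "disfluent", "fluent"],
    [[], ["sound repetition"], []],
)
print(label, types)  # disfluent ['sound repetition']
```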
Contact
If you have any questions, please contact: Tiantian Feng (tiantiaf@usc.edu)
Citation
If you use our model or find it useful in your work, please cite our paper:
```bibtex
@article{feng2025vox,
  title={Vox-Profile: A Speech Foundation Model Benchmark for Characterizing Diverse Speaker and Speech Traits},
  author={Feng, Tiantian and Lee, Jihwan and Xu, Anfeng and Lee, Yoonjeong and Lertpetchpun, Thanathai and Shi, Xuan and Wang, Helin and Thebaud, Thomas and Moro-Velazquez, Laureano and Byrd, Dani and others},
  journal={arXiv preprint arXiv:2505.14648},
  year={2025}
}
```
Information Table

| Property | Details |
|----------|---------|
| Model type | Speech fluency classification model |
| Base model | openai/whisper-large-v3 |
| Task type | Audio classification |
| Evaluation metric | Accuracy |
| License | Apache-2.0 |