Speech recognition model fine-tuned from facebook/wav2vec2-large-xlsr-53 on Taiwanese Mandarin (zh-TW); expects speech input sampled at 16 kHz
Downloads: 47
Released: 3/2/2022
Model Overview
This is an automatic speech recognition (ASR) model optimized for Taiwanese Mandarin, fine-tuned from Facebook's wav2vec2-large-xlsr-53 architecture and trained on the Common Voice zh-TW dataset.
Features
Optimized for Taiwanese Mandarin
Fine-tuned specifically for the characteristics of Taiwanese Mandarin speech
Language model fusion
Can be combined with a language model such as GPT or BERT to improve recognition accuracy (a minimal sketch follows the capability list below)
Efficient inference
Reaches a CER of 18.36% on the Common Voice test set with relatively fast inference
Capabilities
Taiwanese Mandarin speech recognition
Handles audio sampled at 16 kHz
Can be combined with language models
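To illustrate the fusion idea before the full pipeline below, here is a minimal, self-contained sketch (random tensors stand in for real model outputs; the shapes are hypothetical): the per-frame acoustic distribution from CTC is multiplied element-wise with a language-model distribution over the same vocabulary, and the argmax is decoded.

import torch

# Minimal sketch of acoustic/LM fusion (hypothetical: 5 frames, 21128-token
# BERT-style vocabulary; the real pipeline appears in the Usage section below).
voice_prob = torch.rand(5, 21128).softmax(dim=-1)  # per-frame CTC posteriors
lm_prob = torch.rand(5, 21128).softmax(dim=-1)     # per-position LM posteriors
comb_pred_ids = torch.argmax(voice_prob * lm_prob, dim=-1)  # fused token ids

The evaluation sections below implement this with ckiplab/gpt2-base-chinese and bert-base-chinese as the rescoring models.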
Use cases
Speech-to-text
Taiwanese Mandarin transcription
Converts Taiwanese Mandarin speech into text
CER 18.36% (evaluated with GPT + beam search)
Voice assistants
Voice command recognition for Taiwan
Recognizes voice commands in Taiwanese Mandarin
language: zh-TW
datasets:
- common_voice
tags:
- audio
- automatic-speech-recognition
- hf-asr-leaderboard
- robust-speech-event
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: XLSR Wav2Vec2 Taiwanese Mandarin (zh-tw) by Voidful
  results:
  - task:
      name: Speech Recognition
      type: automatic-speech-recognition
    dataset:
      name: Common Voice zh-TW
      type: common_voice
      args: zh-TW
    metrics:
    - name: Test CER
      type: cer
      value: 18.36
Wav2Vec2-Large-XLSR-53-tw-gpt
Fine-tuned from facebook/wav2vec2-large-xlsr-53 on zh-TW using Common Voice.
When using this model, make sure that your speech input is sampled at 16 kHz.
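The usage example below hardcodes a 48 kHz to 16 kHz resampler, which matches Common Voice clips. For audio from other sources, a minimal sketch that resamples based on the file's actual rate (the file name is hypothetical):

import torchaudio

speech, sr = torchaudio.load("sample.wav")  # hypothetical input file
if sr != 16_000:
    # Resample from the file's native rate to the 16 kHz the model expects.
    speech = torchaudio.transforms.Resample(orig_freq=sr, new_freq=16_000)(speech)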
Usage
import torchaudio
from datasets import load_dataset, load_metric
from transformers import (
    Wav2Vec2ForCTC,
    Wav2Vec2Processor,
    AutoTokenizer,
    AutoModelWithLMHead,
)
import torch
import re
import sys

model_name = "voidful/wav2vec2-large-xlsr-53-tw-gpt"
device = "cuda"
processor_name = "voidful/wav2vec2-large-xlsr-53-tw-gpt"
chars_to_ignore_regex = r"[¥•"#$%&'()*+,-/:;<=>@[\]^_`{|}~⦅⦆「」、 、〃〈〉《》「」『』【】〔〕〖〗〘〙〚〛〜〝〞〟〰〾〿–—‘’‛“”„‟…‧﹏﹑﹔·'℃°•·.﹑︰〈〉─《﹖﹣﹂﹁﹔!?。。"#$%&'()*+,﹐-/:;<=>@[\]^_`{|}~⦅⦆「」、、〃》「」『』【】〔〕〖〗〘〙〚〛〜〝〞〟〰〾〿–—‘’‛“”„‟…‧﹏..!\"#$%&()*+,\-.\:;<=>?@\[\]\\\/^_`{|}~]"
model = Wav2Vec2ForCTC.from_pretrained(model_name).to(device)
processor = Wav2Vec2Processor.from_pretrained(processor_name)
tokenizer = AutoTokenizer.from_pretrained("ckiplab/gpt2-base-chinese")
gpt_model = AutoModelWithLMHead.from_pretrained("ckiplab/gpt2-base-chinese").to(device)

# Common Voice clips are 48 kHz; resample to the 16 kHz the model expects.
resampler = torchaudio.transforms.Resample(orig_freq=48_000, new_freq=16_000)

def load_file_to_data(file):
    batch = {}
    speech, _ = torchaudio.load(file)
    batch["speech"] = resampler.forward(speech.squeeze(0)).numpy()
    batch["sampling_rate"] = resampler.new_freq
    return batch

def predict(data):
    features = processor(data["speech"], sampling_rate=data["sampling_rate"], padding=True, return_tensors="pt")
    input_values = features.input_values.to(device)
    attention_mask = features.attention_mask.to(device)
    with torch.no_grad():
        logits = model(input_values, attention_mask=attention_mask).logits
    decoded_results = []
    for logit in logits:
        pred_ids = torch.argmax(logit, dim=-1)
        # Keep only frames whose greedy prediction is a real token (id >= 1),
        # then renormalize the acoustic distribution over those frames.
        mask = pred_ids.ge(1).unsqueeze(-1).expand(logit.size())
        vocab_size = logit.size()[-1]
        voice_prob = torch.nn.functional.softmax((torch.masked_select(logit, mask).view(-1, vocab_size)), dim=-1)
        # Run GPT over the greedy tokens and fuse both distributions element-wise.
        gpt_input = torch.cat((torch.tensor([tokenizer.cls_token_id]).to(device), pred_ids[pred_ids > 0]), 0)
        gpt_prob = torch.nn.functional.softmax(gpt_model(gpt_input).logits, dim=-1)[:voice_prob.size()[0], :]
        comb_pred_ids = torch.argmax(gpt_prob * voice_prob, dim=-1)
        decoded_results.append(processor.decode(comb_pred_ids))
    return decoded_results
Prediction:
predict(load_file_to_data('path/to/audio/file'))
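predict returns a list of decoded strings, one per input clip, so the call above yields a single-element list.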
Evaluation
The model can be evaluated on the Common Voice zh-TW test data as follows.
CER is computed following https://huggingface.co/ctl/wav2vec2-large-xlsr-cantonese
Environment setup:
!pip install editdistance
!pip install torchaudio
!pip install datasets transformers
Evaluation without a language model:
import torchaudio
from datasets import load_dataset, load_metric
from transformers import (
    Wav2Vec2ForCTC,
    Wav2Vec2Processor,
)
import torch
import re
import sys
import editdistance as ed  # needed by cer_cal below
from transformers import AutoTokenizer, AutoModelWithLMHead
from datasets import Audio
from math import log
model_name = "voidful/wav2vec2-large-xlsr-53-tw-gpt"
device = "cuda"
processor_name = "voidful/wav2vec2-large-xlsr-53-tw-gpt"
chars_to_ignore_regex = r"[¥•"#$%&'()*+,-/:;<=>@[\]^_`{|}~⦅⦆「」、 、〃〈〉《》「」『』【】〔〕〖〗〘〙〚〛〜〝〞〟〰〾〿–—‘’‛“”„‟…‧﹏﹑﹔·'℃°•·.﹑︰〈〉─《﹖﹣﹂﹁﹔!?。。"#$%&'()*+,﹐-/:;<=>@[\]^_`{|}~⦅⦆「」、、〃》「」『』【】〔〕〖〗〘〙〚〛〜〝〞〟〰〾〿–—‘’‛“”„‟…‧﹏..!\"#$%&()*+,\-.\:;<=>?@\[\]\\\/^_`{|}~]"
tokenizer = AutoTokenizer.from_pretrained("ckiplab/gpt2-base-chinese")
lm_model = AutoModelWithLMHead.from_pretrained("ckiplab/gpt2-base-chinese").to(device)
model = Wav2Vec2ForCTC.from_pretrained(model_name).to(device)
processor = Wav2Vec2Processor.from_pretrained(processor_name)
ds = load_dataset("common_voice", 'zh-TW', split="test")
ds = ds.cast_column("audio", Audio(sampling_rate=16_000))
def map_to_array(batch):
    audio = batch["audio"]
    batch["speech"] = processor(audio["array"], sampling_rate=audio["sampling_rate"]).input_values[0]
    batch["sampling_rate"] = audio["sampling_rate"]
    batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower().replace("’", "'")
    return batch
ds = ds.map(map_to_array)
def map_to_pred(batch):
    features = processor(batch["speech"], sampling_rate=batch["sampling_rate"][0], padding=True, return_tensors="pt")
    input_values = features.input_values.to(device)
    attention_mask = features.attention_mask.to(device)
    with torch.no_grad():
        logits = model(input_values, attention_mask=attention_mask).logits
    pred_ids = torch.argmax(logits, dim=-1)
    batch["predicted"] = processor.batch_decode(pred_ids)
    batch["target"] = batch["sentence"]
    return batch
result = ds.map(map_to_pred, batched=True, batch_size=3, remove_columns=list(ds.features.keys()))
def cer_cal(groundtruth, hypothesis):
    err = 0
    tot = 0
    # Corpus-level CER: total character edit distance over total reference length.
    for p, t in zip(hypothesis, groundtruth):
        err += float(ed.eval(p.lower(), t.lower()))
        tot += len(t)
    return err / tot

print("CER: {:.2f}".format(100 * cer_cal(result["target"], result["predicted"])))
CER: 28.70
Time: 04:08 min
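Here cer_cal computes a corpus-level CER: the summed character edit distance divided by the total number of reference characters. A quick sanity check with hypothetical strings (one substituted character in a three-character reference):

import editdistance as ed

print(100 * cer_cal(["你好吗"], ["你好嗎"]))  # 1 edit / 3 chars ≈ 33.33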
Evaluation with GPT:
import torchaudio
from datasets import load_dataset, load_metric
from transformers import (
    Wav2Vec2ForCTC,
    Wav2Vec2Processor,
)
import torch
import re
import sys
import editdistance as ed  # needed by cer_cal below
from transformers import AutoTokenizer, AutoModelWithLMHead
from datasets import Audio
from math import log
model_name = "voidful/wav2vec2-large-xlsr-53-tw-gpt"
device = "cuda"
processor_name = "voidful/wav2vec2-large-xlsr-53-tw-gpt"
chars_to_ignore_regex = r"[¥•"#$%&'()*+,-/:;<=>@[\]^_`{|}~⦅⦆「」、 、〃〈〉《》「」『』【】〔〕〖〗〘〙〚〛〜〝〞〟〰〾〿–—‘’‛“”„‟…‧﹏﹑﹔·'℃°•·.﹑︰〈〉─《﹖﹣﹂﹁﹔!?。。"#$%&'()*+,﹐-/:;<=>@[\]^_`{|}~⦅⦆「」、、〃》「」『』【】〔〕〖〗〘〙〚〛〜〝〞〟〰〾〿–—‘’‛“”„‟…‧﹏..!\"#$%&()*+,\-.\:;<=>?@\[\]\\\/^_`{|}~]"
tokenizer = AutoTokenizer.from_pretrained("ckiplab/gpt2-base-chinese")
lm_model = AutoModelWithLMHead.from_pretrained("ckiplab/gpt2-base-chinese").to(device)
model = Wav2Vec2ForCTC.from_pretrained(model_name).to(device)
processor = Wav2Vec2Processor.from_pretrained(processor_name)
ds = load_dataset("common_voice", 'zh-TW', split="test")
ds = ds.cast_column("audio", Audio(sampling_rate=16_000))
def map_to_array(batch):
    audio = batch["audio"]
    batch["speech"] = processor(audio["array"], sampling_rate=audio["sampling_rate"]).input_values[0]
    batch["sampling_rate"] = audio["sampling_rate"]
    batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower().replace("’", "'")
    return batch
ds = ds.map(map_to_array)
def map_to_pred(batch):
    features = processor(batch["speech"], sampling_rate=batch["sampling_rate"][0], padding=True, return_tensors="pt")
    input_values = features.input_values.to(device)
    attention_mask = features.attention_mask.to(device)
    with torch.no_grad():
        logits = model(input_values, attention_mask=attention_mask).logits
    decoded_results = []
    for logit in logits:
        pred_ids = torch.argmax(logit, dim=-1)
        # Drop blank/padding frames and renormalize the acoustic distribution.
        mask = pred_ids.ge(1).unsqueeze(-1).expand(logit.size())
        vocab_size = logit.size()[-1]
        voice_prob = torch.nn.functional.softmax((torch.masked_select(logit, mask).view(-1, vocab_size)), dim=-1)
        # Score the greedy tokens with GPT and fuse the two distributions.
        lm_input = torch.cat((torch.tensor([tokenizer.cls_token_id]).to(device), pred_ids[pred_ids > 0]), 0)
        lm_prob = torch.nn.functional.softmax(lm_model(lm_input).logits, dim=-1)[:voice_prob.size()[0], :]
        comb_pred_ids = torch.argmax(lm_prob * voice_prob, dim=-1)
        decoded_results.append(processor.decode(comb_pred_ids))
    batch["predicted"] = decoded_results
    batch["target"] = batch["sentence"]
    return batch
result = ds.map(map_to_pred, batched=True, batch_size=3, remove_columns=list(ds.features.keys()))
def cer_cal(groundtruth, hypothesis):
    err = 0
    tot = 0
    for p, t in zip(hypothesis, groundtruth):
        err += float(ed.eval(p.lower(), t.lower()))
        tot += len(t)
    return err / tot

print("CER: {:.2f}".format(100 * cer_cal(result["target"], result["predicted"])))
CER: 25.70
Time: 06:04 min
Evaluation with GPT + beam search:
import torchaudio
from datasets import load_dataset, load_metric
from transformers import (
    Wav2Vec2ForCTC,
    Wav2Vec2Processor,
)
import torch
import re
import sys
import editdistance as ed  # needed by cer_cal below
from transformers import AutoTokenizer, AutoModelWithLMHead
from datasets import Audio
from math import log
model_name = "voidful/wav2vec2-large-xlsr-53-tw-gpt"
device = "cuda"
processor_name = "voidful/wav2vec2-large-xlsr-53-tw-gpt"
chars_to_ignore_regex = r"[¥•"#$%&'()*+,-/:;<=>@[\]^_`{|}~⦅⦆「」、 、〃〈〉《》「」『』【】〔〕〖〗〘〙〚〛〜〝〞〟〰〾〿–—‘’‛“”„‟…‧﹏﹑﹔·'℃°•·.﹑︰〈〉─《﹖﹣﹂﹁﹔!?。。"#$%&'()*+,﹐-/:;<=>@[\]^_`{|}~⦅⦆「」、、〃》「」『』【】〔〕〖〗〘〙〚〛〜〝〞〟〰〾〿–—‘’‛“”„‟…‧﹏..!\"#$%&()*+,\-.\:;<=>?@\[\]\\\/^_`{|}~]"
tokenizer = AutoTokenizer.from_pretrained("ckiplab/gpt2-base-chinese")
lm_model = AutoModelWithLMHead.from_pretrained("ckiplab/gpt2-base-chinese").to(device)
model = Wav2Vec2ForCTC.from_pretrained(model_name).to(device)
processor = Wav2Vec2Processor.from_pretrained(processor_name)
ds = load_dataset("common_voice", 'zh-TW', split="test")
ds = ds.cast_column("audio", Audio(sampling_rate=16_000))
def map_to_array(batch):
    audio = batch["audio"]
    batch["speech"] = processor(audio["array"], sampling_rate=audio["sampling_rate"]).input_values[0]
    batch["sampling_rate"] = audio["sampling_rate"]
    batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower().replace("’", "'")
    return batch
ds = ds.map(map_to_array)
def map_to_pred(batch):
    features = processor(batch["speech"], sampling_rate=batch["sampling_rate"][0], padding=True, return_tensors="pt")
    input_values = features.input_values.to(device)
    attention_mask = features.attention_mask.to(device)
    with torch.no_grad():
        logits = model(input_values, attention_mask=attention_mask).logits
    decoded_results = []
    for logit in logits:
        sequences = [[[], 1.0]]  # beam of (token list, cumulative cost) pairs
        pred_ids = torch.argmax(logit, dim=-1)
        mask = pred_ids.ge(1).unsqueeze(-1).expand(logit.size())
        vocab_size = logit.size()[-1]
        voice_prob = torch.nn.functional.softmax((torch.masked_select(logit, mask).view(-1, vocab_size)), dim=-1)
        while True:
            all_candidates = list()
            exceed = False
            for seq in sequences:
                tokens, score = seq
                # Rescore the partial hypothesis with GPT ...
                gpt_input = torch.tensor([tokenizer.cls_token_id] + tokens).to(device)
                gpt_prob = torch.nn.functional.softmax(lm_model(gpt_input).logits, dim=-1)[:len(gpt_input), :]
                if len(gpt_input) >= len(voice_prob):
                    exceed = True
                # ... then fuse with the acoustic distribution and expand the beam
                # with the 50 highest fused-probability tokens at the last position.
                comb_pred_ids = gpt_prob * voice_prob[:len(gpt_input)]
                v, i = torch.topk(comb_pred_ids, 50, dim=-1)
                for tok_id, tok_prob in zip(i.tolist()[-1], v.tolist()[-1]):
                    candidate = [tokens + [tok_id], score + -log(tok_prob)]
                    all_candidates.append(candidate)
            # Keep the 10 lowest-cost hypotheses.
            ordered = sorted(all_candidates, key=lambda tup: tup[1])
            sequences = ordered[:10]
            if exceed:
                break
        decoded_results.append(processor.decode(sequences[0][0]))
    batch["predicted"] = decoded_results
    batch["target"] = batch["sentence"]
    return batch
result = ds.map(map_to_pred, batched=True, batch_size=3, remove_columns=list(ds.features.keys()))
def cer_cal(groundtruth, hypothesis):
    err = 0
    tot = 0
    for p, t in zip(hypothesis, groundtruth):
        err += float(ed.eval(p.lower(), t.lower()))
        tot += len(t)
    return err / tot

print("CER: {:.2f}".format(100 * cer_cal(result["target"], result["predicted"])))
CER: 18.36
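Note: in the beam search above, each step expands every kept hypothesis with the 50 highest fused-probability tokens (torch.topk(..., 50, dim=-1)) and retains the 10 lowest-cost sequences; decoding stops once the hypothesis length reaches the number of voiced frames.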
Evaluation with BERT:
import torchaudio
from datasets import load_dataset, load_metric
from transformers import (
    Wav2Vec2ForCTC,
    Wav2Vec2Processor,
)
import torch
import re
import sys
import editdistance as ed  # needed by cer_cal below
from transformers import AutoTokenizer, AutoModelForMaskedLM
model_name = "voidful/wav2vec2-large-xlsr-53-tw-gpt"
device = "cuda"
processor_name = "voidful/wav2vec2-large-xlsr-53-tw-gpt"
chars_to_ignore_regex = r"[¥•"#$%&'()*+,-/:;<=>@[\]^_`{|}~⦅⦆「」、 、〃〈〉《》「」『』【】〔〕〖〗〘〙〚〛〜〝〞〟〰〾〿–—‘’‛“”„‟…‧﹏﹑﹔·'℃°•·.﹑︰〈〉─《﹖﹣﹂﹁﹔!?。。"#$%&'()*+,﹐-/:;<=>@[\]^_`{|}~⦅⦆「」、、〃》「」『』【】〔〕〖〗〘〙〚〛〜〝〞〟〰〾〿–—‘’‛“”„‟…‧﹏..!\"#$%&()*+,\-.\:;<=>?@\[\]\\\/^_`{|}~]"
tokenizer = AutoTokenizer.from_pretrained("bert-base-chinese")
lm_model = AutoModelForMaskedLM.from_pretrained("bert-base-chinese").to(device)
model = Wav2Vec2ForCTC.from_pretrained(model_name).to(device)
processor = Wav2Vec2Processor.from_pretrained(processor_name)
ds = load_dataset("common_voice", 'zh-TW', data_dir="./cv-corpus-6.1-2020-12-11", split="test")
resampler = torchaudio.transforms.Resample(orig_freq=48_000, new_freq=16_000)
def map_to_array(batch):
    speech, _ = torchaudio.load(batch["path"])
    batch["speech"] = resampler.forward(speech.squeeze(0)).numpy()
    batch["sampling_rate"] = resampler.new_freq
    batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower().replace("’", "'")
    return batch
ds = ds.map(map_to_array)
def map_to_pred(batch):
    features = processor(batch["speech"], sampling_rate=batch["sampling_rate"][0], padding=True, return_tensors="pt")
    input_values = features.input_values.to(device)
    attention_mask = features.attention_mask.to(device)
    with torch.no_grad():
        logits = model(input_values, attention_mask=attention_mask).logits
    decoded_results = []
    for logit in logits:
        pred_ids = torch.argmax(logit, dim=-1)
        mask = ~pred_ids.eq(tokenizer.pad_token_id).unsqueeze(-1).expand(logit.size())
        vocab_size = logit.size()[-1]
        voice_prob = torch.nn.functional.softmax((torch.masked_select(logit, mask).view(-1, vocab_size)), dim=-1)
        lm_input = torch.masked_select(pred_ids, ~pred_ids.eq(tokenizer.pad_token_id)).unsqueeze(0)
        mask_lm_prob = voice_prob.clone()
        # Mask each position in turn and let BERT predict it from both sides.
        for i in range(lm_input.shape[-1]):
            masked_lm_input = lm_input.clone()
            masked_lm_input[0][i] = torch.tensor(tokenizer.mask_token_id).to(device)
            lm_prob = torch.nn.functional.softmax(lm_model(masked_lm_input).logits, dim=-1).squeeze(0)
            mask_lm_prob[i] = lm_prob[i]
        comb_pred_ids = torch.argmax(mask_lm_prob * voice_prob, dim=-1)
        decoded_results.append(processor.decode(comb_pred_ids))
    batch["predicted"] = decoded_results
    batch["target"] = batch["sentence"]
    return batch
result = ds.map(map_to_pred, batched=True, batch_size=1, remove_columns=list(ds.features.keys()))
def cer_cal(groundtruth, hypothesis):
    err = 0
    tot = 0
    for p, t in zip(hypothesis, groundtruth):
        err += float(ed.eval(p.lower(), t.lower()))
        tot += len(t)
    return err / tot

print("CER: {:.2f}".format(100 * cer_cal(result["target"], result["predicted"])))
CER: 25.57
Time: 09:49 min
Evaluation with T-TA:
Setup:
!git clone https://github.com/voidful/pytorch-tta.git
!mv ./pytorch-tta/tta ./tta
!wget https://github.com/voidful/pytorch-tta/releases/download/wiki_zh/wiki_zh.pt
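The commands above fetch voidful's T-TA (Transformer-based Text Autoencoder) implementation and the wiki_zh.pt checkpoint used as the rescoring language model below; unlike BERT, T-TA scores every position in a single forward pass.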
import torchaudio
from datasets import load_dataset, load_metric
from transformers import (
    Wav2Vec2ForCTC,
    Wav2Vec2Processor,
)
import torch
import re
import sys
import editdistance as ed  # needed by cer_cal below
from tta.modeling_tta import TTALMModel
from transformers import AutoTokenizer
model_name = "voidful/wav2vec2-large-xlsr-53-tw-gpt"
device = "cuda"
processor_name = "voidful/wav2vec2-large-xlsr-53-tw-gpt"
chars_to_ignore_regex = r"[¥•"#$%&'()*+,-/:;<=>@[\]^_`{|}~⦅⦆「」、 、〃〈〉《》「」『』【】〔〕〖〗〘〙〚〛〜〝〞〟〰〾〿–—‘’‛“”„‟…‧﹏﹑﹔·'℃°•·.﹑︰〈〉─《﹖﹣﹂﹁﹔!?。。"#$%&'()*+,﹐-/:;<=>@[\]^_`{|}~⦅⦆「」、、〃》「」『』【】〔〕〖〗〘〙〚〛〜〝〞〟〰〾〿–—‘’‛“”„‟…‧﹏..!\"#$%&()*+,\-.\:;<=>?@\[\]\\\/^_`{|}~]"
tokenizer = AutoTokenizer.from_pretrained("bert-base-chinese")
lm_model = TTALMModel("bert-base-chinese")
lm_model.load_state_dict(torch.load("./wiki_zh.pt", map_location=torch.device(device)))
lm_model.to(device)
lm_model.eval()
model = Wav2Vec2ForCTC.from_pretrained(model_name).to(device)
processor = Wav2Vec2Processor.from_pretrained(processor_name)
ds = load_dataset("common_voice", 'zh-TW', data_dir="./cv-corpus-6.1-2020-12-11", split="test")
resampler = torchaudio.transforms.Resample(orig_freq=48_000, new_freq=16_000)
def map_to_array(batch):
    speech, _ = torchaudio.load(batch["path"])
    batch["speech"] = resampler.forward(speech.squeeze(0)).numpy()
    batch["sampling_rate"] = resampler.new_freq
    batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower().replace("’", "'")
    return batch
ds = ds.map(map_to_array)
def map_to_pred(batch):
    features = processor(batch["speech"], sampling_rate=batch["sampling_rate"][0], padding=True, return_tensors="pt")
    input_values = features.input_values.to(device)
    attention_mask = features.attention_mask.to(device)
    with torch.no_grad():
        logits = model(input_values, attention_mask=attention_mask).logits
    decoded_results = []
    for logit in logits:
        pred_ids = torch.argmax(logit, dim=-1)
        mask = ~pred_ids.eq(tokenizer.pad_token_id).unsqueeze(-1).expand(logit.size())
        vocab_size = logit.size()[-1]
        voice_prob = torch.nn.functional.softmax((torch.masked_select(logit, mask).view(-1, vocab_size)), dim=-1)
        lm_input = torch.masked_select(pred_ids, ~pred_ids.eq(tokenizer.pad_token_id)).unsqueeze(0)
        # T-TA scores every position in one pass (no per-position masking loop).
        lm_prob = torch.nn.functional.softmax(lm_model.forward(lm_input)[0], dim=-1).squeeze(0)
        comb_pred_ids = torch.argmax(lm_prob * voice_prob, dim=-1)
        decoded_results.append(processor.decode(comb_pred_ids))
    batch["predicted"] = decoded_results
    batch["target"] = batch["sentence"]
    return batch
result = ds.map(map_to_pred, batched=True, batch_size=16, remove_columns=list(ds.features.keys()))
def cer_cal(groundtruth, hypothesis):
    err = 0
    tot = 0
    for p, t in zip(hypothesis, groundtruth):
        err += float(ed.eval(p.lower(), t.lower()))
        tot += len(t)
    return err / tot

print("CER: {:.2f}".format(100 * cer_cal(result["target"], result["predicted"])))
CER: 25.77
Time: 06:01 min
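For reference, the decoding strategies evaluated above compare as follows:

Decoding strategy     CER (%)   Time
Greedy CTC (no LM)    28.70     04:08 min
GPT fusion            25.70     06:04 min
GPT + beam search     18.36     not reported
BERT (masked LM)      25.57     09:49 min
T-TA                  25.77     06:01 min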