---
pipeline_tag: text-to-speech
library_name: cosyvoice
---
# CosyVoice

👉🏻 CosyVoice2 Demo 👈🏻
[CosyVoice2 Paper][CosyVoice2 Studio]

👉🏻 CosyVoice Demo 👈🏻
[CosyVoice Paper][CosyVoice Studio][CosyVoice Code]

For SenseVoice, please visit the SenseVoice repo and SenseVoice space.
## Roadmap

- [x] 2024/12
  - [x] Release the CosyVoice2-0.5B model
  - [x] CosyVoice2-0.5B streaming inference with no quality degradation
- [x] 2024/09
  - [x] 25 Hz CosyVoice base model
  - [x] 25 Hz CosyVoice voice-conversion model
- [x] 2024/08
  - [x] Repetition-aware sampling (RAS) inference for improved LLM stability
  - [x] Streaming inference mode support, including kv cache and sdpa for RTF optimization
- [x] 2024/07
  - [x] Flow-matching training support
  - [x] WeTextProcessing support when ttsfrd is unavailable
  - [x] FastAPI server and client
- [ ] TBD
  - [ ] CosyVoice2-0.5B bistream inference support
  - [ ] CosyVoice2-0.5B training and finetuning recipe
  - [ ] CosyVoice-500M trained with more multilingual data
  - [ ] More...
## Install

### Clone and install

- Clone the repo

``` sh
git clone --recursive https://github.com/FunAudioLLM/CosyVoice.git
# If cloning the submodules fails due to network issues, run the following command until it succeeds
cd CosyVoice
git submodule update --init --recursive
```
- Install Conda: please see https://docs.conda.io/en/latest/miniconda.html
- Create a Conda environment:

``` sh
conda create -n cosyvoice python=3.10
conda activate cosyvoice
# pynini is a dependency of WeTextProcessing; install it with conda to ensure cross-platform compatibility.
conda install -y -c conda-forge pynini==2.1.5
pip install -r requirements.txt -i https://mirrors.aliyun.com/pypi/simple/ --trusted-host=mirrors.aliyun.com

# If you encounter sox compatibility issues
# ubuntu
sudo apt-get install sox libsox-dev
# centos
sudo yum install sox sox-devel
```
## Model download

We strongly recommend downloading our pretrained CosyVoice2-0.5B, CosyVoice-300M, CosyVoice-300M-SFT, and CosyVoice-300M-Instruct models, along with the CosyVoice-ttsfrd resource.

If you are an expert in this field and are only interested in training your own CosyVoice model from scratch, you can skip this step.
``` python
# SDK model download
from modelscope import snapshot_download
snapshot_download('iic/CosyVoice2-0.5B', local_dir='pretrained_models/CosyVoice2-0.5B')
snapshot_download('iic/CosyVoice-300M', local_dir='pretrained_models/CosyVoice-300M')
snapshot_download('iic/CosyVoice-300M-25Hz', local_dir='pretrained_models/CosyVoice-300M-25Hz')
snapshot_download('iic/CosyVoice-300M-SFT', local_dir='pretrained_models/CosyVoice-300M-SFT')
snapshot_download('iic/CosyVoice-300M-Instruct', local_dir='pretrained_models/CosyVoice-300M-Instruct')
snapshot_download('iic/CosyVoice-ttsfrd', local_dir='pretrained_models/CosyVoice-ttsfrd')
```
``` sh
# git model download; please make sure git lfs is installed
mkdir -p pretrained_models
git clone https://www.modelscope.cn/iic/CosyVoice2-0.5B.git pretrained_models/CosyVoice2-0.5B
git clone https://www.modelscope.cn/iic/CosyVoice-300M.git pretrained_models/CosyVoice-300M
git clone https://www.modelscope.cn/iic/CosyVoice-300M-25Hz.git pretrained_models/CosyVoice-300M-25Hz
git clone https://www.modelscope.cn/iic/CosyVoice-300M-SFT.git pretrained_models/CosyVoice-300M-SFT
git clone https://www.modelscope.cn/iic/CosyVoice-300M-Instruct.git pretrained_models/CosyVoice-300M-Instruct
git clone https://www.modelscope.cn/iic/CosyVoice-ttsfrd.git pretrained_models/CosyVoice-ttsfrd
```
Optionally, you can unzip the ttsfrd resource and install the ttsfrd package for better text normalization performance.

Note that this step is not required. If the ttsfrd package is not installed, WeTextProcessing is used by default.

``` sh
cd pretrained_models/CosyVoice-ttsfrd/
unzip resource.zip -d .
pip install ttsfrd-0.3.6-cp38-cp38-linux_x86_64.whl
```
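As a quick, informal sanity check (our own snippet, not part of the repo), you can verify which text-normalization backend is importable in the current environment; CosyVoice falls back to WeTextProcessing when ttsfrd is absent:

``` python
# Report which text-normalization backend this environment will use.
# ttsfrd is optional and only present after installing the wheel above.
try:
    import ttsfrd  # noqa: F401
    backend = "ttsfrd"
except ImportError:
    backend = "WeTextProcessing"

print(f"text normalization backend: {backend}")
```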
## Basic usage

For zero-shot/cross-lingual inference, please use the CosyVoice2-0.5B or CosyVoice-300M model.
For sft inference, please use the CosyVoice-300M-SFT model.
For instruct inference, please use the CosyVoice-300M-Instruct model.
We strongly recommend the CosyVoice2-0.5B model for better streaming performance.

First, add third_party/Matcha-TTS to your PYTHONPATH.
``` sh
export PYTHONPATH=third_party/Matcha-TTS
```
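Equivalently (an informal alternative, not from the repo docs), you can extend the module search path from inside Python before importing cosyvoice:

``` python
import sys

# Make the vendored Matcha-TTS package importable, mirroring the
# PYTHONPATH export above (adjust the path if you run from elsewhere).
sys.path.append('third_party/Matcha-TTS')

print('third_party/Matcha-TTS' in sys.path)  # True
```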
``` python
from cosyvoice.cli.cosyvoice import CosyVoice, CosyVoice2
from cosyvoice.utils.file_utils import load_wav
import torchaudio

## cosyvoice2 usage
cosyvoice2 = CosyVoice2('pretrained_models/CosyVoice2-0.5B', load_jit=False, load_onnx=False, load_trt=False)
# zero_shot usage
prompt_speech_16k = load_wav('zero_shot_prompt.wav', 16000)
for i, j in enumerate(cosyvoice2.inference_zero_shot('收到好友从远方寄来的生日礼物,那份意外的惊喜与深深的祝福让我心中充满了甜蜜的快乐,笑容如花儿般绽放。', '希望你以后能够做的比我还好呦。', prompt_speech_16k, stream=True)):
    torchaudio.save('zero_shot_{}.wav'.format(i), j['tts_speech'], cosyvoice2.sample_rate)

## cosyvoice usage
cosyvoice = CosyVoice('pretrained_models/CosyVoice-300M-SFT', load_jit=True, load_onnx=False, fp16=True)
# sft usage
print(cosyvoice.list_avaliable_spks())  # note: method name spelling follows the library API
# change stream=True for chunk-by-chunk streaming inference
for i, j in enumerate(cosyvoice.inference_sft('你好,我是通义生成式语音大模型,请问有什么可以帮您的吗?', '中文女', stream=False)):
    torchaudio.save('sft_{}.wav'.format(i), j['tts_speech'], cosyvoice.sample_rate)

cosyvoice = CosyVoice('pretrained_models/CosyVoice-300M-25Hz')  # or change to pretrained_models/CosyVoice-300M for 50Hz inference
# zero_shot usage; <|zh|><|en|><|jp|><|yue|><|ko|> denote Chinese/English/Japanese/Cantonese/Korean respectively
prompt_speech_16k = load_wav('zero_shot_prompt.wav', 16000)
for i, j in enumerate(cosyvoice.inference_zero_shot('收到好友从远方寄来的生日礼物,那份意外的惊喜与深深的祝福让我心中充满了甜蜜的快乐,笑容如花儿般绽放。', '希望你以后能够做的比我还好呦。', prompt_speech_16k, stream=False)):
    torchaudio.save('zero_shot_{}.wav'.format(i), j['tts_speech'], cosyvoice.sample_rate)

# cross-lingual usage
prompt_speech_16k = load_wav('cross_lingual_prompt.wav', 16000)
for i, j in enumerate(cosyvoice.inference_cross_lingual('<|en|>And then later on, fully acquiring that company. So keeping management in line, interest in line with the asset that\'s coming into the family is a reason why sometimes we don\'t buy the whole thing.', prompt_speech_16k, stream=False)):
    torchaudio.save('cross_lingual_{}.wav'.format(i), j['tts_speech'], cosyvoice.sample_rate)

# voice conversion usage
prompt_speech_16k = load_wav('zero_shot_prompt.wav', 16000)
source_speech_16k = load_wav('cross_lingual_prompt.wav', 16000)
for i, j in enumerate(cosyvoice.inference_vc(source_speech_16k, prompt_speech_16k, stream=False)):
    torchaudio.save('vc_{}.wav'.format(i), j['tts_speech'], cosyvoice.sample_rate)

cosyvoice = CosyVoice('pretrained_models/CosyVoice-300M-Instruct')
# instruct usage, supporting <laughter></laughter><strong></strong>[laughter][breath]
for i, j in enumerate(cosyvoice.inference_instruct('在面对挑战时,他展现了非凡的<strong>勇气</strong>与<strong>智慧</strong>。', '中文男', 'Theo \'Crimson\', is a fiery, passionate rebel leader. Fights with fervor for justice, but struggles with impulsiveness.', stream=False)):
    torchaudio.save('instruct_{}.wav'.format(i), j['tts_speech'], cosyvoice.sample_rate)
```
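With `stream=True`, each yielded `j['tts_speech']` is one audio chunk rather than the full utterance. A minimal sketch of collecting a streamed utterance into a single tensor (our own helper, not a repo API, assuming each chunk is a `(1, num_samples)` torch tensor):

``` python
import torch

def collect_stream(chunks):
    """Concatenate streamed TTS chunks (each shaped (1, n)) along the time axis."""
    return torch.cat([c['tts_speech'] for c in chunks], dim=1)

# Usage with fake chunks standing in for model output:
fake = [{'tts_speech': torch.zeros(1, 4000)}, {'tts_speech': torch.zeros(1, 2400)}]
speech = collect_stream(fake)
print(speech.shape)  # torch.Size([1, 6400])
```

The concatenated tensor can then be written once with `torchaudio.save` instead of saving one file per chunk.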
## Start web demo

You can use our web demo page to get familiar with CosyVoice quickly. The web demo supports sft/zero-shot/cross-lingual/instruct inference.

Please see the demo website for details.

``` sh
# change to iic/CosyVoice-300M-SFT for sft inference, or iic/CosyVoice-300M-Instruct for instruct inference
python3 webui.py --port 50000 --model_dir pretrained_models/CosyVoice-300M
```
## Advanced usage

For advanced users, we provide training and inference scripts in examples/libritts/cosyvoice/run.sh. You can follow this recipe to get familiar with CosyVoice.
## Build for deployment

Optionally, if you want to use grpc for service deployment, you can run the following steps. Otherwise, you can simply skip this section.

``` sh
cd runtime/python
docker build -t cosyvoice:v1.0 .
# change iic/CosyVoice-300M to iic/CosyVoice-300M-Instruct to use instruct inference
# grpc usage
docker run -d --runtime=nvidia -p 50000:50000 cosyvoice:v1.0 /bin/bash -c "cd /opt/CosyVoice/CosyVoice/runtime/python/grpc && python3 server.py --port 50000 --max_conc 4 --model_dir iic/CosyVoice-300M && sleep infinity"
cd grpc && python3 client.py --port 50000 --mode <sft|zero_shot|cross_lingual|instruct>
# fastapi usage
docker run -d --runtime=nvidia -p 50000:50000 cosyvoice:v1.0 /bin/bash -c "cd /opt/CosyVoice/CosyVoice/runtime/python/fastapi && python3 server.py --port 50000 --model_dir iic/CosyVoice-300M && sleep infinity"
cd fastapi && python3 client.py --port 50000 --mode <sft|zero_shot|cross_lingual|instruct>
```




