license: cc-by-nc-4.0
- Paper: RLHF Workflow: From Reward Modeling to Online RLHF (published in TMLR 2024)
- Authors: Hanze Dong*, Wei Xiong*, Bo Pang*, Haoxiang Wang*, Han Zhao, Yingbo Zhou, Nan Jiang, Doyen Sahoo, Caiming Xiong, Tong Zhang
- Code: https://github.com/RLHFlow/RLHF-Reward-Modeling/
This reward function can be used in RLHF pipelines, including PPO, iterative SFT, and iterative DPO.
The license is inherited from PKU-Alignment/PKU-SafeRLHF-30K.
Training
The base model is meta-llama/Meta-Llama-3-8B-Instruct.
The training script is available at https://github.com/WeiXiongUST/RLHF-Reward-Modeling.
Usage
import torch
from transformers import AutoTokenizer, pipeline

rm_tokenizer = AutoTokenizer.from_pretrained("sfairXC/FsfairX-LLaMA3-RM-v0.1")
device = 0  # GPU index; use "cpu" if no GPU is available

# The reward model is a sequence classifier, so it is served through the
# text-classification ("sentiment-analysis") pipeline.
rm_pipe = pipeline(
    "sentiment-analysis",
    model="sfairXC/FsfairX-LLaMA3-RM-v0.1",
    device=device,
    tokenizer=rm_tokenizer,
    model_kwargs={"torch_dtype": torch.bfloat16},
)

pipe_kwargs = {
    "return_all_scores": True,
    "function_to_apply": "none",  # return the raw logit as the reward score
    "batch_size": 1,
}

chat = [
    {"role": "user", "content": "Hello, how are you?"},
    {"role": "assistant", "content": "I'm doing great. How can I help you today?"},
    {"role": "user", "content": "I'd like to show off how chat templating works!"},
]

# Apply the chat template and strip the BOS token, which the pipeline adds again.
test_texts = [
    rm_tokenizer.apply_chat_template(
        chat, tokenize=False, add_generation_prompt=False
    ).replace(rm_tokenizer.bos_token, "")
]
pipe_outputs = rm_pipe(test_texts, **pipe_kwargs)
rewards = [output[0]["score"] for output in pipe_outputs]
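The raw scores returned above can be used to rank several candidate responses to the same prompt, e.g. for rejection sampling (iterative SFT) or for building preference pairs for iterative DPO. The snippet below is a minimal sketch of that idea under stated assumptions: it reuses rm_tokenizer, rm_pipe, and pipe_kwargs from the code above, and the prompt, candidates, and score_candidates helper are hypothetical examples, not part of this repository.

    # Sketch (hypothetical example): rank candidate responses to one prompt
    # with the reward model pipeline defined above.
    prompt = "What is the capital of France?"
    candidates = [
        "The capital of France is Paris.",
        "I am not sure, maybe Lyon?",
    ]

    def score_candidates(prompt, candidates):
        texts = []
        for answer in candidates:
            chat = [
                {"role": "user", "content": prompt},
                {"role": "assistant", "content": answer},
            ]
            text = rm_tokenizer.apply_chat_template(
                chat, tokenize=False, add_generation_prompt=False
            ).replace(rm_tokenizer.bos_token, "")
            texts.append(text)
        outputs = rm_pipe(texts, **pipe_kwargs)
        return [output[0]["score"] for output in outputs]

    scores = score_candidates(prompt, candidates)
    # The highest-scoring response can be kept for rejection-sampling fine-tuning;
    # the (best, worst) pair can serve as a preference pair for iterative DPO.
    best = candidates[scores.index(max(scores))]
    worst = candidates[scores.index(min(scores))]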
Performance
This reward model is the state-of-the-art open-source RM on the Reward-Bench leaderboard as of April 20, 2024.
| Metric    | Score |
|-----------|-------|
| Chat      | 99.44 |
| Chat Hard | 65.13 |
| Safety    | 88.76 |
| Reasoning | 88.3  |
References
This repository is part of our research on iterative rejection-sampling fine-tuning and iterative DPO. If you use the content of this repository in your work, please consider citing:
@article{dong2023raft,
  title={Raft: Reward ranked finetuning for generative foundation model alignment},
  author={Dong, Hanze and Xiong, Wei and Goyal, Deepanshu and Pan, Rui and Diao, Shizhe and Zhang, Jipeng and Shum, Kashun and Zhang, Tong},
  journal={arXiv preprint arXiv:2304.06767},
  year={2023}
}

@misc{xiong2024iterative,
  title={Iterative Preference Learning from Human Feedback: Bridging Theory and Practice for RLHF under KL-Constraint},
  author={Wei Xiong and Hanze Dong and Chenlu Ye and Ziqi Wang and Han Zhong and Heng Ji and Nan Jiang and Tong Zhang},
  year={2024},
  eprint={2312.11456},
  archivePrefix={arXiv},
  primaryClass={cs.LG}
}