library_name: transformers
tags:
- text-generation-inference
- transformers
- unsloth
- trl
- llama
language:
- en
base_model: meta-llama/Meta-Llama-3-8B-Instruct
Model Description
This model was fine-tuned from meta-llama/Meta-Llama-3-8B-Instruct for function calling and JSON mode.
Usage
JSON Mode
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
model_id = "hiieu/Meta-Llama-3-8B-Instruct-function-calling-json-mode"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# Load the fine-tuned model in bfloat16 across the available devices
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
messages = [
    {"role": "system", "content": "You are a helpful assistant, answer in JSON with key \"message\""},
    {"role": "user", "content": "Who are you?"},
]
input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt"
).to(model.device)
# Stop on either the EOS token or Llama 3's end-of-turn token
terminators = [
    tokenizer.eos_token_id,
    tokenizer.convert_tokens_to_ids("<|eot_id|>")
]
outputs = model.generate(
    input_ids,
    max_new_tokens=256,
    eos_token_id=terminators,
    do_sample=True,
    temperature=0.6,
    top_p=0.9,
)
response = outputs[0][input_ids.shape[-1]:]
print(tokenizer.decode(response, skip_special_tokens=True))
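Because the system prompt asks for a JSON object with a "message" key, the decoded response can usually be parsed directly. A minimal sketch, assuming the model actually returns well-formed JSON:

import json

decoded = tokenizer.decode(response, skip_special_tokens=True)
try:
    reply = json.loads(decoded)      # expected shape: {"message": "..."}
    print(reply["message"])
except json.JSONDecodeError:
    print(decoded)                   # fall back to the raw text if parsing fails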
Function Calling
Function calling requires two inference passes; an example follows.
Step 1:
functions_metadata = [
    {
        "type": "function",
        "function": {
            "name": "get_temperature",
            "description": "get the temperature of a city",
            "parameters": {
                "type": "object",
                "properties": {
                    "city": {
                        "type": "string",
                        "description": "name of the city"
                    }
                },
                "required": [
                    "city"
                ]
            }
        }
    }
]
messages = [
    { "role": "system", "content": f"""You are a helpful assistant with access to the following functions:\n {str(functions_metadata)}\n\nTo use these functions, respond with:\n<functioncall> {{ "name": "function_name", "arguments": {{ "arg_1": "value_1", "arg_2": "value_2", ... }} }} </functioncall>\n\nEdge cases you must handle:\n - If there are no functions that match the user request, you will respond politely that you cannot help."""},
    { "role": "user", "content": "What is the temperature in Tokyo right now?"}
]
input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt"
).to(model.device)
terminators = [
    tokenizer.eos_token_id,
    tokenizer.convert_tokens_to_ids("<|eot_id|>")
]
outputs = model.generate(
    input_ids,
    max_new_tokens=256,
    eos_token_id=terminators,
    do_sample=True,
    temperature=0.6,
    top_p=0.9,
)
response = outputs[0][input_ids.shape[-1]:]
print(tokenizer.decode(response, skip_special_tokens=True))
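Between the two steps, your own code has to extract the <functioncall> payload from the Step 1 output, execute the matching function, and wrap the result in a <function_response> message. A minimal sketch, assuming the payload parses as JSON and using a hypothetical local get_temperature implementation:

import json
import re

decoded = tokenizer.decode(response, skip_special_tokens=True)

def get_temperature(city):
    # hypothetical stand-in for a real weather lookup
    return {"temperature": "30 C"}

# Pull the JSON payload out of the <functioncall> ... </functioncall> wrapper
match = re.search(r"<functioncall>\s*(\{.*\})\s*</functioncall>", decoded, re.DOTALL)
if match:
    call = json.loads(match.group(1))
    args = call["arguments"]
    if isinstance(args, str):        # arguments may come back as a JSON-encoded string
        args = json.loads(args)
    result = get_temperature(**args)
    function_response = f"<function_response> {json.dumps(result)} </function_response>"
else:
    function_response = None         # no function call: the model answered directly

The function_response string is then appended as a user message, which is what Step 2 below shows with a hard-coded value.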
Step 2:
messages = [
    { "role": "system", "content": f"""You are a helpful assistant with access to the following functions:\n {str(functions_metadata)}\n\nTo use these functions, respond with:\n<functioncall> {{ "name": "function_name", "arguments": {{ "arg_1": "value_1", "arg_2": "value_2", ... }} }} </functioncall>\n\nEdge cases you must handle:\n - If there are no functions that match the user request, you will respond politely that you cannot help."""},
    { "role": "user", "content": "What is the temperature in Tokyo right now?"},
    { "role": "assistant", "content": """<functioncall> {"name": "get_temperature", "arguments": '{"city": "Tokyo"}'} </functioncall>"""},
    { "role": "user", "content": """<function_response> {"temperature":30 C} </function_response>"""}
]
input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt"
).to(model.device)
terminators = [
    tokenizer.eos_token_id,
    tokenizer.convert_tokens_to_ids("<|eot_id|>")
]
outputs = model.generate(
    input_ids,
    max_new_tokens=256,
    eos_token_id=terminators,
    do_sample=True,
    temperature=0.6,
    top_p=0.9,
)
response = outputs[0][input_ids.shape[-1]:]
print(tokenizer.decode(response, skip_special_tokens=True))
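The template/generate/decode boilerplate is identical in all three calls above, so it can be wrapped in a small helper; chat_generate below is just an illustrative name:

def chat_generate(messages, max_new_tokens=256):
    # Same settings as the examples above: sampled decoding at temperature 0.6
    input_ids = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)
    outputs = model.generate(
        input_ids,
        max_new_tokens=max_new_tokens,
        eos_token_id=terminators,
        do_sample=True,
        temperature=0.6,
        top_p=0.9,
    )
    return tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True)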
Uploaded Model
This model was trained 2x faster with Unsloth and Huggingface's TRL library.
