---
base_model:
- zerofata/L3.3-GeneticLemonade-Unleashed-70B
library_name: transformers
license: llama3
---
# GeneticLemonade Unleashed v3

Experiment with these settings; they aren't meant to be "optimal", just a stable baseline. Notably, the model handles higher temperature values than are usually recommended for other L3 models. A sketch of passing these values to a local backend follows the list.

## Recommended Samplers

> Temp: 0.9 - 1.2
>
> Min P: 0.03 - 0.04
>
> Top P: 0.9 - 1.0
>
> DRY: 0.8, 1.75, 4
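To make the numbers concrete, here is a minimal, hypothetical sketch of sending these sampler values to a locally hosted OpenAI-compatible endpoint. The URL, model id, and messages are placeholders; `min_p` and DRY are backend-specific extensions (llama.cpp-based servers and text-generation-webui expose them, the official OpenAI API does not), so check what your server accepts.

```python
import requests

# Hypothetical local OpenAI-compatible server (e.g. a llama.cpp / koboldcpp / TabbyAPI instance).
API_URL = "http://127.0.0.1:5001/v1/chat/completions"

payload = {
    "model": "GeneticLemonade-Unleashed-v3",  # placeholder model id
    "messages": [
        {"role": "system", "content": "You are Aria, a sarcastic ship AI."},  # placeholder persona
        {"role": "user", "content": "Aria, status report."},
    ],
    # Baseline samplers recommended on this card:
    "temperature": 1.0,   # 0.9 - 1.2
    "min_p": 0.03,        # 0.03 - 0.04 (backend-specific)
    "top_p": 0.95,        # 0.9 - 1.0
    # DRY (0.8, 1.75, 4) is usually configured in the backend or frontend UI
    # rather than passed through this endpoint.
    "max_tokens": 300,
}

resp = requests.post(API_URL, json=payload, timeout=120)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```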
## Instruct

Use the Llama-3-Instruct-Names template, but you will need to uncheck "System same as user".
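For reference, that preset is built on the stock Llama 3 instruct format. The sketch below just prints the base format using the tokenizer's own chat template (the persona strings are placeholders; the Names variant additionally inserts character names per message, which is handled by the frontend preset, not shown here).

```python
from transformers import AutoTokenizer

# Tokenizer carries the Llama 3 chat template the preset is based on.
tok = AutoTokenizer.from_pretrained("zerofata/L3.3-GeneticLemonade-Unleashed-70B")

messages = [
    {"role": "system", "content": "You are Aria, a sarcastic ship AI."},  # placeholder
    {"role": "user", "content": "Aria, status report."},
]

# Render the raw prompt string the backend will see.
prompt = tok.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(prompt)  # <|start_header_id|>system<|end_header_id|> ... <|eot_id|> blocks
```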
The model was first SFT trained on a small synthetic dataset of 2.9 million tokens (roughly 750 conversations), primarily RP data with a small amount of miscellaneous instruct / assistant data and creative writing mixed in.
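The SFT config further down expects this data as chat-format JSONL, one conversation per line with a `messages` list of `role`/`content` pairs. A minimal sanity-check sketch (file name and tokenizer are taken from that config; the record shape is an assumption based on the field mappings) for counting tokens looks like this:

```python
import json
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("zerofata/L3.3-GeneticLemonade-Unleashed-70B")

total_tokens, total_convos = 0, 0
with open("dataset.jsonl", encoding="utf-8") as f:  # same file the SFT config points at
    for line in f:
        convo = json.loads(line)
        # Each record: {"messages": [{"role": "system"|"user"|"assistant", "content": "..."}]}
        text = tok.apply_chat_template(convo["messages"], tokenize=False)
        total_tokens += len(tok(text).input_ids)
        total_convos += 1

print(f"{total_convos} conversations, {total_tokens / 1e6:.1f}M tokens")  # card reports ~750 / ~2.9M
```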
It was then DPO trained on roughly 1,100 high-quality samples from the SFT dataset that showed strong instruction following; the rejected samples were generated by another Llama 3.3 finetune known for weak instruction following.
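The DPO config below maps each record's shared context to `conversation` and the competing final assistant turns to `chosen` and `rejected`. A hypothetical sketch of assembling such records (the input file name and the helper that queries the weaker "rejected" model are placeholders, and the exact record shape axolotl expects should be checked against its docs):

```python
import json

def generate_rejected(context):
    """Placeholder: query the weaker Llama 3.3 finetune for a reply to the same context."""
    return "..."  # stubbed out

# Assumed input: curated SFT conversations ending in a strong assistant turn.
with open("curated_sft_samples.jsonl", encoding="utf-8") as f:  # hypothetical file name
    samples = [json.loads(line) for line in f]

with open("dpo_cleaned-v3_deduplicated.jsonl", "w", encoding="utf-8") as out:
    for sample in samples:
        context = sample["messages"][:-1]   # shared prompt / history
        chosen = sample["messages"][-1]      # the high-quality assistant reply
        rejected = {"role": "assistant", "content": generate_rejected(context)}
        out.write(json.dumps({
            "conversation": context,         # field_messages in the DPO config
            "chosen": chosen,                # field_chosen
            "rejected": rejected,            # field_rejected
        }, ensure_ascii=False) + "\n")
```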
## Axolotl configs

Neither config is optimized for cost / performance efficiency, YMMV.

### SFT 1*H200
```yaml
base_model: zerofata/L3.3-GeneticLemonade-Unleashed-70B
model_type: AutoModelForCausalLM
tokenizer_type: AutoTokenizer
special_tokens:
  pad_token: "<|finetune_right_pad_id|>"
chat_template: llama3

datasets:
  - path: ./dataset.jsonl
    type: chat_template
    split: train
    chat_template_strategy: tokenizer
    field_messages: messages
    message_property_mappings:
      role: role
      content: content
    roles:
      user: ["user"]
      assistant: ["assistant"]
      system: ["system"]

test_datasets:
  - path: ./validate_dataset.jsonl
    type: chat_template
    split: train
    chat_template_strategy: tokenizer
    field_messages: messages
    message_property_mappings:
      role: role
      content: content
    roles:
      user: ["user"]
      assistant: ["assistant"]
      system: ["system"]

dataset_prepared_path:
train_on_inputs: false

adapter: qlora
load_in_4bit: true
lora_r: 64
lora_alpha: 128
lora_dropout: 0.1
lora_target_linear: true

num_epochs: 2
micro_batch_size: 4
gradient_accumulation_steps: 2
learning_rate: 1.5e-5
optimizer: paged_adamw_8bit
lr_scheduler: rex
warmup_ratio: 0.05
weight_decay: 0.01
max_grad_norm: 1.0

sequence_len: 8192
sample_packing: true
eval_sample_packing: false
pad_to_sequence_len: true

bf16: auto
flash_attention: true
gradient_checkpointing: true

evaluation_strategy: steps
eval_steps: 5
save_strategy: steps
save_steps: 5
save_total_limit: 5
load_best_model_at_end: true
metric_for_best_model: eval_loss
greater_is_better: false
early_stopping_patience: 5

output_dir: ./output_model
logging_steps: 2
save_safetensors: true

wandb_project: project_name
```
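The DPO stage below starts from `ApocalypseParty/unleashed-fulldata30` rather than from the QLoRA adapter itself, i.e. the SFT adapter is presumably merged back into the base weights first. A minimal sketch of that merge with peft (the adapter and output paths are placeholders taken from / implied by the configs):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

BASE = "zerofata/L3.3-GeneticLemonade-Unleashed-70B"
ADAPTER = "./output_model"          # output_dir of the SFT run above
MERGED = "./unleashed-fulldata30"   # local path for the merged checkpoint (placeholder)

# Load the base model in bf16 and apply the trained LoRA adapter on top.
base = AutoModelForCausalLM.from_pretrained(BASE, torch_dtype=torch.bfloat16, device_map="auto")
model = PeftModel.from_pretrained(base, ADAPTER)

# Fold the adapter into the base weights and save a standalone checkpoint.
merged = model.merge_and_unload()
merged.save_pretrained(MERGED, safe_serialization=True)
AutoTokenizer.from_pretrained(BASE).save_pretrained(MERGED)
```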
### DPO 2*H200
```yaml
base_model: ApocalypseParty/unleashed-fulldata30
model_type: AutoModelForCausalLM
tokenizer_type: AutoTokenizer
special_tokens: {}
chat_template: tokenizer_default

rl: dpo
rl_beta: 0.07

datasets:
  - path: ./dpo_cleaned-v3_deduplicated.jsonl
    type: chat_template.default
    field_messages: conversation
    field_chosen: chosen
    field_rejected: rejected
    message_property_mappings:
      role: role
      content: content
    roles:
      system: ["system"]
      user: ["user"]
      assistant: ["assistant"]

dataset_prepared_path:
train_on_inputs: false

adapter: qlora
load_in_4bit: true
lora_r: 32
lora_alpha: 64
lora_dropout: 0.05
lora_target_linear: true

num_epochs: 1
micro_batch_size: 4
gradient_accumulation_steps: 2
learning_rate: 2e-6
optimizer: adamw_8bit
lr_scheduler: cosine
warmup_steps: 5
weight_decay: 0.01
max_grad_norm: 1.0

sequence_len: 4096
pad_to_sequence_len: true

bf16: auto
tf32: false
flash_attention: true
gradient_checkpointing: offload
deepspeed: deepspeed_configs/zero1.json

save_steps: 10
save_total_limit: 10
load_best_model_at_end: true
metric_for_best_model: eval_loss
greater_is_better: false

output_dir: ./dpo_model
logging_steps: 2
save_safetensors: true

wandb_project: project_name
```