# qwen35-27b-stage3-instruct-v2

This model was trained with supervised fine-tuning (SFT).
W&B run: https://wandb.ai/cooawoo-personal/huggingface/runs/iqxi3qen
## Training procedure

### Hyperparameters
| Parameter | Value |
|---|---|
| Learning rate | 1e-05 |
| LR scheduler | constant_with_warmup |
| Per-device batch size | 1 |
| Gradient accumulation | 8 |
| Effective batch size | 8 |
| Epochs | 1 |
| Max sequence length | 6144 |
| Optimizer | paged_adamw_8bit |
| Weight decay | 0.01 |
| Warmup ratio | 0.03 |
| Max gradient norm | 1.0 |
| Precision | bf16 |
| Loss type | nll |
| Chunked cross-entropy | yes |
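The chunked cross-entropy row refers to computing the NLL loss over positions in chunks, so the full logits tensor never has to be materialized at once; peak memory is bounded by the chunk size while the result is identical. A minimal pure-Python sketch of the idea (toy logits and a hypothetical chunk size, not the trainer's actual implementation):

```python
import math

def nll_full(logits, targets):
    """Reference: mean negative log-likelihood over all positions at once."""
    total = 0.0
    for row, t in zip(logits, targets):
        m = max(row)  # subtract the max for a numerically stable softmax
        log_z = m + math.log(sum(math.exp(x - m) for x in row))
        total += log_z - row[t]
    return total / len(targets)

def nll_chunked(logits, targets, chunk_size=2):
    """Same loss, but positions are processed chunk_size at a time,
    bounding how many rows of logits are live simultaneously."""
    total = 0.0
    for i in range(0, len(logits), chunk_size):
        for row, t in zip(logits[i:i + chunk_size], targets[i:i + chunk_size]):
            m = max(row)
            log_z = m + math.log(sum(math.exp(x - m) for x in row))
            total += log_z - row[t]
    return total / len(targets)

logits = [[2.0, 0.5, -1.0], [0.1, 0.2, 0.3], [1.0, 1.0, 1.0], [-0.5, 2.5, 0.0]]
targets = [0, 2, 1, 1]
assert abs(nll_full(logits, targets) - nll_chunked(logits, targets)) < 1e-12
```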
### LoRA configuration
| Parameter | Value |
|---|---|
| Rank (r) | 64 |
| Alpha | 64 |
| Target modules | down_proj, gate_proj, in_proj_a, in_proj_b, in_proj_qkv, in_proj_z, k_proj, o_proj, out_proj, q_proj, up_proj, v_proj |
| rsLoRA | yes |
| Quantization | 4-bit (nf4) |
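The rsLoRA row changes how the low-rank update is scaled: standard LoRA multiplies the adapter output by alpha / r, while rank-stabilized LoRA uses alpha / sqrt(r), which keeps the update magnitude from shrinking as the rank grows. With the r = 64 and alpha = 64 used here:

```python
import math

r, alpha = 64, 64

standard_scale = alpha / r           # classic LoRA scaling: 64 / 64 = 1.0
rslora_scale = alpha / math.sqrt(r)  # rank-stabilized scaling: 64 / 8 = 8.0
```

So at this rank, rsLoRA applies an 8x larger effective scale than classic LoRA would for the same alpha.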
### Dataset statistics
| Dataset | Samples | Total tokens | Trainable tokens |
|---|---|---|---|
| json (/home/aibox/data/stage3_marvin_seed_only_lastonly.jsonl) | 5,228 | 23,663,028 | 23,663,028 |
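From the numbers above one can back out the approximate optimizer-step count for the single epoch (a sketch only; the trainer's exact rounding and warmup accounting may differ):

```python
import math

samples = 5_228
per_device_bs, grad_accum = 1, 8

effective_bs = per_device_bs * grad_accum            # 8, matching the table
steps_per_epoch = math.ceil(samples / effective_bs)  # ~654 optimizer steps
warmup_steps = int(0.03 * steps_per_epoch)           # warmup_ratio 0.03 -> ~19 steps
avg_tokens = 23_663_028 / samples                    # ~4,526 tokens per sample
```

Since every token is trainable (total tokens equals trainable tokens), no prompt masking reduced the loss targets in this dataset.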
### Training config

```yaml
model_name_or_path: merged
output_dir: runs/qwen35-27b-stage3-instruct-v2
attn_implementation: flash_attention_2
bf16: true
gradient_checkpointing: true
gradient_checkpointing_kwargs:
  use_reentrant: false
use_cce: true
model_parallel: true
max_memory:
  0: 18GiB
  1: 18GiB
chunked_mlp: true
chunked_mlp_chunks: 8
max_length: 6144
per_device_train_batch_size: 1
gradient_accumulation_steps: 8
use_peft: true
load_in_4bit: true
bnb_4bit_quant_type: nf4
lora_r: 64
lora_alpha: 64
lora_dropout: 0.0
use_rslora: true
lora_target_modules:
  - in_proj_qkv
  - in_proj_z
  - in_proj_a
  - in_proj_b
  - out_proj
  - q_proj
  - k_proj
  - v_proj
  - o_proj
  - gate_proj
  - up_proj
  - down_proj
data_config: configs/qwen35-27b-stage3-instruct-v2/data.yaml
prepared_dataset: runs/qwen35-27b-stage3-instruct-v2/prepared
auto_mask_reasoning: true
learning_rate: 1.0e-05
lr_scheduler_type: constant_with_warmup
warmup_ratio: 0.03
weight_decay: 0.01
max_grad_norm: 1.0
optim: paged_adamw_8bit
num_train_epochs: 1
logging_steps: 1
disable_tqdm: false
save_strategy: steps
save_steps: 250
save_total_limit: 3
report_to: wandb
run_name: qwen35-27b-stage3-instruct-v2
```
### Data config

```yaml
datasets:
  - path: json
    data_files: /home/aibox/data/stage3_marvin_seed_only_lastonly.jsonl
    split: train
```
### Framework versions

- PEFT: 0.18.1
- Loft: 0.1.0
- Transformers: 5.5.0
- PyTorch: 2.10.0
- Datasets: 4.5.0
- Tokenizers: 0.22.2
## Model tree for ToastyPigeon/Qwen3.5-27B-Stage3-Instruct-V2

- Base model: Qwen/Qwen3.5-27B
- Finetuned from: ArliAI/Qwen3.5-27B-Derestricted