Jackrong committed (verified)
Commit e0e1a37 · 1 parent: a608b29

Unsloth Model Card

Files changed (1):
  1. README.md +13 -75

README.md CHANGED
@@ -1,83 +1,21 @@
  ---
- language:
- - en
- - zh
- license: lgpl-3.0
- base_model: Qwen/Qwen3.5-27B
  tags:
  - unsloth
- - qwen
- - qwen3.5
- - reasoning
- - chain-of-thought
- - Dense
- pipeline_tag: text-generation
  ---

- # 🌟 Qwen3.5-27B-Claude-4.6-Opus-Reasoning-Distilled
-
- ![HB8AleUaMAArNyM](https://cdn-uploads.huggingface.co/production/uploads/66309bd090589b7c65950665/GHkMJL6I383eIwK1qj80K.jpeg)
-
- ## 💡 Model Introduction
- **Qwen3.5-27B-Claude-4.6-Opus-Reasoning-Distilled** is a highly capable reasoning model fine-tuned on top of the Qwen3.5 architecture. Its core aim is to leverage state-of-the-art Chain-of-Thought (CoT) distillation, sourced primarily from Claude-4.6 Opus interactions.
-
- Through Supervised Fine-Tuning (SFT) focused specifically on structured reasoning logic, the model excels at breaking down complex user problems, planning step-by-step methodologies within strictly formatted `<think>` tags, and ultimately delivering precise, nuanced solutions.
-
- ### 🧠 Example of Learned Reasoning Scaffold
-
- The model includes targeted optimizations addressing Qwen3.5's tendency toward excessive transitional or repetitive reasoning on simple queries. Through deep distillation and structural imitation of Claude-4.6-Opus reasoning chains, it adopts a more efficient structured thinking pattern:
- **"Let me analyze this request carefully: 1..2..3...".**
- This streamlined reasoning paradigm significantly reduces redundant cognitive loops while preserving deep analytical capacity, substantially improving inference efficiency.
-
- ```text
- Let me analyze this request carefully:
-
- 1. Identify the core objective of the problem.
- 2. Break the task into clearly defined subcomponents.
- 3. Evaluate constraints and edge cases.
- 4. Formulate a step-by-step solution plan.
- 5. Execute the reasoning sequentially and verify consistency.
- ...
- ```
-
- ## 🗺️ Training Pipeline Overview
-
- ```text
- Base Model (Qwen3.5-27B)
-     ↓
- Supervised Fine-Tuning (SFT) + LoRA
-     ↓
- Final Model (Claude-4.6-Opus-Reasoning-Distilled, text-only)
- ```
-
- ## 📋 Stage Details
-
- ### 🔹 Supervised Fine-Tuning (SFT)
- - **Objective:** Inject high-density reasoning logic and establish a strict problem-solving format in which an internal thinking stage precedes the final response.
- - **Methodology:** We used **Unsloth** for highly memory- and compute-efficient training (LoRA rank = 64). A critical component of this stage is the `train_on_responses_only` strategy, which masks the instruction tokens so the loss is computed purely over the generated `<think>` sequences and the subsequent solutions.
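The response-only masking described above can be sketched in plain Python. The token values and the `mask_instruction_labels` helper below are illustrative, not Unsloth's actual implementation; the only assumed convention is the standard one that cross-entropy loss ignores labels set to -100.

```python
# Sketch of the "train_on_responses_only" idea: labels for instruction
# tokens are set to -100 (ignored by cross-entropy), so the loss is
# computed only over the assistant's reasoning and final answer.
# Token values and this helper are illustrative, not Unsloth internals.
IGNORE_INDEX = -100  # label value ignored by standard cross-entropy loss

def mask_instruction_labels(input_ids, response_start):
    """Return labels where every token before the response is ignored."""
    labels = list(input_ids)
    for i in range(min(response_start, len(labels))):
        labels[i] = IGNORE_INDEX
    return labels

# Example: 4 instruction tokens followed by 3 response tokens.
ids = [101, 102, 103, 104, 900, 901, 902]
print(mask_instruction_labels(ids, response_start=4))
# [-100, -100, -100, -100, 900, 901, 902]
```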
- - **Format Enforcement:** All training samples were systematically normalized so the model strictly follows the structure `<think> {internal reasoning} </think>\n {final answer}`.
-
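A minimal sketch of that enforced structure, assuming hypothetical `normalize_sample` and `split_sample` helpers (they are not part of the released tooling):

```python
import re

# Sketch of the enforced sample format: <think> {reasoning} </think>\n {answer}.
# The regex and both helpers are illustrative, not the card's actual pipeline.
THINK_RE = re.compile(r"^<think>\s*(.*?)\s*</think>\s*(.*)$", re.DOTALL)

def normalize_sample(reasoning, answer):
    """Wrap a reasoning trace and final answer in the enforced structure."""
    return f"<think> {reasoning.strip()} </think>\n {answer.strip()}"

def split_sample(text):
    """Recover (reasoning, answer) from an enforced-format completion."""
    m = THINK_RE.match(text.strip())
    if m is None:
        raise ValueError("sample does not follow the enforced format")
    return m.group(1), m.group(2)

sample = normalize_sample("Identify the objective.", "The answer is 42.")
print(split_sample(sample))  # ('Identify the objective.', 'The answer is 42.')
```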
- ### 📚 All Datasets Used
- The dataset consists of high-quality, filtered reasoning-distillation data:
-
- | Dataset Name | Description / Purpose |
- |--------------|-----------------------|
- | [nohurry/Opus-4.6-Reasoning-3000x-filtered](https://huggingface.co/datasets/nohurry/Opus-4.6-Reasoning-3000x-filtered) | Provides comprehensive Claude 4.6 Opus reasoning trajectories. |
- | [TeichAI/claude-4.5-opus-high-reasoning-250x](https://huggingface.co/datasets/TeichAI/claude-4.5-opus-high-reasoning-250x) | Injects high-intensity, structured reasoning instances. |

- ## 🌟 Core Skills & Capabilities
- 1. **Modular & Structured Thinking:** Inheriting traits from Opus-level reasoning, the model parses prompts confidently and lays out a sequential plan in its `<think>` block, rather than engaging in exploratory, trial-and-error self-doubt.
- 2. **Extended Context Support:** Fine-tuned with an 8192-token context window, allowing complex multi-step reasoning traces to fit comfortably within memory limits.

- ## ⚠️ Limitations & Intended Use
- - **Hallucination Risk:** While its reasoning is strong, the model remains an autoregressive LLM; facts asserted during the thinking sequence may occasionally be hallucinated, especially when referencing real-world events.
- - **Intended Scenario:** Best suited for offline analytical tasks, coding, math, and logic-heavy prompting where the user needs to transparently follow the AI's internal reasoning.
- - **Preview Version Notice:** Because this model is relatively new and intentionally lightweight, the surrounding ecosystem (inference templates, fine-tuning pipelines, routing configurations, and tooling integrations) may not yet be fully mature or standardized. Users may therefore encounter occasional bugs, compatibility inconsistencies, or integration edge cases. Consider the current release a preview build while the broader architectural stack and supporting utilities continue to stabilize.

- ## 🙏 Acknowledgements
- Many thanks to the [Unsloth AI](https://unsloth.ai/) team for making rapid fine-tuning of MoE and large LLM models accessible. We also acknowledge the Qwen team, as well as the open-source community developers producing exceptional distilled datasets (`nohurry` and `TeichAI`).
 
  ---
+ base_model: qwen/Qwen3.5-27B
  tags:
+ - text-generation-inference
+ - transformers
  - unsloth
+ - qwen3_5
+ license: apache-2.0
+ language:
+ - en
  ---
+
+ # Uploaded finetuned model
+
+ - **Developed by:** Jackrong
+ - **License:** apache-2.0
+ - **Finetuned from model:** qwen/Qwen3.5-27B
+
+ This qwen3_5 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
+
+ [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)