# gpt-oss-20b-Ja-Fin-Thinking

A Japanese financial domain reasoning model, built through supervised fine-tuning of gpt-oss-20b-Ja-Fin-CPT.

## Model Overview

Trained to provide high-quality responses with explicit reasoning traces for Japanese financial domain tasks.

- Base Model: gpt-oss-20b-Ja-Fin-CPT
- Training Stage: Supervised Fine-Tuning (SFT)
- Domain: Japanese finance
- Languages: Japanese, English
## Benchmark Results

### japanese-lm-fin-harness
| Model | Avg. | chabsa | cma | cpa | fp2 | ss1 |
|---|---|---|---|---|---|---|
| gpt-oss-20b (official) | 66.93 | 91.80 | 90.46 | 38.51 | 49.74 | 64.15 |
| gpt-oss-20b-Ja-Fin-Thinking (Ours) | 72.50 | 91.89 | 94.24 | 45.51 | 62.71 | 68.15 |
A +5.57-point average improvement over the official instruction-tuned model.
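As a quick sanity check, the Avg. column and the improvement margin can be recomputed from the per-task scores in the table above:

```python
# Per-task scores copied from the japanese-lm-fin-harness table above.
official = {"chabsa": 91.80, "cma": 90.46, "cpa": 38.51, "fp2": 49.74, "ss1": 64.15}
ours = {"chabsa": 91.89, "cma": 94.24, "cpa": 45.51, "fp2": 62.71, "ss1": 68.15}

# Avg. is the unweighted mean over the five tasks.
avg_official = sum(official.values()) / len(official)  # 66.932 -> 66.93
avg_ours = sum(ours.values()) / len(ours)              # 72.50

print(round(avg_official, 2), round(avg_ours, 2), round(avg_ours - avg_official, 2))
```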
### pfmt-bench-fin-ja
| Model | Avg. | turn1 | turn2 |
|---|---|---|---|
| gpt-oss-20b (official) | 7.883 | 7.858 | 7.908 |
| gpt-oss-20b-Ja-Fin-Thinking (Ours) | 8.209 | 7.992 | 8.425 |
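Here the Avg. column appears to be the mean of the two turn scores (for our model, 8.2085 reported as 8.209 after rounding), which can be verified directly:

```python
# Turn scores copied from the pfmt-bench-fin-ja table above.
scores = {
    "gpt-oss-20b (official)": (7.858, 7.908),
    "gpt-oss-20b-Ja-Fin-Thinking (Ours)": (7.992, 8.425),
}

# Avg. = mean of turn1 and turn2.
averages = {name: (t1 + t2) / 2 for name, (t1, t2) in scores.items()}
for name, avg in averages.items():
    print(f"{name}: {avg:.4f}")
```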
## Training

### Supervised Fine-Tuning

Fine-tuned on our synthetic instruction dataset with reasoning traces:
- Dataset: nri-fin-reasoning + supplementary data
- Total samples: ~1.44M
- Total tokens: ~9.5B
- Epochs: 2
Training Infrastructure:
- Hardware: AWS p5en.48xlarge (NVIDIA H200 Tensor Core GPU x 8)
- Training time: ~240 hours
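For scale, the statistics above imply roughly the following figures. This is a back-of-envelope sketch only; it assumes the ~9.5B token count is per epoch, which the numbers above do not state explicitly:

```python
# Rough derived figures from the SFT statistics listed above.
samples = 1.44e6   # ~1.44M training samples
tokens = 9.5e9     # ~9.5B tokens (assumed per-epoch dataset size)
epochs = 2
hours = 240        # ~240 hours of training time

tokens_per_sample = tokens / samples       # average sample length
tokens_seen = tokens * epochs              # tokens processed over 2 epochs
throughput = tokens_seen / (hours * 3600)  # average tokens/second overall

print(f"~{tokens_per_sample:,.0f} tokens/sample, "
      f"~{tokens_seen / 1e9:.0f}B tokens seen, ~{throughput:,.0f} tok/s")
```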
## Usage

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "nri-ai/gpt-oss-20b-Ja-Fin-Thinking"

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto",
)

# "Explain the advantages and disadvantages of diversified investment."
messages = [
    {"role": "user", "content": "分散投資のメリットとデメリットを説明してください。"}
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
)

inputs = tokenizer([text], return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=8192)

# Decode only the newly generated tokens, skipping the prompt.
response = tokenizer.decode(outputs[0][inputs.input_ids.shape[-1]:], skip_special_tokens=True)
print(response)
```
## Intended Use

### Primary Use Cases
- Financial question answering in Japanese
- Financial document analysis and summarization
- Financial reasoning and calculation tasks
- Multi-turn financial advisory conversations
### Out-of-Scope Uses
- Production deployment without additional safety evaluation
- Professional financial advice (this is a research model)
- Non-financial domain applications
## Limitations
- Domain specificity: Optimized for Japanese financial domain; performance on other domains may vary
- Synthetic training data: May contain hallucinations despite quality filtering
- Language coverage: Primarily Japanese and English
## Ethical Considerations
- Financial information generated by this model should not be used as professional financial advice without review by qualified experts
- Users should verify important financial information against authoritative sources and professional guidance before making decisions
- The model may reflect biases present in training data
## License
This model is released under the Apache 2.0 license.
## Privacy Notice

For details on how personal information is handled, please see the Privacy Notice (in Japanese).
## Citation

```bibtex
@inproceedings{okochiDomainSpecificLLM2026,
  author    = {大河内 悠磨 and Sim, Fabio Milentiansen and 岡田 智靖},
  title     = {ドメイン特化LLMの推論能力向上を目的とした合成指示データセットの構築と金融ドメインにおける評価},
  booktitle = {言語処理学会第32回年次大会 (NLP2026)},
  year      = {2026},
  month     = mar,
  address   = {Utsunomiya, Tochigi, Japan},
  publisher = {言語処理学会},
  note      = {Paper ID: C7-2},
  url       = {https://www.anlp.jp/proceedings/annual_meeting/2026/pdf_dir/C7-2.pdf}
}

@misc{okochi2026constructingsyntheticinstructiondatasets,
  title         = {Constructing Synthetic Instruction Datasets for Improving Reasoning in Domain-Specific LLMs: A Case Study in the Japanese Financial Domain},
  author        = {Yuma Okochi and Fabio Milentiansen Sim and Tomoyasu Okada},
  year          = {2026},
  eprint        = {2603.01353},
  archivePrefix = {arXiv},
  primaryClass  = {cs.LG},
  url           = {https://arxiv.org/abs/2603.01353}
}
```
## Acknowledgments
This model was developed with the support of the "GENIAC (Generative AI Accelerator Challenge)" project, implemented by the Ministry of Economy, Trade and Industry (METI) and the New Energy and Industrial Technology Development Organization (NEDO), with the aim of strengthening Japan's development capabilities in generative AI.