ModernBERT-JP-0.5B-PT-stage2

Overview

ModernBERT-JP-0.5B-PT-stage2 builds on iamtatsuki05/ModernBERT-JP-0.5B-PT-stage1 with additional pre-training on fujiki/wiki40b_ja. The model is trained on roughly 1B additional tokens at an 8,192-token context length to improve encyclopedic coverage and long-context understanding in Japanese.

Concept

Usage

Requirements

transformers>=4.51.0
accelerate>=1.6.0
sentencepiece>=0.2.0
flash-attn>=2.7.3
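
The requirements above can be installed with pip, for example (note that flash-attn compiles against CUDA, so a CUDA toolchain must be available; `--no-build-isolation` is the commonly recommended flag for its build):

```shell
pip install "transformers>=4.51.0" "accelerate>=1.6.0" "sentencepiece>=0.2.0"
# flash-attn builds a CUDA extension; this step can take a while
pip install "flash-attn>=2.7.3" --no-build-isolation
```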

Sample Code

import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

model_name = "iamtatsuki05/ModernBERT-JP-0.5B-PT-stage2"
model_kwargs = {
  "torch_dtype": torch.bfloat16,
  "attn_implementation": "flash_attention_2",
  "device_map": "auto",
}
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForMaskedLM.from_pretrained(model_name, **model_kwargs)

text = f"ハチワレは{tokenizer.mask_token}のキャラクターです。"
inputs = tokenizer(text, return_tensors="pt").to(model.device)
outputs = model(**inputs)
masked_index = inputs["input_ids"][0].tolist().index(tokenizer.mask_token_id)
print(tokenizer.decode(outputs.logits[0, masked_index].argmax(dim=-1)))
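
To inspect more than the single best prediction, `torch.topk` over the mask position's logits yields ranked candidate token ids (each decodable with `tokenizer.decode`). The tensor below is a hypothetical stand-in for `outputs.logits[0, masked_index]`, so the sketch runs without downloading the model:

```python
import torch

# Stand-in for outputs.logits[0, masked_index]: scores over a toy 6-token vocabulary.
mask_logits = torch.tensor([0.1, 2.5, -1.0, 3.2, 0.0, 1.7])

# Top-3 candidate token ids, highest score first.
top = torch.topk(mask_logits, k=3)
print(top.indices.tolist())  # [3, 1, 5]
```

With the real model, passing each of `top.indices` through `tokenizer.decode` gives the top candidate fillings for the mask.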

Model Details

  • Base model: iamtatsuki05/ModernBERT-JP-0.5B-PT-stage1
  • Architecture: ModernBERT
  • Maximum sequence length: 8,192 tokens
  • Embedding dimension: 1280
  • Tokenizer: SentencePiece / vocabulary size 102,400
  • Positional encoding: RoPE
  • Supported languages: Japanese
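
The RoPE positional encoding listed above rotates pairs of query/key dimensions by position-dependent angles instead of adding position vectors. A minimal, illustrative sketch of the idea (not the model's exact implementation):

```python
import torch

def rope(x: torch.Tensor, base: float = 10000.0) -> torch.Tensor:
    """Apply rotary position embedding to x of shape (seq_len, dim), dim even."""
    seq_len, dim = x.shape
    half = dim // 2
    # Per-pair rotation frequencies, geometrically spaced as in the RoPE paper.
    inv_freq = base ** (-torch.arange(half, dtype=torch.float32) / half)
    angles = torch.arange(seq_len, dtype=torch.float32)[:, None] * inv_freq[None, :]
    cos, sin = angles.cos(), angles.sin()
    # Rotate each (x1_i, x2_i) pair by its position-dependent angle.
    x1, x2 = x[:, :half], x[:, half:]
    return torch.cat([x1 * cos - x2 * sin, x1 * sin + x2 * cos], dim=-1)

q = torch.randn(8, 64)
out = rope(q)
print(out.shape)  # torch.Size([8, 64])
```

Because position 0 gets a zero rotation, the first row is left unchanged, and each rotation preserves vector norms; both properties make relative positions recoverable from dot products.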

Model Series

These models are further pre-trained on fujiki/wiki40b_ja for roughly 1B tokens with 8,192-token sequence lengths to enrich encyclopedic knowledge.

| ID | Architecture | #Param. | #Param. w/o Emb. | JGLUE-Avg | JGLUE-JSTS | JGLUE-JNLI | JGLUE-JCoLA |
|---|---|---|---|---|---|---|---|
| iamtatsuki05/ModernBERT-JP-0.5B-PT-stage2 (this model) | ModernBERT | 679M | 548M | 86.32 | 89.29 | 86.68 | 83.00 |
| iamtatsuki05/Llama-JP-0.5B-PT-stage2 | Llama | 661M | 530M | 81.88 | 82.91 | 78.67 | 84.06 |
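
The JGLUE-Avg column appears to be the unweighted mean of the three task scores (an assumption, but it reproduces the reported values):

```python
# Assumption: JGLUE-Avg = mean of the JSTS, JNLI, and JCoLA scores.
modernbert = [89.29, 86.68, 83.00]
llama = [82.91, 78.67, 84.06]
print(round(sum(modernbert) / 3, 2))  # 86.32
print(round(sum(llama) / 3, 2))       # 81.88
```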

License

This model is distributed under the MIT License.

How to Cite

@article{MIREI,
  title={同一条件下における Encoder/Decoder アーキテクチャによる文埋め込みの性能分析},
  author={岡田 龍樹 and 杉本 徹},
  journal={言語処理学会第 32 回年次大会 (NLP2026)},
  year={2026}
}