MIREI: Matched Investigation of Representation Embedding Insights (collection, 14 items) • paper: https://www.anlp.jp/proceedings/annual_meeting/2026/pdf_dir/C9-1.pdf

Languages: English / Japanese
ModernBERT-JP-0.5B-PT-stage2 builds on iamtatsuki05/ModernBERT-JP-0.5B-PT-stage1 with additional pre-training on fujiki/wiki40b_ja. It sees roughly 1B additional tokens at an 8,192-token context length to improve encyclopedic coverage and long-context understanding in Japanese.
transformers>=4.51.0
accelerate>=1.6.0
sentencepiece>=0.2.0
flash-attn>=2.7.3
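The pinned requirements above can be installed with pip, for example (note that flash-attn needs a CUDA toolchain to build and can be skipped if you fall back to the default attention implementation):

```shell
pip install "transformers>=4.51.0" "accelerate>=1.6.0" "sentencepiece>=0.2.0" "flash-attn>=2.7.3"
```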
```python
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

model_name = "iamtatsuki05/ModernBERT-JP-0.5B-PT-stage2"
model_kwargs = {
    "torch_dtype": torch.bfloat16,
    # Requires the flash-attn package; use "sdpa" if it is unavailable.
    "attn_implementation": "flash_attention_2",
    "device_map": "auto",
}
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForMaskedLM.from_pretrained(model_name, **model_kwargs)

text = f"ハチワレは{tokenizer.mask_token}のキャラクターです。"
inputs = tokenizer(text, return_tensors="pt").to(model.device)
outputs = model(**inputs)

# Locate the [MASK] position and decode the highest-scoring token.
masked_index = inputs["input_ids"][0].tolist().index(tokenizer.mask_token_id)
print(tokenizer.decode(outputs.logits[0, masked_index].argmax(dim=-1)))
```
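Since MIREI studies sentence embeddings from encoder models, a common way to turn this masked LM into a sentence encoder is masked mean pooling over the last hidden states. This is a minimal, generic sketch (not the method prescribed by the model card): the pooling function is framework-agnostic, and the dummy tensors below stand in for real model outputs such as `outputs.hidden_states[-1]` together with `inputs["attention_mask"]`.

```python
import torch

def mean_pool(last_hidden_state: torch.Tensor, attention_mask: torch.Tensor) -> torch.Tensor:
    """Average token embeddings over the sequence, ignoring padding positions."""
    mask = attention_mask.unsqueeze(-1).to(last_hidden_state.dtype)  # (B, T, 1)
    summed = (last_hidden_state * mask).sum(dim=1)                   # (B, H)
    counts = mask.sum(dim=1).clamp(min=1e-9)                         # (B, 1)
    return summed / counts

# Dummy example: batch=2, seq_len=4, hidden=8, with different amounts of padding.
hidden = torch.randn(2, 4, 8)
mask = torch.tensor([[1, 1, 1, 0], [1, 1, 0, 0]])
emb = mean_pool(hidden, mask)
print(emb.shape)  # torch.Size([2, 8])
```

With the real model, enable `output_hidden_states=True` in the forward call (or load the encoder with `AutoModel`) and feed the final hidden states plus the attention mask into `mean_pool`.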
These models are further pre-trained on fujiki/wiki40b_ja for roughly 1B tokens with 8,192-token sequence lengths to enrich encyclopedic knowledge.
| ID | Architecture | #Param. | #Param. w/o Emb. | JGLUE-Avg | JGLUE-JSTS | JGLUE-JNLI | JGLUE-JCoLA |
|---|---|---|---|---|---|---|---|
| iamtatsuki05/ModernBERT-JP-0.5B-PT-stage2 (this model) | ModernBERT | 679M | 548M | 86.32 | 89.29 | 86.68 | 83.00 |
| iamtatsuki05/Llama-JP-0.5B-PT-stage2 | Llama | 661M | 530M | 81.88 | 82.91 | 78.67 | 84.06 |
This model is distributed under the MIT License.
@article{MIREI,
  title={同一条件下における Encoder/Decoder アーキテクチャによる文埋め込みの性能分析},
  author={岡田 龍樹 and 杉本 徹},
  journal={言語処理学会第 32 回年次大会 (NLP2026)},
  year={2026}
}
Base model
iamtatsuki05/ModernBERT-JP-0.5B-init