Instructions for using answerdotai/ModernBERT-base with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
- Transformers
How to use answerdotai/ModernBERT-base with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("fill-mask", model="answerdotai/ModernBERT-base")

# Load model directly
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("answerdotai/ModernBERT-base")
model = AutoModelForMaskedLM.from_pretrained("answerdotai/ModernBERT-base")
```
- Notebooks
- Google Colab
- Kaggle
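As a companion to the usage snippet above, here is a minimal sketch of how the fill-mask pipeline's output is typically post-processed. It assumes the standard Transformers fill-mask output format (a list of candidate dicts with `token_str`, `score`, and `sequence` keys); the candidate values below are mocked for illustration, not actual ModernBERT output.

```python
# Post-processing fill-mask pipeline output.
# Real usage would be: candidates = pipe("The capital of France is [MASK].")
# Here we mock the output structure so the helper can be shown standalone.

def top_predictions(candidates, k=3):
    """Return the k highest-scoring (token, score) pairs from fill-mask output."""
    ranked = sorted(candidates, key=lambda c: c["score"], reverse=True)
    return [(c["token_str"], round(c["score"], 3)) for c in ranked[:k]]

# Mocked pipeline output (illustrative values only):
mock_output = [
    {"token_str": " Paris", "score": 0.92, "sequence": "The capital of France is Paris."},
    {"token_str": " Lyon", "score": 0.03, "sequence": "The capital of France is Lyon."},
    {"token_str": " Nice", "score": 0.01, "sequence": "The capital of France is Nice."},
]
print(top_predictions(mock_output, k=2))
```

Note that `[MASK]` is ModernBERT's mask token; other models may use a different placeholder, which you can check via `tokenizer.mask_token`.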
Training Data? #32
by binarymax
Hi! Excellent work on this model. Can you please share more information on the training data used? The sources are quite vague, and it would be good to know more specifics to understand what content/domains this might better align with than others.
Hello,
Unfortunately, this is the most we can share about the data; I am deeply sorry about that.
Hopefully, the broad domains and experiments give some signal about the domains ModernBERT aligns with; the content itself should be quite diverse.