Instructions to use google/mobilebert-uncased with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use google/mobilebert-uncased with Transformers:
```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForPreTraining

tokenizer = AutoTokenizer.from_pretrained("google/mobilebert-uncased")
model = AutoModelForPreTraining.from_pretrained("google/mobilebert-uncased")
```
- Notebooks
- Google Colab
- Kaggle
Extremely high logits (#9), opened by Thomas2419
Hello, I've found that this model produces extremely high logits, and because of that the loss on new tasks runs into the millions, compared to BERT base, RoBERTa, DeBERTa, and other models I tested identically to MobileBERT. Is this an intentional facet of MobileBERT? It seems to render finetuning new heads onto the frozen model impossible due to instability.
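The instability described above can be reproduced and mitigated without the model itself. The sketch below (not from the thread, and not an official fix) uses random tensors with an artificially large scale as stand-ins for frozen MobileBERT features, then shows how normalizing the features before a fresh linear head brings the logits back to a trainable range:

```python
# Sketch, assuming the reported behavior stems from large-magnitude hidden
# states feeding a freshly initialized head. Random tensors stand in for
# frozen MobileBERT [CLS] features; the 1000x scale is an illustrative
# assumption, not a measured value.
import torch
import torch.nn as nn

torch.manual_seed(0)
hidden_size, num_labels, batch = 512, 3, 8

# Simulated frozen-backbone features with an unusually large scale.
features = torch.randn(batch, hidden_size) * 1000.0

head = nn.Linear(hidden_size, num_labels)
norm = nn.LayerNorm(hidden_size)  # rescales each feature vector to mean 0, std 1

raw_logits = head(features)          # inherits the huge feature scale
normed_logits = head(norm(features)) # back to order-1 magnitudes

print(f"raw logit scale:    {raw_logits.abs().mean():.1f}")
print(f"normed logit scale: {normed_logits.abs().mean():.2f}")
```

If the frozen features really are the culprit, inserting a `LayerNorm` (or standardizing features offline) between the backbone and the new head is one plausible workaround; lowering the head's learning rate is another.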