- Pipeline: Fill-Mask
- Libraries: Transformers, PyTorch, Safetensors
- Languages: Russian, English
- Tags: bert, pretraining, russian, embeddings, masked-lm, tiny, feature-extraction, sentence-similarity
Instructions to use cointegrated/rubert-tiny with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use cointegrated/rubert-tiny with Transformers:

```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("fill-mask", model="cointegrated/rubert-tiny")

# Load model directly
from transformers import AutoTokenizer, AutoModelForPreTraining

tokenizer = AutoTokenizer.from_pretrained("cointegrated/rubert-tiny")
model = AutoModelForPreTraining.from_pretrained("cointegrated/rubert-tiny")
```

- Inference
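Since the tags above also list feature-extraction and sentence-similarity, the model can double as a compact sentence encoder. A minimal sketch, assuming CLS pooling with L2 normalization; the pooling choice and the example sentences are illustrative and not taken from this page:

```python
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("cointegrated/rubert-tiny")
model = AutoModel.from_pretrained("cointegrated/rubert-tiny")

# Example inputs (the model is tagged for Russian and English).
sentences = ["привет мир", "hello world"]
batch = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    output = model(**batch)

# Use the [CLS] token state as the sentence vector, L2-normalized
# so that a dot product between two vectors equals cosine similarity.
embeddings = torch.nn.functional.normalize(output.last_hidden_state[:, 0, :])
similarity = float(embeddings[0] @ embeddings[1])
print(similarity)
```

With normalized vectors, ranking candidate sentences by dot product is equivalent to ranking by cosine similarity, which is the usual setup for sentence-similarity search.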
- Notebooks
- Google Colab
- Kaggle
- Xet hash: 8319dfbe9007d49334439340014d9bcebf9225788b131974dac3e013346cbc63
- Size of remote file: 259 MB
- SHA256: 1a1ac6cb709ba77b233e14c25a81cb660c1e32fee79464785a1ee6711615d82a
Xet efficiently stores large files inside Git by splitting them into unique, deduplicated chunks, which accelerates uploads and downloads.