Instructions for using hf-tiny-model-private/tiny-random-XLMWithLMHeadModel with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use hf-tiny-model-private/tiny-random-XLMWithLMHeadModel with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("fill-mask", model="hf-tiny-model-private/tiny-random-XLMWithLMHeadModel")
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("hf-tiny-model-private/tiny-random-XLMWithLMHeadModel")
model = AutoModelForMaskedLM.from_pretrained("hf-tiny-model-private/tiny-random-XLMWithLMHeadModel")
```

- Notebooks
- Google Colab
- Kaggle
The model's special-tokens map:

```json
{
  "additional_special_tokens": [
    "<special0>",
    "<special1>",
    "<special2>",
    "<special3>",
    "<special4>",
    "<special5>",
    "<special6>",
    "<special7>",
    "<special8>",
    "<special9>"
  ],
  "bos_token": "<s>",
  "cls_token": "</s>",
  "mask_token": "<special1>",
  "pad_token": "<pad>",
  "sep_token": "</s>",
  "unk_token": "<unk>"
}
```
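One practical consequence of this map: the mask token for this checkpoint is `<special1>`, not the `<mask>` string used by many other fill-mask models, so prompts must embed that token. The sketch below illustrates this with the values copied from the map above (it does not load the tokenizer from the Hub; the example sentence is an arbitrary illustration):

```python
# Values copied from the special-tokens map above (not fetched from the Hub).
special_tokens = {
    "bos_token": "<s>",
    "cls_token": "</s>",
    "mask_token": "<special1>",
    "pad_token": "<pad>",
    "sep_token": "</s>",
    "unk_token": "<unk>",
}

# A fill-mask prompt must use the model's own mask token; substituting a
# generic "<mask>" would be tokenized as ordinary text for this model.
prompt = f"Paris is the capital of {special_tokens['mask_token']}."
print(prompt)  # Paris is the capital of <special1>.
```

In real use you would read the same value from `tokenizer.mask_token` after `AutoTokenizer.from_pretrained(...)`, which keeps the prompt correct even if the checkpoint's special tokens change.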