Instructions to use whitefoxredhell/language_identification with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use whitefoxredhell/language_identification with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-classification", model="whitefoxredhell/language_identification")
```

```python
# Load the model directly
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("whitefoxredhell/language_identification")
model = AutoModelForSequenceClassification.from_pretrained("whitefoxredhell/language_identification")
```

- Notebooks
- Google Colab
- Kaggle
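The pipeline snippet above can be extended into a small end-to-end sketch. The helper function and the example sentence below are illustrative additions, not part of the model card; the sketch assumes the checkpoint is a standard sequence-classification model whose labels (language codes) come from its own config.

```python
# Minimal inference sketch for a language-identification checkpoint.
# The example sentence and helper are illustrative assumptions.
from transformers import pipeline


def top_label(scores):
    """Pick the highest-scoring entry from a list of {label, score} dicts."""
    return max(scores, key=lambda s: s["score"])["label"]


if __name__ == "__main__":
    pipe = pipeline("text-classification",
                    model="whitefoxredhell/language_identification")
    # top_k=None returns a score for every label the model knows,
    # rather than only the single best one
    scores = pipe("Bonjour tout le monde", top_k=None)
    print(top_label(scores))
```

Passing `top_k=None` is useful when you want the full score distribution over languages instead of just the argmax.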
- Xet hash: `30c92ef191178979616c243d60e7f3a6484ffb0968dfeec459064ff8846bc268`
- Size of remote file: 16.3 MB
- SHA256: `99cc999819aaabf74898a252863b10d86fbcd86e8b3f65c118ff334ff85c5ea5`
Xet efficiently stores large files inside Git, intelligently splitting files into unique chunks to accelerate uploads and downloads.
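The chunking idea can be sketched roughly as follows. This is a toy illustration of content-defined chunking in general, not Xet's actual algorithm: the rolling hash, mask, and minimum chunk size below are all invented for the example. The point is that chunk boundaries depend on the bytes themselves, so identical regions of two files produce identical chunks that can be deduplicated.

```python
# Toy content-defined chunker: boundaries are placed where a rolling
# hash of the data hits a fixed bit pattern, so boundaries follow the
# content rather than fixed offsets. All parameters are illustrative.
def chunk(data: bytes, mask: int = 0x3F, min_size: int = 16):
    chunks, start, h = [], 0, 0
    for i, b in enumerate(data):
        h = ((h << 1) ^ b) & 0xFFFFFFFF  # toy rolling hash over the bytes
        # cut a chunk once it is long enough and the hash's low bits are zero
        if i - start + 1 >= min_size and (h & mask) == 0:
            chunks.append(data[start:i + 1])
            start, h = i + 1, 0
    if start < len(data):  # emit whatever remains as the final chunk
        chunks.append(data[start:])
    return chunks
```

Because boundaries are content-derived, inserting bytes near the start of a file only changes the chunks around the edit; later chunks realign and dedupe against the previous version.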