Robotics
Transformers
Safetensors
qwen2_5_vl
image-text-to-text
vision-language-action-model
vision-language-model
text-generation-inference
Instructions to use InternRobotics/InternVLA-M1 with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use InternRobotics/InternVLA-M1 with Transformers:
```python
# Load model directly
from transformers import AutoProcessor, AutoModelForImageTextToText

processor = AutoProcessor.from_pretrained("InternRobotics/InternVLA-M1")
model = AutoModelForImageTextToText.from_pretrained("InternRobotics/InternVLA-M1")
```
- Notebooks
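Once the processor and model are loaded, they can be used for image-conditioned generation. Below is a minimal sketch, not an official recipe: the message layout follows the generic Transformers chat-template convention for image-text-to-text models, and the image path and prompt are placeholder assumptions — check the model card for the exact prompt format InternVLA-M1 expects.

```python
def build_messages(image, prompt):
    """Pair one image with a text prompt in the standard Transformers
    chat-template message format for image-text-to-text models."""
    return [
        {
            "role": "user",
            "content": [
                {"type": "image", "image": image},
                {"type": "text", "text": prompt},
            ],
        }
    ]

if __name__ == "__main__":
    # Heavy part: downloads the checkpoint on first run.
    from transformers import AutoProcessor, AutoModelForImageTextToText

    processor = AutoProcessor.from_pretrained("InternRobotics/InternVLA-M1")
    model = AutoModelForImageTextToText.from_pretrained("InternRobotics/InternVLA-M1")

    # "scene.jpg" and the prompt are illustrative placeholders.
    messages = build_messages("scene.jpg", "Describe the scene.")
    inputs = processor.apply_chat_template(
        messages,
        add_generation_prompt=True,
        tokenize=True,
        return_dict=True,
        return_tensors="pt",
    )
    output_ids = model.generate(**inputs, max_new_tokens=64)
    print(processor.batch_decode(output_ids, skip_special_tokens=True)[0])
```

The model loading is kept under a `__main__` guard so the message-building helper can be reused without triggering the checkpoint download.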
- Google Colab
- Kaggle
Adding `transformers` as the library tag
#1
by ariG23498 (HF Staff) - opened
Hey team! Congratulations on being among the top 100 trending models in the Hub today.
I have added the `transformers` library tag so that the model is:
- More visible on the Hub
- Easier to use, since the tag automatically adds a "Use the model" snippet to the model card
Some ideas to increase visibility:
- Add a code snippet for inference inside the README
- A Hugging Face Space to demo the model
- An organization blog post, which typically gets the most visibility
Let me know if you need help with any of the above.
chenyilun95 changed pull request status to merged