MLAdaptiveIntelligence/LLaVAction-7B

Pipeline tag: video-text-to-text
Tags: Transformers, Safetensors, English, qwen2, text-generation, Action, Video, MQA, multimodal, VLM, LLaVAction, MLLMs, Eval Results, text-generation-inference
Instructions for using MLAdaptiveIntelligence/LLaVAction-7B with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
- Transformers
How to use MLAdaptiveIntelligence/LLaVAction-7B with Transformers:

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("MLAdaptiveIntelligence/LLaVAction-7B")
model = AutoModelForCausalLM.from_pretrained("MLAdaptiveIntelligence/LLaVAction-7B")
```

- Notebooks
- Google Colab
- Kaggle
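The loader snippet above is the generic auto-generated one; LLaVAction's video inputs require the project's own preprocessing pipeline, which this card does not document (see the project page). As a text-only sketch of running generation once the model and tokenizer are loaded, a small helper might look like the following (`generate_reply` is a hypothetical name, not part of the released API):

```python
# Text-only generation sketch for the snippet above. `generate_reply` is a
# hypothetical helper (not part of the released LLaVAction API); video inputs
# need the project's own preprocessing, documented on the project page.
def generate_reply(model, tokenizer, prompt, max_new_tokens=64):
    """Greedy-decode a continuation of `prompt` and return only the new text."""
    input_ids = tokenizer(prompt, return_tensors="pt")["input_ids"].to(model.device)
    output_ids = model.generate(input_ids=input_ids, max_new_tokens=max_new_tokens)
    # Drop the prompt tokens so only the generated continuation is decoded.
    return tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True)
```

With the tokenizer and model from the snippet above, `generate_reply(model, tokenizer, "Describe the action in this clip.")` would return the decoded continuation as a string.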
PR #2: Add link to project page and correct pipeline tag
Opened by nielsr (HF Staff)
This PR improves the model card by adding a link to the project page at https://mmathislab.github.io/llavaction/. It also corrects the pipeline_tag to video-text-to-text, which accurately reflects the model's functionality.