How to use smktech9/ltx-video with Diffusers:

```shell
pip install -U diffusers transformers accelerate
```

```python
import torch
from diffusers import DiffusionPipeline
from diffusers.utils import export_to_video

# Switch device_map to "mps" on Apple devices
pipe = DiffusionPipeline.from_pretrained(
    "Lightricks/LTX-Video", dtype=torch.bfloat16, device_map="cuda"
)
pipe.load_lora_weights("smktech9/ltx-video")

# Include the BW_STYLE trigger token in the prompt (see Model description)
prompt = "BW_STYLE A man with short gray hair plays a red electric guitar."
output = pipe(prompt=prompt).frames[0]
export_to_video(output, "output.mp4")
```
LoRA Finetune
Model description
This is a LoRA finetune of the Lightricks/LTX-Video model.
The model was trained using finetrainers.
The `id_token` used during training was BW_STYLE. When the `id_token` is not `None`, it should be included in your prompts as a trigger token.
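The trigger token is not part of the natural-language description; it is simply prefixed to the prompt text. A minimal sketch of this convention (the `build_prompt` helper is illustrative, not part of any library):

```python
# Illustrative helper: prepend the trigger token used during training.
ID_TOKEN = "BW_STYLE"

def build_prompt(text, id_token=ID_TOKEN):
    # If an id_token was used for training, prefix it; otherwise pass through.
    return f"{id_token} {text}" if id_token else text

print(build_prompt("A man with short gray hair plays a red electric guitar."))
# BW_STYLE A man with short gray hair plays a red electric guitar.
```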
Download model
Download the LoRA weights from the Files & Versions tab.
Usage
Requires the 🧨 Diffusers library to be installed.
TODO
For more details, including weighting, merging and fusing LoRAs, check the documentation on loading LoRAs in diffusers.
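Conceptually, fusing a LoRA into a model is just a low-rank weight update: the fused weight is `W + scale * (B @ A)`, where `A` and `B` are the trained low-rank factors and `scale` corresponds to the adapter weight. A pure-Python sketch of this arithmetic (all names and values here are illustrative, not the Diffusers internals):

```python
# Minimal sketch of how a LoRA update modifies a weight matrix.

def matmul(X, Y):
    # Plain nested-list matrix multiply.
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*Y)] for row in X]

# Base weight W (2x2), low-rank factors B (2x1) and A (1x2), rank r = 1
W = [[1.0, 0.0], [0.0, 1.0]]
B = [[0.5], [1.0]]
A = [[2.0, 0.0]]
scale = 0.8  # LoRA weight, analogous to an adapter weight in Diffusers

# Fused weight: W' = W + scale * (B @ A)
BA = matmul(B, A)
W_fused = [[w + scale * d for w, d in zip(wr, dr)] for wr, dr in zip(W, BA)]
print(W_fused)  # → [[1.8, 0.0], [1.6, 1.0]]
```

Setting `scale` to 0 recovers the base model, and intermediate values blend the LoRA's effect, which is what per-adapter weighting controls.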
Intended uses & limitations
How to use
# TODO: add an example code snippet for running this diffusion pipeline
Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
Training details
[TODO: describe the data used to train the model]