Instructions to use arnavgrg/codealpaca-qlora with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- PEFT
How to use arnavgrg/codealpaca-qlora with PEFT:
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM

base_model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")
model = PeftModel.from_pretrained(base_model, "arnavgrg/codealpaca-qlora")
```
- Notebooks
- Google Colab
- Kaggle
System crashing on Colab
#1
by tunesh - opened
While loading your model in Colab — or, in fact, my own PEFT-trained model — the kernel crashes. Do I need to do something else? Please let me know.
This is happening because of the limited amount of RAM available on Colab.
I tried to load the fine-tuned model on my VM (30 GB RAM, T4 GPU), but the system still crashed with an OOM error. Is there any other tested way to load the fine-tuned model binaries with Ludwig?
arnavgrg changed discussion status to closed