Instructions for using arnavgrg/codealpaca-qlora with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
- PEFT
How to use arnavgrg/codealpaca-qlora with PEFT:
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM

base_model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")
model = PeftModel.from_pretrained(base_model, "arnavgrg/codealpaca-qlora")
```
An inference sketch follows the notebook links below.
- Notebooks
- Google Colab
- Kaggle
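To generate text with the model loaded in the PEFT snippet above, something like the following should work. This is a minimal sketch, not from the model card: the prompt string is an illustrative assumption (CodeAlpaca-style adapters are usually trained on Alpaca-format instruction prompts, so the exact template may differ), and the generation settings are placeholders.

```python
from transformers import AutoTokenizer

# Continues from the PEFT snippet above; `model` is the adapter-wrapped base model.
# The tokenizer comes from the base checkpoint, assuming the adapter kept the vocabulary.
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")

prompt = "Write a Python function that reverses a string."  # illustrative prompt
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```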
"How to use the model" section in the model card
#4
by sudhir2016
This looks incorrect. The snippet gives no option to load meta-llama/Llama-2-7b-hf in 4-bit. Please suggest a fix.
```python
model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf", load_in_4bit=True)
```
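For reference, `load_in_4bit=True` works when bitsandbytes is installed, and newer transformers releases prefer passing a `BitsAndBytesConfig` via `quantization_config`. Below is a minimal sketch of the full QLoRA-style load, a 4-bit base model with the adapter attached, assuming transformers, peft, accelerate, and bitsandbytes are available; the quantization settings are typical QLoRA defaults, not values taken from the model card.

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# NF4 quantization config; these are common QLoRA defaults, assumed here
# rather than documented by the adapter's author.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

base_model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",
    quantization_config=bnb_config,
    device_map="auto",  # requires accelerate
)
model = PeftModel.from_pretrained(base_model, "arnavgrg/codealpaca-qlora")
```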