Instructions for using MatteoKhan/CodeLlama-7B-Merged-Python with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
- Transformers
How to use MatteoKhan/CodeLlama-7B-Merged-Python with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="MatteoKhan/CodeLlama-7B-Merged-Python")
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("MatteoKhan/CodeLlama-7B-Merged-Python")
model = AutoModelForCausalLM.from_pretrained("MatteoKhan/CodeLlama-7B-Merged-Python")
```
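Once the pipeline is created, generation is a single call. A minimal sketch; the prompt and sampling settings here are illustrative assumptions, not part of the model card:

```python
# Generate a completion with the pipeline; prompt and sampling
# settings are illustrative assumptions.
from transformers import pipeline

pipe = pipeline("text-generation", model="MatteoKhan/CodeLlama-7B-Merged-Python")
result = pipe("def fibonacci(n):", max_new_tokens=128, do_sample=True, temperature=0.5)
print(result[0]["generated_text"])
```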
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use MatteoKhan/CodeLlama-7B-Merged-Python with vLLM:
Install from pip and serve the model
```sh
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "MatteoKhan/CodeLlama-7B-Merged-Python"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "MatteoKhan/CodeLlama-7B-Merged-Python",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```
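Because the vLLM server exposes an OpenAI-compatible API, you can also call it from Python. A minimal sketch, assuming the `openai` client package is installed (`pip install openai`) and the server above is running:

```python
# Call the local vLLM server through the OpenAI-compatible completions API.
from openai import OpenAI

# vLLM does not check the API key; any placeholder value works.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

completion = client.completions.create(
    model="MatteoKhan/CodeLlama-7B-Merged-Python",
    prompt="Once upon a time,",
    max_tokens=512,
    temperature=0.5,
)
print(completion.choices[0].text)
```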
Use Docker

```sh
docker model run hf.co/MatteoKhan/CodeLlama-7B-Merged-Python
```
- SGLang
How to use MatteoKhan/CodeLlama-7B-Merged-Python with SGLang:
Install from pip and serve the model
```sh
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "MatteoKhan/CodeLlama-7B-Merged-Python" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "MatteoKhan/CodeLlama-7B-Merged-Python",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```
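The same completions request can be made from Python over plain HTTP. A minimal sketch using the `requests` package (`pip install requests`), assuming the SGLang server above is running:

```python
# POST the same completions request to the SGLang server via plain HTTP.
import requests

response = requests.post(
    "http://localhost:30000/v1/completions",
    json={
        "model": "MatteoKhan/CodeLlama-7B-Merged-Python",
        "prompt": "Once upon a time,",
        "max_tokens": 512,
        "temperature": 0.5,
    },
)
response.raise_for_status()
print(response.json()["choices"][0]["text"])
```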
Use Docker images

```sh
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "MatteoKhan/CodeLlama-7B-Merged-Python" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "MatteoKhan/CodeLlama-7B-Merged-Python",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```
- Docker Model Runner
How to use MatteoKhan/CodeLlama-7B-Merged-Python with Docker Model Runner:
```sh
docker model run hf.co/MatteoKhan/CodeLlama-7B-Merged-Python
```

# CodeLlama-Hybrid-7B: Optimized for Code Generation
## Overview
CodeLlama-Hybrid-7B is an experimental hybrid language model that merges the capabilities of two CodeLlama variants. Built using MergeKit, this model is optimized for programming-related tasks, balancing efficiency and performance in code generation and understanding.
- Created by: Matteo Khan
- Affiliation: Apprentice at TW3 Partners (Generative AI Research)
- License: MIT
- Connect with me on LinkedIn
- Model on Hugging Face
## Model Details
- Model Type: Hybrid Language Model (Merged for Code Generation)
- Parent Models: codellama/CodeLlama-7b-hf and codellama/CodeLlama-7b-Python-hf
- Merging Technique: Linear Merge (MergeKit)
- Tokenizer Source: codellama/CodeLlama-7b-hf
## Intended Use
This model is designed for code-related tasks and experimentation in hybrid model optimization. Possible applications include:
- Code Generation
- Code Completion & Assistance
- Code Understanding & Refactoring
- Exploration of Model Merging Effects on Programming Tasks
## Limitations & Considerations
While CodeLlama-Hybrid-7B provides enhanced code generation capabilities, it inherits some limitations from its parent models:
- May produce incorrect or insecure code
- Can generate biased, offensive, or inappropriate content
- Merging may introduce unpredictable behaviors
- Performance may vary depending on the programming language and context
## Merging Process & Configuration
This is not a newly trained model, but rather a merge of existing models using the following configuration:
```yaml
merge_method: linear
dtype: float16
allow_crimes: true
models:
  - model: "codellama/CodeLlama-7b-hf"
    parameters:
      t: 1.0
      weight: 0.5
  - model: "codellama/CodeLlama-7b-Python-hf"
    parameters:
      t: 1.0
      weight: 0.5
parameters:
  normalize: true
  int8_mask: false
  ignore_mismatched_sizes: true
layers:
  - pattern: "model.*"
tokenizer_source: "codellama/CodeLlama-7b-hf"
```
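To reproduce a merge like this, a minimal sketch: save the configuration above as `merge-config.yaml` and invoke MergeKit's `mergekit-yaml` entry point. This assumes MergeKit is installed (`pip install mergekit`); the output directory name is an illustrative choice:

```python
# Run MergeKit on the configuration above and write the merged weights
# to ./CodeLlama-Hybrid-7B (an illustrative output path).
import subprocess

subprocess.run(
    ["mergekit-yaml", "merge-config.yaml", "./CodeLlama-Hybrid-7B"],
    check=True,
)
```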
No formal evaluation has been conducted yet. Users are encouraged to benchmark and share feedback!
## Environmental Impact
By utilizing model merging instead of training from scratch, CodeLlama-Hybrid-7B significantly reduces computational and environmental costs.
## How to Use
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the merged model from this repository
model_name = "MatteoKhan/CodeLlama-7B-Merged-Python"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Example usage
prompt = "Write a Python function to calculate Fibonacci numbers."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_length=200)
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(response)
```
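If a GPU is available, loading in half precision with automatic device placement can roughly halve memory use. A hedged sketch, assuming a CUDA device and the `accelerate` package are installed:

```python
# Load the model in float16 and let accelerate place it on available devices.
import torch
from transformers import AutoModelForCausalLM

model_name = "MatteoKhan/CodeLlama-7B-Merged-Python"
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.float16,
    device_map="auto",  # requires the accelerate package
)
```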
Feedback & Contact: Reach out via Hugging Face.
Happy Coding!