Instructions to use MatteoKhan/CodeLlama-7B-Merged-Python with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use MatteoKhan/CodeLlama-7B-Merged-Python with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="MatteoKhan/CodeLlama-7B-Merged-Python")
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("MatteoKhan/CodeLlama-7B-Merged-Python")
model = AutoModelForCausalLM.from_pretrained("MatteoKhan/CodeLlama-7B-Merged-Python")
```

- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use MatteoKhan/CodeLlama-7B-Merged-Python with vLLM:
Install from pip and serve the model:

```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "MatteoKhan/CodeLlama-7B-Merged-Python"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "MatteoKhan/CodeLlama-7B-Merged-Python",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```

Use Docker
```shell
docker model run hf.co/MatteoKhan/CodeLlama-7B-Merged-Python
```
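Once the server is running, any OpenAI-compatible client can call it. Below is a minimal sketch using the official `openai` Python client, assuming the `vllm serve` command above is listening on its default port 8000 (the prompt and API key are placeholders; vLLM does not check the key by default):

```python
# Minimal sketch: query the vLLM server via its OpenAI-compatible API.
# Assumes `vllm serve` from above is listening on localhost:8000.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",  # vLLM's OpenAI-compatible endpoint
    api_key="EMPTY",  # placeholder; vLLM does not validate the key by default
)

completion = client.completions.create(
    model="MatteoKhan/CodeLlama-7B-Merged-Python",
    prompt="def fibonacci(n):",  # example prompt, not from the model card
    max_tokens=256,
    temperature=0.5,
)
print(completion.choices[0].text)
```

The same snippet works against the SGLang server in the next section by pointing `base_url` at port 30000.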
- SGLang
How to use MatteoKhan/CodeLlama-7B-Merged-Python with SGLang:
Install from pip and serve the model:

```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "MatteoKhan/CodeLlama-7B-Merged-Python" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "MatteoKhan/CodeLlama-7B-Merged-Python",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```

Use Docker images
```shell
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "MatteoKhan/CodeLlama-7B-Merged-Python" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "MatteoKhan/CodeLlama-7B-Merged-Python",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```

- Docker Model Runner
How to use MatteoKhan/CodeLlama-7B-Merged-Python with Docker Model Runner:
```shell
docker model run hf.co/MatteoKhan/CodeLlama-7B-Merged-Python
```
---
license: mit
language:
- en
base_model:
- codellama/CodeLlama-7b-hf
- codellama/CodeLlama-7b-Python-hf
library_name: transformers
tags:
- mergekit
- merged-model
- codellama
- programming
- language-model
---
# CodeLlama-Hybrid-7B: Optimized for Code Generation

## Overview

**CodeLlama-Hybrid-7B** is an **experimental hybrid language model** that merges the capabilities of two CodeLlama variants. Built with **MergeKit**, it is optimized for programming-related tasks, balancing efficiency and performance in code generation and understanding.

- **Created by**: Matteo Khan
- **Affiliation**: Apprentice at TW3 Partners (Generative AI Research)
- **License**: MIT

[Connect with me on LinkedIn](https://www.linkedin.com/in/matteo-khan-a10309263/)
[Model on Hugging Face](https://huggingface.co/MatteoKhan/CodeLlama-7B-Merged-Python)
## Model Details
- **Model Type**: Hybrid Language Model (Merged for Code Generation)
- **Parent Models**:
  - [CodeLlama-7B](https://huggingface.co/codellama/CodeLlama-7b-hf)
  - [CodeLlama-7B-Python](https://huggingface.co/codellama/CodeLlama-7b-Python-hf)
- **Merging Technique**: Linear Merge (MergeKit)
- **Tokenizer Source**: `codellama/CodeLlama-7b-hf`
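For intuition, the linear merge used here is essentially a parameter-wise weighted average of the two parents. The sketch below is illustrative only, not MergeKit's actual implementation, and assumes every tensor shape matches between the parents (the real configuration enables `allow_crimes` and `ignore_mismatched_sizes` precisely to cope with tensors that do not line up):

```python
# Illustrative only: parameter-wise linear merge of two same-architecture models.
# MergeKit handles normalization, dtype, and shape mismatches for real merges.
import torch
from transformers import AutoModelForCausalLM

model_a = AutoModelForCausalLM.from_pretrained("codellama/CodeLlama-7b-hf", torch_dtype=torch.float16)
model_b = AutoModelForCausalLM.from_pretrained("codellama/CodeLlama-7b-Python-hf", torch_dtype=torch.float16)

w_a, w_b = 0.5, 0.5  # weights from the config below; normalize=true rescales them to sum to 1
state_a, state_b = model_a.state_dict(), model_b.state_dict()

# Average every parameter tensor with the configured weights
merged = {name: w_a * state_a[name] + w_b * state_b[name] for name in state_a}
model_a.load_state_dict(merged)  # reuse model_a's architecture to hold the merged weights
model_a.save_pretrained("CodeLlama-Hybrid-7B-sketch")  # hypothetical output path
```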
## Intended Use
This model is designed for **code-related tasks** and experimentation in hybrid model optimization. Possible applications include:
- Code Generation
- Code Completion & Assistance
- Code Understanding & Refactoring
- Exploration of Model Merging Effects on Programming Tasks
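Both parents are base (completion-style) models rather than instruction-tuned ones, so the merge is best prompted with the beginning of the code you want continued. A minimal sketch using the `transformers` pipeline (the prompt is just an illustration):

```python
# Completion-style prompting: the model continues a partial function definition.
from transformers import pipeline

pipe = pipeline("text-generation", model="MatteoKhan/CodeLlama-7B-Merged-Python")

# Base code models complete text, so give them the start of the code you want:
prompt = "def fibonacci(n):\n    \"\"\"Return the n-th Fibonacci number.\"\"\"\n"
result = pipe(prompt, max_new_tokens=128, do_sample=True, temperature=0.2)
print(result[0]["generated_text"])
```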
## Limitations & Considerations
While **CodeLlama-Hybrid-7B** provides enhanced code generation capabilities, it inherits some limitations from its parent models:
- May produce **incorrect or insecure** code
- Can generate **biased, offensive, or inappropriate** content
- Merging may introduce **unpredictable behaviors**
- Performance may **vary depending on the programming language and context**
## Merging Process & Configuration
This is **not a newly trained model**, but rather a merge of existing models using the following configuration:
```yaml
merge_method: linear
dtype: float16
allow_crimes: true
models:
- model: "codellama/CodeLlama-7b-hf"
parameters:
t: 1.0
weight: 0.5
- model: "codellama/CodeLlama-7b-Python-hf"
parameters:
t: 1.0
weight: 0.5
parameters:
normalize: true
int8_mask: false
ignore_mismatched_sizes: true
layers:
- pattern: "model.*"
tokenizer_source: "codellama/CodeLlama-7b-hf"
```
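To reproduce the merge, save the configuration above to a file (named `merge-config.yml` here purely for illustration) and run it through MergeKit's `mergekit-yaml` command; `--allow-crimes` mirrors the setting in the config:

```shell
# Install MergeKit, then run the merge config saved as merge-config.yml
# (the filename and output path are placeholders).
pip install mergekit
mergekit-yaml merge-config.yml ./CodeLlama-Hybrid-7B \
  --allow-crimes \
  --copy-tokenizer
```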
**No formal evaluation** has been conducted yet. Users are encouraged to **benchmark and share feedback**!
## Environmental Impact
By utilizing **model merging** instead of training from scratch, **CodeLlama-Hybrid-7B** significantly reduces computational and environmental costs.
## How to Use
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the merged model and its tokenizer from the Hub
model_name = "MatteoKhan/CodeLlama-7B-Merged-Python"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Example usage
prompt = "Write a Python function to calculate Fibonacci numbers."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_length=200)
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(response)
```
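At full fp32 precision a 7B model needs roughly 28 GB of memory, so on most GPUs you will want half precision and automatic device placement. A minimal variant of the loading step, assuming the `accelerate` package is installed:

```python
# Half-precision loading with automatic device placement (requires `accelerate`).
import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "MatteoKhan/CodeLlama-7B-Merged-Python",
    torch_dtype=torch.float16,  # fp16 roughly halves memory vs. the default fp32
    device_map="auto",          # spreads layers across available GPUs/CPU
)
```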
**Feedback & Contact**: Reach out via [Hugging Face](https://huggingface.co/MatteoKhan).

**Happy Coding!**