Instructions to use KoinicLabs/AXL-Comment-5M with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use KoinicLabs/AXL-Comment-5M with Transformers:
# Use a pipeline as a high-level helper
from transformers import pipeline
pipe = pipeline("text-generation", model="KoinicLabs/AXL-Comment-5M")

# Load model directly
from transformers import AutoModelForCausalLM
model = AutoModelForCausalLM.from_pretrained("KoinicLabs/AXL-Comment-5M", dtype="auto")
- llama-cpp-python
How to use KoinicLabs/AXL-Comment-5M with llama-cpp-python:
# !pip install llama-cpp-python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="KoinicLabs/AXL-Comment-5M",
    filename="axl-comment-f16.gguf",
)
output = llm(
    "Once upon a time,",
    max_tokens=512,
    echo=True
)
print(output)
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- llama.cpp
How to use KoinicLabs/AXL-Comment-5M with llama.cpp:
Install from brew
brew install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf KoinicLabs/AXL-Comment-5M:Q4_K_M

# Run inference directly in the terminal:
llama-cli -hf KoinicLabs/AXL-Comment-5M:Q4_K_M
Install from WinGet (Windows)
winget install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf KoinicLabs/AXL-Comment-5M:Q4_K_M

# Run inference directly in the terminal:
llama-cli -hf KoinicLabs/AXL-Comment-5M:Q4_K_M
Use pre-built binary
# Download pre-built binary from:
# https://github.com/ggerganov/llama.cpp/releases

# Start a local OpenAI-compatible server with a web UI:
./llama-server -hf KoinicLabs/AXL-Comment-5M:Q4_K_M

# Run inference directly in the terminal:
./llama-cli -hf KoinicLabs/AXL-Comment-5M:Q4_K_M
Build from source code
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
cmake -B build
cmake --build build -j --target llama-server llama-cli

# Start a local OpenAI-compatible server with a web UI:
./build/bin/llama-server -hf KoinicLabs/AXL-Comment-5M:Q4_K_M

# Run inference directly in the terminal:
./build/bin/llama-cli -hf KoinicLabs/AXL-Comment-5M:Q4_K_M
Use Docker
docker model run hf.co/KoinicLabs/AXL-Comment-5M:Q4_K_M
- LM Studio
- Jan
- vLLM
How to use KoinicLabs/AXL-Comment-5M with vLLM:
Install from pip and serve model
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "KoinicLabs/AXL-Comment-5M"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "KoinicLabs/AXL-Comment-5M",
        "prompt": "Once upon a time,",
        "max_tokens": 512,
        "temperature": 0.5
    }'
Use Docker
docker model run hf.co/KoinicLabs/AXL-Comment-5M:Q4_K_M
- SGLang
How to use KoinicLabs/AXL-Comment-5M with SGLang:
Install from pip and serve model
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
    --model-path "KoinicLabs/AXL-Comment-5M" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "KoinicLabs/AXL-Comment-5M",
        "prompt": "Once upon a time,",
        "max_tokens": 512,
        "temperature": 0.5
    }'
Use Docker images
docker run --gpus all \
    --shm-size 32g \
    -p 30000:30000 \
    -v ~/.cache/huggingface:/root/.cache/huggingface \
    --env "HF_TOKEN=<secret>" \
    --ipc=host \
    lmsysorg/sglang:latest \
    python3 -m sglang.launch_server \
    --model-path "KoinicLabs/AXL-Comment-5M" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "KoinicLabs/AXL-Comment-5M",
        "prompt": "Once upon a time,",
        "max_tokens": 512,
        "temperature": 0.5
    }'
- Ollama
How to use KoinicLabs/AXL-Comment-5M with Ollama:
ollama run hf.co/KoinicLabs/AXL-Comment-5M:Q4_K_M
- Unsloth Studio new
How to use KoinicLabs/AXL-Comment-5M with Unsloth Studio:
Install Unsloth Studio (macOS, Linux, WSL)
curl -fsSL https://unsloth.ai/install.sh | sh

# Run Unsloth Studio:
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for KoinicLabs/AXL-Comment-5M to start chatting
Install Unsloth Studio (Windows)
irm https://unsloth.ai/install.ps1 | iex

# Run Unsloth Studio:
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for KoinicLabs/AXL-Comment-5M to start chatting
Using HuggingFace Spaces for Unsloth
# No setup required
# Open https://huggingface.co/spaces/unsloth/studio in your browser
# Search for KoinicLabs/AXL-Comment-5M to start chatting
- Docker Model Runner
How to use KoinicLabs/AXL-Comment-5M with Docker Model Runner:
docker model run hf.co/KoinicLabs/AXL-Comment-5M:Q4_K_M
- Lemonade
How to use KoinicLabs/AXL-Comment-5M with Lemonade:
Pull the model
# Download Lemonade from https://lemonade-server.ai/
lemonade pull KoinicLabs/AXL-Comment-5M:Q4_K_M
Run and chat with the model
lemonade run user.AXL-Comment-5M-Q4_K_M
List all available models
lemonade list
AXL-Comment-5M
A byte-level model for code commenting. 7.2M parameters, byte-level perplexity 1.01, 512-byte context window. Part of the AXL model family by KoinicLabs.
Model Details
| Property | Value |
|---|---|
| Developed by | KoinicLabs |
| Architecture | Multi-Scale Transformer |
| Parameters | 7M |
| Optimizer | Lion |
| Attention | SDPA |
| Vocab Size | 258 (byte-level) |
| Context Window | 512 bytes |
| d_model | 192 |
| Attention Heads | 3 |
| Layers per Scale | 3 |
| Downsample Factors | [1, 2, 4] |
| License | Apache 2.0 |
Sources
- Repository: GitHub
- Organization: KoinicLabs
Uses
Direct Use
Code commenting.
import torch
from multiscale_transformer.model.config import load_config
from multiscale_transformer.model.model import MultiScaleTransformer
from multiscale_transformer.training.tokenizer import ByteTokenizer

# Build the model from its config, then load the checkpoint weights
config = load_config("config.json")
model = MultiScaleTransformer(config)
ckpt = torch.load("axl_comment_5m.pt", map_location="cpu")
model.load_state_dict(ckpt["model_state_dict"])
model.eval()

tokenizer = ByteTokenizer()
ids = torch.tensor([tokenizer.encode("def hello():")], dtype=torch.long)
with torch.no_grad():
    out = model.generate(ids, max_new_tokens=50, temperature=0.8)
print(tokenizer.decode(out[0].tolist()))
Out-of-Scope Use
Not for production code generation. Not for non-code NLP tasks. For integration with tools like Continue.dev, LlamaIndex, or LangChain, use the Python API server which provides OpenAI-compatible endpoints.
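For tool integrations, a request to an OpenAI-compatible completions endpoint can be sketched as below. The host, port, and endpoint path are assumptions (shown here as `localhost:8000/v1/completions`); adjust them to wherever the AXL API server is running.

```python
import json

# Build a request payload for an OpenAI-compatible /v1/completions endpoint.
# The model id matches the card; prompt and sampling values are illustrative.
payload = {
    "model": "KoinicLabs/AXL-Comment-5M",
    "prompt": "def add(a, b):\n    return a + b\n",
    "max_tokens": 128,
    "temperature": 0.5,
}
body = json.dumps(payload).encode("utf-8")

# To send it (requires a running server; host/port are assumptions):
# import urllib.request
# req = urllib.request.Request(
#     "http://localhost:8000/v1/completions",
#     data=body,
#     headers={"Content-Type": "application/json"},
# )
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp)["choices"][0]["text"])
print(len(body), "bytes")
```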
Bias, Risks, and Limitations
Byte-level perplexity is not comparable to BPE-level perplexity. Max context 512 bytes. Note: GGUF files for Ollama use a simplified single-stack encoder. For full AXL quality, use the Python API server.
Recommendations
- Use for prototyping and experimentation, not production code generation.
- Byte-level perplexity (258 vocab) is not comparable to BPE-level perplexity (32K vocab).
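One common way to put byte-level and BPE-level models on the same scale is bits per byte: take log2 of the per-token perplexity and divide by the average number of bytes per token. A minimal sketch (the BPE numbers below are illustrative, not measurements of any real model):

```python
import math

def bits_per_byte(ppl_per_token: float, bytes_per_token: float) -> float:
    """Convert a per-token perplexity to bits per byte.

    bits/token = log2(ppl); dividing by average bytes per token
    gives a tokenizer-independent compression rate.
    """
    return math.log2(ppl_per_token) / bytes_per_token

# For a byte-level model, one token is one byte:
byte_level = bits_per_byte(1.01, bytes_per_token=1.0)

# A hypothetical BPE model with perplexity 4.0 over tokens
# averaging 4 bytes each:
bpe_level = bits_per_byte(4.0, bytes_per_token=4.0)

print(f"{byte_level:.4f} vs {bpe_level:.4f} bits/byte")
```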
- For better results, use the Lion-optimized version if available.
Training Details
Training Data
Retrained with the Lion optimizer on 20 MB of code-comment pairs: 263 steps in 10 minutes.

Preprocessing
Byte-level tokenization with vocabulary size 258 (256 bytes + BOS + EOS). No vocabulary training required.
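The vocabulary layout above can be sketched as a minimal tokenizer. The real implementation lives in `multiscale_transformer.training.tokenizer`; the BOS/EOS id assignments here (256 and 257) are assumptions for illustration.

```python
class ByteTokenizer:
    """Minimal byte-level tokenizer sketch: 256 raw bytes plus BOS/EOS.

    BOS=256 and EOS=257 are assumed ids, not confirmed by the card.
    """
    BOS = 256
    EOS = 257
    vocab_size = 258

    def encode(self, text: str, add_special: bool = False) -> list[int]:
        ids = list(text.encode("utf-8"))  # each byte maps to its own id
        return [self.BOS] + ids + [self.EOS] if add_special else ids

    def decode(self, ids: list[int]) -> str:
        raw = bytes(i for i in ids if i < 256)  # drop special tokens
        return raw.decode("utf-8", errors="replace")

tok = ByteTokenizer()
ids = tok.encode("def hello():")
print(len(ids), tok.decode(ids))
```

No vocabulary training is needed because every possible byte already has a fixed id.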
Speeds, Sizes, Times
| Metric | Value |
|---|---|
| Training Steps | 263 |
| Training Time | 10 min |
| Final Loss | 0.1476 |
Evaluation
Metrics
Perplexity on held-out Python code using byte-level tokenization.
Results
| Metric | Value |
|---|---|
| Perplexity (byte-level) | 1.01 |
| Final Loss | 0.1476 |
| Training Steps | 263 |
| Training Time | 10 min |
Summary: Adds inline comments to explain code logic.
Environmental Impact
| Property | Value |
|---|---|
| Hardware | AMD Ryzen 5 5600G |
| Hours Used | 0.167 |
| Carbon Emitted | 0.0070 kg CO2 |
| Cloud Provider | None (local CPU) |
Technical Specifications
Model Architecture
Multi-Scale Transformer with three parallel encoder stacks at resolution scales 1x, 2x, and 4x. Cross-scale attention connects all scale pairs. Adaptive gating fusion. SwiGLU feed-forward. RoPE positional encoding.
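The downsample factors [1, 2, 4] mean each encoder stack sees the byte sequence at a different resolution. A toy sketch using average pooling (a common downsampling choice; the actual AXL pooling operator is not specified on this card):

```python
def downsample(seq: list[float], factor: int) -> list[float]:
    """Average-pool a sequence by `factor`; factor 1 is the identity."""
    if factor == 1:
        return list(seq)
    return [
        sum(seq[i:i + factor]) / len(seq[i:i + factor])
        for i in range(0, len(seq), factor)
    ]

# A toy embedding channel over 8 positions:
x = [1.0, 3.0, 5.0, 7.0, 2.0, 4.0, 6.0, 8.0]

# Each encoder stack would see the sequence at its own resolution:
scales = {f: downsample(x, f) for f in [1, 2, 4]}
for f, s in scales.items():
    print(f"{f}x: len={len(s)} {s}")
```

Cross-scale attention then lets each stack attend to the others' representations, and the gating fusion learns how to weight them.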
Compute Infrastructure
| Property | Value |
|---|---|
| Hardware | AMD Ryzen 5 5600G (6 cores, 12 threads) |
| RAM | 16 GB |
| GPU | None (CPU-only) |
Citation
@misc{axl_2026,
title={AXL: AXL-Comment-5M - Multi-Scale Transformer for CPU Code Generation},
author={Koinic},
year={2026},
url={https://huggingface.co/KoinicLabs}
}
How to Get Started
With Ollama
ollama create axl-comment-5m -f Modelfile
ollama run axl-comment-5m "def fibonacci():"
With Python
import torch
from multiscale_transformer.model.config import load_config
from multiscale_transformer.model.model import MultiScaleTransformer
from multiscale_transformer.training.tokenizer import ByteTokenizer
config = load_config("config.json")
model = MultiScaleTransformer(config)
ckpt = torch.load("axl_comment_5m.pt", map_location="cpu")
model.load_state_dict(ckpt["model_state_dict"])
model.eval()
tokenizer = ByteTokenizer()
prompt = "def fibonacci():"
ids = torch.tensor([tokenizer.encode(prompt)], dtype=torch.long)
with torch.no_grad():
out = model.generate(ids, max_new_tokens=100, temperature=0.8, top_k=40)
print(tokenizer.decode(out[0].tolist()))
Datasets used to train KoinicLabs/AXL-Comment-5M
theblackcat102/evol-codealpaca-v1
Evaluation results
- Perplexity (byte-level): 1.010 (self-reported)