Instructions for using Delta-Vector/Austral-32B-GLM4-Winton with libraries, inference providers, notebooks, and local apps. Follow the sections below to get started.
- Libraries
- Transformers
How to use Delta-Vector/Austral-32B-GLM4-Winton with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="Delta-Vector/Austral-32B-GLM4-Winton")
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("Delta-Vector/Austral-32B-GLM4-Winton")
model = AutoModelForCausalLM.from_pretrained("Delta-Vector/Austral-32B-GLM4-Winton")

messages = [
    {"role": "user", "content": "Who are you?"},
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:]))
```
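At 32B parameters, a full-precision load will not fit on a single consumer GPU. A minimal sketch of a more memory-friendly load, assuming `accelerate` is installed (the dtype and device map are suggestions, not requirements of this model):

```python
# Sketch: load in bfloat16 and let Accelerate shard layers across devices.
# Assumes `pip install accelerate`; adjust dtype/device_map to your hardware.
import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "Delta-Vector/Austral-32B-GLM4-Winton",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
```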
- Local Apps
- vLLM
How to use Delta-Vector/Austral-32B-GLM4-Winton with vLLM:
Install from pip and serve the model
```bash
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "Delta-Vector/Austral-32B-GLM4-Winton"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "Delta-Vector/Austral-32B-GLM4-Winton",
    "messages": [
      {"role": "user", "content": "What is the capital of France?"}
    ]
  }'
```
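Because the server speaks the OpenAI chat-completions protocol, any OpenAI-compatible client can call it. A minimal sketch with the official `openai` Python package (the base URL and placeholder API key assume a default local server; the same code works against the SGLang server below by swapping in port 30000):

```python
# Sketch: query the local OpenAI-compatible server.
# Assumes `pip install openai` and a server running on localhost:8000.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")

response = client.chat.completions.create(
    model="Delta-Vector/Austral-32B-GLM4-Winton",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
)
print(response.choices[0].message.content)
```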
- SGLang
How to use Delta-Vector/Austral-32B-GLM4-Winton with SGLang:
Install from pip and serve the model
```bash
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "Delta-Vector/Austral-32B-GLM4-Winton" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "Delta-Vector/Austral-32B-GLM4-Winton",
    "messages": [
      {"role": "user", "content": "What is the capital of France?"}
    ]
  }'
```

Use Docker images
```bash
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "Delta-Vector/Austral-32B-GLM4-Winton" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "Delta-Vector/Austral-32B-GLM4-Winton",
    "messages": [
      {"role": "user", "content": "What is the capital of France?"}
    ]
  }'
```

- Docker Model Runner
How to use Delta-Vector/Austral-32B-GLM4-Winton with Docker Model Runner:
docker model run hf.co/Delta-Vector/Austral-32B-GLM4-Winton
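Docker Model Runner also exposes an OpenAI-compatible API. A hedged sketch, assuming host TCP access is enabled on its documented default port (12434 at the time of writing; verify the port and path against Docker's Model Runner documentation for your installation):

```bash
# Assumption: Docker Model Runner's TCP host access is enabled (default port 12434).
curl -X POST "http://localhost:12434/engines/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "hf.co/Delta-Vector/Austral-32B-GLM4-Winton",
    "messages": [
      {"role": "user", "content": "What is the capital of France?"}
    ]
  }'
```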
Austral 32B GLM4 Winton
Overview
Austral 32B - Winton
More than 1.5 metres tall, about six metres long, and weighing up to 1,000 kilograms, Australovenator wintonensis was a fast and agile hunter, and the largest known Australian theropod.
This is a finetune of Delta-Vector/GLM-4-32B-Tulu-Instruct intended as a generalist roleplay/adventure model. I've removed some of the "slop" I noticed in an otherwise great model and improved its general writing. This was a multi-stage finetune, and all previous checkpoints are released as well. In testing it has proven to be a great model for adventure cards and roleplay, often pushing the plot forward better than other models while avoiding some of the slop you'd find in models from Drummer and co.
Support my finetunes (and me) on Ko-fi: https://Ko-fi.com/deltavector | Thank you to Auri for helping and testing ♥
Chat Format
This model utilizes ChatML.
<|im_start|>user
Hi there!<|im_end|>
<|im_start|>assistant
Nice to meet you!<|im_end|>
<|im_start|>user
Can I ask a question?<|im_end|>
<|im_start|>assistant
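If the bundled chat template matches the format above, you should not need to assemble these tags by hand; a quick sketch to confirm what the template renders:

```python
# Sketch: render the ChatML prompt via the model's chat template.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Delta-Vector/Austral-32B-GLM4-Winton")
messages = [
    {"role": "user", "content": "Hi there!"},
    {"role": "assistant", "content": "Nice to meet you!"},
    {"role": "user", "content": "Can I ask a question?"},
]
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
print(prompt)  # should end with an open <|im_start|>assistant turn
```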
Training
As is known by now, I first trained my own instruct tune, `Delta-Vector/GLM-4-32B-Tulu-Instruct`, from the base model. That model was then trained for 4 epochs on a datamix of light novels, natural human writing datasets, and similar sources. The resulting model was somewhat incoherent, so I ran KTO on it to improve coherency and cohesiveness, but that left the model less "creative" than hoped. Usually I'd follow up with SFT on PocketDoc's rep-remover data; this time I converted that dataset into KTO format instead, which produced a better model. Thanks to Pocket for that dataset.
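For reference, converting preference-style rows into the format TRL's `KTOTrainer` documents (`prompt`/`completion`/boolean `label`) looks roughly like this; the source column names here are hypothetical, not the actual dataset schema:

```python
# Sketch: split (prompt, chosen, rejected) rows into KTO rows.
# KTO expects label=True for desirable completions, False for undesirable.
def to_kto_rows(row):
    return [
        {"prompt": row["prompt"], "completion": row["chosen"], "label": True},
        {"prompt": row["prompt"], "completion": row["rejected"], "label": False},
    ]
```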
Config (Post-KTO V2)
https://wandb.ai/new-eden/Austral-32B/runs/zlhv6tfw?nw=nwuserdeltavector
The base SFT ran over 4 epochs on 8 x A100s (thanks to my work, Quixi.AI). I then ran KTO for 1 epoch to clean up coherency issues, and finally trained for another epoch on the rep-remover data to delete slop. The whole run took roughly 80 hours.
Credits
TYSM to my friends: Auri, Lucy, Trappu, Alicat, Kubernetes Bad, Intervitens, NyxKrage & Kalomaze
Model tree for Delta-Vector/Austral-32B-GLM4-Winton
Base model
zai-org/GLM-4-32B-Base-0414