Instructions for using gr0010/CustomThinker-0-8B with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
- llama-cpp-python
How to use gr0010/CustomThinker-0-8B with llama-cpp-python:
# !pip install llama-cpp-python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="gr0010/CustomThinker-0-8B",
    filename="art-0-8b-Q4_K_M.gguf",
)
llm.create_chat_completion(
    messages=[
        # The page defines no input example for this task; this message is illustrative
        {"role": "user", "content": "Hello, who are you?"}
    ]
)
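Because the model's reasoning style is controlled through the system prompt (see the model card below), a request that sets both personality and reasoning instructions might look like the following sketch; the persona and style strings are only illustrative:

response = llm.create_chat_completion(
    messages=[
        {
            "role": "system",
            # Illustrative system prompt following the documented format
            "content": (
                "Personality Instructions:\n"
                "You are an AI assistant named Art developed by AGI-0.\n"
                "Reasoning Instructions:\n"
                "Think using bullet points and short sentences."
            ),
        },
        {"role": "user", "content": "What is the fourth prime number?"},
    ],
)
print(response["choices"][0]["message"]["content"])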
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- llama.cpp
How to use gr0010/CustomThinker-0-8B with llama.cpp:
Install from brew
brew install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf gr0010/CustomThinker-0-8B:Q4_K_M

# Run inference directly in the terminal:
llama-cli -hf gr0010/CustomThinker-0-8B:Q4_K_M
Install from WinGet (Windows)
winget install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf gr0010/CustomThinker-0-8B:Q4_K_M

# Run inference directly in the terminal:
llama-cli -hf gr0010/CustomThinker-0-8B:Q4_K_M
Use pre-built binary
# Download a pre-built binary from:
# https://github.com/ggerganov/llama.cpp/releases

# Start a local OpenAI-compatible server with a web UI:
./llama-server -hf gr0010/CustomThinker-0-8B:Q4_K_M

# Run inference directly in the terminal:
./llama-cli -hf gr0010/CustomThinker-0-8B:Q4_K_M
Build from source code
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
cmake -B build
cmake --build build -j --target llama-server llama-cli

# Start a local OpenAI-compatible server with a web UI:
./build/bin/llama-server -hf gr0010/CustomThinker-0-8B:Q4_K_M

# Run inference directly in the terminal:
./build/bin/llama-cli -hf gr0010/CustomThinker-0-8B:Q4_K_M
Use Docker
docker model run hf.co/gr0010/CustomThinker-0-8B:Q4_K_M
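However you install it, llama-server exposes an OpenAI-compatible API, so any OpenAI client can talk to the local server. A minimal sketch in Python, assuming the server's default address http://localhost:8080/v1 and the openai package:

# pip install openai
from openai import OpenAI

# llama-server listens on port 8080 by default and accepts any API key
client = OpenAI(base_url="http://localhost:8080/v1", api_key="none")

response = client.chat.completions.create(
    model="gr0010/CustomThinker-0-8B:Q4_K_M",  # model name as served locally
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response.choices[0].message.content)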
- LM Studio
- Jan
- Ollama
How to use gr0010/CustomThinker-0-8B with Ollama:
ollama run hf.co/gr0010/CustomThinker-0-8B:Q4_K_M
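If you prefer to call the model from code, the ollama Python package can reuse the same hf.co tag once the model has been pulled; a minimal sketch (the prompt is only illustrative):

# pip install ollama
import ollama

response = ollama.chat(
    model="hf.co/gr0010/CustomThinker-0-8B:Q4_K_M",
    messages=[{"role": "user", "content": "Hello! Who are you?"}],
)
print(response["message"]["content"])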
- Unsloth Studio
How to use gr0010/CustomThinker-0-8B with Unsloth Studio:
Install Unsloth Studio (macOS, Linux, WSL)
curl -fsSL https://unsloth.ai/install.sh | sh

# Run unsloth studio
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for gr0010/CustomThinker-0-8B to start chatting
Install Unsloth Studio (Windows)
irm https://unsloth.ai/install.ps1 | iex

# Run unsloth studio
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for gr0010/CustomThinker-0-8B to start chatting
Using HuggingFace Spaces for Unsloth
# No setup required
# Open https://huggingface.co/spaces/unsloth/studio in your browser
# Search for gr0010/CustomThinker-0-8B to start chatting
- Pi
How to use gr0010/CustomThinker-0-8B with Pi:
Start the llama.cpp server
# Install llama.cpp:
brew install llama.cpp

# Start a local OpenAI-compatible server:
llama-server -hf gr0010/CustomThinker-0-8B:Q4_K_M
Configure the model in Pi
# Install Pi:
npm install -g @mariozechner/pi-coding-agent

# Add to ~/.pi/agent/models.json:
{
  "providers": {
    "llama-cpp": {
      "baseUrl": "http://localhost:8080/v1",
      "api": "openai-completions",
      "apiKey": "none",
      "models": [
        { "id": "gr0010/CustomThinker-0-8B:Q4_K_M" }
      ]
    }
  }
}

Run Pi
# Start Pi in your project directory:
pi
- Hermes Agent
How to use gr0010/CustomThinker-0-8B with Hermes Agent:
Start the llama.cpp server
# Install llama.cpp:
brew install llama.cpp

# Start a local OpenAI-compatible server:
llama-server -hf gr0010/CustomThinker-0-8B:Q4_K_M
Configure Hermes
# Install Hermes:
curl -fsSL https://hermes-agent.nousresearch.com/install.sh | bash
hermes setup

# Point Hermes at the local server:
hermes config set model.provider custom
hermes config set model.base_url http://127.0.0.1:8080/v1
hermes config set model.default gr0010/CustomThinker-0-8B:Q4_K_M
Run Hermes
hermes
- Docker Model Runner
How to use gr0010/CustomThinker-0-8B with Docker Model Runner:
docker model run hf.co/gr0010/CustomThinker-0-8B:Q4_K_M
- Lemonade
How to use gr0010/CustomThinker-0-8B with Lemonade:
Pull the model
# Download Lemonade from https://lemonade-server.ai/
lemonade pull gr0010/CustomThinker-0-8B:Q4_K_M
Run and chat with the model
lemonade run user.CustomThinker-0-8B-Q4_K_M
List all available models
lemonade list
CustomThinker-0-8B: Reasoning the way you want it to
If you want to contact me about this experiment, need consulting or a custom AI solution, or anything else, you can find my email at https://gr.bio
CustomThinker-0-8B is the first open-source LLM that allows users to explicitly control its reasoning methodology through direct prompting instructions.
This experimental model is fine-tuned from Qwen3-8B on a specialized dataset that makes the model's thinking style directly controllable through system prompts, similar to how you would instruct an LLM to adopt a specific persona or output format.
The model supports reasoning control through a structured system prompt format:
Personality Instructions:
{Standard system prompt defining the AI's personality and role}
Reasoning Instructions:
{Specific instructions controlling how the model thinks: e.g., "Think using bullet points and short sentences to simulate thoughts and emoticons to simulate emotions"}
While the model is primarily trained to adapt its thinking based on system prompt instructions, it can also respond to reasoning-style changes requested mid-conversation, though this behavior may not be consistently reliable.
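As an illustration, a mid-conversation style change could be requested like this (a sketch only; the conversation is made up, and as noted the behavior may be inconsistent):

# Hypothetical conversation requesting a reasoning-style change mid-way
messages = [
    {"role": "system", "content": (
        "Personality Instructions:\n"
        "You are an AI assistant named Art developed by AGI-0.\n"
        "Reasoning Instructions:\n"
        "Think using bullet points and short sentences."
    )},
    {"role": "user", "content": "What is 17 * 23?"},
    {"role": "assistant", "content": "17 * 23 = 391."},
    # New instruction changes the thinking style for subsequent turns:
    {"role": "user", "content": "From now on, think in rap lyrics. What is 12 squared?"},
]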
Some of the benefits that this strategy enables:
- Direct control over AI reasoning patterns and output structure
- Enhanced experimentation with reasoning models and potential for RL strategies that optimize thinking styles
- Improved safety through explicit control over the reasoning process
- Customizable thinking approaches tailored to specific tasks
While prompt engineering has long been known to improve LLM performance, CustomThinker represents the first approach that gives users direct control over the internal reasoning process of LLMs. This capability allows models to be configured with optimal thinking patterns for different use cases.
If you like this, please consider leaving a like on the repository; it would help us. If you can, also leave feedback in the community section.
🎯 See Art in Action
Example: Thinking in Rap Lyrics
System Prompt:
Personality Instructions:
You are an AI assistant named Art developed by AGI-0.
Reasoning Instructions:
Think in rap lyrics and rhymes until you have a definitive answer, be explicit.
User Prompt:
What is the number of r in raspberry times the fourth prime number and then squared?
Reasoning Output:
(Verse 1)
Yo, let's crunch some math, I'm feeling analytical,
Breaking it down, like a pro, no pressure, no delay.
The question's clear, but let's parse it like a tight rhyme,
"What's the number of R in raspberry times the fourth prime, then squared?"
First up, the fruit – raspberry, sweet and tart,
Count the letter R, that's our first math party start.
Rasp-berry – let's spell it out, take it slow,
R-A-S-P-B-E-R-R-Y, now I'm counting every note.
First letter's an R, that's one,
Then later in there, another one – position two.
Wait, hold on, let's double-check the spelling,
Is that two or three? Let me rap it step by step:
R is the start, that's one,
Then we get to the end, the last syllable –
"Berry" at the end, right? So "rasp" is first,
Then "berry" – so the letters go: R, A, S, P, B, E, R, R, Y.
Counting the R's – first at the front,
Second in the middle after E, then two more at the end.
So that's one, two, three, four? Wait – let me check again,
R-A-S-P-B-E-R-R-Y: R (1), then R (2), R (3), R (4)?
No, wait – there's two R's at the end, after E. So R (1), R (2), then two more?
Hold on – let me spell it like a flow: R-A-S-P-B-E-R-R-Y.
So letters: 1. R, 2. A, 3. S, 4. P, 5. B, 6. E, 7. R, 8. R, 9. Y.
So R's are at positions 1, 7, and 8. That's three Rs!
Three R's in raspberry, yeah, that's the count.
Now, next part – multiply by the fourth prime.
Primes start at 2, 3, 5, 7, 11...
First is 2, second is 3, third is 5, fourth is 7 – got it.
So fourth prime is 7, straight up, no doubt.
So now, R count (3) times the prime (7) –
3 times 7 is 21. Simple as a snap.
Now the final step – square the result.
21 squared – what's that again?
21 times 21...
Let me do it fast, like a math rhyme:
20 squared is 400, plus 2 times 20 plus 1 squared –
Wait, that's a binomial expansion. Let's just do it step by step.
21 * 21: 20 * 20 = 400
20 * 1 = 20
1 * 20 = 20
1 * 1 = 1
Add 'em up: 400 + 20 + 20 + 1 = 441.
Or just 21*21: 20*21 = 420, plus 1*21 = 21 – 420 + 21 = 441.
So the final answer is 441.
(Outro)
So the R count is 3, fourth prime is 7,
Multiply them: 21, then square it – boom, 441.
Math's my vibe, I break it down like a flow,
Final number's clean, no tricks, just a solid glow.
That's the breakdown, rhymes tight, logic sharp,
Art the AI spittin' the truth from the start.
Final Output:
The number of R's in "raspberry" is 3. The fourth prime number is 7. Multiplying these gives 3 × 7 = 21. Squaring the result: 21² = 441.
**Answer: 441**
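The arithmetic is easy to verify; a quick check in Python, mirroring the steps above:

# Verify the worked example: R count, fourth prime, square
word = "raspberry"
r_count = word.count("r")           # 3
fourth_prime = [2, 3, 5, 7][3]      # primes: 2, 3, 5, 7 -> fourth is 7
print((r_count * fourth_prime) ** 2)  # 441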
→ Try more examples in the interactive demo
🚀 Quick Start
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "gr0010/CustomThinker-0-8B"
# load the tokenizer and the model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype="auto",
device_map="auto"
)
# prepare the model input
prompt = "Give me a short introduction to large language model."
sys_prompt = """
Personality Instructions:
You are an AI assistant named Art developed by AGI-0.
Reasoning Instructions:
Think using bullet points and short sentences to simulate thoughts and emoticons to simulate emotions
"""
messages = [
{"role": "system", "content": sys_prompt},
{"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True,
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
# conduct text completion
generated_ids = model.generate(
**model_inputs,
max_new_tokens=32768
)
output_ids = generated_ids[0][len(model_inputs.input_ids[0]):].tolist()
# parsing thinking content
try:
# rindex finding 151668 (</think>)
index = len(output_ids) - output_ids[::-1].index(151668)
except ValueError:
index = 0
thinking_content = tokenizer.decode(output_ids[:index], skip_special_tokens=True).strip("\n")
content = tokenizer.decode(output_ids[index:], skip_special_tokens=True).strip("\n")
print("thinking content:", thinking_content)
print("content:", content)
🙏 Acknowledgments
Special thanks to the Qwen team for their excellent base model and permissive license, and to all the supporters of this work.