Instructions to use llmware/slim-boolean-tool with libraries, inference providers, notebooks, and local apps.
- Libraries
- Transformers
How to use llmware/slim-boolean-tool with Transformers:
```python
# Load model directly
from transformers import AutoModel

model = AutoModel.from_pretrained("llmware/slim-boolean-tool", dtype="auto")
```
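Because this repo hosts a GGUF file rather than standard safetensors weights, loading through transformers may need the `gguf_file` argument. A sketch, assuming the filename shown in the llama-cpp-python snippet below; transformers' GGUF support depends on the model architecture:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumption: the GGUF filename is taken from the llama-cpp-python example below.
# transformers dequantizes GGUF checkpoints only for supported architectures.
tokenizer = AutoTokenizer.from_pretrained("llmware/slim-boolean-tool", gguf_file="slim-boolean.gguf")
model = AutoModelForCausalLM.from_pretrained("llmware/slim-boolean-tool", gguf_file="slim-boolean.gguf")
```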
- llama-cpp-python
How to use llmware/slim-boolean-tool with llama-cpp-python:
```python
# !pip install llama-cpp-python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="llmware/slim-boolean-tool",
    filename="slim-boolean.gguf",
)

output = llm(
    "Once upon a time,",
    max_tokens=512,
    echo=True
)
print(output)
```
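The generic prompt above is only a smoke test; slim-boolean-tool expects a context passage plus a yes-no question. A closer-to-intended call might look like the sketch below, though the authoritative prompt wrapping is defined in config.json, so treat the plain concatenation here as a placeholder assumption:

```python
# Sketch only: slim-boolean-tool takes a passage and a yes-no question.
# The exact prompt template lives in config.json; the plain concatenation
# below is a placeholder assumption, not the model's documented wrapper.
context = "The company reported revenue of $4.2B, up 12% year over year."  # hypothetical passage
question = "Did revenue increase? (explain)"

output = llm(f"{context}\n{question}", max_tokens=200)
print(output["choices"][0]["text"])
```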
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- llama.cpp
How to use llmware/slim-boolean-tool with llama.cpp:
Install from brew
```sh
brew install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf llmware/slim-boolean-tool

# Run inference directly in the terminal:
llama-cli -hf llmware/slim-boolean-tool
```
Install from WinGet (Windows)
```sh
winget install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf llmware/slim-boolean-tool

# Run inference directly in the terminal:
llama-cli -hf llmware/slim-boolean-tool
```
Use pre-built binary
```sh
# Download pre-built binary from:
# https://github.com/ggerganov/llama.cpp/releases

# Start a local OpenAI-compatible server with a web UI:
./llama-server -hf llmware/slim-boolean-tool

# Run inference directly in the terminal:
./llama-cli -hf llmware/slim-boolean-tool
```
Build from source code
```sh
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
cmake -B build
cmake --build build -j --target llama-server llama-cli

# Start a local OpenAI-compatible server with a web UI:
./build/bin/llama-server -hf llmware/slim-boolean-tool

# Run inference directly in the terminal:
./build/bin/llama-cli -hf llmware/slim-boolean-tool
```
Use Docker
```sh
docker model run hf.co/llmware/slim-boolean-tool
```
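However it was installed, llama-server exposes an OpenAI-compatible API (default port 8080). A minimal Python request against it might look like this; the message content is a hypothetical boolean-style query:

```python
# Minimal sketch: query the OpenAI-compatible endpoint started by llama-server.
# Assumes the default port 8080; adjust if the server was started differently.
import json
import urllib.request

payload = {"messages": [{"role": "user", "content": "Context: ...\nIs the claim true? (explain)"}]}
req = urllib.request.Request(
    "http://localhost:8080/v1/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["choices"][0]["message"]["content"])
```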
- LM Studio
- Jan
- Ollama
How to use llmware/slim-boolean-tool with Ollama:
```sh
ollama run hf.co/llmware/slim-boolean-tool
```
- Unsloth Studio
How to use llmware/slim-boolean-tool with Unsloth Studio:
Install Unsloth Studio (macOS, Linux, WSL)
```sh
curl -fsSL https://unsloth.ai/install.sh | sh

# Run unsloth studio
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for llmware/slim-boolean-tool to start chatting
```
Install Unsloth Studio (Windows)
```powershell
irm https://unsloth.ai/install.ps1 | iex

# Run unsloth studio
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for llmware/slim-boolean-tool to start chatting
```
Using HuggingFace Spaces for Unsloth
```sh
# No setup required
# Open https://huggingface.co/spaces/unsloth/studio in your browser
# Search for llmware/slim-boolean-tool to start chatting
```
- Docker Model Runner
How to use llmware/slim-boolean-tool with Docker Model Runner:
```sh
docker model run hf.co/llmware/slim-boolean-tool
```
- Lemonade
How to use llmware/slim-boolean-tool with Lemonade:
Pull the model
```sh
# Download Lemonade from https://lemonade-server.ai/
lemonade pull llmware/slim-boolean-tool
```
Run and chat with the model
```sh
lemonade run user.slim-boolean-tool-{{QUANT_TAG}}
```
List all available models
```sh
lemonade list
```
License: cc-by-sa-4.0
SLIM-BOOLEAN-TOOL
slim-boolean-tool is a 4_K_M quantized GGUF version of slim-boolean, providing a small, fast inference implementation optimized for multi-model concurrent deployment.
This is an experimental model that takes as input a context passage, a yes-no question, and an optional (explain) parameter, and generates a response consisting of a Python dictionary with two keys: 'answer', which holds the yes/no classification, and 'explanation', which provides a text explanation, derived from the source passage, that supports the boolean classification. All of the details on the prompt template are provided in the config.json file in this model repo, along with several examples.
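For illustration, a call with the (explain) parameter switched on might produce output shaped roughly like this; the passage, question, and values below are invented, and the config.json examples are authoritative:

```python
# Hypothetical illustration of the model's structured output; see config.json
# in the repo for the authoritative prompt template and real examples.
context = "The agreement may be renewed for successive one-year terms upon written notice."
question = "Is the contract renewable? (explain)"

# Expected shape of the generated dictionary (exact value types may vary):
response = {
    "answer": "yes",
    "explanation": "The agreement may be renewed for successive one-year terms.",
}
```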
If you are interested in fine-tuning this model for a specific domain, please see: slim-boolean.
To pull the model via API:
```python
from huggingface_hub import snapshot_download

snapshot_download("llmware/slim-boolean-tool", local_dir="/path/on/your/machine/", local_dir_use_symlinks=False)
```
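Once downloaded, the GGUF file can be loaded straight from the local directory, for example with llama-cpp-python. A sketch, using the filename shown in the llama-cpp-python section above and the placeholder path from the snapshot_download call:

```python
from llama_cpp import Llama

# Path combines the placeholder local_dir from the snapshot_download call above
# with the GGUF filename listed earlier in this card.
llm = Llama(model_path="/path/on/your/machine/slim-boolean.gguf")
```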
Load in your favorite GGUF inference engine, or try with llmware as follows:
```python
from llmware.models import ModelCatalog

# to load the model and make a basic inference
model = ModelCatalog().load_model("slim-boolean-tool")

# text_sample is the source passage to classify (hypothetical example)
text_sample = "The board approved a 5% dividend increase for the next quarter."
response = model.function_call(text_sample)

# this one line will download the model and run a series of tests
ModelCatalog().tool_test_run("slim-boolean-tool", verbose=True)
```
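The function_call response follows llmware's standard response dictionary. As a rough sketch (assuming the parsed output sits under the 'llm_response' key, llmware's usual convention; verify against your installed version):

```python
# Sketch: reading the parsed boolean output out of the response.
# Assumption: llmware returns a dict whose 'llm_response' key holds the
# generated {'answer': ..., 'explanation': ...} dictionary.
print(response["llm_response"])
```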
Note: please review config.json in the repository for prompt wrapping information, details on the model, and the full test set.
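For example, after the snapshot_download above, the file can be inspected directly. A sketch; the exact key names are whatever the repo's config.json defines, so start by listing them:

```python
import json

# Sketch: list the keys in config.json to locate the prompt wrapping details
# and test examples this card refers to (key names are defined by the repo).
with open("/path/on/your/machine/config.json") as f:
    cfg = json.load(f)
print(list(cfg.keys()))
```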
Model Card Contact
Darren Oberst & llmware team