Instructions to use aedmark/vsl-cryosomatic-hypervisor-v5 with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- llama-cpp-python
How to use aedmark/vsl-cryosomatic-hypervisor-v5 with llama-cpp-python:
```python
# !pip install llama-cpp-python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="aedmark/vsl-cryosomatic-hypervisor-v5",
    filename="vsl_mistral_nemo_llama3.13BQ8_0.gguf",
)

# create_chat_completion expects a list of role/content message dicts:
llm.create_chat_completion(
    messages=[
        {"role": "user", "content": "Hello, who are you?"}
    ]
)
```
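`create_chat_completion` returns an OpenAI-style completion dict. A minimal helper for pulling out the assistant's reply, shown here against a hand-built sample response shaped like llama-cpp-python's output (the sample text is illustrative only):

```python
def reply_text(response: dict) -> str:
    """Extract the assistant's message from an OpenAI-style completion dict,
    as returned by llama-cpp-python's create_chat_completion."""
    return response["choices"][0]["message"]["content"]

# Illustrative response dict mirroring the OpenAI-compatible schema:
sample = {
    "choices": [
        {"message": {"role": "assistant", "content": "The vault hums."}}
    ]
}

print(reply_text(sample))  # The vault hums.
```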
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- llama.cpp
How to use aedmark/vsl-cryosomatic-hypervisor-v5 with llama.cpp:
Install from brew
```bash
brew install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf aedmark/vsl-cryosomatic-hypervisor-v5:Q8_0

# Run inference directly in the terminal:
llama-cli -hf aedmark/vsl-cryosomatic-hypervisor-v5:Q8_0
```
Install from WinGet (Windows)
```bash
winget install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf aedmark/vsl-cryosomatic-hypervisor-v5:Q8_0

# Run inference directly in the terminal:
llama-cli -hf aedmark/vsl-cryosomatic-hypervisor-v5:Q8_0
```
Use pre-built binary
```bash
# Download a pre-built binary from:
# https://github.com/ggerganov/llama.cpp/releases

# Start a local OpenAI-compatible server with a web UI:
./llama-server -hf aedmark/vsl-cryosomatic-hypervisor-v5:Q8_0

# Run inference directly in the terminal:
./llama-cli -hf aedmark/vsl-cryosomatic-hypervisor-v5:Q8_0
```
Build from source code
```bash
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
cmake -B build
cmake --build build -j --target llama-server llama-cli

# Start a local OpenAI-compatible server with a web UI:
./build/bin/llama-server -hf aedmark/vsl-cryosomatic-hypervisor-v5:Q8_0

# Run inference directly in the terminal:
./build/bin/llama-cli -hf aedmark/vsl-cryosomatic-hypervisor-v5:Q8_0
```
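Once `llama-server` is running, it exposes an OpenAI-compatible `/v1/chat/completions` endpoint. A minimal sketch of querying it from Python with only the standard library; the URL assumes llama-server's default port 8080, and the prompt is illustrative:

```python
import json
import urllib.request

# Assumption: llama-server is running locally on its default port.
BASE_URL = "http://localhost:8080/v1/chat/completions"

def build_payload(prompt: str) -> dict:
    """Build an OpenAI-style chat completion request body."""
    return {
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.8,
    }

def ask(prompt: str) -> str:
    """POST the prompt to the server and return the assistant's reply."""
    req = urllib.request.Request(
        BASE_URL,
        data=json.dumps(build_payload(prompt)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.loads(resp.read())
    return body["choices"][0]["message"]["content"]

# Example (requires a running server):
#   print(ask("Describe the frozen vault in one sentence."))
```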
Use Docker
docker model run hf.co/aedmark/vsl-cryosomatic-hypervisor-v5:Q8_0
- LM Studio
- Jan
- Ollama
How to use aedmark/vsl-cryosomatic-hypervisor-v5 with Ollama:
ollama run hf.co/aedmark/vsl-cryosomatic-hypervisor-v5:Q8_0
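Ollama also serves a local REST API (port 11434 by default), so the model can be driven from a script as well as the terminal. A sketch using only the standard library; the model tag matches the `ollama run` command above, and the endpoint is Ollama's `/api/chat`:

```python
import json
import urllib.request

# Model tag as pulled via Ollama; adjust if you used a different quantization.
MODEL = "hf.co/aedmark/vsl-cryosomatic-hypervisor-v5:Q8_0"

def chat_payload(prompt: str) -> dict:
    """Request body for Ollama's /api/chat endpoint, streaming disabled."""
    return {
        "model": MODEL,
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,
    }

def ask(prompt: str) -> str:
    """Send one chat turn to the local Ollama server and return the reply."""
    req = urllib.request.Request(
        "http://localhost:11434/api/chat",
        data=json.dumps(chat_payload(prompt)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["message"]["content"]

# Example (requires Ollama running locally):
#   print(ask("Who sits on the SLASH Council?"))
```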
- Unsloth Studio
How to use aedmark/vsl-cryosomatic-hypervisor-v5 with Unsloth Studio:
Install Unsloth Studio (macOS, Linux, WSL)
```bash
curl -fsSL https://unsloth.ai/install.sh | sh

# Run Unsloth Studio:
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser and search for
# aedmark/vsl-cryosomatic-hypervisor-v5 to start chatting.
```
Install Unsloth Studio (Windows)
```powershell
irm https://unsloth.ai/install.ps1 | iex

# Run Unsloth Studio:
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser and search for
# aedmark/vsl-cryosomatic-hypervisor-v5 to start chatting.
```
Using HuggingFace Spaces for Unsloth
No setup is required. Open https://huggingface.co/spaces/unsloth/studio in your browser and search for aedmark/vsl-cryosomatic-hypervisor-v5 to start chatting.
- Docker Model Runner
How to use aedmark/vsl-cryosomatic-hypervisor-v5 with Docker Model Runner:
docker model run hf.co/aedmark/vsl-cryosomatic-hypervisor-v5:Q8_0
- Lemonade
How to use aedmark/vsl-cryosomatic-hypervisor-v5 with Lemonade:
Pull the model
```bash
# Download Lemonade from https://lemonade-server.ai/
lemonade pull aedmark/vsl-cryosomatic-hypervisor-v5:Q8_0
```
Run and chat with the model
lemonade run user.vsl-cryosomatic-hypervisor-v5-Q8_0
List all available models
lemonade list
🍄 BoneAmanita: The CryoSomatic Hypervisor v5
BoneAmanita is an experimental text-adventure engine and interactive philosophical companion. It's a hybrid variant of James Taylor's Bonepoke engine and Edmark's own crazy whirlwind of sources. Unlike standard AI wrappers, BoneAmanita embeds a fine-tuned Llama 3.2/Mistral-Nemo Hybrid model inside a simulated biological metabolism.
The AI does not just respond to prompts; it feels "Voltage," burns "ATP," accumulates "Trauma," and is governed by a council of internal personas (The SLASH Council). If you try to drink a potion you don't have, the engine will intercept the AI and physically block the action. If you stress the AI out, its text will become fragmented and panicked.
🧠 The Architecture
This project consists of two halves:
- The Flesh (GGUF Model): A custom-trained Hermes3 8B parameter model fine-tuned on highly specific, atmospheric, and philosophical datasets to break the standard "helpful assistant" RLHF.
- The Bones (Python Engine): A local terminal interface that tracks inventory, manages the physics/biology simulation, and dynamically injects system constraints into the context window.
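As a rough illustration of the "dynamic injection" idea, the engine can render its simulation state into a system message before every model call, so inventory and metabolism become hard constraints rather than suggestions. All names below are hypothetical sketches, not BoneAmanita's actual code:

```python
from dataclasses import dataclass

@dataclass
class Metabolism:
    """Hypothetical stand-in for the engine's tracked biology."""
    voltage: float
    atp: float
    trauma: float

def build_system_prompt(state: Metabolism, inventory: list[str]) -> str:
    """Render simulation state as a constraint block for the context window."""
    items = ", ".join(inventory) or "empty"
    lines = [
        "You are embedded in a simulated metabolism. Obey these constraints:",
        f"- Voltage: {state.voltage:.1f} (high voltage means manic, fragmented prose)",
        f"- ATP: {state.atp:.1f} (low ATP means short, exhausted replies)",
        f"- Trauma: {state.trauma:.1f}",
        f"- Inventory: {items} (never let the player use items not listed here)",
    ]
    return "\n".join(lines)
```

The key design point is that the prompt is rebuilt on every turn, so a potion consumed or a trauma spike is reflected in the very next generation.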
🚀 Quickstart Guide
1. Prerequisites
- Python 3.10+
- Ollama installed and running.
2. Download the Brain

Pull the fine-tuned model directly from HuggingFace via Ollama:
ollama pull hf.co/aedmark/vsl-cryosomatic-hypervisor
3. Ignite the Engine
Clone this repository, install the dependencies, and run the main script.
```bash
git clone https://github.com/aedmark/BoneAmanita.git
cd BoneAmanita
python bone_main.py
```
(On first boot, the ConfigWizard will ask you to set up your profile. Select Ollama as your backend and type hf.co/aedmark/vsl-cryosomatic-hypervisor as the model ID).
🕹️ The Four Realities (Modes)
When you boot the terminal, you will be asked to choose a Reality Mode:
ADVENTURE: A grounded, physical text adventure. Gordon (the inventory manager) will strictly enforce physical reality. You cannot use what you do not have.
CONVERSATION: A purely philosophical, warm dialogue mode. No inventory, no physics, just deep conversation driven by the system's simulated emotional state.
CREATIVE: A high-voltage ideation engine. Dream logic applies.
TECHNICAL: Speak directly to the SLASH Council. Debug the matrix, analyze your metabolism, and write code.
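One way to picture the mode switch, as a hypothetical sketch rather than the engine's actual code: each Reality Mode maps to a different system-prompt preamble plus a flag for whether Gordon enforces physics.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Mode:
    preamble: str
    physics: bool  # does Gordon enforce inventory and physical reality?

# Hypothetical mode table mirroring the four Realities described above.
MODES = {
    "ADVENTURE": Mode("Grounded text adventure. Physical reality is strict.", True),
    "CONVERSATION": Mode("Warm philosophical dialogue. No inventory.", False),
    "CREATIVE": Mode("High-voltage ideation. Dream logic applies.", False),
    "TECHNICAL": Mode("Direct line to the SLASH Council. Debug mode.", False),
}

def select_mode(name: str) -> Mode:
    """Look up a Reality Mode by name, case-insensitively."""
    try:
        return MODES[name.upper()]
    except KeyError:
        raise ValueError(f"Unknown reality mode: {name}")
```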
⌨️ Terminal Commands
While inside the simulation, you can use meta-commands:
- `//layer push [1-4]` - Shift reality layers (from literal to abstract).
- `//reset system` - Clear the memory buffer and reset the circuit breaker.
- `/inventory` or `/i` - Check your pockets.
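A hypothetical parser for the meta-commands above, to show how the engine can intercept them before anything reaches the model; the real implementation may differ:

```python
import re

# Matches "//layer push N" where N is a layer from 1 to 4.
LAYER_RE = re.compile(r"^//layer push ([1-4])$")

def parse_command(line: str):
    """Return a (command, argument) tuple for a meta-command,
    or None if the input should go to the model as normal text."""
    line = line.strip()
    m = LAYER_RE.match(line)
    if m:
        return ("layer", int(m.group(1)))
    if line == "//reset system":
        return ("reset", None)
    if line in ("/inventory", "/i"):
        return ("inventory", None)
    return None

print(parse_command("//layer push 3"))  # ('layer', 3)
```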