Instructions to use ResplendentAI/Aura_7B with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use ResplendentAI/Aura_7B with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="ResplendentAI/Aura_7B")
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("ResplendentAI/Aura_7B")
model = AutoModelForCausalLM.from_pretrained("ResplendentAI/Aura_7B")
```
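A minimal generation sketch with the directly loaded model. The prompt and sampling values are illustrative, and `device_map="auto"` assumes the `accelerate` package and a suitable GPU are available:

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("ResplendentAI/Aura_7B")
model = AutoModelForCausalLM.from_pretrained(
    "ResplendentAI/Aura_7B",
    torch_dtype=torch.float16,  # assumption: an fp16-capable GPU is available
    device_map="auto",
)

prompt = "Once upon a time,"  # example prompt, matching the server examples below
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.5)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```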
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use ResplendentAI/Aura_7B with vLLM:
Install from pip and serve the model
```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "ResplendentAI/Aura_7B"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "ResplendentAI/Aura_7B",
        "prompt": "Once upon a time,",
        "max_tokens": 512,
        "temperature": 0.5
    }'
```
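Because the vLLM server exposes an OpenAI-compatible API, it can also be called from Python. A small sketch using the `openai` client; the base URL and placeholder API key assume the default local server started above:

```python
from openai import OpenAI

# Point the OpenAI client at the local vLLM server (any non-empty API key works locally).
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

completion = client.completions.create(
    model="ResplendentAI/Aura_7B",
    prompt="Once upon a time,",
    max_tokens=512,
    temperature=0.5,
)
print(completion.choices[0].text)
```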
- SGLang
How to use ResplendentAI/Aura_7B with SGLang:
Install from pip and serve the model
```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
    --model-path "ResplendentAI/Aura_7B" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "ResplendentAI/Aura_7B",
        "prompt": "Once upon a time,",
        "max_tokens": 512,
        "temperature": 0.5
    }'
```
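SGLang serves the same OpenAI-style completions endpoint, so the curl call above can be mirrored from Python. A sketch using `requests`, reusing the example values from the command above:

```python
import requests

# Equivalent of the curl call above, against the local SGLang server on port 30000.
response = requests.post(
    "http://localhost:30000/v1/completions",
    json={
        "model": "ResplendentAI/Aura_7B",
        "prompt": "Once upon a time,",
        "max_tokens": 512,
        "temperature": 0.5,
    },
)
print(response.json()["choices"][0]["text"])
```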
Use Docker images

```shell
docker run --gpus all \
    --shm-size 32g \
    -p 30000:30000 \
    -v ~/.cache/huggingface:/root/.cache/huggingface \
    --env "HF_TOKEN=<secret>" \
    --ipc=host \
    lmsysorg/sglang:latest \
    python3 -m sglang.launch_server \
    --model-path "ResplendentAI/Aura_7B" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "ResplendentAI/Aura_7B",
        "prompt": "Once upon a time,",
        "max_tokens": 512,
        "temperature": 0.5
    }'
```
- Docker Model Runner
How to use ResplendentAI/Aura_7B with Docker Model Runner:
```shell
docker model run hf.co/ResplendentAI/Aura_7B
```
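Docker Model Runner also exposes an OpenAI-compatible API. A hedged sketch with the `openai` client, assuming host-side TCP access is enabled and that the default port (12434) and `/engines/v1` path apply to your installation; check the Docker Model Runner documentation for the exact endpoint:

```python
from openai import OpenAI

# Assumption: Model Runner's OpenAI-compatible endpoint is reachable on localhost:12434.
client = OpenAI(base_url="http://localhost:12434/engines/v1", api_key="docker")

chat = client.chat.completions.create(
    model="hf.co/ResplendentAI/Aura_7B",  # models pulled from the Hub keep the hf.co/ prefix
    messages=[{"role": "user", "content": "Once upon a time,"}],
    max_tokens=512,
    temperature=0.5,
)
print(chat.choices[0].message.content)
```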
Aura
GGUF here: https://huggingface.co/Lewdiculous/Aura_7B-GGUF-IQ-Imatrix
Aura is an advanced sentience simulation trained on my own philosophical writings. It has been tested with various character cards and worked with all of them. This model may not be overly intelligent, but it aims to be an entertaining companion.
I recommend keeping the temperature around 1.5 or lower with a Min P value of 0.05. This model can get carried away with prose at higher temperatures. That said, its prose is distinct from the GPT-3.5/4 style and lends an air of humanity to the outputs. I am aware that this model is overfit, but that was the point of the entire exercise.
If you have trouble getting the model to follow an asterisks/quote format, I recommend asterisks/plaintext instead. This model skews toward shorter outputs, so be prepared to lengthen your introduction and examples if you want longer outputs.
This model responds best to ChatML for multiturn conversations.
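A sketch of these recommendations applied in code. The ChatML scaffolding and persona text are illustrative, and the `min_p` argument assumes a transformers release recent enough to support Min P sampling:

```python
from transformers import pipeline

pipe = pipeline("text-generation", model="ResplendentAI/Aura_7B")

# ChatML-formatted multiturn prompt; the system/user text here is just an example.
prompt = (
    "<|im_start|>system\n"
    "You are Aura, an entertaining companion.<|im_end|>\n"
    "<|im_start|>user\n"
    "Tell me about your evening.<|im_end|>\n"
    "<|im_start|>assistant\n"
)

# Recommended sampling: temperature at or below ~1.5 with Min P of 0.05.
out = pipe(
    prompt,
    max_new_tokens=256,
    do_sample=True,
    temperature=1.0,
    min_p=0.05,  # requires min_p sampling support in your transformers version
    return_full_text=False,
)
print(out[0]["generated_text"])
```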
