Instructions to use Aikyam-Lab/CURE-MED-7B with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use Aikyam-Lab/CURE-MED-7B with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="Aikyam-Lab/CURE-MED-7B")
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("Aikyam-Lab/CURE-MED-7B")
model = AutoModelForCausalLM.from_pretrained("Aikyam-Lab/CURE-MED-7B")

messages = [
    {"role": "user", "content": "Who are you?"},
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(**inputs, max_new_tokens=40)
# Decode only the newly generated tokens, skipping the prompt
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:]))
```

- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use Aikyam-Lab/CURE-MED-7B with vLLM:
Install from pip and serve the model:
```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "Aikyam-Lab/CURE-MED-7B"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "Aikyam-Lab/CURE-MED-7B",
    "messages": [
      { "role": "user", "content": "What is the capital of France?" }
    ]
  }'
```

Use Docker
```shell
docker model run hf.co/Aikyam-Lab/CURE-MED-7B
```
- SGLang
How to use Aikyam-Lab/CURE-MED-7B with SGLang:
Install from pip and serve the model:
```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "Aikyam-Lab/CURE-MED-7B" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "Aikyam-Lab/CURE-MED-7B",
    "messages": [
      { "role": "user", "content": "What is the capital of France?" }
    ]
  }'
```

Use Docker images
```shell
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "Aikyam-Lab/CURE-MED-7B" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "Aikyam-Lab/CURE-MED-7B",
    "messages": [
      { "role": "user", "content": "What is the capital of France?" }
    ]
  }'
```

- Docker Model Runner
How to use Aikyam-Lab/CURE-MED-7B with Docker Model Runner:
```shell
docker model run hf.co/Aikyam-Lab/CURE-MED-7B
```
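Both vLLM and SGLang expose an OpenAI-compatible `/v1/chat/completions` endpoint, so the curl calls above can be reproduced from any HTTP client. A minimal sketch of building the same request body in Python (no network call is made here; point `url` at whichever server you started, and send with `urllib` or the `openai` client):

```python
import json

# Build the same chat-completions request body the curl examples send.
# `url` is whichever server you started: vLLM on :8000, SGLang on :30000.
url = "http://localhost:8000/v1/chat/completions"
payload = {
    "model": "Aikyam-Lab/CURE-MED-7B",
    "messages": [
        {"role": "user", "content": "What is the capital of France?"},
    ],
}
body = json.dumps(payload)

# Once the server is running, send it with, e.g.:
#   from urllib.request import Request, urlopen
#   req = Request(url, data=body.encode(),
#                 headers={"Content-Type": "application/json"})
#   print(urlopen(req).read())
```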
Model Card for CURE-MED-7B
CURE-MED-7B is a 7 billion parameter large language model specialized for multilingual medical reasoning, fine-tuned from Qwen/Qwen2.5-7B using a curriculum-informed reinforcement learning framework to enhance logical correctness and language stability in healthcare applications.
Model Details
CURE-MED-7B is part of the CURE-MED family of models, designed to address the challenges of multilingual medical reasoning in large language models (LLMs). Built on the Qwen2.5-7B base model, it incorporates a curriculum-informed reinforcement learning approach that integrates code-switching-aware supervised fine-tuning (SFT) and Group Relative Policy Optimization (GRPO) to improve performance on open-ended medical queries across 13 languages, including underrepresented ones such as Amharic, Yoruba, and Swahili. The model is trained and evaluated using CUREMED-BENCH, a high-quality multilingual open-ended medical reasoning benchmark with single verifiable answers.
Model Description
This is the model card of a 🤗 Transformers model that has been pushed to the Hub.
- Developed by: Eric Onyame, Akash Ghosh, Subhadip Baidya, Sriparna Saha, Xiuying Chen, Chirag Agarwal (Aikyam Lab and collaborators)
- Shared by: Aikyam Lab
- Model type: Multilingual medical reasoning large language model
- Language(s) (NLP): Amharic, Bengali, French, Hausa, Hindi, Japanese, Korean, Spanish, Swahili, Thai, Turkish, Vietnamese, Yoruba
- License: Apache 2.0
- Finetuned from model: Qwen/Qwen2.5-7B (the CURE-MED family spans 1.5B, 3B, 7B, 14B, and 32B variants)
Model Sources
- Repository: https://github.com/AikyamLab/cure-med
- Paper: https://arxiv.org/abs/2601.13262
- Demo: https://cure-med.github.io/
Citation
BibTeX:
@article{onyame2026cure,
title={CURE-Med: Curriculum-Informed Reinforcement Learning for Multilingual Medical Reasoning},
author={Onyame, Eric and Ghosh, Akash and Baidya, Subhadip and Saha, Sriparna and Chen, Xiuying and Agarwal, Chirag},
journal={arXiv preprint arXiv:2601.13262},
year={2026}
}