Add vLLM as one of the supported inference engines in the model card.
#9
by wangshangsam - opened
No description provided.
Closing this PR, since https://huggingface.co/nvidia/Qwen3.5-397B-A17B-NVFP4/commit/67eb9fda23a430b41f629c016c96fdc7bcbd43f5 was already (accidentally) committed to main.
wangshangsam changed pull request status to closed