Add vLLM as one of the supported inference engines in the model card.

#9
No description provided.
wangshangsam changed pull request status to closed

@wangshangsam could your team also publish the other Qwen3.5 models in NVFP4? E.g., the 122B, 27B, and 35B variants.
