
Getting undefined symbol: _ZN3c1013MessageLoggerC1EPKciib when following instructions

#12
by costelter - opened

Hi!

When I follow your instructions on installing vllm + vllm-omni I get the following error:

  File "/home/ubuntu/src/voxtral-tts/.venv/lib/python3.12/site-packages/vllm/utils/import_utils.py", line 111, in resolve_obj_by_qualname
    module = importlib.import_module(module_name)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3.12/importlib/__init__.py", line 90, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/ubuntu/src/voxtral-tts/.venv/lib/python3.12/site-packages/vllm/platforms/cuda.py", line 19, in <module>
    import vllm._C  # noqa
    ^^^^^^^^^^^^^^
ImportError: /home/ubuntu/src/voxtral-tts/.venv/lib/python3.12/site-packages/vllm/_C.abi3.so: undefined symbol: _ZN3c1013MessageLoggerC1EPKciib

when running vllm serve. I thought this might be a Blackwell problem, so I re-ran the steps on an Ada Lovelace GPU (L40), but got the same error.

What version of CUDA did you use? This particular machine has cuda-toolkit 12.9 installed and runs Ubuntu 24.04.

I also tried the latest vLLM built from source, but got the same error.
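For reference, the undefined symbol `_ZN3c1013MessageLoggerC1EPKciib` appears to demangle to a `c10::MessageLogger` constructor from libtorch, which usually points to an ABI mismatch between the installed torch and the torch that vllm's compiled extension was built against. A minimal sketch to list the installed versions for comparison (package names are the usual PyPI names; adjust if your environment differs):

```python
# The undefined-symbol ImportError usually indicates an ABI mismatch:
# vllm's _C.abi3.so was compiled against a different torch build than
# the one currently installed. Listing versions helps spot the mismatch.
from importlib import metadata


def installed_version(package: str) -> str:
    """Return the installed version of `package`, or 'not installed'."""
    try:
        return metadata.version(package)
    except metadata.PackageNotFoundError:
        return "not installed"


for pkg in ("torch", "torchvision", "transformers", "vllm"):
    print(f"{pkg}: {installed_version(pkg)}")
```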

Mistral AI_ org

Any chance you could open an issue on vLLM-Omni for this one: https://github.com/vllm-project/vllm-omni

Aye, will do.

Here are the package versions that fixed this issue for me. Good luck:

python -c "
import torch, torchvision, transformers, vllm
print('torch:', torch.__version__)
print('torchvision:', torchvision.__version__)
print('transformers:', transformers.__version__)
print('vllm:', vllm.__version__)
"
torch: 2.10.0+cu130
torchvision: 0.25.0+cu130
transformers: 5.3.0
vllm: 0.18.0
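To reproduce that environment, the same pins could go in a requirements file. This is a sketch, not a verified install: the `--extra-index-url` for cu130 wheels is an assumption based on PyTorch's usual wheel-index naming, and availability of these exact versions may differ per platform:

```
# requirements.txt (sketch) - pins matching the version combo reported above
--extra-index-url https://download.pytorch.org/whl/cu130  # assumed CUDA 13.0 wheel index
torch==2.10.0
torchvision==0.25.0
transformers==5.3.0
vllm==0.18.0
```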

Yes, indeed this version combo works! Thank you. I will try that on a Spark over the weekend, too. ;-)

That combo does not work for me at all? It does not even support omni?

Mistral AI_ org

@Kwissbeats It uses a vLLM Docker image and builds vllm-omni inside the container.
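For anyone following along, that Docker approach might look roughly like this. This is a hedged sketch: `vllm/vllm-openai` is vLLM's published image, but the tag and the vllm-omni install command are assumptions, not steps confirmed in this thread:

```
# Run vLLM's prebuilt image so torch and vllm's compiled extensions match,
# avoiding the undefined-symbol ABI error (--entrypoint overrides the API server).
docker run --gpus all -it --entrypoint bash vllm/vllm-openai:latest

# Inside the container, install vllm-omni from source (assumed command;
# check https://github.com/vllm-project/vllm-omni for the actual instructions):
pip install git+https://github.com/vllm-project/vllm-omni.git
```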
