Runtime error

Exit code: 1. Reason:

[wget progress output trimmed]
2026-02-21 14:08:23 (61.9 MB/s) - ‘SmolLM3-Q4_K_M.gguf’ saved [1915305312/1915305312]

Traceback (most recent call last):
  File "/app/app.py", line 79, in <module>
    llm = Llama(
        model_path=MODEL_FILE,
        ...<3 lines>...
        verbose=False
    )
  File "/usr/local/lib/python3.13/site-packages/llama_cpp/llama.py", line 341, in __init__
    self._model = _LlamaModel(
    ~~~~~~~~~~~^
        path_model=self.model_path, params=self.model_params, verbose=self.verbose
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    )
    ^
  File "/usr/local/lib/python3.13/site-packages/llama_cpp/_internals.py", line 57, in __init__
    raise ValueError(f"Failed to load model from file: {path_model}")
ValueError: Failed to load model from file: SmolLM3-Q4_K_M.gguf
