cool

#1
by gopi87 - opened

Good model, but I'm still testing it and it's slow.

https://github.com/turbo-tan/llama.cpp-tq3

hope one day they support rpc

Thanks for the feedback. Can you explain what kind of RPC support you are looking for?

llama.cpp already supports RPC when built with GGML_RPC=ON, but it is fragile; you can try it.
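For reference, a minimal sketch of trying the RPC backend in upstream llama.cpp; the host, port, model path, and layer count below are placeholder examples, not values from this thread:

```shell
# Build llama.cpp with the RPC backend enabled
cmake -B build -DGGML_RPC=ON
cmake --build build --config Release

# On the worker machine: start the RPC server (host/port are examples)
./build/bin/rpc-server -H 0.0.0.0 -p 50052

# On the main machine: offload layers to the remote worker via --rpc
./build/bin/llama-cli -m model.gguf --rpc 192.168.1.10:50052 -ngl 99
```

Multiple workers can be listed as a comma-separated value to `--rpc`, but as noted above the backend is fragile, so expect rough edges.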

but it doesn't support TQ3

https://github.com/turbo-tan/llama.cpp-tq3

Have you tried this fork?

YTan2000 changed discussion status to closed
