cool
#1
by gopi87 - opened
Good model, but I'm still testing and it's slow.
Thanks for the feedback. Can you explain what kind of RPC support you'd like?
llama.cpp already supports it when built with GGML_RPC=ON, but it is fragile; you can try it.
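For reference, a minimal sketch of trying the RPC backend (assuming a recent llama.cpp checkout; binary names and flags can differ between versions, and the example host/port are placeholders):

```sh
# Build llama.cpp with the RPC backend enabled
cmake -B build -DGGML_RPC=ON
cmake --build build --config Release

# On the remote worker machine: start an RPC server (port is arbitrary)
./build/bin/rpc-server -p 50052

# On the main machine: point llama-cli at the worker(s) with --rpc
./build/bin/llama-cli -m model.gguf --rpc 192.168.1.10:50052 -ngl 99 -p "Hello"
```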
But it doesn't support TQ3.
YTan2000 changed discussion status to closed