Yep, the OpenAI API and/or the Ollama one work for this no problem in most projects. You just give it the address and port you want to connect to, and that address can be localhost, a machine on your LAN, a server on another network, whatever.
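A minimal sketch of what that looks like in practice, assuming an Ollama server exposing its OpenAI-compatible endpoint (Ollama's default port is 11434; the host, model name, and prompt below are placeholders you'd swap for your own):

```python
# Sketch: building an OpenAI-style chat request against a local Ollama
# server using only the standard library. Nothing is sent here -- this
# just shows that the target is an ordinary host:port you control.
import json
import urllib.request

def build_chat_request(host, port, model, prompt):
    """Construct a request to an OpenAI-compatible /v1/chat/completions
    endpoint. Swap host for a LAN IP or remote hostname as needed."""
    url = f"http://{host}:{port}/v1/chat/completions"
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode()
    return urllib.request.Request(
        url, data=body, headers={"Content-Type": "application/json"}
    )

# localhost works the same as any other reachable address.
req = build_chat_request("localhost", 11434, "llama3", "hello")
```

Most client libraries (the official `openai` package included) just take that same base URL as a parameter, so pointing them at a local or remote server is a one-line change.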
Oof rip sorry lol, thought I’d seen a post about Boost and PieFed
I’ve seen a couple of other clients add support for PieFed recently as well, although I don’t follow them closely cuz I’m happy with Voyager. Maybe Boost or Mlem supports PieFed already?
It’s come a long way recently. I’m using PieFed through Voyager right now and it works great
This seems like the perfect use case for federated software to me
Did you use a heavily quantized version? Those models are much smaller than the state-of-the-art ones to begin with, and if you chop their weights from float16 down to 4-bit or 2-bit integers it reduces their capabilities a lot more