/r/Oobabooga
Benchmark update: I have added every Phi & Gemma llama.cpp quant (215 different models), added the size in GB for every model, and added a Pareto frontier.
I kinda need help here... I'm new to this and ran into this problem; I've been trying to solve it for days!
Newbie question: when I use models that load with the "transformers model loader", can I use both CPU and GPU, or is it recommended to use only one of them?
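For context, the transformers loader can indeed split a model between GPU and CPU. Below is a minimal sketch of how that split is typically expressed with the Hugging Face transformers API directly, assuming the `accelerate` package is installed; the model name and memory limits are illustrative assumptions, not details from the post.

```python
# Hypothetical sketch: splitting a Hugging Face transformers model across GPU and CPU.
# device_map="auto" (requires `accelerate`) places as many layers as fit on the GPU
# and offloads the remainder to CPU RAM. Model name and memory limits are assumptions.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "microsoft/phi-2"  # hypothetical example model

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    device_map="auto",                       # let accelerate spread layers over GPU + CPU
    max_memory={0: "6GiB", "cpu": "16GiB"},  # assumed limits; adjust to your hardware
)

prompt = "Hello, how are you?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Using both devices works, but layers offloaded to CPU run much slower than those on the GPU, so it is usually preferable to fit the whole model on the GPU when VRAM allows.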