/r/LocalLLaMA
Want local TTS to accompany an LLM, but it seems the popular ones all use pickle. Has that community just not caught up to safetensors yet?
What kind of context window is possible with locally run LLMs on iOS? Also, how slow can it be?
Arcee AI Released DistillKit: An Open Source, Easy-to-Use Tool Transforming Model Distillation for Creating Efficient, High-Performance Small Language Models