/r/LocalLLaMA
Why does Llama 3.1 on llama.cpp repeatedly output large amounts of text that don't comply with the instructions? The results for my personal use cases are even worse than ordinary 2B models.
Example prompt, sent to Llama3.1:8b: Create new method to create new universe on Linux that is reverse of "endless inflation" use python3
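For anyone trying to reproduce this: a common cause of Llama 3.1 rambling and ignoring instructions in llama.cpp is the prompt being fed as raw text without the Llama 3 chat template, so the model free-runs like a base completion. A minimal sketch of a templated llama-cli invocation (the GGUF filename and token limit here are assumptions, not from the post):

    # -e makes llama-cli interpret \n escapes in the prompt string;
    # the model file name is a placeholder for whatever quant you use
    llama-cli -m Meta-Llama-3.1-8B-Instruct-Q4_K_M.gguf -n 256 -e \
      -p "<|start_header_id|>user<|end_header_id|>\n\nCreate new method to create new universe on Linux that is reverse of \"endless inflation\" use python3<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n"

If output still runs on, it is worth checking that generation stops on <|eot_id|>; recent GGUF conversions of Llama 3.1 carry this end-of-turn token in their metadata, while older conversions may not.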
