/u/zen_in_box's posts in /r/LocalLLaMA
Why does Llama 3.1 on llama.cpp repeatedly output large amounts of text that doesn't follow the instructions? In my personal use cases the results are even worse than ordinary 2B models.
1 upvote
