Why does Llama 3.1 on llama.cpp repeatedly output large amounts of text that don't comply with the instructions? In my personal use cases the results are even worse than ordinary 2B models.
by /u/zen_in_box in /r/LocalLLaMA
Upvotes: 1