How feasible is it to run your own local LLM as an assistant on a basic Mac Mini M2 with 8 GB of memory?
by /u/onturenio in /r/ollama
Upvotes: 12