/r/LocalLLaMA
Vector Companion - Your 100% local, private, multimodal AI companion, split into two agents, Axiom and Axis: a voice-to-voice framework where your bots can view images and text, listen to computer audio, and speak to you (and each other) directly and simultaneously in real time, indefinitely!
Maxime Labonne: BigLlama-3.1-1T-Instruct (an experimental self-merge of Meta-Llama-3.1-405B-Instruct, created with Arcee.AI's mergekit)
