StoryNote
/r/LocalLLaMA
Title | Upvotes | Author
Ollama Agent Roll Cage by BORCH: AI Maker Space Community Session (AIM) | 1 | Glum_Ad_6021
GGUF CPU offloading speed problem | 1 | OutrageousMinimum191
Gemma2:2b Write proposal solution for 3 body problem but inverse of Good one | 1 | Worldly_Evidence9113
Anyone else getting an error running Gemma2 2b gguf on llama.cpp? | 1 | crischu
I'm curious if it's possible to run llama-3.1-405b-4bit on 8 GPUs priced at $35 each | 1 | Dr_Karminski
Koboldcpp crashes when loading model? | 1 | Benjamin_swoleman
Best uncensored llm model that is out there as of today? | 1 | OkGain2570
Question about Mistral-Nemo-Instruct prompt format for storytelling | 1 | Dazzling_Fishing7850
I'm curious if it's possible to run llama-3.1-405b-4bit on tesla K80? | 1 | Dr_Karminski
Is anybody using groq in production? | 1 | Fracternalai