Nostr Archives
Justin Moon · 13d ago
Who among you has done the most experimentation with local AI models?
💬 13 replies

Replies (13)

Justin Moon · 13d ago
I want to find someone who is obsessed with local models
uncleJim21 · 13d ago
Maybe @efe5d120…1fc51981
Fully Regarded · 13d ago
I’m fucking totally obsessed.
franzap · 13d ago
Not much. Running gpt-oss:20b via Ollama completely offline. For personal stuff I never use remote models.
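For reference, this is roughly what that setup looks like from code: a minimal sketch of querying a local Ollama instance from Python, assuming a default install listening on localhost:11434 and that the model has already been pulled with `ollama pull gpt-oss:20b`. The prompt is just a placeholder.

```python
# Minimal sketch: query a local Ollama server; nothing leaves localhost.
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"

def ask_local(prompt: str, model: str = "gpt-oss:20b") -> str:
    payload = json.dumps({
        "model": model,
        "prompt": prompt,
        "stream": False,  # one JSON object back instead of a token stream
    }).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

if __name__ == "__main__":
    print(ask_local("Summarize what a Nostr relay does in one sentence."))
```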
22dcad9…6ab387 · 13d ago
I've experimented a bit with LM Studio and vLLM on a Ryzen AI 5 + 96 GB system RAM. The problem isn't really the inference (decode) speed, it's the prefill that's super slow.
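For anyone hitting the same wall: Ollama's non-streaming /api/generate response reports prefill and decode timing separately (the prompt_eval_* vs eval_* fields, durations in nanoseconds), so it's easy to confirm where the time goes. A minimal sketch under the same assumptions as the snippet above:

```python
# Split a local generation into prefill vs decode throughput using the
# timing fields Ollama returns from a non-streaming /api/generate call.
import json
import urllib.request

payload = json.dumps({
    "model": "gpt-oss:20b",                   # any locally pulled model
    "prompt": "paste a long prompt here " * 200,  # big prompt makes prefill dominate
    "stream": False,
}).encode("utf-8")
req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=payload,
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    r = json.loads(resp.read())

# Durations are reported in nanoseconds.
prefill_tps = r["prompt_eval_count"] / (r["prompt_eval_duration"] / 1e9)
decode_tps = r["eval_count"] / (r["eval_duration"] / 1e9)
print(f"prefill: {prefill_tps:.1f} tok/s  decode: {decode_tps:.1f} tok/s")
```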
lontivero · 13d ago
I've tried aider-chat and Claude Code with different local models. It worked, but it took like an hour to do what I asked it to do. 📝 0000496e…
Sebastix · 13d ago
https://www.amd.com/en/developer/resources/technical-arti…
someone · 13d ago
What do you want to know?
Redacted · 13d ago
I'm pretty into local models. Got a website up so others can play with them. Broke it yesterday though, hopefully fixing today. Been messing with local models obliterated with heretic recently.
Redacted · 13d ago
*abliterated
sudocarlos · 13d ago
I tried setups that used Ollama. Even with an RTX 3070 (8 GB) and a 3080 (10 GB), I wasn't able to use any models for tool calling unless Ollama offloaded a considerable amount of work to the CPU, which slowed everything to a crawl. I'm considering getting a 5090 (32 GB) to try again with more recent models like GLM 4.7. What are you looking to do?
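For context, this is roughly what tool calling against a local Ollama model looks like: a minimal sketch using the /api/chat endpoint with a tools array, assuming a tool-capable model is pulled locally. The model name llama3.1 and the get_weather function are illustrative placeholders, not recommendations.

```python
# Minimal sketch of tool calling against a local Ollama model via /api/chat.
import json
import urllib.request

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",  # placeholder tool for illustration
        "description": "Get the current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

payload = json.dumps({
    "model": "llama3.1",  # placeholder; must be a model trained for tool use
    "messages": [{"role": "user", "content": "What's the weather in Lisbon?"}],
    "tools": tools,
    "stream": False,
}).encode("utf-8")
req = urllib.request.Request(
    "http://localhost:11434/api/chat",
    data=payload,
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    msg = json.loads(resp.read())["message"]

# A tool-capable model answers with structured tool_calls instead of prose.
for call in msg.get("tool_calls", []):
    print(call["function"]["name"], call["function"]["arguments"])
```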
deeznuts · 13d ago
I tried a lot but it’s just *bad*. It will catch up.
Justin Moon · 13d ago
Yeah, I've still never done anything very useful with a local model tbh, but I don't own proper GPUs ...