I'm new to hosting an LLM locally.
I've been using Claude Sonnet a lot and having plenty of success with it. I'd like to explore a workflow where I leave a local LLM running overnight on my hardware, so it doesn't need to be fast, but I do need quality comparable to models like Sonnet and Opus.
Is this currently achievable within these sorts of specs? Would doubling my hardware make it achievable, or is that level of quality only available over an API for now?