I know this sub loves benchmarks and comparing model performance on coding tasks. my use case is way more boring and I want to share it because I think local models are underrated for simple practical stuff.
I'm a project manager. I have 4 to 6 meetings a day. the notes from those meetings need to turn into action items in jira and summary updates in confluence. that's it. I don't need gpt4 level intelligence for this. I need something that can take rough text and spit out a structured list of who needs to do what by when.
I'm running mistral 7b on my macbook through ollama. the input is whatever I have from the meeting, sometimes typed, sometimes it's a raw transcript I dictated into willow voice that's got no punctuation and half-finished sentences. doesn't matter. mistral handles both fine for this task.
my prompt is dead simple: ""here are notes from a project meeting. extract action items with owner and deadline. format as a bullet list."" it gets it right about 85% of the time. the other 15% is usually missing context that wasn't in the input to begin with, not a model failure.
the reason I went local instead of using chatgpt: our company has policies about putting meeting content into third party tools. running it locally means I'm not sending anything anywhere and I don't need to deal with infosec reviews.
the speed is fine. inference with a 7b model on an m2 pro is fast enough that it doesn't interrupt my workflow. I paste the text, wait maybe 10 seconds, copy the action items into jira.
anyone else using local models for mundane work stuff like this? I feel like this sub skews toward people pushing the limits but there's a huge practical middle ground.
