Running mistral locally for meeting notes and it's honestly good enough for my use case

Reddit r/LocalLLaMA / 3/22/2026

💬 Opinion · Tools & Practical Usage · Models & Research

Key Points

  • The author, a project manager, has 4 to 6 meetings per day and needs to turn the notes into Jira action items and Confluence summaries, a task that doesn't require GPT-4-level intelligence.
  • They run Mistral 7B locally on a MacBook via Ollama; the input can be typed notes or a raw dictated transcript.
  • Their simple prompt—"here are notes from a project meeting. extract action items with owner and deadline. format as a bullet list."—achieves about 85% accuracy; the remainder is due to missing context, not model failure.
  • They chose a local setup to satisfy company policies about third-party tools and avoid infosec reviews since data stays on-device.
  • Inference on a 7b model on an M2 Pro is fast enough (about 10 seconds) that they paste the action items into Jira without workflow interruption.

I know this sub loves benchmarks and comparing model performance on coding tasks. my use case is way more boring and I want to share it because I think local models are underrated for simple practical stuff.

I'm a project manager. I have 4 to 6 meetings a day. the notes from those meetings need to turn into action items in jira and summary updates in confluence. that's it. I don't need gpt4 level intelligence for this. I need something that can take rough text and spit out a structured list of who needs to do what by when.

I'm running mistral 7b on my macbook through ollama. the input is whatever I have from the meeting, sometimes typed, sometimes it's a raw transcript I dictated into willow voice that's got no punctuation and half-finished sentences. doesn't matter. mistral handles both fine for this task.

my prompt is dead simple: "here are notes from a project meeting. extract action items with owner and deadline. format as a bullet list." it gets it right about 85% of the time. the other 15% is usually missing context that wasn't in the input to begin with, not a model failure.
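for anyone who wants to script this instead of pasting into a terminal, here's a rough sketch of the same workflow against Ollama's local REST API (the default `localhost:11434` endpoint; the function names and example notes are mine, not OP's):

```python
import json
import urllib.request

# default endpoint for a locally running `ollama serve`
OLLAMA_URL = "http://localhost:11434/api/generate"

# OP's prompt, with the raw notes appended at the end
PROMPT_TEMPLATE = (
    "here are notes from a project meeting. "
    "extract action items with owner and deadline. "
    "format as a bullet list.\n\n{notes}"
)

def build_prompt(notes: str) -> str:
    """Combine the fixed instruction with the raw meeting notes."""
    return PROMPT_TEMPLATE.format(notes=notes)

def extract_action_items(notes: str, model: str = "mistral") -> str:
    """Send the prompt to the local Ollama server and return the model's reply."""
    payload = json.dumps({
        "model": model,
        "prompt": build_prompt(notes),
        "stream": False,  # one complete response instead of streamed chunks
    }).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL, data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# usage (needs `ollama serve` running and `ollama pull mistral` done once):
#   print(extract_action_items(open("meeting_notes.txt").read()))
```

no dependencies beyond the standard library, so it runs anywhere ollama does; swapping `"mistral"` for another pulled model is the only change needed to experiment.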

the reason I went local instead of using chatgpt: our company has policies about putting meeting content into third party tools. running it locally means I'm not sending anything anywhere and I don't need to deal with infosec reviews.

the speed is fine. inference on 7b on an m2 pro is fast enough that it doesn't interrupt my workflow. I paste the text, wait maybe 10 seconds, copy the action items into jira.

anyone else using local models for mundane work stuff like this? I feel like this sub skews toward people pushing the limits but there's a huge practical middle ground.

submitted by /u/kinky_guy_80085