
What's the best LLM model I can run on Ollama with a 3090 to ask normal stuff? And recognize PDF files and pictures?

Reddit r/LocalLLaMA / 3/14/2026

💬 Opinion · Developer Stack & Infrastructure · Tools & Practical Usage · Models & Research

Key Points

  • The author runs a local Ollama/Open WebUI setup on a dedicated RTX 3090 and currently uses qwen3-coder:30b for coding tasks.
  • They are seeking a broadly capable LLM for general, non-coding tasks to run locally on their hardware.
  • They tested llama3.2-vision:11b-instruct-q8_0, which can describe images but cannot ingest PDFs (uploading a PDF to work with it fails).
  • The main goal is to enable both image understanding and PDF recognition/processing within a locally hosted LLM setup, rather than relying on cloud-based models.
  • The post is shared in a Reddit thread (LocalLLaMA) for community input and additional context.

I have an Ollama / Open WebUI setup with a dedicated 3090 and it runs well so far. For coding I use qwen3-coder:30b, but what's the best model for everything else? Normal stuff?

I tried llama3.2-vision:11b-instruct-q8_0; it can describe pictures, but I cannot upload PDF files etc. to work with them.
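For context on why the PDF upload fails: Ollama's vision models accept base64-encoded images in the `images` field of a chat message, but they have no PDF input path, so a PDF has to be rendered to page images (or its text extracted) first. A minimal stdlib-only sketch of how a request to a local Ollama server is shaped, assuming the default `http://localhost:11434/api/chat` endpoint and the model named in the post:

```python
import base64
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/chat"  # Ollama's default local endpoint

def build_vision_request(model: str, prompt: str, image_bytes: bytes) -> dict:
    """Build an /api/chat payload; images are sent as base64 strings."""
    return {
        "model": model,
        "stream": False,
        "messages": [{
            "role": "user",
            "content": prompt,
            "images": [base64.b64encode(image_bytes).decode("ascii")],
        }],
    }

def ask(payload: dict) -> str:
    """POST the payload to a locally running Ollama server (not called here)."""
    req = urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["message"]["content"]

# A PDF is not an image: render each page to PNG first (e.g. with a tool
# like pdftoppm or a library such as pdf2image), then send pages one by one.
payload = build_vision_request(
    "llama3.2-vision:11b-instruct-q8_0",
    "Describe this picture.",
    b"\x89PNG...",  # placeholder bytes, not a real image
)
```

This only shows the request shape; in practice Open WebUI's document (RAG) pipeline, which extracts PDF text server-side, is the usual way to get PDFs into a local chat rather than sending them to a vision model directly.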

submitted by /u/m4ntic0r