Built an AI + SQL Q&A System — How to Keep High Accuracy on Complex Queries Without Gemini?

Reddit r/LocalLLaMA / 3/28/2026

💬 Opinion · Developer Stack & Infrastructure · Tools & Practical Usage · Models & Research

Key Points

  • The post describes a Python + PostgreSQL AI Q&A pipeline where an LLM converts user questions to SQL, queries Postgres, and then performs reasoning/calculations on returned data to produce the final answer.
  • The author’s main challenge is maintaining high accuracy for complex, multi-parameter queries that require combining fields and deriving insights, not just summarizing simple trends.
  • They report practical issues including slow response times and the need for a free/open-source alternative to Gemini while retaining strong reasoning and calculation capability.
  • The key questions focus on techniques to improve accuracy and reasoning in this LLM+SQL architecture and on identifying open-source models/architectures that can approximate Gemini-level performance for derived insights.

Hey,

I’m working on a Python + PostgreSQL system where:

  • User query → LLM generates SQL
  • Data is fetched from PostgreSQL
  • LLM processes data (including calculations/derivations) to generate the final answer
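The three steps above can be sketched as a small two-pass pipeline. This is a minimal illustration, not the author's actual code: `llm` and `run_query` are injected callables (so any model or any Postgres driver such as psycopg2 fits behind them), and the prompt wording and function names are assumptions.

```python
# Sketch of the question -> SQL -> answer pipeline described in the post.
# `llm` and `run_query` are injected callables; names are illustrative.

def build_sql_prompt(schema: str, question: str) -> str:
    """Ground the model in the real schema to reduce hallucinated columns."""
    return (
        "You are a PostgreSQL expert. Given this schema:\n"
        f"{schema}\n"
        f"Write one SELECT statement answering: {question}\n"
        "Return only SQL."
    )

def answer(question: str, schema: str, llm, run_query) -> str:
    # Pass 1: natural-language question -> SQL
    sql = llm(build_sql_prompt(schema, question)).strip().rstrip(";")
    # Execute against PostgreSQL (run_query would wrap e.g. a psycopg2 cursor)
    rows = run_query(sql)
    # Pass 2: fetched rows -> final answer, with the question for context
    return llm(f"Question: {question}\nRows: {rows}\nAnswer concisely.")
```

Keeping the model and the database behind plain callables also makes the pipeline easy to unit-test with stubs, which helps when iterating on prompts.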

Main issue: achieving high accuracy on complex, multi-parameter queries (not just simple trends), especially when the system needs to combine multiple fields and perform calculations and inference at a level comparable to Gemini.
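One common mitigation for calculation accuracy (my suggestion, not something from the post): let the database or plain Python do the arithmetic deterministically, and ask the LLM only to interpret the resulting numbers. A hypothetical helper computing a derived metric (period-over-period percentage change) from fetched rows:

```python
# Derived metrics computed in code, not by the LLM, so they are exact.
# Hypothetical helper; the metric and row shape are illustrative.

def pct_change(rows):
    """rows: [(period, value), ...] ordered by period.
    Returns [(period, % change vs. previous period), ...]."""
    out = []
    for (_, v0), (p1, v1) in zip(rows, rows[1:]):
        out.append((p1, round((v1 - v0) / v0 * 100, 2)))
    return out
```

The LLM then receives both the raw rows and the precomputed derivations, so its job shrinks from "do the math" to "explain the math", which is where smaller open models tend to be far more reliable.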

Problems:

  • Slow response
  • Need a free/open-source alternative to Gemini
  • Want strong reasoning + calculation capability from the model

Questions:

  1. How can I improve accuracy and reasoning for complex, multi-parameter queries in this setup?
  2. Which free/open-source LLMs + architectures can match Gemini-level reasoning (including calculations and derived insights)?
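On question 1, one widely used technique is a self-correction loop: execute the generated SQL, and if the database raises an error, feed the error message back to the model for a repair attempt. This is a generic sketch under the same stub conventions as above (`llm` and `run_query` are injected callables; names and prompts are assumptions):

```python
# Self-correction loop: retry generated SQL, feeding DB errors back to the model.
# `llm` and `run_query` are injected callables; illustrative, not the post's code.

def sql_with_repair(question, llm, run_query, max_tries=3):
    prompt = f"Write PostgreSQL for: {question}\nReturn only SQL."
    for _ in range(max_tries):
        sql = llm(prompt).strip().rstrip(";")
        try:
            return sql, run_query(sql)  # success: return SQL and its rows
        except Exception as err:
            # Give the model the exact database error and ask for a fix
            prompt = (f"This SQL failed:\n{sql}\nError: {err}\n"
                      "Return a corrected SELECT statement only.")
    raise RuntimeError("could not produce runnable SQL")
```

A bounded retry count keeps latency predictable; pairing this with schema-grounded prompts and a read-only database role is a common combination in text-to-SQL setups.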

Tech: Python, PostgreSQL

Any suggestions or real-world approaches would really help 🙏

submitted by /u/Past-Geologist4108