Running a non-profit that needs to OCR 64 million pages. Where can I apply for free or subsidized compute to run a local model?

Reddit r/LocalLLaMA / 4/10/2026

💬 Opinion · Developer Stack & Infrastructure · Signals & Early Trends · Tools & Practical Usage

Key Points

  • A nonprofit managing a very large OCR workload (64 million pages) is seeking free or subsidized compute options after running out of credits from a prior provider.
  • The request focuses on alternatives that would enable running or supporting local OCR and/or local language-model workflows rather than relying solely on paid cloud credits.
  • It implicitly highlights cost and scalability constraints for large-scale document processing, motivating compute sources such as grants, credits, or community programs.
  • The user is asking the community for practical guidance on where to apply for assistance with compute so they can build a searchable knowledge base.
  • The post centers on operational decision-making for infrastructure planning under tight budgets (compute procurement, OCR throughput, and feasibility of local execution).

I'm running a not-for-profit and need to OCR 64 million pages to build a knowledge base. We don't have the funding; we had been using a Vast instance for OCR but recently ran out of credits. What are some alternatives where I can apply for compute?
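For anyone weighing whether a job this size is tractable on donated or marketplace GPUs, a quick back-of-envelope estimate helps frame the ask. The sketch below is illustrative only: the per-GPU throughput, hourly rate, and GPU count are assumptions, not benchmarks of any particular OCR model or provider.

```python
# Back-of-envelope feasibility estimate for a 64M-page OCR job.
# All rates below are illustrative assumptions, not measured figures.

TOTAL_PAGES = 64_000_000
PAGES_PER_SEC_PER_GPU = 2.0   # assumed throughput of a local OCR model
GPU_HOURLY_RATE_USD = 0.40    # assumed marketplace price per GPU-hour
NUM_GPUS = 8                  # assumed number of parallel workers

gpu_seconds = TOTAL_PAGES / PAGES_PER_SEC_PER_GPU
gpu_hours = gpu_seconds / 3600
wall_clock_days = gpu_hours / NUM_GPUS / 24
cost_usd = gpu_hours * GPU_HOURLY_RATE_USD

print(f"GPU-hours needed: {gpu_hours:,.0f}")
print(f"Wall-clock days on {NUM_GPUS} GPUs: {wall_clock_days:,.1f}")
print(f"Estimated cost: ${cost_usd:,.0f}")
```

Under these assumptions the job needs roughly 8,900 GPU-hours (about 46 days on 8 GPUs, around $3,600 at the assumed rate), which is useful context when sizing a grant or credit application.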

submitted by /u/thereisnospooongeek