Right now I’m trying to build a workflow for extracting content from recent AI research papers (mostly arXiv PDFs) so I can speed up reading, indexing, and note-taking.
The catch is: these papers are not “clean text” documents. They usually include:
- Dense mathematical formulas (often LaTeX-heavy)
- Multi-column layouts
- Complex tables
- Figures/diagrams embedded with captions
- Mixed reading order issues
So for me, plain OCR accuracy is not enough—I care a lot about structure + formulas + layout consistency.
I’ve been experimenting and reading about some projects, such as:
FireRed-OCR
Looks promising for document-level OCR with better structure awareness. I’ve seen people mention it performs reasonably well on complex layouts, though I’m still unclear how robust it is on math-heavy papers.
DeepSeek-OCR
Interesting direction, especially with the broader DeepSeek ecosystem pushing multimodal understanding. Curious if anyone has used it specifically for academic PDFs with formulas—does it actually preserve LaTeX-quality output or is it more “semantic transcription”?
MonkeyOCR
This one caught my attention because it seems lightweight and relatively easy to deploy. But I’m not sure how it performs on scientific papers vs more general document OCR.
I’m thinking of running a small benchmark myself: pick around 20 recent arXiv papers with different layouts, compare how well each model extracts plain text, formulas, and tables, and measure both accuracy and the amount of post-processing effort required.
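For the accuracy side of that benchmark, a simple starting point is a character-level similarity score between each model's output and a hand-made reference transcription per paper. Here is a minimal sketch using only the standard library; the model names, file layout, and the `score_paper` helper are my own placeholders, not part of any of these tools:

```python
# Minimal scoring sketch for the benchmark idea above.
# Assumes each model's extraction and a reference transcription
# are available as plain-text strings (hypothetical setup).
from difflib import SequenceMatcher


def similarity(reference: str, hypothesis: str) -> float:
    """Character-level similarity in [0, 1] via difflib's ratio."""
    return SequenceMatcher(None, reference, hypothesis).ratio()


def score_paper(reference: str, outputs: dict[str, str]) -> dict[str, float]:
    """Score each model's extraction against the reference text."""
    return {
        model: round(similarity(reference, text), 3)
        for model, text in outputs.items()
    }


if __name__ == "__main__":
    ref = r"E = mc^2 holds for a body at rest."
    outputs = {
        "model_a": r"E = mc^2 holds for a body at rest.",
        "model_b": r"E = mc2 holds for a bodv at rest.",
    }
    print(score_paper(ref, outputs))
```

A raw similarity ratio won't capture everything (e.g. a formula transcribed as valid but differently-formatted LaTeX would be penalized), so formula and table regions probably need to be scored separately, but it gives a cheap first-pass ranking before eyeballing the outputs.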
Could you guys take a look at the models above and let me know which ones are actually worth testing?