Hey everyone! I've been working on this for months, and today's the day. MacinAI Local is a complete local AI inference platform that runs natively on classic Macintosh hardware, no internet required.

What makes this different from previous retro AI projects: every "AI on old hardware" project I've seen (llama98.c on Windows 98, llama2.c64 on the Commodore 64, llama2 on DOS) ports Karpathy's llama2.c with a single tiny 260K-parameter model. MacinAI Local is a ground-up platform:
The demo hardware: a PowerBook G4 Titanium (2002), 1GHz G4, 1GB RAM, running Mac OS 9.2.2.

Real-hardware performance (PowerBook G4 1GHz, Mac OS 9.2, all Q8):
Technical specs:
What's next: getting the 68040 build running on a 1993 LC 575 / Color Classic Mystic. The architecture already supports it; I just need the hardware in hand.

Demo: https://youtu.be/W0kV_CCzTAM
Technical write-up: https://oldapplestuff.com/blog/MacinAI-Local/

Happy to answer any technical questions. I've got docs on the AltiVec optimization journey (including finding a CodeWarrior compiler bug along the way), the training pipeline, and the model export process. Thanks for the read!
Running TinyLlama 1.1B locally on a PowerBook G4 from 2002. Mac OS 9, no internet, installed from a CD.
Reddit r/LocalLLaMA / 3/20/2026
📰 News · Tools & Practical Usage
Key Points
- MacinAI Local is a complete local AI inference platform that runs natively on classic Macintosh hardware with Mac OS 9 and no internet access required.
- It is model-agnostic and supports GPT-2 (124M), TinyLlama, Qwen (0.5B), SmolLM, and other HuggingFace/LLaMA-architecture models via a Python export script.
- The project uses a custom C89 inference engine, a 100M-parameter Macintosh-specific transformer, and AltiVec SIMD optimizations that deliver roughly a 7.3x speedup on the PowerPC G4, reaching 0.33 seconds per token with Q8 quantization.
- Disk paging enables running inference on machines with limited RAM by streaming layers from disk, demonstrated on a 1GB RAM PowerBook G4.
- Agentic Mac control allows the model to generate AppleScript for launching apps, managing files, and automating system tasks, with a safety confirmation before execution.
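The post says HuggingFace/LLaMA-architecture checkpoints are converted for the C89 engine by a Python export script. The sketch below shows one plausible shape for such an export: flattening named float32 tensors into a single binary with a small header. The magic number, header layout, and tensor name are invented for illustration; the project's real format is not documented here.

```python
import os
import struct
import tempfile

import numpy as np

MAGIC = 0x4D414349  # "MACI" as ASCII bytes; invented for this sketch

def export_model(tensors, path):
    """Write {name: float32 array} as one flat little-endian binary.

    Layout (hypothetical): magic, tensor count, then per tensor a
    length-prefixed UTF-8 name, an element count, and raw float32 data.
    Note np.tofile emits native byte order, so this sketch assumes a
    little-endian host; a real exporter would pick one order explicitly.
    """
    with open(path, "wb") as f:
        f.write(struct.pack("<II", MAGIC, len(tensors)))
        for name, arr in tensors.items():
            data = np.ascontiguousarray(arr, dtype=np.float32)
            nb = name.encode("utf-8")
            f.write(struct.pack("<I", len(nb)))
            f.write(nb)
            f.write(struct.pack("<I", data.size))
            data.tofile(f)

# Export a toy checkpoint and read back the header to sanity-check it.
path = os.path.join(tempfile.mkdtemp(), "model.bin")
export_model({"tok_embeddings": np.zeros((4, 8), dtype=np.float32)}, path)
with open(path, "rb") as f:
    magic, count = struct.unpack("<II", f.read(8))
print(hex(magic), count)
```

A fixed flat layout like this is what makes a plain C89 loader practical: the engine can read the file sequentially with `fread` and needs no JSON or pickle parsing at inference time.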
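The benchmarks above are all at Q8, i.e. 8-bit integer weights. A common Q8 scheme stores an int8 value per weight plus one float scale per block; the block size of 32 below is an assumption for illustration, not necessarily what MacinAI Local uses.

```python
import numpy as np

def quantize_q8(weights, block=32):
    """Quantize float32 weights to int8 with one float scale per block.

    Generic Q8-style quantization for illustration; MacinAI Local's
    actual block size and layout are not documented in the post.
    """
    flat = weights.astype(np.float32).ravel()
    flat = np.pad(flat, (0, (-len(flat)) % block))  # pad to a whole block
    blocks = flat.reshape(-1, block)
    # Map each block's largest magnitude onto the int8 range [-127, 127].
    scales = np.abs(blocks).max(axis=1) / 127.0
    scales[scales == 0] = 1.0  # avoid divide-by-zero on all-zero blocks
    q = np.round(blocks / scales[:, None]).astype(np.int8)
    return q, scales

def dequantize_q8(q, scales):
    return (q.astype(np.float32) * scales[:, None]).ravel()

w = np.random.randn(4, 32).astype(np.float32)
q, s = quantize_q8(w)
w_hat = dequantize_q8(q, s).reshape(4, 32)
print(np.max(np.abs(w - w_hat)))  # worst-case error is about scale/2 per block
```

The payoff on a RAM-starved machine is direct: weights shrink roughly 4x versus float32, and the int8 dot products are also a good fit for AltiVec's 16-lane vector integer units.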
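The disk-paging point can be sketched in a few lines: if each layer's weights sit contiguously in one file, a forward pass only ever needs one layer resident in RAM, seeking and reading the rest on demand. Everything below (layer count, sizes, file layout, the stand-in "layer computation") is a toy assumption, not the engine's real code.

```python
import os
import tempfile

import numpy as np

# Hypothetical layout: each layer's weights stored back-to-back in one file.
N_LAYERS, LAYER_FLOATS = 4, 1024  # toy sizes for illustration

def write_checkpoint(path):
    with open(path, "wb") as f:
        for i in range(N_LAYERS):
            np.full(LAYER_FLOATS, float(i), dtype=np.float32).tofile(f)

def load_layer(path, idx):
    """Seek to and read a single layer's weights from disk on demand."""
    nbytes = LAYER_FLOATS * 4
    with open(path, "rb") as f:
        f.seek(idx * nbytes)
        return np.frombuffer(f.read(nbytes), dtype=np.float32)

path = os.path.join(tempfile.mkdtemp(), "model.bin")
write_checkpoint(path)

hidden = np.zeros(LAYER_FLOATS, dtype=np.float32)
for i in range(N_LAYERS):      # forward pass: page in one layer at a time
    w = load_layer(path, i)    # only one layer resident, not the whole model
    hidden = hidden + w        # stand-in for the real transformer layer
print(hidden[0])
```

The cost is one sequential disk read per layer per token, which is why paged inference is so much slower than fully-resident inference; the win is that model size is bounded by disk, not RAM.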
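The safety confirmation on agentic Mac control can be illustrated as a thin gate between the model's output and the system. This is a guess at the shape of that check, not MacinAI Local's implementation; `osascript` is the modern-macOS AppleScript runner used here as a stand-in (Mac OS 9 itself would dispatch scripts differently).

```python
import subprocess

def run_applescript(script, confirm=input):
    """Gate model-generated AppleScript behind explicit user approval.

    Shows the script verbatim, then executes it only on an explicit "y".
    Hypothetical sketch of the safety step described in the post.
    """
    print("Model-generated AppleScript:\n" + script)
    if confirm("Execute? [y/N] ").strip().lower() != "y":
        print("Declined; nothing was executed.")
        return False
    subprocess.run(["osascript", "-e", script], check=True)
    return True

# Decline by default in this demo so nothing actually runs.
ran = run_applescript('tell application "Finder" to activate',
                      confirm=lambda prompt: "n")
```

Defaulting to "no" and echoing the exact script before execution are the two properties that matter: the user approves what will run, not a paraphrase of it.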
Related Articles

Reduce the burden on veterans training junior engineers: generating PLC-control "ladder diagrams" with AI
Nikkei XTECH

Your AI generated code is "almost right", and that is actually WORSE than it being "wrong".
Dev.to

Lessons from Academic Plagiarism Tools for SaaS Product Development
Dev.to

Windsurf’s New Pricing Explained: Simpler AI Coding or Hidden Trade-Offs?
Dev.to

Building Production RAG Systems with PostgreSQL: Complete Implementation Guide
Dev.to