AI Navigate

Running 8B Llama locally on Jetson Orin Nano (using only 2.5GB of memory)

Reddit r/LocalLLaMA / 3/13/2026

📰 News · Developer Stack & Infrastructure · Tools & Practical Usage · Models & Research

Key Points

  • A Reddit user claims to run an 8B Llama model locally on a Jetson Orin Nano using only about 2.5 GB of memory.
  • The post links to the Reddit submission and a video/demo, pointing to a working edge deployment rather than a purely theoretical claim.
  • This suggests potential for running mid-size LLMs on low-memory edge devices, expanding on-device AI possibilities.
  • The summary notes feasibility, but it does not itself provide detailed benchmarks or reproducible steps.
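
The claimed footprint implies aggressive quantization: at 16-bit precision, 8B parameters need roughly 16 GB for the weights alone, so fitting in ~2.5 GB points to something near 2.5 bits per weight (and/or memory-mapped weights so not everything is resident). A back-of-envelope sketch of weight sizes at different precisions; the parameter count and bit widths below are illustrative assumptions, not figures from the post:

```python
def approx_weight_gb(n_params: float, bits_per_weight: float) -> float:
    """Approximate size of model weights in decimal GB: params * bits / 8 bytes."""
    return n_params * bits_per_weight / 8 / 1e9

# 8B parameters at common precisions (illustrative, not from the Reddit post)
for bits in (16, 8, 4, 2.5):
    print(f"{bits:>4} bits/weight -> {approx_weight_gb(8e9, bits):.1f} GB")
# 16.0, 8.0, 4.0, and 2.5 GB respectively
```

Note that KV cache and runtime buffers add overhead on top of the weights, so a "2.5 GB total" figure would likely also assume a small context window or weights served via mmap rather than fully loaded into RAM.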