AI Navigate

Examine a codebase for anything suspicious or malicious?

Reddit r/LocalLLaMA / 3/12/2026

💬 Opinion · Tools & Practical Usage

Key Points

  • A user without engineering background asks whether LLMs can be used to audit a repository for suspicious or malicious code before use.
  • They highlight the challenge of manually inspecting every file in a project to ensure safety.
  • The post discusses the potential of AI-assisted code review as a practical solution for non-experts.
  • It points to a Reddit discussion about applying AI tools to vet open-source code before adoption and asks for feasibility and best practices.

I often see interesting projects here on LocalLLaMA and elsewhere on GitHub, but I'm afraid to try them: I'm not an engineer, and in any case I can't read every single file to check for possible malicious code. Since we have LLMs now, I was wondering whether it would be possible for a 'normal' user to use them to check a repo before using it? Thanks in advance!
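One way such a check could work in practice is a small script that walks the repo, sends each source file to a locally running LLM, and prints anything the model flags. The sketch below is illustrative, not a vetted tool: the server URL assumes an OpenAI-compatible endpoint (as exposed by e.g. Ollama or llama.cpp), and the model name, extension list, and prompt wording are all assumptions you would adjust.

```python
# Minimal sketch of an LLM-assisted repo audit. Assumes a local
# OpenAI-compatible chat endpoint (e.g. Ollama at this URL); the model
# name, scanned extensions, and prompt are illustrative choices.
import json
import urllib.request
from pathlib import Path

BASE_URL = "http://localhost:11434/v1/chat/completions"  # assumed local server
MODEL = "llama3"                                         # assumed model name

# File extensions worth scanning; binaries and assets are skipped.
CODE_EXTS = {".py", ".js", ".ts", ".sh", ".ps1", ".rb", ".go", ".rs", ".c", ".cpp"}

def collect_files(repo_dir, max_bytes=20_000):
    """Yield (path, text) for source files truncated to fit a prompt."""
    for path in sorted(Path(repo_dir).rglob("*")):
        if path.is_file() and path.suffix in CODE_EXTS:
            data = path.read_bytes()[:max_bytes]
            yield path, data.decode("utf-8", errors="replace")

def build_prompt(path, text):
    """Ask for concrete red flags rather than a blanket 'is this safe?'."""
    return (
        f"Review this file ({path}) for red flags: network calls to unknown "
        "hosts, obfuscated strings, eval/exec on downloaded data, reads of "
        "credentials or SSH keys, or suspicious install-time scripts. "
        "Reply NONE if nothing stands out.\n\n" + text
    )

def ask_llm(prompt):
    """POST one prompt to the local server and return the model's reply."""
    body = json.dumps({
        "model": MODEL,
        "messages": [{"role": "user", "content": prompt}],
    }).encode()
    req = urllib.request.Request(
        BASE_URL, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]

if __name__ == "__main__":
    import sys
    for path, text in collect_files(sys.argv[1]):
        print(f"--- {path} ---")
        print(ask_llm(build_prompt(path, text)))
```

Note the design choice in the prompt: asking the model to list specific red flags is more reliable than asking it to certify a repo as "safe", and a NONE reply still deserves skepticism — an LLM pass is a first filter, not a substitute for sandboxing or a human review.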

submitted by /u/TheGlobinKing