I built an autonomous AI Courtroom using Llama 3.1 8B and CrewAI running 100% locally on my 5070 Ti. The agents debate each other through contextual collaboration.

Reddit r/LocalLLaMA / 3/23/2026

📰 News · Developer Stack & Infrastructure · Tools & Practical Usage

Key Points

  • The author demonstrates a fully offline, local AI courtroom system named AI-Court Supreme, built with Llama 3.1 8B and CrewAI on an RTX 5070 Ti, that performs multi-agent reasoning.
  • The system uses three agents: Chief Prosecutor, Defense Attorney, and Chief Presiding Judge, who collaborate contextually to generate indictments, identify legal loopholes, and synthesize judgments.
  • It runs entirely on consumer hardware (Ryzen 7 7800X3D, 32GB RAM) without cloud access, highlighting feasibility of local deployment for complex AI workflows.
  • The creator invites feedback and suggestions, framing the results as surprisingly strong for an 8B parameter model and seeking community discussion.

Salutations, I am Ali Suat, 15 years old, and I have been working in deep learning and autonomous systems for approximately four years. Today, I would like to introduce a multi-agent reasoning project I am running on local hardware: AI-Court Supreme.

My objective with this project was to evaluate how consistently a local large language model, Llama 3.1 8B, could manage complex legal and technical processes within an agentic architecture. I established a hierarchical workflow using the CrewAI framework.

How the system operates:

Contextual Collaboration: I defined three distinct autonomous agents: a Chief Prosecutor, a Defense Attorney, and a Chief Presiding Judge.

When the Prosecutor creates an indictment, the Defense Attorney takes this output as context and, through semantic analysis, identifies technical or legal loopholes, such as algorithmic deviation or lack of intent, and produces a counter-argument.

In the final stage, the Judge agent synthesizes data from both parties to perform a logical inference and pronounce the final judgment.
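The three-stage flow above can be sketched without any framework. This is a minimal, framework-free illustration of the data flow only; the agent behaviors are stand-in string templates, and the real project uses CrewAI agents backed by a local Llama 3.1 8B, with each agent's generated text passed to the next agent as context.

```python
# Sketch of the "contextual collaboration" pipeline: each stage consumes
# the previous stage's output as context. The case text and the canned
# responses below are illustrative placeholders, not model output.

def prosecutor(case: str) -> str:
    # Stage 1: draft an indictment from the case description.
    return f"INDICTMENT: based on '{case}', the defendant is charged."

def defense(case: str, indictment: str) -> str:
    # Stage 2: read the indictment as context and raise a counter-argument
    # (e.g. algorithmic deviation or lack of intent).
    return f"COUNTER-ARGUMENT: the charge in [{indictment}] ignores lack of intent."

def judge(indictment: str, counter: str) -> str:
    # Stage 3: synthesize both parties' submissions into a judgment.
    return f"JUDGMENT: weighed [{indictment}] against [{counter}]."

def run_court(case: str) -> dict:
    # Sequential process: Prosecutor -> Defense Attorney -> Judge.
    ind = prosecutor(case)
    cnt = defense(case, ind)
    return {"indictment": ind, "counter": cnt, "judgment": judge(ind, cnt)}

record = run_court("unauthorized distribution of model weights")
for stage, text in record.items():
    print(stage, "->", text)
```

In the real system each `return` would be replaced by an LLM call, but the shape of the pipeline, with prior outputs threaded in as context, stays the same.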

An 8B-parameter model demonstrating this level of reasoning, particularly in the cross-examination simulation, significantly exceeded my expectations. Your feedback on this completely local, offline agentic workflow would be extremely valuable to me.

Hardware Stack:

GPU: NVIDIA RTX 5070 Ti

CPU: AMD Ryzen 7 7800X3D

Memory: 32GB DDR5

I am open to your development suggestions and technical inquiries; let's brainstorm in the comments section!

submitted by /u/avariabase0