GitHub stars are a trap.
Not in the "popularity doesn't equal quality" sense — that's obvious. I mean something more specific: for a daily AI tools digest, stars are a lagging signal that made my feed worse, and it took me three months to notice.
Here's the decision I made last week: I removed GitHub stars as a ranking input for ai-tldr.dev entirely. This is what happened, and what I replaced it with.
The problem with stars
When I built the first version of the digest, I used stars as a tiebreaker. Two repos released the same week? The one with more stars floats up. Seemed reasonable.
Except stars accumulate over time, and they spike on launch day. A repo that launched 18 months ago with a big HN post has 12,000 stars. A genuinely useful tool that shipped last Tuesday has 340. The older one looks more important in every query.
The result: my "recent AI tools" section kept surfacing things that were already known. The digest was becoming a remix of what everyone already saw six months ago, just slightly repackaged.
I ran a rough audit: of the last 60 tools I'd surfaced that had 5,000+ stars, about 40% were already covered by at least two major newsletters before I picked them up. I was late to the obvious and missing the genuinely new.
What I'm using instead
The fix was simpler than I expected. I switched to three signals:
Recency weight — posts from the last 7 days get a strong boost regardless of star count. If it's new, it gets a chance.
Commit velocity — a repo with 12 commits in the last 10 days on a 2-month-old project is more interesting than a stable 3-year-old one. Stars don't capture this at all.
Category freshness — I track which categories I've covered recently. If I've done four "LLM fine-tuning" posts this week, a fifth needs a higher quality bar regardless of signals.
Stars still exist in the system, but they're a soft penalty for very low engagement (under 10 stars after 2 weeks), not a reward for being famous.
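A minimal sketch of that scoring in Python. The weights, field names, and thresholds here (the 2x recency boost, the 0.5 penalty factor, `commits_last_10_days`, and so on) are my illustrative assumptions, not the actual ai-tldr.dev implementation:

```python
from datetime import datetime, timedelta, timezone

# All numbers below are illustrative guesses, not the real config.
RECENCY_WINDOW_DAYS = 7
RECENCY_BOOST = 2.0
LOW_STAR_THRESHOLD = 10   # "under 10 stars after 2 weeks"
LOW_STAR_AGE_DAYS = 14
LOW_STAR_PENALTY = 0.5

def score(repo: dict, recent_category_counts: dict) -> float:
    """Composite ranking score: recency boost, commit velocity,
    category freshness, and a soft low-engagement star penalty."""
    now = datetime.now(timezone.utc)
    age_days = (now - repo["created_at"]).days

    s = 1.0

    # Recency: new repos get a strong boost regardless of stars.
    if age_days <= RECENCY_WINDOW_DAYS:
        s *= RECENCY_BOOST

    # Commit velocity: commits per day over the last 10 days.
    # A young, active repo beats a stable old one on this axis.
    velocity = repo["commits_last_10_days"] / 10
    s *= 1.0 + velocity

    # Category freshness: every recent post in the same category
    # shrinks the score, raising the bar for a repeat topic.
    covered = recent_category_counts.get(repo["category"], 0)
    s /= 1.0 + 0.5 * covered

    # Stars as a soft penalty only: very low engagement after two
    # weeks dampens the score; being famous earns nothing extra.
    if age_days >= LOW_STAR_AGE_DAYS and repo["stars"] < LOW_STAR_THRESHOLD:
        s *= LOW_STAR_PENALTY

    return s
```

The key design choice this sketch tries to show: star count never appears as a multiplier on the upside, so a 12,000-star repo and a 340-star repo start from the same baseline and compete only on recency and activity.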
What changed
It's only been a week, so I can't claim results. But subjectively: the daily feed feels sharper. I'm picking up things I wouldn't have touched before — smaller tools, specific-use repos, things that solve one problem well rather than trying to be infrastructure.
Whether readers notice, I don't know yet. But the editorial instinct feels better.
The broader lesson: signals that made sense at the start of a project can calcify into biases. Worth auditing them even when things seem fine.
