I stopped using GitHub stars to rank AI tools. Here's what I use instead.

Dev.to / 5/9/2026

💬 Opinion · Signals & Early Trends · Ideas & Deep Analysis · Tools & Practical Usage

Key Points

  • GitHub stars do measure popularity, but for a daily digest covering what's recent in AI tools they tend to be a lagging indicator that resurfaces already widely known information, so they were dropped as a ranking input.
  • An audit of the impact showed that about 40% of the previously featured high-star (5,000+) tools had already been covered by multiple major newsletters: late to the obvious, and missing genuinely new finds.
  • As a replacement, the ranking switched to three signals: (1) a strong boost for posts from the last 7 days (recency weight), (2) the rate of recent commits (commit velocity), and (3) a quality bar adjusted by how often a category has been covered lately (category freshness).
  • Stars stay in the system, but as a soft penalty for extreme low engagement, such as under 10 stars after 2 weeks, rather than a reward.
  • With only a week elapsed, no quantitative results can be claimed yet, but subjectively the feed feels sharper and now picks up smaller, specific-use tools it previously passed over. The post also stresses regular audits, since metrics that looked reasonable early on can calcify into biases.

GitHub stars are a trap.

Not in the "popularity doesn't equal quality" sense — that's obvious. I mean something more specific: for a daily AI tools digest, stars are a lagging signal that made my feed worse, and it took me three months to notice.

Here's the decision I made last week: I removed GitHub stars as a ranking input for ai-tldr.dev entirely. This is what happened, and what I replaced it with.

The problem with stars

When I built the first version of the digest, I used stars as a tiebreaker. Two repos released the same week? The one with more stars floats up. Seemed reasonable.
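
For the record, that old logic was essentially a sort with stars as the secondary key. A minimal sketch of the idea (field names are hypothetical, not the actual ai-tldr.dev code):

```python
def rank(candidates: list[dict]) -> list[dict]:
    """Original ordering: newest release week first, then stars as the
    tiebreaker within the same week. This is exactly the shape that lets
    old high-star launches dominate whenever recency is coarse."""
    return sorted(
        candidates,
        key=lambda repo: (repo["release_week"], repo["stars"]),
        reverse=True,
    )
```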

Except stars accumulate over time, and they spike on launch day. A repo that launched 18 months ago with a big HN post has 12,000 stars. A genuinely useful tool that shipped last Tuesday has 340. The older one looks more important in every query.

The result: my "recent AI tools" section kept surfacing things that were already known. The digest was becoming a remix of what everyone already saw six months ago, just slightly repackaged.

I ran a rough audit: of the last 60 tools I'd surfaced that had 5,000+ stars, about 40% were already covered by at least two major newsletters before I picked them up. I was late to the obvious, and missing the actually new.

What I'm using instead

The fix was simpler than I expected. I switched to three signals:

Recency weight — posts from the last 7 days get a strong boost regardless of star count. If it's new, it gets a chance.
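
As a rough illustration (the post doesn't give the actual weights, so these numbers are invented):

```python
from datetime import datetime, timezone

def recency_boost(published_at: datetime) -> float:
    """Flat strong boost inside the 7-day window, neutral outside.
    The 2.0 multiplier is an illustrative guess, not the real weight.
    Assumes `published_at` is timezone-aware."""
    age_days = (datetime.now(timezone.utc) - published_at).days
    return 2.0 if age_days <= 7 else 1.0
```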

Commit velocity — a repo with 12 commits in the last 10 days on a 2-month-old project is more interesting than a stable 3-year-old one. Stars don't capture this at all.
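
One way to measure this is the GitHub REST API's commits endpoint. A sketch, unauthenticated and unpaginated for brevity (so counts cap at 100; real use needs a token):

```python
import requests
from datetime import datetime, timedelta, timezone

def recent_commit_count(owner: str, repo: str, window_days: int = 10) -> int:
    """Commits in the last `window_days`, via GET /repos/{owner}/{repo}/commits."""
    since = (datetime.now(timezone.utc) - timedelta(days=window_days)).isoformat()
    resp = requests.get(
        f"https://api.github.com/repos/{owner}/{repo}/commits",
        params={"since": since, "per_page": 100},
        headers={"Accept": "application/vnd.github+json"},
        timeout=10,
    )
    resp.raise_for_status()
    return len(resp.json())
```

Dividing that count by the repo's age would give the "young and moving fast" signal described above, which is invisible to a star count.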

Category freshness — I track which categories I've covered recently. If I've done four "LLM fine-tuning" posts this week, a fifth needs a higher quality bar regardless of signals.
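
A minimal sketch of that escalating bar (the base threshold and step are made-up numbers):

```python
from collections import Counter

def quality_bar(category: str, covered_this_week: list[str],
                base: float = 0.5, step: float = 0.1) -> float:
    """Each prior post in a category this week raises its bar.
    With these illustrative values, four "LLM fine-tuning" posts
    push a fifth from 0.5 to 0.9."""
    return base + step * Counter(covered_this_week)[category]
```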

Stars still exist in the system, but they're a soft penalty for very low engagement (under 10 stars after 2 weeks), not a reward for being famous.
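
Expressed as code, that inversion looks roughly like this (the 0.5 multiplier is a guess):

```python
def star_penalty(stars: int, repo_age_days: int) -> float:
    """Soft penalty for near-zero engagement; never a reward.
    Under 10 stars after 2 weeks halves the score (illustrative)."""
    return 0.5 if repo_age_days >= 14 and stars < 10 else 1.0
```

The point of that shape: a famous repo gets 1.0, the same as an unknown one, so fame buys nothing. Only silence costs.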

What changed

It's only been a week, so I can't claim results. But subjectively: the daily feed feels sharper. I'm picking up things I wouldn't have touched before — smaller tools, specific-use repos, things that solve one problem well rather than trying to be infrastructure.

Whether the readers notice, I don't know yet. But the editorial instinct feels better.

The broader lesson: signals that made sense at the start of a project can calcify into biases. Worth auditing them even when things seem fine.