I Automated My Morning Tech News with Claude — Here's What Worked (and What Didn't)

Dev.to / 4/24/2026

💬 Opinion · Developer Stack & Infrastructure · Tools & Practical Usage · Industry & Market Moves

Key Points

  • The author wanted tech news automatically delivered and filtered, not just a better way to manually read the news each morning.
  • Claude Cowork (Anthropic’s agentic layer on top of Claude Desktop) runs scheduled tasks with access to local files, connected tools (e.g., Confluence, Outlook, SharePoint, web search), enabling recurring “briefings.”
  • They built three weekday morning briefings—AI Pulse, Cloud Pulse, and Tech Stack Pulse—that write to a rolling Confluence page, deduplicate by workflow, and automatically delete entries older than 30 days to prevent the page from growing without bound.
  • The first approach — delivering briefings via email — failed because the available Outlook/Gmail connectors are read-only by default and cannot send on the user's behalf.
  • After hitting multiple dead ends, they ultimately found an approach that reduced morning reading time to about five minutes while keeping the information current and relevant.

Every morning, I used to spend about 30 minutes reading tech news before I wrote a single line of code.

AI releases. Azure updates. Python releases. GitHub changelog. React RFCs.

All in different places. All with the same stories re-reported by five different outlets.

I'd open 20 tabs, skim until my coffee got cold, and by lunchtime I'd forgotten half of it.

So I tried to fix it — and in doing so, hit three dead ends before landing on something that actually works.

This is that story.

The shift I was really after

I didn't want a better news reader.

I wanted news brought to me, already filtered, already deduplicated, waiting when I sat down.

Less "Claude, help me read the news."

More "Claude, read the news for me. I'll check what matters."

That distinction turns out to be the whole game.

What Claude Cowork actually is

For those who haven't tried it yet: Cowork is Anthropic's agentic layer on top of Claude Desktop.

It's not a chatbot.

It runs tasks on your machine with access to your files, your connected tools (Confluence, Outlook, SharePoint, web search), and — the important part — a scheduler.

You give it a prompt once. You pick an interval (hourly, daily, weekly). It runs on its own.

The whole thing is basically: Claude Code, but for everyday non-coding work.

What I ended up building

Three daily briefings, running 15 minutes apart every weekday morning:

| Time | Briefing | Focus |
|------|----------|-------|
| 9:00 AM | AI Pulse | Model releases, research papers, funding, regulation |
| 9:15 AM | Cloud Pulse | Azure services, GitHub Copilot, Actions, security advisories |
| 9:30 AM | Tech Stack Pulse | Python, TypeScript, React, Flask, FastAPI, VS Code |

Each one writes directly to a rolling Confluence page.

Today's entry lands at the top. Anything older than 30 days is automatically deleted. The page never grows unbounded.

Total reading time on a good morning: about 5 minutes.

Why email was the obvious choice — and the wrong one

My first instinct was: just email it to me.

Both the Outlook and Gmail connectors exist. Both plug into Claude cleanly. Problem solved, right?

No.

Both are read-only by default. They can search your inbox, they can draft emails, but they cannot send on your behalf. This is deliberate — Anthropic ships these connectors with write permissions locked down to prevent an agent from accidentally mailing every contact you have.

Which is the right call. But it kills the "daily digest arrives in my inbox" pattern dead.

The third-party workaround (and why I skipped it)

There are workarounds.

Services like Composio offer hosted MCP servers with full send capability. They're SOC 2 certified. Real company. Reasonable choice for many.

But you're handing a new vendor OAuth access to your mailbox. That's a genuine security decision, not a checkbox.

I evaluated it honestly and decided:

  • For a non-critical mailbox and a personal digest — fine
  • For a work account with anything remotely sensitive — not worth it

So I went looking for a tool I already trusted.

The unexpected winner: Confluence

Our Atlassian connector can read and write pages.

No new vendor. No extra OAuth grant. No IT approval cycle. Just a connector that was already there, trusted, audited.

And honestly? Confluence turned out to be a better destination than email:

  • Searchable — six months from now I can find the story I vaguely remember
  • Archivable — the 30-day window keeps things tidy
  • Accessible from anywhere — laptop, mobile app, web
  • No inbox clutter — a daily email slowly becomes noise; a page stays a page

Sometimes the best tool is the one you already have.
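For the curious, a page write in the Confluence Cloud REST API is a `PUT` to `/wiki/rest/api/content/{id}` with an incremented version number — the connector handles this for you, but a minimal sketch of the payload shape (the IDs and titles here are placeholders, not values from my setup) looks like this:

```python
def build_update_payload(page_id: str, title: str,
                         new_body: str, current_version: int) -> dict:
    # Payload shape for PUT /wiki/rest/api/content/{id} in the
    # Confluence Cloud REST API. Treat the details as an assumption;
    # Confluence rejects writes that don't bump the version number.
    return {
        "id": page_id,
        "type": "page",
        "title": title,
        "version": {"number": current_version + 1},
        "body": {"storage": {"value": new_body,
                             "representation": "storage"}},
    }
```

The version bump is the part people trip over: Confluence uses it for optimistic locking, so a stale version number makes the write fail instead of silently overwriting someone else's edit.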

The prompt architecture that makes it work

A naive "get me 5 AI stories each morning" prompt produces noise within a week.

The same story gets re-covered by three outlets. A launch from Tuesday shows up again on Thursday under a different headline. You stop reading.

Four design decisions fix this:

1. Dedup against your own history.

Before searching the web, Claude reads the last 14 days of your own Confluence page. It builds an "already-seen list" of covered subjects — then skips anything already briefed.
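As a rough code analogue (the real filtering happens inside Claude's prompt, not in a script), the already-seen list is just "every subject from entries newer than the cutoff":

```python
from datetime import date, timedelta

def already_seen(entries: list[tuple[date, list[str]]],
                 today: date, window_days: int = 14) -> set[str]:
    # entries: (briefing date, subjects covered) pairs parsed from the page.
    # Anything inside the window counts as already briefed.
    cutoff = today - timedelta(days=window_days)
    return {s for d, subjects in entries for s in subjects if d >= cutoff}
```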

2. Subject-level matching, not headline-level.

"Anthropic launches Opus 4.7" and "Opus 4.7 benchmarks impress" are the same story. "Opus 4.7 launch" and "Haiku 4.7 launch" are different.

Claude matches on the underlying subject, not the wording.
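A crude code sketch of that rule — this regex only handles "Name 1.2"-style product-plus-version subjects, whereas Claude matches semantically, so take it as an illustration of the idea rather than the mechanism:

```python
import re

def subject_keys(headline: str) -> set[str]:
    # Crude stand-in for semantic matching: treat each "Name 1.2"-style
    # product+version pair as the story's subject key.
    pattern = r"[A-Z][A-Za-z]*\s+\d+(?:\.\d+)?"
    return {m.lower() for m in re.findall(pattern, headline)}

def same_story(a: str, b: str) -> bool:
    # Two headlines cover the same story if any subject key overlaps.
    return bool(subject_keys(a) & subject_keys(b))
```

Note that naive word-overlap scoring fails here: "Opus 4.7 launch" and "Haiku 4.7 launch" share more words than two differently-phrased headlines about the same release, which is exactly why matching has to key on the subject.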

3. Material updates get tagged, not skipped.

If GPT-5 launched last week and this week the API opened up — that's a genuine follow-on, not a repeat. It appears with an UPDATE: prefix so I see at a glance what's fresh vs. continuing news.

4. Self-pruning.

Every morning, before writing today's entry, the task deletes anything older than 30 days from the page.

No separate cleanup task. No archive folder. The page maintains itself.
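In code terms, the prepend-and-prune step looks roughly like this, assuming each section starts with a `=== YYYY-MM-DD ===` header line (my illustration, not the actual page markup):

```python
import re
from datetime import date, timedelta

DATE_HEADER = re.compile(r"^=== (\d{4}-\d{2}-\d{2}) ===$", re.MULTILINE)

def update_page(page: str, todays_entry: str,
                today: date, keep_days: int = 30) -> str:
    # Prepend today's section, then drop anything past the retention window.
    cutoff = today - timedelta(days=keep_days)
    parts = DATE_HEADER.split(page)  # [preamble, date1, body1, date2, body2, ...]
    kept = [f"=== {d} ==={body}"
            for d, body in zip(parts[1::2], parts[2::2])
            if date.fromisoformat(d) >= cutoff]
    return f"=== {today.isoformat()} ===\n{todays_entry}\n" + "".join(kept)
```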

Here's roughly what the prompt looks like

A simplified version of the real thing:

You are my daily AI news briefing assistant.

STEP 1 — Check what I've already briefed
Fetch the Confluence page. Read the last 14 days of entries.
Build an already-seen list of subjects covered.

STEP 2 — Search the web
Find the top AI stories from the last 24 hours.
Focus on: model releases, product launches, research papers,
funding news, regulatory developments.

STEP 3 — Filter
Skip anything in the already-seen list.
Match on subject, not headline.
Genuine updates → include with "UPDATE:" prefix.

STEP 4 — Pick the top 5 stories
Rank by impact, not recency.
Fewer than 5 is fine — do not pad.

STEP 5 — Format bullets
• [Source] — [What happened]. [Why it matters]. (link)

STEP 6 — Update the page
Prepend today's section at the top.
Delete any section older than 30 days.
Keep everything else intact.

The real prompt has a few more details — a Monday-catch-up rule so weekend releases aren't missed, explicit source-priority guidance (official blogs first, aggregators last), and error handling so the task fails loudly rather than silently.
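The Monday catch-up rule, for instance, amounts to nothing more than a wider lookback window — expressed in code rather than prompt text, it would be:

```python
from datetime import date

def lookback_hours(run_date: date) -> int:
    # Monday runs look back 72h so weekend releases aren't missed;
    # every other weekday uses the normal 24h window.
    return 72 if run_date.weekday() == 0 else 24
```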

But the shape above is the whole idea.

What a morning actually looks like now

I open one Confluence page with coffee.

Today's section is at the top, dated, with a one-line summary ("5 fresh stories, 2 updates — busy release day").

Five bullets. Each one: who did what, why it matters, link to source.

Scan in ~90 seconds. Click into anything that's interesting. Done.

If I skip a day, nothing bad happens. The next morning's briefing catches me up naturally because the dedup window is 14 days, not 1.

No mental load to maintain. No "oh I forgot to check that site." No dread of inbox clutter.

Honest limitations

This isn't a silver bullet. A few real caveats:

Cowork runs locally. The scheduled task only fires when your machine is awake and the Claude Desktop app is open. If your laptop is closed at 9 AM, the task skips until you open it — and then runs late. For truly time-critical delivery you'd need cloud execution (Claude Code Routines), not Cowork.

It consumes usage quota faster than chat. Three daily runs isn't much, but if you stack more or run hourly, you'll notice.

Quality depends on the prompt. My first version pulled too much noise. The fourth version is what I shared above. Plan to iterate for a week.

Official connectors are intentionally read-only. That's a feature, not a bug — but it means you need to think about destination before you build.

What surprised me

I started this expecting to build a smart news aggregator.

What I actually built is closer to a personal intelligence briefing service.

The difference matters. An aggregator is passive — it pulls everything and lets you filter. A briefing service is active — it decides what you need to know, how to frame it, what to leave out.

And the pattern generalizes well beyond tech news:

  • Compliance: daily summary of new regulatory updates in your sector
  • Finance: weekly market-movement digest for your specific exposure
  • Sales: morning brief on pipeline changes and upcoming renewals
  • R&D: weekly competitor-release scan

Anything with the shape "check these sources, filter against what I already know, summarize the important bits" fits.

What I'd do differently

If you're building your own version, some honest advice:

  • Start with one briefing. Not three. You'll iterate faster on one, and you won't know if you'll actually read them until you do.
  • Run it manually for a few days before scheduling. Tune the prompt against real output.
  • Pick your destination deliberately. If you already have Confluence, Slack, Notion, or any tool with write access — start there. Don't add a vendor.
  • Don't over-engineer early. I almost built an archive system on day two. I would have regretted it if I'd stopped reading the briefings after a week.

The question that stuck with me

I set out asking: can Claude automate my morning reading?

After two weeks of this running, the better question is:

What else is quietly consuming 30 minutes of my day that could show up, pre-done, at 9 AM?

The briefings were just the start.

If you've built something similar — or if this sparked an idea — I'd love to hear it.