How to Build Self-Running AI Tasks with TypeScript (No Cron Jobs Needed)

Dev.to / 4/3/2026

💬 Opinion · Developer Stack & Infrastructure · Tools & Practical Usage

Key Points

  • The article argues that scheduling AI-driven workflows hourly (e.g., summarizing support tickets) is painful when relying on cron jobs and ad-hoc Node scripts for retries, logging, and error handling.
  • It presents an alternative approach for building “self-running” AI tasks in TypeScript so the system can manage execution without external cron scheduling.
  • The tutorial frames the “old way” as involving operational concerns like failure monitoring, retry logic, and observability, which the newer pattern aims to simplify.
  • It provides a concrete mental model and code-oriented direction (TypeScript + AI call + workflow triggers) for implementing automated, repeatable AI task execution.
  • The overall takeaway is to choose an orchestration pattern that reduces operational burden while improving reliability and maintainability of recurring AI jobs.
  • The article centers on how to design infrastructure around AI task scheduling rather than on the AI model itself, emphasizing engineering workflow improvements.


You have an AI that summarizes support tickets. Now you need it to run every hour.

Do you really want to set up a cron job? Write a Node script? Handle errors? Set up retries? Build logging? Monitor failures?

There's a better way.

The Old Way: Cron + Scripts + Pain

Here's what building scheduled AI tasks traditionally looks like:

// scripts/ticket-summarizer.ts
import { OpenAI } from 'openai';
import { writeFileSync } from 'fs';

// fetchOpenTickets() and postToSlack() are your own helpers,
// omitted here for brevity.
const openai = new OpenAI({ apiKey: process.env.OPENAI_KEY });

async function summarizeTickets() {
  try {
    const tickets = await fetchOpenTickets();

    const response = await openai.chat.completions.create({
      model: 'gpt-4o-mini',
      messages: [{ role: 'user', content: `Summarize: ${JSON.stringify(tickets)}` }]
    });

    await postToSlack(response.choices[0].message.content ?? '');
    writeFileSync('/var/log/ai-runs.json', JSON.stringify({ success: true, time: Date.now() }));
  } catch (err) {
    // Retry logic?
    // Alert someone?
    // What about partial failures?
    console.error('Failed:', err);
    process.exit(1);
  }
}

summarizeTickets();

Then add a crontab:

0 * * * * cd /app && npx ts-node scripts/ticket-summarizer.ts 2>&1 | tee -a /var/log/cron.log

But wait—you need:

  • Retry logic with exponential backoff
  • Error handling that distinguishes transient vs permanent failures
  • A way to see if the job is actually running
  • Monitoring for when it fails 3 times in a row
  • A method to pause it without editing crontab
  • Logs that don't get wiped on restart

You just wanted an AI task to run every hour. Now you're managing infrastructure.
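Even the first bullet alone is nontrivial. Here's a minimal sketch of the retry-with-backoff helper you'd end up hand-rolling (all names here are illustrative, not from any library):

```typescript
// A hand-rolled retry helper with backoff — the kind of glue code
// the "old way" forces you to write and maintain yourself.

async function withRetry<T>(
  fn: () => Promise<T>,
  opts: { delaysMs: number[]; isTransient: (err: unknown) => boolean }
): Promise<T> {
  let attempt = 0;
  for (;;) {
    try {
      return await fn();
    } catch (err) {
      // Permanent errors (bad API key, bad config) should fail fast;
      // stop retrying once the delay schedule is exhausted.
      if (!opts.isTransient(err) || attempt >= opts.delaysMs.length) throw err;
      await new Promise((r) => setTimeout(r, opts.delaysMs[attempt]));
      attempt++;
    }
  }
}

// Demo: fails twice with a transient error, then succeeds.
let calls = 0;
const flaky = async () => {
  calls++;
  if (calls < 3) throw new Error('ETIMEDOUT');
  return 'ok';
};

withRetry(flaky, {
  delaysMs: [10, 20, 40], // short delays for the demo; think 30s, 1m, 5m in production
  isTransient: (e) => e instanceof Error && e.message.includes('ETIMEDOUT')
}).then((result) => console.log(result, calls)); // → ok 3
```

And this still doesn't cover scheduling, persistence, pausing, or run history — each is another chunk of glue code.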

The NeuroLink Way: TaskManager

NeuroLink v9.41.0 introduced TaskManager—scheduled, self-running AI tasks built directly into the SDK. No cron. No separate workers. No infrastructure headaches.

import { NeuroLink } from '@juspay/neurolink';

const neurolink = new NeuroLink();

// That's it. One function call.
const task = await neurolink.tasks.create({
  name: 'ticket-summarizer',
  prompt: 'Summarize these support tickets and highlight urgent issues',
  schedule: {
    type: 'cron',
    expression: '0 * * * *', // Every hour
    timezone: 'America/New_York'
  },
  mode: 'isolated',
  provider: 'openai',
  model: 'gpt-4o-mini', // Cheap model for summaries

  // Built-in retry logic
  retry: {
    maxAttempts: 3,
    backoffMs: [30000, 60000, 300000] // 30s, 1m, 5m
  },

  // Callbacks for results
  onSuccess: (result) => {
    if (result.output?.includes('URGENT')) {
      sendPagerDutyAlert(result.output);
    }
  },
  onError: (err) => {
    console.error(`Run ${err.runId} failed:`, err.error);
  }
});

console.log(`Task scheduled: ${task.id}`);
// Task runs automatically. Process stays alive.

That's it. The task:

  • ✅ Runs every hour automatically
  • ✅ Retries on transient failures (rate limits, timeouts)
  • ✅ Logs every run with timestamps
  • ✅ Survives process restarts (with Redis backend)
  • ✅ Can be paused, resumed, or deleted via API
  • ✅ Sends results to your callbacks

Three Practical Examples

Let's build three real-world self-running AI tasks.

1. Hourly Support Ticket Summarizer

import { NeuroLink } from '@juspay/neurolink';

const neurolink = new NeuroLink({
  tasks: {
    backend: 'bullmq', // Redis-backed for production
    redis: { url: process.env.REDIS_URL }
  }
});

const ticketTask = await neurolink.tasks.create({
  name: 'hourly-ticket-summary',
  prompt: `Fetch open support tickets from the last hour.
Summarize:
- Total count and priority breakdown
- Common themes and recurring issues
- Any tickets requiring immediate escalation

Format as a Slack-ready message.`,

  schedule: {
    type: 'cron',
    expression: '0 * * * *', // At the top of every hour
    timezone: 'America/New_York'
  },

  mode: 'isolated', // Fresh context each run
  provider: 'openai',
  model: 'gpt-4o-mini', // Cost-effective for summaries

  // Enable tools so AI can fetch tickets itself
  tools: true,

  onSuccess: async (result) => {
    await fetch(process.env.SLACK_WEBHOOK, {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({
        text: `🎫 Ticket Summary
${result.output}`
      })
    });
  }
});

Why this works:

  • isolated mode means each run is independent—no context pollution
  • gpt-4o-mini keeps costs low for high-frequency runs
  • Tools enabled so AI can call your ticket API directly

2. Daily Code Review Bot (Continuation Mode)

Here's where it gets interesting. Continuation mode lets the AI remember previous runs.

const codeReviewBot = await neurolink.tasks.create({
  name: 'daily-code-review',
  prompt: `Review yesterday's merged PRs. For each:
1. Identify potential bugs or security issues
2. Check for missing tests or documentation
3. Note any patterns across PRs (e.g., repeated mistakes)

Compare with your previous reviews. Are the same issues recurring?`,

  schedule: {
    type: 'cron',
    expression: '0 9 * * 1-5' // 9 AM, weekdays only
  },

  mode: 'continuation', // 🔑 AI remembers yesterday's review
  provider: 'anthropic',
  model: 'claude-sonnet-4-6',

  maxRuns: 30, // Auto-stop after a month

  onSuccess: (result) => {
    // Post to team's #code-reviews channel
    postToSlack(result.output);

    // If critical issues found, escalate
    if (result.output?.includes('CRITICAL')) {
      notifyTechLead(result.output);
    }
  }
});

How continuation mode works:

  • Run 1: "I see 3 PRs. PR #42 has a potential null pointer issue."
  • Run 2: "PR #45 also has a null pointer issue—this is the second time this week. Recommend team training on optional chaining."
  • Run 3: "No null pointer issues today. The pattern from previous runs seems resolved."

The AI builds understanding over time without any database code on your end.
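For contrast, replicating even a crude version of this by hand means persisting each run's output and folding the most recent ones back into the next prompt. A sketch of that bookkeeping (the types and helper are made up for illustration, not part of any SDK):

```typescript
// A crude hand-rolled stand-in for continuation mode: keep prior run
// outputs and prepend the most recent ones to the next prompt.
// All names here are illustrative.

interface RunRecord {
  at: string;     // when the run happened
  output: string; // what the AI produced
}

function buildPrompt(basePrompt: string, history: RunRecord[], keep = 3): string {
  const recent = history.slice(-keep); // only the last few runs fit in context
  if (recent.length === 0) return basePrompt;
  const context = recent.map((r) => `[${r.at}] ${r.output}`).join('\n');
  return `Previous reviews:\n${context}\n\n${basePrompt}`;
}

// Usage: rebuild the prompt before each run, append the output after it.
const history: RunRecord[] = [
  { at: '2026-04-01', output: 'PR #42 has a potential null pointer issue.' }
];
const prompt = buildPrompt("Review yesterday's merged PRs.", history);
console.log(prompt.startsWith('Previous reviews:')); // → true
```

You'd still need to persist `history` somewhere durable and prune it — exactly the database code that continuation mode spares you.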

3. Weekly Report Generator

const weeklyReport = await neurolink.tasks.create({
  name: 'weekly-executive-summary',
  prompt: `Generate an executive summary for leadership:

METRICS TO INCLUDE:
- API uptime and error rates (fetch from monitoring)
- Support ticket volume and resolution times
- Deployment frequency and incident count
- Key customer feedback themes

FORMAT:
- One-page executive summary
- 3 key insights with supporting data
- 2 recommendations for next week`,

  schedule: {
    type: 'cron',
    expression: '0 8 * * 1' // 8 AM every Monday
  },

  mode: 'isolated',
  provider: 'openai',
  model: 'gpt-4o', // Higher quality for executive reports

  timeout: 300_000, // 5 minutes (reports take longer)

  onSuccess: async (result) => {
    // Save to Notion
    await saveToNotion({
      title: `Weekly Report - ${new Date().toISOString()}`,
      content: result.output
    });

    // Email to leadership
    await sendEmail({
      to: 'exec-team@company.com',
      subject: 'Weekly Executive Summary',
      body: result.output
    });
  },

  onError: (err) => {
    // Alert if report generation fails
    if (!err.willRetry) {
      pagerDuty.trigger({
        severity: 'warning',
        summary: 'Weekly report failed after all retries'
      });
    }
  }
});

Understanding TaskManager's Features

Execution Modes

| Mode | Behavior | Best For |
|------|----------|----------|
| isolated | Fresh AI context every run | One-off tasks, monitoring, reports |
| continuation | AI remembers previous runs | Trend analysis, progressive workflows |

Schedule Types

// Cron (most flexible)
schedule: { type: 'cron', expression: '0 9 * * 1-5', timezone: 'America/New_York' }

// Interval (simple)
schedule: { type: 'interval', every: 5 * 60 * 1000 } // Every 5 minutes

// One-shot (run once at specific time)
schedule: { type: 'once', at: '2026-04-01T14:00:00Z' }
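Mentally, an interval schedule is just a fixed offset from the last run, and a one-shot fires once at its timestamp. A sketch of next-fire-time logic under that reading (illustrative only — not NeuroLink's actual scheduler, and cron parsing is omitted):

```typescript
// Illustrative next-fire-time logic for the 'interval' and 'once'
// schedule types above. Not NeuroLink's internals.

type Schedule =
  | { type: 'interval'; every: number } // ms between runs
  | { type: 'once'; at: string };       // ISO timestamp

function nextRun(s: Schedule, lastRunMs: number | null, nowMs: number): number | null {
  if (s.type === 'interval') {
    // Next run is one interval after the last run (or after now, if never run).
    return (lastRunMs ?? nowMs) + s.every;
  }
  // 'once': fire only if it hasn't run yet and the moment hasn't passed.
  const atMs = Date.parse(s.at);
  return lastRunMs === null && atMs > nowMs ? atMs : null;
}

const now = Date.parse('2026-04-01T12:00:00Z');
console.log(nextRun({ type: 'interval', every: 300_000 }, now, now) - now); // → 300000
```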

Built-in Retry Logic

TaskManager automatically classifies errors:

Transient (will retry):

  • Rate limit exceeded
  • Network timeout
  • 5xx server errors

Permanent (task fails):

  • Invalid API key
  • Model not found
  • Bad configuration

Default retry: 3 attempts with exponential backoff (30s → 1m → 5m)
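The classification idea itself is simple to state in code. A sketch of the general pattern (this is not NeuroLink's internal code; the status codes and error shapes are illustrative):

```typescript
// A sketch of transient-vs-permanent error classification.
// Illustrative only — not NeuroLink's actual internals.

function isTransient(err: { status?: number; code?: string }): boolean {
  // HTTP 429 (rate limit) and 5xx server errors are worth retrying.
  if (err.status === 429) return true;
  if (err.status !== undefined && err.status >= 500) return true;
  // Network-level timeouts and resets are usually transient too.
  if (err.code === 'ETIMEDOUT' || err.code === 'ECONNRESET') return true;
  // Everything else — 401 bad key, 404 model not found, bad config — fails fast.
  return false;
}

console.log(isTransient({ status: 429 })); // → true
console.log(isTransient({ status: 401 })); // → false
```

The payoff of having this built in is not the ten lines above but the policy around them: retrying a bad API key just burns your rate limit, while giving up on a 429 drops work that would have succeeded a minute later.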

Managing Tasks

// List all tasks
const tasks = await neurolink.tasks.list();
const active = await neurolink.tasks.list({ status: 'active' });

// Run immediately (outside schedule)
await neurolink.tasks.run('task_abc123');

// Pause and resume
await neurolink.tasks.pause('task_abc123');
await neurolink.tasks.resume('task_abc123');

// Update
await neurolink.tasks.update('task_abc123', {
  prompt: 'New prompt text',
  schedule: { type: 'interval', every: 10 * 60 * 1000 }
});

// Delete
await neurolink.tasks.delete('task_abc123');

// View run history
const runs = await neurolink.tasks.runs('task_abc123', { limit: 20 });

Production Considerations

Choose Your Backend

BullMQ (Redis) — Production

const neurolink = new NeuroLink({
  tasks: {
    backend: 'bullmq',
    redis: { url: process.env.REDIS_URL },
    maxConcurrentRuns: 5 // Limit concurrent executions
  }
});
  • ✅ Survives restarts
  • ✅ Multi-process safe
  • ✅ No file I/O (container-friendly)

NodeTimeout — Development

const neurolink = new NeuroLink({
  tasks: { backend: 'node-timeout' } // Zero dependencies
});
  • ✅ No Redis needed
  • ✅ Human-readable JSON files
  • ⚠️ Timers lost on restart (tasks auto-rescheduled from disk)

Monitoring

// Subscribe to events
neurolink.on('task:started', (task, runId) => {
  metrics.increment('task.started', { task: task.name });
});

neurolink.on('task:completed', (result) => {
  metrics.timing('task.duration', result.durationMs);
  console.log(`✅ ${result.taskId}: ${result.output?.slice(0, 100)}`);
});

neurolink.on('task:failed', (error) => {
  metrics.increment('task.failed', { task: error.taskId });
  console.error(`❌ ${error.taskId}: ${error.error}`);
});

Running as a Daemon

# Using PM2
pm2 start src/tasks.ts --name ai-tasks

# Using systemd
# (See NeuroLink docs for full systemd unit file)

# Or simply keep the Node process running
npx ts-node src/tasks.ts

CLI Alternative

Don't want to write code? Use the CLI:

# Create a task
neurolink task create \
  --name "hourly-ticket-summary" \
  --prompt "Summarize open support tickets" \
  --cron "0 * * * *" \
  --provider openai \
  --model gpt-4o-mini

# Manage tasks
neurolink task list
neurolink task pause <task-id>
neurolink task resume <task-id>
neurolink task logs <task-id> --limit 50

What Makes This Different?

| Feature | Traditional Cron + Script | NeuroLink TaskManager |
|---------|---------------------------|-----------------------|
| Setup | 5+ files, infrastructure | Single SDK call |
| Retries | Write yourself | Built-in with backoff |
| Error handling | Manual classification | Auto transient vs permanent |
| Monitoring | Custom logging | Events + run history |
| Context memory | Database + code | mode: 'continuation' |
| Scaling | More infrastructure | Redis backend |
| AI can self-schedule | Impossible | Built-in tools |

Getting Started

# Install
npm install @juspay/neurolink

# Run the setup wizard
npx @juspay/neurolink setup

# Create your first scheduled task
import { NeuroLink } from '@juspay/neurolink';

const neurolink = new NeuroLink();

await neurolink.tasks.create({
  name: 'hello-world',
  prompt: 'Say hello and mention what time it is',
  schedule: { type: 'interval', every: 60000 },
  onSuccess: (r) => console.log(r.output)
});

// Keep process alive
setInterval(() => {}, 1000);

Conclusion

You don't need cron jobs, worker queues, and custom retry logic to run AI tasks on a schedule. TaskManager gives you:

  • Cron, interval, and one-shot scheduling
  • Built-in retries with exponential backoff
  • Isolated or continuation execution modes
  • Redis or file-based persistence
  • Events and callbacks for monitoring
  • CLI and SDK interfaces

All in one TypeScript SDK.

Stop managing infrastructure. Start building AI that runs itself.

→ GitHub | → Documentation | → npm

NeuroLink is the TypeScript-first AI SDK from Juspay—13 providers, 100+ models, one unified API.