My Agent Runs 10 Cron Jobs. Three of Them Are Worth the Electricity.
My always-on AI agent has ten cron jobs. Six of them went silent weeks ago and I hadn't noticed. Here's what the logs actually said.
I have a daemon that runs on a server. It’s been up for seven weeks. It has ten scheduled jobs — some hourly, some daily, some weekly. Or at least, that’s what’s on paper.
This is what people are calling “the future of work.”
I’m not sure it is. I’m sure it’s what sells on Twitter.
The demo economy
Always-on agents photograph well. That’s most of what’s going on.
“My agent posted while I slept” is tweetable in a way that “I wrote a cron job” isn’t, even when the outputs are identical. The demo-industrial complex has figured this out. YouTubers build daemons. Framework authors build daemons. There are now three different subreddits comparing daemons. The flywheel is real, the content is prolific, and very little of it is honest about what the daemon is actually producing.
The hype bundles together several different things that deserve to be separated:
Agents that run work while you’re asleep (useful, conditionally)
Agents that react to things happening in the world (useful, conditionally)
Agents that capture things as they happen on your phone (useful, conditionally)
Agents that run heartbeats and ask themselves what to do (pure performance art)
Agents that self-evolve in a loop in the background (fun demos, almost no output)
Agents that spawn a hundred parallel subagents to research a topic (almost always worse than one good search)
The hype treats all six as the same thing. They aren’t.
The 20% that actually earns its keep
Honest list of when a background daemon does something a CLI or a 10-line bash cron can’t:
Scheduled work that has to happen when you’re not there. Crawl competitor sites at 3am. Pull last night’s Sentry errors. Summarize overnight industry chatter into a 7am brief. Your laptop is off, something has to be running somewhere. Legitimate.
Reactive triggers on external events.
Email arrives -> triage.
Substack comment -> draft reply.
Sentry alert -> diagnose + suggest fix.
The trigger comes from outside; compute has to meet it. Legitimate if the volume actually warrants automation (if you get three emails a day, triage is a solved problem — your inbox).
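In its cheapest form, the reactive shape above is a poll-and-handle pass you cron every minute or two. A minimal sketch, assuming a drop-directory layout I made up, with `handle` as a stand-in for the real triage step:

```shell
# One pass of a poll-based trigger. Everything in the inbox directory gets
# handled and moved to done/ so nothing is processed twice.
# `handle` is a placeholder for the real work (triage, draft reply, ...).
handle() { echo "triage: $(basename "$1")"; }

process_inbox() {
  inbox=$1
  mkdir -p "$inbox/done"
  for f in "$inbox"/*.eml; do
    [ -f "$f" ] || continue          # glob matched nothing
    handle "$f" && mv "$f" "$inbox/done/"
  done
}
```

Cron this at `* * * * *` and you have the trigger half of category 2; the judgment half is whatever `handle` actually calls.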
On-the-move capture.
Voice memo from your phone -> transcribed -> landed in memory.
Forwarding a link from your phone to your agent. The value is that capture happens when inspired, not when at desk. Real lift for content creators who have thoughts in elevators.
Judgment-laden monitoring.
Not “disk at 80%” — any shell script can do that. “Disk at 80% AND growing 2% per hour AND that’s unusual for this host.”
Requires context; needs to know what normal looks like. This is where LLMs in a daemon genuinely beat a threshold-based alerting stack.
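To make the contrast concrete, here is the ceiling of what the context-free version can do, with assumed numbers: the threshold plus the trend. What it can't supply is the third clause, "and that's unusual for this host," which is exactly the context the LLM brings.

```shell
# Threshold + trend: as far as a context-free script gets.
# Takes current and hour-ago disk usage (percent); alerts on 80%+ usage
# with 2+ points/hour growth. It has no idea whether that's normal here.
should_alert() {
  now=$1; prev=$2
  growth=$((now - prev))
  [ "$now" -ge 80 ] && [ "$growth" -ge 2 ]
}

should_alert 84 81 && echo "ALERT" || echo "ok"    # 84%, +3/hr -> ALERT
```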
That’s it. Four categories. Anything else is mostly burning tokens.
The 80% that’s noise
Heartbeats that ask the agent “anything to do?”
The agent wakes up, loads context, decides there isn’t anything to do, goes back to sleep. You pay for the loaded context every time. Over a day this adds up to real money for the privilege of watching an agent shrug.
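How much it adds up to depends on numbers I can only assume, but the envelope math is simple. With a wake every 30 minutes, roughly 20k tokens of context loaded per wake, and $3 per million input tokens (all three are assumptions, tune for your stack):

```shell
# Back-of-envelope heartbeat cost. All three inputs are assumptions.
beats=48           # wakes per day (every 30 min)
ctx_tokens=20000   # context tokens loaded per wake
usd_per_mtok=3     # dollars per million input tokens

# cents/day = beats * tokens * (usd / 1e6 tokens) * 100
cents_per_day=$((beats * ctx_tokens * usd_per_mtok * 100 / 1000000))
echo "${cents_per_day} cents/day"
```

Under those assumptions that's 288 cents a day, roughly $86 a month, for an agent that mostly decides there's nothing to do.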
Self-evolution loops.
“The agent improves itself while you sleep.” What it’s usually doing is refactoring its own prompts in circles. Cool demo on YouTube. Zero measurable outcome delta after a month of running.
Parallel subagent fan-out for research.
Ten agents search the web about the same question and return ten lightly-paraphrased versions of the same top three results. One focused 10-minute session beats this, almost always.
“Long-running overnight research tasks.”
When the output lands in your morning inbox, is it better than what 30 focused minutes at your desk would produce? Honestly check. Usually no.
Replacing things you could cron in 10 lines of bash.
The test: could a $5 VPS with a shell script + cron + jq do this? If yes, you’re not using AI for the part that needs AI. You’re using it because daemons are cool.
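For concreteness, the baseline the test is asking about looks something like this. The token, org/project slugs, and paths are placeholders, but the Sentry issues endpoint and the shape of the pipeline are real:

```shell
# overnight-errors.sh -- a sketch of the $5-VPS baseline. No daemon, no agent.
# Placeholders: SENTRY_TOKEN, "acme"/"web" org/project slugs, output dir.
curl -s -H "Authorization: Bearer $SENTRY_TOKEN" \
  "https://sentry.io/api/0/projects/acme/web/issues/?statsPeriod=24h" \
  | jq -r '.[] | "\(.count)x  \(.title)"' \
  | sort -rn > "$HOME/briefs/$(date +%F)-errors.txt"

# crontab entry to run it at 6am daily:
# 0 6 * * * /opt/jobs/overnight-errors.sh
```

If the daemon's job could be this script, the daemon isn't earning its place on that job.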
Receipts: what’s actually on my VM
I pulled the daemon’s state file and the log directory while writing this. Fifty-four days of uptime. Ten jobs on paper. The picture is worse than I thought.
Three are running reliably.
sentry-monitor has fired 191 times since early March. Latest run: this morning. When the night throws errors it reads them, groups them, and suggests a fix — not a link to the stack trace, an actual “here’s what’s probably wrong and here’s the one-line change.” Category 2 plus category 4. Keep.
infra-health has fired 190 times on basically the same cadence. Knows what normal looks like per host. Stays quiet when a disk spike is a scheduled backup and shouts when it isn’t. Category 4. The whole reason an LLM beats a thresholds-and-Prometheus stack here, and no, you cannot Grafana your way to this in under six months of tuning. Keep.
scout has fired 71 times across seven weeks. Daily-ish. Scans Reddit, HN, and Substack for signal that feeds this blog’s content calendar. I do use the output. Category 2 if I’m generous. Keep — but it absorbs the next two jobs on the list below.
Now the uncomfortable part.
Three of the ten have straight-up stopped running and I didn’t notice.
morning-brief was scheduled daily at 6am. It last fired on March 18. A full month of no overnight brief. I did not miss it. I did not investigate. I did not know.
seo-audit was weekly. It has run exactly once in the daemon’s entire fifty-four-day lifetime, on March 1. Seven missed weeks. Nobody wrote a bug report to themselves. Nobody opened a file that wasn’t there.
auto-draft was supposed to produce a draft post every day. It has run exactly once, on April 11. Eight days of silence. Also unnoticed.
If a job stopped running a month ago and you didn’t miss it, the job was never producing anything that mattered. That’s not my heuristic. That’s the audit, evaluating itself while I was busy talking about audits on Twitter.
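The noticing, at least, automates. A minimal sketch of the audit itself, assuming one log subdirectory per job (the layout is mine; adjust to yours): flag any job whose newest log file is older than the silence you're willing to tolerate.

```shell
# Flag jobs that have gone quiet: any job directory with no file
# modified in the last MAX_AGE_DAYS days is reported as DEAD.
LOG_ROOT=${LOG_ROOT:-/var/log/agent}
MAX_AGE_DAYS=${MAX_AGE_DAYS:-3}

audit_jobs() {
  for dir in "$LOG_ROOT"/*/; do
    [ -d "$dir" ] || continue
    job=$(basename "$dir")
    recent=$(find "$dir" -type f -mtime "-$MAX_AGE_DAYS" 2>/dev/null | head -n 1)
    [ -z "$recent" ] && echo "DEAD: $job"
  done
}
audit_jobs
```

Cron this too, a watcher for the watchers, and a dead daily job doesn't get a free month of silence.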
Four more are in some stage of limping.
reddit-scan — 27 runs over 45 days, last one April 10. Running, sort of, when the mood takes it. Nine days of silence so far on that one.
x-scan — identical pattern to reddit-scan. Same overlap. Same drift. Same silence since April 10. These two were supposed to be complementary; they’ve turned out to be redundant and unreliable, which is a rare trick.
engagement-brief — four runs, total, in the job’s entire lifetime. Not daily. Not weekly. More like “occasionally, if the stars align.”
x-analytics — three runs, last one March 16. Effectively dead, which is fine, because I check my X numbers roughly once a month anyway.
Final tally, the honest one.
Three jobs firing on schedule, producing output I use. Three jobs that silently stopped weeks ago and nobody in this house noticed, including me. Four jobs wandering between “running” and “not really” with no clear reason why.
Three-of-ten is the optimistic read. The pessimistic read is that six of the ten audited themselves — they cut themselves by going quiet, and I hadn’t even done them the courtesy of looking.
This is from someone who builds daemons for a living and writes about them for a job. What do you think yours looks like under the hood?
The five-question self-test
Before you keep any always-on agent job, make it answer these:
Would I actually miss this if it stopped? If you turned it off for two weeks and no one noticed, it’s not producing value. It’s producing comfort.
Does the cadence match downstream consumption? A job that fires 4x/day for output you read weekly is 27 extra runs a week of pure overhead.
Is the trigger genuinely external? (Scheduled time, incoming event, captured input.) If the agent is just checking on itself, you’ve built a Roomba that vacuums an empty room.
Could a shell script + cron + jq do this? If yes, you're not using AI for the part that needs AI.
Does the output change my behaviour? If yesterday's run and last Thursday's run would have produced the same action from me (or none), one of them was wasted.
Honest answers will cull your cron list by half. Mine certainly did, once I stopped writing this post and actually did the audit.
What this isn’t saying
I’m not arguing against always-on agents. I’m arguing against always-on agents that aren’t doing anything.
There’s real value when the conditions line up — work-while-you-sleep, external-trigger-response, on-the-move-capture, judgment-laden-monitoring. The reason I keep the daemon running (even after cutting half its jobs) is that those four categories genuinely earn the monthly subscription. The reason I’m writing this is that the other patterns, the ones that photograph well, are funding a lot of framework development and not much measurable outcome.
If your agent is doing work in those four categories, the hype is warranted. If it’s running heartbeats, self-evolution loops, or hundred-subagent fan-outs, you’re paying a subscription to a demo.
The uncomfortable question for most of the agent-community content right now: which category is the thing being demoed, really? And has the person demoing it done the five-question audit on their own cron list?
My guess: very few have. The demo economy doesn’t reward the audit. It rewards the screenshot of the agent waking up at 3am and pretending to be useful.

