
Live ops tooling

2026

ARC Rotation Bot, a Twitch chat bot that tracks live event rotations for Arc Raiders

Context

Personal project, Arc Raiders preseason

Role

Solo build

Stack

Node.js (ESM) · tmi.js (Twitch IRC) · Express · dotenv · Railway · JSON file persistence

Problem

Arc Raiders runs a fixed 24-hour rotation of events across five locations. Players who wanted to know what was happening when had to manually check arcraidershub.com or scrape it themselves, a flow that broke the moment they were already in-game or live on stream.

Constraints

Architecture

01

Built a confidence system that grades predictions as 'confirmed', 'likely', or 'projected'

Why: The rotation is supposed to be a fixed 24-hour loop, but in practice the game doesn't always honor it perfectly. Reporting a far-out prediction as gospel and getting it wrong is worse than reporting it with an honesty hedge. Confirmed means current hour or 2+ recent confirmations; likely means within 2 hours; projected is everything beyond that.

Tradeoff: More state to maintain (a per-event confidence counter persisted to disk), and the chat output is wordier. Worth it. The bot's credibility with the streamers using it depends on being right about being unsure.

02

A rate-limited message queue with 2.5s spacing between sends, plus a per-channel cooldown of 30s per command

Why: Twitch IRC will ban bots that exceed message rate limits, and tmi.js doesn't queue for you: every `client.say` goes out immediately. A central queue with a single sender that drains at a safe pace is the difference between a working bot and a banned one.

Tradeoff: Bot responses can feel slightly slower in active chats. Acceptable. 2.5s is well under the threshold where a viewer notices delay, and far below the threshold where Twitch's anti-spam system notices the bot.

03

Persistence via JSON files (channels.json, confidence_state.json, mismatch_log.json) instead of a database

Why: The data model is tiny: a list of channels, a small confidence state object, an append-only mismatch log. A database would have added an operational dependency I don't need at this scale. Files on Railway's persistent disk work fine.

Tradeoff: Doesn't horizontally scale, and concurrent writes would be a problem if the bot grew beyond a single process. Neither is a real concern at 2 to 10 channels.
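The load-on-boot, rewrite-on-change pattern this implies can be sketched in a few lines. This is a minimal illustration, not the bot's actual code: `loadState`, `saveState`, and the `STATE_DIR` variable are hypothetical names.

```javascript
// Minimal sketch of file-backed JSON persistence. Function and path
// names are assumptions, not taken from the bot's source.
import { readFileSync, writeFileSync, mkdirSync } from 'node:fs';
import { join } from 'node:path';

const STATE_DIR = process.env.STATE_DIR ?? '.'; // Railway persistent disk in production

// Read a JSON state file, falling back to a default when it doesn't exist yet.
function loadState(name, fallback) {
    try {
        return JSON.parse(readFileSync(join(STATE_DIR, name), 'utf8'));
    } catch {
        return fallback;
    }
}

// Rewrite the whole file on every change; the data is small enough that
// this is simpler and safer than partial updates.
function saveState(name, value) {
    mkdirSync(STATE_DIR, { recursive: true });
    writeFileSync(join(STATE_DIR, name), JSON.stringify(value, null, 2));
}
```

At this scale the whole-file rewrite is the point: no migrations, no connection pool, and the state is human-readable if something goes wrong.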

04

Companion status API as a separate Express endpoint inside the same process

Why: The bot needs to expose the current event slot to a companion web view (arc-rotation-bot-site) so non-Twitch viewers can check the rotation. Running Express in the same process as the IRC client means one deployment, one set of credentials, and one source of truth. The in-memory `liveStatus` object the bot updates is the same object the API serves.

Tradeoff: If the bot process crashes, the API goes down with it. Same operational profile, same recovery path; acceptable for a hobby-tier project.

Outcomes

Channels currently running the bot

2

arcraiderbot, uncle_crashoutt

Streamers trialed during development

5 to 10

Beta testers across the Arc Raiders Twitch community

Twitch rate limit incidents

0

Since the queue and cooldown system shipped. Early development had a few.

Lines of bot code

~400

index.js + status-server.js + updateStatus.js, ESM modules

What the bot is

Arc Raiders is an extraction shooter with a rotating set of events (Electromagnetic Storms, Night Raids, Harvesters, Husk Graveyards, Matriarchs, Hidden Bunkers, others) that cycle through the game's five maps on a fixed 24-hour schedule. The schedule isn't published by the developer; community sites like arcraidershub.com track it by scraping the in-game tracker and posting it publicly.

The bot brings that schedule into Twitch chat. A streamer or a viewer can ask the bot what's happening now, what's coming up, or where a specific event will appear, and get an answer in chat with a confidence grade attached.

Underneath, the bot is two cooperating pieces: a tmi.js IRC client that joins the configured Twitch channels and responds to commands, and an Express HTTP server that exposes the current rotation slot as JSON for the companion website. They share an in-memory state object. When the bot updates the live status, the API serves it immediately.
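The shared-state pattern can be sketched in a few lines. The object shape, function names, and the Express-style handler here are assumptions for illustration; the real wiring lives in index.js and status-server.js.

```javascript
// One in-memory object, two consumers: the IRC side mutates it, the
// HTTP side serves it. All names here are hypothetical.
const liveStatus = { event: null, location: null, confidence: null, updatedAt: null };

// Called from the rotation logic whenever the current slot changes.
function updateLiveStatus(event, location, confidence) {
    Object.assign(liveStatus, { event, location, confidence, updatedAt: Date.now() });
}

// Express-style handler, wired up as e.g. app.get('/status', statusHandler).
// Because it closes over the same object, there is no sync step.
function statusHandler(req, res) {
    res.json(liveStatus);
}
```

Mutating in place with `Object.assign` (rather than reassigning the variable) matters: both sides hold a reference to the same object, so a reassignment on the bot side would silently orphan the API's copy.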

The confidence system

This is the piece of the project I'd point at as the engineering decision I'm most proud of.

The rotation is supposed to repeat on a 24-hour cycle. In practice it doesn't always. The game tweaks things, an event runs longer, a scheduled event doesn't fire. A bot that reports the schedule as ground truth will eventually be wrong, and the moment it's wrong in front of a streamer's chat, every viewer in that chat learns it's unreliable.

The fix: don't lie about certainty.

const CONFIDENCE = { CONFIRMED: 'confirmed', LIKELY: 'likely', PROJECTED: 'projected' };

function getConfidence(hoursAhead, confirmedCount = 0) {
    if (confirmedCount >= 2 || hoursAhead === 0) return CONFIDENCE.CONFIRMED;
    if (hoursAhead <= 2) return CONFIDENCE.LIKELY;
    return CONFIDENCE.PROJECTED;
}

Three grades:

  • Confirmed. The current hour, or an event we've seen happen 2+ times at this slot.
  • Likely. Within the next 2 hours, from the rotation snapshot.
  • Projected. Further out than that; here's our best guess but don't act on it.

The confidence state is persisted to confidence_state.json so confirmations survive restarts. When viewers ask about an event 6 hours away, the bot says "projected" and they know to take it lightly. When they ask about right now, it says "confirmed" and they trust it.
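Putting the counter and the grader together looks roughly like this. The grader is repeated so the sketch runs standalone; the slot key format and the counts shape are assumptions about what confidence_state.json holds.

```javascript
// Grade labels and grader, repeated here so the sketch is self-contained.
const CONFIDENCE = { CONFIRMED: 'confirmed', LIKELY: 'likely', PROJECTED: 'projected' };

function getConfidence(hoursAhead, confirmedCount = 0) {
    if (confirmedCount >= 2 || hoursAhead === 0) return CONFIDENCE.CONFIRMED;
    if (hoursAhead <= 2) return CONFIDENCE.LIKELY;
    return CONFIDENCE.PROJECTED;
}

// counts is the object serialized to confidence_state.json between restarts.
// The '14:matriarch' key is a hypothetical "hour 14, matriarch event" slot.
const counts = { '14:matriarch': 2 }; // seen twice at this slot

getConfidence(6);                         // 6 hours out, no history: 'projected'
getConfidence(6, counts['14:matriarch']); // same distance, 2 confirmations: 'confirmed'
getConfidence(1);                         // next hour: 'likely'
```

The second call is the interesting one: observed history overrides distance, so a slot the bot has personally verified twice gets reported as confirmed even far out.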

Why a queue and not direct send

Twitch IRC will rate-limit a bot that sends messages too fast, and the limits aren't tiny, but they aren't generous either. For a bot without moderator status in a channel, the floor is around 20 messages per 30 seconds across the whole connection. Cross that and the bot's messages stop appearing in chat. Cross it badly and the account gets temporarily banned.

tmi.js doesn't enforce this for you. Every call to client.say(channel, text) goes out as soon as it can. In a quiet chat that's fine. In a chat where a few viewers are spamming the command at the same time, it's a problem.

The fix is a central queue:

const queue = [];
let sending = false;
const MESSAGE_DELAY_MS = 2500;

function sendMessage(channel, text) {
    queue.push({ channel, text });
    processQueue();
}

function processQueue() {
    if (sending || queue.length === 0) return;
    sending = true;
    const { channel, text } = queue.shift();
    client.say(channel, text)
        .catch(err => console.error('say failed:', err))
        .finally(() => {
            setTimeout(() => {
                sending = false;
                processQueue();
            }, MESSAGE_DELAY_MS);
        });
}

A single in-flight send, draining at 2.5s per message. Layered on top is a per-channel-per-command cooldown of 30 seconds, so spamming the same command in the same channel just gets you ignored. The combination has held. Zero rate limit incidents in the channels currently running it, and zero anti-spam strikes against the bot account.
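The cooldown layer on top of the queue is just a map keyed by channel and command. A sketch, with names and key shape as assumptions:

```javascript
const COOLDOWN_MS = 30_000;  // 30s per command per channel, as described above
const lastUsed = new Map();  // "channel:command" -> timestamp of last accepted use

// Returns true if the command should run, false if it should be silently ignored.
// `now` is injectable so the logic is testable without real waiting.
function checkCooldown(channel, command, now = Date.now()) {
    const key = `${channel}:${command}`;
    if (now - (lastUsed.get(key) ?? -Infinity) < COOLDOWN_MS) return false;
    lastUsed.set(key, now);
    return true;
}
```

Ignored commands never reach the queue at all, which is what keeps a burst of viewers spamming `!now` from ever translating into a burst of sends.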

What's broken about it now

The bot is technically running but increasingly inaccurate, and I should be honest about why.

The rotation snapshot in arc_preseason_schedule.json is dated February 8, 2026, 9:52 PM EST. That's when I last manually captured the live rotation off arcraidershub.com and committed it to the repo. The bot has been serving that snapshot ever since.

The game's actual rotation has drifted from that snapshot. Sometimes by an event slot, sometimes by an entire location's worth. The confidence system softens this (more reports come back as "projected" the further the bot is from real ground truth), but the underlying data is stale.

The fix is exactly what I avoided building when I shipped: a real scraper. Something that hits arcraidershub.com on a schedule, parses the live event tracker, diffs it against the stored schedule, and updates the snapshot automatically. The updateStatus.js file is the placeholder for that work. Currently it only refreshes the in-memory status from the static JSON; the next version of it will pull from a live source.
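The diff step of that future scraper is the easy part to sketch. The fetching and parsing are the real work; the slot shape ({ hour, location, event }) here is an assumption about what both sides would normalize to.

```javascript
// Compare the stored snapshot against a freshly scraped rotation and
// return the slots that disagree. Pure function: no fetching or parsing,
// just the diff that decides whether the snapshot needs updating.
function diffSchedules(stored, scraped) {
    const expected = new Map(stored.map(s => [`${s.hour}:${s.location}`, s.event]));
    return scraped
        .filter(s => expected.get(`${s.hour}:${s.location}`) !== s.event)
        .map(s => ({
            hour: s.hour,
            location: s.location,
            expected: expected.get(`${s.hour}:${s.location}`) ?? null,
            actual: s.event,
        }));
}
```

An empty diff means the snapshot is still good; a non-empty one is both the trigger to update it and a ready-made entry for the mismatch log.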

What I'd do differently

Build the scraper first. The bot was the fun part to build; the scraper is the unfun part that determines whether the bot stays useful. Shipping the bot against a manual snapshot felt like a reasonable compromise at the time, and the manual snapshot held up for about three weeks before it started visibly drifting. Three weeks is exactly long enough that fixing the data pipeline becomes a "later" problem instead of a "now" problem, which is how you end up with a bot that's running but wrong.

I'd also move confidence_state.json and its siblings to something more durable than local JSON files. Railway's disk does persist across restarts, but it doesn't survive a project rebuild. The mismatch log in particular, the file that tracks every time a viewer reports a discrepancy between what the bot said and what they saw in-game, is the most valuable signal I have for improving accuracy, and it lives in a file that could vanish on any redeploy. SQLite would be a 50-line change and would solve that. Worth doing.

And I'd add a !report command from day one. Right now if a viewer notices the bot is wrong, the only way I find out is if they DM me or if a mismatch is logged through a path I haven't fully wired up. A first-class command for "the bot said X but I saw Y" would turn every channel running the bot into a passive scraper, exactly the kind of crowdsourced accuracy I was originally hoping for.
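The first step toward that command is just a parser. The "X but saw Y" syntax here is a hypothetical design for illustration, not shipped code:

```javascript
// Parse e.g. "!report matriarch but saw harvester" into a mismatch record
// ready to append to mismatch_log.json. Returns null for anything that
// doesn't match, so unrelated chat is cheap to ignore.
function parseReport(message) {
    const m = message.match(/^!report\s+(.+?)\s+but\s+saw\s+(.+)$/i);
    if (!m) return null;
    return { predicted: m[1].trim(), observed: m[2].trim(), reportedAt: Date.now() };
}
```

Everything downstream of the parser already exists in some form: the record appends to the same mismatch log, through the same file persistence, rate-limited by the same cooldown map as every other command.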

What I learned
