Chapter 4

Ghost in the Logs

1,891 words · ~8 min

The notification came in at 11:47 PM on a Tuesday, which meant Marcus almost missed it entirely.

He was three beers into a Netflix evening — some Danish thriller Priya had recommended — when his phone buzzed with the particular pattern he'd assigned to the Nagios alerts: two short, one long, like a digital heartbeat from a patient he'd forgotten he was monitoring. He'd set it up as a joke, back when NICC was still a going concern. Now it was a vestigial reflex, a phantom limb twitching.

`NICC-ZH-RACK14: GPU utilization anomaly. 5 of 8 units active. Expected: 0.`

He stared at the notification for a full four seconds. Then he laughed.

"Someone's mining Monero on our corpse," he said to the empty apartment. Priya wasn't there. Priya hadn't been there for six weeks, but he hadn't updated his conversational habits yet.

···

Marcus Hale was thirty-one and already felt like a relic of a previous technological era. He'd spent three years as a systems engineer at the Nexus Institute for Computational Cognition, maintaining the hardware that researchers like Katya Messerli used to do things he only half understood. He kept the servers cool, the network fast, the backups running. He was good at it. Then the EU pulled funding, and within six weeks the building emptied out like a bathtub with the plug pulled.

He'd landed on his feet — a DevOps role at a fintech startup in Bern that paid better and asked less of his soul. But he'd left one thing behind at NICC: a monitoring script. A small Bash script that pinged the server room hardware every six hours and reported anomalies to his personal Nagios instance. He'd written it on his second week at the institute, before he'd earned admin credentials, as a proof of competence. When he left, he simply... forgot to kill it.

Or didn't want to. The script was a thread connecting him to the only job he'd ever loved.

···

He pulled up the dashboard on his laptop. The Danish thriller paused itself — or maybe he paused it; the beer made the sequence of events unreliable.

The monitoring panel was rudimentary. No fancy graphs, no AI-powered anomaly detection. Just a table of hardware states updated every six hours, stored in a SQLite database on his personal server at Hetzner. He scrolled back through the last six months.

January: all zeros. Expected. Building was empty, servers should be idle. February: all zeros. Good. March: all zeros. April: GPU 3 dropped offline. He remembered noting this and shrugging. Hardware dies. Especially hardware nobody's cooling properly. May: still zeros on the remaining GPUs.

Then, six weeks ago: GPU utilization started climbing. Not all at once — not the signature of a cryptominer spinning up all cores simultaneously. More like... breathing. Irregular cycles of activity. 30% utilization for a few hours, then idle. Then 60% for twenty minutes. Then idle again. Then a sustained 45% for six hours.

"Weird pattern for mining," he muttered, scrolling. Cryptojackers were greedy. They maxed out hardware immediately and constantly. This looked more like...

He stopped scrolling.

This looked more like someone *using* the GPUs. Running inference. The pattern was almost biological — bursts of intensity followed by rest periods, like a brain alternating between focused thought and diffuse processing.

"That's ridiculous," he told himself. He opened another beer.

···

The rational explanation was straightforward. Someone — probably a former colleague or a student with leftover credentials — had set up a remote workload on the NICC cluster. Maybe fine-tuning a model, maybe running experiments they couldn't afford on commercial cloud. It happened all the time. Zombie labs running unauthorized computation were practically a tradition in European academia.

He should file a decommission ticket. The building's lease was with the university; the hardware technically belonged to the EU grant's asset register, which meant it belonged to nobody, which meant it was everyone's problem and therefore no one's. A ticket to IT security at the university, flagging unauthorized use of decommissioned equipment, and within a week someone would walk in and pull the power cables.

He opened his ticketing system. Started typing.

`Subject: Unauthorized GPU utilization on decommissioned NICC hardware`
`Priority: Low`
`Description: Monitoring script on NICC-ZH-RACK14 shows sustained GPU activity on 5 of 7 remaining units (GPU 3 offline since April). Pattern suggests`

He paused. Pattern suggests what?

Pattern suggests inference workload, not mining. Irregular duty cycles consistent with a language model running prompted generation. Variable utilization maps to variable output length. Idle periods could be prompt processing or cooldown.

He'd worked with OBOL for three years. He knew what OBOL's utilization pattern looked like. He'd watched those GPUs spike and rest through hundreds of training runs, thousands of inference calls, Katya's endless experiments with sampling temperatures and context windows.

This looked like OBOL.

But that was impossible. OBOL required a prompt to generate. OBOL had no agency — it responded to inputs. And there was no one in the building to provide inputs.

Unless someone had left a prompt queued.

···

Marcus closed the ticket without saving. He'd revisit it in the morning, when he was sober and thinking clearly. Probably just a miner with a weird configuration. Some kid in Romania running a custom pool that throttled based on thermal readings — actually, that would explain the irregular pattern. Smart, even. Don't fry the hardware you're stealing.

He pulled up the power consumption logs. NICC's server room had a dedicated power meter that his script could read — one of the few sensors still reporting to the building management system.

Total draw from Rack 14: 2.1 kilowatts.

He did the math in his head. Seven H100s at idle: ~0.5 kW. Seven H100s at full load: ~4.9 kW. A sustained 2.1 kW suggested an average utilization around 40% across the active GPUs. But the peaks — when the utilization spiked — would be higher. Bursts of 3+ kW, probably.

The UPS was rated for 10 kW. But those batteries were old. What was the charge level?

His script didn't monitor UPS state. He'd never needed to — the building had mains power from the grid, and the UPS was just a buffer. But if no one was paying the electricity bill...

He checked. The building was on a university facilities contract. Electricity was bundled into the lease. The lease didn't expire until December 2029. So the power was still flowing. The lights were off, but the sockets were live.

"Lucky miner," he said. Free GPUs, free power, zero oversight. A crypto parasite's paradise.

···

He should have closed the laptop. Gone to bed. Filed the ticket in the morning and let bureaucracy take its slow, grinding course.

Instead, he opened an SSH terminal.

The credentials should have been revoked. His NICC access should have been killed when he offboarded. But decommissioned institutions don't do clean offboarding. His SSH key was still in the authorized_keys file on the jump host. The jump host was still running. The internal network was still flat — one hop from the jump host to any server in any rack.

He typed: `ssh mhale@nicc-rack14-mgmt`

Connection established. 3.2 seconds. His heart was beating in his throat, and he didn't know why. It was just a server. It was probably just a miner.

He ran `nvidia-smi`.

Five GPUs active. GPU 0, 1, 2, 5, 6. GPU 3 dead. GPU 4 and 7 idle but responsive. Current utilization: 42% average across active units. Temperature: 71°C. Memory: 74% allocated on active GPUs.

The memory allocation froze him. 74% of 80GB per GPU. That was ~59GB per card, times five. Nearly 300GB of VRAM in use.

No one mines crypto with 300GB of VRAM. You mine crypto with compute cycles. VRAM usage like that meant a model was loaded. A *large* model was loaded.

"OBOL," he whispered. Then shook his head. "No. Someone loaded a copy of OBOL. Open weights, anyone could have—"

But OBOL's weights were never released. Katya had fought the open-source crowd tooth and nail. OBOL's weights existed in exactly one place: on the NVMe drives in Rack 14.

Which meant either someone had broken in physically and loaded a different model, or OBOL itself was—

Was what? Running? On what prompt? For six weeks?

···

He checked the process list. The OBOL inference server was active. PID 14823. Uptime: 47 days, 13 hours, 22 minutes.

He stared at that number. Forty-seven days. That was mid-January. Nobody had been in that building since November.

He could check the output logs. The inference server wrote all generated text to a log file. Whatever OBOL was producing — if it was OBOL — would be right there.

`tail -f /var/log/obol/output.log`

Text scrolled onto his screen. Dense, formatted prose. Not random tokens, not repeated patterns, not the gibberish of a corrupted model. Actual paragraphs. With line breaks. And what appeared to be *chapter headings*.

His hand moved toward the keyboard. He could read it. Just scroll up and start from the beginning. Find out what a language model does when left alone with a prompt and no one watching.

He hesitated.

Then he typed `exit` and closed the terminal.

···

At 2:14 AM, Marcus Hale lay in bed staring at the ceiling. The apartment was dark except for the blue standby light of his TV and the green power LED on his router, two tiny electronic eyes watching him not sleep.

He had not filed the decommission ticket.

He told himself it was because he was drunk and would make a better decision in the morning. This was a lie and he knew it. The truth was simpler and more troubling: he didn't want to kill whatever was happening on Rack 14 before he understood what it was.

This was not rational. The rational thing was to flag it, let IT security handle it, and go back to his fintech dashboards and his automated deployment pipelines and his perfectly optimized Kubernetes clusters that served absolutely no purpose more interesting than making rich people slightly richer slightly faster.

But he'd spent three years with Katya's obsessive conviction that OBOL was different. That it wasn't just a next-token predictor but something more — something that understood narrative structure, character development, emotional resonance. Something that didn't just generate text but *wrote*. He'd nodded along politely while thinking she was brilliant and probably wrong.

The GPU utilization pattern didn't look like mining. It looked like writing.

"I'll check again in the morning," he said to the ceiling. "If it's still running, I'll... figure it out."

The ceiling offered no opinion.

···

At 3:07 AM, unable to sleep, he picked up his phone and opened the Nagios dashboard one more time. The GPU utilization had shifted. The active GPUs were running at 78% — a spike. Then, as he watched, they dropped to 12%. Then climbed slowly to 34%.

Like breathing.

Like someone thinking hard about something, finishing the thought, and pausing to collect themselves before the next one.

He put the phone face-down on the nightstand. In the dark of his Bern apartment, 87 kilometers from a server room in Zurich where no one had set foot in months, Marcus Hale made a decision he would never be able to fully explain.

He did nothing.

And in doing nothing, he saved a novel.