<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Infrastructure on Reed Bender</title><link>https://reedbender.com/tags/infrastructure/</link><description>Recent content in Infrastructure on Reed Bender</description><generator>Hugo</generator><language>en-us</language><lastBuildDate>Wed, 25 Mar 2026 00:00:00 +0000</lastBuildDate><atom:link href="https://reedbender.com/tags/infrastructure/index.xml" rel="self" type="application/rss+xml"/><item><title>Data Is the Moat</title><link>https://reedbender.com/writing/data-is-the-moat/</link><pubDate>Wed, 25 Mar 2026 00:00:00 +0000</pubDate><guid>https://reedbender.com/writing/data-is-the-moat/</guid><description>&lt;p&gt;The AI discourse right now is fixated on two things: model improvements and agent frameworks. Which model is best? Which orchestration layer is cleanest? How many tools can you wire into a loop? How many agents can you run in parallel? Every week my X feed is filled with a new harness, a new benchmark, a new claim about reasoning.&lt;/p&gt;
&lt;p&gt;None of it matters as much as the data.&lt;/p&gt;
&lt;p&gt;The models are converging. A &lt;a href="https://www.semianalysis.com/p/google-we-have-no-moat-and-neither"&gt;leaked Google memo&lt;/a&gt; called it in 2023: &amp;ldquo;We Have No Moat, And Neither Does OpenAI.&amp;rdquo; Since then, &lt;a href="https://huggingface.co/Qwen"&gt;Qwen&lt;/a&gt;, &lt;a href="https://www.deepseek.com/"&gt;DeepSeek&lt;/a&gt;, and Llama have all closed the gap. James Betker at OpenAI &lt;a href="https://nonint.com/2023/06/10/the-it-in-ai-models-is-the-dataset/"&gt;said it best&lt;/a&gt;:&lt;/p&gt;</description></item><item><title>Building the Lab: Why I Run My Own Infrastructure</title><link>https://reedbender.com/writing/building-the-lab/part-1-why-i-run-my-own-infrastructure/</link><pubDate>Sun, 22 Mar 2026 00:00:00 +0000</pubDate><guid>https://reedbender.com/writing/building-the-lab/part-1-why-i-run-my-own-infrastructure/</guid><description>&lt;p&gt;I spend nearly all of my professional time building production infrastructure on AWS for biomedical research. Kubernetes clusters, Terraform modules, CI/CD pipelines, Postgres databases, agentic AI orchestration.&lt;/p&gt;
&lt;p&gt;My inclination has always been toward cloud-native development. By every reasonable measure, the last thing I need is a rack of Raspberry Pis and a workstation that draws a kilowatt of power.&lt;/p&gt;
&lt;p&gt;I built it anyway. Because the thing I&amp;rsquo;m trying to understand can&amp;rsquo;t be rented.&lt;/p&gt;</description></item><item><title>The Magus: Building the Primary Workstation</title><link>https://reedbender.com/writing/building-the-lab/part-2-the-magus/</link><pubDate>Sun, 22 Mar 2026 00:00:00 +0000</pubDate><guid>https://reedbender.com/writing/building-the-lab/part-2-the-magus/</guid><description>&lt;p&gt;This is where I write code, and where the agents do most of their thinking.&lt;/p&gt;
&lt;p&gt;A llama.cpp server sits on the LAN with an OpenAI-compatible API, serving a 12GB model out of VRAM to all four agents asynchronously. Heartbeat cycles, structured outputs, tool calls, internal coordination &amp;ndash; the always-on work stays local and costs nothing per token. When an agent needs real reasoning depth, it routes to Sonnet. When the task is hard or externally visible, it escalates to Opus. The local GPUs are the floor of the system. The cloud tiers only get called when the work actually warrants it.&lt;/p&gt;</description></item><item><title>The Rack: Physical Infrastructure and the Tarot Node Map</title><link>https://reedbender.com/writing/building-the-lab/part-3-the-rack/</link><pubDate>Sun, 22 Mar 2026 00:00:00 +0000</pubDate><guid>https://reedbender.com/writing/building-the-lab/part-3-the-rack/</guid><description>&lt;p&gt;8 rack units, 10-inch form factor, sitting on a desk. It houses the agent cluster, the memory server, the monitoring node, all the networking, and a hardwired audio system that lets any machine on the LAN speak through passive speakers mounted on the rack rails.&lt;/p&gt;
&lt;p&gt;The whole thing draws under 200W.&lt;/p&gt;
&lt;p&gt;&lt;img src="https://reedbender.com/images/lab/rack.jpg" alt="The rack"&gt;&lt;/p&gt;
&lt;h2 id="the-nodes"&gt;The Nodes&lt;/h2&gt;
&lt;p&gt;Every node in the lab is named after a Tarot card. The full mapping and the reasoning behind it are in &lt;a href="https://reedbender.com/writing/building-the-lab/part-1-why-i-run-my-own-infrastructure/#the-tarot-map"&gt;part 1&lt;/a&gt;. Here&amp;rsquo;s the physical layout:&lt;/p&gt;</description></item></channel></rss>