<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>AI on Reed Bender</title><link>https://reedbender.com/tags/ai/</link><description>Recent content in AI on Reed Bender</description><generator>Hugo</generator><language>en-us</language><lastBuildDate>Wed, 25 Mar 2026 00:00:00 +0000</lastBuildDate><atom:link href="https://reedbender.com/tags/ai/index.xml" rel="self" type="application/rss+xml"/><item><title>Data Is the Moat</title><link>https://reedbender.com/writing/data-is-the-moat/</link><pubDate>Wed, 25 Mar 2026 00:00:00 +0000</pubDate><guid>https://reedbender.com/writing/data-is-the-moat/</guid><description>&lt;p&gt;The AI discourse right now is fixated on two things: model improvements and agent frameworks. Which model is best? Which orchestration layer is cleanest? How many tools can you wire into a loop? How many agents can you run in parallel? Every week my X feed is filled with a new harness, a new benchmark, a new claim about reasoning.&lt;/p&gt;
&lt;p&gt;None of it matters as much as the data.&lt;/p&gt;
&lt;p&gt;The models are converging. A &lt;a href="https://www.semianalysis.com/p/google-we-have-no-moat-and-neither"&gt;leaked Google memo&lt;/a&gt; called it back in 2023: &amp;ldquo;We Have No Moat, And Neither Does OpenAI.&amp;rdquo; Since then, &lt;a href="https://huggingface.co/Qwen"&gt;Qwen&lt;/a&gt;, &lt;a href="https://www.deepseek.com/"&gt;DeepSeek&lt;/a&gt;, and Llama have all closed the gap. James Betker at OpenAI &lt;a href="https://nonint.com/2023/06/10/the-it-in-ai-models-is-the-dataset/"&gt;said it best&lt;/a&gt;:&lt;/p&gt;</description></item></channel></rss>