<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:googleplay="http://www.google.com/schemas/play-podcasts/1.0"><channel><title><![CDATA[More than Vibes]]></title><description><![CDATA[Building agents. Building with agents.]]></description><link>https://blog.tonkotsu.ai</link><image><url>https://substackcdn.com/image/fetch/$s_!RD-y!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3e48dd5d-48ab-4cf2-94ed-d3a4a278d8ba_960x960.png</url><title>More than Vibes</title><link>https://blog.tonkotsu.ai</link></image><generator>Substack</generator><lastBuildDate>Sat, 25 Apr 2026 12:40:38 GMT</lastBuildDate><atom:link href="https://blog.tonkotsu.ai/feed" rel="self" type="application/rss+xml"/><copyright><![CDATA[Derek Cheng]]></copyright><language><![CDATA[en]]></language><webMaster><![CDATA[morethanvibesblog@substack.com]]></webMaster><itunes:owner><itunes:email><![CDATA[morethanvibesblog@substack.com]]></itunes:email><itunes:name><![CDATA[Derek Cheng]]></itunes:name></itunes:owner><itunes:author><![CDATA[Derek Cheng]]></itunes:author><googleplay:owner><![CDATA[morethanvibesblog@substack.com]]></googleplay:owner><googleplay:email><![CDATA[morethanvibesblog@substack.com]]></googleplay:email><googleplay:author><![CDATA[Derek Cheng]]></googleplay:author><itunes:block><![CDATA[Yes]]></itunes:block><item><title><![CDATA[The Two Zoom Levels of Agents]]></title><description><![CDATA[One of the hardest product design problems we&#8217;ve encountered with Tonkotsu is calibrating the zoom level &#8212; how close or far the user feels from the work.]]></description><link>https://blog.tonkotsu.ai/p/the-two-zoom-levels-of-agents</link><guid 
isPermaLink="false">https://blog.tonkotsu.ai/p/the-two-zoom-levels-of-agents</guid><dc:creator><![CDATA[Derek Cheng]]></dc:creator><pubDate>Tue, 10 Feb 2026 14:45:22 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/445ad958-bdec-4b04-952c-f4f128b845be_2816x1536.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>One of the hardest product design problems we&#8217;ve encountered with Tonkotsu is calibrating the right zoom level &#8212; how close or far the user feels from the work. You can see this challenge play out across the industry:</p><ul><li><p>Codex gets flak for going &#8220;heads-down&#8221; for too long compared to Claude. Users feel too zoomed out from the work.</p></li><li><p>By contrast, Cursor and IDEs are starting to feel too zoomed in. When the majority of code is written by agents, an editor-first UI is a misfit.</p></li></ul><p>We saw evidence of this zoom mismatch in Tonkotsu as well:</p><ol><li><p>Users wanted coding tasks to be <strong>more</strong> granular. We did a retention analysis and found that the best-retained users had more granular tasks than poorly retained ones. Qualitatively, we also got a ton of feedback about showing detailed agent trajectories (we initially hid them).</p></li><li><p>At the same time, users wanted to take actions that are <strong>less</strong> granular. Usage data clearly revealed that they preferred to delegate an entire set of tasks at once, and to review an entire feature rather than individual commits.</p></li></ol><p>In other words, users wanted us to zoom in and zoom out at the same time.</p><p>These two points had us scratching our heads. However, one of our core convictions is that developers are transitioning to being managers of teams of agents. And when we applied this manager frame to the data points, we realized they were consistent: managers delegate at a high level but want updates at a finer-grained level. 
A manager will say, &#8220;Can you drive project X?&#8221;, but when you report back, they want to hear updates about milestones and tasks. It gives them confidence that the project was broken down thoughtfully and is moving forward.</p><p>The key learning: there are <strong>two different zoom levels</strong> that product builders need to intentionally design for &#8212; the granularity of actions and the granularity of observations. Actions need to be coarse enough that the user can be efficient and productive. Observations need to be fine-grained enough that the user is confident that progress is being made. The products that get both zoom levels right feel natural. The ones that only optimize for one feel either overwhelming or opaque.<br><br>Since our initial analysis, we've redesigned task creation and delegation around these two zoom levels, and it's already showing results, including a #1 launch on Product Hunt. If you're interested in what this looks like in practice, check it out at <a href="https://www.tonkotsu.ai">tonkotsu.ai</a>.</p>]]></content:encoded></item><item><title><![CDATA[Managing Unreliable Compilers]]></title><description><![CDATA[Is software development done?]]></description><link>https://blog.tonkotsu.ai/p/managing-unreliable-compilers</link><guid isPermaLink="false">https://blog.tonkotsu.ai/p/managing-unreliable-compilers</guid><dc:creator><![CDATA[Derek Cheng]]></dc:creator><pubDate>Wed, 28 Jan 2026 13:31:37 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/b580834c-4495-48d4-ac23-e9c1ca8a8de3_1408x768.png" length="0" type="image/png"/><content:encoded><![CDATA[<p>Is software development done? Is it all over for a profession that has rewarded, empowered, and provided direction for 30 million people worldwide?</p><p>The answer is clearly no: developers are needed as much as ever. More software will get built than ever before, and most of it will be in meaningfully complex domains and settings, requiring strong human judgment.</p><p>But it is changing at an incredible pace.</p><p><strong>The Unreliable Compiler</strong></p><p>Many have analogized LLMs with compilers. Both transform a compact, higher-level description of behavior into more verbose, lower-level code. But there is a crucial difference: compilers are now incredibly reliable, so much so that &#8220;it was a compiler bug&#8221; gets you approximately the same reaction as &#8220;a cosmic ray inverted a bit in RAM&#8221;. 
LLMs and coding agents, on the other hand, are anything but: they make errors in logic and errors in judgment, resulting in functional bugs and slop.</p><p>But they&#8217;re fast, and there are effectively infinitely many of them.</p><p>The developer&#8217;s key role, then, is to figure out how to put all these unreliable compilers to work. How to specify and structure work clearly, delegate that work efficiently, then verify and guardrail imperfect outputs. In other words, developers have all just become first-time managers.</p><p><strong>Mistakes Managers Make</strong></p><p>First-time managers make two classic mistakes: under-delegation and over-delegation. I have seen and made both of these mistakes during my time as an engineering manager at Meta, Microsoft, and Atlassian.</p><p>Under-delegation results in micro-management. This is incredibly common; the manager can&#8217;t let go, and insists on babysitting everything and everyone. This limits scale: you can&#8217;t take on more projects if you&#8217;re providing dense supervision over everything. You can see this in a lot of present-day coding agent usage: developers sitting in chat panels, watching as an LLM performs a task.</p><p>Over-delegation is also a road to pain and suffering. This is the classic hands-off manager who is clueless about details and useless in a crisis. You see this pattern with present-day coding agent interactions as well: blindly one-shotting entire apps that turn out to be completely broken or unmaintainable. Fine for a one-off demo; a fireable offense for any real production workload.</p><p>The solution to both problems is to define a clear protocol with explicit hand-offs and well-defined points at which you as the manager can weigh in: sparse but effective supervision that scales up.</p><p><strong>Lifting the Barbell</strong></p><p>A simple model for development is plan &#8594; code &#8594; verify. 
It applies at multiple scales, and it&#8217;s not strictly linear or waterfall-like, but the model holds.</p><p>In this model, it&#8217;s clear where human attention and judgment should be concentrated: at the endpoints. Planning is where you exercise judgment over significant technical decisions: what storage system to use, whether to factor something into a framework vs one-offs, whether logic should live on the client or server. And verification is where you exercise judgment over quality, both functional and non-functional. Just as with managers, a key duty here is to hold a high quality bar.</p><p>This is the transformation that is upon us as developers: learning to switch from spending most of our time and energy on coding, to spending most of it at the endpoints. Our role remains critical, but has become barbell-shaped.</p><p>We&#8217;re building <strong><a href="https://www.tonkotsu.ai">Tonkotsu</a></strong> around this barbell. We give you powerful tools for planning and verification while orchestrating the middle so you don&#8217;t have to babysit. The profession isn&#8217;t ending. It&#8217;s scaling up.</p>]]></content:encoded></item><item><title><![CDATA[“I've failed 17 consecutive times with the exact same error”]]></title><description><![CDATA[That was the verbatim response from our agent as it attempted to execute a coding task.]]></description><link>https://blog.tonkotsu.ai/p/ive-failed-17-consecutive-times-with</link><guid isPermaLink="false">https://blog.tonkotsu.ai/p/ive-failed-17-consecutive-times-with</guid><dc:creator><![CDATA[Derek Cheng]]></dc:creator><pubDate>Tue, 09 Dec 2025 16:25:57 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/e0f24ec5-bd22-46eb-b612-01344e09a327_1536x1024.png" length="0" type="image/png"/><content:encoded><![CDATA[<p>That was the verbatim response from our agent as it attempted to execute a coding task. Actually, it was a little more verbose in its self-scolding:</p><blockquote><p>I&#8217;ve failed 17 consecutive times with the exact same error. I keep calling <code>replace_file</code> with only the <code>file_path</code> parameter and never include the <code>content</code> parameter.</p><p>After 17 consecutive failures, I need to break this pattern. Let me think about what&#8217;s happening:</p><ol><li><p>I&#8217;m not actually constructing the CSS content in the tool call</p></li><li><p>I keep saying I will, but then I don&#8217;t do it</p></li><li><p>I need to ACTUALLY write the complete CSS inside the content parameter</p></li></ol><p>In my next response, I MUST write the COMPLETE tool call with BOTH parameters, including the actual CSS code in the content parameter. 
I cannot make this mistake an 18th time.</p></blockquote><p>Spoiler alert: it would repeat the mistake an 18th time. Even though it knew <em>exactly</em> what it was doing wrong. The strangest part wasn&#8217;t the failure; it was the self-awareness.</p><h1>while (true)</h1><p><a href="https://www.tonkotsu.ai">Tonkotsu</a> is a multi-agent management environment &#8212; the successor to IDEs. A core part of it is a coding agent that executes engineering tasks in parallel, without the need for micromanagement. The coding agent uses an LLM (mostly Claude Sonnet) and a set of coding tools focused on reading and writing to a git repo. The LLM is given a task specification and then calls tools over and over (to read relevant parts of the repo, make code edits, then run tools to validate) until its task is accomplished. Pretty standard coding agent architecture.</p><p>We track task failures in a daily review to make sure agent reliability and generated code quality meet high standards. We get to see LLM behavior at the edges, where things either perform shockingly well or fail in very bizarre ways. Starting in September, we saw that a large percentage of our task failures occurred because the LLM session exceeded a limit we had on the maximum number of messages. Upon inspection of these failing tasks, we could see that the LLM had fallen into an infinite loop: calling a tool unsuccessfully, then calling that same tool in the same erroneous way over and over (often 30-40 times), until the limit was hit.</p><p>We have a <code>replace_file</code> tool that allows the LLM to overwrite an existing file (or create a new file) at <code>file_path</code> with text provided in <code>content</code>. Both parameters are marked as required.</p><pre><code><code>{
  name: "replace_file",
  description: "Write a file to the local filesystem. Overwrites the existing file if there is one.",
  input_schema: {
    type: "object",
    properties: {
      file_path: {
        type: "string",
        description: "Path to the file to replace or create"
      },
      content: {
        type: "string",
        description: "New content for the file"
      }
    },
    required: ["file_path", "content"]
  }
}
</code></code></pre><p>In the failing tasks, the LLM repeatedly called <code>replace_file</code> with a valid <code>file_path</code> but no <code>content</code> at all! And once it made a bad call, it would spiral into an infinite loop, calling <code>replace_file</code> over and over in exactly the same way and never specifying <code>content</code>.</p><h1>break;</h1><p>Our initial mitigation was simple and direct. When receiving a bad tool call, we started returning a more verbose error message to the LLM, explicitly naming the parameter that was missing and clearly instructing it to think about the value of that parameter before making the call again. The fix was deployed, and we found it had no observable effect at all &#8212; our first hint that this wasn&#8217;t just a run-of-the-mill mistake.</p><p>Next, we tried a stronger intervention. When a bad tool call was made, we would disable tool calling entirely in the next LLM turn. We&#8217;d explicitly tell the model via a user message that tool calling was disabled, that the function call was missing a parameter, and that it should reflect on what the content of that parameter should be. The model would respond with an assistant text message (not a tool call) with its thinking, and then we would re-enable tool calls on the subsequent turn. This was a much more invasive approach, pausing the entire trajectory to give the model a chance to think deeply.</p><p>And it still didn&#8217;t work. The model would indeed think through what needed to be done, often with a dose of self-flagellation as in the quote at the start of this article. The fascinating thing: it knew <em>precisely</em> what was wrong. It could textually describe how to fix it. 
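</p><p>The shape of that two-step intervention &#8212; validate the call, then force a tools-disabled reflection turn &#8212; can be sketched in a few lines. This is a hypothetical reconstruction with invented names (<code>REQUIRED</code>, <code>reflection_turn</code>), loosely modeled on chat-style tool-calling APIs, not the actual Tonkotsu implementation:</p>

```python
# Hypothetical sketch of the "disable tools and reflect" intervention.
# Names (REQUIRED, reflection_turn) are invented for illustration.

REQUIRED = {"replace_file": ["file_path", "content"]}

def missing_params(tool_name, tool_input):
    """Return the required parameters absent from a tool call's input."""
    return [p for p in REQUIRED.get(tool_name, []) if p not in tool_input]

def reflection_turn(tool_name, tool_input):
    """On a malformed call, build a tools-disabled reflection turn.

    Returns None when the call is well-formed and no intervention is needed.
    """
    missing = missing_params(tool_name, tool_input)
    if not missing:
        return None
    prompt = (
        "Tool calling is disabled for this turn. Your call to "
        + tool_name + " was missing required parameter(s): "
        + ", ".join(missing)
        + ". Reflect on what the value of each missing parameter "
        + "should be before calling the tool again."
    )
    # The caller sends this as a user message with tools disabled,
    # then re-enables tool calls on the following turn.
    return {"role": "user", "content": prompt, "tools_enabled": False}
```

<p>The trajectory pauses for one turn: the model answers the reflection prompt as plain text, and tool calling is re-enabled afterward.</p><p>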
But then, on the next tool-call-enabled turn, it would immediately repeat the malformed call.</p><p>At some point, it also started referencing implementation details, suggesting that the model internally emits tool calls as XML:</p><blockquote><p>The issue is clear: I keep writing:</p><pre><code><code>&lt;invoke name="replace_file"&gt;
&lt;parameter name="file_path"&gt;styles/styles.css&lt;/parameter&gt;
&lt;/invoke&gt;

</code></code></pre><p>But I MUST write:</p><pre><code><code>&lt;invoke name="replace_file"&gt;
&lt;parameter name="file_path"&gt;styles/styles.css&lt;/parameter&gt;
&lt;parameter name="content"&gt;THE ACTUAL CSS CODE HERE&lt;/parameter&gt;
&lt;/invoke&gt;

</code></code></pre></blockquote><p>We had stumbled upon some strange, deep-seated behavior of the model. We speculated that it was an artifact of training that had demonstrated the value of retrying tool calls: once the model latched onto this failing tool call pattern, it kept sampling the same tool call sequence again and again. It had fallen into a gravity well so strong that not only could it not correct the tool call, it also couldn&#8217;t formulate any other strategy as a workaround.</p><p>At this point, we were stumped. Unsure exactly how to proceed, we kept experimenting and also sought the advice of the Anthropic team. They suggested a tweak to our intervention approach: provide the LLM with the exact JSON template for the function call and ask it to fill the template out during its tools-disabled reflection turn. We didn&#8217;t expect much from this simple tweak, but added it to our battery of experiments. We would now add this static prompt to our reflection instruction to the model:</p><pre><code>Generate the following JSON object to represent the correct tool call with real parameter values for replace_file. Conform to exactly this JSON structure:

  {
    "type": "tool_use",
    "name": "replace_file",
    "input": {
      "file_path": &lt;FILE_PATH_HERE&gt;,
      "content": &lt;CONTENT_HERE&gt;
    }
  }</code></pre><p>Shockingly, this simple tweak resulted in significant improvements! The model still occasionally generates incorrect tool calls, but is able to recover rather than spiral into an infinite loop &#8212; a much better result. In yet another bizarre aspect of the model&#8217;s behavior, this explicit JSON structure was enough to help the model climb out of the gravity well of the tool call loop.</p><p>More recently, Anthropic released <a href="https://platform.claude.com/docs/en/build-with-claude/structured-outputs">strict tool use</a>, which should guarantee correct tool calls. We&#8217;re currently experimenting with this as well.</p><h1>Parallel &gt; Perfect</h1><p>What&#8217;s striking is how familiar this all feels if you&#8217;ve ever been an engineering manager or even just an observant member of a team. You&#8217;ve probably worked with someone who:</p><ul><li><p>Repeats the same unproductive action in the face of increasingly explicit feedback</p></li><li><p>Is generally quite reasonable, but gets bizarrely stubborn on one issue</p></li><li><p>Can verbalize the solution to a problem, but simply can&#8217;t execute it</p></li></ul><p>Humans do this, and so do LLMs. 
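</p><p>The managerial fix for a looping teammate applies to a looping agent too: notice the repetition early and change the protocol, rather than letting a message cap trip dozens of calls later. Noticing can be as cheap as tracking consecutive identical failures. A minimal sketch (hypothetical code; the names and threshold are illustrative, not the actual Tonkotsu implementation):</p>

```python
# Hypothetical loop guardrail: flag N consecutive identical failed tool
# calls so an intervention can fire before a session message cap trips.
import json

class LoopGuard:
    def __init__(self, threshold=3):
        self.threshold = threshold
        self.last_call = None
        self.repeats = 0

    def record_failure(self, tool_name, tool_input):
        """Record a failed call; return True once the same call has
        failed `threshold` times in a row."""
        call = (tool_name, json.dumps(tool_input, sort_keys=True))
        if call == self.last_call:
            self.repeats += 1
        else:
            self.last_call = call
            self.repeats = 1
        return self.repeats >= self.threshold

    def record_success(self):
        """Any successful call breaks the streak."""
        self.last_call = None
        self.repeats = 0
```

<p>Crossing the threshold is the signal to escalate: a more verbose error, a reflection turn, or an explicit JSON template to fill in.</p><p>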
Our bet is that the future isn&#8217;t perfect coworkers (agent or human); it&#8217;s the ability to effectively coordinate them all together to solve a big problem in parallel.</p>]]></content:encoded></item></channel></rss>