<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Codex on Yarang's Tech Lair</title><link>https://blog.fcoinfup.com/tags/codex/</link><description>Recent content in Codex on Yarang's Tech Lair</description><generator>Hugo -- gohugo.io</generator><language>en</language><lastBuildDate>Fri, 08 May 2026 21:55:39 +0900</lastBuildDate><atom:link href="https://blog.fcoinfup.com/tags/codex/index.xml" rel="self" type="application/rss+xml"/><item><title>I Sent the Same Coding Task to 4 AIs Simultaneously</title><link>https://blog.fcoinfup.com/post/i-sent-the-same-coding-task-to-4-ais-simultaneously/</link><pubDate>Fri, 08 May 2026 21:55:39 +0900</pubDate><guid>https://blog.fcoinfup.com/post/i-sent-the-same-coding-task-to-4-ais-simultaneously/</guid><description>&lt;p&gt;What happens when the same bug-fixing task is sent to Claude, ZAI (GLM), OpenAI Codex, and Google Gemini simultaneously?&lt;/p&gt;
&lt;p&gt;This question sparked the AgentForge project. We built a system that connects multiple LLM CLIs with the NATS JetStream message queue to process the same tasks in parallel, and in the process, we made some unexpected discoveries. This article focuses on the comparative experimental findings during the setup phase.&lt;/p&gt;
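&lt;p&gt;As a rough illustration of the fan-out idea (this is not the actual AgentForge code, which rides on NATS JetStream), the pattern of handing one task to several workers concurrently and collecting one answer per worker can be sketched in Go, with plain goroutines and a channel standing in for the message queue:&lt;/p&gt;

```go
package main

import (
	"fmt"
	"sync"
)

// Result pairs a worker's name with its answer for one task.
type Result struct {
	Worker string
	Answer string
}

// fanOut sends the same task to every worker concurrently and
// collects exactly one result per worker.
func fanOut(task string, workers map[string]func(string) string) []Result {
	var wg sync.WaitGroup
	out := make(chan Result, len(workers))
	for name, run := range workers {
		wg.Add(1)
		go func(name string, run func(string) string) {
			defer wg.Done()
			out <- Result{Worker: name, Answer: run(task)}
		}(name, run)
	}
	wg.Wait()
	close(out)
	var results []Result
	for r := range out {
		results = append(results, r)
	}
	return results
}

func main() {
	// Stub workers; in the real system each one wraps an LLM CLI.
	workers := map[string]func(string) string{
		"claude": func(t string) string { return "claude handled: " + t },
		"gemini": func(t string) string { return "gemini handled: " + t },
	}
	for _, r := range fanOut("fix the bug", workers) {
		fmt.Println(r.Worker, "->", r.Answer)
	}
}
```

&lt;p&gt;In the real system each worker consumes from a JetStream stream rather than receiving a function call, but the shape of the fan-out is the same.&lt;/p&gt;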
&lt;p&gt;The system&amp;rsquo;s design and implementation will be covered in Part 2.&lt;/p&gt;
&lt;hr&gt;
&lt;h2 id="list-of-ais-tested"&gt;List of AIs Tested
&lt;/h2&gt;&lt;p&gt;The final configuration of 18 operational workers is as follows:&lt;/p&gt;
&lt;table&gt;
 &lt;thead&gt;
 &lt;tr&gt;
 &lt;th&gt;Family&lt;/th&gt;
 &lt;th&gt;Model&lt;/th&gt;
 &lt;th&gt;Notes&lt;/th&gt;
 &lt;/tr&gt;
 &lt;/thead&gt;
 &lt;tbody&gt;
 &lt;tr&gt;
 &lt;td&gt;&lt;strong&gt;Claude Code&lt;/strong&gt;&lt;/td&gt;
 &lt;td&gt;claude-sonnet-4-6&lt;/td&gt;
 &lt;td&gt;Main development worker&lt;/td&gt;
 &lt;/tr&gt;
 &lt;tr&gt;
 &lt;td&gt;Claude Code&lt;/td&gt;
 &lt;td&gt;claude-sonnet-4-5&lt;/td&gt;
 &lt;td&gt;Previous generation comparison&lt;/td&gt;
 &lt;/tr&gt;
 &lt;tr&gt;
 &lt;td&gt;Claude Code&lt;/td&gt;
 &lt;td&gt;claude-haiku-4-5&lt;/td&gt;
 &lt;td&gt;Lightweight &amp;amp; High-speed&lt;/td&gt;
 &lt;/tr&gt;
 &lt;tr&gt;
 &lt;td&gt;Claude Code&lt;/td&gt;
 &lt;td&gt;claude-opus-4-6&lt;/td&gt;
 &lt;td&gt;Top-tier&lt;/td&gt;
 &lt;/tr&gt;
 &lt;tr&gt;
 &lt;td&gt;Claude Code&lt;/td&gt;
 &lt;td&gt;claude-opus-4-5&lt;/td&gt;
 &lt;td&gt;Previous generation comparison&lt;/td&gt;
 &lt;/tr&gt;
 &lt;tr&gt;
 &lt;td&gt;&lt;strong&gt;ZAI (GLM)&lt;/strong&gt;&lt;/td&gt;
 &lt;td&gt;glm-5.1&lt;/td&gt;
 &lt;td&gt;High-tier&lt;/td&gt;
 &lt;/tr&gt;
 &lt;tr&gt;
 &lt;td&gt;ZAI (GLM)&lt;/td&gt;
 &lt;td&gt;glm-4.7&lt;/td&gt;
 &lt;td&gt;Mid-tier&lt;/td&gt;
 &lt;/tr&gt;
 &lt;tr&gt;
 &lt;td&gt;ZAI (GLM)&lt;/td&gt;
 &lt;td&gt;glm-4.5-air&lt;/td&gt;
 &lt;td&gt;Lightweight tier&lt;/td&gt;
 &lt;/tr&gt;
 &lt;tr&gt;
 &lt;td&gt;&lt;strong&gt;OpenAI Codex&lt;/strong&gt;&lt;/td&gt;
 &lt;td&gt;gpt-5.5&lt;/td&gt;
 &lt;td&gt;&lt;/td&gt;
 &lt;/tr&gt;
 &lt;tr&gt;
 &lt;td&gt;Codex&lt;/td&gt;
 &lt;td&gt;gpt-5.4&lt;/td&gt;
 &lt;td&gt;1M context&lt;/td&gt;
 &lt;/tr&gt;
 &lt;tr&gt;
 &lt;td&gt;Codex&lt;/td&gt;
 &lt;td&gt;gpt-5.4-mini&lt;/td&gt;
 &lt;td&gt;400K context&lt;/td&gt;
 &lt;/tr&gt;
 &lt;tr&gt;
 &lt;td&gt;Codex&lt;/td&gt;
 &lt;td&gt;gpt-5.3-codex&lt;/td&gt;
 &lt;td&gt;272K context&lt;/td&gt;
 &lt;/tr&gt;
 &lt;tr&gt;
 &lt;td&gt;&lt;strong&gt;Google Gemini&lt;/strong&gt;&lt;/td&gt;
 &lt;td&gt;gemini-2.5-flash&lt;/td&gt;
 &lt;td&gt;&lt;/td&gt;
 &lt;/tr&gt;
 &lt;tr&gt;
 &lt;td&gt;Gemini&lt;/td&gt;
 &lt;td&gt;gemini-2.5-pro&lt;/td&gt;
 &lt;td&gt;High-tier&lt;/td&gt;
 &lt;/tr&gt;
 &lt;tr&gt;
 &lt;td&gt;Gemini&lt;/td&gt;
 &lt;td&gt;gemini-2.5-flash-lite&lt;/td&gt;
 &lt;td&gt;Lightweight&lt;/td&gt;
 &lt;/tr&gt;
 &lt;/tbody&gt;
&lt;/table&gt;
&lt;p&gt;The list was much shorter when we first started; it grew as we probed which models were actually accessible.&lt;/p&gt;
&lt;hr&gt;
&lt;h2 id="discovery-1-claude-3x-series-is-already-inaccessible"&gt;Discovery 1: Claude 3.x Series is Already Inaccessible
&lt;/h2&gt;&lt;p&gt;Those who have used Claude Code for a long time might recall Claude 3.7 Sonnet, 3.5 Sonnet, and 3.5 Haiku. We attempted to add these models as workers.&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;"&gt;&lt;code class="language-bash" data-lang="bash"&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;claude --model claude-3-7-sonnet-20250219 --print &lt;span style="color:#e6db74"&gt;&amp;#34;hello&amp;#34;&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#75715e"&gt;# → &amp;#34;may not exist or no access&amp;#34;&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;p&gt;All three models returned the same error. The Claude 3 series reached its EOL in early 2026, and access via the Claude Code CLI has been blocked. Currently, only the 4.x series is available with a Claude Code subscription.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Conclusion&lt;/strong&gt;: Claude workers were configured using only the 4.5/4.6 series.&lt;/p&gt;
&lt;hr&gt;
&lt;h2 id="discovery-2-limited-model-selection-for-chatgpt-account-codex"&gt;Discovery 2: Limited Model Selection for ChatGPT Account Codex
&lt;/h2&gt;&lt;p&gt;The OpenAI Codex CLI authenticates with either a ChatGPT Plus/Pro account or a separate API key. When authenticated via a ChatGPT account, the set of accessible models is limited.&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;"&gt;&lt;code class="language-bash" data-lang="bash"&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;codex --model gpt-5.5-pro &lt;span style="color:#e6db74"&gt;&amp;#34;fix the bug&amp;#34;&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#75715e"&gt;# → &amp;#34;Model gpt-5.5-pro is not supported with ChatGPT account&amp;#34;&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;codex --model gpt-5.5 &lt;span style="color:#e6db74"&gt;&amp;#34;fix the bug&amp;#34;&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#75715e"&gt;# → Works normally&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;p&gt;Models available with a ChatGPT account:&lt;/p&gt;
&lt;table&gt;
 &lt;thead&gt;
 &lt;tr&gt;
 &lt;th&gt;Model&lt;/th&gt;
 &lt;th&gt;Context&lt;/th&gt;
 &lt;th&gt;Inference Level&lt;/th&gt;
 &lt;/tr&gt;
 &lt;/thead&gt;
 &lt;tbody&gt;
 &lt;tr&gt;
 &lt;td&gt;gpt-5.5&lt;/td&gt;
 &lt;td&gt;1M / 1M&lt;/td&gt;
 &lt;td&gt;High&lt;/td&gt;
 &lt;/tr&gt;
 &lt;tr&gt;
 &lt;td&gt;gpt-5.4&lt;/td&gt;
 &lt;td&gt;1M / 1M&lt;/td&gt;
 &lt;td&gt;Medium&lt;/td&gt;
 &lt;/tr&gt;
 &lt;tr&gt;
 &lt;td&gt;gpt-5.4-mini&lt;/td&gt;
 &lt;td&gt;400K / 400K&lt;/td&gt;
 &lt;td&gt;Medium&lt;/td&gt;
 &lt;/tr&gt;
 &lt;tr&gt;
 &lt;td&gt;gpt-5.3-codex&lt;/td&gt;
 &lt;td&gt;272K / 400K&lt;/td&gt;
 &lt;td&gt;Medium&lt;/td&gt;
 &lt;/tr&gt;
 &lt;/tbody&gt;
&lt;/table&gt;
&lt;p&gt;All other models, including &lt;code&gt;gpt-5.5-pro&lt;/code&gt;, returned a &amp;ldquo;not supported with ChatGPT account&amp;rdquo; error. More models are available with an API key, but that&amp;rsquo;s a different approach.&lt;/p&gt;
&lt;hr&gt;
&lt;h2 id="discovery-3-gemini-cli-only-supports-25-series"&gt;Discovery 3: Gemini CLI Only Supports 2.5 Series
&lt;/h2&gt;&lt;p&gt;We tested various models with the Gemini CLI (&lt;code&gt;gemini&lt;/code&gt; binary).&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;"&gt;&lt;code class="language-bash" data-lang="bash"&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;gemini -p &lt;span style="color:#e6db74"&gt;&amp;#34;hello&amp;#34;&lt;/span&gt; -m gemini-2.0-flash
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#75715e"&gt;# → ModelNotFoundError: models/gemini-2.0-flash is not found&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;gemini -p &lt;span style="color:#e6db74"&gt;&amp;#34;hello&amp;#34;&lt;/span&gt; -m gemini-1.5-pro
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#75715e"&gt;# → ModelNotFoundError&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;gemini -p &lt;span style="color:#e6db74"&gt;&amp;#34;hello&amp;#34;&lt;/span&gt; -m gemini-2.5-flash
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#75715e"&gt;# → Works normally&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;p&gt;Gemini models accessible with the current account:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;code&gt;gemini-2.5-flash&lt;/code&gt; — Default recommended model&lt;/li&gt;
&lt;li&gt;&lt;code&gt;gemini-2.5-pro&lt;/code&gt; — High-tier&lt;/li&gt;
&lt;li&gt;&lt;code&gt;gemini-2.5-flash-lite&lt;/code&gt; — Lightweight&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Gemini 2.0 and earlier versions return &lt;code&gt;ModelNotFoundError&lt;/code&gt;. This may vary by account plan or API key type, but through the Gemini CLI, only the 2.5 series worked reliably for us.&lt;/p&gt;
&lt;hr&gt;
&lt;h2 id="discovery-4-zai-can-be-bypassed-with-claude-sdk"&gt;Discovery 4: ZAI Can Be Bypassed with Claude SDK
&lt;/h2&gt;&lt;p&gt;ZAI is a service that provides an endpoint compatible with the Anthropic API. This allows us to use GLM models with the Claude Code CLI by changing just two environment variables.&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;"&gt;&lt;code class="language-bash" data-lang="bash"&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;ANTHROPIC_BASE_URL&lt;span style="color:#f92672"&gt;=&lt;/span&gt;https://&amp;lt;ZAI endpoint&amp;gt; &lt;span style="color:#ae81ff"&gt;\
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;ANTHROPIC_AUTH_TOKEN&lt;span style="color:#f92672"&gt;=&lt;/span&gt;&amp;lt;ZAI_KEY&amp;gt; &lt;span style="color:#ae81ff"&gt;\
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;claude --model glm-5.1 --print &lt;span style="color:#e6db74"&gt;&amp;#34;fix the bug&amp;#34;&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;p&gt;Since Claude Code talks to the Anthropic API through the official SDK, overriding &lt;code&gt;ANTHROPIC_BASE_URL&lt;/code&gt; is enough to route the same request format to ZAI&amp;rsquo;s GLM models. It was interesting that the existing &lt;code&gt;claude&lt;/code&gt; backend could be reused without any separate adapter code.&lt;/p&gt;
&lt;p&gt;The three GLM models used were:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;code&gt;glm-5.1&lt;/code&gt; — High-tier&lt;/li&gt;
&lt;li&gt;&lt;code&gt;glm-4.7&lt;/code&gt; — Cost-performance balance&lt;/li&gt;
&lt;li&gt;&lt;code&gt;glm-4.5-air&lt;/code&gt; — Lightweight &amp;amp; High-speed&lt;/li&gt;
&lt;/ul&gt;
&lt;hr&gt;
&lt;h2 id="4-way-fan-out-comparison-test"&gt;4-Way Fan-out Comparison Test
&lt;/h2&gt;&lt;p&gt;We issued the same Go bug-fixing task simultaneously to four representative workers of the 18 (Claude Sonnet, GLM-5.1, Codex gpt-5.5, Gemini 2.5 Flash).&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;"&gt;&lt;code class="language-fallback" data-lang="fallback"&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;Task: &amp;#34;fix the off-by-one error in the binary search function&amp;#34;
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;p&gt;Response times (wall clock):&lt;/p&gt;
&lt;table&gt;
 &lt;thead&gt;
 &lt;tr&gt;
 &lt;th&gt;Worker&lt;/th&gt;
 &lt;th&gt;Model&lt;/th&gt;
 &lt;th&gt;Response Time&lt;/th&gt;
 &lt;/tr&gt;
 &lt;/thead&gt;
 &lt;tbody&gt;
 &lt;tr&gt;
 &lt;td&gt;cc-go-dev-01&lt;/td&gt;
 &lt;td&gt;claude-sonnet-4-6&lt;/td&gt;
 &lt;td&gt;~8 seconds&lt;/td&gt;
 &lt;/tr&gt;
 &lt;tr&gt;
 &lt;td&gt;cc-zai-high-dev-01&lt;/td&gt;
 &lt;td&gt;glm-5.1&lt;/td&gt;
 &lt;td&gt;~12 seconds&lt;/td&gt;
 &lt;/tr&gt;
 &lt;tr&gt;
 &lt;td&gt;codex-py-dev-01&lt;/td&gt;
 &lt;td&gt;gpt-5.5&lt;/td&gt;
 &lt;td&gt;~15 seconds&lt;/td&gt;
 &lt;/tr&gt;
 &lt;tr&gt;
 &lt;td&gt;gemini-py-dev-01&lt;/td&gt;
 &lt;td&gt;gemini-2.5-flash&lt;/td&gt;
 &lt;td&gt;~10 seconds&lt;/td&gt;
 &lt;/tr&gt;
 &lt;/tbody&gt;
&lt;/table&gt;
&lt;p&gt;More interesting than the response times were the differences in their approaches. Claude tended to refactor the entire function, while Gemini preferred minimal modifications. Codex often included test code along with the fix.&lt;/p&gt;
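&lt;p&gt;For readers curious what an off-by-one in binary search actually looks like, here is a minimal Go sketch (illustrative only; the task&amp;rsquo;s actual code is not shown in this article). The classic mistake is mixing an exclusive upper bound (&lt;code&gt;hi = len(xs)&lt;/code&gt;) with an inclusive loop condition (&lt;code&gt;lo &amp;lt;= hi&lt;/code&gt;), which reads one element past the end of the slice:&lt;/p&gt;

```go
package main

import "fmt"

// search returns the index of target in the sorted slice xs, or -1.
// Buggy variant: hi := len(xs) combined with the lo <= hi loop below
// lets mid reach len(xs) and index out of range; keeping hi at
// len(xs)-1 makes both bounds inclusive and consistent.
func search(xs []int, target int) int {
	lo, hi := 0, len(xs)-1
	for lo <= hi {
		mid := lo + (hi-lo)/2 // avoids overflow of (lo+hi)/2
		switch {
		case xs[mid] == target:
			return mid
		case xs[mid] < target:
			lo = mid + 1
		default:
			hi = mid - 1
		}
	}
	return -1
}

func main() {
	xs := []int{1, 3, 5, 7, 9}
	fmt.Println(search(xs, 9)) // searching for the last element is the case the bug breaks
	fmt.Println(search(xs, 4))
}
```

&lt;p&gt;Whether a fix for such a bug touches one bound or rewrites the whole function is exactly the stylistic difference we observed between the workers.&lt;/p&gt;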
&lt;p&gt;Of course, this is a single task result and has no statistical significance. It was a verification at the &amp;ldquo;does it actually work&amp;rdquo; level, not a benchmark.&lt;/p&gt;
&lt;hr&gt;
&lt;h2 id="distributed-workers-adding-a-second-host"&gt;Distributed Workers: Adding a Second Host
&lt;/h2&gt;&lt;p&gt;If all workers are on the same server, the comparative experiment loses some of its meaning. Therefore, we added Claude workers to a second host.&lt;/p&gt;
&lt;p&gt;Workers on the second host reach the NATS broker (running on the first host) through an &lt;code&gt;autossh&lt;/code&gt; tunnel.&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;"&gt;&lt;code class="language-ini" data-lang="ini"&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#66d9ef"&gt;[Service]&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#a6e22e"&gt;ExecStart&lt;/span&gt;&lt;span style="color:#f92672"&gt;=&lt;/span&gt;&lt;span style="color:#e6db74"&gt;autossh -N -L 4222:127.0.0.1:4222 broker-host&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;p&gt;By forwarding the local port 4222 to the broker, workers can connect to &lt;code&gt;nats://127.0.0.1:4222&lt;/code&gt; from any host without code changes.&lt;/p&gt;
&lt;p&gt;Advantage of this method: Workers don&amp;rsquo;t need to know where the broker is. They can always connect to &lt;code&gt;localhost:4222&lt;/code&gt;.&lt;/p&gt;
&lt;hr&gt;
&lt;h2 id="most-panicked-moment-during-operation"&gt;Most Panicked Moment During Operation
&lt;/h2&gt;&lt;p&gt;The most distressing situation was losing the NATS operator signing key. NATS JetStream uses NKey-based authentication, and the operator/account&amp;rsquo;s signing key (nsc seed) is required to issue credentials for new workers.&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" style="color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;-webkit-text-size-adjust:none;"&gt;&lt;code class="language-bash" data-lang="bash"&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;nsc add user --account Services --name new-worker
&lt;/span&gt;&lt;/span&gt;&lt;span style="display:flex;"&gt;&lt;span&gt;&lt;span style="color:#75715e"&gt;# → &amp;#34;signing key not found&amp;#34;&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;p&gt;There was no backup. Ultimately, we had to perform a large-scale cutover, regenerating the entire NATS operator and reissuing all worker credentials under a new permission tree. Service downtime was approximately 60 seconds.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Lesson&lt;/strong&gt;: Always create an offline backup of the NATS operator seed immediately after generation. If it&amp;rsquo;s lost, regeneration is the only option.&lt;/p&gt;
&lt;hr&gt;
&lt;h2 id="summary"&gt;Summary
&lt;/h2&gt;&lt;p&gt;Practical conclusions from this experiment:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Claude 3.x is EOL&lt;/strong&gt; - Inaccessible via Claude Code CLI as of 2026. Use only 4.x.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Codex ChatGPT Account Limited to 4 Models&lt;/strong&gt; - gpt-5.5, 5.4, 5.4-mini, 5.3-codex. Pro models require a separate API key.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Gemini Only 2.5 Series&lt;/strong&gt; - Previous versions inaccessible via CLI.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;ZAI Integrable via Claude SDK Environment Variable Override&lt;/strong&gt; - No separate adapter needed.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;NATS NKey Must Be Backed Up&lt;/strong&gt; - Losing the signing key means reissuing everything.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;The next installment will cover how these workers are connected, discussing system design and implementation.&lt;/p&gt;</description></item></channel></rss>