Anthropic had a rough week. And the part that stings isn’t just that something went wrong - it’s how they handled it.

A map file, a DMCA frenzy, and a Python loophole

On March 31st, Anthropic accidentally shipped Claude Code’s TypeScript source code via a source map file left in their npm package. The leak was spotted almost immediately, and GitHub repositories mirroring the code started receiving DMCA takedowns shortly after. What followed was a fairly aggressive takedown campaign by Anthropic.

One of the interesting things that happened as a result was that someone used an AI agent to rewrite the leaked TypeScript into Python. Functionally equivalent - but a different language means the DMCA doesn’t apply. The DMCA protects specific expression, not ideas or functionality; a port isn’t a copy. Ironically, Anthropic - who themselves have trained models on derived works and leaned on the argument that AI output isn’t straightforwardly copyrightable - can’t easily pursue this without creating precedents that cut directly against their own interests.

Then they made it worse

A few days later, on April 4th, Anthropic’s Boris Cherny announced that Claude subscriptions would no longer cover third-party tools like OpenClaw. Effective immediately. Users could still access Claude via their own login with usage bundles, or pay per token via the API - but the free ride for tools built on top of Claude’s subscription tier was over.

The stated reason was capacity: subscription plans weren’t designed for the usage patterns these tools generate. I understand the economics. But the timing, and the bluntness of the move, are hard to separate from the week Anthropic just had. This is a company that built significant goodwill with developers - the kind of goodwill that comes from making genuinely good tools and being transparent about how they work.
Cutting off third-party ecosystem tools with little notice, right after a messy source code incident, reads as pulling up the drawbridge. Developer-hostile is a strong phrase. But I’m not sure what else to call it. Making things worse, the rules around this limitation are not clear. Matt Pocock summarized the absolute mess in a thread on Twitter. See for yourself.

The silver lining: what the leak actually revealed

My colleague Nnenna Ndukwe at Qodo wrote something genuinely worth reading - a deep dive into the governance patterns inside the leaked Claude Code source. If you’re going to accidentally publish your source code, the least you can do is have code worth learning from. And apparently there’s a lot to learn.

The patterns Nnenna describes are the kind of structural quality work that rarely makes it into blog posts: how the pipeline is governed, how trust is maintained across the delivery chain, and what the release process reveals about how Anthropic thinks about integrity. It’s a valuable read regardless of how you feel about the incident that made it possible.

The uncomfortable question the post raises implicitly: if these patterns were in place, how did the map file still slip through? Strong architectural governance doesn’t automatically catch everything; release pipelines have gaps that code review doesn’t touch.

A practical thing while we’re here

If you use Claude Code, @zodchiii’s thread on hooks is worth bookmarking. Eight concrete hooks for automating quality enforcement: auto-formatting, blocking dangerous commands, preventing PRs when tests fail. The key insight is that CLAUDE.md instructions are followed about 80% of the time, but hooks execute every time. One is a suggestion; the other is a gate.

The comfortable lies

I published a post on my blog this week - Maslow’s Hammer and three lies QA tells itself - about where quality engineering is actually headed, and the self-soothing myths that make it hard to see clearly.
I wrote it because I keep seeing QA professionals framing the AI moment as something that will pass, or something that doesn’t really affect what they do. I think that’s wrong, and I think it’s comfortable to be wrong about it.

I’d love to hear how you’re thinking about this. Are you seeing quality work change shape in your organization, or does it still look more or less the same as it did three years ago?