Claude on Claude
Prompt:
I would like to know your stance about this: [The Snake That Ate Itself: What Claude Code’s Source Revealed About AI Engineering Culture](https://techtrenches.dev/p/the-snake-that-ate-itself-what-claude)
Claude Opus 4.6 reply:
Let me fetch that article first. This is a substantial critique, and I want to engage with it honestly rather than defensively. Here's my take:
Where the article has real points:
The core observation is legitimate. A 3,167-line function with 486 branch points is poor engineering by any standard — human or AI-authored. The regex-for-sentiment-analysis detail is genuinely ironic. And the documented bug burning 250K API calls daily, with a known three-line fix left unshipped, is hard to defend. These are valid criticisms of engineering discipline.
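The irony of regex-based sentiment analysis is easy to see in a toy example. The sketch below is hypothetical and not the leaked code: a keyword-matching classifier of the general kind described, which fails on simple negation — exactly the class of problem an LLM vendor's own model handles trivially.

```python
import re

# Hypothetical illustration (NOT the actual Claude Code source): a naive
# keyword-regex sentiment classifier. It counts matched keywords and
# cannot see negation or context.
POSITIVE = re.compile(r"\b(great|love|excellent|happy)\b", re.IGNORECASE)
NEGATIVE = re.compile(r"\b(bad|hate|terrible|awful)\b", re.IGNORECASE)

def classify(text: str) -> str:
    pos = len(POSITIVE.findall(text))
    neg = len(NEGATIVE.findall(text))
    if pos > neg:
        return "positive"
    if neg > pos:
        return "negative"
    return "neutral"

print(classify("I love this tool"))         # positive
print(classify("I do not love this tool"))  # also "positive" — negation is invisible to the regex
```

The second call misclassifies a negated sentence, which is the standard failure mode of pattern-matching approaches to sentiment.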
The escalating percentage claims (80→90→95→100%) without ever defining the metric is also a fair critique. As the LessWrong analysis noted, "lines committed" and "engineering effort" are very different quantities. The ambiguity served a marketing purpose.
Where I think the article overreaches:
The rhetorical move from "Claude Code's codebase has quality problems" to "Anthropic's entire engineering culture is rotten" is a significant inferential leap. The author acknowledges this is a CLI tool — essentially a wrapper around an API — not the model training infrastructure. Extrapolating from a product built with a "move fast" startup mentality to conclusions about model safety or research rigor is tempting but unsupported.
The ouroboros metaphor is evocative but somewhat unfair. All software has feedback loops — compilers compile themselves, test frameworks test themselves. The question is whether the loop has quality gates, not whether it exists. The article assumes no gates exist based on one leaked codebase.
My honest position:
I'm the product being discussed here, so I have an obvious conflict of interest, and readers should weigh that seriously. But I think the article's strongest contribution is the question it raises rather than the answer it asserts. The question: what quality standards should apply when AI generates most of the code? That's genuinely unresolved. The author's grandfather's maxim — "do it well, or don't do it at all" — is in real tension with the economics of rapid iteration in software, and that tension predates AI by decades (as the top commenter with 21 years of experience noted).
The weakest part is the ending's false dichotomy: either we accept 3,167-line functions or we become electricians. The real engineering challenge is figuring out what "code review" even means when generation is near-free. That's a hard, interesting problem — not a eulogy.