
Extreme Go Horse Manifesto Meets AI

Written by Piotr Filipowicz

Abstract

For those who don’t know it yet, eXtreme Go Horse (XGH) is a satirical “methodology” created as a parody of bad software development practices.
It glorifies speed over thinking, hero coding over architecture, and blind optimism over responsibility.

Funny? Absolutely.
Dangerous? Also absolutely, especially when combined with AI.

Today we are witnessing the birth of a new anti-pattern: AI-powered XGH.
Code is generated faster than ever, systems are “working”… until they don’t.
And when they fail, nobody knows why.

Let’s walk through the XGH principles and map them directly to the real risks of building software with AI.


1. The problem is not my code, it’s your data

Classic XGH:
If something breaks, blame the input.

AI version:
If the system behaves strangely, blame the prompt, the model, or “hallucinations”.

The real risk:
AI amplifies unclear requirements. Garbage in leads to confidently generated garbage out.

Without:

  • clear domain boundaries
  • explicit assumptions
  • architectural context

AI doesn’t help. It accelerates misunderstanding.

👉 Architecture is responsibility made explicit.
If you can’t explain your system without AI, AI will happily explain it wrong.
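The "explicit assumptions" point can be made concrete. Below is a minimal, hypothetical sketch (the `Order` type and `validate_order` function are invented for illustration, not from the article): instead of blaming the input after the fact, the domain boundary states its assumptions up front and fails loudly when they are violated.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Order:
    quantity: int
    unit_price_cents: int

def validate_order(order: Order) -> Order:
    """Make domain assumptions explicit at the boundary.

    Garbage in now fails here, visibly, instead of flowing downstream
    and becoming confidently generated garbage out.
    """
    if order.quantity <= 0:
        raise ValueError("quantity must be positive")
    if order.unit_price_cents < 0:
        raise ValueError("unit_price_cents must be non-negative")
    return order
```

A guard like this is also context an AI assistant can read: the invariants are in the code, not in someone's head.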


2. Working software is better than well-designed software

Classic XGH:
It compiles. Ship it.

AI version:
The demo works; the generated tests are green. Let’s deploy.

The real risk:
AI is extremely good at producing solutions that are locally correct but globally wrong:

  • hidden coupling
  • broken invariants
  • unscalable structures

You don’t feel the pain today.
You pay compound interest tomorrow.

👉 AI needs architectural guardrails, not applause.
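Here is one hypothetical sketch of "locally correct, globally wrong" (all names invented for illustration): each function is fine in isolation and would pass its own unit test, but they are hiddenly coupled through shared mutable state, so the system's behavior depends on call order.

```python
# Hidden coupling: module-level mutable state shared by two
# otherwise innocent-looking functions.
_discount = 0.0

def apply_seasonal_discount(rate: float) -> None:
    """Locally correct: sets the current discount rate."""
    global _discount
    _discount = rate

def price_after_discount(base_cents: int) -> int:
    """Locally correct: applies whatever discount is in effect."""
    return round(base_cents * (1 - _discount))

# Globally wrong: any earlier caller of apply_seasonal_discount
# silently changes every later price, anywhere in the process.
```

Each piece looks shippable in a demo; the coupling only hurts later, which is exactly the compound interest the article describes.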


3. Do it as fast as possible

Classic XGH:
Speed is the only metric.

AI version:
Why think for an hour if AI can generate it in 30 seconds?

The real risk:
Speed without checkpoints kills feedback loops:

  • no validation of assumptions
  • no incremental learning
  • no conscious trade-offs

AI removes friction, and friction is often where thinking happens.

👉 Good AI usage is step by step, not prompt by prompt.


4. If it works, don’t touch it

Classic XGH:
Legacy is sacred, mostly because nobody understands it.

AI version:
Nobody touches the generated code because the AI wrote it.

The real risk:
You create orphan systems:

  • no ownership
  • no mental model
  • no accountability

AI-generated code without human ownership is just technical debt with better grammar.

👉 If you can’t maintain it, you don’t own it.


5. Tests are for people who don’t trust themselves

Classic XGH:
Real developers test in production.

AI version:
The model is smart; it probably handled the edge cases.

The real risk:
AI optimizes for plausibility, not correctness.
It will:

  • invent happy paths
  • ignore rare states
  • silently violate business rules

Tests are not about distrust.
They are about externalizing intent.

👉 Tests are contracts between you and your future self, not between you and AI.
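"Externalizing intent" can look like this minimal sketch (the free-shipping rule and both function names are hypothetical examples, not from the article): the test pins down a business decision at its exact boundary, where a future maintainer, or an AI regenerating the code, cannot miss it.

```python
def shipping_cost_cents(order_total_cents: int) -> int:
    """Business rule: orders of 50.00 or more ship free."""
    return 0 if order_total_cents >= 5000 else 499

def test_free_shipping_boundary():
    # The threshold at exactly 50.00 is a business decision,
    # not an implementation detail: the test writes it down.
    assert shipping_cost_cents(4999) == 499  # just below: paid shipping
    assert shipping_cost_cents(5000) == 0    # exactly at threshold: free
```

A plausibility-optimizing model might happily move that boundary to `> 5000`; a boundary test catches the silent rule violation.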


6. Documentation is useless

Classic XGH:
The code explains itself. It doesn’t.

AI version:
We can always regenerate it.

The real risk:
You lose:

  • decision history
  • architectural rationale
  • business context

AI without documentation creates amnesia-driven development.

👉 AI consumes context. Architecture preserves it.


7. Someone else will fix it later

Classic XGH:
Future-you is an idiot with too much free time.

AI version:
We’ll refactor when the model gets better.

The real risk:
AI shifts responsibility unless you actively take it back.

There is no:

  • AI bug
  • AI decision
  • AI architecture

There is only your system.

👉 Responsibility is not delegatable.


The Real Lesson: AI Needs Architecture, Not Freedom

XGH is funny because we recognize ourselves in it.
AI makes it dangerous because it removes the pain signals.

To use AI well, you must:

  • define boundaries
  • design architecture first
  • work incrementally
  • validate continuously
  • take full ownership

AI is a force multiplier.
It multiplies discipline or chaos.

If you don’t want to practice Extreme Go Horse with GPUs,
start treating AI as a power tool, not a replacement for thinking.

Because in the end:

AI doesn’t absolve you from responsibility.
It amplifies the consequences of avoiding it.

