
For a long time, keeping Architecture Decision Records in Confluence felt like the mature, responsible thing to do.
They were easy to search.
Business stakeholders could access them without friction.
Architecture was visible, discussable, and positioned as part of a strategic conversation rather than buried in code.
From an enterprise perspective, this made complete sense. Architecture is not only about implementation details. It is about trade-offs, intent, constraints, and consequences that often extend far beyond a single repository.
Nowadays, we are no longer designing systems exclusively with humans in mind. We are increasingly designing them in collaboration with AI agents.
And that shifts the gravity of the ADR conversation.
Levels of architecture
A few days ago, I published a post on LinkedIn about ADRs and where they should live. I didn’t expect it to spark such an intense discussion, but the comment section quickly filled with interesting, sometimes opposing perspectives from architects and engineering leaders.
Several experienced architects noted that many decisions do not map directly to the code. Enterprise-level decisions, such as governance models, capability design, or data ownership policies, often span multiple systems and stakeholders. These cannot realistically be reduced to a single repository context.
At the same time, many developers emphasized a recurring pain: documentation stored in Confluence (or a wiki) tends to drift away from implementation. When ADRs are not versioned, reviewed, and evolved alongside the code, they gradually lose their relevance and become historical snapshots rather than living guidance.
Both sides make strong arguments. So the core issue is not where ADRs physically reside.
The more important question is how we ensure they stay authoritative, actionable, and understandable to both machines and humans.
Three levels of ADRs
We can roughly distinguish between three layers of decisions:
Code-level ADRs (local constraints)
These are the decisions that directly shape code in one repo:
- persistence model and migration approach
- API style (REST/GraphQL/events), error schema
- deployment approach, runtime constraints
- libraries that are explicitly allowed / disallowed
For agents, these are like “local rules of the road”.
System/platform ADRs (cross-repo constraints)
These decisions define how multiple repos are supposed to work together:
- communication patterns (sync vs async, events vs commands)
- shared security approach (authN/authZ, token handling)
- observability standards (tracing, logging semantics)
- data ownership boundaries and integration rules
- platform-provided golden paths
Enterprise ADRs (strategic constraints)
These are policies and decisions that may not map to one code change, but still shape everything:
- governance and risk constraints
- data classification and retention
- regulatory requirements and auditability
- M&A integration strategy, reference architectures
If you look at ISO/IEC/IEEE 42010, this maps nicely to the idea that architecture descriptions exist to address different stakeholders and concerns through viewpoints. Different ADR levels are essentially different concern-slices of the same system.
The problem with AI agents
When coding agents generate or modify code, they operate within the repository boundary. If ADRs live outside that boundary, the agent is effectively deprived of architectural intent. It can see implementation, but not reasoning.
In an AI-augmented workflow, architectural context is not optional metadata. It becomes operational input.
An agent that understands architectural constraints, non-functional requirements, integration boundaries, and technology choices behaves differently from one that only sees code.
The catch is that most agents are still repo-scoped by default. They can read the local codebase, local README, local ADRs, but they don’t naturally understand the ecosystem. And modern systems are rarely single-repo. They are a landscape: multiple services, shared platforms, contracts, data products, and enterprise policies.
So, how do we make sure an AI agent working inside one repository still respects the architecture of the whole ecosystem?
That’s where three levels of ADRs become practical, not theoretical.
Make ADRs machine-readable
Local ADRs stay with the repo (team scope)
Keep code-level ADRs next to the code they affect, because this is where the feedback loop is fastest and where agents naturally operate.
How to make it work well:
- Put ADRs in a predictable location (e.g., /docs/adr/).
- Use a consistent template and metadata (level, status, scope, affected areas, links).
- Make “ADR update” part of the PR definition of done when the change is architecture-significant.
- Treat ADRs like code: review them, request changes, and keep them small and decision-focused.
Outcome: agents and humans always start with the most relevant constraints for this repository, without searching.
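To make "consistent template and metadata" concrete, here is a minimal sketch of a check that could run in CI and fail the build when an ADR under /docs/adr/ is missing its metadata. The front-matter convention and the field names (level, status, scope) are assumptions for illustration, not a standard; adapt them to whatever template your team agrees on.

```python
# adr_lint.py - illustrative check that every ADR carries consistent metadata.
# Assumes ADRs live in docs/adr/ as Markdown files with a simple
# "key: value" front matter block delimited by "---" lines (an assumption,
# not a standard).
import re
import sys
from pathlib import Path

REQUIRED_FIELDS = {"level", "status", "scope"}      # hypothetical field names
ALLOWED_LEVELS = {"code", "system", "enterprise"}   # the three ADR levels

def front_matter(text: str) -> dict:
    """Extract key/value pairs from a leading ----delimited block."""
    match = re.match(r"^---\n(.*?)\n---", text, re.DOTALL)
    if not match:
        return {}
    pairs = {}
    for line in match.group(1).splitlines():
        if ":" in line:
            key, value = line.split(":", 1)
            pairs[key.strip()] = value.strip()
    return pairs

def main() -> int:
    errors = []
    for adr in sorted(Path("docs/adr").glob("*.md")):
        meta = front_matter(adr.read_text(encoding="utf-8"))
        missing = REQUIRED_FIELDS - meta.keys()
        if missing:
            errors.append(f"{adr}: missing fields {sorted(missing)}")
        elif meta["level"] not in ALLOWED_LEVELS:
            errors.append(f"{adr}: unknown level '{meta['level']}'")
    for error in errors:
        print(error)
    return 1 if errors else 0

if __name__ == "__main__":
    sys.exit(main())
```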
Put shared ecosystem rules in one place
You need a single “source of truth” for decisions that apply across multiple repos.
Call it an architecture-registry or platform-standards repo.
This is where you keep decisions like:
- “How we integrate (REST vs events, allowed patterns)”
- “How authentication works”
- “API standards (errors, versioning, contracts)”
- “Domain boundaries and data ownership”
- “Minimum NFRs: logging/tracing/security baselines”
These are not “one-team ADRs”.
They are ecosystem constraints.
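If the registry is meant to feed agents and tooling, not just humans, each rule also needs a machine-readable notion of where it applies. One illustrative shape (field names, service names, and URLs below are hypothetical) is to tag every rule with the repos or domains it constrains:

```python
# One hypothetical shape for entries in the architecture-registry repo:
# each ecosystem rule records its level, the repos it applies to, and a
# link back to the full decision. All values are placeholders.
ECOSYSTEM_RULES = [
    {
        "id": "ADR-0042",
        "level": "system",
        "title": "Service-to-service calls go through the event bus, not direct HTTP",
        "applies_to": ["orders-service", "billing-service"],  # hypothetical repo names
        "link": "https://example.com/architecture-registry/adr/0042",  # placeholder URL
    },
    {
        "id": "ADR-0007",
        "level": "enterprise",
        "title": "Customer data is classified as confidential with a defined retention period",
        "applies_to": ["*"],  # applies to every repo
        "link": "https://example.com/architecture-registry/adr/0007",
    },
]
```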
Each repo gets a small excerpt of the ecosystem rules
Because neither agents nor teams will read a huge central library every time.
So you add a small file (or folder) into every repo, for example:
/docs/ecosystem-rules.md
or /.agent-context/constraints.md
This is a short, repo-specific cheat sheet:
- the 10–20 rules that apply to this service
- links to the full decisions in the central registry
You can generate/update it automatically later, but even a manual approach works as a starting point.
Outcome: anyone (including an AI agent) opening the repo immediately sees “the rules of the ecosystem”.
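The "generate/update it automatically" part can start very small: a script that filters the registry's rules by service and rewrites the per-repo cheat sheet whenever the registry changes. A rough sketch, assuming rules shaped like the registry example above (all names and paths are illustrative):

```python
# generate_excerpt.py - illustrative generator for docs/ecosystem-rules.md.
# Assumes rules are dictionaries with "id", "title", "applies_to", and
# "link" fields; in practice they would be loaded from the
# architecture-registry repo rather than defined inline.
from pathlib import Path

def build_excerpt(service: str, rules: list[dict]) -> str:
    """Render the subset of ecosystem rules that apply to one service."""
    lines = [f"# Ecosystem rules for {service}", ""]
    for rule in rules:
        scope = rule["applies_to"]
        if "*" in scope or service in scope:
            lines.append(f"- **{rule['id']}**: {rule['title']} ({rule['link']})")
    lines.append("")
    lines.append("Full decisions live in the architecture registry; this file is a generated excerpt.")
    return "\n".join(lines)

if __name__ == "__main__":
    # Hypothetical usage with one inline rule.
    sample_rules = [{
        "id": "ADR-0042",
        "title": "Service-to-service calls go through the event bus",
        "applies_to": ["orders-service"],
        "link": "https://example.com/architecture-registry/adr/0042",
    }]
    target = Path("docs/ecosystem-rules.md")
    target.parent.mkdir(parents=True, exist_ok=True)
    target.write_text(build_excerpt("orders-service", sample_rules), encoding="utf-8")
```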
Enforce the most important rules in CI
Documentation alone won’t protect the architecture.
Add a few gates in CI for the non-negotiables, for example:
- forbidden dependencies (“service A must not call service B directly”)
- API contract compatibility checks
- mandatory tracing/log correlation rules
- allowed base images / approved libraries
- data boundary constraints
You don’t need to automate everything.
Automate only the rules that create the most ecosystem damage when violated.
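As an example of the first bullet, a forbidden-dependency gate can be as blunt as a script that scans the codebase for direct references to a service that must only be reached through the approved integration path, and fails the build if it finds any. The service names, patterns, and src/ layout below are assumptions for illustration:

```python
# forbidden_deps_check.py - illustrative CI gate for a "service A must not
# call service B directly" rule. It scans source files for hardcoded
# references to a forbidden service; in practice the patterns would come
# from your ecosystem rules.
import re
import sys
from pathlib import Path

# Hypothetical rule: this repo must never call billing-service over HTTP directly.
FORBIDDEN_PATTERNS = [
    re.compile(r"https?://billing-service"),
    re.compile(r"\bimport\s+billing_service_client\b"),
]
SCAN_SUFFIXES = {".py", ".ts", ".java", ".yaml", ".yml"}

def violations(root: Path) -> list[str]:
    """Return file:line entries that match a forbidden pattern."""
    found = []
    for path in root.rglob("*"):
        if path.suffix not in SCAN_SUFFIXES or not path.is_file():
            continue
        text = path.read_text(encoding="utf-8", errors="ignore")
        for number, line in enumerate(text.splitlines(), start=1):
            if any(pattern.search(line) for pattern in FORBIDDEN_PATTERNS):
                found.append(f"{path}:{number}: {line.strip()}")
    return found

if __name__ == "__main__":
    hits = violations(Path("src"))  # assumed source layout
    for hit in hits:
        print(f"forbidden dependency: {hit}")
    sys.exit(1 if hits else 0)
```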