Start Here

What this site is

I write about designing effective product delivery systems: long-lived teams, minimal dependencies, fast feedback loops, and clear ownership across the product lifecycle. The focus is on the mechanisms that make this work in practice — quality, reliability, platform rails, and measurement that surfaces the right signals to the right people.

The Otto Papers

The Otto Papers is my attempt to turn hard-won experience into a coherent set of models, principles, and patterns for designing product delivery systems that ship and improve through mechanisms. It’s not a universal playbook. It’s a map of what tends to work in specific conditions.

The Thesis

I believe quality, reliability, and platform work should live close to product delivery. Most orgs can ship tickets; they struggle to turn learning into mechanisms. When enabling capabilities are separated from the teams doing the shipping, feedback slows down, ownership gets muddy, and “standards” turn into dependency queues and compliance theater.

The alternative is federated enablement: small, senior, high-context teams aligned to a domain, building rails and mechanisms that make good practice the default. Underneath that is a simple feedback loop I call the Otto Loop; most dysfunction is just that loop breaking somewhere.

I think of the product delivery system as a living system of feedback loops, guardrails, and operating norms. It needs active stewardship: tuning signals, pruning low-value checks, clarifying boundaries, and evolving mechanisms as the product and teams change. The goal isn’t enterprise-wide uniformity; it’s consistent principles with domain-specific implementation.

One implication: quality isn’t fungible labor. In complex domains, the highest-leverage quality work is senior systems thinking and tight collaboration with product and delivery teams, not outsourced test throughput.

The Otto Loop

Change → Signal → Reinforcement → Learning

  • Change: a code/config/process change that could affect outcomes.
  • Signal: trustworthy feedback from tests, observability, user reports, metrics, incidents.
  • Reinforcement: what the system rewards or makes painful (reward, friction, status, escalation paths, priority, funding).
  • Learning: what actually changes afterward (design, tooling, ownership, standards, training).

When the loop is tight and honest, quality improves. When it’s slow or distorted, quality becomes narrative management.

Primitives vs. Planets

I think of platforms in two layers:

  • Enterprise primitives: shared building blocks (identity/access, runners, logging/metrics, baseline guardrails) that should be stable, versioned, and low-surprise.
  • Domain platforms (“planets”): high-context capabilities built close to delivery (golden paths, CI conventions, test data, dashboards) that optimize flow and learning inside a bounded context.

Central teams should provide capabilities and paved roads, not operate as a remote control plane that ships breaking changes by surprise.
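The layering can be sketched as an interface contract: primitives expose a small, versioned surface, and domain platforms compose them into golden paths that pin the versions they depend on. Names here are illustrative, not a real platform API:

```python
from dataclasses import dataclass, field
from typing import Protocol


class Primitive(Protocol):
    """An enterprise primitive: stable, versioned, low-surprise."""
    name: str
    version: str  # consumers pin a version; breaking changes bump it


@dataclass
class LoggingPrimitive:
    """A stand-in for a shared building block like logging/metrics."""
    name: str = "logging"
    version: str = "2.1.0"


@dataclass
class GoldenPath:
    """A domain platform ('planet') composing primitives into a paved road."""
    domain: str
    primitives: list[Primitive] = field(default_factory=list)

    def describe(self) -> str:
        pins = ", ".join(f"{p.name}@{p.version}" for p in self.primitives)
        return f"{self.domain} golden path pins: {pins}"


path = GoldenPath(domain="payments", primitives=[LoggingPrimitive()])
print(path.describe())  # payments golden path pins: logging@2.1.0
```

The design point is the pin: because the planet depends on an explicit primitive version, the central team can evolve the primitive without silently breaking every domain downstream.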

Tracks

  • Quality Engineering: feedback loops, test strategy, test data, release confidence
  • Resilience Engineering: incidents, recovery, observability, learning loops
  • Platform Engineering: self-service paths, paved roads, reducing handoffs and cognitive load
  • Leadership mechanics: decision rights, incentives, conflict, alignment without control

How to Read This

If you’re new here, start with:

  • The Otto Loop
  • Primitives vs. Planets
  • Mechanisms over Heroics

If you’re here for:

  • Shipping confidence → Quality Engineering
  • Incidents and recovery → Resilience Engineering
  • CI/CD, templates, self-service → Platform Engineering
  • Org design, ownership, incentives → Leadership mechanics

What I Optimize For

  • End-to-end flow (lead time, handoff latency)
  • Feedback integrity (trusted signals)
  • Learning rate (turning surprise into mechanisms)
  • Reduced dependency queues (self-service, paved roads)
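As one concrete example of measurement, lead time for a change falls straight out of commit and deploy timestamps. A minimal sketch; the event shape is invented for illustration:

```python
from datetime import datetime, timedelta
from statistics import median


def lead_times(changes: list[dict]) -> list[timedelta]:
    """Lead time per change: first commit to production deploy."""
    return [c["deployed_at"] - c["committed_at"] for c in changes]


# Two hypothetical changes: one shipped same-day, one took a full day.
changes = [
    {"committed_at": datetime(2024, 5, 1, 9, 0),
     "deployed_at": datetime(2024, 5, 1, 15, 0)},
    {"committed_at": datetime(2024, 5, 2, 10, 0),
     "deployed_at": datetime(2024, 5, 3, 10, 0)},
]

times = lead_times(changes)
print(median(times))  # median lead time across changes: 15:00:00
```

Reporting the median (or a percentile) rather than the mean keeps one stuck change from masking the typical experience.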

Scope

Most of what I write is optimized for complex environments: multi-team dependency graphs, regulated domains, and systems where coordination cost is the real bottleneck. I try to be explicit about tradeoffs and failure modes so ideas don’t get cargo-culted. I’ll often describe the conditions where an idea works and where it doesn’t.