Laws That Shape System Behavior

Across distributed systems, control theory, operations research, and large-scale execution, the same constraints surface again and again—often under different names.

This post is a curated set of those laws.

Not prescriptions. Not best practices. Just boundaries.

1. Nyquist Constraint (Observation Bounds Correction)

(Control Theory)

A system cannot correct faster than it can observe. If measurements arrive slowly, corrective actions will lag, overshoot, or persist after conditions have changed.

What it shows up as

  • delayed recovery after traffic spikes
  • oscillations during remediation
  • ghost load long after demand subsides

Why it matters

Many remediation problems are framed as logic failures. In practice, they’re signal-resolution problems.

You can’t fix in seconds what you only measure in minutes.

Where it’s used

  • control systems
  • feedback loops
  • autoscaling, rate limiting, and self-healing designs
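
A toy sketch of the effect (hypothetical controller, all numbers illustrative): a controller that only samples demand every few ticks keeps scaling toward a spike that has already ended.

```python
# Toy feedback loop: the controller corrects toward the demand it last
# *observed*, which refreshes only every `sample_interval` ticks.
def simulate(sample_interval, ticks=60):
    capacity, demand = 10.0, 30.0
    observed = capacity               # last measurement the controller saw
    history = []
    for t in range(ticks):
        if t % sample_interval == 0:
            observed = demand         # a fresh measurement finally arrives
        capacity += 0.5 * (observed - capacity)  # correct toward what was seen
        history.append(capacity)
        demand = 30.0 if t < 30 else 5.0         # the spike ends at t=30
    return history

fast = simulate(sample_interval=1)
slow = simulate(sample_interval=15)
# Slow sampling holds capacity near the stale spike long after demand has
# dropped: ghost load, because correction cannot outrun observation.
print(f"tick 45: fast={fast[45]:.1f}, slow={slow[45]:.1f}")  # fast=5.0, slow=17.5
```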

2. CAP Theorem

(Distributed Systems)

Under a network partition, a system must choose between consistency and availability.

This is not a failure mode. It is a design choice.

What it shows up as

  • dropped requests vs stale reads
  • partial outages that preserve critical paths
  • “why did this user succeed while another failed?”

Why it matters

Choosing availability means users see stale data; choosing consistency means they see errors. Reliability decisions always encode product intent.

Where it’s used

  • replicated databases, caches, and coordination services
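
A minimal sketch of the choice, with hypothetical names: the same read on the same data either succeeds with possibly stale results or fails cleanly, depending on which side of CAP the replica was configured to take.

```python
# One replica's read path during a partition, with the CP-vs-AP choice
# made explicit rather than implicit.
class Replica:
    def __init__(self, prefer_availability: bool):
        self.local = {"user:42": "cached-profile"}  # possibly stale copy
        self.prefer_availability = prefer_availability

    def read(self, key, partitioned: bool):
        if not partitioned:
            return self.local[key]   # normal path: peers reachable
        if self.prefer_availability:
            return self.local[key]   # AP: answer anyway, accept staleness
        raise TimeoutError("CP: refusing a possibly stale read during partition")

ap, cp = Replica(True), Replica(False)
print(ap.read("user:42", partitioned=True))   # succeeds, maybe stale
try:
    cp.read("user:42", partitioned=True)
except TimeoutError as err:
    print(err)                                # fails, stays consistent
```

One user hitting the AP path succeeds while another hitting the CP path fails. Same partition, different product intent.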

3. Tail Amplification Law

(Derived from probability, observed in practice)

End-to-end reliability is dominated by worst-case behavior, not averages.

As systems fan out, tail latency compounds—even when individual components look healthy.

What it shows up as

  • good service dashboards, poor user experience
  • “everything looks fine” incidents
  • surprises at scale

Why it matters

Optimizing local averages rarely fixes global outcomes.
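
The compounding is plain arithmetic. If each backend call lands in the tail with probability p, and a request fans out to n calls and waits on the slowest, the chance the request is slow is 1 - (1 - p)^n:

```python
# Fan-out turns a rare per-call event into a common per-request one,
# because the request waits on its slowest dependency.
p = 0.01  # 1% of individual calls hit tail latency (say, beyond p99)
for n in (1, 10, 100):
    print(f"fan-out {n:>3}: P(request is slow) = {1 - (1 - p)**n:.1%}")
# fan-out   1: 1.0%
# fan-out  10: 9.6%
# fan-out 100: 63.4%
```

Every component reports 99% healthy, yet nearly two thirds of wide requests are slow.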

4. Little’s Law

(Queueing Theory)

L = λ × W

The average number of items in the system (L) equals their arrival rate (λ) times the average time each spends in the system (W).

What it shows up as

  • growing backlogs
  • stuck pipelines
  • “we didn’t increase traffic, but everything slowed down”

Why it matters

Backlog, throughput, and latency are inseparable. You cannot move one without moving the others.
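
Plugging in illustrative numbers makes the coupling concrete:

```python
# Little's Law with made-up numbers: L = λ × W.
arrival_rate = 50      # λ: requests per second
time_in_system = 0.2   # W: seconds each request spends in the system
in_flight = arrival_rate * time_in_system
print(f"L = {in_flight:.0f} requests in flight")  # L = 10

# The identity cuts both ways: if a pool caps L at 10 and latency degrades
# to W = 1s, sustainable throughput falls to λ = L / W = 10 rps, even
# though "we didn't increase traffic".
```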

5. Decision Latency Dominates Mechanism Quality

(Observed across systems and orgs)

Correct mechanisms applied late still fail.

Retries, hedging, escalation, or kill decisions lose value as delay increases.

What it shows up as

  • technically correct responses that arrive too late
  • expensive fixes with limited impact
  • momentum overriding judgment

Why it matters

Timing often matters more than sophistication.
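
A sketch with made-up numbers: the same hedged request recovers less and less of a 300 ms latency budget as the decision to send it slips later.

```python
SLO_MS = 300             # end-to-end latency budget (illustrative)
BACKUP_LATENCY_MS = 120  # typical latency of the hedge request (assumed)

def hedge_value(decision_delay_ms: float) -> float:
    """Fraction of the budget a hedge issued after `decision_delay_ms`
    can still save; past SLO_MS - BACKUP_LATENCY_MS it saves nothing."""
    remaining = SLO_MS - decision_delay_ms - BACKUP_LATENCY_MS
    return max(remaining, 0) / SLO_MS

for delay in (10, 100, 200):
    print(f"decide at {delay:>3} ms: hedge recovers {hedge_value(delay):.0%} of budget")
# decide at  10 ms: hedge recovers 57% of budget
# decide at 100 ms: hedge recovers 27% of budget
# decide at 200 ms: hedge recovers 0% of budget
```

The mechanism never changed. Only the decision latency did.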

6. Priority Inversion Under Load

(Operating Systems + Distributed Systems)

When multiple goals compete for shared resources, lower-priority work can block higher-priority outcomes unless explicitly ordered.

What it shows up as

  • retries overwhelming saturated services
  • latency optimizations reducing availability
  • self-inflicted amplification during incidents

Why it matters

Without explicit priority, systems optimize the wrong thing under stress.
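
A minimal sketch of explicit ordering (the policy is hypothetical): when capacity runs out, retries are shed before first attempts instead of competing with them.

```python
import heapq

FIRST_ATTEMPT, RETRY = 0, 1  # lower value = higher priority
queue = []
for i in range(6):
    kind = RETRY if i % 2 else FIRST_ATTEMPT
    heapq.heappush(queue, (kind, i, f"req-{i}"))  # (priority, seq, id)

CAPACITY = 3  # saturated: only 3 of the 6 queued items can be served
served = [heapq.heappop(queue)[2] for _ in range(CAPACITY)]
shed = [item[2] for item in queue]
print("served:", served)  # ['req-0', 'req-2', 'req-4'], all first attempts
print("shed:  ", shed)    # the retries absorb the overload
```

Without the explicit priority field, the queue drains FIFO and retry pressure crowds out the work that matters.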

7. Control–Execution Coupling Cost

(Systems Architecture)

When decision logic is embedded in execution paths, decisions inherit execution latency and failure modes.

What it shows up as

  • config changes requiring redeployments
  • slow response to dynamic conditions
  • brittle behavior during incidents

Why it matters

Separating deciding from doing increases adaptability.
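
A sketch of the difference, using hypothetical names: the same rate limit, once frozen into the execution path and once read at runtime from a stand-in control plane. Only the second can change mid-incident without a deploy.

```python
# Coupled: changing the limit means shipping and redeploying code.
RATE_LIMIT = 100
def handle_coupled(request):
    return "reject" if request["in_flight"] > RATE_LIMIT else "accept"

# Decoupled: the decision lives in a store the executor merely consults.
control_plane = {"rate_limit": 100}  # stand-in for a real config service

def handle_decoupled(request):
    return "reject" if request["in_flight"] > control_plane["rate_limit"] else "accept"

request = {"in_flight": 50}
print(handle_coupled(request))    # "accept": the old limit is frozen in code
control_plane["rate_limit"] = 25  # operator tightens the limit mid-incident
print(handle_decoupled(request))  # "reject": behavior changed, no redeploy
```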

8. Goodhart’s Law

(Social Systems / Metrics)

When a measure becomes a target, it stops being a good measure.

What it shows up as

  • velocity without value
  • green dashboards masking risk
  • optimization against the metric rather than the outcome

Why it matters

Metrics shape behavior. They must be designed with care.

9. Sunk Cost Bias (Execution Inertia)

(Behavioral Economics)

Past investment distorts future decision-making.

What it shows up as

  • reluctance to stop failing work
  • “we’ve already spent too much to quit”
  • delayed kill decisions

Why it matters

Execution systems need explicit mechanisms to counter human bias.
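
One such mechanism, sketched with illustrative numbers: a kill rule that accepts sunk cost as an input and deliberately ignores it.

```python
def should_continue(expected_value: float, remaining_cost: float,
                    spent_so_far: float) -> bool:
    # `spent_so_far` is accepted but unused: sunk cost is not a signal.
    return expected_value > remaining_cost

# "We've already spent 900" doesn't change the answer:
print(should_continue(expected_value=50, remaining_cost=80, spent_so_far=900))
# False: kill it, regardless of the 900 already gone.
```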