Notes

On prediction intervals, uncertainty quantification, the geometry of regression, and multi-agent AI.

Each note has three levels of depth — intuitive, technical, and advanced — so you can read at whatever level matches your background.

Conformal Prediction

Distribution-free prediction intervals with finite-sample guarantees — from the basic recipe to its fundamental limitations and existing attempts to overcome them.

  1. Your Model Is Confident. Should You Be? Why point predictions are incomplete, what prediction intervals actually are, and why constructing them correctly is harder than it looks.
  2. Conformal Prediction. The split conformal recipe: train, calibrate, quantile, done. A finite-sample coverage guarantee for any model, any distribution (see the sketch after this list).
  3. The Constant-Width Problem. Marginal versus conditional coverage, the impossibility theorem, and why constant-width intervals hide dangerous unevenness.
  4. Heteroscedasticity and Variance Stabilization. When prediction difficulty varies, raw residuals are not comparable. Variance-stabilizing transformations and weighted nonconformity scores.
  5. Adaptive Conformal Methods. CQR, Studentized CP, and Localized CP — what each gets right, what each gets wrong, and the gap that remains.
  6. The Origins of Conformal Prediction. From Kolmogorov’s foundations through Vovk’s transductive framework to the modern split method — how conformal prediction came to be.
  7. Beyond the Split. Full conformal, cross-conformal, and jackknife+ — recovering statistical efficiency without giving up finite-sample coverage.
  8. When Exchangeability Breaks. Distribution shift, non-stationarity, and feedback loops. What happens to conformal guarantees when the real world violates the one assumption we need.
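
The recipe in note 2 ("train, calibrate, quantile, done") is compact enough to show end to end. Here is a minimal Python sketch under stated assumptions: the model is any scikit-learn-style regressor with fit/predict, and the function name, argument names, and miscoverage level alpha are illustrative rather than taken from the notes.

```python
import numpy as np

def split_conformal_interval(model, X_train, y_train, X_cal, y_cal, X_new, alpha=0.1):
    """Split conformal: train on one split, calibrate on the other, return intervals for X_new."""
    # 1. Train: fit the point predictor on the proper training split.
    model.fit(X_train, y_train)

    # 2. Calibrate: absolute residuals on the held-out calibration split
    #    serve as the nonconformity scores.
    scores = np.abs(np.asarray(y_cal) - model.predict(X_cal))

    # 3. Quantile: the finite-sample-corrected (1 - alpha) empirical quantile
    #    (requires NumPy >= 1.22 for the `method` keyword).
    n = len(scores)
    q_level = min(np.ceil((n + 1) * (1 - alpha)) / n, 1.0)
    q_hat = np.quantile(scores, q_level, method="higher")

    # 4. Done: symmetric intervals around the point predictions;
    #    marginal coverage is at least 1 - alpha under exchangeability.
    preds = model.predict(X_new)
    return preds - q_hat, preds + q_hat
```

Because a single q_hat is added and subtracted everywhere, every interval has the same width regardless of how hard the input is to predict — exactly the constant-width behavior that note 3 takes issue with.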