Conformal Prediction

Contribution to Gammerman Festschrift

I am honored to announce a new book chapter in "The Importance of Being Learnable", a volume of essays dedicated to Alexander Gammerman on the occasion of his 80th birthday.

Prof. Gammerman is a foundational figure in uncertainty quantification for AI and a co-inventor of conformal prediction.

Together with Lars Carlsson, Ernst Ahlberg, and James Gammerman, I co-authored the chapter "Application of Confidence and Probabilistic Models to Practical Problems".

Our contribution surveys the transformative impact of the methods Gammerman pioneered, examining their adaptation to real-world challenges in:

  • Drug Discovery: High-stakes decision-making with valid uncertainty.

  • Autonomous Systems: Enhancing safety in self-driving technologies.

  • NLP: Mitigating hallucinations in Large Language Models.

  • Industrial Engineering: Optimizing maintenance schedules and anomaly detection.

The volume recognizes Gammerman's long-lasting impact as a researcher, educator, and mentor, celebrating a career that spans from pioneering mathematical models of plant photoreceptors to advancing the formal treatment of uncertainty in AI.

New Preprint: Conformal Blindness

We typically assume that if a data distribution shifts drastically, our Conformal Test Martingales (CTMs) will explode and warn us. The standard logic is simple: exchangeability implies uniform p-values; therefore, non-uniform p-values imply a break in exchangeability.
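This detection logic can be sketched in a few lines. The snippet below is a minimal illustration, not the construction from the note: the power betting function f(p) = ε·p^(ε−1) and the abrupt mean-shift scenario are my own illustrative choices. Under exchangeability the smoothed conformal p-values are i.i.d. uniform, so the martingale stays flat; after the shift the p-values pile up near zero and the martingale explodes.

```python
import numpy as np

rng = np.random.default_rng(0)

def conformal_p_values(scores, rng):
    """Online smoothed conformal p-values.

    After observation n arrives, its p-value is the randomized rank of its
    conformity score among all scores seen so far.  Under exchangeability
    these p-values are i.i.d. uniform on [0, 1].
    """
    ps = np.empty(len(scores))
    for n, s in enumerate(scores):
        past = scores[: n + 1]          # includes the current observation
        greater = np.sum(past > s)
        ties = np.sum(past == s)        # at least 1 (the observation itself)
        ps[n] = (greater + rng.uniform() * ties) / (n + 1)
    return np.clip(ps, 1e-12, 1.0)

def log10_power_martingale(ps, eps=0.5):
    """Power martingale with fixed bet f(p) = eps * p**(eps - 1), on a
    log10 scale to avoid overflow.  E[f(p)] = 1 for uniform p, so it is a
    genuine test martingale: flat under exchangeability, growing when the
    p-values concentrate near 0."""
    return np.cumsum(np.log10(eps) + (eps - 1) * np.log10(ps))

# Exchangeable stream: the martingale stays near 1 (log10 near 0).
calm = rng.normal(size=1000)
log_m_calm = log10_power_martingale(conformal_p_values(calm, rng))

# Abrupt mean shift halfway through: the martingale explodes.
shifted = np.concatenate([rng.normal(size=500),
                          rng.normal(loc=5.0, size=500)])
log_m_shift = log10_power_martingale(conformal_p_values(shifted, rng))

print(f"final log10 martingale, exchangeable: {log_m_calm[-1]:.1f}")
print(f"final log10 martingale, mean shift:   {log_m_shift[-1]:.1f}")
```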

But what if the p-values stay uniform while the data moves?

In my new note, "Conformal Blindness: A Note on A-Cryptic change-points", I demonstrate that this is possible.

By constructing a specific counter-example using bivariate Gaussian distributions and an oracle conformity measure, I identify a trajectory (an "A-cryptic line") along which the data can shift arbitrarily far without triggering any CTM. In this specific setting, the p-values remain perfectly uniform, and the CTM remains flat.
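A deliberately crude toy analogue of this blind spot (much simpler than the oracle construction in the note, where the blindness is far from obvious): if the conformity measure happens to depend only on the first coordinate of a bivariate Gaussian, then a shift along the second coordinate, however large, leaves the score distribution, and hence the p-values, unchanged.

```python
import numpy as np

rng = np.random.default_rng(1)

def conformal_p_values(scores, rng):
    """Online smoothed conformal p-values (uniform under exchangeability)."""
    ps = np.empty(len(scores))
    for n, s in enumerate(scores):
        past = scores[: n + 1]
        ps[n] = (np.sum(past > s)
                 + rng.uniform() * np.sum(past == s)) / (n + 1)
    return np.clip(ps, 1e-12, 1.0)

def log10_power_martingale(ps, eps=0.5):
    """Power martingale on a log10 scale; flat for uniform p-values."""
    return np.cumsum(np.log10(eps) + (eps - 1) * np.log10(ps))

# Bivariate Gaussian stream whose mean jumps arbitrarily far along the
# second axis at the change-point.
n = 1000
x = rng.normal(size=(n, 2))
x[n // 2:, 1] += 50.0                  # huge shift, but only in coordinate 2

# A conformity measure that depends only on coordinate 1 cannot see it:
# its scores are i.i.d. before and after the change-point.
blind_scores = x[:, 0]
ps = conformal_p_values(blind_scores, rng)
log_m = log10_power_martingale(ps)

print(f"final log10 martingale despite a 50-sigma shift: {log_m[-1]:.1f}")
```

The p-values stay uniform and the martingale stays flat, even though the data has moved fifty standard deviations: the shift lies entirely in the measure's blind spot.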

This finding serves as a proof-of-concept for a fundamental "blind spot" in conformal testing: we only detect shifts that are distinguishable by our specific conformity measure. If the shift aligns with the measure's blind spot, we are flying blind.