My academic journey

From engineering school to a PhD in applied math, forecasting neurodegenerative disease progression.

A strange relationship with math and research

At 20, when I entered École des Ponts, I swore I’d never do math again. The lack of real-world applications was killing me. I seriously considered switching to business through a dual degree. But a business school graduate stopped me cold: “Build technical skills first. You’ll have time for business later.”

So I gave math a second chance. I studied economics at Université Paris Dauphine, then did a Machine Learning internship at Argonne, a U.S. national laboratory. That’s where math clicked. It led me to a master’s in Machine Learning at École Polytechnique.

At that point, I wanted to build tangible things—research wasn’t an option. But it was peak “Big Data / ML hype.” Everyone wanted it, and I was struck by the shallow understanding and lack of mentorship in industry.

So, a few coffees later, I started a PhD with Stanley Durrleman and Stéphanie Allassonnière.

My research: modeling neurodegenerative disease progression

My research focused on modeling how neurodegenerative diseases progress (Alzheimer’s, Parkinson’s, Huntington’s). The objective was threefold:

  • Reconstruct the “average” progression over long time horizons
  • Characterize individual trajectories relative to that average
  • Predict individual evolution up to ~5 years ahead

All of that with multimodal data: cognitive assessments, imaging (MRI, PET), and blood biomarkers. The hard part is that real biomedical data refuses to behave. Progression isn’t linear; its pace changes. Follow-ups are irregular, missing, or incomplete. Each person provides only a small piece of the puzzle, and variability across individuals is massive.

So we built a single modeling backbone that could adapt across diseases and modalities, grounded in (1) geometry (a Riemannian formalism), (2) probabilistic modeling of heterogeneity, and (3) inference under uncertainty and missingness (Markov chain Monte Carlo, Expectation–Maximization).
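To make the idea concrete, here is a minimal sketch of the core intuition: a population-average progression curve, with each individual’s trajectory obtained by reparametrizing time through a personal pace (alpha) and onset shift (tau). The logistic form, parameter names, and default values are illustrative assumptions, not the actual model from the thesis or from Leaspy.

```python
import numpy as np

def average_curve(t, t0=70.0, rate=0.1):
    """Population-average progression: a logistic curve mapping age to a
    normalized biomarker severity in (0, 1). Illustrative, not the thesis model."""
    return 1.0 / (1.0 + np.exp(-rate * (t - t0)))

def individual_curve(t, alpha=1.0, tau=0.0, t0=70.0, rate=0.1):
    """Individual trajectory: the average curve evaluated at a reparametrized
    time psi(t) = alpha * (t - t0 - tau) + t0, where alpha > 1 means faster
    progression and tau < 0 means earlier onset (hypothetical parametrization)."""
    psi = alpha * (t - t0 - tau) + t0
    return average_curve(psi, t0=t0, rate=rate)

# A patient progressing 1.5x faster, with onset 3 years earlier than average:
ages = np.array([68.0, 70.0, 72.0, 74.0])
severity = individual_curve(ages, alpha=1.5, tau=-3.0)
```

In this toy parametrization, the two individual parameters (alpha, tau) are exactly the kind of low-dimensional description of heterogeneity that the probabilistic layer places priors on, and that inference must recover from sparse, irregular visits.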

My main academic work lives in my thesis and papers.

Making research that doesn’t evaporate after publication

Early in the PhD, I noticed something wasteful: every new student restarted from scratch (concepts, code, tooling). You learn a lot, but collectively you lose years.

So alongside papers, I built Leaspy, a Python package with a simple intention: give future PhDs and engineers a foundation they could extend with new models, new cohorts, new diseases, new analyses. It became a base for much follow-up work.

At its best, that mindset supported a large-scale prediction paper published in Nature Communications, showing our models could forecast Alzheimer’s progression up to five years ahead.

The same “make it last” instinct showed up elsewhere too: a Medium post about Leaspy, teaching, Digital-Brain.org, the Disease Progression Modeling website, and collaborations with academic and pharma teams to analyze clinical trial data (quantifying drug effects, stratifying populations).

It’s also what pushed me toward building my first company.