About

I am a postdoctoral research scholar in the Computer Science Department at Columbia University, working with Elias Bareinboim. Before joining Columbia, I completed my PhD in the Seminar for Statistics at ETH Zürich under the supervision of Nicolai Meinshausen. My research interests lie in applying causal inference to trustworthy data science, spanning both its statistical and computational aspects. I have previously worked on fair machine learning and explainability, and more recently on algorithmic recourse. I am also very interested in applications, particularly in medicine: I previously worked on epidemiological questions of causation in intensive care unit (ICU) research, including applications of AI tools in the ICU.

Office address: Computer Science Department, Columbia University
CSB 490
500 West 120th Street, New York, NY 10027
Phone: +1 646-919-9628

Research and CV

Curriculum Vitae: CV

Fields of Interest
  • Trustworthy Data Science
  • Fair Machine Learning
  • Causal Inference
  • Epidemiology in Intensive Care Medicine
  • Open-Source Statistical Software
Before joining Columbia, I obtained my PhD in Statistics from ETH Zurich, and my BMath and MMath from the University of Cambridge.

Tutorials

  • Plečko, D. and Bareinboim, E., 2022. Causal Fairness Analysis Tutorial. International Conference on Machine Learning, ICML 2022.
  • Bareinboim, E., Plečko, D. and Zhang, J., 2021. Causal Fairness Analysis Tutorial. ACM Conference on Fairness, Accountability, and Transparency (FAccT), March 2021.
Papers

Teaching

I thoroughly enjoy teaching. In Spring 2023, I offered a 4-week course (8 lectures in total) on fair machine learning, within the Causal Inference II course taught by Elias Bareinboim. Below you can find the course outline and all the necessary materials, including slides, lecture videos, and vignettes with software examples.

Fairness Course at Columbia

Lectures 1-2 (Week 1)

Outline:
(L1) Theory of decomposing variations within the total variation fairness measure TV_{x₀,x₁}(y). Explaining the Fundamental Problem of Causal Fairness Analysis. Introducing contrasts and the structural basis expansion for causal fairness measures. Introducing the Explainability Plane. Introducing the Standard Fairness Model, a simplified cluster causal diagram.
(L2) Measures in the TV family. Using contrasts in practice to measure discrimination. The structure of the TV family. Organizing the existing causal fairness measures into the Fairness Map.

Slides: Lecture 1, Lecture 2

Video: Lecture 1, Lecture 2
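To fix ideas, the total variation measure from Lectures 1-2 can be written down concretely. The following is a sketch in standard causal-inference notation (Pearl's mediation conventions), not the course's exact structural basis expansion, which further splits the disparity into direct, indirect, and spurious contributions:

```latex
% TV measure: the observed disparity between groups x_0 and x_1
\mathrm{TV}_{x_0, x_1}(y) = P(y \mid x_1) - P(y \mid x_0)

% Total effect: the corresponding interventional contrast
\mathrm{TE}_{x_0, x_1}(y) = P(y \mid do(x_1)) - P(y \mid do(x_0))

% Standard mediation decomposition of the total effect into
% natural direct and indirect effects:
\mathrm{TE}_{x_0, x_1}(y) = \mathrm{NDE}_{x_0, x_1}(y) - \mathrm{NIE}_{x_1, x_0}(y)
```

The exact signs and conditioning used in the course follow the lecture slides.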

Lectures 3-4 (Week 2)

Outline:

(L3) Identification of causal fairness measures from observational data. Estimation of causal fairness measures based on doubly robust methods and double/debiased machine learning.
(L4) Relationship to key existing notions in the fairness literature. Understanding where counterfactual fairness falls in the Fairness Map. Implications of causal fairness for the Fairness Through Awareness framework. Connecting the notions of predictive parity and calibration with causal fairness.

Slides: Lectures 3+4

Video: Lectures 3+4
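The doubly robust estimation mentioned in (L3) can be illustrated on a toy example. Below is a minimal AIPW (augmented inverse probability weighting) sketch, assuming a single binary confounder and simple stratified plug-in nuisance estimates; the course's actual estimators for causal fairness measures are more involved than this.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20_000

# Synthetic data: binary confounder Z, binary treatment X, outcome Y.
Z = rng.binomial(1, 0.5, n)
X = rng.binomial(1, 0.3 + 0.4 * Z)           # P(X=1 | Z) depends on Z
Y = 1.0 * X + 2.0 * Z + rng.normal(0, 1, n)  # true effect of X is 1.0

# Plug-in nuisance estimates, stratified on the single binary Z.
e = np.array([X[Z == z].mean() for z in (0, 1)])[Z]              # propensity e(Z)
mu0 = np.array([Y[(Z == z) & (X == 0)].mean() for z in (0, 1)])[Z]
mu1 = np.array([Y[(Z == z) & (X == 1)].mean() for z in (0, 1)])[Z]

# AIPW estimator of E[Y(1)] - E[Y(0)]: the outcome-model contrast plus
# inverse-probability-weighted residual corrections.
ate = np.mean(
    mu1 - mu0
    + X * (Y - mu1) / e
    - (1 - X) * (Y - mu0) / (1 - e)
)
print(round(ate, 2))  # close to the true effect of 1.0
```

The estimator stays consistent if either the propensity model or the outcome model is correct, which is what makes it a natural building block for the debiased machine-learning estimators discussed in the lecture.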

Lectures 5-6 (Week 3)

Outline:

(L5) Introducing the three key tasks of causal fairness analysis: (1) bias detection; (2) fair prediction; (3) fair decision-making. Discussing Task 1 of bias detection in depth with applications, including the 2018 United States Government Census dataset, the COMPAS dataset, and other synthetic examples.
(L6) Discussing Task 2 of fair prediction. Proving the Fair Prediction Theorem, which demonstrates why statistical notions of fairness are not sufficient in general.

Slides: Lecture 5, Lecture 6

Video: Lecture 5, Lecture 6

Vignettes: Census Task 1 Vignette, COMPAS Task 1 Vignette, COMPAS Task 3 Vignette
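Task 1 (bias detection) starts from the observed disparity itself. A minimal sketch of computing the TV measure on synthetic data (hypothetical variable names; the vignettes above work with the real Census and COMPAS data):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50_000

# Hypothetical synthetic data: protected attribute x in {0, 1} and a
# binary outcome y whose rate differs across the two groups.
x = rng.binomial(1, 0.5, n)
y = rng.binomial(1, np.where(x == 1, 0.35, 0.25))

# TV measure: difference in outcome rates between the groups,
# TV_{x0,x1}(y) = P(y | x1) - P(y | x0).
tv = y[x == 1].mean() - y[x == 0].mean()
print(round(tv, 3))  # close to the simulated gap of 0.10
```

Decomposing this observed disparity into direct, indirect, and spurious contributions is what the machinery from Lectures 1-2 provides.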

Lectures 7-8 (Week 4)

Outline:

(L7) Moving beyond the Standard Fairness Model. Discussing how to extend causal fairness analysis to arbitrary causal diagrams. Discussing variable-specific and path-specific notions of indirect effects. Discussing the identifiability and estimation of variable-specific indirect effects.
(L8) Discussing decompositions of spurious effects. Introducing the partial abduction and prediction procedure. Introducing partially abducted submodels. Proving variable-specific spurious decomposition results for both Markovian and Semi-Markovian causal models.

Slides: Lectures 7+8

Video: Lectures 7+8
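The direct and indirect effects underlying (L7) can be illustrated by simulating nested counterfactuals in a small linear SCM (a hypothetical model for illustration, not one from the course): X → W → Y with a direct X → Y edge. In a linear model the noise cancels, so the simulation recovers the closed forms b and a·c exactly.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100_000
a, b, c = 0.5, 1.0, 2.0  # hypothetical structural coefficients

# Exogenous noise, shared across counterfactual worlds.
u_w = rng.normal(0, 1, n)
u_y = rng.normal(0, 1, n)

def f_w(x):     # mediator mechanism W := a*X + U_W
    return a * x + u_w

def f_y(x, w):  # outcome mechanism Y := b*X + c*W + U_Y
    return b * x + c * w + u_y

x0, x1 = 0.0, 1.0

# Natural direct effect: change X along the direct edge while the
# mediator is held at its value under X = x0.
nde = np.mean(f_y(x1, f_w(x0)) - f_y(x0, f_w(x0)))

# Natural indirect effect: hold X at x0 and let only W respond to x1.
nie = np.mean(f_y(x0, f_w(x1)) - f_y(x0, f_w(x0)))

# Total effect, which equals NDE + NIE in this linear model.
te = np.mean(f_y(x1, f_w(x1)) - f_y(x0, f_w(x0)))

print(round(nde, 2), round(nie, 2), round(te, 2))  # 1.0 1.0 2.0
```

Evaluating both mechanisms on the same noise draws is what makes these nested counterfactuals well-defined; the lectures extend this idea to variable-specific effects in arbitrary diagrams.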


ETH Zurich

I was also involved in teaching during my PhD at ETH Zurich. Below is the list of courses for which I was a course assistant:

Other

I review for the Journal of Machine Learning Research (JMLR), the Journal of the American Statistical Association (JASA), and machine learning conferences. I also proudly support the research of the Swiss Sarcoma Network.