Aditya Mittal

I am currently a Master's student in Computer Science at UC Irvine, and my research interests are in algorithmic fairness in machine learning. I am also open to exploring new research directions!

I completed my undergraduate degree in Statistics at UC Davis, where I was advised by Norman Matloff for my senior thesis. My work received an Honorable Mention for the CRA Outstanding Undergraduate Researcher Award (2025).

My contact details are listed below:

Papers

TowerDebias: A Novel Unfairness Removal Method Based on the Tower Property

arXiv | DOI

A. Mittal and N. Matloff (2025). "TowerDebias: A Novel Unfairness Removal Method Based on the Tower Property." arXiv:2411.08297
Status: Under review.

Abstract

Decision-making processes have increasingly come to rely on sophisticated machine learning tools, raising critical concerns about the fairness of their predictions with respect to sensitive groups. The widespread adoption of commercial "black-box" models necessitates careful consideration of their legal and ethical implications for consumers. When users interact with such black-box models, a key challenge arises: how can the influence of sensitive attributes, such as race or gender, be mitigated or removed from their predictions? We propose towerDebias (tDB), a novel post-processing method designed to reduce the influence of sensitive attributes in predictions made by black-box models. Our tDB approach leverages the Tower Property from probability theory to improve prediction fairness without requiring retraining of the original model. This method is highly versatile, as it requires no prior knowledge of the original algorithm's internal structure and is adaptable to a diverse range of applications. We present a formal fairness improvement theorem for tDB and showcase its effectiveness in both regression and classification tasks using multiple real-world datasets.
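
To give a rough sense of how the method works in practice, here is a minimal sketch of the tower-property idea in R. It is an illustration under assumed inputs, not the towerDebias package's actual interface: the helper name tower_debias_predict, its arguments, and the use of plain k-nearest-neighbor averaging are all assumptions made for this example. The debiased prediction for a new case estimates E[f(X, S) | X] by averaging the black-box predictions of the k nearest reference points in the non-sensitive features X alone.

# Minimal sketch (hypothetical helper, not the package's API).
# preds_ref holds black-box predictions f(X, S) on reference data; the
# debiased prediction for x_new averages them over the k nearest neighbors
# in the non-sensitive features X, estimating E[ f(X, S) | X = x_new ].
tower_debias_predict <- function(X_ref, preds_ref, x_new, k = 25) {
  X_ref <- as.matrix(X_ref)                     # non-sensitive features, assumed numeric/scaled
  x_new <- as.numeric(x_new)
  d <- sqrt(rowSums(sweep(X_ref, 2, x_new)^2))  # distances use X only; S never enters
  nbrs <- order(d)[seq_len(k)]                  # indices of the k closest reference points
  mean(preds_ref[nbrs])                         # empirical estimate of the conditional mean
}

Here preds_ref could come from predict() applied to any fitted model, which is what makes the post-processing viewpoint model-agnostic: no retraining and no access to the model's internals are required.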

Code

GitHub Repository:

dsld: A Socially Relevant Tool for Teaching Statistics

arXiv | DOI

A. Mittal, T. Abdullah, A. Ashok, B. Zarate Estrada, S. Martha, B. Ouattara, J. Tran, and N. Matloff (2025). "dsld: A Socially Relevant Tool for Teaching Statistics." arXiv:2411.04228
Status: Under review.

Abstract

The growing influence of data science in statistics education requires tools that make key concepts accessible through real-world applications. We introduce "Data Science Looks At Discrimination" (dsld), an R package that provides a comprehensive set of analytical and graphical methods for examining issues of discrimination involving attributes such as race, gender, and age. By positioning fairness analysis as a teaching tool, the package enables instructors to demonstrate confounder effects, model bias, and related topics through applied examples. An accompanying 80-page Quarto book guides students and legal professionals in understanding these principles and applying them to real data. We describe the implementation of the package functions and illustrate their use with examples. Python interfaces are also available.
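
As a small illustration of the kind of confounder demonstration the package is built around, the sketch below uses base R rather than dsld's own functions, together with a hypothetical wages data frame whose columns wage, gender, and occupation are assumptions made for this example. It compares a group gap before and after adjusting for a possible confounder.

# Illustrative sketch in base R (not dsld's own functions); `wages` is a
# hypothetical data frame with columns wage, gender, and occupation.
fit_raw <- lm(wage ~ gender, data = wages)               # unadjusted gender gap
fit_adj <- lm(wage ~ gender + occupation, data = wages)  # gap after adjusting for occupation
summary(fit_raw)$coefficients                            # gender coefficient, no adjustment
summary(fit_adj)$coefficients                            # gender coefficient with the confounder included

How much the gender coefficient moves between the two fits is the kind of confounder effect the package asks students to reason about.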

Code

GitHub Repository:
Quarto Book:

Talks

A Mathematical Approach to Algorithmic Fairness December 2024

Conference: Directed Reading Program - UC Davis Department of Mathematics
Session: Poster Session
Location: Davis, California, USA
Date: December 2024

Description

Presented mathematical foundations of algorithmic fairness, exploring theoretical approaches to ensure machine learning models make unbiased predictions across different demographic groups.

Materials

Poster:

Discrimination Analysis in a Box: an R Package August 2024

Conference: Joint Statistical Meetings (JSM) - 2024
Session: Rethinking Statistics and Data Science Education: Incorporating Changing Technology and Encouraging Critical Thinking
Location: Portland, Oregon, USA
Date: August 2024

Description

Presented our work on the dsld R package for discrimination analysis in educational settings. The talk focused on how the package can be used to teach statistical concepts through real-world examples of fairness and bias analysis.

Materials

Slides:
Abstract:

TowerDebias: Eliminating the Effect of Sensitive Variables from Black-Box Machine Learning Models April 2024

Conference: Undergraduate Research, Scholarship, and Creative Activities (URSCA) Conference
Session: Oral Session
Location: Davis, California, USA
Date: April 26-27, 2024

Description

Presented our novel TowerDebias method for removing sensitive variable effects from black-box machine learning models, demonstrating its effectiveness in improving fairness without model retraining.

Materials

Slides:

Discrimination Analysis in a Box: a Machine Learning Package for Teaching December 2023

Conference: UC Davis Scholarship of Teaching and Learning
Session: Poster Session
Location: Davis, California, USA
Date: December 1, 2023

Description

Presented our dsld package for discrimination analysis in educational settings, showcasing how it can be used to teach statistical concepts through real-world fairness examples.

Materials

Poster: