Himabindu "Hima" Lakkaraju is an Indian-American computer scientist who works on machine learning, artificial intelligence, algorithmic bias, and AI accountability.
She also investigates the practical and ethical implications of deploying machine learning models in high-stakes domains such as healthcare, criminal justice, business, and education.
Lakkaraju co-founded the Trustworthy ML Initiative (TrustML) to lower entry barriers and promote research on interpretability, fairness, privacy, and robustness of machine learning models.
Her doctoral research focused on developing interpretable and fair machine learning models that can complement human decision making in domains such as healthcare, criminal justice, and education.[12] During her PhD, Lakkaraju spent a summer working as a research fellow in the Data Science for Social Good program at the University of Chicago.
As part of this program, she collaborated with Rayid Ghani to develop machine learning models that identify at-risk students and prescribe appropriate interventions.[17][18] She co-authored a study demonstrating that when machine learning models are used to assist in making bail decisions, they can help reduce crime rates by up to 24.8% without exacerbating racial disparities.
She initiated the study of adaptive and interactive post hoc explanations,[22][23] which can be used to explain the behavior of complex machine learning models in a manner tailored to user preferences.