UVa AIML Seminar
The AI and Machine Learning Seminar @ UVa

Data Contribution Estimation for Large Language Models & Debiasing Can Be Complementary


Stephanie Schoch & Hannah Chen
Department of Computer Science, University of Virginia

Time: 2023-11-15, 12:00 - 13:00 ET
Location: Rice 540 and Zoom

Abstracts

Stephanie Schoch

Title: Data Contribution Estimation for Large Language Models

Abstract: Data contribution estimation (DCE) quantifies the value of individual training examples, enabling model improvement through data improvement. While benchmark DCE evaluation tasks demonstrate its efficacy across many ML domains, computational costs have limited the application of DCE to large language models (LLMs). In this talk, we will examine 1) the ways LLMs stand to benefit from DCE, 2) the challenges of scaling existing DCE methods to LLMs, and 3) recent work addressing these challenges.
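To make the scaling challenge concrete, below is a minimal sketch of leave-one-out (LOO) valuation, a standard DCE baseline; this is an illustration only, not the method presented in the talk, and the scikit-learn classifier and synthetic data are assumptions. Each training point is scored by how much validation utility drops when it is removed, which already requires one retraining per point; Shapley-style estimators average over many subsets and cost even more, which is why naive DCE does not scale to LLMs.

    # Illustrative leave-one-out (LOO) data valuation -- a standard DCE
    # baseline, not necessarily the method discussed in the talk.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def utility(X_tr, y_tr, X_val, y_val):
        """Validation accuracy of a model trained on the given subset."""
        return LogisticRegression(max_iter=1000).fit(X_tr, y_tr).score(X_val, y_val)

    def loo_values(X_tr, y_tr, X_val, y_val):
        """Score each training point by the utility drop when it is removed.
        Cost: one full retraining per point, which is what makes naive
        DCE infeasible at LLM scale."""
        full = utility(X_tr, y_tr, X_val, y_val)
        idx = np.arange(len(X_tr))
        return np.array([full - utility(X_tr[idx != i], y_tr[idx != i], X_val, y_val)
                         for i in idx])

    # Toy usage on synthetic data (hypothetical, for illustration only).
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 5))
    y = (X[:, 0] > 0).astype(int)
    print(loo_values(X[:150], y[:150], X[150:], y[150:])[:5])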

Hannah Chen

Title: Debiasing Can Be Complementary

Abstract: Statistical fairness stipulates equivalent outcomes for every protected group, whereas causal fairness prescribes that a model make the same prediction for an individual regardless of their protected characteristics. While recent work has shown that counterfactual data augmentation is effective at reducing bias in NLP models, the debiased models are often evaluated only on metrics closely tied to the causal fairness notion employed by the debiasing method. This calls into question the effectiveness of the debiasing methods and the reliability of these bias evaluations. In this work, we consider bias metrics proposed under both statistical and causal fairness via a case study on gender bias in NLP models. We find that statistical and causal debiasing methods are most effective when evaluated on a bias metric grounded in the same fairness notion, and that causal debiasing may not improve statistical fairness. We demonstrate that combining statistical and causal debiasing techniques achieves the best overall performance on both bias metrics. Our results demonstrate the need for caution in how bias metrics are used in evaluation.
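As an illustration of the two fairness notions and the debiasing approach described above, here is a minimal sketch of counterfactual data augmentation together with one metric from each family; the tiny swap list, the predict() interface, and the specific metrics are simplifying assumptions, not the actual setup used in the work.

    # Sketch: counterfactual data augmentation (CDA) and one bias metric from
    # each fairness family. Swap list, classifier interface, and metric
    # choices are illustrative assumptions.
    import re

    # Tiny illustrative swap list; real CDA uses much larger curated lists.
    # Pronouns like his/her are grammatically ambiguous and need POS-aware
    # handling, so they are omitted here.
    SWAPS = {"he": "she", "she": "he", "man": "woman", "woman": "man",
             "father": "mother", "mother": "father",
             "son": "daughter", "daughter": "son"}
    PATTERN = re.compile(r"\b(" + "|".join(SWAPS) + r")\b", re.IGNORECASE)

    def counterfactual(text):
        """Return the gender-swapped counterfactual of a sentence."""
        def swap(m):
            r = SWAPS[m.group(0).lower()]
            return r.capitalize() if m.group(0)[0].isupper() else r
        return PATTERN.sub(swap, text)

    def cda_augment(dataset):
        """Causal debiasing: train on each (text, label) example plus its
        counterfactual with the same label."""
        return [(t, y) for text, y in dataset for t in (text, counterfactual(text))]

    def counterfactual_flip_rate(predict, texts):
        """Causal-fairness metric: fraction of inputs whose prediction
        changes under the counterfactual intervention (lower is fairer)."""
        return sum(predict(t) != predict(counterfactual(t)) for t in texts) / len(texts)

    def statistical_parity_gap(predict, group_a, group_b):
        """Statistical-fairness metric: gap in positive-prediction rates
        between two protected groups (lower is fairer)."""
        rate = lambda ts: sum(predict(t) for t in ts) / len(ts)
        return abs(rate(group_a) - rate(group_b))

Because the flip rate only checks invariance under the intervention while the parity gap only compares group-level prediction rates, a model can score well on one and poorly on the other, which is the evaluation gap examined in the talk.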

Bios

Stephanie is a Computer Science PhD candidate at the University of Virginia, advised by Dr. Yangfeng Ji in the Information and Language Processing (ILP) Lab. Her research interests lie in data quality analysis and selection, with a particular focus on natural language data.

Hannah is a Computer Science PhD candidate at the University of Virginia, advised by Prof. David Evans and Prof. Yangfeng Ji. Her research primarily focuses on auditing NLP models. She has previously worked on adversarial NLP and privacy attacks on NLP models, and is currently working on bias/fairness assessment for NLP models.