UVa AIML Seminar
The AI and Machine Learning Seminar @ UVa

Generalizing Models to Unseen Domains and Open Environments


Sheng Li
UVA, Data Science

Time: 2024-11-20, 12:00 - 13:00 ET
Location: Rice 540 and Zoom

Abstract: Domain generalization (DG) has attracted increasing attention in recent years, as it seeks to improve the generalization ability of visual recognition models to unseen target domains. In this talk, I will first introduce the background of domain generalization and then present our recent work on generalizing visual recognition models to unseen domains and open environments. Three new problem settings will be discussed: semi-supervised single domain generalization, open-set single domain generalization, and text-driven domain generalization. I will introduce our solutions to each problem and discuss future research directions on this topic.

Bio: Sheng Li is a Quantitative Foundation Associate Professor of Data Science and an Associate Professor of Computer Science (by courtesy) at the University of Virginia (UVA). He was an Assistant Professor of Computer Science at the University of Georgia from 2018 to 2022, and a Data Scientist at Adobe Research from 2017 to 2018. He received his PhD degree in Computer Engineering from Northeastern University in 2017, and his master's and bachelor's degrees from the School of Computer Science at Nanjing University of Posts and Telecommunications in 2012 and 2010, respectively. His recent research interests include Trustworthy AI, Causal Inference, Large Foundation Models, and Vision-Language Modeling. He has published over 170 papers and has received over 10 research awards, including the INNS Aharon Katzir Young Investigator Award, Adobe Data Science Research Award, Cisco Faculty Research Award, and SDM Best Paper Award. He currently serves as an Associate Editor for six journals, including Transactions on Machine Learning Research (TMLR) and IEEE Transactions on Neural Networks and Learning Systems (TNNLS), and serves as an Area Chair for IJCAI, NeurIPS, ICML, and ICLR.