Improving Personalization and Consistency of Large Foundation Models
Jindong Wang
William & Mary, Department of Data Science
Time: 2026-03-11, 12:00 - 13:00 ET
Location: Rice 540 and Zoom
Abstract: Foundation models, including large language models and multimodal models, are becoming indispensable in how we live, work, and communicate. Our research aims to improve these models by enhancing their personalization and consistency. First, although global alignment has significantly improved safety, it remains unclear whether and how foundation models can be aligned to support individual safety. Second, large multimodal models can both generate and understand content, but these capabilities can conflict, leading to inconsistencies such as “what I can understand, I cannot create,” and vice versa. In this talk, I will share our recent work on personalization and consistency in foundation models, with the goal of drawing the community’s attention to these two critical concepts.
Bio: Dr. Jindong Wang is an Assistant Professor in the Department of Data Science at William & Mary. He is also a faculty member of the Future of Life Institute. Previously, he was a Senior Researcher at Microsoft Research Asia from 2019 to 2024. His research interests span machine learning, large foundation models, and generative AI for social science. He is among the World’s Top 2% Highly Cited Scientists and the Most Influential AI Scholars. He is an associate editor of IEEE Transactions on Neural Networks and Learning Systems (TNNLS), a guest editor for ACM Transactions on Intelligent Systems and Technology (TIST), an area chair for ICML, NeurIPS, ICLR, KDD, ACL, ACMMM, and ACML, and an SPC member for IJCAI and AAAI. He has published over 60 papers at top-tier venues (23,000+ citations, H-index 54). His research is supported by an Amazon Research Award, a Google Research Award, an NVIDIA Academic Research Grant, an AMD University Program AI & HPC Award, a Microsoft Accelerate Foundation Model Research Award, and a William & Mary Faculty Research Award. His research has been integrated into Microsoft health products, reducing token consumption by 15%, and into quantitative finance applications with improved prediction accuracy. His work has been covered by Forbes, MIT Technology Review, and other international media. He has received best paper awards at several conferences and workshops. He published the book Introduction to Transfer Learning and has given tutorials at IJCAI’22, WSDM’23, KDD’23, AAAI’24, AAAI’25, and CVPR’25.