Deep Neural Networks To Analyze Deformable Shapes From Images
The past decade has witnessed the success of deep neural networks (DNNs) in image processing and computer vision, with groundbreaking performance on specialized tasks such as image classification, segmentation, object detection, and tracking. Inspired by the human visual system, DNN models have been widely believed to learn features from images including texture, color, and the shape of objects. However, recent studies have revealed that DNNs have a strong bias towards recognizing image texture rather than shape. The inability of DNNs to analyze shape as humans do limits their power and potential practical impact in applications where shape is important. In this talk, I will present our recent research on teaching DNNs to learn deformable shapes from images, and show how this approach can be leveraged to improve current deep learning (DL) algorithms in image analysis and computer vision. Specifically, I will introduce a new family of low-dimensional shape representation learning algorithms based on fine-grained deformations derived from images. I will also showcase the practical impact of this research in real-world clinical applications, such as the detection of neurodegenerative diseases (e.g., Alzheimer’s disease) and real-time image-guided navigation systems for cardiac resynchronization therapy.
Professor Zhang completed her PhD in computer science at the University of Utah. She was a postdoctoral associate in the Computer Science and Artificial Intelligence Laboratory (CSAIL) at the Massachusetts Institute of Technology. Professor Zhang received the Medical Image Computing and Computer Assisted Intervention (MICCAI) Young Scientist Award in 2014 and was a runner-up for the same award in 2016. She has been a member of the MICCAI Society and has served as an area chair for MICCAI since 2018. She is also an active area chair for the International Symposium on Biomedical Imaging (ISBI) and for Medical Imaging with Deep Learning (MIDL).