Abstract:

Convolutional Neural Networks (CNNs) have become increasingly important in the field of computer vision. Commonly used CNNs typically have high computational complexity and contain many parameters. Therefore, it is infeasible to directly deploy these high-capacity networks on edge devices, such as mobile phones and embedded equipment, which have limited computing power and memory.

To alleviate this problem, lightweight neural networks with fewer parameters have been developed, e.g., MobileNet and ShuffleNet. However, a performance gap inevitably remains between high-capacity networks and lightweight networks.

To narrow this gap, knowledge distillation can be applied. Knowledge distillation leverages the hidden information in a high-capacity network (the teacher) to guide the optimization of a lightweight network (the student) and thereby improve its performance. In this seminar, I will give a brief review of existing distillation methods. In addition, I will introduce two of our works on improving the generalization and effectiveness of feature distillation via resistance training and projector ensemble.
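For readers unfamiliar with the idea, the sketch below illustrates the classic logit-based distillation loss of Hinton et al. (2015), a minimal example of the teacher-student setup; it is not the feature-distillation methods presented in this seminar. The function name distillation_loss and the hyperparameters T (temperature) and alpha (loss weight) are illustrative assumptions.

    import torch.nn.functional as F

    def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.9):
        # Soften both output distributions with temperature T.
        soft_targets = F.softmax(teacher_logits / T, dim=1)
        log_probs = F.log_softmax(student_logits / T, dim=1)
        # KL divergence between the softened teacher and student outputs;
        # the T*T factor keeps gradients on a comparable scale across temperatures.
        kd = F.kl_div(log_probs, soft_targets, reduction="batchmean") * (T * T)
        # Standard cross-entropy against the ground-truth labels.
        ce = F.cross_entropy(student_logits, labels)
        return alpha * kd + (1 - alpha) * ce

During training, the teacher's parameters stay frozen and only the student is updated by minimizing this combined loss over labelled data.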

Bio:

Yudong Chen received his bachelor’s and master’s degrees from Shenzhen University. He is currently working towards his PhD degree under the supervision of Dr Sen Wang and Dr Jiajun Liu at the University of Queensland, Australia. His research interests include network compression and knowledge distillation.

Host:

Dr Sen Wang

This session will be conducted via Zoom: https://uqz.zoom.us/j/89362232168

About Data Science Seminar

This seminar series is hosted by EECS Data Science.

Venue

Zoom