Team
Principal Investigator
Song Han is an Associate Professor at MIT EECS. He received his PhD from Stanford University. His research focuses on efficient deep learning computing. He proposed the “deep compression” technique, which can reduce neural network size by an order of magnitude without losing accuracy, and the hardware implementation “Efficient Inference Engine,” which first exploited pruning and weight sparsity in deep learning accelerators. His team’s work on hardware-aware neural architecture search (ProxylessNAS, Once-for-All Network (OFA), MCUNet) was highlighted by MIT News, Wired, Qualcomm News, VentureBeat, and IEEE Spectrum, was integrated into PyTorch and AutoGluon, received six low-power computer vision contest awards at flagship AI conferences, and set a world record in the open division of the MLPerf inference benchmark (1.078M Img/s). Song received Best Paper awards at ICLR’16 and FPGA’17, the Amazon Machine Learning Research Award, the SONY Faculty Award, the Facebook Faculty Award, and the NVIDIA Academic Partnership Award. He was named one of the “35 Innovators Under 35” by MIT Technology Review for his contribution to the “deep compression” technique that “lets powerful artificial intelligence (AI) programs run more efficiently on low-power mobile devices.” He received the NSF CAREER Award for “efficient algorithms and hardware for accelerated machine learning” and the IEEE “AI’s 10 to Watch: The Future of AI” award.
Team
Han Cai: Ph.D. student, efficient deep learning and AutoML; award recipient in three rounds of the Low-Power Computer Vision Challenge and recipient of the Qualcomm Innovation Fellowship.
Shreya Chaudhary: undergraduate student, quantum computer systems and ML. She is interested in quantum computing and is working on TorchQuantum-related projects.
Wei-Ming Chen: postdoctoral researcher, efficient deep learning on edge devices and embedded systems.
Emelie Eldracher: MEng student, TinyML and on-device learning for wearable devices and sports. She presented TinyML at TEDxMIT.
Catherine Liang: Ph.D. student, quantum computing and machine learning. She is interested in quantum control, quantum machine learning, quantum error correction, quantum algorithms, etc.
Ji Lin: Ph.D. student, efficient and hardware-friendly machine learning and its applications. He is an NVIDIA Fellowship finalist and a Qualcomm Innovation Fellowship recipient.
Yujun Lin: Ph.D. student, efficient deep learning and its accelerator architecture co-design. He is the recipient of the Qualcomm Innovation Fellowship.
Zhijian Liu: Ph.D. student. His work on efficient 3D perception won first place in the NuScenes LiDAR segmentation challenge and was selected as Project of the Month by the NVIDIA Jetson Community. His research is supported by the Qualcomm Innovation Fellowship and has been featured by media outlets such as MIT News and NVIDIA News.
Anne Ouyang: undergraduate student; she develops high-performance kernels for TinyML.
Kevin Shao: undergraduate student, efficient machine learning for autonomous driving.
Haotian Tang: Ph.D. student, efficient machine learning algorithm and system co-design. He is the recipient of outstanding reviewer awards at ICLR, ICML, and NeurIPS.
Guangxuan Xiao: Ph.D. student. His research focuses on efficient algorithms and systems for deep learning, especially large foundation models. He has already made notable contributions to the field: his projects SmoothQuant and Offsite-Tuning have received ~500 stars. His work has real-world impact: SmoothQuant has been integrated into NVIDIA’s FasterTransformer, and engineers from Hugging Face and Intel are working on integrating SmoothQuant into their libraries.
Hanrui Wang: Ph.D. student, quantum computer architecture and machine learning. His work on robust quantum ansatz search won the best poster award at the NSF AI Institute. His efficient Transformer work won 1st place in the NeurIPS MicroNet challenge. He won 1st place in the ACM Student Research Competition and is a recipient of the DAC Young Fellowship, Qualcomm Innovation Fellowship, Baidu Fellowship, and Analog Devices Outstanding Student Designer Award. His research has been featured by media outlets such as MIT News (MIT homepage spotlight) and VentureBeat.
Wei-Chen Wang: postdoctoral researcher, efficient deep learning, TinyML, embedded systems. He was the recipient of the ACM/IEEE CODES+ISSS Best Paper Award and the IEEE NVMSA Best Paper Award.
Zhekai Zhang: Ph.D. student, deep learning accelerators.
Jessica Zheng: MEng student, efficient machine learning applications, especially for time-series data. Her work achieved the smallest model in the TinyML contest at ICCAD’22.
Ligeng Zhu: Ph.D. student, on-device learning, tiny training, and lifelong learning. He is the recipient of the Qualcomm Innovation Fellowship.
Openings
We have openings in efficient deep learning (LLM, TinyML, autonomous driving) and quantum computer systems research. We are also looking for Android developers. If you are interested, contact Zhijian Liu (zhijian@mit.edu), Hanrui Wang (hanrui@mit.edu), or hanlab.recruiting@gmail.com.
Sponsors
We thank the generous sponsors of our research: ADI, Amazon, AMD, Apple, ARM, Cognex, Facebook, Ford, Google, Hyundai, IBM, Intel, Microsoft, MIT AI Hardware Program, MIT Microsystems Technology Lab, MIT-IBM Watson AI Lab, National Science Foundation, NVIDIA, Qualcomm, Samsung, Semiconductor Research Corporation, SONY, TI.