TSM: Temporal Shift Module for Efficient Video Understanding

Ji Lin1    Chuang Gan2    Song Han1
1MIT    2MIT-IBM Watson AI Lab

News

  • [10/13/2019] TSM is featured in MIT News, MIT Technology Review, WIRED, Engadget, and the NVIDIA Jetson Developer Forum.
  • [10/03/2019] The TSM model is friendly to distributed training. With TSM, we can scale Kinetics training up to 1,536 GPUs and finish within 15 minutes. See the full Systems for ML workshop paper here: [link].
  • [09/23/2019] We deployed an online version of the TSM model on NVIDIA Jetson Nano. It performs real-time online hand gesture recognition. [link]
  • [07/22/2019] Our paper is accepted to ICCV 2019. See you in Seoul!
  • [03/29/2019] We have released the code of the TSM project. Check it out on GitHub!

Abstract

The explosive growth in video streaming gives rise to challenges in performing video understanding at high accuracy and low computation cost. Conventional 2D CNNs are computationally cheap but cannot capture temporal relationships; 3D-CNN-based methods can achieve good performance but are computationally intensive, making them expensive to deploy. In this paper, we propose a generic and effective Temporal Shift Module (TSM) that enjoys both high efficiency and high performance. Specifically, it can achieve the performance of 3D CNNs while maintaining 2D CNN complexity. TSM shifts part of the channels along the temporal dimension, thus facilitating information exchange among neighboring frames. It can be inserted into 2D CNNs to achieve temporal modeling at zero computation and zero parameters. We also extend TSM to the online setting, which enables real-time, low-latency online video recognition and video object detection. TSM is accurate and efficient: it ranked first on the Something-Something leaderboard upon publication; on Jetson Nano and Galaxy Note8, it achieves low latencies of 13 ms and 35 ms for online video recognition. The code is available at: https://github.com/mit-han-lab/temporal-shift-module.
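
To make the shift concrete, below is a minimal PyTorch sketch of the bi-directional (offline) temporal shift, assuming the common layout where the T frames of each video are stacked along the batch dimension; the 1/8 shift fraction (fold_div=8) is a hyperparameter, and the function name here is ours for illustration:

import torch

def temporal_shift(x, n_segment, fold_div=8):
    # x: (N*T, C, H, W), with T = n_segment frames per video
    nt, c, h, w = x.size()
    x = x.view(nt // n_segment, n_segment, c, h, w)
    fold = c // fold_div
    out = torch.zeros_like(x)
    out[:, :-1, :fold] = x[:, 1:, :fold]                  # shift one step back in time
    out[:, 1:, fold:2 * fold] = x[:, :-1, fold:2 * fold]  # shift one step forward in time
    out[:, :, 2 * fold:] = x[:, :, 2 * fold:]             # leave the remaining channels unshifted
    return out.view(nt, c, h, w)

The operation is pure memory movement: it adds no parameters and no FLOPs, which is why inserting it into a 2D CNN achieves temporal modeling at essentially zero cost.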

Paper

BibTeX

@inproceedings{lin2019tsm,
  title={TSM: Temporal Shift Module for Efficient Video Understanding},
  author={Lin, Ji and Gan, Chuang and Han, Song},
  booktitle={Proceedings of the IEEE International Conference on Computer Vision},
  year={2019}
}  

Live Demo

[NEW] We have updated the online demo of hand gesture recognition, running on an NVIDIA Jetson Nano ($99) at 8 watts. The model uses MobileNetV2 as the backbone with online TSM inserted, and is compiled with TVM. It runs at more than 70 FPS on the Nano (in the demo, the speed is limited by the camera frame rate).


We built a live demo of online hand gesture recognition with our TSM model. It is useful in driving scenarios, where we can communicate with the computer using hand gestures.

The model uses MobileNetV2 as the backbone, and TSM is inserted only into the last 3 residual blocks so that we can reuse the earlier features to facilitate online recognition.
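
As a rough sketch of how the online variant keeps recognition causal (the class name and caching details below are illustrative, not the repository's exact API), each shifted layer caches part of the previous frame's channels and substitutes them into the current frame, so information only moves forward in time:

import torch

class OnlineTemporalShift:
    # Uni-directional shift for streaming inference (illustrative sketch).
    def __init__(self, fold_div=8):
        self.fold_div = fold_div
        self.cache = None  # shifted-out channels from the previous frame

    def __call__(self, x):
        # x: (1, C, H, W) feature map of the current frame
        fold = x.size(1) // self.fold_div
        prev = self.cache if self.cache is not None else torch.zeros_like(x[:, :fold])
        self.cache = x[:, :fold].detach().clone()  # save for the next frame
        out = x.clone()
        out[:, :fold] = prev  # substitute the previous frame's channels
        return out

Because each shifted layer only needs the cached channels from the previous frame, the features of the unshifted layers can be computed once per frame and reused, which is what enables low-latency online recognition.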

The demo runs on an NVIDIA Jetson TX2 board. It can recognize around 15 video clips (8 frames each) per second.


Introduction Video

This is a video introducing our TSM framework.