We are very proud of the students in the TF 04 class, where we encourage them to build projects that make a real impact on the AI community. They are going to be AI pioneers in the years ahead.
Computer Vision
ConvNeXt
The “Roaring 20s” of visual recognition began with the introduction of Vision Transformers (ViTs), which quickly superseded ConvNets as the state-of-the-art image classification model.
A ConvNet for the 2020s
Source code + Slide
Verified
Swin Transformer
This paper presents a new vision Transformer, called Swin Transformer, that capably serves as a general-purpose backbone for computer vision.
Hierarchical Vision Transformer using Shifted Windows
Source code + Slide
Verified
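Swin Transformer's core idea is to compute self-attention inside non-overlapping local windows, and to alternate with a cyclic shift so that neighboring windows exchange information. Below is a minimal pure-Python sketch of that window partitioning on a toy 4x4 grid; the function names, grid size, and shift amount are illustrative only (the real model applies this to feature-map tokens and runs attention inside each window, e.g. via `torch.roll`).

```python
# Toy sketch of Swin-style (shifted) window partition on a 4x4 grid.
# Illustrative only: real Swin shifts feature maps and attends per window.

def cyclic_shift(grid, shift):
    """Roll rows and columns by `shift` positions (like torch.roll)."""
    h, w = len(grid), len(grid[0])
    return [[grid[(r + shift) % h][(c + shift) % w] for c in range(w)]
            for r in range(h)]

def window_partition(grid, win):
    """Split an HxW grid into non-overlapping win x win windows."""
    h, w = len(grid), len(grid[0])
    windows = []
    for r0 in range(0, h, win):
        for c0 in range(0, w, win):
            windows.append([grid[r][c0:c0 + win] for r in range(r0, r0 + win)])
    return windows

grid = [[r * 4 + c for c in range(4)] for r in range(4)]
plain = window_partition(grid, 2)                      # 4 local windows
shifted = window_partition(cyclic_shift(grid, 1), 2)   # windows after shift
print(len(plain), plain[0])  # → 4 [[0, 1], [4, 5]]
```

Alternating plain and shifted partitions is what lets information propagate across window boundaries while keeping attention cost linear in image size.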
Natural Language Processing
BERT
A new language representation model called BERT, which stands for Bidirectional Encoder Representations from Transformers.
Pre-training of Deep Bidirectional Transformers for Language Understanding
Source code + Slide
Verified
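BERT is pre-trained with a masked-language-model objective: a fraction of input tokens (15% in the paper) is hidden and the model learns to recover them from both left and right context. A minimal pure-Python sketch of that input corruption, assuming whitespace tokenization; the helper name and toy sentence are illustrative, not from the project.

```python
import random

# Toy sketch of BERT's masked-LM input corruption: hide ~15% of tokens
# and keep the originals as labels. Illustrative only; real BERT also
# sometimes keeps or randomly replaces the selected tokens.

def mask_tokens(tokens, mask_rate=0.15, seed=1):
    rng = random.Random(seed)
    masked, labels = [], []
    for tok in tokens:
        if rng.random() < mask_rate:
            masked.append("[MASK]")
            labels.append(tok)   # model is trained to recover this token
        else:
            masked.append(tok)
            labels.append(None)  # no loss on unmasked positions
    return masked, labels

tokens = "the quick brown fox jumps over the lazy dog".split()
masked, labels = mask_tokens(tokens)
print(masked)
```

Because the target token is predicted from context on both sides, the encoder is deeply bidirectional, unlike left-to-right language models.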
Transformer-XL
Transformer-XL manages to generate reasonably coherent, novel text articles with thousands of tokens.
Attentive Language Models Beyond a Fixed-Length Context
Source code + Slide
Verified
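Transformer-XL achieves long coherent generations through segment-level recurrence: each new segment attends over its own tokens plus a cached memory of the previous segment's hidden states, so effective context extends beyond one fixed-length window. A minimal pure-Python sketch of that caching, assuming the "hidden states" are just the raw tokens; segment and memory lengths are toy values.

```python
# Toy sketch of Transformer-XL's segment-level recurrence. Illustrative
# only: the real model caches hidden states (without gradients), not tokens.

def process_segments(tokens, seg_len, mem_len):
    memory = []    # cached states from previous segments
    contexts = []  # what each segment can attend over
    for start in range(0, len(tokens), seg_len):
        segment = tokens[start:start + seg_len]
        contexts.append(memory + segment)        # memory extends the context
        memory = (memory + segment)[-mem_len:]   # keep the last mem_len states
    return contexts

ctx = process_segments(list("abcdefgh"), seg_len=2, mem_len=3)
print(ctx[2])  # → ['b', 'c', 'd', 'e', 'f']
```

Note how the third segment sees tokens from two segments back via the cache, even though each forward pass only processes two new tokens.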
RoBERTa
A replication study of BERT pretraining that carefully measures the impact of many key hyperparameters and training data size.
A Robustly Optimized BERT Pretraining Approach
Source code + Slide
Verified
Time Series
TimeGAN
A novel framework for time-series generation that combines the versatility of the unsupervised GAN approach with the control over conditional temporal dynamics afforded by supervised autoregressive models.
Time-series Generative Adversarial Networks
Source code
Verified