Chris J. Maddison    @cjmaddison    11/24/2021      

In this talk, I give a simple perspective on representation learning. The recent progress should be blowing your mind: in 2012, a single network (AlexNet) solved a single dataset (ImageNet). In 2021, a single network (CLIP) solved an entire subfield (image classification).
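
For context (not from the talk itself): "a single network solving image classification" refers to CLIP's zero-shot evaluation, where an image is scored against a set of text prompts and the best match is taken as the prediction. A minimal sketch using the public openai/CLIP package; the image path and class names are placeholders:

```python
import torch
import clip
from PIL import Image

# Load a pretrained CLIP model and its matching preprocessing pipeline
model, preprocess = clip.load("ViT-B/32", device="cpu")

# One image, and one text prompt per candidate class (no task-specific training)
image = preprocess(Image.open("photo.jpg")).unsqueeze(0)
classes = ["dog", "cat", "car"]
text = clip.tokenize([f"a photo of a {c}" for c in classes])

with torch.no_grad():
    image_features = model.encode_image(image)
    text_features = model.encode_text(text)

# Cosine similarity between the image and each prompt gives zero-shot class scores
image_features = image_features / image_features.norm(dim=-1, keepdim=True)
text_features = text_features / text_features.norm(dim=-1, keepdim=True)
probs = (100.0 * image_features @ text_features.T).softmax(dim=-1)
print(dict(zip(classes, probs[0].tolist())))
```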
  
  Related  

Andrew Drozdov    @mrdrozdov    18 hours      

MNIST is a really amazing dataset and story. There was a nice talk about its history at ICML in 2016... really wish I could find that link. Also, CIFAR was really important for me during my MS, and ImageNet seems crucial to much of the progress in deep learning.
  
Jeremy Cohen    @deepcohen    12/6/2021      

People learning about NTK are often confused by the following apparent paradox: in the NTK regime, the last-layer feature kernel (and the NTK) do not evolve in the infinite-width limit, yet somehow the network still fits the training dataset. How can the network fit the ....
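
Background on why this is not actually a contradiction (a sketch of the standard linearized-training picture, not the thread's own resolution): in the NTK regime the network behaves like its first-order Taylor expansion in the weights,

```latex
\[
  f(x;\theta_t) \;\approx\; f(x;\theta_0)
  + \nabla_\theta f(x;\theta_0)^\top (\theta_t - \theta_0),
\]
% Under gradient flow on the squared loss, the predictions on the training inputs X obey
\[
  \frac{d}{dt} f_t(X) \;=\; -\,\Theta(X,X)\,\bigl(f_t(X) - y\bigr),
\]
% so f_t(X) converges to the labels y whenever the (fixed) NTK \Theta(X,X) is positive definite.
% Each individual weight moves vanishingly little as width grows, which is why the kernel and the
% last-layer features look frozen, yet the aggregate effect of all those tiny moves on the output is O(1).
```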
  
Matthias Niessner    @MattNiessner    9/23/2021      

(8/n) Training speed: given that deep learning is so empirical, it's critical that your setup facilitates fast turnaround times for debugging. Make sure you understand where the bottleneck lies (data loading vs. backprop); a single batch should be processed in under a second.
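
A minimal way to see where that second goes, sketched in PyTorch (assumes a `model` and a `DataLoader` called `loader`; the classification loss and SGD optimizer are placeholders): time the data-loading step and the forward/backward step separately.

```python
import time
import torch
import torch.nn.functional as F

def profile_batches(model, loader, device="cuda", num_batches=5):
    """Rough per-batch split of wall-clock time into data loading vs. compute."""
    model = model.to(device).train()
    opt = torch.optim.SGD(model.parameters(), lr=0.1)
    it = iter(loader)
    for i in range(num_batches):
        t0 = time.perf_counter()
        x, y = next(it)                      # data loading + augmentation happen here
        t1 = time.perf_counter()
        x, y = x.to(device), y.to(device)
        opt.zero_grad()
        loss = F.cross_entropy(model(x), y)  # forward pass
        loss.backward()                      # backward pass
        opt.step()
        if device == "cuda":
            torch.cuda.synchronize()         # make async GPU work visible to the timer
        t2 = time.perf_counter()
        print(f"batch {i}: data {t1 - t0:.3f}s, compute {t2 - t1:.3f}s")
```

If the `data` column dominates, look at the input pipeline (more workers, faster decoding, caching); if `compute` dominates, look at the model and batch size.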
  
Martin Görner    @martin_gorner    11/24/2021      

Self-supervised learning techniques on images are making steady progress. Here are Masked Autoencoders: https://t.co/eZpzogEUsZ It's impressive how much the neural network can reconstruct.
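
The core trick in the linked work is to hide a large fraction of image patches and reconstruct them. A rough illustration of MAE-style random masking (the 75% ratio and patch shapes follow the paper's description, but this is not the authors' code):

```python
import torch

def random_patch_mask(patches, mask_ratio=0.75):
    """Keep a random subset of embedded patches; the rest are hidden from the encoder."""
    b, n, d = patches.shape
    n_keep = int(n * (1 - mask_ratio))
    scores = torch.rand(b, n, device=patches.device)   # one random score per patch
    keep_idx = scores.argsort(dim=1)[:, :n_keep]        # lowest scores stay visible
    visible = torch.gather(patches, 1, keep_idx.unsqueeze(-1).expand(-1, -1, d))
    return visible, keep_idx

# e.g. a 14x14 grid of 768-dim patch embeddings; only 25% of patches reach the encoder
x = torch.randn(2, 196, 768)
visible, idx = random_patch_mask(x)
print(visible.shape)  # torch.Size([2, 49, 768])
```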
  