Yann LeCun    @ylecun    10/15/2021      

He told me that our 1989 Neural Comp paper on ConvNets shocked him because he was working on training the Neocognitron with backprop. He abandoned the project after our paper. Fukushima's work influenced me. But multilayer training (through backprop) is a pretty big deal.

Rosanne Liu    @savvyRL    12/1/2021      

There are barriers everywhere—yes, even when classes, code, papers are all online. Just because information is out there doesn't mean it's easy to convert it into knowledge. Because the most essential learning and knowledge gain is still through *working with people*.

Matthias Niessner    @MattNiessner    9/23/2021      

(8/n) Training speed: given that deep learning is so empirical, it’s critical that your setup facilitates fast turnaround times for debugging. Make sure you understand where the bottleneck lies (data loading vs backprop); a single batch should be processed in under a second.
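
A quick way to check where the bottleneck lies is to time the data loader separately from the forward/backward pass. Below is a minimal sketch in PyTorch, with a hypothetical model and synthetic data standing in for the real setup (timings here are CPU-side; on a GPU you would call torch.cuda.synchronize() before each timestamp):

```python
import time
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

if __name__ == "__main__":
    # Hypothetical stand-ins for your real model and dataset.
    model = nn.Sequential(nn.Linear(512, 512), nn.ReLU(), nn.Linear(512, 10))
    data = TensorDataset(torch.randn(4096, 512), torch.randint(0, 10, (4096,)))
    loader = DataLoader(data, batch_size=256, num_workers=2)
    opt = torch.optim.SGD(model.parameters(), lr=0.1)
    loss_fn = nn.CrossEntropyLoss()

    load_time = step_time = 0.0
    t0 = time.perf_counter()
    for x, y in loader:
        t1 = time.perf_counter()
        load_time += t1 - t0              # time spent waiting on the loader
        opt.zero_grad()
        loss_fn(model(x), y).backward()   # forward + backward
        opt.step()
        t0 = time.perf_counter()          # (on GPU: torch.cuda.synchronize() first)
        step_time += t0 - t1              # time spent in forward/backward/step
    print(f"data loading: {load_time:.2f}s   compute: {step_time:.2f}s")
```

If loading dominates, more workers or prefetching usually helps; if compute dominates, the model and batch size are where to look.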

TheSequence    @TheSequenceAI    11/30/2021      

Modern deep neural networks are large and require incredibly large training datasets, so the traditional sequential approach (one batch at a time on a single device) is simply impractical. But we can use parallel training. The idea of parallelizable training is intuitive but very hard to achieve in practice. 1/2
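
The most common form of this is data parallelism: every worker holds an identical copy of the model, computes gradients on its own shard of each batch, and the workers average those gradients before taking the same step. A toy single-process sketch of the idea (the replicas here are just copies in one process; in practice torch.nn.parallel.DistributedDataParallel handles the gradient all-reduce across machines):

```python
import torch
from torch import nn

torch.manual_seed(0)
model = nn.Linear(8, 1)
replicas = [nn.Linear(8, 1) for _ in range(4)]
for r in replicas:
    r.load_state_dict(model.state_dict())  # identical starting weights

shards = torch.randn(4, 16, 8)    # one mini-batch shard per replica
targets = torch.randn(4, 16, 1)

# Each replica computes gradients on its own shard.
for r, x, y in zip(replicas, shards, targets):
    nn.functional.mse_loss(r(x), y).backward()

# "All-reduce": average gradients across replicas, then step in lockstep.
with torch.no_grad():
    for params in zip(*(r.parameters() for r in replicas)):
        mean_grad = torch.stack([p.grad for p in params]).mean(dim=0)
        for p in params:
            p -= 0.01 * mean_grad  # same update everywhere keeps replicas in sync
```

Because every replica applies the identical averaged update, they stay synchronized, which is exactly what makes the parallel run equivalent to training on the full batch at once.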

Been Kim    @_beenkim    12/2/2021      

So in your email, please instead state: 1) a very brief description of a project you are most proud of, and why (with a link to the write-up, if any) 2) what did not work in that project 3) why you would be interested in working with me 3/n

Shruti Rijhwani    @shrutirij    12/1/2021      

Thank you, this wouldn't have been possible without you! I'm so happy to be working on this project :-)