Tweet  

Naveen Rao @NaveenGRao · 10/13/2021
10 months ago I tweeted that we were getting a new project off the ground… Today I'm proud to announce with @hanlintang @jefrankle and @mcarbin that MosaicML comes out of stealth! We are focused on the compute efficiency of neural network training using algorithmic methods. 👇
Reply · Retweet 20 · Like 160

More Tweets

Naveen Rao @NaveenGRao · 7 hours
Curious about our process for vetting methods for compute-efficient training @MosaicML? Read the detailed methodology blog by our research team. Our motivation is to establish an efficiency ground truth! https://t.co/HkDjdDGsRU
Reply · Retweet 5 · Like 9

Alex Williams @ItsNeuronal · 9/23/2021
Very excited to announce that I'll be moving this winter to start a new lab at NYU (@NYU_CNS) + Flatiron (@FlatironCCN)! We'll be working on new statistical methods for neural data, with a particular focus on state changes and long-term learning/drift in large neural populations.
Reply · Retweet 1 · Like 23

Yann LeCun @ylecun · 10/15/2021
He told me that our 1989 Neural Comp paper on ConvNets shocked him because he was working on training the Neocognitron with backprop. He abandoned the project after our paper. Fukushima's work influenced me. But multilayer training (through backprop) is a pretty big deal.
Reply · Retweet 3 · Like 35

Naveen Rao @NaveenGRao · 10/8/2021
In the context of neural network training, we can usually find ways to split computation over a desired number of devices. So we don't need to think monolithically and jam everything into one mega-processor.
Reply · Retweet · Like 4
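
The sketch below is my illustration, not MosaicML code: a minimal JAX data-parallel training step, assuming the common setup Rao alludes to, where each device holds a model replica and a shard of the batch, and gradients are averaged across devices with an all-reduce. The linear model, learning rate, and shapes are placeholders.

    from functools import partial
    import jax
    import jax.numpy as jnp

    def loss_fn(params, x, y):
        # Placeholder linear model; stands in for any network.
        pred = x @ params["w"] + params["b"]
        return jnp.mean((pred - y) ** 2)

    # pmap runs one copy of the step per local device (data parallelism).
    @partial(jax.pmap, axis_name="devices")
    def train_step(params, x, y):
        grads = jax.grad(loss_fn)(params, x, y)
        # All-reduce: average gradients so every replica applies the same update.
        grads = jax.lax.pmean(grads, axis_name="devices")
        return jax.tree_util.tree_map(lambda p, g: p - 0.1 * g, params, grads)

    n = jax.local_device_count()
    params = {"w": jnp.zeros((4, 1)), "b": jnp.zeros((1,))}
    # Replicate parameters onto every device; shard the batch along axis 0.
    params = jax.tree_util.tree_map(lambda p: jnp.stack([p] * n), params)
    x, y = jnp.ones((n, 8, 4)), jnp.ones((n, 8, 1))
    params = train_step(params, x, y)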

Sylvain Gugger @GuggerSylvain · 9/27/2021
Tired: Looking at the training loss curve while the model is training. Wired: Testing the model on any data you want while it's still training, thanks to the inference widgets on https://t.co/XYFbV2mDy3, and seeing it get better!
Reply · Retweet 6 · Like 21
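
One hedged sketch of how a model can update on the Hub while it trains (my illustration; the tweet shows no code): with the transformers Trainer, push_to_hub=True uploads each saved checkpoint to the Hub, and the model page's inference widget can then be tried against the latest push. The base model, dataset, and repo name are placeholders, and a prior `huggingface-cli login` is assumed.

    from datasets import load_dataset
    from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                              Trainer, TrainingArguments)

    name = "distilbert-base-uncased"  # placeholder base model
    tokenizer = AutoTokenizer.from_pretrained(name)
    model = AutoModelForSequenceClassification.from_pretrained(name, num_labels=2)

    # Tiny slice of a public dataset, just to make the sketch runnable.
    ds = load_dataset("imdb", split="train[:1%]")
    train_ds = ds.map(lambda b: tokenizer(b["text"], truncation=True), batched=True)

    args = TrainingArguments(
        output_dir="imdb-demo",   # local dir; also used as the Hub repo name
        push_to_hub=True,         # upload checkpoints to the Hub as they are saved
        save_strategy="epoch",    # save (and push) at the end of each epoch
        num_train_epochs=1,
    )

    # Passing the tokenizer gives the Trainer a padding data collator and
    # pushes the tokenizer files alongside the model.
    Trainer(model=model, args=args, train_dataset=train_ds,
            tokenizer=tokenizer).train()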

More from:

Alex Williams
Postdoc at Stanford in stats/neuroscience

Sergey Levine
Associate Professor at UC Berkeley

Francesco Locatello
Senior Applied Scientist at AWS AI. Former: PhD student at ETH Zürich/Max Planck Institute for Intelligent Systems, Research Consultant and Intern at Google.

Facebook AI
Facebook AI focuses on bringing the world together by advancing AI, powering meaningful and safe experiences, and conducting open research.

Jian Tang
Assistant Professor at Mila. Working on deep learning for graphs, with applications in knowledge graphs, drug discovery, and materials discovery.