Tweet  

Arun Mallya    @arunmallya   ·   10/13/2021
Code for #GANcraft (ICCV'21) has been released at https://t.co/iGEYvv6d9V, with pretrained models & training instructions. You can even import your own worlds and make them real! This also includes updates to the #Imaginaire repo to make it faster, better, and more awesome!
 
  More Tweets  

Jack Clark    @jackclarkSF   ·   10/12/2021
Here's the Inspur paper: https://t.co/A1w4TomNRr Writing up for this week's Import AI. Very notable that all the GPT3 replications have come from industry. Academia nowhere in sight (except in China, where BAAI has been training vast models like Wu Dao).

Sylvain Gugger    @GuggerSylvain   ·   9/27/2021
Tired: Looking at the training loss curve while the model is training. Wired: Testing the model on any data you want while it's still training, thanks to the inference widgets on https://t.co/XYFbV2mDy3, and seeing it get better!
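
The widgets are backed by the hosted Inference API, so a checkpoint pushed to the Hub mid-training can also be spot-checked from a script. A minimal sketch, assuming the requests library; the model name, token, and payload are placeholders, not names from the tweet:

    # Rough sketch: query the hosted Inference API that backs the on-page
    # widgets. "your-username/model-in-training" and the example payload
    # are placeholder assumptions.
    import requests

    API_URL = "https://api-inference.huggingface.co/models/your-username/model-in-training"
    HEADERS = {"Authorization": "Bearer hf_xxx"}  # your Hub access token

    def query(payload):
        response = requests.post(API_URL, headers=HEADERS, json=payload)
        response.raise_for_status()
        return response.json()

    # Spot-check the latest pushed checkpoint on any input you like.
    print(query({"inputs": "This movie was surprisingly good!"}))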

Max Jaderberg    @maxjaderberg   ·   9/29/2021
We released a new paper improving Population Based Training (PBT): Faster Improvement Rate (FIRE) PBT https://t.co/djQWeBxKQN. PBT allows automated adaptation of hyperparameters within the time of a single experiment run (the image shows learning rate on ResNet-50 ImageNet) [1/n]
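
For context, the periodic exploit/explore loop of vanilla PBT looks roughly like the sketch below. This is the baseline method, not the FIRE PBT variant from the linked paper; the Member fields, population size, and perturbation factors are illustrative assumptions:

    # Rough sketch of vanilla PBT's exploit/explore step (Jaderberg et
    # al., 2017); all names and constants here are illustrative.
    import copy
    import random

    class Member:
        def __init__(self, lr):
            self.lr = lr          # hyperparameter being adapted online
            self.weights = None   # model checkpoint (placeholder)
            self.score = 0.0      # latest eval metric, higher is better

    def train_step(member):
        ...  # train for a while at member.lr, then update member.score

    def pbt_round(population, frac=0.2):
        population.sort(key=lambda m: m.score, reverse=True)
        cut = max(1, int(len(population) * frac))
        for loser in population[-cut:]:
            winner = random.choice(population[:cut])
            loser.weights = copy.deepcopy(winner.weights)     # exploit: copy checkpoint
            loser.lr = winner.lr * random.choice([0.8, 1.2])  # explore: perturb hparam

    population = [Member(lr=10 ** random.uniform(-4, -1)) for _ in range(8)]
    for _ in range(10):  # all within the time of a single experiment run
        for member in population:
            train_step(member)
        pbt_round(population)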

Antonio Grasso    @antgrasso   ·   10/6/2021
An experienced training data partner can help bring your AI models to production faster; see how @SamaAI ’s MicroModel™ technology gives companies like Google and NVIDIA a competitive edge. Learn more > https://t.co/Tho3i2G5QY via @antgrasso #SamaPartner #AI #DataQuality
 
Tal Linzen    @tallinzen   ·   10/5/2021
Is it fair to say that the category of "these large pretrained models" doesn't include word2vec? If so, is it because word2vec is not large, or because it is not pretrained?

  More from:  

Facebook AI
Facebook AI focuses on bringing the world together by advancing AI, powering meaningful and safe experiences, and conducting open research.

Hugging Face
The AI community building the future. #BlackLivesMatter #stopasianhate

François Chollet
Deep learning @google. Creator of Keras. Author of 'Deep Learning with Python'. Opinions are my own.

The Gradient
The Gradient cuts through the hype and the cynicism to provide accessible, sophisticated reporting on the latest AI research.

Lilian Weng
doing work @OpenAI



  Hashtags  

   #GANcraft
   #Imaginaire
   #SamaPartner
   #AI
   #DataQuality