Google AI    @GoogleAI     10/13/2021
Medical image classification models often pre-train on natural image datasets. Today, we present alternative approaches that use additional pre-training on medical images, along with metadata-based data augmentation, to significantly improve performance. https://t.co/DHy0XojZwm
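The recipe above is staged transfer learning: pre-train on a generic natural-image task, continue pre-training on in-domain medical data, then fine-tune. A minimal NumPy sketch of the staging idea on a toy linear model (the data, task, and hyperparameters are illustrative stand-ins, not Google's actual setup):

```python
import numpy as np

def train_stage(weights, data, labels, lr=0.1, steps=300):
    """Run one training stage: gradient descent on a least-squares loss."""
    for _ in range(steps):
        grad = data.T @ (data @ weights - labels) / len(data)
        weights = weights - lr * grad
    return weights

rng = np.random.default_rng(0)
# Stand-ins for feature vectors: "natural" and "medical" image embeddings.
natural_x = rng.normal(size=(100, 4))
natural_y = natural_x @ np.ones(4)        # generic source task
medical_x = rng.normal(size=(30, 4))
medical_y = medical_x @ np.full(4, 2.0)   # in-domain target task

w = np.zeros(4)
w = train_stage(w, natural_x, natural_y)  # stage 1: generic pre-training
w = train_stage(w, medical_x, medical_y)  # stage 2: medical pre-training
# w is now a warm start for task-specific fine-tuning
```

The point of the staging is that stage 2 starts from stage 1's solution rather than from scratch, which is the mechanism behind "additional pre-training on medical images."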

Sebastian Raschka    @rasbt     10/4/2021
"PASS: An ImageNet replacement for self-supervised pretraining without humans" (https://t.co/H6Zmcy5AFo). This dataset of 1.4 million CC-BY-licensed images, with fewer problematic images, seems like a good alternative to ImageNet for pre-training your next CV model.

Andrei Bursuc    @abursuc     9/28/2021
Some interesting findings: pre-training on PASS vs. pre-training on IN-1k leads to performance in the same ballpark for various downstream tasks; performance on human-centered downstream tasks, e.g. human dense pose prediction, is on par with IN-1k models. 2/

Arkaitz Zubiaga    @arkaitz     9/23/2021
馃 Performance of stance classification models drops over time due to changes in data. 馃挕 In this paper, we assess the extent to which performance drops and propose solutions to mitigate the decay. (w/ @rabab_alkhalifa @Elena_Kochkina) https://t.co/rNj66hXTcN

Prafulla Dhariwal    @prafdhar     6/9/2021
Exciting results on training a cascade of upsampling diffusion models. Adding data augmentation at the lower resolutions fixes the train-test gap during upsampling and allows generating high-quality natural images!
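One way to read "data augmentation at the lower resolutions" is conditioning augmentation: the upsampler trains on ground-truth downsampled images but is fed imperfect base-model samples at test time, so corrupting the conditioning input during training narrows that gap. A minimal NumPy sketch (the function name, the Gaussian noise model, and `noise_std` are illustrative assumptions, not details from the paper):

```python
import numpy as np

def augment_conditioning(low_res, noise_std=0.2, rng=None):
    """Corrupt the low-resolution conditioning input during training.

    Adding noise to the low-res image the upsampler conditions on makes
    training-time inputs resemble the imperfect base-model samples seen
    at test time. Noise model and scale here are assumptions.
    """
    rng = rng if rng is not None else np.random.default_rng(0)
    return low_res + rng.normal(0.0, noise_std, size=low_res.shape)

low = np.linspace(0.0, 1.0, 64).reshape(8, 8)  # toy 8x8 "low-res" image
aug = augment_conditioning(low)                # what the upsampler sees
```

In a real cascade this corruption would be applied to each lower-resolution conditioning input before it is fed to the next upsampling stage.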