Yuki    @y_m_asano    11/23/2021      

Impressive results from pretraining on image+text on tasks like COCO (>62% mAP), IN-1k (>83%) & K400 (>87%). But >512×10 A100 GPU-days and (of course) proprietary data. More data is "better", but also we have no idea what the model has learned. Worrying. https://t.co/pYvFQ1PKkm
  

MichiganAI    @michigan_AI    11/23/2021      

"From ACL 2021 with love: How I learned to stop worrying and love the BERT" by @ashkankazemi. Get the details from Michigan AI's latest blog post: ⬇️ https://t.co/hZ4kSdhziu
  



Vincent Sitzmann    @vincesitzmann    11/23/2021      

To me, this is especially awesome, because we synthesized many of the things that I learned over the past few years that you can’t currently learn in any university course. The knowledge summarized in esp. Part 1 of this review was previously scattered over tens of papers! (2/n)
  



AK    @ak92501    11/30/2021      

Vector Quantized Diffusion Model for Text-to-Image Synthesis. abs: https://t.co/PMjKbO37AX github: https://t.co/TO5c2ydrjd Produces significantly better text-to-image generation results compared with conventional autoregressive models with similar numbers of parameters.
  



Sylvain Gugger    @GuggerSylvain    11/24/2021      

Missed the course event last week? @stevhliu has summarized all talks with super nice visuals and we have grouped them in a special chapter of the course! https://t.co/s9GPFyJjGz
  