Kyunghyun Cho    @kchonyc    11/23/2021      

The first-ever multilingual model to win WMT, beating out bilingual models: https://t.co/kbhRpkgVbA The Microsoft ZCode-DeltaLM model won all three tasks by huge margins: https://t.co/ecjZUpQ9Lc So, who actually won what?
  
Hashtags: #MIA_2022
 
  Related  

Shlomo Engelson Argamon    @ShlomoArgamon    11/29/2021      

No, no one would actually do that… 🤣🤣😢
  



Christian Wolf    @chriswolfvision    11/23/2021      

Stanford: "we call our models 'Foundation Models'". Microsoft: hold my beer. "We use the name of Florence as the origin of the trail for exploring vision foundation models, as well as the birthplace of Renaissance."
  



Ikuya Yamada    @ikuyamada    6 hours      

Our #MIA_2022 workshop on 🌎multilingual NLP🌎 will be held at NAACL! Stay tuned for our shared task for multilingual question answering!🚀🚀
  



Jack Clark    @jackclarkSF    11/29/2021      

2 years means the frontier of dense generative models goes from 1.6 billion parameters (Salesforce, following GPT-2) to 530 billion parameters (Microsoft Megatron-Turing https://t.co/md03Qz8K90). A 330x increase in model size, not too shabby.
  
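A quick sanity check of the scale-up arithmetic in that last tweet. The parameter counts come from the tweet itself; attributing the 1.6-billion-parameter model to Salesforce CTRL is an assumption, and the implied-doubling-time figure is an extrapolation, not a claim from the source.

```python
import math

# Figures from the tweet: ~1.6B dense generative model (assumed to be
# Salesforce CTRL, released shortly after GPT-2) to Megatron-Turing NLG 530B.
start_params = 1.6e9
end_params = 530e9

growth = end_params / start_params
print(f"{growth:.0f}x increase in parameter count")  # ~331x, i.e. "330x"

# Implied doubling cadence over the stated 2-year window (an extrapolation):
years = 2
doublings = math.log2(growth)
print(f"{doublings:.1f} doublings, roughly one every "
      f"{years * 12 / doublings:.1f} months")
```

So the "330x" in the tweet is a rounded 531.25/1.6 ≈ 331, which works out to a doubling in model size roughly every three months over that period.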