Kyunghyun Cho    @kchonyc    11/23/2021      

"The first-ever multilingual model to win WMT, beating out bilingual models: the Microsoft ZCode-DeltaLM model won all three tasks by huge margins." So, who actually won what?

Shlomo Engelson Argamon    @ShlomoArgamon    11/29/2021      

No, no one would actually do that… 🤣🤣😢

Christian Wolf    @chriswolfvision    11/23/2021      

Stanford: "we call our models 'Foundation Models'". Microsoft: Hold my beer: "we use the name of Florence as the origin of the trail for exploring vision foundation models, as well as the birthplace of Renaissance."

Ikuya Yamada    @ikuyamada    6 hours      

Our #MIA_2022 workshop on 🌎multilingual NLP🌎 will be held at NAACL! Stay tuned for our shared task for multilingual question answering!🚀🚀

Jack Clark    @jackclarkSF    11/29/2021      

2 years means the frontier of dense generative models goes from 1.6 billion parameters (Salesforce, following GPT-2) to 530 billion parameters (Microsoft Megatron-Turing). A 330X increase in model size, not too shabby.
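The "330X" figure in the tweet above checks out as simple division; a minimal sketch of the arithmetic (the variable names are my own, not from the thread):

```python
# Sanity-check the scale-up claimed in the tweet: 1.6B -> 530B parameters.
start_params = 1.6e9   # dense generative model circa 2019 (Salesforce, following GPT-2)
end_params = 530e9     # Microsoft Megatron-Turing, 2021

increase = end_params / start_params
print(f"{increase:.0f}X increase in model size")  # 331X, i.e. roughly the quoted 330X
```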