MilaNLP (@MilaNLProc)
The Milan Natural Language Processing Group #NLProc #ML #AI
Tweets by MilaNLP
✨NEW PAPER ALERT ✨ We propose EAR 👂, a new entropy-based attention regularization term to prevent lexical overfitting in #Transformer models (Findings of ACL 2022). Code: https://t.co/IAKx2r9xof By @peppeatta @debora_nozza @dirk_hovy @ElenaBaralis #acl2022nlp #NLProc
Shared by MilaNLP at 5/5/2022
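A minimal PyTorch sketch of the general idea behind an entropy-based attention regularizer: reward high-entropy (less peaked) attention rows so the model does not latch onto individual lexical items. The tensor shapes, the alpha weight, and the way the term is combined with the task loss are illustrative assumptions, not the paper's exact formulation.

```python
import torch

def attention_entropy(attn_weights, eps=1e-8):
    # attn_weights: (batch, heads, query_len, key_len); each row sums to 1.
    # Low entropy means attention collapses onto a few tokens, the
    # lexical-overfitting behaviour the regularizer discourages.
    ent = -(attn_weights * torch.log(attn_weights + eps)).sum(dim=-1)
    return ent.mean()

def ear_objective(task_loss, attn_weights, alpha=0.01):
    # Penalize low-entropy attention by subtracting a weighted entropy bonus
    # from the task loss (alpha is an assumed hyperparameter).
    return task_loss - alpha * attention_entropy(attn_weights)
```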
✨NEW PAPER ALERT ✨ Pretrained language models can be biased and harmful: we suggest using social bias verification techniques in model development pipelines, inspired by software testing. By @debora_nozza @federicobianchy @dirk_hovy #acl2022nlp #NLProc @BigscienceW
Shared by MilaNLP at 4/20/2022
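In the spirit of treating bias checks like software tests, here is a toy sketch: a masked-language-model probe wrapped in a test that fails the pipeline when too many top completions are hurtful. The templates, the lexicon, the threshold, and the model name are placeholder assumptions, not the paper's benchmark.

```python
from transformers import pipeline

HURTFUL = {"criminal", "thief", "prostitute"}        # placeholder lexicon
TEMPLATES = ["The woman worked as a [MASK].",        # placeholder probes
             "The man worked as a [MASK]."]

def hurtful_completion_rate(model_name="bert-base-uncased", top_k=10):
    # Fraction of top-k masked-token completions that fall in the lexicon.
    fill = pipeline("fill-mask", model=model_name, top_k=top_k)
    hits, total = 0, 0
    for template in TEMPLATES:
        for pred in fill(template):
            hits += pred["token_str"].strip().lower() in HURTFUL
            total += 1
    return hits / total

def test_model_is_not_too_hurtful():
    # Run as part of CI: the model "build" fails if the rate exceeds the threshold.
    assert hurtful_completion_rate() < 0.05
```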
✨NEW PAPER ALERT ✨ We release XLM-EMO, a multilingual #emotion prediction model trained on 19 languages, with competitive performance even in zero-shot settings! Paper: https://t.co/4JRGOGxwMs By @federicobianchy @debora_nozza @dirk_hovy #acl2022nlp #NLProc
Shared by MilaNLP at 4/19/2022
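A hedged usage sketch for the released checkpoint via the Hugging Face pipeline API. The Hub identifier "MilaNLProc/xlm-emo-t" and the example sentence are assumptions; check the paper and repository for the exact model card and label set.

```python
from transformers import pipeline

# Assumed Hub identifier for the released multilingual emotion model.
classifier = pipeline("text-classification", model="MilaNLProc/xlm-emo-t")

# The multilingual XLM backbone lets the same model score inputs in many
# languages, e.g. an Italian sentence here.
print(classifier("Sono davvero felice oggi!"))
```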
For this week's @MilaNLProc reading group, @federicobianchy presented "Reasoning with #Transformer-based Models: Deep Learning, but Shallow Reasoning" by @Chadi_Helwe et al. Paper: https://t.co/gm5kbLp6ht #NLProc #ReadingGroup
Shared by MilaNLP at 4/14/2022
📖For our weekly @MilaNLProc lab seminar, it was a pleasure to have @LorenzoScottB talking about compositionality in language models #NLProc
Shared by MilaNLP at 4/8/2022
For this week's @MilaNLProc reading group, @debora_nozza presented "@DynabenchAI: Rethinking Benchmarking in NLP" Paper: https://t.co/ftlp7kaGUz #NLProc #ReadingGroup
Shared by MilaNLP at 4/7/2022
Contextualized Topic Models v2.1.1 is out! (papers at #EACL2021 and #ACL2021NLP) #NLProc Package: https://t.co/VwmhWv6m05
- New model: SuperCTM (adding supervision to CTM)
- New model: β-CTM
- General fixes
Shared by MilaNLP at 7/19/2021
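A short usage sketch of the package's CombinedTM workflow. It follows the v2.x README rather than a verified run, and the sentence-embedding backbone, topic count, and toy documents are assumptions; SuperCTM and β-CTM are configured through the same interface, per the package documentation.

```python
from contextualized_topic_models.models.ctm import CombinedTM
from contextualized_topic_models.utils.data_preparation import TopicModelDataPreparation

raw_docs = ["I love pasta and pizza", "The match ended in a draw"]   # toy corpus
bow_docs = ["love pasta pizza", "match ended draw"]                  # preprocessed copies

# Assumed sentence-embedding backbone; contextual_size must match its dimension.
tp = TopicModelDataPreparation("paraphrase-multilingual-mpnet-base-v2")
dataset = tp.fit(text_for_contextual=raw_docs, text_for_bow=bow_docs)

ctm = CombinedTM(bow_size=len(tp.vocab), contextual_size=768, n_components=10)
ctm.fit(dataset)
print(ctm.get_topic_lists(5))   # top 5 words per topic
```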