Edoardo Ponti
@PontiEdoardo
From Aug 2022: lecturer in NLP at @InfAtEd @EdinburghUni. Currently: postdoc at @Mila_Quebec. Previously: PhD @Cambridge_Uni (@stjohnscam)
Tweets by Edoardo Ponti
Grammatical markers are implicitly aligned in pre-trained multilingual encoders: the same grammatical functions are encoded by the same subset of neurons across languages. This may help explain the "unreasonable" effectiveness of zero-shot cross-lingual transfer.
Shared by Edoardo Ponti at 5/6/2022
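The claim in the tweet above lends itself to a simple probing recipe: train a per-language probe for a grammatical feature on encoder activations, keep each language's most predictive neurons, and measure how much those neuron sets overlap. Below is a minimal sketch of that recipe, using synthetic activations in place of a real multilingual encoder; the languages, the feature, and the top-k threshold are illustrative assumptions, not the paper's setup.

```python
# Hypothetical sketch: check whether the same neurons encode a grammatical
# function (e.g. past tense) across languages. The hidden states here are
# synthetic stand-ins for a multilingual encoder's activations; the probing
# recipe, not the data, is the point.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_neurons, n_examples, top_k = 768, 500, 50

# Pretend a small shared subset of neurons carries the tense signal.
shared_tense_neurons = rng.choice(n_neurons, size=top_k, replace=False)

def synthetic_activations(label):
    """Encoder-like activations in which `shared_tense_neurons` track the label."""
    x = rng.normal(size=(n_examples, n_neurons))
    x[:, shared_tense_neurons] += 2.0 * label[:, None]
    return x

top_neurons = {}
for lang in ["en", "es", "tr"]:
    y = rng.integers(0, 2, size=n_examples)          # past vs. non-past
    x = synthetic_activations(y)
    probe = LogisticRegression(max_iter=1000).fit(x, y)
    # Rank neurons by probe weight magnitude: the most tense-predictive units.
    top_neurons[lang] = set(np.argsort(-np.abs(probe.coef_[0]))[:top_k])

# Jaccard overlap of top-k neuron sets across language pairs: high overlap
# indicates implicitly aligned grammatical markers.
for a, b in [("en", "es"), ("en", "tr"), ("es", "tr")]:
    inter = len(top_neurons[a] & top_neurons[b])
    union = len(top_neurons[a] | top_neurons[b])
    print(f"{a}-{b} top-{top_k} neuron overlap: {inter / union:.2f}")
```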
Also, it contains the largest-scale audit of gold-standard benchmarks to date, revealing that e.g. 71.4% of turns in Wizard of Wikipedia are hallucinated. Even worse, language models tend not only to 🦜 this noise but even to amplify it.
Shared by Edoardo Ponti at 4/25/2022
Fantastic new work from @nouhadziri: data-centric + modelling solutions can remove most hallucinations from knowledge-grounded dialogue and increase its quality (e.g. abstractiveness)!
Shared by Edoardo Ponti at 4/25/2022
Our method improves 1) sample efficiency in reinforcement learning on seen tasks across 8 levels of BabyAI, and 2) few-shot adaptation to 20 held-out NLP tasks on CrossFit, compared to baselines with "entangled" soft parameter sharing or skill combinations based on expert knowledge.
Shared by Edoardo Ponti at 3/2/2022
Multitask learning by decomposing tasks into sets of fine-grained skills (discrete, reusable, and autonomous facets of knowledge). New work with Yoshua Bengio and @sivareddyg from @Mila_Quebec and @murefil from @MSFTResearch 📘: https://t.co/1Js8D1ra1d 💻: https://t.co/hn3fHxRAHZ
Shared by Edoardo Ponti at 3/2/2022
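A minimal sketch of what skill-based decomposition can look like: a shared inventory of skill modules plus a task-skill allocation matrix that selects a subset of skills per task and combines their parameters. The shapes, the binary selection rule, and the averaging combiner below are illustrative assumptions, not the paper's exact parametrisation.

```python
# Hypothetical sketch of skill-based parameter sharing: each task selects a
# subset of shared "skill" parameter blocks via a task-skill allocation
# matrix, and its effective parameters are the average of the selected skills.
import numpy as np

rng = np.random.default_rng(0)
n_tasks, n_skills, d_in, d_out = 4, 6, 16, 8

# Inventory of reusable skill modules (here: one weight matrix per skill).
skills = rng.normal(size=(n_skills, d_in, d_out))

# Task-skill allocation; thresholding it yields a discrete, reusable
# decomposition of each task into skills (learned end to end in practice).
alloc_logits = rng.normal(size=(n_tasks, n_skills))
alloc = (alloc_logits > 0).astype(float)            # binary skill selection

def task_params(task_id):
    """Combine the selected skills into one task-specific weight matrix."""
    z = alloc[task_id]
    # Average the chosen skill modules (guard against an empty selection).
    return np.einsum("s,sio->io", z, skills) / max(z.sum(), 1.0)

x = rng.normal(size=(1, d_in))
for t in range(n_tasks):
    w = task_params(t)
    print(f"task {t}: skills {np.flatnonzero(alloc[t])}, "
          f"output norm {np.linalg.norm(x @ w):.2f}")
```

Because skills are shared across the task-skill matrix, a held-out task only needs to learn a new allocation row rather than a full parameter set, which is what drives the few-shot gains reported above.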
In our new paper, @KreutzerJulia, @licwu, @sivareddyg, and I present a method to enhance translation-based cross-lingual transfer (gains of up to 2.7 points per task and 5.6 points per language). PDF: https://t.co/iiG3joGn8A Code: https://t.co/lOJKVPJu2C @Mila_Quebec @CambridgeLTL @GoogleAI
Shared by Edoardo Ponti at 7/26/2021
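For context, the translate-test pipeline that this line of work builds on looks roughly like the sketch below: translate the input into English, then run an English task model on the result. The enhancements the tweet announces are not reproduced here, and the Hugging Face checkpoints named are illustrative assumptions, not the paper's models.

```python
# Hypothetical sketch of the plain translate-test baseline for cross-lingual
# transfer: machine-translate non-English inputs into English and apply an
# English task model to the translation.
from transformers import pipeline

# Assumed checkpoints: an es->en MT model and an English sentiment classifier.
translator = pipeline("translation", model="Helsinki-NLP/opus-mt-es-en")
classifier = pipeline("sentiment-analysis")

def classify_spanish(text: str):
    """Translate to English, then classify the translated text."""
    english = translator(text)[0]["translation_text"]
    return english, classifier(english)[0]

english, pred = classify_spanish("La película fue absolutamente maravillosa.")
print(english, pred)
```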