Robocrunch AI
  Tweet

Daniel Levy    @daniellevy__   ·   9/14/2021
Great collaboration with @violet_zct et al. on DRO for multilingual translation! Two key ideas:
- surprisingly, the robust objective improves performance on *every* language pair (vs ERM)
- tailoring the optimization algorithm to the architecture (here transformers) matters a lot
Retweets: 4 · Likes: 13
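For readers curious how the robust objective differs from ERM, here is a minimal group-DRO-style reweighting sketch in PyTorch (illustrative only; the paper's exact formulation and optimizer may differ, and `dro_weighted_loss` / `log_weights` are hypothetical names). Language pairs with higher loss are up-weighted, so the model cannot neglect any pair.

```python
# Minimal group-DRO-style reweighting sketch (illustrative; not necessarily the
# paper's exact algorithm). `group_losses` holds one mean loss per language pair.
import torch

def dro_weighted_loss(group_losses, log_weights, eta=0.1):
    # Exponentiated-gradient update: pairs with higher loss get more weight.
    log_weights = log_weights + eta * group_losses.detach()
    weights = torch.softmax(log_weights, dim=0)
    # Robust objective: weighted sum of per-pair losses (vs. the uniform mean in ERM).
    return (weights * group_losses).sum(), log_weights
```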






  Similar Tweets  

Graham Neubig    @gneubig   ·   9/16/2021
Our new #EMNLP2021 paper describes a simple, efficient, and effective way to learn multilingual models that work well on *all* of the languages they're trained on. It's based on the framework of distributionally robust optimization with a number of important tweaks. Check it out!
Retweets: 1 · Likes: 11



Sergey Levine    @svlevine   ·   7/16/2021
Data-driven design is a lot like offline RL. Want to design a drug molecule, protein, or robot? Offline model-based optimization (MBO) tackles this, and our new algorithm, conservative objective models (COMs) provides a simple approach: https://t.co/ivfCvN147Z A thread:
Retweets: 45 · Likes: 204
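A rough sketch of the conservative-objective-model idea, assuming a PyTorch regressor `model` over design inputs (illustrative; see the thread and paper for the actual algorithm): fit the logged designs, then penalize high predictions on designs found by gradient ascent on the learned objective, so a downstream optimizer cannot exploit over-estimated regions.

```python
# Hedged sketch of a COMs-style conservative penalty (illustrative only).
# Assumes model(x) -> (N, 1) scores and y -> (N,) logged objective values.
import torch

def com_loss(model, x, y, alpha=1.0, ascent_steps=5, lr=0.05):
    # Standard regression fit to the logged designs.
    mse = torch.nn.functional.mse_loss(model(x).squeeze(-1), y)
    # Find "adversarial" designs by gradient ascent on the learned objective.
    x_adv = x.clone().detach().requires_grad_(True)
    for _ in range(ascent_steps):
        score = model(x_adv).sum()
        grad = torch.autograd.grad(score, x_adv)[0]
        x_adv = (x_adv + lr * grad).detach().requires_grad_(True)
    # Conservatism: push predictions down on ascent-found points, up on real data.
    penalty = model(x_adv).mean() - model(x).mean()
    return mse + alpha * penalty
```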



Christopher Manning    @chrmanning   ·   9/12/2021
people use an easily measurable objective in place of what is good. Maybe they’re right that an optimization mindset tends to focus on process rather than goals. This demands refocusing on goals and defining a good, multifaceted objective function before you start to optimize.
Retweets: 1 · Likes: 5



Sebastian Ruder    @seb_ruder   ·   9/14/2021
How can we apply language models to language varieties without much data? In this project, we show that by ensembling adapters from related languages *at test time* and weighting them to become more confident in their prediction, we can obtain strong performance.
Retweets: 1 · Likes: 9
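An illustrative sketch of the test-time confidence weighting described above (hypothetical helper; the paper's exact procedure may differ): combine logits from several related-language adapters and tune the mixture weights for a few steps to minimize prediction entropy.

```python
# Illustrative: weight adapters from related languages at test time so the
# ensemble becomes more confident (entropy minimization over mixture weights).
import torch

def ensemble_adapters(adapter_logits, steps=3, lr=0.1):
    # adapter_logits: list of (batch, num_labels) tensors, one per adapter.
    stacked = torch.stack([l.detach() for l in adapter_logits])  # (K, batch, num_labels)
    w = torch.zeros(len(adapter_logits), requires_grad=True)
    opt = torch.optim.SGD([w], lr=lr)
    for _ in range(steps):
        mix = (torch.softmax(w, 0)[:, None, None] * stacked).sum(0)
        probs = torch.softmax(mix, dim=-1)
        entropy = -(probs * probs.clamp_min(1e-9).log()).sum(-1).mean()
        opt.zero_grad()
        entropy.backward()  # lower entropy = more confident ensemble
        opt.step()
    return (torch.softmax(w, 0)[:, None, None] * stacked).sum(0)
```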



Nandan Thakur    @Nthakur20   ·   9/12/2021
🔥15+ languages 🗣️ added to BEIR🔥 🍻 mBEIR is an effort to provide robust dense multilingual #IR retrievers in the future. We add Mr. TyDi (10 lang, real data) and mMARCO (8 lang, translated data) for training and robust monolingual evaluation. Check README: https://t.co/Z9EZ7yT5F9
Retweets: 1 · Likes: 8



Graham Neubig    @gneubig   ·   9/16/2021
When we train multilingual NLP models, we often have different parameters for each language. However, language is actually a continuum, with many varieties! Our #EMNLP2021 paper combines language-specific parameters, adapting at test time to peculiarities of individual sentences.
Retweets: 2 · Likes: 6



Suraj Patil    @psuraj28   ·   6/18/2021
Blazing fast language generation with 🤗 Transformers and JAX/Flax 🔥 JAX makes it super easy to parallelize generation across multiple accelerators (TPU cores, multiple GPUs), which gives a huge speed boost!
Retweets: 1 · Likes: 13
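The speed-up comes from jax.pmap, which replicates a step function across all local devices and shards the batch over them. A minimal sketch with a placeholder step (a real setup would call a Flax model's generate method inside the pmapped function):

```python
# Minimal jax.pmap sketch; `generate_step` is a placeholder for a real
# Flax generation call, used here only to show the device-parallel pattern.
import jax
import jax.numpy as jnp

def generate_step(params, input_ids):
    # Placeholder "model": stands in for model.generate(input_ids, params=params).
    return input_ids + jnp.argmax(params, axis=-1)

p_generate = jax.pmap(generate_step)             # one copy of the step per device
n = jax.local_device_count()
params = jnp.zeros((n, 4))                       # params replicated along a device axis
batch = jnp.ones((n, 8, 16), dtype=jnp.int32)    # per-device shard of the batch
out = p_generate(params, batch)                  # runs on all devices in parallel
```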



Sebastian Ruder    @seb_ruder   ·   8/23/2021
Our RemBERT model (ICLR 2021) is finally open-source and available in 🤗 Transformers. RemBERT is a large multilingual Transformer that outperforms XLM-R (and mT5 with similar # of params) in zero-shot transfer. Docs: https://t.co/AKwV0UF6cT Paper: https://t.co/TXF7qlJtUY
Retweets: 166 · Likes: 639
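Loading it follows the usual 🤗 Transformers pattern; a minimal sketch, assuming the Hub checkpoint id `google/rembert`:

```python
# Hedged usage sketch; checkpoint id assumed to be "google/rembert".
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("google/rembert")
model = AutoModelForMaskedLM.from_pretrained("google/rembert")

inputs = tokenizer("RemBERT is a large multilingual Transformer.", return_tensors="pt")
outputs = model(**inputs)  # masked-LM logits for each token
```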



Niels Rogge    @NielsRogge   ·   9/9/2021
🔥 BEiT by @MSFTResearch is now available in @huggingface Transformers! As I really liked this paper and the figure was super clear, it was a breeze to contribute :) It uses a very clever masked image modeling pre-training objective, inspired by BERT, to outperform supervised ViT! 🤗
Retweets: 1 · Likes: 4
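A minimal inference sketch, assuming the checkpoint id `microsoft/beit-base-patch16-224` and any local RGB image:

```python
# Hedged usage sketch for BEiT image classification in 🤗 Transformers.
from transformers import BeitFeatureExtractor, BeitForImageClassification
from PIL import Image

extractor = BeitFeatureExtractor.from_pretrained("microsoft/beit-base-patch16-224")
model = BeitForImageClassification.from_pretrained("microsoft/beit-base-patch16-224")

image = Image.open("cat.png").convert("RGB")        # any local image
inputs = extractor(images=image, return_tensors="pt")
logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(-1).item()])
```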



Weights & Biases    @weights_biases   ·   9/10/2021
If you're interested in implementing PPO, check out @vwxyzjn's 11-step tutorial! PPO (Proximal Policy Optimization) is a deep RL algorithm released by @OpenAI in 2017 Bonus: After you implement PPO, watch experiment metrics as they come in live ⚡️ https://t.co/Lz3nSrBybw
Retweets: 1 · Likes: 5
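The heart of PPO is the clipped surrogate objective, which keeps the updated policy close to the one that collected the data. A minimal sketch of that loss term only (value loss, entropy bonus, and advantage estimation omitted):

```python
# PPO clipped surrogate loss (policy term only), as a standalone function.
import torch

def ppo_clip_loss(new_logp, old_logp, advantages, clip_eps=0.2):
    ratio = torch.exp(new_logp - old_logp)                        # pi_new / pi_old
    unclipped = ratio * advantages
    clipped = torch.clamp(ratio, 1 - clip_eps, 1 + clip_eps) * advantages
    return -torch.min(unclipped, clipped).mean()                  # maximize => minimize negative
```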



Hugging Face    @huggingface   ·   9/14/2021
📢 Introducing 🤗 Optimum A new open source library to optimize 🤗Transformers for production performance. 🏎 Quantize, Prune, Optimize models easily, targeting hardware from our partners @intel @graphcoreai @Qualcomm! 🤩 https://t.co/oemVDWlnxI
Retweets: 1 · Likes: 11
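As a point of reference for what "quantize" buys you, here is plain PyTorch dynamic quantization of a Transformers model (this is not the Optimum API; `distilbert-base-uncased` is just an example checkpoint):

```python
# Generic int8 dynamic quantization sketch (illustrative; not the Optimum API).
import torch
from transformers import AutoModelForSequenceClassification

model = AutoModelForSequenceClassification.from_pretrained("distilbert-base-uncased")
quantized = torch.quantization.quantize_dynamic(
    model, {torch.nn.Linear}, dtype=torch.qint8   # int8 weights for all Linear layers
)
```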



Leo Boytsov    @srchvrs   ·   8/24/2021
Transformer architecture reality check. Most modifications do not lead to improved performance, unless the # of parameters is increased dramatically. One exception is decoupling input and output embedding parameters. h/t @seb_ruder https://t.co/trRNhfR3iR
Retweets: 44 · Likes: 226
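The decoupling mentioned here is simply not tying the input embedding matrix to the output projection. A toy sketch of the tied vs. untied variants:

```python
# Toy LM head showing tied vs. decoupled (untied) input/output embeddings.
import torch.nn as nn

class TinyLM(nn.Module):
    def __init__(self, vocab_size=32000, d_model=512, tie_embeddings=False):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)               # input embeddings
        self.lm_head = nn.Linear(d_model, vocab_size, bias=False)    # output projection
        if tie_embeddings:
            self.lm_head.weight = self.embed.weight                  # tied (the common default)

    def forward(self, token_ids):
        return self.lm_head(self.embed(token_ids))
```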



Pin-Yu Chen    @pinyuchenTW   ·   6/21/2021
Join our live session #CVPR2021 @CVPRConf to know more about how data poisoning affects the certification performance of randomized smoothing for non-robust and robust models at test time!
Retweets: 1 · Likes: 17
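For context, randomized smoothing certifies a classifier by voting over Gaussian-noised copies of the input; a minimal prediction-only sketch (the certified-radius computation and the poisoning analysis from the talk are omitted):

```python
# Prediction step of randomized smoothing: majority vote over noisy copies.
import torch

def smoothed_predict(model, x, sigma=0.25, n_samples=100, num_classes=10):
    votes = torch.zeros(num_classes)
    for _ in range(n_samples):
        noisy = x + sigma * torch.randn_like(x)
        votes[model(noisy).argmax(-1)] += 1
    return votes.argmax().item()
```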



Rico Sennrich    @RicoSennrich   ·   9/17/2021
congratulations to @bricksdont for successfully defending his thesis on Robust Neural Machine Translation today! with many thanks to the external examiner @mjpost!
Retweets: 4 · Likes: 31














 
  Relevant People  

Daniel Levy
CS PhD student @Stanford, former intern at @GoogleBrain and @Facebook. Alumnus of @Polytechnique.

Graham Neubig
Associate professor at CMU, studying natural language processing, machine learning, etc. Japanese account is @neubig.

Sebastian Ruder
Research scientist @DeepMind • Natural language processing • Transfer learning • Multilinguality • Blog: https://t.co/naxDPsILJU • Newsletter: https://t.co/7JGh2qp8jA

Hugging Face
The AI community building the future. #BlackLivesMatter #stopasianhate

Sergey Levine
Associate Professor at UC Berkeley

Weights & Biases
Developer tools for machine learning. Build better models faster with experiment tracking, dataset versioning, and model management.

Niels Rogge
ML Engineer @huggingface. @KU_Leuven grad. General interest in machine learning, deep learning, NLP. Making AI more accessible for everyone!




  Hashtags  

   #EMNLP2021

   #IR

   #CVPR2021