Robocrunch AI

Sheng Zhang    @sheng_zh   ·   9/14/2021
How to extract relations whose arguments never co-occur in a paragraph, when distant supervision is very noisy and SOTA models are less effective? Check out our #EMNLP2021 paper "Modular Self-Supervision for Document-Level Relation Extraction"
3 Retweets · 20 Likes

  Similar Tweets  

TsinghuaNLP    @TsinghuaNLP   ·   9/6/2021
Our #NAACL2021 paper presents an open hierarchical relation extraction model, which can leverage relation hierarchy for better relation discovery, and can also directly add the newly discovered relations into existing hierarchies. #TsinghuaNLP Paper:
1 Retweet · 3 Likes

Graham Neubig    @gneubig   ·   9/16/2021
Our new #EMNLP2021 paper describes a simple, efficient, and effective way to learn multilingual models that work well on *all* of the languages they're trained on. It's based on the framework of distributionally robust optimization with a number of important tweaks. Check it out!
1 Retweet · 11 Likes
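The distributionally robust optimization framework mentioned above can be sketched in a few lines: rather than minimizing the average loss over languages, keep a weight per language and shift weight toward the worst-performing ones. This is an illustrative sketch only — the multiplicative-weights update, step size, and loss values below are my own stand-ins, not the paper's exact method:

```python
# Group-DRO-style sketch: maintain a weight per language and upweight
# languages with higher loss, so training focuses on the worst case.
# All numbers here are illustrative, not from the paper.
import math

def dro_weights(losses, weights, eta=1.0):
    """One multiplicative-weights step: w_i ∝ w_i * exp(eta * loss_i)."""
    updated = [w * math.exp(eta * l) for w, l in zip(weights, losses)]
    total = sum(updated)
    return [w / total for w in updated]

def dro_objective(losses, weights):
    """Weighted training loss the model would then minimize."""
    return sum(w * l for w, l in zip(losses, weights))

# Hypothetical per-language losses: language 2 is lagging behind.
losses = [0.4, 0.5, 1.2]
weights = [1 / 3] * 3
weights = dro_weights(losses, weights)
# The lagging language now carries the largest weight.
assert weights[2] == max(weights)
```

Iterating this update keeps pressure on whichever language is currently worst, which is the intuition behind "works well on *all* of the languages".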

Violet Peng    @VioletNPeng   ·   9/14/2021
We know language models are few-shot learners, multi-task adapters, knowledge base completers, etc. It’s exciting to see that they are also great document-level information extractors (with some tweaks). Excited about pushing the limit of what generative language models can do!
1 Retweet · 10 Likes

Julian Eisenschlos    @eisenjulian   ·   9/10/2021
🧉New #EMNLP2021 paper alert🧉With the TAPAS model we showed how Transformers can be effective at parsing tabular data for QA and entailment. But how can we manage large table inputs that don't fit in 512 tokens? In our latest work we introduce MATE🧉 1/5
1 Retweet · 3 Likes
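MATE's actual answer to the 512-token limit is sparse attention over table rows and columns; as a simpler illustration of why that limit bites at all, here is a hypothetical helper (names and parameters are my own, not from the paper) that splits a long token sequence into overlapping fixed-size windows — the naive workaround sparse attention avoids:

```python
def window_tokens(tokens, max_len=512, stride=256):
    """Split a token sequence into overlapping windows of at most
    max_len tokens, so each window fits a fixed-size transformer input."""
    if len(tokens) <= max_len:
        return [tokens]
    windows = []
    start = 0
    while start < len(tokens):
        windows.append(tokens[start:start + max_len])
        if start + max_len >= len(tokens):
            break  # the final window already reaches the end
        start += stride
    return windows

# A 1000-token linearized table does not fit in one 512-token input.
chunks = window_tokens(list(range(1000)))
assert all(len(c) <= 512 for c in chunks)
assert chunks[0][0] == 0 and chunks[-1][-1] == 999
```

Windowing loses cross-window context, which is exactly the failure mode that motivates attending sparsely over the whole table instead.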

Google AI    @GoogleAI   ·   9/9/2021
Introducing personalized #AutomaticSpeechRecognition models, the latest effort from #ProjectEuphonia that leverages an extensive corpus of disordered speech to develop models that are more effective for speakers who have speech impairments. More
8 Retweets · 20 Likes

Nils Reimers    @Nils_Reimers   ·   9/8/2021
🚨 All-Purpose Sentence & Paragraph Embeddings Models As part of the JAX community week from @huggingface we collected a corpus of 1.2 billion training pairs => Great embeddings for sentences & paragraphs Models: Training Data:
1 Retweet · 3 Likes

Stanford HAI    @StanfordHAI   ·   7/6/2021
How do we address the quality of training data that models learn from? Snorkel AI, an idea born out of Stanford AI Lab, provides a novel way to generate the right kind of data necessary to develop effective algorithms.
10 Retweets · 26 Likes

Nils Reimers    @Nils_Reimers   ·   6/3/2021
Small & Fast Models 🏎️💨 We added several small & fast models for optimal encoding speed on GPU & CPU. Multi-Lingual Models 🇺🇳 Multi-lingual models for 50+ languages are available. They achieve by far the best performance across all available multilingual models for many tasks.
2 Retweets · 21 Likes

Nils Reimers    @Nils_Reimers   ·   9/8/2021
🚨Model Alert🚨 🏋️‍♂️ State-of-the-art sentence & paragraph embedding models 🍻State-of-the-art semantic search models 🔢State-of-the-art on MS MARCO for dense retrieval 📂1.2B training pairs corpus 👩‍🎓215M Q&A-training pairs 🌐Everything Available: 🧵 https
2 Retweets · 9 Likes
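The dense-retrieval setting these embedding models target can be sketched without the models themselves: encode the query and every passage as a vector, then rank passages by cosine similarity. The toy vectors below stand in for real model outputs — this is a conceptual sketch, not the sentence-transformers API:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def rank_passages(query_vec, passage_vecs):
    """Return passage indices sorted from most to least similar."""
    scores = [cosine(query_vec, p) for p in passage_vecs]
    return sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)

# Toy embeddings standing in for encoder outputs.
query = [1.0, 0.0, 1.0]
passages = [
    [0.9, 0.1, 1.1],   # close to the query direction
    [-1.0, 0.5, 0.0],  # points away from it
    [0.0, 1.0, 0.0],   # nearly orthogonal
]
assert rank_passages(query, passages)[0] == 0
```

Training on large corpora of paired texts (like the 1.2B pairs above) is what makes these similarity scores line up with semantic relevance.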

Nils Reimers    @Nils_Reimers   ·   9/2/2021
Great work by my student @KexinWang2049 accepted at #EMNLP2021 Findings. Proposing a new unsupervised sentence embedding & domain adaptation method. One of the few works that does a proper evaluation, showing that previous methods are often worse than MLM or out-of-the-box models
9 Retweets · 50 Likes

Michihiro Yasunaga    @michiyasunaga   ·   9/15/2021
Excited to share our new #EMNLP2021 paper "LM-Critic: Language Models for Unsupervised Grammatical Error Correction" with @percyliang @jure @StanfordAILab @StanfordNLP! Paper: Github: Thread below [1/7] ⤵️
2 Retweets · 5 Likes

Nicola De Cao    @nicola_decao   ·   9/15/2021
Happy to announce my second #EMNLP2021 paper: Editing Factual Knowledge in Language Models. We took some pre-trained LMs (BERT/BART) and learned an "editor" function that can modify factual knowledge in the LM. 📄paper 💻code
14 Likes

Nils Reimers    @Nils_Reimers   ·   6/28/2021
Really happy about the launch of Sentence Transformers v2. All models are now hosted on the HF models hub. This makes sharing & finding your custom sentence embedding models extremely easy. Plus: You can directly interact with these models on the hub.
18 Retweets · 95 Likes

Graham Neubig    @gneubig   ·   9/10/2021
Nearest neighbor language models, which augment a neural LM with an external datastore, achieve large performance gains but are slooooow. Check out our #emnlp2021 work that gets an 800% speedup through (1) datastore compression, (2) dimension reduction, and (3) adaptive retrieval
3 Retweets · 28 Likes
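The nearest-neighbor LM setup above interpolates the base model's next-token distribution with one induced by retrieved datastore neighbors, and "adaptive retrieval" means skipping the slow datastore lookup when the base LM is already confident. A minimal sketch — the interpolation weight, confidence threshold, and distributions are illustrative, not the paper's exact formulation:

```python
def knn_lm_probs(p_lm, p_knn, lam=0.25):
    """Interpolate the base-LM and kNN distributions:
    (1 - lam) * p_lm + lam * p_knn, elementwise over the vocabulary."""
    return [(1 - lam) * a + lam * b for a, b in zip(p_lm, p_knn)]

def next_token_dist(p_lm, retrieve, confidence_threshold=0.9):
    """Adaptive retrieval: only pay for the datastore lookup when the
    base LM is unsure about the next token."""
    if max(p_lm) >= confidence_threshold:
        return p_lm  # skip retrieval entirely
    return knn_lm_probs(p_lm, retrieve())

# Confident LM: the retrieval callback is never invoked.
assert next_token_dist([0.95, 0.03, 0.02], lambda: [0.0, 1.0, 0.0]) == [0.95, 0.03, 0.02]

# Unsure LM: retrieved neighbors shift probability mass toward token 1.
mixed = next_token_dist([0.4, 0.35, 0.25], lambda: [0.0, 1.0, 0.0])
assert abs(sum(mixed) - 1.0) < 1e-9 and mixed[1] > 0.35
```

Skipping retrieval on confident steps is one of the three levers in the tweet; the other two (datastore compression and dimension reduction) shrink the lookup itself.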

  Relevant People  

Sheng Zhang
Researcher @MSFTResearch. JHU '20 PhD @jhuclsp.

Natural Language Processing Lab at Tsinghua University

Nils Reimers
NLP researcher at @huggingface • Creator of SBERT

Graham Neubig
Associate professor at CMU, studying natural language processing, machine learning, etc. Japanese account is @neubig.

Violet Peng
NLP researcher, Assistant Professor @ UCLA-CS. (she/her/hers)

Sergey Levine
Associate Professor at UC Berkeley

Julian Eisenschlos
Math, NLP, Deep Learning • @Google AI Language • Previously ASAPP & @facebook • Co-founder @BotMaker_io