MosaicML    @MosaicML    11/22/2021      

Large Language Models are notorious for being expensive to train, but the result is a model that can be evaluated on general language understanding benchmarks. What if the goal is to perform well on a task-specific benchmark instead? Can we cut down the costs of pre-training? (1/9)

Microsoft Research    @MSFTResearch    12/6/2021      

Current benchmarks may yield imprecise readings of AI models’ natural language understanding. Two new NLU benchmarks aim for more accurate evaluations. #NeurIPS2021

Angeliki Lazaridou    @aggielaz    4 hours      

The world evolves constantly...but are language models capable of dealing with that real-world shift? Come chat with us about benchmarks and analyses characterizing models' (in)ability to perform temporal generalization. Today 4:30-6:00pm GMT #NeurIPS2021

AK    @ak92501    11/24/2021      

Can Pre-trained Language Models be Used to Resolve Textual and Semantic Merge Conflicts? abs: LMs provide SOTA performance on semantic merge conflict resolution for Edge; LMs do not yet obviate the benefits of fine-tuning neural models

Google AI    @GoogleAI    12/2/2021      

What linguistic information is captured by language models? To better understand this, we investigate how a model’s ability to correctly apply the English subject–verb agreement rule is affected by word frequency during pre-training.