Sebastian Raschka    @rasbt    9/25/2021      

There is a good section on quantifying annotator uncertainty. Ties in with what I have been thinking about lately, i.e., how to encode natural labeling ambiguity and quality for supervised learning algos
  
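One concrete way to encode that kind of ambiguity (an illustrative sketch, not something stated in the tweet) is to turn per-example annotator votes into soft label distributions and train with cross-entropy against those soft targets. Function names and the smoothing parameter below are my own choices:

    import numpy as np

    def soft_labels(votes, num_classes, smoothing=0.0):
        # Turn per-example annotator votes into soft label distributions.
        # votes: one list of class votes per example, e.g. [[0, 0, 1], [2, 2, 2]].
        # Annotator disagreement becomes probability mass spread across
        # classes instead of a single hard label.
        targets = np.zeros((len(votes), num_classes))
        for i, v in enumerate(votes):
            counts = np.bincount(v, minlength=num_classes)
            targets[i] = counts / counts.sum()
        # Optional label smoothing on top of the empirical vote distribution.
        return (1 - smoothing) * targets + smoothing / num_classes

    def soft_cross_entropy(log_probs, targets):
        # Cross-entropy against soft targets: -sum_c t_c * log p_c, averaged.
        return -(targets * log_probs).sum(axis=1).mean()

    # Example: 3 annotators, 3 classes; the first example is ambiguous.
    print(soft_labels([[0, 0, 1], [2, 2, 2]], num_classes=3))
    # ≈ [[0.67, 0.33, 0.0], [0.0, 0.0, 1.0]]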




  Hashtags  

#NAACL2022


Related

Lilian Weng    @lilianweng    12/6/2021      

The performance of supervised learning tasks improves with more high-quality labels. However, it is expensive to collect many such labels. Semi-supervised learning is one of the paradigms for dealing with label scarcity: https://t.co/a119o6MukS
  
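A minimal self-training (pseudo-labeling) sketch of that paradigm, assuming NumPy arrays as inputs and scikit-learn's LogisticRegression; the model choice and confidence threshold are illustrative assumptions, not from the post:

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def self_train(X_lab, y_lab, X_unlab, threshold=0.95, rounds=5):
        # Pseudo-labeling: fit on labeled data, adopt confident predictions
        # on unlabeled data as new labels, and refit.
        X, y = X_lab.copy(), y_lab.copy()
        model = LogisticRegression(max_iter=1000).fit(X, y)
        for _ in range(rounds):
            if len(X_unlab) == 0:
                break
            probs = model.predict_proba(X_unlab)
            keep = probs.max(axis=1) >= threshold  # trust only confident predictions
            if not keep.any():
                break
            X = np.vstack([X, X_unlab[keep]])
            y = np.concatenate([y, model.classes_[probs[keep].argmax(axis=1)]])
            X_unlab = X_unlab[~keep]
            model = LogisticRegression(max_iter=1000).fit(X, y)
        return model

The threshold trades off pseudo-label noise against how much of the unlabeled pool actually gets used.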



NAACL HLT    @naaclmeeting    12/5/2021      

Workshop at #NAACL2022 (@naaclmeeting): 📣 Deep Learning on Graphs for Natural Language Processing
  



Martino Sorbaro    @martopix    11/29/2021      

I was thinking about reward-modulated Hebbian or STDP learning. It switches from Hebbian to anti-Hebbian based on reward. Why/how does it work, theoretically speaking? I have trouble imagining intuitively how reward-based learning and unsupervised component analysis combine.
  
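For intuition, here is a toy three-factor version of that rule (my illustration, not Martino's actual model): a scalar reward multiplies a plain Hebbian correlation term, so positive reward strengthens co-active pairs and negative reward flips the same update to anti-Hebbian:

    import numpy as np

    def reward_modulated_hebbian(w, pre, post, reward, lr=0.01):
        # Three-factor update: dw = lr * reward * (post outer pre).
        # reward > 0 -> Hebbian: co-activity strengthens weights;
        # reward < 0 -> the same correlation term becomes anti-Hebbian.
        return w + lr * reward * np.outer(post, pre)

    rng = np.random.default_rng(0)
    w = rng.normal(0.0, 0.1, size=(3, 5))  # 5 pre-synaptic, 3 post-synaptic units
    pre = rng.random(5)
    post = w @ pre                          # simple linear post-synaptic response
    w = reward_modulated_hebbian(w, pre, post, reward=+1.0)  # reinforce
    w = reward_modulated_hebbian(w, pre, post, reward=-1.0)  # punish -> anti-Hebbian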



NAACL HLT    @naaclmeeting    12/5/2021      

Workshop at #NAACL2022 (@naaclmeeting): 📣 Deep Learning for Low-Resource Natural Language Processing (DeepLo 2022)
  



Pasquale Minervini    @PMinervini    2 hours      

If you are interested in learning from graph-structured and natural language data, hybrid neural-symbolic models, compositional generalisation, and in making Deep Learning models more data-efficient, robust, and explainable 🤖, consider joining my group at @InfAtEd! (2/5)
  