Juan Nunez-Iglesias    @jnuneziglesias    11/23/2021      

I've noticed lots of papers learn "low"-dimensional embeddings of images (say d=50), then do UMAP/tSNE/whatever to bring d down to 2-3. But has anyone tried to directly learn a 2D embedding? Is something fundamental (e.g. a bumpier loss landscape) preventing this from being the default?
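(For concreteness, a minimal toy sketch of what "directly learn a 2D embedding" could look like: the encoder's output dimension is simply set to 2 and trained end-to-end, here with a triplet loss, instead of learning d=50 and then running UMAP/t-SNE. The `Embedder` module, the 28×28 fake inputs, and the triplet objective are illustrative assumptions, not anything from the thread.)

```python
# Toy sketch: an encoder whose final layer is already 2-D, trained end-to-end.
import torch
import torch.nn as nn

class Embedder(nn.Module):
    def __init__(self, out_dim: int = 2):  # out_dim=2 is the whole point
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(),
            nn.Linear(28 * 28, 128), nn.ReLU(),
            nn.Linear(128, out_dim),
        )

    def forward(self, x):
        return self.net(x)

model = Embedder(out_dim=2)
loss_fn = nn.TripletMarginLoss(margin=1.0)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# Fake 28x28 "images" standing in for real anchor/positive/negative batches.
anchor, positive, negative = (torch.randn(32, 1, 28, 28) for _ in range(3))
loss = loss_fn(model(anchor), model(positive), model(negative))
loss.backward()
opt.step()
print(model(anchor).shape)  # torch.Size([32, 2]) -- embeddings are already 2-D
```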
  

Hal Daumé III    @haldaume3    11/29/2021      

does anyone know if there’s a place to find old *ACL calls for papers? like from the 80s or earlier? and if there’s no centralized archive, i’d be grateful for any pointers to pre-2000 CFPs!
  

Fabian Mentzer    @mentzer_f    11/24/2021      

Fun fact: TF and PyTorch seem to have different default leaks for their leaky ReLUs. Glad papers report the alphas they use (they don't, actually 😔).
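(A quick way to see the mismatch, assuming both frameworks are installed; the slopes below are the defaults in the versions I'm aware of, so verify against yours.)

```python
# Apply each framework's leaky ReLU with default settings to x = -1.0;
# the output equals -slope, so it exposes the default "leak" directly.
import tensorflow as tf
import torch

x_tf = tf.constant([-1.0])
print(tf.keras.layers.LeakyReLU()(x_tf).numpy())  # [-0.3]  -> Keras layer default slope 0.3
print(tf.nn.leaky_relu(x_tf).numpy())             # [-0.2]  -> tf.nn op default alpha 0.2

x_pt = torch.tensor([-1.0])
print(torch.nn.LeakyReLU()(x_pt))                 # tensor([-0.0100]) -> PyTorch default 0.01
```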
  

Sasha Rush    @srush_nlp    11/29/2021      

Does anyone have a good example of a tutorial on Generative Thinking / Modeling without math? Something like generative story <=> inference with non-technical examples. Something like Kevin Knight circa 2009. It makes me sad that when I google this, I get 200 janky GAN tutorials.
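(Not a tutorial pointer, but a tiny illustration of the generative story <=> inference pairing the tweet is asking about; the two-coin setup and its numbers are made up for the example.)

```python
# Generative story: pick a coin uniformly at random (fair: p=0.5, trick: p=0.9),
# then flip it 10 times. Inference runs the story backwards with Bayes' rule:
# given the observed flips, which coin was probably picked?
import random

def generate(n_flips: int = 10):
    coin = random.choice(["fair", "trick"])
    p = 0.5 if coin == "fair" else 0.9
    flips = [random.random() < p for _ in range(n_flips)]
    return coin, flips

def posterior_trick(flips):
    # P(coin | flips) is proportional to P(flips | coin) * P(coin), uniform prior.
    def likelihood(p):
        out = 1.0
        for heads in flips:
            out *= p if heads else (1 - p)
        return out
    l_fair, l_trick = likelihood(0.5), likelihood(0.9)
    return l_trick / (l_fair + l_trick)

coin, flips = generate()
print(coin, sum(flips), "heads, P(trick | flips) =", round(posterior_trick(flips), 3))
```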
  

Kyle Cranmer    @KyleCranmer    12/6/2021      

My brief thoughts on this are: a) parsimony has worked quite well historically for the fundamental laws of physics; b) the simple laws live in the latent space, which we don't observe directly; c) in the space of the data, things are more complicated and ugly; d) the latent part is transferable.
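(A toy numerical illustration of points b) and c), my own rather than Kyle's: the "law" in the latent space is a single straight line, but after an arbitrary, messy observation map the same relationship looks complicated in data space.)

```python
# Simple law in latent space, ugly relationship in observation space.
import numpy as np

rng = np.random.default_rng(0)

# Latent-space law: z2 = 2 * z1 (parsimonious, exactly linear).
z1 = rng.uniform(-1, 1, size=1000)
z2 = 2 * z1

# What we actually observe: nonlinear, entangled, noisy functions of (z1, z2).
x1 = np.sin(3 * z1) + 0.5 * z2**2 + 0.05 * rng.normal(size=z1.shape)
x2 = np.exp(z2) * np.cos(2 * z1) + 0.05 * rng.normal(size=z1.shape)

# In (z1, z2) the correlation is exactly 1; in (x1, x2) it is much messier.
print(np.corrcoef(z1, z2)[0, 1])   # 1.0
print(np.corrcoef(x1, x2)[0, 1])   # a far weaker, less interpretable number
```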
  