Tweet  

DeepMind @DeepMind · 10/13/2021
A unified way to discover and manipulate options: how the successor representation can be seen as a natural substrate for temporal abstraction in #ReinforcementLearning: https://t.co/yLOXIqUsYq New paper by @MarlosCMachado, @andre_s_barreto, and Doina Precup.
 
Retweets: 39 · Likes: 132
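
For readers who want the object itself: below is a minimal numpy sketch of the successor representation (SR) in closed form for a toy chain MDP under a uniform-random policy. The eigendecomposition at the end gestures at how options can be read off the SR; that framing is an illustrative assumption, not detail taken from the tweet.

```python
# Hedged sketch: the successor representation (SR) of a 5-state chain
# MDP under a uniform-random policy, computed in closed form as
# Psi = (I - gamma * P_pi)^(-1), so Psi[s, s'] is the expected
# discounted number of visits to s' starting from s.
import numpy as np

n_states, gamma = 5, 0.9

# Random-walk transition matrix on a chain; endpoints reflect.
P = np.zeros((n_states, n_states))
for s in range(n_states):
    P[s, max(s - 1, 0)] += 0.5
    P[s, min(s + 1, n_states - 1)] += 0.5

Psi = np.linalg.inv(np.eye(n_states) - gamma * P)

# The spectral structure of the SR is one natural handle for discovering
# temporally extended options (cf. eigenoptions); this connection is an
# assumption about the paper's framing, shown only to make it concrete.
eigvals, eigvecs = np.linalg.eigh((Psi + Psi.T) / 2)
print(Psi.round(2))
```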

More Tweets

Brad Neuberg @bradneuberg · 9/26/2021
Domain Invariant Representation Learning with Domain Density Transformations: “We propose a method to learn a domain-invariant representation by enforcing the representation network to be invariant under all transformation functions among domains.” https://t.co/emEjHi74k6
Retweets: 1 · Likes: 3
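
The quoted objective lends itself to a tiny sketch: penalize the representation network for changing under a transformation that maps between domains. The encoder f, toy transformation g, L2 penalty, and weighting below are illustrative assumptions in PyTorch, not the paper's exact method.

```python
# Hedged sketch of the quoted idea: penalize the representation network
# f for changing under a transformation g that maps between domains.
import torch
import torch.nn as nn

f = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 16))  # encoder
clf = nn.Linear(16, 10)                                             # task head

def invariance_loss(x, g):
    """||f(x) - f(g(x))||^2: the representation should not move under g."""
    return ((f(x) - f(g(x))) ** 2).mean()

# Toy stand-in for a learned domain density transformation.
g = lambda x: x + 0.1 * torch.randn_like(x)

x, y = torch.randn(8, 32), torch.randint(0, 10, (8,))
loss = nn.functional.cross_entropy(clf(f(x)), y) + invariance_loss(x, g)
loss.backward()
```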

Ted Underwood @Ted_Underwood · 6/3/2021
The annoying thing about topic modeling is that it surfaces any weird stuff in your dataset. So you run a giant topic model that takes 18 hours and then immediately discover "oh, I need to take out that weird stuff." Then wait another 18 hours, and discover ...
Retweets: 1 · Likes: 70
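
One common way to shorten that feedback loop, sketched below with scikit-learn: fit a fast topic model on a subsample and skim the top terms per topic for junk before committing to the full 18-hour run. The stand-in corpus and model settings are assumptions for illustration.

```python
# Hedged sketch: a quick LDA pre-pass on a subsample to surface "weird
# stuff" cheaply before the full run.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# In practice: a random subsample of the real corpus.
docs = ["the cat sat on the mat"] * 200 + ["ocr junk qqq zzz header footer"] * 50

vec = CountVectorizer(stop_words="english")
X = vec.fit_transform(docs)

lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)
terms = np.array(vec.get_feature_names_out())
for k, topic in enumerate(lda.components_):
    # Weird datasets announce themselves in top terms like "qqq zzz".
    print(f"topic {k}:", ", ".join(terms[topic.argsort()[::-1][:5]]))
```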

Google AI @GoogleAI · 8/4/2021
Natural speech often has disruptions and complexities that are difficult for #NLP models to understand. Today we introduce two benchmark datasets that challenge models on temporal reasoning (TimeDial) and contextual disfluencies (Disfl-QA). Details below: https://t.co/kD65S8TYwx
Retweets: 103 · Likes: 353
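
To make "contextual disfluency" concrete, here is a small illustrative example in the spirit of Disfl-QA: a fluent question rewritten with a mid-utterance repair. The dict layout and the naive string-level fix are assumptions for illustration, not the dataset's actual schema.

```python
# Hedged illustration of a contextual disfluency: the speaker starts one
# question, then corrects themselves mid-utterance.
example = {
    "original": "Where did the Normans first land in England?",
    "disfluent": "When did, no wait, where did the Normans first land in England?",
}

# A naive string-level repair that only handles one cue phrase; real
# disfluencies vary widely, which is what makes the benchmark hard.
naive = example["disfluent"].split("no wait,")[-1].strip()
print(naive)  # "where did the Normans first land in England?"
```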

Sebastian Raschka @rasbt · 10/4/2021
"Self-supervised Representation Learning for Reliable Robotic Monitoring of Fruit Anomalies" (https://t.co/4RRdGRpcSF) Interesting take-aways re self-supervised representation learning from this work: (a) early stopping on the pre-text task benefits the downstream task, too [1/2]
Retweets: 3 · Likes: 10
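
Take-away (a) is easy to sketch: monitor a pre-text-task loss and stop pre-training early, then reuse the encoder downstream. The rotation-prediction pre-text task, architecture, and patience values below are stand-in assumptions in PyTorch, not the paper's actual setup.

```python
# Hedged sketch of take-away (a): stop pre-training on the pre-text
# task early, then reuse the encoder for the downstream task.
import torch
import torch.nn as nn

encoder = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 128), nn.ReLU())
pretext_head = nn.Linear(128, 4)  # classify one of 4 image rotations
opt = torch.optim.Adam(list(encoder.parameters()) + list(pretext_head.parameters()))

best, patience, bad = float("inf"), 3, 0
for epoch in range(100):
    x = torch.randn(64, 1, 28, 28)    # stand-in image batch
    rot = torch.randint(0, 4, (64,))  # stand-in rotation labels
    loss = nn.functional.cross_entropy(pretext_head(encoder(x)), rot)
    opt.zero_grad()
    loss.backward()
    opt.step()

    val_loss = loss.item()  # stand-in for a held-out pre-text validation loss
    if val_loss < best - 1e-4:
        best, bad = val_loss, 0
    else:
        bad += 1
    if bad >= patience:  # stop before the pre-text task overfits
        break

# Downstream: reuse the early-stopped `encoder` for anomaly detection.
downstream_head = nn.Linear(128, 2)
```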

More from:

DeepMind
Our team researches and builds safe AI systems. We're committed to solving intelligence to advance science and humanity.

Google AI
Google AI is focused on bringing the benefits of AI to everyone. In conducting and applying our research, we advance the state of the art in many domains.

Facebook AI
Facebook AI focuses on bringing the world together by advancing AI, powering meaningful and safe experiences, and conducting open research.

Brad Neuberg
Machine Learning Engineer at @planet. Research Affiliation at SETI & NASA FDL. Previously @ Dropbox and Google. Started coworking. More: https://t.co/m7fXSRGHHV

Antonin Raffin
Researcher in robotics and machine learning (Reinforcement Learning). Member of Stable-Baselines team: https://t.co/6kofLNqZjn


