Robocrunch AI

Trending Tweets

François Chollet    @fchollet   ·   2 hours
Have you read the preview version of Deep Learning with Python, 2nd edition? Would you like to provide a quote we'll display on the back cover? Email me your quotes! (my email is on my website). Provide your name, your title & company/affiliation. Quotes should be 1-2 sentences.
 Reply      Retweet   4      Like     27    



julia kiseleva    @julia_kiseleva   ·   2 hours
The #IgluContest competition at #NeurIPS2021 explores how to build interactive agents that learn to solve tasks from grounded natural language instructions in a collaborative environment. There is still plenty of time to submit your solution and contribute to the "cake":
 Reply      Retweet   1      Like     2    



Stanford HAI    @StanfordHAI   ·   3 hours
Stanford student @chenwang_j shares new work on human-robot collaboration, supported by HAI:
 Reply      Retweet        Like     5    



Yuandong Tian    @tydsh   ·   4 hours
The FAIR MPK Reinforcement Learning (RL) team has a full-time research scientist opening. We study fundamental research problems in ML/RL-guided optimization and their real-world applications. We're looking for candidates in RL, optimization, and representation learning. POC: yuandong@fb.com
 Reply      Retweet   4      Like     25    



Christian Guckelsberger    @creativeEndvs   ·   4 hours
Great talk by @Terrybroad, summarising techniques for "Active Divergence with Generative Deep Learning", e.g. for artistic purposes - plenty of examples straight from the art world. Joint work with @sebastianberns @SimonGColton @iccc_conf #iccc21
 Reply      Retweet   3      Like     3    



Gautam Kamath    @thegautamkamath   ·   4 hours
Definitely agree. I've met interviewees who over-prepped, and it felt a bit inauthentic? I mean, if our work is adjacent, some knowledge may be warranted. But if you work in architecture and know the details of my proofs in Appendix C, it feels like you're just gaming the interview.
 Reply      Retweet        Like     8    



Eunsol Choi    @eunsolc   ·   5 hours
We investigated whether NLI models can help us verify QA model predictions. It was cool to make use of the decontextualization system, and to identify many cases where QA models output correct answers for the wrong reasons! Led by @Jifan_chen & @gregd_nlp
 Reply      Retweet   3      Like     19    



Greg Durrett    @gregd_nlp   ·   5 hours
Check out Jifan's latest work! Continuing his NAACL '21 work (https://t.co/iwmiavobSY), we've been thinking about how to improve the robustness of QA predictions. Eunsol's decontextualization model (https://t.co/dObC0MovQj) ended up being a really useful piece to reframe QA as NLI!
 Reply      Retweet   1      Like     11    



Rico Sennrich    @RicoSennrich   ·   5 hours
Congratulations to @bricksdont for successfully defending his thesis on Robust Neural Machine Translation today! With many thanks to the external examiner @mjpost!
 Reply      Retweet   4      Like     26    



Ted Underwood    @Ted_Underwood   ·   5 hours
Random Forests, Stochastic Parrots 🦜, The Pile, catastrophic forgetting: the whole field behaves like a science fiction writer who took a marketing course while dropping acid.
 Reply      Retweet        Like     10    



Quanta Magazine    @QuantaMagazine   ·   5 hours
Turing patterns have been identified on the smallest scale yet, suggesting that the pattern-formation mechanism might be even more pervasive than scientists have thought. https://t.co/n4xlmtf266
 Reply      Retweet   12      Like     41    



Hugging Face    @huggingface   ·   5 hours
🥁 We can't wait to share our new inference product with you! 🤩 - it achieves 1ms latency on Transformer models 🏎 - you can deploy it in your own infrastructure ⚡️ - we call it: 🤗 Infinity 🚀 📅 Join us for a live event and demo on 9/28! https://t.co/fvhb86gsG7
 Reply      Retweet   23      Like     163    



Microsoft Research    @MSFTResearch   ·   5 hours
ANSWER: A lively discussion on opportunities and challenges in human-oriented AI with Microsoft researchers, leading academics, and industry experts in reinforcement learning. What's the QUESTION?
 Reply      Retweet   4      Like     13    



PyTorch Lightning    @PyTorchLightnin   ·   6 hours
Friday's Community Spotlight! ⚡️💜 3D object segmentation with self-supervised learning: https://t.co/wcAagyfUKe
 Reply      Retweet   1      Like     17    



Shauli Ravfogel    @ravfogel   ·   6 hours
Really happy about this work! We create counterfactual representations by taking a mirror image with respect to linear "concept subspaces", and use them to test linguistic hypotheses about the workings of NNs. Check out the thread for more details! https://t.co/1Xl9XjaGDe
 Reply      Retweet   1      Like     8    
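
For intuition, here is a minimal sketch (in Julia, matching the only code that appears elsewhere in this feed) of one reading of "mirror image with respect to a linear concept subspace": negate the component of a representation that lies inside the subspace, via a Householder-style reflection. The function name, the orthonormal-columns assumption, and the toy dimensions are illustrative; the paper's exact construction may differ.

    using LinearAlgebra

    # Sketch, not the authors' code: reflect x so that its component inside the
    # concept subspace (spanned by the columns of W) is negated, i.e. x -> x - 2*P*x.
    function mirror_concept(x::AbstractVector, W::AbstractMatrix)
        P = W * W'                 # orthogonal projector onto the subspace (W assumed orthonormal)
        return x .- 2 .* (P * x)   # in-subspace component flipped, orthogonal component untouched
    end

    # Toy usage: 5-dim representation, concept subspace = first two coordinates.
    W = [1.0 0.0; 0.0 1.0; 0.0 0.0; 0.0 0.0; 0.0 0.0]
    x = [0.5, -1.0, 2.0, 0.3, -0.7]
    mirror_concept(x, W)           # returns [-0.5, 1.0, 2.0, 0.3, -0.7]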



Sergey Levine    @svlevine   ·   6 hours
Multi-task RL is hard. Multi-task offline RL is also hard. Weirdly, sharing data for all tasks (and relabeling) can actually make it harder. In conservative data sharing (CDS), we use conservative Q-learning principles to address this. arXiv: https://t.co/4hhl639bgh A thread:
 Reply      Retweet   11      Like     54    



Ishan Misra    @imisra_   ·   6 hours
[2/2] We show that using better positional encodings and non-parametric queries is critical for 3D detection. Transformers are also good encoders of 3D point data and work well for classification. Accepted as an ICCV’21 Oral
 Reply      Retweet        Like     5    



Andrew Davison    @AjdDavison   ·   6 hours
Varied, dynamic scene graphs of points, objects and structures are the right goal representation for #SpatialAI. We show how Gaussian BP on factor graphs can tackle the true, complex inference problem this leads to, with efficient parallel implementation on a @graphcore IPU.
 Reply      Retweet   1      Like     9    



Michiel ☁️    @Michielstock   ·   6 hours
Today I learned about the Superformula, which can be used for all kinds of biological shapes #JuliaLang function superformula(a,b,m,n₁,n₂,n₃;Δθ=1e-4) θ = 0:Δθ:2π r = @. (abs(cos(m*θ/4)/a)^n₂+abs(sin(m*θ/4)/b)^n₃)^(-1/n₁) return @. r * cos(θ), r * sin(θ) end
 Reply      Retweet   3      Like     6    
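
For readability, here is the snippet from the tweet above laid out as a runnable #JuliaLang function. The logic is the tweet's one-liner unchanged; only the formatting, comments, and the example call at the end (with illustrative parameter values) are added.

    # Superformula (Gielis): r(θ) = (|cos(mθ/4)/a|^n₂ + |sin(mθ/4)/b|^n₃)^(-1/n₁)
    function superformula(a, b, m, n₁, n₂, n₃; Δθ=1e-4)
        θ = 0:Δθ:2π                  # sample angles around the circle
        r = @. (abs(cos(m*θ/4)/a)^n₂ + abs(sin(m*θ/4)/b)^n₃)^(-1/n₁)
        x = @. r * cos(θ)            # polar to Cartesian
        y = @. r * sin(θ)
        return x, y
    end

    # Example call (parameter values are illustrative, not from the tweet):
    x, y = superformula(1.0, 1.0, 8, 2.0, 7.0, 7.0)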



Grusha Prasad    @grushaprasad   ·   7 hours
We know SOTA NLMs encode at least some useful linguistic properties. But do they *use* these properties as we might expect them to? In work accepted at CoNLL, @ravfogel @tallinzen @yoavgo and I propose a method to explore this question! Preprint: https://t.co/9Lw0UYBDQT 🧵1/N
 Reply      Retweet   9      Like     35    



Glen Berseth    @GlenBerseth   ·   7 hours
I will be teaching a course on robot learning, starting in January 2022! This course will focus on deep reinforcement learning methods and their application to robotics. You can find more info on the course here: https://t.co/Sr1ic5KpRG
 Reply      Retweet   5      Like     27    



Emanuele Rossi    @emaros96   ·   7 hours
Question for Graph ML and Signal Processing Twitter. How can I prove that a filter matrix S which acts on a graph signal x is a low-pass filter, when it's not diagonalizable by the Graph Laplacian's eigenvectors?
 Reply      Retweet   2      Like     5    



Peter Melchior    @peter_melchior   ·   7 hours
Come work with me and my amazing colleagues @j_dunkley, Romain Teyssier at @PU_Astro. We're looking for postdocs with new computational, statistical, or deep learning approaches to cosmology. Deadline Nov 1 https://t.co/ZqXaY2BRHx
 Reply      Retweet   6      Like     16    



Jimmy Lin    @lintool   ·   8 hours
Tevatron is a new toolkit for training and running dense retrieval models by @luyu_gao and @xueguang_ma - check it out! https://t.co/M2HA8Y09ya
 Reply      Retweet   1      Like     13    



Truyen Tran    @truyenoz   ·   8 hours
We are pleased to share the materials of our ECML-PKDD 2021 tutorial on Machine Learning and Reasoning for Drug Discovery: * Tutorial description: * Slides: https://t.co/BEJ4UMj3ub #drugdiscovery #artificialintelligence #machinelearning https://t.co/8AqNNsCKPt https://t.co/seLd8d2qGw
 Reply      Retweet   1      Like     3    



Max Tegmark    @tegmark   ·   8 hours
#AI meets physics: welcome to our 2pm ET YouTube colloquium with Stanford's indomitable Surya Ganguli on “Understanding computation using physics and exploiting physics for computation” https://t.co/0eCUX5K5IU
 Reply      Retweet   1      Like     27    



Gianmaria Silvello    @giansilv   ·   8 hours
Somin Wadhwa with @_hamedzamani presenting “Towards System-Initiative Conversational Information Seeking” at #DESIRES2021
 Reply      Retweet   1      Like     2    



Thomas Wolf    @Thom_Wolf   ·   9 hours
You can do exactly the same with model weights, by the way!
 Reply      Retweet        Like     3    



Matthew Mayo    @mattmayo13   ·   9 hours
Introducing #TensorFlow Similarity - a newly-released library from Google that facilitates the training, indexing and querying of similarity models https://t.co/sIU3kCsxNv
 Reply      Retweet   3      Like        



QuTech    @QuTech_news   ·   10 hours
✨✨✨Quantum teleportation transfers a quantum state between two parties without sending any of its parts. The catch? A classical channel is still required to transfer information, making instantaneous communication for now only possible in science fiction. #QuTechAcademy
 Reply      Retweet        Like     5    



CopeNLU    @CopeNLU   ·   11 hours
"How Does Counterfactually Augmented Data Impact Models for Social Computing Constructs?" analyses reasons CAD is beneficial for social NLP tasks (sentiment, sexism, hate speech) @indiiigosky @hide_yourself @ffloeck @clauwa @IAugenstein https://t.co/LKnRrYI43e #EMNLP2021 #NLProc
 Reply      Retweet   3      Like     11    



Creative Virtual    @creativevirtual   ·   12 hours
Successful Conversational AI: Blending Machine Learning & Human Intelligence https://t.co/Yyh4SLfi72 via @AITimeJournal
 Reply      Retweet   1      Like        



Fabiana Clemente    @fab_clemente   ·   13 hours
Would you like to learn more about #syntheticdata? Better understand #deeplearning and Generative models? Join us at the Synthetic Data community! #opensource #opensourcecommunity https://t.co/UId9tPCUxO
 Reply      Retweet   1      Like     1    



Andrey Lukyanenko    @AndLukyane   ·   14 hours
So it turns out that generative models trained on texts containing human misconceptions imitate those misconceptions. Isn't the whole point of generative models to learn the patterns in the data? It seems to me that the models did exactly what they were supposed to do.
 Reply      Retweet        Like     8    



Barry Haddow    @bazril   ·   14 hours
New parallel corpus. Over 40M parallel sentences from 506 translation directions, all official EU languages. Derived from the reports of the European Court of Auditors. Paper: https://t.co/n57XyJiHNN, Corpus: https://t.co/9tYcnA0nGT
 Reply      Retweet   13      Like     39    



Trending Now


Microsoft Research
Founded in 1991, Microsoft Research is dedicated to conducting both basic and applied research in computer science and software engineering.
Microsoft Research 61.2

François Chollet
Deep learning @google. Creator of Keras. Author of 'Deep Learning with Python'. Opinions are my own.
François Chollet 58.2

Max Tegmark
Known as Mad Max for my unorthodox ideas and passion for adventure, my scientific interests range from artificial intelligence to the ultimate nature of reality
Max Tegmark 49.5

Stanford HAI
Advancing AI research, education, policy, and practice to improve the human condition.
Stanford HAI 47.5

Hugging Face
The AI community building the future. #BlackLivesMatter #stopasianhate
Hugging Face 47.4

Thomas Wolf
Co-founder & Chief Scientist at @HuggingFace – I lead the Open-Source & Science teams – 🤗Transformers & 🤗Datasets libraries – @BigScienceW research workshop
Thomas Wolf 44.4

Jimmy Lin
I profess to know very little at the University of Waterloo. I used to write code for Twitter and slides for Cloudera.
Jimmy Lin 43.7

Sergey Levine
Associate Professor at UC Berkeley
Sergey Levine 43.3

Matthew Mayo
Comma separated valedictorian; flat file fanatic #MachineLearning #DataScience #NLProc @kdnuggets
Matthew Mayo 42.7

Ted Underwood
Using machine learning to study literary imagination, and vice-versa. Distant Horizons (UChicago, 2019). Information Sciences (@iSchoolUI) / English.
Ted Underwood 40.3

PyTorch Lightning
The lightweight PyTorch AI research framework. Scale your models, not the boilerplate! Use our platform @gridai_ to scale models from your laptop to the cloud.
PyTorch Lightning 37.0

Creative Virtual
Award-winning chatbot, virtual agent, live chat & conversational AI solutions for better customer, employee & agent engagement. #CX #EX #AI #ConversationalAI
Creative Virtual 36.3

QuTech
We keep you posted on our road to develop and foster Quantum Computing and Quantum Internet. The future is #quantum
QuTech 33.2

Gautam Kamath
Professor of Computer Science @UWaterloo, affiliate @VectorInst. I lead @TheSalonML. ML, Stats, Robustness, Privacy. Prev: @SimonsInstitute @MIT @Cornell. 🇨🇦
Gautam Kamath 32.8

Andrew Davison
From SLAM to Spatial AI; Professor of Robot Vision, Imperial College London; Director of the Dyson Robotics Lab; Co-Founder of SLAMcore.
Andrew Davison 32.6

CopeNLU
University of Copenhagen Natural Language Understanding research group, led by @IAugenstein #NLProc #ML #dlearn
CopeNLU 27.7

Yuandong Tian
Research Scientist and Manager in FAIR (Facebook AI Research)
Yuandong Tian 27.0

Greg Durrett
Assistant Professor at UT Austin. I do NLP most of the time. he/him #BlackLivesMatter
Greg Durrett 27.0

Eunsol Choi
work on natural language processing / machine learning. assistant professor @UTCompSci, previously at @googleai, @uwcse, @Cornell. all opinions are my own.
Eunsol Choi 26.6

Emanuele Rossi
Researcher at Twitter and PhD student at @imperialcollege. Working on graph deep learning. Previously at Fabula AI (acquired by Twitter). @Cambridge_Uni alumnus
Emanuele Rossi 25.2

Andrey Lukyanenko
Polyglot. Economist. Data Scientist (CV TechLead). Kaggle Kernels 1st Rank, Competition master. Love reading Fantasy books.
Andrey Lukyanenko 25.0

Rico Sennrich
SNSF Professor at University of Zurich / Lecturer at University of Edinburgh.
Rico Sennrich 24.5

julia kiseleva
Researcher @MSFTResearch.
julia kiseleva 23.3

Barry Haddow
Researcher in Informatics at University of Edinburgh. Mainly working on machine translation.
Barry Haddow 23.1

Christian Guckelsberger
Computer Scientist/Art Historian/Postdoc in Interactive Artificial Intelligence. Researching AI acting creatively in its own right, via intrinsic motivation.
Christian Guckelsberger 22.6

Ishan Misra
Researcher @facebookai; computer vision; machine learning
Ishan Misra 21.4

Gianmaria Silvello
Information Retrieval, Data Citation and Provenance, Digital Libraries and Archives researcher @ University of Padua, Italy
Gianmaria Silvello 20.4

Truyen Tran
Associate Professor of Artificial Intelligence at Deakin Uni, Australia. Super curious about intelligence, science and living things.
Truyen Tran 16.6

Fabiana Clemente
Unlocking data for data scientists @YData_ai Listen to my podcast about Machine Learning and data privacy: https://t.co/5IhKacKBMS
Fabiana Clemente 15.2

Grusha Prasad
A CogSci PhD student at JHU interested in using psycholinguistic and computational approaches to think precisely about language. Also a lover of puns. She/her
Grusha Prasad 13.5

Peter Melchior
Asst. Prof. of Statistical Astronomy at @Princeton, @PU_Astro & @PrincetonSML. I weigh cosmic structures, value every pixel, express my own views.
Peter Melchior 13.5

Glen Berseth
Postdoc at Berkeley Artificial Intelligence Research Lab @berkeley_ai performing research in deepRL and robotics. Received my Ph.D. from UBC. he/him
Glen Berseth 11.1

Shauli Ravfogel
CS graduate student at Bar-Ilan University, NLP lab
Shauli Ravfogel 11.1

Michiel ☁️
Machine learning and computational biology researcher. Develops models to understand ecology and synthetic biology. Likes 📚, ☕, 👨‍🍳, 👨‍💻 and ☁️s.