Meta AI (@MetaAI)
We focus on bringing the world together by advancing AI, powering meaningful and safe experiences, and conducting open research.
Tweets by Meta AI
Meta AI is calling researchers and developers to submit their papers to be highlighted at our workshop at @CVPR. Our workshop will touch on explainable #AI for computer vision. Learn more: https://t.co/VVf7ZxyoWX
Shared by Meta AI at 5/12/2022
We're sharing details on new methods to more efficiently train Vision Transformers, a model architecture that can achieve state-of-the-art results on a wide range of #computervision tasks. Learn more: https://t.co/lcTapMddLO
Shared by Meta AI at 5/6/2022
Federated learning (FL) is an important privacy-preserving method for training AI models. Today, we describe what we believe is the first asynchronous FL system running at scale. Our results show it is 5x faster than existing FL approaches: https://t.co/CDAyd0Fxoy
Shared by Meta AI at 5/4/2022
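The tweet above gives no implementation details, so the sketch below is a hypothetical illustration of the core idea behind asynchronous FL aggregation: the server applies client updates as they arrive, down-weighting stale ones, rather than waiting for every client in a synchronous round. All names and the polynomial staleness weight are illustrative assumptions, not Meta's actual system.

```python
def staleness_weight(staleness, alpha=0.5):
    """Down-weight stale client updates (illustrative polynomial decay)."""
    return 1.0 / (1.0 + staleness) ** alpha

def async_server(global_model, client_updates, lr=1.0):
    """Apply client deltas as they arrive, scaled by a staleness weight.

    client_updates: list of (delta, server_version_when_client_started).
    The server never blocks on slow clients, which is where the speedup
    over synchronous rounds comes from.
    """
    version = 0
    for delta, started_at in client_updates:
        s = version - started_at          # how stale this update is
        w = staleness_weight(s)
        global_model += lr * w * delta    # no synchronous-round barrier
        version += 1
    return global_model
```

A fresh update (staleness 0) is applied at full weight; updates computed against an older model version contribute less.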
Join the #Ego4D challenge, exploring the largest ever dataset of first-person video and five new research benchmarks: episodic memory, hands+objects, social, AV, forecasting. First round of the competition ends June 1 with results shared at #CVPR. https://t.co/cmL7qmfzKH
Shared by Meta AI at 5/3/2022
Today Meta AI is sharing OPT-175B, the first 175-billion-parameter language model to be made available to the broader AI research community. OPT-175B can generate creative text on a vast range of topics. Learn more & request access: https://t.co/3rTMPms1vq
Shared by Meta AI at 5/3/2022
(2/2) We’ll use insights from this work to guide the development of AI that processes speech and text as efficiently as people do.
Shared by Meta AI at 4/28/2022
1/2 Today, we’re announcing a long-term research initiative to better understand how the human brain learns and processes language. This project is in collaboration with @NeuroSpin_91 and @Inria. https://t.co/kh0w7DXgJh
Shared by Meta AI at 4/28/2022
Through April 29th, researchers are gathering virtually at @iclr_conf to present cutting-edge research on all aspects of deep learning. Conference attendees can visit Meta’s virtual booth: https://t.co/7t3twboG5f
Shared by Meta AI at 4/25/2022
Attendees will gather virtually to discuss deep learning, best practices in #AI, and opportunities for further development in the field. https://t.co/fuhlDzKoOj
Shared by Meta AI at 4/21/2022
Last year, Meta AI researchers announced a dataset that shed light on fairness in #AI. Since then, researchers have added speech transcriptions, and have been testing how vision and automatic speech recognition models work for different people in different demographic groups…
Shared by Meta AI at 4/14/2022
Update: Meta researchers have released the correspondences between the Deepfake Detection Challenge and Casual Conversations. Now, researchers can measure biases of their deepfake detection models: https://t.co/aa6KToz1o5
Shared by Meta AI at 4/14/2022
Learn more about Meta AI’s newest inverse protein folding model, ESM-IF1, developed by Meta researchers @chloehsu0 @adamlerer @BrianHie @TomSercu @robert_verkuil @jason_liu2 and @ebetica. Check it out: 👇
Shared by Meta AI at 4/12/2022
Managing Director Antoine Bordes (@antoinebordes) will be at @WAICANNES. On April 14th, he will touch on his natural language processing research and Meta’s endeavor to deliver inclusive translation technologies, both IRL and in the metaverse: https://t.co/5j6RU8KqtR
Shared by Meta AI at 4/12/2022
Meta AI continues to advance self-supervised learning for images. Today, we are sharing a demo to explore what can be done with these models. Read about it: https://t.co/cMLPX05X6s And try it: https://t.co/LxMmuMlvGW
Shared by Meta AI at 4/8/2022
Meta AI researchers are announcing iSDF: real-time neural signed distance fields for robot navigation and manipulation. iSDF can fill in partially observed regions and adaptively allocate memory capacity to map at different levels of detail. #AI https://t.co/gTT5NytjFP
Shared by Meta AI at 4/6/2022
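As background for the iSDF announcement: a signed distance field maps a 3D point to its distance from the nearest surface, negative inside an object, zero on the surface, positive outside. A minimal analytic example for a sphere (iSDF itself learns this function with a neural network from depth observations; the sphere here is only an illustration):

```python
import math

def sphere_sdf(p, center, radius):
    """Signed distance from point p to a sphere: negative inside,
    zero on the surface, positive outside. Neural SDFs such as iSDF
    regress this kind of function for arbitrary scenes."""
    return math.dist(p, center) - radius
```

Querying the field at any point tells a robot how far it can move before hitting geometry, which is why SDFs are useful for navigation and manipulation planning.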
As part of our latest efforts toward ‘textless NLP,’ which aims to break free of reliance on text for training, we’ve achieved a new model that can generate chit-chat with pauses, ‘ums,’ and overlapping speech just from raw audio. Learn more: https://t.co/U0IebF9v5B
Shared by Meta AI at 4/4/2022
(1/4) Last year, we introduced GSLM, the first language model that generates expressive speech using only raw audio recordings as input, breaking free of dependence on text for training. Today, we’re sharing 3 research milestones: https://t.co/U0IebF9v5B
Shared by Meta AI at 3/31/2022
An open-sourced end-to-end #AI model that automatically creates high-quality biographical articles is here: an advancement needed to increase equity in representation across the internet. Learn more here: https://t.co/kLSj5B9TxG
Shared by Meta AI at 3/30/2022
With our core goal of improving accessibility and increasing equity within #AI advancements, we are releasing an AI model and dataset to complement existing efforts to bring more representation to Wikipedia. Learn how it works and download the dataset: https://t.co/YWr4vdVaGF
Shared by Meta AI at 3/30/2022
Mephisto is a new open, collaborative way to collect, share, & iterate on best practices for collecting data to train AI models. https://t.co/SZbrHU0Rhs Researchers can swap out components and easily find the annotations they need, lowering the barrier for creating custom tasks.
Shared by Meta AI at 3/29/2022
We’re sharing details about dense retrieval models that will help pave the way for ubiquitous neural information retrieval. https://t.co/VkoEG2BTGd This research will not only improve search as we currently use it, but also enable smarter AI agents of the future.
Shared by Meta AI at 3/28/2022
EgoObjects is the largest object-centric data set, containing more than 110 hours of egocentric video focused on object detection tasks. It includes 40,000 videos and 1.2M object annotations from up to 600 object categories. Learn more: https://t.co/YqcrVZFb3U
Shared by Meta AI at 3/18/2022
Researchers at Meta AI are advancing the future of personalized #AI assistants. Alborz Geramifard spoke at Inside the Lab about how Meta AI is paving the way towards smarter devices powered by artificial intelligence. Watch the full talk on our website: https://t.co/r6vjXqk2pf
Shared by Meta AI at 3/16/2022
We are bridging the gap between the physical and digital worlds with new advancements in #AI. Jerome Pesenti (@an_open_mind) and Joelle Pineau spoke at Inside the Lab about self-supervised learning and pushing our creative limits with advancements in #AI. https://t.co/r6vjXqk2pf
Shared by Meta AI at 3/15/2022
Meta AI is building a high-performance open source multilingual ASR model that uses pseudo labeling, a popular #ML technique that leverages unlabeled data. Our work makes it possible to build an effective ASR model using unlabeled data across 60 languages. https://t.co/YKb8SNhOya
Shared by Meta AI at 3/14/2022
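The pseudo-labeling technique mentioned above can be sketched in miniature. This is a hypothetical, toy self-training loop on 1-D data with a nearest-class-mean classifier, not Meta's ASR pipeline: train on the labeled data, label only the unlabeled points the model is confident about, then retrain on the enlarged set.

```python
def class_means(points, labels):
    """Nearest-class-mean 'model': one mean per class."""
    means = {}
    for c in set(labels):
        xs = [x for x, y in zip(points, labels) if y == c]
        means[c] = sum(xs) / len(xs)
    return means

def predict(x, means):
    return min(means, key=lambda c: abs(x - means[c]))

def pseudo_label(labeled_x, labeled_y, unlabeled_x, margin=1.0):
    """One self-training round: adopt confident pseudo-labels, retrain."""
    means = class_means(labeled_x, labeled_y)
    new_x, new_y = list(labeled_x), list(labeled_y)
    for x in unlabeled_x:
        dists = sorted(abs(x - m) for m in means.values())
        if dists[1] - dists[0] >= margin:   # confident: clear margin
            new_x.append(x)
            new_y.append(predict(x, means))
    return class_means(new_x, new_y)
```

Ambiguous points (small margin between the two nearest classes) are left unlabeled, which is the usual guard against reinforcing the model's own mistakes.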
The capabilities of computer vision models are expanding, but current models require a tedious amount of human refinement to annotate. Piotr Dollar spoke at Inside the Lab about advancements in self-supervised learning and how #AI is changing that. https://t.co/r6vjXqk2pf
Shared by Meta AI at 3/9/2022
Meta is working on innovations in translation so that in the future – and in the #metaverse – language will no longer be a barrier to information and opportunities. Learn more: https://t.co/lx7Jz8Ptxw
Shared by Meta AI at 3/9/2022
Exciting new work from Meta AI researchers: Star Temporal Classification (STC) can perform sequence classification with partially labeled data with up to 70% of the labels missing. It is implemented using GTN, a framework for differentiable WFSTs. https://t.co/LNh38qcXsi
Shared by Meta AI at 3/2/2022
We’re pleased to announce new advances in SEER, Meta AI’s groundbreaking self-supervised #computervision model. SEER is now not only much more powerful, it also produces fairer, more robust computer vision models. Learn more: https://t.co/OferzHr9Ic
Shared by Meta AI at 2/28/2022
Be sure to tune in and watch talks from industry leaders, touching on topics from human-level intelligence and self-supervised learning developments, to building #AI ethically, equitably, and more. Check out the full agenda and sign up here: https://t.co/RRuybHcm3D
Shared by Meta AI at 2/17/2022
Dynabench is a research platform launched in 2020 for dynamic data collection and benchmarking, keeping both models and human annotators in the loop. Learn more about this project and how you can participate: https://t.co/K7zune5cYv
Shared by Meta AI at 2/17/2022
Researchers at Meta AI are advancing the future of personalized #AI assistants. @AlborzGr will speak about how Meta AI is paving the way towards smarter devices powered by artificial intelligence. Sign up today: https://t.co/RRuybHcm3D
Shared by Meta AI at 2/16/2022
The capabilities of computer vision models are expanding, but current models require a tedious amount of human refinement to annotate. Join us for Inside the Lab, where Piotr Dollar will speak about self-supervised learning and how #AI is changing that. https://t.co/RRuybHcm3D
Shared by Meta AI at 2/15/2022
We are releasing a series of multilingual autoregressive language models (XGLMs) up to 7.5B parameters, which significantly outperform English-centric language models in few-shot learning on 20+ languages. Paper: https://t.co/pa7anJI83U Models and code: https://t.co/VsZ0KY0ynm
Shared by Meta AI at 1/27/2022
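Few-shot learning with an autoregressive LM like XGLM typically works by in-context prompting: k demonstrations are concatenated ahead of the query and the model continues the pattern. A minimal sketch of prompt construction (the `{x} => {y}` template is an arbitrary assumption for illustration, not XGLM's actual evaluation format):

```python
def few_shot_prompt(examples, query, template="{x} => {y}"):
    """Build a k-shot prompt: demonstrations followed by the query,
    left for the language model to complete."""
    demos = "\n".join(template.format(x=x, y=y) for x, y in examples)
    return demos + "\n" + template.format(x=query, y="").rstrip()
```

For cross-lingual few-shot evaluation, the demonstrations and the query can even be in different languages, which is where a multilingual model like XGLM outperforms English-centric ones.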
Congratulations to the winners of the Open Catalyst Challenge at #NeurIPS2021!
Shared by Meta AI at 12/7/2021
Here's where you can find Meta AI researchers at #NeurIPS2021 today. For the full schedule, visit our website: https://t.co/zLFVDNeAR5
Shared by Meta AI at 12/7/2021
Meta AI researchers are presenting 83 papers at #NeurIPS2021, including eight as spotlights and five as orals. Learn more about wav2vec, Habitat 2.0, and more here: https://t.co/QTKdj1WVyh
Shared by Meta AI at 12/6/2021
Meta AI researcher @dchaplot will present SEAL: Self-supervised Embodied Active Learning on December 8 at #NeurIPS2021. Learn more:
Shared by Meta AI at 12/3/2021
We’re releasing the 1.0 version of Opacus, a #PyTorch training library that makes it easier for researchers to adopt differential privacy in #ML. Opacus 1.0 will accelerate differential privacy research in the field. Learn more: https://t.co/rvmDLsaunl
Shared by Meta AI at 12/2/2021
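Opacus automates DP-SGD: clip each per-sample gradient, sum, add calibrated Gaussian noise, and average. The toy 1-D sketch below illustrates that mechanism in plain Python; it is not the Opacus API, and real implementations clip the per-sample L2 norm across all model parameters.

```python
import random

def dp_sgd_step(per_sample_grads, max_grad_norm=1.0,
                noise_multiplier=1.0, rng=None):
    """One DP-SGD aggregation step (the mechanism Opacus automates):
    clip each per-sample gradient, sum, add Gaussian noise, average."""
    rng = rng or random.Random(0)
    total = 0.0
    for g in per_sample_grads:
        norm = abs(g)  # toy 1-D gradient; real code uses the L2 norm
        scale = min(1.0, max_grad_norm / (norm + 1e-12))
        total += g * scale          # clipping bounds each sample's influence
    total += rng.gauss(0.0, noise_multiplier * max_grad_norm)
    return total / len(per_sample_grads)
```

Clipping bounds any single example's contribution and the noise masks what remains, which together yield the differential-privacy guarantee.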
We’re releasing NeuralProphet, a scalable and easy-to-use open source framework for hybrid forecasting models. Built in #PyTorch, NeuralProphet produces accurate, interpretable time series forecasts quickly. https://t.co/eXyisbWre6
Shared by Meta AI at 12/1/2021
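NeuralProphet's interpretability comes from decomposing a forecast into additive components such as trend and seasonality. A hand-rolled sketch of that decomposition idea (illustrative only; NeuralProphet fits these components jointly by gradient descent in PyTorch):

```python
def fit_trend(ys):
    """Least-squares linear trend over t = 0..n-1."""
    n = len(ys)
    ts = range(n)
    t_mean = sum(ts) / n
    y_mean = sum(ys) / n
    slope = (sum((t - t_mean) * (y - y_mean) for t, y in zip(ts, ys))
             / sum((t - t_mean) ** 2 for t in ts))
    return y_mean - slope * t_mean, slope

def fit_seasonality(ys, period, trend):
    """Average detrended value at each phase of the cycle."""
    intercept, slope = trend
    resid = [y - (intercept + slope * t) for t, y in enumerate(ys)]
    return [sum(resid[p::period]) / len(resid[p::period])
            for p in range(period)]

def forecast(t, trend, season, period):
    """Additive forecast: trend component plus seasonal component."""
    intercept, slope = trend
    return intercept + slope * t + season[t % period]
```

Because the prediction is a sum of named components, each one can be plotted and inspected separately, which is what makes such forecasts interpretable.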
We’re augmenting our open source data set to help surface fairness issues in #AI speech models. By adding human speech transcriptions, researchers can test how automatic speech recognition models work for people in different demographic groups. More: https://t.co/EPB66lrYM3
Shared by Meta AI at 11/24/2021
Ever wondered if you already added an ingredient when cooking a meal? Or what to attach next when assembling a dresser? Facebook AI has built Anticipative Video Transformer, a model that can understand sequences of events & suggest the next step. Details: https://t.co/F8OvlWWT3D
Shared by Meta AI at 10/13/2021
At #ICCV2021, we are presenting 3DETR & DepthContrast, two new models that advance 3D understanding. They address common challenges by leveraging self-supervised learning and establishing a general architecture that simplifies 3D understanding. Learn more: https://t.co/2gpdyUWi1q
Shared by Meta AI at 10/7/2021
Today, we’re unlocking @DynabenchAI, a first-of-its-kind platform for dynamic AI benchmarking. AI researchers can now create their own custom tasks, for free, to better evaluate the performance of #NLP models in more dynamic & realistic settings. https://t.co/5hhESN3PRc
Shared by Meta AI at 9/24/2021
We've taken another step toward eliminating language barriers. In collaboration with @huggingface, we recently released 4 direct speech-to-text models that translate English to Arabic, Catalan, German & Turkish. Try out the models for yourself: https://t.co/mKSGLwYnBn
Shared by Meta AI at 9/10/2021
We’re introducing GSLM, the first language model that breaks free completely of the dependence on text for training. This “textless NLP” approach learns to generate expressive speech using only raw audio recordings as input. Learn more and get the code: https://t.co/kRkUaFyZWb
Shared by Meta AI at 9/9/2021
The LVIS 2021 challenge is live! It uses our #dataset that contains 1203 object categories, 160k images, and 2M instance annotations. The deadline to submit your challenge entry is September 27. Learn more about LVIS and the challenge here: https://t.co/HmDy9fkimF
Shared by Meta AI at 8/23/2021
What if you could create virtual boxing athletes that could automatically develop a winning strategy? We released a #deeplearning framework at #SIGGRAPH2021 that generates control policies for two-player sports where the players are simulated. Learn more: https://t.co/wxlkdN0BD0
Shared by Meta AI at 8/20/2021
Facebook AI is sharing Multiscale Vision Transformers (MViT), a family of visual recognition models that incorporate the seminal concept of hierarchical representations into the powerful Transformer architecture. Learn more: https://t.co/BxSp1CE6uh
Shared by Meta AI at 8/11/2021
FAIR research scientist Ishan Misra (@imisra_) sat down with @lexfridman to demystify self-supervised learning & its impact in #AI: https://t.co/OHqoe7gdKt. Read the blog post that inspired the conversation: https://t.co/Tblj1SGbYA
Shared by Meta AI at 8/2/2021
Facebook AI has released DrQ-v2, a model-free #reinforcementlearning algorithm for visual continuous control. DrQ-v2 yields state-of-the-art results by using data augmentation to learn directly from pixels. Learn more and get the code:
Shared by Meta AI at 7/21/2021
We’ve developed a new computer vision model called ConViT, which combines two widely used AI architectures — convolutional neural networks (CNNs) & Transformer-based models — in order to overcome some important limitations of each approach on its own. https://t.co/NlCJ6NMNox
Shared by Meta AI at 7/20/2021
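ConViT's key mechanism is gated positional self-attention: each head learns a gate that blends content-based attention with a convolution-like positional attention, so the network can keep or discard the convolutional inductive bias as training proceeds. A scalar sketch of the gating (simplified; the real model gates full per-head softmax attention maps):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def gated_attention(content_scores, positional_scores, gate):
    """ConViT-style blending (sketch): a learned gate interpolates
    between content-based attention and a convolution-like positional
    attention. gate -> -inf recovers pure content attention;
    gate -> +inf recovers the convolutional inductive bias."""
    g = sigmoid(gate)
    return [(1.0 - g) * c + g * p
            for c, p in zip(content_scores, positional_scores)]
```

Initializing the gate toward the positional side gives the model CNN-like behavior early in training, while leaving it free to become more Transformer-like later.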
We’re sharing our work on few-shot neural architecture search (NAS), which combines the accuracy of vanilla NAS with the speed & efficiency of one-shot NAS. Few-shot NAS lets anyone design a powerful custom model quickly, with just a few GPUs. Learn more: https://t.co/fdUTFSmMK3
Shared by Meta AI at 7/19/2021
We’ve built and open-sourced BlenderBot 2.0, the first #chatbot that can store and access long-term memory, search the internet for timely information, and converse intelligently on nearly any topic. It’s a significant advancement in conversational AI. https://t.co/H17Dk6m1Vx
Shared by Meta AI at 7/16/2021
We’ve developed two methods to significantly improve the accuracy of supernets, which have emerged as a powerful way to make network architecture search more efficient. AttentiveNAS and AlphaNet deliver state-of-the-art results on the ImageNet data set. https://t.co/RHEpU1A0S2
Shared by Meta AI at 7/14/2021
After being trained entirely in simulation, an RMA-enabled robot is then deployed in the real world, where its base policy and adaptation module work asynchronously to enable it to adapt in real time.
Shared by Meta AI at 7/9/2021
Researchers from Facebook AI, @berkeley_ai, and @SCSatCMU have developed #AI that can enable a legged robot to adapt in fractions of a second to changing conditions in the real world.
Shared by Meta AI at 7/9/2021
(resharing w/correct link!) We’re using the natural association between video & sound to teach machines to better understand the world. Our self-supervised approach (a #CVPR21 best paper candidate) learns directly from sounds & images in videos. https://t.co/QHXtgklJGy
Shared by Meta AI at 7/8/2021
We’re sharing new research on using the natural association between video & sound to teach machines to better understand the world. Our self-supervised approach, which is a #CVPR21 best paper candidate, learns directly from sounds & images in videos. https://t.co/Wp6sYXwBHe
Shared by Meta AI at 7/8/2021
We’re sharing a new theory that attempts to explain one of the mysteries of #deeplearning: why so-called non-contrastive self-supervised learning often works well. Learn more: https://t.co/mVgxbBUcnB
Shared by Meta AI at 7/7/2021
We released DensePose-CSE, a #detectron2 framework for predicting dense correspondences for people and animals within and across categories in one go. Learn more: https://t.co/Ph5Fo55bbq
Shared by Meta AI at 7/6/2021
Here is the first method to enable freestyle dance generation in high-resolution from any single image using Generative Adversarial Networks (GANs). Let’s dance! https://t.co/277XkPEtZH
Shared by Meta AI at 6/25/2021
We presented an approach that allows scalable learning of single-image 3D reconstruction, using in-the-wild image collections in a ‘shelf-supervised’ manner: https://t.co/KYiIXdnnS1
Shared by Meta AI at 6/25/2021
At #CVPR2021, Facebook AI pushed the state of the art in many important areas of #CV, including 3D reconstruction, image manipulation, cross-modal learning and more. Here are some highlights:
Shared by Meta AI at 6/25/2021
We’re sharing significantly improved Mask R-CNN baselines that match recent SOTA results from other #computervision experts. We’re also providing an analysis of what drove these gains & adding recipes to our open source Detectron2 object detection library. https://t.co/BDMR8XijES
Shared by Meta AI at 6/21/2021
We are contributing to ongoing work to identify manipulated images and improve the detection of data provenance with the Image Similarity data set and challenge, hosted by DrivenData and recently launched at #CVPR2021. Learn more: https://t.co/XqSDbCoX7v
Shared by Meta AI at 6/21/2021
In collaboration with @ntu_spml, @LTIatCMU, & @jhuclsp, we introduce SUPERB, a benchmark that uses 10 speech processing tasks to standardize the evaluation of #unsupervised speech models. Submit & evaluate your models here: https://t.co/AIjx3IoZLt
Shared by Meta AI at 6/18/2021
We're open-sourcing XCiT, a new Transformer-based #computervision model with linear (not quadratic) complexity. XCiT, created in partnership w/ @inria researchers, processes high-res images extremely efficiently & delivers strong performance. Code & models https://t.co/7aRHfNxOp6
Shared by Meta AI at 6/18/2021
We are launching the Open Catalyst Challenge, an open AI research competition to build new machine learning models that will help scientists discover new catalysts for efficient, economical green energy storage. Learn more: https://t.co/G7DhxRgfC5
Shared by Meta AI at 6/17/2021
We’ve just open-sourced AugLy, a new #Python library that will help AI researchers use data augmentations to evaluate and improve the robustness of their machine learning models. Read more: https://t.co/w1FBLKMUFh
Shared by Meta AI at 6/17/2021
The simplicity and stability of HuBERT open the door for research on analyzing learned representations and for broader adoption in the speech and NLP communities. The quality of the learned representations facilitates deployment for many different downstream speech applications.
Shared by Meta AI at 6/15/2021
HuBERT either matches or improves upon the SOTA speech representation methods on the standard Libri-light and Librispeech benchmarks. Discrete HuBERT representations also achieve SOTA performance for Spoken Language Modeling and compression, at an impressive rate of 365 bps.
Shared by Meta AI at 6/15/2021
We are releasing pretrained HuBERT speech representation models and code for recognition and generation. By alternating clustering and prediction steps, HuBERT learns to invent discrete tokens representing continuous spoken input. Learn more: https://t.co/0eF3emyKYu
Shared by Meta AI at 6/15/2021
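HuBERT's training alternates a clustering step, which turns continuous speech features into discrete pseudo-labels, with a masked-prediction step over those labels. A toy version of the clustering step (1-D k-means; purely illustrative, not the actual feature-clustering pipeline):

```python
def kmeans_1d(xs, k, iters=10):
    """Toy k-means: the kind of clustering step HuBERT alternates with
    masked prediction to derive discrete targets from continuous
    features. Returns (cluster assignments, cluster centers)."""
    # crude spread-out initialization over the sorted values
    centers = sorted(xs)[::max(1, len(xs) // k)][:k]
    for _ in range(iters):
        assign = [min(range(k), key=lambda c: abs(x - centers[c]))
                  for x in xs]
        centers = [
            sum(x for x, a in zip(xs, assign) if a == c)
            / max(1, sum(1 for a in assign if a == c))
            for c in range(k)
        ]
    return assign, centers
```

The resulting cluster IDs play the role of the "invented" discrete tokens: the model is then trained to predict them at masked positions, and the improved representations feed the next clustering round.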
TextStyleBrush is the first self-supervised AI model that replaces text in existing images of both scenes & handwriting — in one shot — using just a single word. Read more: https://t.co/0QfLraAQvV
Shared by Meta AI at 6/14/2021
We are releasing ACCENTOR, a new data set that combines contextual chit-chat and traditional task-oriented dialogs. Automatic & human evaluations show our models can code-switch seamlessly, making virtual assistant conversations more natural & interactive. https://t.co/HjOzZkpLfC
Shared by Meta AI at 6/11/2021
This is the first fully documented and supported release of FairScale. FairScale makes the latest distributed training techniques available as composable modules and easy-to-use APIs for optimizing training and scaling your models. Check it out: https://t.co/atUHZnGWOM
Shared by Meta AI at 6/3/2021
Facebook is making #PyTorch the default framework for building all our #AI and machine learning models. Learn how PyTorch is already powering the next generation of Facebook experiences, and what the future holds: https://t.co/qu1p1FXp2O
Shared by Meta AI at 6/2/2021
We’ve developed an AI framework that helps filmmakers guide an aerial drone to record just the right kind of shot. Tell it whether you want a very exciting video clip or something calm & the system picks the trajectory and camera angle. Learn more: https://t.co/yhSAvHpIhk
Shared by Meta AI at 6/1/2021
Facebook AI’s new open source speech recognition model, wav2vec Unsupervised, uses no transcribed data at all. We’ve tested it on many languages, such as Swahili, that have proven challenging for other systems. Learn more in our blog post here: https://t.co/b6ic50AsM6
Shared by Meta AI at 6/1/2021