Google AI
@GoogleAI
Google AI is focused on bringing the benefits of AI to everyone. In conducting and applying our research, we advance the state-of-the-art in many domains.
Tweets by Google AI
Today we present a pilot effort to create an automatic wireless network planning tool that uses detailed geometric models to quickly and accurately estimate radio propagation, coverage, and path loss for radio transmissions. Read more at https://t.co/19HjWOl3Hi
Shared by Google AI at 5/12/2022
Congrats to the team at @MGnifyDB on their 2.4B sequence protein database release! We're glad to have supported this effort with an #ML generated collection of 1.5B protein function predictions. Read more about this project, including prior work, below: https://t.co/5BtFo1iXAo
Shared by Google AI at 5/12/2022
Learn about chain of thought prompting, a method that equips language models to decompose multi-step problems into intermediate steps, enabling models of sufficient scale to solve complex reasoning problems that are not solvable with standard prompting. → https://t.co/ep510qByis
Shared by Google AI at 5/11/2022
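The chain-of-thought tweet above describes the mechanism well enough to sketch: with standard prompting the few-shot exemplar jumps straight to the answer, while a chain-of-thought exemplar includes the intermediate reasoning steps. A minimal Python sketch, in which the exemplar wording and the `build_prompt` helper are illustrative assumptions rather than the paper's exact prompts:

```python
# Chain-of-thought prompting: the worked exemplar shows intermediate
# reasoning, nudging the model to emit its own steps before the answer.

STANDARD_EXEMPLAR = (
    "Q: Roger has 5 tennis balls. He buys 2 cans with 3 balls each. "
    "How many tennis balls does he have now?\n"
    "A: The answer is 11.\n"
)

COT_EXEMPLAR = (
    "Q: Roger has 5 tennis balls. He buys 2 cans with 3 balls each. "
    "How many tennis balls does he have now?\n"
    "A: Roger started with 5 balls. 2 cans of 3 balls each is 6 balls. "
    "5 + 6 = 11. The answer is 11.\n"
)

def build_prompt(exemplar: str, question: str) -> str:
    """Prepend a worked exemplar to a new question for few-shot prompting."""
    return f"{exemplar}\nQ: {question}\nA:"

prompt = build_prompt(COT_EXEMPLAR, "A cafeteria had 23 apples. It used 20 "
                      "and bought 6 more. How many apples are there?")
```

Swapping `COT_EXEMPLAR` for `STANDARD_EXEMPLAR` reproduces the baseline; the paper's finding is that the reasoning-bearing variant helps only at sufficient model scale.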
Today we present an approach that uses monolingual data to train #MachineTranslation models for zero-resource translation, which has enabled the expansion of #GoogleTranslate to include 24 under-resourced languages. https://t.co/T0lwbvhTd6
Shared by Google AI at 5/11/2022
Introducing a new #ReinforcementLearning framework to safely and efficiently learn robotic legged locomotion skills while both minimizing the risk of damage due to falls and automatically resetting after each trial. Learn more at https://t.co/02UU66eEZ5
Shared by Google AI at 5/5/2022
Work on #GraphNeuralNetworks (GNNs) has produced many GNN variants, but methods for evaluating GNNs have received less attention. Read how GraphWorld enables exploring GNN performance on a broader population of graphs not covered by popular datasets. → https://t.co/fzSTUOeE3M
Shared by Google AI at 5/5/2022
Alpa is a framework that automates the complex model parallelism process for large #DeepLearning models with just one line of code. Learn more and check out the code. https://t.co/xFfW5tml9v
Shared by Google AI at 5/4/2022
Today we present Value Function Spaces, an approach to long-horizon reasoning for #ReinforcementLearning applications that uses aggregated value functions from individual skills to improve overall performance on complex activities. Learn more at https://t.co/NsJRqbbZgN
Shared by Google AI at 4/29/2022
The 10th annual International Conference on Learning Representations is being held virtually this week. If you’re attending @iclr_conf, check out one of our many publications or join us at one of our workshops. Read about our participation at #ICLR2022 → https://t.co/PGATcVLB3x
Shared by Google AI at 4/25/2022
Pix2Seq is a general framework that casts object detection as a language modeling task conditioned on pixel inputs, which achieves competitive results compared to more complex specialized methods that are difficult to generalize to other tasks. Read how ↓ https://t.co/oRtq7NZ8De
Shared by Google AI at 4/22/2022
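The Pix2Seq framing above is easy to sketch: each bounding box becomes a short run of discrete tokens that a sequence model can be trained to emit. The bin count and token layout below are illustrative assumptions, not the paper's exact configuration:

```python
# Pix2Seq's core idea: serialize a detection as five discrete tokens,
# [ymin, xmin, ymax, xmax, class], so detection reduces to language
# modeling over a small vocabulary.

N_BINS = 1000  # coordinate vocabulary size (assumed)

def quantize(coord: float) -> int:
    """Map a normalized coordinate in [0, 1] to a discrete bin index."""
    return min(int(coord * N_BINS), N_BINS - 1)

def box_to_tokens(ymin, xmin, ymax, xmax, class_id):
    """Serialize one box; class tokens follow the coordinate vocabulary."""
    return [quantize(ymin), quantize(xmin), quantize(ymax), quantize(xmax),
            N_BINS + class_id]

tokens = box_to_tokens(0.1, 0.2, 0.5, 0.8, class_id=3)
# tokens == [100, 200, 500, 800, 1003]
```

Decoding inverts the same mapping, which is what lets a generic autoregressive decoder replace specialized detection heads.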
Read about FormNet, a sequence model for form-based document understanding that can process the more complex layouts frequently found in form documents and achieves state-of-the-art performance using less pre-training data than conventional methods. https://t.co/pDnnD88qQc
Shared by Google AI at 4/20/2022
Introducing Learning to Prompt (L2P), an #ML model training method that uses learnable task-relevant prompts to guide pre-trained models through training on sequential tasks and results in high performance in the #ContinualLearning setting. Read more → https://t.co/pIAN0ORCEq
Shared by Google AI at 4/19/2022
Read all about Locked-Image Tuning, which combines the best of transfer learning and contrastive image-text learning to achieve state-of-the-art zero-shot classification accuracy. Learn more and try the demo! ↓ https://t.co/trBHbU0Z98
Shared by Google AI at 4/14/2022
Task-oriented dialogue models power many modern conversational agents, but can be limited in their ability to generalize to new tasks. Check out two new approaches for dialogue modeling that use additional context during training for improved performance. https://t.co/IsY5nRIbCX
Shared by Google AI at 4/13/2022
Learn how DeepFusion, a fully end-to-end multi-modal 3D detection framework, applies a simple yet effective feature fusion strategy to unify the signals from two sensing modalities and achieve state-of-the-art performance. https://t.co/jqTuWRAuro
Shared by Google AI at 4/12/2022
Presenting ALX, an open-source library for distributed matrix factorization that makes efficient use of the #TPU architecture, enabling a high-performance implementation on a large-scale cluster of TPU devices. Learn more and grab the code → https://t.co/e6O6jQ8eVM
Shared by Google AI at 4/8/2022
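The core computation that ALX distributes, alternating least squares (ALS) for matrix factorization, can be sketched on a single machine with NumPy. This is an illustrative toy (no sharding or TPU specifics), not the library's implementation:

```python
import numpy as np

def als(R, rank=2, n_iters=50, reg=1e-6, seed=0):
    """Alternating least squares: fix one factor, solve for the other."""
    rng = np.random.default_rng(seed)
    m, n = R.shape
    U = rng.normal(size=(m, rank))
    V = rng.normal(size=(n, rank))
    I = reg * np.eye(rank)
    for _ in range(n_iters):
        U = R @ V @ np.linalg.inv(V.T @ V + I)    # update row factors
        V = R.T @ U @ np.linalg.inv(U.T @ U + I)  # update column factors
    return U, V

A = np.arange(6.0).reshape(3, 2)   # build an exactly rank-2 target
R = A @ A.T
U, V = als(R)
err = np.abs(U @ V.T - R).max()    # reconstruction error
```

Each half-update is an embarrassingly parallel batch of small least-squares solves, which is what makes the algorithm a good fit for a TPU cluster.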
A key challenge in #ReinforcementLearning is learning policies from scratch in environments with complex tasks. Read how a meta-algorithm, Jump Start Reinforcement Learning, uses prior policies to create a learning curriculum that improves performance → https://t.co/CkFLof4fNp
Shared by Google AI at 4/6/2022
Today we discuss challenges of irreproducibility in #DeepLearning and how smooth activation functions can help address them. We present the Smooth ReLU (SmeLU) activation, a simple smooth function that addresses the problem. https://t.co/yLJwZrYqjF
Shared by Google AI at 4/5/2022
Introducing the 540 billion parameter Pathways Language Model. Trained on two Cloud #TPU v4 pods, it achieves state-of-the-art performance on benchmarks and shows exciting capabilities like mathematical reasoning, code writing, and even explaining jokes. https://t.co/NFHFAHgUkB
Shared by Google AI at 4/4/2022
Introducing the Common Voice-based Speech-to-Speech translation corpus, CVSS, which includes 2657 hours of speech-to-speech translation sentence pairs from 21 languages into English. Read about its development and how we used it to train baseline models ↓ https://t.co/dxtif8v9bM
Shared by Google AI at 4/1/2022
Are you an English-speaking Android user with a speech impairment? We invite you to provide feedback on Project Relate, a beta app that improves speech recognition technology for people with speech impairments→ https://t.co/KQY7qxWY2d #a11y #accessibility https://t.co/VotZ4nK9kK
Shared by Google AI at 3/30/2022
Learn how auto-generated summaries were enabled in Google Docs using a #MachineLearning model that comprehends document text and, when confident, generates a natural language summary of the document content. Read more → https://t.co/FsNWAa3juA
Shared by Google AI at 3/23/2022
Presenting PRIME, a data-driven approach for architecting hardware accelerators that trains a #DeepLearning model on existing accelerator data, improves runtime and chip area usage by 1.2x to 1.5x, and can generate accelerators for unseen applications → https://t.co/E0PcQMg3d4
Shared by Google AI at 3/17/2022
Learn how our new hybrid quantum algorithm, based on a classical Monte Carlo calculation, performed the largest ever quantum computation of chemistry by using 16 qubits to study the forces experienced by two carbon atoms in a diamond crystal ↓ https://t.co/MlfoOy3Vrl
Shared by Google AI at 3/16/2022
Introducing the Multimodal Bottleneck Transformer, a novel transformer-based model for multimodal fusion that restricts cross-modal attention flow to achieve state-of-the-art results on video classification tasks with less compute. Read more ↓ https://t.co/BXMVgap0ID
Shared by Google AI at 3/15/2022
Graph Neural Networks (GNNs) are powerful tools for leveraging graph-structured data. Introducing Shift-Robust GNN, an approach that accounts for potential bias in training data and meaningfully reduces the negative effects of such bias ↓ https://t.co/NknO6kqWQE
Shared by Google AI at 3/8/2022
Predicting protein function based on amino acid sequences has broad impact, but can be challenging. Check out a new #ML approach that has resulted in ~6.8M new protein function annotations, and try it yourself using an online interactive tool. Read more ↓ https://t.co/GeCr4Ju8XY
Shared by Google AI at 3/7/2022
Presenting a novel approach for pre-training video understanding models on untrimmed videos that leverages the teacher-student framework to convert noisy, weak labels to more effective pseudo-labels, resulting in state-of-the-art performance. Learn more ↓ https://t.co/yALAxr44QI
Shared by Google AI at 3/7/2022
Introducing CoVeR, a training paradigm that leverages images and video to build a general-purpose action recognition model, achieving impressive performance across many action recognition datasets without the need for fine-tuning on each downstream task. https://t.co/2gnyyhDcOH
Shared by Google AI at 3/4/2022
Today we discuss how paralinguistic information was extracted from a single layer of a high-performing language model and used to create a smaller high-performing model. Learn more and review the publicly-available model ↓ https://t.co/gL5dEL8gpN
Shared by Google AI at 3/3/2022
Introducing a proof-of-concept production deployment of a next-word-prediction model for Spanish-language #Gboard users that was trained using #FederatedLearning while providing a rigorous #DifferentialPrivacy guarantee. Learn how it was done ↓ https://t.co/HuvpdlGd90
Shared by Google AI at 3/3/2022
Introducing a new approach for training #ML models using noisy data that works by dynamically assigning importance weights to both individual instances and class labels, thus reducing the impact of noisy examples. Learn more about it at https://t.co/lKYl0fzeYD
Shared by Google AI at 2/28/2022
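The instance-weighting idea above can be sketched in a few lines: examples whose loss sits far above the batch median get exponentially smaller weights, shrinking the gradient contribution of likely-mislabeled points. The specific weighting rule here is an illustrative assumption, not the method's exact formula:

```python
import numpy as np

def instance_weights(losses, temperature=1.0):
    """Downweight examples whose loss is far above the batch median."""
    losses = np.asarray(losses, dtype=float)
    gap = np.maximum(losses - np.median(losses), 0.0)  # only penalize outliers
    w = np.exp(-gap / temperature)
    return w / w.sum()                                 # normalize to sum to 1

w = instance_weights([0.2, 0.3, 0.25, 5.0])  # last example is likely noisy
```

In a training loop these weights would multiply the per-example losses before the gradient step; the method in the post additionally learns weights for class labels, which this sketch omits.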
Read about 4D-Net, a neural network that learns to combine inputs from streams of 3D point cloud and RGB camera image data to enable effective 3D object detection in autonomous driving while maintaining computational efficiency. https://t.co/5kpWVqM9aL
Shared by Google AI at 2/23/2022
Mechanical ventilators provide critical support for patients who need assistance to breathe. Today we present exploratory research that uses data from artificial lungs to build a deep learning algorithm for improved medical ventilator control. Read more at https://t.co/ermMeMxxJ9
Shared by Google AI at 2/17/2022
Read about the open-source Balloon Learning Environment (BLE), where deep reinforcement learning is applied to create high-performing flight agents that control stratospheric balloons under real-world conditions. https://t.co/PgKgsGxdp6
Shared by Google AI at 2/17/2022
In order to address the challenges of climate change, it's crucial that accurate data is used. Check out our recent study about the true operational carbon emissions of #ML model training and industry best practices that can reduce ML’s carbon footprint ↓ https://t.co/1jiGAqtdcc
Shared by Google AI at 2/15/2022
Introducing Federated Reconstruction, an approach that enables scalable partially local #FederatedLearning and thus avoids revealing privacy-sensitive data. Read all about our approach, including how it was successfully deployed to Gboard ↓ https://t.co/dSXrMvgEir
Shared by Google AI at 12/16/2021
Today we present an open-source system to scale neural networks — often critical for improving model performance — by automatically parallelizing the model across devices, which enables researchers to more efficiently build and train large-scale models. https://t.co/FeSGtU5V26
Shared by Google AI at 12/8/2021
While Vision Transformer models consistently obtain state-of-the-art results, they often require too many tokens for larger images and video. Read about TokenLearner, which adaptively generates fewer tokens but enables models to perform better, faster → https://t.co/FiRRefNbM2
Shared by Google AI at 12/7/2021
The 35th annual @NeurIPSConf, held virtually with its first Datasets & Benchmarks track, begins this week. We’re happy to have 170+ accepted papers and to collaborate with the broader research community via talks, workshops, tutorials and more at #NeurIPS2021 https://t.co/2sLzMNbWTD
Shared by Google AI at 12/6/2021
Additional congratulations to the authors of "Reduced, Reused and Recycled: The Life of a Dataset in Machine Learning Research" (https://t.co/a82ogIMhu0) for being awarded the @neuripsconf Datasets and Benchmarks Best Paper Award.
Shared by Google AI at 12/3/2021
Congratulations to the authors of “Deep RL at the Edge of the Statistical Precipice”, a #NeurIPS2021 Outstanding Paper (https://t.co/dCneRxPgDk)! You can learn more about it in the blog post below, and we look forward to sharing more of our research at this year’s @NeurIPSConf.
Shared by Google AI at 12/3/2021
Today we present Reinforcement Learning Datasets (RLDS), a suite of tools for working with and sharing data for sequential decision making, which makes it easier to share datasets without loss of information, regardless of the data format. Read more → https://t.co/NlHjRwRzF1
Shared by Google AI at 12/1/2021
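The RLDS layout described above is a nested episodes-of-steps structure. This plain-Python sketch follows the RLDS field-naming convention; the helper functions are illustrative and not part of the library's API:

```python
# RLDS standardizes sequential-decision data as episodes made of steps,
# so tools can consume datasets without caring how they were produced.

def make_step(observation, action, reward, is_first=False, is_last=False,
              is_terminal=False):
    """One environment step with the standard RLDS-style fields."""
    return {"observation": observation, "action": action, "reward": reward,
            "is_first": is_first, "is_last": is_last,
            "is_terminal": is_terminal}

def episode_return(episode):
    """Sum of rewards over an episode, independent of how it was stored."""
    return sum(step["reward"] for step in episode["steps"])

episode = {
    "episode_id": 0,
    "steps": [
        make_step(observation=[0.0], action=1, reward=0.0, is_first=True),
        make_step(observation=[0.1], action=0, reward=1.0),
        make_step(observation=[0.2], action=0, reward=1.0,
                  is_last=True, is_terminal=True),
    ],
}
```

Keeping `is_last` and `is_terminal` separate is what preserves the distinction between episodes that ended and episodes that were merely truncated.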
Introducing MURAL, a model for image-text matching that leverages language translation pairs to significantly improve multilingual text–image retrieval across a wide selection of both well- and under-resourced languages. https://t.co/v10HBDZxMn
Shared by Google AI at 11/30/2021
Introducing RLiable, an easy-to-use library for reliably evaluating and reporting performance of #ReinforcementLearning algorithms, even when using only a handful of training runs. Learn more and access the library to build confidence in your results → https://t.co/KnV61G5oBI
Shared by Google AI at 11/17/2021
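RLiable's headline aggregate metric, the interquartile mean (IQM), averages only the middle 50% of runs so a single outlier seed cannot dominate the estimate. A standalone NumPy sketch of that statistic (the library itself additionally provides stratified bootstrap confidence intervals):

```python
import numpy as np

def iqm(scores):
    """Interquartile mean: the mean of the middle 50% of sorted scores."""
    s = np.sort(np.asarray(scores, dtype=float).ravel())
    n = len(s)
    return s[n // 4 : n - n // 4].mean()

iqm([0, 1, 2, 3, 100])   # the outlier run is trimmed before averaging
```

Compared to the plain mean (21.2 here), the IQM (2.0) reflects typical performance across the handful of training runs the tweet mentions.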
Making reasonable predictions about the future is an important capability for useful #ML agents. Learn about a new self-supervised approach that learns a predictive model which can be applied, without fine-tuning, to a variety of tasks ↓ https://t.co/R0N9keoV3m
Shared by Google AI at 11/11/2021
Clustering algorithms partition datasets into meaningful groups and are a key building block of unsupervised #ML. Learn about a new clustering algorithm that enhances privacy while maintaining or improving performance against existing benchmarks ↓ https://t.co/9GgwIlyRao
Shared by Google AI at 10/21/2021
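One common ingredient of privacy-preserving clustering can be sketched: release centroids computed from noisy per-cluster sums and counts, with inputs clipped so that no single point has outsized influence on the output. The clipping bound and Laplace noise scales below are illustrative assumptions, not the algorithm from the post:

```python
import numpy as np

def private_centroids(points, labels, k, epsilon=1.0, clip=1.0, seed=0):
    """Release k centroids with Laplace noise on per-cluster sums/counts."""
    rng = np.random.default_rng(seed)
    points = np.clip(points, -clip, clip)   # bound each point's influence
    dim = points.shape[1]
    centroids = np.zeros((k, dim))
    for c in range(k):
        members = points[labels == c]
        noisy_sum = members.sum(axis=0) + rng.laplace(
            scale=2 * clip / epsilon, size=dim)
        noisy_count = max(len(members) + rng.laplace(scale=1 / epsilon), 1.0)
        centroids[c] = noisy_sum / noisy_count
    return centroids

pts = np.array([[-0.9, -0.9], [-0.8, -1.0], [0.9, 1.0], [1.0, 0.8]])
labels = np.array([0, 0, 1, 1])
cents = private_centroids(pts, labels, k=2)
```

With a larger privacy budget (`epsilon`) the noisy centroids converge toward the true cluster means; the benchmark comparison in the post is about achieving good utility at practical budgets.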
Introducing a minimalist and effective approach for vision language model pre-training that learns a single representation from both visual and language inputs and efficiently leverages scaled datasets to achieve state-of-the-art performance. Learn more ↓ https://t.co/U9DY2CZbqR
Shared by Google AI at 10/15/2021
Announcing Uncertainty Baselines, a collection of high-quality implementations of standard state-of-the-art deep learning methods for a variety of tasks that help research on uncertainty and robustness to be more reproducible. Read more and get the code ↓ https://t.co/GXZlFxXaef
Shared by Google AI at 10/14/2021
Medical image classification models often pre-train on natural image datasets. Today, we present alternative approaches that use additional pre-training on medical images, along with metadata-based data augmentation, to significantly improve performance. https://t.co/DHy0XojZwm
Shared by Google AI at 10/13/2021
Discovering new crystalline metal oxides, which are critical for continued technological advancement, is made difficult by the extremely large search space. Check out a new approach that prints thousands of distinct compositions quickly for follow-up #ML analysis. https://t.co/2RambPF7cY
Shared by Google AI at 10/7/2021
Language models can perform a variety of tasks, but often require careful engineering to perform well on specific tasks. Today we share a simple technique that fine-tunes a model to perform new tasks by following instructions. Learn more → https://t.co/aO1tbe3LNv
Shared by Google AI at 10/6/2021
Classical algorithms have proved useful for computing routes, but their computational burden can increase quickly at larger scales. Today we share how a new algorithm breaks road networks into smaller components for greater efficiency ↓ https://t.co/azOrQ9h8qt
Shared by Google AI at 9/30/2021
#ReinforcementLearning (RL) agents have shown promising results across various activities, but often cannot generalize to new tasks, even if they are similar. Learn about a new approach that incorporates RL’s sequential structure to enhance generalization → https://t.co/BSOciuybj5
Shared by Google AI at 9/29/2021
Check out Translatotron 2, an updated version of the Translatotron direct speech-to-speech neural translation model, which exhibits improved translation performance, while also applying a new, more secure approach for voice retention. https://t.co/N7Ie1YQN3u
Shared by Google AI at 9/23/2021
Introducing Pathdreamer, a new world model that generates high-resolution 360º visual observations of areas of a building unseen by a navigational agent in order to help better predict how to successfully navigate the space. Learn more at https://t.co/U5egwDxcmg
Shared by Google AI at 9/22/2021
Check out FACT, an #ML model based on a new large-scale, multi-modal 3D dance motion dataset (AIST++), which can generate novel dance sequences for 10 different dance genres, from ballet jazz to hip-hop. https://t.co/WoW3pZ69wG
Shared by Google AI at 9/13/2021
Introducing personalized #AutomaticSpeechRecognition models, the latest effort from #ProjectEuphonia that leverages an extensive corpus of disordered speech to develop models that are more effective for speakers who have speech impairments. More ↓ https://t.co/bD8VMlm31q
Shared by Google AI at 9/9/2021
Today on the blog we present a 2-stage framework for anomaly detection that combines recent progress on deep representation learning and classic one-class algorithms, is simple to train, and results in state-of-the-art performance. Learn more ↓ https://t.co/4r8jHIXp3I
Shared by Google AI at 9/2/2021
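The two-stage anomaly detection recipe above reads directly as code: stage one maps raw inputs into an embedding with a frozen representation model, and stage two fits a classic one-class scorer on those embeddings. In this sketch the "representation" is a stand-in random projection and the scorer is distance to the nearest normal example; both are illustrative assumptions, not the paper's models:

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 8))            # stand-in frozen "encoder" weights

def embed(x):
    """Stage 1: a fixed (frozen) nonlinear embedding of the raw input."""
    return np.maximum(x @ W, 0.0)

def fit_scorer(train_x):
    """Stage 2 'training': memorize the embeddings of normal data."""
    return embed(train_x)

def anomaly_score(bank, x):
    """Score = distance from x's embedding to its nearest normal neighbor."""
    return np.linalg.norm(bank - embed(x), axis=1).min()

normal = rng.normal(size=(100, 4))     # in-distribution training data
bank = fit_scorer(normal)
in_score = anomaly_score(bank, rng.normal(size=4))
out_score = anomaly_score(bank, rng.normal(size=4) + 10.0)  # shifted outlier
```

The appeal of the recipe is exactly this separation: the scorer needs no gradient training, so most of the engineering effort goes into the representation.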
To honor Lou Gehrig, former NFL player Steve Gleason, who lost his ability to speak due to ALS, recently recited the famous baseball star’s “Luckiest Man” speech. Learn how we leveraged #ML and state-of-the-art TTS to help Gleason speak again in his own voice. https://t.co/R0E9qeQgyp
Shared by Google AI at 8/30/2021
Announcing our 4th annual Landmark Recognition (https://t.co/bgHOetfzLh) and Landmark Retrieval (https://t.co/oTURMPxmRY) challenges on @kaggle! This year’s challenges put special emphasis on fairness in large-scale recognition. Learn more and register at the above links.
Shared by Google AI at 8/25/2021
A challenge for current-generation quantum processors is that operational error rates are too high for a variety of useful algorithms. Learn how we’ve implemented ideas for quantum error correction using repetition codes to reduce the error rate. → https://t.co/p81emJblvV
Shared by Google AI at 8/11/2021
Check out a case study with Know Your Data — a dataset exploration tool introduced earlier this year at Google I/O — that highlights how biases can be traced to both dataset collection and annotation practices. https://t.co/CZ9sXYG7NN
Shared by Google AI at 8/9/2021
Natural speech often has disruptions and complexities that are difficult for #NLP models to understand. Today we introduce two benchmark datasets that challenge models on temporal reasoning (TimeDial) and contextual disfluencies (Disfl-QA). Details below: https://t.co/kD65S8TYwx
Shared by Google AI at 8/4/2021
Today we are releasing the Open Buildings Dataset, a new open-source dataset containing the locations and footprints of >500M buildings with coverage across Africa, which can support numerous scientific and humanitarian applications. Read more at https://t.co/ZAFeD3mWQt
Shared by Google AI at 7/28/2021
Recently, we released TF-Ranking, an open-source TF-based library that makes building customized learning-to-rank models easier and facilitates fast exploration of new model structures for production and research. Learn more and grab the code on the blog ↓ https://t.co/KkxCBPR9nB
Shared by Google AI at 7/27/2021
While cochlear implants improve the listening experience for many people who are hard of hearing, they can be less effective in noisy environments. Learn how an #ML-based preprocessor can be used to suppress noise, leading to enhanced speech understanding. https://t.co/BCZAThTZ2g
Shared by Google AI at 7/23/2021
We are happy to be a Platinum Sponsor of the thirty-eighth @icmlconf. Are you registered for #ICML2021? We hope you’ll visit the Google virtual booth to see how we are solving some of the field’s most interesting challenges. Learn more below! https://t.co/sU50q32qZz
Shared by Google AI at 7/19/2021
Today we present connected approaches that push the limits of high-fidelity image synthesis through use of a pipeline of multiple diffusion models that perform progressive iterative refinement and super-resolution. Learn more here: https://t.co/V28qyOc4ky
Shared by Google AI at 7/16/2021
While #ReinforcementLearning is a popular method for training robots in simulation, it often comes with a large computational cost. Today we introduce a new physics simulation engine that can speed up training by 100-1000x. Learn more and grab the code → https://t.co/5rCXCCEZIE
Shared by Google AI at 7/15/2021
Understanding how to best use unlabeled examples in real-world applications of #ML is often challenging. Today we present a semi-supervised learning approach that can be applied at scale to achieve performance gains beyond that of fully-supervised models. https://t.co/DoRumTHBCs
Shared by Google AI at 7/14/2021
As deep #ReinforcementLearning approaches have seen more success, their computational cost has grown. Check out an ICML paper that explores how these successes can be reached with smaller-scale experiments that are more accessible and still insightful. https://t.co/0q6JWLNZFI
Shared by Google AI at 7/13/2021
To help reduce gender bias in neural machine translations, we are releasing the Translated Wikipedia Biographies dataset, which can be used to evaluate the gender accuracy of translation models. Learn more and check out the dataset ↓ https://t.co/ldDrLxPa4X
Shared by Google AI at 6/24/2021
Today we share how #ML models can be trained to accurately predict phenotypes, and how these predictions can be used to identify novel genetic associations that lead to more accurate predictions of disease predisposition. Learn more ↓ https://t.co/uR3hTZjSgX
Shared by Google AI at 6/23/2021
Learn about SimGAN, a novel physics simulator built with #ML that helps close the sim-to-real gap by replacing manually-defined parameters with learnable functions that change according to the state of a robot and more accurately replicate the real world. https://t.co/Q4HjlDRY9P
Shared by Google AI at 6/16/2021
While most #ML robotics research occurs in a fixed lab environment, as the field advances into complex and challenging real-world scenarios, using A/B testing to compare results of different models and lab settings is increasingly important. Learn more → https://t.co/L4LFOz8G6P
Shared by Google AI at 6/10/2021
Last year we shared how #ReinforcementLearning could hasten the design of accelerator chips (https://t.co/7KqG2pqTqh). Today we're publishing improved methods, which we've used in production to design the next generation of Google TPUs. Read more in Nature https://t.co/gOQVmbWdeV
Shared by Google AI at 6/9/2021
Presenting Variational Transformer Networks, a new approach for automated document layout design that leverages self-attention to develop design rule distributions given a set of examples for a particular task. Read all about it at https://t.co/M9UBXiOY9v
Shared by Google AI at 6/8/2021
Data is a key aspect of #ML systems, affecting everything from performance to scalability. Learn how the complexity and challenges of data-related work can result in technical debt called “data cascades” and how they can be avoided with early intervention ↓ https://t.co/LJ4WHS6xwA
Shared by Google AI at 6/4/2021
Today on the blog, we introduce two approaches that narrow the “sim-to-real gap” in the field of robotics by ensuring consistency of the visual features used during training to improve real-world performance ↓ https://t.co/t2IGn5eZ9n
Shared by Google AI at 6/3/2021