Search Results  



Percy Liang @percyliang · 10/18/2021
7/ Why a new name? ‘Foundation models’ highlights their centralizing function in AI systems. ‘Pre-trained models’ focuses on downstream adaptation. ‘Language model’ highlights only one modality. Use whichever name is appropriate based on which aspect you wish to emphasize.
Retweets: 1 · Likes: 6




Percy Liang @percyliang · 10/18/2021
5/ Foundation models are not the same as large language models. They can be grounded. For example, they could be trained on data that is the visual, auditory, and tactile sensorimotor experience of an agent in the world.
Retweets: 2 · Likes: 7




Percy Liang @percyliang · 10/18/2021
6/ Foundation models are not supposed to be full AI systems; they are unfinished (like a foundation) and need to be adapted to downstream tasks. Therefore, even if foundation models can’t do X by themselves, they can still play a vital supporting role in X.
Retweets: 1 · Likes: 3
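The "supporting role" point lends itself to a concrete sketch: a foundation model can sit frozen inside a larger system, supplying features that a small task-specific component builds on. The sketch below is a minimal illustration in Python; the Hugging Face transformers library, the bert-base-uncased checkpoint, and the 3-class head are illustrative assumptions, not details from the thread.

```python
# Minimal sketch: a frozen foundation model playing a supporting role
# inside a larger system (all tooling choices here are illustrative).
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
encoder = AutoModel.from_pretrained("bert-base-uncased")
encoder.requires_grad_(False)  # the "foundation" stays fixed; only the head adapts

# A small task-specific head (hypothetical 3-class downstream task).
head = torch.nn.Linear(encoder.config.hidden_size, 3)

batch = tokenizer(["an example input"], return_tensors="pt")
with torch.no_grad():
    features = encoder(**batch).last_hidden_state[:, 0]  # [CLS] embedding as features
logits = head(features)  # the surrounding system builds on these features
```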




Percy Liang @percyliang · 10/18/2021
3/ Much of the critique we have received concerns the limitations and risks of foundation models. We appreciate and agree with these points. Indeed, our report ‘On the Opportunities and Risks of Foundation Models’ is at least as much about the risks as the opportunities.
Retweets: 1 · Likes: 3




Percy Liang @percyliang · 10/18/2021
2/ We define foundation models as models trained on broad data (generally using self-supervision at scale) that can be adapted to a wide range of downstream tasks.
Retweets: 1 · Likes: 3
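This definition has two parts: broad self-supervised training and downstream adaptation. A minimal sketch can show both; the tooling below (Hugging Face transformers with the bert-base-uncased checkpoint) is an illustrative assumption, since the thread names no specific libraries.

```python
# Minimal sketch of the "pretrain broadly, adapt downstream" pattern the
# tweet defines (library and checkpoint are our illustrative choices).
import torch
from transformers import (AutoModelForMaskedLM,
                          AutoModelForSequenceClassification, AutoTokenizer)

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

# Phase 1: self-supervision at scale. The objective needs no labels --
# the model learns to reconstruct masked-out tokens from raw text.
mlm = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")
batch = tokenizer("Foundation models are trained on [MASK] data.", return_tensors="pt")
with torch.no_grad():
    logits = mlm(**batch).logits  # per-token vocabulary scores

# Phase 2: adaptation. The same pretrained trunk is reloaded with a fresh
# task head (here: 2-way classification) and fine-tuned on labeled data.
clf = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)
labels = torch.tensor([1])  # hypothetical downstream label
loss = clf(**batch, labels=labels).loss  # fine-tuning loss for the downstream task
loss.backward()
```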




Percy Liang @percyliang · 10/18/2021
New blog post reflecting on the last two months since our center on #foundationmodels (CRFM) was launched out of @StanfordHAI: https://t.co/AnkZ0VyGSW
Retweets: 15 · Likes: 52







Stanford NLP Group @stanfordnlp · 10/12/2021
Why spend your days staring at some tech company’s stock price? You can build the future of open source large-scale self-supervised models with other creative young folk as a Research Engineer at the Stanford Center for Research on Foundation Models! Apply https://t.co/hrncAm1eCO
Retweets: 2 · Likes: 7




Percy Liang @percyliang · 10/11/2021
The Stanford Center for Research on Foundation Models (CRFM) is looking for a research engineer to join our development team! Interested in large-scale training / being immersed in an interdisciplinary research environment? Please apply! https://t.co/1Mb906CRCM
Retweets: 1 · Likes: 2




Stanford HAI @StanfordHAI · 9/30/2021
A handful of models underpin the majority of AI applications. “These systems are burdened by undetected or unknown biases, meaning developers who adapt them for their applications are working from a fragile foundation.” via @TechCrunch https://t.co/kl4lkof8DQ
Likes: 5




Antonio Grasso @antgrasso · 9/29/2021
When it comes to machine learning models, the quality of the training data could mean the difference between success and failure. @SamaAI works to ensure ML algorithms are built on a foundation of accuracy & efficiency. More >> https://t.co/Tho3i2XHfy @antgrasso #SamaPartner #AI
Retweets: 74 · Likes: 97




SP Arun @sparuniisc · 9/22/2021
Interesting critique of AI foundation models by @GaryMarcus. Common sense is so critical for intelligence but so elusive to define... Something like dark matter for cognition 🤔 https://t.co/C5HZ0U0u1O
Retweets: 11 · Likes: 13




The Gradient @gradientpub · 9/11/2021
Are "foundation models" fit to be a new foundation for AI? In our latest article, @GaryMarcus and Ernest Davis argue that such large pretrained models have too many limitations and the field should look beyond them. https://t.co/W2RSC7DKMI
Retweets: 49 · Likes: 150




Siddharth Karamcheti @siddkaramcheti · 8/24/2021
At 10:20 PDT, @laurel_orr1 and I will be talking at the Workshop for #FoundationModels (https://t.co/Lvv5mhU5Ze) about Mistral, as well as our journey towards transparent and accessible training. We hope to see you there - bring your questions! [2/4]
Retweets: 4 · Likes: 14




Stanford NLP Group @stanfordnlp · 8/18/2021
Models like BERT & GPT-3 transformed #NLProc, giving a reorientable foundation that powers many applications. 100+ folk around @StanfordHAI came together to write a comprehensive report on the dangers and opportunities as this approach spreads across AI. https://t.co/bLjTKpp39M
Retweets: 34 · Likes: 104




Stanford HAI @StanfordHAI · 8/18/2021
NEW: This comprehensive report investigates foundation models (e.g. BERT, GPT-3), which are engendering a paradigm shift in AI. 100+ scholars across 10 departments at Stanford scrutinize their capabilities, applications, and societal consequences. https://t.co/wsk9AImOFR
Retweets: 181 · Likes: 448
