Stanford HAI (@StanfordHAI)
Advancing AI research, education, policy, and practice to improve the human condition.
Tweets by Stanford HAI
Here's a look back at how AI research at @Stanford has evolved from scholarship focused on pursuing generality to a multidisciplinary field – drawing experts in medicine, psychology, economics, law, art, and humanities. #ThrowbackThursday https://t.co/YdsUEoJKfc
Shared by Stanford HAI at 5/12/2022
🚨Just released: HAI policy white paper outlining a roadmap for a multilateral AI research institute (MAIRI) bringing international stakeholders together to promote AI R&D collaboration, multidisciplinary AI research, and democracy-affirming AI with human-centric norms & values.
Shared by Stanford HAI at 5/12/2022
The data needed to inform policymaking for sustainable development is often lacking or inaccurate. In this brief, scholars explain how machine learning analysis of satellite imagery could help: https://t.co/uJQiToqvre
Shared by Stanford HAI at 5/11/2022
How do we chart a path toward high-quality digital education that is accessible and scalable? @Stanford's @chrispiech talks about how scholars in education, psychology & computer science are working to develop AI systems to help support students at scale. https://t.co/XkaFYV9Y1t
Shared by Stanford HAI at 5/6/2022
This week, we joined the @stanford community in welcoming @timnitGebru for @cas_stanford's annual lecture where she discussed the need to disrupt exploitative structures and systems in place, and to champion a community-based approach in AI and technology spaces.
Shared by Stanford HAI at 5/6/2022
What leads us to explore the world around us? Through a #HoffmanYee research grant, an interdisciplinary team of @Stanford scholars is examining the nature of curiosity from the perspective of infants and AI agents. Watch HAI Faculty @mcxfrank's interview: https://t.co/TRD1ANgTII
Shared by Stanford HAI at 5/5/2022
HAI regularly publishes industry briefs that provide a cross-section of Stanford research in AI across various sectors, including education, healthcare, financial services, and more. Watch out for a new brief on robotics coming soon: https://t.co/P22Tt5tJ3E
Shared by Stanford HAI at 5/3/2022
Software development demands two types of experts – a UX designer and a software engineer. With the advent of human-centered artificial intelligence, the boundary between these roles is shifting: https://t.co/OBGmnACYdv
Shared by Stanford HAI at 5/2/2022
Experts gather at this year's HAI Spring Conference to discuss the development, benefits, pitfalls and implications of foundation models. Read via @VentureBeat: https://t.co/BbqcGK44b5
Shared by Stanford HAI at 4/29/2022
Watch @drfeifei, @chrmanning, and others talk about @Stanford's legacy in the field of #ArtificialIntelligence and what the future holds for this technology: https://t.co/hAjreDQMOp
Shared by Stanford HAI at 4/29/2022
This AI-driven brain ‘fingerprinting’ model could be a powerful new tool in advancing diagnosis and treatment for autism: https://t.co/aAPFpfMaTy
Shared by Stanford HAI at 4/28/2022
In this episode of @StanfordEng's The Future of Everything, HAI Junior Fellow @JEichstaedt talks about how social media can be used to gauge a population’s mental well-being, which can serve as an early warning system for public health crises. https://t.co/XrfGMzakpX
Shared by Stanford HAI at 4/28/2022
ICYMI: On Apr 12, the HAI Spring conference welcomed hundreds of researchers, developers, students & technology enthusiasts to @Stanford for a day filled with expert-led talks on accountable AI, foundation models, and the physical/simulated worlds. https://t.co/dARZHkml5K
Shared by Stanford HAI at 4/28/2022
Our latest policy brief discusses how AI models can map satellite image inputs to sustainable development outcomes, their potential applications, the limitations of such an approach, and ways to address them. @MarshallBBurke @DavidBLobell @StefanoErmon https://t.co/XuKZyC88o8
Shared by Stanford HAI at 4/28/2022
HAI Policy Director @russellwald closes out today's symposium: "There is an unprecedented opportunity for the federal gov't to work with academia to pool resources together and tackle this challenge through a multidisciplinary approach – a key part of building human-centric AI."
Shared by Stanford HAI at 4/27/2022
In this interactive course, executives and professionals engage with Stanford faculty and Silicon Valley leaders to develop skills, decision-making principles, and frameworks to use AI in driving business growth and social impact. @ECorner https://t.co/G2OAUurPYn
Shared by Stanford HAI at 4/27/2022
2. Require health care systems to have disparity dashboards, or ways to track in real-time the differential accuracy of medical products across demographics.
Shared by Stanford HAI at 4/27/2022
1. Mandate labels on medical products that explicitly state the composition of individuals that these products were designed or evaluated on.
Shared by Stanford HAI at 4/27/2022
How to better evaluate medical AI? MIT research scientist Leo Anthony Celi offers two policy recommendations:
Shared by Stanford HAI at 4/27/2022
“You can’t expect the same tool that tells why an applicant was rejected to also be the right tool to explain why a model is exhibiting disparities.” @StanfordGSB @BlattnerLaura shares some key takeaways from her latest research on AI/ML models used for consumer credit @FinRegLab
Shared by Stanford HAI at 4/27/2022
Happening now: A panel at the @CommerceGov @NIST @FinRegLab HAI symposium focused on improving transparency and mitigating bias in the models used for consumer credit. @StanfordGSB @BlattnerLaura @NCRC Join the livestream: https://t.co/t46VsPA7BD
Shared by Stanford HAI at 4/27/2022
“We’re dealing with traumatized datasets, which carry the legacies of what our society has emboldened into the information we collect,” says @nturnerlee. “The question becomes, how are we dealing with these traumatized datasets, and what are we doing to create better datasets?”
Shared by Stanford HAI at 4/27/2022
“We shouldn’t view machine learning & AI as things that somehow give us more objectivity. At the end of the day, we should think of them as mirrors that reflect what we’ve been doing in the past.” – @manish_raghavan at the symposium on AI and the economy: https://t.co/t46VsPA7BD
Shared by Stanford HAI at 4/27/2022
Government is plagued by challenges with technology, says @danho1. Its systems are outdated, it can’t get/retain top talent, it can’t scale easily. “Revamping the public sector’s resources to effectively manage AI will be a monumental effort but not an insurmountable one.”
Shared by Stanford HAI at 4/27/2022
As remote work shifts to hybrid, experts including HAI Associate Director @robreich warn against the use of surveillance tools to monitor attendance. “It treats the employees as akin to machines optimized for maximal performance rather than human beings.” https://t.co/JQyzoyVKK6
Shared by Stanford HAI at 4/26/2022
How do we ensure the responsible use of AI in fostering inclusive economic growth? Scholars, policymakers, and leaders discuss the implications of AI deployment across various sectors, including financial services and healthcare. Sign up to join virtually: https://t.co/vYy833VaQf
Shared by Stanford HAI at 4/26/2022
In our annual report, see how we've championed multidisciplinary research, convened policy discussions, and created education programs to further our mission of advancing human-centered artificial intelligence. https://t.co/tFflxjOqQS
Shared by Stanford HAI at 4/25/2022
“Patients deserve to have their values reflected in this debate and in the algorithms. Adding a degree of patient advocacy would be a positive step in the evolution of AI in medical diagnostics," says HAI Fellow @KathleenACreel on AI ethics in healthcare. https://t.co/galqpOIYjR
Shared by Stanford HAI at 4/22/2022
Building a decision-making agent in a highly uncertain environment is a challenge for any developer. HAI Faculty Affiliate Mykel Kochenderfer's new book outlines the best available approaches. https://t.co/ABd9PhLIWH
Shared by Stanford HAI at 4/21/2022
Doctors have no simple test for autism. This new AI-driven brain ‘fingerprinting’ model predicts the severity of autism symptoms in patients and could potentially be a valuable tool in advancing diagnosis and treatment. https://t.co/3Ao21Jykl0
Shared by Stanford HAI at 4/20/2022
"You don't just sprinkle ethics on top of a research project. It has to start with a scientific question and the ethical framework of that question," says HAI Associate Director @Rbaltman. https://t.co/mpaia84ZI9
Shared by Stanford HAI at 4/19/2022
Scholars with the Center for Research on Foundation Models show theoretically that contrastive pre-training can learn features that vary substantially across domains but still generalize to the target domain. https://t.co/bBisEw4hJe
Shared by Stanford HAI at 4/18/2022
How do we ensure the deployment of artificial intelligence across different sectors leads to inclusive economic growth? On Apr 27, join us virtually as we convene researchers and policymakers at the helm of responsible AI. @CommerceGov @NIST @FinRegLab https://t.co/lEy0LDl2jH
Shared by Stanford HAI at 4/18/2022
Today at 12 pm PT: If AI capabilities eventually exceed those of humans across a range of real-world decision-making scenarios, what should we do about it? This @DigEconLab seminar features @UCBerkeley’s Stuart Russell. https://t.co/g7etjVbrTr
Shared by Stanford HAI at 4/18/2022
Scholars with the Center for Research on Foundation Models propose a theoretical framework for understanding the success of the contrastive learning approach when pre-training machine learning models.
Shared by Stanford HAI at 4/15/2022
In robotics, how do we fuse different sensor modalities together and what kind of information from each sensor is useful for a particular task? In last week’s HAI seminar, @leto__jean detailed her recent work on touch and vision for generalizability: https://t.co/ZFKVdkhnXM
Shared by Stanford HAI at 4/14/2022
Increasingly, doctors are calling on artificial intelligence to help diagnose conditions ranging from cancer to sepsis. How can these tools take into account patient values? https://t.co/JNZVhCEBH0
Shared by Stanford HAI at 4/14/2022
Congrats to HAI Associate Director @DanHo1 for being appointed as a member of the National Artificial Intelligence Advisory Committee. In this role, he will advise on national AI policy, from competitiveness to workforce implications and societal impacts. https://t.co/TNEcZSZCnd
Shared by Stanford HAI at 4/14/2022
A new way to optimize embodied artificial intelligence: https://t.co/whhUheyJUt
Shared by Stanford HAI at 4/13/2022
.@openAI Co-Founder and Chief Scientist @ilyasut: One of the cool things about #GPT3 is that these models are no longer science projects – they’re actually useful. It has been used in production by hundreds of companies.
Shared by Stanford HAI at 4/12/2022
Do words matter? @DigEconLab Fellow @SarahHBana's research uses a #FoundationModel to find out how well the words in job listings predict salaries.
Shared by Stanford HAI at 4/12/2022
.@CornellCIS dean Kavita Bala discusses 3 key areas of her research: Physical-based visual appearance models, inverse graphics, and world-scale visual discovery. Watch live: https://t.co/kMbyWYxwWd
Shared by Stanford HAI at 4/12/2022
.@lizjosullivan explains two main misconceptions about #ResponsibleAI: 1) that we’ve reached consensus on how to debias existing enterprise AI, and 2) that regulation is the only solution to algorithmic discrimination we need for safe, #AccountableAI.
Shared by Stanford HAI at 4/12/2022
Let's talk #AccountableAI. Watch @HarvardHBS’s Himabindu Lakkaraju @Berkeley_EECS’s @dawnsongtweets, @Google’s Om Thakkar, and Parity CEO @lizjosullivan weigh in on AI regulation, data privacy, and biases in algorithmic predictions and decision-making. https://t.co/vD91AjltSX
Shared by Stanford HAI at 4/12/2022
Live in 15 mins: 2022 HAI Spring Conference on Key Advances in Artificial Intelligence. Check out today’s agenda and join us online: https://t.co/vD91AjltSX
Shared by Stanford HAI at 4/12/2022
Researchers believe AI has the potential to usher in an era of faster and cheaper drug discovery and development, but not without a slew of ethical pitfalls. Here @Rbaltman and @GSK’s Kim Branson discuss an ethical framework. https://t.co/8L2umb5qUN
Shared by Stanford HAI at 4/11/2022
Researchers combine satellite data and algorithms to track forced labor camps in the Brazilian rainforest, helping local law enforcement root out human trafficking. https://t.co/LDh62h6RYk
Shared by Stanford HAI at 4/11/2022
Hear powerful insights into the most recent advances in AI from a leader in computer architecture, @nvidia Chief Scientist Bill Dally. Sign up for tomorrow's livestream: https://t.co/5so1TzOpoo
Shared by Stanford HAI at 4/11/2022
"Our AI-driven brain ‘fingerprinting’ model could potentially be a powerful new tool in advancing diagnosis and treatment for autism," says HAI Faculty Affiliate Kaustubh Supekar. https://t.co/C9YDPEv5oj
Shared by Stanford HAI at 4/7/2022
Joining us next week is @OpenAI Co-Founder and Chief Scientist @ilyasut who will discuss foundation models alongside other leading experts in the field. Learn more about this year's Spring Conference: https://t.co/J5onUiC13W
Shared by Stanford HAI at 4/6/2022
"Strikingly, one in every five computer science PhD graduates now specializes in AI/machine learning. That's significant and actually pretty wild to me." Jack Clark, Co-Chair of the AI Index, summarizes key points in this year's report. https://t.co/Fh60kUHGOs
Shared by Stanford HAI at 4/6/2022
This year's Spring Conference will discuss the future of AI with a focus on three key areas: 💡Accountable AI 💡Foundation models 💡Embodied AI in virtual and real worlds Join us alongside leading scholars and industry experts next Tuesday, April 12: https://t.co/OoZYKgr0x6
Shared by Stanford HAI at 4/5/2022
Stanford researchers have developed an algorithm that can detect autism by looking at brain “fingerprints” – a step closer toward earlier diagnoses and more effective interventions. https://t.co/R5WqXbuoeE
Shared by Stanford HAI at 4/5/2022
Next week, we're addressing three of the most critical areas of artificial intelligence – foundation models, physical/simulated world, and accountable AI. Find out who's speaking, who should attend, and what insights you might gain from this event. https://t.co/rScWA3cMOn
Shared by Stanford HAI at 4/4/2022
What are the latest developments in artificial intelligence? @annavmeyer of @Inc outlines seven trends cited in the #AIIndex2022 report. https://t.co/6p7i49KW42
Shared by Stanford HAI at 4/1/2022
On April 12, HAI will bring together leading scholars and industry experts to discuss three of the most critical areas of artificial intelligence: https://t.co/EjjVyx73In
Shared by Stanford HAI at 4/1/2022
Inside this year's #AIIndex2022: a snapshot of #AIEthics research today – from language models' toxicity and truthfulness problems to the extent of gender bias in machine translation systems. https://t.co/Z17iDtBYJH
Shared by Stanford HAI at 4/1/2022
Stanford adjunct professor and HAI faculty member @AndrewYNg talks about foundation models, the data-centric approach, and AI in the next 10 years: https://t.co/B20fYR8sMw
Shared by Stanford HAI at 4/1/2022
In our new annual report, see how we've championed multidisciplinary research, convened policy discussions, and created education programs to further our mission of advancing human-centered artificial intelligence. https://t.co/cEYhPDmr4R
Shared by Stanford HAI at 3/30/2022
Next week, Stanford's Jeannette Bohg (@leto__jean) will tackle one of the main challenges in robotics: exploring what representations of raw perceptual data enable a robot to better learn and perform contact-rich manipulation skills. Sign up for this event: https://t.co/PXz0zdiVjk
Shared by Stanford HAI at 3/30/2022
A few of the #AIIndex2022's major findings, highlighted via @VentureBeat: more countries are regulating AI systems than ever before; AI ethics is entering the mainstream; the cost to train AI systems has decreased; and private investment continues to grow. https://t.co/3cI9jg8xeH
Shared by Stanford HAI at 3/25/2022
Starting next week: A new continuing education course co-developed with HAI, which will explore the promise and potential perils of artificial intelligence, the metaverse, cryptocurrencies, and other emerging technologies. https://t.co/0i1PeDrx0J
Shared by Stanford HAI at 3/24/2022
This year's Spring Conference focuses on foundation models, accountable AI, and embodied AI. HAI Associate Director and event co-host @chrmanning explains these key areas and why you should not miss this event: https://t.co/mQn6CrxsMF
Shared by Stanford HAI at 3/24/2022
How much will sea levels rise and how fast? Preparing for the future requires considering infinite scenarios. A research team set out to develop a faster, smarter data collection method to help policymakers get the answers they need to plan ahead. https://t.co/n8D21lmDor
Shared by Stanford HAI at 3/24/2022
This spring, join us for a new round of weekly seminars featuring experts from various disciplines and areas of inquiry related to human-centered artificial intelligence. Sign up for our newsletter so you don't miss any event: https://t.co/PLP4d21QIZ
Shared by Stanford HAI at 3/23/2022
This Wednesday, @Stanford's Chuck Eesley and @BloombergBeta partner James Cham will explore how companies can use artificial intelligence to gain a competitive edge and achieve their social impact goals. Register for this free webinar: @ECorner https://t.co/chVg1bHS0n
Shared by Stanford HAI at 3/22/2022
What's next in artificial intelligence? Join us next month at our Spring Conference as we convene experts in foundation models, the simulated world, and accountable AI to discuss where the field is heading. https://t.co/kibxMuxyQp
Shared by Stanford HAI at 3/22/2022
Some key highlights of the #AIIndex2022: Advances in natural language processing, a growing focus in AI ethics, high levels of AI investment, and U.S.-China cross-country collaborations. Read via @FastCompany https://t.co/JRGpLwjSk6
Shared by Stanford HAI at 3/21/2022
From the future of work to education trends, our industry briefs distill artificial intelligence research from all of Stanford’s schools, bringing original academic research to bear on issues of importance across different industries. https://t.co/4aTf7E3C6Q
Shared by Stanford HAI at 3/18/2022
Email auto-complete, voice assistants like Siri or Alexa, and translation apps don’t work for everyone equally. Research shows who is left behind in these AI-enabled communication tools. https://t.co/jZ3TF2XkvB
Shared by Stanford HAI at 3/18/2022
➡️ Data, data, data. Top results across technical benchmarks have increasingly relied on the use of extra training data to set new state-of-the-art results. Read the full report here: https://t.co/sssotp8FgN
Shared by Stanford HAI at 3/17/2022
Where is AI research concentrated? More scholars are focusing on pattern recognition and machine learning, while fewer are working on natural language processing and linguistics. Find out more about the state of AI in this year's #AIIndex2022: https://t.co/F2ZCzHgFzX
Shared by Stanford HAI at 3/17/2022
The #AIIndex2022 illustrates one of the key challenges that the field of AI faces today: The bigger and more capable an AI system is, the more likely it is to produce outputs that do not align with our human values, says AI Index Co-Chair @jackclarkSF. https://t.co/aOkp18wzJX
Shared by Stanford HAI at 3/17/2022
According to #AIIndex2022, 2021 saw the globalization and industrialization of artificial intelligence intensify, while the ethical and regulatory issues of these technologies multiplied. Read the main takeaways of the report: https://t.co/8pT6APRGHq
Shared by Stanford HAI at 3/16/2022
Just released: The #AIIndex2022. This year's report provides new and updated metrics across all aspects of AI: research and development, technical AI ethics, AI policy and governance, diversity in AI, and more. Read the report: https://t.co/ESTJ49xHYv
Shared by Stanford HAI at 3/16/2022
Set for release this week, the new #AIIndex2022 covers: ✅ Jobs ✅ Investment ✅ Education ✅ Ethics ✅ Technical capability ✅ Policy Sign up for our newsletter to receive the report tomorrow: https://t.co/6piWcLsnZQ
Shared by Stanford HAI at 3/15/2022
In this recent seminar, CS professor Carlos Guestrin presented a framework for increasing trust in machine learning that's anchored on three pillars: clarity, competence, and alignment. Watch now: https://t.co/jYLkMfeCdR
Shared by Stanford HAI at 3/14/2022
The healthcare industry could benefit from systems that run on artificial intelligence, but there are many factors that leaders should carefully consider before deploying such technology. A new course discussing these issues will be available this Spring. https://t.co/w3K9U2ZjkJ
Shared by Stanford HAI at 3/11/2022
"We never stopped innovating, but our tools are always double-edged swords. That's why it’s so important that risk assessment goes hand-in-hand with supporting innovation,” says @drfeifei at @BPC_Bipartisan's panel on AI and Policy last month. Watch here: https://t.co/gLDdmzcmxp
Shared by Stanford HAI at 3/11/2022
In this course, Stanford faculty and industry experts will discuss how healthcare executives and policymakers can leverage AI to optimize workflows within clinical and business environments. Regular application closes March 11: https://t.co/LHrIYFoaCg
Shared by Stanford HAI at 3/11/2022
We are hiring! Join us as we work together to help build the future of artificial intelligence. Visit our website to check out current job openings: https://t.co/idL1jQfL60
Shared by Stanford HAI at 3/10/2022
Congratulations to HAI Faculty Affiliate @drnigam for being appointed as the first chief data scientist for @StanfordHealth. In this new role, he will lead an effort to advance the use of artificial intelligence in patient care and hospital administration. https://t.co/TiHHl4ktIE
Shared by Stanford HAI at 3/9/2022
Many refined AI models fail the test of trustworthiness when deployed in the real world. To change that, HAI Faculty Affiliate @james_y_zou says developers must turn their attention toward #DataCentricAI. https://t.co/dLPYt8ohnk
Shared by Stanford HAI at 3/4/2022
Before bringing any AI system on board, healthcare executives must consider a clear data strategy, a means of testing before buying, and a clear set of evaluation metrics to achieve their intended goals, says HAI Faculty Member @drnigam. https://t.co/q5eqGcv7JU
Shared by Stanford HAI at 3/3/2022
A recent Stanford study challenges our current understanding of how brains process emotion, offering a new insight that could lead to more successful treatments for mental disorders. @StanfordMed https://t.co/hu3vWsEjpP
Shared by Stanford HAI at 3/3/2022
Doctors may rely on retrospective studies to evaluate treatment decisions, but research results can often be confounded by different factors. Stanford scholars used machine learning to identify potential sources of selection bias in medical research: https://t.co/PMKDozsdlh
Shared by Stanford HAI at 3/2/2022
In this @stanford course, HAI Co-director @drfeifei, Associate Director @rbaltman, and Faculty Affiliates Jeremy Bailenson and @danboneh will explore the promise and perils of artificial intelligence, virtual reality, blockchain, and the metaverse. https://t.co/u5Oyvt1V3X
Shared by Stanford HAI at 2/28/2022
This Wednesday: Join @AmyZegart, Chair of HAI Steering Committee on International Security, as she talks about her latest book, "Spies, Lies, and Algorithms," which explores the history and future of U.S. intelligence. Register now: https://t.co/9Yh2MLvWTr
Shared by Stanford HAI at 2/28/2022
AI-mediated communication tools that are meant to improve our quality of life aren't built for all. Device and internet access, age, user speech characteristics, and AI tool literacy are some of the barriers to adoption, says new research. https://t.co/RNYqylebh1
Shared by Stanford HAI at 2/24/2022
"Our technological future is the responsibility not of CEOs or engineers, but our democracy. People have been passive about technology’s impact on society. It’s time to exercise our democratic muscles more fully,” says HAI Faculty Member Jeremy Weinstein. https://t.co/GO3iInQ25b
Shared by Stanford HAI at 2/23/2022
Today at 10 am PT: HAI Faculty Affiliate Jeremy Bailenson will discuss the psychology of the metaverse. How would virtual reality change the way we communicate, teach, or build culture? Join us here: https://t.co/q4rVith0Rb
Shared by Stanford HAI at 2/23/2022
How do we train robots to better understand humans, and what skills do we need to work more seamlessly with robots? HAI Faculty Affiliate @DorsaSadigh discussed with @Rbaltman in this latest episode of @StanfordEng's The Future of Everything podcast: https://t.co/IKzH539cK8
Shared by Stanford HAI at 2/22/2022
Today at 10:30 am PT: Join HAI Co-director @drfeifei and other top AI researchers as they discuss with @BPC_Bipartisan some of the critical questions surrounding impact and risk assessment for artificial intelligence. https://t.co/hs7PjqhR4e
Shared by Stanford HAI at 2/22/2022
Postdoc opportunity: This GSK-Stanford Ethics Fellowship focuses on exploring the ethical considerations associated with using AI and machine learning to discover new medicines and improve clinical outcomes. Learn more: https://t.co/3QjvnWFKPI
Shared by Stanford HAI at 2/18/2022
How will virtual reality change the way we teach, communicate, and build culture? Next Wednesday, Stanford's Jeremy Bailenson will discuss the psychology of virtual reality and the metaverse. Register now: https://t.co/9PogrX2Hkg
Shared by Stanford HAI at 2/18/2022
History shows how biometrics for public health could be misused, and how this "should inform debates about deploying biometric systems to manage the coronavirus pandemic," writes former Stanford CISAC-HAI Fellow Michelle Spektor via the @washingtonpost: https://t.co/DRXZAYsyka
Shared by Stanford HAI at 2/17/2022
On February 22, HAI Co-Director @drfeifei will join other top AI researchers in a panel discussion hosted by @BPC_Bipartisan, which explores academic perspectives on AI impact and risk frameworks. Learn more: https://t.co/6rr5eahNFt
Shared by Stanford HAI at 2/17/2022
On March 2, HAI Associate Director @Rbaltman will discuss the use of artificial intelligence in drug discovery – exploring its promise and potential pitfalls as well as questions of justice and equity in drug research and access. @StanfordEng https://t.co/oCvg8GG8pq
Shared by Stanford HAI at 2/17/2022
Just released: @DigEconLab scholars say we need a better way to measure productivity in the digital age, better frameworks to understand AI's impact on labor, and immigration reform to attract and retain top talent. https://t.co/JOcG6VpdIr
Shared by Stanford HAI at 2/16/2022
Congratulations to @erikbryn and @percyliang for joining the inaugural cohort of AI2050 Fellows, who will work on key research opportunities and problems to ensure that society benefits from artificial intelligence. @DigEconLab https://t.co/6WJz1cFRG5
Shared by Stanford HAI at 2/16/2022
Missed this talk today? Learn what @mbayati means by "greedy" algorithms: https://t.co/BatlzDPOCQ
Shared by Stanford HAI at 2/16/2022
Join @mbayati on Wednesday as he discusses new research which explains how "greedy" algorithms that focus solely on exploitation perform well on multi-armed bandit (MAB) problems. Register here: https://t.co/K19TB8Mya3
Shared by Stanford HAI at 2/15/2022
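To make the "greedy" idea concrete, here is a minimal sketch (illustrative only, not the code from the research discussed): a purely exploitative agent on a Bernoulli multi-armed bandit that, after sampling each arm once, always pulls the arm with the best empirical mean.

```python
import random

def greedy_bandit(true_probs, n_rounds=10_000, seed=0):
    """Exploitation-only play on a Bernoulli bandit: after one forced pull
    of each arm, always choose the arm with the highest empirical mean.
    Returns the average reward per round."""
    rng = random.Random(seed)
    k = len(true_probs)
    counts = [0] * k          # pulls per arm
    wins = [0] * k            # total reward per arm
    total = 0
    for t in range(n_rounds):
        if t < k:
            arm = t           # seed every arm's estimate once
        else:
            arm = max(range(k), key=lambda a: wins[a] / counts[a])
        reward = 1 if rng.random() < true_probs[arm] else 0
        counts[arm] += 1
        wins[arm] += reward
        total += reward
    return total / n_rounds
```

With well-separated arms the greedy agent typically locks onto the best arm almost immediately, which is the behavior the research seeks to explain; its known failure mode is locking onto a bad arm after an unlucky early estimate.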
As a @C3DTI member, we invite scholars, developers, and researchers to leverage artificial intelligence and digital transformation to improve information security and secure critical infrastructure systems. Learn more here: https://t.co/0I8oMLVq6K
Shared by Stanford HAI at 1/6/2022
Using a sentiment analysis algorithm, a recent study of American and Japanese Twitter users suggests that emotional content tends to go viral when it violates, rather than supports, users' cultural values. https://t.co/KUy70ZThJL
Shared by Stanford HAI at 12/1/2021
Help shape the future of technology education at Stanford! @StanfordEthics and @StanfordHAI's Embedded EthiCS Fellowship is a collaborative program that embeds the teaching of ethics directly into Stanford’s computer science curriculum. Apply by Dec 8: https://t.co/n5y3x3Kmij
Shared by Stanford HAI at 11/24/2021
Interested in machine learning, statistics, and public policy? Apply now for Stanford RegLab's postdoctoral fellowship, which involves a high-impact collaboration with the IRS to build a more effective and equitable tax system. Applications close Nov 28: https://t.co/AblHsAy6MZ
Shared by Stanford HAI at 11/23/2021
.@Stanford scholars are working to develop solutions to bridge the gap between virtual and augmented reality technologies using expertise in optics and artificial intelligence. https://t.co/fbiV2j7F0n
Shared by Stanford HAI at 11/22/2021
This spring, HAI scholars put forth a call: What are the most radical policy proposals that respond to the challenges of an AI-driven future? Hosts @DanHo1 & @erikbryn talk about #RadicalPolicies4AI and the goals of the upcoming Fall Conference (Nov 9-10): https://t.co/qPVK19CGSD
Shared by Stanford HAI at 10/20/2021
A new platform at Stanford AIMI offers datasets at no cost, with the aim of spurring crowd-sourced AI applications in healthcare. https://t.co/8EjNZYknUB
Shared by Stanford HAI at 10/12/2021
Just released: A new HAI-commissioned report, which emerged from @StanfordLaw's policy practicum program, outlines key considerations and recommendations for creating a National Research Cloud (NRC). (1/2) https://t.co/5y2TCeSpGQ
Shared by Stanford HAI at 10/6/2021
Our warmest congratulations to HAI Associate Director @Susan_Athey who is now president-elect of the American Economic Association for 2022! @AEAjournals @AEAInformation https://t.co/Q7gy1jDZis
Shared by Stanford HAI at 10/5/2021
Congratulations to HAI Junior Faculty Fellow @JEichstaedt for receiving @IPPAnet's Early Career Researcher Award! An incredible recognition for his outstanding contributions to positive psychology. https://t.co/4Jl33iBOiU
Shared by Stanford HAI at 10/5/2021
Using a language metaphor, Stanford's Liqun Luo explains how AI researchers may benefit from a better understanding of how the various parts of the brain connect and communicate with each other. https://t.co/sGlwpCIlXh
Shared by Stanford HAI at 10/1/2021
Join our Co-Director @drfeifei for @scale_AI's TransformX Conference on Oct 6, where she will explore the evolutionary origins of vision and how it is the 'cornerstone for intelligence' for humans and machines alike. https://t.co/kT7k7o5t2i
Shared by Stanford HAI at 9/30/2021
Today at 10am: To explain black-box algorithms, we assess which variables matter to a decision by changing them. But not all changes are useful. In this seminar, Stanford's Art Owen introduces the cohort Shapley measure for understanding variable importance: https://t.co/hPx81KekkP
Shared by Stanford HAI at 9/29/2021
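As background, here is a sketch of the classic sampling-based Shapley attribution that cohort Shapley refines — an illustrative implementation of standard Shapley sampling, not Owen's cohort variant: average each feature's marginal contribution to the prediction over random feature orderings.

```python
import random

def shapley_sampling(f, x, baseline, n_perm=200, seed=0):
    """Monte Carlo estimate of classic Shapley feature attributions for f(x):
    average each feature's marginal contribution (switching it from its
    baseline value to its actual value) over random feature orderings."""
    rng = random.Random(seed)
    d = len(x)
    phi = [0.0] * d
    for _ in range(n_perm):
        order = list(range(d))
        rng.shuffle(order)
        current = list(baseline)
        prev = f(current)
        for i in order:
            current[i] = x[i]        # reveal feature i
            new = f(current)
            phi[i] += new - prev     # its marginal contribution in this order
            prev = new
    return [v / n_perm for v in phi]
```

By construction the attributions sum to f(x) - f(baseline), the "efficiency" property that makes Shapley values attractive for explaining individual predictions.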
Today at 3pm PT: HAI Associate Director @robreich joins other leading scholars for @BostonReview's event on #AI for Justice, where they will discuss the impact of artificial intelligence and what can be done to redirect it for the public good. https://t.co/pCyL137E5D
Shared by Stanford HAI at 9/28/2021
Missed last week's webinar on EU proposals moderated by HAI International Policy Fellow @MarietjeSchaake? @StanfordCyber just uploaded the recording. Check it out here: https://t.co/tbzKJ2QfaQ
Shared by
Stanford HAI
at
9/23/2021
Using a deep learning approach, researchers from Stanford and Cal Poly teamed up to develop a quicker way to assess building damage caused by wildfire. The tool may help focus recovery efforts and provide more immediate information to displaced residents. https://t.co/ak09CRSkGP
Shared by
Stanford HAI
at
9/23/2021
That's a wrap for the Hoffman-Yee Symposium! Special thanks to our presenters, our panel of judges, and guests who joined us in person or online. Be on the lookout for more information on which proposals were selected to receive follow-on funding.
Shared by
Stanford HAI
at
9/22/2021
In this interview about the book "System Error", co-author and HAI Associate Director Rob Reich describes the challenge posed by big tech's focus on optimization in a democratic society, and what actions citizens can take to change the system: https://t.co/0vfpbOysHN
Shared by
Stanford HAI
at
9/15/2021
Stanford scholars have developed machine learning methods to accurately predict molecular structures, unraveling one of the biggest problems in modern biology and medical discovery. https://t.co/W6Iohmoqkv
Shared by
Stanford HAI
at
9/10/2021
Exploring the variety of architectures in the brain may inspire AI researchers to design "new ways of putting multiple architectures together to build better systems than are possible with a single architecture alone,” says Stanford professor Liqun Luo. https://t.co/9OAovAPh4K
Shared by
Stanford HAI
at
9/9/2021
“We need to change the very operating system of how technology products get developed, distributed and used by millions and even billions of people,” said HAI Associate Director @robreich. Learn more about a new book he co-authored with Stanford scholars: https://t.co/uKNrFi8xUh
Shared by
Stanford HAI
at
9/7/2021
In 2019, the California Board of Parole Hearings held 6,061 hearings, each of which created a 150-page transcript. If NLP could “read” them, we could get a picture of how the parole process operates at scale and judge its fairness. https://t.co/jYksNjChLH
Shared by
Stanford HAI
at
8/25/2021
You can also read this comprehensive report from the Stanford Center for Research on Foundation Models that discusses these models’ capabilities, limitations, and impact. https://t.co/t61mUuZNgk
Shared by
Stanford HAI
at
8/24/2021
At 9:30 am PT, join us for the second day of our Workshop on #FoundationModels. Today’s speakers will discuss the economic impact of these massive models, their biases and perpetuated inequities, and more. Streaming live here: https://t.co/Twk6eeGM3z
Shared by
Stanford HAI
at
8/24/2021
Earlier this year, HAI convened researchers from OpenAI, Stanford, and other universities in a Chatham House Rule workshop to understand GPT-3’s capabilities, limitations, and potential impact on society. Below are some takeaways from the discussion: https://t.co/0MWY31d5kO
Shared by
Stanford HAI
at
8/24/2021
We are quickly progressing into an era where sophisticated bots will be everywhere – collecting debts, dispensing advice, and influencing our decisions. How can we make sure we can detect them? Stanford scholars propose the shibboleth rule for AI agents: https://t.co/keciJJBJVK
Shared by
Stanford HAI
at
8/24/2021
GPT-3, known for its scale and sophistication, is already being used for downstream applications such as reading and summarizing news articles. That could lead to serious consequences if bias in these models isn't remedied. https://t.co/Ejo4jpDICI
Shared by
Stanford HAI
at
8/24/2021
Starting in 1 hour: The two-day workshop on #FoundationModels will bring together scholars and experts from a vast array of disciplines to discuss foundation models (e.g. GPT-3), the underlying technology, and the broader societal impact. Streaming live here https://t.co/U6mhDMUh1m
Shared by
Stanford HAI
at
8/23/2021
HAI Privacy and Data Policy Fellow @kingjen discusses Apple's new plan for child safety and why it's problematic in this @verge Decoder podcast. Listen or read the transcript here: https://t.co/5LZuWI3fEE
Shared by
Stanford HAI
at
8/19/2021
In collaboration with medical professionals, Stanford scholars developed a machine learning algorithm that could delineate patterns and identify tumor subtypes among various cancers to help match patients with the appropriate treatment. https://t.co/pG2AzJY5k9
Shared by
Stanford HAI
at
8/19/2021
As artificial agents adapt to their environments, they will become so unique it will be harder to predict or understand the nature of their conduct. Who should be held responsible when artificial agents engage in illegal activities? https://t.co/AowoyIuZc1
Shared by
Stanford HAI
at
8/18/2021
AI is being transformed by self-supervised models trained at scale (e.g. BERT, GPT-3, CLIP). The Center for Research on Foundation Models, born out of HAI, convenes 175+ scholars across Stanford to study these models. Learn more about this new initiative: https://t.co/vtjMJmRKQX
Shared by
Stanford HAI
at
8/18/2021
NEW: This comprehensive report investigates foundation models (e.g. BERT, GPT-3), which are engendering a paradigm shift in AI. 100+ scholars across 10 departments at Stanford scrutinize their capabilities, applications, and societal consequences. https://t.co/wsk9AImOFR
Shared by
Stanford HAI
at
8/18/2021
At Facebook alone, AI models had taken down nearly 27 million hate speech posts by the end of 2020. That's a staggering number that no army of human moderators could keep up with, yet these AI decisions remain controversial. Here's why: https://t.co/E0Rh4UOk4X
Shared by
Stanford HAI
at
8/17/2021
Now posted: The full lineup for the Workshop on Foundation Models. This program brings together speakers from a vast array of disciplines to discuss foundation models (e.g. GPT-3), the underlying technology, and the broader societal impact. https://t.co/9Q92kAwtnh
Shared by
Stanford HAI
at
8/16/2021
Stanford scholars have developed a machine learning tool that could analyze causes of extreme climate events and help make better predictions in the future. https://t.co/xUpkRnx8SN
Shared by
Stanford HAI
at
8/13/2021
Bots are increasingly becoming more sophisticated. How can we ensure that a bot remains detectable in conversations or written exchanges? Stanford scholars propose the "shibboleth rule" for artificial agents: https://t.co/MDVVsGLqh2
Shared by
Stanford HAI
at
8/12/2021
What is human-centered artificial intelligence? https://t.co/3TAzZmzYFX
Shared by
Stanford HAI
at
8/11/2021
Scholars have collected a new dataset called ArtEmis to train neural speakers to generate emotional responses to visual art. “We can show it a new image it has never seen, and it will tell us how a human might feel,” says Stanford's Panos Achlioptas. https://t.co/MuT8r4oBUr
Shared by
Stanford HAI
at
8/11/2021
Researchers developed a machine learning program to help when students get stuck in self-paced learning. The model could have broader implications – from expanding access to poor communities to improving workplace training and continuing ed for adults. https://t.co/5ms97gDIF8
Shared by
Stanford HAI
at
7/27/2021
Researchers found that large language models like #GPT3 perpetuate stereotypes against Muslims, highlighting the pressing need to reduce severe bias in such models. https://t.co/Rh9v9ssQvv
Shared by
Stanford HAI
at
7/26/2021
AI has the potential to revolutionize efforts to address global challenges, but the lack of large quantities of labeled data could be a barrier. Stanford's Stefano Ermon presents new ML techniques to facilitate AI-enabled solutions for sustainable development. https://t.co/VVwt1ZOOVp
Shared by
Stanford HAI
at
7/22/2021
Researchers found that a "greedy" algorithm could improve efficiency and lower opportunity costs in tests using the multi-armed bandit approach. "The algorithm isn’t actively exploring, and yet it is nonetheless learning," says Stanford's Mohsen Bayati. https://t.co/508dR8yDm5
Shared by
Stanford HAI
at
7/20/2021
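A minimal simulation of the purely greedy strategy described above: always pull the arm with the best empirical mean, with no explicit exploration step. The arm means, Gaussian noise model, and horizon here are invented for illustration and are not taken from the Stanford study.

```python
import random

def greedy_bandit(true_means, horizon, seed=0):
    """Greedy multi-armed bandit play with Gaussian rewards.

    After one free sample per arm, always exploit the arm with the best
    empirical mean. Noise in the estimates supplies a kind of incidental
    exploration — "the algorithm isn't actively exploring, and yet it is
    nonetheless learning."
    """
    rng = random.Random(seed)
    counts = [1] * len(true_means)                    # one pull each to start
    sums = [rng.gauss(m, 1.0) for m in true_means]
    for _ in range(horizon):
        arm = max(range(len(true_means)), key=lambda a: sums[a] / counts[a])
        sums[arm] += rng.gauss(true_means[arm], 1.0)  # observe a noisy reward
        counts[arm] += 1
    return counts

pulls = greedy_bandit([0.2, 0.5, 0.9], horizon=1000)
```

Because greedy play can lock onto a suboptimal arm after unlucky early draws, its advantages show up in settings with many arms, which is part of what makes the result counterintuitive.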
Experts warn against #agritech that deepens existing inequalities and environmental issues in the food and agricultural systems. What are some examples of sustainable and socially just models? Learn more from Stanford’s Tech and Racial Equity Conference: https://t.co/lnrgjsO2Vh
Shared by
Stanford HAI
at
7/16/2021
From human psychology to language analysis, natural language processing is transforming how we study human emotions and the social meanings behind the words we use. https://t.co/SKUram8e0B
Shared by
Stanford HAI
at
7/16/2021
Doctors relying on retrospective studies to give medical advice often need to watch out for selection bias. A team of researchers used machine learning to uncover potential sources of confounding from patients' records. https://t.co/9SMwBwfXhg
Shared by
Stanford HAI
at
7/15/2021
Drawing on their previous work with quantitative image analysis for predicting lung cancer prognosis, HAI faculty member Olivier Gevaert and his team found that a similar approach plus data fusion can be used to better triage Covid-19 patients. https://t.co/M1Tf2MpEmP
Shared by
Stanford HAI
at
7/14/2021
"To remain the global leader in AI, the United States must build a much more expansive, inclusive, and robust innovative ecosystem of industry, academia, civil society and the federal government," says HAI Co-Director Dr. Fei-Fei Li via @TheHillOpinion. https://t.co/p1VPwkFJjb
Shared by
Stanford HAI
at
7/13/2021
How can we ensure that racial bias is not perpetuated in artificial intelligence? HAI Co-Director Fei-Fei Li shares her thoughts during an open discussion on AI and healthcare. Watch the entire talk here: https://t.co/K2S6Byq6yS
Shared by
Stanford HAI
at
7/12/2021
The first large-scale study of real people's mortgage data, led by HAI Faculty Affiliate Laura Blattner, reveals data disparities that make loan default prediction tools less accurate for minorities. https://t.co/1z0BocE6SJ
Shared by
Stanford HAI
at
7/12/2021
Stanford's Open Virtual Assistant Lab uses a novel and comparatively inexpensive approach to training virtual assistants, which researchers hope will widen the competition in smart speakers and address privacy concerns. https://t.co/F8mzHo3F9y
Shared by
Stanford HAI
at
7/12/2021
Debates around AI's impact on the workplace often focus on worker displacement, but less attention is paid to another critical issue: worker surveillance using "bossware." To address both, we need to change the incentives at play, says HAI's @robreich. https://t.co/Ot7mgFqDeo
Shared by
Stanford HAI
at
7/9/2021
The 2021 AI Index report indicates increasing government attention on artificial intelligence in the U.S.: the 116th Congress mentioned AI three times more than the previous one. Discover some of the latest AI trends from this year's report: https://t.co/PkZsR8RVbV
Shared by
Stanford HAI
at
7/8/2021
How do we address the quality of training data that models learn from? Snorkel AI, an idea born out of Stanford AI Lab, provides a novel way to generate the right kind of data necessary to develop effective algorithms. https://t.co/B2zWNSAzY2
Shared by
Stanford HAI
at
7/6/2021
"Without federal funding of this basic and foundational AI research, which traditionally takes place at universities and then is commercialized in industry, the pipeline of AI innovation will quickly run dry," says @drfeifei in her op-ed @TheHillOpinion: https://t.co/5y5UEc4zPU
Shared by
Stanford HAI
at
7/6/2021
With rapid advances in AI have come increased attention to societal risks and calls for more proactive measures to mitigate them. This brief outlines key aspects of the EU's AI Act – the first legal framework for regulating “high-risk” AI: https://t.co/xShJ4ruXW6
Shared by
Stanford HAI
at
7/6/2021
Stanford research shows greedy algorithms are more efficient in multi-armed bandit tests. This could have implications for everything from web testing to drug trials. https://t.co/4aacvipHo2
Shared by
Stanford HAI
at
7/6/2021
There are three main types of AI interpretability: how a model works, why the model input produced the output, and an explanation that brings about trust in a model. “We need to know which of those we’re aspiring for,” says HAI faculty member Nigam Shah. https://t.co/h4nH2DlpMI
Shared by
Stanford HAI
at
7/5/2021
Stanford’s Tech and Racial Equity conference brought together experts across sectors to discuss smart cities, blockchain, policing tech, and digital agrifood through the lens of racial justice. Read some of the key points from the two-day discussion: https://t.co/eNkpPkIf3q
Shared by
Stanford HAI
at
7/2/2021
At HAI, we focus on applications that augment and enhance human capabilities rather than displace or replace them. In this brief, we provide a sampling of key research at Stanford that can help inform the future of work: https://t.co/qFlHxMSDcV
Shared by
Stanford HAI
at
6/23/2021
To better understand #GPT3’s capabilities, limitations, and potential impact on society, HAI convened researchers from OpenAI, Stanford, and other universities earlier this year. Here are the highlights of their discussion: https://t.co/n5PMgeVuGp
Shared by
Stanford HAI
at
6/22/2021
We’re pleased to join @StanfordCSLI in welcoming our 2021 summer interns! As part of the program, students are equipped with technical skills and experience needed to develop research projects with a focus on language, learning, computation, and cognition. https://t.co/9sfDdl7vXV
Shared by
Stanford HAI
at
6/21/2021
ICYMI: Last week, a panel of AI policy experts examined the merits and flaws of the EU’s AI Act, as well as its transatlantic and global implications. Watch the session – which marks our 100th event since going virtual last year: https://t.co/W1UKHKD2un
Shared by
Stanford HAI
at
6/21/2021
Can we use machine learning to make legal hearings more fair? By processing large volumes of case records, NLP presents opportunities as well as new challenges to tackle: https://t.co/m9KigynCXk
Shared by
Stanford HAI
at
6/16/2021
HAI faculty member @tobigerstenberg and his colleagues at Stanford developed a computer model – the first of its kind – of how humans judge causation in physical settings, which could pave the way for more intuitive and explainable AI: https://t.co/RpzgYZWNIa
Shared by
Stanford HAI
at
6/10/2021
How the brain processes what we see involves complex algorithms. Learning these algorithms can enhance not only artificial intelligence, but also our understanding of the brain, says HAI Faculty Affiliate @dyamins. Hear HAI's vision from various perspectives: https://t.co/sBc5MnknYE
Shared by
Stanford HAI
at
6/8/2021
Advances in AI have made fake content increasingly easy to produce and spread. This brief suggests policy measures to address this growing threat – including shifting attention from the output toward the different actors involved in its creation. https://t.co/BdXRT0tIEA
Shared by
Stanford HAI
at
6/7/2021
We’re curious about your information diet! How do you stay up to date with the ever-accelerating field of artificial intelligence?
Shared by
Stanford HAI
at
6/4/2021
“Disinformation and the algorithmic amplification of lies, polarization, incitements to hatred and violence are now a global problem,” says HAI Int’l Policy Fellow @MarietjeSchaake. She spoke with Aspen Institute’s Ryan Merkley to discuss how the EU is addressing this issue.
Shared by
Stanford HAI
at
6/3/2021
On June 3, Reboot’s latest seminar will feature “The Alignment Problem” author @brianchristian, who offers a gripping account of the continuously evolving conversation on AI ethics. https://t.co/CjjGLF1nC3
Shared by
Stanford HAI
at
6/2/2021
How can policy address the challenge of deepfakes and disinformation? This HAI policy brief explains the technology and applications of Generative Adversarial Networks (GANs) and suggests measures to regulate fake content made possible by this system. https://t.co/xmhJDk7ntn
Shared by
Stanford HAI
at
6/2/2021
With data driving decisions and growth, should each of us be compensated for the use of our data? HAI Faculty Affiliate James Zou and student Amirata Ghorbani have developed a framework to fairly measure the value of data for AI models: https://t.co/Dr8V8aPZ9J
Shared by
Stanford HAI
at
6/1/2021
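The Data Shapley idea can be sketched as a Monte Carlo estimate over random orderings of the training set. The `utility` function and toy data below are stand-ins of my own — Ghorbani and Zou's framework scores subsets by a trained model's held-out performance, not by this label-accuracy shortcut.

```python
import random

def data_shapley_mc(n_points, utility, rounds=200, seed=1):
    """Monte Carlo estimate of each training point's Shapley value.

    `utility(indices)` scores a model trained on that subset of points;
    a point's value is its average marginal gain over random arrival orders.
    """
    rng = random.Random(seed)
    values = [0.0] * n_points
    for _ in range(rounds):
        order = list(range(n_points))
        rng.shuffle(order)
        subset, prev = [], utility([])
        for i in order:
            subset.append(i)
            score = utility(subset)
            values[i] += score - prev    # marginal gain of point i
            prev = score
    return [v / rounds for v in values]

# Toy setting: each point is (x, label), the true label of x is x % 2,
# and "utility" is the fraction of correctly labeled points included.
data = [(0, 0), (1, 1), (2, 1)]          # the third label is noisy
utility = lambda idx: (
    0.0 if not idx else sum(data[i][1] == data[i][0] % 2 for i in idx) / len(idx)
)
values = data_shapley_mc(len(data), utility)
# The mislabeled point receives a non-positive value.
```

Averaging over sampled orderings keeps the cost manageable: exact Shapley values would require retraining over every subset, which is infeasible for real datasets.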
HAI Associate Director Daniel Ho discusses the academic perspective on facial recognition technology. “There are really serious questions about how well such technology performs when it’s tested only on a limited set of imagery." Listen via @KQEDForum: https://t.co/xPVyPnDCmR
Shared by
Stanford HAI
at
5/28/2021