Melanie Mitchell
@MelMitchell1
Davis Professor at the Santa Fe Institute. New book, "Artificial Intelligence: A Guide for Thinking Humans": https://t.co/g4VrG7tcrw
Tweets by Melanie Mitchell
O brave new world, that has such language models in it! https://t.co/37zlZ77mAX
Shared by Melanie Mitchell, 5/1/2022
Ilya Sutskever: ML benchmarking is broken. Need to evaluate on zero-shot or few-shot. 100% agree!
Shared by Melanie Mitchell, 4/12/2022
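To make the zero-shot/few-shot distinction in the tweet above concrete, here is a minimal sketch of the two evaluation styles. The `complete` function is a hypothetical stand-in for any text-completion model, and the sentiment task and examples are purely illustrative, not from the tweet:

```python
# Minimal sketch: zero-shot vs. few-shot evaluation of a language model.
# `complete` is a hypothetical text-completion function (prompt -> string);
# the sentiment task below is illustrative only.

def zero_shot_prompt(text: str) -> str:
    # Zero-shot: the model sees only an instruction, no solved examples.
    return (
        "Classify the sentiment as positive or negative.\n"
        f"Text: {text}\nSentiment:"
    )

def few_shot_prompt(text: str) -> str:
    # Few-shot: a handful of solved examples precede the query; the model
    # is still never fine-tuned on the benchmark itself.
    return (
        "Classify the sentiment as positive or negative.\n"
        "Text: I loved this film.\nSentiment: positive\n"
        "Text: A complete waste of time.\nSentiment: negative\n"
        f"Text: {text}\nSentiment:"
    )

def accuracy(complete, dataset, make_prompt) -> float:
    # Score the model on (text, label) pairs with no task-specific training,
    # which is the point of the benchmarking argument above.
    hits = sum(complete(make_prompt(text)).strip() == label
               for text, label in dataset)
    return hits / len(dataset)
```

In both cases the benchmark's training split is never used to update the model, which is what distinguishes this protocol from the fine-tuned leaderboard evaluation being criticized.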
Just heard Christopher Manning say that using GPT-3 to train suicide prevention workers [by pretending to be a "troubled youth"] is safe because "it's just used for training the workers." Wut?
Shared by Melanie Mitchell, 4/12/2022
Visual recognition is a key part of solving Bongard Problems (BPs). It's also important to test a system on many different instances of a "same vs. different" concept.
Shared by Melanie Mitchell, 4/8/2022
Until then, I will admire the impressive products of machine learning and big data, but will not mistake them for progress toward general intelligence. (8/8)
Shared by Melanie Mitchell, 4/8/2022
We humans can solve these visual puzzles due to our core knowledge of basic concepts and our abilities of flexible abstraction and analogy. Bongard realized how central these abilities are to human intelligence. (5/8)
Shared by Melanie Mitchell, 4/8/2022
However, six decades after Bongard proposed these puzzles as a challenge for machine intelligence, there is still no AI system that can solve these problems in any general way. (6/8)
Shared by Melanie Mitchell, 4/8/2022
If such an AI system were created, I would be convinced that the field is making real progress on human-level intelligence. (7/8)
Shared by Melanie Mitchell, 4/8/2022
Very impressive---indeed, awe-inspiring---AI demos this last week, e.g., from OpenAI (image generation) and Google (text generation). These demos seem to convince many people that current AI is getting closer and closer to human-level intelligence. 🧵 (1/8)
Shared by Melanie Mitchell, 4/8/2022
Stochastic parrots and hidden hamsters -- nice animal analogies to explain large language models. "It’s kinda like if someone were showing off some fancy car engine but the vehicle is actually being powered by some hidden hamster wheels." https://t.co/tiu3rjzsO0
Shared by Melanie Mitchell, 3/28/2022
A colleague asked me to post this ad for faculty positions in machine learning at the Hamburg University of Technology in Germany. For more info, see https://t.co/Hd5FGcGxIC
Shared by Melanie Mitchell, 3/19/2022
Just as one example, following the logic that "critiques without solutions" are not very useful, one might say that work showing adversarial vulnerability of ML models is "not very useful" if it doesn't propose a solution. This would be a very harmful approach.
Shared by Melanie Mitchell, 3/17/2022
Are symbols necessary for intelligence? Do symbols in AI need to emerge from non-symbolic substrates in order to be grounded and to avoid brittleness? (h/t @livcomp) (8/9)
Shared by Melanie Mitchell, 3/14/2022
AI was *explicitly defined* to be the study of intelligence from the point of view of symbolic systems, problem-solving (e.g., game-playing, theorem-proving, puzzle-solving), and performance. (4/9)
Shared by Melanie Mitchell, 3/14/2022
Psychology vs. neurophysiology
Serial vs. parallel
Two things particularly struck me in this paper: First, in all cases, the term "artificial intelligence" is identified with the focus on the left-hand side of the "vs." (3/9)
Shared by Melanie Mitchell, 3/14/2022
Amidst all the recent Twitter talk on the role of symbols vs. deep learning in AI, I came across a very interesting article by Allen Newell, published in 1982, entitled "Intellectual Issues in the History of Artificial Intelligence". https://t.co/75JrnnjMhV 🧵 (1/9)
Shared by Melanie Mitchell, 3/14/2022
Newell focuses on important splits that arose in the first 30 years of research on artificial intelligence. Some examples:
Symbols vs. numbers
Symbolic vs. continuous systems
Problem-solving vs. pattern recognition
Performance vs. learning
... (2/9)
Shared by Melanie Mitchell, 3/14/2022
Two interesting reads for your Friday:
@GaryMarcus on deep learning and symbol processing: https://t.co/ThAgJDOVco
David Krakauer on "exbodiment" in cognition: https://t.co/nUXYRrDYKU
Shared by Melanie Mitchell, 3/11/2022
"How Close Are We to AI?" @blaiseaguera & I will discuss/debate this question, especially w/respect to "understanding" in neural network language models. Town Hall Seattle March 4, 2022, 7:30pm PST. In person in Seattle or watch livestream. https://t.co/yY1wenaDtk
Shared by Melanie Mitchell, 2/18/2022
New preprint by @mpshanahan and me: "Abstraction for Deep Reinforcement Learning". https://t.co/JvgXROxdr6
Shared by Melanie Mitchell, 2/15/2022
Re-post of my short perspective piece on the analogies of COVID, with an updated reflection.
Shared by Melanie Mitchell, 11/30/2021
New: Report describing our July workshop on "Frontiers in Evolutionary Computation" at @sfiscience. This is part of our larger "Foundations of Intelligence" project. https://t.co/UJd5pfGxx8
Shared by Melanie Mitchell, 10/25/2021
Re-upping this announcement of open postdoc position at SFI to work with me and others on AI approaches to analogy/abstraction. Apply by October 15.
Shared by Melanie Mitchell, 9/16/2021
What exactly distinguishes "Foundation Models" from other pretrained models that can be used for new tasks with (or without) fine-tuning? Is it the self-supervised aspect of the pretraining? The large amount of data used in the pretraining? Something else that I'm missing?
Shared by Melanie Mitchell, 9/6/2021
AI / Cogsci people: Do you want a postdoc job at @sfiscience ? Work with me & others on developing methods & evaluation suites for AI systems that perform conceptual abstraction and analogy. Description and application instructions here: https://t.co/0KEKtyvXJJ
Shared by Melanie Mitchell, 9/2/2021
"Herbert Simon said we should call it 'complex information processing.' What would the world be like if it was called that instead?" https://t.co/nnpL0lHRif
Shared by Melanie Mitchell, 7/31/2021
If you're at the #ICML2021 workshop on Adversarial Machine Learning today, come see our poster about how a small degree of adversarial robustness can dramatically increase the transferability of targeted adversarial examples. https://t.co/2Grich8I7i https://t.co/WPU1tFIhHa
Shared by Melanie Mitchell, 7/24/2021
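For readers unfamiliar with the terms in the poster announcement above, the sketch below shows what a *targeted* adversarial example and a transferability check look like in general. It assumes PyTorch image classifiers and uses a standard targeted PGD attack with illustrative hyperparameters; it is not the poster's actual method:

```python
# Sketch: targeted adversarial example + transferability check (PyTorch).
# Attack details (PGD, eps, alpha, steps) are illustrative conventions,
# not taken from the poster.
import torch
import torch.nn.functional as F

def targeted_pgd(model, x, target, eps=8/255, alpha=2/255, steps=10):
    # Perturb x within an L-infinity ball of radius eps so that `model`
    # classifies it as the attacker-chosen `target` class.
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), target)
        grad = torch.autograd.grad(loss, x_adv)[0]
        # Step *down* the loss toward the target class (hence the minus).
        x_adv = x_adv.detach() - alpha * grad.sign()
        # Project back into the eps-ball around x and the valid pixel range.
        x_adv = x + (x_adv - x).clamp(-eps, eps)
        x_adv = x_adv.clamp(0, 1)
    return x_adv.detach()

def transfer_rate(source_model, victim_model, x, target):
    # "Transferability": an example crafted against source_model also
    # fools a different victim_model it was never optimized against.
    x_adv = targeted_pgd(source_model, x, target)
    preds = victim_model(x_adv).argmax(dim=1)
    return preds.eq(target).float().mean().item()
```

The poster's claim concerns the source model: making it slightly adversarially robust makes the targeted examples it produces transfer to victims far more often.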
Who thinks @BarackObama's Fall reading list needs to include a book about AI? (I'm happy to suggest one :-)
Shared by Melanie Mitchell, 7/9/2021
Worth reading: @HRoitblat critiques DeepMind's "Reward is Enough" paper, points out circularity in their argument. https://t.co/SRzrwbH7bZ @bendee983
Shared by Melanie Mitchell, 7/8/2021
Interesting thread on copyright issues for code generated by language models like GPT-3.
Shared by Melanie Mitchell, 6/30/2021