François Chollet   @fchollet

Deep learning @google. Creator of Keras. Author of 'Deep Learning with Python'. Opinions are my own.

Tweets by François Chollet

François Chollet    @fchollet    11/30/2021      

"To use a tool appropriately, you should not only understand what it can do, but also be aware of what it can’t do" From https://t.co/LvbEy5ipsA
  



François Chollet    @fchollet    11/29/2021      

"Down with centralized corporate control! Power to the people! Topple the system!" -- Definitely not 3 corporations and a VC fund in a trench coat
  



François Chollet    @fchollet    11/28/2021      

The dominant philosophical current in AI is characterized by narrow-minded reductionism and ahistoricity -- ignorance of the thinking that came before (often, the "before" threshold is... 2015!)
  



François Chollet    @fchollet    11/28/2021      

In reality, intelligence is open-ended and embodied, embedded in an environment, in an ecosystem. In the case of humans, it's also embedded in a culture, and externalized as social and technological systems. Cognition cannot be understood in a fragmented manner
  



François Chollet    @fchollet    11/28/2021      

Many deep learning researchers have this conception of intelligence as a kind of disembodied brain in a jar, acting on its environment in a one-sided fashion, trying to maximize some sort of reward score -- you could call it the "neocortex as RL agent" mindset
  



François Chollet    @fchollet    11/25/2021      

On this Thanksgiving, I am grateful for all the people in my life who bring meaning and positive vibes to it -- my family, friends, awesome teammates, and the Keras community
  



François Chollet    @fchollet    11/25/2021      

New tutorial on https://t.co/m6mT8SrKDD: multiple instance learning https://t.co/PaIeVcekeg
  



François Chollet    @fchollet    11/25/2021      

People project their illusion of choice onto this statement. It does apply to many things throughout human history. Humans love their fantasies. But reality has a way of catching up. For reference, here's the original context where I said it, two years ago.
  



François Chollet    @fchollet    11/25/2021      

The fact that many people have staked a lot on an illusion doesn't mean it's not still an illusion.
  



François Chollet    @fchollet    11/24/2021      

Productivity tip: if you're procrastinating on something difficult or important, don't set your goal to "get it done", which may seem overwhelming and thus paralysis-inducing. Set it to "sit down, get started, and build momentum". Because that's something you can always do.
  



François Chollet    @fchollet    11/24/2021      

If you're interested in Keras codebase internals: Luke from the Keras team posted a detailed walkthrough of the Model class. Implement a simplified version of the class yourself to understand how it all works under the hood! Check it out: https://t.co/ZhiYntA2J7
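
To make the suggestion concrete, here is a minimal sketch (my own illustration, not taken from the linked walkthrough) of the pattern the Model class is built around: subclass keras.Model and override train_step() to see what fit() drives on each batch. The class name and layer sizes are arbitrary placeholders.

```python
# Minimal sketch (illustrative, not from the linked walkthrough): subclass
# keras.Model and override train_step() to see what Model.fit() runs per batch.
import tensorflow as tf
from tensorflow import keras

class TinyClassifier(keras.Model):
    def __init__(self, num_classes=10):
        super().__init__()
        self.dense1 = keras.layers.Dense(64, activation="relu")
        self.dense2 = keras.layers.Dense(num_classes, activation="softmax")

    def call(self, inputs):
        # Forward pass: what calling the model on a batch computes.
        return self.dense2(self.dense1(inputs))

    def train_step(self, data):
        # One gradient update: what fit() calls for every batch.
        x, y = data
        with tf.GradientTape() as tape:
            y_pred = self(x, training=True)
            loss = self.compiled_loss(y, y_pred)
        grads = tape.gradient(loss, self.trainable_variables)
        self.optimizer.apply_gradients(zip(grads, self.trainable_variables))
        self.compiled_metrics.update_state(y, y_pred)
        return {m.name: m.result() for m in self.metrics}

# Usage: compile() wires up the optimizer, loss, and metrics used above.
# model = TinyClassifier()
# model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
#               metrics=["accuracy"])
# model.fit(x_train, y_train, epochs=3)
```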
  



François Chollet    @fchollet    11/24/2021      

"Much like in biological systems, if you take any complicated experimental deep-learning setup, chances are you can remove a few modules (or replace some trained features with random ones) with no loss of performance."
  



François Chollet    @fchollet    11/24/2021      

"Deep learning architectures are often more evolved than designed -- they were developed by repeatedly trying things and selecting what seemed to work." From https://t.co/LvbEy5A0k8 (which incidentally is 40% off for Thanksgiving week)
  



François Chollet    @fchollet    11/24/2021      

New paper walkthrough on https://t.co/m6mT8SrKDD: masked image modeling. Applying the principles of masked language modeling to computer vision. https://t.co/M1ftCAzC9t Created by @arig23498 and @RisingSayak
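
For intuition about the core idea, here is a tiny sketch (illustrative only, not code from the keras.io example, which trains a full ViT-style encoder and decoder) of the masking step itself: cut the image into patches and hide a random subset, which the model must then reconstruct from the visible ones. The patch size and mask ratio are placeholder values.

```python
# Tiny sketch (illustrative) of the masking step behind masked image modeling:
# patchify an image, then hide a random subset of patches for reconstruction.
import numpy as np

rng = np.random.default_rng(42)
image = rng.random((224, 224, 3))   # stand-in for a real input image
patch_size, mask_ratio = 16, 0.75   # placeholder values

# Split the image into non-overlapping 16x16x3 patches, flattened to vectors.
h = w = image.shape[0] // patch_size
patches = image.reshape(h, patch_size, w, patch_size, 3).transpose(0, 2, 1, 3, 4)
patches = patches.reshape(h * w, -1)          # (196, 768)

# Randomly choose which patches are masked vs. visible.
num_masked = int(mask_ratio * len(patches))
perm = rng.permutation(len(patches))
masked_idx, visible_idx = perm[:num_masked], perm[num_masked:]

visible_patches = patches[visible_idx]        # fed to the encoder
targets = patches[masked_idx]                 # what the decoder must predict
print(visible_patches.shape, targets.shape)   # (49, 768) (147, 768)
```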
  



François Chollet    @fchollet    11/24/2021      

There's a bigger lesson here -- the important properties of a dynamic system can't usually be ascribed to any specific artifact within the system -- e.g. "is this piece of code (or data) reliable?" They derive from the interaction between the different parts.
  



François Chollet    @fchollet    11/24/2021      

By then, your validation performance has informed 19 choices that went into your model, and so that model is already the result of a search process (a training process, in fact) -- one specific instance among hundreds of possibilities.
  



François Chollet    @fchollet    11/24/2021      

A slightly counter-intuitive fact is that the reliability of your evaluation method changes under you. The first time you looked at performance on your validation set, it may have been reliable. But by the 20th time, it no longer is. Even though nothing about it has changed.
  



François Chollet    @fchollet    11/24/2021      

It's important to understand that there's no binary "either your evaluation method is tainted, or it isn't". It pretty much always is. What matters is how much. Always take your validation results with a grain of salt. The production data will look quite different anyway.
  



François Chollet    @fchollet    11/24/2021      

Most people think of a validation set as a "weak test set": basically an evaluation set, but a bit less reliable than the final test set. It's more accurately a "weak training set": data that you use to improve your model (thus, on which your model will perform artificially well)
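
A minimal sketch of why (an assumed pure-noise setup, not from this thread): when labels are random, no model can truly beat chance, yet picking the best of 20 candidates by validation accuracy still yields a validation score well above 50% -- an optimism a fresh test set does not reproduce.

```python
# Minimal sketch (assumed pure-noise setup): selecting the "best" of many
# candidate models by validation accuracy inflates the validation score,
# while a held-out test set tells the truth.
import numpy as np

rng = np.random.default_rng(0)
n_features, n_val, n_test = 50, 200, 200

# Labels are random: no model can genuinely beat 50% accuracy.
X_val, y_val = rng.normal(size=(n_val, n_features)), rng.integers(0, 2, n_val)
X_test, y_test = rng.normal(size=(n_test, n_features)), rng.integers(0, 2, n_test)

def accuracy(w, X, y):
    return np.mean((X @ w > 0).astype(int) == y)

# 20 "modeling choices" = 20 random candidate weight vectors.
candidates = [rng.normal(size=n_features) for _ in range(20)]
best = max(candidates, key=lambda w: accuracy(w, X_val, y_val))

print("validation accuracy of selected model:", accuracy(best, X_val, y_val))   # ~0.56-0.60
print("test accuracy of selected model:      ", accuracy(best, X_test, y_test)) # ~0.50
```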
  