Sara Hooker (@sarahookr)
Research @ Google Brain: model compression, robustness + interpretability. @trustworthy_ml. Founder of data-for-good non-profit @deltanalytics.
Tweets by Sara Hooker
What I would like to see more of: non-traditional path --> PhD/academic lab/industry lab/machine learning role/conference participation. And PhD/academic lab/industry lab --> openness to collaborating beyond traditional academia to help support independent researchers and community research.
Shared by Sara Hooker at 5/8/2022
Dunbar's number - the theoretical cognitive limit to the number of people with whom one can maintain stable social relationships. "No precise value has been proposed for Dunbar's number, but a commonly cited approximation is 150." https://t.co/ybNiBwQUwy
Shared by Sara Hooker at 4/19/2022
Fantastic and accessible read from @cHHillee about why ML efficiency is hard to encapsulate with a single objective. "You can understand the efficiency of your deep learning regime as consisting of 3 different components...": compute, memory, and everything else.
Shared by Sara Hooker at 3/16/2022
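The compute/memory split above is often reasoned about with a roofline-style check. Below is a minimal sketch of that idea (my own illustration, not from the linked post): an operation's arithmetic intensity, compared against a hardware "ridge point", tells you whether it is compute-bound or memory-bound. The accelerator numbers are hypothetical.

```python
# Illustrative roofline-style check: is an op compute-bound or memory-bound?

def arithmetic_intensity(flops, bytes_moved):
    """FLOPs performed per byte of memory traffic."""
    return flops / bytes_moved

def bound_by(flops, bytes_moved, peak_flops, peak_bandwidth):
    # Below the ridge point (peak_flops / peak_bandwidth) an op is limited
    # by memory bandwidth; above it, by raw compute throughput.
    ridge = peak_flops / peak_bandwidth
    return "compute" if arithmetic_intensity(flops, bytes_moved) >= ridge else "memory"

# Hypothetical accelerator: 100 TFLOP/s peak compute, 1 TB/s bandwidth -> ridge = 100.
print(bound_by(flops=2e9, bytes_moved=4e8, peak_flops=1e14, peak_bandwidth=1e12))  # memory
```

The third component from the post, "everything else" (framework and kernel-launch overhead), is exactly what a model like this leaves out, which is part of why a single efficiency number is so hard to pin down.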
Agreed. We spent a decade making design choices that favored dense models. It is not surprising that some of those choices don't favor sparse training. To learn sparse, we need to go beyond initialization alone. Also a point we make in work led by @KaliTessera https://t.co/9Rdo17AxE4
Shared by Sara Hooker at 3/4/2022
This was one of my hopes in both https://t.co/5yg2p1zlG2 and https://t.co/lXWsnPyEza -- that differences in capacity provide a powerful auditing tool to identify parts of the distribution that are more challenging for the model.
Shared by Sara Hooker at 11/18/2021
Great to see more work and tooling explicitly building on our compression-identified exemplar framework -- the idea that comparing models at different capacities can be a powerful interpretability tool. https://t.co/5an5dp3knk demo: https://t.co/AbUMD60AzN
Shared by Sara Hooker at 11/18/2021
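The core of the compression-identified exemplar idea can be sketched in a few lines (my own illustration, not the paper's implementation): run the same inputs through a full-capacity model and a compressed one, and flag the examples where their predictions diverge as candidates for closer auditing. The toy logits below are made up.

```python
# Minimal sketch: flag inputs where a dense and a compressed model disagree.
import numpy as np

def compression_identified_exemplars(dense_logits, compressed_logits):
    """Return indices of examples where the two models' predicted classes differ."""
    dense_pred = np.argmax(dense_logits, axis=1)
    comp_pred = np.argmax(compressed_logits, axis=1)
    return np.where(dense_pred != comp_pred)[0]

# Toy logits for 4 examples, 3 classes each (hypothetical values).
dense = np.array([[2., 0., 0.], [0., 3., 0.], [1., 0., 2.], [0., 0., 1.]])
comp  = np.array([[2., 0., 0.], [3., 0., 0.], [1., 0., 2.], [0., 2., 0.]])
print(compression_identified_exemplars(dense, comp))  # [1 3]
```

The disagreement set is what makes capacity differences useful as an auditing tool: it localizes the parts of the distribution the smaller model finds hardest.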
Rosanne talks about overfitting to a single definition of what it means to do good research: we just hire the same people over and over. Successful research requires risk diversification; there is more value in hiring a single person outside the rubric than in over-indexing on the same approach.
Shared by Sara Hooker at 10/8/2021
+ The overfitting of hardware to a small list of open source models: "This is also why you shouldn’t read too much into MLPerf’s results. A popular model running really fast on a type of hardware doesn’t mean an arbitrary model will run really fast on that hardware."
Shared by Sara Hooker at 9/10/2021
Fantastic blog post by @chipro https://t.co/ydV7mutppk "With so many new offerings for hardware to run ML models on, one question arises: how do we make a model built with an arbitrary framework run on arbitrary hardware?"
Shared by Sara Hooker at 9/10/2021
Tomorrow at @ml_collective DLTC reading group, @KaliTessera will be presenting our work on how initialization is only one piece of the puzzle for training sparse networks. Can taking a wider view of model design choices unlock sparse training? https://t.co/j6WHSDKN4j
Shared by Sara Hooker at 8/12/2021
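The simplest form of sparse-from-scratch training helps make the point about design choices concrete. The sketch below is purely illustrative (not the paper's method): a fixed random binary mask is applied to the weights at initialization and after every update. The mask is just one design choice among many -- architecture, optimizer, and schedule were all tuned with dense models in mind.

```python
# Illustrative only: a static random sparsity mask on a weight matrix.
import numpy as np

rng = np.random.default_rng(0)

def make_mask(shape, sparsity, rng):
    """Binary mask keeping roughly a (1 - sparsity) fraction of weights."""
    return (rng.random(shape) >= sparsity).astype(np.float64)

W = rng.normal(size=(4, 4))
mask = make_mask(W.shape, sparsity=0.75, rng=rng)
W_sparse = W * mask  # apply at init, and re-apply after every gradient update
density = float((W_sparse != 0).mean())  # ~0.25 in expectation
```

Because the mask is fixed before training, everything downstream of it (learning rate, regularization, even layer widths) interacts with the sparsity pattern -- which is the tweet's point that initialization is only one piece of the puzzle.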
We find that both algorithmic and hardware factors make minimal difference in top-line metrics. However, the presence of noise amplifies uncertainty disproportionately on certain subsets of the dataset. Underrepresented attributes are disparately impacted.
Shared by Sara Hooker at 6/23/2021
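One way to see how noise can hide in top-line metrics is to measure prediction stability per subgroup across repeated noisy runs. This is my own minimal sketch of that kind of audit (not the paper's code); the predictions and group labels are made up.

```python
# Sketch: per-subgroup prediction instability across repeated noisy runs.
import numpy as np

def per_group_flip_rate(pred_runs, groups):
    """pred_runs: (n_runs, n_examples) array of predicted labels across runs.
    Returns {group: fraction of examples whose prediction is not unanimous}."""
    pred_runs = np.asarray(pred_runs)
    unstable = (pred_runs != pred_runs[0]).any(axis=0)  # any disagreement with run 0
    return {g: float(unstable[groups == g].mean()) for g in np.unique(groups)}

# Three noisy runs over five examples (hypothetical labels).
preds = np.array([[0, 1, 1, 0, 1],
                  [0, 1, 0, 0, 1],
                  [0, 1, 1, 0, 0]])
groups = np.array(["majority", "majority", "minority", "majority", "minority"])
print(per_group_flip_rate(preds, groups))
```

In this toy example overall accuracy could look identical across runs while every unstable prediction falls in the minority group, mirroring the finding that underrepresented attributes absorb a disproportionate share of the uncertainty.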