I am a research scientist at Anthropic. My research focuses on how to make machine learning models safe and useful in the real world; for example, in engineering applications, the natural sciences, or healthcare.
I study language models as well as general machine learning techniques that apply broadly to other kinds of data (e.g., images, organic molecules, astronomical data, satellite imagery, and wearable sensor data).
Foundation models are machine learning models that are typically trained on large unlabeled datasets and can be easily adapted to many downstream tasks. Much of my work centers on making these models safe and useful for real-world problems in the sciences, engineering, and healthcare.
Two recent focuses:
General machine learning methods for any kind of data, beyond just text or images
– State-of-the-art methods for training foundation models are specialized for particular modalities, such as text or images. Our work has shown that a single method can work across diverse data from 12 different fields, including real-world scientific applications in genomics, wearable sensors, and multispectral satellite imagery.
– We've also proposed Viewmaker Networks, an approach in which a model learns to pose its own training task, and shown how balancing the learning of different features helps models succeed on a broad range of tasks.
Applying machine learning to messy, hard-to-define tasks
– Machine learning research is typically conducted on benchmarks that target a well-specified task. In the real world, however, specifying the problem for a foundation model is often much of the challenge.
– Our work includes the first study of how language models respond to ambiguous tasks, and shows that active learning lets pretrained models resolve this ambiguity for themselves, allowing them to generalize more robustly.
- Stanford FLAME AI Workshop, September 2023, Open Problems for Scientific Foundation Models
- Stanford HAI Congressional Bootcamp on AI, August 2023
- Google Research, April 2023, Task Ambiguity in Humans and Language Models
- UC Berkeley, October 2022
- MIT, October 2022
- Cornell University, September 2022
- Columbia University, September 2022
- University of Washington, June 2022, Self-Supervised Learning for the Real World
- Harvard Medical School, February 2022, DABS: A Domain-Agnostic Benchmark for Self-Supervised Learning
- Invited Talk, NeurIPS Workshop on Controllable Generative Modeling in Language and Vision, December 2021, Off the Beaten Path: Domain-Agnostic ML for Controllable Generation and Beyond
- Turbulence in Focus: Benchmarking Scaling Behavior of 3D Volumetric Super-Resolution with BLASTNet 2.0 Data. Wai Tong Chung, Bassem Akoush, Pushan Sharma, Alex Tamkin, Ki Sung Jung, Jacqueline Chen, Jack Guo, Davy Brouzet, Mohsen Talei, Bruno Savard, Alexei Y. Poludnenko, Matthias Ihme. NeurIPS 2023
- Kathryn Wantlin, Chenwei Wu, Shih-Cheng Huang, Oishi Banerjee, Farah Dadabhoy, Veeral Vipin Mehta, Ryan Wonhee Han, Fang Cao, Raja R. Narayan, Errol Colak, Adewole S. Adamson, Laura Heacock, Geoffrey H. Tison, Alex Tamkin*, Pranav Rajpurkar*. ArXiv Preprint
- Alex Tamkin, Margalit Glasgow, Xiluo He, Noah Goodman. NeurIPS 2023
- Jasmine Bayrooti, Noah Goodman, Alex Tamkin. CVPR 2023 Workshop on Perception Beyond the Visible Spectrum
- Alex Tamkin*, Kunal Handa*, Avash Shrestha, Noah Goodman. ICLR 2023
- Zhengxuan Wu*, Isabel Papadimitriou*, Alex Tamkin*. EMNLP 2023
- Alex Tamkin, Gaurab Banerjee, Mohamed Owda, Vincent Liu, Shashank Rammoorthy, Noah Goodman. NeurIPS 2022
- Alex Tamkin*, Dat Nguyen*, Salil Deshpande*, Jesse Mu, Noah Goodman. NeurIPS 2022
- Alex Tamkin, Vincent Liu, Rongfei Lu, Daniel Fein, Colin Schultz, Noah Goodman. NeurIPS 2021. Press: [Redshift Magazine] [AIM Magazine] [Stanford HAI]
- Ananya Karthik, Mike Wu, Noah Goodman, Alex Tamkin. NeurIPS 2021 Workshop on Self-Supervised Learning - Theory and Practice
- Daniel Rothchild, Alex Tamkin, Julie Yu, Ujval Misra, Joseph Gonzalez. ArXiv Preprint
- Center for Research on Foundation Models (full list of authors)
  – Section 4.2: Training and Self-Supervision, Alex Tamkin
  – Section 4.9: AI Safety and Alignment, Alex Tamkin, Geoff Keeling, Jack Ryan, Sydney von Arx
  – Coauthor: Sections §2.2: Vision, §3.3: Education, §4.1: Modeling, §5.6: Ethics of Scale
  Press: [Forbes] [The Economist] [VentureBeat]
- Alex Tamkin, Mike Wu, Noah Goodman. ICLR 2021
- Alex Tamkin*, Miles Brundage*, Jack Clark, Deep Ganguli. ArXiv Preprint. Press: [WIRED] [VentureBeat] [Datanami] [Slator]
- Alex Tamkin, Dan Jurafsky, Noah Goodman. NeurIPS 2020
- Alex Tamkin, Trisha Singh, Davide Giovanardi, Noah Goodman. Findings of EMNLP 2020; presented at CoNLL 2020
- Alex Tamkin, Ramtin Keramati, Christoph Dann, Emma Brunskill. NeurIPS 2019 Workshop on Safety and Robustness in Decision Making; RLDM 2019
- Ramtin Keramati, Christoph Dann, Alex Tamkin, Emma Brunskill. AAAI 2020
- Ignacio Cases, Clemens Rosenbaum, Matthew Riemer, Atticus Geiger, Tim Klinger, Alex Tamkin, Olivia Li, Sandhini Agarwal, Joshua D. Greene, Dan Jurafsky, Christopher Potts, Lauri Karttunen. NAACL 2019
- Jessica R. Cauchard, Alex Tamkin, Cheng Yao Wang, Luke Vink, Michelle Park, Tommy Fang, James A. Landay. HRI 2019
- Andrew Vanderburg, Christopher Shallue, Liang Yu, Anne Dattilo, Alex Tamkin. American Astronomical Society Meeting Abstracts, 2019
Quoted in WIRED Magazine - Chatbots Got Big—and Their Ethical Red Flags Got Bigger
Abrupt Future Podcast - Alex Tamkin on ChatGPT and Beyond: Navigating the New Era of Generative AI
The Gradient Podcast - Alex Tamkin on Self-Supervised Learning and Large Language Models. Press: [Communications of the ACM]
The Engineered Mind Podcast - Alex Tamkin on NLP, AI Ethics & PhD Life
Other topics I think a lot about:
Societal impacts of technology, especially machine learning and large language models
Scientific communication and breaking down walls between fields
Outside of research, I organized the Stanford Queer in CS Dinner.