My research focuses on how to make foundation models (e.g. GPT-3) safe and useful for real-world problems in the natural sciences, engineering, and healthcare.
I study both language models and general machine learning methods that can be applied broadly to images, organic molecules, astronomical data, satellite imagery, wearable sensors, and more.
I'm grateful to be supported by an Open Philanthropy AI Fellowship.
In Fall 2021 I was the instructor of Stanford's CS 197: Computer Science Research. (Slides and materials)
Foundation models are machine learning models (e.g. SimCLR or GPT-3) that are trained on large amounts of unlabeled data and can be easily adapted to many downstream tasks.
My research focuses on making foundation models safe and useful for real-world problems in the sciences, engineering, and healthcare.
My work focuses on both sides of this problem—training these models well and adapting them fruitfully:
Enabling foundation models to learn from any kind of data, beyond just text or images
– State-of-the-art methods for training foundation models are specialized for particular modalities, such as text or images. My work has shown that we can find a single method that works across diverse data from 12 different fields, including real-world scientific applications in genomics, wearable sensors, and multispectral satellite imagery.
– I've also proposed Viewmaker Networks, a foundation model that learns to pose its own training task, and shown how it implements a behavior called feature dropout that helps foundation models perform well on a broad range of tasks.
Enabling foundation models to behave well when tasks are ambiguous
– Machine learning research is typically conducted on benchmarks that target a well-specified problem. In real-world settings, however, defining the task well for a foundation model is often much of the challenge.
– My work includes the first study of how language models respond to ambiguous tasks, and has shown that pretrained models can be finetuned using active learning to resolve this ambiguity and generalize more robustly.
- UC Berkeley, October 2022
- MIT, October 2022
- Cornell University, September 2022
- Columbia University, September 2022
- University of Washington, June 2022, Self-Supervised Learning for the Real World
- Harvard Medical School, February 2022, DABS: A Domain-Agnostic Benchmark for Self-Supervised Learning
- Invited Talk, NeurIPS Workshop on Controllable Generative Modeling in Language and Vision, December 2021, Off the Beaten Path: Domain-Agnostic ML for Controllable Generation and Beyond
- Stanford Center for Research on Foundation Models, October 2021, Active Learning Helps Pretrained Models Learn the Intended Task
- Stanford Vision and Learning Lab, August 2021, Towards Universal Self-Supervision
- Stanford OVAL Seminar, May 2021, Understanding and Controlling Transfer Learning in Large Language Models
- FAIR, December 2020, Language Through a Prism: A Spectral Approach for Multiscale Language Representations
- Google Brain, September 2018, Searching for Planets with WaveNet
- NASA Ames Research Center, September 2018, Overcoming Dataset Challenges for Vetting Exoplanets with Machine Learning
- BenchMD: A Benchmark for Modality-Agnostic Learning on Medical Images and Sensors. Kathryn Wantlin, Chenwei Wu, Shih-Cheng Huang, Oishi Banerjee, Farah Dadabhoy, Veeral Vipin Mehta, Ryan Wonhee Han, Fang Cao, Raja R. Narayan, Errol Colak, Adewole S. Adamson, Laura Heacock, Geoffrey H. Tison, Alex Tamkin*, Pranav Rajpurkar*. ArXiv Preprint
- Alex Tamkin, Margalit Glasgow, Xiluo He, Noah Goodman. ArXiv Preprint
- Alex Tamkin*, Kunal Handa*, Avash Shrestha, Noah Goodman. ICLR 2023
- Alex Tamkin, Gaurab Banerjee, Mohamed Owda, Vincent Liu, Shashank Rammoorthy, Noah Goodman. NeurIPS 2022
- Alex Tamkin*, Dat Nguyen*, Salil Deshpande*, Jesse Mu, Noah Goodman. NeurIPS 2022
- Zhengxuan Wu*, Isabel Papadimitriou*, Alex Tamkin*. ArXiv Preprint
- Alex Tamkin, Vincent Liu, Rongfei Lu, Daniel Fein, Colin Schultz, Noah Goodman. NeurIPS 2021. Press: [Redshift Magazine] [AIM Magazine] [Stanford HAI]
- Ananya Karthik, Mike Wu, Noah Goodman, Alex Tamkin. NeurIPS 2021 Workshop on Self-Supervised Learning - Theory and Practice
- Daniel Rothchild, Alex Tamkin, Julie Yu, Ujval Misra, Joseph Gonzalez. ArXiv Preprint
- Center for Research on Foundation Models (full list of authors)
  – Section 4.2: Training and Self-Supervision, Alex Tamkin
  – Section 4.9: AI Safety and Alignment, Alex Tamkin, Geoff Keeling, Jack Ryan, Sydney von Arx
  – Coauthor: Sections §2.2: Vision, §3.3: Education, §4.1: Modeling, §5.6: Ethics of Scale
  Press: [Forbes] [The Economist] [VentureBeat]
- Alex Tamkin, Mike Wu, Noah Goodman. ICLR 2021
- Alex Tamkin, Dan Jurafsky, Noah Goodman. NeurIPS 2020
- Alex Tamkin, Trisha Singh, Davide Giovanardi, Noah Goodman. Findings of EMNLP 2020; presented at CoNLL 2020
- Alex Tamkin, Ramtin Keramati, Christoph Dann, Emma Brunskill. NeurIPS 2019 Workshop on Safety and Robustness in Decision Making; RLDM 2019
- Ramtin Keramati, Christoph Dann, Alex Tamkin, Emma Brunskill. AAAI 2020
- Ignacio Cases, Clemens Rosenbaum, Matthew Riemer, Atticus Geiger, Tim Klinger, Alex Tamkin, Olivia Li, Sandhini Agarwal, Joshua D. Greene, Dan Jurafsky, Christopher Potts, Lauri Karttunen. NAACL 2019
- Jessica R. Cauchard, Alex Tamkin, Cheng Yao Wang, Luke Vink, Michelle Park, Tommy Fang, James A. Landay. HRI 2019
- Andrew Vanderburg, Christopher Shallue, Liang Yu, Anne Dattilo, Alex Tamkin. American Astronomical Society Meeting Abstracts, 2019
(See Essays for more)
Interview on Abrupt Future - Alex Tamkin on ChatGPT and Beyond: Navigating the New Era of Generative AI
Interview on The Gradient Podcast - Alex Tamkin on Self-Supervised Learning and Large Language Models. Press: [Communications of the ACM]
Interview on The Engineered Mind Podcast - Alex Tamkin on NLP, AI Ethics & PhD Life
Other topics I think a lot about:
Societal impacts of technology, especially machine learning and large language models
Scientific communication and breaking down walls between fields
Outside of research, I organize the Stanford Queer in AI Dinner with Stanford Inclusion in AI