Alex Tamkin
Email: atamkin_stanford_edu | Twitter: @alextamkin | Mastodon: sigmoid.social/@alextamkin
I'm a final-year PhD student in Computer Science at Stanford, advised by Noah Goodman and part of the Stanford AI Lab and the Stanford NLP Group.
My research focuses on how to make foundation models (e.g. GPT-4) safe and useful in the real world, for example in engineering, the natural sciences, and healthcare.
I study language models as well as general machine learning techniques that are broadly applicable to other kinds of data (e.g. images, organic molecules, astronomical data, satellite imagery, wearable sensors, and more).
In Fall 2021 I was the instructor of Stanford's CS 197: Computer Science Research. (Slides and materials).
I'm grateful to be supported by an Open Philanthropy AI Fellowship.
Research Overview
Foundation models are machine learning models (e.g. SimCLR or GPT-3) that are trained on large amounts of unlabeled data and can be easily adapted to many downstream tasks.
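To make the "pretrain once, adapt cheaply" idea concrete, here is a minimal sketch in PyTorch of one common adaptation strategy, a linear probe. This is an illustration, not code from any paper below; the `encoder` and `train_loader` are assumed placeholders for a pretrained network and a labeled dataset.

```python
# Hypothetical illustration of "easily adapted": freeze a pretrained
# encoder and fit only a small task head on labeled data.
import torch
import torch.nn as nn

def linear_probe(encoder, train_loader, embed_dim, num_classes, epochs=5):
    encoder.eval()                          # keep pretrained weights frozen
    head = nn.Linear(embed_dim, num_classes)
    opt = torch.optim.Adam(head.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for x, y in train_loader:
            with torch.no_grad():
                feats = encoder(x)          # reuse pretrained representations
            loss = loss_fn(head(feats), y)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return head
```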
My research focuses on making foundation models safe and useful for real-world problems in the sciences, engineering, and healthcare.
My work tackles both sides of this problem: training these models well and adapting them fruitfully.
Enabling foundation models to learn from any kind of data, beyond just text or images
– State-of-the-art methods for training foundation models are specialized for particular modalities, such as text or images. My work has shown that a single method can work across diverse data from 12 different fields, including real-world scientific applications in genomics, wearable sensors, and multispectral satellite imagery (a sketch of this style of contrastive pretraining appears after this list).
– I've also proposed Viewmaker Networks, a foundation model that learns to pose its own training task, and shown how it implements a behavior called feature dropout that helps foundation models perform well on a broad range of tasks.
Enabling foundation models to behave well when tasks are ambiguous
– Machine learning research is typically conducted on benchmarks targeting a well-specified problem. In the real world, however, much of the challenge often lies in defining the task well for a foundation model in the first place.
– My work includes the first study of how language models respond to ambiguous tasks, and has shown that pretrained models can be finetuned using active learning to resolve this ambiguity and generalize more robustly.
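For readers who want the training side of this agenda in code, here is a minimal, hypothetical sketch of the kind of SimCLR-style contrastive pretraining step that domain-agnostic methods build on. It is not the DABS or Viewmaker implementation; `encoder`, `augment`, and `batch` are stand-ins for a modality-appropriate network, a view-generation function, and a batch of raw inputs.

```python
# Minimal sketch of one contrastive pretraining step (SimCLR-style),
# assuming an encoder that maps raw inputs to embedding vectors.
import torch
import torch.nn.functional as F

def nt_xent_loss(z1, z2, temperature=0.1):
    """Each example's two views should be more similar to each other
    than to any other example in the batch."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    z = torch.cat([z1, z2], dim=0)              # (2N, d)
    sim = z @ z.t() / temperature               # pairwise cosine similarities
    n = z1.shape[0]
    mask = torch.eye(2 * n, dtype=torch.bool)   # never match an example to itself
    sim = sim.masked_fill(mask, float("-inf"))
    # the positive for row i is its other view, n rows away
    targets = torch.cat([torch.arange(n) + n, torch.arange(n)])
    return F.cross_entropy(sim, targets)

def contrastive_step(encoder, augment, batch, optimizer):
    """One pretraining step: two random views, one shared encoder."""
    z1 = encoder(augment(batch))
    z2 = encoder(augment(batch))
    loss = nt_xent_loss(z1, z2)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Because nothing in the loss depends on the input modality, the same step applies whether the batch holds genomic sequences, sensor traces, or satellite images; only the encoder and the augmentations change.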
Talks
- Google Research, April 2023, Task Ambiguity in Humans and Language Models
- UC Berkeley, October 2022
- MIT, October 2022
- Cornell University, September 2022
- Columbia University, September 2022
- University of Washington, June 2022, Self-Supervised Learning for the Real World
- Harvard Medical School, February 2022, DABS: A Domain-Agnostic Benchmark for Self-Supervised Learning
- Invited Talk, NeurIPS Workshop on Controllable Generative Modeling in Language and Vision, December 2021, Off the Beaten Path: Domain-Agnostic ML for Controllable Generation and Beyond
- Stanford Center for Research on Foundation Models, October 2021, Active Learning Helps Pretrained Models Learn the Intended Task
- Stanford Vision and Learning Lab, August 2021, Towards Universal Self-Supervision
- Stanford OVAL Seminar, May 2021, Understanding and Controlling Transfer Learning in Large Language Models
- FAIR, December 2020, Language Through a Prism: A Spectral Approach for Multiscale Language Representations
- Google Brain, September 2018, Searching for Planets with WaveNet
- NASA Ames Research Center, September 2018, Overcoming Dataset Challenges for Vetting Exoplanets with Machine Learning
Publications
BenchMD: A Benchmark for Modality-Agnostic Learning on Medical Images and Sensors
Kathryn Wantlin, Chenwei Wu, Shih-Cheng Huang, Oishi Banerjee, Farah Dadabhoy, Veeral Vipin Mehta, Ryan Wonhee Han, Fang Cao, Raja R. Narayan, Errol Colak, Adewole S. Adamson, Laura Heacock, Geoffrey H. Tison, Alex Tamkin*, Pranav Rajpurkar*
ArXiv Preprint

Feature Dropout: Revisiting the Role of Augmentations in Contrastive Learning
Alex Tamkin, Margalit Glasgow, Xiluo He, Noah Goodman
ArXiv Preprint

Multispectral Contrastive Learning with Viewmaker Networks
Jasmine Bayrooti, Noah Goodman, Alex Tamkin
CVPR 2023 Workshop on Perception Beyond the Visible Spectrum

Task Ambiguity in Humans and Language Models
Alex Tamkin*, Kunal Handa*, Avash Shrestha, Noah Goodman
ICLR 2023

DABS 2.0: Improved Datasets and Algorithms for Universal Self-Supervision [🐦thread]
Alex Tamkin, Gaurab Banerjee, Mohamed Owda, Vincent Liu, Shashank Rammoorthy, Noah Goodman
NeurIPS 2022

Active Learning Helps Pretrained Models Learn the Intended Task [🐦thread]
Alex Tamkin*, Dat Nguyen*, Salil Deshpande*, Jesse Mu, Noah Goodman
NeurIPS 2022

Oolong: Investigating What Makes Crosslingual Transfer Hard with Controlled Studies [🐦thread]
Zhengxuan Wu*, Isabel Papadimitriou*, Alex Tamkin*
ArXiv Preprint

DABS: A Domain-Agnostic Benchmark for Self-Supervised Learning [🌐site] [🐦thread]
Alex Tamkin, Vincent Liu, Rongfei Lu, Daniel Fein, Colin Schultz, Noah Goodman
NeurIPS 2021
Press: [Redshift Magazine] [AIM Magazine] [Stanford HAI]

Tradeoffs Between Contrastive and Supervised Learning: An Empirical Study
Ananya Karthik, Mike Wu, Noah Goodman, Alex Tamkin
NeurIPS 2021 Workshop on Self-Supervised Learning - Theory and Practice

C5T5: Controllable Generation of Organic Molecules with Transformers
Daniel Rothchild, Alex Tamkin, Julie Yu, Ujval Misra, Joseph Gonzalez
ArXiv Preprint

On the Opportunities and Risks of Foundation Models
Center for Research on Foundation Models (full list of authors)
– Section 4.2: Training and Self-Supervision, Alex Tamkin
– Section 4.9: AI Safety and Alignment, Alex Tamkin, Geoff Keeling, Jack Ryan, Sydney von Arx
– Coauthor: Sections §2.2: Vision, §3.3: Education, §4.1: Modeling, §5.6: Ethics of Scale
Press: [Forbes] [The Economist] [VentureBeat]

Viewmaker Networks: Learning Views for Unsupervised Representation Learning [📝blogpost] [🐦thread]
Alex Tamkin, Mike Wu, Noah Goodman
ICLR 2021

Language Through a Prism: A Spectral Approach for Multiscale Language Representations [🐦thread] [📝blogpost]
Alex Tamkin, Dan Jurafsky, Noah Goodman
NeurIPS 2020

Investigating Transferability in Pretrained Language Models [🐦thread]
Alex Tamkin, Trisha Singh, Davide Giovanardi, Noah Goodman
Findings of EMNLP 2020; presented at CoNLL 2020

Distributionally-Aware Exploration for CVaR Bandits
Alex Tamkin, Ramtin Keramati, Christoph Dann, Emma Brunskill
NeurIPS 2019 Workshop on Safety and Robustness in Decision Making; RLDM 2019

Being Optimistic to Be Conservative: Quickly Learning a CVaR Policy
Ramtin Keramati, Christoph Dann, Alex Tamkin, Emma Brunskill
AAAI 2020

Recursive Routing Networks: Learning to Compose Modules for Language Understanding
Ignacio Cases, Clemens Rosenbaum, Matthew Riemer, Atticus Geiger, Tim Klinger, Alex Tamkin, Olivia Li, Sandhini Agarwal, Joshua D. Greene, Dan Jurafsky, Christopher Potts, Lauri Karttunen
NAACL 2019

Drone.io: A Gestural and Visual Interface for Human-Drone Interaction
Jessica R. Cauchard, Alex Tamkin, Cheng Yao Wang, Luke Vink, Michelle Park, Tommy Fang, James A. Landay
HRI 2019

Andrew Vanderburg, Christopher Shallue, Liang Yu, Anne Dattilo, Alex Tamkin
American Astronomical Society Meeting Abstracts, 2019

Other Writing
Understanding the Capabilities, Limitations, and Societal Impact of Large Language Models [📝blogpost]
Alex Tamkin*, Miles Brundage*, Jack Clark, Deep Ganguli
ArXiv Preprint
Press: [WIRED] [VentureBeat] [Datanami] [Slator]

Input on the European Commission White Paper on Artificial Intelligence
Marietje Schaake, Elisabeth Appel, Dathan M. Duplichen, Lisa Einstein, Wren Elhai, Muhammad Dhafer, Muhammad Faishal, Agata Foryciarz, Sydney L. Frankenberg, Toni Friedman, Zoe Huczok, Kyra Jasper, Danielle Jablanski, Jennifer King, Cindy Kuang, Heajune Lee, Shreya Mantha, Vidyangi Patil, Gailyn Portelance, Adriana Stephan, Alex Tamkin, Alessandro Vecchiato, Eva Zhang, Jason Zhao

(See Essays for more)
Media
WIRED Magazine - Chatbots Got Big—and Their Ethical Red Flags Got Bigger
Abrupt Future Podcast - Alex Tamkin on ChatGPT and Beyond: Navigating the New Era of Generative AI
AI Artwork in PC Magazine (Twitter thread: DALL-E Meets WALL-E: An Art History)
The Gradient Podcast - Alex Tamkin on Self-Supervised Learning and Large Language Models
Press: [Communications of the ACM]
The Engineered Mind Podcast - Alex Tamkin on NLP, AI Ethics & PhD Life
Personal
Other topics I think a lot about:
Societal impacts of technology, especially machine learning and large language models
Mentoring, teaching and fostering a healthy and inclusive research culture
Scientific communication and breaking down walls between fields
Outside of research, I organize the Stanford Queer in AI Dinner with Stanford Inclusion in AI.
I also like making art, especially ceramics and photography!