My research focuses on large pretrained models (e.g., GPT-3) and how we can better build, understand, and control them. I'm especially interested in multimodal and domain-agnostic methods, which have the potential to unlock important applications in healthcare, manufacturing, and the natural sciences.
In the past, I've also worked in reinforcement learning, human-robot interaction, and computational astronomy, and I've spent time at Google Brain, Google Language, and Google Civics.
I'm grateful to be supported by an Open Philanthropy AI Fellowship.
In Fall 2021, I was the instructor for Stanford's CS 197: Computer Science Research. (Slides and materials)
(See Essays for more)
Other topics I think a lot about:
Societal impacts of technology, especially machine learning
Mentoring, teaching, and fostering a healthy and inclusive research culture
Scientific communication and breaking down walls between fields
Outside of research, I organize the Stanford Queer in AI Dinner with Stanford Inclusion in AI.