- can automatically detect their own unsubstantiated guesses and hallucinated "facts",
- can handle ambiguous instructions by asking for clarification,
- fail in predictable, auditable ways when they inevitably do something unintended.

One direction I'm excited about is extracting information from the full distribution of outputs under a generative model, and exposing this information to the user to help them interpret the output. An example of this is my recent paper on the R-U-SURE system.
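To give a flavor of what "exposing the output distribution" can look like, here is a deliberately simplified sketch (not the R-U-SURE algorithm itself): sample the model several times and show the user the empirical distribution over completions instead of a single output. `sample_completion` is a hypothetical placeholder for any single-sample model call.

```python
import collections

def summarize_samples(sample_completion, prompt, num_samples=8):
    """Draw several completions for the same prompt and report how often
    each distinct completion appears, as a crude empirical summary of the
    model's output distribution.

    `sample_completion` is a hypothetical stand-in for any function that
    draws one completion from a generative model given a prompt.
    """
    counts = collections.Counter(
        sample_completion(prompt) for _ in range(num_samples)
    )
    # Show the user the full empirical distribution (most frequent first)
    # instead of a single sampled output, so they can see where the model
    # is confident and where it is merely guessing.
    return [(completion, count / num_samples)
            for completion, count in counts.most_common()]
```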
Another exciting direction is interpreting reasoning and information-seeking as sequential decision-making processes. Language models are surprisingly good at imitating human reasoning, but they frequently make errors because they have access to a different set of knowledge and resources than the humans they imitate. How do we build systems that seek out new information if and only if they do not already know the answer?
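As a toy illustration of the "seek information only when needed" idea, here is a sketch of a confidence-gated decision rule. `answer_with_confidence` and `lookup` are hypothetical placeholders rather than any particular API, and a real system would need a much better-calibrated confidence signal than this pretends.

```python
def answer_or_seek(question, answer_with_confidence, lookup, threshold=0.9):
    """Toy decision rule: answer directly only when the model's own
    confidence estimate is high enough; otherwise spend one step seeking
    information before answering.

    `answer_with_confidence` (returning an (answer, confidence) pair) and
    `lookup` (an external information-seeking tool) are hypothetical
    stand-ins, not real APIs.
    """
    answer, confidence = answer_with_confidence(question)
    if confidence >= threshold:
        return answer
    # One step of a sequential decision process: gather evidence first,
    # then answer conditioned on what was found.
    evidence = lookup(question)
    answer, _ = answer_with_confidence(
        f"{question}\n\nRelevant context:\n{evidence}")
    return answer
```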
More broadly, I'm also interested in many other areas of probabilistic machine learning. In the past, I have worked on generative models of discrete data structures (such as trees, sets, and graphs), theoretical analyses of self-supervised learning, a strongly typed language (Dex) for building unconventional machine learning models, generative models for music, and more. See my research page for more information.
I was an AI Resident at Google from 2019 to 2021, and I worked on applied machine learning at Cruise from 2018 to 2019. Before that, I was an undergraduate CS/Math joint major at Harvey Mudd College, where I did research on applying deep learning to music, and worked as a math tutor in the Academic Excellence tutoring program at HMC.
In my free time, I enjoy playing board games and indie video games (current recommendations: Outer Wilds, Baba Is You, Tunic), making music, and working on a variety of side projects.