About Me
Hi, my name is Lance Ying. I'm a third-year PhD student affiliated with MIT's Department of Brain and Cognitive Sciences, CSAIL, and Harvard SEAS, advised by Josh Tenenbaum and Sam Gershman. My research interests lie at the intersection of Cognitive Science, AI, and Human-Robot Interaction. I'm particularly interested in building embodied agents that can understand the physical and social worlds.
Before starting my PhD, I completed my Bachelor's degree at the University of Michigan - Ann Arbor with a quadruple major in Computer Science (with honors), Psychology (with honors), Mathematics, and Cognitive Science (with honors). I then spent a gap year in Paris earning my Diplôme de Cuisine from Le Cordon Bleu Paris. My long-term personal and academic goal is to build a robot sous-chef for my future restaurant.
I spend my free time cooking, reading, and visiting new places; so far I have visited 35 countries.
Research Interests
My research agenda spans the following topics. For a full list of publications, please see my Projects page.
Computational models of social cognition:
How do people make sense of the social world and interact with it? How, when, and why do we infer other agents' mental states even when they are not directly observable? How do we communicate our mental models to others, verbally or non-verbally?
- Ying, L.*, Zhi-Xuan, T.*, Mansinghka, V., & Tenenbaum, J. B. (2023). Inferring the goals of communicating agents from actions and instructions. In Proceedings of the AAAI Symposium Series (Vol. 2, No. 1, pp. 26-33).
- Ying, L.*, Zhi-Xuan, T.*, Wong, L., Mansinghka, V., & Tenenbaum, J. (2024). Understanding epistemic language with a Bayesian Theory of Mind. Transactions of the Association for Computational Linguistics (to appear).
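As a toy illustration of the idea behind this line of work (not code from any of the papers above), Bayesian Theory of Mind treats mental-state inference as inverting a model of how goals produce actions: P(goal | actions) ∝ P(actions | goal) P(goal). The goals, actions, and likelihood numbers below are all hypothetical:

```python
# Minimal Bayesian goal-inference sketch: P(goal | actions) ∝ P(actions | goal) P(goal).
# All goals, actions, and probabilities here are made up, for illustration only.

goals = ["get_coffee", "get_tea"]
prior = {"get_coffee": 0.5, "get_tea": 0.5}

# Likelihood of each observed action under each goal.
likelihood = {
    ("walk_to_kitchen", "get_coffee"): 0.9,
    ("walk_to_kitchen", "get_tea"): 0.9,
    ("grab_mug", "get_coffee"): 0.8,
    ("grab_mug", "get_tea"): 0.8,
    ("turn_on_espresso_machine", "get_coffee"): 0.9,
    ("turn_on_espresso_machine", "get_tea"): 0.05,
}

def infer_goal(actions):
    """Posterior over goals given a sequence of observed actions."""
    post = dict(prior)
    for a in actions:
        for g in goals:
            post[g] *= likelihood[(a, g)]
    z = sum(post.values())  # normalize
    return {g: p / z for g, p in post.items()}

posterior = infer_goal(["walk_to_kitchen", "grab_mug", "turn_on_espresso_machine"])
print(posterior)  # mass shifts strongly toward "get_coffee"
```

The first two actions are equally consistent with both goals, so only the espresso-machine action is diagnostic; the actual models in the papers above replace this lookup table with planners and probabilistic programs.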
Communication and cooperation in multi-agent systems:
My current research applies rich theories and computational models of social cognition to Human-AI teaming. I'm particularly interested in building agents with Theory of Mind capabilities that can collaborate effectively with humans on complex multi-agent tasks.
- Zhi-Xuan, T.*, Ying, L.*, Mansinghka, V., & Tenenbaum, J. B. (2024). Pragmatic Instruction Following and Goal Assistance via Cooperative Language-Guided Inverse Planning. AAMAS 2024.
- Ying, L., Jha, K., Aarya, S., Tenenbaum, J. B., Torralba, A., & Shu, T. (2024). GOMA: Proactive Embodied Cooperative Communication via Goal-Oriented Mental Alignment. IROS 2024.
Abstraction and representation in reasoning and planning:
How do humans represent the world efficiently for planning? How do humans represent other agents in crowds? Notably, people do not represent all possible worlds and all agents' mental states when reasoning and planning. Instead, they form simplified representations at multiple levels of abstraction and represent only the information relevant to the goal or task. My research aims to find efficient representations and abstractions that support fast planning and decision making.
Synthesizing world models from multimodal inputs:
An ambitious long-term goal of mine and my collaborators' is to build an embodied AGI model that can make sense of the physical world and interact with it, learning new skills from little training data. We approach this problem with neurosymbolic models that integrate Foundation Models with probabilistic reasoning.