Publications
- 2022
- HBS Working Paper Series
What Would It Mean for a Machine to Have a Self?
By: Julian De Freitas, Ahmet Kaan Uğuralp, Zeliha Uğuralp, Laurie Paul, Joshua B. Tenenbaum and Tomer Ullman
Abstract
What would it mean for autonomous AI agents to have a 'self'? One proposal for a minimal notion of self is a representation of one's body spatio-temporally located in the world, with a tag of that representation as the agent taking actions in the world. This turns self-representation into a constructive inference process of self-orienting, and raises a challenging computational problem that any agent must solve continually. Here we construct a series of novel 'self-finding' tasks modeled on simple video games—in which players must identify themselves when there are multiple self-candidates—and show through quantitative behavioral testing that humans are near optimal at self-orienting. In contrast, well-known Deep Reinforcement Learning algorithms, which excel at learning much more complex video games, are far from optimal. We suggest that self-orienting allows humans to navigate new settings, and that this is a crucial target for engineers wishing to develop flexible agents.
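To make the 'self-finding' setup concrete, the following is a minimal sketch of the kind of task the abstract describes, not the authors' actual environments or models. It assumes a toy grid world in which several candidate avatars move on each step: the true self moves according to the issued action, while distractors move randomly. An observer can then infer which candidate is the self by checking whose motion correlates with the actions taken (all function names and parameters here are hypothetical illustrations).

```python
import random

MOVES = [(0, 1), (0, -1), (1, 0), (-1, 0)]

def make_world(n_candidates=4, size=10, rng=None):
    """Place candidates at distinct grid positions; index 0 is the true self."""
    rng = rng or random.Random(0)
    cells = [(x, y) for x in range(size) for y in range(size)]
    return rng.sample(cells, n_candidates)

def step(positions, action, rng):
    """Advance the world: the self (index 0) follows the action, distractors move randomly."""
    dx, dy = action
    new_positions = [(positions[0][0] + dx, positions[0][1] + dy)]
    for (x, y) in positions[1:]:
        rdx, rdy = rng.choice(MOVES)
        new_positions.append((x + rdx, y + rdy))
    return new_positions

def infer_self(n_candidates=4, n_steps=5, seed=0):
    """Identify the self by scoring how often each candidate's motion matches the action."""
    rng = random.Random(seed)
    positions = make_world(n_candidates, rng=rng)
    scores = [0] * n_candidates
    for _ in range(n_steps):
        action = rng.choice(MOVES)
        new_positions = step(positions, action, rng)
        for i, (old, new) in enumerate(zip(positions, new_positions)):
            if (new[0] - old[0], new[1] - old[1]) == action:
                scores[i] += 1
        positions = new_positions
    # The candidate whose displacement most often matches the issued action is the self.
    return scores.index(max(scores))
```

In this sketch the inference is trivial because the self's motion always matches the action; the paper's tasks are harder in that distractors can move in correlated or confusable ways, which is what makes self-orienting a nontrivial inference problem for Deep RL agents.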
Citation
De Freitas, Julian, Ahmet Kaan Uğuralp, Zeliha Uğuralp, Laurie Paul, Joshua B. Tenenbaum, and Tomer Ullman. "What Would It Mean for a Machine to Have a Self?" Harvard Business School Working Paper, No. 23-017, September 2022.