Norms and Social Ontology in Human–Computer Interaction
Margo Katherine Wilke Undergraduate Research Internship Program, Spring 2026 (Accepted)
Disciplines: Philosophy, Computer Science, AI, Economics, Psychology

This project investigates how people understand fairness, cooperation, and social norms when interacting with computers or AI agents rather than human partners. It draws on Gilbert's classic work in social ontology and on foundational experiments in behavioral game theory. Scholars argue that many social norms cannot be reduced to individual choices or game-theoretic strategies because they depend on joint commitments between people. Our experimental philosophy study aims to test this idea directly. We will compare how participants behave when they believe they are playing a fairness game with a human, a computer, or an AI system. Prior research suggests that people may act “irrationally” when they think a human is responsible, but behave more “rationally” when interacting with a computer. We want to measure these patterns using updated methods to see whether attributions of agency and normativity change in technologically mediated contexts.

Mentor: Javier Gomez-Lavin

Students may contribute to any of the following:
- assisting with experiment design
- helping program simple game-theoretic tasks
- recruiting and scheduling participants for in-lab testing sessions
- running study sessions in the VRAI Lab
- helping with data cleaning and statistical analysis (R/SPSS; training provided)
- reading and summarizing relevant literature (Gilbert, Bicchieri, game theory, x-phi methods)
- co-writing short research summaries, posters, or presentation materials
Lab website: https://www.vrai-lab.com/

Preferred but not required:
- Minimum GPA of 3.2
- Sophomore–Senior standing
- Coursework in at least one of the following areas: philosophy, cognitive science or psychology, behavioral economics or game theory, or computer science
- Ability to work reliably on structured weekly tasks

0 / 5 (estimated)