Emotional reasoning parallels between humans and LLMs based on complex input integration
Date worked on:
September - November 2024
Research context:
For COGS 80 (Cognitive Science Major Seminar), students spend the term learning about current cognitive science research and investigating a topic of interest. I chose to focus on emotional reasoning processes in humans and AI.
My involvement:
Co-lead researcher
Collaborators:
Isabel Zaltz (Dartmouth undergraduate, co-lead researcher)
Jonathan Phillips (course professor and research advisor)
Over 10 weeks, I investigated parallels between how humans and LLMs integrate complex input cues in emotional reasoning tasks. The work was divided into three parts.
First, I conducted a literature review to broadly compare human and AI cognitive capabilities.
I found that, overall, humans outperform AI in most cognitive functions because of their ability to generalize, conceptualize physical space, transfer learning efficiently, and imagine. AI outperforms humans on certain task-specific tests, thanks to faster processing and the capacity to handle far more data at once, but these advantages do not generalize. Human perception of AI is also misaligned: we do not fully trust or understand AI, nor do we know how best to enhance it without costly trade-offs, yet we are willing to rely on it in risky situations and hold it to a high moral standard. This work provides a foundation for building AI technologies that integrate seamlessly with human intelligence and existing policies.
Second, I investigated parallels between human-human interaction and human-computer interaction.
I began by analyzing a prominent model of emotional reasoning in humans that combines Bayesian causal reasoning with an intuitive theory of mind (Saxe & Houlihan, 2017). Then, guided by Clifford Nass’s CASA (Computers Are Social Actors) paradigm, I explored human and AI emotional reasoning processes during interactions with expressive human and artificial agents. I argued that the principles guiding emotional reasoning in human interactions can also apply to human-computer interactions, with significant implications for future research and the development of socially and emotionally intelligent AI.
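As a rough schematic of what Bayesian emotional reasoning involves (my own simplification for illustration, not the exact formulation in Saxe & Houlihan, 2017), an observer can be thought of as inferring a likely emotion E from an observed expression x and the situational context c:

$$ P(E \mid x, c) \;\propto\; P(x \mid E)\, P(E \mid c) $$

Here, P(E | c) reflects the observer's intuitive theory of which emotions the situation is likely to cause, and P(x | E) reflects how a given emotion tends to be expressed. Combining the two terms is what lets an observer read, say, a smile as distress when the context strongly suggests bad news.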
Third, I designed and conducted an experiment to examine whether humans and LLMs exhibit similar emotional reasoning processes, focusing on their ability to combine physical cues with contextual information.
The experiment involved 22 complex emotional judgment tasks, with responses collected from college students and an LLM, ChatGPT-4o. Our findings indicated that both humans and the LLM effectively integrated complex, and often misleading, emotional cues to arrive at comparable emotion judgments, revealing potential parallels in their emotional reasoning processes. Ultimately, these findings can inform cognitive science theories of emotional reasoning and the development of more sophisticated AI.
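As a minimal sketch of how human and LLM judgments could be compared across scenarios (the ratings, scale, and variable names below are hypothetical stand-ins, not our actual data or analysis pipeline), one could correlate per-scenario mean human ratings with the LLM's ratings:

```python
# Hypothetical sketch: compare human and LLM emotion judgments across scenarios.
# The lists below stand in for per-scenario ratings (e.g., perceived distress on
# a 1-7 scale); real values would come from the collected responses.
from scipy.stats import pearsonr, spearmanr

human_mean_ratings = [6.2, 2.1, 4.8, 3.3, 5.9, 1.7]  # mean across participants
llm_ratings        = [5.8, 2.4, 5.1, 3.0, 6.1, 2.2]  # one rating per scenario

# Pearson r captures linear agreement; Spearman rho captures rank agreement,
# which is more forgiving if the two groups use the rating scale differently.
r, r_p = pearsonr(human_mean_ratings, llm_ratings)
rho, rho_p = spearmanr(human_mean_ratings, llm_ratings)

print(f"Pearson r = {r:.2f} (p = {r_p:.3f})")
print(f"Spearman rho = {rho:.2f} (p = {rho_p:.3f})")
```

High, significant correlations of this kind are one simple way to quantify how comparable the two groups' emotion judgments are across scenarios.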