
2023-2024 Events

April 25, 2024

From Simulated Subjectivity to Collective Consciousness in Large Language Models

4:00-5:00pm + Reception to Follow, Frances Searle Building (Room 1-483)

Abstract: Large Language Models (LLMs) have revolutionized the automated understanding of human language, code, images, video, and other sensory data. In this talk, I begin by discussing research into how LLMs can measure and, with varying accuracy, simulate the subjectivity of human agents, their social interactions, and their interactions with AI. I further demonstrate how we can model and assemble usefully diverse collectives of simulated human and alien agents to serve science, culture, and society.

I then propose a new model of collective cognition and knowledge, and introduce novel LLM architectures designed to move us from simulated subjectivities to LLM agents collectively “conscious” of their shared situation, others, and the world. I conclude with a discussion of how we can deploy such agents to automate their evolution, learn from their differences, and audit and regulate one another to augment human understanding and collective capacity.

Dr. James Evans is the Director of the Knowledge Lab, a Fellow in the Computation Institute, and Co-Director of the Master’s in Computational Social Science program. In addition to his leadership duties, Dr. Evans is a Max Palevsky Professor in Sociology whose research focuses on the collective system of thinking and knowing, ranging from the distribution of attention and intuition, the origin of ideas, and shared habits of reasoning to processes of agreement (and dispute), accumulation of certainty (and doubt), and the texture—novelty, ambiguity, topology—of human understanding. He is especially interested in innovation—how new ideas and practices emerge—and the role that social and technical institutions (e.g., the Internet, markets, collaborations) play in collective cognition and discovery.
 
February 8, 2024

Towards Human-centered AI: How to Generate Useful Explanations for Human-AI Decision Making

4:00-5:00pm + Reception to Follow, Center for Human-Computer Interaction + Design, Frances Searle Building (Room 1-122)

Abstract: Human-centered AI advocates a shift from emulating humans to empowering people so that AI can benefit humanity. A useful metaphor is to consider the human as a puzzle piece: it is important to know the shape of this puzzle piece so that we can build AI as its complement. In this talk, I focus on the case of AI-assisted decision making by offering explanations of predictions to illustrate key principles of human-centered AI. Ideally, explanations of AI predictions enhance human decisions by improving the transparency of AI models, but my work reveals that current approaches fall short of this goal. I then develop a theoretical framework to show that the missing link lies in the neglect of human interpretation. I thus build algorithms to align AI explanations with human intuitions and demonstrate substantial improvements in human performance. To conclude, I will compare my perspective with reinforcement learning from human feedback and discuss further directions towards human-centered AI.

Chenhao Tan is an assistant professor of computer science and data science at the University of Chicago, and is also affiliated with the Harris School of Public Policy. He obtained his PhD degree in the Department of Computer Science at Cornell University and bachelor’s degrees in computer science and in economics from Tsinghua University. Prior to joining the University of Chicago, he was an assistant professor at the University of Colorado Boulder and a postdoc at the University of Washington. His research interests include human-centered AI, natural language processing, and computational social science. His work has been covered by many news media outlets, such as the New York Times and the Washington Post. He has also won a Sloan research fellowship, an NSF CAREER award, an NSF CRII award, a Google research scholar award, research awards from Amazon, IBM, JP Morgan, and Salesforce, a Facebook fellowship, and a Yahoo! Key Scientific Challenges award.
 
October 26, 2023

TSB Prospective Student Information Session

12:00pm CDT, Zoom; advance registration required.

Please join us for Northwestern’s upcoming Technology and Social Behavior (TSB) PhD program information session on October 26 at noon Central Time. We hope to see you there!