
2023-2024 Events

May 17, 2024

Code for Law and Law for Code: Toward Responsible Regulation for Complex Social Systems

12:00pm-1:00pm, Center for Human-Computer Interaction + Design, Frances Searle Building (Room 1-122)

Contemporary information societies constitute complex adaptive systems that are strongly shaped by two interacting sets of rules: law and code. Recent advances in computing methods and technologies, combined with the increasing availability of data concerning every aspect of our lives, create unprecedented opportunities to tackle our world’s biggest challenges, but also unprecedented risks for individuals, societies, and our planet at large. To seize the opportunities and mitigate the risks, we need a productive exchange between computer scientists, social scientists, humanities scholars, and legal scholars. In this talk, I will discuss different approaches to establishing such an exchange, from computational legal studies to ethical algorithm design. I will further describe how these approaches will help us achieve two long-term goals: developing a critical computational systems theory of law, and devising a transdisciplinary regulatory framework for responsible computing.

Dr. Corinna Coupette studied law at Bucerius Law School and Stanford Law School, completing their First State Exam in Hamburg in 2015. They obtained a PhD in law (Dr. iur.) from Bucerius Law School and a BSc in computer science from LMU Munich, both in 2018, as well as an MSc in computer science in 2020 and a PhD in computer science (Dr. rer. nat.) in 2023, both from Saarland University. Their legal dissertation was awarded the Bucerius Dissertation Award in 2018 and the Otto Hahn Medal of the Max Planck Society in 2020, and their interdisciplinary research profile was recognized with the Caroline von Humboldt Prize for outstanding female junior scientists in 2022. Corinna is currently a Digital Futures Postdoctoral Fellow at KTH Royal Institute of Technology and the Stockholm Resilience Centre, a Fellow at the Bucerius Center for Legal Technology and Data Science, and a Guest Researcher at the Max Planck Institute for Informatics and the Max Planck Institute for Tax Law and Public Finance. The overarching goal of their research is to understand how we can combine code, data, and law to better model, measure, and manage complex systems (e.g., contemporary information societies). To this end, they explore novel ways of connecting computer science and law, such as using algorithms to collect and analyze legal data as networks, or formalizing and implementing legal and mathematical desiderata for responsible data-centric machine learning with graphs.
 
May 6, 2024

Human-AI Interaction in the Age of Large Language Models

1:30-2:30pm, Center for Human-Computer Interaction + Design, Frances Searle Building (Room 1-122)

Large language models have revolutionized the way humans interact with AI systems, transforming a wide range of fields and disciplines. In this talk, we discuss several approaches to enhancing human-AI interaction using LLMs. The first examines social skill training, demonstrating how LLMs can be used to teach conflict resolution skills through simulated practice. The second develops efficient learning methods for adapting LLMs to low-resource languages and dialects, reducing disparities in language technologies. We conclude by discussing how human-AI interaction via LLMs can empower individuals and foster positive change.

Dr. Diyi Yang is an assistant professor in the Computer Science Department at Stanford University. Her research focuses on human-centered natural language processing and computational social science. She is a recipient of IEEE’s “AI 10 to Watch” (2020), a Microsoft Research Faculty Fellowship (2021), an NSF CAREER Award (2022), an ONR Young Investigator Award (2023), and a Sloan Research Fellowship (2024). Her work has received multiple paper awards or nominations at top NLP and HCI conferences (e.g., Best Paper Honorable Mention at ICWSM 2016, Best Paper Honorable Mention at SIGCHI 2019, and Outstanding Paper at ACL 2022).
 
April 25, 2024

From Simulated Subjectivity to Collective Consciousness in Large Language Models

4:00-5:00pm + Reception to Follow, Frances Searle Building (Room 1-483)

Abstract: Large Language Models (LLMs) have revolutionized the automated understanding and generation of human language, code, images, video, and other sensory data. In this talk, I begin by discussing research into how LLMs can measure and, more or less accurately, simulate the subjectivity of human agents, their social interactions, and their interactions with AI. I further demonstrate how we can model and assemble usefully diverse collectives of simulated human and alien agents to serve science, culture, and society. I then propose a new model of collective cognition and knowledge, and introduce novel LLM architectures designed to move us from simulated subjectivities to LLM agents collectively “conscious” of their shared situation, others, and the world. I conclude with a discussion of how we can deploy such agents to automate their own evolution, learn from their differences, and audit and regulate one another to augment human understanding and collective capacity.

Dr. James Evans is the Director of the Knowledge Lab, a Fellow in the Computation Institute, and the Co-Director of the Masters in Computational Social Science Program. In addition to his leadership duties, Dr. Evans is the Max Palevsky Professor of Sociology, with research that focuses on the collective system of thinking and knowing, ranging from the distribution of attention and intuition and the origin of ideas and shared habits of reasoning to processes of agreement (and dispute), the accumulation of certainty (and doubt), and the texture (novelty, ambiguity, topology) of human understanding. He is especially interested in innovation (how new ideas and practices emerge) and the role that social and technical institutions (e.g., the Internet, markets, collaborations) play in collective cognition and discovery.
 
February 8, 2024

Towards Human-centered AI: How to Generate Useful Explanations for Human-AI Decision Making

4:00-5:00pm + Reception to Follow, Center for Human-Computer Interaction + Design, Frances Searle Building (Room 1-122)

Abstract: Human-centered AI advocates a shift from emulating humans to empowering people so that AI can benefit humanity. A useful metaphor is to consider the human as a puzzle piece: we need to know the shape of this piece so that we can build AI as its complement. In this talk, I focus on the case of AI-assisted decision making, where explanations of AI predictions are offered to the decision maker, to illustrate key principles of human-centered AI. Ideally, such explanations enhance human decisions by improving the transparency of AI models, but my work reveals that current approaches fall short of this goal. I then develop a theoretical framework to show that the missing link lies in the neglect of human interpretation. I thus build algorithms to align AI explanations with human intuitions and demonstrate substantial improvements in human performance. To conclude, I will compare my perspective with reinforcement learning from human feedback and discuss further directions towards human-centered AI.

Chenhao Tan is an assistant professor of computer science and data science at the University of Chicago, and is also affiliated with the Harris School of Public Policy. He obtained his PhD from the Department of Computer Science at Cornell University and bachelor’s degrees in computer science and in economics from Tsinghua University. Prior to joining the University of Chicago, he was an assistant professor at the University of Colorado Boulder and a postdoc at the University of Washington. His research interests include human-centered AI, natural language processing, and computational social science. His work has been covered by many news media outlets, such as the New York Times and the Washington Post. He has also won a Sloan Research Fellowship, an NSF CAREER Award, an NSF CRII Award, a Google Research Scholar Award, research awards from Amazon, IBM, JP Morgan, and Salesforce, a Facebook Fellowship, and a Yahoo! Key Scientific Challenges Award.
 
October 26, 2023

TSB Prospective Student Information Session

12:00pm CDT, Zoom; advance registration required.

Please join us for Northwestern’s upcoming Technology and Social Behavior (TSB) PhD program information session on October 26 at noon Central Time. We hope to see you there!