Writing with AI: Capturing Its Influence, Designing Its Future
1:00pm CDT, Center for Human-Computer Interaction + Design (Frances Searle Building 1-122)
AI is reshaping not only how we write but also what we write and who we become as writers. In this talk, I will first introduce CoAuthor, a platform that captures keystroke-level human-AI interactions, enabling fine-grained analysis of how AI affects our language, ideation, and collaboration. Second, I will present a design space informed by a systematic review of over 100 AI writing assistants, revealing key design choices, trade-offs, and underexplored areas. Lastly, I will conclude by inviting reflection on societal norms, expectations, and possible futures of writing with AI.
Mina Lee is an Assistant Professor in the Computer Science and Data Science Institute at the University of Chicago. Her research focuses on Writing with AI, particularly how AI is transforming our writing process, the content we produce, and our identities as writers. Named one of MIT Technology Review’s Korean Innovators under 35 in 2022, her work has been published in generalist journals (e.g., Nature Human Behaviour) as well as top-tier conferences in HCI (e.g., CHI), NLP (e.g., ACL and NAACL), and machine learning (e.g., NeurIPS). Her research on human-AI collaborative writing received an Honorable Mention Award at CHI 2022 and was featured in various media outlets, including The Economist. Previously, she was a postdoctoral researcher at Microsoft Research and received her Ph.D. in Computer Science from Stanford University.
From Outrage to Understanding: Conversation Networks, AI, and Human Agency
11:00am CDT, Ford Hive Room 2350
Today’s social media thrives on conflict—rewarding outrage and drowning out real conversations with noise, bots, and bullshit. It’s getting harder to understand what people actually think or feel, and harder still to build the trust that healthy communities and democracies need.
In 2017, after studying how social media spreads division and misinformation, we asked: Can technology help people actually listen to each other—not just yell past one another? At MIT’s Center for Constructive Communication, we’ve been developing AI tools and methods that support—not replace—real conversation. Our approach helps communities surface underheard voices, find shared themes, and learn from each other. I co-founded the nonprofit Cortico to bring this work into the world—in schools, cities, and local communities across the U.S.
This work blends tech, media, and community organizing, and it raises tough questions: How do we scale meaningful conversation without losing trust? How do we use AI for large-scale listening while keeping control in the hands of communities? And how do we make sure these tools strengthen human agency instead of replacing it?
In this talk, I’ll share what we’ve learned, show how these tools are being used on the ground, and highlight big questions that still need answering. Our goal is to create an approach that helps people connect, understand each other, and act—together.
Deb Roy is professor of Media Arts and Sciences at MIT, where he directs the MIT Center for Constructive Communication (CCC). He leads research in designing human-AI systems that foster dialogue, listening, and deliberation in ways that build civic muscle. Roy is also co-founder and unpaid CEO of Cortico, a closely affiliated nonprofit collaborator of CCC that develops, operates and supports a conversation platform designed to surface underheard voices and perspectives and create scalable dialogue networks.
The National Internet Observatory
1:00pm CDT, Frances Searle Building 1-483
The National Internet Observatory (NIO) is an NSF-funded infrastructure project that aims to help researchers study online behavior. Participants install a browser extension and/or mobile apps to donate their online activity data along with comprehensive survey responses. The infrastructure, located at Northeastern University, will offer approved researchers access to a suite of structured, parsed content data for selected domains to enable analysis and understanding of Internet use in the US. All of this is conducted within a robust research ethics framework that emphasizes ongoing informed consent and multiple layers of technical and legal protections for the values at stake in data collection, data access, and research. This talk will provide a brief overview of the contemporary need to build shared infrastructure for studying the internet, then discuss the details of the NIO infrastructure, the data collected, the participants, and the researcher intake process.
Dr. David Lazer is a University Distinguished Professor of Political Science and Computer Sciences at Northeastern University and Co-Director of the NULab for Digital Humanities and Computational Social Science. Prior to coming to Northeastern University, he was on the faculty at the Harvard Kennedy School (1998-2009). In 2019, he was elected a fellow of the National Academy of Public Administration. He is among the leading scholars in the world on misinformation and computational social science and has served in multiple leadership and editorial positions, including as a board member of the International Network for Social Network Analysis (INSNA), reviewing editor for Science, and associate editor of Social Networks and Network Science, along with service on numerous other editorial boards and program committees.
Dr. Scott Allen Cambo has dedicated his career to helping teams research, design, build, and monitor AI systems that are safe to use, responsibly developed, and trustworthy. He has been a program committee member for the ACM Conference on Fairness, Accountability, and Transparency (FAccT), a member of the steering committee for the Responsible AI Licensing Initiative, and an AI 2030 Global Fellow. His expertise focuses on responsible and safe AI, Natural Language Processing, Data & AI Governance, and Designing Trustworthy AI Systems.
Safe(r) Digital Intimacy: Lessons for Internet Governance & Digital Safety
1:00pm CDT, Center for Human-Computer Interaction (Frances Searle Building 1-122)
The creators of sexual content face a constellation of unique online risks. In this talk I will review findings from over half a decade of research I’ve conducted in Europe and the US on the use cases, threat models, and protections needed for intimate content and interactions. We will start by discussing what motivates the consensual sharing of intimate content in recreation (“sexting”) and labor (particularly on OnlyFans, a platform focused on commercial sharing of intimate content). We will then turn to the threat of image-based sexual abuse, a form of sexual violence that encompasses the non-consensual creation and/or sharing of intimate content. We will discuss two forms of image-based sexual abuse: the non-consensual distribution of intimate content that was originally shared consensually and the rising use of AI to create intimate content without people’s consent. The talk will conclude with a discussion of how these issues inform broader conversations around internet governance, digital discrimination, and safety-by-design for marginalized and vulnerable groups.
Dr. Elissa M. Redmiles is an Assistant Professor in the Computer Science Department at Georgetown University and a Faculty Associate at the Berkman Klein Center for Internet & Society at Harvard University. Dr. Redmiles uses computational, economic, and social science methods to understand users’ security, privacy, and online safety-related decision-making processes. Her work specifically investigates inequalities that arise in these processes in order to ultimately design systems that facilitate safety equitably across users. Dr. Redmiles’s current projects focus on security, privacy, and safety in digital labor, digital intimacy, digitally mediated offline interactions, and medical data donation; building transparency tools for privacy-enhancing technologies such as differential privacy; and measuring biases in and ethics of AI-based technologies. Her research has received multiple paper recognitions at USENIX Security, ACM CCS, ACM CHI, ACM CSCW, and ACM EAAMO and has been featured in popular press publications such as the New York Times, Wall Street Journal, Scientific American, Rolling Stone, Wired, and Forbes.
AI for Science Communication: Adapting to Different Stakeholders
1:00pm CDT, Center for Human-Computer Interaction (Frances Searle Building 1-122)
Communicating complex scientific ideas to the public is critical for an equitable, informed society, but doing so without misleading or overwhelming people is challenging. As large language models become more capable of summarizing and simplifying scientific text, we have a unique opportunity to use these models to make science more accessible. In this talk I will share my group’s research developing language tools and systems to help communicate science to more people. I will highlight two key communication strategies—based on our previous work—focused on different levels of language: explaining new findings from scientific papers and defining individual scientific terms. For both, I will discuss novel techniques we developed for adjusting generated language to fit the needs of different audiences and methods for modeling an individual reader’s background. I will close by discussing how these techniques generalize to other knowledge-intensive communication tasks (e.g., legal and educational settings) and the opportunities for developing new techniques for these settings.
Tal August is an assistant professor at the University of Illinois at Urbana-Champaign. He studies how to adapt language to different audiences, with a focus on knowledge-intensive domains like science, health, and legal communication. Tal conducts empirical analyses to study how changes in language will affect different audiences, and he builds intelligent reading and writing systems for augmenting our language in new ways. The long-term goal of Tal’s research is to improve our communication with—and understanding of—one another through technology. Tal was previously a Young Investigator at the Allen Institute for AI. He received his PhD at the Paul G. Allen School of Computer Science and Engineering at the University of Washington, advised by Katharina Reinecke and Noah Smith.
Understanding Online Attention: From Items to Markets
4:00pm CDT, Center for Human-Computer Interaction (Frances Searle Building 1-122)
What makes a video popular? What drives collective attention online? What are the similarities and differences between clicks and transactions in a market? This talk aims to address these three questions. First, I will discuss a physics-inspired stochastic time series model that explains and forecasts the seemingly unpredictable patterns of viewership over time. This model provides novel metrics for predicting expected popularity gains per share and assessing sensitivity to promotions. Next, I will describe new measurement studies and machine learning models that analyze how networks of online items influence each other’s attention. Finally, I will introduce a macroscopic view of attention, offering mathematical descriptions of market equilibria and distributed optimization. These results lay the groundwork for our ongoing research into the computational view of attention markets and potential mechanisms for fostering a healthy online ecosystem. Additionally, I will demonstrate Influence Flower, an interactive web app and arXiv plugin designed for qualitatively visualizing the intellectual influence of academic entities. I posit that the processes of academic knowledge creation afford many open questions on the dynamics of attention among crowds.
Lexing Xie is a Professor of Computer Science at the Australian National University (ANU), where she leads the ANU Computational Media Lab and directs the ANU-wide Integrated AI Network. Her research spans machine learning, computational social science, and computational economics, with a particular focus on online optimization, neural networks for sequences and networks, and applied problems such as distributed online markets and decision-making by humans and machines. Lexing received the 2023 ARC Future Fellowship and the 2018 Chris Wallace Award for Outstanding Research. Her research has garnered seven best paper and best student paper awards at ACM and IEEE conferences between 2002 and 2019. Among her editorial roles, she served as the inaugural Editor-in-Chief of the AAAI International Conference on Web and Social Media (ICWSM) and is a Program Co-Chair of ACM Multimedia 2024. Prior to joining ANU, she was a Research Staff Member at the IBM T.J. Watson Research Center in New York. She holds a PhD in Electrical Engineering from Columbia University and a BS in Electrical Engineering from Tsinghua University.
TSB Prospective Student Information Session
12:00pm CDT, Zoom; advance registration required.
Please join us for Northwestern’s upcoming Technology and Social Behavior (TSB) PhD program information session on November 1 at noon Central Time. We hope to see you there!