Talks & Events

Past Events

Open and Reliable Language Model Adaptation
Jun 6th 2025, 9:00 – 10:00AM PT
Guest Speaker

Faeze’s research focuses on understanding and improving the capabilities of language models in real-world, dynamic settings. She develops resource-efficient algorithms for constrained reasoning and generation, investigates how training data and alignment strategies shape model behavior, and designs robust evaluation frameworks to capture emergent capabilities. Her broader goal is to build human-centered AI systems that are reliable, adaptive, and safe across diverse applications.

Abstract

In this talk, Faeze explores two crucial frontiers in AI development: democratizing language model adaptation and enhancing their reliability in real-world deployment. She introduces Tulu 3, a family of fully open post-trained language models. While post-training techniques are essential for refining behaviors and unlocking new capabilities in language models, open approaches have significantly lagged behind proprietary ones. Tulu 3 addresses this gap by providing complete transparency into data, code, and training methodologies, yielding models that outperform comparable open-weight alternatives while narrowing the gap with proprietary systems.

Concrete Problems for Developing Human-Centered Agents
May 30th 2025, 9:00 – 10:00AM PT
Guest Speaker

Gagan is part of the AI Frontiers group and co-leads research on AutoGen, a framework for building multi-agent AI systems. His work lies at the intersection of Artificial Intelligence and Human-Computer Interaction, with a focus on making AI systems more capable, interactive, and useful to people. Before joining Microsoft Research in 2022, Gagan completed his Ph.D. in Computer Science at the University of Washington, advised by Dan Weld. At UW, he was part of the Lab for Human-AI Interaction, where he studied how AI systems can complement human decision-making.

Abstract

Reflecting on his experience developing AutoGen—an open-source framework for building agents and AutoGen-based applications—this talk outlines three concrete challenges in creating human-centered agents: (1) reliably completing complex, multi-step tasks; (2) maintaining common ground between people and agents; and (3) auditing complex agent behaviors. Bansal will share demos and evaluations highlighting his progress on these challenges, as well as open opportunities for the HCI community.

AI as Normal Technology
May 16th 2025, 9:00 – 10:00AM PT
Guest Speaker

He is a co-author of the book AI Snake Oil and a newsletter of the same name, which is read by 50,000 researchers, policy makers, journalists, and AI enthusiasts. He previously co-authored two widely used computer science textbooks: Bitcoin and Cryptocurrency Technologies and Fairness in Machine Learning. Narayanan led the Princeton Web Transparency and Accountability Project to uncover how companies collect and use our personal information. His work was among the first to show how machine learning reflects cultural stereotypes. Narayanan was named to TIME’s inaugural list of the 100 most influential people in AI.

Abstract

Arvind Narayanan will present a new paper co-authored with Sayash Kapoor, in which they articulate a vision of artificial intelligence as a “normal technology,” standing in contrast to both utopian and dystopian narratives that portray AI as a potentially superintelligent entity.

In the presentation, Narayanan will explain why they believe the impacts of advanced AI, even if transformative, are likely to unfold gradually. He will make a critical distinction between AI methods, AI applications, and AI adoption. Additionally, he will explore a potential division of labor between humans and AI in a world shaped by advanced AI, and examine the implications of treating AI as normal technology for AI policy, AI safety, and broader human progress.

Culture as a Lens on Responsible AI
May 9th 2025, 9:00 – 10:00AM PT
Guest Speaker

Dr. Prabhakaran is a Staff Research Scientist at Google’s Responsible AI and Human Centered Technologies organization, where he co-leads the interdisciplinary Technology, AI, Society and Culture (TASC) team. Before Google, he was a postdoc at Stanford University, and he obtained his PhD from Columbia University. His prior research focused on building scalable ways to use language technologies to identify and address large-scale societal issues such as racial disparities in policing, workplace incivility, and online abuse. He has published over 50 articles in top-tier venues such as PNAS, ACL, TACL, NAACL, EMNLP, NeurIPS, and FAccT.

Abstract

AI technologies are often developed within mono-cultural development contexts, but are meant to interact with multi-cultural usage contexts with divergent values, knowledge systems, and interpretive practices. In this talk, I will present our research that uses culture as a lens on responsible AI and demonstrate its crucial influence along various dimensions: culture as reflected in data, the cultural values and norms we encode in our models, and the cultural knowledge systems we rely on to evaluate our models. More specifically, we show how geo-cultural differences shape what language is deemed abusive, stereotypical, or offensive, and where AI models fail in detecting them.

Beyond Scaling: Frontiers of Retrieval-Augmented Language Models
May 2nd 2025, 9:00 – 10:00AM PT
Guest Speaker

Her research focuses on overcoming the limitations of large language models (LMs) by developing retrieval-augmented systems.

Her work has received top paper awards at major NLP/ML conferences, the IBM Global Fellowship, and recognition from Forbes and MIT Technology Review.

Abstract

Despite their success, Large Language Models (LLMs) remain limited by issues like hallucination and outdated knowledge. In this talk, Akari introduces Augmented LMs—a new paradigm that enhances LLMs with external modules for greater reliability. Focusing on Retrieval-Augmented LMs, she presents her research on scalable training and retrieval methods, and highlights OpenScholar, a system now used by over 30,000 researchers. She concludes with a vision for future advances in modular, multimodal AI.
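
For readers new to the paradigm, here is a minimal sketch of the retrieval-augmented pattern the talk centers on. The bag-of-words scorer, toy corpus, and prompt template are illustrative stand-ins, not OpenScholar’s actual components:

```python
# A toy retrieval-augmented generation (RAG) loop: retrieve evidence,
# then condition the generator on it instead of relying only on
# parametric (and possibly outdated) knowledge.
from collections import Counter
import math

def bow(text):
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(v * b[t] for t, v in a.items())
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, corpus, k=2):
    """Rank passages by similarity to the query; keep the top k."""
    q = bow(query)
    return sorted(corpus, key=lambda p: cosine(q, bow(p)), reverse=True)[:k]

def augmented_prompt(query, passages):
    """Ground the generator in retrieved evidence; in a real system this
    prompt would be sent to an LLM."""
    context = "\n".join(f"[{i + 1}] {p}" for i, p in enumerate(passages))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer with citations:"

corpus = [
    "Retrieval-augmented LMs fetch external documents at inference time.",
    "Hallucination refers to fluent but unsupported model output.",
    "Chess engines search game trees with alpha-beta pruning.",
]
print(augmented_prompt("Why augment LMs with retrieval?",
                       retrieve("why augment LMs with retrieval", corpus)))
```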

Responsible AI for Enterprise Data Centers: Balancing Technical Excellence and Ethical Standards
Apr 25th 2025, 9:00 – 10:00AM PT
Guest Speaker

Shilpi leads product strategy at IBM Infrastructure, bringing AI into enterprise data center solutions. With 14+ years in engineering, data science, and product management, she holds two AI patents and is an IEEE Senior Member. Shilpi is passionate about ethical tech and building scalable, responsible systems.

Abstract

As AI transforms enterprise data center infrastructure, product managers face a critical challenge: driving innovation while upholding ethical standards. This session explores how Responsible AI is shaping the future of system offerings—ensuring data privacy, fairness, and transparency in environments where operational data is central. Through real-world examples, attendees will learn how to apply ethical AI frameworks to build scalable, innovative, and socially responsible solutions that meet both technical and societal needs.

Weak-to-Strong Generalization
Apr 18th 2025, 9:00 – 10:00AM PT
Guest Speaker

Pavel’s research focuses on reasoning, reinforcement learning, and AI alignment. He previously worked on advanced problem-solving models at OpenAI and contributed to Claude 3.7 and GPT-4-level systems. He completed his PhD at NYU in 2023 and will return as an Assistant Professor in Fall 2025, joining the Tandon CSE and Courant CS departments.

Abstract

As AI systems grow more capable, aligning them becomes increasingly challenging—especially when their behavior outpaces human understanding. This talk explores weak-to-strong generalization: can weak models effectively supervise stronger ones? Through experiments in NLP, chess, and reward modeling, the talk shows how even limited supervision can unlock surprising performance gains—and why alignment techniques like RLHF may not scale without new approaches.
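
The core protocol is easy to state in code. The sketch below uses small scikit-learn classifiers as stand-ins for the weak supervisor and strong student in the talk’s LM experiments; the data, models, and “performance gap recovered” (PGR) arithmetic are illustrative assumptions, not the original setup:

```python
# Toy weak-to-strong protocol: a weak supervisor labels data for a
# stronger student, and we check how much of the gap to a fully
# supervised ceiling the student recovers.
from sklearn.datasets import make_moons
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import accuracy_score

X, y = make_moons(n_samples=3000, noise=0.25, random_state=0)
X_sup, y_sup = X[:500], y[:500]          # small set the weak model learns from
X_pool, y_pool = X[500:2500], y[500:2500]  # pool labeled only by the weak model
X_test, y_test = X[2500:], y[2500:]

weak = LogisticRegression().fit(X_sup, y_sup)              # weak supervisor
pseudo = weak.predict(X_pool)                              # imperfect labels
strong = GradientBoostingClassifier(random_state=0).fit(X_pool, pseudo)
ceiling = GradientBoostingClassifier(random_state=0).fit(X_pool, y_pool)

weak_acc = accuracy_score(y_test, weak.predict(X_test))
w2s_acc = accuracy_score(y_test, strong.predict(X_test))
ceil_acc = accuracy_score(y_test, ceiling.predict(X_test))
# PGR: fraction of the weak-to-ceiling gap closed by weak supervision.
pgr = (w2s_acc - weak_acc) / (ceil_acc - weak_acc) if ceil_acc > weak_acc else float("nan")
print(f"weak={weak_acc:.3f}  weak-to-strong={w2s_acc:.3f}  "
      f"ceiling={ceil_acc:.3f}  PGR={pgr:.2f}")
```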

Book Tour: The AI Con: How to Fight Big Tech’s Hype and Create the Future We Want
Apr 11th 2025, 9:00 – 10:00AM PT
Guest Speaker

Emily M. Bender is a Professor of Linguistics and an Adjunct Professor in the School of Computer Science and the Information School at the University of Washington.

Her research focuses on multilingual grammar engineering, computational semantics, and the broader societal impacts of language technology.

Abstract

A smart, incisive look at the technologies sold as artificial intelligence, the drawbacks and pitfalls of technology sold under this banner, and why it’s crucial to recognize the many ways in which AI hype covers for a small set of power-hungry actors at work and in the world.

Bridging the AI-UX Divide: Responsible AI Design through Human-Centered Collaboration
Mar 14th 2025, 9:00 – 10:00AM PT
Guest Speaker

Hari’s research looks at enabling multidisciplinary teams to design and develop ethical, responsible, and human-centered experiences with AI. He applies this to AI in education, drawing from cognitive psychology and learning sciences to enhance learning.

He is also the Ram and Vijay Shriram Faculty Fellow at Stanford’s Institute for Human-Centered AI.

Abstract

In traditional software development, UX design and engineering are distinct: designers create specs, and engineers build them. AI blurs this line, as systems evolve dynamically with data and user interactions.

In this talk, I’ll explore how collaboration at the AI-UX boundary shapes responsible AI design. Drawing from industry studies, I’ll show how “leaky” abstractions encourage cross-disciplinary teamwork and why end-user data is crucial in both AI and UX design. I’ll discuss challenges in aligning AI behavior with human expectations, emphasizing transparency, interpretability, and accountability. Finally, I’ll present insights from generative AI prototyping and share practical tools for integrating responsible AI principles into UX workflows.

Scale Can’t Overcome Pragmatics: Instilling Spatial and Compositional Reasoning into Multimodal Language Models
Mar 7th 2025, 9:00 – 10:00AM PT
Guest Speaker

Ranjay’s research spans computer vision, natural language processing, robotics, and human-computer interaction, with recognized work at CVPR, ACL, NeurIPS, and other leading conferences. His publications have been featured in Science, Forbes, and The Wall Street Journal. He holds a Ph.D. in CS from Stanford and degrees in ECE and CS from Cornell.

Abstract

Compositionality is key to human vision and language, allowing us to interpret new scenes and sentences by combining familiar elements. While past research incorporated compositional and spatial priors into machine learning, large-scale models trained on internet data have largely overlooked them.

This talk formalizes compositionality through cognitive science, evaluating whether models like GPT-4 and Gemini exhibit it—revealing near-random performance. We explore architectural and training modifications inspired by neuroscience and cognitive science to enhance compositional reasoning and address gaps in training data, showing how high-quality human annotations can help build stronger vision-language models.

Toward Effective Data Governance in Data-Driven Systems
Feb 21st 2025, 9:00 – 10:00AM PT
Guest Speaker

Asia’s research focuses on data-driven systems at the intersection of computing and society. She collaborates with experts across disciplines and has advised the European Commission on tech policy. With experience at Microsoft and Google, her work has earned awards like the Rodota Award and recognition among the “100 Brilliant Women in AI Ethics.”

Abstract

Contemporary AI systems are characterized by extensive personal data collection despite the increasing societal costs associated with such practices. To prevent harm, data protection regulations specify various principles for respectfully processing user data, such as purpose limitation, data minimization, or the need to establish a valid basis for data processing, such as consent or legitimate interest. Yet, practical implementations of these principles in data-driven systems leave much to be desired. This talk will delve into the computational and human factors contributing to lax data protection implementations and uncover ways research can contribute to improved data governance online.

Training Social Skill via Large Language Models
Feb 14th 2025, 9:00 – 10:00AM PT
Guest Speaker

Dr. Yang’s research focuses on human-centered natural language processing and computational social science. She has received the Microsoft Research Faculty Fellowship (2021), NSF CAREER Award (2022), ONR Young Investigator Award (2023), and Sloan Research Fellowship (2024). Her work has earned multiple paper awards and nominations at top NLP and HCI conferences.

Abstract

Social skills are key to success in work and life, but finding good practice opportunities isn’t easy. Most training relies on expert supervision, which doesn’t scale well given the shortage of trained professionals. So how can we make social skill training more accessible and engaging? In this talk, I’ll share two projects using LLMs to bridge this gap—one helping therapists improve their skills with AI-powered feedback, and another using simulated practice to teach conflict resolution. We’ll also discuss the risks of LLM-based training and how we can address them.

Human-AI Interaction Under Societal Disagreement
Feb 7th 2025, 9:00 – 10:00AM PT
Guest Speaker

Michael’s research focuses on social, societal, and interactive technologies, with work featured in The New York Times, TED AI, and MIT Technology Review. He has earned prestigious awards, including the Alfred P. Sloan Fellowship, the UIST Lasting Impact Award, and the Patrick J. McGovern Tech for Humanity Prize. Michael holds a master’s and Ph.D. in Computer Science from MIT.

Abstract

How can we better model human attitudes and behaviors? Traditional simulations often fail to capture the complexity of human behavior, but AI opens up new possibilities.

In this talk, Michael will discuss generative agents—AI-driven simulations of human behavior that can remember, reflect, and plan. Grounded in qualitative data from over 1,000 Americans, these agents replicate survey responses with high accuracy. This research offers insights into designing more effective online social spaces, addressing societal disagreements in AI, and embedding societal values into algorithms.

“It’s not a representation of me”: Examining Accent Bias and Digital Exclusion in Synthetic AI Voice Services
Jan 31st 2025, 9:00 – 10:00AM PT
Guest Speaker

Dr. Avijit Ghosh works at the intersection of machine learning, ethics, and policy. His research has led to impactful changes, such as shaping regulation in New York City and prompting Facebook to remove its biased ad-targeting algorithm. His work has been featured in The New York Times, Forbes, the MIT Tech Review, and more.

Abstract

AI speech generation and voice cloning technologies produce natural speech but may reinforce accent discrimination. A study of two synthetic AI voice services (Speechify and ElevenLabs) highlights performance gaps across regional English accents, underscoring the need for inclusive design to avoid excluding diverse linguistic traits. These findings call for actionable steps from developers, policymakers, and organizations to create fairer, socially responsible AI solutions.

How Microsoft Blends Safety and Security in AI Red Teaming
Jan 24th 2025, 9:00 – 10:00AM PT
Guest Speaker

Tori started her career as a national security strategy consultant, supporting agencies like DHS and the FBI. At Microsoft, she has led initiatives in People Analytics, AR/VR monetization, and now heads AI Safety Red Teaming, ensuring Microsoft’s high-risk GenAI technologies are safe and secure before launch.

Abstract

The principles and methods of the Microsoft AI Red Team (AIRT) combine security red-teaming practices and adversarial ML techniques with safety frameworks and perspectives. This talk will cover how our AIRT integrates these different approaches while red teaming all of our high-risk GenAI tech, resulting in a cross-functional team approach that adjusts to our diverse technology offerings, from models to copilots.

Generative Agents: Interactive Simulacra of Human Behavior
Jan 17th 2025, 9:00 – 10:00AM PT
Guest Speaker

Michael’s research focuses on social, societal, and interactive technologies, with work featured in The New York Times, TED AI, and MIT Technology Review. He has earned prestigious awards, including the Alfred P. Sloan Fellowship, the UIST Lasting Impact Award, and the Patrick J. McGovern Tech for Humanity Prize. Michael holds a master’s and Ph.D. in Computer Science from MIT.

Abstract

How can we better model human attitudes and behaviors? Traditional simulations often fail to capture the complexity of human behavior, but AI opens up new possibilities.

In this talk, Michael will discuss generative agents—AI-driven simulations of human behavior that can remember, reflect, and plan. Grounded in qualitative data from over 1,000 Americans, these agents replicate survey responses with high accuracy. This research offers insights into designing more effective online social spaces, addressing societal disagreements in AI, and embedding societal values into algorithms.
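
As a rough illustration of the remember/reflect/plan architecture the talk describes, the sketch below stubs out the language-model calls; the class names, scoring, and example memories are simplified assumptions, not the actual system:

```python
# Schematic of a generative agent's remember / reflect / plan loop.
# A real agent would prompt an LLM with the retrieved memories at
# each step; here those calls are replaced by string formatting.
import time
from dataclasses import dataclass, field

@dataclass
class Memory:
    text: str
    importance: float                      # salience, e.g. rated by an LLM
    t: float = field(default_factory=time.time)

class GenerativeAgent:
    def __init__(self, name):
        self.name = name
        self.stream = []                   # append-only memory stream

    def remember(self, text, importance=1.0):
        self.stream.append(Memory(text, importance))

    def retrieve(self, k=3):
        # Rank memories by recency-weighted importance (query relevance
        # is omitted in this toy version).
        now = time.time()
        return sorted(self.stream,
                      key=lambda m: m.importance / (1.0 + now - m.t),
                      reverse=True)[:k]

    def reflect(self):
        # Periodically synthesize higher-level observations from raw ones.
        salient = "; ".join(m.text for m in self.retrieve())
        self.remember(f"Reflection on: {salient}", importance=2.0)

    def plan(self):
        return f"{self.name} acts on: {self.retrieve(k=1)[0].text}"

agent = GenerativeAgent("Ada")
agent.remember("Met a neighbor at the cafe", importance=1.5)
agent.remember("The cafe closes early on Sundays")
agent.reflect()
print(agent.plan())
```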

Philosophical Considerations in AI Alignment
Dec 13th 2024, 9:00 – 10:00AM PT
Guest Speaker

Pawan teaches Product Management and System Design at UW PCE. He has built large-scale products used by millions globally, drawing on expertise in system design, development, and engineering management at Silicon Valley startups and Fortune 500 companies. Passionate about AI, he leads The Alignment Meetup in Seattle, fostering discussions on cutting-edge advancements in the field.

Abstract

The rapid rise of artificial intelligence brings challenges in ensuring it aligns with human values and ethics. This session dives into the philosophical core of AI Alignment—how to build systems that act in line with human intentions and morals. Topics include value uncertainty, ethical decision-making, existential risks, and approaches like Inverse Reinforcement Learning, Coherent Extrapolated Volition, and Constitutional AI. Beyond technical fixes, AI Alignment raises profound questions about intelligence, ethics, and the future of human-technology interaction.

Long-term Societal Impacts of AI and the Role of Higher Education
Nov 15th 2024, 9:00 – 10:00AM PT
Guest Speaker

Jared is a UW Computer Science alumnus, serial entrepreneur, and pioneer in Internet and cloud services. He served as a senior technical advisor for HBO’s Silicon Valley and was instrumental in launching the Nest Learning Thermostat and designing CenturyLink Cloud’s global monitoring infrastructure.

Abstract

Jared recently joined UW-IT as the Lead AI Architect, bringing decades of expertise in enterprise-level AI solutions to education. He began his career building core Internet infrastructure, including oz.net (an ISP) and the Seattle Internet eXchange, and later founded Cascadeo, an international tech services firm recognized in Gartner’s Magic Quadrant for Public Cloud IT Transformation. In this talk, he’ll share his insights on sustainability and environmental concerns around AI, with a focus on the role of higher education. Join us to learn about UW’s plans in the AI space and engage in discussions about the future of AI with a leader in the field!

AI Ageism: studying age discrimination in the era of algorithms. Reflections and future directions for researching algorithmic inequalities
Nov 8th 2024, 9:00 – 10:00AM PT
Guest Speaker

Dr. Stypińska is a sociologist researching algorithmic discrimination and bias in AI and their impacts. Her project, “AI Ageism: New Forms of Age Discrimination and Exclusion in the Era of Algorithms and Artificial Intelligence” (AGEAI), examines how AI technologies affect aging populations in Europe from a socio-technical perspective.

Abstract

In this talk, Dr. Stypińska will present findings from her AGEAI research on AI ageism, supported by empirical data from Germany, Spain, Great Britain, Poland, and the Netherlands. She will also reflect on how the social sciences can provide a deeper and more systematic understanding of algorithmic inequalities, extending beyond the study of AI bias.

Auditing Automated Speech Recognition Systems for Disparities
Nov 1st 2024, 9:00 – 10:00AM PT
Guest Speaker

Dr. Allison Koenecke’s research on algorithmic fairness leverages computational methods (e.g., ML, causal inference) to examine societal inequities across domains. She has received three NSF grants, was named to Forbes 30 Under 30 in Science, and earned Cornell’s CIS DEIB Faculty of the Year award.

Abstract

Automated speech recognition (ASR) systems convert spoken language to text across applications, but Allison’s audit of commercial ASR systems (e.g., OpenAI, Amazon, Apple, Google, IBM, Microsoft) reveals notable underperformance for African American English speakers and individuals with language disorders like aphasia. Her findings quantify these disparities and identify the specific, underlying ML-driven causes. Allison underscores the importance of regular audits to ensure emerging speech systems serve all users inclusively.
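
For context, a disparity audit of this kind boils down to computing word error rate (WER) per speaker group and comparing. A toy version is sketched below, with invented transcripts standing in for the study’s real recordings and ASR output:

```python
# Compute word error rate (WER) per speaker group via token-level
# edit distance, then compare group means. Data here is made up.
def wer(ref, hyp):
    """WER = edit distance between token sequences / reference length."""
    r, h = ref.split(), hyp.split()
    d = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
    for i in range(len(r) + 1):
        d[i][0] = i
    for j in range(len(h) + 1):
        d[0][j] = j
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            sub = d[i - 1][j - 1] + (r[i - 1] != h[j - 1])
            d[i][j] = min(sub, d[i - 1][j] + 1, d[i][j - 1] + 1)
    return d[len(r)][len(h)] / max(len(r), 1)

samples = [  # (group, reference transcript, ASR hypothesis)
    ("group_a", "he was going to the store", "he was going to the store"),
    ("group_a", "she finished the report today", "she finished the report today"),
    ("group_b", "he was going to the store", "he was gone to this store"),
    ("group_b", "she finished the report today", "she finish the report"),
]
by_group = {}
for g, ref, hyp in samples:
    by_group.setdefault(g, []).append(wer(ref, hyp))
for g, errs in sorted(by_group.items()):
    print(g, f"mean WER = {sum(errs) / len(errs):.2f}")
```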

Sealed Knowledges: The Impact and Underlying Assumptions of Integrating Direct Answers<br />
into Online Search
Illuminating Dark Data: Privacy-Preserving AI in Distributed Data Silos
Oct 25th 2024, 9:00 – 10:00AM PT
Guest Speaker

Sikha’s research interests lie in the field of responsible AI. Her current research focuses on privacy-preserving synthetic data generation with secure multiparty computation. She is also a recipient of the JPMorgan Fellowship.

Abstract

Data is the lifeblood of AI. However, much of the most valuable data in the nation requires tight access control due to its sensitivity. As a result, AI remains heavily underutilized in some of the most important domains, including healthcare and genomics. In this talk, Sikha will discuss promising solutions to overcome this data bottleneck while safeguarding privacy. She will share insights into ongoing research on cryptographic techniques and the generation of synthetic data using the National AI Research Resource (NAIRR).
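
As a flavor of the cryptographic techniques involved, the toy below shows additive secret sharing, a basic building block of secure multiparty computation. It is a sketch under simplifying assumptions; real MPC protocols add communication and malicious-security machinery this omits:

```python
# Additive secret sharing: each data holder splits its value into
# random shares so no single party sees the raw data, yet the parties
# can still compute an aggregate.
import secrets

P = 2**61 - 1  # arithmetic is modulo a large prime

def share(x, n=3):
    """Split x into n shares that sum to x (mod P)."""
    shares = [secrets.randbelow(P) for _ in range(n - 1)]
    shares.append((x - sum(shares)) % P)
    return shares

def reconstruct(shares):
    return sum(shares) % P

# Three hospitals privately sum patient counts without revealing them.
counts = [120, 75, 310]
all_shares = [share(c) for c in counts]
# Each computing party sums the shares it holds (one from each hospital)...
party_sums = [sum(s[i] for s in all_shares) % P for i in range(3)]
# ...and only the total is ever reconstructed.
print(reconstruct(party_sums) == sum(counts))
```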

Towards Bidirectional Human-AI Alignment via Interaction: Human-Centered AI Explanation, Evaluation, and Development
Oct 18th 2024, 9:00 – 10:00AM PT
Guest Speaker

Dr. Shen’s work is anchored in HCI and intersects with multiple fields, such as NLP, ML, Speech Processing, and Data Science. Her research on bidirectional human-AI alignment aims to empower humans to interactively explain, evaluate, and collaborate with AI systems, while incorporating human feedback and values to improve AI systems.

Abstract

In this talk, Dr. Shen will provide a comprehensive overview of “bidirectional human-AI alignment,” starting with gaps in how humans understand AI decision-making. She will discuss why many explainable AI (XAI) algorithms fall short and introduce interactive solutions she developed to bridge this gap. Additionally, Dr. Shen will introduce methods for leveraging interactive human feedback to enhance AI performance. Finally, she will share insights from her latest research on fundamental values in alignment, outlining key challenges and opportunities for achieving bidirectional alignment over the long term.

Sealed Knowledges: The Impact and Underlying Assumptions of Integrating Direct Answers into Online Search
Oct 11th 2024, 9:00 – 10:00AM PT
Guest Speaker

Nora’s research questions ethical approaches to chatbots and language technology and argues for a post-anthropocentric, structural and power-aware understanding of the implications of language technology. Currently, she is on a research stay at the University of Washington under the supervision of Prof. Emily Bender.

Abstract

The integration of large language models (LLMs) to provide direct answers to online search queries signals a significant change in online search. Questioning the implications of this new search paradigm, Nora introduces the term “Sealed Knowledges” to highlight the increasingly difficult access to a plurality of potential answers through the output of a singular, authoritative, and plausible-sounding paragraph of text. She argues that the integration of language models for the provision of direct answers in search engines is based on a de-situated and disembodied understanding of knowledge, affects users’ subjectivities, and can lead to an increasing spread of misinformation.

Operationalizing Responsible Machine Learning: From Equality Towards Equity
Oct 4th 2024, 9:00 – 10:00AM PT
Guest Speaker

Dr. Wang’s research focuses on machine learning fairness and algorithmic bias. She has been recognized by the NSF GRFP, EECS Rising Stars, Siebel Scholarship, and Microsoft AI & Society Fellowship.

She earned her PhD in CS from Princeton University and BS in EECS from UC Berkeley.

Abstract

With the widespread proliferation of machine learning, there arises both the opportunity for societal benefit and the risk of harm. Approaching responsible machine learning is challenging because technical approaches may prioritize a mathematical definition of fairness that correlates poorly with real-world constructs of fairness due to too many layers of abstraction. Conversely, social approaches that engage with prescriptive theories may produce findings that are too abstract to effectively translate into practice. In my research, I bridge these approaches and utilize social implications to guide technical work.

Open Source Software – Linux Kernel Communities and Lessons for AI development
May 31st 2024, 9:00 – 10:00AM PT
Guest Speaker

Dr. Do Yoon Kim is an Assistant Professor of Business Analytics at the Carroll School of Management, Boston College.

His research interests include innovation, technology strategy, and digitization. His recent work studies the software and hardware industries, where he analyzes the benefits and challenges of firms innovating alongside free and open source software communities.

Professor Kim received a doctorate in Strategy from Harvard Business School and a B.A. from Northwestern University.

Abstract

More broadly, he is interested in how knowledge flows across organizational boundaries, and firm strategies/government policies that can facilitate efficient transfers of knowledge.

The Uneven Impact of Generative AI on Entrepreneurial Performance
May 17th 2024, 9:00 – 10:00AM PT
Guest Speaker

Dr. Rem Koning is the Mary V. and Mark A. Stevens Associate Professor of Business Administration at Harvard Business School.

His research explores how we can help more entrepreneurs, scientists, and innovators succeed. His work on entrepreneurship and innovation spans industries and regions, from Indian tech ventures to small business owners in Kenya to women’s and mental health startups in the US.

Abstract

Building on this work, he is the co-director and co-founder of the Tech for All lab at The Digital, Data, and Design (D^3) Institute at Harvard, where he leads a group of interdisciplinary researchers studying how entrepreneurs can accelerate and shift the rate and direction of science, technology, and artificial intelligence (AI) to benefit all of humanity.

Time and Value of Data
May 10th 2024, 9:00 – 10:00AM PT
Guest Speaker

Dr. Ehsan Valavi is a Postdoctoral Associate at MIT Sloan’s initiative on the digital economy. Continuing his Ph.D. research, he also studies the scalability of Artificial Intelligence (AI) based solutions and the value of data for digital firms. He received his Ph.D. in Technology and Operations Management from Harvard Business School. He completed his undergraduate studies in Electrical Engineering (Telecommunications) at the University of Tehran and has a master’s degree in communication systems from the Swiss Federal Institute of Technology at Lausanne (EPFL). He also holds another master’s degree in Decision, Risk, and Operations Management from Columbia Business School.

Abstract

His research interests lie at the interface of digitization, strategy, and operations management. He is currently studying the growth of digital firms and the challenges they face in various business areas. At MIT, he mainly focuses on emerging blockchain-based technologies.

Human Capital Acquisition in Response to Data Breaches
May 3rd 2024, 9:00 – 10:00AM PT
Guest Speaker

Dr. Xiupeng Wang’s research primarily focuses on the impact of emerging technologies, particularly recent advances in information technologies, on firms and the labor market. His current research studies how information technology affects organizational structures, firm sizes, and market concentration. His research also extends to cybersecurity, analyzing firms’ human capital acquisition in response to data breaches. Through massive online experiments, Dr. Wang is developing a new measure of household production and finds that the non-market contributions currently omitted from GDP may be greater than previously believed. He is also excited about using the matched employer-employee data from the U.S. Census Research Data Center to study the effects of high-skilled immigration. Dr. Wang received his Ph.D. in economics from the University of Connecticut.

Abstract

Given the rise in the frequency and cost of data security threats, it is critical to understand whether and how companies strategically adapt their operational workforce in response to data breaches. We study hiring in the aftermath of data breaches by combining information on data breach events with detailed firm-level job posting data. 

Using a staggered Difference-in-Differences approach, we show that breached firms significantly increase their demand for cybersecurity workers. Furthermore, firms’ responses to data breaches extend to promptly recruiting public relations personnel — an act aimed at managing trust and alleviating negative publicity — often ahead of cybersecurity hires. Following a breach, the likelihood that firms post a cybersecurity job rises by approximately two percentage points, which translates to an average willingness to spend an additional $61,961 in annual wages on cybersecurity, public relations, and legal workers. While these hiring adjustments are small for affected firms, they represent a large potential impact of over $300 million on the overall economy. Our findings underscore the vital role of human capital investments in shaping firms’ cyber defenses and provide a valuable roadmap for managers and firms navigating cyberthreats in an increasingly digital age.
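
For readers unfamiliar with the method, the sketch below runs a basic two-way fixed-effects difference-in-differences regression on a synthetic firm-year panel. It is an illustrative stand-in, not the paper’s specification (which handles staggered adoption more carefully than plain two-way fixed effects):

```python
# Two-way fixed-effects DiD on a synthetic panel: firm and year fixed
# effects absorb level differences and common shocks; the treated_post
# coefficient estimates the post-breach effect. All numbers invented.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
# Half the firms suffer a breach in a staggered year; half never do.
breach_year = {f: rng.choice([2017, 2019, 2021]) if f % 2 else None
               for f in range(200)}
rows = []
for f in range(200):
    for yr in range(2015, 2023):
        post = int(breach_year[f] is not None and yr >= breach_year[f])
        # Outcome: e.g., probability (in pp) of posting a cybersecurity job.
        y = 5 + 0.3 * (yr - 2015) + 2.0 * post + rng.normal(0, 1)
        rows.append(dict(firm=f, year=yr, treated_post=post, y=y))
df = pd.DataFrame(rows)

fit = smf.ols("y ~ treated_post + C(firm) + C(year)", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["firm"]}  # cluster by firm
)
print(f"DiD estimate: {fit.params['treated_post']:.2f} "
      f"(se {fit.bse['treated_post']:.2f})")
```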

EU Scientific Advisory Mechanism on AI – Policy Talk
April 26th 2024, 9:00 – 10:00AM PT
Guest Speaker

Andrea Emilio Rizzoli (Co-chair of the SAPEA working group on AI in science) holds a Ph.D. in Control Engineering and Informatics from Politecnico di Milano (Italy). He is the Director of the Dalle Molle Institute for Artificial Intelligence (IDSIA USI-SUPSI), a senior research scientist at IDSIA, and a professor at SUPSI. His interests are in simulation and decision support systems, and he has been involved in developing simulation models in a range of contexts, from natural resource modelling to logistics and industrial systems. He has authored more than 100 publications on these subjects.

Abstract

The European Union’s Scientific Advisory Mechanism (SAM) provides independent scientific evidence and policy recommendations to the European institutions at the request of the College of Commissioners. The mechanism comprises SAPEA, which brings together around 110 academies from across Europe, offering expertise in the natural sciences, engineering and technology, the medical, health, agricultural and social sciences, and the humanities, and provides independent evidence reviews on request.

Machine learning and data science methods to improve social equity
April 19th 2024, 9:00 – 10:00AM PT
Guest Speaker

Emma Pierson is an assistant professor of computer science at the Jacobs Technion-Cornell Institute at Cornell Tech and the Technion and a computer science field member at Cornell University, with a secondary joint appointment as an Assistant Professor of Population Health Sciences at Weill Cornell Medical College. She develops data science and machine learning methods to study two broad areas: inequality and healthcare.

Abstract

Our society remains profoundly inequitable, due in part to biases in human and algorithmic decision-making. Addressing this, we propose machine learning and data science methods to improve the fairness of decision-making, focusing on applications in healthcare and public health. First, we develop scalable Bayesian methods for assessing bias in human decision-making and apply these methods to measure discrimination in police traffic stops across the United States. Second, we develop methods to address an important source of bias in algorithmic decision-making: when the target the algorithm is trained to predict is an imperfect proxy for the desired target. We show how to leverage plausible domain knowledge in two real-world settings — flood detection and medical testing — to detect and mitigate target variable bias.
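
To make the flavor of such bias tests concrete, here is a toy outcome-test comparison with made-up counts; the talk’s hierarchical Bayesian threshold models are far richer than this sketch:

```python
# Outcome-test intuition: if searches of one group succeed (find
# contraband) at a lower rate, that suggests a lower search threshold
# was applied to that group. A beta-binomial posterior keeps the
# comparison honest at small counts. Counts below are invented.
import numpy as np

rng = np.random.default_rng(1)
searches = {"group_a": (900, 300), "group_b": (400, 90)}  # (searched, hits)

posteriors = {
    # Beta(1,1) prior on the hit rate; posterior is Beta(1+hits, 1+misses).
    g: rng.beta(1 + hits, 1 + n - hits, size=100_000)
    for g, (n, hits) in searches.items()
}
diff = posteriors["group_a"] - posteriors["group_b"]
print(f"P(hit rate A > hit rate B) = {(diff > 0).mean():.3f}")
```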

Context and Participation in Machine Learning
Apr 12 2024

Participatory scholarship historically prioritizes local context, but foundation models are often disconnected from downstream contexts and users by design. I’ll discuss recent work in which we develop a blueprint for public participation that identifies more local, application-oriented opportunities for meaningful participation within the foundation model ecosystem.

Abstract

In AI & ML, participatory approaches hold promise to lend agency and decision-making power to marginalized stakeholders. But what does meaningful participation look like in practice? This talk will first cover an in-depth case study of designing ML tools with and in service of activists who monitor gender-related violence.

Drawing from intersectional feminist theory and participatory design, we develop methods for data collection, annotation, modeling, and evaluation that aim to prioritize activist expertise and sustainable partnerships. Then, we’ll consider what participatory approaches should look like in the age of foundation models.

Evaluating Models and their Explanations
Feb 23, 2024

While large deep learning models have become increasingly accurate, concerns about their (lack of) interpretability have taken a center stage. In response, a growing subfield on interpretability and analysis of these models has emerged.

Abstract

Hundreds of techniques have been proposed to “explain” predictions of models; however, what aims these explanations serve and how they ought to be evaluated are often unstated. In this talk, I will first present a framework to quantify the value of explanations, which allows us to compare different explanation techniques. Further, I will highlight the need for holistic evaluation of models, sharing two tales on (i) how geographically representative the artifacts produced by text-to-image generation models are, and (ii) how well conversational LLMs can challenge false assumptions.

This talk is with Dr. Pruthi, an Assistant Professor at the Indian Institute of Science, Bengaluru. He received his Ph.D. from the School of Computer Science at Carnegie Mellon University and is broadly interested in the areas of natural language processing and deep learning, with a focus towards inclusive development and evaluation of AI models.

LLM Training Data Transparency: Impacts, Opportunities and the Path Forward

Feb 9, 2024

What are the potential technical and policy research problems in the LLM space? What should the future of ethical AI be?

Abstract

Transparency Coalition.ai (TCAI) is on a mission to bring transparency to AI training data. To establish this raison d’être, the speakers will first describe how the current generation of Large Language Models is built on Training Data collected using a variety of mechanisms. These practices have resulted in a variety of potential consumer harms, such as mis/dis-information, deepfakes, and hallucinations.

In this talk, our speakers will highlight the need for regulatory action on Training Data collection and processing to create an Ethical AI framework that protects consumers. They will provide a survey of how current regulatory approaches to AI are lacking in specificity, timeliness, and potential impact, and spotlight their work towards engaging and educating lawmakers and policymakers, their proposed policy initiatives, and learnings from the field.

AI in Education: Perceptions, Ethics, and Trust in the ChatGPT Era

Jan 19th, 2024

How are students and teachers using ChatGPT? How much trust do they place in its accuracy? And what do students and teachers know (and not know) about how to be effective and responsible users?

Abstract

Generative AI tools, especially ChatGPT, have the potential to revolutionize K-12 education. They offer exciting ways to personalize learning and engage students. But these tools also come packed with new challenges.

Questions arise regarding effective use and ethical considerations: How are students and teachers using ChatGPT? How much trust do they place in its accuracy? And what do students and teachers know (and not know) about how to be effective and responsible users?

In this talk, our speakers will explore how students and teachers utilize ChatGPT, how much they trust it, and what they understand about the effective and responsible use of AI. We will present findings from two focus group studies involving middle and high school educators and students, exploring the practical applications and ethical implications in classrooms.

This talk with the founders of foundry10 aims to foster collaborative discussions on responsible AI in K-12 education, encouraging attendees to share their experiences and insights.

Applications of PETs, algorithmic auditing, language model evaluation, and measuring AI social impacts & adoption

Dec 8th, 2023

XD is an emerging technologies group at the US Census Bureau that’s advancing the delivery of data-driven services through new and transformative technologies.

Abstract

XD works in multi-disciplinary teams of engineers, project managers, and data scientists to support the research and application of artificial intelligence solutions to the delivery of government services. Each team works with federal stakeholders across government and often with the support of outside partners, such as academic research groups, to apply the latest innovations in artificial intelligence to each project. 


This session brings together XD Emerging Technology Fellows for a roundtable discussion on issues of responsible AI, with the goal of exploring potential collaborations with faculty and students.

Deconstructing Risk in Predictive Risk Models

Oct 27, 2023
Existing risk assessment approaches underlie much of the training data for these PRMs. But what exactly are these PRMs predicting?

Abstract

Predictive Risk Models (PRMs) have become commonplace in many government agencies to provide optimal data-driven decision-making outcomes in high-risk contexts such as criminal justice, child welfare, homelessness, and immigration. While such technology continues to be acquired and implemented rapidly throughout the government because of the perceived benefits of cost reductions and better decision-making outcomes, recent research has pointed out several issues in how PRMs are developed. Notably, existing risk assessment approaches underlie much of the training data for these PRMs. But what exactly are these PRMs predicting? In this talk, I use empirical studies in the context of child welfare to deconstruct and interrogate what “risk” in PRMs actually means, and provide provocative directions for the community to discuss how we can move beyond our existing PRM development approaches.

Privacy and the impact of AI on data protection: A practitioner’s view
Apr 21, 2023
A massive increase in market adoption of machine learning and, more recently, of LLMs is driving legislators to regulate the use of Artificial Intelligence, and it is the same privacy programs and engineering functions that are largely expected to pick up the slack.

Abstract

Until the introduction of the European Union’s General Data Protection Regulation (GDPR) in May 2018, privacy and data protection had, with few exceptions, been the domain of legal and policy departments in only the largest corporations.

With the arrival of GDPR and the subsequent introduction of similar regulations around the world, particularly the California Consumer Privacy Act (CCPA), a lot of the weight shifted to privacy programs and privacy engineering functions that are much closer to product development. A massive increase in market adoption of machine learning and, more recently, viral adoption of Large Language Models are now driving legislators to regulate the use of Artificial Intelligence, and it is the same privacy programs and engineering functions that are largely expected to pick up the slack. In this presentation we will discuss experiences from privacy programs at two different high-tech companies, how these programs are organized and what they do, and some of the most pressing challenges, technical and otherwise, that they face when it comes to complying with the ongoing tsunami of privacy, data protection, and AI regulation.

Some Very Human Challenges in Responsible AI
Mar 3, 2023
What are the challenges that arise in supporting AI fairness, interpretability, and responsible AI more broadly, in industry practice?

Abstract

In this talk, I’ll give an overview of some of the challenges that arise in supporting AI fairness, interpretability, and responsible AI more broadly in industry practice. I’ll examine these challenges through the lens of three case studies drawn from my own research experiences: disaggregated evaluations, dataset documentation, and interpretability tools. These examples illustrate the importance of interdisciplinary research and human-centered approaches to responsible AI.

Bias by Design: How everyday digital technology can fail its diverse users
Feb 24th, 2023

How does bias in the design of digital technology systematically disadvantage specific groups of people?


Equity and Discrimination in Information Access
Jan 27, 2023
Fairness and anti-discrimination in information access systems, discussing both the state of the art in measuring relatively well-understood harms and new directions and open problems in defining and measuring fairness problems.


Determining systematic differences in human graders for machine learning-based automated hiring
Jan 20, 2023
