Talks & Events

Upcoming Events

Long-term Societal Impacts of AI and the Role of Higher Education
Nov 15th 2024, 9:00 – 10:00AM PST
Guest Speaker

Jared is a UW Computer Science alumnus, serial entrepreneur, and pioneer in Internet and cloud services. He served as a senior technical advisor for HBO’s Silicon Valley and was instrumental in launching the Nest Learning Thermostat and designing CenturyLink Cloud’s global monitoring infrastructure.

Abstract

Jared recently joined UW-IT as the Lead AI Architect, bringing decades of expertise in enterprise-level AI solutions to education. He’ll share his insights on sustainability and environmental concerns around AI, with a focus on the role of higher education. Join us to learn about UW’s plans in the AI space and engage in discussions about the future of AI with a leader in the field!

Past Events

AI Ageism: studying age discrimination in the era of algorithms. Reflections and future directions for researching algorithmic inequalities
Nov 8th 2024, 9:00 – 10:00AM PST
Guest Speaker

Dr. Stypińska is a sociologist researching algorithmic discrimination and bias in AI and their impacts. Her project, “AI Ageism: New Forms of Age Discrimination and Exclusion in the Era of Algorithms and Artificial Intelligence” (AGEAI), examines how AI technologies affect aging populations in Europe from a socio-technical perspective.

Abstract

In this talk, Dr. Stypińska will present findings from her AGEAI research on AI ageism, supported by empirical data from Germany, Spain, Great Britain, Poland, and the Netherlands. She will also reflect on how the social sciences can provide a deeper and more systematic understanding of algorithmic inequalities, extending beyond the study of AI bias.


Auditing Automated Speech Recognition Systems for Disparities
Nov 1st 2024, 9:00 – 10:00AM PST
Guest Speaker

Dr. Allison Koenecke’s research on algorithmic fairness leverages computational methods (e.g., ML, causal inference) to examine societal inequities across domains. She has received three NSF grants, was named to Forbes 30 Under 30 in Science, and earned Cornell’s CIS DEIB Faculty of the Year award.

Abstract

Automated speech recognition (ASR) systems convert spoken language to text across applications, but Allison’s audit of commercial ASR systems (e.g., OpenAI, Amazon, Apple, Google, IBM, Microsoft) reveals notable underperformance for African American English speakers and individuals with language disorders such as aphasia. Her findings quantify these disparities and identify the specific, underlying ML-driven causes. Allison underscores the importance of regular audits to ensure emerging speech systems serve all users inclusively.
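
For readers curious what such an audit looks like mechanically, here is a minimal sketch (not Allison’s actual pipeline): score each system transcript against a human reference with word error rate (WER) and compare group means. The `jiwer` package supplies a standard WER implementation; the utterances and group labels below are purely illustrative.

```python
# Minimal ASR disparity-audit sketch: compare mean word error rate (WER)
# across speaker groups. Illustrative data only; a real audit would use
# thousands of utterances transcribed by each commercial ASR system.
from jiwer import wer

# (speaker_group, human_reference_transcript, asr_hypothesis) -- hypothetical
samples = [
    ("group_a", "the quick brown fox jumps", "the quick brown fox jumps"),
    ("group_a", "we met on tuesday morning", "we met on tuesday morning"),
    ("group_b", "the quick brown fox jumps", "the quit brow fox jump"),
    ("group_b", "we met on tuesday morning", "we met to say mourning"),
]

by_group: dict[str, list[float]] = {}
for group, reference, hypothesis in samples:
    by_group.setdefault(group, []).append(wer(reference, hypothesis))

# A large gap between group means is the disparity signal an audit reports.
for group, rates in sorted(by_group.items()):
    print(f"{group}: mean WER = {sum(rates) / len(rates):.2f}")
```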

Towards Bidirectional Human-AI Alignment via Interaction: Human-Centered AI Explanation, Evaluation, and Development
Oct 18th 2024, 9:00 – 10:00AM PST
Guest Speaker

Dr. Shen’s work is anchored in HCI and intersects with multiple fields, such as NLP, ML, speech processing, and data science. Her research on bidirectional human-AI alignment aims to empower humans to interactively explain, evaluate, and collaborate with AI systems, while incorporating human feedback and values to improve those systems.

Abstract

In this talk, Dr. Shen will provide a comprehensive overview of “bidirectional human-AI alignment,” starting with gaps in how humans understand AI decision-making. She will discuss why many explainable AI (XAI) algorithms fall short and introduce interactive solutions she developed to bridge this gap. Additionally, Dr. Shen will introduce methods for leveraging interactive human feedback to enhance AI performance. Finally, she will share insights from her latest research on fundamental values in alignment, outlining key challenges and opportunities for achieving bidirectional alignment over the long term.

Sealed Knowledges: The Impact and Underlying Assumptions of Integrating Direct Answers into Online Search
Oct 11th 2024, 9:00 – 10:00AM PST
Guest Speaker

Nora’s research questions ethical approaches to chatbots and language technology, arguing for a post-anthropocentric, structural, and power-aware understanding of their implications. Currently, she is on a research stay at the University of Washington under the supervision of Prof. Emily Bender.

Abstract

The integration of large language models (LLMs) to provide direct answers to online search queries signals a significant change in online search. Questioning the implications of this new search paradigm, Nora introduces the term “Sealed Knowledges” to highlight the increasingly difficult access to a plurality of potential answers through the output of a singular, authoritative, and plausible-sounding paragraph of text. She argues that the integration of language models for the provision of direct answers in search engines is based on a de-situated and disembodied understanding of knowledge, affects users’ subjectivities, and can lead to an increasing spread of misinformation.

Operationalizing Responsible Machine Learning: From Equality Towards Equity
Oct 4th 2024, 9:00 – 10:00AM PST
Guest Speaker

Dr. Wang’s research focuses on machine learning fairness and algorithmic bias. She has been recognized by the NSF GRFP, EECS Rising Stars, Siebel Scholarship, and Microsoft AI & Society Fellowship.

She earned her PhD in CS from Princeton University and BS in EECS from UC Berkeley.

Abstract

With the widespread proliferation of machine learning comes both the opportunity for societal benefit and the risk of harm. Approaching responsible machine learning is challenging: technical approaches may prioritize a mathematical definition of fairness that, through too many layers of abstraction, correlates poorly with real-world constructs of fairness. Conversely, social approaches that engage with prescriptive theories may produce findings too abstract to translate effectively into practice. In my research, I bridge these approaches and use social implications to guide technical work.


Open Source Software – Linux Kernel Communities and Lessons for AI development
May 31st 2024, 9:00 – 10:00AM PST
Guest Speaker

Dr. Do Yoon Kim is an Assistant Professor of Business Analytics at the Carroll School of Management, Boston College.

His research interests include innovation, technology strategy, and digitization. His recent work studies the software and hardware industries, where he analyzes the benefits and challenges of firms innovating alongside free and open source software communities.

Professor Kim received a doctorate in Strategy from Harvard Business School and a B.A. from Northwestern University.

Abstract

More broadly, he is interested in how knowledge flows across organizational boundaries, and in the firm strategies and government policies that can facilitate efficient transfers of knowledge.


The Uneven Impact of Generative AI on Entrepreneurial Performance
May 17th 2024, 9:00 – 10:00AM PST
Guest Speaker

Dr. Rem Koning is the Mary V. and Mark A. Stevens Associate Professor of Business Administration at Harvard Business School.

His research explores how we can help more entrepreneurs, scientists, and innovators succeed. His work on entrepreneurship and innovation spans industries and regions, from Indian tech ventures to small business owners in Kenya to women’s and mental health startups in the US.

Abstract

Building on this work, he is the co-director and co-founder of the Tech for All lab at The Digital, Data, and Design (D^3) Institute at Harvard, where he leads a group of interdisciplinary researchers studying how entrepreneurs can accelerate and shift the rate and direction of science, technology, and artificial intelligence (AI) to benefit all of humanity.


Time and Value of Data
May 10th 2024, 9:00 – 10:00AM PST
Guest Speaker

Dr. Ehsan Valavi is a Postdoctoral Associate at MIT Sloan’s Initiative on the Digital Economy. Continuing his Ph.D. research, he studies the scalability of Artificial Intelligence (AI) based solutions and the value of data for digital firms. He received his Ph.D. in Technology and Operations Management from Harvard Business School. He completed his undergraduate studies in Electrical Engineering (Telecommunications) at the University of Tehran, holds a master’s degree in communication systems from the Swiss Federal Institute of Technology at Lausanne (EPFL), and holds another master’s degree in Decision, Risk, and Operations Management from Columbia Business School.

Abstract

His research interests lie at the interface of digitization, strategy, and operations management. He is currently studying the growth of digital firms and the challenges they face in various business areas. At MIT, he mainly focuses on studying emerging blockchain-based technologies.


Human Capital Acquisition in Response to Data Breaches
May 3rd 2024, 9:00 – 10:00AM PST
Guest Speaker

Dr. Xiupeng Wang’s research primarily focuses on the impact of emerging technologies, particularly recent advances in information technologies, on firms and the labor market. His current research studies how information technology affects organizational structures, firm sizes, and market concentration, and extends to cybersecurity, analyzing firms’ human capital acquisition in response to data breaches.

Through massive online experiments, Dr. Wang is developing a new measure of household production and finds that the non-market contribution to GDP currently omitted from official statistics is potentially greater than previously believed. He is also excited about using matched employer-employee data from the U.S. Census Research Data Center to study the effects of high-skilled immigration. Dr. Wang received his Ph.D. in economics from the University of Connecticut.

Abstract

Given the rise in the frequency and cost of data security threats, it is critical to understand whether and how companies strategically adapt their operational workforce in response to data breaches. We study hiring in the aftermath of data breaches by combining information on data breach events with detailed firm-level job posting data. 

Using a staggered Difference-in-Differences approach, we show that breached firms significantly increase their demand for cybersecurity workers. Furthermore, firms’ responses to data breaches extend to promptly recruiting public relations personnel — an act aimed at managing trust and alleviating negative publicity — often ahead of cybersecurity hires. Following a breach, the likelihood that firms post a cybersecurity job rises by approximately two percentage points, which translates to an average willingness to spend an additional $61,961 in annual wages on cybersecurity, public relations, and legal workers. While these hiring adjustments are small for affected firms, they represent a large potential impact of over $300 million on the overall economy. Our findings underscore the vital role of human capital investments in shaping firms’ cyber defenses and provide a valuable roadmap for managers and firms navigating cyberthreats in an increasingly digital age.
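
As a rough illustration of the study’s staggered Difference-in-Differences design, a simplified two-way fixed effects version might look like the sketch below. This is not the paper’s actual estimator or data; the file name and column names are hypothetical.

```python
# Simplified two-way fixed effects sketch of a staggered DiD design:
# firm and quarter fixed effects absorb stable firm traits and common shocks,
# and the coefficient on treated_post estimates the post-breach change in
# the probability of posting a cybersecurity job. All columns hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("firm_quarter_panel.csv")  # hypothetical firm-by-quarter panel

# 1 for breached firms in quarters on/after their breach, 0 otherwise
df["treated_post"] = (
    (df["breached"] == 1) & (df["quarter"] >= df["breach_quarter"])
).astype(int)

model = smf.ols(
    "posted_cyber_job ~ treated_post + C(firm_id) + C(quarter)",
    data=df,
).fit(cov_type="cluster", cov_kwds={"groups": df["firm_id"]})

# Would correspond to the ~2 percentage point increase described above.
print(model.params["treated_post"])
```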

EU Scientific Advisory Mechanism on AI – Policy Talk
April 26th 2024, 9:00 – 10:00AM PST
Guest Speaker

Andrea Emilio Rizzoli (Co-chair of the SAPEA working group on AI in science) holds a Ph.D. in Control Engineering and Informatics from Politecnico di Milano (Italy). He is the Director of the Dalle Molle Institute for Artificial Intelligence (IDSIA USI-SUPSI), a senior research scientist at IDSIA, and a professor at SUPSI. His interests are in simulation and decision support systems, and he has been involved in the development of various simulation models in a number of different contexts, from natural resources modelling to logistics and industrial systems. He has authored more than 100 publications on these subjects.

Abstract

The European Union’s Scientific Advisory Mechanism (SAM) provides independent scientific evidence and policy recommendations to the European institutions at the request of the College of Commissioners. The mechanism comprises SAPEA, which brings together around 110 academies from across Europe, offering outstanding expertise in the natural sciences, engineering and technology, medical, health, agricultural and social sciences, and the humanities, and providing independent evidence reviews on request.

Machine learning and data science methods to improve social equity
April 19th 2024, 9:00 – 10:00AM PST
Guest Speaker

Emma Pierson is an assistant professor of computer science at the Jacobs Technion-Cornell Institute at Cornell Tech and the Technion, and a computer science field member at Cornell University, with a secondary joint appointment as an Assistant Professor of Population Health Sciences at Weill Cornell Medical College. She develops data science and machine learning methods to study two broad areas: inequality and healthcare.
Abstract

Our society remains profoundly inequitable, due in part to biases in human and algorithmic decision-making. Addressing this, we propose machine learning and data science methods to improve the fairness of decision-making, focusing on applications in healthcare and public health. First, we develop scalable Bayesian methods for assessing bias in human decision-making and apply these methods to measure discrimination in police traffic stops across the United States. Second, we develop methods to address an important source of bias in algorithmic decision-making: when the target the algorithm is trained to predict is an imperfect proxy for the desired target. We show how to leverage plausible domain knowledge in two real-world settings — flood detection and medical testing — to detect and mitigate target variable bias.
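
To make the “imperfect proxy” problem concrete, here is a small, self-contained simulation in the spirit of the medical-testing example. It is not from the talk; the groups, rates, and testing probabilities are invented for illustration. When one group is tested less often, the observable label understates true illness for that group, and a model trained on it inherits the bias.

```python
# Illustrative simulation of target variable bias (invented numbers):
# the observable label -- a positive *test* -- is an imperfect proxy for the
# label we care about -- being *ill* -- because testing rates differ by group.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
group = rng.integers(0, 2, size=n)       # two hypothetical patient groups
ill = rng.random(n) < 0.10               # identical true illness rate
tested = rng.random(n) < np.where(group == 0, 0.8, 0.4)  # unequal testing
observed_positive = ill & tested         # the proxy label a model would see

for g in (0, 1):
    mask = group == g
    print(
        f"group {g}: true illness {ill[mask].mean():.3f}, "
        f"proxy label {observed_positive[mask].mean():.3f}"
    )
# Both groups have a ~0.10 true rate, but proxy rates of ~0.08 vs ~0.04:
# domain knowledge about testing rates is what lets an auditor flag the gap.
```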

Context and Participation in Machine Learning
Apr 12, 2024

Participatory scholarship historically prioritizes local context, but foundation models are often disconnected from downstream contexts and users by design. I’ll discuss recent work in which we develop a blueprint for public participation that identifies more local, application-oriented opportunities for meaningful participation within the foundation model ecosystem.

Abstract

In AI & ML, participatory approaches hold promise to lend agency and decision-making power to marginalized stakeholders. But what does meaningful participation look like in practice? This talk will first cover an in-depth case study of designing ML tools with and in service of activists who monitor gender-related violence.

Drawing from intersectional feminist theory and participatory design, we develop methods for data collection, annotation, modeling, and evaluation that aim to prioritize activist expertise and sustainable partnerships. Then, we’ll consider what participatory approaches should look like in the age of foundation models.

Evaluating Models and their Explanations
Feb 23, 2024

While large deep learning models have become increasingly accurate, concerns about their (lack of) interpretability have taken a center stage. In response, a growing subfield on interpretability and analysis of these models has emerged.

Abstract

Hundreds of techniques have been proposed to “explain” the predictions of models; however, what aims these explanations serve and how they ought to be evaluated are often unstated. In this talk, I will first present a framework to quantify the value of explanations, which allows us to compare different explanation techniques. Further, I will highlight the need for holistic evaluation of models, sharing two tales on (i) how geographically representative the artifacts produced by text-to-image generation models are, and (ii) how well conversational LLMs can challenge false assumptions.

This talk is with Dr. Pruthi, an Assistant Professor at the Indian Institute of Science, Bengaluru. He received his Ph.D. from the School of Computer Science at Carnegie Mellon University and is broadly interested in the areas of natural language processing and deep learning, with a focus on the inclusive development and evaluation of AI models.

LLM Training Data Transparency: Impacts, Opportunities and the Path Forward

Feb 9, 2024

What are the potential technical and policy research problems in the LLM space? What should the future of ethical AI be?

Abstract

To establish Transparency Coalition.ai’s (TCAI) raison d’être, the speakers will first describe how the current generation of Large Language Models is built on training data collected using a variety of mechanisms. These practices have resulted in a variety of potential consumer harms, such as mis/disinformation, deepfakes, and hallucinations.

In this talk, our speakers will highlight the need for regulatory action on training data collection and processing to create an ethical AI framework that protects consumers. They will survey how current regulatory approaches to AI fall short in specificity, timeliness, and potential impact, and will spotlight their work engaging and educating lawmakers and policymakers, their proposed policy initiatives, and learnings from the field.

AI in Education: Perceptions, Ethics, and Trust in the ChatGPT Era

Jan 19th, 2024

How are students and teachers using ChatGPT? How much trust do they place in its accuracy? And what do students and teachers know (and not know) about how to be effective and responsible users?

Abstract

Generative AI, especially tools like ChatGPT, has the potential to revolutionize K-12 education. These tools offer exciting ways to personalize learning and engage students, but they also come packed with new challenges.

Questions arise regarding effective use and ethical considerations: How are students and teachers using ChatGPT? How much trust do they place in its accuracy? And what do students and teachers know (and not know) about how to be effective and responsible users?

In this talk, our speakers will explore how students and teachers utilize ChatGPT, how much they trust it, and what they understand about the effective and responsible use of AI. We will present findings from two focus group studies involving middle and high school educators and students, exploring the practical applications and ethical implications in classrooms.

This talk with the founders of foundry10 aims to foster collaborative discussions on responsible AI in K-12 education, encouraging attendees to share their experiences and insights.

Applications of PETs, algorithmic auditing, language model evaluation, and measuring AI social impacts & adoption

Dec 8th, 2023

XD is an emerging technologies group at the US Census Bureau that is advancing the delivery of data-driven services through new and transformative technologies.

Abstract

XD works in multi-disciplinary teams of engineers, project managers, and data scientists to support the research and application of artificial intelligence solutions to the delivery of government services. Each team works with federal stakeholders across government and often with the support of outside partners, such as academic research groups, to apply the latest innovations in artificial intelligence to each project. 

In this session, XD Emerging Technology Fellows join us for a roundtable discussion on issues of responsible AI, with the goal of exploring potential collaborations for faculty and students.

Deconstructing Risk in Predictive Risk Models

Oct 27, 2023
Existing risk assessment approaches underlie much of the training data for these PRMs. But what exactly are these PRMs predicting?

Abstract

Predictive Risk Models (PRMs) have become commonplace in many government agencies as a way to provide optimal data-driven decision-making outcomes in high-risk contexts such as criminal justice, child welfare, homelessness, and immigration. While such technology continues to be acquired and implemented rapidly throughout government because of the perceived benefits of cost reductions and better decision-making outcomes, recent research has pointed out several issues in how PRMs are developed. Notably, existing risk assessment approaches underlie much of the training data for these PRMs. But what exactly are these PRMs predicting? In this talk, I use empirical studies in the context of child welfare to deconstruct and interrogate what “risk” in PRMs actually means and provide provocative directions for the community to discuss how we can move beyond our existing PRM development approaches.

Privacy and the impact of AI on data protection: A practitioner’s view
Apr 21, 2023
A massive increase in the market adoption of machine learning and the viral adoption of LLMs is driving legislators to regulate the use of Artificial Intelligence, and it is the same privacy programs and engineering functions that are largely expected to pick up the slack.

Abstract

Until the introduction of the European Union’s General Data Protection Regulation (GDPR) in May 2018, privacy and data protection had, with few exceptions, been the domain of legal and policy departments in only the largest corporations.

With the arrival of GDPR and the subsequent introduction of similar regulations around the world, particularly the California Consumer Privacy Act (CCPA), much of the weight shifted to privacy programs and privacy engineering functions that are much closer to product development. A massive increase in market adoption of machine learning and, more recently, viral adoption of Large Language Models are now driving legislators to regulate the use of Artificial Intelligence, and it is the same privacy programs and engineering functions that are largely expected to pick up the slack. In this presentation we will discuss experiences from privacy programs at two different high-tech companies: how these programs are organized, what they do, and some of the most pressing challenges, technical and otherwise, that they face in complying with the ongoing tsunami of privacy, data protection, and AI regulation.


Some Very Human Challenges in Responsible AI
Mar 3, 2023
What are the challenges that arise in supporting AI fairness, interpretability, and responsible AI more broadly, in industry practice?

Abstract

This talk gives an overview of some of the challenges that arise in supporting AI fairness, interpretability, and responsible AI more broadly in industry practice. I’ll examine these challenges through the lens of three case studies drawn from my own research experiences: disaggregated evaluations, dataset documentation, and interpretability tools. These examples illustrate the importance of interdisciplinary research and human-centered approaches to responsible AI.

Bias by Design: How everyday digital technology can fail its diverse users
Feb 24th, 2023

How does bias in the design of digital technology systematically disadvantage specific groups of people?


Equity and Discrimination in Information Access
Jan 27, 2023
Fairness and anti-discrimination in information access systems, discussing both the state of the art in measuring relatively well-understood harms and new directions and open problems in defining and measuring fairness problems.


Determining systematic differences in human graders for machine learning-based automated hiring
Jan 20, 2023