Jared is a UW Computer Science alumnus, serial entrepreneur, and pioneer in Internet and cloud services. He served as a senior technical advisor for HBO’s Silicon Valley and was instrumental in launching the Nest Learning Thermostat and designing CenturyLink Cloud’s global monitoring infrastructure.
He recently joined UW-IT as the Lead AI Architect, bringing decades of expertise in enterprise-level AI solutions to education. He’ll share his insights on sustainability and environmental concerns around AI, with a focus on the role of higher education. Join us to learn about UW’s plans in the AI space and engage in discussions about the future of AI with a leader in the field!
Dr. Stypińska is a sociologist researching algorithmic discrimination and bias in AI and their impacts. Her project, “AI Ageism: New Forms of Age Discrimination and Exclusion in the Era of Algorithms and Artificial Intelligence” (AGEAI), examines how AI technologies affect aging populations in Europe from a socio-technical perspective.
In this talk, Dr. Stypińska will present findings from her AGEAI research on AI ageism, supported by empirical data from Germany, Spain, Great Britain, Poland, and the Netherlands. She will also reflect on how the social sciences can provide a deeper and more systematic understanding of algorithmic inequalities, extending beyond the study of AI bias.
Dr. Allison Koenecke’s research on algorithmic fairness leverages computational methods (e.g., ML, causal inference) to examine societal inequities across domains. She has received three NSF grants, was named to Forbes 30 Under 30 in Science, and earned Cornell’s CIS DEIB Faculty of the Year award.
Automated speech recognition (ASR) systems convert spoken language to text across applications, but Allison’s audit of commercial ASR systems (e.g., OpenAI, Amazon, Apple, Google, IBM, Microsoft) reveals notable underperformance for African American English speakers and individuals with language disorders such as aphasia. Her findings quantify these disparities and identify the specific, underlying ML-driven causes. Allison underscores the importance of regular audits to ensure emerging speech systems serve all users inclusively.
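For a concrete sense of what such an audit measures, here is a minimal sketch of comparing word error rate (WER) across speaker groups; it is illustrative only, not Allison’s actual pipeline, and the file and column names are hypothetical.

```python
# Minimal sketch (not Allison's audit code): compute word error rate (WER)
# per speaker group from paired reference and ASR transcripts.
# "transcripts.csv" and its columns (group, reference, hypothesis) are hypothetical.
import pandas as pd
import jiwer

df = pd.read_csv("transcripts.csv")

for group, rows in df.groupby("group"):
    # jiwer.wer accepts lists of reference and hypothesis strings
    wer = jiwer.wer(rows["reference"].tolist(), rows["hypothesis"].tolist())
    print(f"{group}: WER = {wer:.3f}")
```

A large gap in WER between groups is the kind of disparity such an audit is designed to surface.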
Dr. Shen’s work anchors in HCI and intersects with multiple fields, such as NLP, ML, Speech Processing, and Data Science. Her research on bidirectional human-AI alignment aims to empower humans to interactively explain, evaluate, and collaborate with AI systems, while incorporating human feedback and values to improve AI systems.
In this talk, Dr. Shen will provide a comprehensive overview of “bidirectional human-AI alignment,” starting with gaps in how humans understand AI decision-making. She will discuss why many explainable AI (XAI) algorithms fall short and introduce interactive solutions she developed to bridge this gap. Additionally, Dr. Shen will introduce methods for leveraging interactive human feedback to enhance AI performance. Finally, she will share insights from her latest research on fundamental values in alignment, outlining key challenges and opportunities for achieving bidirectional alignment over the long term.
Nora’s research questions ethical approaches to chatbots and language technology and argues for a post-anthropocentric, structural and power-aware understanding of the implications of language technology. Currently, she is on a research stay at the University of Washington under the supervision of Prof. Emily Bender.
The integration of large language models (LLMs) to provide direct answers to online search queries signals a significant change in online search. Questioning the implications of this new search paradigm, Nora introduces the term “Sealed Knowledges” to highlight the increasingly difficult access to a plurality of potential answers through the output of a singular, authoritative, and plausible-sounding paragraph of text. She argues that the integration of language models for the provision of direct answers in search engines is based on a de-situated and disembodied understanding of knowledge, affects users’ subjectivities, and can lead to an increasing spread of misinformation.
Dr. Wang’s research focuses on machine learning fairness and algorithmic bias. She has been recognized by the NSF GRFP, EECS Rising Stars, Siebel Scholarship, and Microsoft AI & Society Fellowship.
She earned her PhD in CS from Princeton University and BS in EECS from UC Berkeley.
With the widespread proliferation of machine learning, there arises both the opportunity for societal benefit and the risk of harm. Approaching responsible machine learning is challenging: technical approaches may prioritize a mathematical definition of fairness that, through too many layers of abstraction, correlates poorly with real-world constructs of fairness. Conversely, social approaches that engage with prescriptive theories may produce findings that are too abstract to translate effectively into practice. In my research, I bridge these approaches and use social implications to guide technical work.
Dr. Do Yoon Kim is an Assistant Professor of Business Analytics at the Carroll School of Management, Boston College.
His research interests include innovation, technology strategy, and digitization. His recent work studies the software and hardware industries, where he analyzes the benefits and challenges of firms innovating alongside free and open source software communities.
Professor Kim received a doctorate in Strategy from Harvard Business School and a B.A. from Northwestern University.
More broadly, he is interested in how knowledge flows across organizational boundaries, and firm strategies/government policies that can facilitate efficient transfers of knowledge.
Dr. Rem Koning is the Mary V. and Mark A. Stevens Associate Professor of Business Administration at Harvard Business School.
His research explores how we can help more entrepreneurs, scientists, and innovators succeed. His work on entrepreneurship and innovation spans industries and regions, from Indian tech ventures to small business owners in Kenya to women’s and mental health startups in the US.
Building on this work, he is the co-director and co-founder of the Tech for All lab at The Digital, Data, and Design (D^3) Institute at Harvard, where he leads a group of interdisciplinary researchers studying how entrepreneurs can accelerate and shift the rate and direction of science, technology, and artificial intelligence (AI) to benefit all of humanity.
Dr. Ehsan Valavi is a Postdoctoral Associate at MIT Sloan’s Initiative on the Digital Economy. Continuing his Ph.D. research, he also studies the scalability of Artificial Intelligence (AI) based solutions and the value of data for digital firms. He received his Ph.D. in Technology and Operations Management from Harvard Business School. He completed his undergraduate studies in Electrical Engineering (Telecommunications) at the University of Tehran and has a master’s degree in communication systems from the Swiss Federal Institute of Technology in Lausanne (EPFL). He also holds a master’s degree in Decision, Risk, and Operations Management from Columbia Business School.
His research interests lie at the interface of digitization, strategy, and operations management. He is currently studying the growth of digital firms and the challenges they face in various business areas. At MIT, he mainly focuses on studying emerging blockchain-based technologies.
Given the rise in the frequency and cost of data security threats, it is critical to understand whether and how companies strategically adapt their operational workforce in response to data breaches. We study hiring in the aftermath of data breaches by combining information on data breach events with detailed firm-level job posting data.
Using a staggered Difference-in-Differences approach, we show that breached firms significantly increase their demand for cybersecurity workers. Furthermore, firms’ responses to data breaches extend to promptly recruiting public relations personnel — an act aimed at managing trust and alleviating negative publicity — often ahead of cybersecurity hires. Following a breach, the likelihood that firms post a cybersecurity job rises by approximately two percentage points, which translates to an average willingness to spend an additional $61,961 in annual wages on cybersecurity, public relations, and legal workers. While these hiring adjustments are small for affected firms, they represent a large potential impact of over $300 million on the overall economy. Our findings underscore the vital role of human capital investments in shaping firms’ cyber defenses and provide a valuable roadmap for managers and firms navigating cyberthreats in an increasingly digital age.
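As a rough illustration of the kind of estimation described above, the following is a minimal two-way fixed-effects difference-in-differences sketch, not the authors’ specification; the panel file and column names (posts_cyber_job, post_breach, firm_id, quarter) are hypothetical.

```python
# Minimal sketch of a two-way fixed-effects difference-in-differences
# regression (not the authors' exact specification). Hypothetical panel:
# one row per firm-quarter, with post_breach = 1 for breached firms in
# quarters after their breach, 0 otherwise.
import pandas as pd
import statsmodels.formula.api as smf

panel = pd.read_csv("firm_quarter_panel.csv")

# Linear probability model: does the firm post a cybersecurity job this quarter?
model = smf.ols(
    "posts_cyber_job ~ post_breach + C(firm_id) + C(quarter)",
    data=panel,
).fit(cov_type="cluster", cov_kwds={"groups": panel["firm_id"]})

# Coefficient on post_breach ~ change in posting probability after a breach
print(model.params["post_breach"], model.bse["post_breach"])
```

In a staggered design, post_breach switches on at each firm’s own breach quarter, and clustering standard errors by firm accounts for serial correlation within firms.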
The European Union’s Scientific Advisory Mechanism (SAM) provides independent scientific evidence and policy recommendations to the European institutions at the request of the College of Commissioners. The Scientific Advisory Mechanism comprises SAPEA, which brings together around 110 academies from across Europe, offers outstanding expertise in the natural sciences, engineering and technology, the medical, health, agricultural and social sciences, and the humanities, and provides independent evidence reviews on request.
Our society remains profoundly inequitable, due in part to biases in human and algorithmic decision-making. Addressing this, we propose machine learning and data science methods to improve the fairness of decision-making, focusing on applications in healthcare and public health. First, we develop scalable Bayesian methods for assessing bias in human decision-making and apply these methods to measure discrimination in police traffic stops across the United States. Second, we develop methods to address an important source of bias in algorithmic decision-making: when the target the algorithm is trained to predict is an imperfect proxy for the desired target. We show how to leverage plausible domain knowledge in two real-world settings (flood detection and medical testing) to detect and mitigate target variable bias.
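To make “target variable bias” concrete, here is a small, self-contained toy simulation (my own illustration on synthetic data, not the speaker’s models): a classifier trained on a proxy label that is only recorded for observed cases ends up under-predicting risk for the group that is observed less often.

```python
# Toy illustration of target variable bias (synthetic data, not the speaker's
# models): the label an algorithm is trained on is a proxy -- the true outcome
# is only recorded when a case is observed (e.g., tested), and one group is
# observed less often, so the proxy undercounts that group's positives.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 20_000
group = rng.integers(0, 2, size=n)                  # two hypothetical groups
x = rng.normal(size=n)                              # a feature related to the true outcome
true_y = (x + rng.normal(scale=0.5, size=n) > 0.5).astype(int)

# Group 1 is observed far less often than group 0, so its positives go unrecorded.
observed = rng.random(n) < np.where(group == 1, 0.3, 0.9)
proxy_y = true_y * observed                         # unobserved positives look negative

model = LogisticRegression().fit(np.c_[x, group], proxy_y)
risk = model.predict_proba(np.c_[x, group])[:, 1]

for g in (0, 1):
    mask = (true_y == 1) & (group == g)
    print(f"group {g}: mean predicted risk among truly positive cases = {risk[mask].mean():.2f}")
```

Truly positive cases in the under-observed group receive systematically lower predicted risk, which is exactly the pattern that domain knowledge can help detect and correct.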
In AI & ML, participatory approaches hold promise to lend agency and decision-making power to marginalized stakeholders. But what does meaningful participation look like in practice? This talk will first cover an in-depth case study of designing ML tools with and in service of activists who monitor gender-related violence.
Drawing from intersectional feminist theory and participatory design, we develop methods for data collection, annotation, modeling, and evaluation that aim to prioritize activist expertise and sustainable partnerships. Then, we’ll consider what participatory approaches should look like in the age of foundation models.
Participatory scholarship historically prioritizes local context, but foundation models are often disconnected from downstream contexts and users by design. I’ll discuss recent work in which we develop a blueprint for public participation that identifies more local, application-oriented opportunities for meaningful participation within the foundation model ecosystem.
While large deep learning models have become increasingly accurate, concerns about their (lack of) interpretability have taken center stage. In response, a growing subfield on the interpretability and analysis of these models has emerged.
Hundreds of techniques have been proposed to “explain” the predictions of models; however, what aims these explanations serve and how they ought to be evaluated are often unstated. In this talk, I will first present a framework to quantify the value of explanations, which allows us to compare different explanation techniques. Further, I will highlight the need for holistic evaluation of models, sharing two tales on (i) how geographically representative the artifacts produced by text-to-image generation models are, and (ii) how well conversational LLMs can challenge false assumptions.
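As one toy way to picture what “quantifying the value of explanations” can mean (a sketch of a simulation-style evaluation on synthetic data, not Dr. Pruthi’s framework or code): score an explanation by how much it helps a low-capacity “student” model reproduce a “teacher” model’s predictions.

```python
# Toy sketch (not Dr. Pruthi's framework or code): score an explanation by the
# gain in a shallow "student" model's agreement with a "teacher" model when the
# student is given the explained features instead of the raw input. Here the
# "explanation" is simply the teacher's top-weighted features; all data is synthetic.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=5000, n_features=30, n_informative=3,
                           n_redundant=0, random_state=0)
X_tr, X_te, y_tr, _ = train_test_split(X, y, random_state=0)

teacher = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
teacher_tr, teacher_te = teacher.predict(X_tr), teacher.predict(X_te)

# "Explanation": indices of the teacher's three largest-magnitude coefficients.
top = np.argsort(np.abs(teacher.coef_[0]))[-3:]

def student_agreement(cols):
    """Agreement of a shallow student (trained on the given columns) with the teacher."""
    student = DecisionTreeClassifier(max_depth=2, random_state=0)
    student.fit(X_tr[:, cols], teacher_tr)
    return accuracy_score(teacher_te, student.predict(X_te[:, cols]))

baseline = student_agreement(np.arange(X.shape[1]))   # student sees raw input
with_expl = student_agreement(top)                     # student sees explained features
print(f"explanation value (agreement gain): {with_expl - baseline:+.3f}")
```

The printed number is only the measurement: a useful explanation should yield a positive agreement gain, while an uninformative one will not.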
This talk is with Dr. Pruthi, an Assistant Professor at the Indian Institute of Science, Bengaluru. He received his Ph.D. from the School of Computer Science at Carnegie Mellon University and is broadly interested in natural language processing and deep learning, with a focus on the inclusive development and evaluation of AI models.
What are the potential technical and policy research problems in the LLM space? What should the future of ethical AI be?
Transparency Coalition.ai (TCAI) is on a mission to bring transparency to how AI systems are trained. To establish our raison d’être, we will first describe how the current generation of Large Language Models is built on Training Data collected using a variety of mechanisms. These practices have resulted in a variety of potential consumer harms, such as mis- and disinformation, deepfakes, and hallucinations.
In this talk, our speakers will highlight the need for regulatory action on Training Data collection and processing to create an Ethical AI framework that protects consumers. They will survey how current regulatory approaches to AI fall short in specificity, timeliness, and potential impact, and spotlight their work on engaging and educating lawmakers and policymakers, their proposed policy initiatives, and lessons learned from the field.
Generative AI tools like ChatGPT have the potential to revolutionize K-12 education, offering exciting ways to personalize learning and engage students. But these tools also come with new challenges.
Questions arise regarding effective use and ethical considerations: How are students and teachers using ChatGPT? How much trust do they place in its accuracy? And what do students and teachers know (and not know) about how to be effective and responsible users?
In this talk, our speakers will explore how students and teachers use ChatGPT, how much they trust it, and what they understand about the effective and responsible use of AI. They will present findings from two focus group studies involving middle and high school educators and students, exploring the practical applications and ethical implications of these tools in classrooms.
This talk with the founders of foundry10 aims to foster collaborative discussions on responsible AI in K-12 education, encouraging attendees to share their experiences and insights.
XD works in multi-disciplinary teams of engineers, project managers, and data scientists to support the research and application of artificial intelligence solutions to the delivery of government services. Each team works with federal stakeholders across government and often with the support of outside partners, such as academic research groups, to apply the latest innovations in artificial intelligence to each project.
In this session, XD Emerging Technology Fellows join us for a roundtable discussion on issues of responsible AI, with the goal of exploring potential collaborations with faculty and students.
Predictive Risk Models (PRMs) have become commonplace in many government agencies as a way to provide optimal, data-driven decision-making in high-stakes contexts such as criminal justice, child welfare, homelessness, and immigration. While such technology continues to be acquired and implemented rapidly throughout government because of the perceived benefits of cost reductions and better decision-making outcomes, recent research has pointed out several issues in how PRMs are developed. Notably, existing risk assessment approaches underlie much of the training data for these PRMs. But what exactly are these PRMs predicting? In this talk, I use empirical studies in the context of child welfare to deconstruct and interrogate what “risk” in PRMs actually means and offer provocative directions for the community to discuss how we can move beyond our existing PRM development approaches.
Until the introduction of the European Union’s General Data Protection Regulation (GDPR) in May 2018, privacy and data protection had, with few exceptions, been the domain of legal and policy departments in only the largest corporations.
With the arrival of the GDPR and the subsequent introduction of similar regulations around the world, particularly the California Consumer Privacy Act (CCPA), much of that weight shifted to privacy programs and privacy engineering functions that sit much closer to product development. A massive increase in market adoption of machine learning and, more recently, the viral adoption of Large Language Models are now driving legislators to regulate the use of Artificial Intelligence, and it is the same privacy programs and engineering functions that are largely expected to pick up the slack. In this presentation we will discuss experiences from privacy programs at two different high-tech companies: how these programs are organized and what they do, and some of the most pressing challenges, technical and otherwise, that they face in complying with the ongoing tsunami of privacy, data protection, and AI regulation.
In this talk, I will give an overview of some of the challenges that arise in supporting AI fairness, interpretability, and responsible AI more broadly in industry practice. I’ll examine these challenges through the lens of three case studies drawn from my own research experiences: disaggregated evaluations, dataset documentation, and interpretability tools. These examples illustrate the importance of interdisciplinary research and human-centered approaches to responsible AI.