Join us for the RAISE Winter Exposition, a welcoming event celebrating diverse ideas and research in responsible AI. We invite submissions of original or previously published work that aligns with RAISE’s mission to foster trustworthy, sustainable, and equitable AI solutions.
Whether you’re sharing your research or joining as an attendee, this is your chance to connect, collaborate, and make a difference. Everyone is welcome! Please reserve your spot by Jan. 31, 2025.
Event date: Feb. 28, 2025
Keynote speaker: Dr. Ece Kamar
Open to all RAISE affiliates, including high school students, university students, graduate students (MS or PhD), and faculty members.
You’re invited to showcase original research, ongoing projects, published work, or relevant papers at the poster session (with proper attribution).
The results of the abstract review are in! The accepted posters are listed below.
For any questions, please reach out to Kirandeep Kaur at kaur13@cs.washington.edu or Yaxin Luo at yaxinluo@uw.edu.
Call for abstracts opens: Jan. 06, 2025
Abstract submission closes: Jan. 18, 2025
Notice of acceptance: Jan. 24, 2025
RSVP for participation closes: Jan. 31, 2025
| Name | Position | Project Title |
| --- | --- | --- |
| Jiayi Yuan | Researcher / Visitor | InvestESG: A multi-agent reinforcement learning benchmark for studying climate investment as a social dilemma |
| Jiaqi He | Student | A Framework for Measuring and Benchmarking Fairness of Generative Crowd-Flow Models |
| Navreet Kaur | Student | Persona-Based Evaluation of LLMs for Substance-Use Information |
| Kentaro Hoffman | Postdoc | Bayesian Optimal Experimental Design of Streaming Data Incorporating Machine Learning Generated Synthetic Data |
| Julia Kharchenko | Student | How Well Do LLMs Represent Values Across Cultures? Empirical Analysis of LLM Responses Based on Hofstede Cultural Dimensions |
| Himanshu Jaikumar Naidu | Student | iOSPointMapper: Enabling Equitable Navigation of the Pedestrian Environment with AI and Mobile Devices |
| Anna-Maria Gueorguieva | Student | Large Language Models’ Perception of Stigmatized Groups in Social Contexts Differs from Human Attitudes |
| Harshita Chopra | Student | Feedback-aware Monte Carlo Tree Search for Efficient Information Seeking in Goal-oriented Conversations |
| Nicholas Clark | Student | Can We Trust LLMs? Understanding Epistemic Challenges Through Mediator Incoherence |
| Jonathan Jiang | Student | Promoting diversity and inclusion of research study participants through developing linguistically and culturally tailored virtual study assistants using generative artificial intelligence (AI) |
| Eddie Hock | Student | Embracing Multiplicity in Uncertain Times: Doing Social Science with Rashomon Sets |
| Tim Hua | Independent Researcher | Hierarchical Monitoring and Talk-Back for AI Control |
| Ryan Lagasse | Student | Targeted LLM Steering: Mitigating Side Effects with Selective Feature Control |
| Pawan Gupta and Aditya Gupta | Professional / High School Student | A Philosophical Map of AI Alignment |
| Preetam Dammu | Student | Dynamic-KGQA: A Scalable Framework for Generating Adaptive Question Answering Datasets |
| Kirandeep Kaur and Manya Chadha | Students | Responsible Adaptation of LLMs for Robust Recommendations |
| Muhammad Aurangzeb Ahmad | Faculty | AI Surrogates for End-of-Life Decision-Making |