Machine learning-based decision-making systems have been deployed to address many high-stakes problems in our society, such as college admissions, loan decisions, and child maltreatment prediction. It is important for AI systems to offer explanations so that end-users can understand why certain decisions are made. Prior work in explainable AI has looked into how to improve explainability through better visualizations and descriptive text. Learning sciences research, on the other hand, has indicated that if we want people to internalize information, it is important to provide them with practice and problem-solving opportunities. In this proposal, we investigate providing in-situ learning experiences for end-users as a way for them to understand AI systems. End-users could manipulate sandbox AI systems in different ways to observe how the outcomes change. We will compare the new learning experiences with state-of-the-art explanations in terms of user understanding and learning.
Funding: $45K (2022)
Goal: We aim to develop in-situ experiences, inspired by learning sciences theories, that facilitate end-user understanding of explainable AI.
Token Investors: Xu Wang, Nikola Banovic, Anhong Guo
Project ID: 1044