Report published April 2022
Providing transparency and explainability in artificial intelligence (AI) systems presents complex challenges across industry and society, raising questions about how to build confidence and empower people in their use of digital products.
The purpose of this report is to contribute to collaborative, cross-sector efforts to address these questions. It shares the key findings of a project conducted in 2021 by TTC Labs and Open Loop in collaboration with the Singapore Infocomm Media Development Authority (IMDA) and Personal Data Protection Commission (PDPC). Through this project we have developed a series of operational insights and guidance for bringing greater transparency to AI-powered products.
The People-Centric Approaches to AI Explainability project brought startups from the Asia-Pacific region and Europe together with multidisciplinary experts in a series of co-creation workshops focused on product and policy prototyping. The project tested a draft Framework for designing AI explainability experiences, which is under development by Meta's Responsible AI team.
These learnings are intended for both policymakers and product makers – for those developing frameworks, principles and requirements at the government level, and for those building and evolving AI-driven apps and websites. By improving people’s understanding of AI, we can foster greater trust in digital services.
Explore insights for product makers and policymakers on designing for AI explainability across digital products, drawn from co-creation workshops held around the world.
You can download the report below. This report is a living document. Please share your feedback to firstname.lastname@example.org.
The report is accompanied by a visual explainer that highlights our journey so far in designing for AI explainability, unpacking some of the people-centred insights from our work.