News

People-centric approaches to algorithmic explainability: product and policy prototyping

Adam Bargroff

Privacy & Public Policy Manager, Meta & TTC Labs

Please see the report published in May 2022 from this project, which was driven by Meta's TTC Labs and Open Loop in partnership with Singapore's IMDA.

Overview

We conducted collaborative research with leading startup and scaleup companies in Asia Pacific and Europe that are responsibly using AI/ML in their digital products. The aim was to help cross-industry product makers improve how their services explain algorithmic mechanisms to the people using them, while surfacing challenges and best practices that can help inform public policy development.

Who we are

TTC Labs and Open Loop are experimental data design and data governance initiatives initiated and supported by Facebook.

The Singapore Infocomm Media Development Authority (IMDA) and the Personal Data Protection Commission (PDPC) develop and regulate the converging infocomm and media sectors in Singapore in a holistic way, creating a dynamic and exciting sector filled with opportunities for growth, through an emphasis on talent, research, innovation and enterprise. Singapore sees AI as an important and emerging technology for the Digital Economy. IMDA and PDPC have released a suite of AI governance initiatives to help organisations deploy responsible AI and build consumer trust.

Background

Regionally, policy and regulatory guidelines aimed at the responsible development of AI are increasing. In particular, there is a focus on bringing transparency to AI systems, so that the people using these services are more aware of, and have more control over, how the services work and the decisions they make.

We need collaborative, people-centered approaches to make transparency work in practice and to future-proof both data-driven services and data policies. Our goal is to complement AI transparency approaches and principles with operational insights and guidance for product makers and policymakers.

What we did

The project explored the conditions for people-focused, design-led innovation in consumer-facing products and public policies. This work built on processes and outputs from TTC Labs and Open Loop, and seeks to complement and provide insights into existing global and national AI frameworks, such as Singapore's AI Governance Framework and its companion, the Implementation and Self-Assessment Guide for Organisations (ISAGO), among others.

The project

  • Built out AI transparency principles into an operational AI explainability and control (AIX) framework that provides guidance for front-end experiences across industry use cases

  • Identified future-facing challenges and best practices to help inform industry-facing public policy developments, showcasing what ideal governance paths might look like

  • Produced a report synthesising the research process, outputs and learnings into practical insights

Who we worked with

We worked with cutting-edge companies across sectors in Europe and APAC for which AI/ML is core to their service development and deployment.

We were excited to work with industry leaders who are committed to innovating responsibly and being part of a movement working with government, academia, and civil society through external collaboration and experimentation. Operationally, this project engaged product owners and designers, as well as those with data governance expertise.

We collaborated with the following companies as part of this project (in alphabetical order):

- Betterhalf

- MyAlice

- The Newsroom

- X0PA

- Zupervise

Benefits to participating companies

Participating companies benefited from workshops and seminars focusing on relevant areas of their service where AIX can be leveraged to maximise people's understanding of data-driven services. Participating companies focused on innovative product development around AI/ML practices, including:

  • Demonstrating the value of the AI system to users

  • Making users aware when AI is used to power the service, especially when personal data is involved, and providing relevant privacy and data use information

  • Explaining individual AI decisions to users, so they understand how, why and when those decisions are made

  • Unpacking the ways in which the system works in more detail to expert audiences

Participating companies

  • Received support from industry and government leaders, including Meta's Open Loop, TTC Labs, our design partner Craig Walker, and Singapore's IMDA, to co-create responsible approaches to AI in their products and services

  • Engaged with a vibrant community of AI companies, including Facebook and other industry peers

  • Contributed directly to shaping and better informing the AI governance debate

  • Were publicly acknowledged as industry leaders in the area of Responsible AI, with their services featured as case studies and the potential to share a stage at key global fora

  • Leveraged the training, tutorials, toolkits, mentorship, networks, resources and technical assistance provided by TTC Labs, Open Loop, IMDA and their partners

The project ran from September to December 2021, and included a range of virtual 1:1s, seminars and collaborative workshops (including Design Jams) that brought together multiple stakeholders.

Applications

Applications are now closed. Please sign up to the TTC Labs mailing list for updates on our work around Trustworthy AI.

Adam Bargroff

Privacy & Public Policy Manager, Meta & TTC Labs

Adam Bargroff is Privacy and Public Policy Manager at TTC Labs & Meta

TTC Labs is a cross-industry effort to create innovative design solutions that put people in control of their privacy.

Initiated and supported by Meta, and built on collaboration, the movement has grown to include hundreds of organisations, including major global businesses, startups, civic organisations and academic institutions.