People-centric approaches to algorithmic explainability: product and policy prototyping


Adam Bargroff

Privacy Public Policy Manager, TTC Labs @ Meta

Project driven by TTC Labs and Open Loop in partnership with Facebook Startup Programs & Singapore IMDA


We are conducting collaborative research with leading startup and scaleup companies in Asia Pacific and Europe that are responsibly using AI/ML in their digital products. The aim is to help cross-industry product makers improve the ways their services explain algorithmic mechanisms to the people using them, while surfacing challenges and best practices that can help inform policy development.

Who we are

TTC Labs and Open Loop are experimental data design and data governance initiatives that are initiated and supported by Facebook.

The Singapore Infocomm Media Development Authority (IMDA) and the Personal Data Protection Commission (PDPC) develop and regulate the converging infocomm and media sectors in Singapore in a holistic way, creating a dynamic and exciting sector filled with opportunities for growth, through an emphasis on talent, research, innovation and enterprise. Singapore sees AI as an important and emerging technology for the Digital Economy. IMDA and PDPC have released a suite of AI governance initiatives to help organisations deploy responsible AI and build consumer trust.



Across the region, there is an increase in policy and regulatory guidelines aimed at the responsible development of AI. In particular, there is a focus on bringing transparency to AI systems so that the people using these services are more aware of, and have more control over, how the services work and the decisions they make.

We need collaborative, people-centered approaches to make transparency work in practice and to future-proof both data-driven services and data policies. Our goal is to complement AI transparency approaches and principles with operational insights and guidance for product makers and policymakers.

What we're doing

The project explores the conditions for people-focused, design-led innovation for consumer-facing products and public policies. This work builds on the processes and outputs of TTC Labs and Open Loop. It seeks to complement and provide insights into existing global and national AI frameworks, such as Singapore's AI Governance Framework and its companion, the Implementation and Self-Assessment Guide for Organisations (ISAGO), among others.

The project will:

  • Build out AI transparency principles into an operational AI explainability and control (AIX) framework which will provide guidance for front-end experiences across industry use cases

  • Understand future-facing challenges and best practices to help inform industry-facing public policy developments, showcasing what ideal governance paths might look like

  • Produce a report synthesising research process, outputs and learnings into practical insights

Who we’re working with

We’re working with cutting-edge companies across sectors in Europe and APAC for which AI/ML is core to their service development and deployment.

We’re excited to work with industry leaders who are committed to innovating responsibly and to being part of a movement that works together with government, academia, and civil society through external collaboration and experimentation. Operationally, this project will seek to engage product owners and designers, as well as those with data governance expertise.

We're delighted to collaborate with the following companies as part of this project (in alphabetical order):

- Betterhalf

- MyAlice

- The Newsroom

- X0PA

- Zupervise

Benefits to participating companies

Participating companies will benefit from workshops and seminars focusing on relevant areas of their service where AIX can be leveraged to maximise people’s understanding of data-driven services. Participating companies will focus on innovative product development around AI/ML practices, including:

  • Demonstrating the value of the AI system to users

  • Making your users aware when AI powers the service, especially when personal data is used, and providing relevant privacy and data use information

  • Explaining individual AI decisions to your users so they understand how, why and when those decisions are made

  • Unpacking the ways in which the system works in more detail to expert audiences

Participating companies will:

  • Receive support from industry and government leaders (Facebook, Open Loop, TTC Labs and its design partner Craig Walker, and Singapore’s IMDA) to co-create responsible approaches to AI in your product or service

  • Engage with a vibrant community of AI companies, including Facebook and other industry peers

  • Contribute directly to shaping and better informing the AI governance debate

  • Be publicly acknowledged as an industry leader in the area of Responsible AI, with your service featuring as a case study and the potential to share a stage at key global fora

  • Leverage the training, tutorials, toolkits, mentorship, networks, resources and technical assistance provided by TTC Labs, Open Loop, IMDA and their partners

Expectations for participating companies

The project runs from September to December 2021 and includes a range of virtual 1:1s, seminars and collaborative workshops (including Design Jams) bringing together multiple stakeholders.

To apply

Applications are now closed. Please sign up to the TTC Labs mailing list for updates on this project.

Project points of contact:

  • Adam Bargroff, Privacy and Public Policy Manager, Facebook / TTC Labs

  • Verena Kontschieder, AI Policy Program Manager, Facebook / Open Loop


Adam Bargroff

Privacy Public Policy Manager, TTC Labs @ Meta

TTC Labs is a cross-industry effort to create innovative design solutions that put people in control of their privacy.

Initiated and supported by Meta, and built on collaboration, the movement has grown to include hundreds of organisations, including major global businesses, startups, civic organisations and academic institutions.