The "Facebook Accelerator - Singapore" programme is a unique collaboration between Facebook, the Infocomm Media Development Authority (IMDA), The TTC Labs Team, and Craig Walker Design and Innovation Studio. The six-month programme aims to support and empower innovative, data-driven startups across the Asia Pacific region. During the programme, the participating startups can access workshops, practical training, mentoring and expertise from colleagues within Facebook and key external partners with whom they collaborate on the programme.
Last year we worked with IMDA to run a regulatory sandbox, which allowed the startups to test creative new approaches to explaining how algorithms work in their products (XAI) and to obtaining consent for data capture and sharing. The first season of the Accelerator culminated in a report, “People-Centric Approaches to Notification, Consent and Disclosure”, which contains insights into the UX design and policy co-creation process, as well as detailed illustrations of the solutions the startups created.
For the second season of the Accelerator programme, which started in March 2020, we wanted to go even deeper in understanding the challenges facing startups that want to be both compliant and creative in their approaches, while encouraging them to be transparent and to build trust with their users. We created two ‘tracks’: one on AI Explainability (XAI) and one on Notification and Consent.
On the Notification and Consent track, we learned that most of the startups were facing difficulties with giving users visibility into the data held on them, with ensuring compliance with both the GDPR and key ASEAN privacy laws, and with developing and implementing ‘best practice’ methods for obtaining consent (up-front vs. in-context vs. on-demand). The startups also shared issues in communicating the benefits of data sharing and how it powers their service, and in explaining enterprise and B2B data-sharing models transparently to users.
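To make those three consent timings concrete, here is a minimal, hypothetical sketch of how they might be modelled. The types and names below are invented for illustration and are not drawn from any participating startup's product:

```typescript
// Hypothetical sketch: three common timings for requesting consent.
// None of this reflects a specific startup's implementation.

type ConsentTiming = "up-front" | "in-context" | "on-demand";

interface ConsentRequest {
  purpose: string;          // e.g. "personalise recommendations"
  dataCategories: string[]; // e.g. ["name", "email"]
  timing: ConsentTiming;
}

// Up-front: ask during onboarding, before any feature is used.
const onboardingConsent: ConsentRequest = {
  purpose: "personalise recommendations",
  dataCategories: ["browsing history"],
  timing: "up-front",
};

// In-context: ask at the moment a feature first needs the data,
// so the user can see why the data is required.
function requestInContext(feature: string, data: string[]): ConsentRequest {
  return { purpose: `enable ${feature}`, dataCategories: data, timing: "in-context" };
}

// On-demand: the user initiates the request themselves,
// for example from a settings or privacy screen.
function requestOnDemand(purpose: string, data: string[]): ConsentRequest {
  return { purpose, dataCategories: data, timing: "on-demand" };
}
```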
While discussing these challenges and potential solutions, participants explored how adapting designs for the local context can increase data comprehension and product value upfront. Solutions that stemmed from this learning included chatbots and conversational AI to explain what data was being collected and how it was being used, and the use of graphics and animated “characters” to ensure concepts were explained in a simple and engaging way. A great example of this was “Roqis, The Data Defender”, a data privacy assistant created by the Qiscus team.
Several startups examined how to allow users to control their data over time. One startup, Trabble, focused on the need to educate users about how they can update and share their data. Trabble also created a solution that allows users to delete their data, even when it has been associated with a third-party account in error.
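As an illustration only, a deletion flow along those lines might first unlink any third-party association before erasing the record. Every name in this sketch is hypothetical and is not Trabble's actual API:

```typescript
// Hypothetical sketch of a user-initiated data deletion flow that also
// handles records mistakenly linked to a third-party account.

interface UserRecord {
  id: string;
  linkedThirdPartyId?: string; // set if the record was associated with a partner account
}

interface DataStore {
  get(id: string): Promise<UserRecord | undefined>;
  delete(id: string): Promise<void>;
}

interface ThirdPartyClient {
  // Ask the partner service to remove its copy of the association.
  unlink(thirdPartyId: string, userId: string): Promise<void>;
}

async function deleteUserData(
  userId: string,
  store: DataStore,
  partner: ThirdPartyClient,
): Promise<void> {
  const record = await store.get(userId);
  if (!record) return; // nothing to delete

  // If the data was linked to a third-party account (possibly in error),
  // detach it there first so no orphaned copy remains.
  if (record.linkedThirdPartyId) {
    await partner.unlink(record.linkedThirdPartyId, userId);
  }

  await store.delete(userId);
}
```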
Other notable design solutions included the simplification of language in privacy policies to make them easier to comprehend, well demonstrated by e-commerce assistant Halosis, and the use of highly visual, engaging UX flows to help users understand privacy policies quickly. Images of the prototypes, together with design articles explaining the process behind them, can be viewed in our ‘Designs’ section.
For the AI Explainability track, we are testing the specific provisions on AI transparency and explainability in Singapore's Model AI Governance Framework and its Implementation and Self-Assessment Guide for Organizations (ISAGO). This is our first Policy Prototyping Program, a global initiative consisting of regulatory innovation labs aimed at developing and testing governance frameworks to inform future rule- and lawmaking on AI.
Throughout the six-month programme, the 12 participating companies are developing an AI explainability solution while providing detailed insights into their experience of building and delivering it in accordance with the policy guidance. In terms of preliminary findings, participants’ early recommendations include asking policymakers to consider adding quantitative indicators to measure the explainability, transparency, and fairness of AI systems. Participating companies also flagged the need for clearer communication about the costs and benefits that adopting certain explainability requirements would have for their business, operations and clients. We will continue to collect empirical information over the coming months and plan to publish the final results in early 2021.
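For a sense of what such a quantitative indicator could look like, here is one illustrative example: a demographic parity difference for model decisions. This is our own hypothetical sketch and is not drawn from the Framework, the ISAGO, or any participant's recommendation:

```typescript
// Hypothetical example of one quantitative fairness indicator:
// the demographic parity difference between two groups.

interface Prediction {
  group: "A" | "B";  // protected attribute (e.g. a demographic group)
  positive: boolean; // did the model output a positive decision?
}

// Share of positive decisions within one group.
function positiveRate(preds: Prediction[], group: "A" | "B"): number {
  const members = preds.filter((p) => p.group === group);
  if (members.length === 0) return 0;
  return members.filter((p) => p.positive).length / members.length;
}

// A value near 0 suggests similar positive-decision rates across groups.
function demographicParityDifference(preds: Prediction[]): number {
  return Math.abs(positiveRate(preds, "A") - positiveRate(preds, "B"));
}
```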
To mark the end of this year's Accelerator programme, on Thursday 24th September Facebook hosted a virtual Demo Day showcasing the incredible work of the startups over the past six months. We'd like to say a huge "thanks" to IMDA and Craig Walker Design Studio for their amazing partnership over this season, and to Plug and Play APAC for ensuring that the whole programme ran so smoothly!
Looking ahead, Facebook is planning a third season of the "Facebook Accelerator - Singapore", which will start in spring 2021. The next season will focus on developing the testing methodology for the Model AI Governance Framework and the ISAGO, as well as on designing consent notices for emerging technology platforms. Be the first to hear about plans for the new season by following TTC Labs on Instagram, Twitter or LinkedIn.