This month’s newsletter focuses on what it means for AI creators to be held accountable. Every company and individual involved in creating AI is responsible for the system’s impact on society. But how can we implement accountability in practice?
We also feature our latest report, People-Centric Approaches to AI Explainability, which explains how the most transparent companies will deliver the most responsible and successful uses of AI.
We report on the AI & Society: Demonstrating Accountability panel discussion from the TTC Summit 2022, before shining a light on Wan Sie Lee, AI and Data Innovation Director at IMDA, and the importance of governance in AI. We bring things to a close with three fascinating articles on practical approaches to AI ethics.
What’s New From TTC Labs
Download | AI Explainability Report
Don’t miss out on our latest report on AI Explainability. Inside, we delve into practical approaches to help people better understand what is happening when they interact with AI.
It’s essential reading for policymakers, who develop frameworks, principles and requirements at the government level, and for product makers, who build digital experiences driven by AI.
Whether you missed it or just want to see it again, catch the fascinating discussion on AI & Society: Demonstrating Accountability from the TTC Summit 2022.
Day One of the summit brought together a panel of experts to explore what AI accountability means for society, covering health, finance and other areas. How might we implement accountability through practical tools and processes, and what valuable insights can we gain along the way?
The discussion was chaired by Dr Christine Custis (Head of Fairness, Transparency, Accountability, and Safety at the Partnership on AI). Guest speakers included Kolja Verhage (Manager, Digital Ethics, Deloitte), Prof Barry O’Sullivan (University College Cork), Meeri Haataja (CEO and Co-founder, Saidot.ai), and Luis Aranda (AI Policy Analyst, OECD).
Spotlight interview | Wan Sie Lee, Director at IMDA Singapore
Wan Sie Lee is an expert on AI Governance, helping to grow a trusted AI ecosystem in Singapore. She’s worked tirelessly with industry and government partners to enable data-driven innovation and safeguard users’ interests. In this interview, she discusses the importance of collaboration with organisations big and small to achieve responsible AI.
Understanding how AI works can be challenging for everyday users. Meta hopes to change that by empowering people with the right tools and resources to help them better understand the AI that shapes their product experiences.
Model and system documentation is vital in providing transparency and explainability. That’s one reason why Meta published an AI System Card tool prototype designed to shed light on an AI system’s architecture and how it operates. Why not see for yourself in the article?
As AI continues to develop at pace, how can we keep up and ensure people always stay protected? One mechanism could play an important role: algorithmic impact assessments (AIAs), a basis for algorithmic accountability.
The Ada Lovelace Institute has conducted a detailed survey of proposals for AIAs, alongside research into best practices for their implementation. Here, it highlights seven design challenges facing AIAs.
Article | Where to Start With AI Ethics?
AI ethics can be a complex area to navigate. So, where should businesses looking to develop AI begin, and how can ethics be operationalised?
DLA Piper, a multinational law firm, has highlighted a practical route to applying AI ethics. It compares AI ethics with algorithmic accountability to understand the distinction between the two concepts, before covering methods that can be applied to real-life projects.