TTC Labs - Trustworthy AI Experiences

Trustworthy AI Experiences

How can we empower people in their interactions with AI systems?

Unpack key terms related to trustworthy AI with this interactive tool.

View AI Glossary

Understanding AI

How can we empower people to make informed decisions about trusting an AI system?

Visual Explainer

Get an introduction to AI Explainability with our visual explainer.

View Explainer

Helping People Understand AI

When people don’t understand how an AI-driven service arrives at its results, they may feel less able to judge whether to trust it. We therefore need to design mechanisms that explain to people when, how and why they are affected by AI systems, supporting greater comprehension of AI.

These mechanisms should drive practical understanding and agency for everybody, not just a narrow few. That means explanations must be provided in a way that caters for non-experts and gives people the information they need to make a reasonably informed judgement.

The goal is not product explanations, but people’s understanding.

People are not passive recipients of AI explanations.

Persona Insight

Antonio

“I like being able to open a new app and just understand how to use it, without long instructions.”

Through our interactive workshops and research, we’ve found that people actively draw on their wider knowledge and experience to develop their understanding of an AI system. That understanding can shift depending on how a product behaves and surfaces results, as people situate explanations within the context of their own experience. This means we must adopt a more interactive concept of explainability, moving beyond the idea that product makers provide explanations and product users merely receive them. Product makers can take a more collaborative approach to explainability through interactive touchpoints, user controls and implicit explainability mechanisms.

Not everyone requires the same level of information

Persona Insight

Rochelle

“I don't know why some articles are surfaced over others, but I'm ok with that.”

Comprehensive transparency and highly detailed explanations aren’t always useful, desired, or even possible. The key to creating positive user experiences is knowing how much to reveal and when.

Product makers need to ensure people are provided with the information most relevant to their respective needs and contexts. They also need to consider how explainability is balanced between different audiences, such as general product users and expert stakeholders, and acknowledge the trade-offs that come with this. When a product serves different user groups, the information and control appropriate to each can vary significantly.

It also means accounting for the fact that different people engage with information in different ways, by designing multiple opportunities for a user to develop understanding in a way that is meaningful to them.

TTC Labs has been researching AI Explainability and ways to create a pragmatic path forward

TTC Labs is currently working in this space, exploring how AI Explainability can be applied in industry, policy and design.

We have been bringing together industry, designers, policymakers, researchers and governments to create a unified approach to AI explainability and the challenges it presents across sectors.

We have been testing the draft AI Explainability Framework and Toolkit developed by Meta's Responsible AI team across a range of use cases. The Framework provides direct guidance for product makers designing explainability experiences.

We have produced a set of product design insights and public policymaking insights for the creation and promotion of people-centric explainability experiences.

View Report

In Control of AI

Control is about giving people meaningful agency over their relationship with an AI-powered system.

Empowering People with Control

Controls take many forms; they include mechanisms that allow people to influence the way an AI-powered product generates results, to modify their declared interests and preferences, or to override or remove AI-inferred data inputs. They also include opportunities for people to provide feedback on recommendations and to challenge AI outcomes and explanations if they disagree with them.

Comprehensive control is not necessarily possible or desirable, but appropriate controls can enhance people’s understanding of AI systems. There is more work to be done to identify and develop user controls that can improve transparency and explainability of AI systems for different audiences.
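To make the control surfaces described above more concrete, here is a minimal sketch of how they might be modelled in code. All names here (InterestSource, UserControls, remove_inferred and so on) are hypothetical illustrations for this page, not part of any real product, framework or API.

    from dataclasses import dataclass, field
    from enum import Enum, auto
    from typing import List, Optional

    class InterestSource(Enum):
        DECLARED = auto()   # stated directly by the person
        INFERRED = auto()   # derived by the AI system

    @dataclass
    class Interest:
        topic: str
        source: InterestSource
        active: bool = True

    @dataclass
    class Feedback:
        item_id: str
        signal: str                      # e.g. "more_like_this", "not_relevant"
        challenge: Optional[str] = None  # free-text challenge to an AI outcome

    @dataclass
    class UserControls:
        interests: List[Interest] = field(default_factory=list)
        feedback: List[Feedback] = field(default_factory=list)

        def remove_inferred(self, topic: str) -> None:
            # People can override or remove AI-inferred inputs while
            # keeping the interests they declared themselves.
            for interest in self.interests:
                if interest.topic == topic and interest.source is InterestSource.INFERRED:
                    interest.active = False

Separating declared from inferred interests is one way to make the distinction between "what I told the system" and "what the system assumed about me" visible and controllable.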

Control requires meaningful agency over AI interactions

Persona Insight

Rochelle

“When there are tons of pop-ups or big blocks of text, I just click through.”

Control is about empowering people over their interactions with AI systems. The relative complexity of AI means that explaining how and why a system uses information, such as someone’s personal data, is not always straightforward.

And giving people a lot of information about how their data might be used is not the same as meaningful control; it outsources responsibility to the user instead of to the product or system maker.

Meaningful agency means control over inputs and the ability to feed back into AI processes. It may also include a choice among the outputs of AI systems.

Panel Discussion: Where Control Happens

David Polgar moderated this panel discussion on Control and AI at the TTC Summit, joined by Claudia del Pozo from CMinds and Katie Elfering from Meta.

Watch Now

Are you working in this space?
We'd love to hear from you.

Our explorations on this topic are just a beginning. These challenges require more work, and more insight from the wider community.

Get in Touch



Accountability

Accountability focuses on tracing and verifying system outputs and actions. It involves being clear about which parts of a system a person is accountable for and to whom they are accountable.

Transparency Strengthens Accountability

System Cards, a new resource for understanding how AI systems work

Learn More

Technical documentation, such as model or system cards, can provide insights into why and for what purpose an AI model or system was developed, how it was trained, how it produces results and how it will evolve over time.

Producing this documentation gives product makers an opportunity to reflect on their processes, which promotes accountability within an organisation. Allowing external experts to verify it can also assure non-experts that the necessary oversight mechanisms are in place and working effectively.
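As a rough illustration of the kind of information such documentation captures, here is a minimal sketch of a system-card record. The field names and example values are assumptions for illustration only; they are not the schema of Meta's System Cards or any published model-card format.

    from dataclasses import dataclass

    @dataclass
    class SystemCard:
        system_name: str
        purpose: str           # why and for what the system was developed
        training_summary: str  # how it was trained (data sources, methods)
        how_it_works: str      # how it produces results for the user
        update_policy: str     # how it is expected to evolve over time
        responsible_team: str  # who can be held to account for it

    card = SystemCard(
        system_name="Example feed ranker",
        purpose="Order articles by predicted relevance to the reader",
        training_summary="Trained on historical engagement signals",
        how_it_works="Scores candidate articles and surfaces the top results",
        update_policy="Retrained monthly; documentation reviewed each release",
        responsible_team="Example ranking team",
    )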

With adequate information and clear routes to redress, users are empowered to make informed decisions about the AI systems they use

Accountability is a collaborative effort

Policymakers around the world are increasingly advocating for cross-sectoral AI accountability and transparency. TTC Labs supports this push, and the need to work closely with industry to create mechanisms which are both meaningful and pragmatic.

Developing an AI system, much like producing a new drug or building an aeroplane, can involve numerous people, and even different companies. For the end user to be able to use a product confidently, they need to feel sure that there is appropriate oversight of the development process.

For AI systems, one way of doing this is to map the person or entity who is responsible for each part of the system across the development lifecycle, and to whom they are accountable.
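A minimal sketch of what such a mapping might look like, with lifecycle stages, responsible entities and the parties they answer to all invented for illustration:

    # Hypothetical accountability map: each development-lifecycle stage is
    # paired with the entity responsible for it and the party that entity
    # answers to. Stage names and parties are illustrative assumptions.
    accountability_map = {
        "data collection": {"responsible": "Data vendor",
                            "accountable_to": "Data protection officer"},
        "model training":  {"responsible": "ML research team",
                            "accountable_to": "Responsible AI review board"},
        "integration":     {"responsible": "Product team",
                            "accountable_to": "Internal audit"},
        "deployment":      {"responsible": "Operations team",
                            "accountable_to": "External regulator"},
    }

    for stage, roles in accountability_map.items():
        print(f"{stage}: {roles['responsible']} answers to {roles['accountable_to']}")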

Stay tuned

With a lot more work to do in this space, TTC Labs is exploring ways to support and encourage sensible, unified approaches to AI accountability.

Passionate about Trustworthy AI Experiences?

TTC Labs is actively researching this space, and creating new reports, articles and insights. We also host events to bring industry, experts and policymakers together.
