Presentation on theme: "Mark Elliot National Centre for Research Methods"— Presentation transcript:

1 What is Explainable AI?
Mark Elliot, National Centre for Research Methods

2 What’s the problem? Legal requirements
Transparency: a data controller “must be able to demonstrate that personal data are processed in a transparent manner in relation to the data subject” (GDPR Art 5.2).
Accountability: a “data controller shall be responsible for, and be able to demonstrate compliance with” the principles of the GDPR (Art 5.2).

3 What’s the problem? Legal requirements
Right to explanation: a right to access “meaningful information about the logic involved, as well as the significance and the envisaged consequences of such processing for the data subject” (GDPR Article 13, and elsewhere).

4 What’s the problem? Human system requirements
Many AI systems are black boxes.

5 Drivers for explanation
Why did the system pick this rather than that?
When will it fail?
Is the system trustworthy?
Locus of control.

6 Vanilla Machine Learning system
Training data and initial parameter values feed the algorithm; the ML process produces a model; the model produces a recommendation, decision or prediction.
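To make this pipeline concrete, here is a minimal sketch in Python, assuming scikit-learn as an illustrative stack (the slide names no particular library; the dataset and model here are stand-ins):

```python
# Minimal sketch of the vanilla ML pipeline:
# data -> training data, initial parameters + algorithm -> ML process -> model -> prediction.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Data -> training data (plus held-out data for later checks)
X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Initial parameter values + algorithm: the ML (training) process fits the model
model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)

# Model -> recommendation / decision / prediction
predictions = model.predict(X_test)
```

Everything the deck calls a “black box” lives inside model.fit and model.predict.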

7 Types of why question
Why did the algorithm (strictly, the model) work that way? What elements of the problem space led to that decision?
Why did the learning process produce that model? In other words, what correlations / patterns does the model encode, and are they robust (do they generalise)? A sketch of both probes follows below.
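Continuing the slide 6 sketch (same model and data splits), one rough way to probe both questions is to inspect what the fitted model encodes and to compare train and test performance as a crude generalisation check:

```python
# "What patterns does the model encode?" -- for a linear model, the
# learned coefficients can be read off directly.
print("learned weights:", model.coef_)

# "Are they robust (do they generalise)?" -- a large train/test gap
# suggests the encoded patterns do not generalise.
print("train accuracy:", model.score(X_train, y_train))
print("test accuracy: ", model.score(X_test, y_test))
```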

8 Types of why question
Internal vs. external.

9 Options
Post hoc: produce a separate algorithm which reads the end-to-end process (see the sketch below).
In-built: build the decision-making algorithm so that its traces carry within them the basis for explanation.
Note: not sure I understand post hoc… In the literature, what I’ve seen are models / algorithms for inferring causal explanations from a black-box trained model; not really end-to-end, just post hoc at the “end”.
Note: the second point is related to “learning when to stop”, i.e. the model itself decides whether it has sufficient data / information / confidence to make a prediction.
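To make the post-hoc option concrete: one common post-hoc technique in the literature (not named on the slide) is permutation feature importance, which treats the trained model as a black box and needs only its predictions on held-out data. A minimal sketch, with illustrative names throughout:

```python
import numpy as np

def permutation_importance(model, X, y, score_fn, n_repeats=10, seed=0):
    """Importance of feature j ~ drop in score when column j is shuffled."""
    rng = np.random.default_rng(seed)
    baseline = score_fn(y, model.predict(X))
    importances = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        drops = []
        for _ in range(n_repeats):
            X_perm = X.copy()
            rng.shuffle(X_perm[:, j])  # break feature j's link to the output
            drops.append(baseline - score_fn(y, model.predict(X_perm)))
        importances[j] = float(np.mean(drops))
    return importances

# Usage with the slide 6 sketch, scoring by accuracy:
# permutation_importance(model, X_test, y_test,
#                        score_fn=lambda y, p: np.mean(y == p))
```

This is post hoc in the slide’s sense: the explainer is a separate algorithm wrapped around an already-trained model rather than built into it.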

10 Personalisation
One size does not fit all: explanations differ in their complexity, in the knowledge/expertise they assume, and in the type of explanation required. This implies models of the user, which is a whole extra problem…

11 Unintended consequences of explainability
We are adding an extra constraint to system design, so in terms of the task the results may be non-optimal.

12 Unintended consequences of explainability
We are constraining the range of algorithms and learning processes to ones that humans can understand, so we are channelling AI towards human replication.

