Explaining Decisions Made With AI: A Review of the Co-Badged Guidance by the ICO and the Turing Institute
Document pages: 14 pages

Abstract: Co-badged by the Information Commissioner's Office and The Alan Turing Institute, 'Explaining decisions made with AI' (2020) is an expansive excursion into the practical translation of calls for accountability and transparency in organisations that use AI systems. The report aims to give organisations practical advice to help them explain the processes, services and decisions delivered or assisted by AI to the individuals affected by them. The guidance is divided into three sections, namely: Part 1, The basics of explaining AI; Part 2, Explaining AI in practice; and Part 3, What explaining AI means for an organisation. Our paper aims to (i) provide commentary on the report, both general and section-specific, and (ii) provide a condensed summary of the report itself. Our high-level feedback is: (i) Operationalization: the guidance is directed at practitioners, but it is unclear how its recommendations can be operationalized; (ii) Innovation: the recommendations of the guidance will affect decisions on model development and use. Placing less emphasis on a particular family of models ('white-box') and moving towards a stronger principles-based approach would have been beneficial for rapid testing and innovation; and (iii) Streamlining: there is much redundancy in the text. We would recommend streamlining Parts 1 and 2 and integrating them into a general 'explanation of explaining AI systems' discussion. Part 3 could be restructured as a mapping between responsibilities in the explanation matrix and roles in an organisation, including a hierarchy of accountability.
