
Initial Lessons Learned From Piloting the EU’s AI Ethics Assessment List

by Eline Chivot

The European Commission’s high-level expert group on artificial intelligence (AI) has developed an initial assessment list for building trustworthy AI. This assessment list may eventually form the basis for a new legal framework for AI in the EU. The private sector has provided feedback, criticizing the assessment list for its explainability requirements, its transparency requirements, and its redundant questions. The Commission should revise the assessment list to address all three of these points.

First, the final assessment list should remove all questions relating to explainability. Explainability is not achievable for all AI systems. As Belgian association AI4Belgium states in its feedback: “Full explainability can be a challenge, both in terms of feasibility and practicality.” According to a report from the Developers Alliance, an advocacy group for software companies, “It is impossible to have complete explanations on how the outputs of AI systems are provided.” Initiatives such as DARPA’s XAI or IBM’s AI Explainability 360, which seek to provide explainable AI, are nascent research projects, and it is unrealistic to expect all deep learning systems to be fully explainable. Moreover, making explainability a requirement for AI systems would hold algorithmic decisions to a standard that does not exist for human decisions. (Indeed, as the Developers Alliance notes, “Even human’s decision-making processes aren’t fully known.”) It would also limit the use of some advanced algorithms that offer high levels of accuracy but cannot easily be explained. A better alternative to explainability is algorithmic accountability—the principle that an algorithmic system should employ a variety of controls to ensure the operator can verify that algorithms work in accordance with the operator’s intentions, and can identify and rectify harmful outcomes.

Second, it is not reasonable to include transparency as a requirement. According to industry, the section on transparency is vague, and guidance on the required level and scope of transparency is unclear, which will make implementation difficult in practice. In addition, the economic impact of requiring companies to reveal their source code would be significant, as it would prevent them from capitalizing on their intellectual property. AI R&D would also slow because businesses could simply copy the work of others, decreasing the incentive for future investment. If the goal of transparency is to increase trust by providing users with sufficient information, this can be better achieved by giving them a clear description of the data the algorithm uses and a basic explanation of how it makes decisions.

Third, the high-level expert group should eliminate redundant requirements from the assessment list. For example, the list treats explainability as a form of transparency, even though these are two distinct concepts. In addition, some of the questions and themes on the list are not relevant to all sectors, or are already covered by existing EU legislation, which could cause confusion in product development. For instance, the section on privacy and data governance overlaps with the requirements of the GDPR. In revising the assessment list, EU policymakers should include only necessary questions and contextualize them with sectoral case studies so that they offer developers actionable guidance.

EU policymakers should heed the feedback from industry and other experts working in the field. Any future requirements for AI systems should be clear, effective, and practical. Without making the assessment list more reasonable, EU policymakers risk imposing undue burdens on companies and holding back the development of AI in Europe.

Image credits: Flickr user SMPAGWU
