
The STAR Auditing Framework for Trustworthy AI

STAR is researching and developing trusted Artificial Intelligence (AI) solutions for production lines and industrial use cases. The project also complements its scientific developments and research prototypes with practical assets that help manufacturers and providers of industrial solutions implement, deploy and operate trusted AI solutions. These assets include the means for assessing, benchmarking and improving the trustworthiness of AI systems.

In this direction, STAR has produced a practical framework for auditing the trustworthiness of AI systems in the form of a “scorecard”. The framework serves a dual objective: on the one hand, it enables manufacturers and providers of industrial automation solutions to benchmark the trustworthiness of their solutions; on the other, it provides them with practical suggestions for improving it.

The auditing framework consists of the following components:

  • A Self-Assessment / Self-Evaluation Form for the trustworthiness of AI systems. It consists of a set of questions covering different aspects of an AI system’s trustworthiness.
  • An AI Trustworthiness Evaluation Guide, which provides the means for processing the information of the self-evaluation form. It comprises rules for scoring an AI system’s trustworthiness based on the answers supplied about the system.
  • Two main outputs of the AI auditing process, namely a trustworthiness score and feedback for improvement, which are produced by processing the self-evaluation answers in line with the rules and instructions of the evaluation guide (a minimal data model for the form and guide is sketched below).
Figure 1: STAR Trustworthiness Auditing Framework Overview
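
To make the structure of these components concrete, the following minimal sketch models the self-evaluation form and the point-awarding rules of the evaluation guide in Python. All names, answer options and point values here are illustrative assumptions, not part of the STAR deliverables.

```python
from dataclasses import dataclass

@dataclass
class Question:
    """One multiple-choice question of the self-evaluation form."""
    qid: str                 # question identifier, e.g. "Q1"
    text: str                # the question posed to the user
    options: dict[str, int]  # answer option -> points awarded by the scoring guide

# A two-question excerpt of a hypothetical form; the answer options and
# point values are invented for illustration only.
FORM = [
    Question("Q1",
             "How does your AI system ensure the transparency of AI models and algorithms?",
             {"No measures": 0,
              "Model documentation published": 1,
              "Full disclosure of models and training data": 2}),
    Question("Q7",
             "What measures do you implement to identify and mitigate AI bias situations?",
             {"No measures": 0,
              "Bias testing before deployment": 1,
              "Continuous bias monitoring in production": 2}),
]
```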


Considering the above-listed components, the AI trustworthiness auditing process involves the following steps:

  • Supply of Information about the AI System: The business owner of the AI system (e.g., a manufacturing worker or production manager) or its developer/integrator provides information about the system, in the form of answers to a set of multiple-choice questions from the self-evaluation form/questionnaire. Completing this step requires that the user of the framework has a good understanding of the AI system and of AI technology in general.
  • Scoring of the System’s Trustworthiness: The supplied answers to the multiple-choice questions are analysed and a trustworthiness score is computed. The computation is based on the scoring guide of the auditing framework.
  • Provision of Feedback for Improvement: The AI trustworthiness evaluation guide is leveraged to provide the user of the auditing framework with information and feedback for improving the trustworthiness score of their system. This aims to foster a discipline of continuous improvement of the AI system’s trustworthiness. The improvement feedback must be crafted by an expert on the topic of AI trust, following a proper assessment of the supplied answers (a minimal end-to-end sketch of these steps follows this list).
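
Continuing the sketch above (reusing Question and FORM), the three steps can be tied together in a single audit routine. The feedback strings below are placeholders for the expert-crafted guidance that the framework foresees.

```python
def run_audit(answers: dict[str, str]) -> tuple[int, list[str]]:
    """Scores the supplied answers (step 2) and collects improvement hints (step 3)."""
    score = 0
    feedback = []
    for q in FORM:                                  # FORM from the previous sketch
        choice = answers.get(q.qid, "No measures")  # step 1 supplies these answers
        points = q.options.get(choice, 0)
        score += points
        if points < max(q.options.values()):
            # Placeholder text: the real feedback is crafted by an AI-trust expert.
            feedback.append(f"{q.qid}: stronger measures are available for this aspect")
    return score, feedback

# Example run with answers supplied by the system's business owner.
score, hints = run_audit({"Q1": "Model documentation published", "Q7": "No measures"})
print(f"Trustworthiness score: {score}")
for hint in hints:
    print(hint)
```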

The following figure illustrates the different aspects considered by the STAR trustworthiness auditing framework: transparency, explainability, fairness, bias mitigation, robustness, compliance with ethical guidelines, documentation, human oversight, data collection, data storage, data management and data quality. As the list makes evident, several of these aspects concern the trustworthiness of the data used for developing, training and executing AI systems.

Figure 2: Trustworthiness Aspects Covered by the STAR Auditing Framework


To deal with these trustworthiness aspects, the framework includes the following multiple-choice questions:

  Q1: How does your AI system ensure the transparency of AI models and algorithms?

  Q2: How does your AI system ensure the explainability of AI models and algorithms?

  Q3: How does your system collect data within the AI system to ensure privacy, security, and integrity?

  Q4: How does your system store data within the AI system to ensure privacy, security, and integrity?

  Q5: How does your system manage data within the AI system to ensure privacy, security, and integrity?

  Q6: What measures do you implement to ensure the accountability of the AI system’s decisions, i.e., to attribute these decisions to specific algorithms or components?

  Q7: What measures do you implement to identify and mitigate AI bias situations?

  Q8: What measures do you implement to ensure the robustness of the AI system?

  Q9: What measures do you implement to ensure the fairness of the AI system?

  Q10: What measures do you implement to ensure compliance with ethical standards and guidelines in manufacturing?

  Q11: What measures do you implement to ensure the quality of data of the AI system?

  Q12: What measures do you implement to ensure the cyber-security of the AI system?

  Q13: What measures do you implement for human oversight and intervention when necessary to ensure that AI decisions align with human values and intentions?

  Q14: What measures do you implement to provide comprehensive documentation for the AI system?

Based on the answers to the above-listed questions, a trustworthiness score is calculated. The underlying assumption is that the more measures an organisation takes regarding the trustworthiness of an AI system, the higher the system’s trustworthiness score. STAR provides the scoring guide in line with this approach, including the upper and lower margins of the trustworthiness score for each question and for the auditing framework (“scorecard”) as a whole. The presented approach offers the following advantages: (i) it is very simple and easy to understand; and (ii) it can score different systems automatically, based on clear and unambiguous scoring rules. However, it also suffers from potential inaccuracies, as it assumes that all measures and questions are of equal importance, apart from the extra credit given to measures that foster multiple dimensions of trustworthiness. To alleviate these issues, the scoring process can be configured in different ways by assigning weights to specific measures and/or questions, as the sketch below illustrates.
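
As a sketch of such a configurable variant, per-question weights can be applied before normalising the result against the scorecard’s upper margin. The weight values below are arbitrary examples rather than STAR’s calibration, and the code reuses the FORM model from the earlier sketches.

```python
# Hypothetical per-question weights; unlisted questions default to 1.0.
WEIGHTS = {"Q7": 2.0}  # e.g. treat bias mitigation as twice as important

def weighted_score(answers: dict[str, str]) -> float:
    """Weighted variant of the scoring step, normalised to a 0-100 scale."""
    earned, maximum = 0.0, 0.0
    for q in FORM:                                   # FORM from the earlier sketch
        weight = WEIGHTS.get(q.qid, 1.0)
        earned += weight * q.options.get(answers.get(q.qid, "No measures"), 0)
        maximum += weight * max(q.options.values())  # per-question upper margin
    return 100.0 * earned / maximum if maximum else 0.0

answers = {"Q1": "Model documentation published", "Q7": "No measures"}
print(f"Weighted score: {weighted_score(answers):.1f}/100")
```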

This is the first version of STAR’s trustworthiness auditing framework, which will soon become available on the STAR Market platform. We are in the process of improving the auditing framework based on feedback from STAR project members and other industry stakeholders. We also plan to provide complementary assets that will boost the sustainability and wider use of the framework, including a mini training tutorial about the different questions of the auditing framework, as well as a plan for the potential standardisation of this auditing tool.

By: John Soldatos  / Netcompany-Intrasoft