Explaining the decisions of Artificial Intelligence models in Manufacturing

The fourth industrial revolution (Industry 4.0) has resulted in the automation of many manufacturing processes. Artificial intelligence (AI) models offer astonishing performance in various industrial use cases such as predictive quality management, effective human-robot collaboration and agile production.

However, such high accuracy comes at the cost of low interpretability. Interpretability, or explainability, refers to the ability to explain and express the workings of an AI model in an intuitive manner. In real-world applications, AI solutions need to operate as high-performance models that contain a huge number (up to thousands) of hyperparameters and rely on non-linear transformations, which makes them internally very complex. As a result, AI models tend to operate as “black boxes” whose inner processes offer little clarity, especially to non-IT experts and other stakeholders, thus generating an issue of trust. The field of Explainable Artificial Intelligence (XAI) has been touted as a way to enhance the transparency of Machine Learning (ML) models and support human understanding of their decisions.

Stakeholders demand explainability for several reasons. Data scientists use XAI to debug their models and identify why they perform poorly on certain inputs, as well as to engineer new features, drop redundant ones and improve model performance. Other practitioners use it to monitor their models and be alerted when significant drifts relative to the training distributions occur. Explanations for end users are meant to increase model transparency and comply with various regulations. The importance of explainability as a concept is reflected in legal and ethical guidelines for data and ML. Specifically, in cases of automated decision-making, the European General Data Protection Regulation (GDPR) requires that data subjects have access to meaningful information about the logic involved, as well as the significance and the envisaged consequences of such processing for the data subject.
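
To illustrate the monitoring use case mentioned above, the following minimal sketch flags drift with a two-sample Kolmogorov-Smirnov test. The sensor readings, sample sizes and significance threshold are illustrative assumptions, not data or tooling from the STAR project.

# A minimal drift-monitoring sketch using a two-sample Kolmogorov-Smirnov test.
# The sensor readings, threshold and sample sizes are illustrative assumptions.
import numpy as np
from scipy.stats import ks_2samp

def drift_detected(training_sample: np.ndarray, live_sample: np.ndarray, alpha: float = 0.01) -> bool:
    """Return True when the live data are unlikely to come from the training distribution."""
    _, p_value = ks_2samp(training_sample, live_sample)
    return p_value < alpha

rng = np.random.default_rng(0)
training_temperature = rng.normal(loc=70.0, scale=2.0, size=5000)  # readings seen at training time
live_temperature = rng.normal(loc=73.5, scale=2.0, size=500)       # shifted readings in production
if drift_detected(training_temperature, live_temperature):
    print("Alert: live data drift relative to the training distribution.")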

The STAR ICT-38-2020 project aims at resolving and overcoming the above issues. Several explainability algorithms are researched and implemented to boost the transparency, and reduce the opaqueness, of models deployed in manufacturing processes. Algorithms that identify the most dominant features driving deep learning classifiers are among the tools used to make the system more interpretable to stakeholders such as ML engineers and end users (domain experts etc.). Moreover, XAI algorithms that explain the interactions between humans and robots will be implemented, as well as techniques to identify cyber-attacks, such as those manifesting as drifts in the training distribution. In the STAR ecosystem, explainability algorithms are also combined with other powerful concepts such as visual analytics and simulated reality techniques that aim to minimize the risks of physical damage caused by potential agent errors or malfunctions.
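
To give a flavour of how dominant-feature identification can work, the sketch below ranks the inputs of a generic classifier with permutation importance from scikit-learn. It is only an illustration on synthetic data with an assumed random-forest model, not the STAR algorithms or data.

# A minimal sketch of identifying a classifier's most dominant features via
# permutation importance; the synthetic dataset and random-forest model are
# illustrative stand-ins.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=8, n_informative=3, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure how much accuracy drops;
# larger drops mark the features the model depends on most.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for idx in result.importances_mean.argsort()[::-1]:
    print(f"feature_{idx}: mean accuracy drop = {result.importances_mean[idx]:.4f}")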

By: Georgios Sofianidis, Dimitris Dardanis, Spyros Theodoropoulos, Dimosthenis Kyriazis / UNIVERSITY OF PIRAEUS RESEARCH CENTER