Humans in the AI loop: how can organisations assess human-centric AI systems in manufacturing?
In the STAR project we are concerned with key questions about how human-centric AI solutions in manufacturing should be designed, developed, deployed and, importantly, critically evaluated.
We take a co-creative approach to the design, as reported before. We base the development on an appropriate reference architecture. STAR deployment involves specific technology choices reported in the respective deliverable (available on request) and in our Open Access Book.
We also have concrete examples of how users interact with AI-driven solutions. Here is an example of AI-driven visual quality inspection work implemented by the University of Groningen. An operator in the quality control team (from our IBER OLEFF partner) can interact with an AI-driven system, thanks to a synergy with our partners from the Jozef Stefan Institute (JSI) and Qlector. The user is not only offered post-processing hints or explanations but is additionally provided with prototype images of similar cases, making the process more intuitive. In this way, a job profile in a quality control team of the future might involve more such activities, including data labelling and interaction with AI, and less manual or physical work. This is an example of how job profiles evolve with AI.
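The precise pipeline is described in the project deliverables; purely as an illustration of the prototype-based interaction described above, the Python sketch below retrieves the most similar previously labelled inspection cases by embedding similarity. All names and data (PROTOTYPE_DB, similar_prototypes, the random embeddings) are hypothetical stand-ins, not the STAR implementation.

```python
# Illustrative sketch only (not the STAR implementation): retrieving
# prototype images of similar past cases to show an operator alongside
# the model's quality verdict. All names and data are hypothetical.
import numpy as np

# Hypothetical database: embedding vectors of previously labelled
# inspection images, with the label and image path for each one.
PROTOTYPE_DB = {
    "embeddings": np.random.rand(500, 128),  # stand-in for real embeddings
    "labels": np.random.choice(["ok", "scratch", "dent"], 500),
    "paths": [f"prototypes/img_{i}.png" for i in range(500)],
}

def similar_prototypes(query_embedding: np.ndarray, k: int = 3):
    """Return the k most similar stored cases by cosine similarity."""
    db = PROTOTYPE_DB["embeddings"]
    sims = db @ query_embedding / (
        np.linalg.norm(db, axis=1) * np.linalg.norm(query_embedding)
    )
    top = np.argsort(sims)[::-1][:k]
    return [
        {"path": PROTOTYPE_DB["paths"][i],
         "label": PROTOTYPE_DB["labels"][i],
         "similarity": float(sims[i])}
        for i in top
    ]

# An operator reviewing a flagged part would see, next to the verdict,
# the k closest prototype images and their labels:
query = np.random.rand(128)  # stand-in for an embedding of the new image
for case in similar_prototypes(query):
    print(case)
```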
But how can industry evaluate the success of AI-driven industrial systems that are meant to be trusted and human-centric?
A common pitfall is to take a solely technology-driven perspective. This will not suffice, for the same reasons that a technology push alone is never enough for successful innovation. For example, it is not enough to evaluate success solely on the accuracy of machine learning models (whichever appropriate definition of accuracy may be employed).
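To make the point concrete, here is a minimal, hypothetical illustration (not drawn from any STAR use case) of how a single plain-accuracy figure can mislead on the imbalanced data typical of defect inspection:

```python
# Hypothetical example: with a 2% defect rate, a model that never flags
# a defect still scores 98% plain accuracy while being useless in practice.
y_true = ["ok"] * 98 + ["defect"] * 2
y_pred = ["ok"] * 100                      # degenerate "always ok" model

plain_accuracy = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

# Balanced accuracy averages per-class recall, exposing the failure.
classes = set(y_true)
recalls = [
    sum(t == p == c for t, p in zip(y_true, y_pred))
    / sum(t == c for t in y_true)
    for c in classes
]
balanced_accuracy = sum(recalls) / len(recalls)

print(plain_accuracy)     # 0.98 -- looks excellent
print(balanced_accuracy)  # 0.50 -- no better than chance across classes
```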
Another viewpoint is to focus solely on operational performance. There is no denying the importance of such a perspective. But it is unlikely to be enough if it ignores a third viewpoint: that of human factors, which also includes ethics and work design aspects.
The STAR partnership addresses ethics, work design, and societal factors too, for example, https://star-ai.eu/ai-and-ethics-manufacturing, https://star-ai.eu/bias-management-ai-consistent-human-values, https://star-ai.eu/how-make-human-centred-ai-work-not-just-function. Yet the safest and most secure systems might be those that allow nothing to be done at all. The key message is that it would be wrong to prioritise any of the above categories of factors over the others when designing, testing, and evaluating human-centric AI-driven systems.
In the STAR project we put forward a systematic approach to the evaluation of human-centric AI systems, taking all such factors into consideration. The approach is tested in practice on industrial use cases, and wider communities are engaged to share experience, scrutinise the approach, and learn best practices from one another.
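The methodology itself is documented in the STAR deliverables; purely as a hypothetical sketch of the principle that no factor category should be averaged away, the following shows one way scores across the three viewpoints could be combined so that the weakest category remains visible. All criteria names and numbers are invented for illustration.

```python
# Hypothetical sketch only -- not the STAR methodology: scoring a candidate
# system across all three factor categories, where a weak score in any one
# category caps the overall result rather than being averaged away.
from statistics import mean

scores = {
    "technology":    {"model_accuracy": 0.9, "robustness": 0.8},
    "operations":    {"throughput": 0.85, "downtime_reduction": 0.7},
    "human_factors": {"trust": 0.4, "workload": 0.6, "ethics_review": 0.5},
}

category_scores = {cat: mean(vals.values()) for cat, vals in scores.items()}

# A plain average would hide the weak human-factors score; taking the
# minimum makes the weakest category the binding constraint.
averaged = mean(category_scores.values())
binding = min(category_scores.values())

print(category_scores)
print(f"averaged: {averaged:.2f}, binding constraint: {binding:.2f}")
```

The design choice here mirrors the argument in the text: a compensatory average lets strong technology scores mask weak human-factor scores, whereas a non-compensatory rule keeps all three viewpoints on an equal footing.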
More information on this topic was presented during the STAR Interactive AI Co-Creation Workshop on How to Enable Safe, Secure, and Ethical AI in Manufacturing held at the Innovation Cluster Drachten (Philips site) with the additional support of the AI Hub of North Netherlands.
By Christos Emmanouilidis / University of Groningen