PCL blog N40

Improving Visual Quality Inspections with Human-Supervised Machine Learning and Active Learning Techniques: The Philips Pilot of the STAR Project

In our previous article, Leveraging State-Of-The-Art AI Technologies Aiming To Increase Flexibility In Automated Quality Inspection Systems, we discussed a specific use-case within the Philips pilot of the STAR project: applying human-supervised machine learning (AI) to visual quality inspections in the production process. These inspections are critical to ensuring the delivery of high-quality products from the Philips factory in Drachten. However, due to the complexity of the inspection process (small anomalies, short cycle times, complex part handling, and a broad range of products to be inspected), as well as the costs associated with automating these inspections, many of them are still performed manually.

To overcome these challenges, the STAR project envisioned that AI models could reduce the need for manual inspection and improve production speed. At the same time, as AI models become more prevalent in industrial settings, it is imperative for their acceptance and adoption that they work well and can be trusted by their users. This is where explainable artificial intelligence (XAI) comes in: a set of processes and methods that allows human users to comprehend and trust the results and output created by machine learning algorithms. XAI is essential for humans to be able to trust the outcome of these very complex models; very often, even the data scientists who build them do not fully comprehend why a model makes a certain decision.

To make sure the decisions made by the model are accurate and in line with what a human quality inspector would have decided, the STAR consortium envisioned an AI model based on active learning technology. In active learning, the deployed visual inspection model is improved iteratively, using manually revised samples drawn from the streaming data where the model's confidence is lowest. In this revision process, the operator can be supported with defect hinting, using heatmaps to label the selected samples more efficiently and accurately. Various defect-hinting techniques are available for this purpose (such as Grad-CAM and similarity heatmaps), and each has its own effect on the manual revision process.
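The sampling step of this loop can be sketched with a simple least-confidence strategy: rank the streaming predictions by the model's top-class probability and route the least confident ones to the operator for revision. The function name and the toy probabilities below are illustrative, not taken from the STAR system:

```python
import numpy as np

def least_confidence_sampling(probs: np.ndarray, k: int) -> np.ndarray:
    """Return indices of the k samples whose top-class probability is lowest.

    probs: (n_samples, n_classes) array of predicted class probabilities.
    """
    confidence = probs.max(axis=1)      # model's confidence per sample
    return np.argsort(confidence)[:k]   # least confident first

# Toy batch of 4 inspection results, 2 classes (pass / defect)
probs = np.array([
    [0.99, 0.01],   # very confident -> handled automatically
    [0.55, 0.45],   # near the decision boundary -> send to operator
    [0.90, 0.10],
    [0.52, 0.48],   # near the decision boundary -> send to operator
])
picked = least_confidence_sampling(probs, k=2)  # indices 3 and 1
```

Once the operator has labelled the selected samples, they are added to the training set and the model is retrained, closing the active learning loop.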

Apart from their usefulness during data sampling, such explainable AI techniques can make AI systems more interpretable to humans by providing insight into a model's rationale behind a prediction. While still in the early stages of development, we have already demonstrated promising results and are confident that, by utilising such techniques, it is possible to automate certain aspects of the inspection process while maintaining high levels of quality control.
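To illustrate how a heatmap technique like Grad-CAM produces such insight: it weights each feature map of a convolutional layer by the average gradient of the prediction with respect to that map, sums the weighted maps, and keeps only the positive evidence. The following numpy sketch shows just that combination step; the toy tensors are illustrative, and a real system would pull the activations and gradients from the deployed inspection model:

```python
import numpy as np

def grad_cam(activations: np.ndarray, gradients: np.ndarray) -> np.ndarray:
    """Combine conv-layer activations and gradients into a Grad-CAM heatmap.

    activations, gradients: (channels, H, W) arrays from the target conv layer,
    with gradients taken w.r.t. the predicted class score.
    """
    weights = gradients.mean(axis=(1, 2))             # alpha_k: pooled gradient per channel
    cam = np.tensordot(weights, activations, axes=1)  # sum_k alpha_k * A^k -> (H, W)
    cam = np.maximum(cam, 0)                          # ReLU: keep positive evidence only
    if cam.max() > 0:
        cam = cam / cam.max()                         # normalise to [0, 1] for overlay
    return cam

# Toy example: 2 channels on a 3x3 feature grid; only channel 0 responds,
# strongest at the centre, and only channel 0 receives gradient.
activations = np.zeros((2, 3, 3))
activations[0, 1, 1] = 5.0
gradients = np.zeros((2, 3, 3))
gradients[0] = 1.0
heatmap = grad_cam(activations, gradients)  # peaks at the centre pixel
```

The resulting heatmap is upsampled to the input resolution and overlaid on the inspected part, highlighting the regions that drove the defect prediction for the operator.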