Simple rule-based algorithms have been used in the visual inspection market for many years, but as their limits have become more apparent, the need for more sophisticated software has grown.

Three Eras of Machine Learning ~ a New Paradigm for Quality Inspection

Miron Shtiglitz, Director of Product Management | Lean AI

Here, Miron Shtiglitz, director of product management at visual inspection company Lean AI, describes the first two eras of machine learning in visual inspection and argues that we are now entering a third, characterised by varying levels of semi-supervised deep learning.


Rule-based systems

Many of the most common solutions available in the visual inspection market can be characterised as rule-based. They run a set of algorithms whose parameters are configured by an expert. To give a simple example, the rule might be to count the black pixels in an image and, if the count exceeds a set threshold, flag the image as defective.
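The black-pixel rule described above can be sketched in a few lines. This is a minimal illustration, not any vendor's actual algorithm; the intensity cut-off and pixel threshold are invented values that, in practice, an expert would tune per application.

```python
import numpy as np

# Minimal sketch of a rule-based check, assuming a greyscale image where
# dark pixels indicate a defect. Both threshold values are illustrative
# assumptions that an expert would tune for the specific application.
BLACK_LEVEL = 30        # pixel intensities at or below this count as "black"
MAX_BLACK_PIXELS = 500  # flag the part if it has more black pixels than this

def is_defective(image: np.ndarray) -> bool:
    """Flag an image as defective if it contains too many black pixels."""
    black_count = int(np.sum(image <= BLACK_LEVEL))
    return black_count > MAX_BLACK_PIXELS
```

The fragility the article goes on to describe is visible even here: if the lighting dims and every pixel darkens slightly, the black-pixel count shifts and both thresholds must be re-tuned by hand.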

Solutions like this have been in use for at least 25 years now and have proved very effective at simple tasks. However, for more complex problems, such as those routinely encountered in fields like surface visual inspection, rule-based solutions are inadequate. 

Relatively small changes, such as printer ink fading, lighting conditions shifting or even a simple change of supplier material, would seriously impact the efficacy of rule-based systems. Quality managers would need to call out the service team to update the parameters again and again. There was a clear need for a less rigid, more sophisticated technology that could tolerate such variation, and so we entered the era of deep learning.


The first deep learning era

Deep learning has now become the leading approach to quality inspection. Deep learning solutions have proven able to solve more complex problems that were hard to define with rules, such as detecting scratches or dents on poorly defined surfaces. Given many examples of defects, a deep learning model can generalise the problem and detect defects on new parts by applying this more general understanding. But there was a problem.

The process of providing the model with these images for training must be done manually. It is a lengthy and difficult process: during training, a large number of images needs to be reviewed. For example, if the model fails to accurately detect a crack, you need to add more samples to your data until the model understands what a crack is. Furthermore, simply providing the images is not enough; you also need to add some mark-up to each image. You might need to draw a bounding box around the defect, or even perform segmentation, tracing the exact line of the crack.

It is important to bear in mind that the model can only be trained with images of defects. The end-user is required to review thousands of images, identify the ones with defects, then carry out the correct mark-up before feeding them to the model. For most deep learning applications, you need hundreds of examples, if not thousands, just to get started. If you are manufacturing 100,000 parts a day, and you know that two per cent of the parts you produce are defective, that is a lot of images to review.
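A quick back-of-the-envelope calculation makes the scale of this burden concrete. The daily volume and defect rate are the article's own figures; the target of 1,000 labelled defect examples is an assumption within the "hundreds to thousands" range mentioned above.

```python
# Back-of-the-envelope labelling burden, using the article's figures
# (100,000 parts/day, 2% defect rate) and an assumed target of 1,000
# labelled defect examples to get a model started.
parts_per_day = 100_000
defect_rate = 0.02
target_defects = 1_000  # assumption: within the "hundreds to thousands" range

defects_per_day = int(parts_per_day * defect_rate)    # 2,000 defects a day
images_to_review = int(target_defects / defect_rate)  # 50,000 images to find them
print(defects_per_day, images_to_review)
```

At a two per cent defect rate, collecting 1,000 defect examples means reviewing roughly 50,000 images, each candidate defect then needing manual mark-up.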

One approach to avoiding this headache is to feed the model man-made images of defects. In other words, the customer manually creates defects and feeds images of them to the model. Some companies have even sought to develop software that generates images of potential defects, but both approaches run into the same problem: artificially created defects simply cannot accurately represent the defects that occur naturally in real-world production.


Unsupervised learning: a new era?

We are now standing at the beginning of a new, third era in quality inspection. Technological solutions aimed at overcoming the limitations of previous deep learning solutions are generally referred to as unsupervised or semi-supervised systems. A key goal is to automate the process of model building described above.

Whereas traditional deep learning solutions require examples of defects for training, a semi-supervised model can be fed images of a sample of non-defective parts – “OK parts”. The initial model, while not perfect, will provide a basic understanding of what an OK part looks like. It will use this generalised understanding to flag suspected defects or outliers and, with feedback from the customer, it will continually update and learn.
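The core idea of learning only from OK parts can be sketched with a deliberately simple stand-in model. Real systems learn deep feature representations; here a per-pixel mean and standard deviation over aligned OK images plays that role purely for illustration, and the outlier threshold is an assumed value.

```python
import numpy as np

# Minimal sketch of the semi-supervised idea: learn what "OK" looks like
# from good parts only, then flag anything that deviates too far. A real
# system would use learned deep features; this per-pixel statistical model
# is an illustrative assumption.
class OkPartModel:
    def fit(self, ok_images: np.ndarray) -> "OkPartModel":
        """ok_images: stack of aligned greyscale OK images, shape (n, h, w)."""
        self.mean = ok_images.mean(axis=0)
        self.std = ok_images.std(axis=0) + 1e-6  # avoid division by zero
        return self

    def anomaly_score(self, image: np.ndarray) -> float:
        """Mean absolute z-score of the image against the learned OK model."""
        return float(np.abs((image - self.mean) / self.std).mean())

    def is_outlier(self, image: np.ndarray, threshold: float = 3.0) -> bool:
        # threshold is an assumed starting point; customer feedback on
        # flagged images is what tunes it in practice
        return self.anomaly_score(image) > threshold
```

Note that no defect examples are needed to fit the model: a part with a dark blemish scores high simply because it deviates from the learned OK appearance, which is exactly the behaviour the article describes.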

Automating the model-building process in this way is much easier for the end-user, but the biggest return on investment comes from the reduction in the time it takes to reach a working model.

In this new era, the process of model building can take place on the production line. Previously, data categorisation would require one person working on their own for at least a week before returning to the production line with a model. After deploying the model, they would often discover it did not work as intended, requiring further training and interruptions to production. With unsupervised or semi-supervised learning, the initial model training can take place on the production line and be complete in just 24 hours, ready to deploy.

A semi-supervised solution is arguably superior to a fully supervised system, as the outcome is a system that better understands the product. Until now, many of the people training the model were experts in data and AI but lacked a good understanding of the product. In this new era, semi-supervised systems will leverage the knowledge of production staff, and it will be their feedback that optimises the model.

A final advantage for those willing to embrace this new era is the capacity to differentiate between types of defects. Previously, the only goal was detecting a defect, and classification was less important. However, more advanced deep learning solutions can cluster similar images together and build an understanding of distinct defect types. The data gathered from this can then help support both preventive and predictive maintenance.
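The clustering step can be illustrated with plain k-means over defect feature vectors. The feature representation and the number of clusters are assumptions for the sketch; production systems would cluster learned deep features rather than hand-picked ones.

```python
import numpy as np

# Illustrative sketch of defect-type discovery: group defect images by
# their feature vectors using plain k-means. The features and k are
# assumptions; real systems would cluster learned deep features.
def kmeans(features: np.ndarray, k: int, iters: int = 20, seed: int = 0):
    """Plain k-means. features: (n, d) float array. Returns (centroids, labels)."""
    rng = np.random.default_rng(seed)
    # initialise centroids from k randomly chosen feature vectors
    centroids = features[rng.choice(len(features), size=k, replace=False)]
    labels = np.zeros(len(features), dtype=int)
    for _ in range(iters):
        # assign each feature vector to its nearest centroid
        dists = np.linalg.norm(features[:, None] - centroids[None], axis=2)
        labels = dists.argmin(axis=1)
        # move each centroid to the mean of its assigned points
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = features[labels == j].mean(axis=0)
    return centroids, labels
```

Once defects fall into stable clusters, counting how often each cluster appears over time is what feeds the preventive and predictive maintenance use case the article mentions.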

In any field of technological progress, there come moments when the changes are not minor adjustments at the margins but amount to a paradigm shift. Thanks to breakthroughs in AI, we are entering an era where problems previously too complex for deep learning solutions are becoming solvable, and the difficult task of building a workable model is quicker and easier thanks to automation.

The content & opinions in this article are the author’s and do not necessarily represent the views of ManufacturingTomorrow

