Why the conventional deep learning model is broken

The conventional deep learning model is a supervised model. It takes months to develop and train before it is ready for the production line. Here, Karina Odinaev, co-founder and CEO of Cortica and co-founder of artificial intelligence start-up Lean AI, explains why the conventional deep learning model is broken and what the alternatives are.

The conventional deep learning model is supervised. The model must be shown hundreds or thousands of pre-tagged defect images, teaching it what constitutes a defect. The process requires significant human involvement, both from the quality manager, who has to tag the defects, and from an AI expert, who has to tune the architecture and hyperparameters.
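
To make that concrete, below is a minimal sketch of what the conventional supervised route typically looks like in code, assuming a PyTorch/torchvision setup. The dataset path, network choice and hyperparameters are placeholder assumptions, not a description of any particular vendor's pipeline. The key point is that every image in the training folder has to be tagged by a person before this script can even start.

```python
# Minimal sketch of the conventional supervised approach, assuming a
# PyTorch/torchvision setup and a folder of pre-tagged images arranged as
# defect_dataset/train/ok/*.png and defect_dataset/train/defect/*.png
# (hypothetical paths, not any vendor's actual layout).
import torch
from torch import nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

# Every image here has already been labelled by a quality manager.
train_set = datasets.ImageFolder("defect_dataset/train", transform=transform)
train_loader = DataLoader(train_set, batch_size=32, shuffle=True)

model = models.resnet18(weights=None)
model.fc = nn.Linear(model.fc.in_features, 2)  # two classes: ok / defect

# Hyperparameters such as the learning rate and epoch count need expert tuning.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

for epoch in range(20):
    for images, labels in train_loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```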

This journey is not easy and can take months. The process takes thousands of images and a lot of time – typically two months for each camera and for each product type, although this can vary significantly depending on the task. You might hear bold marketing claims about the need for fewer and fewer images, but in practice you will often find that the model does not work as intended and more images and feedback are required. In many instances, the quality manager will have to manually create production-like defects by force for training purposes. Given that these artificial defects do not necessarily represent real-world defects, it is no surprise that this approach often leads to problems further down the line.

After weeks or even months of work training the model with pre-tagged data sets, the outcome is still uncertain. The system is a black box: when it fails, you cannot see why. Another common challenge in production is process variation. The model is required to adapt to these changes, and without this capacity for online learning you will soon see performance degrade.

Fully unsupervised models

The opposite of a supervised model is a fully unsupervised model. Some systems rely on part statistics to understand what is acceptable and what constitutes a defect. There are many challenges with such an approach, including production artifacts that are not defects, differing sensitivities to defects in different areas, and the fact that the definition of a defect is dynamic.
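
As an illustration of why pure part statistics struggle, here is a minimal sketch of such an approach, assuming aligned greyscale images of the same part held as NumPy arrays; the function names and thresholds are hypothetical, not any vendor's method. Anything that deviates from the learned statistics gets flagged, whether it is a genuine defect or a harmless production artifact, and the flagging is equally sensitive everywhere on the part.

```python
# Minimal sketch of a purely statistical ("part statistics") approach.
# Assumes aligned greyscale images of the same part as (H, W) NumPy arrays.
import numpy as np

def fit_part_statistics(reference_images: np.ndarray):
    """reference_images: (N, H, W) stack of images of presumably good parts."""
    mean = reference_images.mean(axis=0)
    std = reference_images.std(axis=0) + 1e-6  # avoid division by zero
    return mean, std

def anomaly_score(image: np.ndarray, mean: np.ndarray, std: np.ndarray) -> float:
    """Fraction of pixels deviating more than 4 standard deviations from normal."""
    z = np.abs(image - mean) / std
    return float((z > 4.0).mean())

def is_flagged(image, mean, std, threshold=0.01):
    # Anything above the threshold is flagged, whether it is a real defect or
    # a harmless production artifact; the statistics alone cannot tell the difference.
    return anomaly_score(image, mean, std) > threshold
```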

Ideally, a model designed for defect detection should capture the knowledge and understanding of the quality manager. They know their product better than anyone else, and their input and feedback can mitigate many of the problems described above. The optimal solution is therefore a model that sits closer to the unsupervised end of the spectrum, but without the drawbacks of a fully unsupervised system.

Our unsupervised system is designed with this goal in mind. Rather than having to tag large amounts of data yourself, you simply feed the model untagged data and it learns for itself, unsupervised, what a defective product looks like. There is no getting away from the need to feed it a lot of images, but the process is automated and therefore quicker and easier.

An unsupervised model can automate the process of building the model because its algorithms allow it to stream untagged images and work out for itself what possible defects look like. However, once it identifies outliers or potential defects, you need someone with knowledge of the product to provide feedback and allow the model to continually optimize. With this approach you leverage the knowledge of the quality manager, but you don’t wear them out by requiring them to label thousands of images.
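
The sketch below illustrates the kind of feedback loop described above. It is not Cortica's actual algorithm: the unsupervised scorer and the quality manager's feedback function are stand-ins, and the threshold update rule is a deliberately simple placeholder. The point is that only the flagged outliers ever reach a human, and their feedback keeps adjusting the model's notion of a defect.

```python
# Minimal sketch of a human-in-the-loop inspection flow: the model flags
# outliers on its own and only those images go to the quality manager.
# Illustrative only; score_anomaly and ask_quality_manager are stand-ins.
from typing import Callable, Iterable, List, Tuple

def inspection_loop(
    image_stream: Iterable,                         # untagged images from the line
    score_anomaly: Callable[[object], float],       # learned, unsupervised scorer
    ask_quality_manager: Callable[[object], bool],  # True if a real defect
    threshold: float = 0.8,
) -> List[Tuple[object, bool]]:
    feedback: List[Tuple[object, bool]] = []
    for image in image_stream:
        score = score_anomaly(image)
        if score < threshold:
            continue  # looks normal: no human effort spent on it
        # Only the outliers reach the quality manager.
        is_defect = ask_quality_manager(image)
        feedback.append((image, is_defect))
        # Feedback nudges the decision boundary so the model keeps improving.
        if not is_defect:
            threshold = min(1.0, threshold + 0.01)  # flagged a non-defect: be stricter
        else:
            threshold = max(0.5, threshold - 0.01)  # confirmed defect: stay sensitive
    return feedback
```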

How long does this process take? This is where the big return on investment lies. Compared to the conventional model, which takes months to be ready to deploy to the production line, the unsupervised model can deliver a workable solution in a few weeks or less. The model itself can do the learning on the production line, saving you time and hassle. And with the input of the quality manager, you enjoy the benefits of automation without the problems encountered with fully unsupervised systems, which have so far failed to deliver a workable solution.

The best of both worlds is an AI solution that allows the quality manager to retain control over what the AI system learns, but avoids months of tedious tagging work. Our unsupervised system is designed to deliver this vision, leveraging and integrating the quality control knowledge you already have, while automating the laborious work that the supervised model requires.

Lean AI uses Cortica’s patented machine-learning algorithms to deliver visual inspection software for the toughest use cases in industry. To find out more, visit lean-ai-tech.com

 
