The recent developments in algorithms and sensor technologies make it possible to efficiently implement vision guided robotics tasks for manufacturers.

Vision Guided Robotics Advances

Contributed by | Axium

 

Vision guided robotics has been used in factories since the 1990s, but the technological developments of recent years have opened up many new possibilities. The enabling technological advances are:

  • Rapid increase in computing power;
  • Decreasing cost of computer memory;
  • Advances in high-level software libraries;
  • Advances in imaging hardware.

To illustrate this, the graphic below shows the evolution of processing time over the last 20 years.

Other examples can be cited. Blob analysis, a method used to detect regions of a digital image that differ in properties (such as brightness) from their surroundings, is now 4000 times faster than it was 20 years ago, thanks to increased computing power and advances in algorithms. Similarly, pattern matching is now about 1300 times faster than 20 years ago.
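To make the idea concrete, here is a minimal blob-analysis sketch in Python using OpenCV; the image file name and the threshold value are placeholders chosen for illustration, not details taken from the article.

    # Minimal blob-analysis sketch with OpenCV; "parts.png" and the
    # threshold of 128 are hypothetical, illustrative values.
    import cv2

    img = cv2.imread("parts.png", cv2.IMREAD_GRAYSCALE)

    # Separate foreground objects from the background with a fixed threshold.
    _, binary = cv2.threshold(img, 128, 255, cv2.THRESH_BINARY)

    # Label connected regions ("blobs") and compute their basic properties.
    num_labels, labels, stats, centroids = cv2.connectedComponentsWithStats(binary)

    # Report area and centroid for each blob (label 0 is the background).
    for label in range(1, num_labels):
        area = stats[label, cv2.CC_STAT_AREA]
        cx, cy = centroids[label]
        print(f"blob {label}: area={area}, centroid=({cx:.1f}, {cy:.1f})")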

On the software side, this enhanced computing power has helped developers create more robust and complex algorithms, opening the door to real-life possibilities for manufacturers.

In the 1990s, 2D pattern matching based on normalized grayscale correlation was available. It was used in applications such as alignment under controlled lighting, basic pick & place, and presence/absence checks. This approach was very robust to Gaussian noise and invariant to contrast changes, but it had limitations: no intrinsic support for rotation or scale changes, and poor robustness to partial occlusion, non-uniform lighting, or color changes in the objects. Those restrictions limited the possibilities of vision guided robotics applications.
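As an illustration of that 1990s-style approach, the following sketch performs normalized-correlation pattern matching with OpenCV. The image names are hypothetical, and the search covers translation only, which reflects exactly the rotation and scale limitation described above.

    # Sketch of 2D pattern matching by normalized correlation with OpenCV;
    # "scene.png" and "template.png" are placeholder file names.
    import cv2

    scene = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE)
    template = cv2.imread("template.png", cv2.IMREAD_GRAYSCALE)

    # Normalized correlation score at every template position in the scene.
    scores = cv2.matchTemplate(scene, template, cv2.TM_CCOEFF_NORMED)

    # Best match location and score; note there is no search over rotation
    # or scale, and occluded or poorly lit parts score badly.
    _, max_score, _, max_loc = cv2.minMaxLoc(scores)
    print(f"best match at {max_loc} with score {max_score:.2f}")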

Technological advances then allowed the development of better algorithms, such as geometric pattern matching, in the late 1990s and early 2000s. This approach is robust to partial occlusion, color changes, and non-uniform lighting. These features enabled applications like robust pick & place of randomly located parts, recognition of objects in uncontrolled environments, and alignment under harsh conditions with high variability. It therefore became possible to implement machine vision systems robust enough to deal with real-life factory-floor conditions.

In recent years, improvements in pattern matching and support for 3D data have enabled new applications such as random bin picking, 3D pose determination, and 3D inspection and quality control. Again, this progress in vision software opened the door to broader use of vision guided robotics in factories. This probably explains, at least partially, why new records for machine vision systems and components in North America were established in the last two years (read our blog post: New record high in 2014 for North American vision market).
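As a rough illustration of 3D pose determination, the sketch below aligns a model point cloud to a scanned scene with ICP. The use of the Open3D library, the file names, and the correspondence distance are assumptions made for the example; the article does not prescribe any particular toolkit.

    # Hedged sketch: estimate the 3D pose of a part by registering a model
    # point cloud to a scanned scene with ICP (Open3D assumed available).
    import open3d as o3d
    import numpy as np

    model = o3d.io.read_point_cloud("part_model.ply")   # reference model (placeholder)
    scene = o3d.io.read_point_cloud("scan.ply")         # acquired 3D scan (placeholder)

    # Refine the model's pose in the scene; a coarse initial guess is
    # normally needed, identity is used here only for brevity.
    result = o3d.pipelines.registration.registration_icp(
        model, scene,
        5.0,                # max correspondence distance (illustrative)
        np.eye(4),          # initial pose guess
        o3d.pipelines.registration.TransformationEstimationPointToPoint())

    # 4x4 rigid transform: the estimated pose of the part for the robot.
    print(result.transformation)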

SPG, Vision & Robotics, company member of Axium Group: WeldSight-3D Software

 

In Part 1, we illustrated the advances of vision guided robotics through the evolution of processing time and the development of algorithms and software for vision guided robotics applications. For this second and final part, we will focus on hardware developments. To do so, we will look mainly at three technologies:

  • sheet-of-light triangulation scanners;
  • structured light and stereo 3D cameras;
  • time-of-flight sensors.

Sheet-of-light triangulation scanners have been known for many years. This approach delivers fast and robust acquisition of accurate, dense point clouds. To generate the image, relative motion between the scanner and the part is necessary.


Sheet-of-light triangulation scanners

This approach is well suited to vision guided robotics applications where accuracy is needed, such as quality control of welding surfaces and accurate path adjustments for component assembly (read this article about welding accuracy with 3D view).

Recent advances in sheet-of-light triangulation scanners have opened many new possibilities for inline inspection and control applications requiring high 3D data density and high speed. The latest CMOS sensors reach scanning speeds of up to several thousand high-resolution 3D profiles per second!
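To give a feel for the principle, the simplified sketch below turns one camera frame of the projected laser line into a single height profile, and stacks the profiles acquired during the relative motion into a scan. The linear row-to-height calibration and all numeric values are assumptions for illustration only; a real scanner uses a full triangulation calibration.

    # Simplified sheet-of-light sketch: one frame -> one height profile.
    import numpy as np

    def profile_from_frame(frame, mm_per_pixel=0.05, baseline_row=400):
        """Return a height profile (mm per column) from one grayscale frame.

        For each image column, the row where the laser line is brightest is
        located; its offset from a reference row is converted to height
        using an assumed linear calibration.
        """
        laser_rows = np.argmax(frame, axis=0)              # brightest row per column
        return (baseline_row - laser_rows) * mm_per_pixel  # offset -> height in mm

    # A full 3D scan is the stack of profiles acquired while the part (or
    # the scanner) moves between frames; random frames stand in for real data.
    frames = [np.random.randint(0, 255, (480, 640), dtype=np.uint8) for _ in range(10)]
    scan = np.stack([profile_from_frame(f) for f in frames])   # shape: (steps, columns)
    print(scan.shape)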

A newer approach for vision guided robotics is based on structured light and stereo 3D cameras. Its main advantage over sheet-of-light triangulation is that it does not need any relative motion between the sensor and the part. This allows fast generation of 3D point clouds with sufficient accuracy for good scene understanding and robot control. The typical rate is 30 fps. This type of hardware is mainly used to perform robot guidance for pick and place or bin picking applications, like the robotic bin picking of rubber bales.
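For a concrete picture of the stereo case, here is a hedged sketch that computes dense depth from a rectified stereo pair using the classic Z = f·B/d relation; the OpenCV block matcher, image names, and calibration values are illustrative assumptions, not details from the article.

    # Depth from a calibrated, rectified stereo pair (illustrative values).
    import cv2
    import numpy as np

    left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
    right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

    # Disparity (in pixels) between the two views for each pixel.
    matcher = cv2.StereoBM_create(numDisparities=64, blockSize=15)
    disparity = matcher.compute(left, right).astype(np.float32) / 16.0

    # Depth from triangulation: Z = f * B / d  (focal length f in pixels,
    # baseline B in metres, disparity d in pixels).
    focal_px, baseline_m = 700.0, 0.12        # placeholder calibration values
    valid = disparity > 0
    depth = np.zeros_like(disparity)
    depth[valid] = focal_px * baseline_m / disparity[valid]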

More recently, time-of-flight sensors have received a lot of attention. These systems can also acquire a full depth-data frame without any relative motion. Their main advantage is that they can run at 100 fps or even more. Because this technology does not require a triangulation baseline like the two technologies mentioned above, very compact designs are possible. Time-of-flight sensors show little or no interference with neighboring sensors and offer good robustness to many kinds of object reflection properties. Their main drawback is their relatively low resolution (typically 160×120). Recently, some manufacturers announced that megapixel-resolution time-of-flight cameras should be available in 2015.
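The distance measurement of a continuous-wave time-of-flight sensor follows directly from the measured phase shift of the modulated light. The short sketch below shows the arithmetic; the modulation frequency and phase value are example numbers, not specifications of any particular sensor.

    # Continuous-wave time-of-flight: distance from phase shift (example values).
    import math

    C = 299_792_458.0          # speed of light, m/s
    f_mod = 20e6               # modulation frequency (assumed), Hz
    phase = math.pi / 2        # measured phase shift, radians (example)

    # Round-trip delay is phase / (2*pi*f_mod); distance is half the round trip.
    distance = (C * phase) / (4 * math.pi * f_mod)
    print(f"distance = {distance:.3f} m")

    # Unambiguous range at this modulation frequency.
    print(f"max unambiguous range = {C / (2 * f_mod):.1f} m")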

With these technologies, vision guided robotics can now add real value in applications like robotic random bin picking, pick and place of random objects without a model definition, and robot pick-to-pallet.

In conclusion, the latest advances in vision technologies have opened industrial robotics to various new possibilities that were not feasible with “blind” robots. The recent developments in algorithms and sensor technologies make it possible to efficiently implement vision guided robotics tasks for manufacturers. We are therefore optimistic that more and more projects will integrate machine vision and robots in the coming years.

 

 

About Axium

Axium specializes in robotic assembly and material handling. Axium designs and manufactures a complete range of automated solutions for robotic material handling (palletizing, depalletizing, case packing, and peripheral equipment) and for the transformation of plastic products. Axium solutions meet clients' needs for factory and manufacturing automation in most industries: food, dairy, consumer goods, beverages and breweries, tissue, folding carton, and plastics.

