Machine vision for factory automation

Adept Turnkey Solutions
By Steve Geraghty and Marc Fimeri*
Wednesday, 06 May, 2009


Many key tasks in the manufacture of products, including inspection, orientation, identification and assembly, require the use of visual techniques. Human vision and response, however, can be slow and become error-prone through boredom or fatigue. Replacing human inspection with machine vision can go far towards automating factory operation, but implementers need to carefully match machine vision options with application requirements.

Nothing fabricated beats human vision for versatility, but other human frailties limit productivity in a manufacturing environment. Factory automation that applies a machine vision system to such tasks can therefore bring many benefits, among them greater productivity and improved customer satisfaction through the consistent delivery of quality products.

Implementing a cost-effective machine vision system, however, is not a casual task. The selection of components and system programming must accurately reflect the application’s requirements. In addition, selection decisions need to consider more than the initial component costs.

Define the requirements

One of the first places to begin in selecting a machine vision system for a factory automation task is to closely define the physical and operational requirements.

What task does the system need to perform?

Different tasks may require different vision attributes. Inspection requires an ability to examine objects in detail and evaluate the image to make pass/fail decisions. Assembly, on the other hand, requires the ability to scan an image to locate reference marks (called fiducials) and then use those marks to determine placement and orientation of parts.

What are the key visual performance criteria?

The vision system’s camera and lens must perform at the right levels. Factors such as the smallest object or defect to detect, the measurement accuracy needed, the image size (field of view), speed of image capture and processing, and the need for colour all affect camera and lens choices.
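
As a rough illustration of how the smallest detectable feature and the field of view drive camera resolution, the short Python sketch below applies a common rule of thumb of a few pixels across the smallest feature; the function name and example numbers are purely illustrative and not tied to any particular camera.

    # Hypothetical sizing aid: estimate the pixel count needed along one
    # axis so the smallest feature of interest spans several pixels.
    def min_sensor_pixels(field_of_view_mm, smallest_feature_mm,
                          pixels_per_feature=3):
        return int(round(field_of_view_mm / smallest_feature_mm
                         * pixels_per_feature))

    # Example: a 200 mm field of view and a 0.2 mm defect suggest
    # roughly 3000 pixels of horizontal resolution.
    print(min_sensor_pixels(200, 0.2))   # -> 3000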

What are the environmental factors?

Some camera choices better suit stationary views, while others are better at handling linear object motion. Temperature, humidity, vibration and available space can also impose a need for specific system fabrication and assembly practices.

Who will program the system?

If the expertise to configure the system is not available in house, the user must depend on third-party support to make changes and correct errors in the vision system’s programming. If the system needs periodic changes, such as to inspect a new product line or to interface with new production equipment, the question of programming becomes particularly important.

What equipment must the vision system interface with?

A vision system that only activates a solenoid to eject failed parts from a production line is considerably easier to implement than one that also reports results to a quality control network or that controls the operation of production equipment based on inspection results. Similarly, a system that must inform and enable a human operator has different needs than one that interfaces only to other machines.

What information must the system provide?

Machine vision systems in factory automation seldom operate in a standalone mode. Instead, they must send information to other parts of the factory enterprise for a variety of purposes. Quality traceability, for instance, requires that the vision system either log or report inspection results to the enterprise. Highly controlled operations, such as pharmaceutical manufacturing, may also require the logging of access to and changes made in the vision system, sending such data to a secure drive on the company network.
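
As a minimal sketch of such logging, the Python fragment below appends each event as one JSON line to a file that could sit on a secure network drive; the path, record fields and helper name are assumptions made here for illustration, not a prescribed format.

    import json
    import time

    # Hypothetical audit/traceability log: one JSON record per line.
    def log_event(path, event_type, details):
        record = {"time": time.strftime("%Y-%m-%dT%H:%M:%S"),
                  "event": event_type,
                  "details": details}
        with open(path, "a", encoding="utf-8") as log_file:
            log_file.write(json.dumps(record) + "\n")

    # e.g. log_event("//qa-server/vision/audit.log", "inspection",
    #                {"part_id": "A1234", "passed": True})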

What are the operator requirements?

The extent to which human intervention into and control of the machine vision system is required can affect many system elements, particularly software. If operators are required to periodically change inspection criteria, such as the tolerances that will be accepted, the software must support such manipulation. Software may also need to provide security to prevent unauthorised access or parameter manipulation and include safeguards to avoid the introduction of erroneous values.
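
One minimal way to provide such safeguards, sketched below in Python under assumed parameter names and limits, is to accept operator changes only for known parameters, only within preset ranges and only from authorised users.

    # Hypothetical parameter guard; the parameter names and ranges are
    # invented for illustration.
    ALLOWED_RANGES = {
        "fill_level_tolerance_mm": (0.5, 5.0),
        "min_label_contrast": (0.1, 1.0),
    }

    def apply_parameter(settings, name, value, operator_authorised):
        if not operator_authorised:
            raise PermissionError("not authorised to change parameters")
        low, high = ALLOWED_RANGES[name]   # unknown names raise KeyError
        if not low <= value <= high:
            raise ValueError(f"{name}={value} outside {low}-{high}")
        settings[name] = value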

Building a machine vision system

While the answers to these operational and functional questions depend on the application, all machine vision systems for factory automation share some fundamental attributes and behaviours. They all need to image or inspect a scene or object, operating continuously at the fastest practical speed, and they all work through the following steps (a minimal code sketch of this loop appears after the list):

  • Position the object or camera so that the camera can view the object or scene
  • Capture an image with a camera
  • Process the image
  • Take action based on the image processing results
  • Communicate results to operators and other factory systems.
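
A minimal Python outline of that loop is sketched below; the camera, trigger, response and reporter objects, and the inspect() routine, are placeholders for whatever hardware interfaces and vision library a real system would use, not a real API.

    # Skeleton of the position-capture-process-act-report cycle.
    def run_inspection_loop(camera, trigger, response, reporter):
        while True:
            trigger.wait()                    # 1. part is in position
            image = camera.grab_frame()       # 2. capture an image
            result = inspect(image)           # 3. process the image
            if not result.passed:             # 4. act on the result
                response.reject_part()
            reporter.send(result)             # 5. communicate the result

    def inspect(image):
        """Application-specific image analysis goes here."""
        raise NotImplementedError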

Because of this commonality, examining a specific application, such as the inspection of objects on an assembly line, helps illustrate how developers can build a suitable machine vision system for their own needs.

The essential elements of an inspection system, shown in Figure 1, include a delivery vehicle, the vision system, the response system and sensors to trigger image capture and system response.


Figure 1: A machine vision inspection system needs a delivery vehicle as well as a means of taking action when parts fail.

A first step in developing an inspection system, then, is to determine how the parts are to be placed in front of the camera for imaging. In this example, the delivery vehicle is a conveyer belt that carries the objects past the vision system at a constant speed. Other possible delivery vehicles include a part feeder, a robotic arm, or humans placing an object in a station for offline inspections.

With the delivery system chosen, developers can determine the most appropriate method for triggering the vision system to capture the image and for triggering the response system to take action. In the case of a conveyer belt delivery vehicle, an appropriate choice might be an infrared or laser sensor pair that produces a signal when the object passes between them. With other delivery vehicles, sensors such as proximity switches could serve. Manual triggering by a human operator is also an option.
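
A simple debounced polling loop, sketched below, shows one way the trigger might be read; read_sensor() is a placeholder for whatever digital I/O call the chosen hardware actually provides.

    import time

    # Hypothetical trigger watcher: wait until the beam-break (or
    # proximity) sensor reports a part, with basic debouncing so
    # electrical noise does not cause spurious captures.
    def wait_for_part(read_sensor, poll_interval_s=0.001, settle_reads=3):
        consecutive = 0
        while consecutive < settle_reads:
            consecutive = consecutive + 1 if read_sensor() else 0
            time.sleep(poll_interval_s)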

The image capture, processing and evaluation of results are tasks for the vision system. This system determines if the object being inspected is within acceptable quality tolerances and directs the response system as to what actions to take. A separate vision controller such as the Dalsa IPD Vision Appliance may handle the image processing and evaluation, or those functions may be integrated into a smart camera.

The parameters to be evaluated as well as the object characteristics strongly affect the camera, optics and lighting needed in the vision system as well as the image processing software. The illustrations in Figure 2 show how some typical applications are handled. The reading of an identification number (2a) requires close-up imaging, front lighting and optical character recognition software. Inspection of packaged water aerators (2b) requires an entire package view and colour imaging. Inspecting the fill level in a detergent bottle (2c) requires back lighting and the ability to detect the position of the liquid’s surface.
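
As an illustration of the fill-level case, the sketch below uses OpenCV (an assumption; the article does not prescribe any particular library) to find the topmost dark row of a backlit bottle region; the region of interest and threshold are invented for the example.

    import cv2
    import numpy as np

    # With back lighting, liquid appears dark against a bright
    # background, so the fill level is roughly the first image row in
    # the bottle region that is mostly dark.
    def estimate_fill_row(image_path, roi, dark_threshold=60):
        x, y, w, h = roi
        grey = cv2.cvtColor(cv2.imread(image_path), cv2.COLOR_BGR2GRAY)
        region = grey[y:y + h, x:x + w]
        dark_rows = np.where((region < dark_threshold).mean(axis=1) > 0.5)[0]
        return y + int(dark_rows[0]) if dark_rows.size else None

A pass/fail decision would then compare the returned row against limits taken from a known-good reference image.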

With an appropriate vision system chosen and decision criteria determined, the last step is to define how the system is to respond to its decisions. In this example, the vision controller triggers a PLC to push rejected parts off the conveyer to another delivery system, allowing acceptable parts to continue undisturbed. The controller may also send decision results to the factory enterprise for quality control and traceability purposes.

Ensuring factory integration

The selection of a third-party supplier or system integrator is an essential step in ensuring the efficient integration of a vision system into factory automation. There are a number of factors to consider, both up-front and long-term, that will affect the effectiveness and total cost of ownership of the vision system.

One of the first factors to consider is resolving the sometimes opposing needs of the system design and the runtime operations. To control design costs the system should have a well-specified and bounded task. This makes development and programming simpler and allows optimisation of components such as camera, lighting and optics. To lower total cost of ownership, however, runtime considerations must also be given priority, and these can complicate the design task.

Figure 2: Machine vision applications such as reading identification numbers (a), determining package contents (b), and verifying bottle fill levels (c), all require different imaging, lighting and software.

One key runtime consideration that affects design is system portability: the ability to reconfigure the system for use in a different production line or to accommodate slight variations in environment or specifications. Designing for flexibility and extendibility of the system requires greater effort, but can give the users the ability to accommodate changing requirements without vendor assistance, saving time and costs.

A flexible vision system, for instance, will offer a number of options for communicating with third-party factory equipment, operators and the factory enterprise rather than be restricted to the initial requirements. Physical interfaces might include digital I/O for hardwiring to photo sensors, status indicator lights, PLCs and directional control devices. Serial ports allow communication with PLCs, motion controllers, robotic equipment and touch-screen displays. The system could offer Ethernet running TCP/IP for interfacing to the factory enterprise and other equipment, and also support industrial protocols such as Modbus and Profibus. Similarly, a flexible system design will use modular hardware so that system elements can be readily replaced, upgraded or modified.
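
To make the flexibility concrete, the sketch below shows one possible way to push an inspection record to a factory host over plain TCP/IP using Python's standard socket library; the host address, port and message layout are invented, and a real installation might use Modbus, Profibus or a vendor protocol instead.

    import json
    import socket

    # Hypothetical result-reporting call over Ethernet/TCP.
    def report_result(part_id, passed, host="192.168.0.50", port=5000):
        record = json.dumps({"part_id": part_id, "passed": passed}) + "\n"
        with socket.create_connection((host, port), timeout=2.0) as conn:
            conn.sendall(record.encode("utf-8"))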

Along with flexibility, vision system users should consider the system’s extendibility. For instance, it can be valuable to choose system software that includes access to a library of image processing and analysis functions to simplify future modifications and enhancements to the vision system’s task. Similarly, hardware extendibility such as the ability to add cameras or change camera resolution at a reasonable cost can be a valuable hedge against future requirement changes.

System maintenance is also an important concern, especially software updates. Ideally, the system vendor makes updates to the vision system software readily available for little or no cost.

Keeping in mind all these various performance, operational and future-oriented considerations can seem a daunting task, but choosing a solution that will satisfy both current and long-term needs will help minimise the total cost of ownership and maximise the system’s productive lifetime.

Image quality is the key

Image quality directly affects the accuracy and precision with which the system can perform its task, so image quality requirements require careful attention. Three main system elements affect image quality: the camera, the optics and the lighting.

The camera is the image capture element of a vision system. Its key parameters are the size of its sensor, its resolution in pixels, the type of sensor (area scan, line scan or TDI, that is, time delay and integration) and the sensor technology (CCD or CMOS). The speed of the sensor, colour capability and sensitivity to non-visible wavelengths may also be important in some applications.

In consumer devices the optics are considered part of the camera, but in vision systems they are a separate element with their own set of specifications. Key specifications for the optics include their working distance, their field of view, their resolution, their speed (light-gathering capability) and the size of camera sensor they support. Other factors that can affect image quality include lens materials and anti-reflective coatings.
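
The interplay of working distance, field of view and sensor size can be seen in the usual similar-triangles approximation for focal length, sketched below; the function name and figures are illustrative only and assume the working distance is much larger than the focal length.

    # Approximate lens selection: focal length scales the sensor width
    # by the ratio of working distance to field of view.
    def approx_focal_length_mm(sensor_width_mm, working_distance_mm,
                               fov_width_mm):
        return sensor_width_mm * working_distance_mm / fov_width_mm

    # Example: a 6.4 mm wide sensor viewing a 200 mm field of view from
    # 400 mm away calls for roughly a 12.8 mm lens.
    print(approx_focal_length_mm(6.4, 400, 200))   # -> 12.8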

Lighting is one of the most challenging aspects of a vision system. Incorrect or inadequate lighting of an object or scene can dramatically increase error rates in vision systems. Yet, the proper lighting for an application depends strongly on the task to be accomplished and the mechanical and optical characteristics of the objects to be imaged.

Fortunately, there are a few general guidelines that vision system developers can follow. One is to strive for consistent, uniform illumination; the vision system may perceive variations in illumination as variations in the objects themselves. Both spatial and temporal consistency are important.

A second general guideline is to light the scene in a way that amplifies the things to be detected or measured, such as markers or defects. Amplification for vision systems means increased contrast. Thus, if the system is to detect a fiducial marker on the surface of an object, front lighting that avoids shadows and reflections is appropriate. If the system is to look for defects in a glass panel, on the other hand, rear lighting looking through the glass may be more effective.
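
One practical, if simplistic, way to compare candidate lighting set-ups is to capture the same scene under each and score the contrast of the feature region, as in the OpenCV-based sketch below (the library choice and region handling are assumptions for illustration).

    import cv2
    import numpy as np

    # Michelson contrast of a region of interest: higher is better for
    # the features the system must detect.
    def region_contrast(image_path, roi):
        x, y, w, h = roi
        grey = cv2.cvtColor(cv2.imread(image_path), cv2.COLOR_BGR2GRAY)
        patch = grey[y:y + h, x:x + w].astype(np.float64)
        lo, hi = patch.min(), patch.max()
        return (hi - lo) / (hi + lo) if (hi + lo) else 0.0

The arrangement that scores higher on the features of interest, while remaining uniform elsewhere, is usually the better choice.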

Finally, the lighting should attenuate clutter and background effects. Image clutter makes identification and extraction of desired information more difficult and error-prone. Similarly, background effects such as reflections and shadows can prevent recognition of key features or trigger false recognitions. The simpler the image, the faster and more reliably image processing can extract the desired information.

*Steve Geraghty is Vice-President, US Operations at Dalsa.
Marc Fimeri is Managing Director of Adept Electronic Solutions.

Dalsa
www.dalsa.com

Adept Electronic Solutions
www.adept.net.au

 
