Automated object inspection: front-end components

By Christopher G Relf*
Thursday, 13 October, 2005


Although automated object inspection (AOI) has matured into a robust and affordable technology, one of the most common mistakes is making poor lens, camera and lighting decisions. Users often believe that the image processing software will correct these issues, but a little extra work at the front end can save a lot of time and heartache, and can mean the difference between an operational system and a white elephant. In this second instalment of our three-part AOI series, we look at some of the theory behind practical system lens selection.

System resolution

The resolution of a system refers to the smallest feature of the object that it can distinguish. A common example used to demonstrate resolution is a barcode (Figure 1). A barcode is made from black and white bars of varying width, and if the resolution of your acquisition system is too coarse to distinguish between the smallest black and white bars, useful data will be lost and you will not be able to decode the barcode. One apparent solution is to use the highest-resolution system components available, so that you never lose useful data, but there are several reasons not to choose this option: it requires more processing power and memory, is slower and can be much more expensive. The tricky part is to determine a suitable compromise, and therefore the most efficient and economical resolution.

As a general rule of thumb, the smallest object feature that you want to be able to resolve should measure at least two pixels on the camera's CCD. If so, you can be assured of never missing it due to its image falling on an inactive part of your detector (the sections between each of the CCD array elements). Although that sounds simple, you also need to consider the lens system that you are using to focus your image on the detector array. For example, a 2x zoom system will effectively increase the resolution of your image but decrease its field of view. To determine the sensor resolution required to map the smallest feature onto two pixels, you can use the following equation:

$$\text{sensor resolution} = 2 \times \frac{\text{field of view}}{\text{smallest feature}}$$

The sensor resolution is a dimensionless quantity, so use the same units for both the field of view and the smallest feature to achieve a useful answer. For example, when reading a barcode 35 mm long whose narrowest bar measures 0.75 mm, through a lens system of 1x magnification, the sensor resolution would need to be at least

$$2 \times \frac{35\ \text{mm}}{0.75\ \text{mm}} \approx 94\ \text{pixels}$$

(93.3, rounded up to the next whole pixel).

Although this application of the equation is useful for a barcode (as the information you are attempting to acquire is simply a one-dimensional line profile across the code), most detector arrays are two-dimensional, so this equation usually needs to be applied across both dimensions.
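As a quick sketch of this rule, the following Python fragment computes the minimum sensor resolution along each axis. The function name and the 25 mm vertical field of view are illustrative assumptions, not values from the barcode example:

    def required_sensor_resolution(field_of_view, smallest_feature, pixels_per_feature=2):
        """Minimum pixels needed along one sensor axis (dimensionless).

        field_of_view and smallest_feature must share the same units.
        """
        return pixels_per_feature * field_of_view / smallest_feature

    # The barcode example: 35 mm field of view, 0.75 mm narrowest bar.
    horizontal = required_sensor_resolution(35.0, 0.75)
    print(f"Minimum horizontal resolution: {horizontal:.1f} pixels")  # 93.3 -> use >= 94

    # A two-dimensional detector needs the same check along each axis;
    # the 25 mm vertical field of view here is purely hypothetical.
    vertical = required_sensor_resolution(25.0, 0.75)
    print(f"Minimum vertical resolution: {vertical:.1f} pixels")      # 66.7 -> use >= 67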

Depth of field (DOF)

The DOF of a lens system is the range of distances along the lens axis over which objects remain in focus - any object outside the DOF will be out of focus. The DOF is directly related to the blur diameter (Figure 2). Consider a beam (a group of adjacent rays) of light travelling from the object through the lens and focusing on the CCD array of a camera: as the beam passes through the lens it undergoes geometric aberration, so the beam defocuses. If the beam is defocused enough to cover an area of the CCD that is larger than the required resolution, the image will be out of focus. The same effect can arise from chromatic aberration, which is evident when different frequencies of light (eg, the colours of the visible spectrum) are refracted at different angles.
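A common first-order close-up approximation ties these quantities together as DOF ≈ 2Nc(m + 1)/m², where N is the lens f-number, c is the largest blur diameter the application can tolerate and m is the magnification. The Python fragment below is a minimal sketch under that approximation; all values shown are illustrative assumptions:

    def depth_of_field(f_number, blur_diameter, magnification):
        """First-order close-up approximation: DOF = 2*N*c*(m + 1) / m**2.

        blur_diameter is the largest acceptable blur circle (often one
        pixel pitch on the CCD); all lengths share the same units.
        """
        return 2.0 * f_number * blur_diameter * (magnification + 1) / magnification ** 2

    # Illustrative values: an f/8 lens, 0.01 mm acceptable blur, 0.5x magnification.
    print(f"Approximate DOF: {depth_of_field(8, 0.01, 0.5):.2f} mm")  # ~0.96 mm

A narrower aperture (larger f-number) or a coarser acceptable blur therefore extends the DOF, at the cost of light throughput or resolution respectively.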

Contrast (or modulation)

Contrast is the ability to resolve intensity differences, in a similar way that resolution refers to spatial differences. Contrast is therefore a dimensionless quantity derived from the brightest and darkest features of an image:

$$C = \frac{I_{\text{brightest}} - I_{\text{darkest}}}{I_{\text{brightest}} + I_{\text{darkest}}}$$

As this equation suggests, when the difference between $I_{\text{brightest}}$ and $I_{\text{darkest}}$ is large, the contrast tends toward 1 (unity), indicating that the image has a large contrast range. Conversely, when the image is 'flat' and the two intensities are similar, the contrast value approaches zero. In an example image whose greyscale intensities (represented as numbers in a two-dimensional array) all sit near the same mid-grey value, the contrast is low, whereas an image whose intensities span nearly the full range from black to white has a high contrast level. A high-contrast image often appears to be sharper than a lower-contrast image, even when the two are acquired at identical resolution.
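As a quick sketch of the calculation, the following Python fragment computes this contrast value for two hypothetical greyscale arrays; both arrays, and the use of NumPy, are illustrative assumptions:

    import numpy as np

    def michelson_contrast(image):
        """Contrast = (I_brightest - I_darkest) / (I_brightest + I_darkest)."""
        i_max = float(image.max())
        i_min = float(image.min())
        return (i_max - i_min) / (i_max + i_min)

    # Hypothetical intensities clustered around mid-grey: low contrast.
    low = np.array([[120, 125, 122], [118, 124, 121], [119, 123, 120]])

    # Hypothetical intensities spanning nearly black to white: high contrast.
    high = np.array([[10, 240, 15], [235, 12, 245], [8, 238, 20]])

    print(f"Low-contrast image:  {michelson_contrast(low):.2f}")   # ~0.03
    print(f"High-contrast image: {michelson_contrast(high):.2f}")  # ~0.94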

Perspective (parallax error)

Perspective errors occur when the object's surface is not perpendicular to the camera axis (Figure 3). The right-hand image is badly spatially distorted (skewed) due to the large angle between the object's surface and the camera axis. Perspective error occurs in just about every lens system in existence, unless the only object feature that you are interested in lies directly on the camera axis and is only a few pixels wide.

Software perspective calibration works by acquiring an image of an object with known features and using the differences between the known geometry and the acquired image to correct subsequently acquired images. Common software-based perspective calibration processes depend on grid distortion targets: glass or plastic plates with known patterns etched or printed on their surface. Commonly used fixed-frequency grid distortion targets are available from manufacturers including Edmund Optics (www.edmundoptics.com - search for 'calibration target') and consist of an array of dots of uniform size at known intervals. To perform a perspective calibration correctly, ensure that:

  • the displacement in the x and y directions is equal;
  • the dots cover the entire desired working area;
  • the radius of the dots is at least 5 pixels;
  • the centre-to-centre distance between dots is at least 20 pixels; and
  • the minimum distance between the edges of the dots is at least 15 pixels.

Using the same camera set-up shown in Figure 3, the perspective error across a fixed-frequency grid distortion target is quite apparent, as shown in Figure 4.
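As a language-neutral sketch of this kind of correction (not the specific calibration workflow described above), the following Python fragment uses OpenCV to compute a perspective transform from four hypothetical dot-centre correspondences and applies it to a subsequently acquired image; the coordinates and file names are assumptions:

    import cv2
    import numpy as np

    # Hypothetical dot-centre coordinates measured in the acquired (skewed)
    # image of a grid distortion target, and the uniformly spaced positions
    # they should occupy. Four correspondences define a perspective
    # transform; production systems use the full dot grid.
    measured = np.float32([[112, 95], [486, 80], [530, 410], [90, 430]])
    ideal    = np.float32([[100, 100], [500, 100], [500, 400], [100, 400]])

    matrix = cv2.getPerspectiveTransform(measured, ideal)

    # Apply the correction to each subsequently acquired image.
    image = cv2.imread("acquired.png")                         # hypothetical file
    corrected = cv2.warpPerspective(image, matrix, (600, 500))
    cv2.imwrite("corrected.png", corrected)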

Concluding our three-part AOI series, next month we'll look at how to select the right lighting hardware to ensure you get the most out of your system.

* Christopher G Relf is a senior project engineer at VI Engineering System Integrators (www.vieng.com). A keen software and hardware automation engineer, Christopher is a National Instruments Certified LabVIEW Architect and the author of Image Acquisition and Processing with LabVIEW (CRC Press), on which this article draws.
