Manufacturing AUTOMATION

The gift of sight: A four-step approach to setting up a machine vision system for robot-based manufacturing

November 17, 2011
By David Berry

Increasingly, cameras and image processing technology – known collectively as machine vision – are being incorporated into robotic manufacturing systems. This approach provides added flexibility, so that the robot can locate parts intelligently without mechanical fixturing.

In such systems, the robot uses a camera and digital image processing to recognize products and locate them precisely, for example on the conveyor exiting an injection moulding machine. The machine vision system enables the robot to pick up a product and manipulate it, whether to apply a label or to assemble it.

Let’s use the example of a recent installation of a 2D guidance system for a four-axis SCARA robot at Sistema Plastics, a New Zealand-based manufacturer. In this application, the robot is required to pick up parts one at a time from an indexing conveyor, while the parts are stationary. The camera and lighting are installed in a fixed location above the conveyor, and images of individual parts are taken while the robot arm is positioned out of the way. In this system, a smart camera is used; however, the same principles apply to PC-based machine vision systems.

There are many things to consider when setting up and using a vision-guided robot. Below is a four-step approach – the same used in the Sistema example – to setting up a machine vision system for robot-based manufacturing.


STEP 1: Determine camera resolution and lighting requirements

Camera resolution. The required camera resolution is determined mainly by the field of view (the area the camera must view to see the features used to locate the part) and precision (how accurately the robot must be positioned to pick up the part). Larger fields of view and more precise positioning require higher camera resolution. Although higher resolution usually means more expensive hardware and longer image processing times, it is always better to have more resolution than you think you need.

In the Sistema example, a standard 640 by 480 pixel camera, which is sufficient to locate the product on a 300-mm-wide conveyor, provides a nominal resolution of around 0.5 mm per pixel. Depending on the positional variation the robot gripper can tolerate, an accuracy of around 0.5 mm in the part location can be acceptable.
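The resolution arithmetic above is simple enough to sketch in a few lines of Python (the helper name is ours, for illustration only, not from any particular vision package):

```python
# Nominal spatial resolution: field-of-view width divided by the number
# of sensor pixels spanning that width.
def mm_per_pixel(fov_width_mm, pixels):
    return fov_width_mm / pixels

# A 300-mm-wide conveyor imaged across 640 pixels gives roughly 0.47 mm
# per pixel, consistent with the "around 0.5 mm" figure quoted above.
print(round(mm_per_pixel(300, 640), 2))  # 0.47
```

Note that this is the per-pixel resolution only; the achievable positional accuracy also depends on lens quality, calibration and the sub-pixel interpolation of the locator tool.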

System lighting. The importance of setting up effective lighting cannot be overstated. Most machine vision systems require a dedicated light source, which should minimize variations in images from part to part, and preferably accentuate product features such as edges or specific markings that the camera is trying to identify.

In the Sistema system, diffuse flat panel LED lighting was used to provide even illumination across the conveyor, with minimal bright spots from the highly reflective plastic parts. A hood was also installed to counter the variable effects of the roof skylights and nearby high-bay lights.

STEP 2: Configure the system

You must configure: image processing software; robot control software; communications between the camera, host PC and the robot; and the user interface (UI). Using a development framework, which provides application tools and supports a range of hardware devices, can substantially speed up the configuration process.

STEP 3: Calibrate the system

After the system has been configured, it must be calibrated to provide accurate results.

Camera calibration. The purpose of calibrating the camera is to remove perspective and radial distortion in the image introduced by the orientation of the camera and the properties of the lens. Calibration also provides more useful measurements in real-world units such as millimetres, rather than pixels.

In this procedure, a special target, such as a checkerboard, is used for the camera calibration process. The checkerboard is placed on the conveyor, an image is taken, and a calibration tool in the image processing software computes the transformation between the positions of the checkerboard square intersections in the image and their known real-world spacing. The calibration results are stored in the software, so the same calibration is applied each time a subsequent image is taken. The calibration returns either an unwarped image, if it is applied to every pixel, or a correction for individual point locations, if it is applied only to points of interest such as a product location.
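As an illustration of what such a calibration tool does internally, the sketch below fits a planar mapping (a homography) between image pixel co-ordinates and known millimetre positions, using a least-squares direct linear transform in NumPy. This is a simplified stand-in for the smart camera's built-in tool: it handles the perspective part of the correction only, and radial lens distortion is omitted for brevity.

```python
import numpy as np

def fit_homography(img_pts, world_pts):
    """Least-squares planar homography mapping image pixels (x, y) to
    world millimetres (X, Y), via the direct linear transform."""
    A = []
    for (x, y), (X, Y) in zip(img_pts, world_pts):
        # Each correspondence contributes two rows of the system A h = 0.
        A.append([x, y, 1, 0, 0, 0, -X * x, -X * y, -X])
        A.append([0, 0, 0, x, y, 1, -Y * x, -Y * y, -Y])
    # The homography is the null vector of A: last row of V from the SVD.
    _, _, vt = np.linalg.svd(np.asarray(A, float))
    return vt[-1].reshape(3, 3)

def image_to_world(H, x, y):
    """Apply the stored calibration to a single point of interest."""
    X, Y, w = H @ (x, y, 1.0)
    return X / w, Y / w
```

In practice the image points would come from detecting the checkerboard intersections, and at least four non-collinear correspondences are needed for a unique solution.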

Camera-to-robot co-ordinate calibration. For the robot to use co-ordinates determined by the camera, the relationship between the camera and robot co-ordinate systems must be established. This can be done by placing a target, such as a printed crosshair, on the conveyor; using the camera calibration data and a pattern locator or line-finding tool to extract the co-ordinates of its centre point in the camera co-ordinate system; and jogging the robot so that the robot tool centre point sits directly above the centre of the crosshair. The co-ordinates reported by the camera and by the robot controller are then recorded. These steps are repeated several times for different target locations, and the sets of co-ordinate pairs are fed into a calibration tool, which computes the transform from one co-ordinate system to the other. This transform is applied each time an image of a part is taken, to return the part's position in the co-ordinate system used by the robot.
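The final two steps, computing the transform from the recorded co-ordinate pairs and then applying it to camera measurements, can be sketched as a least-squares fit. This is a NumPy sketch of the general idea; the actual calibration tool in the Sistema system may use a different formulation.

```python
import numpy as np

def fit_camera_to_robot(cam_pts, robot_pts):
    """Least-squares affine transform (rotation, scale and translation)
    taking camera co-ordinates to robot co-ordinates, from the pairs
    recorded while jogging the robot over the crosshair target."""
    cam = np.asarray(cam_pts, float)
    rob = np.asarray(robot_pts, float)
    A = np.hstack([cam, np.ones((len(cam), 1))])  # rows of [x, y, 1]
    M, *_ = np.linalg.lstsq(A, rob, rcond=None)   # solve A @ M ~= rob
    return M                                      # 3 x 2 matrix

def cam_to_robot(M, x, y):
    """Convert one camera-measured part position into robot co-ordinates."""
    return tuple(np.array([x, y, 1.0]) @ M)
```

Using more target locations than the minimum of three averages out small measurement errors, which is why the jog-and-record steps are repeated several times.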

Compensation for variations in product height. Calibrating a 2D camera system with a checkerboard gives dimensionally correct values for the robot only when the features used to locate the part lie in the same plane the checkerboard occupied during calibration. If the checkerboard was placed on the conveyor surface but the locating feature sits above that surface, an offset error occurs. The error can be avoided by performing the checkerboard calibration with the checkerboard raised to the same height above the conveyor as the surface from which the image processing extracts the locating features. If the production line handles several products of varying heights, multiple calibrations are required, with the appropriate calibration selected for each product when it is manufactured.
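The size of this offset error can be estimated with simple pinhole-camera geometry: a feature sitting a height h above the calibration plane, viewed by a camera mounted a height H above that plane, appears shifted outward from the point directly below the lens. A back-of-the-envelope Python check (all values illustrative, not from the Sistema installation):

```python
def parallax_offset_mm(feature_height_mm, camera_height_mm, radial_dist_mm):
    """Apparent outward shift of a feature lying above the plane on which
    the checkerboard was calibrated (pinhole-camera approximation)."""
    h, H, r = feature_height_mm, camera_height_mm, radial_dist_mm
    return r * h / (H - h)

# A locating feature 50 mm above the conveyor, camera 1 m up, feature
# 150 mm off the optical axis: the 2D calibration would misplace it by
# roughly 7.9 mm -- far too much when 0.5-mm accuracy is the target.
print(round(parallax_offset_mm(50, 1000, 150), 1))  # 7.9
```

The error vanishes for features directly below the lens and grows linearly toward the edge of the field of view, which is why calibrating at the correct height matters most for wide conveyors.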

Setting up the part locator. There are many image processing techniques for locating features on a part. The most commonly used ones are pattern locators, line finders and blob tools. Usually, an image processing algorithm will incorporate a number of these tools. In the Sistema installation, a pattern locator tool alone, when combined with the camera and camera-to-robot calibrations, often provides a sufficiently consistent and accurate result.
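The Sistema system used a commercial pattern locator; the toy sketch below illustrates only the underlying idea of pattern location, an exhaustive template match scored by sum of squared differences, which commercial tools replace with far faster and more robust geometric algorithms.

```python
import numpy as np

def locate(image, template):
    """Brute-force pattern locator: slide the template over a greyscale
    image and return the (row, col) of the best-matching position by
    sum of squared differences (lower is better)."""
    H, W = image.shape
    h, w = template.shape
    best_score, best_pos = np.inf, (0, 0)
    for r in range(H - h + 1):
        for c in range(W - w + 1):
            score = np.sum((image[r:r + h, c:c + w] - template) ** 2)
            if score < best_score:
                best_score, best_pos = score, (r, c)
    return best_pos
```

Real locator tools also report orientation and a match score, and degrade gracefully under partial occlusion and lighting changes, which this sketch does not.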

STEP 4: Operate and maintain the machine vision system

Setting up a reliable machine vision system doesn’t end when the hardware installation, software configuration and system testing are complete. Operator training is required. In addition, the system might need to manufacture new products from time to time. Setting up a UI that is easy to use and understand will enable faster, more efficient product changes, operator training and system calibration. An effective UI will also provide diagnostics and record data and images, which enhances the understanding of system performance.

CASE IN POINT. The Sistema solution includes these major components: Adept Cobra s600 four-axis SCARA robot; Cognex InSight 5100 smart camera, with Patmax locator tool; Smart Vision Lights DLP series flat dome light; and ControlVision VisionServer V6.0 development framework, which enabled the development of the machine vision application.

By following this four-step approach, the Sistema solution was successfully implemented, resulting in an automated system flexible enough to manufacture a wide variety of products. The system has also enabled the company to reduce production costs; reduce the space required for previously manual operations by automating product assembly; and improve the consistency and quality of the final product.

David Berry is the chief executive officer of ControlVision, a specialist supplier of machine vision and robotics components and solutions.


This article originally appeared in the November/December 2011 issue of Manufacturing AUTOMATION.
