Industrial robots guided by machine vision have the potential to revolutionize manufacturing processes, improving repeatability, cycle rate, reliability and safety on the plant floor, while reducing costs associated with labour and fixturing.
A lack of understanding and awareness of the technology’s value has, in the past, hindered adoption. However, as competitive pressures increase, hardware and software costs decrease, and technologies improve, manufacturers in many industries are exploring the possible implementation of vision-guided robotics (VGR) to improve productivity and competitiveness.
To help you decide if the technology is appropriate for your manufacturing environment, we’ve asked five experts in the vision-guided robotics arena to discuss the benefits, challenges, integration issues, major breakthroughs and appropriate applications for the technology in manufacturing.
The participants are Edward Roney, development manager, Product Development Division, Fanuc Robotics America, Inc.; Ken McLaughlin, director, flexible manufacturing, JMP Engineering; Gordon Deans, vice-president, business development, and general manager, Adept Canada; Bryan Boatner, product marketing manager, In-Sight vision sensors, Cognex; and Babak Habibi, president, Braintech. For more information on our panelists, see “The participants” below.
Ask anyone in the vision-guided robotics arena about the technology, and they can provide you with a laundry list of benefits.
“VGR is capable of effecting a major paradigm shift in manufacturing,” says Habibi. “Elimination of hard tooling and fixturing is one of the principal effects that can result in significantly lower capital and maintenance costs for manufacturing. Couple that with the fact that with vision guidance, robots can be deployed increasingly in places where robotic automation was not imaginable before, and you can see that this has a powerful effect on the landscape of manufacturing and the way we will lay out our plants of the future.”
Other major benefits, says Habibi, are reductions in labour costs, as previously manual applications become robotizable because of VGR, and increased asset utilization due to the ability to run multiple part styles down the same production line.
McLaughlin agrees that cost savings are a major plus. “Manufacturers can benefit from labour cost reduction through the automation of tasks that were previously not possible, and often ergonomically challenging.”
Even with the plethora of pluses, experts are quick to caution potential adopters about the many factors to consider for successful implementation, including lighting, location, training and integration challenges. But the panelists agree that successful implementation and integration start with proper planning.
Says Roney, “The first major challenge is the selection of the vision supplier. There can be some significant challenges facing an integrator or end-user depending on whether the vision product is already integrated as a standard product of the robot, or whether a general-purpose vision system supplied by a company other than the robot manufacturer is to be applied. In using a general-purpose vision system, several challenges immediately exist, [including] communication – how will the vision system and robot exchange information; precision calibration – the matching of the vision image to the robot co-ordinate systems; and robot math – making the vision data useful to the robot and using co-ordinate frames. These integration concerns can be large engineering tasks that can be eliminated if the end-user specifies the use of a vision system designed as a standard integrated product – an application-specific solution. With a standard product, all the issues listed above have been addressed and designed into the complete intelligent robot package, which then lowers costs, reduces risk and provides a more supportable automation solution.”
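The co-ordinate-frame math Roney describes – turning what the camera sees into positions the robot can move to – can be sketched for the planar case. This is an illustrative toy, not any vendor’s API; it assumes the camera-to-robot calibration has already been reduced to a pixel scale, a rotation and a translation.

```python
import math

def pixel_to_robot(px, py, scale, theta, tx, ty):
    """Map a vision detection (in pixels) into the robot's co-ordinate frame.

    Assumes a flat work surface and a calibration already expressed as a
    pixel scale (mm/pixel), a rotation theta (radians) and a translation
    (tx, ty) in millimetres. Real systems derive these values by imaging
    a calibration grid or dot target.
    """
    x_mm, y_mm = px * scale, py * scale          # pixels -> millimetres
    xr = x_mm * math.cos(theta) - y_mm * math.sin(theta) + tx
    yr = x_mm * math.sin(theta) + y_mm * math.cos(theta) + ty
    return xr, yr

# A part detected at pixel (400, 300), with 0.2 mm/pixel, the camera
# frame rotated 90 degrees from the robot frame, and the camera origin
# at (500, 250) mm in robot space:
x, y = pixel_to_robot(400, 300, 0.2, math.pi / 2, 500.0, 250.0)
```

An error anywhere in this chain – scale, rotation or offset – shifts every pick by the same systematic amount, which is why Roney flags precision calibration as its own engineering task.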
McLaughlin explains that often the biggest challenges are associated with the physical constraints of the system. For example, he says, “in vision-guided bin-picking applications, the vision system may be able to easily identify that a part is upside down or on its side, however, the robot gripper needs to be able to deal with this. This means the gripper must be designed to handle parts in multiple orientations. Another common scenario is if a part is leaning up against the side of the bin and the robot can’t pick it because the robot itself will collide with the side of the bin. [The] bottom line [is], in a vision-guidance application you need to consider the entire system – vision system, robot reach, robot gripper, multiple orientations of parts, etc. – in order to design a system that will be successful.”
According to Deans, before implementation, manufacturers must consider logistics, such as where the system will go and who will operate it. Financial issues, such as cost, also need to be reviewed, as do practical issues, such as upkeep, maintenance and training. And once you know that, control of the environment is the single largest factor to consider, he says, adding that lighting and the appearance of the parts are a major part of that.
“Lighting, of course, is subject to changes in ambient light, including overhead lights, sunlight through windows, reflections from adjacent equipment, as well as variations of structured lighting intensity over time,” explains Deans. “The appearance of the parts is subject to variations in manufacturing which may affect the colour or surface finish of the parts, as well as the amount of particulate mixed in with the parts.” He adds that conveyor tracking or an equivalent consideration of moving objects requires “tight timing and latency control between the robot controller, various sensors that detect conveyor movement and speed, and the object detection in the vision system.”
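The timing Deans describes can be made concrete with a toy calculation. The numbers are hypothetical, and the model is deliberately simplified: real conveyor-tracking controllers fuse encoder counts continuously rather than applying one dead-reckoned correction.

```python
def predicted_pick_x(x_at_capture, belt_speed, latency):
    """Estimate where a tracked part will be when the robot reaches it.

    x_at_capture -- part position along the belt at image capture (mm)
    belt_speed   -- conveyor speed reported by an encoder (mm/s)
    latency      -- image capture + processing + robot response time (s)
    """
    return x_at_capture + belt_speed * latency

# A part imaged at 120 mm on a 250 mm/s belt, with 80 ms of total
# latency, must be intercepted about 20 mm downstream:
x = predicted_pick_x(120.0, 250.0, 0.080)  # about 140 mm
```

The faster the belt, the more any jitter in that latency translates directly into pick error – hence Deans’ emphasis on tight timing control.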
Deans says that environments with airborne abrasives or chemicals are also a challenge, which can be overcome with the use of sealed enclosures and proper venting. Stamping or press machines that induce major vibrations are also an ongoing challenge that must be considered at the system design stage.
Boatner explains the challenges from both the 2D and 3D side. “It is important when evaluating a potential VGR application to determine whether you need 2D data (x, y, theta), 3D data (x, y, z, yaw, pitch, roll) or something in between. That being said, for 2D and 3D applications, one of the biggest challenges in VGR is that ambient light may play an unexpected role. Changing light conditions cause nearly every image of the part to vary slightly due to shadows that shift around on the part as its position with respect to the light source changes…A second challenge is setting up communications between the vision system and the robot…For 3D applications, in addition to the challenges stated above, the user often needs to calibrate two or more cameras to each other, which is complex and requires significant vision and robot expertise.”
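Boatner’s distinction between 2D and 3D data amounts to how many degrees of freedom the vision system must report. A minimal sketch of the two pose representations (the field names are illustrative, not any vendor’s schema):

```python
from dataclasses import dataclass

@dataclass
class Pose2D:
    """Planar guidance: position on the work surface plus in-plane rotation."""
    x: float      # mm
    y: float      # mm
    theta: float  # degrees

@dataclass
class Pose3D:
    """Full six-degree-of-freedom guidance."""
    x: float      # mm
    y: float      # mm
    z: float      # mm
    yaw: float    # degrees
    pitch: float  # degrees
    roll: float   # degrees

# "Something in between" (sometimes called 2.5D) typically adds only the
# part's height to an otherwise planar fix.
flat_part = Pose2D(x=412.5, y=88.0, theta=14.2)
stacked_part = Pose3D(x=412.5, y=88.0, z=65.0, yaw=14.2, pitch=0.0, roll=0.0)
```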
One of the technology’s major benefits is also one of its major challenges, explains Habibi. “One of the key advantages to vision-guided robots over their blind cousins is that they don’t need the workpiece to be in the same position or even of the same type before it can be operated on. A blind robot relies on fixturing to pre-position the part at the pre-trained position, whereas a vision-guided robot locates each part every time. The fact that part position, orientation and type can change means that unlike vision inspection systems, where the part is typically fixtured or its movement heavily restricted, VGR systems can’t depend on a convenient placement of cameras and lights to obtain an ideal image.”
In order to overcome any integration issues, McLaughlin says that the proper training of plant personnel is key to a successful integration and adoption. “There is often a perception that vision guidance is black magic and, as such, people are often afraid and distrustful of it. By training plant staff thoroughly when the system is deployed, they understand it and are able to support it.”
Like fine wine, vision-guided robotics is getting better with age, and recent breakthroughs in the technology have improved VGR performance on the plant floor.
Says Roney, “Vision systems are a quarter of the cost they were just 10 years ago, and with the higher levels of integration into the robot controller, the cost of implementation has been significantly reduced. Advanced reliability has also increased the adoption of vision-guided robotics. Vision tools like geometric pattern matching, where the features of an object are identified and matched instead of a pixel comparison, have significantly added to the reliability of machine vision for robotics. For industrial robots that work in manufacturing plants that make parts or products from raw materials or in-process goods, this reliability has greatly increased the applicability of machine vision and reduced the maintenance concerns once associated with earlier systems, which were highly sensitive to lighting changes.”
Habibi says that there have been several breakthroughs in the area of algorithm development. “I can point to at least three new pose calculation algorithms we have developed in the last three years, which are aimed at locating various types of objects with various geometries and appearances. As more of these techniques come online, a larger variety of workpieces or parts can be located and the information can be used for robot guidance. The ever-multiplying speed of PCs and the introduction of dual-core and, more recently, quad-core processors in the last few years is certainly notable, as it makes execution of advanced algorithms feasible by allowing them to run within industrially acceptable cycle times.”
According to McLaughlin, “Vision systems and tools have become dramatically more robust in the past few years and have really made vision a stable, solid technology to use in the manufacturing environment…As well, the proliferation of Ethernet communication now allows for the transfer of large amounts of data easily between the vision system and robot.”
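The Ethernet hand-off McLaughlin mentions is often just a short structured message pushed from the vision system to the robot controller. A hypothetical encoding is sketched below – every robot brand defines its own wire format, and this is not any of them:

```python
def encode_pose(part_id, x, y, theta):
    """Pack a located part into a simple ASCII message, e.g. for TCP."""
    return f"{part_id},{x:.3f},{y:.3f},{theta:.3f}\r\n"

def decode_pose(msg):
    """Parse the same message back into its fields on the receiving side."""
    part_id, x, y, theta = msg.strip().split(",")
    return part_id, float(x), float(y), float(theta)

# A located bracket (hypothetical part ID) crossing the wire:
msg = encode_pose("BRKT-7", 412.5, 88.0, 14.2)
fields = decode_pose(msg)
```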
“Improvements in industrial lighting solutions are a major breakthrough,” says Deans. “High-intensity LED lighting, directional lighting and more robust lighting solutions have helped immensely. Also, the increase in the computing power of vision systems has improved vision system performance and speed. Processing power, resolution and the packaging of cameras is a major improvement. In particular, digital cameras have greatly simplified the vision interface.”
With these advancements comes increased adoption. The panelists say that, if implemented and integrated correctly, vision-guided robotics can be used in almost any process in many industries, but there are certain industries and applications more suited to the technology than others, and certain applications that are less challenging than others.
“Industries of all types are embracing vision-guided robots,” says Roney. “The automotive industry is using guidance for the assembly and processing of engine and body components. The food industry is using vision-guided robotics to pick products from conveyors for packaging into individual containers or cartons. The pharmaceutical industry is using vision and robots to locate medical supplies on moving belts for packing into shipping cartons. Metalworking industries are finding metal castings on pallets and loading CNC machines to make finished component products…And yes, we are even using robotic guidance in bin-picking applications today that just a few years ago were thought to be impossible.”
“Material handling is the low-hanging fruit that this technology can be used to capitalize on today,” says McLaughlin. “Typical [2D] applications [are] picking from a stationary or moving conveyor…Typical applications for 3D vision guidance are bin-picking applications. We are also seeing interest for robotic deburring and material removal applications where the robot uses the vision system to locate the part before starting to deburr the part.”
McLaughlin says that the automotive industry in particular has embraced the technology because of pressure from low-cost overseas manufacturers. “Automotive programs have lower volumes and shorter lifespans. As such, there is a trend away from special purpose equipment designed to run a single part towards flexible equipment that can run multiple parts or programs simultaneously. For example, instead of using a traditional fixtured conveyor to deliver parts to a cell, a manufacturer can now use a standard, off-the-shelf, flat-belt conveyor and a vision-guided robot to pick the parts from the end of the conveyor. Not only is the initial capital cost less, the system can now run multiple parts simultaneously simply through programming changes to the vision system and robot.”
Deans says that packaging applications are ideal because of the requirement for picking randomly ordered objects from moving belts. Other suitable applications include high-speed parts processing, and complex mechanical and electrical assemblies. “Bin-picking technologies are advancing, but not yet mature,” he adds.
According to Habibi, the technology is suited for applications where, in the past, a part needed to be fixtured to enable the robot to perform an operation on it, or cases where a robot could not be deployed because fixturing was not feasible, such as the removal of parts from bins, parts assembly, sealing and adhesive dispensing. “Generally, with the technology available today, we can locate most rigid objects that have a handful of visually discernable features such as edges [and] holes,” he says.
“Each application,” Habibi continues, “requires a degree of specific engineering and therefore you naturally see system integrators focus their attention on high-volume applications where they can amortize this cost over a large number of systems. In other words, even though a lot of applications may be solvable today from a pure VGR technology point of view, integrators may postpone offering affordable commercial solutions to these in the short- to medium-term while they exploit the low-hanging fruit.”
Boatner has seen an increase in adoption across the automotive, food and beverage, and packaging industries. But, he says, “The fastest growing applications in the 2D VGR market include automated palletizing and depalletizing, conveyor tracking, and component assembly. In the 3D market, applications for auto-racking and bin-picking are showing healthy growth…Unlike using blind robots, vision-guided robots don’t depend on costly precision fixtures to hold parts, require additional labour to load and orient parts, or need upstream actuators, sorters and feeders to separate parts for processing. Consequently, VGR enables manufacturers to more easily process various part types without tooling changeover. Plus, VGR provides the added benefit of automatic collision avoidance for safer work cells.”
Are there any applications for which the technology is not appropriate?
“There are not really any bad applications, just systems that are applied badly,” says Roney. “If a robot application is identified as a vision-guided opportunity, and the system is either mis-specified or incorrectly engineered, then it becomes a bad application.”
According to Boatner, “With the right expertise, robots and vision can improve the effectiveness and efficiency of nearly every factory floor operation.”
But Habibi says that flexible or floppy objects can be more challenging to locate, although individual points of interest on them can be found reliably. Applications in extreme environments with a great deal of debris, high temperature or humidity are also difficult, he says, because the lines of sight between the cameras, lighting and parts become contaminated.
What’s the ROI?
So you’ve weighed the benefits, considered the challenges and narrowed down possible applications. Next you have to calculate whether the investment will pay for itself in a suitable timeframe. Here are some things to consider:
“ROI calculations include the obvious cost of the system versus increased production and labour savings,” explains Deans, “but [manufacturers] often overlook important issues such as reduced rate of scrapped parts and customer satisfaction due to improved repeatability in the product.”
According to Habibi, when assessing possible ROI, manufacturers must consider a number of factors: “How much do you spend on fixturing of parts or specialized containers to make sure parts are pre-positioned for robots? What would be the savings if they could be eliminated or simplified? Are there labour-intensive, high ergonomic injury processes in your operation that could not be automated in the past due to infeasibility of fixturing? What would be the savings if they could be robotized? Are you getting enough out of the capital equipment investment you have in place now? How much more productivity could you achieve if you could run one, two, three other part styles down the same line? Our analysis shows that in the majority of cases, ROI for typical VGR systems is close to [six months]. In many cases, there is instant ROI when an expensive fixturing or positioning device can be eliminated from the list of capital equipment.”
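Habibi’s six-month figure is, at its core, a simple payback-period calculation. A minimal sketch with hypothetical numbers (nothing below comes from the panelists):

```python
def payback_months(system_cost, monthly_savings):
    """Months until cumulative savings cover the system's capital cost."""
    return system_cost / monthly_savings

# A $150,000 VGR cell that eliminates $25,000/month of fixturing,
# labour and changeover cost pays for itself in six months:
months = payback_months(150_000, 25_000)  # 6.0
```

The overlooked items Deans mentions – scrap reduction and customer satisfaction – would shorten the payback further, but are harder to quantify as a monthly figure.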
“Bin-picking is an easy application to calculate the ROI as it is a direct, tangible labour savings,” says McLaughlin. “JMP has implemented several VGR bin-picking applications where the ROI is less than one year.”
In closing, each participant was asked to offer advice to those considering implementing a vision-guided robot system. Here’s what they said:
Roney: “Understand where the application success lies. Is it in the vision system, the robot application or both? Consider support from the company you are purchasing your system from. How much support can they really supply and for how long? If you are buying the vision system and the robot separately, who then will be responsible for the overall success of the vision-guided robotic solution? Also, ask your supplier if classes are available, not just for the vision system or the robot, but on vision-guided robotics where vision and robots are taught together, working together. It is important to remember that in vision-guided robotics, one does not work without the other. You must understand both.”
McLaughlin: “Work with an experienced, certified integrator that has done this before. This mitigates your risk and ensures you get a system that is reliable and robust. Also, keep it simple; use a combination of vision technology and mechanical compliance to solve orientation and reach issues. Test, test and test it again; make sure every scenario is tested and validated to avoid hiccups when installed in the plant.”
Deans: “An important first step is to develop an idea of what the system needs to do, as well as what you would like it to do. This includes a determination of how material will flow in and out, as well as what processes will be performed by the system, and how workers will interact with it. Next, consulting with an experienced partner (i.e. the integrator or robot company) is essential, as they can steer you towards solutions that have worked in the past. We also believe that one should look for vendors with well-integrated solutions, not just piece parts, to reduce risk in the system integration and post-installation service. Finally, complete buy-in from the end-user is important.”
Boatner: “The first thing to do is to consult an integrator that has substantive experience integrating vision and robots. Their experience will be invaluable as you implement your VGR application. It is recommended that a detailed failure mode analysis be performed so that if a problem does occur, an action plan has been identified to limit the downtime of the robot…Make sure to buy vision software with an advanced part finding algorithm, an easy-to-use software development environment, and [one] that includes communications drivers and sample code. It’s also a good idea to purchase a package that has a complete vision toolset…especially if there’s a chance that the vision guidance application will be expanded to include code reading, part gauging or other vision tasks. Finally, be sure that the vision hardware comes standard with cables that are rated to at least 10 million cycles.”
Habibi: “Examine your ROI picture carefully to see where you can realize the most savings…If you are new to VGR, try to find applications in your plant that are already proven elsewhere in the industry to get you familiar with the technology, its strengths and its limitations. Pick a well-known integrator that can provide you with a standard, engineered VGR system targeted at solving these specific applications. Later on, when you are more familiar with the technology, you will be able to extrapolate the capability to other applications with much less risk.”
Edward Roney is the development manager for the Product Development Division of Fanuc Robotics America, Inc., a supplier of industrial robots and factory automation systems. He is responsible for the development of global machine vision products for use in Fanuc’s line of intelligent robots. Roney has been active in the application of machine vision technology since 1982 and is currently serving on the Automated Imaging Association board of directors.
Ken McLaughlin is the director of flexible manufacturing at JMP Engineering, a London, Ont.-based industrial system integration company that specializes in the engineering and provision of automation, control and information solutions. He has been with JMP Engineering for eight years. McLaughlin is responsible for JMP’s turnkey material handling systems for automotive, food and beverage, and pharmaceutical applications.
Gordon Deans is vice-president of business development and general manager of Adept Canada, a manufacturer and marketer of robotics, vision and motion control products for automated material handling and assembly. He was previously the president of Telere Technologies Inc., a consulting firm providing product marketing and business development services to high-technology organizations. He holds a bachelor of science and a master’s degree in electrical engineering.
Bryan Boatner is the product marketing manager for In-Sight vision sensors at Cognex, a supplier of machine vision sensors and systems. He holds a bachelor of science in mechanical engineering, and held application-engineering positions at Cognex from 2000 to 2005.
Babak Habibi is the president of Braintech, a North Vancouver, B.C.-based company that designs, develops and deploys software for vision-guided robotics systems. He completed his bachelor of science and master’s degrees at the University of Waterloo in Waterloo, Ont.
What is vision-guided robotics?
• When a camera or sensor is used to provide positional information to a robot so that its path is changed to adapt to the real position of the workpiece or part being processed by the robot. Vision is used to determine that new position and hence guide the robot. – Edward Roney
• The use of vision technology to locate an object and/or feature(s) on an object in space and update the robot path to perform the desired operation on the object. – Ken McLaughlin
• VGR uses digital imaging and intelligent software to add the ability to see, comprehend and reason to traditional blind robots. Vision-guided robots take advantage of cameras and intelligent software to give robots information about their environment, including 3D location, orientation and type of parts; type and quality of attributes and features on parts; and relationship between parts, robots and other objects. They react in real time to type, quality and position of objects in their workspace. – Babak Habibi
• A vision guided robot system is one where machine vision and robot control are tightly coupled to locate randomly oriented objects in the field of view of the camera(s) and generate robot movement to act on the objects. – Gordon Deans