CoDeSys software, first released in 2005, is designed for OEMs and hardware vendors who would rather use pre-built programming software than develop their own. The latest release of 3S Inc.'s flagship programming software, version 3.0, is a complete rewrite of the previous release, version 2.3. Using the most recent development environments from Microsoft and the .Net framework, 3S has created a visually appealing and very functional integrated development environment (IDE) for programming to the IEC-61131 framework and specification.
This isn't, however, a column on IEC-61131. It's a return visit to IEC-based programming software, which has historically lacked the features and functionality of more mature programming products (e.g. RSLogix from Rockwell Automation).
The "Quick Start" on the company's website is still based on version 2.3, as is the demo program. But you can import the files, and they are converted to the new file formats.
The installation went well. The software had to install Microsoft's .Net framework version 2.0, which suggests that my operating system software needs to be updated.
It automatically started two services - "Gateway" and "PLC Control." These services are used to provide communications between the IDE and the defined target PLC at runtime.
There is a lot to set up in an IEC environment. However, you only need to know the target PLC or PAC (programmable automation controller) at the end of the process to map the inputs and outputs to the variables that you have created.
The editor is very clean. There are many windows and menu selections, which, unlike the help system, are context sensitive.
From my experience, the help system did not provide easy access to things you need to know. It assumes that you are already familiar with some of the concepts of IEC and of 3S's software approach. In my opinion, this has always been the case with most IEC-based software environments, and one of the reasons why it has not been embraced by North American industry.
I did, however, like the way the help file was structured such that the links to certain topics were hidden until asked for.
The program organizational units (POUs) are the program files used for the logic in the control system, and the devices identify the hardware (target PLC). A POU can be written in any of the five IEC languages, as well as in continuous function chart.
Once you have created a POU (e.g. ladder diagram), you can create the variables you need in the structured text window. The tabbed interface of the IDE makes it easy to navigate back and forth.
There is no real-time syntax checking, which was disappointing. An error message appeared at compile time because I had used a semicolon instead of a colon. Line numbers from the source programs were given, however, which led me in the right direction. Another disappointment was that keyboard shortcuts are not as well supported in this package as in some other software packages. I find using the mouse for most functions tedious and time-consuming.
Creating ladder diagrams is painless, but I wouldn't say that it is a very fast development environment. Using the auto declare function allows you to enter variables on the fly, which aids in creating the database.
Building a function block diagram is also relatively painless. It might be my unfamiliarity with the software, but there doesn't seem to be any quick way of connecting devices and I/O points.
Once the project has been built and compiled without errors, you can run the project in simulation mode. This will help you in some basic application troubleshooting without having the physical hardware available.
One of my pet peeves is the lack of on-screen information. Anyone who has used RSLogix cannot do without the address descriptors. A tagname (which is the only way you can refer to a memory space) cannot adequately identify the input or output. Off-project documentation is needed to help the user understand and troubleshoot the application.
There is a built-in HMI application that can be used for a basic interface. The interface is much richer and more functional than the previous version; however, the phrase "not yet implemented" appears frequently in the help file. Some OEMs might wait for the completion of some of the functionality before migrating, but you can bet that they eventually will.
I would not ignore this IEC-based software package. If you are an IEC user, you can't afford to stay in the dark.
Version: 3.0 SP2
Vendor: 3S Inc. (www.3s-software.com)
Application: Control device programming
Price: Development environment is free
Low-power wireless sensor and control networks are extending the capabilities of factory automation systems to physical spaces and functions never before possible. Recently released studies by research firms like Harbor Research (www.harborresearch.com), On World (www.onworld.com) and ABI Research (www.abiresearch.com) predict a rapid acceleration in the adoption of this technology over the next 12 to 36 months.
A major catalyst for this adoption is an upcoming wave of new ZigBee products. Over the past few years, more than a quarter billion dollars of investment has been put into development of the underlying technology for these wireless networks, including low-power, low-cost silicon; ZigBee-compliant network stacks; and development tools. This investment has allowed major OEMs to standardize on ZigBee, and soon they will begin pushing these products to market. The availability of these products will provide companies with the hardware components necessary to move forward with ZigBee deployment projects on the factory floor.
Other key factors that will accelerate the deployment of ZigBee applications are corporate initiatives focused on energy management and stringent operational standards, which are supported by the capabilities of ZigBee.
Putting ZigBee to work
The expected applications of ZigBee include energy management, advanced process control, safety enhancement, machine monitoring and maintenance, as well as temperature and vibration monitoring. But how do you get these wireless applications to work in production operations?
It is important to begin discussing this issue now because there are some critical hurdles and challenges that organizations will run up against when they move forward with their deployments. To help with the transition from wired to wireless, there are a number of questions that manufacturers need to ask themselves.
Key questions - DEPLOYMENT
Before deploying and commissioning ZigBee networks, manufacturers must first consider:
• How do I plan for the quantity and placement of the wireless devices in the venue, particularly when the existing staff has little experience deploying wireless RF devices?
• How do I create a ZigBee application that can be installed by an electrician or other professional who is typically in charge of wired installations?
• How do I create a system whose installation begins on-site by an electrician and is completed by a specialist remotely?
• How do I embed enough automatic capability in ZigBee devices so that they can operate effectively, securely and easily at the time of the device's commissioning?
• How does the installation team establish a simple way of binding each wireless device to the location where it is installed, so that both the device and the application understand the device's functional placement and role?
One common thread that runs through each of these questions is the issue of how to successfully deploy a wireless application using the same team that is responsible for the traditional wired sensors and actuators. Very few of the teams that currently oversee wired sensor networks in industrial settings have extensive experience working with RF devices, and most companies will not have the luxury of an RF-trained engineer to support every step of a ZigBee application deployment. This presents a significant obstacle, because deploying ZigBee devices is a different process from installing wired sensors. Wireless enables freedom of choice, and that will lead to a larger volume of wireless devices. In turn, this means that most of the wireless devices will have to have a level of automated intelligence embedded in them to enable easy commissioning and flexible use. Addressing these challenges will require advance planning to automate the deployment issues faced by the people who will actually have responsibility for installing the ZigBee application.
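The binding step raised in the last question above can be sketched in code. This is a minimal, hypothetical commissioning registry (the class and field names are illustrative assumptions, not part of any ZigBee stack or vendor tool) mapping each radio's 64-bit address to the location and role recorded by the installer:

```python
# Minimal sketch of a commissioning registry: an installer scans or enters
# each device's 64-bit address and records where it was mounted and what
# role it plays. All names here are illustrative, not from any ZigBee stack.

class CommissioningRegistry:
    def __init__(self):
        self._bindings = {}  # device address -> (location, role)

    def bind(self, address, location, role):
        """Record the physical placement and function of an installed device."""
        if address in self._bindings:
            raise ValueError(f"device {address} is already bound")
        self._bindings[address] = (location, role)

    def lookup(self, address):
        """Return (location, role) for a commissioned device, or None."""
        return self._bindings.get(address)

registry = CommissioningRegistry()
registry.bind("00:0D:6F:00:00:12:34:56", "press line 2, motor housing", "vibration sensor")
print(registry.lookup("00:0D:6F:00:00:12:34:56"))
```

The point of the sketch is that the binding record, however it is implemented, is what lets both the device and the application understand the device's functional placement and role after the electrician leaves the site.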
Key questions - BUILDING APPLICATIONS
When building ZigBee applications, organizations must ask themselves:
• How can I get all the disparate components of a ZigBee network to operate as a unified system?
• How can I accelerate integration of the ZigBee application with other systems within the facility so that it becomes a fully integrated extension of the company's technology infrastructure?
• How do I build the network with automated functionality and network intelligence that addresses the lack of a human interface on most of the devices within a ZigBee network?
One of the most compelling and powerful characteristics of ZigBee applications is that they connect device capability in ways that have previously been impossible to accomplish or even to imagine. That strength of the technology also causes new operational challenges because these applications bring together devices and technologies that have previously not worked together. The process of making these disparate components talk to one another and operate as a unified system is daunting and often requires expertise in atypical areas of technology. Planning ahead to select devices and components that minimize these interoperability issues is very important. Likewise, it will be valuable to have processes and technologies that will help automate the process of building out the application and overcoming interoperability snags that occur along the way.
Key questions - MANAGING THE NETWORK
To manage a wireless network's health and performance, manufacturers have to think about:
• How can I proactively manage the network to ensure reliability and performance?
• How do I dynamically monitor and manage radio consumption to conserve battery power in wireless devices?
• How do I run diagnostics when there is a network performance issue?
• How do I manage network load between different channels or subnets?
• If I have built redundancy into the network to maximize reliability, how do I manage the network in real time to deal with issues such as interference, or switch channels to improve performance?
One of the things that will catch many organizations by surprise is the difference between wired and wireless networks when it comes to management and maintenance. The diagnostic process, for example, is different for a wired network than for a wireless one. Whereas troubleshooting a failure in a wired device is limited to examination of elements within sub-sections of a wired circuit, performance issues with a wireless device have multiple potential causes, and the deductive process of identifying the true cause is complex. More importantly, the network itself must be imbued with the automated intelligence to handle these monitoring and management chores. This will create unforeseen challenges for technical teams who are experts in wired systems, but new to wireless networks.
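Some of the monitoring chores described above can be automated with very little machinery. As a hedged sketch (the thresholds and report fields are assumptions for illustration, not any vendor's API), a periodic health check might flag devices whose battery voltage or link quality has dropped below configured limits:

```python
# Sketch of an automated health check over periodic device status reports.
# Thresholds and report fields are illustrative assumptions.

MIN_BATTERY_V = 2.4   # below this, schedule a battery change
MIN_LQI = 60          # link-quality indicator floor (0-255 scale)

def flag_unhealthy(reports):
    """Return device ids needing attention, with the reason(s) for each."""
    flagged = {}
    for device_id, report in reports.items():
        reasons = []
        if report["battery_v"] < MIN_BATTERY_V:
            reasons.append("low battery")
        if report["lqi"] < MIN_LQI:
            reasons.append("poor link quality")
        if reasons:
            flagged[device_id] = reasons
    return flagged

reports = {
    "sensor-01": {"battery_v": 2.9, "lqi": 180},
    "sensor-02": {"battery_v": 2.1, "lqi": 200},
    "sensor-03": {"battery_v": 3.0, "lqi": 40},
}
print(flag_unhealthy(reports))
```

In practice this logic would live in the network management layer rather than a script, but the principle is the same: the network, not the technician, watches battery and link health and surfaces only the exceptions.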
Key questions - ENSURING SECURITY
Security is an important consideration for wireless networks. To ensure the network is secure, organizations must ask themselves:
• How do I ensure that a wireless device is appropriately secured for the application and the function? And how do I begin that device's life in a secure fashion?
• How do I put a system in place that allows that security scheme to be flexible and secure?
• How do I build easy-to-design-and-manage capabilities into the network that establish a hierarchy of access that aligns access privileges with the functional uses of an application?
The question is not whether wireless can be secure; ZigBee networks can and do meet stringent network-level security requirements in industrial settings. The question instead is how to make them secure in an application context that not only achieves a user's security objectives, but also provides a flexible platform that supports the different purposes of the device and the needs of the organization over time. Advance work is critical for achieving these twin objectives of security and flexibility.
The questions outlined here are by no means exhaustive. They are meant to provide a starting point for the process that an organization will embark on as it begins planning its wireless deployment. The operational challenges that companies face in building, deploying and managing wireless technologies are real, and advance planning will help overcome them. Companies need to maximize the level of automated intelligence built into these devices to minimize the complexity of living with these applications day-to-day once they are operational in production settings.
Industrial robots guided by machine vision have the potential to revolutionize manufacturing processes, improving repeatability, cycle rate, reliability and safety on the plant floor, while reducing costs associated with labour and fixturing.
A lack of understanding and awareness of the technology's value has, in the past, hindered adoption. However, as competitive pressures increase, hardware and software costs decrease, and technologies improve, manufacturers in many industries are exploring the possible implementation of vision-guided robotics (VGR) to improve productivity and competitiveness.
To help you decide if the technology is appropriate for your manufacturing environment, we've asked five experts in the vision-guided robotics arena to discuss the benefits, challenges, integration issues, major breakthroughs and appropriate applications for the technology in manufacturing.
The participants are Edward Roney, development manager, Product Development Division, Fanuc Robotics America, Inc.; Ken McLaughlin, director, flexible manufacturing, JMP Engineering; Gordon Deans, vice-president, business development, and general manager, Adept Canada; Bryan Boatner, product marketing manager, In-Sight vision sensors, Cognex; and Babak Habibi, president, Braintech. For more information on our panelists, see "The participants" on page 19.
Ask anyone in the vision-guided robotics arena about the technology, and they can provide you with a laundry list of benefits.
"VGR is capable of effecting a major paradigm shift in manufacturing," says Habibi. "Elimination of hard tooling and fixturing is one of the principal effects that can result in significantly lower capital and maintenance costs for manufacturing. Couple that with the fact that with vision guidance, robots can be deployed increasingly in places where robotic automation was not imaginable before, and you can see that this has a powerful effect on the landscape of manufacturing and the way we will lay out our plants of the future."
Other major benefits, says Habibi, are reductions in labour costs, as previously manual applications become robotizable because of VGR, and increased asset utilization due to the ability to run multiple part styles down the same production line.
McLaughlin agrees that cost savings are a major plus. "Manufacturers can benefit from labour cost reduction through the automation of tasks that were previously not possible, and often ergonomically challenging."
Even with the plethora of pluses, experts are quick to caution potential adopters about the many factors to consider for successful implementation, including lighting, location, training and integration challenges. But the panelists agree that successful implementation and integration start with proper planning.
Says Roney, "The first major challenge is the selection of the vision supplier. There can be some significant challenges facing an integrator or end-user depending on whether the vision product is already integrated as a standard product of the robot, or if a general-purpose vision system supplied by a company other than the robot manufacturer is to be applied. In using a general-purpose vision system, several challenges immediately exist, [including] communication - how will the vision system and robot exchange information; precision calibration - the matching of the vision image to the robot co-ordinate systems; and robot math - making the vision data useful to the robot and using co-ordinate frames. These integration concerns can be large engineering tasks that can be eliminated if the end-user specifies the use of a vision system designed as a standard integrated product - an application-specific solution. With a standard product, all the issues listed above have been addressed and designed into the complete intelligent robot package, which then lowers costs, reduces risk and provides a more supportable automation solution."
McLaughlin explains that often the biggest challenges are associated with the physical constraints of the system. For example, he says, "in vision-guided bin-picking applications, the vision system may be able to easily identify that a part is upside down or on its side, however, the robot gripper needs to be able to deal with this. This means the gripper must be designed to handle parts in multiple orientations. Another common scenario is if a part is leaning up against the side of the bin and the robot can't pick it because the robot itself will collide with the side of the bin. [The] bottom line [is], in a vision-guidance application you need to consider the entire system - vision system, robot reach, robot gripper, multiple orientations of parts, etc. - in order to design a system that will be successful."
According to Deans, before implementation, manufacturers must consider logistics, such as where the system will go and who will operate it. Financial issues, such as cost, also need to be reviewed, as do practical issues, such as upkeep, maintenance and training. And once you know that, control of the environment is the single largest factor to consider, he says, adding that lighting and the appearance of the parts are a major part of that.
"Lighting, of course, is subject to changes in ambient light, including overhead lights, sunlight through windows, reflections from adjacent equipment, as well as variations of structured lighting intensity over time," explains Deans. "The appearance of the parts is subject to variations in manufacturing which may affect the colour or surface finish of the parts, as well as the amount of particulate mixed in with the parts." He adds that conveyor tracking or an equivalent consideration of moving objects requires "tight timing and latency control between the robot controller, various sensors that detect conveyor movement and speed, and the object detection in the vision system."
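Deans' point about timing and latency in conveyor tracking can be made concrete: the robot must aim where the part will be, not where the camera saw it. A minimal sketch (the function name and the constant-belt-speed assumption are mine, not Deans') looks like this:

```python
# Sketch of latency compensation for conveyor tracking: the vision system
# reports where a part WAS at image-capture time; the robot needs to know
# where it WILL be at pick time. Assumes constant belt speed along one axis.

def predicted_position(x_at_capture, belt_speed, capture_time, pick_time):
    """Project a part's position along the belt axis forward to pick time.

    x_at_capture: position (mm) reported by the vision system
    belt_speed:   conveyor speed (mm/s), typically from an encoder
    capture_time, pick_time: timestamps (s) from a shared clock
    """
    latency = pick_time - capture_time
    return x_at_capture + belt_speed * latency

# Part seen at 500 mm, belt at 200 mm/s, 150 ms between capture and pick:
print(predicted_position(500.0, 200.0, 10.000, 10.150))  # ~530 mm
```

Even this toy version shows why Deans stresses "tight timing and latency control": an error of a few tens of milliseconds in the shared clock translates directly into millimetres of pick error at typical belt speeds.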
Deans says that environments with abrasives or chemicals that are airborne are also a challenge, which can be overcome with the use of sealed enclosures and proper venting. Stamping or press machines that induce major vibrations are also an ongoing challenge that must be considered at the system design stage.
Boatner explains the challenges from both the 2D and 3D side. "It is important when evaluating a potential VGR application to determine whether you need 2D data (x, y, theta), 3D data (x, y, z, yaw, pitch, roll) or something in between. That being said, for 2D and 3D applications, one of the biggest challenges in VGR is that ambient light may play an unexpected role. Changing light conditions cause nearly every image of the part to vary slightly due to shadows that shift around on the part as its position with respect to the light source changes...A second challenge is setting up communications between the vision system and the robot...For 3D applications, in addition to the challenges stated above, often times the user needs to calibrate two or more cameras to each other, which is complex and requires significant vision and robot expertise."
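Boatner's distinction between 2D and 3D data can be stated concretely. As a sketch (the type names are mine, chosen for illustration), the two cases differ only in how many degrees of freedom the vision system must report to the robot:

```python
# Sketch of the guidance data a vision system hands to a robot.
# 2D guidance: position in the work plane plus one rotation.
# 3D guidance: full position and orientation (six degrees of freedom).
from dataclasses import dataclass

@dataclass
class Pose2D:
    x: float      # mm, in the work plane
    y: float      # mm
    theta: float  # degrees, rotation about the plane's normal

@dataclass
class Pose3D:
    x: float      # mm
    y: float      # mm
    z: float      # mm
    yaw: float    # degrees
    pitch: float  # degrees
    roll: float   # degrees

flat_part = Pose2D(x=120.5, y=87.2, theta=14.0)  # e.g. a part on a conveyor
bin_part = Pose3D(x=40.0, y=-12.5, z=310.0, yaw=5.0, pitch=22.0, roll=-3.0)
print(flat_part.theta, bin_part.pitch)
```

The "something in between" cases Boatner mentions (say, x, y, theta plus a height reading) simply report a subset of the six values; deciding which values the application truly needs is what drives camera count, calibration effort and cost.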
One of the technology's major benefits is also one of its major challenges, explains Habibi. "One of the key advantages to vision-guided robots over their blind cousins is that they don't need the workpiece to be in the same position or even of the same type before it can be operated on. A blind robot relies on fixturing to pre-position the part at the pre-trained position, whereas a vision-guided robot locates each part every time. The fact that part position, orientation and type can change means that unlike vision inspection systems, where the part is typically fixtured or its movement heavily restricted, VGR systems can't depend on a convenient placement of cameras and lights to obtain an ideal image."
In order to overcome any integration issues, McLaughlin says that the proper training of plant personnel is key to a successful integration and adoption. "There is often a perception that vision guidance is black magic and, as such, people are often afraid and distrustful of it. By training plant staff thoroughly when the system is deployed, they understand it and are able to support it."
Like fine wine, vision-guided robotics is getting better with age, and recent breakthroughs in the technology have improved VGR performance on the plant floor.
Says Roney, "Vision systems are a quarter of the cost they were just 10 years ago, and with the higher levels of integration into the robot controller, the cost of implementation has been significantly reduced. Advanced reliability has also increased the adoption of vision-guided robotics. Vision tools like geometric pattern matching, where the features of an object are identified and matched instead of a pixel comparison, have significantly added to the reliability of machine vision for robotics. For industrial robots that work in manufacturing plants that make parts or products from raw materials or in-process goods, this reliability has greatly increased the applicability of machine vision and reduced the maintenance concerns that were once associated with previous systems that were tremendously light change sensitive."
Habibi says that there have been several breakthroughs in the area of algorithm development. "I can point to at least three new pose calculation algorithms we have developed in the last three years, which are aimed at locating various types of objects with various geometries and appearances. As more of these techniques come online, a larger variety of workpieces or parts can be located and the information can be used for robot guidance. The ever-multiplying speed of PCs and introduction of dual and recently quad core processors in the last few years is certainly notable as it makes execution of advanced algorithms feasible by allowing them to run within industrially acceptable cycle times."
According to McLaughlin, "Vision systems and tools have become dramatically more robust in the past few years and have really made vision a stable, solid technology to use in the manufacturing environment...As well, the proliferation of Ethernet communication now allows for the transfer of large amounts of data easily between the vision system and robot."
"Improvements in industrial lighting solutions are a major breakthrough," says Deans. "High-intensity LED lighting, directional lighting and more robust lighting solutions have helped immensely. Also, the increase in the computing power of vision systems has improved vision system performance and speed. Processing power, resolution and the packaging of cameras is a major improvement. In particular, digital cameras have greatly simplified the vision interface."
With these advancements comes increased adoption. The panelists say that, if implemented and integrated correctly, vision-guided robotics can be used in almost any process in many industries, but there are certain industries and applications more suited to the technology than others, and certain applications that are less challenging than others.
"Industries of all types are embracing vision-guided robots," says Roney. "The automotive industry is using guidance for the assembly and processing of engine and body components. The food industry is using vision-guided robotics to pick products from conveyors for packaging into individual containers or cartons. The pharmaceutical industry is using vision and robots to locate medical supplies on moving belts for packing into shipping cartons. Metalworking industries are finding metal castings on pallets and loading CNC machines to make finished component products...And yes, we are even using robotic guidance in bin-picking applications today that just a few years ago was thought to be impossible."
"Material handling is the low-hanging fruit that this technology can be used to capitalize on today," says McLaughlin. "Typical [2D] applications [are] picking from a stationary or moving conveyor...Typical applications for 3D vision guidance are bin-picking applications. We are also seeing interest for robotic deburring and material removal applications where the robot uses the vision system to locate the part before starting to deburr the part."
McLaughlin says that the automotive industry in particular has embraced the technology because of pressure from low-cost overseas manufacturers. "Automotive programs have lower volumes and shorter lifespans. As such, there is a trend away from special purpose equipment designed to run a single part towards flexible equipment that can run multiple parts or programs simultaneously. For example, instead of using a traditional fixtured conveyor to deliver parts to a cell, a manufacturer can now use a standard, off-the-shelf, flat-belt conveyor and a vision-guided robot to pick the parts from the end of the conveyor. Not only is the initial capital cost less, the system can now run multiple parts simultaneously simply through programming changes to the vision system and robot."
Deans says that packaging applications are ideal because of the requirement for picking randomly ordered objects from moving belts. Other suitable applications include high-speed parts processing, and complex mechanical and electrical assemblies. "Bin-picking technologies are advancing, but not yet mature," he adds.
According to Habibi, the technology is suited for applications where, in the past, a part needed to be fixtured to enable the robot to perform an operation on it, or cases where a robot could not be deployed because fixturing was not feasible, such as the removal of parts from bins, parts assembly, sealing and adhesive dispensing. "Generally, with the technology available today, we can locate most rigid objects that have a handful of visually discernable features such as edges [and] holes," he says.
"Each application," Habibi continues, "requires a degree of specific engineering and therefore you naturally see system integrators focus their attention on high-volume applications where they can amortize this cost over a large number of systems. In other words, even though a lot of applications may be solvable today from a pure VGR technology point of view, integrators may postpone offering affordable commercial solutions to these in the short- to medium-term while they exploit the low-hanging fruit."
Boatner has seen an increase in adoption across the automotive, food and beverage, and packaging industries. But, he says, "The fastest growing applications in the 2D VGR market include automated palletizing and depalletizing, conveyor tracking, and component assembly. In the 3D market, applications for auto-racking and bin-picking are showing healthy growth...Unlike using blind robots, vision-guided robots don't depend on costly precision fixtures to hold parts, require additional labour to load and orient parts, or need upstream actuators, sorters and feeders to separate parts for processing. Consequently, VGR enables manufacturers to more easily process various part types without tooling changeover. Plus, VGR provides the added benefit of automatic collision avoidance for safer work cells."
Are there any applications for which the technology is not appropriate?
"There are not really any bad applications, just systems that are applied badly," says Roney. "If a robot application is identified as a vision-guided opportunity, and the system is either mis-specified or incorrectly engineered, then it becomes a bad application."
According to Boatner, "With the right expertise, robots and vision can improve the effectiveness and efficiency of nearly every factory floor operation."
But Habibi says that flexible or floppy objects can be more challenging to locate, although individual points of interest can be located reliably. Applications in extreme environments where there is a great deal of debris, high temperature or humidity are also difficult because of camera and lighting line of sight contamination, he says.
What's the ROI?
So you've weighed the benefits, considered the challenges and narrowed down possible applications. Next you have to calculate whether you will receive an ROI in a suitable timeframe. Here are some things to consider:
"ROI calculations include the obvious cost of the system versus increased production and labour savings," explains Deans, "but [manufacturers] often overlook important issues such as reduced rate of scrapped parts and customer satisfaction due to improved repeatability in the product."
According to Habibi, when assessing possible ROI, manufacturers must consider a number of factors: "How much do you spend on fixturing of parts or specialized containers to make sure parts are pre-positioned for robots? What would be the savings if they could be eliminated or simplified? Are there labour-intensive, high ergonomic injury processes in your operation that could not be automated in the past due to infeasibility of fixturing? What would be the savings if they could be robotized? Are you getting enough out of the capital equipment investment you have in place now? How much more productivity could you achieve if you could run one, two, three other part styles down the same line? Our analysis shows that in the majority of cases, ROI for typical VGR systems is close to [six months]. In many cases, there is instant ROI when an expensive fixturing or positioning device can be eliminated from the list of capital equipment."
"Bin-picking is an easy application to calculate the ROI as it is a direct, tangible labour savings," says McLaughlin. "JMP has implemented several VGR bin-picking applications where the ROI is less than one year."
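The payback periods the panelists quote come from simple arithmetic, and it is worth running your own numbers. A hedged sketch (the dollar figures are illustrative placeholders, not from any panelist's project):

```python
# Sketch of a payback-period calculation for a VGR cell.
# All dollar figures are illustrative placeholders.

def payback_months(system_cost, monthly_savings):
    """Months until cumulative savings cover the up-front system cost."""
    if monthly_savings <= 0:
        raise ValueError("savings must be positive for a finite payback")
    return system_cost / monthly_savings

# e.g. a $150,000 cell that eliminates $20,000/month in fixturing and labour:
months = payback_months(150_000, 20_000)
print(f"payback in {months:.1f} months")  # 7.5 months
```

As Deans and Habibi note above, the monthly-savings figure should include more than direct labour: eliminated fixturing, reduced scrap and extra part styles run on the same line all shorten the payback, which is how a six-month ROI becomes plausible.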
In closing, each participant was asked to offer advice to those considering implementing a vision-guided robot system. Here's what they said:
Roney: "Understand where the application success lies. Is it in the vision system, the robot application or both? Consider support from the company you are purchasing your system from. How much support can they really supply and for how long? If you are buying the vision system and the robot separately, who then will be responsible for the overall success of the vision-guided robotic solution? Also, ask your supplier if classes are available, not just for the vision system or the robot, but on vision-guided robotics where vision and robots are taught together, working together. It is important to remember that in vision-guided robotics, one does not work without the other. You must understand both."
McLaughlin: "Work with an experienced, certified integrator that has done this before. This mitigates your risk and ensures you get a system that is reliable and robust. Also, keep it simple; use a combination of vision technology and mechanical compliance to solve orientation and reach issues. Test, test and test it again; make sure every scenario is tested and validated to avoid hiccups when installed in the plant."
Deans: "An important first step is to develop an idea of what the system needs to do, as well as what you would like it to do. This includes a determination of how material will flow in and out, as well as what processes will be performed by the system, and how workers will interact with it. Next, consulting with an experienced partner (i.e. the integrator or robot company) is essential, as they can steer you towards solutions that have worked in the past. We also believe that one should look for vendors with well-integrated solutions, not just piece parts, to reduce risk in the system integration and post-installation service. Finally, complete buy-in from the end-user is important."
Boatner: "The first thing to do is to consult an integrator that has substantive experience integrating vision and robots. Their experience will be invaluable as you implement your VGR application. It is recommended that a detailed failure mode analysis be performed so that if a problem does occur, an action plan has been identified to limit the downtime of the robot...Make sure to buy vision software with an advanced part finding algorithm, an easy-to-use software development environment, and [one] that includes communications drivers and sample code. It's also a good idea to purchase a package that has a complete vision toolset...especially if there's a chance that the vision guidance application will be expanded to include code reading, part gauging or other vision tasks. Finally, be sure that the vision hardware comes standard with cables that are rated to at least 10 million cycles."
Habibi: "Examine your ROI picture carefully to see where you can realize the most savings...If you are new to VGR, try to find applications in your plant that are already proven elsewhere in the industry to get you familiar with the technology, its strengths and its limitations. Pick a well-known integrator that can provide you with a standard, engineered VGR system targeted at solving these specific applications. Later on, when you are more familiar with the technology, you will be able to extrapolate the capability to other applications with much less risk."
Edward Roney is the development manager for the Product Development Division of Fanuc Robotics America, Inc., a supplier of industrial robots and factory automation systems. He is responsible for the development of global machine vision products for use in Fanuc's line of intelligent robots. Roney has been active in the application of machine vision technology since 1982 and is currently serving on the Automated Imaging Association board of directors.
Ken McLaughlin is the director of flexible manufacturing at JMP Engineering, a London, Ont.-based industrial system integration company that specializes in the engineering and provision of automation, control and information solutions. He has been with JMP Engineering for eight years. McLaughlin is responsible for JMP's turnkey material handling systems for automotive, food and beverage, and pharmaceutical applications.
Gordon Deans is vice-president of business development and general manager of Adept Canada, a manufacturer and marketer of robotics, vision and motion control products for automated material handling and assembly. He was previously the president of Telere Technologies Inc., a consulting firm providing product marketing and business development services to high-technology organizations. He holds a bachelor of science and a master's degree in electrical engineering.
Bryan Boatner is the product marketing manager for In-Sight vision sensors at Cognex, a supplier of machine vision sensors and systems. He holds a bachelor of science in mechanical engineering, and held application-engineering positions at Cognex from 2000 to 2005.
Babak Habibi is the president of Braintech, a North Vancouver, B.C.-based company that designs, develops and deploys software for vision-guided robotics systems. He completed his bachelor of science and master's degrees at the University of Waterloo in Waterloo, Ont.
What is vision-guided robotics?
• When a camera or sensor is used to provide positional information to a robot so that its path is changed to adapt to the real position of the workpiece or part being processed by the robot. Vision is used to determine that new position and hence guide the robot. - Edward Roney
• The use of vision technology to locate an object and/or feature(s) on an object in space and update the robot path to perform the desired operation on the object. - Ken McLaughlin
• VGR uses digital imaging and intelligent software to add the ability to see, comprehend and reason to traditional blind robots. Vision-guided robots take advantage of cameras and intelligent software to give robots information about their environment, including 3D location, orientation and type of parts; type and quality of attributes and features on parts; and relationship between parts, robots and other objects. They react in real time to type, quality and position of objects in their workspace. - Babak Habibi
• A vision guided robot system is one where machine vision and robot control are tightly coupled to locate randomly oriented objects in the field of view of the camera(s) and generate robot movement to act on the objects. - Gordon Deans