Giving robots the ability to think and reason in an industrial environment
June 14, 2007
By Mary Del
Imagine a manufacturing plant where robots are sophisticated enough to understand their environment and choose the best path to achieve a goal. Imagine robots capable of working together as a team to solve problems. Such robots, endowed with high-level cognitive capabilities – including perception processing, attention allocation, anticipation, planning and reasoning – would greatly expand the possibilities of flexible manufacturing because they would be able to see, feel, touch and reason within the confines of unpredictable environments.
This isn’t your father’s robot.
“The difference with cognitive robotics is that instead of programming a robot with the sequences of mechanical actions it should perform when certain conditions arise, a cognitive robot is programmed with the logic behind the mechanical actions it can perform and how to synthesize these mechanical actions in order to bring the environment to a specific state,” says Stavros Vassos, a PhD student working in the cognitive robotics group at the University of Toronto.
Do cognitive robots exist today in an industrial environment?
“To my knowledge, there are no industrial applications where the cognitive robotics design paradigm is used,” says Vassos. But his group’s research focuses on a high-level programming language for controlling robots. In this language, one can specify:
1. The actions available to the robot, including actions that affect the environment and actions that extract information from it using the robot’s sensors;
2. The objects and properties of the environment, and how these change under each action the robot can perform; and
3. A high-level control program that specifies the robot’s behaviour, using properties of the environment as control-flow expressions and the robot’s actions as basic statements.
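A minimal sketch of that programming model might look like the following, assuming a simple dictionary-based world state; the names here (Action, pick, place) are illustrative only, not drawn from any real cognitive-robotics language.

```python
# Sketch of the paradigm Vassos describes: the robot is given the logic
# behind its actions (preconditions and effects on the environment) and a
# high-level control program, rather than a fixed sequence of motions.

class Action:
    """An action the robot can perform: a precondition on the environment
    and an effect that changes it."""
    def __init__(self, name, precondition, effect):
        self.name = name
        self.precondition = precondition  # state -> bool
        self.effect = effect              # state -> new state

pick = Action(
    "pick",
    precondition=lambda s: not s["holding"],
    effect=lambda s: {**s, "holding": True},
)
place = Action(
    "place",
    precondition=lambda s: s["holding"],
    effect=lambda s: {**s, "holding": False, "placed": s["placed"] + 1},
)

def run(actions, state):
    """Execute a sequence of actions, checking each precondition first."""
    for act in actions:
        if not act.precondition(state):
            raise RuntimeError(f"precondition of '{act.name}' does not hold")
        state = act.effect(state)
    return state

# High-level control program: a property of the environment ("placed")
# serves as a control-flow expression; actions are the basic statements.
state = {"holding": False, "placed": 0}
while state["placed"] < 2:
    state = run([pick, place], state)

print(state["placed"])  # prints 2
```

The point of the design is that changing the goal means changing only the control program; the action model is reused unchanged.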
Michael McCourt, president of Stratford, Ont.-based D&D Automation, a firm of engineering specialists providing customized controls and manufacturing intelligence solutions, agrees that we have not yet seen “true cognitive robots” applied in an industrial setting.
“A cognitive robot would need to ‘understand’ its environment and choose the best path to achieve a set goal,” says McCourt. “A truly cognitive working environment would be capable of almost infinite flexibility to build variations of any product…We are seeing robotics with ever-increasing input devices like vision, touch, pressure and auditory inputs that allow for ever-increasing complexity in manufacturing. We are reaching a point where the robotics certainly appear to be cognitive, but the leap to true cognitive robotics or truly cognitive manufacturing is a long way off.”
McCourt says that costs are too high and that the technology just isn’t there yet, particularly where the hardware and software are concerned. “Hardware will have to be capable of changing or ‘growing’ like a human brain to reflect the requirements of the software. I also believe that the software will have to be able to develop new algorithms or ‘thought’ and drive the hardware in the desired direction,” he says. “We will need to work on some range of motion limitations so that robotics are less limited in their ability to do physical work. Without this range of motion, the power of the cognition is wasted. Communications speeds will also need to be significantly sped up in order to bring a series of cognitive pieces together in a collaborative manufacturing process.”
To help make cognitive robots a reality, D&D Automation is working to develop the sensing capabilities and range of motion such robots will need.
ABB, a supplier of industrial robots, is working with what it describes as cognitive robotics to transform manufacturing processes.
“At ABB, we use the phrase ‘cognitive robotics’ to describe robots that can sense and react to changes within the industrial work environment,” says Steve West, ABB’s business development manager, vision technologies. “Traditionally, robots have been limited to specific commands and have been unable to respond effectively to variations within a process. Robots working in factories today are able to utilize capabilities such as vision guidance, force sensing and voice recognition.”
ABB is currently applying what it describes as cognitive robotic technology in the assembly of torque converters. “This process requires a robot to both see and feel,” explains West. “To assemble a torque converter into a transmission requires that the part be seated onto a series of spline gears. Up until a few years ago, only humans were capable of such a task because of our innate ability to sense force, shimmy the part, sense force again, and then finally seat the converter onto a gear.”
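The sense-force, shimmy, sense-again cycle West describes could be sketched as a simple control loop. This is not ABB’s actual algorithm; the force sensor model, spline pitch and force limit below are all assumptions made for illustration.

```python
# Illustrative "push, feel, shimmy, seat" loop for a spline-gear insertion.
SEATED_DEPTH = 10.0   # mm of travel once every spline gear is engaged (assumed)
FORCE_LIMIT = 40.0    # N; higher readings mean the splines are misaligned
SPLINE_PITCH = 15.0   # deg between spline teeth (assumed)

def read_force(angle):
    """Stand-in for a wrist force sensor: resistance is low only when the
    part's rotation lines up with the spline teeth."""
    return 8.0 if angle % SPLINE_PITCH < 2.0 else 50.0

def seat_part(start_angle):
    """Advance when force is low; otherwise shimmy (rotate slightly) and
    feel again, until the converter drops fully into place."""
    depth, angle, shimmies = 0.0, start_angle, 0
    while depth < SEATED_DEPTH:
        if read_force(angle) < FORCE_LIMIT:
            depth += 1.0       # splines aligned: advance the part
        else:
            angle += 2.0       # shimmy: small rotation, then feel again
            shimmies += 1
    return shimmies

print(seat_part(7.0))  # prints 4: four shimmies before the splines align
```

The loop mimics the human strategy: the robot never needs to know the spline alignment in advance, only how to react to what it feels.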
West says that ABB is working with advanced technology using algorithms that give robots the ability to create a hypothesis so that they may interpret a random situation and take the appropriate action.
“Most manufacturers have capitalized on robotic technology to stay competitive, but there are many areas of the factory that remain labour intensive,” explains West. “If you tour factories today, you’ll see incredible automation technology and then walk 15 yards and see something like humans putting wheels on cars manually. We’re still working to solve labour-intensive manufacturing operations using robots. It’s only that these robots must have some ability to perceive and respond to changing inputs, much like humans do all day long without even thinking.”
To achieve this, there are many challenges that have to be overcome. “Aside from limitations in processing speeds, the greatest challenge of cognitive robotics is to avoid point solutions and to instead provide a software and hardware platform that can solve families of problems with a high degree of reliability,” says West.
“Every day we see customers with a desire to automate what are really simple operations for humans, but the technology is not yet available for the application of robots. That’s why we keep funding R&D and believe this to be a good business for now and the foreseeable future.”
“It is possible that as the techniques related to cognitive robotics become more powerful and the implementations of cognitive robots more robust, we will see cognitive robots replace humans in more advanced tasks that currently robots cannot safely perform,” says Vassos. “For instance, a futuristic scenario may involve cognitive robots that consist of modules that come with a representation of their parts and the dynamics of the available mechanical actions they can perform. The modules can then be combined and incorporated into industrial applications by programming only at the level of task specification, while the low-level details are sorted out by reasoning that the cognitive robot does by itself.”
Beyond the technological hurdles that need to be overcome, Vassos says that there needs to be more collaboration between industry and academia to determine the needs of industry.
And, McCourt adds, “We need people who think and dream big.”
What makes up a cognitive robot?
By Stavros Vassos, PhD student, University of Toronto
Cognitive robotics is a design paradigm that, when applied to robotic agents (in contrast to software agents), spans several different fields of research and application, including:
1. The mechanical part of the robot that is responsible for movement and affecting the environment (e.g. mechanical arms, the body, the motors);
2. The software and hardware parts responsible for getting meaningful information from the environment in which the robot is situated (e.g. the hardware and the information processing software for doing feature extraction from visual images and sound);
3. The software part responsible for the representation of the environment and the way that the robot can interact with it (e.g. a logical specification of the properties of the environment as well as how these are affected by the available actions that the robot can perform);
4. The software part responsible for the specification of the task that the robot should do based on the previous representation;
5. The software and hardware parts that make use of the representation of the environment and the specification of the task that the robot should do to compute the behaviour of the robot at any given moment; and
6. The software and hardware parts that provide the interface between the reasoning component and the actual sensors and actuators in the environment.
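If one assumes these six components map onto software interfaces, the breakdown above can be sketched as follows; every class name and the toy counter “world” are illustrative, not taken from any real framework.

```python
# Hypothetical mapping of the six components onto software interfaces.
from abc import ABC, abstractmethod

class Actuation(ABC):        # 1. mechanical part: arms, body, motors
    @abstractmethod
    def execute(self, command): ...

class Perception(ABC):       # 2. extracting meaningful information from sensors
    @abstractmethod
    def observe(self): ...

class WorldModel(ABC):       # 3. representation of the environment
    @abstractmethod
    def update(self, observation): ...

class TaskSpec(ABC):         # 4. specification of the task, over that model
    @abstractmethod
    def satisfied(self, model): ...

class Reasoner(ABC):         # 5. computes the robot's behaviour at each moment
    @abstractmethod
    def next_action(self, model, task): ...

def control_loop(perception, model, task, reasoner, actuation, max_steps=100):
    """6. the interface wiring the reasoning component to sensors/actuators."""
    for _ in range(max_steps):
        model.update(perception.observe())
        if task.satisfied(model):
            return True
        actuation.execute(reasoner.next_action(model, task))
    return False

# Toy instantiation: a "world" that is just a counter the robot must raise to 3.
class CounterActuation(Actuation):
    def __init__(self, world): self.world = world
    def execute(self, command): self.world["x"] += command

class CounterPerception(Perception):
    def __init__(self, world): self.world = world
    def observe(self): return self.world["x"]

class CounterModel(WorldModel):
    def __init__(self): self.x = 0
    def update(self, observation): self.x = observation

class ReachThree(TaskSpec):
    def satisfied(self, model): return model.x >= 3

class StepReasoner(Reasoner):
    def next_action(self, model, task): return 1  # always: increment by one

world = {"x": 0}
done = control_loop(CounterPerception(world), CounterModel(), ReachThree(),
                    StepReasoner(), CounterActuation(world))
print(done)  # prints True
```

Separating the layers this way is what allows a module to be swapped – a new sensor, a new task – while the rest of the robot’s software is reused.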