Process
Automation technicians are constantly challenged to keep instrumentation loops and I/O working at peak efficiency in as little time as possible. Troubleshooting has traditionally required multiple tools, but today's multi-function instruments, such as mA process clamp meters, let technicians perform a wide range of tests with high accuracy and efficiency while cutting down on the number of tools needed to do the job.
Migrating an agrochemical producer's existing installed base of pH analyzers to Endress+Hauser's Memosens and Liquiline technology platforms increased pH data reliability, decreased downtime and increased production, according to Endress+Hauser. Maintenance labour was reduced by 50 percent and consumable usage was cut by 60 percent, producing savings of more than $450,000 US per year.

The agricultural chemical manufacturer produces and supplies agrochemicals for worldwide markets. The manufacturing plant uses pH measurements to monitor and control reactions at more than 30 locations in utilities, incinerators, scrubber effluents, pH quench tanks and wastewater final effluents. Additional pH measurement and control points are installed on the recirculation lines of chemical synthesis reactors to confirm and/or control pH during synthesis.

Depending on the criticality of the process, some pH measurements required closer attention and more maintenance than others. For example, the 80 units installed in the phosphatization and sulfonation processes required calibration twice a week. Each calibration took about 45 minutes, resulting in approximately 120 hours per week, or 6,240 hours each year, spent maintaining calibration.

The agrochemical producer made immediate improvements by replacing its existing pH electrodes with Endress+Hauser Memosens pH electrodes with Teflon reference junctions. The new electrodes came with new electrode holders and cabling technology that eliminated previous reliability problems, such as poor connections and the resulting electrical faults. The Liquiline platform, which supports electrode calibration in the lab rather than in the field, was also installed. With this platform, the customer was able to rotate in pre-calibrated electrodes every two weeks. The new electrodes also lasted longer: several months as opposed to a few weeks.

Because the Liquiline platform extends to the lab, calibration no longer needs to be performed in the field. Electrodes can now be continually cleaned, calibrated and even regenerated under controlled conditions in the lab, and technicians can swap installed electrodes for calibrated ones in a fraction of the time. Memosens technology increased the plant's average usage per electrode from 748 hours to more than 2,500 hours. With these changes in place, yearly maintenance hours decreased from 6,240 to 3,200, a net savings of 3,040 hours.

Through proper electrode, holder and cabling selection, and the use of Memosens and Liquiline technologies, the customer saved more than $450,000 US per year in labour and materials to maintain its pH systems. Numerous opportunities for saving time, cost and materials in many other processes were identified throughout the plant and implemented by replacing outdated analog pH loops with Endress+Hauser's inductively coupled Memosens and Liquiline technology platform.
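As a quick check of the arithmetic quoted above, a few lines of Python reproduce the workload figures using only the numbers stated in the article:

```python
# Back-of-envelope check of the calibration workload cited above.
units = 80              # electrodes in phosphatization/sulfonation service
cals_per_week = 2       # calibrations per electrode per week
minutes_per_cal = 45    # technician time per calibration

hours_per_week = units * cals_per_week * minutes_per_cal / 60
hours_per_year = hours_per_week * 52

print(hours_per_week)   # 120.0 hours per week
print(hours_per_year)   # 6240.0 hours per year
```

Set against the post-migration figure of 3,200 hours per year, this matches the quoted net savings of 3,040 hours.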
A new era of the human-machine interface (HMI) is upon us. Stricter safety and compliance standards have persuaded many operating companies to replace their "traditional" graphics with high-performance HMIs - a move that is sure to make life easier for plant operators everywhere.
A new generation of OPC, or open connectivity in industrial automation, is upon us. The latest specification is a major step forward in both technology and capability. Called OPC-UA, it starts by unifying the multiple specifications of the past, and redefines the abbreviation from OPC (formerly OLE for Process Control - a purposeful implementation of certain Microsoft technology) to OPC-UA (now standing for Open Connectivity - Unified Architecture). Like everything the second time around, this latest generation delivers solutions to problems of the past, adds significant new capabilities and provides a foundation to build on well into the future.

How far it's come

The first generation of OPC developed primarily into three distinct specifications: OPC-DA (designed for real-time Data Access), OPC-A&E (designed for Alarm and Event message access) and OPC-HDA (designed for Historical Data Access). Product developers could pick and choose from these separate specifications and implement what was most suitable for them. Since most of the automation world revolves around real-time data access, it was only natural that OPC-DA became the most widely supported specification, followed by OPC-A&E and finally OPC-HDA.

One of the first goals of OPC-UA was to drive broad and consistent implementation of all forms of data access where possible. For example, while communication drivers may be focused on real-time communications, they also generate status and error messages. Built-in support for the Alarm and Event portion of the specification allows access not only to data, but to all related status messages, through one interface. The same can be said for Historical Data Access. While few real-time communication devices need HDA support, there is a class of device, the RTU (Remote Terminal Unit), that typically offers all three forms of data: real-time data, alarm and event messages and historical data. Again, reliance on OPC-UA provides one interface for these various data types where three were required in the past.

The OPC of 1995 was very Microsoft-centric, leveraging OLE (Object Linking and Embedding), a technology enabling application-to-application communications in Microsoft operating systems. This also involved a technology called COM (Component Object Model) and, when talking from one computer to another over a network, DCOM (Distributed Component Object Model). While it is true that Microsoft dominates most of the machines we see on the plant floor, there is another layer of technology that is deeply embedded: the layer of devices and control systems, typically running an RTOS (Real-Time Operating System). These are designed for performance and compactness, and generally have internal mechanisms very different from a Microsoft operating system. OPC-UA, unlike its predecessor, is designed for portability and is intended to be used in all manner of devices, potentially from a remote sensor all the way to an enterprise dashboard application.

The next-generation OPC-UA is built around what is known as a Service Oriented Architecture (SOA), today's leading approach to software design. A Service Oriented Architecture involves creating programs (or services) that each perform a specific function and expose that function to any remote application with the authority and need for the service. The interface to a service is commonly well-designed and self-describing (typically XML). Hence, an application can connect to a service, pass information in an open and descriptive manner, and receive the service's result in the same open form.
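To make that client/service interaction concrete, here is a minimal read sketch using the open-source python-opcua library; the endpoint URL and node identifier are placeholders rather than anything defined by the specification or this article:

```python
from opcua import Client  # open-source python-opcua package

# Endpoint URL and node id below are placeholders, not a real server.
client = Client("opc.tcp://localhost:4840/example/server/")
client.connect()
try:
    node = client.get_node("ns=2;i=2")      # e.g. a temperature tag
    print("Current value:", node.get_value())
finally:
    client.disconnect()
```

The same session could browse the address space or subscribe to data changes; the point is that one standardized interface serves real-time data, alarms and history alike.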
Additionally, by following the concepts of a Web interface (similar to logging onto a web portal), OPC-UA behaves like any other Web application and is just as easily managed, making it firewall friendly and manageable by system administrators. The ability to define ports for service access and to control traffic is very straightforward and predictable in OPC-UA. This is a major benefit over OPC based on COM and DCOM, which required a great deal of Windows security configuration to enable distributed operation and did not give precise control over PC-to-PC communications in terms of ports, making it difficult to manage through firewalls.

A solution built to fit a problem is far better than general-purpose technology stretched over a niche application. Whereas the first generation of OPC leveraged Microsoft standards, OPC-UA is purpose-built for the needs of the automation industry. This makes it a very effective solution in terms of both performance and security: OPC-UA uses a compressed, binary data transfer and manages secure access through security certificates. Finally, OPC-UA builds on the earlier OPC data specifications while extending them with new complex data types and the ability for clients to access structures of information, maintaining context and data relevance from the plant floor right through to enterprise intelligence applications. Overall, OPC-UA is a much richer and more robust implementation than its predecessors, specifically addressing the needs of the automation industry.

Making it to market

The OPC of years ago was new technology with no history to build on. All users were struggling to make headway and faced the learning curve of a new technology. This go-around will be significantly different. Yes, there are specifications to download, and an implementation can be done from scratch; however, given the interoperability requirements, that route is not recommended. The OPC Foundation has developed a number of stack implementations (the software needed to manage the OPC-UA external interfaces), and joining the Foundation gives access to both the specifications and the reference stacks.

Another likely route to becoming OPC-UA enabled is to leverage complete implementations available from vendors in this space. If your goal is to add an OPC-UA interface to your product, why not license a widely proven solution from a technology provider? This option was not available in the past. This time around, however, there are a number of OPC-UA early adopters that can deliver a fully operational OPC-UA implementation along with other features and benefits that may be required.

Expect the first OPC-UA implementations to come in the form of complete solutions. Interoperability is always the greatest challenge in adopting a new communications standard. Hence, the most reliable solutions will come as a suite of products delivering the new technology, designed and tested as a complete solution. This can come as products from one vendor, or through partnerships in which vendors work closely to deliver the latest technology in a beneficial and reliable manner.

Interoperability will also be better than in the past. Earlier this year, the OPC Foundation introduced its first independent certification lab, located in Germany.
In the past, OPC interoperability was proven through self-certification tests and regular interoperability events. Today, while those earlier mechanisms are still available as a starting point, the strongest proof of interoperability will come through independent certification: send your product to the lab, where a battery of tests will be performed and, after several days of testing and communications analysis, if all goes well, a certification logo will be issued. The logo applies to the specific product tested and is version dependent; a major new version will require re-certification.

What is next?

OPC is a very successful technology that has stood the test of time. It will not fall out of favour quickly, and the OPC we have all grown to know and trust will continue on for many years. OPC-UA will succeed because it delivers exactly what is needed to solve the problems of the past, while delivering significant benefits for the future.

Tony Paine is president of Kepware Technologies. He is also a member of the OPC Foundation's OPC-UA Technical Advisory Committee. For more information on OPC, contact the OPC Foundation: www.opcfoundation.org.
Inductive Automation, a player in the realm of web-based and database-centric HMI, SCADA and MES solutions, has released Ignition by Inductive Automation, which the company says is the industry’s first top-to-bottom cross-platform SCADA system built on OPC-UA.

“With the operating system wars moving into high gear and with the recent widespread adoption of OPC-UA by major software vendors, there was really no other choice than to develop a fully cross-platform, OPC-UA based solution,” said Steve Hechtman, president of Inductive Automation. “Ignition is the Swiss Army knife of the industrial software business,” he continues, “and represents a major step forward in continuing our mission of opening up plant data across the enterprise.”

Ignition builds on Inductive Automation’s experience in the industrial software arena, consolidating and enhancing the company’s previous FactorySQL and FactoryPMI products. “Existing users of FactorySQL and FactoryPMI will feel right at home with the new platform, which is fully backwards compatible,” Hechtman states, “and they will immediately recognize the weight of their feedback in this new release.”

Ignition continues Inductive Automation’s philosophy of open, accessible systems unhindered by traditional licensing restrictions. Unlimited tags, unlimited screens and no-install clients deployed using the company’s Web-Launch technology mean that customers can focus on making data available without being held back by costs. The ability to connect to any number of databases, and to open multiple web-launched design clients at no additional cost, further expands the software’s value.

As a further testament to Inductive Automation’s commitment to open, accessible data, the company also announced that Ignition will offer the industry’s first fully cross-platform commercial OPC-UA server free of charge. The OPC-UA server will offer a variety of drivers at launch - with many more to come - and an open driver API, all at no cost to end users.

Despite Linux’s advantages in security, stability and cost, the company says it has never been a viable option in the industrial sector due to a lack of software choices. It says Ignition changes this by delivering, for the first time, a fully featured industrial software package that offers the same benefits on Windows and Linux. www.inductiveautomation.com
Nine billion dollars. That’s the value of the 761 million gallons of architectural coatings the paint industry shipped in 2006, reports the National Paint & Coatings Association. Valspar Corp. is the third largest paint and coatings company in North America, and the sixth largest in the world, with plant production totalling nearly 250 million gallons of paint annually.

For all plant design, integration, measurement and control, Valspar trusts Meter Maintenance & Controls Inc. (MMCI), a system integrator and technology supplier in Redlands, Calif., that specializes in true turnkey liquid measurement solutions. MMCI has set up or retrofitted the Valspar plants in Wheeling, Ill., Sacramento, Calif., Lebanon, Penn., Statesville, N.C., and Garland, Texas, to name a few. New plants receive a top-to-bottom paint blending and batching system, with everything from the piping, to the electrical, to the process equipment and programming supplied, installed and programmed by MMCI.

To handle the paint blending process in each of these plants, MMCI recommends Emerson Process Micro Motion flow meters. These flow meters measure mass flow, volume flow, density and temperature, and provide precise control measurement of the various ingredients that are blended together to create a given batch of paint.

From a management and operations standpoint, Valspar wanted a system that would allow the entire enterprise to be integrated, from the plant-floor controls to the information systems. Plant operators need diagnostic information for monitoring the process and for identifying maintenance needs or problems on the line, without requiring that the operator be trained on the control system. The laboratory also needed access to this information for quality control and trending.

As a loyal Rockwell Automation customer, MMCI chose a Rockwell Automation Process Automation System (PAS) to extract data from the flow meters. As each flow meter batches a raw material into a mixing tank, the process variables are recorded by RSSql and ultimately presented to the Valspar operators in a Rockwell Automation RSView human-machine interface (HMI). In RSView, an alarm system with predetermined setpoints alerts the operator when triggered and provides cues so that the proper action can be taken. These process variables are also pulled into RSSql to give Valspar’s laboratory access to historical data for all past batches.

The challenge

With the ideal equipment selected for the plant, what remained was a networking problem. Emerson Process developed the HART (Highway Addressable Remote Transducer) multi-drop protocol in the 1980s, so naturally the Micro Motion systems are programmable and communicate via that protocol. HART is a highly accurate and robust protocol, making it ideal for the process industries, but to push this diagnostic information through to the HMI, MMCI needed to convert the data somewhere along the line to Rockwell Automation’s EtherNet/IP protocol. “Rockwell Automation Ethernet communication is like the golden child. With other providers of HART interfaces we have used, we have needed to use an OPC server to collect and distribute the information, which required that we write our own code,” said Mike Smith, programmer and systems engineer for MMCI.
“We just needed a way to communicate between our ControlLogix® PAC (Programmable Automation Controller) and the flow meters.”

The solution

Terry Davis, president of MMCI, approached Tom Thuerbach, branch manager for Royal Wholesale Electric in Riverside, Calif., to help him find a solution. Thuerbach recommended ProSoft Technology’s EtherNet/IP-to-HART multi-drop communication gateway (5207-DFNT-HART). "I know Terry Davis demands high-quality, reliable products for his customers. It was an easy decision to recommend Rockwell Automation and ProSoft products," Thuerbach comments.

Som Chakraborti, business manager of Process Automation at Rockwell, adds, "EtherNet/IP is core to Rockwell Automation's Integrated Architecture that helps end-users like Valspar Corp. converge industrial and business technologies plant-wide. ProSoft's gateway offering leverages the EtherNet/IP backbone to create a powerful process control application that can easily communicate with other plant-floor and information systems."

“MMCI has been using Rockwell and ProSoft products for years…possibly since we first started as a company in 1989,” Davis comments. “We use ProSoft’s Modbus ControlLogix cards all the time, so it was a no-brainer. Now we try to use their HART gateway in all the paint plants we work in, and have plans to apply it in many other industries we serve. Just recently MMCI replaced a Pepperl + Fuchs HART multiplexer system with ProSoft’s in the Statesville, North Carolina facility. We were glad to find a modern solution for an old communication platform.”

In all, MMCI has set up five plants for Valspar, with each project involving anywhere from 30 to 50 flow meters. In a typical application, MMCI links all of the HART flow meters to a single ProSoft gateway. The gateway routes the data over Ethernet to the Rockwell Automation ControlLogix PAC, acting as a bridge that allows the Process Automation System to communicate seamlessly with the flow meters. Once data is extracted from the meters, it can be distributed to RSSql and RSView.
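For a feel of what that data path looks like once the gateway has mapped HART variables into controller tags, here is a minimal polling sketch using the open-source pycomm3 library; the IP address and tag names are hypothetical, since the actual tag layout depends on how the gateway data is mapped into the PAC:

```python
from pycomm3 import LogixDriver  # open-source EtherNet/IP client

# IP address and tag names below are hypothetical placeholders.
with LogixDriver("192.168.1.50") as plc:
    for tag in ("FM01_MassFlow", "FM01_Density", "FM01_Temperature"):
        result = plc.read(tag)
        if result.error:
            print(tag, "read failed:", result.error)
        else:
            print(tag, "=", result.value)
```

In the actual installation it is the ControlLogix PAC and RSSql/RSView that consume these tags; the sketch simply shows that once the ProSoft gateway has done the HART-to-EtherNet/IP conversion, the meter data is ordinary controller data.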
The results

The greatest benefits of the new system are streamlined efficiency, simplified monitoring and operation, and the creation of a quality control process for preventative and predictive maintenance.

“Our plants are happy with the feedback that we are now receiving from our meters,” says Mike Dimaggio, director of engineering for Valspar, working out of the North Carolina facility. “Using this information we have been able to modify our preventative maintenance plans to stay ahead of any issues before they occur. For example, we began changing out filter bags before the pumps and meters. In the past, if the bag wasn't changed out, we would reduce the flow to the point that we would have meter inaccuracies. Now that the system tracks this data, we have been able to see how often we should be changing these bags to avoid any errors when batching, and are able to act before an error occurs.”

“Also, in the past if someone had a theory that a metering problem caused a quality issue with a batch,” Dimaggio continues, “we could not prove or disprove them. We had to look at the meter the next time it was used. Now, with data stored several times per minute for each meter charge, we can go to the real data from the questioned charge and either prove or disprove the theory. The ability to avoid meter inaccuracies will definitely help us from a quality standpoint.”

Dale Simmons, lead engineer for Valspar, working out of the Wheeling facility, continues, “With the HART system we can track and standardize flow rates of materials between sites. We also use the density outputs to monitor solids levels in our slurry tanks. Logging the history enables us to track line clogs and take preventive action. In several situations we have used the historical HART data in conjunction with RSSql to troubleshoot issues that have occurred within the batching system itself... meters faulting out, misdirected flows, incorrect RSSql transactions, and more.”

From a monitoring and operations standpoint, the process allows anyone in the plant, at any given time, to view activity on the floor, the watchdog timers set up by MMCI, and any other critical information. This saves Valspar the money and time associated with hiring and training employees, plus the rework and maintenance that would otherwise have to be done by a technician. The system is user-friendly, and because the measurement system is so accurate, it nearly runs itself; downtime is mostly eliminated. “I know Valspar appreciates not having to call us out there every time they run into a maintenance hiccup,” Davis notes, “though the systems still operate without issue today.”

Adrienne Lutovsky is a public relations specialist with ProSoft Technology Inc.
Statistical process control (SPC) has been used for many years to continuously monitor and improve manufacturing processes. However, there is a limit to the level of understanding traditional SPC quality control can provide. In many cases, users can determine that something has changed, and most times even what has changed, but many engineers struggle with determining and proving the why - and the time it takes to diagnose the problem is often significant. In many highly regulated industries, such as medical device manufacturing, standards bodies are pushing toward extremely high levels of process understanding. Key to this understanding is the concept of a “process signature.”

What is a process signature?

Imagine a mechanical press that takes half a second to insert one part into another. During that half second, there will be variation in the distance travelled by the press, the force applied and the rate of movement. If sensors are connected to collect this data, a complex multi-dimensional waveform can be measured. This is an example of a process signature: thousands of data points gathered over a short time period.

Across the plant floor, process signatures are constantly being generated by every manufacturing operation. The amount or type of data does not matter; the key is that the process signature data is tied to both the part under manufacture and the testing environment. The origin of the data does not matter either: it could come from analog sensors, digital I/O measured states, remote sensors, data in a PLC memory or other controller, or even signals from the product under test itself.

A process signature captured for one part gives a scientifically objective and complete representation of the test or process data for that specific part. As more parts run through the test, more signatures are captured, and eventually there is a library of signatures that characterizes the process and the product. Patterns begin to emerge. Quality and manufacturing engineers are able to visualize the control limits needed for proper quality control and compare them against their calculations. At that point, process signature verification technology can be put to work.

Process signatures provide the most complete understanding of the manufacturing process, and they also provide a solid platform for the innovation demanded in today’s competitive marketplace. This article describes the role of process signatures in quality manufacturing and the characteristics of a successful process signature implementation. The benefits of process signature technology affect production in multiple ways, from consistent quality of manufacture to the ability to locate bottlenecks and improve yield.

What is process signature verification?

Not all parts will have the same process signature. Process signature verification (PSV) is the comparison of a process signature against a set of control limits - control waveforms with upper and lower bounds that create a “pass” envelope for a good operation. The verification process ensures that a pass or fail decision is based on the process signature lying completely within the bounded envelope. Most quality engineers would agree that process signatures provide the best solution for quality manufacturing. Where many solutions fail, however, is in implementation: applying the techniques of process signature verification in the real world is a complex task.
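As an illustration of the idea (not of any particular vendor's algorithm), the sketch below builds a pointwise envelope from a set of known-good waveforms and verifies new signatures against it; the 3-sigma margin and the file name are assumptions made for the example:

```python
import numpy as np

# Rows of good_signatures are waveforms from known-good cycles,
# resampled onto a common time base. The file name is a placeholder.
good_signatures = np.loadtxt("good_press_cycles.csv", delimiter=",")

center = good_signatures.mean(axis=0)
margin = 3 * good_signatures.std(axis=0)   # assumed 3-sigma band
upper, lower = center + margin, center - margin

def verify(signature: np.ndarray) -> bool:
    """Pass only if every sample lies inside the envelope."""
    return bool(np.all((signature >= lower) & (signature <= upper)))
```

A production system layers much more on top (envelope learning, limit tuning, traceability), but the pass/fail core is this whole-waveform containment test.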
As an example, Ottawa-based Sciemetric Instruments has spent many years of R&D effort working in real-world customer environments, with the ultimate goal of letting operators control the power of PSV with the single press of a button.

A successful test station

Sciemetric has found that the best PSV solutions originate from a blend of hardware support and flexible software design. Hardware support involves the reliable collection and storage of thousands of data points per second. Data can come from plant networks over digital I/O, through protocols such as OPC, from digital encoders or from analog sensors. A test solution must be able to synchronize all of these inputs and collect data at the resolutions needed for proper signature verification. For example, Sciemetric’s sigPOD PSV provides a 16-bit A/D converter, which can sample at high frequency in order to capture every nuance of a process. Remember, a process signature can be generated from any manufacturing or test process, so a successful solution must support custom connection equipment (such as a fastening machine) or any other test station that can generate data.

From the software standpoint, there are two important measures of success. The quality engineer must have the tools to dive as deeply into the scientific modeling environment as needed in order to find manufacturing problems and make corrections. And the operator must be able to leverage the power of PSV as simply as possible. Let’s illustrate these points with a mechanical press example:

• Our test station is connected (all sensors and PLC connections made) and ready for calibration for our first press. The operator runs through a series of “good” presses so that the test station can “learn” the process signature of a good press. All control waveform envelopes are automatically generated (see Figure 1) and ready for PSV. After ten minutes of setup, the operator is ready for full production.

• The quality engineer is now able to analyze the press operation from afar and graphically view the process signatures. The first analysis may be SPC verification. The ability to convert process signature information to SPC data sets (median values, histogram data, Cpk values and sigma levels) is key to understanding the design process; a minimal example of one such statistic follows below.
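As a reminder of what one of those SPC statistics is, this sketch computes the textbook Cpk from a scalar feature extracted per part (peak press force, say); the feature extraction itself is assumed, not shown:

```python
import numpy as np

def cpk(samples: np.ndarray, lsl: float, usl: float) -> float:
    """Textbook process capability index for one signature-derived
    feature (e.g. peak press force per part) against spec limits."""
    mu = samples.mean()
    sigma = samples.std(ddof=1)          # sample standard deviation
    return min(usl - mu, mu - lsl) / (3 * sigma)
```

In a signature-based system, such statistics are derived directly from the stored signatures rather than from separately collected gauge data.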
We now have a successful test station implementation. We could end our story here - but a process signature is a powerful thing. It is a reflection of a period of time for a specific part under test at a specific test station. We can now analyze multiple parts under test in more detail, in different combinations and scenarios.

Streamline production, increase yields

Figure 2 shows process signatures from 1,000 parts captured at a specific leak test station. Four types of signature groupings can be seen:

1. Compliant: well-behaved signatures, tightly grouped and normal. These parts are a pass.
2. Not compliant: obvious failures, far away from the norm.
3. Almost compliant but still a failure: true failures that deviate only slightly from the norm, impossible to catch with classic systems.
4. Almost compliant and flagged as failures: marked as defective, but are they really defective parts? These parts are perfect candidates for control limit tuning (described below).

To determine whether they are true failures, a quality engineer can run a simulation using only Group 4 parts to see their unfiltered process signatures across the entire production line, on all test stations. Did they fail other tests? Does this specific test station have too tight a control limit? One of the parts could be pulled and destructively tested for confirmation. If a change is warranted, new control limits can be sent down to the test stations.

Increasing quality means increasing our understanding of the why behind any defect. Process signatures give us that why, and trends in production processes now become apparent. Suppose a part that passed test is later deemed a failure in the field. Finding the root cause of the failure now becomes an objective science. Was a control limit set too loosely? Is a production machine beginning to show signs of failure? The quality engineer can isolate and highlight any anomalies. Once the root cause is found, the resolution of the problem opens new doors of quality control, such as targeted inventory recalls.

A platform for innovation

Competition demands that innovation in the design lab be matched by innovation on the factory floor. The use of process signatures provides confidence that design changes are being properly reflected in manufacture. The benefits to manufacturing systems are evident, but process signature technology is dynamic and limited only by the type and quantity of available data. Quality managers with an eye on the future will see ways to manage today’s risk while introducing new capabilities at times of their choosing.

David Mannila is a senior product manager at Sciemetric Instruments Inc. in charge of the sigPOD PSV and QualityWorX product lines. Randy Martin is a technical writer at Sciemetric with more than 15 years of experience with embedded real-time software in the process control industry.
Engineers at Moxa have been designing communication network solutions that satisfy the strict, manifold requirements of industrial automation for over twenty years. Their equipment enables power utilities to deliver uninterrupted, reliable electric power to the public, even under harsh environmental conditions. But Moxa’s latest power substation automation system - an IEC 61850-3-certified, 18-port embedded computer - presented some new challenges for the company’s designers.

Moxa wanted to build a platform for substation automation that could handle a large number of LAN and serial ports while withstanding high temperatures in a fanless, 1U standard rack-mount form factor. “We also had to meet rigorous electromagnetic interference (EMI) testing requirements for IEC 61850-3, a specification governing communication networks and systems in substations,” says Moxa European business development manager Hermann Berg. “Our EMC/RFI shielding technology and purpose-built L-type heat sink, which takes heat out to the side rather than the top or bottom, combined with our prior experience with Intel architecture processors, enabled us to develop this stackable computer.”

The new DA-681 Series rack-mount embedded computer is designed to service the communications traffic generated by as many as six Ethernet ports and a mix of twelve RS-232/RS-485 ports. This high level of I/O capacity and flexibility is needed as power substations transition from analog to digital, which requires integrated communications and control systems for managing the varied equipment inside a substation. Moxa is a leader in industrial serial communication, using its own serial technology to serve the most diverse and demanding requirements. Using Intel’s CPU technology, Moxa was able to build an “industrial off-the-shelf” computer system that stands up to the extreme environmental conditions of the power substation.

Faster processing with less heat

The Moxa design team chose Intel’s mobile product line to power the DA-681 because it offers high levels of computing performance while enabling a fanless design. To further decrease power consumption, the DA-681 automatically throttles (reduces) the operating frequency of the processor if the system runs hot, through Moxa-designed BIOS features. “With Intel processors, our energy customers have the computing headroom to run pre-installed operating systems, like Linux, Windows CE 6.0 or XP Embedded, in addition to executing the many protocol stacks, protocol conversion routines and data pre-processing algorithms needed to monitor and control power systems,” says Mark Liu of Moxa.

Measures of success

• IEC 61850-3 certification demonstrates compliance with energy industry-standard requirements, such as environmental and EMI immunity, for communications networks and systems in substations.
• The fanless, high-temperature design increases power system reliability and stability, based on Moxa’s extensive experience in building industrial-grade networking and computing equipment.
• Moxa’s five-year product warranty assures utility operators of reliable performance for years to come, facilitated by Intel’s seven-year life cycle support.
• EMC (electromagnetic compatibility) protection, enabled by Moxa EMC/RFI shielding technology, ensures critical functions experience no delays or data loss when exposed to various EMI disturbances.
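As a rough illustration of how application code on such a box might poll a field device over one of those twelve serial ports, here is a sketch using the pyserial library; the device path, framing and request bytes are assumptions (Moxa's Linux platforms typically expose their UARTs under names such as /dev/ttyM0, but check the platform documentation):

```python
import serial  # pyserial

# Device path, framing and request bytes are placeholders for whatever
# protocol the connected substation device actually speaks.
port = serial.Serial("/dev/ttyM0", baudrate=9600, timeout=1.0)
try:
    port.write(b"\x01\x03\x00\x00\x00\x02")  # example request frame
    reply = port.read(16)                    # read up to 16 bytes
    print("Reply:", reply.hex())
finally:
    port.close()
```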
Integrating multiple control units

Utility operators are looking for reliable monitoring solutions that perform many control functions in a single, secure box such as the DA-681, which can be used to automate power distribution and to monitor substations and service cabinets. “Instead of dedicated communication units, some power substations still use separate control units with proprietary, non-integrated data acquisition, analysis and handling mechanisms,” says Berg. “These aging units can be highly susceptible to frequent communication shutdowns and complicated maintenance procedures, and may not maintain stable and reliable operations.”

Simplifying energy application development

Designed to meet the real-time demands of energy substation applications, the DA-681 runs Linux, WinCE 6.0 or Windows XP Embedded (pre-installed) and provides a friendly environment for developing sophisticated application software. “We offer a ready-to-run software platform, based on energy industry standards, with easy-to-use serial communication technology to significantly reduce system development effort and time,” says Liu. “This is particularly helpful for power automation system integrators, as they no longer need to develop the network from the basic hardware layer.”

Renewable energy trends

“The move from traditional coal-fired power plants to renewable energy sources is well underway and is expected to accelerate considerably over the next decade,” says Berg. “In particular, solar power has been recognized as a viable alternative, and in recent years a number of regions in both North America and Europe have enacted so-called FiT (feed-in tariff) legislation that allows individuals to sell solar-generated power to their local power utility.” At the same time, industry experts predict that a number of large-scale solar power plants will emerge and sell power to consumers through existing power grids. Moxa’s embedded computers are expected to enable both efficient wind or solar power plant operation and the integration of power substation equipment into the electricity network operator’s smart grid. www.moxa.com www.intel.com
Implementation of a comprehensive automation and control solution from Emerson Process Management is contributing to dramatic operational improvements at the Porcheville, France, thermal power plant. The solution incorporates the company’s PlantWeb digital plant architecture with the Ovation expert control system, Scenario simulation, AMS Suite predictive maintenance software and Smart Wireless technology, as well as its SureService customer support program. The measurable improvements - including a 50 percent improvement in the plant’s ability to ramp up and down on demand - are translating into economic benefits for the plant’s owner, EDF Group, while also helping to maintain grid stability in France.

The Porcheville plant is located in the Yvelines region along the Seine, roughly 30 miles west of Paris. The plant entered service in 1968 as a four-unit, 2,400 MW (4 x 600) oil-fired peaking plant. More than a decade ago, EDF Group, the top electricity producer in Europe, shuttered units 1 and 2 due to the high cost of fuel and surplus capacity in France’s electricity market. Several years ago, in response to growing demand for electricity in France, EDF Group began an ambitious renovation program at Porcheville designed to boost the plant’s generating capacity and responsiveness.

Improving the plant’s ability to ramp up and down swiftly and accurately over a broader megawatt range enables Porcheville to compete for ancillary service contracts from the government. This not only translates into additional generating revenue, but also enables EDF Group to avoid the financial penalties that would be incurred if it were unable to fulfill its contractual obligations. From a broader perspective, the plant’s ability to nimbly meet the nation’s growing appetite for power helps maintain grid stability during swings of demand, frequency and voltage.

The automation and control portion of the project called for Emerson to replace obsolete Control Data analog controls with its Ovation control system on all four units. Each unit’s Ovation system consists of a dedicated controller to manage operation of the Alstom steam turbine and three controllers for monitoring critical data associated with the boiler and balance-of-plant processes. The Ovation system will also receive additional tank level measurements from Emerson’s Smart Wireless network, soon to be installed. Smart Wireless extends PlantWeb predictive intelligence into areas that were previously out of physical or economic reach, opening the door to new possibilities in process improvement.

Despite the project’s demanding requirements, including modernizing two units that had been mothballed for more than a decade, Emerson was able to install, test and start up the control systems within the separate 18-day windows allotted for each unit. Meeting this aggressive timetable was important, as it helped EDF avoid financial penalties associated with project delays.

The results of the control system upgrade have been dramatic. Prior to the upgrade, EDF was only able to ramp the Porcheville units by +/- 60 MW when requested by the grid. Since commissioning, Ovation has boosted Porcheville’s maneuverability to +/- 90 MW, giving EDF the opportunity to win additional ancillary service contracts that positively affect its bottom line. Meeting the increased demand for power production capacity is EDF’s greatest concern.
Even after the units had been out of service for a decade, Emerson’s automation solution enabled EDF to bring the Porcheville units back online in record time. After more than two years of operation, EDF is confident that it can reliably produce power when called upon during peak demand periods. This capability has translated into increased revenue potential for EDF, as well as vastly improved plant performance and equipment safety.

At the Porcheville plant, Emerson’s AMS Suite: Intelligent Device Manager works with the Ovation system to manage smart field devices, resulting in improved production reliability. Maintenance personnel can access predictive diagnostics on all HART instruments, including Emerson’s Micro Motion mass flowmeters and Rosemount OXYMITTER 4000 transmitters, as well as the Fisher® DVC 6000 digital valve controllers installed throughout the plant.

As part of the comprehensive project, EDF is also taking advantage of Emerson’s Scenario simulation solution. When it is up and running later this year, the simulator will be used to train new and existing operators, strengthening their skills and their ability to respond to varying situations, which will further enhance plant performance. To help ensure that the plant continues to take full advantage of the latest technologies, EDF is also using Emerson’s SureService customer support program, which offers access to comprehensive technical support and Ovation software upgrades for five years.

"A project of this scope and complexity demonstrates how Emerson is uniquely positioned to provide power generators with a comprehensive plant automation and control solution that incorporates not only the control system itself, but also simulation, field instrumentation, wireless technologies - and more," said Bob Yeager, president of the Power & Water Solutions division of Emerson. "It’s this ability that sets Emerson apart from other vendors who lack the technology, expertise or resources to successfully complete such a complex project within an aggressive timeframe." www.emersonprocess.com
Located on the picturesque shores of Lake Superior, Thunder Bay, Ont., is a growing community. Recently ranked one of the top ten cities for business in Canada, its population is likely to keep growing beyond the 120,000 citizens who live there today. Providing safe drinking water is a municipal priority. To do that, and to protect the environment, Thunder Bay set a goal of implementing "lake-to-lake" water management: taking water from Lake Superior through the treatment process to the distribution system, and then back through the pollution control plant before returning it to the environment. In less than a decade, Thunder Bay has succeeded.

A New Plant with an Entirely New Process

To achieve "lake-to-lake" water management, Thunder Bay constructed an entirely new facility, the first of its kind. While the previous plant used direct filtration with sand filters and disinfectants, the new Bare Point Water Treatment Plant uses an advanced ultrafiltration system to purify the city's water, while expanding daily capacity from 14 million gallons to 25 million gallons.

With an all-new facility and an aggressive timeline, the City of Thunder Bay called on Wonderware Canada East, the local Wonderware distributor, and Canadian system integrator Automation Now to assist. Challenges included integrating an existing pumping station with the new plant equipment, as well as planning for future expansions. The initial facility had 12 PLCs, with 20 additional remote pumping stations to come that would incorporate PLCs from different manufacturers. Communications between the local PLCs and the remote locations would be vital to the success of the project.

The Clear Choice

Without the ability to closely monitor and control this complicated system, the quality of Thunder Bay's water would be at risk. So it was critical to find the right SCADA (supervisory control and data acquisition) system - one versatile enough to meet the needs of the new facility and its future expansion. Bare Point required accurate, real-time data gathering to ensure reliable control of the plant's equipment, regardless of location. Recording and logging the data, sounding alarms on threshold conditions and securely storing information were also priorities. The new system needed to be easy to use and to provide comprehensive reports for informed decision-making by management. After evaluating the options, the Wonderware solution was recommended and approved.

The Bare Point plant is controlled by a Microsoft Windows-based system using Wonderware Terminal Services software located in the operations center of the main plant. Redundant servers with UPS backup log over 5,000 points of data, 24 hours a day, seven days a week. The award-winning Wonderware InTouch human-machine interface (HMI) software forms the core of the Bare Point solution. In the application design phase, it provided power and flexibility, as well as connectivity for the broad range of devices in the local and remote plant locations. Now the InTouch software enables operators to closely monitor pumps and control valves, and its graphics let them visualize the water moving through the plant.

Working with the InTouch software, the Wonderware Historian provides a high-performance, real-time and historical database to integrate the operations center with the plant floor. As an extension of Microsoft SQL Server, Wonderware Historian collects comprehensive Bare Point operating statistics while reducing the volume of data that must be stored, and it integrates this information with event, summary, production and configuration data. Its scalability is ideal for accommodating Bare Point's plans for growth.
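Because the Historian sits on SQL Server, plant data can be pulled with ordinary SQL tooling. The sketch below is a generic illustration using Python's pyodbc; the server name, credentials and the Runtime/History schema shown are assumptions that vary by Historian version, so treat every identifier as a placeholder:

```python
import pyodbc  # generic ODBC client for SQL Server

# Connection details and schema below are placeholders; Wonderware
# Historian installations differ in server name, database and views.
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=historian-server;DATABASE=Runtime;"
    "UID=reader;PWD=secret"
)
cursor = conn.cursor()
cursor.execute(
    "SELECT DateTime, TagName, Value FROM History "
    "WHERE TagName = ? AND DateTime >= ?",
    "Pump01.FlowRate", "2009-01-01 00:00:00"
)
for row in cursor.fetchmany(10):
    print(row.DateTime, row.TagName, row.Value)
conn.close()
```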
For desktop-based analysis and reporting, Wonderware ActiveFactory software - part of the Wonderware ArchestrA architecture - was designed into the system. With ActiveFactory, Bare Point's process engineers can spot specific trends in real time and prepare historical reports that can be exported to Microsoft Excel. Simple point-and-click dialogs mean that plant operators can troubleshoot problems and identify operational inefficiencies easily and quickly.

Results in Record Time

Wonderware software and its intuitive interfaces made the design, installation and testing move forward rapidly. Once Automation Now was on board for the project, with support provided by Wonderware Canada East, Bare Point was operational within one year. Today, engineers enjoy end-to-end control of plant processes. The easy-to-learn graphical interface enables employees in the operating center to see a real-time representation of the capacity of water moving through the facility, and they can control the process and monitor error and fault codes from all of the PLCs. When they leave the operating center, SCADA terminals throughout the plant provide access to the Wonderware system wherever they may be working.

Plant operators rely on Wonderware SCADAlarm as an indispensable tool for maintaining water quality. If an instrument takes a reading that is outside a predetermined range, an alarm sounds - both on the SCADAlarm screen and on a plant-wide alarm system. Redundant servers secure plant data and store it for retrieval in the event of a failure. And the Wonderware Historian's reporting capabilities enable management to maximize plant efficiency and accelerate expansion plans.

One of the unique features of the new Bare Point plant is its training facility, where instructors project live views of operations, providing a highly productive environment for learning, group analysis and troubleshooting.

Proper Planning Ensures Payback

Return on investment has come in record time. Real-time reporting has enabled more effective regular maintenance and reduced downtime, and historical trending reports have led to greater visibility and increased operational efficiency. But the biggest ROI is anticipated as remote stations are added: Automation Now expects development time for these additions to be cut in half. This means efficiencies will be realized during expansion, and the money saved is projected to provide payback within the next two years. With a forward-looking team and the Wonderware solution, the new Bare Point Water Treatment Plant has quickly established itself as a technologically advanced and environmentally conscious facility bringing clean water to the City of Thunder Bay. www.wonderware.com
The requirement for some products to be "leak-proof" can become quite burdensome for manufacturing engineers without access to adequate leak testing expertise. Far too often, leak testing technology is poorly integrated into assembly operations, bolted on as an afterthought to the assembly line design. As a result, those assemblies rarely meet Gauge Repeatability & Reproducibility (Gauge R&R) requirements for leak detection - a measure of measurement stability, ensuring a tester gets the same result each time the same part is measured. Poorly integrated technology also slows manufacturing operations down considerably, with significant, though often unrecognized, bottom-line impact. The following is a review of the basic approaches to leak testing, including the pros and cons of each method.

Helium testing

Whenever there is a need to segregate highly noxious or otherwise hazardous substances, the cost of leak testing becomes secondary to the exacting standards the testing process must achieve. Many aerospace components, for example, contain highly flammable liquids and gases, and their manufacturing operations need to verify that each and every part meets the tightest tolerances for leaks. In applications where the leaks to be detected are below 0.001 standard cubic centimetres per second (sccs), the use of a helium mass spectrometer is often advised. Helium testing usually involves pressurizing a test part with helium or a helium/air mixture inside a test chamber. The chamber is then evacuated, and a mass spectrometer samples the vacuum to detect ionized helium.

The biggest plus of this method, and usually the one that compels its use, is its accuracy and reliability. Helium mass spectrometers can measure leakage as slow as 10^-6 sccs. This accuracy is largely due to the sensitivity of the mass spectrometer sensor itself, which operates under hard vacuum conditions and is essentially counting ions. These units offer reliable and consistent leak detection for leak rates below 10^-3 sccs. There is, however, a downside: helium testing is quite costly, often double the cost of other methods. Test chamber and test circuit components are very expensive, especially for testing large part volumes. And compared with plain air, the helium itself adds an expense. These costs are usually prohibitive for any application that does not require testing for leaks below 0.01 sccm.

Pressure decay testing

Another leak testing option is built on measuring the rate of pressure decay. In this method, a reference volume is pressurized along with the part to be tested, and a transducer reads the pressure differential between the non-leaking reference and the test part over a period of time. There is no direct measurement of leakage rate with this method; rather, time/pressure readings are converted into leak rate calculations. The biggest plus of pressure decay technology, especially when compared to helium testing, is its significantly lower cost; for one thing, the cost of the helium gas is eliminated. However, the lower cost of pressure decay instruments can be extremely misleading, as the overall testing time may drag on assembly line speed: a pressure decay method inherently requires two measurements and elapsed time between them. In typical assembly operations, pressure decay testing can increase per-part assembly time by 20 to 40 per cent.
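The conversion from measured pressure drop to an equivalent leak rate is, in its simplest common form (assuming an ideal gas at constant temperature), the following, where $V$ is the total pressurized volume of part plus test circuit:

$$Q \approx \frac{V\,\Delta P}{P_{\text{atm}}\,\Delta t}$$

Here $\Delta P$ is the pressure drop observed over the elapsed time $\Delta t$, and $P_{\text{atm}}$ is standard atmospheric pressure; with $V$ in cubic centimetres and $\Delta t$ in seconds, $Q$ comes out in sccs. This dependence on $V$ is also why the part and circuit volumes must be known accurately, a difficulty discussed below.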
In addition, the time lapse between measurements in a pressure decay test can be far more troublesome than assembly speed alone would suggest. A two-step measurement process, coupled with the time lapse needed between measurements, significantly increases the potential for measurement error. Measurements are highly vulnerable to changes in plant conditions such as drafts or temperature fluctuations, and there are difficulties in determining the actual volume of the test parts and test circuits, both of which must be known in order to calculate results. For similar reasons, pressure decay methods are impractical for leak testing parts with very large flow leaks: if the pressure drops too rapidly, it cannot be measured accurately.

Mass flow testing

The downsides of the aforementioned methods make them an unlikely choice in the lion's share of test-centric assemblies, where cost, accuracy and speed are paramount and issues of liquid/gas toxicity are not present. Mass flow sensing offers the most accurate, reliable and cost-effective leak testing method for nearly all applications where leaks greater than 0.5 sccm must be detected, as well as many applications with tolerances in the 0.3-0.5 sccm range.

Unlike pressure decay methods, mass flow methods use a single-step, direct measurement: leakage flow is directed across a heated element, and the heat transferred to the flowing gas is measured. Temperature-sensitive resistors measure the temperature of the incoming and outgoing flow, and the transducer creates an output voltage proportional to the mass flow, yielding the leak rate measurement. In the mass flow method, a part is pressurized and any leakage is compensated naturally by air flowing into the part from the source, which can be a reference volume reservoir pressurized along with the part or an air-supply line with pressure controlled by a regulator.

Testing tips

Knowing which method to use is the first step in building optimized test-centric assemblies. However, the specific way the selected method is implemented is also critical. It is quite common, for example, to select leak testing instruments with generic sensing devices that are not customized to an application. This approach is inherently flawed. Remember that it is not the cost of the testing instrument that matters, but the cost of the overall testing process. The Gauge R&R of a testing instrument in isolation is not an especially meaningful measurement; it is the actual Gauge R&R of the test during real assembly that counts. Fixturing on the parts being tested is usually of equal or greater importance than the leak testing instrument itself. For that reason, reputable manufacturers of leak testing technology will provide in-house engineering expertise to customize instruments and optimize tooling and fixturing design for test-intensive operations.

Perhaps the most important consideration in building test-centric assemblies is sourcing design and test engineers with proven expertise in leak testing methods. Engineering firms that specialize in leak testing will usually provide free application evaluations, with recommendations for the best assembly and test methods in terms of speed, accuracy and cost. It is not uncommon for re-engineered test-centric assemblies to cut testing time by orders of magnitude. This potential for throughput gains cannot be ignored.

Jacques Hoffmann is founder and president of InterTech Development Company, which specializes in automated leak and functional testing.
Jacques Hoffmann can be reached at 847-679-3377.
At the foot of Nursewood Road in Toronto's historic Beaches neighbourhood lies a unique art deco-style building, which Canadian author Michael Ondaatje called a "palace of purification" in his novel In the Skin of a Lion. Passersby often mistake this "palace" for a museum or an old university. It's actually the R.C. Harris Filtration plant, Toronto's largest water treatment facility, producing almost half the drinking water Torontonians consume daily.