Jeremy Pollard

Jeremy Pollard has been in the industrial automation industry for more than 25 years. He has worked as a systems integrator, consultant and educator.

Tuesday, 26 January 2016
Jan. 26, 2016 - In 1995ish, I developed and moderated a software track for ISA Tech, the first and only one dedicated to the conference rather than the exhibits. Rich Ryan (who was with Rockwell Automation at the time) and I had a discussion on the future of automation software. He said that automation companies would become more I/O-centric and that, instead of hardware pulling software along (as programming software does), software would pull hardware.
Monday, 24 November 2014
Nov. 24, 2014 - I reviewed Radica’s Electra back in 2006. The fact that the product is still active says something about the company and the relevance of the product.
Friday, 24 October 2014
Oct. 24, 2014 - If you want to be a programmer, but don’t want to spend the money on the tools, you have some options.
Monday, 29 September 2014
Sept. 29, 2014 - AutomationDirect’s “Point of View” (POV) Windows-based software is a feature-rich industrial HMI with SCADA tendencies, designed for machine control as well as plant-floor enterprise applications. To be clear, AutomationDirect has licensed the core of a third-party technology and added value to it to create POV.
Wednesday, 18 June 2014
Dream Report, from France-based Ocean Data Systems, is marketed as a real-time report generator solution for industrial automation. It is interesting to note that the company is trying to get Dream Report recognized as the global standard for reporting in automation. This is a new development and, while it is unclear to me what this really means, the technology is worth examining.
Thursday, 22 May 2014
My last column dealt with Rockwell Automation’s Connected Components Workbench (CCW). But there is much more to say, so this is Part 2 of my review.
Tuesday, 25 March 2014
I have never thought of myself as being “out of a loop,” but I fear I have been.
Wednesday, 12 February 2014
I have often wondered, “What’s the big deal about safety? There can’t be that much to it!” In fact, I’ve actually said those words to Kevin Pauley, the Canadian manager for Pilz Automation and Safety, whom I have known forever. And that’s when he educates me.
Tuesday, 19 November 2013
I have been in the PLC and software game since 1977. I have seen a lot of things transpire between then and now when it comes to programming software.  
Thursday, 17 October 2013
In past columns I have covered many HMI/SCADA products that have a web browser interface for various devices. And, with the advent of the cloud, there may be many more reasons to have a browser interface available to you and the plant floor.
Tuesday, 17 September 2013
Well, the HMI marketplace may have taken yet another turn. In the past we always had definite-purpose devices, but in recent years the industry has moved away from them toward a more flexible, can-do-anything approach.
Thursday, 13 June 2013
This isn’t a software column as such. But it is. I had just finished writing a four-part series on technology and its effects on our economy when, suddenly, a thought struck me. We have been using PLCs for 40+ years. Tons of things have changed, and some things haven’t. Any PLC or PAC is doing the same job as it did years ago, but now it’s for a different reason—business.

The business of technology used to be fun. In my last column, I talked about the Raspberry Pi and how much fun it’s going to be to get back into a bit of banging mode. So far it’s not all that much fun, because I am realizing that I am far behind the curve.

I grew up with PLCs, as did most people my age in my industry. The changes that occurred were phenomenal. We went from a relay-replacement, expensive, slow appliance to a compact, fast, “I can do anything” device/node. Before laptops, the industry used definite-purpose devices for interfacing. Remember that, all along, PLCs used a programming language—you just needed a device to enter it.

There are varying accounts of how it all got started, but Dick Morley and his company Modicon (Bedford Associates at the time) created a technology appliance that could do what the big boys were asking: replace relays. The technology was sold on troubleshooting and on the fact that anyone in your maintenance department who knows relay logic can program and work with PLCs. Why? The language used to program them was ladder logic, the same representation an electrician had seen for years on D-size drawings.

In the ‘90s, individual software companies were popping up everywhere. But after so many years, permutations and false starts on successors, the PLC (and its ladder logic) is still the king. Morley wrote in the mid ‘90s that ladder logic would erode in time. It seems it hasn’t. It works—plain and simple. There are more than 200 vendors of PLC hardware. Not one of them offers a ladder-free programming solution. Remember fuzzy logic? Today, it’s nowhere to be found.

So here we are. Companies are delivering solutions with PCs, dedicated hardware, fancy screens and a full array of options. You can even connect to your nuclear controller over the Internet! Comforting. But the one consistent theme in all of this is ladder logic. It’s simple, robust and extendable and, once you’ve learned it, it provides a career that may be more sustainable than most, or at least some. What’s old is new again.

This article originally appeared in the June 2013 issue of Manufacturing AUTOMATION.
Tuesday, 14 May 2013
Raspberry Pi Single Board Computer
Cost: from $35.00
Vendor: Element14.com

While my columns deal exclusively with software, I have often paired hardware and software where I believe it can provide a benefit to you, the reader. I first ran into the Raspberry Pi single board computer (SBC) six months ago. I had been looking for a Windows Embedded SBC for some work I was doing. I didn’t want to spend an arm and a leg, but it seems you have to if you want Windows.

The Pi is a Linux-based, ARM-based processor board with HDMI video, network connectivity, sound and USB ports. It also has headers for special-purpose interfacing.

Installing the OS on the SD card was fairly simple, even for me! Using a Windows-based computer, the OS image file is written to the SD card. Insert it into the Pi and fire it up. The Pi boots to a command prompt after you log in. It has a built-in X-Window interface that can be invoked by typing ‘startx.’ You can set up the user as an admin, which allows for file manipulation, or as a basic user.

The really cool thing about this environment is that it can take you back to the days of being able to really control the device at a register level. This may allow you to do various things that you may not have been able to do before. The amount of community support for Linux and the Pi is astronomical.

I believe that the Pi was originally intended as a STEM (science, technology, engineering, math) educational device and for hobbyists, but it can do much more than control a crane. How about 3-D printing? Or how about a low-cost device for kids at home to run a virtual session on a VMware server? Or how about a remote terminal?

The Linux OS command set is vast. The image of the OS you install determines what is available. ‘vi’ is the text-file editor, for instance, so it really isn’t intuitive. While the Pi may be child’s play, the STEM community can use it as a springboard for young ‘uns to learn a bit more about engineering applications. Want ice cream with that Pi?

This article originally appeared in the May 2013 issue of Manufacturing AUTOMATION.
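Those special-purpose headers are the obvious way to get that register-level feel back. As a taste, here is a minimal sketch, assuming the RPi.GPIO Python library that ships with the standard Raspbian image and an LED (with a series resistor) wired to BCM pin 18; the pin choice and wiring are my illustration, not anything from the column:

```python
# Minimal LED-blink sketch for the Pi's GPIO header.
# Assumes the RPi.GPIO library and an LED (with a series resistor)
# wired to BCM pin 18; the pin choice is purely illustrative.
import time
import RPi.GPIO as GPIO

LED_PIN = 18  # BCM numbering

GPIO.setmode(GPIO.BCM)
GPIO.setup(LED_PIN, GPIO.OUT)

try:
    for _ in range(10):          # blink ten times
        GPIO.output(LED_PIN, GPIO.HIGH)
        time.sleep(0.5)
        GPIO.output(LED_PIN, GPIO.LOW)
        time.sleep(0.5)
finally:
    GPIO.cleanup()               # release the pins on exit
```

Run it on the Pi itself (GPIO access typically needs root on older Raspbian images), and you are toggling real hardware from a $35 board.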
Friday, 15 March 2013
Well, I left you high and dry after my last column on VMware Hypervisor, for which I apologize. I hope I don’t disappoint you with the next installment on the implementation of the Hypervisor server.

I left you with the Hypervisor installed, and the virtual machine (VM) converter loaded and used, along with a third product called the vSphere Client, which is used to monitor and manage the installed VMs on the Hypervisor. The concept of a data store was introduced, which is where the VMs are either created or copied to. While Hypervisor supports RAID configurations, the data store should be local due to bandwidth and network issues.

To give you a better idea of the possible importance of this technology, I have more than 15 VMs created and stored on a local 2TB hard drive. I have had seven running at the same time without any degradation on the local Windows machine that is running the vSphere interface. All of the horsepower is on the Hypervisor server and, since it really isn’t running Windows itself, it’s pretty quick! The VMs range from my compiled development environment, SCADA and HMI development, and graphics and multimedia support, to customers’ machines that have been virtualized so that I can test all of my software as if I were at the customer site. I have Windows NT, 95, 98, 2000 and 7 as VMs, so I can test anything with any operating system. (Windows 8 is on its way.)

While this is a boon for developers, any plant, OEM or integrator can make use of this technology right now. Each VM can be configured to have all the standard hardware components that the operating system on the VM can support. It uses the available hardware from the host, such as a CD-ROM and USB ports. To support my customers, I develop in one VM, copy the final code to a USB stick, connect the stick to the VM that connects to the customer and transfer the code directly.

Previously, all applications had to be on the same physical machine. Now various applications can be resident on a server in a nice environment, and you no longer need industrially hardened hardware out on the floor. A simple interface device like a thin client is all that’s needed. This improves security and maintainability, as well as reducing costs. Not bad so far, eh?

So imagine five vSphere clients on the floor. The five VMs are on a Hypervisor server. Data logs and databases need to be common, so you can create a sixth VM that would act as the server for all the data. The real issue is access to the outside world and backing up that data. The outside world can access the data just as if it were a physical machine. The ‘server VM’ has its own IP address on the network and can respond to requests as normal.

Once you have your server set up with the VMs that you want and need, the next part of the challenge is to back up these VMs so that, if a VM gets corrupted due to registry issues or common Windows driver issues, you can recover it from the backup file created. There are two ways of creating backups for your VMs. The snapshot manager is a manual option that allows you to take and store a snapshot of the VM that you can restore back to if needed. Best practice suggests a scheduled backup. I have used Acronis True Image for local machines for years and now use Acronis vmProtect for VMware Hypervisor. vmProtect installs as a web service, and thus uses your default browser as the interface. Once the license and the connection to the Hypervisor are set up, the only thing left to do is create the backup task(s). You can back up the server itself and all of its configurations, or a combination of VMs.
It really is as easy as pie. The location of the backup images can be anywhere on the network, so this is where a RAID NAS (network-attached storage) can come into play. Once you set up the task, you are done. Because Acronis vmProtect interacts with the Hypervisor directly, the VM doesn’t have to be running; if it is, vmProtect will still back up the VM along with any dynamic data on it. You can also extract individual files from the backup image, replicate the VM or even run the VM from the backup image directly. You could use Acronis True Image locally on each VM, but with the multi-backup configuration of vmProtect, you can’t beat the convenience and reliability of backing up your VMs centrally. Any application of Hypervisor must include recovery. It’s easy and reliable. Look for applications that can benefit from virtualization, and leave the backup to Acronis. In my experience, it is the only way to go. Welcome to 2013!

This article originally appeared in the March/April 2013 issue of Manufacturing AUTOMATION.
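If you ever want to script the manual snapshot route instead of clicking through the snapshot manager, the job is a few lines in pyVmomi, VMware’s Python SDK. This is only a sketch under my own assumptions: the host address, credentials and VM name are placeholders, and the column itself works through the vSphere Client and Acronis vmProtect rather than scripts.

```python
# Sketch: take a manual snapshot of one VM on an ESXi host via pyVmomi.
# Host, credentials and the VM name below are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()    # lab use only: skips cert checks
si = SmartConnect(host="192.168.1.50", user="root",
                  pwd="secret", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    for vm in view.view:
        if vm.name == "SCADA-dev":        # placeholder VM name
            vm.CreateSnapshot_Task(
                name="pre-change",
                description="manual snapshot before edits",
                memory=False,             # don't capture RAM state
                quiesce=False)            # no guest filesystem quiesce
    view.Destroy()
finally:
    Disconnect(si)
```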
Thursday, 24 January 2013
As I walked into the classroom at Durham College for the first time in 1986, I was terrified but oddly confident; kinda like a push/pull feeling. I was a “professor.” I had worked for Allen-Bradley for just less than 10 years, just short of vesting my pension so, yes, you guessed it, I am no spring chicken! But because of the training and guidance I received from the experienced people at the plant in Cambridge, Ont., I have forged an enviable career.

Funny how it all started. Back when I was in high school, grade 13 was for the brainers. I fooled myself into thinking that I was one of them, and took three math credits, chemistry, physics, history and English. Believe it or not, I achieved very high marks in the first five subjects. The other two… well, not so much.

After high school, I went to Ryerson to study controls engineering, before Ryerson became a university. They were the best three years of learning I could ever imagine. I met some lifelong friends, was awarded ‘best all-round technologist’ in fourth semester, and I wondered why and how. I pondered that it was because I played in a band from Friday night ‘til Sunday… so my effective study time was short.

Back then, Allen-Bradley, as a company, really did consider its people very important. They hardly ever fired anyone. They trained their way through the impasse. Novel idea. When I was going to leave the company, the VP of sales met with me and my immediate manager to change my mind. I still wonder what they saw in me and my skills and/or personality, but I have to tell you, it felt pretty good to be “courted.” I feel that the only reason this meeting happened was because of the training and experiences I had, from high school to Ryerson to the AB training program and, of course, my own insatiable thirst to learn.

I got involved in everything. I have missed only one ISA technical conference in 23 years. I have presented at most. At one point in time I was receiving 26 publications, and I read them all. (Note: there aren’t that many around anymore!) I produced the first ‘learning’ newsletter, called The Software User. Steve Rubin of Intellution fame asked me where I made my money. My dumbfounded look said it all. It really isn’t about the money; it is about helping and guiding. The feeling of sharing and guiding is priceless.

Our college system is failing us. Our apprentice system is failing us. Our employers are failing us. Hopefully we are not failing ourselves.

This article originally appeared in the January/February 2013 issue of Manufacturing AUTOMATION.
Tuesday, 27 November 2012
I now pronounce ISA Automation Week dead! Allow me to explain.

I submitted a paper on the concepts and high-level implementation of predictive maintenance. The Malaysian engineer who followed me validated my high-level approach and explanations. I felt good about the research and implementation of my presentation.

Now, here’s the deal. The attendees paid to attend the conference, so they were there because they wanted to be. This is good. There were just as many people in the room as there had been in the previous 15 years of my giving papers and presentations at ISA conferences. In those years, attendees didn’t have to pay to attend the exhibits, but they still had to pay for the conference. So the ‘show’ attracted a lot of tire kickers, if you will, but the show floor was packed with big booths, funky products and games to attract people into the booths.

This year’s exhibit took me less than five minutes to walk. Which of the ‘big’ guys were there? Hmmmm. No one! ISA’s Automation Week has shrunk to a level that is almost below regional division table-top shows. One of the reasons industry people go to these gatherings is to network and to see people once or twice a year whom you wouldn’t otherwise see. That doesn’t happen anymore, since most of these people don’t attend. There was a press room, but it was empty. No buzz!

ISA is an organization like all others, and it has to grow and adapt with the times. That time is now. Back in the ‘90s, Richard Simpson, then of ISA, created a new approach called ISA Tech. The exhibits were open in the afternoon, and the technical sessions were in the morning. I can’t remember whether the attendees had to pay as such, but to me it was a roaring success. The function of technical sessions and organizations is to reach out to people, to educate and to promote the “goodness” of our industry. ISA Tech did that, which is why I feel ISA needs to stop being a process-focused, one-show organization, and instead support local chapters and put together mobile tech programs for its audience. Like ISA Tech, as such, but charge for the privilege of attending.

If ISA Toronto wanted to put on a conference on SCADA, for instance, ISA HQ should help them implement that. People don’t have to travel in this day and age, so do webcasts, etc.

I would really miss the opportunities to network and mingle with the people I have become friends with over the years, but the concept of a single show, at a single time, I think is past its prime. What do you think?

This article originally appeared in the November/December issue of Manufacturing AUTOMATION.
Monday, 22 October 2012
VMware ESXi Hypervisor is a demi-god! I introduced you to VMware Player some time ago, and it has served our community well. It is now time to move up the food chain.

A virtual machine (VM) is a software representation of computer hardware. VMware virtualizes all aspects of the hardware so that you can make a VM look like any computer you want. It can be configured to support Apple, Linux and, of course, Windows of any flavour. It even supports Android!

ESXi version 5 is somewhat fresh technology. You can download it from VMware’s website under the free tools section. Once it is downloaded, you need to burn the ‘ISO’ file to a CD. This makes the CD bootable and allows for the install of the server software onto the host hardware.

The biggest immediate benefit is that you can run multiple VMs simultaneously. While their performance will depend on the amount of memory you have in the server (up to 16 GB), it’s nice to have the ability to run Linux applications alongside Windows apps and, of course, having the ability to run older applications in a Windows 98 environment, all at the same time, is very cool.

If you have any existing VMs created with VMware Player, or with an older version of the standalone converter, you will have to convert them to the 5.0 platform.

So we have downloaded two products: the ESXi server software, installed on our server hardware, and the standalone converter, installed on the Windows machine to allow us to convert and/or create new VMs.

But how does this all work? When you set up the server, you gave it a user name and password. It is sitting on the network waiting for a client to come calling. The standalone converter can make that call, along with a third product called the vSphere Client. This software allows you to configure, monitor and control the ESXi server.

There is no trick to installing the vSphere Client, and when you fire it up, it will ask you for the address of the server on your network. VMs eat up a lot of disk space, so you may choose to put a large single drive in the ESXi server to store all of your VMs. Using a network drive or RAID system isn’t advisable due to the bandwidth required. I put a 2TB drive in the server, created the new data store and was ready to begin the real journey of creating, using and managing all of my VMs from my new ESXi server.

Boy, now I’m ready to take on the world – which I’ll do in my next column!
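Everything the vSphere client shows can also be reached programmatically. As a flavour of what “configure, monitor and control” means, here is a minimal sketch using pyVmomi, VMware’s Python SDK, that connects to the host and lists its inventory; the address and credentials are placeholders, and the SDK is my assumption, since the column itself stays with the graphical client.

```python
# Sketch: connect to an ESXi host and list its VMs with power state,
# roughly what the vSphere client's inventory pane shows.
# Host address and credentials are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect

ctx = ssl._create_unverified_context()    # lab use only: skips cert checks
si = SmartConnect(host="192.168.1.50", user="root",
                  pwd="secret", sslContext=ctx)
try:
    for dc in si.RetrieveContent().rootFolder.childEntity:
        for vm in dc.vmFolder.childEntity:    # assumes a flat VM folder
            print(vm.name, vm.runtime.powerState)
finally:
    Disconnect(si)
```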
Thursday, 20 September 2012
Well, boys and girls, I wanted to give you the lowdown on the network guru tools out there, but I can’t. These guys all use SNMP (Simple Network Management Protocol) to deliver their functions. SNMP-enabled switches cost much more to buy and manage than those that are not, but it also seems that taking advantage of the data from these switches… well, it is not for those who are ‘on the floor,’ if you will. I tried to gain access to three networks and failed, because they did not have SNMP enabled, or because they didn’t want me to ‘evaluate’ their network.

While this column may not be a regular review column, I want to introduce you to some options. However, I qualify that with the fact that if you have an IT group, SNMP will typically be disabled by default. And from what I have found, it stays that way!

The most important information in any network is the communication data that the scouts provide. SolarWinds, IntraVue, Foglight, Spiceworks and homegrown network utilities are tools that can help. If we look at the needs of our control network, then we need to be very simplistic. Imagine a SCADA computer or a PLC network where messages are travelling back and forth. Suddenly, screens stop updating and messaging stops. What to do?

My favourite is still IntraVue. While it is limited in functionality without SNMP, it builds a database of industrial devices, monitors connectivity and response times, and tracks additions and removals dynamically. While expensive, its real benefit is in the SNMP data from the switches and routers, but it can still be a valuable monitoring tool.

SolarWinds has tons of free stuff, and some of it is really good. Most control networks are on the same subnet, and the SolarWinds IP Address Tracker compiles a list of connected devices. Very cool, BUT it only tracks the ones that are there, so if one disappears, it won’t show up on a rescan.

Remembering that most devices expose public SNMP data, most tools can identify a device along with its communication path: which switch it is connected to, and on which port. Knowing that a port has failed can be helpful, yes?

Spiceworks is another company with an awesome offering, which is free for most. Network bandwidth monitoring, network access, mobile devices and network alerts are part of the offering but, again, without SNMP the results are feeble.

The last in the fold is Foglight, from Quest Software. Their latest marketing email started out with “Is your network SLOW?” This would make you think that most network issues have to do with response time. You would be mostly right, especially if you are dealing with wireless. If a device goes offline, that’s an easy one to figure out. But, of course, where that device is connected, the port and other device-specific information may not be readily available. Foglight is a big piece of software used for big networks. It may not be very helpful for you, but if your IT guys know about it, then it might be.

WhatsUp Gold is another paid product that my wireless guys use. I really don’t think it gives the whole picture, but it may be worth the effort, since it has a really cool graphical layout tool that lets you see your network in action (using PINGs, of course!).

Keeping that in mind, I hope that some of the products and services introduced here can make a difference for you. After all, when the network doesn’t work, you are in big trouble.
And let’s face it: it’s not all visible to the control guy’s eye!

This article originally appeared in the September 2012 issue of Manufacturing AUTOMATION.
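For a sense of what the free tier of these tools does under the hood, a bare-bones IP address tracker is only a few lines of Python. This sketch rests on my own assumptions (a /24 control subnet and Linux-style ping flags), and it only proves reachability, not protocol health, which is exactly the limitation described above.

```python
# Bare-bones IP address tracker: ping every host on a /24 control
# subnet and report who answers. The subnet is a placeholder; the
# ping flags (-c count, -W timeout) are the Linux variants.
import ipaddress
import subprocess

SUBNET = ipaddress.ip_network("192.168.1.0/24")   # placeholder subnet

def is_up(ip: str) -> bool:
    """Send one ICMP echo with a one-second timeout."""
    result = subprocess.run(
        ["ping", "-c", "1", "-W", "1", ip],
        stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
    return result.returncode == 0

for host in SUBNET.hosts():
    if is_up(str(host)):
        print(host, "is responding")
```

Like the SolarWinds tracker, it only sees what answers at scan time; a device that drops off simply vanishes from the next pass.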
Friday, 18 May 2012
Just to set the stage: My customer is a water municipality with 15 remote sites running local PLC control for some operations, along with some local control for chlorine and treated water. We are communicating with the mother ship over 802.11a, which is high-frequency Wi-Fi, but not necessarily private. The communication provider is Motorola.

The system had resident drivers installed that connected between the devices and the application code. It worked well… when it worked. We found that, regardless of the reliability of the wireless network, we had all kinds of issues with gathering data. Nothing we did improved the problem. Then, one day, everything blew up. The system stopped gathering data every five minutes, as it was designed to do. Instead, it was gathering data only once every 45 minutes, because the connectivity between the server and the device was constantly being interrupted by “something.” I’ll reveal what we discovered later on in this column. So what would you do if you found yourself in this situation?

My previous column (January/February 2012) dealt with some very rudimentary tools for device connectivity — Ping Plotter and SolarWinds IP Address Tracker. These tools provide you with basic answers like “Are you there?,” which proves the cable, device power and the ability to communicate. What they don’t do is prove that you actually have true protocol communication with the device. This month, I want to introduce you to SNMP and the devices that make up your network.

Routers, switches (not hubs), cabling and devices (wired and wireless) can all cause communication issues that interrupt data traffic and cause operational failures, such as losing the connection between a PLC and its programming software. Use of these devices brings up many questions: Is it copper cabling or fibre? Are you using plant-to-plant leased lines, the Internet or an intranet? Is your control network separated from the office/user network? So many questions!

One of the very primitive contributors to network issues is device accessibility. Is the device accessible? If it is, what is the response time of that device? Check your network topology. How many switches, routers, cables and the like are between you and the device? If you don’t know the answer, you need a network diagrammer. And, since something may have changed since the last drawing, a real-time diagramming tool is needed.

So how can SNMP help? It depends on the devices on each end, and the devices used in between. Take Netgear or Linksys switches, for instance. Cisco is the industry standard, and most software products know Cisco by default. Products such as those from Netgear or Linksys may not be known to network tools by default, so additional data for these devices will be needed. More on this below.

Layer 1/2/3 switches, managed vs. unmanaged switches, and smart switches reveal various levels of information about your network. You can Google each one to determine the range of information for each. Never have an unmanaged switch anywhere on your network. That’s like having a cell user in a dead zone!

The types of problems that can pop up are as varied as the tools available to monitor and troubleshoot them. In the example in the first paragraph, the network was put into a state of disarray by a set of resident drivers from an older installation of hardware devices. We had moved from a serial-to-Ethernet device server environment to a true Ethernet device.
The drivers for the serial devices were removed, or so I thought. It seems they came back, and only at certain times would they interfere with the network traffic. Where the confusion resided was in the tools we used. The owner of the wireless network had connectivity monitoring, but no packet-data trapping. We used Wireshark for that. We evaluated the local computer environment using Sysinternals TCPView. The standard toolset in a product called IntraVue was ineffective.

Would SNMP have helped in this situation? We did set up the switches with SNMP and had SNMP traps configured, but with this particular problem, the results were not helpful. We should always be prepared, in our network world, to use various tools for various problems. In a wired world, some things are easy. In a wireless world, not so much. But when push comes to shove, we should be able to handle 90 percent of the issues ourselves.

SNMP can allow us to map our network, trap errors, determine if we have a bad device, indicate inconsistencies due to bad cabling and/or connections, and find failed devices. Each network aggregate device will have a MIB (Management Information Base) file associated with it, which allows an SNMP software interface to interrogate it and extract status and connectivity information. It also allows the user to change values in the aggregate device, if allowed. Inside a MIB file are object identifiers: the specific variables that the aggregate device supports.

The tools available for monitoring these devices are varied. Next column, we will look at three in particular — SolarWinds, Foglight and Spiceworks.

This column originally appeared in the May 2012 issue of Manufacturing AUTOMATION.
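To make the MIB and object-identifier idea concrete, here is a minimal sketch using the pysnmp library to read two standard objects from a managed switch: its description (sysDescr, from SNMPv2-MIB) and the operational status of one port (ifOperStatus, from IF-MIB). The address, community string and interface index are placeholders, and pysnmp itself is my assumption; the column talks about SNMP generically, not any particular toolkit.

```python
# Sketch: read sysDescr and the status of interface 1 from a managed
# switch over SNMP v2c. Address, community and ifIndex are placeholders.
from pysnmp.hlapi import (getCmd, SnmpEngine, CommunityData,
                          UdpTransportTarget, ContextData,
                          ObjectType, ObjectIdentity)

TARGET = UdpTransportTarget(("192.168.1.2", 161))   # placeholder switch
COMMUNITY = CommunityData("public", mpModel=1)      # v2c, default community

for oid in (ObjectIdentity("SNMPv2-MIB", "sysDescr", 0),
            ObjectIdentity("IF-MIB", "ifOperStatus", 1)):
    err_ind, err_stat, err_idx, var_binds = next(getCmd(
        SnmpEngine(), COMMUNITY, TARGET, ContextData(), ObjectType(oid)))
    if err_ind or err_stat:
        print("SNMP error:", err_ind or err_stat.prettyPrint())
    else:
        for name, value in var_binds:
            print(name.prettyPrint(), "=", value.prettyPrint())
```

An ifOperStatus of 1 means the port is up and 2 means it is down (per IF-MIB), which is exactly the “which port failed” answer discussed above.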
Tuesday, 24 January 2012
Everyone is talking about the cloud. The Wiki definition of the cloud is: "the delivery of computing as a service rather than a product, whereby shared resources, software and information are provided to computers and other devices as a utility (like the electricity grid) over a network (typically the Internet)."