Manufacturing AUTOMATION

Effective data analysis: You’re collecting data, now put it to work

May 15, 2018
By David Mannila

May 15, 2018 – Your production lines are likely collecting data like never before, but are you obtaining timely and actionable insight from this data? Insight to reduce production costs per unit by boosting first-time yield? Insight to reduce warranty claims and recalls? Insight to improve quality with fewer in-process test stations?

Industry 3.0 computerized manufacturing and gave us the tools to collect a variety of data from across the plant floor. It did not, however, provide the most effective means to make timely use of this data to improve these performance measures.

With Industry 4.0 – or Manufacturing 4.0 – data collection has become even more pervasive and granular. More is not necessarily better. Data that doesn’t work for you every day, on every shift and through every cycle of a production process or test isn’t worth much.

The good news is effective data analysis is also a hallmark of Manufacturing 4.0. The Big Data management, analysis and visualization tools that unlock your data’s full potential and make it accessible to the non-engineer are widely available today. Here is what these tools allow you to do.

First, collect and organize
Every cycle of a production process or in-process test generates data. This includes SPC and scalar data that is great for monitoring and tracking the health of a production line at a high level. What’s more important are the digital process signatures generated by each process or test cycle — these provide much more granular insight to trace the root cause when a problem occurs. And let’s not forget the images and related datasets generated by machine vision systems used for quality inspection.


All this data must be collected into a single central database. It can’t be left to languish in silos trapped on the plant floor. We can do this by equipping process and test stations with appropriate digital sensors, connected over a robust network architecture.

Once collected, this data must be organized in the right way — by the serial number of the individual part or assembly. This serialized dataset, called a birth history record, paints a picture of what happened to that part or assembly through every stage of production and testing in millisecond increments.
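
To make the idea concrete, here is a minimal sketch in Python of how a serialized birth history record could be modeled. The class and field names are hypothetical, chosen for illustration rather than drawn from any particular product:

    from dataclasses import dataclass, field
    from typing import Dict, List

    @dataclass
    class ProcessCycle:
        station_id: str                 # e.g. a press, a leak test stand
        timestamp_ms: int               # when the cycle ran, in milliseconds
        scalars: Dict[str, float]       # summary values such as peak force or leak rate
        signature: List[float]          # the sampled waveform for this cycle

    @dataclass
    class BirthHistoryRecord:
        serial_number: str
        cycles: List[ProcessCycle] = field(default_factory=list)

    # A central store keyed by serial number; every station writes to the same place.
    birth_history: Dict[str, BirthHistoryRecord] = {}

    def record_cycle(serial: str, cycle: ProcessCycle) -> None:
        birth_history.setdefault(serial, BirthHistoryRecord(serial)).cycles.append(cycle)

In production this store would be a central database rather than an in-memory dictionary, but the organizing principle is the same: every cycle, from every station, tied to one serial number.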

The next step: Analyze
Many systems capture and store digital process signatures as flat image files. These systems lack on-demand visualization tools. Data must be exported into spreadsheets, in which each test or part has its own tab with its signature’s waveform image. There is no way to overlay and correlate these images. Making any sense of this pile is time-consuming and frustrating, and these files are not easily correlated with other related data, such as scalars or machine vision images.

Modern analytics tools convert data into histograms that can be correlated with other data types associated with that serial number or type of part to illustrate the profile of a good unit and the range of acceptable deviation. This makes it easy to create and visualize a baseline against which to compare all units.
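
One simplified way to picture that baseline is a point-wise envelope built from the signatures of known-good units. The Python sketch below assumes every signature has been resampled to the same length; it illustrates the concept rather than how any specific analytics package implements it:

    import statistics

    def baseline_envelope(good_signatures, k=3.0):
        """Point-wise mean and +/- k standard deviation band across the
        signatures of known-good units (at least two are required)."""
        means, limits = [], []
        for samples in zip(*good_signatures):
            mu = statistics.mean(samples)
            sigma = statistics.stdev(samples)
            means.append(mu)
            limits.append((mu - k * sigma, mu + k * sigma))
        return means, limits

    def within_envelope(signature, limits):
        """True if every sample of a unit's signature stays inside the band."""
        return all(lo <= s <= hi for s, (lo, hi) in zip(signature, limits))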

Then act with surgical precision
With this approach, process variations or anomalies can be spotted before they lead to scrap and rework, and be further analyzed to reveal relationships that might explain the problem. Anomalies can be quickly searched for within a large group of units, and only those that match the anomaly need to be quarantined or recalled. This is particularly valuable if the product has already shipped. Instead of a mass recall that impacts a large number of customers and tarnishes your reputation, you can engage in a targeted and limited recall of only the specific serial numbers the data indicates are flawed.
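
Building on the birth history sketch above, a targeted search could be as simple as scanning the archive for serial numbers whose signature at one station matches the anomaly. The station name and threshold below are invented purely for illustration:

    def units_matching_anomaly(birth_history, station_id, is_anomalous):
        """Return the serial numbers whose signature at the given station
        satisfies the anomaly test supplied by the caller."""
        flagged = []
        for serial, record in birth_history.items():
            for cycle in record.cycles:
                if cycle.station_id == station_id and is_anomalous(cycle.signature):
                    flagged.append(serial)
                    break
        return flagged

    # Example: quarantine only units whose press signature dipped below 50 N
    # between samples 200 and 240 (station name and numbers are made up).
    # suspects = units_matching_anomaly(birth_history, "press_04",
    #                                   lambda sig: min(sig[200:240]) < 50.0)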

Say a final assembly, like a hydraulic system or a fuel pump, is leaking. The first step is to pull all the data for that problem unit. See if there is anything off about the unit’s original test signatures, even within the range of standard deviation. Then, look at the data from all the other processes upstream that touched the unit. Sometimes a problem just doesn’t show up at a test station.

For example, maybe a pressing operation was flawed and this can show up in the force versus distance signature for that process. The part may still pass its final leak test, but will eventually leak like a sieve at the location of the poorly pressed components when subjected to normal use over time.
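
Continuing the earlier sketches, that upstream review might be expressed as: pull the unit’s birth history record and flag every station whose signature strayed from its baseline envelope, whether or not the unit passed its final test. Again, a rough illustration under assumed data structures:

    def review_unit(record, envelopes):
        """List the stations in one unit's history whose signature left its
        baseline band. `envelopes` maps station_id to per-sample (low, high)
        limits, as produced by the earlier baseline_envelope() sketch."""
        suspects = []
        for cycle in record.cycles:
            limits = envelopes.get(cycle.station_id)
            if limits is None:
                continue
            if any(not (lo <= s <= hi) for s, (lo, hi) in zip(cycle.signature, limits)):
                suspects.append(cycle.station_id)
        return suspects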

You can play ‘what if?’ at any time
Once you have optimized the function and performance of your production and test stations through this process of data collection and analysis, you don’t have to reinvent the wheel. With the root cause of a production or quality issue identified, the comparable process, test or machine on other lines or at other plants can be adjusted before they can suffer the same problem. Process signatures from a new production line can be matched against an existing production line to give a strong indication of conformance.
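
Continuing the baseline idea, a rough conformance check for a new line could simply measure how many of its signatures fall inside the envelope built from the proven line. This is an illustrative assumption, not a prescribed method:

    def line_conformance(new_line_signatures, reference_limits):
        """Fraction of a new line's signatures that fall entirely inside the
        per-sample (low, high) envelope built from an existing line."""
        inside = sum(
            1 for sig in new_line_signatures
            if all(lo <= s <= hi for s, (lo, hi) in zip(sig, reference_limits))
        )
        return inside / len(new_line_signatures)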

Take your leak test, for example. With any test, there is a point beyond which running it longer yields no further improvement; for a leak test, that means there is no benefit to taking more time to fill or stabilize the part. In the old days, quality engineers had no choice but to rely on archaic methods of trial and error, wading through piles of spreadsheets with manual calculations to find the ideal test limits. They don’t have to anymore.

Today’s data analytics tools can automatically calculate statistically based limits. Hundreds or even thousands of datasets can be tested and reviewed offline to explore the impact of different test parameters and limit settings. On a high-volume production line, shaving a couple of seconds off each test cycle can be enough to reduce the number of test stations needed to keep up with production.
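
As a simplified illustration of both ideas, the sketch below derives limits from archived leak-rate measurements and then replays archived pressure-decay signatures as if the test had been shortened. The pass/fail rule is a deliberately crude stand-in for a real leak test algorithm, and every parameter name is assumed for the example:

    import statistics

    def statistical_limits(archived_leak_rates, k=3.0):
        """Propose a pass/fail band as mean +/- k standard deviations of
        leak rates measured on known-good parts (needs at least two values)."""
        mu = statistics.mean(archived_leak_rates)
        sigma = statistics.stdev(archived_leak_rates)
        return mu - k * sigma, mu + k * sigma

    def replay_shorter_test(archived_signatures, sample_rate_hz, test_seconds, max_drop):
        """Re-run archived pressure-decay signatures offline as if the test
        window had been cut to `test_seconds`, and count how many parts would
        still pass. The 'measurement' here is simply the pressure drop over the
        shortened window; signatures must cover at least that window."""
        n = int(sample_rate_hz * test_seconds)
        passes = 0
        for sig in archived_signatures:
            drop = sig[0] - sig[n - 1]
            if drop <= max_drop:
                passes += 1
        return passes, len(archived_signatures)

Run against the archive at progressively shorter test windows, the same data shows where cutting the cycle time starts to change pass/fail decisions.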

With all that data at your fingertips, you don’t have to wait until you know you have a problem or a new requirement to implement before taking action. You can run a simulation or data model at any time on your archive of data, just to see what the outcome might be. Play with the numbers, run them again, and compare the results. In just minutes or hours, algorithms can work through analysis that once took days or weeks of wading through reams of spreadsheets.

The end goal is to find out how you can make those little tweaks that can add up to a big impact on quality, efficiency and yield. Best of all, this can be done offline so it doesn’t disrupt or interfere with the line operation, the test stations or with anyone doing their job.

Rapid ROI and big savings
Data makes all the difference when it comes to continuous improvement in manufacturing. Visibility into the process – what’s working and where the problems lie – is a powerful tool for reducing cycle time, controlling costs and improving productivity across the plant. The more data you have, the better the insight you can achieve. With today’s off-the-shelf data analysis and visualization tools, a modest investment can generate a return on investment in a matter of months and yield millions in annual savings.

As a senior product manager at Sciemetric Instruments, David Mannila has broad responsibility for new product concept, definition and development, as well as maintaining Sciemetric’s overall product roadmap. He has more than 20 years of hands-on experience working with manufacturers to develop products that make them more efficient, profitable and respected in the marketplace.

This feature was originally published in the May 2018 issue of Manufacturing AUTOMATION.

