Connecting quality data to process data
Product quality has always been, and will remain, a constant focus and challenge for process manufacturing companies. Big Data and the Industrial Internet of Things (IIoT) have opened new ways for the industry to strive for, and achieve, operational excellence and to improve product quality. Self-service advanced analytics, in particular, allows subject matter experts (SMEs) to contribute to operational excellence and meet corporate quality and profitability objectives.
What defines product quality?
A modern definition of quality is “fitness for intended use,” which implies meeting or even exceeding customer expectations. Quality can also be defined by what affects the product before the customer uses it, which happens in three ways (see Figure 1).
1. Raw material: Raw materials entering the production process are often natural materials that typically have varying consistencies, which requires process adjustments to maintain an unvarying quality level of the finished product.
2. Production line: The production line’s design and configuration impact process and asset performance, which, in turn, impact each other. For example, the material can degrade asset performance, or overall equipment effectiveness (OEE), as when heat exchangers foul because of the substance running through their tubes.
On the other hand, the asset’s performance can impact product-processing conditions, for example, a pressure drop due to a malfunctioning pump. A process control system (PCS) is usually in place to achieve a consistent production level while remaining safety-compliant and cost-effective.
The intermediate product quality can be tested and monitored during production, but to know whether the product meets its requirements, lab tests are performed according to quality control procedures.
3. Product storage and shipment conditions: The packaging, handling, and environment may impact how long the product remains within specification. For the customer, the product received might be considered raw material for another production process. When using products within a production process, the customer gathers quality data and provides feedback to the supplier to improve future deliveries of the ordered goods. This feedback closes the top loop in the figure.
Quality control or quality assurance?
Quality control (QC) and quality assurance (QA) together ensure product quality. QC and QA are closely related but different concepts: QC detects errors in the product, while QA prevents quality problems.
Typically, QC is done after the fact, while QA includes activities to prevent quality problems and requires systematic measurement, comparison with a standard, process monitoring, and an associated feedback loop.
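The comparison-with-a-standard step described above can be sketched in code. This is a minimal illustration using a Shewhart-style three-sigma rule as the assumed standard; the function name, target value, and readings are hypothetical, not from any specific QA system.

```python
def out_of_control(samples, mean, sigma):
    """Return indices of samples outside the +/- 3-sigma control limits."""
    ucl = mean + 3 * sigma  # upper control limit
    lcl = mean - 3 * sigma  # lower control limit
    return [i for i, x in enumerate(samples) if x > ucl or x < lcl]

# Example: pH readings checked against a target of 7.0 with sigma = 0.2,
# so the control limits are 6.4 and 7.6
flagged = out_of_control([7.0, 7.1, 6.9, 8.5, 7.2], mean=7.0, sigma=0.2)
```

In a QA feedback loop, each flagged index would trigger an investigation or a process adjustment rather than merely rejecting the finished product.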
The key to QA lies in sensor-generated, time-series production data collected through data acquisition applications. This data can be captured in a historian with its own trend viewer and analytics capabilities, although analyzing operational performance with these tools often requires programming skills.
With increasing computing power, the time-series process and asset data of many instruments can be visualized. Process experts can use the visualization and pattern-recognition capabilities of self-service analytics tools to assess production processes and correlations. The software shows what has happened, how often, and why, and can even suggest potential root causes upstream in the process.
By empowering process engineers with advanced analytics tools, more production issues can be assessed by interpreting the data. Process data already gives good options to assure product quality through self-service analytics, as is shown in the following use case (see sidebar).
Case study: Residence time assessment to reduce quality deviation in a chemical batch process
With a total cycle time of about 12 hours, the residence time in a chemical batch process can vary considerably from batch to batch because common vessels and buffer tanks sit between the different process steps. The pH value is also measured at different steps to track changes. When the same process variable is measured at two different process steps in the plant, the two measurements should correlate well.
The residence time between these two process steps can be assessed by searching for the time shift that yields the highest correlation between the two measurements, using the influence factors the self-service analytics solution suggests.
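The time-shift search can be sketched as follows. This is a simplified illustration of the underlying idea, not TrendMiner's actual implementation; the signal names, sampling, and synthetic data are assumptions for the example.

```python
import numpy as np

def best_time_shift(upstream, downstream, max_lag):
    """Find the lag (in samples) at which the downstream signal
    correlates best with the upstream signal."""
    best_lag, best_corr = 0, -1.0
    for lag in range(1, max_lag + 1):
        # Align the downstream signal `lag` samples back and correlate
        corr = np.corrcoef(upstream[:-lag], downstream[lag:])[0, 1]
        if corr > best_corr:
            best_lag, best_corr = lag, corr
    return best_lag, best_corr

# Synthetic example: downstream pH echoes upstream pH 30 samples later
rng = np.random.default_rng(0)
up = rng.normal(7.0, 0.2, 500)
down = np.roll(up, 30) + rng.normal(0, 0.01, 500)
lag, corr = best_time_shift(up, down, max_lag=60)
```

The lag with the highest correlation is an estimate of the residence time between the two measurement points, expressed in samples.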
The very different relationship between the upstream and downstream process variables in certain time periods showed the process engineer that the upset was most likely caused somewhere between the two measurements. In this case, the user can check whether the pH deviation in the dryer originated before or after the halogenation step.
With this approach, the residence time can be assessed much more accurately and the variation in the process variable can be explained more easily. The root causes of product deviations can be determined, allowing for production process improvements, and monitoring of future production runs can be added to reduce further product quality deviations.
Connecting quality data to process data
All data captured during the production process can be used to assure product quality, improve it using historical data, and aid in visual root cause analysis. However, this is still a bit like driving in the dark without streetlights, road signs, or navigation: what is needed is contextualization of the process data.
One part of contextualization is tying the quality test data from the laboratory to the process data, particularly in batch production, where the context of a batch (such as batch number, cycle time, etc.) can be linked to the laboratory test data. This ties each specific batch run to its process data and to its own quality data. Linked information enables quicker identification of the best runs for creating golden batch fingerprints to monitor future batches. It also helps collect underperforming batches as starting points for analyses to improve the production process.
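The batch-level linking described above amounts to joining lab results to process data on a shared batch identifier. The sketch below assumes hypothetical column names and values for a historian extract and a LIMS export; real systems would differ.

```python
import pandas as pd

# Hypothetical historian extract: one row of process context per batch run
process = pd.DataFrame({
    "batch_id": ["B101", "B102", "B103"],
    "cycle_time_h": [11.8, 12.4, 13.1],
    "avg_reactor_temp_c": [82.1, 84.0, 79.5],
})

# Hypothetical LIMS export: laboratory quality results per batch
lab = pd.DataFrame({
    "batch_id": ["B101", "B102", "B103"],
    "purity_pct": [99.2, 98.1, 99.5],
    "in_spec": [True, False, True],
})

# Tie each batch run's process data to its quality data
batches = process.merge(lab, on="batch_id")

# Candidate "golden batch" runs: in-spec batches, best purity first
golden = batches[batches["in_spec"]].sort_values("purity_pct", ascending=False)
```

From the joined table, the best runs feed golden batch fingerprints, while the out-of-spec rows (here B102) become starting points for root cause analysis.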
Another part of contextualization is capturing events during the production process, such as maintenance stops, process anomalies, asset health information, external events, and production losses. Degrading equipment performance can indicate that product quality will be affected, an early warning that can be used to assure quality. All of this contextual information helps users better understand operational performance and get more indications from the self-service analytics platform for starting optimization projects.
Process engineers, empowered with self-service advanced analytics, are at the heart of the production process and are crucial to QA. By contextualizing process performance with quality data, customer expectations can be met or exceeded. Continuous operational improvements will help reduce costs related to waste, energy, and maintenance, and will increase the yield of quality products, leading to improved profitability for the site.
Edwin van Dijk is the vice president of marketing at TrendMiner. Edited by Emily Guenther, associate content manager, Control Engineering, CFE Media, email@example.com.
KEYWORDS: Big Data, data analytics, quality assurance (QA)
Advanced analytics to improve quality assurance (QA)
How to use advanced analytics to improve the overall production process.
How can your facility benefit from using advanced analytics?