AI isn’t all magic
Software uses real-time data to draw real-world conclusions.
Welcome to the amazing world of data science! Okay, math majors, your ears probably just perked up. Everyone else may have groaned.
Stick with me! This isn’t all about the numbers. We’re about to explore one of the most exciting emerging technologies out there. We’ll break it down to help you get started. Sound good? Let’s dig in.
You’ve probably heard about the promise of machine learning and artificial intelligence (AI). These two technology concepts have caught the world’s attention. Self-driving cars? Human-like robots? Devices that know your schedule before even you do? Their promise isn’t limited to the consumer space; the industrial world has caught the fever as well. Just look at the recent headlines in major industrial publications. This technology will be with us for a while. In fact, it will be pervasive. So, you might as well get to know it.
We’ll cover some machine learning and AI basics, how companies are using the technologies, and their promise for the future. In addition, we’ll look at some practical details and steps for implementing applications.
AI’s advent
“Analytics” is a general term for the practice of data science. Traditionally it has been the realm of statisticians; before computers were commonplace in industry, analysts worked with pencil and paper.
Finding a point along a simple line generally falls under the category of standard data analytics. The line itself is the “regression,” and finding a point along it is a “regression analysis.” Performing simple regression analyses is commonplace.
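To make that concrete, here is a minimal sketch of the simple case in Python, using NumPy’s polyfit to fit a straight line and then find a point along it. The measurement values are invented for illustration.

```python
import numpy as np

# Hypothetical measurements: x is an input setting, y an observed output
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])

# Fit a straight line (a degree-1 polynomial): y = slope * x + intercept
slope, intercept = np.polyfit(x, y, 1)

# "Finding a point along the line" is then a single evaluation
x_new = 3.5
print(f"Predicted output at x = {x_new}: {slope * x_new + intercept:.2f}")
```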
Machine learning can be thought of as a subset of AI, though the two terms often are used interchangeably. Machine learning is the statistical side of AI, which also includes cognitive computing and modeling. The boundaries of each category can be blurry.
Another way to characterize AI would be as computer code that uses real-world data to draw conclusions. These conclusions can be acted upon automatically, if a system is set up that way, to make decisions. Then more information can be fed into the system and more decisions can be made. This description of AI reflects popular culture’s perception of AI as something that mirrors human thought. We take information in, come to conclusions, and make decisions.
Thus, human thought often can be expressed in the form of “if, then” and similar propositions. In an analogous manner, AI is instantiated in any of several types of algorithms.
Algorithms consist of computer code. This code, when written to do AI, is combined with descriptive data to derive results that support conclusions. Many types of algorithms are used in AI. It’s useful to categorize the algorithms into groups based on function. A few of the larger groupings are described below.
Algorithmic categories
This is where things start getting interesting. Let’s break down the major categories of machine learning algorithms.
Clustering
Some algorithms “group” things together.
This is best illustrated with an example. Let’s say a part is being produced. A quality assurance department or in-line measurement system will associate two measurements with the part: width and height. This data is used to generate a chart. Based on how the equipment works, parts generally fall into one of three groups, as can be seen in Figure 1. Wider parts are to the right and taller parts are toward the top.
Colors can be used to make clear the categorization scheme, as shown in Figure 2.
That’s simple enough. If you’re an Excel wizard, you could generate the graph yourself, plotting new parts onto the chart as they come through.
Clustering becomes more powerful as more information is added to the schema. Let’s say information is added about rejected units. It turns out the blue group represents almost 80% of rejects, whereas the yellow and red groups together account for around 20%. Now you’re on to something!
But why stop there? Let’s say we start collecting length as well. Imagine Figure 2 is plotted in three dimensions. The red, yellow and blue groups might all be close to you or far away from you. If the image is rotated, the groupings remain information-bearing. Advanced graphing programs do this, so it’s possible to continue manually inputting and grouping information. Grouping is improved, showing 85% of rejects in the blue group.
If clustering is done with a machine learning algorithm, the algorithm performs the categorization, but so far it merely automates something that can be done manually. So why is machine learning the better option?
A fourth dimension
Well, what if another dimension is wanted? Perhaps conductivity is a parameter of some significance, or a humidity reading, or an anodization current. But a 3-D graph can have only three dimensions.
On the other hand, use of a machine learning algorithm means four, five, six or even 100 dimensions can be graphed. This means additional interesting categories can be tracked, and the blue category for a particular combination of parameters could reach 95% or even 99% of rejects. This significantly improves insight into which as-built products are most likely to need rework. The option exists to pull them from production before unnecessary steps are taken, saving both time and money.
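Here’s a minimal sketch of that idea, assuming scikit-learn is available. The part measurements and reject flags are invented for illustration; in practice they would come from a quality system.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Hypothetical per-part measurements: width, height, length, conductivity.
# In practice these would come from QA or an in-line measurement system.
measurements = rng.normal(loc=[25.0, 40.0, 10.0, 1.2],
                          scale=[1.0, 1.5, 0.5, 0.1],
                          size=(1000, 4))
rejected = rng.random(1000) < 0.1  # stand-in reject flags

# Group the parts into three clusters across all four dimensions at once
model = KMeans(n_clusters=3, n_init=10, random_state=0)
labels = model.fit_predict(measurements)

# The payoff: the reject rate within each cluster
for cluster in range(3):
    mask = labels == cluster
    print(f"Cluster {cluster}: {mask.sum()} parts, "
          f"{rejected[mask].mean():.1%} rejected")
```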
Decision trees
The thought experiment just described suggests some of the possibilities inherent in these techniques. Let’s briefly look at some others.
Decision trees are another conceptually simple technique with big implications. We build decision trees all the time in our daily lives to answer questions like: How’s the traffic? What’s the weather like? Should I call my mother today? Having an algorithm build one for us is a simple task. Start by identifying the data that describes a process, as well as information about the results. The algorithm then builds a tree that maps the predicted outcomes. It runs through numerous possibilities (perhaps thousands or more) to come up with a tree that is as accurate as can be.
Models and training
Other important concepts include modeling and training. Training occurs when process data is fed into a machine learning algorithm to generate a model, for example one that can tell good process runs from bad ones.
Imagine someone sitting down with a pen and writing out a decision tree. Then they write another, and another, until they have a pile of them. After that, the best decision tree can be identified and the others thrown away. That final decision tree would be the model. For a clustering algorithm, the result would simply be called a “clustering model” or “the model.” In every case, the model is what’s used later to make the predictions.
In other words, using the optimal decision tree, a new set of data can be run through the process flow to derive a prediction. To go back to the prior example, the first question might be, “If the width is less than 23.5, then xxx,” and the second question could be, “If the height is more than 43.3, then xxx.” The machine learning algorithm creates the questions to get to the best answers it can.
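As a sketch of how that looks in code, the example below trains a scikit-learn decision tree on invented width and height data and prints the resulting chain of if-then questions. The reject rule, thresholds and data are all made up for illustration.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(1)

# Hypothetical training data: width and height per part, plus a reject flag.
# The reject rule here is invented so the tree has something to discover.
X = rng.normal(loc=[25.0, 40.0], scale=[2.0, 3.0], size=(500, 2))
y = (X[:, 0] < 23.5) & (X[:, 1] > 43.3)

# Training: the algorithm tries many candidate splits and keeps the best tree
tree = DecisionTreeClassifier(max_depth=3)
tree.fit(X, y)

# The model reads as a chain of "if, then" questions
print(export_text(tree, feature_names=["width", "height"]))

# Prediction: run a new part's measurements through the tree
print(tree.predict([[22.0, 45.0]]))  # likely flagged as a reject
```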
Decision trees have a lot of utility in predictive maintenance applications.
Regression analysis
As already mentioned, regression analysis can be simple. When machine learning algorithms are involved, it can also tackle the complex.
The basics of regression analysis can be illustrated by its use to find a point on a line. To draw a line, first decide what kind of line to draw. Is it straight? Is it a curve? Does it have many curves? When plotting in an x-y plane (two dimensions), this is simple to do. Machine learning shines when it is applied to complex data with many dimensions. Drawing 100 dimensions by hand would never be practical, but an algorithm can handle that with ease and find the best-fitting regression, if one exists.
Regression analysis can be very useful in process tuning and production forecasting. Not all data is appropriate for regression analysis (e.g., data that is clustered), but it can be great for data with relationships where one factor affects others.
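Here’s a minimal sketch of a 100-dimensional regression, assuming scikit-learn; the process data and its underlying linear relationship are fabricated for illustration.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(2)

# Hypothetical process data: 100 input dimensions per run, one output
X = rng.normal(size=(2000, 100))
true_weights = rng.normal(size=100)
y = X @ true_weights + rng.normal(scale=0.5, size=2000)  # noisy linear relation

# Fitting a 100-dimensional regression takes one line; no hand-drawing needed
model = LinearRegression().fit(X, y)

# Forecast the output of a new process run
new_run = rng.normal(size=(1, 100))
print(f"Predicted output: {model.predict(new_run)[0]:.2f}")
print(f"Fit quality (R^2): {model.score(X, y):.3f}")
```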
Deep learning
Perhaps you’ve heard of neural networks? Do you know what they are? I have a secret for you: no one does. Okay, that’s not entirely true. But it’s mostly true. Let me explain.
Neural networks are models that contain thousands or millions of nodes, which are small blocks of computer code, each of which can take input and produce output. They are so named because they are meant to model how neurons work in our brains. Some connections between nodes are stronger and some are weaker, just like the neurons in our brains.
For neural networks, the algorithms don’t just configure them. They build them. They can build models with millions of nodes to process data.
As a neural network digests data, it morphs and changes until it does a fairly good job at predicting outcomes or providing categorizations. It can be trained to do just about anything. It could take recent sensor readings and produce a probability of a problem arising. Or it could evaluate a set of 1s and 0s to determine whether it portrays a cat. All these millions of nodes somehow “understand” the image. It’s just simple math for each node, but somehow the arrangement and weights of the nodes allow for drawing conclusions.
This is why I say no one really understands them. Ask a data scientist to explain neural networks, and he or she will be able to explain the math. Ask that same data scientist to explain how it can recognize Fluffy amidst those bytes, and the word “magic” may arise in the explanation.
As impressive as neural networks are, they present a conundrum in the industrial space. If an algorithm tells you to do something, do you do it? If it’s a decision tree, or a regression, or a clustering model dispensing advice, it’s possible to trace how the conclusion was derived. But with a neural network, there’s no practical way to follow the “reasoning” of the system. It just gives an answer that can be believed or not. If the decisions might lead to downtime or production bottlenecks, it’s fair to ask why anyone should take the word of a black box.
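Black box or not, experimenting with a small neural network is straightforward. Here’s a minimal sketch using scikit-learn’s MLPClassifier; the sensor readings and the fault pattern are invented for illustration.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(3)

# Hypothetical sensor data: eight recent readings per machine, plus a fault flag
X = rng.normal(size=(3000, 8))
y = (X[:, 0] + X[:, 3] ** 2 > 2.0).astype(int)  # invented fault pattern

# A small network: two hidden layers of weighted "nodes"
net = MLPClassifier(hidden_layer_sizes=(16, 8), max_iter=2000, random_state=0)
net.fit(X, y)

# The trained model outputs a probability of a problem for new readings
readings = rng.normal(size=(1, 8))
print(f"Probability of a problem: {net.predict_proba(readings)[0, 1]:.1%}")
```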
The future is here
The world is changing. AI and machine learning are here to stay, and the tech industry is fully on board. The technology has proven its value, and adopting companies see themselves pulling ahead of the pack. AI isn’t the answer to every problem, but when applied well, it can make a large difference in a short amount of time. If AI provides even half the value promised, it will have significant positive impact across industries. Go out there, have some fun and build some models! You’ll be glad you did.
Get some hands-on experience
How does anyone get started with machine learning or artificial intelligence projects? Here are a few things to think about beforehand.
1. Identify a problem. Start by picking a process, area, or technique to improve. Find something with a strong need where the above algorithms seem like they could help.
2. Gather data. The more data the better. Thousands or millions of data points help train a model to be as good as possible. Make sure to use quality data; bad data can easily throw off algorithms. Data pre-processing and cleaning are almost always key to success.
3. Brush up on statistics. (Or find someone who’s an expert.) Understanding sampling techniques and causation versus correlation, and having a sense of the quality of the results, helps avoid false starts.
4. Have domain knowledge. (Or find someone who does.) Knowing the process or technique is critical to judging whether results are reasonable. Data scientists are great, but simply unleashing one on some data won’t get good results.
5. Create the model. This can be done with any available machine learning software. Modern supervisory control and data acquisition (SCADA) systems have some popular machine learning algorithms built in. Numerous cloud offerings and platforms include these types of algorithms as options.
6. Deploy the model. Often models can be run next to machines or on premises, even if built in the cloud or using other tools. Find the best way to run the model for the organization. If it’s part of a critical process, running it on premises is ideal.
7. Monitor for success. If success can’t be measured, no one will know it exists. Keep the before data to compare against. If the model needs refinement, go back to step five. Sometimes trying several models, or combining a few, delivers the best results. A minimal end-to-end sketch of these steps follows this list.
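Here is that sketch, covering steps five through seven and assuming scikit-learn and joblib; the process data, outcomes and file name are invented for illustration.

```python
import joblib
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(4)

# Stand-in for gathered data: process measurements and known outcomes
X = rng.normal(size=(5000, 6))
y = (X[:, 1] > 0.5).astype(int)

# Hold out data so success can be measured against something later
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Create the model
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# Monitor for success: measure before declaring victory
accuracy = accuracy_score(y_test, model.predict(X_test))
print(f"Accuracy on held-out data: {accuracy:.1%}")

# Deploy: persist the model so it can run on premises or next to a machine
joblib.dump(model, "reject_model.joblib")  # hypothetical file name
```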
This article appears in the IIoT for Engineers supplement for Control Engineering and Plant Engineering.