By Heinz Linsmaier, CEO, camLine
The desire to automate has been one of the drivers behind the development of manufacturing since time immemorial. Humans have always worked hard to try and lessen the need to work hard, with each industrial revolution bringing increasingly sophisticated machines onto the manufacturing floor to take the strain for us.
Industry 4.0 is the latest revolution, introducing the concept of a ‘Digital Twin’ to the factory floor. Intelligent machines can create a virtual version of themselves, composed of the data they produce, which can direct process decisions based on performance parameters and AI networks. As our capabilities here improve, we can begin to reduce the number of employees on the frontline, automating more and more processes that once had to be manual.
With so many machines creating Digital Twins, big data is increasingly placed at the heart of business decisions in the modern manufacturing enterprise, informing strategy across every area of the manufacturing process. According to research from IDC, the world will generate 163 zettabytes of data a year by 2025; that's 163,000,000,000,000 gigabytes!
Big data offers an opportunity to make serious, immediate improvements to the cost-effectiveness of manufacturing operations, raising the quality of manufacturing while reducing support costs. For example, data streams can track defects, conduct forecasting for the supply chain, and analyse machinery for maintenance needs.
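To make the defect-tracking idea concrete, here is a minimal sketch of a rolling defect-rate alarm over a stream of inspection results. Everything in it (function name, window size, threshold) is an illustrative assumption, not any particular vendor's API:

```python
from collections import deque

def rolling_defect_rate(results, window=100, threshold=0.05):
    """Flag positions where the defect rate over the last `window`
    parts exceeds `threshold`. All names and numbers here are
    illustrative assumptions, not a specific product's API."""
    recent = deque(maxlen=window)  # sliding window of recent results
    alerts = []
    for i, defective in enumerate(results):
        recent.append(1 if defective else 0)
        if len(recent) == window and sum(recent) / window > threshold:
            alerts.append(i)
    return alerts

# A stream of 100 good parts followed by 10 defects: the alarm
# trips once the windowed rate climbs past 5%.
stream = [0] * 100 + [1] * 10
print(rolling_defect_rate(stream))  # [105, 106, 107, 108, 109]
```

The same windowed pattern extends naturally to the other uses mentioned above, such as watching a vibration or temperature signal for maintenance triggers.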
However, for the best possible results we need to marry these insights and strategies with the personal intuition of the boots on the ground. Our desire to automate can’t marginalise the value of those on the manufacturing floor; we need to find the right balance between the benefits of automation and the intuition of our experts.
The cliché that we could remove all of the staff from the factory floor is, within our lifetime at least, a ludicrous proposition. Industry 4.0 and big data are still in their relative infancy (the German government initiative from which Industry 4.0 takes its name was first publicised as recently as 2011), and we simply aren't able to exercise the control over our machines that we'd need to make this a reality.
The tiny differences between supposedly identical machines are a good example of current limitations. Machines may be built to the same specifications, but each develops its own character: patterns of wear and tear, different combinations of replaced parts accumulated over the years, and so on, making every machine unique. Their performance gradually grows apart, and it's difficult to adjust data analyses to compensate accurately for these tiny disparities.
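A common partial remedy is to score each machine against its own history rather than a fleet-wide average. The sketch below assumes simple per-machine z-scoring; the machine names and readings are invented for illustration, and the approach can only compensate for what the sensors actually capture:

```python
import statistics

def normalise_per_machine(readings):
    """Z-score each machine's sensor values against its OWN history,
    rather than a fleet-wide baseline, so 'identical' machines that
    have drifted apart remain comparable. Illustrative sketch only."""
    out = {}
    for machine, values in readings.items():
        mean = statistics.fmean(values)
        spread = statistics.stdev(values)
        out[machine] = [(v - mean) / spread for v in values]
    return out

# Two 'identical' presses whose vibration baselines have drifted apart:
readings = {
    "press_A": [0.51, 0.52, 0.50, 0.53],
    "press_B": [0.61, 0.63, 0.60, 0.64],  # same design, higher baseline
}
scores = normalise_per_machine(readings)
```

Even so, this only corrects for disparities the instrumentation can see, which is exactly where the experienced operator comes in.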
An operator who has worked with these machines as they've developed will recognise the differences intuitively, literally feeling them in qualities like noise and vibration. This operator's annotations on a data sheet can often prove more useful, and more cost-effective, than digital analysis.
The Human Factor
Taking this one step further, it's not just that data analysis struggles to accommodate 'identical' machines; it's also poor at predicting the impact of people, and there will almost inevitably be human processes that affect the results of the data you're receiving.
Most of these elements are down to human inconsistency – if you're delayed in beginning a manufacturing cycle by two minutes every time, how many cycles do you lose per year? There's also machinery in and around the manufacturing floor that doesn't boast IoT connectivity but can impact the analysis process. For example, if using the microwave for breakfast every morning disrupts a key Wi-Fi network, this can have a major and seemingly inexplicable impact on productivity.
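To put a rough number on that two-minute question, here is a back-of-the-envelope sketch. The cycle time and round-the-clock schedule are assumptions for illustration, not figures from any real operation:

```python
# Back-of-the-envelope cost of a recurring two-minute delay.
# Assumed numbers: a 30-minute nominal cycle and a line running
# 24/7 all year; both figures are illustrative assumptions.
MINUTES_PER_YEAR = 365 * 24 * 60          # 525,600
CYCLE_MINUTES = 30                        # assumed nominal cycle time
DELAY_MINUTES = 2                         # the recurring start-up delay

nominal_cycles = MINUTES_PER_YEAR // CYCLE_MINUTES
delayed_cycles = MINUTES_PER_YEAR // (CYCLE_MINUTES + DELAY_MINUTES)
lost_cycles = nominal_cycles - delayed_cycles

print(nominal_cycles, delayed_cycles, lost_cycles)  # 17520 16425 1095
```

Under those assumptions, a two-minute slip per cycle quietly costs over a thousand cycles a year, the kind of pattern that looks inexplicable in the data but obvious to someone standing on the floor.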
You might counter that, if we continue to develop our data analysis capabilities, we could eventually track every possible metric on the manufacturing floor and finally have the data we need for full automation. This overlooks the fact that, even with our ‘limited’ capabilities at present, we’re already producing far more data than we can actually use.
Many manufacturers are recording dark data (data produced by operations or analysis but never actually used) in the hope that it will be beneficial at some future point. Others aren't recording it at all, unaware that it could be of use. With IHS Markit forecasting more than 30 billion connected devices by 2020, you can imagine the difficulty of leveraging the intimidating amount of data that this network will produce.
Individual intuition is, again, a major asset here. With those on the manufacturing floor able to help provide the context for their efforts, data scientists are able to ask the right questions of the data sets to hand, align the data of value, and clean up the results. Individual expertise thereby allows manufacturers to genuinely get the most out of the data they create and collect.
Ultimately, all of these examples illustrate the need for data to be managed by people who are close to the operational process, not merely observing it. The manufacturers who will benefit from the most insightful, cost-effective processes can only identify and establish them with the help of the people immersed in those processes.
camLine is a software solutions provider based in Germany that has expertise in high tech manufacturing data management processes.