Computer vision in smart manufacturing – computing on the edge or on the cloud?

Industrial companies looking to implement smart manufacturing practices in their factories are quickly faced with the question of computing capacity. Our machine learning engineer Joni wrote about the reasons for opting for on-premises, edge or cloud computing – and how and when managers should decide between them.

As businesses all over the world evaluate the possible benefits and savings of moving their databases and computationally heavy tasks to the cloud, the same shift is happening in the manufacturing industry. The days when process data in the factory was stored only in on-premises systems are long gone.

Although the terms “Industry 4.0” and “Smart Manufacturing” have existed for almost a decade now, only a small percentage of industrial plants have gone through this ‘smart’ transformation. Smart manufacturing is generally understood as the mission of automating existing manufacturing and industrial processes with IoT solutions, automated robotics and applied artificial intelligence.

The engine behind smart manufacturing is arguably data and the ways it can be used to create products more efficiently. Nowadays factories are packed with sensors, from temperature and air quality to vibration, proximity and smart cameras. These enhance quality assurance and improve monitoring capabilities by giving a 360-degree snapshot of the current situation on the production line. Sensors create piles of time series data, but without proper use the majority of the potential information goes to waste. To help understand what is happening on the line and to make the most out of every component, computer vision is an essential part of making manufacturing smarter.

Computer vision expands understanding

Machine vision plays a central role in the transformation towards smart manufacturing. Having existed for some decades already, machine vision allows factory managers to see the whole picture of the production chain through centrally organized video recording.

However, while machine vision relies on the human eye to observe and respond, computer vision algorithmically processes status images and video streams into information and insights, and can even instruct other components on the production line to alter their operations when needed. The opportunities are limitless: computer vision can help and give insights to human workers, or directly instruct other components of the production process.

For example, using object detection algorithms, computer vision takes quality assurance to a whole new level with automatic verification of product quality and alerts on flawed products. Computer vision can also process images that are typically hard for humans to interpret. Once trained, computer vision applications perform well at detecting outliers in the process flow – for example, computer vision can help humans detect certain patterns in X-ray or thermal camera images. In more advanced factories, computer vision is merged with robotics, assisting robots that complete harder tasks, such as assembling pieces on the production line with the help of 3D imaging.
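To make the quality assurance idea concrete, here is a deliberately simplified Python sketch: it flags a product whose image deviates too much from a known-good reference. A real system would use a trained neural network detector rather than pixel differencing, and the images, thresholds and function names here are illustrative assumptions only.

```python
import numpy as np

def flag_defective(product_img: np.ndarray, reference_img: np.ndarray,
                   threshold: float = 0.02) -> bool:
    """Flag a product if it deviates too much from a known-good
    reference image. Images are grayscale float arrays in [0, 1]."""
    # Mean absolute per-pixel deviation from the reference product.
    deviation = float(np.mean(np.abs(product_img - reference_img)))
    return deviation > threshold

# Toy data: a 'good' product matches the reference closely...
reference = np.full((64, 64), 0.5)
good = reference + 0.005
# ...while a 'flawed' one carries a visible blemish.
flawed = reference.copy()
flawed[20:40, 20:40] = 1.0

good_flag = flag_defective(good, reference)      # expect False
flawed_flag = flag_defective(flawed, reference)  # expect True
```

The principle is the same in production systems, only the "deviation" is computed by a learned model instead of a pixel comparison, which makes the check robust to lighting and positioning changes.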

The usual approach to making camera recording smarter has been to upload video streams to an on-site server and carry out the algorithmic processing there. However, as computer vision relies heavily on neural networks – and hence on rather heavy computational tasks – a cloud environment often seems the more appropriate choice.

Moving to the cloud brings flexibility and cost savings

Though they enabled the first steps of the smart transformation, most existing on-premises hardware setups have become insufficient for balancing today's data-rich workloads. Most factories would benefit from moving from on-premises systems to cloud environments – this creates cost savings on hardware and computation, allows analysis of proper sensor datasets from all over the production line, and enables comparison to peers.

The benefits of using cloud computing for computer vision tasks also include a flexible environment for storing and sharing datasets, and access to GPU-powered computing instances. The computation itself can be divided into two phases: training the model, and running inference to detect objects from the video stream.
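The inference half of that split can be sketched in a few lines of Python: a loop that applies an already-trained detector to each frame of a stream and keeps only confident detections. The `model` callable and the detection format below are placeholder assumptions, not any particular framework's API; training would happen elsewhere, for example in the cloud.

```python
from typing import Callable, Iterable, Iterator, List, Tuple

# A detection here is a (label, confidence) pair; a real detector
# would also return bounding-box coordinates.
Detection = Tuple[str, float]

def run_stream_inference(
    frames: Iterable,
    model: Callable[[object], List[Detection]],
    min_confidence: float = 0.5,
) -> Iterator[List[Detection]]:
    """Run a trained detector on each frame of a video stream.
    Training happened elsewhere; this loop only performs the
    comparatively lightweight inference step."""
    for frame in frames:
        detections = model(frame)
        # Drop low-confidence detections before acting on the results.
        yield [d for d in detections if d[1] >= min_confidence]

# Stand-in for a trained model, applied to a three-frame 'stream'.
toy_model = lambda frame: [("bottle", 0.9), ("scratch", 0.3)]
results = list(run_stream_inference(range(3), toy_model))
```

Structuring the code this way keeps the expensive, occasional work (training) separate from the cheap, continuous work (inference), which is exactly the seam along which cloud and edge responsibilities are later divided.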

This is, of course, the preferred practice when computer vision tasks are piloted with only a handful of camera sets monitoring production lines. The other option, building a central on-premises system, requires substantially higher upfront investments.

However, when more and more video streams are processed in the cloud, the costs easily rise even higher than with on-premises solutions. To avoid transferring all the smart processes of the factory to the cloud only to find production costs suddenly skyrocketing, the solution can be to opt directly for edge computing.

Edge balances privacy and security with bandwidth

Edge computing is a distributed computing paradigm in which computation is brought to the ‘edge’ of the network – to devices and local servers on the factory floor rather than to the cloud. This radically reduces the latency of the calculations and allows robots and other components to react faster to the results of computer vision inference. Whereas cloud computing is based on large, centralised data warehouses and computation halls, edge computing does not depend on sometimes limited network bandwidth. Also, the business doesn’t have to be as dependent on third-party services.

The edge environment also allows industrial operations to run in a more secure and more private setting – when industry data stays within a closed environment, both the pressure to invest in data encryption and the risk of data leaks become smaller. It is also easier to plan the architecture and supervise the computing resources when access to them doesn’t have to be ordered separately, and the transmission costs of sharing sensor data are a fraction of those in the cloud.

This doesn't mean that the use of cloud computing should be avoided at all costs – so if you've just made a hefty investment in the cloud, don't blow your top just yet. The plant servers that edge devices connect to can still be securely linked to the cloud by handpicking the resources that should have cloud access. For example, computer vision models can be retrained weekly in the cloud on bigger datasets, and the updated model can then be pushed back to the plant server that the edge devices access.
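That retrain-in-the-cloud, deploy-to-the-edge loop can be sketched as a simple version check on the plant server. The manifest format, file names and skipped download step below are illustrative assumptions, not any particular vendor's API:

```python
import json
import tempfile
from pathlib import Path

def sync_model(cloud_manifest: dict, local_dir: Path) -> bool:
    """Fetch a freshly trained model from the cloud only when its
    version is newer than the copy already on the plant server."""
    version_file = local_dir / "model_version.json"
    local_version = -1
    if version_file.exists():
        local_version = json.loads(version_file.read_text())["version"]
    if cloud_manifest["version"] <= local_version:
        return False  # local copy is up to date; skip the transfer
    # A real deployment would authenticate and download the model
    # weights here; this sketch only records the new version number.
    version_file.write_text(json.dumps({"version": cloud_manifest["version"]}))
    return True

plant_server = Path(tempfile.mkdtemp())
first_sync = sync_model({"version": 3}, plant_server)   # fetches the model
second_sync = sync_model({"version": 3}, plant_server)  # no-op, up to date
```

The key design point is that only the small, occasional model transfer crosses the factory boundary, while the continuous video streams never leave the edge.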

Weighing the options

While it is certain that computer vision applications in industrial settings are here to stay, many managers find themselves struggling to calculate and weigh lifetime costs between cloud, edge and on-premises solutions. What’s more, with almost every pilot project bringing new sets of measurement devices and sensors into the mix, many managers may have a hard time finding a clear policy on how to integrate new technologies into the existing production line.

Smartbi partners with world-leading edge computing providers, such as NVIDIA and Huawei, and the Nordic IT infrastructure leader Proact. Together with our partners we can deploy truly end-to-end computer vision solutions for smart manufacturing – on public cloud, hybrid or fully on the edge. Don’t hesitate to contact us if you’re interested.

Joni Karras

Machine Learning Engineer