Industry 4.0: powered by vision, smart factories are coming

23 September 2022


In a post-pandemic world, the need for efficient, optimised factories has never been greater, finds Andrei Mihai

The idea of a Fourth Industrial Revolution – or Industry 4.0 – has been floating around for a while now. At its crux is interconnectedness – from the IoT platforms and the sensors picking up the data, to the cloud where everything is brought together; Industry 4.0 factories will no longer be a set of different machines working on individual – and independent – tasks, but a coordinated mechanism fed by data and smart algorithms.

Some companies are already experiencing that future today. From Siemens calling Industry 4.0 ‘the future of industry’ to Ericsson hailing it as ‘a new era in manufacturing’, there’s no shortage of momentum around this transformation, but exactly what kind of role can machine vision play?

Humans are fundamentally visual creatures. All of us rely heavily on visual cues to assess situations and make decisions. Even for machines, which can use all sorts of sensors and aren’t faced with any inherent biological limitations, vision is likely to be one of the key sources of information.

‘Vision is one of the most important senses for humans, but also for machines,’ agrees Sören Böge, head of product management 2D image acquisition at Basler, a manufacturer of imaging components for computer vision applications. ‘You have several sensors in a factory – for example, photoelectric sensors or proximity sensors – but comprehensive tasks, such as sorting, quality analysis or orienting an AGV [Automatic Guided Vehicle], require vision to deliver the significant data you need to make a factory smart enough to make its own decisions.’

For factories looking to incorporate machine vision, the challenge lies not so much with the sensors themselves, but rather how these sensors communicate with each other. A factory’s ability to make its own decisions based on this dialogue is what truly differentiates a smart factory.

The first prerequisite for this (and Industry 4.0 in general) is true interconnectedness. You can’t have islands or isolated components in a factory. Everything needs to be connected. This is not a new idea, but what is new is how these components connect. Increasingly, emerging communication technologies and protocols are enabling the processes required for smart factories to happen.

‘You still have bus system-based factories but, if you look in the future, 5G, Wi-Fi 6 and other standards of wireless connection will play a significant role, which will allow you to interconnect broader spaces, broader areas and even multiple factories in the future,’ Böge adds. Basler’s focus on communication standards is also clear from the fact that it was the first machine vision company to join the 5G Alliance for Connected Industries and Automation (5G-ACIA).

Even with the new wireless standards, however, the industry still faces sizeable limitations. Machine vision generates a lot of data: even standard-speed cameras can stream at 120 MB per second and, if you have dozens – or hundreds – of cameras, machines and other devices, even Wi-Fi 6 is unlikely to be sufficient to support all this data transfer. So Böge expects we’re not quite done with wires yet, and there’s still a way to go before streaming becomes truly wireless.
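To get a feel for the numbers, the rough arithmetic below compares the aggregate stream from a bank of cameras with what a wireless link can realistically carry. The camera count and the usable Wi-Fi 6 throughput are illustrative assumptions, not measurements.

```python
# Back-of-the-envelope check: aggregate camera traffic versus a wireless link.
# All figures below are illustrative assumptions.
CAMERA_RATE_MB_S = 120      # one standard-speed vision camera, as quoted above
NUM_CAMERAS = 50            # "dozens" of cameras on a single line
WIFI6_USABLE_MB_S = 1200    # optimistic usable Wi-Fi 6 throughput (9.6 Gbit/s is only theoretical)

aggregate = CAMERA_RATE_MB_S * NUM_CAMERAS   # 6,000 MB/s of raw image data
print(f"Aggregate stream: {aggregate} MB/s")
print(f"That is {aggregate / WIFI6_USABLE_MB_S:.0f}x the assumed usable wireless capacity")
```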

But there are alternatives. An important accelerator of machine vision in smart factories is edge computing, where smaller computers with sufficient processing power are embedded into the system to do some of the pre-analysis work at the edge. Edge systems exist, but they come with significant cost and security challenges. Once edge systems become cost-competitive with PCs, a truly smart and independent factory can be set up.
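As a minimal sketch of that idea, the snippet below analyses frames on a device sitting next to the camera and sends only a compact result upstream, rather than streaming full images. The endpoint URL and the crude defect heuristic are hypothetical placeholders.

```python
# Sketch of edge pre-analysis: inspect frames next to the camera and send only a
# small result message upstream, instead of streaming full images over the network.
# The endpoint URL and the "defect" heuristic are illustrative assumptions.
import cv2
import requests

RESULTS_ENDPOINT = "http://factory-gateway.local/inspection/station7"  # hypothetical

cap = cv2.VideoCapture(0)            # stand-in for an industrial camera interface
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Placeholder analysis: count dark pixels as a crude defect score.
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, 60, 255, cv2.THRESH_BINARY_INV)
    defect_ratio = cv2.countNonZero(mask) / mask.size
    # A few bytes per frame cross the network instead of megabytes of pixels.
    requests.post(RESULTS_ENDPOINT, json={"defect_ratio": round(defect_ratio, 4)})
```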

Despite these limitations, however, Industry 4.0 isn’t something on the near horizon – it’s happening already. ‘If we look at the factory, the game changer for me in Industry 4.0 is not hard cut,’ says Böge. ‘I don’t think it’s something that’s starting now, but is an evolution that began quite some time ago.’

As more large companies share their success stories, we can expect confidence to increase and the new industrial revolution to truly take shape, with machine vision as its core accelerator. When that happens, how could we expect a smart factory to work? ‘Almost like a home automation system,’ suggests Böge. The idea would be an automated supply store, where day-to-day supply management tasks such as reordering and stock control would be triggered automatically. Parts would then be transported to the assembly machines, where robots would load and unload them. Everything would be quality inspected and the final product placed on the storage shelf, from where there would be a connection to shipping and so on. ‘So, working back from the customer order, you could more or less complete production planning and, in nearly every step of this, vision systems would be a central success factor,’ adds Böge.
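A toy sketch of ‘working back from the customer order’ might look like the following: a single order fans out into reordering, transport, assembly and inspection steps, with the points where vision systems gate progress marked. The part names, stock levels and bill of materials are invented for illustration.

```python
# Toy order-driven production plan. Stock levels, parts and step names are
# illustrative assumptions, not any real factory's data.
from dataclasses import dataclass

@dataclass
class Order:
    product: str
    quantity: int

STOCK = {"housing": 40, "pcb": 10}           # hypothetical parts inventory
PARTS_PER_UNIT = {"housing": 1, "pcb": 2}    # hypothetical bill of materials

def plan(order: Order) -> list[str]:
    steps = []
    # 1. Supply management: reorder automatically when stock cannot cover the order.
    for part, per_unit in PARTS_PER_UNIT.items():
        needed = per_unit * order.quantity
        if STOCK.get(part, 0) < needed:
            steps.append(f"reorder {needed - STOCK.get(part, 0)} x {part}")
    # 2. Transport, assembly, inspection and storage, with vision checkpoints.
    steps += [
        "AGV transports parts to assembly cell      [vision: navigation]",
        "robot loads and unloads assembly machine   [vision: part orientation]",
        "quality inspection of finished units       [vision: defect detection]",
        "place on storage shelf, notify shipping",
    ]
    return steps

for step in plan(Order("controller", 25)):
    print(step)
```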

Industry 4.0 – and digitisation in general – have become relatively ‘fuzzy’ terms, and the terminology also has a geographical component. In the USA, for instance, IIoT (Industrial Internet of Things) is the buzzword that organisations respond to, rather than Industry 4.0. No matter what you call it, though, it increasingly rests on AI, and on deep learning specifically.

It is widely recognised that deep learning is now the state of the art in machine learning for machine vision, and it is being deployed ever more widely across industrial applications. Studies show this, and Nvidia, one of the key players in the vision space, states it clearly too. AI, and deep learning specifically, enables computer vision models to learn, adapt and perform comparably to a human expert in a factory, while requiring significantly less input (and lower costs in the long run). In the food industry, for instance, AI systems have already become proficient at detecting and grading various food products – in some cases with an accuracy of more than 95 per cent, or even 99 per cent, according to Lili Zhu, Petros Spachos and colleagues in their recent survey, Deep learning and machine vision for food processing: A survey.
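In the spirit of the grading systems the survey describes, the sketch below fine-tunes a pretrained CNN to assign quality grades to product images. The dataset folder, grade classes and hyperparameters are placeholders, not anything taken from the survey itself.

```python
# Minimal transfer-learning sketch for grading product images into quality classes.
# Dataset path, class layout and hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

tfm = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
# Expects a folder layout like grading_data/train/<grade_A|grade_B|reject>/*.jpg
train_set = datasets.ImageFolder("grading_data/train", transform=tfm)
loader = DataLoader(train_set, batch_size=32, shuffle=True)

# Pretrained backbone (torchvision >= 0.13), new head with one output per grade.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, len(train_set.classes))

optimiser = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

model.train()
for epoch in range(5):
    for images, labels in loader:
        optimiser.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        optimiser.step()
```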

‘Machine vision systems are widely used in food safety inspection, food processing monitoring, foreign object detection and other domains,’ write the authors. ‘It provides researchers and the industry with faster and more efficient working methods and makes it possible for consumers to obtain safer food. The systems’ processing capacity can be boosted to a large extent, especially when using machine learning methods.’

Methods such as those common in food detection, grading and processing can also be used in factories to deal with components or products, but there remain significant obstacles. Among these is the availability of visual data. To train a deep-learning algorithm for tasks such as sorting, picking and quality inspection, you need a lot of visual data, and obtaining good-quality data is rarely easy. But solutions are emerging on this front, too. Siemens’ SynthAI generates thousands of randomised, annotated synthetic images from nothing more than 3D CAD data. This data can then be used for training, giving organisations fully annotated datasets with which to train their systems. This not only shortens data collection and training time, but also eliminates tedious manual image capture and labelling, and produces a model that can also be used offline.
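The snippet below is a generic sketch of the synthetic-data idea (it is not the SynthAI tool or its API): a rendered part is pasted onto random backgrounds at random poses, and the bounding-box annotation is written out automatically because the generator knows exactly where it placed the part. File names and the annotation format are invented for illustration.

```python
# Generic synthetic-data sketch: composite a rendered part onto random backgrounds
# and emit the annotation for free. All file names are hypothetical.
import json
import os
import random
from PIL import Image

os.makedirs("synthetic", exist_ok=True)
part = Image.open("part_render.png").convert("RGBA")    # e.g. a render exported from CAD
backgrounds = ["bench.jpg", "conveyor.jpg", "bin.jpg"]  # photos of plausible scenes

annotations = []
for i in range(1000):
    bg = Image.open(random.choice(backgrounds)).convert("RGBA").resize((640, 480))
    sprite = part.rotate(random.uniform(0, 360), expand=True)   # random orientation
    sprite.thumbnail((random.randint(80, 240),) * 2)            # random, bounded scale
    x = random.randint(0, 640 - sprite.width)
    y = random.randint(0, 480 - sprite.height)
    bg.alpha_composite(sprite, (x, y))
    bg.convert("RGB").save(f"synthetic/img_{i:04d}.jpg")
    # The label costs nothing: we know exactly where the part was placed.
    annotations.append({"image": f"img_{i:04d}.jpg",
                        "bbox": [x, y, sprite.width, sprite.height]})

with open("synthetic/annotations.json", "w") as f:
    json.dump(annotations, f)
```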

Easily accessible, high-quality data is important for developing efficient algorithms which, in turn, not only speed up the machine vision process but also make the entire system consume less power. That matters because it makes the system easier to run on edge machines and to feed over wireless connections.

Other companies, including Basler, have demonstrators for automating various aspects of factories and are working on automating robots, often remarkably quickly. Within hours, even people without any machine vision or programming experience can set up an example demonstrator that separates various products placed randomly below the robot – something that would have taken thousands of lines of code and thrown up many complications just a decade ago.
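For a sense of what such a demonstrator does under the hood (this is a generic sketch, not Basler’s actual demonstrator), the snippet below finds randomly placed parts in a top-down image and hands each detection to a robot for sorting. The robot interface and the sorting rule are hypothetical.

```python
# Generic sort-by-vision sketch: detect parts in a top-down image and decide
# which bin each one belongs in. The robot calls are hypothetical placeholders.
import cv2

def find_parts(frame):
    """Return (x, y, area) for each blob that contrasts with the background."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    parts = []
    for c in contours:
        if cv2.contourArea(c) < 500:   # ignore small noise blobs
            continue
        m = cv2.moments(c)
        parts.append((m["m10"] / m["m00"], m["m01"] / m["m00"], cv2.contourArea(c)))
    return parts

cap = cv2.VideoCapture(0)              # stand-in for the demonstrator's camera
ok, frame = cap.read()
if ok:
    for x, y, area in find_parts(frame):
        bin_id = "large" if area > 5000 else "small"   # toy sorting rule by size
        print(f"pick at pixel ({x:.0f}, {y:.0f}) -> {bin_id} bin")
        # robot.pick(x, y); robot.place(bin_id)        # hypothetical robot interface
```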

Given all this, why aren’t we seeing smart factories pop up everywhere? Based on signals from both industry and academia, the technology required for Industry 4.0 is already here, but there remains a significant gap between something being possible and something that supports a good business case. We know that, in general, industry is slower than research to adapt.

Perhaps the biggest current deterrent for smart factories is that, despite tangible progress being made, there are few large-scale success stories. Setting up a factory is complicated in the first place: it’s a big investment and you need everything to work smoothly. Investors are often understandably hesitant to implement the latest technology because that technology doesn’t have a track record.

There’s also a knowledge gap to bridge. Even a smart factory needs ongoing monitoring, so you need engineers who are well-versed in new Wi-Fi standards, machine learning algorithms and edge computing. You can’t just expect your existing engineers, who usually work with legacy systems, to be automatically confident or capable in every new technology.

It’s going to take time, but the signs are already out there. It’s perhaps just a matter of confidence and cash – and this isn’t an era when either is in plentiful supply.

Image: Thitichaya Yajampa/shutterstock.com