Vision-guided robots (VGRs) enable defect-free manufacturing by providing critical quality information, such as fault data and measurement tolerances, that a blind robot configured to operate within a fixed coordinate system or stage cannot.
They can identify defects through inspection, which directly influences quality. They can also support quality indirectly through predictability: when a robotic system pauses because of a vision-system error, the pause itself highlights a problem in the process.
Both techniques detect and flag faulty items, and Industry 4.0 connectivity makes them more effective. In addition, vision systems can record quality data and upload it to an external network, where operators can use it to forecast errors and respond quickly. Some executives even use the data to improve deep learning algorithms.
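To make that data flow concrete, here is a minimal Python sketch of a vision station posting one inspection record to a plant network. The endpoint URL, field names, and payload format are illustrative assumptions, not any vendor's actual API.

```python
import json
import urllib.request
from datetime import datetime, timezone

# Hypothetical plant-network endpoint; a real system would use the
# vendor's own API, an MQTT broker, or a historian interface.
QUALITY_ENDPOINT = "http://quality-server.local/api/inspections"

def post_inspection_record(part_id: str, passed: bool, deviation_mm: float) -> int:
    """Upload one inspection result so operators (or a deep learning
    pipeline) can consume it later. Returns the HTTP status code."""
    record = {
        "part_id": part_id,
        "passed": passed,
        "deviation_mm": deviation_mm,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }
    request = urllib.request.Request(
        QUALITY_ENDPOINT,
        data=json.dumps(record).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(request) as response:
        return response.status

if __name__ == "__main__":
    # Example: part A1042 measured 0.12 mm off nominal and passed inspection.
    status = post_inspection_record("A1042", passed=True, deviation_mm=0.12)
    print(f"Quality record uploaded, HTTP status {status}")
```

Once records like this accumulate on the network, they become exactly the kind of dataset operators can mine for error forecasting or feed into deep learning training.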
“The Industry 4.0 vision is to have an intelligent connected manufacturing system that is highly data-driven; therefore, the accuracy of the data, how fast it is obtained, and the data interpretation is key,” says Frank Stone, national sales manager, Capture 3D.
“The resulting data provides insight into the entire product lifecycle from design to development to production for a modernized lean manufacturing strategy. This data unlocks Quality 4.0 capabilities, such as digital assembly analysis, allowing you to use digitized components to virtually build an assembly for form, fit, and function analysis regardless of the physical location. Simulating the assembly process within the digital space reduces costs and accelerates launch time.”
According to Nick Longworth, senior systems application engineer at SICK Inc., machine vision, a form of artificial intelligence, is highly prevalent in robotics today. The pandemic has only increased its use, as end users facing labor constraints seek to develop more automated and adaptable processes.
“There are both mature and nascent areas to the robotic machine vision field,” Longworth says.
“On one hand, you have traditional rule-based algorithms like pattern matching, optical character recognition, and other tools, which have been allowing robots to complete pick-and-place and inspection tasks for decades. On the other, you have machine learning and deep learning applications that are allowing the industry to complete tasks that seemed impossible a few years ago, like anomaly detection in wood grains.”
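To ground the rule-based end of that spectrum, the short sketch below uses OpenCV's normalized cross-correlation for classic template matching. The image paths and acceptance threshold are assumptions for illustration, not parameters from any particular vision product.

```python
import cv2

# Hypothetical file paths; in a robot cell these frames would come
# straight from the camera.
scene = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE)
template = cv2.imread("part_template.png", cv2.IMREAD_GRAYSCALE)

# Normalized cross-correlation: a classic rule-based pattern-matching tool.
result = cv2.matchTemplate(scene, template, cv2.TM_CCOEFF_NORMED)
_, max_score, _, max_loc = cv2.minMaxLoc(result)

MATCH_THRESHOLD = 0.8  # assumed acceptance threshold
if max_score >= MATCH_THRESHOLD:
    h, w = template.shape
    # Pixel coordinates a robot controller could translate into a pick pose.
    print(f"Part found at {max_loc} (score {max_score:.2f}), size {w}x{h}")
else:
    print(f"No reliable match (best score {max_score:.2f})")
```

A deep learning tool replaces the hand-picked template and fixed threshold with a trained model, which is what makes subtler tasks, such as the wood-grain anomaly detection Longworth mentions, feasible.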
He argues that mature, rule-based tools are perhaps the most prominent in robotic machine vision. At least a handful can be found in almost any facility that uses robots, and they are simple to operate and highly dependable. Machine learning and deep learning, by contrast, are relative newcomers to vision-guided robotics.
“Due to complexity and need for further development, they are currently reserved for applications where they are absolutely needed over their traditional rule-based vision counterparts,” Longworth says. “They receive a lot of attention because they have the potential to alter robotics as we know them.”
Deep learning, for example, is expected to achieve broad productive use within two to five years, according to experts.
According to Steve Reff, automation and launch support manager at Capture 3D, AI and machine learning are generally used to take over repetitive tasks in order to achieve higher throughput.
“For quality control and dimensional inspection, AI technology is just not there yet, because the industry needs to adopt full-field data collection as a standard—and it must be good data,” he says. “With complete, high-quality data sets, there is potential for AI to become capable of making intelligent decisions through machine learning and eventually take over more decision-making processes for us in the future, but first, we need to secure consistent access to good data sets.”
AI systems, like humans, require excellent data to make smarter judgments.
“The better your data is, whether you’re a robot or a human, the better, faster, and more accurate your decision-making is,” he says. “Accurate data is always at the core of every good decision.”
Lavanya Manohar, a senior director at Cognex, predicts that the number of vision-enabled robots will increase over the next decade.
“We also expect more and more deep learning to be utilized in the inspection and positioning of robots,” she says. “We expect robots to operate with more intelligence and move into areas of more complex grabbing, positioning, and scene-understanding. We expect to see more adoption of 3D vision within robotics and not just traditional 2D vision.”
According to Longworth, trends in vision-guided robotics can be summed up in two words: “simplify” and “complexity.”
“End users are attempting more complex vision applications but want to simplify the way they are built, programmed, and supported,” he says. “Many small to medium-sized end users also may want to DIY the integration to cut costs. This has led to a rise in more configurable and ‘no-code’ technology. These solutions allow users to build complex applications without advanced knowledge of robotics or machine design.”
Difficult tasks such as bin picking and deep learning were once considered too complex for practical deployment in settings like manufacturing or warehousing, he says, but firms have developed tools to simplify them. SICK’s PLB software, for example, lets users solve a bin-picking application in a few configurable stages and have their robot picking components within a couple of hours of unpacking the camera. Older technologies, such as 2D vision, are being simplified as well.
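The staged workflow Longworth describes can be pictured with the toy Python sketch below. The stage names and the simple grasp-the-topmost-point heuristic are hypothetical illustrations of the idea, not the actual PLB pipeline.

```python
import numpy as np

def acquire_point_cloud(rng: np.random.Generator) -> np.ndarray:
    """Stage 1: acquire a 3D point cloud. Here we fabricate random
    points in a 0.4 m x 0.4 m x 0.2 m bin instead of reading a camera."""
    return rng.uniform([0.0, 0.0, 0.0], [0.4, 0.4, 0.2], size=(500, 3))

def locate_part(cloud: np.ndarray) -> np.ndarray:
    """Stage 2: locate a pickable part. Real systems match CAD models
    against the cloud; this toy heuristic grabs the topmost point,
    which is the least likely to be occluded."""
    return cloud[np.argmax(cloud[:, 2])]

def compute_grasp_pose(point: np.ndarray) -> dict:
    """Stage 3: turn the located point into a robot target. A real
    pipeline would also compute orientation and check collisions."""
    return {"xyz_m": point.round(3).tolist(), "approach": "straight-down"}

if __name__ == "__main__":
    rng = np.random.default_rng(seed=7)
    cloud = acquire_point_cloud(rng)
    target = locate_part(cloud)
    print("Grasp target:", compute_grasp_pose(target))
```

Packaging each stage behind a configurable interface like this, rather than exposing the underlying math, is what lets a user solve bin picking without writing vision code.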
Combining automation with tools like PLB software makes vision technology more accessible to novice users while allowing veteran users to improve existing systems and processes.
“It allows companies with fewer resources to automate effectively and efficiently while giving experienced users another avenue for development and continuous improvement,” Longworth says.
Stone echoes this sentiment.
“We are seeing exponential growth in the demand for automated solutions. The trend is to go automated to increase throughput and program repetitive processes because, for ROI purposes, everyone wants to streamline processes and cut costs—and the best way to do that is to automate the process,” he says.
Lights-out manufacturing, for instance, is a strategy that allows businesses to run an eight- or twelve-hour shift without human involvement. Companies can literally turn off the lights and return the next morning to find inspection records created for them by an automated part-uploading, batch-processing system.
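A minimal sketch of such unattended batch processing, assuming a drop folder of scan files and one CSV record per part (all hypothetical details), might look like the loop below.

```python
import csv
import time
from pathlib import Path

SCAN_DIR = Path("incoming_scans")        # hypothetical drop folder for new part scans
REPORT = Path("inspection_records.csv")  # records waiting for operators in the morning

def inspect(scan_file: Path) -> dict:
    """Placeholder inspection: a real system would run measurement
    software against the scan. Here we only record the file size."""
    size = scan_file.stat().st_size
    return {"part": scan_file.stem, "bytes": size, "passed": size > 0}

def run_overnight(hours: float = 8.0, poll_seconds: float = 30.0) -> None:
    """Poll the drop folder for the length of a shift, appending an
    inspection record for each new scan that appears."""
    seen: set[Path] = set()
    deadline = time.monotonic() + hours * 3600
    while time.monotonic() < deadline:
        for scan in SCAN_DIR.glob("*.scan"):
            if scan not in seen:
                seen.add(scan)
                record = inspect(scan)
                new_file = not REPORT.exists()
                with REPORT.open("a", newline="") as f:
                    writer = csv.DictWriter(f, fieldnames=record.keys())
                    if new_file:
                        writer.writeheader()
                    writer.writerow(record)
        time.sleep(poll_seconds)

if __name__ == "__main__":
    SCAN_DIR.mkdir(exist_ok=True)
    run_overnight()
```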
“In the short-term future, we will see more solutions similar to this because the industry is looking for ways to automate processes and become more efficient and leaner in the way they manufacture goods. As this space becomes more competitive, implementing automation, whether through vision robotics or otherwise, can provide a great ROI,” Stone says.
Furthermore, these solutions free up operators and other resources to focus on other tasks.
According to Manohar, the robotics industry is becoming more eager to “try out vision.”
Manufacturers continue to adopt deep learning and 3D vision, robots are becoming more user-friendly, and both technologies are getting less expensive.
There is still room for improvement.
“Despite all the improvements made in the area of vision with 3D and deep learning and traditional high-accuracy 2D, the technology is still relatively lower down the S-curve compared to inline manufacturing use-cases for vision — such as measurement, gaging, identification,” she says. “Continued algorithmic improvements, greater hand-eye flexibility between the robot and vision, and a full-system optimization per use-case will be required to see adoption rates accelerate.”