Machine learning, a subset of artificial intelligence (AI), describes algorithmic processes that enable software programs to improve automatically from experience. A report from Fortune Business Insights projects that the global machine-learning market will grow at a compound annual growth rate of 38.8%, from $21.2 billion in 2022 to $209.9 billion in 2029.
Purely digital machine-learning processes can benefit manufacturing in many ways, such as improving product design and operational efficiency, but this overview deals primarily with machine learning in robotics. And while robotic process automation (RPA) technologies are software-based and confer many cost-saving and efficiency benefits on businesses, the robotics referred to here are physical, industrial robots.
Machine learning is a key component in creating a connected factory that includes a chain of industrial Internet of Things (IIoT) devices, including robotics, that enhance and streamline workflows as part of the paradigm of smart manufacturing. IIoT sensors generate big data streams that machine-learning data analysis can mine for valuable insights. Robots are also increasingly able to sense and reason about their surroundings, drawing on IIoT sensors such as ultrasound, radar, lidar, force sensors, and cameras, along with computer vision running on graphics processing units (GPUs) and machine-learning AI.
Besides its use in product assembly, machine learning has begun to transform manufacturing in a number of ways, and it will continue to do so. Machine learning can benefit assembly greatly: For certain products such as semiconductors, machine-learning-equipped production lines can reduce downtime, spillage, and maintenance and inspection costs.
Post-assembly, machine learning can improve quality assurance. Now that high-resolution cameras and high-powered GPUs are common and not prohibitively expensive, machine-learning computer-vision systems can often inspect products for defects better than people can.
Machine learning can also perform nondestructive testing without human error. For example, applying machine-learning segmentation and object-detection algorithms to sensor data such as ultrasound can find defects like cracks in material with greater accuracy and efficiency.
In its executive research paper “Digital Factories 2020—Shaping the Future of Manufacturing,” PricewaterhouseCoopers (PwC) reported that predictive maintenance will be the largest growth area for machine learning in factories during the next few years, growing from 28% of firms using it in 2020 to 66% planning to use it by 2025. Predictive maintenance for factory robotics and other machines draws on the big data generated by IIoT sensors that record information on the equipment’s condition. Machine-learning algorithms then analyze that data to predict when a machine will need maintenance, helping manufacturers avoid costly downtime from unscheduled maintenance and instead plan maintenance for times of low customer demand, PwC says.
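The core of predictive maintenance can be sketched in miniature: a model learns what “normal” sensor behavior looks like and flags readings that deviate sharply from it. The sketch below uses a simple rolling z-score in place of a trained model, and the vibration readings are hypothetical.

```python
from statistics import mean, stdev

def maintenance_alert(readings, window=10, z_threshold=3.0):
    """Flag indices where a sensor reading deviates sharply from the recent
    baseline -- a toy stand-in for predictive-maintenance models that learn
    normal equipment behavior from IIoT sensor streams."""
    alerts = []
    for i in range(window, len(readings)):
        baseline = readings[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma > 0 and abs(readings[i] - mu) / sigma > z_threshold:
            alerts.append(i)
    return alerts

# Hypothetical vibration readings: stable, then a spike hinting at bearing wear.
vibration = [1.0, 1.1, 0.9, 1.0, 1.05, 0.95, 1.0, 1.1, 0.9, 1.0, 1.02, 5.0]
print(maintenance_alert(vibration))
```

In practice, a flagged reading would trigger a work order to be scheduled for the next low-demand window rather than forcing an unplanned stop.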
Digital twins of robotics, other machines, products, and even entire factories are virtual representations of real or hypothetical things. They use simulation, AI, and machine learning to predict and optimize performance for quality and efficiency at a lower expense than doing so in the physical realm.
Other uses of machine learning in factories have less to do with robotics but still make the overall enterprise of manufacturing more efficient. Just as big data from IIoT sensors enables machine-learning predictive maintenance, centralized data analytics from digital factories fed into machine-learning algorithms can improve supply chain management, from optimizing logistics routes to replacing barcode scanning with computer-vision inventory management and optimizing available storage space. Machine learning can also predict demand patterns to help avoid overproduction.
Generative design uses machine learning to cycle through myriad design possibilities optimized for desired cost, material, weight, strength, and other factors such as manufacturing techniques, so you can make the most of the robotics and machines currently on your factory floor.
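At its simplest, that cycle is a search loop: generate a candidate design, score it against the stated objectives, and keep the best one found. The sketch below uses plain random search over a hypothetical bracket whose thicker walls add strength but also weight; real generative-design tools use far more sophisticated exploration and constraints.

```python
import random

def generative_search(evaluate, sample, iterations=500, seed=1):
    """A toy stand-in for generative design: sample many candidate designs
    and keep the one that best satisfies the stated objectives."""
    random.seed(seed)
    best, best_score = None, float("-inf")
    for _ in range(iterations):
        candidate = sample()
        score = evaluate(candidate)
        if score > best_score:
            best, best_score = candidate, score
    return best, best_score

# Hypothetical bracket design: thicker walls add strength but also weight.
def sample():
    return {"thickness_mm": random.uniform(1.0, 10.0)}

def evaluate(design):
    t = design["thickness_mm"]
    strength = min(t * 10.0, 60.0)  # strength saturates past 6 mm
    weight_penalty = t * 5.0
    return strength - weight_penalty

design, score = generative_search(evaluate, sample)
```

With these made-up objectives, the search converges on a wall thickness near 6 mm, the point where added material stops buying strength.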
Machine-learning algorithms, whether they are purely digital (as in the case of internet search engines) or applied to physical machines like robotics systems, must be fed huge data sets in order to identify patterns and learn from them. The input data must be voluminous enough to cover all the scenarios an AI is likely to encounter, as well as the less likely ones, for comprehensive learning. Without enough data, the machine-learning model may never reach its full potential. It may seem obvious, but the data must also be accurate for the model to learn properly. Machine learning trains critical AIs such as medical robots that assist in surgery, so data accuracy is paramount.
Third-party data-training platforms are available for training machine-learning models, such as those in robotics systems, to perform an ever-growing list of tasks and behaviors: assembling and building products and structures, interacting with or avoiding people, and so on. Data-training companies can tailor the training data to how the robotics systems need to function in a given factory.
Machine learning has many subsets, such as deep learning, which is common today because the substantial computational power it requires is now plentiful and relatively affordable. Deep learning takes advantage of neural networks: networks of interconnected nodes whose connection weights are learned from data. These networks are designed to mimic the way human and animal brains adapt to dynamic inputs to learn. Here are some other subsets of machine learning impacting robotics, as well as some of their applications.
Using deep neural networks, computer vision enables machines to interpret visual stimuli like digital images, video, and data from sensor technology like radar, lidar, and ultrasound. It does this in a manner similar to the way human vision distinguishes objects from each other, understands how far away things are and whether they’re in motion, and observes if something is wrong in an image. Based on their visual input, the machines can then recommend action or take their own action.
Computer-vision systems with sufficiently powerful processing can exceed human abilities to inspect products or watch an assembly line, for example, because they can analyze larger numbers of objects faster and notice smaller defects than a person could. A computer-vision system’s deep-learning process needs to consume mass quantities of data so that it can compare items and eventually learn the difference between, say, a perfect part and a defective part. The data is processed in a convolutional neural network (CNN) to interpret single images and in a recurrent neural network (RNN) to interpret series of images such as video feeds.
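The convolution at the heart of a CNN can be illustrated without any framework: a small kernel slides across an image and responds strongly where the local pattern stands out. In this toy sketch, a hand-picked Laplacian-style kernel highlights a single anomalous pixel in an otherwise uniform part; production systems learn their kernels from training data instead.

```python
def convolve2d(image, kernel):
    """Slide a kernel over a grayscale image (the core operation of a CNN layer)."""
    kh, kw = len(kernel), len(kernel[0])
    h, w = len(image), len(image[0])
    out = []
    for i in range(h - kh + 1):
        row = []
        for j in range(w - kw + 1):
            acc = 0
            for di in range(kh):
                for dj in range(kw):
                    acc += image[i + di][j + dj] * kernel[di][dj]
            row.append(acc)
        out.append(row)
    return out

# A 4x4 "image" of a uniform part with one bright defect pixel,
# and a Laplacian-style kernel that responds to local irregularities.
part = [
    [1, 1, 1, 1],
    [1, 1, 1, 1],
    [1, 1, 9, 1],
    [1, 1, 1, 1],
]
laplacian = [
    [0, -1, 0],
    [-1, 4, -1],
    [0, -1, 0],
]
response = convolve2d(part, laplacian)
```

The filter output is near zero over the uniform surface and peaks at the defect, which is exactly the kind of signal an inspection network amplifies through many learned layers.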
The influx of big data from security and traffic cameras, smartphones, and other visual technology has helped computer vision flourish, and the technology has been key to the rise of automatic inspection systems. Besides being vital to autonomous vehicles’ ability to recognize and avoid other cars, pedestrians, bicycles, road signs, and markers, computer vision is seeing growing use within manufacturing. IBM has predicted the computer-vision market will hit $48.6 billion in 2022.
IBM is also working on leveraging the edge-computing power within the devices of the IIoT to use computer vision in automotive manufacturing to detect quality defects. That’s possible through computer vision’s object-detection ability, which could benefit almost any area of manufacturing by identifying product flaws or machinery that needs repairs. Computer vision’s object tracking—the ability to follow an object once detected—is essential for cobots (“collaborative robots” intended for direct human/robot interaction within a shared space), as well as autonomous vehicles and drones.
A form of supervised machine learning, imitation learning refers to when a “trainer,” usually a human, demonstrates a behavior to a machine-learning entity in a physical or a simulated environment, and the AI formulates behavior strategies based on the trainer’s examples. The learning AI, or the “agent,” takes input from “independent variables” in the environment, as well as “target variables” from the trainer’s actions. For example, if the AI is trying to learn grasping from the trainer, the target variable could be the way the trainer’s grasp technique changes from grasping one type of object to grasping another. Based on the lessons from the trainer, the AI creates a “policy,” or a strategy for actions it will use in the future.
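A minimal form of this idea is behavioral cloning: record (state, action) pairs from the trainer’s demonstrations, then act by recalling the demonstrated action for the most similar state. The grasping demonstrations below are hypothetical, and real systems learn a generalizing policy rather than a nearest-neighbor lookup.

```python
def clone_policy(demonstrations):
    """Behavioral cloning in miniature: learn a policy from (state, action)
    pairs demonstrated by a trainer, then choose the action whose demonstrated
    state is closest to the new state."""
    def policy(state):
        nearest = min(demonstrations, key=lambda d: abs(d[0] - state))
        return nearest[1]
    return policy

# Hypothetical demonstrations: a trainer uses a wide grasp for large objects
# (state = object width in cm) and a narrow grasp for small ones.
demos = [(2.0, "narrow_grasp"), (3.0, "narrow_grasp"),
         (8.0, "wide_grasp"), (10.0, "wide_grasp")]
policy = clone_policy(demos)
print(policy(2.5), policy(9.0))
```

The resulting policy handles object widths the trainer never demonstrated by falling back on the closest example it has seen.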
Imitation learning has played an important role in the AI technology that enables self-driving cars, as well as a machine that can beat the human world champion of the game Go. For robotics, imitation learning has been particularly vital to field robotics, which operates outside of static, predictable environments such as factories, in fields like construction, agriculture, and the military. It’s an important method for developing humanoid and other legged robotics, as well as off-road mobility.
A type of reinforcement learning, multi-agent learning, or MARL (multi-agent reinforcement learning), places multiple AIs, or agents, inside a common physical or simulated environment. Whereas imitation learning teaches a single agent that tries to imitate a trainer, multi-agent learning induces a cumulative learning effect from multiple agents either collaborating or competing together and learning from the others’ actions. Each agent has access to its own information based on its own observations and experiences and can share the information for collective progress. This type of machine learning has become common in games, but it has many other practical applications—for example, with fleets of autonomous vehicles or teams of search-and-rescue robots.
A popular video on multi-agent learning from OpenAI pits two teams of AIs against each other in games of hide and seek. After many iterations, what started as rudimentary gameplay evolved into sophisticated strategy as the teams learned to create obstacles, overcome those obstacles, build structures, find a way into those structures, and so on.
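A stripped-down version of that dynamic can be shown with two independent Q-learners in a coordination game: each agent is rewarded only when its choice matches the other’s, so each agent’s learning depends on what the other agent is doing. This is a toy sketch under those assumptions; systems like OpenAI’s hide-and-seek agents use far larger neural-network policies.

```python
import random

def train_marl(rounds=2000, alpha=0.1, epsilon=0.2, seed=0):
    """Independent Q-learning for two agents in a coordination game: both are
    rewarded only when they choose the same action, so each learns from the
    payoff produced jointly with the other."""
    random.seed(seed)
    actions = [0, 1]
    q = [{a: 0.0 for a in actions} for _ in range(2)]  # one Q-table per agent
    for _ in range(rounds):
        choices = []
        for agent in range(2):
            if random.random() < epsilon:          # explore occasionally
                choices.append(random.choice(actions))
            else:                                  # otherwise act greedily
                choices.append(max(actions, key=lambda a: q[agent][a]))
        reward = 1.0 if choices[0] == choices[1] else 0.0
        for agent in range(2):
            a = choices[agent]
            q[agent][a] += alpha * (reward - q[agent][a])
    return q

q_tables = train_marl()
best = [max(t, key=t.get) for t in q_tables]  # each agent's preferred action
```

After training, both agents settle on the same action even though neither was told to coordinate; the shared reward signal alone produces the joint behavior.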
Many early efforts in machine learning, such as computer-vision projects to discern the content of images, required the data to be labeled with metatags. For example, each image had to be labeled as a “dog” or a “hot dog” and so on. Such labeling efforts came with high costs in time and money. By contrast, self-supervised learning (SSL) algorithms do not rely on labeled data. Instead, an SSL AI trains itself to predict one part of the input from another part of the input; as a result, it’s sometimes known as predictive learning. SSL has been useful, for example, in natural language processing (NLP) and in Google’s medical image classification work.
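A tiny illustration of that idea: given nothing but raw text, the word that follows each word serves as a free training label, so no human annotation is needed. The maintenance-log corpus below is hypothetical, and real SSL models use neural networks rather than simple counting.

```python
from collections import Counter, defaultdict

def train_next_word(corpus):
    """Self-supervised learning in miniature: with no human labels, the
    'label' for each word is simply the word that follows it in the raw text."""
    counts = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.split()
        for current, nxt in zip(words, words[1:]):
            counts[current][nxt] += 1
    # Predict the most frequent continuation seen for each word.
    return {w: c.most_common(1)[0][0] for w, c in counts.items()}

# Hypothetical unlabeled maintenance logs serve as their own training signal.
logs = [
    "replace worn bearing",
    "replace worn belt",
    "inspect worn bearing",
]
model = train_next_word(logs)
print(model["worn"])
```

The same predict-the-missing-part recipe, scaled up enormously, is what lets SSL models learn from oceans of unlabeled text, speech, and sensor data.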
Most SSL algorithms have been restricted to a single “domain” of input, such as spoken words, text, or images. However, researchers at the Stanford Institute for Human-Centered Artificial Intelligence (HAI) have introduced DABS (domain-agnostic benchmark for self-supervised learning), which lets algorithms apply SSL from seven input types (including multilingual text and speech, sensor data, and images), with more to be added later. SSL has already been beneficial for improving autonomous vehicle safety and diagnosing diseases. DABS could lower the barrier of entry for companies to use SSL and explore its promising potential in areas like industrial diagnostics.
As machine learning in robotics becomes more and more sophisticated, it enables robotics systems to take over more complicated jobs that are dangerous or overly repetitive for humans or allow more interactivity with humans as cobots become better able to perceive their surroundings. That makes the smart factory safer for people while also freeing them up for more creative “soft skill” work or to be upskilled for jobs like programming and machine repair.
Robotics with machine learning can reduce human error, avoid unscheduled downtime, and inspect product quality with precision and consistency beyond human capability, which also makes manufacturing operations more productive and efficient, enhancing the bottom line.
A 2022 collaboration between McKinsey and the World Economic Forum classified 103 factories out of more than 10,000 facilities worldwide as “lighthouse” manufacturers, meaning they have fully transitioned to Industry 4.0 technology. The study found that these lighthouse manufacturers are more agile and customer focused and made greater improvements to performance in the areas of productivity and sustainability, such as reducing waste and greenhouse-gas emissions. The study further designated 59 advanced lighthouse manufacturers. These advanced lighthouses embraced machine-learning technologies significantly more than the other companies, including flexible automation using intelligent robotics that collaborate with people and collect data for analysis, as well as machine-learning computer-vision inspections that identify defects.
Improved software and other technological developments have made it more practical than ever for manufacturers to introduce intelligent robotics into their operations. The Association for Advancing Automation (A3) claims that even small and midsize manufacturers can deploy intelligent robotics systems that deliver return on investment within six to 15 months on average. And depending on the system, existing factory workers can often learn to operate a robot rather than hiring a dedicated roboticist or engineer.
Firms may want to begin gradually by evaluating one or two areas where intelligent robotics would help, rather than attempting a comprehensive overhaul. Is there an area where the proverbial 3Ds of dirty, dull, or dangerous jobs could be taken over by advanced robotics? Another good place to start is replacing or enhancing manual quality inspection with a robotic arm equipped with machine vision that can inspect machined parts. Machine-vision systems can also manage inventory and collect copious data that machine learning can analyze for process improvements.
Another potential entry point could be an autonomous mobile robot (AMR) that moves items around the factory floor with the intelligence to maneuver around obstacles and people. A more sophisticated AMR may be equipped with a robot arm for additional collaborative functionality. Employees could be reassured that such robots are there to assist rather than replace them—and could even help advance their careers by catalyzing new skill acquisition.
There are many robotics and AI suppliers that can help incorporate machine-learning robots into a manufacturing business. It will take some consideration over whether the business prefers to buy from a distributor that does not assist with the robotics’ deployment, an integrator that collaborates on installation and deployment, or a “robotics as a service” company that essentially leases the technology and provides maintenance and monitoring as part of the price.
The trend of machine learning in robotics becoming increasingly accessible to factories of smaller sizes and budgets should continue alongside improvements in machine learning powered by data collected from the IIoT, as well as abundant computing power. Projects such as the Autodesk AI Lab’s Brickbot are pursuing that goal and could make machine-learning robotics more accessible, potentially changing the whole paradigm of robotic manufacturing from mass production to endless configurability.
Automated assembly lines currently have a single purpose: to produce one thing at a massive scale. Their industrial robots are programmed for specific, repeatable tasks, and reprogramming them can be an arduous job taking months or even years. “It’s incredibly tedious, unbelievably complicated, and very error prone,” says Autodesk VP of Research Mike Haley.
However, customers increasingly want customization and personalization from products, making reconfigurable assembly lines more necessary. With Brickbot, the Autodesk AI Lab has set the goal of teaching a robot how to build with LEGO bricks, the same way a child would learn. Brickbot takes in sensor data and, using machine learning, infers the conditions of its environment and then acts accordingly and adaptively. That’s just the beginning, though. After continual improvement, if the robot could learn to assemble anything, it could also redefine how robots work in an industrial setting.
“Traditionally, a robotic assembly line putting together a car is very deterministic,” says Yotto Koga, software architect, Autodesk AI Lab. “Everything has to be in its place for that system to work. If you change the design or the parts in that design, you have to reengineer everything so that those new parts are made deterministic, as well. We’re looking at ways to make robots easier to use so we can put assembly lines together and make them accessible to more people—not just big companies that have deep resources.”
Haley says that machine-learning robotics systems like Brickbot can be trained digitally, millions of times faster than in real-world time, using 3D models before transferring their learned knowledge to physical objects. Eventually, the AI Lab could apply that learning to any industrial environment building automotive or aerospace parts, electronic devices, or whatever is needed.
“The factories of tomorrow are not going to be single-purpose,” Haley says. “They are going to adapt to the needs of any one time. You might decide overnight to redesign your product. And by the next morning, the factory’s learned how to deal with that design change, and it’s ready to go.”
This article has been updated. It was originally published in October 2018.
Markkus Rovito joined Autodesk as a contractor six years ago and later came on full-time as a content marketing specialist focusing on SEO and owned media. After graduating from Ohio University with a journalism degree, Rovito wrote about music technology, computers, consumer electronics, and electric vehicles. During his time with Autodesk, he’s developed a great appreciation for exciting emerging technologies that are changing the world of design, manufacturing, architecture, and construction.
Emerging Tech