The original introduction of industrial machines and the production of goods in factories resulted in a dramatic increase in worker productivity over the course of the industrial revolution. The next great period of growth in manufacturing productivity will be driven, at least in part, by advances in machine sensing, engineering, and machine learning, which will provide robots with the capability to collaborate closely with workers and overcome variability.
In contrast, inflation-adjusted worker productivity in the United States has declined slightly since 1970, from nearly 65% to under 57% in 2017. In the period after the Great Recession in particular, worker productivity in the manufacturing sector has shown relatively little growth (seen below).
Interestingly, there has also been little growth in the rate of purchases of industrial robots in the United States over this same time frame, while there have been significant investments in robotics in the logistics and supply space as warehouses become increasingly automated.
Automating the manufacturing process:
Industrial manufacturing robots primarily perform precise task sequences in sectors such as automotive and cell phone assembly, fields where variation is limited and the environment is tightly controlled. The slowing growth in demand for manufacturing robots in the US may suggest that this market has reached a saturation point. This somewhat limited deployment may also be attributable to large industrial machines not aligning well with modern lean manufacturing principles, and suggests that there may be latent demand for new applications of robotics. In the logistics sector, there have been dramatic advances in the capability of robots to work collaboratively with humans using machine vision and dynamic obstacle avoidance. By broadening the scope of applications for these robots, these advances may have contributed to demand for logistics robots beyond what the rising popularity of online shopping alone would explain.
Lean manufacturing principles such as small batch size and fabrication-to-order have led to dramatic improvements in production and throughput by providing what customers need precisely when they need it. An automated agile manufacturing facility should, in principle, be able to adjust to meet customer demand without the intervention of engineers who are not always on the floor. When there are only a few SKUs to consider, this is easy to manage; but in the face of SKU proliferation, there is demand for an enormous array of items, each of which may involve some tweak in the manufacturing, packing, or transportation process. Current heavily automated factories that have been optimized to produce a particular product are very expensive to retool or reconfigure.
As the capability of robots to both sense and adapt to their environment improves, they will be able to automate the production and transport of a broad array of goods while collaborating seamlessly with humans in the workplace. While this vision has been realized in some fields, such as logistics, further improvement of enabling technologies will allow robotics to assist with increasingly complex tasks. This will in turn make manufacturing more flexible, efficient, and adaptive to demand while opening up additional fields, such as healthcare and construction, to robotic labor.
A coworker capable of lifting several tons who cannot hear, see, feel, or communicate with you would be difficult to work with and potentially hazardous. Many industrial robots must be kept separate from human workers on the floor, often by encasing them in static protective cages. The next generation of robots must be capable of safely collaborating with humans and performing the broad array of tasks the job at hand requires. Improved safety and adaptability would allow robots to work in environments with more variability, such as construction sites, mines, and hospitals.
Several firms have developed robots and automated guidance vehicles that use LiDAR-based obstacle detection systems to move through complex environments and dynamically avoid obstacles. These are presently being used by companies such as Amazon to revolutionize warehouse logistics efficiency. Improvements in obstacle detection systems and the modularity of robots are providing the foundation for an array of applications in fields such as enhanced manufacturing, urban infrastructure, medicine, and natural resource development.
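To make this concrete, the sketch below shows the kind of stop/slow decision logic that sits at the core of a LiDAR-based obstacle detection system. It is a minimal illustration only: the scan format, thresholds, and function names are assumptions for the example, not the interface of any vendor's system, and a production system would also track obstacle velocity, fuse multiple sensors, and plan paths around obstructions rather than simply halting.

```python
import math
from typing import Iterable, Tuple

# Illustrative safety thresholds; these values are assumptions for the sketch,
# not any vendor's specification.
STOP_DISTANCE_M = 0.5                 # halt if anything is closer than this
SLOW_DISTANCE_M = 1.5                 # reduce speed inside this radius
FORWARD_CONE_RAD = math.radians(60)   # only consider returns near the direction of travel


def classify_scan(scan: Iterable[Tuple[float, float]]) -> str:
    """Classify a single 2D LiDAR scan into a motion command.

    `scan` is an iterable of (angle_rad, range_m) pairs, with angle 0
    pointing along the vehicle's direction of travel.
    Returns one of "stop", "slow", or "go".
    """
    nearest = math.inf
    for angle, dist in scan:
        # Ignore returns outside the forward-facing cone.
        if abs(angle) > FORWARD_CONE_RAD / 2:
            continue
        nearest = min(nearest, dist)

    if nearest < STOP_DISTANCE_M:
        return "stop"
    if nearest < SLOW_DISTANCE_M:
        return "slow"
    return "go"


if __name__ == "__main__":
    # A pallet detected 1.2 m directly ahead and a wall off to the side.
    example_scan = [(0.0, 1.2), (math.radians(80), 0.4)]
    print(classify_scan(example_scan))  # -> "slow"
```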
Moving forward, making communication between nontechnical personnel and robots more seamless may lead to explosive productivity growth by combining the domain knowledge of experts with the scalability of robotic action. This synthesis of human and robotic effort may dramatically impact productivity and forever change the way we work.
The looming skills shortage:
A SHRM report published in 2017 found that, where fully half of HR professionals had previously reported difficulty recruiting for full-time regular positions, by 2016 that number had jumped to 68%. About one-third of the HR professionals surveyed indicated that they were working without a training budget. The problem is exacerbated by applicants who lack relevant work experience (reported by 50%) or the right technical skills (reported by 38%), which makes it even harder to fill full-time positions.
Importantly, this lack of technical skills is not confined to classically high-technology sectors that require advanced degrees, such as computer programming or healthcare. HR professionals also report that it is becoming increasingly difficult to hire in the construction and manufacturing sectors because of the skills deficit. The SHRM study reported that 2016 was the worst year for this phenomenon since 2010. As has been described elsewhere in this report, the rate at which technology is changing is increasing, and HR professionals across industries identify changing technology as the largest source of the skills gap that is making staffing increasingly difficult.
One of the simplest ways to address the skills shortage would be to make it as easy as possible for workers to adopt new technologies into their existing practice, obviating some of the need for expensive retraining. Current efforts in this vein have focused on developing new user interfaces (UI) and other ease-of-use systems. Disruptive advances in this space must instead allow personnel to rapidly incorporate new technology into their workflow without having to learn a new suite of skills (or a new UI) to perform a different class of task.
Demonstration-guided machine learning and/or direct interfaces between humans and machines, as opposed to direct programming of robotic actions, have been proposed as ways to increase the efficiency of human-robot collaboration. Invasive brain-machine interfaces, in which an implanted chip translates cortical signals into digital commands, have been proposed as one such alternative control method for robots. This technology has been instrumental in helping paralyzed patients, but it is expensive and extremely invasive, making it impractical for broader use.
The next generation of robotic training and control will likely combine noninvasive inputs, including:
- Electroencephalography (EEG)-based computer interfaces
- Wearable motion or electrical activity sensors
- Detection of eye movement
- Decoding of motion capture
- Speech detection
Some combination of this suite of inputs, depending on the task, may permit a robot or even a robot swarm to rapidly translate user behavior into complex commands in 3D space.
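As a rough illustration of what such multimodal fusion might look like, the sketch below combines hypothetical speech, gaze, and gesture channels into a single robot command. All of the names here (RobotCommand, fuse_inputs, the gesture labels) and the confidence weights are invented for the example; a real system would rely on probabilistic fusion and learned models rather than hand-set rules.

```python
from dataclasses import dataclass
from typing import Optional, Tuple


@dataclass
class RobotCommand:
    """A hypothetical fused command; field names are illustrative."""
    action: str                                         # e.g. "pick", "stop", "idle"
    target_xyz: Optional[Tuple[float, float, float]]    # workcell-frame target, if known
    confidence: float                                    # 0..1 estimate from the fusion layer


def fuse_inputs(speech_intent: Optional[str],
                gaze_point: Optional[Tuple[float, float, float]],
                gesture: Optional[str]) -> RobotCommand:
    """Tiny rule-based fusion of three noninvasive input channels.

    Any channel may be missing (None); agreement between channels raises
    confidence, while absence lowers it.
    """
    # Safety first: an explicit stop gesture always wins.
    if gesture == "open_palm_stop":
        return RobotCommand("stop", None, 1.0)

    # Prefer the spoken intent; fall back to a pointing gesture.
    action = speech_intent or ("pick" if gesture == "point" else None)
    if action is None:
        return RobotCommand("idle", None, 0.0)

    # Use the gaze point as the 3D target when available.
    confidence = 0.5
    if gaze_point is not None:
        confidence += 0.3
    if gesture == "point":
        confidence += 0.2
    return RobotCommand(action, gaze_point, min(confidence, 1.0))


if __name__ == "__main__":
    cmd = fuse_inputs("pick", (0.42, -0.10, 0.95), "point")
    print(cmd)  # RobotCommand(action='pick', target_xyz=(0.42, -0.1, 0.95), confidence=1.0)
```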
While this improved communication between robots and humans may facilitate more efficient collaboration, it would not eliminate the issue that direct-input controls tend to need to be calibrated, or trained, on a person-by-person basis. Overcoming this barrier will likely require establishing a feedback loop: the machine must learn how to be trained and then communicate its needs to the user, who can in turn improve how they train the machine. This will involve sophisticated machine learning on the back end and advances in UI and machine sensing on the front end to make the task of training robots as similar as possible to an artisan training an apprentice.
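A minimal sketch of such a feedback loop is shown below. It assumes a hypothetical DemonstrationTrainer that records a single demonstrated parameter (grip force) per task variant and reports back whether it needs more, or more consistent, demonstrations. The class name, thresholds, and choice of parameter are illustrative assumptions, not a description of any existing system.

```python
import statistics
from collections import defaultdict


class DemonstrationTrainer:
    """Hypothetical demonstration-feedback loop.

    Records one demonstrated parameter per example (here, grip force) and
    tells the user whether it needs more, or more consistent, demonstrations.
    """

    def __init__(self, required_demos: int = 3, max_spread: float = 0.05):
        self.required_demos = required_demos   # minimum examples per task variant
        self.max_spread = max_spread           # acceptable standard deviation (illustrative)
        self.demos = defaultdict(list)         # variant -> recorded grip forces

    def record(self, variant: str, grip_force: float) -> None:
        """Store one demonstrated grip-force value for a task variant."""
        self.demos[variant].append(grip_force)

    def feedback(self, variant: str) -> str:
        """Report what the robot still needs before it trusts this variant."""
        samples = self.demos[variant]
        if len(samples) < self.required_demos:
            return f"Need {self.required_demos - len(samples)} more demonstrations of '{variant}'."
        if statistics.stdev(samples) > self.max_spread:
            return f"Demonstrations of '{variant}' disagree; please repeat one slowly."
        return f"'{variant}' learned; using grip force {statistics.mean(samples):.2f}."


if __name__ == "__main__":
    trainer = DemonstrationTrainer()
    for force in (0.81, 0.79, 0.80):
        trainer.record("pack_small_box", force)
    print(trainer.feedback("pack_small_box"))  # -> "'pack_small_box' learned; using grip force 0.80."
```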
Just as a master's interaction with an apprentice makes the master a stronger mentor, the goal should be to replicate this dynamic between human directors and teams of robots to achieve the next generation of productivity growth.