According to Stoney Trent and James Doty III:
Military AI systems are inherently human-machine systems. From basic tools and weapons to complex automation, people and technology interact with one another. Even the most autonomous drones are maintained, armed, and operated by people, and goal setting in any work environment remains an inherently human responsibility. Technology developed without regard for these interactions wastes resources and confers no military advantage.
Despite recently touted technological successes (such as the US Javelin missile), US investment in technology often falls short of its intended purpose. Since the mid-1990s, government programs have produced disappointing results. The Standish Group found that only 21 percent of government software projects finished between 2011 and 2015 were delivered on time, on budget, and to client expectations. From 2001 to 2014, the Department of Defense spent $46 billion on weapons systems that were canceled before they ever entered service. One of them, the US Army's Future Combat Systems (FCS), was canceled in 2009 and is widely regarded as a complete failure. From 2004 to 2014, the US Army also spent $2.7 billion on an unsuccessful intelligence support system, the Distributed Common Ground System-Army (DCGS-A).
Commercial software development fares no better. The Wall Street Journal reported that 75 percent of venture-backed start-ups in the United States never return capital to their investors. The Consortium for Information and Software Quality estimates that poor-quality software cost the United States $2.08 trillion in 2020, a figure that includes failed IT projects, legacy systems, and outages. Unfortunately, these problems show up not only as lost capital but also as lost lives. Two well-known commercial examples of misguided development of autonomous AI technologies are the Boeing 737 Max and Tesla's "Autopilot" system.
Under financial pressure to field a competitor to the Airbus A320neo, Boeing introduced the 737 Max. The new aircraft carried a piece of imperfect automation, the Maneuvering Characteristics Augmentation System (MCAS), which adjusted pitch trim based on data from angle-of-attack sensors. To avoid the additional pilot training and airworthiness reviews that would have raised costs and delayed delivery, Boeing kept MCAS largely hidden from pilots and the FAA. This design and delivery approach assumed that the sensors would always provide reliable data and that pilots would never need to intervene. Boeing's "culture of concealment" resulted in 346 deaths and at least $20 billion in direct costs.
The Tesla "autopilot" system is a set of sensors and software that helps drivers in certain conditions. Since 2015, when Tesla released the system, 250 people have died in car accidents. So far, the use of "autopilot" has been confirmed in crashes that have killed twelve people. Every year since 2015, Elon Musk has declared that his cars will demonstrate full autonomy within twelve months. The warnings in Tesla's current owner's manuals tell a very different story: "The Autosteer is designed to be used on controlled access highways with a fully alert driver. When using Autosteer, keep your hands on the steering wheel and be aware of road conditions and surrounding traffic. Do not use Autosteer in construction areas or where cyclists or pedestrians may be present. Never rely on Autosteer to determine the correct trajectory. Always be ready for immediate action. Failure to follow these instructions could result in damage, serious injury or death."
The failures of Boeing and Tesla illustrate the fatal flaw in trying to automate humans out of vehicles. The misguided idea of replacing people with technology puts engineers in an impossible position: they must anticipate every possible future condition and failure mode. A body of human factors engineering research shows, instead, that people are the source of system resilience when technology leaves them the latitude to act. People decide how and when to trust technology based on their understanding of the machine's capabilities, limitations, state, and trajectory. These trust decisions require good feedback, which is often neglected in the design of AI and autonomous systems. Human-centered design is not a polish that can be applied at the end of development; it is the product of sustained, sound engineering practice.