Chapter 11. The Automatization Imperative: Are humans too slow to be in the loop?
From Duignan, P. (2026). Surfing AI: 30 New Concepts for Getting Your Head Around AI Shock.
AI and robotics offer many opportunities to automate activities currently undertaken by humans. As such efforts expand, there has been understandable pushback from those worried about AI-automatized systems eliminating humans from decision-making. People building systems where AI is embedded have assured those who are worried that ‘human decision-makers need to remain in the loop.’ They can build this into the systems they have developed, but there seems to be no realistic way for human oversight to be anything but a token gesture as the speed of AI development accelerates. The problem here is what we can call the automatization imperative. It states that powerful drivers will soon lead to eliminating humans from decision-making in many, if not most, situations.
Cases where this imperative is powerful are ones where competitive pressures mean it is crucial to act in time. In such instances, AI can now react many times faster than human decision-makers. This means that human decision-making is already the slowest step in the process. When humans are the limiting factor in such processes, competitive pressure to respond faster will, it seems, guarantee that humans are eliminated from decision-making within such systems.
“Powerful drivers will soon lead to eliminating humans from decision-making in many, if not most, situations”
Regardless of whether humans attempt to push back against the automatization imperative, there is an inherent logic within automation that suggests such resistance will not succeed. That logic takes hold once the speed of a process, whether AI-automated or not, crosses a critical threshold: the point at which a developing situation must be responded to in time-frames so short that it becomes too risky to insist that human decision-makers be kept in the loop.
An illustration of this is bank runs. As the speed of processing transactions increases, it reduces the inertia of earlier systems that provided enough time for decision-makers to react when a bank run had started. People had to gather at the bank to demand their cash, and it took time for this to happen. As they did so, the authorities were alerted to what was underway. This is reflected in the fanciful idea that central banks should always have vans full of cash and fast drivers on standby. Once a bank run starts, they should rush to the bank and wave around bundles of cash. This would assure worried bank customers that there is enough money available to be withdrawn from the bank and, by doing so, stop the bank run.
In an environment where transactions are electronic and increasingly instantaneous, a bank run could be over very quickly if large numbers of customers withdraw their deposits electronically. Customers will also increasingly have AI making decisions about where they move their money and automatically actioning those decisions. In such cases, little real-time human decision-making is involved in what is causing a bank run. The automatization imperative predicts that the solution requires removing human decision-makers from the response. This is so the response can be launched at a speed similar to the problem it is trying to solve.
Warfare is another area where the automatization imperative is now relentlessly driving toward systematically eliminating human decision-makers simply because they cannot react fast enough. Star Wars movies illustrate the absurdity of ignoring the impact of the automatization imperative on the way warfare is increasingly being conducted. Surrounded by a highly sophisticated technological environment, the Star Wars heroes set out in fighters and participate in dogfights reminiscent of the Second World War. Even though it would be rather unsatisfying for the moviegoer, the set-piece conflicts in a Star Wars-type conflict are likely to be very brief. AI-controlled unmanned drones belonging to each side would fight each other. The side with the faster and better AI would win quickly.
This is now coming to be the case in real-world warfare, and the same pressure for rapid response times will arise regarding potential crash situations with self-driving cars and aircraft. Equally, in some medical emergencies, humans will have to be removed from the decision-making loop in order to act in time. Actuarial calculus will lead to the removal of human decision-makers, as it increasingly shows the unfortunate consequences of humans remaining in the decision-making loop.
Meanwhile, as discussed above, as a response to concerns about automatization, some of those introducing AI systems in some settings will attempt to reassure us that humans will remain in the loop. From a formal point of view, this may be the case, and an attempt will be made to include humans at a step in the decision-making process where they can agree or disagree with the AI system. However, from a psychological perspective, it is likely to be impossible in most cases to get a human worker to conscientiously perform this function on an ongoing basis. Imagine that an AI system makes recommendations that are correct ninety-five per cent of the time. In such a case, a worker simply becomes habituated to approving the AI system’s recommendations because of the cognitive load involved in disagreeing with it. Because the system is almost always right, a great deal of supervision and incentivization is required to make sure that the human worker functions as a significant check on the AI system.
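The habituation argument above can be sketched numerically. In the toy model below (all parameters are illustrative assumptions, not data from this chapter), an AI is correct 95 per cent of the time and a habituated reviewer genuinely scrutinizes only a small fraction of its recommendations, approving the rest by default. The fraction of AI errors the human actually stops then collapses to roughly the scrutiny rate:

```python
import random

def simulate_review(n=100_000, ai_accuracy=0.95, scrutiny_rate=0.10, seed=0):
    """Toy model of a habituated human reviewer rubber-stamping AI output.

    Illustrative assumptions only:
      - the AI is right with probability `ai_accuracy`;
      - the reviewer genuinely scrutinizes each recommendation with
        probability `scrutiny_rate`, approving the rest by default;
      - when the reviewer does scrutinize, they catch any AI error.
    Returns the fraction of AI errors the human actually stops.
    """
    rng = random.Random(seed)
    errors = caught = 0
    for _ in range(n):
        ai_correct = rng.random() < ai_accuracy
        scrutinized = rng.random() < scrutiny_rate
        if not ai_correct:
            errors += 1
            if scrutinized:
                caught += 1
    return caught / errors

# With 10% scrutiny, nominal "human oversight" catches only about 10%
# of the AI's errors; the remaining ~4.5% of all decisions go through
# wrong despite a human formally being in the loop.
rate = simulate_review()
```

The point of the sketch is that keeping a human formally in the loop does little unless the scrutiny rate is held high, which is exactly the supervision and incentivization burden the paragraph describes.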
Of course, you can think of the automatization imperative as not being so much about eliminating human decision-making. Instead, you can think of it as taking human decision-makers out of the details of the response and elevating their decision-making to a higher level. However, the automatization imperative in competitive environments also applies at higher levels within the decision-making system. Its logic will mean that as humans relegate themselves to higher decision-making levels, AI’s progressive development will relentlessly chase them up the decision-making tree. In doing so, AI will systematically eliminate human involvement at each level.
In a sense, what futurist Ray Kurzweil has called the ‘singularity’ is a consequence of the automatization imperative. The singularity is a situation where the development of AI and other technologies is proceeding so fast that a point is reached where, because of these systems’ speed of improvement and complexity, they are moving beyond human control. Moreover, because AI will be autonomously improving itself as a result of the automatization imperative, humans will not have sufficient time to notice what is happening and to stop it. It is easy to see how the automatization imperative is already relentlessly driving us toward the singularity.
Viewed in this way, short of banning AI, the automatization imperative suggests that there is only one way to avoid a singularity-type situation. The only way to avoid the singularity, if humans want to stop it, is to build more powerful AI watchdogs to prevent it. These are AI systems tasked with monitoring for an approaching singularity and taking action to prevent it. This, of course, relies on humans ensuring that the rate of development of AI watchdogs is always faster than that of other AI systems. Given the multiple parties involved in developing AI, often in hard-to-control settings, this will likely prove difficult.
“The only way to avoid the singularity, if you want to stop it, is to build more powerful AI watchdogs to prevent it.”
Given the automatization imperative, the need to act fast means that such AI watchdogs will obviously have to be given the authority to autonomously shut down other AI systems without waiting for humans’ permission. This is in itself another example of the automatization imperative and represents a further ceding of control from humans to AI. However, given that humans have launched rapidly improving AI, AI watchdogs are the only way of addressing the issues that arise due to the automatization imperative.