I completely disagree with your conclusion that automated reflex is a bad thing. In fact, I feel that it is a very Good Thing. Everything from nuclear plant SCRAM systems, through computer-driven buy-sell reflex in the stock market, to the simple logic that runs traffic lights in response to traffic conditions, serves to buffer erratic human activity.
You see, it takes a great deal of training to prepare a human to take predictable, logical and safe action in a crisis situation. Even with years of training, a human is still likely to get confused or frightened, to lose bearings, to pass out, to be overcome by a zealot with a box-cutter... The number of variables, especially in an ambiguous, crisis situation, is too often too much for a human to handle.
Automated systems cut through the ambiguity, and react predictably to criteria which were considered and reconsidered by numerous (expert) people in parallel, without the distraction of being in imminent, mortal danger.
The machines simply do what the human at the controls OUGHT to do, were (s)he not right there in the middle of fighting for their life.
In this case, the Russian pilot and the Swiss controller made the ultimate human mistake of second-guessing the protocol set in place by cooler heads, and enacted by silicon. The controller got spooked by noticing the danger late, and issued a "descend" instruction to try and regain control of a situation that he was not prepared to handle in that very instant. The controller must not have been fully aware of the situation, a typical problem humans have in stressful moments that require a decision. The pilot got confused and frightened by the impending collision, and trusted a live human instead of the prerecorded one which was leading him to safety. Also, a typical human reaction.
The programmed reflex of automatic safety systems is far from arbitrary. It is the result of considerable research, and in a great many ambiguous situations which tend to render human judgment useless and erratic, it is capable of doing instantaneous threat assessment, of prioritizing information, and of recommending, or outright taking, proper corrective action.
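To make the point concrete, here is a toy sketch of what a "programmed reflex" looks like. This is NOT real TCAS logic; the function name, thresholds, and rule are all invented for illustration. What it shows is the key property: the same deterministic rule runs on both flight decks, so the advisories are coordinated and there is no ambiguity for a frightened human to second-guess.

```python
def advisory(own_alt_ft, intruder_alt_ft, closure_time_s,
             threat_time_s=35, min_sep_ft=600):
    """Toy resolution advisory for own aircraft (illustrative only).

    If the intruder is inside the threat time window and vertical
    separation is below min_sep_ft, the higher aircraft is told to
    CLIMB and the lower one to DESCEND. Because both sides apply the
    identical rule, the two advisories cannot contradict each other.
    (A real system also needs a tie-breaker for equal altitudes,
    e.g. transponder ID; omitted here for brevity.)
    """
    if closure_time_s > threat_time_s:
        return "MONITOR"   # threat too far in the future
    if abs(own_alt_ft - intruder_alt_ft) >= min_sep_ft:
        return "MONITOR"   # vertical separation already adequate
    # Inside the threat window and too close: issue the reflex action.
    return "CLIMB" if own_alt_ft >= intruder_alt_ft else "DESCEND"
```

Note that the rule never hesitates and never panics: given the same inputs, it gives the same answer every time, which is exactly the predictability that trained humans under mortal stress cannot guarantee.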
Yes, certainly, there are situations in which human ingenuity is by far superior to programmed reflex. Anything requiring creativity, for example, or improvisation in a completely new situation, is not something that can be programmed.
Aircraft collision avoidance is not something that takes creativity or improvisation. It takes nerves of steel, and until humans have these, they have to learn to trust nerves of copper, and brains of silicon.
We built these machines to serve and protect us in exactly these sorts of situations. To second-guess them at precisely the moment and situation for which they were created is to deny their existence entirely. You might as well rip them out of the console altogether. Ignoring the AI reflex is like laying aside your hammer and trying to drive nails with your fist. We make these tools to improve our lives, not as useless artifacts.
|"Is K5 my kapusta intellectual teddy bear?"|