A crash course test for artificial intelligence

November 26th, 2018 | Technology

Artificial Intelligence (AI) is the current hot item in “tomorrow world,” with techies touting it as the next new thing to take over for the outmoded human brain (some of which actually do possess a modicum of native intelligence). AI algorithms have been successfully deployed by many enterprises for tasks such as credit risk assessment, consumer marketing optimization, credit card fraud detection, investment decision making, x-ray and electrocardiogram interpretation, and efficient travel and navigation choices. So far, so good.

In the mold of “I’m from the government and I’m here to help,” AI is now being promoted for even more critical tasks…say, driving a car. Programmers and engineers, however, might reflect a bit more on one of the more pervasive and deadly laws of the universe…the law of unintended consequences…and on the limits of programmed intelligence.

Sometime in the future we may read the following news report:

The data suggest that a new collision avoidance system, the Minimize Collisions Characteristics Augmentation System (MCCAS), introduced on the latest generation of self-driving cars, erroneously engaged, wrenching the car’s steering sharply to the left to avoid an object in the road that its algorithms had classified as an imminent collision hazard. The driver had identified the object as a shiny crumpled beer can, of no consequence, and attempted to keep the car straight ahead. The algorithms stored in the car’s computer had interpreted the light scattering off the beer can as the signature of a much larger object, perhaps a rolling shopping cart. Another feature of the automated system was a steering wheel that would vibrate to warn the driver of a collision and resist all attempts by the human driver to take control. Consequently, the driver was unable to force the computer-controlled steering wheel back to neutral, and the car crashed, killing the driver.
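To make the failure mode concrete, here is a minimal sketch, in Python, of the kind of decision loop the fictional MCCAS might run. Everything in it is hypothetical: the object library, the sensor signatures, and the “resist the driver” rule are invented for illustration. What it shows is that the override decision rests entirely on a classifier match.

```python
# Hypothetical sketch of the fictional MCCAS decision loop.
# All names, signatures, and rules below are invented for illustration.

# Library of known objects: sensor signature -> (assumed object, threat level)
KNOWN_OBJECTS = {
    "large_metallic_moving": ("shopping cart", "hazard"),
    "small_metallic_static": ("road debris", "ignore"),
}

def classify(sensor_signature):
    """Match a sensor signature against the stored object library."""
    # Anything unrecognized is treated as a hazard: fail-dangerous.
    return KNOWN_OBJECTS.get(sensor_signature, ("unknown object", "hazard"))

def mccas_step(sensor_signature, driver_steering):
    """Return the steering command actually applied to the wheels."""
    _obj, threat = classify(sensor_signature)
    if threat == "hazard":
        # Evasive maneuver: hard left, resisting all driver input.
        return -1.0            # the computer wins; driver_steering is ignored
    return driver_steering     # no threat: the driver stays in control

# The crash scenario: glare off a crumpled beer can happens to match
# the signature of a large moving object. The driver holds the wheel
# straight (0.0); the computer commands hard left anyway.
print(mccas_step("large_metallic_moving", driver_steering=0.0))  # -1.0
```

Note that no branch exists in which the driver’s judgment can outvote the classifier; that single design choice is the whole tragedy of the story.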

The truth is, we do not need to wait for such a future account. The recent crash of a Boeing 737 MAX operated by Indonesia’s Lion Air, killing all 189 people on board, provides a glimpse into the robotic mindset guiding the aircraft. If subsequent investigations confirm the first reports, the flight data record the vain struggle of the pilots to keep the aircraft level while the latest addition to the aircraft’s automated functions, a system called MCAS, had erroneously declared an imminent stall and put the plane into a sharp, corrective dive. Attempts by the pilots to pull the plane back to level flight were apparently overridden by the newest enhancement of the on-board computer system, and the aircraft nose-dived into the sea. “HAL” knows best, right up to the fatal crash.
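In caricature, the reported behavior reduces to a loop like the sketch below. This is emphatically not Boeing’s implementation of MCAS (the Maneuvering Characteristics Augmentation System); the sensor name, threshold, and trim increments are assumptions for illustration. It captures only the reported failure mode: a single faulty angle-of-attack reading repeatedly commands nose-down trim, undoing the pilots’ corrections each cycle.

```python
# Caricature of a stall-protection override loop. This is NOT Boeing's
# MCAS implementation; the threshold and trim increments are invented
# to illustrate the reported failure mode.

AOA_STALL_THRESHOLD = 15.0   # degrees; assumed trigger angle
NOSE_DOWN_INCREMENT = 2.5    # assumed nose-down trim per activation
PILOT_NOSE_UP = 2.0          # assumed pilot counter-trim per cycle

def stall_protection_step(aoa_reading, current_trim):
    """One control cycle: the pilots trim up, then the system reacts."""
    trim = current_trim + PILOT_NOSE_UP          # pilots pull the nose up
    if aoa_reading > AOA_STALL_THRESHOLD:
        # The system believes a stall is imminent and pushes the nose
        # down again, regardless of what the pilots just commanded.
        trim -= NOSE_DOWN_INCREMENT
    return trim

# A faulty angle-of-attack sensor is stuck at 22 degrees. Each cycle
# the pilots win back 2.0 units of trim and the system takes away 2.5,
# so the net trim ratchets steadily nose-down.
trim = 0.0
for cycle in range(5):
    trim = stall_protection_step(aoa_reading=22.0, current_trim=trim)
    print(f"cycle {cycle}: net trim = {trim:+.1f}")
```

The pilots never stop fighting; they simply lose a fraction of the argument on every cycle, and the machine has no concept of conceding.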

While the AI computer was making a billion calculations per second in a game of “match the sensor output to the library of stored known objects,” the driver of my fictional future car had quickly recognized the sparkly object in the road as a crumpled beer can of no consequence. The pilots of the doomed aircraft likely could tell that the aircraft was flying level, in spite of questionable sensor input to the contrary.

Replacing human sensory input with electro-mechanical devices is common enough that malfunction of either is a real consideration. Humans have the evolutionary advantage: their brains innately make distinctions in the real world that AI systems must be laboriously trained to make, identifying objects and situations already mastered by a 6-month-old child. The AI computer must build its own library of objects against which it bases future decisions as it navigates its decision tree on sensor inputs. What happens when a bug or ice fouls a sensor? AI also lacks the adaptability and the value-judgment skills humans use to deal successfully with a situation for which it has no prior training or reference data in its decision-tree core.
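A final sketch illustrates that brittleness. The sensor, the threshold, and the values below are all invented, but the structural point stands: a decision tree has a branch for every input, including impossible ones, and no branch for “this reading makes no sense.”

```python
# Hypothetical illustration of a fouled sensor feeding a decision tree
# that has no notion of an implausible input. All names are invented.

def read_airspeed(sensor_ok):
    """Simulated airspeed sensor; ice or an insect corrupts the value."""
    return 250.0 if sensor_ok else 30.0   # 30 knots: absurd at cruise

def decision_tree(airspeed):
    """Every input lands on some branch; none says 'this makes no sense.'"""
    if airspeed < 100.0:
        return "STALL IMMINENT: pitch nose down"
    return "normal flight: hold attitude"

# Healthy sensor: a sensible decision.
print(decision_tree(read_airspeed(sensor_ok=True)))    # normal flight

# Fouled sensor: the tree has a branch for every number, including an
# impossible one, and acts on it with full confidence. A pilot would
# cross-check attitude, altitude, and engine power and reject the
# reading; the tree has no such sanity-check branch.
print(decision_tree(read_airspeed(sensor_ok=False)))   # STALL IMMINENT
```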

The unnecessary death of 189 people is a high price to pay for a computer programming glitch. “To err is human” is a caution AI programmers would do well to heed.