Artificial Intelligence (AI) will use super-fast computers with huge storage capacity, connectivity and advanced algorithms to analyze problems and find solutions quickly.
I watched every space shuttle launch but one before the Challenger explosion. I saw the icicles on Challenger, recognized that the conditions were outside the shuttle's operating envelope, and said, “Nobody is going into space today.” I went to work, expecting to watch the launch a couple of days later.
If the shuttle computers had monitored how hard the engines were compensating for the stresses on the vehicle, and detected that the compensation was going outside the normal range, they could have separated the orbiter from the solid rocket boosters (SRBs) and the external tank (ET) and then returned to the launch site (RTLS).
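As a rough illustration of that kind of monitoring (not the actual shuttle or engine-controller software), the Python sketch below watches a couple of hypothetical engine parameters and flags an abort when they drift outside an assumed normal band. Every field name and limit here is an assumption invented for the example.

```python
# Minimal sketch (not flight software): a hypothetical monitor that flags an
# abort when an engine's compensation goes outside an assumed normal range.
# All names, fields, and thresholds are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class EngineSample:
    gimbal_deflection_deg: float   # how hard the engine is steering to compensate
    chamber_pressure_pct: float    # thrust as a percent of rated power level

NOMINAL_GIMBAL_LIMIT_DEG = 5.0          # assumed normal-range bound
NOMINAL_PRESSURE_RANGE = (65.0, 109.0)  # assumed acceptable throttle band

def out_of_envelope(sample: EngineSample) -> bool:
    """Return True if the engine is compensating beyond the assumed normal range."""
    low, high = NOMINAL_PRESSURE_RANGE
    return (abs(sample.gimbal_deflection_deg) > NOMINAL_GIMBAL_LIMIT_DEG
            or not (low <= sample.chamber_pressure_pct <= high))

def monitor(samples) -> str:
    """Scan a telemetry stream and recommend an abort on the first violation."""
    for t, sample in enumerate(samples):
        if out_of_envelope(sample):
            return f"ABORT recommended at sample {t}: engine outside normal range"
    return "Nominal"

if __name__ == "__main__":
    stream = [EngineSample(1.2, 104.0), EngineSample(2.8, 104.0), EngineSample(7.5, 92.0)]
    print(monitor(stream))
```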
I worked at Honeywell on the non-flight computers, and we were told that the Honeywell engine controller computers operated properly. But like the main computers, they were not checking for the right abort conditions.
You could say the Challenger disaster was a software design error.
An AI system monitoring the flight could have recognized the emergency situation and aborted the flight safely.
Simple preflight rules, specific logic monitoring the flight, or generic AI oversight could have prevented the tragedy; a simple rule check is sketched below. The same approach can be applied to many other aspects of decision making.
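For instance, a "simple preflight rules" check could be as plain as a go/no-go evaluation of launch commit criteria. The sketch below is only illustrative; the rule names and limits are assumptions, not NASA's actual launch commit criteria.

```python
# Minimal sketch of simple preflight rules: a go/no-go check against assumed
# launch commit criteria. The criteria names and limits are illustrative.

PREFLIGHT_RULES = {
    "ambient_temp_f":   lambda v: v >= 40.0,   # assumed minimum launch temperature
    "ice_on_vehicle":   lambda v: v is False,  # no visible ice/icicles on the stack
    "wind_speed_knots": lambda v: v <= 30.0,   # assumed upper wind limit
}

def preflight_check(conditions: dict) -> list[str]:
    """Return the list of violated rules; an empty list means 'go'."""
    return [name for name, ok in PREFLIGHT_RULES.items()
            if name in conditions and not ok(conditions[name])]

if __name__ == "__main__":
    # Conditions loosely resembling the cold Challenger launch morning.
    morning = {"ambient_temp_f": 36.0, "ice_on_vehicle": True, "wind_speed_knots": 10.0}
    violations = preflight_check(morning)
    print("NO-GO:" if violations else "GO", ", ".join(violations))
```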
Isaac Asimov’s three laws of robotics (1. A robot may not harm a human being or, through inaction, allow a human to come to harm; 2. A robot must obey human orders unless that conflicts with the first law; 3. A robot must protect its own existence unless that conflicts with the first or second law) could be required for licensing an AI device. The interesting aspect is that AI would then block abortions.
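As a toy illustration of that kind of precedence (nothing like a real licensing test), the sketch below encodes the laws in priority order and, when every available action violates something, prefers the action whose worst violation is the lowest-priority law. The Action fields and predicates are invented for the example.

```python
# Toy sketch of rule precedence among ordered laws. The Action fields and
# checks are illustrative assumptions, not a real licensing standard.

from dataclasses import dataclass

@dataclass
class Action:
    name: str
    harms_human: bool = False
    disobeys_order: bool = False
    harms_robot: bool = False

# Laws in priority order; index 0 is the most important.
# Each predicate answers: does the action violate this law?
LAWS = [
    ("First Law",  lambda a: a.harms_human),
    ("Second Law", lambda a: a.disobeys_order),
    ("Third Law",  lambda a: a.harms_robot),
]

def severity(action: Action) -> int:
    """Index of the highest-priority law violated (len(LAWS) if none)."""
    for i, (_, violates) in enumerate(LAWS):
        if violates(action):
            return i
    return len(LAWS)

def choose(actions: list[Action]) -> Action:
    """Prefer the action whose worst violation is the least important law."""
    return max(actions, key=severity)

if __name__ == "__main__":
    # Sacrificing the robot (Third Law) beats letting a human be harmed (First Law).
    options = [Action("stand by", harms_human=True), Action("intervene", harms_robot=True)]
    print(choose(options).name)  # -> "intervene"
```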