Last century, we trusted machines to do things for us; this century, we’re starting to trust them to decide things for us. Humans have a notoriously patchy record when it comes to decision-making. But relying on technological systems to make decisions for us – especially when risks are involved and our safety is at stake – could have major consequences.
Soon we will have artificial intelligence in self-driving cars and trucks, automatically logging every mile driven and every hazard and accident avoided (or not), and accumulating orders of magnitude more experience than any human driver. Soon there will be clear distinctions between what is correct and what is not in all sorts of areas, such as medical diagnostics and financial analysis.
Nothing is ever perfect, and software certainly isn't, as Microsoft proves daily. Not even a super-machine can anticipate every eventuality or interact with perfect accuracy all the time. More worrying still is how trusting human nature is, and how reluctant humans are to admit their mistakes until it is too late and lives have been lost.
We may believe that a machine “knows” more than we do, or can access information we can’t, so we will need to bring a healthy scepticism to our interactions with these systems. Above all, we will have to learn to recognise when our machines are in over their metaphorical heads, and hit the brakes for them. (Based on an article by Jamais Cascio, distinguished fellow at the Institute for the Future, in New Scientist, September 2017; heavily edited for length.)