After a long day at work I settled into my favorite easy chair, relishing the solitary adult beverage I normally permit myself, and sat scanning through the news on my favorite digital device. I was looking desperately for something which, upon reading, wouldn’t prompt me to run screaming into the night calling for my mommy – I failed miserably.
Staring up at me from the pixelated depths of digital chaos was tech entrepreneur Elon Musk, speaking out about the dangers of artificial intelligence (AI). I am neutral in regard to Mr. Musk and his technological endeavors (Tesla and SpaceX), but I do agree with him about artificial intelligence and the risks inherent in its inappropriate usage.
What – the attentive reader may wrathfully ask – is “inappropriate usage?”
Well, let’s take a look at AI and how it is currently used – and how it might be used in the very near future.
Any human activity augmented by software or hardware is, technically speaking, an interaction with AI. The thermostat that you set at 72 degrees and that automatically shuts off when the room cools down to that temperature is an AI. It has intelligence because it has been programmed to shut off when it senses an ambient air temperature of 72, and it is not human (it is artificial).
Now, granted, it is also a very, very dumb AI — if you ask it to solve “1 + 1” you will have many electric bills to pay before you ever get an answer.
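For the programmers in the audience, that thermostat’s entire “intelligence” can be sketched in a few lines. This is a purely illustrative toy, assuming a cooling thermostat with the 72-degree setpoint described above; the function name and values are mine, not any real device’s firmware:

```python
# A deliberately dumb "AI": a thermostat reduced to a single hard-coded rule.
# The setpoint and function name are illustrative, not from any real product.

def thermostat_should_cool(current_temp_f: float, setpoint_f: float = 72.0) -> bool:
    """Run the cooling while the room is above the setpoint; shut off at 72."""
    return current_temp_f > setpoint_f

# The whole "mind" is one comparison: cool until 72, then shut off.
print(thermostat_should_cool(78.0))  # room is warm -> True (keep cooling)
print(thermostat_should_cool(72.0))  # setpoint reached -> False (shut off)
```

Ask it anything else — “1 + 1”, say — and, as noted above, you will wait a very long time for an answer.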
AI, however, can be much, much more. For instance — the current state of AI involves systems that gather sensory data, compare it against norms and standards, and, when warranted, generate an appropriate message to a human.
That little notice that appears on your car’s display that says your right front tire is under-inflated? AI.
The text message your security system sends you that the back door was opened at 2:12 a.m., by your now grounded daughter? AI.
Music or book recommendations digitally sent to you based on previous purchases? AI.
Advertising for products and services digitally sent to you based solely on your Internet browsing? AI.
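All of those notifications follow the same pattern: a reading is compared against a human-set norm, and a message goes out only when the reading falls outside it. Here is a minimal sketch of that pattern; the sensor name and the pressure thresholds are invented examples, not taken from any real vehicle:

```python
# Hedged sketch of threshold-based alerting: compare a sensor reading against a
# human-defined normal range and generate a message only when it is out of range.

def check_reading(sensor: str, value: float, low: float, high: float):
    """Return an alert string if value is outside [low, high], else None."""
    if value < low:
        return f"{sensor}: reading {value} is below the normal range ({low}-{high})"
    if value > high:
        return f"{sensor}: reading {value} is above the normal range ({low}-{high})"
    return None  # within norms: the system stays quiet

# Example: an under-inflated tire triggers a message; a normal one does not.
print(check_reading("right front tire (psi)", 26.0, 32.0, 36.0))
print(check_reading("right front tire (psi)", 34.0, 32.0, 36.0))
```

Note that the system interprets nothing beyond “inside the range or not” — the norms, the ranges, and the wording of the message were all decided by humans in advance.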
Stop for a moment to think about all the messages you receive (digital display on some device, email, text message or other) that entail an AI or other automated system forwarding data (mildly interpreted at best) directly to you.
It will be much more than you think, until you think about it.
There is nothing wrong with this because the AI is in passive mode – simply gathering facts. No danger of Alexa or Siri ordering a pizza without being told to do so, or so we are informed…
The next advance is to program the AI in such a way that it not only gathers data (either directly through modified sensory input, or indirectly through humans inputting the data), but interprets the data and offers unique recommendations. As I write this diatribe, all those AI recommendations are still based on a wide range of algorithmic possibilities, all created by humans – the AI simply processes the data faster and can assess more of those recommendations, in more scenarios, than humans can.
No matter what, though, the AI still makes recommendations for future action that are wholly based on a range of options preset by human programmers.
Not only that, but AIs generally are not permitted (by any human who has half a brain) to make decisions that involve action unless that action has already been planned and reviewed by humans as well. Letting an AI trigger an automatic shutdown in response to a dangerous problem at a nuclear plant is still based on human decision-making scenarios.
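In code, that “preset and human-reviewed” constraint looks like a lookup into a playbook the humans wrote in advance. This is a hypothetical sketch — the conditions, procedure names, and actions are all invented for illustration, not drawn from any real plant’s control system:

```python
# Sketch of decision-making confined to a human-authored playbook: the system
# can only select among pre-approved actions, never invent a new one.
# All condition names and procedures below are hypothetical.

PLAYBOOK = {
    "coolant_temp_high": "reduce reactor output per procedure 7A",
    "coolant_temp_critical": "initiate automatic shutdown per procedure 9C",
}

def recommend(condition: str) -> str:
    # Anything not on the preset decision tree is escalated to a human,
    # rather than improvised by the machine.
    return PLAYBOOK.get(condition, "no preset action: alert human operator")

print(recommend("coolant_temp_critical"))
print(recommend("unexpected_vibration"))
```

The point of the design is the fallback line: when the situation is off the tree, the system’s only move is to call a human — which is exactly the guardrail the next paragraphs worry about losing.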
The great risk is when – at some point in the future – AI use is widespread and not under the careful and constant supervision of intelligent and discerning humans. The great risk is when humans abdicate responsibility and allow AI not only to gather, interpret and make decisions based on known options, but permit AI to make decisions which may not be found on a carefully preset and human-programmed decision tree.
In humans we like to call this “thinking outside the box,” but in AI this could be catastrophic since the ripple-down effect on humans cannot – despite the processing power of AI – be adequately calculated.
The somewhat misguided Utopian vision of an AI is basically one in which the AI operates and thinks just like a very smart human – just faster, bigger, better, stronger and all that. Obviously, however, that is not enough, because you cannot rely on an AI that makes decisions based solely on the logic and reason of advanced programming. Hey – I think that was a Star Trek episode!
In other words, we are creatures of body, mind, and soul – humans have emotions and passions, as well as ethics and morality. Unless you can teach that to an AI, you will have AI-rendered decisions and actions based only on the best possible outcome – and the best possible AI outcome may not necessarily be the best possible human outcome.
Humans, generally speaking and considered as a species, are not very intelligent. Truth be told, humans have made incredible progress since our cave-dwelling days by simply being average – usually by utilizing the insights of a relatively small percentage of gifted humans.
When the AI genie is let out of the bottle, we run the risk that human reliance on this form of intelligence will not only create a dangerous dependency, but also allow average humans to control technologies that are way beyond their comprehension and understanding.
Worse than that, I can easily imagine those average humans – fully cognizant of their average nature – granting the right of control of the AIs to humans supposedly “gifted” enough to understand the technology, trusting such people to always make decisions about the AIs that will benefit everyone equally.
What could possibly go wrong with that idea?
The truth is, Elon Musk (and many others) rightfully do not trust average people to control the technology, nor (in their heart of hearts, I suspect) do they trust their fellow intelligent techies to do the right thing. Getting humans to do the right thing is a job that even an AI would have trouble executing.
I think using the word “executing” in the same sentence as “AI” is probably one of the things bugging Mr. Musk.
AIs are coming, and we do need standards of performance and structure. If the AIs will eventually be able to do all the things that the techies now wax poetic about, then we are creating beings that will be godlike.
Anyone who has studied Greek mythology knows how irrational and petty the gods actually were, and being a god does not of necessity mean you act like a god.
Anyway, I have to go now because I just got a text message from my pharmacy, an email from my refrigerator (out of milk) and apparently my car just auto-subscribed to some kind of music service. Fug.