Lately there seems to be a lot of interest in artificial intelligence. Recently I read several blog posts about AI writing by Martha Kennedy. This week it is Fandango posing a question about AI.
I can’t write about AI without thinking about the stories of Isaac Asimov. Anyone familiar with his science fiction will know that he wrote a great many stories about robotics and AI. Asimov mostly saw robots as benign, and the robots in his stories had to conform to the Three Laws of Robotics, which were meant to safeguard humanity from harm.
The Original Laws of Robotics (The Calvinian Religion)
- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- A robot must obey orders given to it by human beings except where such orders would conflict with the First Law.
- A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
However, in his later years Asimov came up with The Zeroth Law.
The Zeroth Law (The Giskardian Reformation)
A robot must act in the long-range interest of humanity as a whole, and may overrule all other laws whenever it seems necessary for that ultimate good.
Isaac Asimov
This suggests to me that Asimov thought AI beings would become more intelligent than humans and would know what was best for humanity better than humans did themselves. You could say this is a good thing, but it has its pitfalls. Can AI make decisions that are not only wise but also compassionate? Does AI operate with ethical values? Of course, humans don’t always know what is good for them either and may fight against things that could ultimately benefit them.
In other stories Asimov wrote about a supercomputer called Multivac, which eventually becomes so sophisticated that it runs all aspects of human life and is self-maintaining. What happens if such a computer develops a fault? Humans would not have the intelligence or ability to fix it, or to survive without it.
Having AI so intelligent that it can take care of all our daily needs could free us from drudgery and give us time to do the things we want to do, but there would be a cost. What would become of all those people who once did the menial jobs? How would they live? You can see why the Luddites were worried. Having all our needs taken care of might also make humans lazy and lacking in a sense of purpose.
AI can be a wonderful tool, but having machines that are smarter than we are may ultimately lead to the end of human civilisation if we don’t know how to fix them.
I decided to pose this question to an AI Generator myself.
Q. Write a short answer to this question. What would happen if AI surpassed human intelligence?
A. If AI surpassed human intelligence, it would be a monumental feat and mark an important milestone in the advancement of artificial intelligence. It could open new pathways to innovative solutions to complex problems, while simultaneously raising questions about our own capabilities. Automation of more advanced tasks may become increasingly prevalent, potentially replacing or augmenting existing jobs with machines that can learn quickly and accurately perform difficult assignments. With the potential for increased efficiency and accuracy, AI-driven systems could revolutionize many aspects of society, from education to transportation. However, such a development also has the potential to limit creativity by allowing for precise but overly structured thinking processes.
My conclusion: Sir Humphrey Appleby might have written this, but regular people don’t write this way if they want to be understood.
This is a pretty interesting subject to think about: robotics, machine ethics, AI and so on.