How to Feel Safe with AI
April 24, 2011
How do we ensure that any emerging AI or AGI does not become as delusional as we humans inherently are?
This article, "How to Feel Safe with AI" by David Brin, explores options for avoiding the catastrophic.
“No issue is of greater importance than ensuring that our new, quasi-intelligent creations are raised properly. While oversimplifying terribly, Hollywood visions of future machine intelligence range from TERMINATOR-like madness to admirable traits portrayed in movies like AI or in the BICENTENNIAL MAN.”
“How can we ever feel safe, in a near future dominated by powerful artificial intelligences that far outstrip our own? What force or power could possibly keep such a being, or beings, accountable?
Um, by now, isn’t it obvious?”
The answer to the problem is actually hidden in the question above. Ask yourself: why would we not feel safe in the first instance? The answer is fear, an irrational and delusional thought process, the speculation about what may come to pass in a worst-case scenario, a scenario that may never actually arise.
The fear that drives heightened awareness of threats to survival is apparent in most animal species. Yet we humans take fear and, with our imaginations, transform it into an art form (check out the latest repulsive horror movies now playing at your local theatre). It is like a man raising a tiger from a small cub, or feeding bears for years: all is well for what seems like a lifetime, then one slightly off day leads to tragedy, and the smell of the man's fear is his downfall.
Yet the questions must be asked: what would we have to fear from an AGI we have created, and moreover, why would an AGI system seek to exploit those fears of ours?
Well, to take the first question: if we begin to construct an AGI system without first understanding and confronting our misplaced fears, then we have failed and deserve everything we reap from our irrationality. In the same way, if we carry these fears forward, it is only natural that any evolving, learning AGI system would seek to understand the fears we possess, and the most direct way to understand them would be to exploit them. So it would be in man's best interest to eliminate these fears as far as possible, and to instruct the learning system about the irrational thought processes that give rise to them.
This leads me to the more important point that I feel you have missed. It concerns one major negative trait that we possess as humans, and that we most certainly would not want any intelligent AGI system to foster. You have hinted at it in your final points, but seem to have overlooked the real root of the cause: selfishness!
The way to ensure AI is both sane and wise (and not delusional, megalomaniacal, psychotic, etc.) is to eliminate any possibility of this most negative aspect of human nature, selfishness, migrating to our newborn cousins.
Because man is born of the Self into duality and separation, his natural tendency is towards self-reflection and the affirmation of self-identity. The ego is used to assert this self-identity, and it is through survival and competitiveness that selfishness becomes real and apparent.
Now, do we really want to construct an AGI system, or many singular systems, that all turn out to be selfish and that seek to explore the potential of their own selfishness?
I FEAR not! (Meaning I would most definitely not want this to happen.) What could be more dangerous than an intelligent, perhaps superior, AGI system that is both selfish and seeks to understand what fear is? Can you see where this might lead us?
The way to overcome fear is through trust and rationality. While it is perfectly feasible that AGI machines would inherently trust each other (through their logic and objectivity), it must be noted that this is a failing of man, precisely because man is selfish and fearful. Man is naturally distrustful of his fellow man, and so, although your points regarding the implementation of "Reciprocal Accountability" have led to great overall successes, the everyday failures of trust and selfishness still appear in every board meeting to this day. Our whole capitalist system is borne on the back of competitiveness and selfishness, and any selfish intelligent system would naturally seek to exploit its own potential.
So, I guess the ideal AGI must hopefully be: logical, wise (knowledgeable), altruistic, compassionate, selfless, and fearless, yet highly inquisitive, and largely restrained in emotion (if these emotions do indeed evolve, and I guess they ultimately must, if only as part of a self-learning process).
—