It begins: the A-Z blog challenge starts today! For those of you who don’t know, that means each day in April (except Sundays) we will be working our way through the alphabet talking books, movies, superheroes, Oscars, and anything else related to entertainment. To kick things off I’m talking about androids and artificial intelligence, and an upcoming release I can’t wait for. Let’s get into it!
The Humanity of Dolphins and Artificial ‘Non-Humans’
In three days, Artificial, the debut novel in the series The Keplar Chronicles, comes out. Author Jadah McCoy brings a new story of android/human love and war to pop culture, and I couldn’t be happier. Syl is a human barely surviving the vicious Cull (bug-like robots) who barely remembers what emotions feel like. Bastion is an android sex worker who is only alive because he can convincingly hide his capacity for emotion. For those of you who are, like me, completely over the angsty-emotion and helpless-pawn routine, Syl is a breath of fresh badassery. The story runs the gamut of Blade Runner-esque problems but delves much deeper into what exactly humanity is. Says McCoy,
“I recently read an article about India declaring dolphins to be “non-human persons.” What is humanity but self-awareness, sentience, intelligence? We coin it “humanity” because human beings seem to be of the opinion that no other creature can feel as we do, think as we do, communicate as we do. Yeah, maybe dolphins don’t have the Grammy’s and maybe they don’t debate the effects of eating organic fish versus pollution-tainted fish, but does that make them any less intelligent? Just because your dog barks instead of speaking English, does that make you understand them any less? Just because an android is made of metal and coding, does that make their emotions any less real? Humanity is a term held on such a high pedestal. There are pieces, glimpses, of humanity in everything around us. One simply has to open their eyes and minds to see them.”
How Fiction Plays Into Our Innate Fear
Ex Machina pokes at our innate curiosity and fear about where the line of humanity is drawn, what really makes us human, and what happens when we can no longer tell the difference between machine and human. Media and entertainment have played off the fear that artificial intelligence will develop so well that androids will take over and kill or enslave humans. While this isn’t exactly hard to imagine, my question is: aren’t humans just as capable of, and just as likely to do, these things? If WWIII were to break out, the results would be catastrophic even without the existence of androids. Slavery is still an issue in parts of the world; who’s to say we won’t be the ones enslaving androids when the time comes? Perhaps in the future we will be fighting for the equality of androids.
So who determines when android AI becomes human? In an article called What is a Human? – Toward Psychological Benchmarks in the Field of Human-Robot Interaction, P. H. Kahn, H. Ishiguro, B. Friedman, and T. Kanda outline two kinds of claims that can be made about human-like androids.
Two different types of claims can be made about humanoid robots at the point when they become (assuming it possible) virtually human-like. One type of claim, ontological, focuses on what the humanoid robot actually is. Drawing on Searle’s terminology of “Strong and Weak AI,” the strong ontological claim is that at this potentially future point in technological sophistication, the humanoid actually becomes human. The weak ontological claim is that the humanoid only appears to become human, but remains fully artifactual (e.g., with syntax but not semantics). A second type of claim, psychological, focuses on what people attribute to the fully human-like humanoid. The strong psychological claim is that people would conceive of the humanoid as human. The weak psychological claim is that people would conceive of the humanoid as a machine, or at least not as a human.
These are all possibilities, and several of the ‘tests’ programmer Nathan Bateman was looking for in Ex Machina are a combination of these claims. The study goes on to describe six proposed psychological benchmarks of humanity. They are:
1) Autonomy: are we conditioned to behave autonomously in certain ways, or is the lack thereof a direct indicator of free will and morality?
2) Imitation: as infants we learn through imitation, and this is likely a hallmark of android behavioral growth.
3) Intrinsic Moral Value: we value human (and sometimes animal) life enough to understand on a core level why hurting or killing is bad, and interaction is something we seek.
4) Moral Accountability: we are accountable for our actions, so an indicator is whether we begin to also expect androids to be morally accountable.
5) Privacy: we have the right to determine what is and is not known about our private selves and lives. This becomes tricky when, technically, an engineer knows much about an android he created.
6) Reciprocity: we expect response and respond in kind to each other as humans, e.g., when someone extends a hand, we shake it.
Reciprocity is especially interesting in this analysis, because it denotes that reciprocal relationships are how we gain perspectives and readjust our way of thinking accordingly. For example, when children are raised in a slave environment, they do not develop proper reciprocal relationships. By this logic, we as creators will determine how androids develop reciprocity and gain new perspectives. If, upon creation, we treat them as equals and reciprocate on a human level with them, they will learn to behave as humans do in this way. If we use these androids solely as tools, as non-human entities not of equal stature, they will not readjust their perspectives based on our relation to them, and therefore won’t feel the need to adhere to other aspects of humanity, such as morality. Artificial actually holds up to this theory well. Since the android Bastion engages in sexual intercourse, by definition an equal meeting of two individuals, he would be able to develop empathy and human perspectives from these interactions and learn reciprocal behavior.
What do you think will be the outcome of fully developed AI? Tell us in the comments, and tweet/Instagram us your thoughts on how Artificial approaches the human/non-human quandary with #KeplarChronicles
Artificial releases April 4th, 2016. Pre-order here
What is a Human? – Toward Psychological Benchmarks in the Field of Human-Robot Interaction, by P. H. Kahn, H. Ishiguro, B. Friedman, and T. Kanda. In Robot and Human Interactive Communication (RO-MAN 2006), The 15th IEEE International Symposium on, Sept. 2006, pp. 364-371. doi:10.1109/ROMAN.2006.314461