Will AI Ever be Considered a ‘Person’?

Muhammad Mooneeb Hussain
12 min read · Jun 19, 2020
Artificial Intelligence from Detroit: Become Human

Artificial Intelligence — two words that spark curiosity, interest, or outright fear in us all. The ethical and moral dilemmas around AI have been present ever since John McCarthy (the father of AI) coined the term, and they persist to this day, more so now than ever before. With advancements being made at an astounding rate, the idea of creating a conscious, rational, sentient being is no longer merely fictional. The question tackled in this essay is not simple, nor does it have a simple answer. To truly understand all that it entails, we first have to understand what is meant by AI within this discourse. Then, we will have to consider the philosophical side of the equation along with the scientific, deciphering questions such as ‘What is a person?’ and ‘What is conscience?’. That will be the second step. The third and final step will be an attempt to understand the prevailing points of view and assess their validity within the grey areas of philosophy and science we discuss.

Before we can get into the more sophisticated debate regarding the existence of AI, we must first understand what the term describes. According to Nils J. Nilsson:¹

Artificial intelligence is that activity devoted to making machines intelligent, and intelligence is that quality that enables an entity to function appropriately and with foresight in its environment.

The importance of AI, and the main foundation behind its existence, is tied tightly to the concept of intelligence. Not just any intelligence, though, but the latest intellectual feat that technology has achieved, or HLMI (High-Level Machine Intelligence). At one point, that feat would have been the calculator, an intelligent machine that could solve complex equations within seconds. Today, however, a calculator is no longer considered AI, because it is no longer intelligent enough to qualify. AI’s worth is attached to its intelligence, just as ours is to productivity.

It is also crucial to note that, in many, if not all, of these scenarios, human beings serve as the baseline against which AI is compared. Whether in physical, moral, philosophical, or ethical terms, everything an AI does or will do is measured against what humans can achieve. The reason given for this, and one which is widely accepted, is that no known species possesses the same capabilities that we humans do. If we are to develop a machine, an AI, and make it our most intelligent and powerful creation, the only yardstick available is to compare it against, and surpass, what humans have. Whether that statement is valid, or just another example of human hubris oozing out of our entitled selves, is a philosophical argument for another time.

However, intelligence on its own is not enough. For a machine to genuinely be an AI, it must be able to use the intelligence it possesses to help humanity. These two core elements, ‘intelligence’ and ‘usefulness,’ are what separate AI from any other machine. If a machine is useful but not intelligent (or not intelligent enough by modern standards), it is just technology. If a machine is intelligent but not useful, then it is incomplete, malfunctioning, or rebellious. AI is a tool that we create for our aid, nothing more and nothing less.

For the curious and the fearful, this outlook on AI raises several questions, one of which is:

“What if we go too far and create something too intelligent?”

According to Dr. Stephen Hawking, we would be courting our own demise, as “The development of full artificial intelligence could spell the end of the human race.” It would be foolish to disregard the concern of one of the most brilliant minds of our generation, but his worry cannot be taken to represent the opinion of every other scientist or researcher.

In a survey conducted by researchers from several leading institutions, AI experts were asked about the prospect of AI achieving HLMI. Among the questions asked, two concerned the time-frame (how soon machines would achieve HLMI) and the outcome (whether HLMI would have a positive or negative impact on humanity). On the first question, the aggregated forecasts gave a 50% chance of HLMI arriving within 45 years of the survey. On the second, respondents assigned a median probability of 25% to HLMI having a good impact on humanity, 20% to an extremely good impact, and only 10% to a bad impact.²

The survey is a perfect illustration of the lack of consensus within the AI research community regarding the impact of HLMI. This is because whether we consider AI to have become ‘too intelligent’ or to have achieved ‘HLMI’ depends on how we perceive or define those terms. In many respects, as Eliza Kosoy, a researcher at MIT, notes, machines have already outperformed human minds. They have greater information-storage capacity and more raw computing power. They have already mastered one of our most intellectual games, chess, and beaten our strongest representative, while self-driving cars navigate roads in compliance with human-made traffic laws. From another viewpoint, however, AI has yet to achieve the empathy and kindness that our intelligence possesses, meaning it has not truly overcome us.

Assuming that AI manages to achieve HLMI and surpass its creators, what happens then? What happens, not to the humans, but to the AI and to our perception of it? As the title of this essay asks, will we consider them our equals, as ‘people’ and ‘beings’? These are the questions we will discuss next.

However, before we can answer them, we have to understand some philosophical aspects of the questions posed above. The question of “What/who is a person/human?” has been part of an age-old discourse amongst both philosophers attempting to untangle the complexities of the human self and snarky teenagers trying to avoid personal questions.

According to Strawson in his book ‘Locke on Personal Identity,’ the idea of a ‘person’ can be split into two definitions: one referring to a human being as a whole, the other to the individuality of a single human being.³ He also describes the attributes that John Locke considered vital to a person: body, soul, actions, and experiences. To Locke, it was the combination of these four elements that makes up what we call a person (but sadly, not an Avatar).

In essence, both the given definitions and the attributed qualities make inherent sense, especially in the context of how the word ‘person’ is used within our society. The body alludes to the material body that we humans have: the skeleton, the muscle, and the skin. The soul refers to the unique essence of each individual, the core that makes us who we are. Actions are of two types, physical and mental. Physical actions refer to our ability to move and interact with physical materials. Mental actions, on the other hand, involve thoughts, rationality, and conscience. Lastly, experiences are the stories and memories that we create, and the lessons that we learn, through the use of the first three attributes. Together, these are what make a human being a person.

However, do they make an AI a person? That is what we are trying to understand. From an initial impression, it may seem that an AI could never have these four characteristics, but that is not necessarily the case. AI can have all four of these traits, one of which (mental actions) is directly related to its intelligence. But before we can attribute that trait, we need to look deeper (or as deep as we possibly can without writing a paper within a paper) into what thoughts, rationality, and conscience mean, to understand whether or not they relate to an AI.

Conscience, as Sigmund Freud considered it, is the interaction between the three parts of the mind: the id, the ego, and the super-ego. The id represents the primal segment of the mind, the part driven by an aching want to fulfill our basic desires and necessities. The ego’s responsibility is to police the id and keep it in check, ensuring our desires do not cross the boundary of morality. The super-ego is the authority that keeps a check on the ego to make sure it carries out the responsibilities it has been given.⁴

Do note that Freud’s understanding and explanation of the conscience is not the only one that exists. In fact, there is no generally accepted definition of conscience, nor may there ever be. I chose Freud’s definition because it encompasses all three factors mentioned in Locke’s mental actions.

The conscience, as mentioned above, is the interaction between the three parts of the mind. More specifically, it takes the form of the authoritative measures the super-ego uses to keep the ego in line. This means conscience is not something pre-existing within us; it is something that develops through our interaction with our surroundings as we grow up. This, in turn, affects our rationality, since rationality, our way of reasoning, is our ego. The ego’s responsibility is to make sure the decisions we make to appease our desires are rational and lie within acceptable social and moral norms. Combine all three, and we get thoughts.

This entire philosophical journey we just undertook had humans at its center. When we consider an AI with HLMI, we outright accept it as a machine that has surpassed all aspects of human intelligence and ability: physical, mental, and emotional. Such machines have conscience, rationality, thoughts, id, ego, and super-ego comparable to humans’. They are not, then, just pieces of technology; rather, they are beings. Sentient, rational, emotional beings. That is what it means to have surpassed humanity. Yet, from Locke’s point of view, this is not enough to be a person. They still need to have a body and a soul, and to have undergone experiences. However, it is difficult to figure out whether AI will ever achieve the other three attributes before such machines come into existence and we have to face that reality. This is where pop culture comes in.

Movies, TV shows, and video games have all delved deeply into issues concerning HLMI and its arrival. Pacific Rim, Terminator, I, Robot, and Iron Man are just a few examples. A key commonality within these examples is that their artificial intelligences are robots, i.e., they have a material body, something that is important for complying with Locke’s definition. The one example this essay will focus on is a relatively recent video game called ‘Detroit: Become Human’ (hereafter referred to as D:BH).

The reason for choosing this game to talk about HLMI is not just that it is recent; it is how effectively the game draws us into the world it creates and the issues it presents. A choice-based game, D:BH presents us with a world where artificially intelligent androids have become an essential cog within the world’s machinery. They have integrated into society and have become the perfect representation of what we expect of an AI: a technological means of making our lives comfortable. They are made to look, talk, and act so much like humans that the only discernible difference is a ‘computing circle’ on the side of their forehead. Being able to play this game from an android’s point of view, rather than a human’s, is what makes the game’s message hit home. We have all seen stories where AI goes rogue and attempts to destroy us all; this game does not take that one-sided route. It gives players a choice in how they would like to play, showing us many different conclusions depending on which side we choose. The consequences of our decisions are what make it feel real, as though we too could face the same scenario in the future.⁵

The issues and messages that the game attempts to deliver come from both points of view: the effects androids have on humans and vice versa. From our point of view, the game touches upon fear, unemployment, class division, heavy dependence on machinery, religious sentiment, and rebellion. From the androids’ point of view, we see a completely different picture: slavery, abuse, inequality, racism, and revolution. It visualizes for us a world where androids/robots/AI have achieved HLMI and, in doing so, either have fulfilled or will eventually fulfill Locke’s criteria for being a ‘person.’ They have a body, a physical, material body. Although they may not have a heart, their high mental intelligence allows them to have feelings: to feel pain, sorrow, empathy, heartbreak, and more. This, in turn, results in them experiencing situations and creating memories, both good and bad, remembering the joy and the pain and the trauma. With androids integrated into every conceivable role within D:BH, each was able to have its own unique experience. Each had essence. Each had a soul. So, for all intents and purposes, D:BH showed us a viable scenario where AI acted like people and, hence, should be considered as such.

However, they were not called people, and if we were to face such a situation in real life, they possibly would not be considered as such either. This is not just because of the human reactions depicted in D:BH when the androids ‘rebelled’ and demanded recognition as people, but also because of what we see within our own societies. In D:BH, humans did not take kindly to such demands; they considered them a threat to themselves and their power, their creations turning against them. In our real world, we see the same response given to actual human beings: minorities fighting for their rights, for equality, for justice, and for the rightful demand to be treated as people. If human beings cannot secure the right to be called a person because of the fear of the controlling majority, how will AI ever get such rights?

What is infuriating, though, is that we have cases where the title of personhood has been given to entities that are not human. Corporate personhood exists, giving corporations some of the rights bestowed upon humans. Similarly, we have examples of nature receiving the status of a person as well (which I am totally supportive of). We even have a physical AI robot, Sophia, who has been granted citizenship in Saudi Arabia. Yet we are still hesitant to consider many humans as people.

As for Sophia, she is the closest we have gotten to an AI with HLMI so far, but even she is a long way from surpassing human intelligence. However monumental her Saudi citizenship may seem, Sophia is currently still a technological tool that has not consciously begun to challenge our control. Until and unless she becomes conscious and uses reason to judge her decisions, it cannot be said with certainty that AI will be considered a person.

To conclude, the answer to the question “Will AI ever be considered a person?” is both yes and no. As times change, societies develop and become accustomed to new technology, adopting and accepting new policies and changes. If, in the future, we become a civilization that is more understanding of and empathetic to the cause of machines or beings other than ourselves, then yes, AI will be considered a person. However, if we continue to hold the mindset we currently have, one of fear, hatred, and control, then no, humans will not consider AI a person.

But this is the future, and the future will forever remain uncertain. In this current time, and even within this essay, we have attempted to understand and analyze AI from the perspectives, laws, and policies of humans. If AI does become more intelligent than us, what is to say it will not grant itself the right to be called a person? Create its own laws, policies, and scriptures? Become its own community? It sounds terrifying, but it is actually not. In the end, to live peacefully alongside our own is what people want.

References

[1] Stone, P., Brooks, R., Brynjolfsson, E., Calo, R., Etzioni, O., Hager, G., . . . Teller, A. (2016). Artificial Intelligence and Life in 2030. Stanford, CA: Stanford University. Retrieved June 15, 2020, from http://ai100.stanford.edu/2016-report
[2] Grace, K., Salvatier, J., Dafoe, A., Zhang, B., & Evans, O. (2018, July 31). Viewpoint: When Will AI Exceed Human Performance? Evidence from AI Experts. Journal of Artificial Intelligence Research, 62, 729–754. doi:10.1613/jair.1.11222
[3] Strawson, G. (2011). Person. In G. Strawson, Locke on Personal Identity (pp. 5–16). Princeton: Princeton University Press.
[4] Dimmock, M., & Fisher, A. (2017). Conscience. In M. Dimmock, & A. Fisher, Ethics for A-Level (pp. 157–167). Cambridge: Open Book Publishers. Retrieved June 14, 2020
[5] Maisenhölder, P., & Seng, L. (2019, November 11). The Serious Side Of Science Fiction — On The Usage Of Medial Depictions Of Artificial Intelligence For Ethical Reflection Using Detroit: Become Human As An Example. ICERI2019 Proceedings. doi:10.21125/iceri.2019.0850
[6] Ashrafian, H. (2014, January 11). AIonAI: A Humanitarian Law of Artificial Intelligence. Science and Engineering Ethics, 21(1), 29–40. doi:10.1007/s11948-013-9513-9
[7] Turner, D. (2013). The Soul. In D. Turner, Thomas Aquinas: A Portrait (pp. 70–99). London: Yale University Press. Retrieved June 14, 2020, from www.jstor.org/stable/j.ctt32bjcb.7
