Are we human because we have traits and qualities that neither machines nor animals have? If so, the definition of "human" is circular: what makes us human is whatever distinguishes us from animal and machine. It is a definition by negation: our "human-ness" is precisely what sets us apart from animals and machines.
On this view, we are human because we are neither machine nor animal. But evolutionary and neo-evolutionary theories, which place animals and humans on the same natural continuum, have made this way of thinking less and less plausible.
Our uniqueness is partly quantitative and partly qualitative. Many animals are intelligent enough to understand symbols and use tools; we simply do both better than they do. These are two of many differences that are easy to measure.
Qualitative differences are far harder to substantiate. We cannot know whether animals feel guilt, for example, because we have no access to their minds. Do animals love? Do they understand sin? What about permanence, meaning, logic, self-awareness, and the ability to think critically? Individuality? Emotions? Empathy? Is AI (artificial intelligence) a contradiction in terms? If a machine can pass the Turing Test, it could be said to be "human." But is it really? And if not, why not?
Literature is full of stories about monsters like Frankenstein's creature and the Golem, as well as robots that look like people, whose actions are often more "human" than those of the people around them. Perhaps this is what truly sets humans apart: behavioral unpredictability, the product of the interaction between a genetically fixed human nature and constantly changing environments.
The Constructivists go further, claiming that human nature is merely a product of culture. Sociobiologists, by contrast, are determinists: they hold that human nature, being the unalterable legacy of our bestial ancestors, cannot be judged morally.
A better Turing Test would look for baffling and erratic patterns of misbehavior in order to identify the human. In his "Oration on the Dignity of Man," Pico della Mirandola said that man is born without a fixed shape and can shape and transform himself at will; he called this self-creation. Centuries later, the Existentialists said the same: existence precedes essence.
The awareness of our own mortality may be the one thing that truly makes us human. All living things have an automatic "fight or flight" survival mode (as do appropriately programmed machines). Not so the effects of knowing that one will die; these are uniquely human. Appreciation of the fleeting gives rise to beauty, the uniqueness of our brief lives gives rise to morality, and the scarcity of time makes us ambitious and creative.
In a life that lasts forever, everything eventually comes to pass, so the idea of choice loses its meaning. Realizing that our time is limited forces us to choose among options, and this choice rests on the assumption that humans have "free will." Animals and machines, by contrast, are thought to lack choice, enslaved as they are to their genes or their programming.
Yet none of these answers truly settles the question: "What does it mean to be human?"
The set of traits we regard as "human" is subject to profound change. Drugs, neuroscience, introspection, and experience can alter these traits irreversibly, and when such changes accumulate, they can, in principle, produce new properties or eliminate old ones.
Animals and machines are presumed to lack free will, or at least the ability to exercise it. What, then, of fusions of machine and human (bionics)? At what point does a person become a machine? And why should we assume that free will ceases at that seemingly arbitrary point?
Introspection, the ability to construct self-referential and recursive models of the world, is thought to be uniquely human. But what about machines that can introspect? Critics retort that, unlike people, such machines are merely programmed to reflect on themselves; to count as introspection, they say, it must be willed. Yet if introspection is willed, WHO wills it? Self-willed introspection leads to logical paradoxes and infinite regress.
Moreover, the notion, if not the formal concept, of "human" rests on many tacit assumptions and conventions.
Political correctness aside, why assume that men and women (or people of different races) are identically human? Aristotle thought otherwise. Males and females differ markedly, both genetically (in genotype and phenotype) and environmentally (culturally). What do these two groups share that makes them both "human"?
Can we conceive of a human without a body (a Platonic Form, or soul)? Aristotle and Thomas Aquinas both say no: the soul cannot exist without the body. If a machine sustained an energy field containing mental states like ours, would it count as human? And what about a person in a coma: is he or she (or it) still fully human?
Is a newborn baby human, or at least fully human? If so, in what sense? What about a future human race whose members we would not recognise as such? Would machine-based intelligence be regarded as human? If so, at what point?
In all these deliberations, we may be confusing "human" with "person." The former is a special case of the latter. Locke's person is a moral agent, responsible for its actions, constituted by the continuity of its mental states as accessible to introspection.
Locke's definition is a useful one. It easily accommodates machines and energy matrices, provided the functional conditions are met. By this standard, an android that meets the requirements is more human than someone who is brain dead.
Descartes' objection that we cannot specify how a disembodied soul persists over time holds only if we assume that such "souls" possess no energy. A bodiless intelligent energy matrix could, in principle, maintain its form and identity over time; some AI and genetic software already does.
Strawson's notion of a "person" as a "primitive" draws on both Kantian and Cartesian thought. Physical and mental predicates apply equally, simultaneously, and in the same sense to all individuals of that type of entity, and a person is such an entity. Some, like Wiggins, restrict persons to animals, but this is not strictly necessary and is unduly restrictive.
Most likely, the truth is in a synthesis:
A person is any type of fundamental and irreducible entity whose typical members are capable of continuously experiencing a range of states of consciousness and of permanently possessing a set of psychological traits.
This definition allows for non-animal persons and counts brain-damaged people as persons ("capable of experiencing"). It also accommodates Locke's view of humans as ontologically akin to "clubs" or "nations," their personal identity consisting of a variety of interconnected psychological continuities.