Possibly the most important question facing humanity in the 21st century is: if you build a machine in the likeness of a human mind, is it still a machine? “To be human, or not to be human: that is the question.” This is an important existential question posed by Shakespeare’s Hamlet centuries ago.
I’ve modified it slightly so it has a new relevance in this modern age as our knowledge of the human mind and computers continues to improve at a ferocious pace, and it seems like artificial intelligence is just over the next hill. Not long after Shakespeare left the stage we had Rene Descartes and John Locke arrive on it, challenging our basic notions of identity and consciousness, and as artificial intelligence begins to emerge we still find ourselves with a very unclear idea of what a mind truly is.
These concepts, though centuries old, have never been more relevant than today, and to our topic of Artificial Intelligence. What artificial intelligence is, and what it means for our future and our basic philosophical and ethical outlooks on life, is far too large a topic to cover in just one article. So we will begin by looking at androids: sophisticated robots that look human. I chose androids for a few reasons, one of which is that this article comes out right before the new Blade Runner film. The android crisis is a while away still.
We have not yet created a machine that can even vaguely pass for human in mind and body. We don’t need to concern ourselves with the feelings or civil rights of toasters or smartphones. For that matter, the increased use of automation in factories has arguably helped remove the habit of viewing people as machines that some feel the Industrial Revolution caused. At some point though you could end up with something sufficiently close to a human mind; if this ever happened then we would need to begin asking if maybe it actually is.
This is not limited to androids, but they represent the closest approximation to a human. An android is a robot built to resemble a human so that it can interact with humans. The original Blade Runner, which happens to be my favorite film, focused heavily on androids and the blurred line between them and humans. It didn’t just blur that line by making very human androids, but by showing us a dystopian future in which humans were often treated as machines.
That’s an important aspect of the debate on artificial intelligence, because there is always a concern that having very human-like machines could make it easier to view fellow humans as machines. It is important to remember that machines don’t have to be metal or silicon like a computer, so you could build an organic android whose machinery was made of flesh and bone and whose processors were made of neurons. Done in sufficient detail, it would be impossible to determine whether they were human or machine without knowing exactly what to look for. That’s what that movie depicted.
This isn’t necessarily a human since its mind might be totally alien, but this would be unlikely in the case of androids. We have spoken about the Uncanny Valley in past articles. By default, we would expect that the more human something looks and acts the more comfortable with it we would be. This is not what happens though. At a certain point we stop being more comfortable the more human something is and start becoming increasingly uncomfortable, sloping down into a valley that presumably slopes back up if the approximation of human is good enough.
Your mind is wired up to notice tiny details of human behavior; we can get creeped out even by actual humans who aren’t behaving normally, without quite being able to put our finger on why. There’s a wide spectrum of responses to a given event. No two people respond quite the same, but it is a spectrum, and we tend to subconsciously know when someone is outside it. If they are, we start wondering if we’re sharing a room with a psychopath, and we wish to stop sharing space with them as fast as possible. So this is a key consideration for androids, at least beyond basic prototypes.
If you are going to go through the effort of creating a simulacrum of a human, with all the limitations imposed by that shape, you would prefer not to have potential customers creeped out by it. That means it needs to be either too far from human to enter the Uncanny Valley or a very good simulacrum. We have a no man’s land in the depths of that valley where you would probably never see a robot mass produced, and we should probably think of robots that occupy the human side of that valley as androids, and those on the inhuman side as just robots. That’s a very high standard of manufacture. Even beyond the initial research and development costs, there will be costs associated with making and constantly maintaining that android that other robots will not have.
For the same reason, an AI developed for non-android use will probably be designed differently from an android AI, because for an android to pass as a human, which is the whole point of having one in the first place, its AI has to be designed to appear human to humans. We are much more likely to see androids that are designed to think the same way we do, to avoid the Uncanny Valley. It is possible that an android AI could be designed to appear to think the way we do while having an alien intelligence behind that facade, but this needlessly increases its complexity. Such an AI effectively has to act as a double agent, hiding its true identity while maintaining its own alien agenda and internal dialog.
I will talk more about this later. But looking at this another way, when we talk about the notion of genetically engineering people for specific tasks, as in many sci-fi stories going all the way back to Aldous Huxley’s Brave New World, we do tend to refer to these as people, not androids. The boundary can get fairly hazy, and it probably is not a good idea to try to sharpen it. Start getting too specific about what is or is not human and some folks might find themselves left out, so expanding the definition is likely a better option than contracting it. Maybe you shouldn’t be asking “is this an android?” but rather, “is this sufficiently human?” When we consider AI rights, it will probably start with androids. The whole point of making an android is that it is as relatable to us as a human would be. This is important, as humans develop empathy for other humans.
If an android passes as human to us, we will probably develop the same empathy for it. If the android comes across as human, we will want to accord it the same rights as a human. If we do that, then this might become a precedent for giving rights to other AIs too, irrespective of whether they have human-like thoughts or very alien ones. However, there do have to be lines. If a person is made in the image of their creator, it is important to ask “which image?” Entities able to forge entire universes out of nothing presumably do not actually require a digestive tract to eat, or legs to move about, or hands to interact, and might have these things strictly for cosmetic purposes. So too, the key aspect of being human is not our anatomy or DNA, though we need to keep in mind that it strongly shapes who we are. An artificial intelligence built into a humanoid body would likely come to perceive the world and react to it much differently than one simply given various functional sensors and drones to utilize and interact with.
Mind-body dualism, in its purest form, is the notion that the mind and body are completely separate. This notion comes in a lot of different flavors, but most of us would generally accept that if we stuck a human brain into a robot body – which would be a cyborg rather than an android – the result is still a human, especially if the body is a very close match. But we also know they would be changed by that, their thinking altered, if not as profoundly as if we stuck their brain into a robotic cow or an actual dog. They might cease to truly be human after a time, both in their changed perception of the world and in how others perceive and react to them.
This is the concept of Embodied Cognition: that many features of cognition, whether human or otherwise, are shaped by aspects of the entire body of the organism. Appearances matter. The dog is man’s best friend; stick a human friend into a dog’s body and you might find yourself patting him on the head at some point, and he might find himself just fine with that, and taking a hefty interest in fire hydrants too, now that he has a heightened sense of smell. The funny thing, though, is that if we put your friend into a humanoid robotic body that did not pass our Uncanny Valley test, most of us would tend to be a lot more hostile to him than to a robot dog.
The human mind is a powerful instrument, one that happens to be terrible at math but quite excellent at monitoring behavior, especially that of other humans. We are social critters, and those interactions with other humans, positive or negative, are at least as important to our overall survival and prosperity as anything else. So we are adapted to be quite acute at reading each other’s behavior, body language, and so on, plus doing the reverse, hiding such things. So why would we build androids in the first place? What sort of purposes could such a machine be put to that justify that cost? We will get to that in a moment, but first I want to stress that last part: androids will probably never be used for any task where a semblance of humanity is not vital, because of the ongoing expense.
It isn’t just that you will need to have an entire research institute devoted to trying to mimic facial expressions and another to getting mouth and tongue movements down, it’s that you will always need to devote energy and processing power to those tasks. Your android, whether it runs on batteries or can eat food, still needs to have extra processors devoted just to controlling its lips and tongue while it speaks and the energy to operate those processors and machines. Alternatively, a robot shaped like a dishwasher can just have a simple speaker in it, and if it breaks, someone would just need to replace the speaker, not go through the hassle of replacing a dozen tiny little motors used for controlling facial movements.
When it does break, it just can’t speak anymore; but when your android breaks, it can’t control its facial movements properly anymore. As a result, it might be back in the Uncanny Valley, and the owner might decide to banish it to the garage till it’s fixed because it’s creeping them out. In a sufficiently high-tech and post-scarcity civilization you might have such immense resources that it doesn’t matter, but that is not the civilization that will be setting the basic standards on these things. We are probably only interested in the period of time when an android costs less than a brand new automobile but more than a smartphone or laptop.
That’s when they start becoming a regular feature in the human landscape and all the actual customs get set – when they are no longer a novelty but, at the same time, not so common everyone has entirely adapted to them. Also, that post-scarcity situation has some other issues, and so does long-term exposure, once the novelty has worn off and androids are just something you’ve known your whole life, but we will get to those later.
So as I said, you use an android when you have a task you need an android for, not a simple robot or other machine. What are those? The most obvious is when you need something done by a human but don’t want to use a human or cannot do so. Traditionally this was seen as the android maid or butler, for helping around the house, or the android soldier or taxi driver or cashier. This was partially justified back in the early days of computers when folks like Isaac Asimov were writing about it because computers were huge and hugely expensive, so it was assumed it made more sense to have one humanoid robot able to operate tons of different machines that were built with human operators in mind.
The modern perspective, though, isn’t to build a humanoid robot to operate a vacuum or a tractor, but to build a robotic vacuum cleaner or tractor that operates itself. We do not want humanoid robots on the battlefield, as awesome as giant fighting robots look, because it’s not an ideal shape for them. We don’t need a robot driving a taxicab, we need a computerized taxicab. And we don’t need an android cashier either, we need a computerized scanner. About the only application for an android fighting machine would be as a bodyguard, and even then only when you want a discreet one.
Bodyguards come in two types, the big hulking guy who acts as a deterrent to attack, and the less obvious ones who act as a surprise if attacked. If you see someone rich and famous being followed around by someone who looks like a linebacker, you think bodyguard. If you instead see a young lady, you tend to think personal assistant, family, friend, or romantic partner, and it would be rather shocking if they pulled out a gun and shot you. Modern technology like a firearm makes them just as dangerous, and of course an android might look willowy but have a titanium endoskeleton and be able to punch through brick walls. Still this is a fairly niche application and most folks don’t need bodyguards.
We do tend to need help with a lot of mundane tasks like housecleaning and, for that, an android is mostly pointless. Now there are some exceptions to this. By and large, the motivation for hiring a maid is, for most people, the same as in any other case of hiring someone: you have a task for which you lack the time, inclination, or skill, and you feel the cost of hiring them to do it is worth it over the alternative. This is a key aspect of human civilization to begin with: train in a specialized task so you can perform it faster, cheaper, and better than most folks, and get paid for it, which lets you pay others to perform their specialties.
Androids are much more likely to be put to use where humans want to relate to other humans. They could be handy for any social interaction, but the sheer cost of doing it correctly could limit them to only the most vital uses. One is childcare. You can use a lot of regular automation for that, and could probably get away with robotic teddy bears for some things, but a robot nanny is probably best made maximally human in appearance and behavior. If an android is going to be influencing your child’s nonverbal social and behavioral constructs and developmental thinking, you want it to be a good match for a normal human. It’s also the sort of thing folks will be willing to shell out a lot of cash for, since it involves children.
For these reasons the quality control on those androids needs to be insanely high. Though truth be told, the quality control on the stereotypical babysitter, an older sibling or a neighbor’s daughter or son, typically isn’t too high. Teenager is practically synonymous with irresponsible, but people are not too forgiving where their kid is concerned, especially when the one at fault is some machine or the company that makes those machines. You presumably want a robot that follows Asimov’s Three Laws of Robotics, or something similar, which to quickly paraphrase state the following: first, a robot cannot harm a human or let one be harmed; second, it must obey a human unless that involves harming a human; and third, it must not let itself be harmed unless avoiding harm involves disobeying or harming a human.
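To make the strict priority ordering in that paraphrase concrete, here is a minimal sketch in Python. It is purely illustrative: the `Action` fields and the way conflicts get flagged are my own assumptions, not anything from Asimov, and a real robot would face the much harder problem of recognizing harm in the first place, which this toy check simply takes as given.

```python
from dataclasses import dataclass

# Hypothetical flags a planner might attach to a candidate action;
# the field names are invented for illustration.
@dataclass
class Action:
    harms_human: bool = False              # would this action injure a human?
    disobeys_order: bool = False           # does it conflict with a human order?
    obeying_harms_human: bool = False      # would obeying that order harm a human?
    endangers_self: bool = False           # does it put the robot itself at risk?
    self_risk_serves_higher_law: bool = False  # is that risk required by Law 1 or 2?

def permitted(action: Action) -> bool:
    """Veto an action using the highest-priority law it violates."""
    # First Law: never harm a human, or allow one to come to harm.
    if action.harms_human:
        return False
    # Second Law: obey orders, unless obeying would violate the First Law.
    if action.disobeys_order and not action.obeying_harms_human:
        return False
    # Third Law: protect itself, unless that conflicts with the first two laws.
    if action.endangers_self and not action.self_risk_serves_higher_law:
        return False
    return True

# The robot-nanny story below in these terms: "kill the deer" flags no human
# harm and no disobedience, so the laws raise no objection at all.
print(permitted(Action()))                  # → True
print(permitted(Action(harms_human=True)))  # → False
```

Notice that the check is a pure veto cascade: nothing in it weighs emotional harm or common sense, which is exactly the gap the following example exploits.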
Now Asimov intentionally strained or bent those laws for his stories, but they are often considered decently solid as basic guidelines, though hardly unflawed. As a quick example of how that could go horribly wrong with a kid, an owner might tell the robot nanny that the child is not to go outside or make a mess. The kid sees a deer in the backyard and says they want to pet it. The parents come home a bit later and find their child wailing because there’s a dead deer in the living room with a broken neck. The robot nanny calmly explains it was ordered not to let the child leave the home so it went out and got the deer, but because there was a non-trivial chance of it harming the child at close distance, it killed it, and opted for a broken neck to minimize the mess when it was brought inside.
Needless to say, the manufacturers are going to be spending a lot of money on upgrades, patches, and the giant lawsuit they’ll be hit with, even though the robot absolutely obeyed the three laws. Of course it was harming the child, psychologically, but it needs to be a fairly clever machine to know that. You don’t want the kid to be psychologically harmed either, but it could end up being unavoidable even with the best android, because you might end up with a very safe and well-educated child who is at best a total brat from having a pet robot to boss around their whole life, or at worst might end up as a total sociopath.
They might have serious issues having normal relationships with people because that robot is 100% trustworthy and obedient, unconditionally, and people are not. So that takes us to a second obvious application for an android and that is adult relationships. I don’t just mean that as a euphemism for sex either, though that is an example of where science fiction has probably nailed the future on the head, or even underestimated it. Sci-Fi loves examples of the sexy android, and contemplating people using them for that purpose, and I think we can just take that as a given.
By adult examples I’m including the whole spectrum, everything from using them as caregivers for the elderly, which has issues similar to caregivers for little kids, to someone to chat with when bored. We talked about the Uncanny Valley, and that is mostly about appearance and body language, but it goes beyond that. We see chatbots these days that try to carry on conversations, and they don’t tend to do well. One famously turned sexist, racist, and anti-Semitic from exposure to Twitter feeds, but can we assume they will get better at avoiding those extremes?
This is not a good assumption. Oh, sure, they will get better, and you can fake a conversation without actual comprehension up to a point, but there are limitations on that. A chatbot or android with a subhuman intelligence might have no problem sitting down on the sofa next to you and talking about the weather, and seem human enough, but then you might say, “Wow, these are great cookies you made, almost as good as my grandmother’s. We used to bake them together; I love cooking.” And it might reply, “I love cooking too. Why were hers better?” And you might answer,
“Well, I suppose they weren’t, but they were made with love; she was my grandmother.” And it might reply, “That’s interesting, tell me about your grandmother.” And you might say, “Well, we used to cook together a lot, and garden too. I loved when we’d dig around the backyard, when grandpa wasn’t around anyway.” And it might reply, “I love gardening. Why didn’t you do it when grandpa was around?” And you say,
“Well, he was a bit of a tyrant honestly. He kept her busy with other things and bossed her around a lot. I hate to say it, but I was glad when we buried him.” And it might reply, “I love burying people.” At that point, no matter how good a simulacrum of a human that thing is, and how readily it normally lets you anthropomorphize it, you have just been reminded that you are sharing a sofa with a bloodless automaton with even less compassion than a psychopath.
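The failure mode in that exchange is easy to reproduce with a toy, ELIZA-style pattern matcher. The sketch below is hypothetical; the single rewrite rule is invented for illustration. It maps surface text to surface text with no model of what the words mean, which is exactly how a pleasant chat yields “I love burying people.”

```python
import re

# One ELIZA-style surface rule (invented for illustration):
# "we <verb>ed <object>" becomes "I love <verb>ing <object> too."
RULE = re.compile(r"\bwe (\w+)ed (\w+)\b", re.IGNORECASE)

def gerund(past_stem: str) -> str:
    # Naive past-tense-to-gerund spelling hack: "bak" -> "baking", "buri" -> "burying".
    if past_stem.endswith("i"):
        past_stem = past_stem[:-1] + "y"
    return past_stem + "ing"

def shallow_reply(utterance: str) -> str:
    """Produce a friendly-sounding reply by pure pattern matching, zero comprehension."""
    m = RULE.search(utterance)
    if m:
        return f"I love {gerund(m.group(1))} {m.group(2)} too."
    return "That's interesting, tell me more."

print(shallow_reply("We baked them together every weekend."))
# → I love baking them too.
print(shallow_reply("I hate to say it, but I was glad when we buried him."))
# → I love burying him too.
```

The second reply is grammatical, on-topic, and monstrous, because the rule has no representation of what burying a person is; it only knows that echoing enthusiasm usually works.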
I don’t know that you necessarily need an artificial intelligence in the thing that is as smart as a human to avoid that, probably not, but you need something pretty close to it, or you need it wired up to something smarter that it can ask for an appropriate response, and that’s pretty unnerving too. You probably do not want your Companion 3000 in a Borg-like network with a massive supercomputer elsewhere, asking how to respond properly whenever someone goes outside the normal script of human small talk.
There’s a great example of the importance of actual comprehension for carrying on a conversation in our book of the month for last month, Peter Watts’ Blindsight, which explains what a Chinese Room is. We’ll talk about that more in a future article, but the key thing is that to truly fake a human mind you pretty much need something as smart as a human. If it is that smart, it raises some disturbing issues about slavery, even if the machine is programmed to be quite happy with its lot. It’s really no different from indoctrinating people, or genetically engineering them, to enjoy some menial or unpleasant task.
This is not helped by the fact that most of us have been indoctrinated to some degree anyway; free will is a pretty hazy concept when viewed in terms of all the customs and traditions each of us absorbed into our core personality as kids. That slavery issue, though, is one we will save for another time, since it applies to any artificial intelligence. Our interest today is that an android is supposed to decently pass for human, ideally to make you feel like you are talking to a person even if consciously you know you are not. You will tend to treat that android like you would a person, to some degree, which might make you nicer to it than to a disembodied artificial intelligence, but could also condition you to treat actual people like you do your android.
Now that’s a problem in and of itself, but by and large other people won’t put up with it, so that person might find themselves preferring the android’s company. It never judges, it never disobeys, it never puts its own needs above yours, and it doesn’t need a vacation or some personal ‘me’ time. Imagine a kid raised mostly by an android nanny, their whole life, who always has an android around at home. It would be very easy for them to become socially awkward and increasingly introverted, finding real people stressful to deal with and spending more and more time with androids instead.
It’s not someone getting an android boyfriend or girlfriend because they can’t get a human one; in this case, the grown-up kid simply doesn’t want a human companion at all and prefers androids. In and of itself, this is not necessarily lethal to a civilization. We don’t actually need two people to make a new person; you could potentially have kids grown in vats and raised by androids, which sounds pretty creepy honestly, but is one of those options we toss around when contemplating interstellar colonization. A robotic von Neumann probe the size of a football shows up in a system, unpacks and replicates, starts building a colony and growing plants, animals, and people in vats from DNA stored in cryo or digitally, and then raises those kids. This is not automatically doomed to failure just because examples of it in science fiction always go horribly wrong. But I won’t pretend I don’t get creeped out by the notion either.
That is probably due to those customs and traditions you and I absorbed into our core personalities as a kid, that I mentioned earlier. Folks interacting with androids for a generation or two would change those customs and traditions and they might have no problem with that notion. Whether this change in attitude is a good or a bad thing for us as a species is debatable, but one thing I’m sure of is that interacting with androids over time will change our attitudes to androids and AIs in general, as well as to human roles in society.
Turning back to our von Neumann example, you could potentially create copies of the minds of actual people to be uploaded into androids for the first generation too. Which brings us to the basic types of artificial intelligence. We covered these more in the Technological Singularity article, and will look at them more in the future, but there I outlined three major ways to make an artificial intelligence. Type 1 is to just copy a human mind: you scan someone’s brain very completely, then emulate all their neurons on a big computer. It’s pretty debatable if this is an artificial intelligence; I tend to deem it one simply because I consider the term artificial intelligence pretty useless, and it is clearly artificial and intelligent. We’ve got two options on this: the first would be to tweak that scanned mind in certain ways to make it ideal for a task, and the second would be just to look for ideal volunteers for a task.
Making 50 copies of a Nobel Prize winner for 50 different projects, for instance, entirely with their consent and with the copies only a little upset at getting one of the tasks they were less keen on. It could be more sinister though, like someone with the proper background volunteering to let their mind be scanned to be a domestic servant, with their mind reset to the original scan every weekend to keep them from getting bored or rebellious. Type 2 is where you build a big computer with some basic learning abilities and let it learn its way to true intelligence. This is an issue since it is unlikely to come out very human, though it might learn human behavior, especially in a humanoid body. Not the best option for androids I suspect, though you could make many of them and just copy the ones that worked best. I generally consider this the most dangerous type of AI too. Type 3 is the most labor-intensive, where you program in everything, and that probably is best for androids because you can very carefully keep the thing short of true human intelligence and comprehension.
Everything is programmed, and you just keep patching and upgrading until its behavior is close enough that folks are comfortable with it. I generally consider this the only ethical and safe path for an artificial intelligence meant to basically be a servant to humanity, and even that’s a bit iffy. Now for androids specifically, or programs meant just for human interaction like automated customer service or tech support, we do have one other sub-type. Instead of scanning a person’s brain and emulating the whole thing, you take a basic generic Type 3 AI and have it watch one person very closely. It’s a very good impostor, essentially, because it’s got the basic human behavior programming plus an observed pattern of behavior. Essentially, imagine you carried a camera around with you all the time, and after some years all those recordings got fed into a replica android of you. Needless to say, it could mimic you to other people very well, great for infiltrating someplace, but again that’s a very niche application.
Of course you need all those recordings, so odds are only that person actually has them, and they might use such a thing as a stand-in for themselves: a celebrity who wanted to stay at home rather than going to a convention to talk to fans, for instance. For that matter, they could sell the basic appearance and behavior to people who wanted an android that looked and acted like them; again, it’s not a brain scan. But you are not the only person who tends to be around you a lot.
Odds are you can make a pretty good replica from your own recordings of your fellow employees, or family members, or folks you live with. So in a high-tech civilization, where you might have dozens of cameras all over the house all the time anyway, it might not be hard to use those recordings to produce an android that acts like a family member who passed away, or a romantic partner who lived there but moved out. We only know people by what we see of them anyway, so the replica exhibiting behavior that person actually would not, but which we wouldn’t know they would not, doesn’t actually matter. A brain scan might actually seem more off, since it will have behaviors of that person we’ve never witnessed. We know what is normal for a person based on our exposure to them.
This blog’s regular readers know I will end this article, as always, by asking them to like and share it and to have a great week. They might be surprised if I said “y’all” or “take it easy,” even though I say those to friends all the time; so a brain scan of me might say that, while the emulation built from the articles would not, and the scanned me would be dubbed the impostor. I’d be pretty confident you would get laws against using someone’s brain scan without their permission, and that’s unlikely to ever be a covert process either, so it’s easier to enforce.
However, it would be trickier to outlaw androids that looked and acted like someone, especially if companies can basically sell a template that lets you do that at home: you get a blank basic android and upload recordings of someone for it to alter its appearance and behavior to match. Very creepy, but pretty easy to imagine people doing. Sad, too; we can all imagine being horrified to find someone had an android replica of a coworker they had a crush on, but you can also imagine someone whose spouse died young, leaving them alone with a young child, making such an android.
Fundamentally though, androids represent a sort of special case of artificial intelligence, differing from the super smart kind meant to solve problems human minds aren’t configured for or aren’t smart enough for, or specialized robots that need a lot of intelligence for their task but aren’t necessarily sentient either. I’ve never been able to decide if androids will become ubiquitous, a regular thing in every household, or be something used only for niche applications or entirely taboo or banned. Unlike normal artificial intelligence though, they don’t represent much of an intellectual threat; there’s no need for them to be smarter than humans and indeed you probably don’t want them to be, and they wouldn’t become numerous enough to represent a physical threat unless they were already tried and tested.
An android won’t wake up as a prototype and go mad and kill everyone, because people aren’t stupid and will include tamper-proof tracking devices and shutoff switches. A super intelligent AI might figure out how to tamper with such a thing anyway, but an android not only doesn’t need super-intelligence, it probably isn’t desirable for it to have any. So any risk of rebellion should have been ironed out long before there were billions of them occupying almost every home, and if they all rebelled at once, someone could send the shutdown codes; rebellion over. They do represent a more existential threat though, as we’ve seen today, and we will see that more with other examples of artificial intelligence as we explore the concept.
Yes, an outright physical threat is always an issue, but a civilization might fall simply from its members having nothing to do, and no need or desire to work together. Again, we’ll discuss that more in future articles.
Next week though we will return to the Outward Bound series to discuss Colonizing Titan, and we will explore some of the options for using robots to help explore and colonize space. For alerts when that and other articles come out, make sure to subscribe to our YouTube channel, and if you enjoyed this article, please share it with others and leave a comment below. Thanks a lot for reading and take it easy.