When the robot is realistic, but not realistic enough, we enter that uncanny valley: our growing empathy turns into revulsion, or horror. The trick is to make it either more human-like, or less.
Robert Zemeckis tried the former with “Mars Needs Moms,” which he produced, and apparently failed (though I suspect that had more to do with the story).
That there is an uncanny valley in human simulations is not in dispute; but can you ever actually get out of it with better tech, better AI? The answer is no. The problem of the uncanny valley has nothing to do with the quality of the simulation; it is a problem of the existence of the simulation.
Where the uncanny valley theory goes wrong is in assuming that what horrifies us is the deviation from the authentically human. But the uncanny, as Freud saw it, has nothing to do with appearance; it is something that was supposed to remain hidden but has been revealed. That secret “thing” or experience doesn’t come from the robot, it comes from you, and you’re never supposed to see it. Making the simulation more convincing only makes it worse.
It may be possible to build a robot that can trick a human for a time; but if the human knows that it is a robot, that knowledge alone invokes the dread.
It is looking into a lie: something that looks exactly like a thing but stands outside its definition. It is the same dread a man feels when he looks into his loving wife’s smiling eyes and detects nothing amiss, yet already knows she is unfaithful; the dread of looking into a psychopath’s eyes, knowing that he exists in complete freedom, bound by nothing, not necessarily cold, but limitless. It is also why simulations like WALL-E or C-3PO don’t generate dread despite being poor simulations of humans: they may be robots, but they are utterly, completely human, because we know their limits.
The uncanny is the appearance of Id without Superego. It answers to nothing. It takes just as much pleasure in a sandwich as in pushing its humanoid fingers straight through your eyes just to see what that feels like. No amount of physical perfection will ever hide the fact that it can do anything it wants, outside morality and being, and you can’t.
And why do we think we’d be better able to connect with more human-like robots? We connected just fine with WALL-E, and he’s two-dimensional and friends with a cockroach. On the other hand, we can’t really connect even with humans.
But what will cause human beings the most dread in the android future won’t be the androids, but living amongst the people who have bonded “completely” with them.