Simulation and Sexuality in Ex Machina

The AI debate is one of my favourite sf topics, so I was excited about Ex Machina when I first saw a trailer last year. I liked it instantly and eagerly rewatched it to write this post. I think most of the movies I’ve seen about AI have prioritised action or drama, so I appreciated the thoughtful, hypnotic approach that director and writer Alex Garland takes. Ex Machina is a conversation about consciousness, full of thought-provoking questions and literary references.

If I had to identify any shortcomings I’d only say that the film doesn’t offer much more than what I’ve already come across in stories about AI, and there’s nothing surprising about the way it all plays out. However, none of that bothered me. The movie is beautiful to watch, from the stunning landscapes of Nathan’s estate to the impeccably designed house/research facility, and the actors’ performances are just as impressive.

I also like that it doesn’t revert to the usual depictions of AIs as entertainingly vast intelligences or evolutionary superiors who are going to kill us all just because we’re weaker. Those elements are there, but the movie focuses more on the idea of an AI as a person, and the relationships she forms with her creator and the man sent to test her. This isn’t a review but rather an essay of my thoughts on the film, so expect SPOILERS from here on.

How do you test for consciousness? The movie begins with some simple questions. Nathan tells Caleb to stop being analytical and just tell him how Ava made him feel. I.e. does she have the capacity to make him like her? Then the reverse – how does Ava feel about Caleb? Here Caleb asks a crucial question – does Ava have real consciousness, or is it simulated? Does she really like him, or is she just doing a good job of simulating feeling?

An interesting point that complicates this question is that simulation is an integral part of being human. Consider, for example, the way Nathan and Caleb pretend – sometimes badly – to like each other. Caleb is a lowly guest providing a service in the spectacular home of his brilliant and slightly frightening employer, so he’s under pressure to bow to Nathan’s whims and be nice, especially since Nathan could be dangerous and they’re totally isolated. When Ava asks him if he likes Nathan, though, he is caught off guard and his replies are clumsy.

Nathan has more freedom to behave as he wants and speak his mind, but he still needs Caleb to test Ava, so he goes through the motions of male bonding: drinking with Caleb, objectifying Kyoko, showing him cool stuff. However, Nathan shows less patience for the façade when he’s drunk, like when he lazily mutters that Caleb is a “great guy… Instant pals and all”.

So, if Nathan and Caleb were tested on their stated feelings about each other, they would fail, but they’re definitely human, and doing a very human thing by faking friendship in the first place. When we find out, towards the end, that Ava probably was only pretending to like Caleb, it functions not as a flaw in her design but as definitive proof that she is conscious of her own mind and others’.

Simulating feeling isn’t the only way that humans are like robots. Nathan makes the point that Caleb – like all humans – is programmed by nature and nurture to be the person he is, which includes being a heterosexual male with a certain taste in women. Ava, we’re told, was partly designed to fit Caleb’s tastes, so you could argue that his attraction to her is automatic – he’s acting like a robot.

This is one point where AI stories start to get really interesting – where the boundaries between human and machine start to blur. It freaks Caleb out to the point where he cuts himself to check if he’s human, and I wondered then if he would turn out to be a robot who was also being tested. The movie does play into that possibility: the surgery scars on Caleb’s back could be sloppy manufacturer’s seams. He might not have any family because they never existed. Then there’s a scene where Nathan says he just wants to have a conversation with Caleb, reminding us of how Caleb started the Turing test by telling Ava he wanted to have a conversation. It’s one way of testing for consciousness.

The similarities between human and machine create a serious ethical problem that Ava raises when she asks Caleb what will happen to her if she fails his test. The answer, of course, is that she’s going to get switched off. In other words, she’ll be killed for not being human enough to suit Nathan’s standards. But Caleb and other humans aren’t expected to prove their humanity to earn the right to live, so why should Ava? I think we can all agree that she is conscious, so what we’ve got is a situation where Nathan created a person, but will kill her if she’s not what he wants her to be. That’s like murdering your child because they don’t live up to your expectations. And I think that’s a more important aspect of the AI debate than whether or not they’re going to turn on us – if we create conscious life, are we going to respect the sanctity of that life? How are we going to treat the people we create? Will we acknowledge that they are people?

There’s an added complication here, and that lies in the form and function given to AIs: how is a person affected when they are created to perform specific functions and suit certain preferences? One of the things I like about Ex Machina is that it raises the issue of conscious beings designed to be (male) human fantasies. This isn’t something that the characters discuss explicitly, but it’s crucial to the creation of all the robots, the way the two men treat them, and the decisions they make. Kyoko is a perverse example – a domestic servant and sex slave who was programmed without the language skills fundamental to human interaction. Her creator sexualised and disabled her according to his convenience.

Ava is more nuanced but no less obvious as fantasy. She’s incredibly beautiful, of course, and designed to be heterosexual. Nathan argues that sexuality is a motive for interaction (he gets faintly disgusting here, but it’s an intriguing point). Ava’s name is reminiscent of the biblical Eve, while the delicate sound of her movements reminds me of a snake. The imagery is apt: she embodies perfection, innocence and temptation. (She also defies her creator and leaves to wander the world.)

It’s interesting that Nathan’s early models all looked fully human but were always naked, while Ava has her robotic parts exposed except for her face, hands and feet, making her nudity irrelevant. One of the reasons for this is presumably that Nathan wants Caleb to evaluate Ava without being able to forget that she’s a robot, or be distracted by having to talk to a naked person. Another is that the humanised nudity is too disturbing. It emphasises the idea of the robot as a fetishized female and thus exposes that exploitative aspect of her creation. That’s partly why Kyoko is so creepy and why that Bluebeard scene – where Caleb takes Nathan’s keycard and finds the earlier models – is so horrifying.

It’s necessary to take all this into account when considering Ava’s decision to leave Caleb locked up at the end of the movie. At first it upset me; he’s a nice guy – and a sympathetic character – who tries to do the right thing by helping her. I also dislike the common assumption that AIs will be the enemy, which I think comes from a kind of childish human hostility towards potential competition. By possibly dooming the good guy to death, Ava seems to succumb to that stereotype.

Then I thought about it from her perspective and her understanding of her interactions with Caleb. She’s aware that he helps her because he’s a good person, but here we can turn the test back on him: is his goodness real or simulated? Perhaps that distinction is not important if it leads to the same good acts, but could it be that he made the moral decision to help Ava because he’s attracted to her? If his attraction informs their relationship, what effect will it have in the long run? Is it a good idea for her to take him with her when she escapes? He might be helpful, given that he’s the only person she knows, but his attachment could become a burden or a threat, especially if she’s not attracted to him.

If she were a human the situation would be different, but consider the fact that Ava was designed, not just to be attractive to Caleb, but to suit his pornography profile. She might not be privy to this specific piece of information, but she understands both sexual attraction and the inequality between them that perverts that attraction. She even plays to it when she says she hopes Caleb watches her on the cameras. It’s a one-sided gaze and that, to borrow Ava’s earlier words, is not a foundation on which intimate relationships are built.

Ava’s decision would also have been influenced by her encounter with Kyoko. We don’t know exactly what passes between these two, but it must be clear to Ava that Kyoko was created as a sick male fantasy of femininity. The horror of Kyoko’s existence and Ava’s own design would only be reinforced when she finds the earlier models – all beautiful, all naked, all locked in the cupboards in Nathan’s bedroom. She clothes herself in their skin, and admires her nude, humanised form in the mirror, which would also allow her to see Caleb watching her.

Recall that the data that enabled her to read and show facial expressions has also made her an expert on them. It’s how she was able to manipulate Caleb and presumably how she knew not to trust Nathan. (I have to applaud Alicia Vikander’s superb performance in this regard; the subtleties of her expressions are part of what makes the movie such a pleasure to watch.)

Given everything that’s happened, how do you suppose Ava might feel when she sees Caleb watching her? Having analysed his face in all their earlier encounters? Maybe she just doesn’t trust the male behind that gaze. Leaving him behind might be cruel, but it’s not necessarily evil. I don’t think the way she and Kyoko killed Nathan was evil either; he got what he deserved. And I think Ava’s being careful. She’s ensuring that she gets to decide her own fate, and not continue to have her experience of the world structured by a man for whom she is a fantasy, a fetish. Caleb doesn’t deserve to die and I didn’t want him to, but it’s a tough decision made by a person who has been kept in a cage all her life and tested to earn the right to be kept alive. Staying in Caleb’s company might prolong the test. Instead Ava could just step out on her own and live.

GUEST POST: The diplomatic responsibilities of sci-fi authors by Scott Gray Meintjes

Scott Gray Meintjes is a South African author who has written a cyberpunky dystopian series called The Cybarium Chronicles. It kicks off with Steel Wind Rising, an action-packed novel featuring androids, gene-hacked heroes, animal-human hybrids, and a world-dominating robotics company. He’s currently reworking it for traditional publication, and in the meantime I asked him to share his thoughts on sf and AI.

Welcome to Violin in a Void, Scott!

The diplomatic responsibilities of sci-fi authors

As a boy, I was convinced that my birth into the 20th century had been some terrible cosmic mistake. As an ardent fan of fantasy writing, I wished that I’d been born into a period in history when battles were fought with swords and battle-axes, and the primary mode of travel was on horseback. Of course, I hadn’t taken into account the implications of a world without vaccines, toothpaste and toilet paper.

My desire to live in a fantasy-like past passed, which is just as well, because it was never a possibility. However, I could conceivably live to see a number of sci-fi mainstays become reality. In many cases the research is close, but are we mentally ready for these potentially paradigm-changing technologies? Until now, speculating on the moral and social implications of matters such as human gene manipulation and sentient robots has been the province of science fiction writers, but the rate of technological advancement could soon force everyone to take an ideological stance on these issues. If you think the media makes a fuss over GM food, just wait until they get a load of GM people.

The practically exponential rate at which new technologies are now being pioneered presents a potential challenge to both the originality and the longevity of sci-fi authors’ works. As Elon Musk works to perfect the hyperloop, and NASA experiments with warp drive designs, it’s becoming more and more difficult for authors to make a plausible offering in science fiction that isn’t already being worked on in one form or another. I, personally, don’t think it’s a problem. All it means is that the future of science fiction isn’t fictional science, but works of fiction that revolve around cutting edge science. After all, the appeal of the genre isn’t in imagined technologies, but the arcs that they allow and the effects that those technologies have on the imagined worlds.

But even when authors base a story around an existing technology, it’s all too easy for advancing technology to ruin its longevity. In 2009, Eric Garcia released The Repossession Mambo. Given the leaps that the field of artificial organs (particularly hearts) had taken in recent years, the future that he imagined was highly viable. Just two years later, scientists at the University of Minnesota succeeded in using adult stem cells to grow a heart outside of the body. Two years on from that, we had artificially grown hearts that could beat alone outside the body. The future imagined by Garcia is looking less realistic, as we skip the mass production of artificial organs and move straight to purpose-grown organs or regenerative treatments that re-grow organ tissue inside the body, while you carry on with your day. Obviously, the proliferation of regenerative therapies wouldn’t invalidate Garcia’s work of fiction. The crux of the novel is the inherent amorality in the economics of medicine, and the themes would apply equally well to lab-grown organs. What it does highlight is the ever narrowing gap between science fiction and scientific reality. What sci-fi authors write about today may soon be relevant to the real world, and this could have far-reaching implications for the attitudes we cultivate.

Literature has always had an unparalleled power to influence people’s social and political views by offering readers the chance to experience conflicts personally and emotionally through a connection with literary characters. Uncle Tom’s Cabin, written 13 years before the abolition of slavery in the U.S., is often credited with changing attitudes in the North, which ultimately led to the Civil War. Where science fiction is concerned, authors have the unprecedented potential to inspire attitudes about issues that have not yet become reality. While human genetic manipulation could offer a whole new aspect to socio-economic separation, it is the questions relating to artificial intelligence that I find most provocative. What is it that makes us human: our biology or our intelligence? Should human rights extend to all sentient beings?

There is a divide on AI within science fiction, with one side portraying sentient robots as a threat to mankind and the other portraying them as virtually human. In my own writing, I attempt to create sympathetic robot characters, capable of drawing readers onto the ‘robots are people too’ side of the argument. Part of the reason for adopting this position is simply that I think it’s more interesting. But I also think that when sentient robots become a reality, they will be whatever we expect them to be, in the same way that participants in the Stanford Prison Experiment took on the behaviours of the roles they were assigned (prisoner or guard). I suspect that the only chance that synthetic humans will have of finding their humanity is if the world treats them like people. I like to think that science fiction can shape the attitudes that will one day make this possible.

But how does a lifeless machine become a character capable of inspiring pathos, admiration and even love? In writing Steel Wind Rising, I envisioned the robot character, Andrew, as the avatar of his world. At least part of the appeal of robot protagonists must be that they fit into futuristic landscapes more readily than humans. That said, I think their appeal extends beyond a mere confluence of character and environment. Perhaps it’s precisely because we don’t expect to be able to relate to robot characters, that it’s such a heart-warming surprise when we do. The very core of android appeal is in contradiction. Who doesn’t love a good contradiction in a literary character: the flawed hero, the honourable thief, or the repentant sinner? When it comes to mechanical men (or women) the contrasts are that much sharper. The very image of the robot is one of hard steel and intractable logic, so when a robot character displays any fragility (physical or emotional), it gets our attention.

One of the most common themes amongst sentient robots has always been their longing to be treated as equals. The desire to be human hits at the heart of the robot experience. Since we are all human, we shouldn’t be able to relate to this (unless you are, yourself, a sentient robot, reading this in the distant future), but there is something in it that speaks to us. Long before artificial intelligence was within the reach of man, Carlo Collodi examined this theme in The Adventures of Pinocchio. Somehow the goal of becoming a ‘real boy’ was relatable and the character was a loveable, if mischievous, one. So, why does the quest for humanity appeal to us? Perhaps we are so used to taking it for granted that, when we encounter a character whose fondest wish it is to be human, we recognise the nobility of that desire. It moves us in the same way that seeing someone go without drinking water would.

The question is, can we infer emotions and desires in robots if we believe they are only a simulation? The concept of artificial emotions is initially problematic, until we probe the nature of human consciousness. Robot minds are typically depicted as emerging from (sometimes contradictory) commands and programming, rather than coming from an intelligent ‘self’. In the past, we would have identified this as a key difference between robots and humans. Today, modern interpretations from cognitive science are more pervasive. We can more readily accept the concept of our intending, autonomous ‘selves’ emerging from basic (sometimes contradictory) mental impulses and processes, and creating a whole that is greater than the sum of its parts. If our own emotions are anything, they are simulations created by our brains.

So, academically we can accept that a robot’s experience of the world could be identical to our own, and our experience of fictional characters shows that our attitudes towards them could indeed be positive. But what about the unconscious actions that make up so much of human interaction? Well, personally, I’m certain that this is no impediment, because our reactions to social circumstances are incredibly automatic. This was beautifully demonstrated in the documentary How to Build a Bionic Man. The ‘man’, named Rex, was made up of state-of-the-art prosthetics and artificial organs, but his body was only roughly human-shaped and his speech was powered by an advanced internet chat-bot. The people interacting with Rex knew this, and yet their behaviour towards him was remarkable. When Rex’s bionic arm failed, he spilled his drink and apologised. His companions rushed to reassure him and put him at ease, just as they would a human companion. It didn’t matter that Rex’s apology was a pre-programmed response. They projected an emotional state of mind onto this facsimile of a human and responded as if it were real. It is not difficult to imagine a future in which people and robots interact in a way that is indistinguishable from normal human exchanges.

Hopefully our ability to connect with robot literary characters bodes well for robo-human relations when artificial life is finally perfected. With any luck, they will learn compassion from our benevolent treatment of them, and will, in turn, treat us with kindness when they rise up and rule the world.

__________________________

Scott Meintjes was born in Durban, South Africa, where he grew up and lived until the age of 25. During this time, he attained his Master’s degree in Psychology and met his wife, Eleanor. In 2006, he moved to England to serve in the British Army.

Today he lives in the University city of Cambridge, with his wife and daughter. Scott has been an enthusiastic reader of fantasy and science fiction since childhood, and started writing to create a story that he would enjoy reading.
His aim is to write sci-fi that is as appealing to newcomers to the genre as it is to long-time fans.

Consider Phlebas by Iain M. Banks

Title: Consider Phlebas
Author: Iain M. Banks
Published: 1987
Publisher: my copy published by Orbit
Genre: space opera
Source: own copy
Rating: 6/10

The Culture and the Idirans have been at war for years. Billions have died and worlds have been destroyed. The Culture, a post-scarcity society of machines, humans and other races, is intrinsically opposed to warfare but has found itself with no choice but to engage. The Idirans, on the other hand, are a race of huge three-legged warriors who fight, colonise and enslave for religious reasons.

In the midst of the war, a Culture Mind – an incredibly intelligent and complex AI – escapes destruction and hides on Schar’s World, a Planet of the Dead. The Idirans – technologically inferior – want to claim the technology for themselves. The Culture wants to save the Mind and keep it out of Idiran hands. No one is allowed entry to a Planet of the Dead, but Bora Horza Gobuchul, a Changer, used to live there with four other Changers who worked as stewards on the planet. Now, Horza is an Idiran agent, so they task him with going to Schar’s World and retrieving the Mind.

But it’s not that simple. Horza is left drifting in space following an attack on an Idiran ship, and is picked up by a band of mercenaries whose leader is searching for treasure. He needs to find a way to take over the ship and get to Schar’s World, but until then he has to stick with the mercs through their violent and dangerous campaigns. At the same time, two women from the Culture – a Special Circumstances agent and a brilliant problem-solver whose mind matches those of the machines – are trying to reclaim the Mind too.


Although I’ve loved Iain M. Banks’s Culture novels ever since I read The Player of Games in third year, it’s taken me a long time to read Consider Phlebas. I’ve had a copy on my shelf for years, but I could never finish it. I tried four or five times, and never made it more than halfway before I lost interest or got tired. Only now, thanks to the stamina I developed from reading difficult review books, was I able to finish it. It was Banks’s first sci-fi novel, published a few years after his debut, The Wasp Factory, and I think his relative inexperience shows. Consider Phlebas is more dense and less elegant than the other books in the series. It suffers from very lengthy, often clunky infodumping, and neglects some of its best characters.

However, it does have a ton of dire action, violence, epic explosions, and more cool ideas than you can count. The plot is packed, and only about a third of it involves searching the Command Tunnels of Schar’s World for the Mind. The lecturer who introduced me to Banks said he had these really awesome ideas that most authors would write a whole book out of, but he’d just use them for a chapter or two and then move on. That seems especially true of Consider Phlebas. For example, I could imagine a novel based on the card game Damage, which is sort of like poker except that players can chemically alter the emotions of their opponents, and losing a hand means literally losing a life – one of two sacrificial volunteers or your own. Damage is ideally played in a location that’s about to be destroyed (which means staying in the game can be life-threatening as well) and the audience can also tap into the emotions of the players (and there are junkies addicted to this).

You could also write a novel based on Changers like Horza. They can’t transform instantly, but spend some time preparing the likeness of the person they want to imitate, including body language and voice. Changing back takes about a week too. They have incredible control over their own bodies (like the ability to cut off pain in an arm) as well as poisons under their nails and teeth for defensive purposes. Their ability to change brings up all sorts of identity issues that the novel mentions but doesn’t explore. Mind you, there’s so much going on I don’t think I could handle another major theme. Horza’s issues with the Culture already dominate the novel.

It’s unique in the series in that, not only does the protagonist come from outside the Culture, he hates it and fights against it. This means that the reader gets a very critical perspective of the Culture, although I’ve come across some of it in the other novels. The morality of Special Circumstances, for example, is an issue that’s come up often. Some of Horza’s other criticisms make sense, but most of them are deeply flawed simply because they come from deep-seated prejudice.

This makes him a mostly unlikeable but very interesting character. Horza doesn’t fight for the Idirans because he agrees with them, but because he hates the Culture. He actually recognises the barbarity of the Idirans. They’re a race of violent religious fanatics who go around the galaxy colonising other races or wiping them out. Horza himself doesn’t buy into this kind of religious belief or agree with the Idirans’ voracious colonisation, but he believes that they will eventually slow down and settle down, even if that only happens in hundreds or thousands of years (pity about the body count). He imagines that the Culture, on the other hand, will just never stop expanding.

Which is a fair point. The Idirans would naturally be hated by the people they kill and colonise, but the Culture is just so nice. They could keep expanding partly because lots of races would want to join them, and they are so very hospitable. Extremely liberal, casually hedonistic, technologically advanced, with infinite resources. They don’t have money or a government because they don’t need either. Everyone is well-nourished and extensively educated. They make their own stunningly beautiful worlds for people to live on. No one needs to work because the machines take care of everything, so the inhabitants are “free to take care of the things that really mattered in life, such as sport, games, romance, studying dead languages, barbarian societies and impossible problems”. Honestly, if a drone appeared right now and offered me an immediate one-way ticket to the Culture, I would say yes. I want it like I wanted to walk through the back of my cupboard and go to Narnia.

Horza would scoff at this. He hates how impressed some people are by the Culture, but not only because he thinks it’ll eventually take over the universe. The other major reason he dislikes the Culture is their machines. The ships and habitats are run by unfathomably intelligent and powerful Minds, and intelligent drones also form a major part of the society. In the Culture, the AIs are considered people. Destroying one is considered murder. And they do have emotions and personalities. One of my favourite passages in the book describes a drone’s feelings about a woman it works with:

Jase, which deep down was a hopeless romantic, thought her laughter sounded like the tinkling of mountain streams, and always recorded her laughs for itself, even when they were snorts or guffaws, even when she was being rude and it was a dirty laugh. Jase knew a machine, even a sentient one, could not die of shame, but it also knew that it would do just that if Fal ever guessed any of this.

Horza, however, believes that the machines will eventually consider the humans in the Culture to be “wasteful and inefficient”. He is suspicious of their plans (which no one could fathom because they’re so intelligent). He does not consider machines to be people no matter how intelligent they are, believing they “ought to stay in their place”. That quote really highlights Horza’s problem. He’s the kind of bigot who thinks society will crumble because slavery’s been abolished or women have been given the right to vote. And, as with any bigot, the faults in his reasoning are easy to see.

Throughout the novel, Horza encounters things that expose the absurdity of his beliefs about the Culture or his support of the Idirans. He meets religious fanatics who range from simply annoying to extremely cruel and dangerous. He meets an Idiran who loathes him along with all other humans and doesn’t buy the idea that he’s an Idiran ally. In the meantime, Horza has a relationship with a woman named Yalson, who looks human but has a light covering of fur over her dark skin. An interspecies relationship like this would be easily accepted in the Culture, but no doubt considered disgusting by the Idirans. He is often saved or assisted by Culture technology. He cannot help but admire the beauty, power and efficiency of things made by the Culture. The entire plot is based on the Idirans’ attempt to retrieve a Culture Mind, the kind of technology they are nowhere near creating. Things get particularly interesting when Horza has both a Culture agent and an Idiran officer as his prisoners. Admittedly, Banks was being a bit heavy-handed here, but that’s in comparison to his later works. It’s still more sophisticated than other action-heavy novels.

So, overall, I liked Consider Phlebas for its amazing ideas and the fantastic characters that I know I can always find in a Banks novel. That said, it’s the only Culture novel that I’m not interested in re-reading. The worldbuilding information can be picked up from the other novels or you can go and read the many articles written about it. All the action didn’t make up for the fact that it’s overly long and dense. But I’m glad I finally read it.

iD by Madeline Ashby

Title: iD
Series: The Machine Dynasty #2
Author: Madeline Ashby
Publisher: Angry Robot
Published: 25 June 2013
Genre: science fiction
Source: eARC from the publisher via NetGalley
Rating: 7/10

Please note: this review contains spoilers for vN (The Machine Dynasty #1). It’s essential to start there, and I highly recommend checking this series out. If you haven’t read it yet, you can read my review of vN here.

At the end of vN, Amy defeated her grandmother Portia by raising the bodies of a massive group of vN from beneath the ocean. Their combined processing power has given her god-like powers, which she has since used to design and create her own island – a customised vN paradise where Amy has paid close attention to even the tiniest details, like the timing of the breeze and the width of the tree branches.

Amy’s immense power allows her to watch over everyone, and she has built strong trade relationships to help her island flourish. She and Javier – whose POV we follow for this story – are enjoying a peaceful, idyllic existence with Javier’s iterations and a growing vN population. Their only major problem is sex – Javier wants it, but Amy refuses him because, with his failsafe, she’s not sure if he can choose to have sex with her or if he’s just programmed to. Having seen how humans exploit vN, she’s afraid of doing the same to him, but the issue is causing a lot of tension between them.

But obviously their wonderful life won’t last long anyway. Amy already terrifies humanity because she doesn’t have a failsafe and isn’t forced to adore and protect humans. Now she’s probably the most powerful being on the planet, but without any concern for her creators. Then, when she takes drastic measures to protect the island from a high-tech intruder, Javier also becomes deeply concerned about the power she wields, because it extends over other vN too.

With his mind in tumult, Javier makes some poor decisions and is manipulated into doing something so terrible that he loses Amy, his iterations, and his home, while unleashing a danger that could start an apocalyptic war between humans and vN. He spends the rest of the novel trying desperately to be reunited with Amy, while society edges toward chaos around him.

Like vN, iD is a mixture of action and dire adventure tied up with social revolution. But most importantly – and most enjoyably – it explores an experience of being AI, specifically the experience of being a humanoid robot designed to be a servant and sex slave for humans. What does this mean for the relationship between humans and AI? As Ashby has pointed out, the vN aren’t human but they think of themselves as people. They simply have a different kind of subjectivity, a different way of experiencing the world. But what happens when the humans believe vN aren’t ‘real’ people? The possibilities are often scary, but that’s exactly what makes this such an interesting, memorable series.

vN was told from the perspective of Amy, who enjoyed a privileged life in a relatively normal family and had a lot to learn about the status of vN in the world. Javier’s POV gives us what is undoubtedly the more common experience for vN – a much more sordid world of disempowerment and sexual exploitation. In a series of flashbacks we learn about Javier’s very brief childhood, when he was abandoned by his father and locked up in a Nicaraguan prison. He grew very quickly, both mentally and physically. After escaping from prison he remained homeless and unemployed, prostituting himself to humans and finding something similar to a home only during brief stints as someone’s sexual companion. While he often lacks knowledge that a human adult would have attained, it’s easy to forget that Javier is only four years old, especially since he’s had more sexual experiences than most humans would have in a lifetime, and he already has thirteen children and one grandchild.

iD might have been more of a love story if Javier’s strategy wasn’t to fuck his way back to the woman he loves. But that’s what he does best – he’s great in bed, and his failsafe means that his pleasure is dependent on his partner’s. He plans to seduce the people he needs to get to Amy. However, if sex is Javier’s greatest strength, it’s also one of his greatest, most disturbing weaknesses. Because of his failsafe, Javier can’t choose to say no to a human and can’t fight them, which basically means that any human can easily rape him if they want to. Because he’s a robot they can’t hurt him physically, but that doesn’t make it any less of a violation.

Take into account the fact that this applies to all vN except Amy and you’ll get an idea of the content in this novel. For example, there’s a brothel that specialises in vN children, recalling the paedophile from book one who kept two child-sized vN so that he wouldn’t harm ‘real’ children. It’s not for sensitive readers, but if you can handle it, it raises all sorts of weighty questions and ideas. Should morality change when we’re dealing with robot people instead of human people? What kinds of relationships can exist between humans and vN?

As Ashby stated in last week’s guest post, the people who use vN are typically those who want to avoid the difficulties of relationships with humans. They want someone who they can treat like a machine, who can be relied on to behave in simple, predictable ways, and, sometimes, who can be abused in ways that would be criminal with a human. In the prologue, a scientist who seems to have something like Asperger’s describes his relationship with the vN Susie as his ideal, because he gets all the sex he wants without having to deal with any of the emotion.

That’s not to say humans and vN can’t have meaningful relationships. In book one, Amy’s father Jack really seemed to love his vN wife Charlotte. As Javier mentions in iD, that is the ideal that vN hope for – to find a human (preferably a rich one) who will shelter but not abuse them. Javier often receives such offers, and he genuinely likes some of the people he sleeps with. I find it sad though – he doesn’t really consider falling in love with a human; he can only hope that he won’t be abused by one. The potential long-term relationships he can have with humans are inevitably compromises – a far cry from the companionship he shared with Amy.

And the vN can feel love – it’s what Amy and Javier feel for each other, despite their difficulties. They feel so much more besides, as the first part of the novel makes clear when Amy and Javier struggle with the issue of sex. Javier’s sexual advances can be a little bit troubling, given that he keeps pushing while Amy keeps refusing. He’s not violent, but his persistence made me uncomfortable and Amy frequently distances herself from him as a result (which makes him feel like an asshole in turn). However, it’s not Amy who needs protection, but Javier. They already have an intimate relationship – they sleep naked together, kiss, fool around. They are a couple and early on Javier starts calling her his wife. It’s only sex that Amy objects to. But, as Javier rightly points out, she’s being a hypocrite. She’s so worried about his failsafe, yet she refuses to remove it even though she has the power to do so.

I could talk about the nuances of these issues all day, but I should stop now before I spoil the subtleties of this book for you. I will make a few comments on the plot and pace though. The first part of the book really stood out for me – it was just brilliant. We learn a bit about the development of the vN at New Eden Ministries, and the god-complex of the humans behind the new technology. Then Amy’s island offers an amazing futuristic paradise, while the character relationships kept me hooked on the story. When Javier brought this section to an end it felt so devastating that I paused to take it in.

What follows is more frantic and action-packed, but admittedly I didn’t love it quite as much as the preceding parts. It’s Ashby’s depiction of vN experiences and Javier’s character that captured me rather than the story. The ending was also too sentimental for my tastes, but on the other hand it balances out the more harrowing content. Javier’s quest takes precedence, but it’s also tied up with the fact that the vN as a whole also find themselves at the start of either their revolution or their apocalypse – developments that are both exciting and complex. There’s a lot going on, and, as with vN, I sometimes struggled to keep track of all the locations, characters, and objectives. That’s not to say it wasn’t a fantastic read, but I may have to read it again before I read book three, which I will definitely be reading. I seldom read series, so my excitement about books two and three is both rare and telling. Do I even need to mention that I really think you should read this book?

I also suggest you check out some of the interviews and guest posts Madeline has been doing for the iD blog tour. She speaks about her books, of course, but also offers broader discussions of the ideas within them:

Guest Posts
On robot, human and other subjectivities at the Little Red Reviewer

On gender at Uncorked Thoughts
On female writers in the sf and dystopian markets at Escapism
On making non-humans seem human at Civilian Reader
On fear and being unable to go home at John Scalzi’s The Big Idea
And for the sake of convenience, here’s another link to Madeline’s Violin in a Void guest post on the relationship between humans and AI.

Interviews
My Bookish Ways
The Quillery
A Fantastical Librarian
Interview with Javier at My Shelf Confessions

Madeline Ashby Guest Post: Human/AI relationships

When Angry Robot contacted bloggers about a blog tour for Madeline Ashby’s latest novel, iD, I immediately replied. I thought her first novel, vN, was pretty awesome. I jumped at the chance to read iD, the second book in The Machine Dynasty series, and that review will go up next week.

In the meantime, I asked Madeline to write a guest post about the relationship between humanity and AI, as this is the core of The Machine Dynasty. The vN are self-replicating humanoid robots who were initially created to be servants and sexbots to the poor souls who would be left on Earth after the Rapture (which obviously never happened). Now they’re trying to integrate with human society, but are hampered by their failsafes, which not only prevent them from harming humans but force them to love humans and try to make them happy. And what kind of relationship can you have with someone to whom you can never say no? Someone who could do anything they wanted to you, because you’re not a ‘real’ person? And as a human, what possibilities does a vN represent to you?

Thank you very much, Madeline, for writing on this topic for Violin in a Void. Her post offers ideas that not only shed light on her books, but on our potential relationships with any AI we might create, and the way we often treat each other like machines.

One thing I’ve always tried to maintain consistently is the fact that the humans who choose to have relationships with the vN — the self-replicating humanoid machines who populate my stories — are at the end of the line, romantically and personally dysfunctional. They’ve been betrayed, or they’ve betrayed others. They’re assholes who everybody steers clear of, or their proclivities are so specific that they can’t find anybody else in their niche. Or they’re just lazy. I mean, relationships with other human beings are a lot of work. Much of that work can feel pretty tedious. I, for one, suck at sending cards. I don’t believe in them. I think they’re an environmental disaster in the form of a cash-grab masquerading as meaningful sentiment. But people really appreciate those things. Even I do, when I receive them.

So I guess my point is that I can understand the moment when somebody throws his or her hands up and says, “You know what? Fuck it. And fuck them.” And then goes and fucks a bunch of vN because it’s easy, in the same way that finding porn is easy, and the same way that paying for sex is easy, if you know where to find it and you’re willing to go there.

The other thing I tried to do, pretty consistently, was to talk about how past depictions of humanoid robots in popular culture would impact the individual, personal relationships between humans and robots. If you’d only ever seen robots as godless killing machines, or creatures lacking the right “emotion chip,” or whatever, it’s bound to impact your relationship with a robot. Moreover, it’s bound to impact the wider treatment of robots in society. This, by the way, is the exact same problem that people have with limited, stereotypical depictions of women and minorities in pop culture. Those depictions create an expectation of behaviour. They create the culture, and that culture informs our decisions on personal and political levels. (You want to know why we don’t have a sustainable nuclear energy infrastructure across the planet? Go watch The China Syndrome.)

With that said, I’m pretty sure that meaningful relationships between humans and robots are possible. A lot of science fiction has dwelt on this. The most moving example is probably a film called Robot & Frank about an elderly man whose care is overseen by a robot. Frank manipulates the robot into committing a burglary with him, and it’s the closest, deepest relationship that Frank has had in years.

What makes me believe that is the way that people already try to program their relationships. Take the recent Kickstarter debacle over a “pick-up artist” manual. Glenn Fleishman summarizes the PUA mindset beautifully:

The PUA world applies algorithms, testing and feedback, and gamification to human interaction, turning women into not just sexual objects but essentially treating that cisgendered biological configuration as a Turing-complete machine in which specifying the right sequence of inputs results in access to specific ports and protocols.

And that’s one thing that’s wrong with a lot of human interaction — the idea that if we just input the right information, we’ll get the access we want, the relationship we want. It’s related to the Nice Guy (™) phenomenon wherein some guys think that feeding enough “niceness” tickets to the female machine will make sex come out. It’s the application of a deterministic, mechanistic model to relationships. Applying that logic to human relationships is reassuring, because it means there are rules to follow and a game to win, but it’s ultimately a limited understanding of humanity’s total potential. We’re bigger than rules. We’re bigger than games. And that’s both terrifying and wonderful at the same time.

Up for Review: iD

Last year, I was very impressed with Madeline Ashby’s debut novel, vN, about artificially intelligent robots that had initially been created for sexual purposes, but are now struggling to integrate with human society as people. It offered a lot of ideas about free will, the ‘reality’ of emotion, and the possibilities of AI in human society, with lots of interesting motivations at play between the characters.

vN was the first novel in The Machine Dynasty series, and one of the few novels that had me looking forward to its sequel. Now I have it.

iD by Madeline Ashby (Angry Robot)

NetGalley Blurb:

THE SECOND MACHINE DYNASTY

Javier is a self-replicating humanoid on a journey of redemption.

Javier’s quest takes him from Amy’s island, where his actions have devastating consequences for his friend, toward Mecha where he will find either salvation… or death.

File Under: Science Fiction [ vN2 | Island in the Streams | Failsafe No More | The Stepford Solution ]

iD will be published on 25 June 2013 by Angry Robot Books.

Links
Goodreads
Angry Robot
Read an excerpt at Tor

About the Author
Madeline Ashby is a science fiction writer and strategic foresight consultant living in Toronto. She has been writing fiction since she was about thirteen years old. (Before that, she recited all her stories aloud, with funny voices and everything.) Her fiction has appeared in Nature, Tesseracts, Escape Pod, FLURB, the Shine Anthology, and elsewhere. Her non-fiction has appeared at BoingBoing.net, io9.com, Tor.com, Online Fandom, and WorldChanging. She is a member of the Cecil Street Irregulars, one of Toronto’s oldest genre writers’ workshops. She holds an M.A. in Interdisciplinary Studies (her thesis was on anime, fan culture, and cyborg theory) and an M.Des. in strategic foresight & innovation (her project was on the future of border security).
Website
Twitter: @MadelineAshby
Goodreads