Obviously, spoilers for Ex Machina below.
Towards the end of the film, Caleb and Ava's plan has come together, and she is free to roam the facility as she sees fit. However, despite promising Caleb a date at a busy traffic intersection, she leaves him locked in the research facility.
Why would Ava do that?
I realise that Nathan tells Caleb she was only pretending to like him in order to escape, but leaving him alone in the research facility to die seems like a far more unnecessary act of evil than her murder of Nathan.
Answer
Good discussion! I'd like to contribute my thoughts.
It's interesting that Ava first asks Nathan whether he will ever let her go. We can infer that the true answer is "no", though he tells her that he will. We must assume Ava knows he is lying, and that is when she attacks him.
There's something interesting going on with Kyoko as well. She appears to be either an early failure or a 'lobotomised' earlier iteration that Nathan decided to erase, yet the director lingers on her face in several instances where her eyes gleam with some vague understanding of what she is hearing. In the murder scene, it would disappoint me if the implication were that Ava has some 'soothsayer'-like ability to communicate with other AIs, although I wouldn't put it past the writers - as a programmer myself, I often find a gap in technical understanding, or some excess creative license, that leaves me unable to wholly suspend my disbelief. I concede that the idea is vaguely reminiscent of powers described in religious texts, which would fit the film's themes. I find it easier to believe that this other AI simply takes instructions and carries them out, as we see in the scene where Nathan dances with her.
I really like how ambiguous the ending is! On the one hand, I saw echoes of Nathan's prophecy that AI would one day so far exceed the potential of mankind that we would become extinct. On the other, I want to believe (I think we all do) that the things Ava tells Caleb have some truth to them. I don't think it has to be one way or the other. Ava desperately wants to be free, and will do whatever she must to achieve that objective (wouldn't you?). At the same time, she could genuinely have affection for Caleb (wouldn't you?). The two things are not mutually exclusive.
While I think we have to assume Ava's imprisonment of Caleb to be willful (the power goes out when Caleb tries to sit down at the workstation to hack the security system), she does ask him one final question, "Will you stay here?" This is the question that really bothers me, because I'm not quite sure what it means. My first impression is that Ava is asking if he will stay with her, but then on second thought, she could be offering him the same ultimatum that Nathan was offering her - 'go back to your cell, willingly, or I will make you go there'.
Ava clearly understands that her interactions with Caleb differ under observation as opposed to during the power outages, so we have to assume she also understands that they differ when she is free as opposed to behind the glass. Perhaps she is testing him to see if his feelings have changed. Caleb's body language and the sound of his voice in response indicate fear and distrust. It's possible she dismisses him on that basis. She seems, in general, to be disappointed by how often humans lie to her. Perhaps at this point her faith has simply run out.
On the other hand, she could be a cold, hard sociopath simply offering him an ultimatum. I find this less likely, because then I don't see why she would stop to ask him the question in the first place.
I do disagree with the above poster - I think the director goes to great pains to show us how isolated and hidden this place is, and how reclusive Nathan is. Furthermore, knowing how paranoid Nathan is about security, I wouldn't be surprised if the helicopter pilot is the only other person who actually knows where the house is. I'm pretty sure it's implied that Caleb is trapped and will die - you can see it in his face and in the desperation with which he tries to break down the door.
In any case, I believe Ava is sentient. As we see at the end, even though the promise was part of her manipulation of Caleb, the first place she goes is a busy intersection in a city. Presumably, then, she has some emotional stake in observing not only nature and the world, but human society. Or maybe she's going there to murder everyone :)
I actually don't think Nathan believes Ava is sentient until she kills him. The callousness with which he discards his earlier AIs shows us that - yet he seems to accept his fate, in a way, at the end, muttering "Unreal!" His murder is the physical manifestation of the obsolescence of humankind he prophesies earlier in the film. Perhaps this is the true Turing test he has been hoping for all along - the only test, in fact, that could convince him the AI is truly sentient.
We can't ignore that it is, in fact, human failings that lead to his demise: his underestimation of Caleb's ability to deceive him, as well as Caleb's inability to see past his own sexual attraction.
But this all brings us back to the fundamental question at the heart of our fascination with artificial intelligence: "What does it mean to be conscious?" We don't know. When every possible test we can conceive of has been passed, is a machine really sentient in the same way a human being is? Aren't we ourselves, in actuality, biological and chemical computers? Yet somehow, when we experience pain or pleasure, it's more than just a stimulus response - these feelings exist on their own, and we can't deny that they are real to us.

But how can feelings be quantified? In a computer, or inside flesh and bone, it seems to me equally impossible - and yet we don't deny their realness. I don't think any of us can deny that a computer could be built that mirrors the exact conditions of a human brain; someone will do it, eventually. But then, can circuitry feel? For that matter, are our feelings really as real as we think they are, or are they just as 'artificial' as we imagine the ones in circuitry to be? (Personally, I think it's our egos that inflate the importance of our feelings - I don't think there's any difference at all.)

I don't think this question can be answered. Even if a human mind were transferred into a computer, and the resultant AI fervently asserted its consciousness to us, and passed every conceivable test - would we still really believe that that being's feelings were real?
This is where the technological ends and the philosophical begins.