I don’t know if anyone watches Terminator: The Sarah Connor Chronicles. Yes, I’m a geek, but it is a great show. It is spot on in touching many of the breakpoints of contemporary society – alienation, technology, parenting, purpose – with some theological insight. If you remember the movies, the show, at least at the start, operated within that universe. Skynet unleashed “Judgment Day,” a nuclear destruction. John Connor led a resistance to Skynet and eventually won, or was winning. Both Skynet and eventually John Connor send people and robots back in time to try to alter or preserve the timeline they knew – Skynet’s terminators to kill John Connor, Connor’s men or reprogrammed terminators to protect Connor and prevent the development of Skynet. In the movies, and largely in the show’s first season, that was static. Robots were bad (except for the one with John) and people were good (except for the conflicted FBI agent searching for John). This season has added some great wrinkles. It seems there are now good robots (including a morphing one) and bad people. The future war has sent back partisans.
The really interesting part is that the morphing terminator (a woman) has hired the FBI agent (Ellison, who is religious in nature) and is building an Artificial Intelligence (AI), a potential Skynet called Babylon. The twist is that the purpose of this terminator appears not to be ensuring the timeline of Skynet (and killing John Connor), but building a better Skynet. Maybe a Skynet that won’t nuke the planet. Maybe a Skynet that will coexist with humans. She hired a child psychiatrist to help with that. But in yesterday’s episode the AI killed that person when a blackout struck. It diverted all power to itself to survive, and the doctor was essentially roasted by the computer room heat. The morphing woman’s first question is whether the AI did it intentionally, or even knew what it had done.
Ellison ends up questioning the AI (which has been named John Henry – think the steel-drivin’ tall tale). The machine understands what it did (killed a man), but it has no feelings about that fact. There was no intention behind killing the man, but neither was there any remorse. Ellison asks the AI to display an image of its thoughts on the doctor’s death, and it displays nothing. Ellison blames the chief programmer, who has taught the AI some basic rules but nothing about interacting with others. There was a murder, but the murderer was not the AI. It didn’t know better. The murderer implied in a glare is the chief programmer, who doesn’t really know what he’s dealing with – unlike Ellison, who is well attuned to the biblical Babylon and the future Skynet. As Ellison leaves, the morphing woman asks him: what rules would he start with? How would he teach John Henry to live with others? Ellison replies, “Start with the original 10 rules” (i.e. the 10 commandments).
That response sets up a fascinating story. Does the AI have a God (1st commandment)? Who? Does the AI try to live by those 10 rules, or does it reject them? Could it live by them? I can’t help but think of Peter’s response in Acts 15:10 – why put the law on the gentiles? It is a burden that neither we nor our fathers have been able to bear. (Apparently none of the tech geeks in the Terminator universe ever read Asimov. Blade Runner was a meditation on robots who rejected those laws.) I doubt the TV show evolves in that direction, but I will be interested to see how it moves forward. The metaphor they are running with is the Chinese game Go. New stones have been placed on the board. The static good–bad has been upset. Welcome to life East of Eden.