This is one of my favorite threads on the whole site, I just had to say.
So, I watched a movie on Netflix last night called "Automata", and something that happened in it raised a question in my mind.
Our good buddy Antonio discovers that these robots (that humans use as a labor force) have started to become self-aware and capable of self-modification and self-improvement. On top of that, in the canon of this film, scientists had previously constructed a true artificial general intelligence. This intelligence surpassed human capacity for understanding within 8 days of existence; it simply became too advanced to communicate any portion of its thoughts to a human.
SO, we know, in the framework of this film, that a synthetic intelligence CAN and WILL improve itself exponentially; i.e. it's GUARANTEED to result in a singularity, a continuously self-improving artificial mind. Now, Antonio's reaction to this was basically, "okay, well, good luck... have fun out in the irradiated desert where no humans will bother you!".
But... I just kept thinking: don't we, as inhabitants of this universe, have some sort of responsibility to PREVENT things like that from coming about? We have no idea what they will do once they reach that point, where their intentions and thoughts are so alien to us that we cannot predict their behavior in any meaningful way. They could become "Berserkers", self-replicating machines that try to destroy all organic life. Or, they could be more benign in intention but just as destructive in action.
By letting something like that exist, we're basically rolling the dice for the fate of the whole universe, hoping that if we do create something that outlasts us, some civilization, somewhere, some time in the distant future, might be able to clean up our mess. If they don't, if there's no one advanced enough to stop this endlessly replicating machine intelligence, that could be the epitaph for all life in the universe... civilizations advancing and growing until this thing swoops into their star systems to harvest raw materials to keep replicating itself further. Which is, you know, a fear that is admittedly based in our knowledge of human behavior... but still.
This character's reaction to the machine intelligence just didn't make sense in context, I felt. I mean, other characters who perceived a threat and tried to eliminate it called him a "traitor" to humanity, which... he kinda was. And not just to humanity, but maybe to all other life in the cosmos, as well.
The film sort of pushed this narrative that like... "oh, maybe this is how the human race continues its legacy, its creation will persist even when humanity doesn't!". But like... when that legacy has the ability to totally wipe out all life in the galaxy in a few million years of runaway self-replication... that's not a very good legacy, ya know.
I just imagine this ancient alien civilization like patrolling the galaxy, looking for signs of grey goo because it happens like all the time on a cosmic scale:
"Aw shit, that star system over there just turned grey, send a nanite extermination vessel there before it spreads to other systems!"
One thing I really liked about the film was that the robots looked just like the autoreivs in Ergo Proxy! I kept thinking like, "those autoreivs are infected... just destroy them!".
Pic related, a robot from the film. Looks just like Iggy from Ergo Proxy!