Text & Image: Sadu Saks
Will machines one day evolve beyond human programming?
The idea of sentient, intelligent technology is nothing new. In fact, it has populated an entire genre of sci-fi movies and books, with cautionary tales of apocalypses and revolutions.
Today, the prophecy is becoming reality. Media and technology have permeated every aspect of our social, political and economic lives. Machines are capable of replicating human speech, thought and action (to a certain degree, thankfully), and it is becoming clearer and clearer that the way they are being used poses a huge threat to human labor. The most important question lies beneath the agenda of the capitalist machine and of technology under that system.
Who is sponsoring these feats of innovation and why?
The initial dream for machines was that they would set us free from labor, from all our mundane roles as cogs, whether in the household or in the factory. Technology would reduce our weekly hours and allow for a less laborious lifestyle. And it began with dishwashers and housewives! Sadly, tech under capitalism aims at increasing productivity and profit rather than human happiness or fulfillment. To capitalists, the machine is a reliable worker who never complains and never takes sick days. Instead of reaching a hybrid work system between human and machine which would benefit both, the current capitalist system seeks to render the human worker a completely outdated asset to the production system. Yet the fear the elite mass-produce is another one: sentience. While we are still far from reaching an artificial consciousness, the amassed sentiment from movies and stories, combined with increasingly humanoid robots, gives ground to such paranoia. Is this meant to distract us? Or does it hold some truth?
What interests me is:
Would robot self-awareness necessarily lead to destructive outcomes?
What do we feel towards or for inanimate machines and why?
A prevalent feeling towards machines is empathy. We feel bad for them. We don’t like to see them kicked, abandoned, forgotten. Is it because we're scared of what they could do if they were capable of feeling hurt? Or is it because we simply project our feelings onto objects? Do we feel guilty? What if a machine, with its dreadful potential for destruction, instead chose peace?
What I wish to explore through this short story are the two contrasting representations of AI that seem the most common. One is the machine-of-war cyborg, the final stage of human capitalist invention. It is often built to gain power over other groups, whether countries, planets or aliens, and it ends up as an entity of destruction, incapable of being stopped or reasoned with. Apocalyptic.
The other is a reflection of male fantasy and gender oppression: the feminine humanoid. This one is often infantilized and given the ‘opportunity’ to learn love, affection and kindness. The ideal woman, the dream of many incels. But both robots, in my opinion, are nothing more and nothing less than children of a different birth, children socialized and taught to behave within the confines of their roles. If the war robot is a child given access to unlimited knowledge and supernatural power, then any of us humans would be just as confused and blinded by rage as ‘IT’ is. And the humanoid, socialized into submission and caretaking: if ‘IT’ went through the process of self-awareness, wouldn’t it end up with an even more shaped and well-founded rage?
If humanity brought both of these entities to life, but then ended up disappearing, how would they cope with life, their birth, their lone existence on an abandoned planet?