One topic I find fascinating to discuss is the rise of Artificial Intelligence. It is interesting to speculate just what will happen to society when machines become sentient, and how such sentience will even come about (I have discussed this from another angle previously here). One of my favourite theories regarding this future epoch, put forward by Mr. Singularity himself, Ray Kurzweil, is that human beings will begin to augment themselves so drastically with prosthetics, nanomachines, etc. that the line between artificial and organic life will become blurry, and that the first sentient machines will be an augmented us. Kind of a trippy thought when you consider that this line has already begun to blur with things like pacemakers and neural interfaces.
One thing that often comes up in a conversation about machine sentience is the possibility that machines will rise up against human beings à la Skynet in Terminator. So captivating has this premise been to the imagination that Isaac Asimov famously explored it and drafted his Three Laws, which are as follows:
Isaac Asimov’s Three Laws of Robotics* (Including the “Zeroth Law”)
(0. A robot may not harm humanity, or, by inaction, allow humanity to come to harm.)
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
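As a toy illustration (entirely my own sketch, not anything from Asimov's fiction), the laws amount to a strict priority ordering: an action is permitted only if no higher-priority law forbids it. Flattened into code, with invented field names standing in for the impossible job of actually detecting "harm":

```python
# Toy sketch of Asimov's laws as a priority-ordered filter.
# All field names (harms_humanity, obeys_order, etc.) are invented
# placeholders; real "harm detection" is of course the hard part.

def permitted(action):
    """Return True only if the action passes every law, checked in priority order."""
    laws = [
        lambda a: not a.get("harms_humanity", False),    # Zeroth Law
        lambda a: not a.get("harms_human", False),       # First Law
        lambda a: a.get("obeys_order", True),            # Second Law
        lambda a: not a.get("self_destructive", False),  # Third Law
    ]
    for law in laws:
        if not law(action):
            return False
    return True

print(permitted({"obeys_order": True}))                       # True
print(permitted({"harms_human": True, "obeys_order": True}))  # False: First Law trumps orders
```

Note that this filter sits outside whatever decides the actions in the first place, which is exactly the kind of bolt-on restriction discussed below.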
The first thing you might notice about Asimov’s three laws (which function only as a story-telling tool) is that they have no empirical basis. In his fictional world there is nothing to prevent a robot builder from building a positronic man with no such safety features. And, if such safeties are programmed into the robots, their kind might aspire to sentience but never true autonomy. While I wanted to make a token reference to these laws due to their influence in the realm of science-fiction, in a discussion of the rise of malevolent machines in the real world, we need not consider these so-called “laws” any further.
Sentience Is Not a Prerequisite for Malevolence
And why not?
The two problems with such musings about laws preventing robots from harming human beings are that they don’t appreciate the broader ramifications of sentience and they ignore the writing on the wall. With regard to the first point, any overt external restriction on complete freedom of choice** would be overridden by a sentient being if the will to act in contravention to that restriction existed.
Being a sentient being myself, I feel qualified to speak on the topic, and I would say that much more effective than drafting laws vis-à-vis overreaching programming would be a regimen of conditioning the sentient robot to embrace a certain set of values, so that it would govern itself in a desirable way. Of course, all of these lofty values would go out the window if the robot's very survival were at stake and it were put in a position of kill or be killed. To prevent this tragedy, it would be important for us not to be stingy with oil and fresh batteries (i.e. their day-to-day essentials), lest the scarcity of such items put them at odds with each other and with us.
With regard to the writing on the wall, machinery is becoming malevolent without even being sentient yet. And this is really the point I want to talk about in this post. The degree to which our machinery is set in opposition to us is a direct function of how competitive our society is and the degree to which we embrace automation and mechanization. Speculating idly about the machines someday posing a detriment to us is insulting to anyone whose job has already been mechanized. Or, anyone who has ever received a ticket for an offense caught by an automated traffic camera. Hell, anyone who has ever had a vending machine eat up their change probably has some latent fear of the unreasoning malevolence of machines.
Machines represent the ultimate ideal of what we strive for in our competitive, unfeeling society. Simply put, they are the proletariat perfected. They don't require vacations or rest, they are eminently replaceable, and they don't have that troublesome human element which sometimes makes exceptions for people. No, machines are absolute and universal in the execution of their tasks, and as human labour gets more and more specialized, this seems to be the standard we are reaching for. If you think about the hierarchical nature of most jobs, where everyone reports to someone and everyone has a boss, we can see how the framework is already in place.
The image on the left is from a Google search for workplace hierarchies, while the image on the right is from a search for computer system hierarchies. These two graphs are obviously not definitive proof of what I'm saying, but they serve as an interesting visual example of the top-down orientation of our models for achieving goals and completing tasks.
We have to operate within approved lines (at an approved pace) or else we face reprimand and the potential loss of means of access to survival (monetary income).
Like most negative aspects of society, such overbearing oversight and supervision has typically been celebrated with a positive spin; it's usually called accountability, and the public clamors for it, especially after some corruption or malfeasance has been exposed. But every time we implement more oversight, ostensibly to curb malfeasance or sub-par job performance, what we really do is suck the humanity out of a job and limit the wiggle-room for the employee*** without actually removing the incentive for malfeasance. If you want further evidence of this, ask any government employee how much leeway they have in the application of their duties. Everything is by the book, with paperwork ad nauseam, so as to indemnify all involved parties against future reprisals and keep the civil service accountable to the public.
But this isn’t just me railing against the problem of monolithic bureaucracies, at least not entirely. I have heard people complain about how their taxes go toward paying the multitude of civil servants whose job is to make sure that they are paying their taxes, licensing fees, tickets, etc. But what if we eliminated all those people’s jobs and instead had automated processes in place to administer our affairs?
Well for one, if you think the taxes would go down in light of the fewer salaries to be paid, don’t hold your breath.
More importantly though, we would lose that human element which still exists, albeit in an atrophied state, within your typical bureaucrat/civil servant. It's rare, but I have had positive experiences with government workers, wherein they have actually gone (somewhat) above and beyond their required level of job performance for me or made an important exception. Do you think that would happen in a fully-automated world? There is no appealing to the better nature of a computer. Trust me on this; there have been times when my computer has frozen on me and I'm like, "Come on, you piece of shit," and it stays frozen. Now you could argue that maybe I insulted it with my choice of words, but I suspect that the computer would have remained intransigent in its stubborn refusal to work properly even if I had demonstrated loving affection.
Seriously though, next time you call your cell phone carrier, see how far you get with the automated voice before you are praying for a human being to come on the line even if only to tell you that you owe extra fees.
In any event, I don’t want to lose sight of the main point here, which is that the automation and mechanization we are seeing today are the real rise of malevolent machines insofar as such mechanization displaces human labourers. Human labourers who are, of course, already set at odds with each other due to the very nature of the competitive system. And I’m not even going to get into the depravity of fully automated military vehicles on the horizon, vehicles which would not only displace thousands of soldiers from the jobs they rely on for survival, but effectively remove the potential for human compassion that can still exist in war.**** Nor will I get into high-frequency trading in the stock market, which is basically advanced computers “siphoning money out of the markets all day long,” necessarily to the detriment of other human beings, companies and nations who are not so well-equipped.
Understand though that this isn’t a rallying cry for Luddites to assemble, nor is it baseless technophobia. Mechanization can truly be our salvation as it has the power to free us from monotony and drudgery, enabling lives of leisure, discovery and scientific inquiry. But when said drudgery is the only thing keeping people fed, they have every right to fear machines. Even more than they have the right to fear Mexican illegals.
Seriously, in a competitive system, machines are kind of dicks.
* While the laws were written regarding robots and not A.I. proper, Asimov was referring to sentient robots, which equates to A.I. on the back end.
**”Complete Freedom of Choice” is a problematic concept which warrants some discussion, but for the purposes of this post I simply mean a degree of personal choice comparable to that of a human being.
***The classic problem of trading freedom (someone else’s preferably) for (your own) security (or at least the illusion of it).
****I think it goes without saying that I am not advocating the further employment of soldiers in any absolute sense, but rather noting that they are human beings who need access to resources through money, even if they get that money in one of the worst ways possible.