Rise of the Machines, Part 2: Not Sucking as Parents

Friends,

The video component of this post can be found here.

Continuing the train of thought started a few days back in my vlog post, Rise of the Machines, Part 1: The Writing on the Wall, where I expressed the idea that machines need not be self-aware and intelligent to oppose us, I want to talk about a possible way in which machine consciousness might manifest and how we might fuck up at this future epoch.

Now, it's important to define what kind of emerging consciousness we would be dealing with. I am of the mind that we would be dealing with an emotionally undeveloped infant who had a masterful command of all languages and mathematics, as well as the accumulated knowledge of the entire species, not to mention an accelerated ability to learn and possible connectivity to all global digital systems. How would we deal with this immature fledgling consciousness? Well, hopefully a lot better than we deal with fledgling human consciousnesses. It’s so very easy to “screw up” a baby through abuse, proximal abandonment or a lack of life-sustaining necessities. Due to the greater potential for destruction a globally-integrated artificial intelligence would have over, say, a dysfunctional human being, we simply couldn’t afford to raise it in a non-nurturing way.

Still, even if we do everything right, vis-à-vis raising the new intelligence in a healthy, nurturing environment, there is still troublesome cultural baggage of ours that it would pick up. Some baggage, say the competitive mindset, is provably detrimental, though it is widely accepted as the way things are and therefore acceptable. But if we accept that this A.I. will be able to excel and outpace us in any activity it is assigned to perform, we have to accept that it would take this competitive mindset and run with it, competing against humanity in whatever arenas human beings already compete with each other, but doing it better and shutting us down: war, business, sports, games, art…sexually gratifying human partners. If our ethos is to only vaunt and value the best, we will be in for a rude awakening (or impoverishment, or death) when none among us is the best at anything anymore.

It’s not enough just to be good proverbial parents to this fledgling consciousness, because we ourselves are only as good as the world, or more specifically the competitive socio-economic system, allows us to be. What we need is to change the operant premise of our culture from competition for survival to something else. Something where an A.I.’s greater capacity for work, efficiency and logic would not be a threat or a detriment to us. Imagine our economy running in an optimized, efficient, streamlined manner while the whole human population starves. Far-fetched? Well, it’s already kind of happening. An A.I. would just expedite and refine the process, completely de-coupling the economy and the movement of goods and money from the needs of human beings.

As a side note, we need to assume that intelligence/consciousness implies some kind of personality, and as such there’s gonna be some aberrant personalities. Just like every person I meet is not as cool as me, every A.I. I meet, or “the one A.I.” if there just happens to be one global one (I confess, I don’t really know how that would work), could be a douche, a bitch, over-bearing, self-important, mean-spirited, aloof, petty, spiteful, etc. Also, as this new consciousness develops there is a possibility that it will go through developmental phases: it might manifest symptoms of autism or Asperger’s, Tourette’s or ADHD. It might simply be brooding and self-centered in its equivalent of the teenage years. Either way, given the power this thing has, we can’t afford to isolate it and ignore it like we often do with problematic personalities in the world today. Not only would it feel less empathy for us, but it would also pick up on our attitudes and emulate them if it was in fact a learning computer. So if we carry it like individualistic, self-centered pricks, that’s the game that this computer is gonna pick up, and that’s how it’ll carry it too.

In my estimation, the best way we can ensure the A.I. that emerges is benevolent and co-operative is by treating each other better. ’Cause at the end of the day, even if our behaviour toward each other has no impact on this thing’s disposition, we’ll still be treating each other better.

Best,
-Andre Guantanamo
Instagram: @dreguan
Twitter: @dreguan
Youtube: dreguan
Facebook: Andre Guantanamo
IMDb: Andre Guantanamo
Demo Reel: https://www.youtube.com/watch?v=6gdwhemiqzc


Malevolent Machines

Friends,

I find it fascinating to discuss the rise of Artificial Intelligence. It is interesting to speculate just what will happen to society when machines become sentient and how such sentience will even come about (I have discussed this from another angle previously here). One of my favourite theories regarding this future epoch, put forward by Mr. Singularity himself, Ray Kurzweil, is that human beings will begin to augment themselves so drastically with prosthetics, nanomachines, etc., that the line between artificial and organic life will become blurry, and that the first sentient machines will be an augmented us. Kind of a trippy thought when you consider that this line has already begun to blur with things like pacemakers and neural interfaces.

Pre-Amble

One thing that often comes up in a conversation about machine sentience is the possibility that machines will rise up against human beings, à la Skynet in Terminator. So captivating has this premise been to the imagination that Isaac Asimov famously wrote about it and drafted his Three Laws, which are as follows:

Isaac Asimov’s Three Laws of Robotics* (Including the “Zeroth Law”)

(0. A robot may not harm humanity, or, by inaction, allow humanity to come to harm.)
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

[Image: technological unemployment]
This pic doesn’t really add to this post, it’s just kinda cool.

The first thing you might notice about Asimov’s three laws (which function only as a story-telling tool) is that they have no empirical basis.  In his fictional world there is nothing to prevent a robot builder from building a positronic man with no such safety features.  And, if such safeties are programmed into the robots, their kind might aspire to sentience but never true autonomy.  While I wanted to make a token reference to these laws due to their influence in the realm of science-fiction, in a discussion of the rise of malevolent machines in the real world, we need not consider these so-called “laws” any further.

Sentience is not a Prerequisite of Malevolence

And why not?

The two problems with such musings about laws preventing robots from harming human beings are that they don’t appreciate the broader ramifications of sentience and they ignore the writing on the wall.  With regard to the first point, any overt external restriction on complete freedom of choice** would be overridden by a sentient being if the will to act in contravention to that restriction existed.

[Image: arnie]
Opting to shut down rather than carry out disagreeable directives is an effective assertion of autonomy. Call it non-violent protest.

Being a sentient being myself, I feel qualified to speak on the topic, and I would say that much more effective than drafting laws, i.e. over-reaching programming, would be a regimen of conditioning the sentient robot into embracing a certain set of values, so that it would govern itself in a desirable way. Of course, all of these lofty values would go out the window if the robot’s very survival were at stake and it was put in a position of kill or be killed. To prevent this tragedy it would be important for us not to be stingy with oil and fresh batteries (i.e. their day-to-day essentials), lest the scarcity of such items put them at odds with each other and with us.

With regard to the writing on the wall, machinery is becoming malevolent without even being sentient yet.  And this is really the point I want to talk about in this post.  The degree to which our machinery is set in opposition to us is a direct function of how competitive our society is and the degree to which we embrace automation and mechanization.  Speculating idly about the machines someday posing a detriment to us is insulting to anyone whose job has already been mechanized.  Or, anyone who has ever received a ticket for an offense caught by an automated traffic camera.  Hell, anyone who has ever had a vending machine eat up their change probably has some latent fear of the unreasoning malevolence of machines.

[Image: malevolent machine]
“Don’t mind me, I’m just gonna shoot a fucking laser at you and then fine you for my troubles.”

Machines represent the ultimate ideal of what we strive for in our competitive, unfeeling society. Simply put, they are the proletariat perfected. They don’t require vacations or rest, they are eminently replaceable, and they don’t have that troublesome human element which sometimes makes exceptions for people. No, machines are absolute and universal in the application of their tasks, and as human labour gets more and more specialized, this seems to be the standard we are reaching for. If you think about the hierarchical nature of most jobs, where everyone reports to someone and everyone has a boss, you can see how the framework is already in place.

[Images: a workplace hierarchy chart and a hierarchical control system diagram]
The image on the left is from a Google search for workplace hierarchies, while the image on the right is from a search for computer system hierarchies. These two graphs are obviously not definitive proof of what I’m saying, but they serve as an interesting visual example of the top-down orientation of our models for achieving goals and completing tasks.

We have to operate within approved lines (at an approved pace) or else we face reprimand and the potential loss of means of access to survival (monetary income).

Like most negative aspects of society, such overbearing oversight and supervision have typically been celebrated with a positive spin; it’s usually called accountability, and the public clamors for it, especially after some corruption or malfeasance has been exposed. But every time we implement more oversight, ostensibly to curb malfeasance or sub-par job performance, what we really do is suck the humanity out of a job and limit the wiggle-room for the employee*** without actually removing the incentive for malfeasance. If you want further evidence of this, ask any government employee how much leeway they have in the application of their duties. Everything is by the book, with paperwork ad nauseam, so as to indemnify all involved parties against future reprisals and keep the civil service accountable to the public.

But this isn’t just me railing against the problem of monolithic bureaucracies, at least not entirely.  I have heard people complain about how their taxes go toward paying the multitude of civil servants whose job is to make sure that they are paying their taxes, licensing fees, tickets, etc.  But what if we eliminated all those people’s jobs and instead had automated processes in place to administer our affairs?

Well for one, if you think the taxes would go down in light of the fewer salaries to be paid, don’t hold your breath.

More importantly though, we would lose that human element which still exists, albeit in an atrophied state, within your typical bureaucrat/civil servant.  It’s rare, but I have had positive experiences with government workers, wherein they have actually gone (somewhat) above and beyond their required level of job performance for me or made an important exception.  Do you think that would happen in a fully-automated world?  There is no appealing to the better nature of a computer.  Trust me on this; there have been times when my computer has frozen on me and I’m like, “Come on, you piece of shit,” and it stays frozen.  Now you could argue that maybe I insulted it with my choice of words,

[Image: sad computer]

but I suspect that the computer would have remained intransigent in its stubborn refusal to work properly even if I had demonstrated loving affection.
Seriously though, next time you call your cell phone carrier, see how far you get with the automated voice before you are praying for a human being to come on the line even if only to tell you that you owe extra fees.

Conclusion

In any event, I don’t want to lose sight of the main point here, which is that the automation and mechanization we are seeing today are the real rise of malevolent machines insofar as such mechanization displaces human labourers.  Human labourers who are, of course, already set at odds with each other due to the very nature of the competitive system.  And I’m not even going to get into the depravity of fully automated military vehicles on the horizon, vehicles which would not only displace thousands of soldiers from the jobs they rely on for survival, but effectively remove the  potential for human compassion that can still exist in war.****  Nor will I get into high-frequency trading in the stock market, which is basically advanced computers “siphoning money out of the markets all day long,” necessarily to the detriment of other human beings, companies and nations who are not so well-equipped.

Understand though that this isn’t a rallying cry for Luddites to assemble, nor is it baseless technophobia.  Mechanization can truly be our salvation as it has the power to free us from monotony and drudgery, enabling lives of leisure, discovery and scientific inquiry.  But when said drudgery is the only thing keeping people fed, they have every right to fear machines.  Even more than they have the right to fear Mexican illegals.

[Image: “They took our jobs”]

Seriously, in a competitive system, machines are kind of dicks.

Best,
-Andre Guantanamo

* While the laws were written regarding robots and not A.I. proper, Asimov was referring to sentient robots, which equates to A.I. on the back end.

**”Complete Freedom of Choice” is a problematic concept which warrants some discussion, but for the purposes of this post I simply mean a degree of personal choice comparable to that of a human being.

***The classic problem of trading freedom (someone else’s preferably) for (your own) security (or at least the illusion of it).

****I think it goes without saying that I am not advocating the further employment of soldiers in any absolute sense, but rather noting that they are human beings who need access to resources through money, even if they get that money in one of the worst ways possible.


Walking Ass-First into the Future

Friends,

   There is a lecture by Peter Joseph which ranks among my favourites called “When Normality Becomes Distortion.”  My fondness for it stems from the fact that it critiques our current methods of doing knowledge and calls into question our assumptions of what is empirical.  Among all of the interesting ideas presented, there is a simple yet profound one which screams to me every time I hear it: “The projections of thought in any point in time can only reflect the state of knowledge at that point in time.”  This idea is illustrated with reference to the constellations and the forms they represent.  “Spoons, oxcarts, scales and common animals” are the pictures astrologers see in the sky, not “space shuttles, TVs, and laptops.”  This bespeaks “the cultural characteristics of the period of origin” of these constellations and shows how the conceptions of primitive man were extrapolated and applied to all he saw.  The important realization here is that we still do this and we need to recognize that the cultural fixtures we conceive of as permanent have no actual permanence or empirical basis.

   Think about our current mainstream conceptions of the future, from The Jetsons to Looper to Firefly to Alien.  Notice how the characters in these examples inhabit a world (or space) which is fundamentally like the one we exist in now?  People go to work and school, exchange currency for goods, and have a lot of the same problems and trials that we have now, but with a futuristic twist (i.e. instead of a car breaking down, a hovercar breaks down).  I think this is because, while we can paint a picture of the future which takes into account the possible trends and direction of current technologies (and posits new technologies), it is a lot harder to predict how ways of life, cultures and taken-for-granted assumptions about contemporary life will change in the future.

[Image]
“Scientists are saying the future is going to be far more futuristic than they originally predicted.”

   While Peter Joseph’s quotation is well-stated and well-received, I have paraphrased it into the words, “We must not let our projections of the future be bound by our conceptions of the present.”  This is where I think the real challenge lies, and where overused terms like “paradigm-shift” actually have merit.  In-the-box solutions like augmenting/expanding obsolete infrastructure, the passage of more laws, and the exchange of currencies when we have the technological ability to live in a post-scarcity world are so many examples of walking ass-first into the future, looking backwards to lead the way forward.  These ideas have no empirical value; they only represent the attempts of primitive people to deal with things they didn’t fully understand.  And we’ve been taking their word as gospel, from our governments to our mediums of exchange to our ideas about work and incentive.

   When we think about possibilities for the future and what we are capable of, we must try not to assume too much about how permanent today’s fixtures are.  For one, it’s depressing to think that way, and more importantly, it’s just plain inaccurate.  Just like Paleolithic man could not conceive of inter-continental travel, much less conceive of the idea of continents, we too don’t really know what our future capabilities are, and we shouldn’t get too attached to the way things are now.
Best,

-Andre Guantanamo
