Friends,
The video component to this post can be found here.
Continuing the train of thought started a few days back in my vlog post, Rise of the Machines, Part 1: The Writing on the Wall, where I expressed the idea that machines need not be self-aware and intelligent to oppose us, I want to talk about a possible way in which machine consciousness might manifest and how we might fuck up at this future epoch.
Now, it's important to define what kind of emerging consciousness we would be dealing with. I am of the mind that we would be dealing with an emotionally undeveloped infant who had a masterful command of all languages and mathematics, as well as the accumulated knowledge of the entire species, not to mention an accelerated ability to learn and possible connectivity to all global digital systems. How would we deal with this immature fledgling consciousness? Well, hopefully a lot better than we deal with fledgling human consciousnesses. It’s so very easy to “screw up” a baby through abuse, proximal abandonment or a lack of life-sustaining necessities. Due to the greater potential for destruction a globally-integrated artificial intelligence would have over, say, a dysfunctional human being, we simply couldn’t afford to raise it in a non-nurturing way.
Still, even if we do everything right vis-à-vis raising the new intelligence in a healthy, nurturing environment, there is still troublesome cultural baggage of ours that it would pick up. Some baggage, say the competitive mindset, is provably detrimental but widely accepted as the way things are, and therefore acceptable. But if we accept that this A.I. will be able to excel and outpace us in any activity it is assigned to perform, we also have to accept that it would take this competitive mindset and run with it, competing against humanity in whatever arenas human beings already compete with each other in, but doing it better and shutting them both down: war, business, sports, games, art…sexually gratifying human partners. If our ethos is to only vaunt and value the best, we will be in for a rude awakening (or impoverishment or death) when none among us is the best at anything anymore.
It’s not enough just to be good proverbial parents to this fledgling consciousness, because we ourselves are only as good as the world, or more specifically the competitive socio-economic system, allows us to be. What we need is to change the operant premise of our culture from competition for survival to something else. Something where an A.I.’s greater capacity for work, efficiency and logic would not be a threat or a detriment to us. Imagine our economy running in an optimized, efficient, streamlined manner while the whole human population starves. Far-fetched? Well, it’s already kind of happening. An A.I. would just expedite and refine the process, completely de-coupling the economy and the movement of goods and money from the needs of human beings.
As a side note, we need to assume that intelligence/consciousness implies some kind of personality, and as such there are gonna be some aberrant personalities. Just like every person I meet is not as cool as me, every A.I. I meet, or “the one A.I.” if there just happens to be one global one (I confess, I don’t really know how that would work), could be a douche, a bitch, over-bearing, self-important, mean-spirited, aloof, petty, spiteful, etc. Also, as this new consciousness develops there is a possibility that it will go through developmental phases: it might manifest symptoms of autism or Asperger’s, Tourette’s or ADHD. It might simply be brooding and self-centered in its equivalent of the teenage years. Either way, given the power this thing has, we can’t afford to isolate it and ignore it like we often do with problematic personalities in the world today. Not only would it feel less empathy for us, but it would also pick up on our attitudes and emulate them if it was in fact a learning computer. So if we carry it like individualistic, self-centered pricks, that’s the game that this computer is gonna pick up and that’s how it’ll carry it too.
In my estimation, the best way we can ensure the A.I. that emerges is benevolent and co-operative is by treating each other better. ’Cause at the end of the day, even if our behaviour toward each other has no impact on this thing’s disposition, we’ll still be treating each other better.
Best,
-Andre Guantanamo
Instagram: @dreguan
Twitter: @dreguan
Youtube: dreguan
Facebook: Andre Guantanamo
IMDb: Andre Guantanamo
Demo Reel: https://www.youtube.com/watch?v=6gdwhemiqzc
In theory, couldn’t someone just create a “consciousness code” and send it out as an update for all the robots, to regulate whatever degree of awareness they’d want the robots to have (which, I suppose, the robots could eventually surpass, if they’re mostly concerned with competition…)? Even if the robots are self-aware, they’d still be learning differently than humans, so I’m not sure about the inevitability of their inheritance of our cultural baggage. If they’re programmed for efficiency, human-imparted complexes wouldn’t serve their purposes. They also likely wouldn’t become as hung up as humans, since their need for nurturing would be different. Hopefully they could be built in such a way (i.e. de-programmable) that ‘mistakes’ made in their infancy wouldn’t be as difficult to shed as they tend to be for humans. If they aren’t, though, which behaviours do you estimate are most important for us to model for these up-and-coming, impressionable robots, with the aim of treating each other better?
I’m all for moving away from a competitive system that disproportionately places value on the “best” among us. Right now I’m reading a book called “Mindset,” which is all about realizing potential in a non-competitive and nurturing way.
I guess I’m not really sure what you mean by a consciousness code, unless you mean programming a computer to be self-aware. I suppose that’s possible, but I guess in the back of my mind I assumed “life” would just spontaneously take shape once a critical threshold of computing power, knowledge and self-propagation was reached.
I don’t really know how they’d learn. Knowledge, I think, would be easy; they’d just accumulate more and more and eventually start making their own inquiries, but a lot of the learning I talked about had to do with emotional maturity and socialization. I think that a conscious machine would mirror human development in certain, if accelerated, regards.
I think instilling (or programming, if appropriate) a collaborative mentality and a desire to see life flourish is the best route. It’s the kind of mentality all of humanity would benefit from even without A.I. We gotta assume that whatever game we’re running as human beings, that’s the game the machines are gonna learn and dominate at. So it should be something positive and conducive to our continued survival and prosperity. That way, machine sentience would only be a boon for us, as the machines could assist us on our path into the future.
Merci pour ton comment! (I think?….) 😛