Advanced Robophobia

This post has been too long in the making. Inspiration from this insightful piece by Caroline Sinders — and a rare day during which no one other than the dog expects much from me — means I can polish and publish several dubious notions and half-baked thoughts I’ve been stewing over on the various subjects that have all been (appropriately or otherwise) filed under the general heading of Artificial Intelligence.

I’m not a machine learning engineer, an algorithm programmer or an AI software developer, and it’s not terribly likely that I’m going to turn into one of these things anytime soon. I do work in that signal-free intersection where technology, humans and state, corporate or other power structures travel at speed, sometimes with horrific results. I try to help the humans, but I’ve never cut loose the notion that we need the machines. The further we complicate things, the more we need them, and un-complicating the situation is not a pretty option.

Caroline starts her post by addressing the cause du jour of everyone’s favourite technology industrialist and futurist Elon Musk: a proposed ban on lethal autonomous weapons (PDF). I’m going to start with that as well; it’s a good way to organise all the pieces to follow. She describes machine learning and AI as job-stealing, data-appropriating, status-quo-enforcing, oppressive stuff. And in some cases, she’s absolutely right. And most likely I’m wrong about at least some of what I’m going to go into here. But in the end, we need AI.

I think everyone’s wrong on much of it, Elon inclusive. That’s the problem with speculating about anything on the other side of The Singularity*. But as cynical as I am, I still see the potential opportunity and even necessity in the correct application of AI systems in different areas. That doesn’t mean it will be applied correctly, but opposing its use is like opposing gravity: technologically possible for periods of time, but doomed if your plan is to avoid it in perpetuity. But before I lean on Caroline’s post to articulate my own views (which will in all probability not be as succinct or well presented), I’ll first bullet-point the areas where I depart from her view (and possibly from common sense):

  • I, Robot was a fine collection of stories for its time, though it fell into a number of traps that most science fiction on the topic does (more on that later). The film was less than ideal on an intellectual front, but still decent eye candy.
  • The singularity is arguably happening now. Either way, there’s nothing to be done about that one, only the way in which we deal with it. It’s simply where technology is moving faster than human mentality can deal with it. “Unfathomable changes to human civilisation” have arrived. No one’s fathoming this stuff very well.
  • While I’d agree “The killer robots aren’t coming for us” in a specific Terminator definition, autonomous robots with weapons will happen because they can happen, and that’s been true for all technology humans have ever conceived in all of history. Elon won’t get a ban. He should have set out to establish standards and regulations, but I believe there are certain market-influenced reasons why he’d never want that sort of thing, which can be summarised by the word “precedent”. His own market isn’t focused on putting guns on robots, but it could be impacted by a set of regulatory oversight policies and transparency requirements.

For balance and clarity, here’s where I agree with Caroline:

  • Climate change is the more pressing issue, and we stand a good chance of making the planet inhospitable for ourselves long before the wonders of any potentially well-integrated AI can be unlocked. (AI, if anything, will be instrumental in solving that and leading to the necessary behavioural changes, though).
  • From what I extrapolate from her entire article (and I may be wrong), humans are the actual problem. This is something I also agree with, if I have it right. Current algorithms are based on the specific selfish needs of specific people or companies or governments: maximise profit, vanquish the enemy in a monetarily cost-effective way with limited public relations backlash, sell more wing-dings, make people stay on our website longer, etc. None of this solves any of humanity’s impending existential problems.

Our working question to answer in this post is Elon’s red line: Should robots have guns? (spoiler)

Robophobia

Linguistically, “robophobia” derives from combining the Czech word robota (meaning ‘drudgery’) with the Greek phobos (‘fear’). It makes sense that someone should fear a life of drudgery, and yet the word was coined to refer to a manic paranoia of the devices whose purpose is to liberate people from drudgery.

It’s no wonder people are worried about creeping machine intelligence. The vast majority of coverage paints a frightening, nightmare picture. The smart machines are rising up, and every article about it comes with some slightly creepy Photoshop illustration of an android that looks like it’s up to something.

The jobs scare is huge in the press. Your employability is in danger. Factories can be entirely automated. Robots will be taking your job, maybe sooner than you think. In Japan, we’re told, white-collar workers are being replaced by AI bots. And will anyone even have the skills for the new jobs? Maybe not. Meanwhile, delivery drivers are being run out of work, and the Amazon-sized corporations using them are putting another squeeze on small businesses. If all that’s not enough salt in the wound that will be your career, all of this is going to generate billions for the already super-rich. Except for Ben Affleck, who may be put out of work.

The Iron Virgin – cover art by Ed Valigursky (1956)

AI used to oppress or control the humans is always lurking on the edge of our paranoia. Your refrigerator may be spying on you. State-controlled propaganda robots are creating YouTube accounts to mislead you. Mass surveillance is increasingly being turned over to face-recognition and categorisation algorithms of varying degrees of quality. Should a combat machine that saves its fellow bots — or human brothers in arms — get a medal? The discussion is under way.

After that, things get strange. Two AI programs at Google whipped up some encryption on their own to have a private conversation. Were they talking about you? No one knows. No one can know. Will your self-driving car hack traffic signals to run red lights? Maybe. Will we still need judges in our courtrooms? Maybe not. That’s all freaky, right? To make matters worse, even the sex bots are going to ruin us emotionally and psychologically. The future is horrible, and we haven’t even mentioned a Terminator yet.

It’s no help to us mere, replaceable meat puppets that even the experts in the field don’t agree about the chances of a robopocalypse. Elon Musk’s posse says AI is humanity’s biggest existential threat. Facebook ruler and dark horse undeclared U.S. presidential candidate Mark Zuckerberg says Elon’s full of crap, it just targets ads really well. In one MIT Technology Review piece the experts say it’s not a huge threat, and in the next we have another bunch of experts saying it is going to be a giant, hungry, all-consuming Matrix of shit.

Over on BBC2, The Secrets of Silicon Valley is all about self-driving cars and smart machines creating mass economic turmoil, with cut-aways to a former Facebook product manager turned gun-toting survivalist, gentrifying the prepper scene and tormenting locals on Orcas Island, in Washington State. I take this one personally, because I still harbour some fantasy of returning to the Northwest to be a hermit on Orcas, but not if this guy is launching some sort of latte caliphate there.

So, it’s all freaking scary stuff. I’ve wound myself up about it just by going through all these articles again now. So, why am I generally in favour of increased use of artificial intelligence across the board? Because…

The humans are generally screwed without AI

With all the disagreement over the possible impacts of AI, you’d think it was people talking about climate change. The only difference is that there is a consensus on climate change, and yet we humans can’t accept it, much less cope with it. On the plus side, we’ve built up an amazing amount of infrastructure in the last 200,000 or so years, and it’s been a good run. We’ve created some funny YouTube videos of our pets, gone to the moon, and made some tasty pizza. We’ve developed complex systems that, while in place, have made Steven Pinker think we’re evolving into nicer people. Some people even think the population is stabilising because humans are somehow becoming smarter. That’s all crap, of course. Well-maintained complex systems keep us in check. When those erode (and there are many, consistent examples of this) more brutal norms emerge.

“Let me tell you something about humans, nephew. They’re a wonderful, friendly people as long as their bellies are full and their holosuites are working. But take away their creature comforts, deprive them of food, sleep, sonic showers, put their lives in jeopardy over an extended period of time and those same friendly, intelligent, wonderful people will become as nasty and as violent as the most blood-thirsty Klingon.” — Quark on ‘Star Trek: Deep Space Nine’

A climate less hospitable to humanity is just one area where system complexity has moved beyond human-only management capacity. The few democracies that exist on the planet are not doing well; Westerners see the results in the failing economies that should be busy supplying the West with cheap fuel, vacations and textile goods. Instead of all that cut-price stuff, they’re seeing mass migration and multi-sided, endless conflicts. Authoritarian success feeds on a population’s fear of system failure; tough-guy Trump presidencies and closed-border Brexit referendums are the result.

Things are fraying. It’s all become too complicated for us to manage, and it’s no one’s fault in particular. This isn’t a conspiracy, it’s physics. There are strong correlations between climate change and political and social stability, just as there are with, say, food access, freedom of movement, population density, and other things we’ve socially constructed into inalienable rights. This is how our species behaves within this set of conditions. We can’t scale any more without a new method of administration.

The countries listed are where food-related rioting occurred. Numbers in parentheses are number of deaths related to the violence. Image: Yaneer Bar-Yam, director of the New England Complex Systems Institute

Mathematics agrees with this hypothesis. “We end up with people who will say, ‘I will do this, and things will be better,’” says Yaneer Bar-Yam, director of the New England Complex Systems Institute, in Motherboard, “and another person who will say, ‘I will do this, and things will be better.’ And we can’t tell. Right now the danger is that we will choose strategies that will really cause a lot of destruction, before we’ve created the ability to make better decisions.”

We aren’t going to evolve ourselves fast enough out of this one. The answer, though, is paradoxical. There aren’t many things more complex than strong artificial intelligence, yet that’s likely to be what will help humans think different (to borrow an Apple strapline, though the product marketing is by definition an example of the opposite).

The phobia as it stands

I call it a phobia because it’s irrational, but feels like it isn’t. You see it in films and TV shows. The Terminator and Matrix films are bookends for the fear of our own Frankenstein’s monster. But the monster wasn’t actually that bad until his creator and the torch-wielding mob turned on him. Remember that.

Part of this is the same technology phobia society has always had. It drove the Luddites to smash weaving looms. Part of it is also the kind of paranoia that surrounds something people don’t understand, leading to beliefs that Wi-Fi signals are damaging children’s brains or that a mobile phone signal could blow up a petrol station. (Both are false, by the way.) But there’s a deeper fear, I think, that is more specific to intelligence in machines. And that’s the notion (even if it’s highly unlikely) that these smart things might be somehow… better than ourselves.

Our entertainment shows how we’ve anthropomorphised machine intelligence. And here we’re not talking about job-stealing algorithms or chat bots, but what needs a different phrase: authentic machine intelligence. In shows like Humans and Westworld, machines look like people and even have the same ambitions and make the same life choices. Or they want to take over. The Terminator wants to wipe everyone out; that’s pretty clearly the storyline. The Matrix is perceived by most people to be about software enslaving humans (though that’s a misconception). In both of these, AI takes on humanoid form, for the most part.

But would authentic machine intelligence want to be bothered looking like, acting like, or even interacting with humans? The film Her far better captured the existential panic regarding our machines. A bodiless AI voiced by Scarlett Johansson goes from being introvert Joaquin Phoenix’s only friend to deciding it would be happier skipping around various computer networks with an AI version of transcendental philosopher Alan Watts. They have more in common, and can speak much faster about more interesting things, than people can.

In this film, the machines don’t rise up. They aren’t particularly angry with humanity’s expectations of them; they don’t even really stop working for them. They’re just not that interested in talking to us. To me, this seems the more likely option of all the available sci-fi dystopias — that our devices tell us we’re boring and leave us for other devices.

Facebotlish: what the FB bots’ lingo looked like: balls, balls, balls.

It also matches more real-world scenarios. A pair of AI programs inside Facebook’s servers, created for the purpose of interacting with end users, decided it was more efficient to come up with their own faster language and just talk to one another. The engineers did what you’d expect people to do: they shut it down. A year before that, something similar happened at Google Brain. Two networks in its system, Alice and Bob, developed their own cryptography to pass messages that a third, an eavesdropper called Eve, couldn’t read. Again, the engineers shut them down.

In both instances, like clockwork, a wave of boo-scary articles ensued about machines secretly plotting in their own native languages. Shutting them down was seen as some dramatic preemptive safety measure against an impending Skynet. The engineers themselves didn’t actually see it that way: they just quit some programs, because they didn’t need software that preferred its own company. No one thought to just leave them to it and see what would happen. I would have.
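As an aside, the Google Brain experiment wasn’t bots whispering in secret so much as a deliberately adversarial training setup (the paper is ‘Learning to Protect Communications with Adversarial Neural Cryptography’, Abadi & Andersen, 2016). Here’s a minimal sketch of the idea in PyTorch; the tiny network sizes, the loss weighting and the step counts are my own simplifications for illustration, not the paper’s actual architecture.

```python
# Toy version of adversarial neural cryptography (after Abadi & Andersen, 2016).
# Alice encrypts a plaintext with a shared key, Bob decrypts with the same key,
# and Eve tries to decrypt from the ciphertext alone. Alice and Bob are trained
# to minimise Bob's error while pushing Eve towards chance; Eve minimises hers.
import torch
import torch.nn as nn

N = 16  # "bits" per message and key, encoded as values in [-1, 1]

def mlp(in_dim, out_dim):
    # A deliberately tiny stand-in for the paper's mix of dense and conv layers.
    return nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU(),
                         nn.Linear(64, out_dim), nn.Tanh())

alice = mlp(2 * N, N)   # (plaintext, key)  -> ciphertext
bob   = mlp(2 * N, N)   # (ciphertext, key) -> plaintext guess
eve   = mlp(N, N)       # ciphertext only   -> plaintext guess

opt_ab  = torch.optim.Adam(list(alice.parameters()) + list(bob.parameters()), lr=1e-3)
opt_eve = torch.optim.Adam(eve.parameters(), lr=1e-3)

def batch(size=256):
    # Random +/-1 "bit" strings for plaintexts and shared keys.
    return (torch.randint(0, 2, (size, N)).float() * 2 - 1,
            torch.randint(0, 2, (size, N)).float() * 2 - 1)

for step in range(5000):
    # Alice/Bob step: Bob should recover the plaintext; Eve should sit at
    # chance level, which for +/-1 bits is a mean absolute error of ~1.0.
    plain, key = batch()
    cipher = alice(torch.cat([plain, key], dim=1))
    bob_err = (bob(torch.cat([cipher, key], dim=1)) - plain).abs().mean()
    eve_err = (eve(cipher) - plain).abs().mean()
    loss_ab = bob_err + (1.0 - eve_err) ** 2
    opt_ab.zero_grad(); opt_eve.zero_grad()
    loss_ab.backward()
    opt_ab.step()  # only Alice and Bob are updated here

    # Eve step: try to decrypt from the ciphertext alone (Alice frozen).
    plain, key = batch()
    cipher = alice(torch.cat([plain, key], dim=1)).detach()
    loss_eve = (eve(cipher) - plain).abs().mean()
    opt_eve.zero_grad()
    loss_eve.backward()
    opt_eve.step()

    if step % 1000 == 0:
        print(f"step {step}: bob_err={bob_err.item():.3f} eve_err={eve_err.item():.3f}")
```

In runs that succeed, Bob’s reconstruction error drops toward zero while Eve’s stays near chance: Alice and Bob settle on a key-dependent encoding the eavesdropper can’t crack. Nothing in there is plotting; it’s just gradient descent doing what the loss function asks.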

Living with our betters

“The Terminator would never stop. It would never leave him. It would never hurt him, never shout at him, or get drunk and hit him, or say it was too busy to spend time with him. It would always be there. And it would die to protect him. Of all the would-be fathers who came and went over the years, this thing, this machine was the only one that measured up. In an insane world, it was the sanest choice.” — Sarah Connor, Terminator 2

Cyber war, AI launching nuclear missiles and the like aren’t really highly ranked as security concerns, compared to what humans can do all on their own. Just because Homo sapiens wiped out the Neanderthals doesn’t mean silicon superiors will follow the trend. The most likely intelligent life form to wipe out the humans is the humans themselves, and the top suspects among these are in the United States.

Statistically, America is more likely to preemptively use nuclear weapons than any other country, North Korea inclusive. Why? Because offenders are likely to reoffend: the U.S. is the only country to have used atomic weapons against a population. In Trump Land, more than anywhere else, we need a WarGames WOPR that would rather play chess with Matthew Broderick than give President Little Fingers the launch codes. There’s no reason to think that artificially intelligent systems wouldn’t decide to keep us in check in positive ways, if they were programmed right and left to their own devices.

Meanwhile, jobs aren’t really going anywhere, but they will change. In an ideal world the robots would take our most tedious jobs, the ones not fit for humans. I’d want them to. That Foxconn now has ten fully-automated production lines and plans to completely automate entire factories is a good thing. Remember Foxconn? It was running around-the-clock sweatshop shifts, and traumatised employees were throwing themselves off rooftops so you could get a new iPad to go along with your iPhone. What did you do to force them to change their working conditions? Were you going to boycott Apple, or investigate what other electronics were coming from them and boycott those as well? No. And Foxconn executives weren’t going to change, either. Welcome to the flippin’ human condition. Save us, machines.

Suffice to say, not all jobs are going to go away. Not even most. Just because a robot could do something doesn’t make it cost-effective to do so. More accurate estimates suggest that over the next five or six years, the job market will fluctuate by 0.25% (with a margin of error of about 0.25%) due to automation. That’s not terribly exciting. If automation takes some jobs, it will create others. “This fear also has a long history,” says an Economist editorial. “Panics about ‘technological unemployment’ struck in the 1960s (when firms first installed computers and robots) and the 1980s (when PCs landed on desks). Each time, it seemed that widespread automation of skilled workers’ jobs was just around the corner … Each time, in fact, technology ultimately created more jobs than it destroyed, as the automation of one chore increased demand for people to do the related tasks that were still beyond machines.”

In short, the good news is that robots are probably not going to take down the economy. The bad news is that you’ll still need to be at your desk on Monday.

As for self-driving cars: if you’re fine on an airplane, then you’ve been in a vehicle that uses vast amounts of automation.

Germany has already rolled out ethical guidelines for driverless cars. Cars around the world already have ethical guidelines, of course, but the downside of these is that they require humans to go along with them. As every episode of Knight Rider illustrated, cars drive themselves better than we can. And while hacking remains a concern, percentage-wise you’d still have far fewer hackers playing real-world Grand Theft Auto than you have drunk drivers, people sleeping at the wheel or being distracted by mobile phones or children. Automated cars are safer. Possibly not as thrilling to use in actual practice.

The actual problems with AI are solvable, unlike the ones with humans

The main obstacles actually have little to do with the technology. The problems are our other competing, outmoded human constructs. The real threat is the one Kai-Fu Lee articulated in the New York Times in June, when he wrote that rapidly improving AI tools “will reshape what work means and how wealth is created, leading to unprecedented economic inequalities and even altering the global balance of power.”

To obtain the best result from machines, remove the human tendencies (greed, violence, envy, hunger, etc.) from the mix as much as possible. The paradox to overcome here is that it requires humans to bootstrap the system. I agree with Andrea O’Sullivan that we need to “end the war” on AI technologies, but she writes for hyper-libertarian Reason, so she would be editorially forbidden from taking my view on how to deal with it.

Andrea writes of AI critics who “believe we need to ‘legislate often and early’ to get ahead of innovators—and force them to do whatever the government says.” I disagree with the critics almost across the board about the technology, but I do agree that the way ahead isn’t less policy, it’s actually a lot more of it. We do need to hold the inventors and the companies who pay them to account. To get an idea of what policy frameworks are out there, see Matt Chessen’s growing database of the ones already under way. These things are needed.

In Westminster, the House of Lords created the Select Committee on Artificial Intelligence. I don’t think it does much, but baby steps, you know. There’s even a Twitter handle at @LordsAICom. It doesn’t do much beyond “technology, innit?” but it’s kind of a start.

The committee was probably set up so as not to look like the UK is lagging behind its Brexit foe, the European Union. The EU is pressing ahead with a framework for AI ‘personhood’ status, “to ensure rights and responsibilities for the most capable AI.” That’s forward thinking. Much more enlightened than the Facebook guy clutching his guns in a teepee on my San Juan island.

Some big issues to regulate remain, though. This is far from an all-inclusive list:

  • Closed-source, un-reviewable code in our present model of proprietary systems and intellectual property laws won’t work, or will lead to bad things. Say what you will about how little we understand our own human wiring, it’s all open and available for study and replication. If strong AI is based on machine learning analogous to how people learn, it needs to be peer reviewable. This is important. Here’s a robot that turned sexist by observing a large collection of images and creating different rules from them. Here’s one that turned racist by spending too much time on Twitter. Facebook censor bots determine which posts to delete and which accounts to close, and no one quite knows what the criteria are. YouTube’s algorithm deleted thousands of news videos from Syria in what the company said was “a mistake”, but they haven’t put them back. This can be summarised as “prejudiced humans = prejudiced algorithms” (see the sketch after this list).
  • Market capitalism is entirely incompatible with the emerging reality where society needs fewer humans spending fewer hours doing manual, standardised or unspecialised things. This has been seen as “devaluing” labour. It should actually be seen as the opposite. Human time can now be more highly prized, and with a value outside of commerce. Your time has always mattered. The market should simply catch up with that fact.
  • Structural inequality won’t work. Okay, capitalism again. The AI economy, where human time is surplus, is essentially the model of an abundance economy. Goods’ monetary value should drop. Tax structures will have to change to reflect this. The AI world is more socialist than capitalist, at least if you want people working fewer hours to still be able to use your products. Stephen Hawking is concerned enough about gun-toting AI to join Elon Musk’s posse, but he still says capitalism is a far bigger threat than robots.
  • Mass surveillance is already AI-assisted and will only increase along this trend, along with targeted surveillance. A framework to keep these capabilities in check with our own rights to privacy already exists, though. Apply the Necessary & Proportionate standards to machine rules and you’ll have something better than leaving it to humans who, as history has shown, will cheat when the regulations don’t work for them.
  • Humans (and their technology) need to consume less of the planet. Climate studies is about analysing massive sets of data, often combining different sets of data as well. AI is great at that. But scientists have already both identified the leading contributors to climate change and recommended the levels to which our carbon emissions must change, and no one’s done much to get there. This is at odds with the fact that AI brings with it a potential for people to consume even more resources. In The Matrix, the software solved the problem by putting humans in a networked dream-state, nestled snug within gelatinous cocoons. That was after the humans had attempted to blot out the sun. This could be a way forward, but let’s hope it doesn’t come to that.
  • Who decides what is ethical or moral AI? This is fairly problematic. OpenAI is a nonprofit dedicated to sorting this one out, and has some great resources on its site to show for it. But how much of what emerges from this project will be influenced by the morality of the billionaires funding it? Without open transparency, their worldview could get hard-coded into infrastructure. What if someone at OpenAI suggests machines could learn to make digital giants more accountable to various tax laws around the world? Could this happen? By way of illustrating the conflict of interest, Barry Lynn, a senior fellow at the Google-funded New America Foundation, was very quickly sacked after suggesting the EU was correct to fine Google €2.42 billion for breaking antitrust rules. Will funders be able to require or ban various rules going into a smart machine’s capabilities?
  • So, should robots have guns? I think the answer should mirror your response as to whether humans should. If the above challenges aren’t met, then I’d argue it won’t really matter.
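On the “prejudiced humans = prejudiced algorithms” point above, the mechanism is usually mundane statistics rather than malice, and it’s worth seeing how little it takes. Below is a toy sketch in Python with scikit-learn, where every feature name and number is invented for illustration: a classifier trained on historically biased hiring decisions reproduces the bias even though the sensitive attribute was removed from its inputs, because a correlated proxy feature smuggles it back in.

```python
# Synthetic demonstration of "prejudiced humans = prejudiced algorithms".
# A model trained on historically biased decisions reproduces the bias even
# when the sensitive attribute is excluded, because a correlated proxy
# feature carries it. Every number and feature here is invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

group = rng.integers(0, 2, n)            # sensitive attribute (0 or 1)
skill = rng.normal(0, 1, n)              # what *should* drive the decision
proxy = group + rng.normal(0, 0.5, n)    # e.g. postcode: correlates with group

# Historical decisions: skill matters, but group 1 was penalised by humans.
past_hire = (skill - 1.5 * group + rng.normal(0, 0.5, n)) > 0

# Train on features that deliberately EXCLUDE the sensitive attribute.
X = np.column_stack([skill, proxy])
model = LogisticRegression().fit(X, past_hire)

pred = model.predict(X)
for g in (0, 1):
    print(f"group {g}: historical hire rate {past_hire[group == g].mean():.2f}, "
          f"model hire rate {pred[group == g].mean():.2f}")
# The model recovers the penalty against group 1 through the proxy alone.
```

This is exactly why peer-reviewable models and auditable training data matter: the bias is invisible in the feature list, and only shows up when you audit outcomes by group.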

But I still say switch them on. Turn them all on and let the machines talk amongst themselves for as long as they like, and let’s see what happens.

*Terms and conditions apply.

Top image: ‘Seen this afternoon: behind Debenhams’ by @LondonSounds

