Evgeny Morozov on Saturday wrote about one of Philip K. Dick’s nightmares coming true: that pre-crime would emerge as a reality.
So “predictive policing” is a real thing already. Philip K. Dick, where are you when we need you? gu.com/p/3e9vh/tw
— Asa Winstanley (@AsaWinstanley) March 11, 2013
I’ll admit that I was originally slow to warm to Evgeny’s particularly cynical view of technology and its uses, laid out in The Net Delusion and in his countless op-eds to be found all over the interwebs. His new book promises to be an extension of the same, but I’m more on board these days.
Technology is not the great leveler that it’s been touted to be. It is, like all other fields, a place in which the advantaged can still smash the disadvantaged, only with less visceral consequences: you no longer have to be in the same room as the people you’re dooming. Evgeny’s predictions are becoming real faster than Dick’s speculative fiction did, but let’s give the science fiction author his due for warning us first. Data mining is now a major element of policing, but it only goes so far. Technology linking CCTV cameras, public recording systems and internet and mobile ‘chatter’ can lead police to make (allegedly) accurate general assessments about where to send cops to counter protests, riots, burglaries, violent crimes and so on. To grow this capacity, more and more agencies are looking to Facebook, Google and other social data collection giants for solutions, or at least for quick glimpses into communications supposedly walled off by privacy settings.
“However, unlike Facebook, neither police nor outside companies see the whole picture of what users do on social media platforms: private communications and “silent” actions – clicking links and opening pages – are invisible to them. But Facebook, Twitter, Google and similar companies surely know all of this – so their predictive power is much greater than the police’s. They can even rank users based on how likely they are to commit certain acts.” — Evgeny Morozov
Facebook is already using its data algorithms to alert the authorities about potential crime. The same kit that decides which ads your browser’s Adblocker should block could also decide how likely it is that you’ll be trolling for illegal (or political, or any other) activities in your spare time. There’s less chance of thwarting this, because it uses data you’ve willingly entered into the site’s data banks. Google’s own cryptic ‘transparency’ report says the company receives more FBI requests to monitor user activity each year. The name is kind of ironic, since the report doesn’t say how many requests have been made, or about whom, or why, but we’re assured it’s regarding people “not doing good.”
In The Minority Report, Philip K. Dick (I don’t really want to keep referring to him as ‘Dick’) describes a society in which people are arrested for crimes they’re about to commit because a trio of supposedly precognitive mutants is wired into a police system, telling it what people are about to do.
“Doors opened and closed, and they were in the analytical wing. Ahead of them rose impressive banks of equipment – the data-receptors, and the computing mechanisms that studied and restructured the incoming material. And beyond the machinery sat the three precogs, almost lost to view in the maze of wiring…
In the gloomy half-darkness the three idiots sat babbling. Every incoherent utterance, every random syllable, was analyzed, compared and reassembled in the form of visual symbols, transcribed on conventional punchcards, and ejected into various coded slots. All day long the idiots babbled, imprisoned in their special high-backed chairs, held in one rigid position by metal bands, and bundles of wiring, clamps…
The three gibbering, fumbling creatures, with their enlarged heads and wasted bodies, were contemplating the future. The analytical machinery was recording prophecies, and as the three precog idiots talked, the machinery carefully listened.”
— The Minority Report, by Philip K. Dick.
The above description of the ‘Precogs’ is much different from their depiction in the Tom Cruise film of the same title. I prefer the book’s version. It’s like a scene out of one of Samuel Beckett’s more disturbing works. But the film wasn’t that bad an update on the concept: it still had these future-seeing mutants, but they lived in a world with self-driving cars, computers with screens to fondle instead of keyboards to type on, predictive advertising asking you how your last pair of khakis is working out and so forth. All these things now exist or are in development, and so are predictive policing methods, even if pre-crime is a deeply flawed concept.
My prediction: it won’t work as well as it did in either the book or the film, and the consequences of applying it will be both worse for everyone and less interesting to witness than either depicted. If you want to know more about the consequences of authoritarianism, watch fewer action films about people fighting it and read more Russian literature about people enduring it.
The world shown in the film is more like the one we’re entering, but in reality the role of magical clairvoyant crime fighters will be played by automated, less approachable computer processes and predictable human behavior. Social network services, at least the really successful ones, are proof-positive experiments in behavioral psychology: we’re still banging the keyboard for self-actualization the way lab hamsters learn to step on the correct lever to get a snack. What we submit is then run through a series of mathematical formulas that tell advertising programs which things we’ll likely buy. Those same programs may now be used to predict which people are close to snapping. How much of a margin of error is considered acceptable?
There’s little cost in terms of lost human rights if the robot gets your consumer habits wrong and shows you an Adidas sneaker instead of a One Direction concert ticket. There’s more of a problem if the Feds swoop into your room in the morning because you ranted on Twitter, or posted a suspicious collection of favorite books online somewhere. Both of these have happened with human interpretations of online content. Should we put it on automatic now? How many people should we risk falsely snaring or imprisoning to create a system that protects another group of people from becoming victims? In bulk, if the latter number is greater than the former, do we declare victory? It’s conceivable that a kid in Pakistan could post something angry about Americans on Facebook on his way to school in the morning, have it analyzed by a piece of software, and have his location handed to a drone aircraft by lunchtime.
Is it better to stay off Facebook and other social networks altogether? No. Want to get into the suspicion cross-hairs of governments and companies faster? Try not being on the internet. The good news is that this is ripe fruit for speculative fiction. Private companies have more powerful data analysis tools than government bodies ever will. Allowing law enforcement some limited access to those tools can also help shield companies from being prosecuted for developing them in invasive ways we may not agree with. Police, the Feds and other governments’ legal systems will become reliant on them while never having the access or resources to truly understand how they work.
Philip K. Dick’s work was more concerned with the question of whether free will exists. Evgeny Morozov’s worries are about the real-world implementation of these solutions: should they start to “circumvent legal procedures and subvert democratic norms in the name of efficiency alone,” we’ll soon find ourselves on a fast train to authoritarianism. At no point in human history has a technology been created and then gone unused and, in some manner, unabused. We already have these capabilities, so they are going to be used. There’s really little choice in the matter. That should answer both authors’ concerns.