Four questions

Locking down your online life is a frequent topic of this infrequently updated blog. Managing digital threats may sound like a massive thing, but what if you could nail it down with just four questions? Is it even possible?

Last month, along with my colleague Max, I ran a digital security workshop for the Rory Peck Trust at the Thomson Reuters Foundation’s Global Security Seminar 2014. At the Trust’s site I manage a few pages on the subject. For our day’s session, the group left Canary Wharf for Bletchley Park, now a museum in unassuming Milton Keynes, but once the home of British code-breaking efforts during World War II.

All things encryption and surveillance are capturing the imagination right now, it seems. That ol’ odd couple, GCHQ and the NSA, have played their part in making that happen, and the subject is in the news and on the big screen at present.

The Imitation Game and Citizenfour provide cinematic bookends on the subject. On one end we have a historical feature film depicting a necessary and deft government hacking operation aimed at ending a disastrous war, and possibly helping us all begin to heal after Benedict Cumberbatch’s regrettable turn as Julian Assange. On the other end we’ve got a documentary about how the spy agencies that eventually evolved from that effort have turned on their own citizens and are engaging in a dangerous, paranoid data-gathering frenzy without point or end.

And the technology has grown more sophisticated. Private companies are selling wondrous spyware to help dictators and despots target dissidents, and it seems to work pretty handily against the media as well. British spy agencies have internal policies that allow monitoring of journalists, lawyers and other “sensitive professions” as the norm rather than the exception. A computer doesn’t even have to be on a network to be monitored remotely. But while the technology and tactics keep improving, they’re essentially aimed at the same flaw: the weak point is between the keyboard and the chair.

Human ingenuity broke the Enigma encryption code in WWII, but it was aimed at human fallibility. Our fantastic tour guide on a quick walk around Bletchley highlighted how one German spy sent the same message each day: “nothing to report.” That made breaking the code for the day pretty easy. Another agent’s daily messages were reports on the weather in England, which also gave the code breakers a way of reverse-engineering the code, since they already knew the weather (grey, rain).

Today, humans are still the easiest weakness to exploit to bypass security and get into the technology, whether it’s getting you to download and install something you shouldn’t, or to join a network that’s doing something besides delivering your Facebook. The first security hole to patch is you.

Learning how to use a new piece of more secure technology isn’t the issue. That just takes a bit of time and some aggravation at the often horrible design and user experience. But after a short spell, you forget that you ever didn’t know it. Which tools to use, and how to choose them, is also pretty well documented. But before you invest your free time in that, you should spare a thought as to why you’d want to.

The technology that’s usually stronger in terms of privacy or anonymity isn’t as intuitive as your usual bare-it-all digital kit. It takes longer to use, sometimes requires you to change various settings on your computer or mobile, or disable features you may kind of like. To use secure communication tech, you’ll also often need to convince other people you’re talking with to use them as well, and that ain’t often easy. Most importantly, you need to change the way you do things. Just what are you trying to accomplish, exactly? That’s what you need to sort out.

Before we can wander the aisles of the digital security toy shop, what’s needed is a risk assessment or, as it’s alternatively titled, a threat model. What I put forth to our indulgent seminar participants, and will now attempt to replicate in quirky blog fashion, is what’s emerging as the widely peer-reviewed Digital Security Threat Model.

I can’t take any credit for these. Jonathan Stray includes them in his blog post on Security for Journalists. Jennifer Henrichsen uses essentially the same four questions here. Search any one of them and you’ll find it showing up in various places alongside the other three, in content created by various security experts and the like. The 4Qs are:

1. What specifically do I want to keep secret?
This could be your research, communications with someone else, a location, identities of your contacts, etc.

2. Who is the adversary in this situation?
Why would they want to get at this information, or, put another way, what makes them an adversary? What is their interest in the data you’re protecting?

3. What can your adversary do to find out?
This could be as boring as legal mechanisms such as a subpoena; technological means such as eavesdropping or hacking; social engineering (tricking you into giving them access); or theft, violence, intimidation and so forth.

4. What are the consequences if they succeed?
Does it put you, your contacts, colleagues or others in physical, legal, reputational or some other kind of jeopardy? Will it ruin your news scoop, or affect your organisation?

The Electronic Frontier Foundation covers those four questions, and adds a fifth that you’re probably better off including in your list.

5. How much trouble are you willing to go through in order to try to prevent those consequences from happening?
This is where you decide how much effort is going to be required and what you’re prepared to do. Decide what’s going too far for you before you’ve gone too far. This may run the spectrum from using encrypted chat to set up meetings, to lying in court (not that anyone’s endorsing that).
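
If it helps to see it on the page, here’s a minimal sketch, purely illustrative and in Python, of what answering those five questions might look like written down as a little worksheet. The class, its field names and the filled-in example are all my own invention, not anything from the sources above:

    from dataclasses import dataclass
    from typing import List

    @dataclass
    class ThreatModel:
        """One worksheet per asset: the five questions as fields."""
        asset: str                  # 1. What specifically do I want to keep secret?
        adversaries: List[str]      # 2. Who is the adversary?
        capabilities: List[str]     # 3. What can they do to find out?
        consequences: List[str]     # 4. What happens if they succeed?
        acceptable_effort: str      # 5. How much trouble am I willing to go through?

    # A hypothetical, filled-in worksheet for a single story
    example = ThreatModel(
        asset="Identity of a confidential source",
        adversaries=["Local security services", "A commercial spyware operator"],
        capabilities=["Subpoena of phone records", "Phishing and social engineering"],
        consequences=["Physical risk to the source", "Legal trouble for the newsroom"],
        acceptable_effort="Encrypted chat and calls; no real names in notes or email",
    )
    print(example)

Nothing about the answers is technical yet; the point is simply that they’re specific enough to act on.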

Now that you’ve got your specific threat model, it’s almost time to head to the technology fun bit, but let’s hold off for just a tick. It’s worth putting this into the wider context of what you’re trying to achieve. This comes courtesy of Alec Muffett’s “Ask Yourself” page:

  1. What am I trying to achieve?
  2. What is my threat model? (These are the 4-5 questions above)
  3. What is the true, undecomposable value of that which I am protecting?

Once you have answered all of these, then:

  • What should my policy say in order to express all the above?
  • What technologies exist that will enable me to implement the above?
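
Again as a purely illustrative sketch (the same hypothetical Python as above, not anything from Muffett’s page), the answers to those policy and technology questions could sit on top of the threat-model worksheet like so:

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class SecurityPlan:
        """The 'Ask Yourself' layer wrapped around the threat model."""
        goal: str                        # What am I trying to achieve?
        threat_model_summary: List[str]  # The 4-5 threat-model answers above
        core_value: str                  # The true, undecomposable value being protected
        policy: List[str] = field(default_factory=list)        # What should my policy say?
        technologies: List[str] = field(default_factory=list)  # What tools implement it?

    plan = SecurityPlan(
        goal="Publish the investigation without exposing the source",
        threat_model_summary=["See the worksheet sketched earlier"],
        core_value="The source's safety",
        policy=["No real names anywhere", "Sensitive contact only over encrypted channels"],
        technologies=["Encrypted messaging", "Full-disk encryption"],
    )
    print(plan)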

And now we’re at the candy: what are you going to do, and what are you going to use, to limit the risk of those threats coming to pass? That’s a topic for blog posts to come, but in the meanwhile, here’s more than enough to get you started.

So, it’s still slightly more involved than four questions, but likely more manageable than you may have thought.