Now, there are many problems in the world of digital security - from governments around the world undermining privacy technology or firewalling their citizens off from information, to valiant but underfunded security tools whose teams only have time to keep the tool safe, not to make it easy to use. Some of these problems are rather significant, some are more approachable, but there remains a hidden problem, so pervasive and pernicious that it undermines all of our good work in bringing usable, human-centered privacy and security tools to wider audiences.
Cross-posted from my piece on Medium
It was the second day of digital security training, and I was losing the room. The journalists, documentarians, and media activists around the table were more intent on following their friends and colleagues via Facebook chat than on dealing with the fidgety, hard-to-install, but super-secure communications tools I was trying to promote.
They had good reason - it was winter 2014, during the tense final days of Ukraine's EuroMaidan protests, going on just across town from our training. The urgency of communication was just too much. Overnight, most of the trainees had chosen to uninstall the app we'd spent the better part of the previous day installing on a mix of Windows XP, 7, Macs, and even Linux systems.
But then again, I had good reason to urge security. Protesters were being arrested because of insecure communications. People were worried about their own government, but also about the small number of companies controlling their telecommunications.
I thought I had understood their need — they wanted a way to have trusted, private communications that spanned from mobile to desktop, chat to voice.
But I had failed. I was pushing a collection of tools I knew to be the best in their class for security: developed transparently as open source, with constant attention not only to bugs but to the nuances of cryptography, and with careful, responsible implementation and monitoring for new possible flaws. They were also the only tools that combined these security features with both text and voice capabilities that could bridge desktop and mobile.
These activists required a tool that they could show to others and start using in minutes; not one that took a day of training and debugging just to install. Tools that aren’t used aren’t providing security.
This is partially a footnotes section from last week's Crypto Saves Lives post, but every week brings new stories, and this week was a doozy. So, let's recap the whole "backdoored crypto / secret golden keys can work" argument:
(1) We can protect private information
*Cough* OPM *Cough*
Update: "Security bloggers and researchers claim to have uncovered a publicly available database exposing the personal information of 191 million voters on the Internet. The information contains voters’ names, home addresses, voter IDs, phone numbers and date of birth, as well as political affiliations and a detailed voting history since 2000."
(2) Well, we are really good at protecting super-important crypto keys that only give good guys access.
So, those luggage locks with a "golden key", now required worldwide, that only trained TSA agents can pop open? Yeah, about that... TSA's master key set was allowed to be photographed, and while that photo was quickly taken off the internet, the damage was done. Anyone can now 3D print completely functional TSA keys.
(3) Besides, adding a backdoor won't cause problems!
There are many great arguments to protect truly private communications from a human rights perspective, and specifically through a Constitutional lens -- restoring the privacy of having a conversation in your living room and keeping your personal records personal are core First and Fourth Amendment rights, which have suffered greatly in the digital age.
My work takes me around the world to support journalists, human rights activists, and a wide variety of amazing people working to improve the world. They are all facing incredible threats posed by powerful actors. These adversaries use malware, hacking, and all forms of digital attacks to compromise the networks of activists.
Open source, trusted, strong cryptographic tools -- and increasingly, trusted commercial systems such as Google's -- are their only available defense, in situations where failure can include targeted harassment, indefinite imprisonment, torture, and even death.
Encryption saves lives.
I have a piece up on Medium about our SAFETAG project. Another colleague and I have spent countless hours building it out, focused on working with small non-profits to assess their risks. It gives them a framework to think critically about what really matters to their work, and about how to most reasonably address threats based on potential impact, the real likelihood of them happening, and their capacity to change digital security practices.
It continues to be very rewarding work, and our tiny team has gotten to see a lot of amazing changes take place in the organizations that have been audited through this process. Anyhow. Check out our piece on Medium: https://medium.com/local-voices-global-change/meet-safetag-helping-non-… , but also take a look at the framework itself at SAFETAG.org. It's an open source framework, and we'd love to see questions and commits over on our github repository!
So, via @runasand, I learn that Buzzfeed's writers have PGP keys:
I cannot express in mere words how happy this makes me when it comes to normalizing real people being able to send actually secure email (especially to journalists!). PGP's various implementations get a lot of heat for their lack of usability, and the process itself, even with a theoretically super-easy interface, is still a complex set of ideas to understand and use in your normal communications. So every organization willing to tackle this head-on, and (hopefully!) to build internal champions, mentors, support, training, and drinking games (I presume) to really encourage adoption, is a huge win - be that a 3-person organization or a 100-person one.
Still, I can't help myself:
"All of Buzzfeed's PGP Keys -- you won't believe the last one!" (Sorry, I cannot help myself) https://pgp.mit.edu/pks/lookup?search=%40buzzfeed.com&op=index
"Buzzfeed journalists can send encrypted emails -- but why they send them will blow your mind!"
"Top 10 passwords buzzfeed journalists use for PGP -- #8 will drive you crazy!"
"3 pieces of metadata not protected by encrypted emails you'd never guess!"
"5 attachments you never thought you'd be able to send encrypted to buzzfeed!"
Pivot-Twist-Dev takes the classic "pivot-twist" approach to idea pitches - take a familiar concept and twist it in a new direction - and applies it to the international development space. You might get ideas like "IT'S LIKE TINDER, BUT FOR NATURAL RESOURCE MANAGEMENT" or "IT'S LIKE FITNESS TRACKERS, BUT FOR SOCIAL ENTREPRENEURSHIP." Are they good? Only a pilot project could possibly tell.
You know what hasn't gotten an update in a while? This blog! What else? The cute kittens of digital security over at I can haz digital security?.
The world is busy and imperfect. But hey, kittens!
Also - an SSL update. I've added Cloudflare's Universal SSL to my site, and it communicates with my backend over the old kinda-broken-ish (but not really!) CA SSL cert. I'll be moving to EFF's Let's Encrypt as soon as it's open for business. This should clear up most SSL errors for you, and if you really, really care about a direct SSL connection to this site, I trust your ability to securely contact me about getting that working for you.
I am far from the first to compare digital security practices to safer sex practices. Heck, you can even see a rap career blooming as Jillian York and Jacob Appelbaum suggest that it's time that we "talk about P-G-P" at re:publica.
Talking about software and trust gets both very boring and very depressing quickly. Let's instead move on to the juicy sex-ed part!
A quick disclaimer: First, apologies for the at-times male and/or heteronormative point of view; I'd welcome more inclusive language, especially around the HTTPS section. Second, I am unabashedly pro-Tor, a user of the tor network, and am even lucky enough to get to collaborate with them on occasion. The garlic condom photo comes from The Stinking Rose.
Super-duper Unsafe Surfing
Using the Internet without any protection is a very bad idea. The SANS Institute's Internet Storm Center tracks "survival time" - the time a completely unprotected computer facing the raw Internet can survive before becoming compromised by a virus - in minutes. Not days, not even hours. This is so far off the charts that, in a safer sex metaphor, using no protection is less like engaging in a risky behavior and more like injecting yourself with an STD directly.
Barely less unsafe surfing
Adding in a constantly-updated anti-virus tool, and a firewall, and making sure that your operating system is up to date is akin to being healthy. You have a basically operational immune system - congrats! You'll be fine if the person you're sleeping with has the common cold, but anything more serious than that and you're in trouble.
Using HTTPS - visiting websites that show a green lock icon - is also a good practice. You can even install browser plugins like HTTPS Everywhere and CertPatrol that help you out.
HTTPS is kind of like birth control. You may successfully prevent *ahem* the unauthorized spread of your information, but you're still relying on a significant amount of trust in your partner (to have taken the pill, to withdraw), and things outside your knowledge can go wrong - the pharmacist provided fake pills, or there was a withdrawal failure. (Please note this is digital security advice, and not at all good safer sex advice - a quick visit to Wikipedia is a good start on effective - and ineffective - birth control methods!) With SSL certificates, you are still trusting that the website has good practices to protect your information (insert the constant litany of password-reset links you've had to deal with this year here), there have been cases of stolen SSL certificates, and there are tools that help an attacker try to intercept your encrypted traffic.
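To make that trust chain concrete, here is a minimal sketch using Python's standard `ssl` module. It shows that an ordinary HTTPS client doesn't judge a website's trustworthiness itself - it delegates everything to a list of certificate authorities your operating system already trusts:

```python
import ssl

# Build the context a well-behaved HTTPS client starts from. By default it
# refuses a connection unless the server presents a certificate signed by
# a CA your operating system already trusts, with a matching hostname.
ctx = ssl.create_default_context()

assert ctx.verify_mode == ssl.CERT_REQUIRED  # the certificate must validate...
assert ctx.check_hostname                    # ...and match the site's name

# The whole guarantee hangs off the CA list loaded into this context.
# Compromise or coerce any one of those authorities, and the "green lock"
# can lie to you.
print("trusting", len(ctx.get_ca_certs()), "certificate authorities")
```

That final number is the size of your personal "trusted partners" list - most people have never looked at it, let alone vetted anyone on it.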
Slightly Safer Surfing
With digital security, a lot like with safer sex, some methods can be combined for greater effect, while layering others can be a horrible idea. Adding anti-virus tools, firewalls, system updates, and HTTPS on top of any other method here is a universally Good Thing.
Using a VPN is like using a condom, provided by your partner for this encounter, and given to them by a source neither of you have any real trust in. Asking the manufacturer for information about exactly how it's made, or what its expiration date is will often result in grand claims (but no hard evidence). Requests to see the factory floor and verify these claims are presumed to be jokes. The VPN-brand condom generally works, and is definitely fast and easy, but you're placing a lot of trust in a random company you found while searching the Internet, and probably also the cheapest one you found. On top of that, you're also still trusting your partner to not have poked any holes in the condom.
Overall, it's still much better to be using the VPN than not, and if you trust your partner (i.e. the website or service you're going to), and you trust the VPN provider for whatever reason - perhaps a widely trusted company has performed an independent audit of the VPN, or you or your workplace set it up yourselves - then for most situations you're pretty safe. Layering a VPN on top of the above tools is good, but layering VPNs on VPNs or on other networks is actually not dissimilar to layering condoms - it makes failure /more/ likely, in very weird (and, let's face it, awkward) ways.
Still, though, wouldn't it be better if you could rely even less on trust, and have that trust backed up with evidence that you yourself can look at?
Using Tor is like using a condom which you not only know has gone through extensive testing, you can even visit the factory floor, look at the business' finances, and talk with the engineers and factory staff. It's /still/ not 100% safe, but it is a heck of a lot safer, and you can verify each and every claim made about what it does and does not do.
And to be clear here, if you're logging in to a website over Tor, that website now knows who you are (you're no longer anonymous to them, and possibly not to others watching you do this along the wire), and that website is storing your password and may fail to protect it at some point. That website can still turn out to be malicious and attack you, and very powerful adversaries can even specifically try to intercept traffic coming from a website and going into the super-secret Tor network, change it, and include an attack they know works well against out-of-date versions of the browser you're using. An out-of-date Tor browser is like an expired condom - it's best not to bet your life on it.
To really (over-)extend the analogy, the Tor-branded condom business happens to be heavily funded by a religious organization that is strongly against birth control (and indeed has an entire project that tries to undermine birth control methods, to the point of installing secret hole-punchers in condom factories). This same organization (it's large!) does have a different and vocal component that strongly supports safer sex, and not only funds giving away condoms, but also the production of them. It's not, seemingly, the most logical setup, but hey, we're talking religion, politics, and sex - logic doesn't always come into play here.
Like sex, there is no truly "safe" way to play on the Internet, and abstinence from the Internet isn't a realistic expectation. So, be careful out there, adopt safer practices, and keep your wits about you. Good luck!
There's a budding conversation on "trust" over in the twitterverse. I began a draft post a while back that compared Tor (the amazing privacy and anti-censorship network) and all privacy-protecting software to condoms. More on that soon, but let's actually talk about how you might come to trust a software project, using Tor as an example. Tor has been in the news recently, and I've had a ton of people ask me how safe it is to use, so I figured one click-bait headline is as good as another for having an open and honest discussion about Tor.
First, let's be transparent. Tor - not unlike the Internet itself - did in fact start out as a project by the US Naval Research Laboratory, and does continue to receive funding by the US Government to support freedom of expression around the world, with targeted efforts to enable free speech and access to uncensored information in countries where Internet connections are heavily filtered.
So, can you trust Tor? How do you know that the NSA hasn't forced Tor into building a "back door" into the Tor software, like they did with RSA Security, and many other pieces of software you use daily, or like what has historically happened to privacy-protecting services like hushmail?
The answer is that you should not actually need to trust the organization behind Tor in order to be confident that the software is built to be safe. This is enabled by the fact that Tor is open source - meaning you can read every line of the code used to build the software you install. Of course, even with open source software, you're trusting whoever compiles it to do so on a secure system and without any extra malicious intent. The Tor Project answers this problem with "deterministic builds", which let you check, independently, that the code posted publicly is the code you're running.
If you use Windows or Mac, both "closed source" operating systems, you are absolutely, 100% trusting that no one in the company, nor any government with significant sway over these companies, has snuck in code to allow remote spying. You have no way to inspect the code running your operating system, and every tool you use on top of it is vulnerable to being undermined by something as simple as a hack to the tiny piece of software that tells your computer how to talk with the keyboard - which could just as easily also store every password you have ever typed. You're also trusting your ISP, every website you log in to, and thousands of other intermediaries and companies - from the ones who provide SSL certificates (enabling the "green lock" of a secure website) to the manufacturer of your wifi router and cable modem - to not betray your trust by accident, under duress, or with malicious intent.
Of course, even back in the green pastures of open source, there is no "absolute" level of trust, no matter how much we'd like there to be. Rare is the user who actually checks the "signature" of the download file against the signature posted online to make sure the tool they're about to install is the intended one. And even rarer is the user who checks in on the deterministic build process (which is still fragile, and so hard to guarantee even then). Even at this level, you are trusting the developers and others in the open source and security community to write solid code and check it for bugs. The Tor Project does an exceptional job at this, but as heartbleed reminds us, huge, horrible bugs can go unseen, even in the open, for a long time. You're also trusting all the systems that the developers work on to not be compromised, to be running code that is also in more or less good condition, and to be using compilers that aren't doing funny things.
For what it's worth, this is hardly a new problem. In my unhumble opinion, I'd still rather have this more open model of shared trust in the open source world than rely on any single company, whose prime motive is to ship software features on time.
So - can you trust Tor? I do. But saying that I "trust" Tor doesn't mean I have 100% faith that their software is bulletproof. All software has bugs, and particularly security software requires a lot of work on the part of the user to actually make it all work out as expected. It's time to talk about trust less as a binary and more as a pragmatic approach to decision making based on best practices, source availability, and organizational transparency.