Submitted by Jon on Wed, 11/18/2015 - 16:54
I have a piece up on Medium about our SAFETAG project. It's a project that a colleague and I have spent countless hours building out, focused on working with small non-profits to assess their risks and giving them a framework to think critically about what really matters to their work, and how to most reasonably address those risks based on potential impact, the real likelihood of each risk happening, and the organization's capacity to change its digital security practices.
It continues to be very rewarding work, and our tiny team has gotten to see a lot of amazing changes take place in the organizations that have been audited through this process. Anyhow. Check out our piece on Medium: https://medium.com/local-voices-global-change/meet-safetag-helping-non-p... , but also take a look at the framework itself at SAFETAG.org. It's an open source framework, and we'd love to see questions and commits over on our GitHub repository!
Submitted by Jon on Tue, 09/15/2015 - 18:48
So, via @runasand, I learn that Buzzfeed's writers have PGP keys:
I cannot express in mere words how happy this makes me in the world of normalizing real people's ability to send actually secure email (especially to journalists!). PGP's various implementations get a lot of heat for their lack of usability, and the process itself, even with a theoretically super-easy interface, is still a complex set of ideas to understand and use in your normal communications. So every organization I see that is willing to tackle this head-on, and (hopefully!) has internal champions, mentors, support, training, and drinking games (I presume) to really encourage adoption is a huge win, be it a 3-person organization or a 100-person one.
Still, I can't help myself:
"All of Buzzfeed's PGP Keys -- you won't believe the last one!" (Sorry, I cannot help myself) https://pgp.mit.edu/pks/lookup?search=%40buzzfeed.com&op=index
"Buzzfeed journalists can send encrypted emails -- but why they send them will blow your mind!"
"Top 10 passwords Buzzfeed journalists use for PGP -- #8 will drive you crazy!"
"3 pieces of metadata not protected by encrypted emails you'd never guess!"
"5 attachments you never thought you'd be able to send encrypted to Buzzfeed!"
Submitted by Jon on Fri, 05/29/2015 - 19:21
Pivot-Twist-Dev takes the classic "pivot-twist" approach for idea pitches of taking a familiar concept and twisting it in a new way to the international development space. You might get ideas like "IT'S LIKE TINDER, BUT FOR NATURAL RESOURCE MANAGEMENT" or "IT'S LIKE FITNESS TRACKERS, BUT FOR SOCIAL ENTREPRENEURSHIP." Are they good? Only a pilot project could possibly tell.
Submitted by Jon on Sat, 01/17/2015 - 21:00
You know what hasn't gotten an update in a while? This blog! What else? The cute kittens of digital security over at I can haz digital security?
The world is busy and imperfect. But hey, kittens!
Also - an SSL update. I've added Cloudflare's Universal SSL to my site, and it communicates with my backend over the old kinda-broken-ish (but not really!) CA SSL cert. I'll be moving to EFF's Let's Encrypt as soon as it's open for business. This should clear up most SSL errors for you, and if you really, really care about a direct SSL connection to this site, I trust your ability to securely contact me about getting that working for you.
Submitted by Jon on Fri, 09/26/2014 - 14:39
I am far from the first to compare digital security practices to safer sex practices. Heck, you can even see a rap career blooming as Jillian York and Jacob Appelbaum suggest that it's time that we "talk about P-G-P" at re:publica.
Talking about software and trust gets both very boring and very depressing quickly. Let's instead move on to the juicy sex-ed part!
A quick disclaimer: First, apologies for the at-times male and/or heteronormative point of view; I'd welcome more inclusive language, especially around the HTTPS section. Second, I am unabashedly pro-Tor, a user of the Tor network, and am even lucky enough to get to collaborate with them on occasion. The garlic condom photo comes from The Stinking Rose.
Super-duper Unsafe Surfing
Using the Internet without any protection is a very bad idea. The SANS Institute's Internet Storm Center tracks "survival time" - the time a completely unprotected computer facing the raw Internet can survive before becoming compromised by a virus - in minutes. Not days, not even hours. This is so far off the charts that, in a safer sex metaphor, using no protection is less like engaging in a risky behavior and more like just injecting yourself with an STD.
Barely less unsafe surfing
Adding a constantly-updated anti-virus tool and a firewall, and making sure that your operating system is up to date, is akin to being healthy. You have a basically operational immune system - congrats! You'll be fine if the person you're sleeping with has the common cold, but anything more serious than that and you're in trouble.
Using HTTPS - visiting websites which show up with a green lock icon - is also a good practice. You can even install some browser plugins like HTTPS Everywhere and CertPatrol that help you out.
HTTPS is kind of like birth control. You may successfully prevent *ahem* the unauthorized spread of your information, but you're still relying on a significant amount of trust in your partner (to have taken the pill, to withdraw), and there are things outside your knowledge that can go wrong - the pharmacist provided fake pills, or you have a withdrawal failure (please note this is digital security advice, and not at all good safer sex advice - a quick visit to Wikipedia is a good start for learning about effective -- and non-effective -- birth control methods!). With SSL certificates, you are still trusting that the website has good practices to protect your information (insert the constant litany of password-reset links you've had to deal with this year here), and there have been cases of stolen SSL certificates, which are tools to help an attacker try and intercept your encrypted traffic.
Slightly Safer Surfing
With digital security, much like with safer sex, some methods can be combined for greater effect, but layering other methods can be a horrible idea. Layering anti-virus tools, firewalls, system updates, and HTTPS on top of any other method here is a universally Good Thing.
Using a VPN is like using a condom, provided by your partner for this encounter, and given to them by a source neither of you have any real trust in. Asking the manufacturer for information about exactly how it's made, or what its expiration date is will often result in grand claims (but no hard evidence). Requests to see the factory floor and verify these claims are presumed to be jokes. The VPN-brand condom generally works, and is definitely fast and easy, but you're placing a lot of trust in a random company you found while searching the Internet, and probably also the cheapest one you found. On top of that, you're also still trusting your partner to not have poked any holes in the condom.
Overall, it's still much better to be using the VPN than not, and if you trust your partner (i.e. the website or service you're going to), and you trust the VPN provider for whatever reason - perhaps a widely trusted company has performed an independent audit of the VPN, or you or your workplace set it up yourselves - then for most situations you're pretty safe. Layering a VPN on top of the above tools is good, but layering VPNs on VPNs or on other networks is actually not dissimilar to layering condoms - it makes failure in very weird (and, let's face it, awkward) ways /more/ likely.
Still, though, wouldn't it be better if you could rely even less on trust, and have that trust backed up with evidence that you yourself can look at?
Using Tor is like using a condom which you not only know has gone through extensive testing, you can even visit the factory floor, look at the business' finances, and talk with the engineers and factory staff. It's /still/ not 100% safe, but it is a heck of a lot safer, and you can verify each and every claim made about what it does and does not do.
And to be clear here, if you're logging in to a website over Tor, that website now knows who you are (you're no longer anonymous to them, and possibly others watching you do this along the wire), and that website is storing your password and may fail to protect it at some point. That website can still turn out to be malicious and attack you, and very powerful adversaries can even specifically try and intercept traffic coming from a website and going into the super-secret Tor network, change it, and include an attack they know works well against out of date versions of the browser you're using. An out of date Tor browser is like an expired condom - it's best not to bet your life on it.
To really (over-)extend the analogy, the Tor-branded condom business happens to be heavily funded by a religious organization that is strongly against birth control (and indeed has an entire project that tries to undermine birth control methods, to the point of installing secret hole-punchers in condom factories). This same organization (it's large!) does have a different and vocal component that strongly supports safer sex, and not only funds giving away condoms, but also the production of them. It's not, seemingly, the most logical setup, but hey, we're talking religion, politics and sex - logic doesn't always come into play here.
Like sex, there is no truly "safe" way to play on the Internet, and expecting abstinence from the Internet just isn't realistic. So, be careful out there, adopt safer practices, and keep your wits about you. Good luck!
Submitted by Jon on Fri, 09/19/2014 - 09:05
There's a budding conversation on "trust" over in the twitterverse. I began a draft post a while back that compared Tor (the amazing privacy and anti-censorship network) and all privacy-protecting software to condoms. More on that soon, but let's actually talk about how you might have trust in a software project, using Tor as an example. Tor has been in the news recently, and I've had a ton of people ask me about how safe it is to use, so I figured one click-bait headline is as good as another in having an open and honest discussion about Tor.
First, let's be transparent. Tor - not unlike the Internet itself - did in fact start out as a project by the US Naval Research Laboratory, and does continue to receive funding by the US Government to support freedom of expression around the world, with targeted efforts to enable free speech and access to uncensored information in countries where Internet connections are heavily filtered.
So, can you trust Tor? How do you know that the NSA hasn't forced Tor into building a "back door" into the Tor software, like they did with RSA Security, and many other pieces of software you use daily, or like what has historically happened to privacy-protecting services like hushmail?
The answer is that you shouldn't actually need to trust the organization behind Tor in order to be confident that the software is built to be safe. This is enabled by the fact that Tor is open source - meaning you can read every line of the code they use to build the software you install. Of course, even with open source software, you're trusting whoever is compiling it to do so on a secure system and without any extra malicious intent. The Tor Project answers this problem by using "deterministic builds", which let you check, independently, that the code posted publicly is the code you're running.
If you use Windows or Mac, both "closed source" operating systems, you are absolutely, 100% trusting that no one in the company, nor any government with significant sway over these companies, has snuck in code to allow remote spying. You have no way to inspect the code running your operating system, and every tool you use on top of it is vulnerable to being undermined by something as simple as a hack to the tiny piece of software that tells your computer how to talk with the keyboard, which could just as easily also store every password you have ever typed in. You're also trusting your ISP, every web site you log in to, and thousands of other intermediaries and companies, from the ones who provide SSL certificates (enabling the "green lock" of a secure website) to the manufacturer of your wifi router and cable modem, to not betray your trust by accident, under duress, or with malicious intent.
Of course, even back in the green pastures of open source, there is no "absolute" level of trust, no matter how much we'd like there to be. Rare is the user who actually checks the "signature" of the download file against the posted "signature" online to make sure the tool they're about to install is the intended one. And even rarer is the user who checks in on the deterministic build process (and that process is still fragile, so hard to guarantee even then). Even at this level, you are trusting the developers and others in the open source and security community to write solid code and check it for bugs. The Tor Project does an exceptional job at this, but as Heartbleed reminds us, huge, horrible bugs can go unseen, even in the open, for a long time. You're also trusting all the systems that the developers work on to not be compromised, to be running code that is also in more or less good condition, and to be using compilers that aren't doing funny things.
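For the curious, that "check the signature" step is less mysterious than it sounds. Here's a minimal sketch in Python of the simplest version - comparing a downloaded file's hash against the value posted on the download page. (The file name and hash below are invented for illustration; the Tor Project's real downloads are verified with GPG signatures, which is a stronger check than a bare checksum.)

```python
import hashlib

def sha256_of(path):
    """Compute the SHA-256 digest of a file, reading it in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def matches_published(path, published_hex):
    """Compare a local file's digest against the hash posted online."""
    return sha256_of(path) == published_hex.lower()
```

If the function returns False, the file you downloaded is not the file the developers published - whether by corruption or by something more malicious - and you shouldn't install it.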
For what it's worth, this is hardly a new problem. In my unhumble opinion, I'd still rather have this more open model of shared trust in the open source world than rely on any single company, whose prime motive is to ship software features on time.
So - can you trust Tor? I do. But saying that I "trust" Tor doesn't mean I have 100% faith that their software is bulletproof. All software has bugs, and particularly security software requires a lot of work on the part of the user to actually make it all work out as expected. It's time to talk about trust less as a binary and more as a pragmatic approach to decision making based on best practices, source availability, and organizational transparency.
Submitted by Jon on Sat, 04/12/2014 - 20:11
There's a point here about heartbleed and security — I promise. Keep with me.
As I am wont to once the weather finally begins to coöperate, I've been trying a few new things out on the grill. When I'm in this exploratory phase, I love digging through the infinitely interesting BBQ blogs of the Internet - they're full of hard-won knowledge about fire and smoke, but often lack a certain level of technical polish.
Case in point, my reference blog for this week's experiment was a well-seasoned old blog, but they'd lost every single comment from years of discussions. Why? No technical glitch, but simply because they'd chosen a private company to manage their comments - and it went out of business, leaving them not only without a commenting tool, but without those years of educational clarifications and discussions.
Ownership and control matter. This is true when you're talking about your possessions, your house, your comments on a BBQ blog, and your software. I've railed against app-ification before, but I want to make a slightly deeper point here. If you bought a house, but with the condition that any repair, no matter how minor, had to be contracted out to the previous owner (and only them) at a cost of their choosing - would you feel you really owned or controlled that house? Would you buy a car whose hood was locked shut, accessible only to the specific dealership where you bought it?
This is very much the situation with the vast majority of software you run on your computer. From Microsoft Word to Apple's iTunes, and even more insidiously OSX and Microsoft Windows themselves - all are locked away from you. You've been forced to pay hundreds of dollars for them with the purchase of any computer - but you have no control or real ownership over them.
The alternative is what's called "free" or "open source" software (people get into fierce debates on the terminology here, which I'm ignoring for the time being). All software starts with instructions that are more-or-less understandable by humans; commands like if (this thing) then (do this other thing). Generally speaking, this "language" is then turned into something that's closer to the more basic tools that computers understand. Imagine a particularly skilled dog with a great memory - by stringing together enough fetches, play-deads, stops, roll-overs and so on, you could eventually come up with a sequence of commands that would have this dog go out, buy a beer for you at the corner store, and bring it back.
"Closed source" software only gives you the computer-understandable version, and it's surprisingly difficult to turn that back into a simple, human-understandable chunk of logic. "Open source" software, on the other hand, always provides you with the original, understandable language.
This means a lot of things - one, you can tweak it. If you don't like the beer that your dog fetched, you can find the human-speak parts of the commands where it's selected, and make sure your preference for hoppy beer is respected, and then turn it back into the commands your computer can do.
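That beer-preference tweak might look something like this in practice - a toy sketch, with every name here invented purely for illustration:

```python
# A toy version of the dog's "fetch a beer" command sequence.
STORE_SHELF = ["lager", "hoppy ipa", "stout"]

def fetch_beer(preference="lager"):
    """Walk the shelf and grab the preferred beer; fall back to the
    default the original author chose if it's not in stock."""
    for beer in STORE_SHELF:
        if beer == preference:
            return beer
    return STORE_SHELF[0]

# Closed source hands you only the compiled result of this logic.
# With the source in hand, a one-line tweak respects your taste:
def fetch_hoppy_beer():
    return fetch_beer(preference="hoppy ipa")
```

With closed source software, you'd be stuck with whatever default the vendor compiled in; with the source, the change is a one-line edit you can make, recompile, and share.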
This ability to change how your own tools work itself has many additional benefits - you can share that change, and if it's useful enough, that change itself will be included in the next version of the "core" software that everyone uses.
And finally, Heartbleed
This openness also means anyone can look at the logic that is driving their tool. This means that when you start talking about trusting software, there's a heavy preference towards the software that you can look at the source code of, and even more preference towards software where a lot of people have been looking at this same code.
So, that failed with Heartbleed. The team behind OpenSSL is tiny compared to its impact: two out of every three secure servers in the world are running the software that this four-person team manages. And on New Year's Eve 2011, one of their developers committed a very, very subtle piece of code that basically didn't make sure that all the doors were closed behind it, and no one at the time (or anyone who'd taken a look in the two years since) noticed.
So obviously the whole open source thing is broken, right? The bug is out in the open for anyone to figure out, but no one fixed it!
It's not quite so simple. Do you really think that a working piece of closed-source code gets a second glance from its development team? They're just as bound by priorities and shipping product releases as an open-source team, but their code gets locked away without even the chance for a third party to find a bug and lend a hand - and it's no more protected than open source tools from the kind of concentrated probing and testing that uncovers flaws just like Heartbleed.
So yes, Heartbleed was bad, but it was also a reminder of how powerful the open source software world can be in finding and fixing a bug. Most of us woke up with some updates to install, and that was the end of it. What horrible, dark bugs are lurking, unfindable, in every piece of closed source software? The precise number is unknowable, but the prevalence of viruses and malware affecting deeply closed systems like Windows might be a strong hint.
No more broken hearts
Going forward, I obviously have a long wishlist of things I'd like to see - a public discussion on what trust in software really means, better tools on every platform to guarantee software packages are what they claim to be (Tor is doing amazing work here), a return to interoperable standards, especially when we're talking security systems... But as a beginning point, simply better support structures for open code development would be nice. We have volunteers building the basic structures of the Internet - which is an absolutely amazing and good thing - but let's make sure they have the time and resources to do it.
Submitted by Jon on Mon, 02/24/2014 - 09:56
Senator Cruz's office's response to my personal note about surveillance I sent as part of TheDayWeFightBack:
Thank you for sharing your thoughts regarding the National Security Agency's surveillance program. Input from fellow Texans significantly informs my decision-making and empowers me to better represent the state.
During my time in the Senate, I have consistently reiterated my support of programs that can detect impending threats to our homeland or diplomatic and military facilities abroad. It is imperative, however, that we strike an appropriate balance between remaining vigilant against terrorism and protecting the civil liberties guaranteed to the American people by the Constitution.
Unfortunately, the government has eroded the American peoples' trust by the secrecy surrounding these surveillance programs. I will continue working with my Judiciary Committee colleagues and the entire Senate to review existing law and the actions of the Administration to ensure that we protect our Constitutional liberties. In doing so, I hope to guarantee true accountability in these programs so that we protect Americans from the threats of both terrorism and unwarranted government intrusion.
Thank you for sharing your views with me. Please feel free to contact me in the future about any issue important to your family. It is an honor to serve you and the people of Texas.
Senator Ted Cruz
Submitted by Jon on Sun, 02/16/2014 - 15:17
Thanks to the great, community-focused CAcert, plus DreamHost supporting SSL on virtual hosts (which is a boring technical detail, but oh so exciting), I now have SSL working for https://JonCamfield.com - most of the site should be 100% SSL, with a few included images here and there still being on non-SSL sites.
For OTR-enabled chat, my fingerprint is FE0E870C 40A3B334 5E6E84F0 D013369F 3C064E4C (for most of my devices now, thanks to KeySync!)
Submitted by Jon on Mon, 01/27/2014 - 10:58
I once rented a part of a house that had been, well, not fully cleaned out from the previous occupants. It was a house full of hackers that had been variously occupied by friends and friends-of-friends for almost a decade as they passed through Austin on their way from or to new lives, which is to say, it had, well, "character".
One of the odder things left behind by the previous inhabitants was a literal pile of Final Fantasy boxes, completely intact save for the all-important registration codes. A bit of digging uncovered a fascinating tale of cross-border, tax- and fee-free value transfer. The former occupant, let's call him "Bob", was engaged in a business proposition with a colleague based in South Korea, let's call her "Alice". Whatever version of the RPG Final Fantasy had just been released in the States (only) had proved very difficult to pirate, creating a huge untapped demand in Korea. Koreans, however, had been happily hacking away at another RPG which was only just catching on Stateside. So, Bob would tear off and destroy these registration codes, emailing the codes themselves to Alice in Korea. Alice, in exchange, would provide Bob with powerful and rare in-game items for the newly-popular game - these were of less value in the Korean market, as it was saturated with players and therefore items, but there was no arbitrage market into the States -- before Alice and Bob, at least. Bob could then sell these items on online grey markets, effectively creating a way for both Alice and Bob to profit (rather lucratively, from my understanding) from local markets, and to transfer value across borders without incurring bank costs, wire fees, or, for that matter, taxes. This setup lasted as long as both were able to extract value from the arbitrage process, but obviously wasn't able to scale or even easily re-adapt to new opportunities.
With the rise and increasing stability of bitcoin as an actual contender for a digital currency, the global market suddenly starts looking a lot more local.