Let's Talk About Secure Messaging Apps

in #security · 8 years ago

This is a topic that comes up every now and then, and it's one I have a number of fairly carefully formed opinions on, but I've not taken the time to organize those opinions and write them down, so this post is an effort to do that.

Some Background

First though, let's lay out the context: why do we care at all? I think most of us are already on the same page here, but for the sake of clarity, let's explore it anyways. There's a million messaging mechanisms out there, and some of them are already quite widespread, with email and SMS (texting) being your basic, as-close-to-universal-as-anything-gets options. Why do we want something else?

Well, we each have to answer that question for ourselves, but at this point I know that emails and texts are being collected and stored by people I don't know or trust. Private conversations between me and people I do know and trust are being intercepted, and could easily be tampered with in transit, by people I do not know or trust. From the start, that knowledge alone rubs me the wrong way. Even if I don't have something I actively want to hide, we're talking about my intimate details not merely being laid bare, but indexed and made searchable, to people I do not know or trust. Why would I want that? All else being equal, why would I not prefer to have that data kept private? Of course I prefer that it be kept private.

But let's not be coy: we all have secrets and private thoughts and discussions. If you claim not to have secrets, I'll know never to trust you with any of mine. I absolutely do work with some information that I want kept private, and I want good technical solutions for accomplishing that. Moreover, I know that there are people out there whom I actively distrust, who are actively recording my conversations, and everyone else's, online.

So what, then, do I want in a messaging system? Well, thanks to my background in cybersecurity, I understand quite a lot about what is possible and what is desirable, and perhaps I can help some of my readers shortcut directly to the end of this search without going to grad school for cryptography like I did.

A Bit About Security in General

Everything in security is about adversaries. I want X, someone else wants (not X). Plug in whatever you want for X, and start figuring out strategies: we employ one strategy to get X, someone else employs a counterstrategy. Security is about playing those games out in our heads until we come up with our strategy for getting X, such that there is no effective counterstrategy.

And security is always open-ended: is there actually an effective counterstrategy that we didn't foresee? We don't know. If there is, and our adversary finds it before we do, we obviously want to figure out what that counterstrategy is, and then define our counter-counterstrategy to thwart it.

In practice, these games tend to continue indefinitely with each side either finding a still more effective strategy, or losing interest and giving up. In practice, there is almost always a more effective counterstrategy out there still to be found, it's just a question of motivation. This is the origin of the truism "every system can be hacked."

So from the start, we can state with confidence: there is no such thing as a truly secure system, but there are probably a whole string of options, each a closer approximation to that ideal than the last, having been designed to withstand all known counterstrategies. This general pattern holds true for pretty much every area of security, be it cyber or otherwise. Every lock can be picked, but if you care, you can get one that makes it really, really hard, so that most lock-pickers will give up before they succeed.

In information security specifically, which is the most relevant field to messaging systems, there are three main goals that all secure systems try to attain. These are known as the CIA Triad (no relation to the Central Intelligence Agency):

  • Confidentiality -- Only those who are supposed to know the information can see it
  • Integrity -- Only those who are supposed to be able to modify the information can modify it, and they can only modify it in the appropriate ways
  • Availability -- All those who are supposed to have access to the information do have access to it, readily and easily

Within the study of information security, a system's security is formally defined as that system's ability to achieve those three goals. If a system fails to meet any of those goals, it is insecure to the extent that it falls short of them. The point to understand here is that security is not all-or-nothing. A system might have awesome confidentiality and integrity, but be really hard to use, and within information security, that system is not as secure as it could be. Whether that system is more or less secure than one that has awesome confidentiality and availability, but makes no promises of integrity, is formally undefined and entirely a matter of opinion.

There is no official badge of secure-ness that a system can get; it doesn't work that way. Information security gives us a language with which to understand and discuss what parts of a system are or are not secure against what attack strategies. It doesn't give us an objective rule or score as to how secure a system is.

And Messaging Apps, Specifically?

In messaging apps, then, security means that only the people I intended to be able to read my message can read it (confidentiality); that the message they got was exactly what I sent (integrity); and that all of them could easily read it (availability). So to warm up, let's look at our examples from earlier, email and texting:

  • Confidentiality
    • Email makes no attempt to hide the contents of communications from parties other than the addressed recipient
    • Text messages are encrypted between cell phones and the tower, but this encryption has been thoroughly broken since the 90's, and no attempt is made to hide the contents of messages while in transit between cell towers
  • Integrity
    • Email makes no attempt to prevent tampering with the contents of the message in transit, nor does it make any attempt to render such tampering evident after the fact
    • SMS makes no attempt to prevent tampering with the contents of the message in transit, nor does it make any attempt to render such tampering evident after the fact
  • Availability
    • Emails are usually delivered and people usually don't have much trouble getting them, but no formal guarantees are made that emails will be delivered in order or at all
    • Texts are usually delivered and people usually don't have much trouble getting them, but no formal guarantees are made that text messages will be delivered in order or at all

So neither of these systems ranks high in confidentiality or integrity, but both do pretty well in availability. SMS is arguably better at confidentiality, but when its feeble attempt at encryption has been breakable to every hacker in his mom's basement since 1999, it's hardly even worth mentioning.

OK, So What Do We Want?

In general, we want an option that covers all three areas (confidentiality, integrity, availability) reasonably well. So let's briefly discuss the state of the art in each of these areas:

Confidentiality is generally provided by encryption. Encryption means scrambling the message so unauthorized people can't read it. There are a lot of different encryption algorithms out there, and many of them are broken, meaning people who aren't supposed to be able to read the messages can decrypt them anyway. So we want to be sure to use an encryption algorithm that isn't broken, such as AES (the Advanced Encryption Standard, which is really just the title the National Institute of Standards and Technology, or NIST, gives to the encryption algorithm it most trusts at any given time; the title is currently held by an algorithm named Rijndael).
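
To make that concrete, here is a minimal sketch in Python of AES doing its job. I'm assuming the third-party `cryptography` package purely for illustration; this isn't the code any particular messaging app runs, just the shape of symmetric encryption with a shared secret key:

```python
# A minimal sketch of symmetric encryption with AES (GCM mode), assuming
# the Python 'cryptography' package is installed. Illustrative only.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)   # the shared secret key
nonce = os.urandom(12)                      # must be unique per message

ciphertext = AESGCM(key).encrypt(nonce, b"meet me at noon", None)
print(ciphertext)                           # gibberish without the key

plaintext = AESGCM(key).decrypt(nonce, ciphertext, None)
print(plaintext)                            # b'meet me at noon'
```

The algorithm itself is the easy part; as the next paragraphs explain, the hard part is who holds the key and where along the message's journey the encryption is actually applied.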

But encryption is a bit more complicated than that. Consider SMS: as we discussed above, SMS uses broken encryption, and it only uses this encryption between the cell tower and the cell phone. Everywhere else, no encryption is used. So even if SMS used AES, it wouldn't be very confidential because it would only hide the message for part of its journey. For encryption to give us full confidentiality, it must be End to End, which means that the sender encrypts the message so that no one except the intended recipient can decrypt it. Even if the message is not decrypted by a middleman, the mere existence of a middleman who could decrypt the message breaks the End to End property of an encrypted system. In practice, designing a system where no such middlemen can exist is quite tricky, and just because a system is called "End-to-End Encrypted" doesn't mean it really is.
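
To illustrate the end-to-end idea, here is a hedged sketch (using the PyNaCl library as an assumption, not a description of how Wire, Signal, or anyone else actually implements it) of a sender encrypting directly to the recipient's public key, so that every relay in between only ever handles ciphertext it cannot open:

```python
# A minimal end-to-end sketch, assuming the PyNaCl package. The sender
# encrypts to Bob's public key; servers in the middle see only ciphertext.
from nacl.public import PrivateKey, SealedBox

bob_secret = PrivateKey.generate()      # never leaves Bob's device
bob_public = bob_secret.public_key      # published for anyone to use

# Alice (the sender) encrypts using nothing but Bob's public key.
sealed = SealedBox(bob_public).encrypt(b"for Bob's eyes only")

# Any server relaying 'sealed' holds no key that can open it.
# Only Bob, holding the matching private key, can decrypt it.
print(SealedBox(bob_secret).decrypt(sealed))   # b"for Bob's eyes only"
```

The hard part in real systems is the step this sketch skips: how the sender gets the recipient's genuine public key in the first place, which is exactly where middlemen try to sneak in.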

So to sum up confidentiality, we achieve this through encryption, but not only do we need encryption, we need a trustworthy encryption algorithm, and that algorithm has to be deployed in such a way that we don't accidentally empower unauthorized parties to decrypt our messages. This is quite tricky to do in practice, and people make mistakes at it every day.

Integrity is sometimes provided by the encryption algorithm, but is sometimes provided by other algorithms. For example, AES alone does not provide any guarantees of integrity -- an AES encrypted message might have been tampered with, even if the tamperer didn't know what the message said. Suppose Eve has recorded several encrypted messages from Alice to Bob, including one that says "Yes" and another that says "No." Without actually knowing which one is which, Eve could simply swap one for the other, and this will destroy the integrity of the conversation without necessarily compromising its confidentiality.
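
Here is a small sketch of that kind of tampering, again assuming the Python `cryptography` package. With AES in an unauthenticated mode such as CTR, Eve can flip bits in the ciphertext and predictably change the plaintext without ever reading it; an authenticated mode such as GCM refuses to decrypt anything that has been modified. (Catching whole messages being swapped or replayed, as in the Yes/No example, additionally needs sequence numbers or similar bookkeeping at the protocol level.)

```python
# A hedged sketch of ciphertext malleability, assuming the Python
# 'cryptography' package. Unauthenticated AES-CTR lets Eve alter the
# message blindly; AES-GCM detects any modification.
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes
from cryptography.hazmat.primitives.ciphers.aead import AESGCM
from cryptography.exceptions import InvalidTag

key, nonce = os.urandom(32), os.urandom(16)
msg = b"Transfer $100 to Bob"

enc = Cipher(algorithms.AES(key), modes.CTR(nonce)).encryptor()
ct = bytearray(enc.update(msg) + enc.finalize())

ct[10] ^= ord("1") ^ ord("9")   # Eve flips bits over the '1', key unknown

dec = Cipher(algorithms.AES(key), modes.CTR(nonce)).decryptor()
print(dec.update(bytes(ct)) + dec.finalize())   # b'Transfer $900 to Bob'

# The same trick against AES-GCM fails: decryption raises InvalidTag.
gcm_nonce = os.urandom(12)
gcm_ct = bytearray(AESGCM(key).encrypt(gcm_nonce, msg, None))
gcm_ct[10] ^= ord("1") ^ ord("9")
try:
    AESGCM(key).decrypt(gcm_nonce, bytes(gcm_ct), None)
except InvalidTag:
    print("tampering detected")
```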

Cryptographic protocols ensure integrity in a number of ways, and the issues at hand are complex enough to warrant several posts, so I won't attempt to cover them in detail here. It is important to note, however, that integrity and confidentiality often go hand in hand: while it's entirely possible to have either one without the other, we usually secure them both together, and when one goes, the other often goes with it.

The state of the art is that computers and software are now quite good at establishing a securely encrypted link to someone, with guarantees of confidentiality and integrity for the messages that travel over it. What the software can't guarantee is that the someone on the other end is who you think it is. To be sure of that, the humans involved must take some additional steps to verify that no third party sneaked into the middle and started quietly passing messages back and forth between you and your intended recipient, possibly reading and/or changing them in transit. This is known as a Man in the Middle attack, or MITM.
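
As an illustration of those "additional steps," here is a rough sketch of comparing key fingerprints out of band. This is my own toy construction, not Signal's actual safety-number algorithm or anyone else's, but it shows the general shape: both people derive a short string from the public keys their apps are actually using and read it to each other over a separate channel; if a man in the middle substituted keys, the strings won't match.

```python
# A minimal, illustrative fingerprint comparison; real apps use their own,
# more careful constructions (e.g. Signal's safety numbers).
import hashlib

def fingerprint(key_a: bytes, key_b: bytes) -> str:
    # Sort so both parties compute the same string regardless of ordering.
    digest = hashlib.sha256(b"".join(sorted((key_a, key_b)))).hexdigest()
    # Truncate and group into chunks that are easy to read aloud.
    return " ".join(digest[i:i + 4] for i in range(0, 24, 4))

# Alice and Bob each call this with the keys their own apps report, then
# compare the results in person or over a phone call. A mismatch means
# someone swapped keys somewhere in the middle.
```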

Usually we solve this problem by trusting a central server to keep track of who is who and to make sure that everyone is really talking to the person they think they're talking to, but that server could just as easily lie and grant itself or someone else MITM access. Blockchain technology provides a decentralized, trustless solution to this problem, allowing software to associate a human-provided username with a particular account without trusting anyone who might lie about that pairing, but this is pretty cutting edge, and I don't know whether anyone is doing it securely yet.

Availability is the red-headed stepchild of information security. While security blowhards will pontificate long and hard about confidentiality and integrity, frequently speaking as though these are the only goals that matter, in practice availability is the metric that actually guides people's choices in software. Consider email and SMS: although they both abysmally fail at confidentiality and integrity, they're highly available, which is to say they're easy and reliable, so everyone uses them. Simultaneously, other systems like GPG/PGP may have strong confidentiality and integrity guarantees, but only cryptography experts know how to use them, and even they rarely actually do because they're so much effort. So while confidentiality and integrity get all the press, availability is what makes the decision, and availability essentially boils down to "Yeah, but can my grandma use it?"

And therein lies the rub: a 'secure' messaging system that no one uses is not actually secure, because it's not available. Usability is part of security. It is a key part. Don't let anyone tell you otherwise. A truly secure system isn't just hard to break technically, it must also be easy to use correctly, and hard to use incorrectly.

In many cases, security boils down to a choice: do we take confidentiality and integrity at the cost of availability, or do we take availability at the cost of confidentiality and integrity? My goal is to find an acceptable balance of both.

Down to Brass Tacks: My Recommendations

In case it isn't yet clear from all of my discussion on security so far, determining whether a particular messaging app is secure or not is really freaking hard. To emphasize, when someone out there releases some app and says "This is a secure messaging app," that claim means absolutely nothing until a lot of really smart people who understand security at least as well as I do spend a lot of time and effort reviewing the underlying protocols and application code to verify that claim. I consider myself qualified to do such a verification, but I rarely do because it's a bloody ton of work to do well. For this reason, it's generally only security professionals (expensive) and academic institutions that go to the trouble.

Of the secure messaging apps I have made any effort to review, my recommendations today boil down to two different apps: Wire and Signal. To be clear, I have not undertaken a formal review of these apps myself (someone would have to pay me to do that, and I would charge a lot); however, I have read their own security claims and have examined the formal reviews of others.

Wire (https://wire.com) is my favorite, since it ranks pretty well in all categories. It's easy to use, it has pretty rigorous security standards, it's a partially open source system (the client apps and parts of the server code are open source), it has undergone formal third party security review (with acceptable results), and there don't appear to be any known serious flaws or vulnerabilities.

If Wire is my favorite, Signal (https://whispersystems.org) is my second favorite. Though far less feature-rich than Wire, it is based on the well-known, thoroughly reviewed, and widely implemented Double Ratchet protocol designed by Open Whisper Systems. This is probably the best protocol out there for confidentiality and integrity, since it's so well-known and battle-tested, being the protocol behind Signal, WhatsApp, Google Allo, and Facebook Messenger, to name a few just off the top of my head.

The reason I recommend Signal over these other apps is that, although they are more user-friendly and widely used (availability), they are also privacy risks: they are all closed-source apps owned by companies known for hoovering up and storing forever any and all private data they can find. Thus we can safely assume that, unless additional evidence shows otherwise, when using these apps we have no confidentiality from their makers. Signal, in contrast, is open source, and the company behind it, Open Whisper Systems, publicly commits to protecting their users' privacy by retaining as little information as possible about users' communications, even where doing so prevents them from implementing user-friendly features. This improves confidentiality at the cost of availability.

So Signal may offer a bit more confidentiality and integrity, but Wire is a lot easier and more fun to use (roughly: higher availability). I also note that Signal requires a phone number, which they use to improve integrity at the cost of privacy (a facet of confidentiality). Also, for maximum security, both of these apps support an additional manual key verification step to ensure that no Man in the Middle has crept into the connection.

Honorable Mention goes to Keybase (https://keybase.io), which was recently pointed out to me. Based on the widely respected, but rarely used, GPG/PGP protocol, Keybase makes GPG easy enough that people can now use it painlessly. Furthermore, Keybase leverages the Bitcoin blockchain to help provide confidentiality and integrity guarantees without the manual verification steps most other apps rely on. Of course, manual verification can still be performed for optimal security. From what I've seen so far, Keybase might be more secure than Wire or Signal; however, I haven't spent enough time looking into it to form a trustworthy opinion.

And Now for the Snake in the Grass

Extreme Dishonorable Mention goes to Telegram (https://telegram.org), which I want to highlight specifically as an app which, in my opinion, is not secure at all.

Telegram is marketed, quite emphatically ("Telegram is more secure than mass market messengers like WhatsApp" is a direct quote from their FAQ page), as a secure messaging app; however, shortly after its release, Moxie Marlinspike (a well-known and respected hacker, co-author of the Double Ratchet protocol that powers Signal and others) pointed out irregularities in the protocol which render its security claims suspect.

One would expect a reasonable team acting in good faith to re-evaluate their protocol's security, and perhaps enlist a respected security firm to review their designs, after such a cold reception from the cryptography community. Instead, Telegram doubled down and launched an open challenge to break Telegram's security. This would seem to indicate confidence in the security of their protocol, and to put the ball in the court of those claiming it is flawed. In response, Marlinspike pointed out that the challenge was designed in such a way that it can't be won, no matter how bad the crypto is. He even provided an example of a trivially breakable crypto protocol, and pointed out that even that protocol can't be broken under the rules of the challenge.

This conversation is fairly old at this point, but Telegram persists in marketing itself as a secure messaging app. There are plenty of unsubstantiated claims in the wild that Telegram is secure, but I've never seen one substantiated by the underlying cryptography. There are, however, plenty of articles on how it's not secure, from respected sources that provide substantial evidence for their claims. And there are now at least two papers formally presenting actual attacks on Telegram's protocol: 1, 2 (I have not reviewed these papers in detail; I see no reason to spend the time on it).

So why so much hate for Telegram? Because they still actively market their app as secure, and at this point, I can only assume that claim is an intentional lie. I try to give people the benefit of the doubt, and apply Hanlon's Razor ("Never attribute to malice that which is adequately explained by stupidity"), but at some point I have to ask myself: can I really believe they're that stupid? Or, are they trying to deceive people? I honestly cannot imagine that someone can be that stupid; I think anyone acting in good faith would have questioned themselves by this point, and in this case, once the question is honestly asked, the answer is honestly obvious. So while I have no positive proof that they are intentionally lying, all signs seem to point that way. Please tell me, dear reader, am I being unreasonable?

Conclusion

So to wrap things up, let me emphasize that this is a complex issue, and it's one that I do not take lightly. I have a great deal of experience that I believe qualifies me to opine on what is and is not a secure messaging app, but I do so with hesitation because even for me, it's a lot of work to form a quality opinion. It is for that reason that I don't have an opinion on every messaging app out there. I have found a couple of apps that I do trust for my day-to-day messaging, and I'm always on the lookout for more, but at the end of the day, this is a game of one-upping that we'll be playing forever, because that's how security works.

I recommend Wire and Signal, and possibly Keybase. I strongly warn all to actively distrust Telegram. These opinions are based on thorough and thoughtful, if not professional grade, reviews of the software and security in question, based on a background of formal training in cryptography and cryptographic software protocol design, reverse engineering, analysis, and exploitation at Rensselaer Polytechnic Institute, thanks to which I am able to understand and participate in technical security reviews.

My opinions are my own, and they are only intended to be good enough to satisfy me, which is a highly subjective standard. They are provided in the hopes that they are useful, but I make no promises that they are valid. If they aren't, please let me know. :)

Thanks for reading

With a background in software development and a passion for security, Nathan has identified blockchain technology as his niche. He is dedicated to creating applications which empower individuals to shape a better world for themselves and others.


very much informative post ..

For phones my favorite has to be Silent Circle. It's a paid app, and they offer an encrypted phone as well.

In my post I linked to this article by The Grugq, another hacker whose name I recognize. He recommends Silent Circle, and his recommendation is probably at least as good as (read: probably better than) mine. :)

That was so informative, thank you for all your hard work, I have followed you because it was so good. 🕉

There are only a few things I dislike about Signal (even though it is still better than most other end-to-end encrypted messaging apps out there).

One is their annoying insistence on using phone numbers, which as you mentioned hurts privacy (especially with the app sending contact information from the phone to their servers to help with discoverability). It is nice that apparently as of December Wire allows usernames. However, I read that they are apparently using an older version of Open Whisper Systems' Double Ratchet protocol, which, short of actually studying in detail what those differences are, makes me a little bit nervous to trust compared to Signal's more battle-tested protocol.

Another, which I am not sure if any of these secure open source end-to-end encrypted messaging apps have currently resolved, is the need for a great user experience when using multiple devices while maintaining synchronization between them. If I am not mistaken, I think the Double Ratchet protocol makes doing this more complicated because there needs to be communication between the various devices to keep everything in sync. It should be possible to, for example, jump between laptop and mobile app to respond to messages in a conversation thread while maintaining end-to-end encryption and having the full identical copy of the conversation on both devices.

I would love to see a messaging application use the Signal Double Ratchet protocol but leveraging human-readable account names on a blockchain to establish the secure end-to-end encryption between the users rather than phone numbers or even usernames in a namespace managed by a central authority.

By the way, there are two properties relevant for messaging applications that you didn't bring up in your post: forward secrecy (so that future compromise of private keys does not need to leak past history, unless explicitly backed up) and plausible deniability (so that even the person you were communicating with does not have cryptographic proof they can show to others that you said any particular thing). The Signal protocol (and I guess Wire's protocol too as a result) have both properties. The trade-off with having forward secrecy is that if users want to have their conversation history backed up, it requires a more complicated process to do so. I am not aware if Keybase has plausible deniability. They have made the design decision to avoid forward secrecy for the sake of easier backup of conversation history, however they are apparently planning to add a special "exploding messages" feature which will have forward secrecy, which seems like an appropriate compromise.
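
(For readers who haven't met forward secrecy before, here is a toy sketch of the core idea. It is my own illustration, not the actual Double Ratchet, and far simpler than what Signal or Wire do: each message gets a fresh key derived by hashing the previous chain key, and the old key is thrown away, so stealing today's key doesn't unlock yesterday's messages.)

```python
# A toy hash ratchet illustrating forward secrecy; not a real protocol.
import hashlib, os

chain_key = os.urandom(32)               # initial shared secret

def ratchet(chain_key: bytes):
    """Derive a one-time message key, then advance (and forget) the chain."""
    message_key = hashlib.sha256(b"msg" + chain_key).digest()
    next_chain = hashlib.sha256(b"chain" + chain_key).digest()
    return message_key, next_chain

for i in range(3):
    message_key, chain_key = ratchet(chain_key)
    # message_key encrypts exactly one message and is then deleted.
    # An attacker who steals chain_key *now* cannot run the hashes backwards
    # to recover earlier message keys, so past messages stay protected.
    print(i, message_key.hex()[:16])
```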

Great points all around, @arhag, thanks! Yes, Wire's protocol is based in part on the Axolotl Ratchet, which was later upgraded to be the Double Ratchet, which is technically only part of the Signal protocol (as Double Ratchet only manages the cryptography, and not the key exchanges, etc etc). According to Wire, they went off-book from the official Axolotl protocol because they wanted to not require a phone number, and Axolotl (and Signal still today) use the phone number to provide some of their security guarantees, so removing it isn't trivial and takes some innovating.

And yes, I think I was clear that I do not regard Wire's protocol as being as secure, from a confidentiality and integrity standpoint, as Signal's; however, Wire is much more available, what with its easy and friendly usernames (rather than the SUUUUUUPER finicky phone numbers in Signal, et al.) and its nice UI/UX. And all of that comes with the addendum that, AFAIK, Wire's protocol has not been seriously attacked, even after having been formally reviewed by a university's security department (they found a potential MITM vuln that the servers could exploit on video calls, I think, but MITMing video calls is tricky in its own right, and Wire said they knew about it and were planning to fix it; not sure if they have yet or not).

Yeah, Wire has an acceptable UX for adding more devices. I'm not sure about Signal's. On Wire, you can add a new device to your account at any time. If you or your contacts have previously verified all the keys in any conversation, Wire yells loudly and won't let you send messages in those conversations until you've confirmed that you know there's an unverified key in the conversation (even if it's supposedly your own key). New devices cannot decrypt old messages, only new ones going forward, so a new device gets nothing historical, and if people have been verifying keys, probably nothing new either. The place Wire really fails is that when I add a new device, I ought to be able to verify it on one of my old devices, and then have the old device send a signed assertion to all my other devices/my contacts saying "Hey, old key you've already verified here, just letting you know that according to me, the new device is legit too." Now, the contacts can decide for themselves whether they accept that assertion, but in general there's no reason to assume that an old key is compromised just because a new device showed up on the account.

As to blockchain integration, yes yes a thousand times yes, I'd like to see this done really well too. Keybase might be it, but I haven't taken the time yet to look into it.

As to forward secrecy and deniability... Yeah, you'll note I actually never even talked about keys in my article. That alone warranted enough prose that I got scared off. There are so many posts that could be written and not even scratch the surface... Haha. And one can't properly understand forward secrecy and deniability without understanding keys (symmetric and asymmetric) at the very least.

Thank you. I will reconsider my opinion on Telegram.

Really good article. Thanks!
I would like to add that Signal currently works only on Android phones and iPhones, but Keybase works on both computers and phones, which is a significant advantage in my opinion. Of course both are constantly improving, so things will change.

Could you do an article on Keybase? I think there is more to it, as the core is not messaging, but PGP - and that made simple. Messaging is just one application that can benefit from PGP, but there is more too, isn't there?

There is Signal Desktop for laptops, etc.; however, it still requires a phone, much like WhatsApp on desktop.

For me, the big question about Telegram is the business model. I just can't believe the development is funded by pure benevolence/philanthropy. The only reason they haven't made it a federated, free network is that they want to make money off of it, or are already making money from it.

Very interesting post. I have to admit, a lot of it is over my head though. I am very concerned about security online. I just don't like my privacy being violated by anyone or any country. It's just none of their business. No, I don't do anything illegal but my life should be my own. After reading this I realize I really need to take the time to learn more about it. Thanks for your post!!

I think it's in all of our best interests to spend a little time thinking about this. Whatever app you go with, though, make sure to do some due diligence: look for third-party security reviews and search around online to see if anyone has broken the app's security. It only takes a couple of seconds to search 'is X secure' and survey the first page of results! :-)

This was a terrific analysis man. I feel like Neo

hahaha old meme but still cool

Ah :))))))

Wow, very interesting read.

Where do you stand with gaining access to the accounts of known terrorists if it means that a future attack could be thwarted? I'm not saying secure messaging for the masses is bad - quite the opposite - I believe our privacy is being eroded in the name of fighting terrorism

Leaving aside futile preemptive attempts to deny terrorists access to secure communications, I have no more problem with cracking a terrorist's vulnerable computer system than I have with cracking a terrorist's vulnerable skull. Provided of course that by "terrorist" we mean a person who is actively initiating violence against others in order to manipulate the innocent through fear. Such attacks invite defensive force against the attacker through whichever vectors are available to most efficiently minimize harm to the innocent.

There are some who would argue that pretty much all the organizations committing acts of terrorism today are exactly the same organizations who are claiming that no one except them ought to have access to strong cryptography or else terrorists will use it.

You can't be serious. Banks already co-opted the state, and in the words of Max Keiser: "Lloyd Blankfein is a financial terrorist". It doesn't matter if the criminal organization is debilitating your existence through nail bombs or financial fraud and usury, the state (current one at least) already is a terrorist organization.

Why not just block internet access for the areas known to harbor terrorists? Why must everyone pay for a few bad apples' mistakes?

Starting with the USA, because it has the highest number of terrorists?