I created this site to enable people to compare many so-called “secure messaging apps”. I also hope to educate people about which functionality is required for truly secure messaging.
In 2016, I was frustrated with the EFF’s very out-of-date comparison, and hence I decided to create a comparison myself. Reaching out to various privacy organisations proved to be a complete waste of time, as no one was willing to collaborate on a comparison. This is a good lesson learnt: Don’t be beholden to other people/organisations, and produce your own useful work.
This site is not meant to be comprehensive; security is difficult, and a full review of each app is simply not feasible given the time required, the lack of access to source code in many cases, and the lack of insight into each vendor’s development practices and general cyber-security maturity.
I am not connected to any of the companies or people behind the apps, nor do I receive any money with relation to this website.
Specifically, with every single app that I’ve assessed, you must trust their directory servers. These are the servers that map identities to encryption keys: they ensure that Person A is really sending a message to Person B, and that Person C cannot intercept the message or impersonate either Person A or Person B.
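To make the trust problem concrete, here is a minimal, hypothetical sketch (not any real app’s implementation) of the out-of-band fingerprint check that apps such as Signal expose as “safety numbers”: each party derives a short fingerprint from the public key they hold for the other and compares it over a separate channel, so a malicious or coerced directory server that substitutes keys can be detected. All names below are illustrative assumptions.

```python
import hashlib

def fingerprint(public_key: bytes) -> str:
    """Derive a short, human-comparable fingerprint from a public key."""
    digest = hashlib.sha256(public_key).hexdigest()
    # Group into 4-character blocks so the fingerprint can be read aloud
    return " ".join(digest[i:i + 4] for i in range(0, 32, 4))

# Key the directory server claims belongs to Person B
key_from_directory = b"person-b-public-key"
# Key Person B shows directly, e.g. as a QR code or over a phone call
key_from_person_b = b"person-b-public-key"

# If these fingerprints differ, the directory may be serving a
# substituted key, i.e. a man-in-the-middle attack.
assert fingerprint(key_from_directory) == fingerprint(key_from_person_b)
```

The point of the comparison happening over a second channel is that it removes the directory server from the trust equation for that contact; without such a check, the server can silently swap keys.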
Everyone’s threat model is different. If you’re sending messages to your mum about dinner, then the privacy of your data and metadata probably isn’t of much concern. However, if you’re a medical professional, journalist, lawyer, political dissident, or even a politician, there are many reasons why you would want to protect your, or your clients’, information.
I have to admit that push notification services are outside my area of expertise. However, according to the FAQ for Signal, using these services is necessary to provide a good user experience.
This is my understanding:
According to the Threema FAQ, it’s possible to use Threema on Android without Google Cloud Messaging (Google’s message notification service).
Wire can also be used without Google Cloud Messaging on Android. Update: Signal can now be used without Google Cloud Messaging.
Can developers be coerced by a government to create a backdoor? Servers don’t exist in a vacuum; someone or something needs to connect to servers in order to update code. Can this mechanism be manipulated?
Who has physical access to these servers? Are they in a cage whose biometric authentication allows only the company access? Can datacentre staff access the servers? Are the servers encrypted? Are these servers in the cloud? Can the cloud vendor access the servers?
All these questions are important, because if the servers aren’t secure, governments may be able to gain access to message content. And these servers are under someone’s legal jurisdiction, which means that they could be seized or manipulated.
Five Eyes and Fourteen Eyes countries are more susceptible to pressure from the US given their relationships.
As I mentioned above, trust is a very important aspect of secure messaging apps. One way to establish trust is through independent security analyses. This is important because no one can mark his own homework. In the cryptography world, this is known as Schneier’s Law:
Anyone, from the most clueless amateur to the best cryptographer, can create an algorithm that he himself can’t break.
Independent security analyses can help confirm the security of the following:
I’m only confirming the following:
I’m not assessing the following:
No independent security analysis is perfect. In the real world, these engagements are often constrained by time, money, availability of staff, and many other factors. They’re also point-in-time assessments, sometimes repeated annually or when major changes occur.
Do you make any money from this site?
No. The domain name and hosting cost me a small amount of money per month. However, it does take quite a bit of my free time to research, keep up to date, and maintain the website.
How have you assessed each app?
For each app, I have done the following:
Yes, it’s possible that the information on the apps’ sites could be [purposely] incorrect. Yes, it’s possible that I’ve been given incorrect information. That is why open-source software, independent audits, funding, etc. are so important to consider, too.
Why don’t you assess Tox?
Tox doesn’t support push notifications on iOS. I don’t believe it will become a mainstream messaging app until it does.
Telegram is GDPR compliant. It must be secure!
No, Telegram is not secure, and GDPR — as with all privacy legislation — is mostly not worth the paper on which it’s written, as the EU continues to attack secure messaging apps despite EU politicians and bureaucrats using Signal for privacy. Apparently “privacy by design” — a key GDPR requirement — could mean granting governments and intelligence agencies the ability to read messages.
The ability to build secure messaging apps doesn’t happen because politicians and bureaucrats write regulations; secure messaging apps are created by building upon decades of work by private individuals and private companies/organisations, including cryptography, coding standards, messaging protocols, infrastructure design, identity services, etc.
The only helpful aspect of GDPR is that it incentivises companies to create secure services by forcing them to inform users of data breaches.
Indeed, messaging apps are secure not because of governments but despite governments.
Why don’t you assess app xyz?
I’ve decided to try to keep the table reasonably small. And I’m only aiming to assess the most popular messaging apps. That said, I will assess new apps if I think they offer a secure alternative to the apps that I’ve already assessed.
Signal, Wire, etc. do allow anonymous user registration. What gives?
No, they don’t. If you need to give away personal data — a phone number, an email address, etc. — then it’s not anonymous. It’s not necessary to require personal data to register users.
App xyz has vulnerabilities. Surely it’s not secure?
All software has bugs, some of which are vulnerabilities. I originally attempted to rate apps based on previous/known vulnerabilities; however, I felt this raised more questions than answers. Is an app less secure because it has had vulnerabilities? Does a vulnerability necessarily mean the app is insecure? The answer is “it depends”, and that answer cannot be expressed in table form.
You finally assessed Riot / Element!
Yes, after 20+ requests, I finally got around to it. Please note I’ve assessed the default installation, not the option of running your own server. Element / Riot uses the Matrix standard and matrix.org as a backend, which complicates the assessment.
What about apps such as WeChat from companies in China?
China is a Marxist-Leninist state in which there is no separation between the state and individuals, nor between the state and private companies.
There is only the state, only one’s subservience to the state, and hence assessing any messaging app from China is a complete waste of time. Assume China’s government can read every single word sent over these apps. They are in no way secure.