Ratings

This matters because many countries have laws demanding that encrypted data can be decrypted for the government. Many other countries employ vast surveillance networks or have uncomfortably close relationships with companies when it comes to gaining access to customers’ data.

Red = Company is under the jurisdiction of a known Five Eyes partner. Or the company is under the jurisdiction of a country that is well known for [mass] surveillance.

Yellow = Company is under a jurisdiction that is not known for [mass] surveillance, or forcing companies to hand over or decrypt data. Or the country is known to cooperate with Five Eyes countries.

Green = Company is under a jurisdiction that is not known for [mass] surveillance, or forcing companies to hand over or decrypt data. No known ties to Five Eyes etc.

See above. In order to operate a truly global service, companies may have infrastructure in different regions of the world to, for example, provide lower network latency.

Red = Infrastructure is under the jurisdiction of a known Five Eyes partner. Or the company is under the jurisdiction of a country that is well known for surveillance.

Yellow = Infrastructure is under a jurisdiction that is not known for [mass] surveillance, or forcing companies to hand over or decrypt data. Or the country is known to cooperate with Five Eyes countries.

Green = Infrastructure is under a jurisdiction that is not known for [mass] surveillance, or forcing companies to hand over or decrypt data. No known ties to Five Eyes etc.

This matters because companies can be forced by law to give customers’ data to intelligence agencies. Other known methods by which these agencies can get customers’ data include coercion, hacking, planting an employee, or simply asking nicely. I’ve used the term “intelligence agencies” to refer to any government agency.

Note: I have considered “customers’ data” here to mean customers’ content/messages (data, not metadata). Wickr, for example, does cooperate with law enforcement agencies — as all companies must — but they can only hand over metadata because the content is encrypted (and they don’t have the keys).

Red = The company has been implicated in giving customers’ data to intelligence agencies. This is proven by evidence.

Yellow = The company has been implicated in giving customers’ data to intelligence agencies. There is no direct evidence, but the source is reputable.

Green = The company has not been implicated in giving customers’ data to intelligence agencies.

This matters because some jurisdictions mandate that certain systems must have surveillance access for governments.

Note: While many American companies were implicated in the Snowden leaks — the PRISM program specifically — I’ve considered this for the app only. If they are part of PRISM, this is considered under “Implicated in giving customers’ data to intelligence agencies”.

However, in saying that, I assume that Facebook, Google, Apple, and Microsoft have all granted intelligence agencies backdoor access to their apps. But apart from Microsoft, there is no proof that I could find.

Red = Confirmed. The app was specifically designed to enable surveillance.

Yellow = It’s widely accepted that the app was designed to enable surveillance based on evidence from a reputable source.

Green = No… not that we know…

Many companies periodically publish a transparency report. This details what type of requests have been received from governments, how many requests were made, how many customers were affected, etc.

Red = Company does not provide a periodic transparency report. (Or it’s not particularly useful.)

Green = Company provides a meaningful transparency report periodically.

This matters because companies often talk a big game when it comes to customers’ privacy. How often have you heard this after a data breach? “We care deeply about our customers’ security/privacy and have industry-leading security in place”.

Red = Company does not design its systems to collect minimal customer information, does not have strong encryption/security controls, or does not have a simple, readable privacy policy and terms & conditions. Or the company is known to cooperate with legal (or informal) requests for customer information. Or the company’s business model relies upon users’ data.

Yellow = I’m not sure that there is a middle ground. I’ll write this if I ever think that it’s appropriate for an app company.

Green = Company designs its systems to collect minimal customer information; has strong encryption/security controls; a simple, readable privacy policy and terms & conditions. The company is unable to hand over user data to governments even if asked. Likewise, the company is known to fight legal challenges to decrypt or otherwise hand over customer data. The company’s business model does not rely upon users’ data.

This matters because “money talks”, as the saying goes. If the company or person behind the money is likely to have reason not to protect customers’ privacy, it’s important to know. This could be indicative of the company not doing as they say (Google, WhatsApp, for example) or changing their mind once they’ve onboarded enough customers from whom they can make money.

Red = Funded by a company/person that/who is well-connected to, or well-known for, collecting customers’ data. Or they are known for collecting customers’ data or cooperating with the authorities when it comes to requesting customers’ data.

Yellow = I don’t know if there is a middle ground. If there is, I’ll write about it when it happens.

Green = Funded by companies/people that/who either have a vested interest in, or no obvious reason against, encrypting/securing customers’ data. They mustn’t be known for collecting customers’ data or cooperating with the authorities when it comes to requesting customers’ data.

This matters because many companies use customers’ data for advertising, for improving their services, or simply to sell to other companies. Do you truly believe such companies want to protect your messages if they normally make money from your personal data?

Red = Yes, they collect more than is required for the functioning of the secure messaging app. Indeed, they collect other customer data for other parts of their business.

Yellow = They collect only the minimal amount (cellphone number or email address, for example) of customer data to provide a secure messaging app.

Green = They collect no user data. (I’m assuming here that you can buy the app, if required, in an anonymous fashion.)

This matters because many companies use customers’ data for advertising, for improving their services, or simply to sell to other companies. Do you truly believe such companies want to protect your messages if they normally make money from your personal data?

Red = Yes, they collect more than is required for the functioning of the secure messaging app. Indeed, they collect unprotected customer data or the messages sent themselves.

Yellow = They collect only the minimal amount (cellphone number or email address, for example) of customer data to provide a secure messaging app.

Green = They collect no user data.

This rating is based on the permissions listed in Apple’s App Store.

Self-explanatory.

Red = No.

Green = Yes.

Specific key exchange, encryption, and hashing algorithms are considered secure by cryptographers. It's important that algorithms without known weaknesses are used. These are the building blocks upon which secure encryption is built.

Note that I have not considered if the implementation of these building blocks is sound.
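
As a rough illustration of what sound building blocks look like in practice, here is a minimal sketch (my own example, not taken from any of the apps reviewed) using the third-party Python cryptography package: X25519 for key exchange, HKDF with SHA-256 for key derivation, and ChaCha20-Poly1305 for authenticated encryption.

```python
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.ciphers.aead import ChaCha20Poly1305
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

# Each party generates a key pair; only the public keys are exchanged.
alice_private = X25519PrivateKey.generate()
bob_private = X25519PrivateKey.generate()

# Both sides can compute the same shared secret from their own private key
# and the other party's public key (Diffie-Hellman key agreement).
shared_secret = alice_private.exchange(bob_private.public_key())

# The raw shared secret is never used directly; it's run through a KDF first.
key = HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
           info=b"example session key").derive(shared_secret)

# Authenticated encryption: confidentiality and integrity in one operation.
nonce = os.urandom(12)  # must be unique per message for a given key
ciphertext = ChaCha20Poly1305(key).encrypt(nonce, b"hello Bob", None)
```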

Red = App uses cryptographic primitives that have been broken. Practical attacks against them exist.

Yellow = App uses cryptographic primitives that are considered weak. However, there are no known practical attacks against them yet.

Green = App uses well-known, secure cryptographic primitives that provide post-quantum protection.

This matters because a fully open source app can be audited by the industry. Open source code leads to near full transparency: we can tell if a company’s claims meet reality. Likewise, we can find any vulnerabilities in the software, weaknesses in the implementation, or design deficiencies. The server code must also be open source; this is because all apps use a central directory service to match users. Vulnerabilities and backdoors could exist in these directory services.

Red = No.

Green = Yes.

Are you sure that the app you downloaded from Google and/or Apple was built from the exact source code that the developers published? Reproducible builds are a method by which installed apps can be compared to the published source code, thereby ensuring that no malicious changes have been made to the apps.

This matters because many people have good reasons for needing to remain anonymous. Having to provide a unique ID of some kind — a cellphone number, email address, etc. — means giving away something that could be used to track you.

Red = No, users must provide some kind of contact details such as an email address or cellphone number. (I’m aware that you could get an anonymous email address or even an anonymous cellphone number. However, I’m not considering workarounds; even these could be traced.)

Yellow = You must provide an email address or a cellphone number. However, these are provably hashed, and hence they are unreadable by the company.

Green = Yes, you do not need to give away any details in order to use the app. (I’ve accepted here that you must be uniquely identifiable by the directory server, and hence that some kind of random ID must be assigned to each user in order for the app to work.)

(Hashes are irreversible, one-way functions that give each cellphone number or email address a unique value that is essentially gibberish. Your cellphone number or email address is hashed on your device, then uploaded to the directory server. In turn, everybody who uses the app has hashes of all of their contacts calculated on their device and then uploaded to the directory server. If two hashes match, then the directory server knows that your contact has the app installed without knowing your (or their) email address or cellphone number.)
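
To make that concrete, here is a rough sketch in Python of how hash-based contact matching could work. The normalisation step and the use of plain SHA-256 are my own illustrative assumptions; each app has its own scheme.

```python
import hashlib

def hash_identifier(identifier: str) -> str:
    # Normalise first so "+1 555 123 4567" and "+15551234567" hash identically.
    normalised = identifier.replace(" ", "").lower()
    return hashlib.sha256(normalised.encode()).hexdigest()

# Your device uploads only the hashes of your contacts...
my_contact_hashes = {hash_identifier(c) for c in ["+15551234567", "friend@example.com"]}

# ...and the directory server intersects them with the hashes of registered users.
registered_hashes = {hash_identifier("+15551234567")}
matches = my_contact_hashes & registered_hashes  # a match means that contact uses the app
```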

Some apps require that you register yourself with a cellphone number or email address. This data is stored on the company’s servers (hopefully as a one-way hash). The service then matches phone numbers and/or email addresses in your contact list (assuming that you allow the app access to it) so that you know who else uses the same app.

However, how do you know that you have been “matched” with the correct person? That the company hasn’t matched you with someone else (e.g., an intelligence agent)? This is especially important the first time that you are “matched”.

Some apps allow you to manually add a contact without needing to trust that a third party correctly matches you. This happens by two people scanning each other’s QR code. Threema does this very well.

This also has the advantage that you don’t need to give your phone number or email address to the company. You can add people anonymously, thereby increasing your privacy.

Red = No.

Green = Yes.

In order to ensure that you’re talking to whom you believe you are, it’s important that apps support the verification of users’ fingerprints. A fingerprint is a representation of your identity that’s bound to your encryption keys. If you cannot manually verify fingerprints within the app — by scanning a QR code, or by publishing your fingerprint, or by sending your fingerprint via another medium, or simply reading it over the phone — then your messages could be intercepted by what is called a “man in the middle (MITM)” attack.

Alice is sending messages to Bob. Well, she thinks she is sending messages to Bob; but actually, she is sending messages to Eve, who reads them, and then passes them on to Bob. Neither Alice nor Bob realizes this is happening.

Verifying fingerprints ensures that this cannot occur.
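
As an illustration, a fingerprint is typically just a short digest of a user’s long-term public key. The exact format below (SHA-256, truncated and grouped for readability) is an assumption for the sake of the example rather than how any particular app does it.

```python
import hashlib

def fingerprint(public_key: bytes) -> str:
    # Derive a short, human-comparable fingerprint from a public key.
    digest = hashlib.sha256(public_key).hexdigest()
    # Keep the first 32 hex characters and group them into blocks of four so
    # two people can read them to each other or compare them via a QR code.
    return " ".join(digest[i:i + 4] for i in range(0, 32, 4))

# If the fingerprint Alice sees for "Bob" doesn't match the one Bob reads out,
# someone (Eve) is sitting in the middle of the conversation.
```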

Red = No.

Green = Yes.

When using most messaging apps, you must/can provide a phone number, username, or email address. If a friend of yours has that phone number or email address in their contacts list, the app can automatically add your friend to your contacts in the app itself. (Or perhaps you must add their username manually.) That’s how these messaging apps know which of your friends is using it, too.

When first adding a contact, it’s possible that a directory service could “match” you with the incorrect person, either maliciously or by mistake. This could mean that while you believe that you are talking to a friend of yours, you are in fact talking to an intelligence agency. Manually verifying each other’s fingerprint wouldn’t raise any concerns since the intelligence agency would be using a valid fingerprint.

This is one way in which Threema has absolutely nailed mutual identification without a directory service. Each person can scan a QR code in the app — either by being physically in the same place, or by each person publishing their QR code somewhere on the Internet. Each person can then manually add the other without the need of the directory service.

Likewise, a directory service could cause a third party’s device to be trusted. The same functionality that enables iMessage to send all of your messages to all of your authorized devices could be used to send all of your messages to an untrusted device without you knowing.

Note: This is arguably the biggest weakness in all messaging apps. Even if you’ve manually added a contact, you still must trust that the directory service isn’t doing anything malicious. This could include adding an unauthorized device to your account, giving another user access to your account (the same way in which you can use multiple devices), or temporarily matching you with another user. This is why being alerted when a user’s fingerprint changes is so important. Likewise, it’s important that the server side of the system is open source, too.

Red = Yes, directory services could be used to MITM a conversation.

There are no other options; all apps must trust a centralized directory service.

A contact’s fingerprint changes when they reinstall the app or their phone without having backed up (if that’s even possible) their ID and encryption key. If the ID and encryption key were not restored from a backup, the app will generate a new ID and encryption key, which are represented by a new fingerprint.

However, a contact’s new fingerprint could also be a sign of a man in the middle attack. Hence you should re-verify your contacts if their fingerprint changes.
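
A sketch of the alerting mechanism itself (the storage and function names here are my own illustrative choices): the app remembers the fingerprint it last saw for each contact and warns you whenever it changes.

```python
known_fingerprints: dict[str, str] = {}  # contact ID -> last fingerprint seen

def fingerprint_unchanged(contact_id: str, current: str) -> bool:
    # The first fingerprint seen is trusted ("trust on first use"); after that,
    # any change should trigger a warning to re-verify the contact.
    previous = known_fingerprints.get(contact_id)
    known_fingerprints[contact_id] = current
    return previous is None or previous == current
```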

Red = No.

Yellow = Sometimes, under specific circumstances. (Wire does this if you’ve previously verified a contact’s fingerprint.)

Green = Yes.

If data is hashed, it’s unreadable to companies. If, for example, a phone number is hashed, it’s given a unique, irreversible representation that is essentially gibberish. Each phone number will always have a unique representation (hash).

This method can be used to protect contact lists. Instead of uploading a list of your contacts, it’s more secure to upload a hash of each contact. If one of your contacts has the same app, your hashed phone number in your contacts will match their hashed phone number on the company’s servers.

There’s actually no need for companies to have any personal information for a secure messaging app. (Threema does this well.) However, apps such as Signal use phone numbers as a unique ID (and send you an SMS to activate the app).

Red = No personally identifiable information is hashed.

Yellow = A limited amount of personally identifiable information (cellphone numbers) is not hashed. All other information, including contacts, is hashed.

Green = All personally identifiable information is hashed.

In order for end-to-end encryption to work, the encryption key must be generated and kept on the device itself. If a company has access to the encryption key, then it’s not secure.
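
A minimal sketch of what it means for the key to be generated and kept on the device, assuming an X25519 key pair and the third-party Python cryptography package; only the public half is ever uploaded.

```python
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey

# Generated locally; the private key stays in the device's storage/keychain.
device_private_key = X25519PrivateKey.generate()

# Only the public half is sent to the company's directory server, so that
# other users can encrypt messages that only this device can decrypt.
public_key_bytes = device_private_key.public_key().public_bytes(
    encoding=serialization.Encoding.Raw,
    format=serialization.PublicFormat.Raw,
)
```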

Red = No.

Green = Yes.

This is pretty self-explanatory. For apps that can do both unencrypted and encrypted messages (Telegram, Google Allo, etc.), I’ve said “Yes”.

Red = Yes.

Yellow = Most likely. There is a significant amount of evidence that indicates that the company can actually read the messages.

Green = No.

Each message that’s sent should be protected by a unique encryption key (often called a session key). This way, if the long-term encryption key on the device is compromised, past messages are not necessarily compromised, as each one was encrypted with its own unique key.
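
Here is a rough sketch of the idea, loosely modelled on the symmetric-key ratchet used by protocols such as Signal’s; the constants and structure are illustrative assumptions. Each message gets its own key, and the chain only moves forward, so a key compromised today does not reveal the keys used for earlier messages.

```python
import hashlib
import hmac

def ratchet_step(chain_key: bytes) -> tuple[bytes, bytes]:
    # Derive a one-off message key and the next chain key from the current
    # chain key. The old chain key is discarded, so past message keys cannot
    # be recomputed even if the device is compromised later.
    message_key = hmac.new(chain_key, b"\x01", hashlib.sha256).digest()
    next_chain_key = hmac.new(chain_key, b"\x02", hashlib.sha256).digest()
    return message_key, next_chain_key

chain_key = b"\x00" * 32  # in reality this comes from the initial key agreement
for plaintext in [b"first message", b"second message"]:
    message_key, chain_key = ratchet_step(chain_key)
    # encrypt `plaintext` with `message_key`, then throw the message key away
```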

Red = No.

Green = Yes.

Metadata can include the date and time you sent a message, your location, and to whom you sent a message. (Basically any information about the information that you’re sending.) This is important because this data can reveal an awful lot about you. It’s also targeted by law enforcement agencies.

Red = No.

Yellow = Most metadata is encrypted. However, some pieces of (largely unimportant) information are kept by the company.

Green = Yes.

It’s important that all communication between the app and its servers is encrypted over the Internet. This is the same technology that banks, Google, etc. use.

Red = No.

Green = Yes.

This ensures that TLS connections only happen between the app and the company’s servers. Specifically, the app only trusts TLS certificates that come from the company (the public keys of those specific certificates are “pinned” in the app).
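
A bare-bones sketch of pinning using Python’s standard library. The host name and pin value are placeholders, and for simplicity the hash of the whole certificate is pinned rather than just its public key, but the idea is the same: the app refuses to talk to a server whose certificate it doesn’t recognise, even if that certificate was issued by a normally trusted authority.

```python
import hashlib
import socket
import ssl

HOST = "chat.example.com"            # placeholder server
PINNED_FINGERPRINT = "0123abcd..."   # placeholder SHA-256 of the expected certificate

context = ssl.create_default_context()
with socket.create_connection((HOST, 443)) as sock:
    with context.wrap_socket(sock, server_hostname=HOST) as tls:
        certificate = tls.getpeercert(binary_form=True)  # DER-encoded certificate
        fingerprint = hashlib.sha256(certificate).hexdigest()
        if fingerprint != PINNED_FINGERPRINT:
            raise ssl.SSLError("certificate does not match the pinned fingerprint")
        # Otherwise, the server presented the expected certificate and it is
        # safe to send application traffic over this connection.
```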

Red = No.

Green = Yes.

Encrypting devices (and keeping app data inaccessible while devices are locked) is important so that the data on them cannot be read without the correct passcode. On iOS, this can be achieved through Apple’s Data Protection API. On Android, it looks as if file-based encryption — the part that keeps data inaccessible while devices are locked — is only available from “Nougat” (Android 7.0) onwards.

Note: I’ve looked for confirmation on iOS that the correct data protection class is being used for each app. The default for third-party data is to encrypt it; however, this can be overridden.

Red = No.

Green = Yes.

Some of the apps provide a form of local authentication — either a password/code or a fingerprint. This provides an extra level of access control to the data that’s held in the app. Note that I’ve only considered functionality when you open the app, not when you access specific chats/settings within the app.

This is separate from authentication — single factor or MFA — on the user’s account.

Red = No.

Green = Yes.

Some apps offer end-to-end encryption that does not encrypt the messages when they are backed up to the cloud. For example, WhatsApp messages are stored in clear text (readable by Facebook) when iCloud is used to back up a device. Apple encrypts the backup data on iCloud but holds a copy of the encryption key (and hence can read your backups, including iMessages). Law enforcement has been known to go after backed-up data when it’s stored at a company.

Note: If a company (that’s you, Apple) has access to the encryption key, I’ve rated this as “No”.
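
For contrast, a sketch of what a properly end-to-end encrypted backup could look like: the backup is encrypted on the device with a key derived from a passphrase that only the user knows, so neither the messaging company nor the cloud provider can read it. The key-derivation parameters below are illustrative assumptions.

```python
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.ciphers.aead import AESGCM
from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC

def encrypt_backup(passphrase: str, backup: bytes) -> bytes:
    # Derive the backup key from a passphrase the user controls; the cloud
    # provider only ever stores the resulting ciphertext.
    salt = os.urandom(16)
    key = PBKDF2HMAC(algorithm=hashes.SHA256(), length=32,
                     salt=salt, iterations=600_000).derive(passphrase.encode())
    nonce = os.urandom(12)
    ciphertext = AESGCM(key).encrypt(nonce, backup, None)
    return salt + nonce + ciphertext
```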

Red = No.

Green = Yes.

Some companies (WhatsApp, for example) retain date and timestamp information for messages.

Red = No.

Yellow = Some timestamp/IP address information is stored, although it is not stored for each message sent.

Green = Yes.

It’s important that each app has been independently tested. Anyone can create a system that they themselves cannot break. This can also help us trust closed-source apps, such as Threema and Wickr.

Red = No.

Green = Yes.

It’s important that the clients, APIs, servers, directory servers, and messaging algorithms are all designed correctly. Having design documents published enables experts to check that all of these have been designed correctly.

Note: Even amongst those apps that I’ve rated as “Somewhat”, there’s a big difference in the level of documentation. I might try to further define this in the future.

Red = No. Very little documentation is available.

Yellow = Somewhat. Some documentation is provided.

Green = Yes, documentation — for clients, APIs, servers, directory servers, and messaging algorithms — is provided, and it’s all in one place.

This means that messages will be automatically deleted after a certain period of time. Personally, I think that this adds little to privacy since it’s trivial to take screenshots of messages.

I do, however, see some use cases: 1) sending a contact a piece of information that you don’t want to be available forever (a pre-shared key/password, for example), and 2) ensuring that certain parts of conversations are automatically removed.

Red = No.

Green = Yes.