Does anyone use PGP?

Using symmetric encryption alone, though, would require the sender to share the encryption key with the recipient in plain text, which would be insecure. So PGP encrypts the symmetric key with the recipient's asymmetric public key, combining the efficiency of symmetric encryption with the security of public-key cryptography. In practice, sending a message encrypted with PGP is simpler than this explanation makes it sound: in ProtonMail, for example, you simply see a padlock icon on the subject line of encrypted emails.
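To make the hybrid scheme concrete, here is a minimal sketch in Python using the cryptography package. It mirrors the idea described above (a fresh symmetric session key encrypts the message; the recipient's public key encrypts the session key), but note that AES-GCM and RSA-OAEP stand in for OpenPGP's own packet formats and cipher modes, so treat this as a model rather than OpenPGP itself:

```python
import os

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Recipient's long-term key pair; the public half is what gets shared.
recipient_private = rsa.generate_private_key(public_exponent=65537, key_size=2048)
recipient_public = recipient_private.public_key()

# Sender: encrypt the message body with a fresh one-time symmetric key.
session_key = AESGCM.generate_key(bit_length=256)
nonce = os.urandom(12)
ciphertext = AESGCM(session_key).encrypt(nonce, b"the actual email body", None)

# Sender: encrypt ("wrap") the session key with the recipient's public key.
oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
wrapped_key = recipient_public.encrypt(session_key, oaep)

# Recipient: unwrap the session key with the private key, then decrypt.
recovered_key = recipient_private.decrypt(wrapped_key, oaep)
plaintext = AESGCM(recovered_key).decrypt(nonce, ciphertext, None)
assert plaintext == b"the actual email body"
```

Only the small session key goes through the expensive public-key operation; the bulk of the message is handled by the fast symmetric cipher.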

ProtonMail — like most email clients that offer PGP — hides all of the complexity of encrypting and decrypting the message. If you are communicating with users outside of ProtonMail, you need to send them your public key first. So although the message is sent securely, the recipient does not have to worry about the details of how this was done.

Of PGP's three main uses (sending secure email, verifying identities, and encrypting files), the first is by far the dominant application. As in the example above, most people use PGP to send encrypted emails. In the early years of PGP, it was mainly used by activists, journalists, and other people who deal with sensitive information. The PGP system was originally designed, in fact, by Phil Zimmermann, a peace and political activist, who recently joined Startpage, one of the most popular private search engines.

Today, the popularity of PGP has grown significantly. As more users have realized just how much information corporations and governments are collecting on them, huge numbers of people now use the standard to keep their private information private. A related use of PGP is email verification.

If a journalist is unsure about the identity of a person sending them a message, for instance, they can use a digital signature alongside PGP to verify it. If even one character of the message has been changed in transit, verification fails, so the recipient will know. A failed verification can mean the sender is not who they say they are, that they tried to forge a digital signature, or that the message was tampered with. A third use of PGP is to encrypt files. The encryption is strong enough that it has even been used in high-profile malware such as CryptoLocker.
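As an illustration of the verification flow just described, here is a minimal sketch in Python with the cryptography package. RSA-PSS stands in for PGP's own signature packets, so this shows the principle rather than OpenPGP itself:

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

signer = rsa.generate_private_key(public_exponent=65537, key_size=2048)
message = b"I am the source you spoke to last week."

pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                  salt_length=padding.PSS.MAX_LENGTH)

# Sender signs with the private key.
signature = signer.sign(message, pss, hashes.SHA256())

# Recipient verifies with the sender's public key. Changing even one
# character of the message makes verification fail.
public = signer.public_key()
public.verify(signature, message, pss, hashes.SHA256())  # passes silently

try:
    tampered = message.replace(b"week", b"year")
    public.verify(signature, tampered, pss, hashes.SHA256())
except InvalidSignature:
    print("tampered message or forged signature detected")
```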

There are other faults, including the difficulty of accessing encrypted emails across multiple devices, and the lack of forward secrecy, which means that a key breach potentially opens up all your past communication unless you change your keys regularly.

But the biggest problem with PGP is simply how difficult it is for people to use. This criticism has plagued PGP for most of its existence.

Encrypting an email manually with PGP requires a decent level of technical knowledge and adds several steps to sending each message, to the extent that even Phil Zimmermann, the creator of PGP, no longer uses it.

Even Edward Snowden has screwed it up. When he first reached out anonymously to Micah Lee, a friend of Laura Poitras, to ask for her public PGP key, he forgot to attach his own public key, meaning that Lee had no secure way to respond to him. Many of the issues around PGP stem from email being a dated form of communication, yet email is one of the few things that actually works everywhere. Imagine if Napster and Soulseek had developed an open standard.

It would only have delayed the introduction of BitTorrent, entrenching an inferior technology through standardization. Even if all the effort a project like LEAP is striving for were complete, you would still receive spam and unencrypted mail, just because you have a mail address.

You will still have a multitude of hosts that remain "unfixed" because they don't care to upgrade. You will still carry a dependency on DNS and X.509. And I still don't see by which criteria a dissident should pick a trustworthy server.

I know I can rent one, but even if I have a root shell on my "own" server, that doesn't mean it is safe. It's better not to need any! So what is this terrific effort to stay backward compatible good for? I don't see it being a worthwhile goal. There is so much broken about it, while a fresh start, where every participant is safe by definition, is so much more useful. In particular, you avoid the usability challenge of having to explain to your users that some addresses are superduper safe while other addresses lack a solid degree of privacy.

One major problem with the new generation of privacy tools is that they are so simple, people have a hard time believing they actually work.

Contents:

1. Downgrade Attack: The risk of using it wrong.
2. Transaction Data: Mallory knows who you are talking to.
3. No Forward Secrecy: It makes sense to collect it all.
4. Cryptogeddon: Time to upgrade cryptography itself?
5. Federation: Get off the inter-server super-highway.
6. Discovery: A Web of Trust you can't trust.
7. PGP conflates non-repudiation and authentication.
8. Statistical Analysis: Guessing the size of messages.
9. Workflow: Group messaging with PGP is impractical.
10. Complexity: Storing a draft in clear text on the server.
11. Overhead: DNS and X.509.
12. Targeted attacks against PGP key IDs are possible.
13. TL;DR: I don't care. I've got nothing to hide.
14. The Bootstrap Fallacy: But my friends already have e-mail!
15. But what should I do then!? There is no one magic bullet you can learn about.
16. Thank you, PGP.

Questions and Answers: What's the threat model here? Is this about PGP or rather about e-mail? We need a new open standard first! Why don't we fix all of these problems with PGP and e-mail?

Let's summarize: the PGP Web of Trust is publicly available for data mining, has many single points of failure (social hubs with compromised keys), and doesn't scale well to global use.

There used to be a bunch of those in the 90s and it was a mess. But I can imagine other scenarios. I bet Snowden's cold email to Greenwald was encrypted. Trust on first use is not an uncommon security practice: imperfect, but often the best alternative, and a good solution while we wait for a replacement to gain traction. Why wouldn't it be?

Sniffnoy on July 17: Why not? I mean, that's what publicly listing your public key is for, right?

PeterisP on July 17: Well, the keyservers don't validate whether it's your key or a key submitted by me with your email address on it, so for any secure messaging you need some other, authenticated channel through which the potential recipient can assert which key is theirs.

Oh, that's a good point. Heh, I have one of those, too, which even caused a problem once[0].

I wouldn't expect people to find it first, though, because I wouldn't expect people to go to a keyserver first; I'd expect them to find my key on one of the places I have it listed on the web. I've never tried blindly entering someone's email address into a keyserver and just hoping they have a key; I've only sent PGP-encrypted email to people who list their keys on the web.

One person instead downloaded it entirely anew from a keyserver and got the old one. Admittedly I didn't explicitly use the word "refresh". Anyway yeah -- though this problem had happened to me, it hadn't occurred to me that it might be common; maybe this is more of a problem than I thought. GPG chooses the key to use based on alphanumeric ordering of the short key ID, last time I experimented anyway.
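For context on why short key IDs are so weak, a quick Python sketch (the fingerprint below is made up): a v4 PGP fingerprint is 160 bits, the long key ID is its low 64 bits, and the short key ID only the low 32, which is small enough that colliding keys can be generated in bulk, as the Evil32 project demonstrated.

```python
# Hypothetical fingerprint, for illustration only.
fingerprint = "0123 4567 89AB CDEF 0123 4567 89AB CDEF 0123 4567"

hex_fp = fingerprint.replace(" ", "")
long_id = hex_fp[-16:]   # 64-bit "long" ID, shown by gpg --keyid-format long
short_id = hex_fp[-8:]   # 32-bit "short" ID: trivially collidable

print("long ID: ", long_id)
print("short ID:", short_id)
```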

Best of luck overcoming that! I don't get what's insecure about normal unencrypted email. It's sent over https, isn't it? It's not like I can read your emails unless I break into Google's servers, no? And even if I do, they probably aren't even stored in plaintext.

I just don't get the encrypted email obsession. It's impossible for an individual to withstand a targeted cyber attack, so it seems pointless to go above and beyond to ultra-encrypt every little thing. Well, first of all, "breaking in" isn't the only way someone might get access to data on Google's servers.

There are such things as subpoenas, not to mention that it is possible a Google employee might abuse access to servers. Furthermore, unless both parties are using Gmail, the email will be stored at least temporarily on other mail servers, which may be less secure, and you might not even know who controls them.

That would go against their own privacy policy. But they are one change away from doing it. When Gmail came out, they were explicitly up front about using the content of email to deliver targeted ads. Has that changed? There is a lot of misinformation around, and the Google-haters crowd has plenty of pitchforks. Nobody reads your email in order to show you ads. It's not misinformation; it's just that the passage of time has made it outdated.

Until not so long ago, Gmail messages were actually scanned for ads -- IIRC, Google was actually pretty upfront about it when they first launched Gmail, and explained that it's how they could afford to give users 1 GB of inbox space in a day and age when 25 MB was pretty good and anything more was hard to get for free. They eventually stopped, although the phrasing of the privacy policy is vague enough that, as wodenokoto mentions above, I wouldn't be surprised if email messages were still scanned for some advertising purposes.

The fragment on the page you link to is only about ads shown in Gmail; it doesn't exclude using keywords and messages for tracking, classifying, etc. It's also not very obvious whether "messages in your inbox" includes messages you send. FWIW, I think the policy is deliberately open-ended so as to be future-proof, but I doubt emails are an important source of advertising data today, so I think it's likely that Google doesn't rely on it that much anymore.

Millions and millions of personal accounts are a useful strategic asset to have, but I think there are better sources of data. I completely disagree with the first part of your assertion.

Email is still the main medium through which organizations, especially private companies taking your money for something, communicate with you in detail.

Be it ordering a product online, booking a flight or other travel ticket, ordering a service, or anything else. SMS notifications pale in comparison to the richness and amount of information conveyed over email.

So email is still a treasure trove of what people are doing and have done. Even the newsletters they send over email carry tracking information. By the time they've sent you an email after your first purchase, they know everything they need to show you relevant ads (in fact, that's probably why you made the first purchase). I doubt bulk analysis of emails can show anything that is not already known well before the emails got sent.

Depends on what they mean by "messages". The whole raw message, metadata included? The text the user sees? Google can probably serve nice ads just based on metadata it gathers at the SMTP level, without even using the raw message. Someone mails his bank, maybe show some banking-related ads, etc. And it would still stay true to the proclamation on that page.

Are we sure about this? When I downloaded my Google data, I fished around and found information related to my Amazon purchase history, among other things. The only possible way I can think of for Google to get my purchase data is from my email.

BeetleB on July 17: That's not misinformation; it's just mildly dated information. Google stopped scanning emails for ads very recently. It was never a conspiracy theory. Google used to be very open about the fact that they were scanning emails. What changed was G Suite: if you pay Google for email, they will not use the content for ads. For free accounts, I think they still do. In modern practice, email is sent over TLS sockets already.

So the only people who can read email are you, your counterparty, your ESP, and your counterparty's ESP, assuming the email providers are following good practice. Since this would have to be a MITM of, e.g., Gmail, it's not trivial by any means, but neither is it completely out of reach; see for example the periodic rerouting of the internet caused by odd BGP advertisements. One further note: you can tell post hoc whether an email was delivered to Gmail via TLS by the presence or absence of a red lock in the Gmail app or web UI.
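The catch is that SMTP's TLS is opportunistic: the client connects in the clear and upgrades only if the server offers STARTTLS, so an active attacker who strips that capability can silently downgrade the session. A minimal Python sketch of the client side, using only the standard library (the hostname is hypothetical):

```python
import smtplib
import ssl

# Connect in plaintext on the submission port, then upgrade if offered.
with smtplib.SMTP("mail.example.org", 587, timeout=10) as smtp:
    smtp.ehlo()
    if smtp.has_extn("starttls"):
        smtp.starttls(context=ssl.create_default_context())
        smtp.ehlo()  # re-greet over the now-encrypted channel
    else:
        # Opportunistic behavior would be to keep going in the clear;
        # refusing instead is what prevents a silent downgrade.
        raise RuntimeError("server did not offer STARTTLS")
    # smtp.send_message(...) would follow here
```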

Which routers did that? This is often explained in a needlessly confusing way. Nobody should be configuring their own gear, or corporate gear, to just let somebody else decide whether it uses encryption. It may be better than nothing, but it's far from a sure thing: if you can BGP-announce an IP, you can get a certificate from Let's Encrypt. This is a trivial attack vector not just for state actors but also for stupid kids: years ago, I announced Microsoft's AS from my own network's AS to see what would happen and got a significant amount of microsoft.com traffic.

There was no security, and there still isn't: most multihomed sites that change links frequently inevitably find themselves unfiltered, either through accident or misplaced trust. I've not seen anyone use a long policy on a distant but popular network to require that someone BGP-hijack two big networks to beat it, but I suspect such a disruption would be felt across the Internet. I would hope Let's Encrypt has a number of heuristic safeguards, but I can guarantee they do not make connections from multiple routing paths: my ad server registers a certificate during the SNI hello (but before the certificate is presented), and I get a certificate after a single ping.

It's actually more complicated than that. When the mail is sent, it depends on whether the recipient uses the same provider or not. If it's the same provider, well, protocols are irrelevant. The main problem is that the mail is not encrypted on the various servers it goes through.

Only the server-to-server connections are encrypted. So your provider can access your email, and so can the recipient's.

When that provider's business model is reading your emails so it can send you targeted ads, this is less than great. Yes, Google reads your emails. They try to reassure you by telling you their employees don't read them, but the fact that the process is automated actually makes it worse. Also, it might surprise some people just how many servers an email travels through to get to its destination. I just grabbed a random mail from a mailing list I'm on (generally a worst-case scenario) and it had 7 Received headers.

Every mail server is supposed to add a Received header when the mail passes through, but there's no way to enforce that, so all I can really say is that the mail probably passed through at least 7 servers on its way to my inbox. Each one of those hops may or may not have talked TLS to the next hop.

Each one probably wrote the mail out to a disk-based mail queue in plaintext. There is nothing preventing any of those 7 servers from keeping that mail around even though they forwarded it on. There is nothing preventing them from indexing the mail for spam or marketing purposes.
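If you want to see this for yourself, here is a small Python sketch that counts the Received headers on a saved message and looks for the "with ESMTPS" keyword (RFC 3848) that indicates a hop claimed to use TLS. The filename is hypothetical, and Received headers are self-reported, so treat this as evidence rather than proof:

```python
import email
from email import policy

with open("message.eml", "rb") as f:  # any saved raw message
    msg = email.message_from_binary_file(f, policy=policy.default)

hops = msg.get_all("Received") or []
print(f"{len(hops)} Received headers")
for i, hop in enumerate(hops, 1):
    # "with ESMTPS" means the handing-off server claimed a TLS session.
    tls = "ESMTPS" in hop or "TLS" in hop
    print(f"hop {i}: {'TLS claimed' if tls else 'no TLS recorded'}")
```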

Any sysadmin along the way can read your email, in general. There's no holistic "this email can't be read by anyone other than the recipient" solution, which is what a lot of us are aiming for. Things like ProtonMail and Tutanota get really close, but they're proprietary solutions and don't work for "the many", such as yourself, who use a hosted solution like Gmail, whose provider seems to have no interest in providing an open solution.

I don't want my emails to be readable by Google, yet they will be when people I communicate with use Gmail. And even if TLS is enabled, you can't always know that. Emails are not properly encrypted in transit and are available for access at the provider if a court decides to grant a warrant.

That might not be enough protection for everyone. And then, as the article discusses, your recipient forwards the mail as plaintext. They don't have a choice, because a third party requires it. The goal was to make sure it was as safe as possible. One thing that struck me is that I had a simplified mental model of the PGP crypto, and reality is way weirder than that. The blog post says it's CFB, and in a sense that's right, but it's the weirdest bizarro variant of CFB you've ever seen.

In normal CFB, the first ciphertext block is the encrypted IV XORed with the first plaintext block; for the second block, you encrypt the first ciphertext block and XOR it with the second plaintext block, and so on. Here's the process in OpenPGP, straight from the spec (RFC 4880, abridged; BS is the cipher's block size), because I can't repeat this without being convinced I'm having a stroke:

1. The feedback register FR is set to the IV, which is all zeros.
2. FR is encrypted to produce FRE; this is the encryption of an all-zero value.
3. FRE is XORed with the first BS octets of random data prefixed to the plaintext, producing the first BS octets of ciphertext.
4. FR is loaded with that ciphertext block and encrypted to produce FRE.
5. The left two octets of FRE get XORed with the next two octets of data that were prefixed to the plaintext (a repeat of the last two random octets).
6. FR is then reloaded with ciphertext octets 3 through BS+2 (the "resync") and encrypted to produce FRE.
7. FRE is XORed with the next BS octets of plaintext; the resulting ciphertext octets are loaded into FR, and the process is repeated until the plaintext is used up.

And then everything after that is off by two? This isn't the only case where OpenPGP isn't just old, it's old and bizarre. I don't have a high opinion of PGP to begin with, but even my mental model is too charitable.
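If it helps to see it run, here is a rough Python sketch of that dance (AES-128, following RFC 4880's description; a toy for illustration, not an OpenPGP implementation, and it omits the non-resync variant used by some packets):

```python
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

BS = 16  # AES block size in octets

def e(key: bytes, block: bytes) -> bytes:
    """Encrypt a single block (the raw block cipher, via one-block ECB)."""
    enc = Cipher(algorithms.AES(key), modes.ECB()).encryptor()
    return enc.update(block) + enc.finalize()

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def openpgp_cfb_encrypt(key: bytes, plaintext: bytes) -> bytes:
    # OpenPGP prepends BS random octets plus a repeat of the last two
    # (a decryption "quick check") instead of using a real IV.
    prefix = os.urandom(BS)
    prefix += prefix[-2:]

    out = bytearray()
    fr = bytes(BS)                        # steps 1-2: FR = all-zero IV
    out += xor(e(key, fr), prefix[:BS])   # step 3: first BS octets
    fre = e(key, bytes(out[:BS]))         # step 4
    out += xor(fre[:2], prefix[BS:])      # step 5: the two check octets
    fr = bytes(out[2:BS + 2])             # step 6: the resync, off by two

    for i in range(0, len(plaintext), BS):  # step 7: plain CFB from here
        c = xor(e(key, fr), plaintext[i:i + BS])
        out += c
        fr = c  # a short final block ends the loop anyway
    return bytes(out)

print(openpgp_cfb_encrypt(os.urandom(16), b"hello, bizarro CFB world").hex())
```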

Disclaimer: I'm a Latacora partner; I didn't write this blog post but did contribute indirectly to it. I think this is called Plumb CFB. It was invented by Colin Plumb back in the day. There are a few places where this engages in goalpost-shifting that seems less than helpful, even though I end up agreeing with the general thrust.

We can reasonably assume that this "security page" is served from an HTTPS web site, so it's reasonably safe against tampering. But a "Signal number" is just a phone number, something bad guys can definitely intercept if it's worth money to them, whereas a PGP key is a public key, and so you can't "intercept" it at all.

Now, Signal doesn't pretend this can't happen. It isn't a vulnerability in Signal; it's just a mistaken use case. This is not what Signal is for. Go ask Moxie: "Hey Moxie, should I be giving out Signal numbers to receive secure tip-offs from random people so that nobody can intercept them?" The safety number is only partly per-conversation. If you compare safety numbers of different conversations, you'll discover that one half of them is always the same (which half that is changes depending on the conversation).

This part is the fingerprint of your personal key. The Signal blog states that "we designed the safety number format to be a sorted concatenation of two 30-digit individual numeric fingerprints." Yes, I see. You'd need to figure out which is "your" half, which the application as it exists today doesn't help you do, since that's not what they're going for.
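A toy Python sketch of that idea, to make the structure concrete (the derivation below is illustrative only, not Signal's actual fingerprint KDF, which iterates SHA-512 over the identity key material):

```python
import hashlib

def numeric_fingerprint(identity_key: bytes, digits: int = 30) -> str:
    # Illustrative only: derive a fixed-length decimal string from a key.
    digest = hashlib.sha512(identity_key).digest()
    return str(int.from_bytes(digest, "big")).zfill(digits)[:digits]

def safety_number(my_key: bytes, their_key: bytes) -> str:
    # Sorted concatenation: both phones display the same 60 digits,
    # while each half remains a per-user fingerprint, as described above.
    return "".join(sorted([numeric_fingerprint(my_key),
                           numeric_fingerprint(their_key)]))

assert safety_number(b"alice-key", b"bob-key") == safety_number(b"bob-key", b"alice-key")
```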

The person initiating would need to send something to establish a conversation, like "Er, hi?" It's clunky, but less so than I feared. I can actually imagine a person doing this.

Leace on July 17: This is basically the keybase.io approach.

Although they too rely on PGP currently.

Leace on July 17: Minus the network of independent logs, as from what I remember only keybase.io operates one.

Although they do timestamp their Merkle tree roots into Bitcoin. Does anyone actually do this? Even Signal developers themselves don't! Instead, there is a plain old email address to which you are supposed to send your Signal number so that you can chat. We manage bug bounties for a bunch of different startups, and I can count on zero fingers the number of times I've had to use PGP in the past year for that.

In practice, people just send bugs with plain ol' email. I used to get about 1 or 2 PGP-encrypted emails with security bug reports per year when I managed this for my employer.

There's a dedicated team that receives security reports now, with email feeding into an automated ticketing system with automatic acknowledgements, reminders, spam filters, PagerDuty alerts, etc. There's a huge amount of tooling and workflow built around email, with a lot of integrations into all kinds of enterprise software.

Often the only sane way to trigger all this stuff is to send an email. So I think the result of removing PGP will be even more plain ol' email than anything else.

No, I'm not defending PGP. Even without the automation, every PGP-encrypted email almost certainly results in a bunch of internal plaintext emails between employees that could easily accidentally cc the wrong person, etc.

I'm just pointing out that the chances of replacing PGP with something genuinely secure for these kinds of use-cases are close to zero. So even Latacora-advised startups use plain old email for bug bounties. Why then does the blog post recommend using Signal for that? Because Signal would be better than the PGP theater. In practice, though, it doesn't matter; people are just going to use plain old email no matter what.

They're not going to encrypt their findings to you. "Bounty plz?" We also got super clever reports on that same bounty program; they just sent email. Maybe all PGP users are morons, but that's beside the point. My point is that if someone recommends something but doesn't follow their own recommendation, it is most likely that the recommendation is not well thought out and can be ignored.

In this case the recommendation to use Signal looks more like a refutation of a point brought up by PGP advocates than something anyone would actually do. Just kidding. Or forge my signature, etc. Right, that's insecure. Maybe they should, you know, put a PGP key on their website? When talking about alternatives, Signal and WhatsApp get mentioned because they're easy to use. They are. Signal is pretty secure. WhatsApp probably is as well, but we can't be sure.

That is, until it isn't anymore. WhatsApp already has a key-extraction protocol built right in for its web interface. Signal has an Electron desktop interface as well, and a shitty one at that, where the messages also get decrypted.

For WhatsApp, this means you're one line of code away from Facebook extracting your private keys. Signal is different, in that they're not a for-profit company. However, they've shown in the past that they are under no circumstances willing to support any unofficial client or to federate with another server. In fact, they've taken steps against alternative clients in the past, making it clear that only their client is allowed to use the Signal system.

The moment the Signal servers go down, Signal becomes unusable. This also leaves Signal in the same position as WhatsApp, where we are dependent on one party compiling the app and publishing it on whatever app store you prefer. If Signal has any Australian contributors and code review fails badly enough, you're basically toast the moment the Australian government gets sufficiently annoyed at a particular Signal user.

Very few real alternatives to PGP exist. There are very few actual federated messaging standards that come close to the features PGP supports. If all of these "real cryptographers" who disagree with PGP's design would design a new system that can be used the way PGP is used, I'm sure we'd see it getting good usage figures quite quickly. I don't believe this is correct. WhatsApp Web (and AFAIK Signal's desktop client) works by decrypting the original message on your phone, re-encrypting it with a different key that is shared with your web interface (this is what is being shared via the QR code when connecting to WhatsApp Web), sending it to the web client, and having your web client use the second key to decrypt.
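A toy sketch of that linking flow in Python (illustrative only, not the real WhatsApp or Signal protocol; the names are made up):

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# The per-link key is what the QR code conveys to the desktop client.
link_key = AESGCM.generate_key(bit_length=256)

def phone_relay(plaintext_from_e2e_session: bytes) -> tuple[bytes, bytes]:
    # The phone has already decrypted the real message with its own
    # long-term keys (omitted); it re-encrypts for the linked device.
    nonce = os.urandom(12)
    return nonce, AESGCM(link_key).encrypt(nonce, plaintext_from_e2e_session, None)

def desktop_receive(nonce: bytes, ciphertext: bytes) -> bytes:
    # The desktop never sees the long-term keys, only the link key.
    return AESGCM(link_key).decrypt(nonce, ciphertext, None)

nonce, ct = phone_relay(b"hi from the phone")
assert desktop_receive(nonce, ct) == b"hi from the phone"
```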

There are a few apps that attempt to exploit a few security vulnerabilities to recreate your key for you if you lose it and need to access backups, but that isn't the same as what you're describing.

WhatsApp always requires your phone to be around, whereas Signal needs it only when you link it. After linking, the desktop client is independent of the phone being online or in your vicinity or the number being in your possession.


