Google’s Revamped Gmail Could Take Encryption Mainstream (wired.com)
131 points by id on April 23, 2014 | 89 comments


I would love it if Google did this, but I see two significant problems.

1. Who stores my private key? Google? Chrome? Both options seem troublesome.

2. "Also, Google wouldn’t be able to scan and index the text of your e-mails. That’s a problem if you need to search for old emails not stored on your own machine. It could be a real issue for Google’s business model as well, which involves scanning the text of emails in order to place contextual advertising."

It's unclear that Google would give up arguably their greatest personal-data asset (better than search, imo), which is of course the key to their whole business model.


> Who stores my private key? Google? Chrome? Both seem troublesome

If you use the OpenPGP plugin for Roundcube, you paste your key into a form on the settings page whilst logged in, and it gets stored in your browser's localStorage (never sent to the server). Of course, with this system, the key needs to be managed externally and pasted into each browser you use, and you need to trust that the server doesn't send modified JS which steals your key.

To avoid having to manage the keypair externally, it could be encrypted with a password and then sent to the server; whenever you need to use a new browser, it would retrieve the encrypted key from the server, decrypt it with your supplied password, and store it in localStorage. Or perhaps it should remain encrypted in localStorage: when you log in, it is retrieved from localStorage, decrypted with a supplied password, and kept in a global variable.
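A minimal sketch of the first flow, assuming Node-style crypto standing in for the browser's WebCrypto (all names here are illustrative, not any real product's API):

    import { scryptSync, randomBytes, createCipheriv, createDecipheriv } from 'crypto';

    interface WrappedKey { salt: Buffer; iv: Buffer; tag: Buffer; ciphertext: Buffer; }

    // Wrap the private key under a key derived from the user's password.
    // The resulting blob can live server-side: without the password it is
    // just AES-256-GCM ciphertext.
    function wrapPrivateKey(privateKeyArmored: string, password: string): WrappedKey {
      const salt = randomBytes(16);
      const iv = randomBytes(12);
      const kek = scryptSync(password, salt, 32); // slow KDF: password -> key-encryption key
      const cipher = createCipheriv('aes-256-gcm', kek, iv);
      const ciphertext = Buffer.concat([cipher.update(privateKeyArmored, 'utf8'), cipher.final()]);
      return { salt, iv, tag: cipher.getAuthTag(), ciphertext };
    }

    // On a new browser: fetch the blob from the server, unwrap it with the
    // password, then keep the plaintext key in localStorage (or, per the
    // second option above, only ever in memory).
    function unwrapPrivateKey(blob: WrappedKey, password: string): string {
      const kek = scryptSync(password, blob.salt, 32);
      const decipher = createDecipheriv('aes-256-gcm', kek, blob.iv);
      decipher.setAuthTag(blob.tag);
      return Buffer.concat([decipher.update(blob.ciphertext), decipher.final()]).toString('utf8');
    }

Note this doesn't remove the trust problem below: the server can still ship JS that grabs the password as you type it.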

JavaScript crypto in the browser works... until the person running the server decides to steal your key when you log in, or until somebody finds an XSS flaw in the app.


Why can't the browser do the crypto on your input before it even gets to the site/JS? E.g., expose a secure input and output API (say, a new type of HTML textarea and markup tag). Whenever you start entering text, or when a secure element enters the viewport, the browser looks up the key for the site (stored locally, like how it remembers passwords) and encrypts/decrypts the content. All the site/JS sees is gibberish.


It could. But it hasn't been done yet. And as rcxdude stated, it would need to be done in such a way that you can't create a fake secure textarea using HTML.

Perhaps when focusing on one, all other scripts on the page stop, no events or timeouts are triggered, everything other than the secure textarea is ghosted out, and something big and obvious in the browser's chrome lights up to show you that you are actually focused on a "secure textarea" and not a normal one.


That sounds more like it should be a separate window/tab/sheet of browser chrome.


You will also need a mechanism for marking the input as secure which can't be duplicated by a malicious webpage.


> 2. "Also, Google wouldn’t be able to scan and index the text of your e-mails. That’s a problem if you need to search for old emails not stored on your own machine. It could be a real issue for Google’s business model as well, which involves scanning the text of emails in order to place contextual advertising."

Caveat: I am no expert whatsoever in crypto/CS, so understand this is almost certainly non-workable for some reason (which is what I'm trying to understand). Why couldn't Google hash every word in an email and store the list of hashes alongside the encrypted version? Then when you search for something, the search terms are hashed as well, and the service searches for the hashed terms amongst the lists of hashes.
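For concreteness, roughly what's being proposed (an illustrative sketch; the replies explain why it's weaker than it looks):

    import { createHash } from 'crypto';

    const sha256 = (s: string) => createHash('sha256').update(s).digest('hex');

    // Index: store the set of per-word hashes alongside the encrypted email.
    function indexEmail(plaintextBody: string): Set<string> {
      const words = plaintextBody.toLowerCase().split(/\W+/).filter(Boolean);
      return new Set(words.map(sha256));
    }

    // Search: hash the query terms the same way and match against the index.
    function matchesQuery(index: Set<string>, query: string): boolean {
      const terms = query.toLowerCase().split(/\W+/).filter(Boolean);
      return terms.every(term => index.has(sha256(term)));
    }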

Edit: Thanks everyone!


This requires what is called homomorphic encryption: https://en.wikipedia.org/wiki/Homomorphic_encryption

There seems to be active research done in this field, e.g.: http://research.microsoft.com/en-us/people/klauter/cryptosto... http://research.microsoft.com/en-us/um/people/senyk/slides/e...

I haven't read all those papers, so I'm not sure how close it is to working, but from what I have read I'm not sure it is even practical for searching your own mail. For Google indexing everybody's email, it would be contradictory, as indexing an email basically reveals its content to the party using the index for lookups.


Not sure such an approach is needed. The same key used to encrypt email could be used to encrypt a search catalog. The user decrypts the entire catalog when a search needs to be done. The risk of the catalog getting too big could be mitigated by making the indexer constrain the catalog to emails from the last 30 days or so, and making the complete catalog available offline. It can be tuned by letting users add important older emails to the catalog, and so on.
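The shape of that, sketched (hypothetical names; the catalog itself would be encrypted at rest under the mail key and decrypted client-side before this runs):

    // term -> ids of messages containing it, covering e.g. the last 30 days
    type Catalog = Record<string, string[]>;

    function searchCatalog(catalog: Catalog, query: string): string[] {
      const terms = query.toLowerCase().split(/\s+/).filter(Boolean);
      if (terms.length === 0) return [];
      // Intersect the posting lists: a hit must contain every query term.
      const sets = terms.map(t => new Set(catalog[t] ?? []));
      return [...sets.reduce((acc, s) => new Set([...acc].filter(id => s.has(id))))];
    }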

It's an approach I've used with a store-and-forward database, and it worked well for me there.


Hashing only helps when the number of possible inputs is very very large. When the number of possible inputs is "the number of words in the dictionary", or "every IPv4 address" or "every phone number" etc, then it would take a modern home computer a few seconds to generate a raintable which would make the hashes instantly reversible.
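To make that concrete: for dictionary-sized input spaces you don't even need a true rainbow table (a time-memory trade-off structure); a plain lookup table fits in memory and is built in well under a second (illustrative sketch):

    import { createHash } from 'crypto';

    // Precompute hash -> word once; afterwards every word hash in an
    // index is an O(1) lookup. A typical wordlist is only ~10^5 entries.
    function buildReverseTable(dictionary: string[]): Map<string, string> {
      const table = new Map<string, string>();
      for (const word of dictionary) {
        table.set(createHash('sha256').update(word).digest('hex'), word);
      }
      return table;
    }
    // table.get(hashFromTheIndex) recovers the plaintext word instantly.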


>raintable

I knew what you meant, but if anyone else was confused, the correct term is "rainbow table."

https://en.wikipedia.org/wiki/Rainbow_table


In order to do this, Google would need access to the plaintext of the email message, which defeats half of the purpose of the supposed encryption initiative.


Because that's not really making it private. If you have each word hashed and a rainbow table full of hashes matched to words, it's very easily reversible. It doesn't really add any privacy.

Not to mention the computational and storage cost.


Would a per-user salt help with the rainbow table issue?


No. With a list of the user's hashes and their salt, you could reverse it all in a matter of minutes or seconds on even a single low-spec machine. It would offer nothing over just storing the plain text.


So store the salt on the client, and reindex if the salt is lost?


Are you sure? I can think of weaknesses caused by statistical properties of a large corpus of hashed terms (all with the same salt) clustered together in documents that follow a natural-language distribution, but a matter of minutes or seconds? Why doesn't that simplicity apply to salted, hashed passwords?


In theory, passwords are random combinations of words and/or characters, so you cannot 'guess' more than a character at a time.

The number of combinations scales up very quickly with length.

http://www.oxforddictionaries.com/us/words/the-oec-facts-abo...

You can guess 90% of the words in the OEC with only 7,000 words in your rainbow table. I suspect that is a pretty fair representation of e-mail text. Even if it is the 1,000,000 number... [As of 2011, commercial products are available that claim the ability to test up to 2,800,000,000 passwords per second on a standard desktop computer using a high-end graphics processor.] http://en.wikipedia.org/wiki/Password_strength#Password_gues...

So ya. If it is a per-word hash, there is no real security value if you have the salt.


The plaintext is dictionary words; given that the hash and the salt are known, it is really quick to hash every dictionary word for a single user (see the sketch below). There are 99,171 words in /usr/share/dict/words, and off-the-shelf hardware can do 1,300 million SHA-1 hashes per second. [0]

[0] http://security.stackexchange.com/questions/8607/how-quickly...
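Put differently, a known salt just forces the attacker to redo the table once per user, e.g. (sketch, assuming the salt is simply prepended):

    import { createHash } from 'crypto';

    // Rebuild the lookup table for one user with their salt: ~10^5
    // hashes, i.e. a fraction of a millisecond at the rates quoted above.
    function buildSaltedTable(dictionary: string[], salt: string): Map<string, string> {
      const table = new Map<string, string>();
      for (const word of dictionary) {
        table.set(createHash('sha256').update(salt + word).digest('hex'), word);
      }
      return table;
    }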


There are only around a million (or a couple million if you're generous with conjugations, lulzspeak, etc.) English words.

If you figure passwords draw from an alphabet of 75 characters, then one million possibilities corresponds to around 3.2 randomly chosen characters (75^3.2 ≈ 10^6). If you step up to 4 randomly chosen characters, you'd cover about 31 million combinations, and hence have about the same strength as reversing a hash of 31 million different words.

A 4 character password is woefully weak by modern standards.


The client can generate index terms by combining a secret key with each term while hashing it (a keyed hash; sketched below).

See one particular approach in: http://people.csail.mit.edu/akiezun/encrypted-search-report....
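The gist of the keyed-hash idea, sketched (illustrative names): the server stores and compares opaque tags, and without the client-held key it can't mount the dictionary attack above.

    import { createHmac } from 'crypto';

    // Tag a term under the client's secret key. Indexing uploads the tag
    // of every word; searching uploads the tag of each query term; the
    // server just does equality matching on tags.
    function termTag(secretKey: Buffer, term: string): string {
      return createHmac('sha256', secretKey).update(term.toLowerCase()).digest('hex');
    }

It still leaks access patterns (which messages share terms, which terms you search repeatedly), which is the kind of leakage the linked report analyzes.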


There are only probably a million or so frequently-appearing terms in English-language email messages (including the most common proper nouns, though of course not all of the long tail). Computing a million hashes with a custom salt is still trivial for modern computers.


"Why doesn't that simplicity apply for salted hashed passwords?"

It does. Don't store salted hashes for passwords.


Huh? What do you store instead?

My understanding of why this doesn't work for passwords is that the number of possible passwords (the size of the rainbow table needed for each salt) is much larger than the number of words in the English language.


Look up "bcrypt" and "scrypt". Using salted hashes for storing passwords is still common, but only for legacy reasons. It's considered insecure nowadays. Machines have got very very good at hashing over the last few years.


bcrypt is still a hash; it even incorporates a salt. Of course, SHA-256 is too fast to compute; bcrypt fixes that.

I feel that this whole thread is based on cargo-culting. There are plenty of (interesting) blog posts with misleading titles like "you shouldn't use hashes for passwords", which are correctly explaining why cryptographic hash functions are not well suited to hashing passwords.

However, the solution is to use another method to hash the password. But it's still a hash.

A key derivation function used to process a password produces a hash if you intend to use it as a hash, i.e. to compare it with another hash. If you use it to encrypt something, then that would be a key.

Even the scrypt paper (http://www.tarsnap.com/scrypt/scrypt.pdf) says:

"Password-based key derivation functions are used for two primary purposes: First, to hash passwords so that an attacker who gains access to a password file does not immediately possess the [..]" (emphasis mine)


We say "don't just store salted hashes" because when people hear that, they think one run of SHA256 (or MD5 or whoknows) with a salt. You shouldn't do that. You should use a prepared method, because it's easy and (much more likely to be) correct. If you want to bodge together your own solution based on some large iterations of SHA256, you can, but it will take you much longer than just dropping in (b/s)crypt, and even longer if you have to match the (b/s)crypt feature set it provides out of the box. And you still have to face the fact that your SHA256 solution may not be as secure as you think because it's still a fast hash function, and an adversary may be able to process it much faster than you expect, whereas the (b/s)crypt have considered that in their design.

Pretty much by definition, advice provided to low-crypto-knowledge people can not depend on high levels of crypto knowledge. So, yes, pedantically you can point out that (b/s)crypt still produces a hash by the technical definition of hash. But you only muddy the waters for the low-knowledge people by doing so, and you probably shouldn't hold your breath waiting for plaudits from the high-knowledge people for doing so.


Since the original topic was how to build a server-side index that can let users find relevant documents in an encrypted corpus without revealing the corpus to said server (e.g. [1]), I think that whoever diverts the thread into a discussion about the right function to use with passwords is the one being pedantic and missing the point.

Is it possible to build an index for an encrypted corpus that preserves the user's privacy even in case the server is taken over?

I don't know ([2], [3]?). But I certainly know that the main weakness is not that it's impossible to hash terms in such a way that a weak machine cannot reverse significant portions of the index in minutes/days.

Seeing the word "hash" in this context clearly doesn't specify a particular hash function. Saying that using a salt fixes the issue with rainbow tables too doesn't specify whether we are talking about naive md5/sha salted hash or a KDF. These details were irrelevant to the discussion.

1. http://people.csail.mit.edu/akiezun/encrypted-search-report....
2. http://rd.springer.com/chapter/10.1007%2F11496137_30
3. http://www.cs.ucla.edu/~rafail/PUBLIC/SSE.ppt


Thanks for the info, I will take a look at those.


There is a lot of research in this area. Here is a paper on how to search on encrypted data:

http://www.cs.berkeley.edu/~dawnsong/papers/se.pdf

One thing you need to be wary of is known-plaintext attacks. Even if you use a salt and an expensive hash, if I send you an e-mail, know its contents, and can obtain the ciphertext or its digests, you are vulnerable. There are ways past this, of course, but it is one example of a valid attack.

The PDF I referenced has what seems to me a pretty workable approach, and lets you do hidden searches, boolean searches, phrases, proximity queries, etc.


> this is almost certainly non-workable for some reason

Why is it necessary to have the ability to search past emails? Could there be any value in not looking back?

(It's ok to laugh at this.)


IIRC, Google doesn't mine information from paid-for accounts, so this encryption could be a feature for accounts where the user is paying Google for the service. That's a pretty small number of accounts though, I would think.


What about searching?


Also priority inbox and antispam.


Issue #2 could potentially be solved by a homomorphic cryptosystem in the future.


In response to 2)

I would say the best approach (from Google's perspective) would be a switch you can flip on a per-email basis. So by default emails aren't secure, but you can activate it when you need it. I wonder if the loss of data would be worth the increase in users, though.


> I wonder if the loss of data would be worth the increase in users though.

I think that's the wrong way for Google to look at it. The question should be "how many users will I lose if I don't enable something like this in my services soon?"

The best thing American companies, especially large ones like Google, could do right now to make sure regions like Latin America or Europe don't take a "you have to build your datacenter here" stance and make things much more expensive for them, is to ensure that the data is private even when it's on American servers, because they've adopted trustless protocols and even they can't see what's inside.

That's what's going to stop users and institutions from other countries from bailing on American corporations like Google. So losing a little data signal from the ad tracking data is hardly a big loss in comparison, and could help Google regain at least some of the trust they lost post-NSA revelations.


> I think that's the wrong way for Google to look at it. The question should be "how many users will I lose if I don't enable something like this in my services soon?"

Since this answer is "an insignificant amount", I think you'd be happier if Google looked at it the wrong way.


Insignificant only when you look at the way things work right now. This is the problem with monopolies such as Comcast, too. They think "hey why bother improving the service, and cost us money, when we're not going to lose customers to anyone anyway"?

That only works when there's no competition or the competition isn't serious enough. But as soon as something like Google Fiber comes out, they start freaking out, because they know customers have developed a lot of resentment towards them when they kept thinking "why bother improving the service?", and they will quit them as soon as a much better alternative is available.

The same can happen to Google, too. Imagine if this was Microsoft coming out today and saying they are going to implement PGP or something even better for true privacy - and imagine then they would do campaigns like Scroogled and Gmail Man. Now do you still think it's a mistake for Google to do this from a strategic point of view, so they don't lose customers in the future?

If Google wants Gmail to remain the #1 e-mail service, then they need to do such moves first, not as a reaction to other competitors. Because by then the exodus may be too hard to reverse, if the perception of Google's services becomes much poorer and they've lost most users' loyalty (not necessarily the users themselves up to that point - but the loyalty of those users).


No, but speculation makes any conclusion you want to draw possible.

If you want to argue that Microsoft will come out and implement PGP, that's one thing. Hell, maybe Google will decide that it's more profitable to shut down Gmail. Would that also be a mistake from a strategic point of view? I couldn't fault them for it.

But all you have are wishes and fantasies. That's not strategy. That's a pipe dream.


That would make sense anyway from a UI perspective. The 'encrypt this email' option can clearly only work if google is able to reliably determine (or have you provide) the correct public key for the address or person that you are emailing.


In the current state of affairs, lawful-intercept/PRISM devices on Google's network can potentially see everything behind the SSL layer. The global passive attacker can see all incoming and outgoing emails between providers that don't use StartTLS. Google needs to provide a US person's email history older than 6 months in response to an administrative subpoena, and needs a warrant for anything more recent.

Let's say Google made a key directory and end-to-end encryption possible in Gmail, but Google held the keys. This would frustrate blanket lawful intercept and the global passive attacker. Google might be able to legally argue that turning over keys requires a proper warrant. They might be able to defend against falsifying the key directory because of external audits. This is roughly where Apple is with iMessage.

If Google goes a step further and only stores an encrypted key or keeps the key client-side, now we have the legal terrain of the Government asking a software provider to maliciously modify their software to obtain keys or a passphrase. I think this legal battle is coming. We might be able to preempt it with software verification techniques, but it is definitely coming down the pike.


There is no "blanket lawful intercept" in the US or any "PRISM devices on Google's network." You seem to be confusing several different NSA leaks.

GCHQ's MUSCULAR program apparently tapped Google's leased lines into the UK, giving them access to Google's unencrypted internal traffic entering that country. That's the closest report to any security agency having devices on Google's network. Google says they no longer have unencrypted traffic on those connections.

PRISM is an NSA system that accesses FBI DITU electronic wiretap data, which contains data the companies send to the FBI for specific users the government presented a court order for.


So PRISM consists of FBI electronic-wiretap portals/devices within a company's infrastructure.

Now we know the FBI will generally require a court order or an NSL before accessing the portal to request user information.

No one has been clear about what the role of the NSA in these devices is.

1. They could just be administering the system for the FBI.

2. They could be copying all the data that goes to the FBI into their corporate store.

3. They could be applying the 51%-probability-foreign standard to any data they can see with these devices.

I've seen this question asked of numerous tech executives and no one is able to give a good answer, so I'm suspicious. What Bruce Schneier keeps telling us is that the NSA's email interception is robust: they wrap a program in multiple overlapping legal justifications.


The FBI doesn't have devices in the companies' infrastructure either. The companies configure their own servers to forward specific users' data to the FBI's servers off-premises. http://www.wired.co.uk/news/archive/2013-06/12/google-prism-...


Google is the only company to deny having FBI/NSA devices on their network.

When Spylockout asked Apple's general counsel about devices on their network, we could not get a denial. Apple also made iMessage e2e encrypted for a reason...

I'm really supportive of Google's efforts to genuinely protect user privacy. I think they are correct if they believe the only ultimate guarantee of user privacy is end to end encryption in the current legal environment in the US.


Here's Apple's denial: http://www.apple.com/apples-commitment-to-customer-privacy/

I would be surprised if any of the companies allowed FBI equipment to have direct access to all their users' data. Not only is it a bad idea from a privacy perspective, but it is a terrible idea from an engineering perspective as well.


If I were going to start this thread over again, I'd have done a better job differentiating between the concepts of in the data center and in the server. I'd also do a better job of discussing Section 215 of the Patriot Act, which is in transparency reports, and Section 702 of the FISA Amendments Act, which is not.

Thanks for improving my arguments for next time :-)


FISA requests started appearing in transparency reports this year. http://googleblog.blogspot.com/2014/02/shedding-some-light-o...

Neither the FBI nor the NSA has code related to data requests running in any of these companies' servers either, so the in-server versus in-datacenter distinction does not make a difference.


> PRISM devices on Google's network can potentially see everything behind the SSL layer

Is there evidence of this, or is this your assumption?

I'm aware that NSA monitored unencrypted data transfers between data centers on Google's dedicated fiber circuits, but as far as I know that was accomplished through telcos and not Google.

Google said they've now encrypted the circuits between data centers, and they've strenuously denied allowing monitoring devices to reside on their network.

Is there another exploit I'm not aware of?


> they've strenuously denied allowing monitoring devices to reside on their network

This is potentially a legal requirement for them, issued by the government, that they have no control over.


> potentially a legal requirement for them

We've learned more than enough real facts from Snowden's disclosures that we don't need to make up hypotheticals.

There is plenty of evidence that security services are out of control, and that corporations are collaborating with them.

There is no evidence that Google allowed any government agency to embed monitoring devices in their network.


It's not hypothetical. Such restrictions are routine in other arms of government investigation, eg. money laundering.


IANAL but I did write a "lawful intercept" (LI) spec for a type of infrastructure router. As far as I know, most places in what we'd call "the free world" don't compel carriers to do anything to thwart users from protecting their payloads, nor do they compel carriers to actively deny service to encrypted payloads or even discourage use of encryption. Even if individuals can be compelled to hand over keys, it isn't the carriers' problem.

Empirically, we know that, for about a year, Google, Yahoo, etc. have been bitching about PRISM, but not doing anything to make it more convenient for their customers to encrypt their payloads. Is there some secret law? Heavy-handed pressure? Or did they just spend a lot of time wishing the privacy thing would just go away?

This will be an interesting test to see just how much freedom Our Benevolent Overlords think we can handle.


Bummed that "could" here means "could theoretically", not "might".


Even if Google does implement this, where is everyone going to get a shiny new PGP key from? Is Google going to create it? Is the user going to create it? How about linking it into the web of trust? Who is going to go around teaching all the users how to securely verify each other's keys, so that the public-key part of the system isn't a complete and utter waste of time?

Don't get me wrong, I would love it if loads more people were to get a PGP key and enter the PGP web of trust (like, say, paypal). It's just that over the last ten or so years, I have found remarkably few people who both have a PGP key, and care about it.


I guess I'm going to err on the side of hope, so to me it seems Wired does have some sort of inside information. Not very much, apparently.

But a pre-product announcement from Google doesn't mean much, since they work on so many products. If they do decide to push it to "beta" (ha ha) I'll happily use it.

Either way, I would hope public key crypto does become more mainstream, especially in email.


The way I see Mailpile becoming popular and putting a dent in dragnet surveillance is not by everyone downloading and running it themselves, but by a trusted third party running it and holding keys for a relatively small group of people.

Some will complain that this way the third party can still access the key and mail encrypted for it. That's true but also a massive step up from everyone on the wire being able to read your messages.

Now, an intelligence agency can just siphon anything and everything. With a large number of independent e-mail providers, only narrow and targeted surveillance would be feasible.

All that without giving up any convenience of webmail.


I'm very wary of upvoting such posts lately, even related to Google. On one hand, we do need a large service provider like Google to adopt end to end encryption in e-mail and popular chat apps, because otherwise it's going to take forever, if we just try to convince people one by one.

On the other hand, Google's corporate goals are very much against end-to-end encryption and strong privacy, and they're even lobbying [1] against it. So it remains to be seen if it's an actual useful thing from Google, or just PR. And I realize that even if it's mainly for PR, that PR could lead other companies to want the same kind of PR, too, and implement such measures as well - but hopefully not in a gimmicky/not very useful way.

Making something like this available at all in major services would still be a big win; however, it's still a far cry from being enabled by default (the way Telegram doesn't enable end-to-end encryption by default, even though its main marketing message is "the most secure chat app in the world", which thus doesn't hold for most people using it).

[1] - http://www.vice.com/read/are-google-and-facebook-just-preten...


Beyond this comment: https://news.ycombinator.com/item?id=7634928 a further question:

How does opposition to Rand Paul's "Fourth Amendment Protection Act" even make sense for ITAPS? The federal FAPA says exactly one thing: electronic records held by third parties are inadmissible in criminal proceedings unless obtained under consent or under color of a specific warrant demonstrating cause.

What are the business implications of such a law? Google and Facebook have no obvious commercial interest in the outcome of random criminal cases.

Isn't it a lot more likely that Vice just doesn't know what it's talking about and has gotten the bill wrong? That rather than opposing the "Fourth Amendment Protection Act", they're opposing individual state FAPAs derived from the 10th Amendment Center's Model State FAPA, which can hold a corporation in violation of state law for honoring a federal subpoena or court order, which would (a) create potentially 50 different new data protection policies and (b) put Internet companies in an absolutely impossible position of needing to choose between violating either a federal law or a state law?


> On the other hand, Google's corporate goals are very much against end-to-end encryption and strong privacy

If there's money in privacy, I don't think that would be the case. If Google made web-of-trust and key exchange easy, I'd pay to use it. I already pay for other things Google can't monetize with ads. So it isn't as if Google fails to see alternative monetization approaches.


Has anyone else noticed they spelled pidgin as "pidgeon"? It's a really weird typo, especially when you consider that the link points to the right site, and it can't be a spelling correction, as both pidgin and pigeon are words but pidgeon is not [1]. It's also weird that a news outlet such as Wired would make such a mistake.

[1] Though you might say it's a meta-pidgin borne out of the hybridization of pidgin and pigeon.


How would that be a 'meta-pidgin'? A 'meta-pidgin' would be a pidgin formed by combining two separate pidgins.

What you are describing in [1] is a language change due to user error, not the creation of a pidgin.


That was a joke: seeing how a pidgin is a hybrid of two languages, "pidgeon" is a hybrid of two words. Never mind.


What about Adam Langley's own Pond protocol, which last I checked replaced OTR with TextSecure's Axolotl ratchet? But I think someone was saying it's quite a nightmare from a UX point of view for now. Any improvements there lately? And couldn't TextSecure be used effectively as e-mail, since it's async, but just put an email-like UI on top of it?

https://pond.imperialviolet.org/


Even Adam Langley doesn't think that large service providers should roll out Pond, presumably because Langley is serious about cryptography and understands that systems like this can take years to iron out and aren't ready for deployment simply because somebody wrote them up.

Axolotl is Trevor Perrin's protocol; it doesn't belong to TextSecure. However: TextSecure procured a pretty significant block of Trevor Perrin's attention to help review and improve their cryptography.

TextSecure is also, as I understand it, significantly older than Pond.


Google: if you wanted to take encryption mainstream, you shouldn't have gone miles out of your way to sabotage compatibility with web email encryption plugins.

https://support.mozilla.org/en-US/questions/831463


Question: even if Google were to actually do this, given that Gmail runs on server-provided JS code inside the browser, doesn't this carry the same problems as all other in-browser encryption applications?

Isn't this type of in-browser encryption code considered broken, no matter what way you go about it?


Could? Does the article imply that Google plans to do it? I didn't see anything from Google about such plans. Or is it simply "if Google ever does that it would be great", etc.?

Google's approach of harvesting the data from e-mail doesn't fit with end-to-end encryption.


Even if they did, I'll call anybody who thinks this would make anything better an idiot.

Google is in bed with the CIA via In-Q-Tel. Lord knows what else is behind this. They even work together: http://www.wired.com/2010/07/exclusive-google-cia/

People like to forget the news of yesterday. All is so shiny. All is so well.


Implying that encryption is not already mainstream. What do you think that lock symbol in your browser means?


It means your connection to the server is encrypted and that your browser trusts their certificate, preventing things like sniffing and MITM attacks. It does not mean that emails on the server are encrypted, which is what this article is about.


... grypto! ...


I would, quite happily, forgive Google for everything if they do what's described in this article.


You would forgive Google for spending millions of dollars over the last decade to work harder than virtually any other tech company on the Internet to resist NSA surveillance, thanklessly and quietly, or, when not quietly, under the duress of thousands of shrill, under-informed detractors? For essentially orchestrating the worldwide deployment of TLS forward secrecy, for more or less inventing browser certificate pinning, for donating high-quality crypto code to NSS and OpenSSL --- by the way, also, for finding Heartbleed and publishing it, rather than holding it as a "competitive advantage" --- and for killing probably several thousand browser RCEs? And, in all of this, for spending god knows how much money on lawyers behind the scenes?

That's generous.


Working harder than others simply makes them the least bad. They may have struggled valiantly to keep data secured, but they failed often enough. Google did more than many to pioneer the model of centralised information-gathering as a commercial strategy, which is part of what made surveillance so rewarding. I think we expect far too little of companies that manage our data, and Google manages far more than most.

Client-side crypto, along a PGP model, would be a welcome admission that Google can't secure everyone's email within their network. It would be a step away from the idea that we simply have to trust utility-scale cloud providers with our data. I see that as putting right a mistake.

EDIT: To de-escalate the argument, I should say that we're probably perceiving 'Google' differently. Their security people are excellent people, and Google has undertaken many excellent security initiatives. Many people at Google are on the side of the angels. As you probably know these people and their work much better than I do, I can imagine that your picture of Google's activities is different to mine. But from a consumer's perspective, Google is much more ambiguous. As a matter of corporate strategy they have pooled vast amounts of customer data via the integration of their services, and they have created a security risk by doing so. When faced with a choice between doing something that might make users safer but might harm their ability to gather data on them, I don't believe Google as a company has often chosen the former.


Working harder than everyone else does not simply make something "the least bad". It also makes them "the best".


The thing about privacy against a threat like PRISM and other mass-surveillance threats is that there is a threshold below which efforts don't actually protect.

End-to-end encryption is a pretty reasonable threshold. Skype proved it could be convenient enough for grandma (and yes, I'm aware that user-controlled keys for store-and-forward are more difficult).

So, yeah. Below that threshold the best is just least bad. I don't see why you are so touchy about that. Many people here foresaw that the government would be so intransigent that, unless services implemented open and verifiable tools for enabling end-to-end encryption, anything short of that would be ineffective in restoring trust in the services we use.


> End-to-end encryption is a pretty reasonable threshold. Skype proved it could be convenient enough for grandma

Err... are you aware they give out keys to certain governments and send different code to certain clients (e.g. within China)? In privacy terms they are basically the same as Google now with its centralized model and SSL, just using some obfuscated vagaries of P2P-slash-centralized communication paths (which they refuse to document openly) instead of centralized store-and-forward.


I usually respect your comments tptacek, but the fact is that Google have acted and continue to act to strongly effect a centralization of much personal information on the internet in an unencrypted form accessible to parts of the company and its host governments. That's just not cool, versus the traditional decentralized model.

All of their mitigation efforts are only lipstick on the fundamental pig here. Yes, they're not the only ones. Yes, ease of use. But that doesn't change the model.


Yea, but they do all that to secure their services so they can harvest the information. Re "rather than holding it as a 'competitive advantage'": doing so gives them the "competitive advantage" of creating the illusion of being some "white knight" protector of the internet and its users. I don't even mind Google as much as I mind the weird delusional back-flips people do in order to forget the fact that Google is an advertising/tracking company. They have no business plan if people stop trusting the internet or use browsers that make tracking/analytics data harder for them to collect.


I agree that Google's efforts, as you have described, have been exemplary.

Have they not also gone along with the NSA in what appear to be violations of the 4th amendment?

I'm mostly ignorant of this stuff, but reading the quote below from the Guardian makes them look complicit. I think it's fine to be complicit when you're powerless, but Google is not powerless. What bad thing would have happened to Larry Page if he had said "Uh, we're not handing over the data." Would he actually have been arrested?

(Just to be clear, you've probably thought about this for 400 hours more than I have, so if I'm totally wrong, sorry.)

From the Guardian [1]:

"The senior lawyer for the National Security Agency stated on Wednesday that US technology companies were fully aware of the surveillance agency’s widespread collection of data.

Rajesh De, the NSA general counsel, said all communications content and associated metadata harvested by the NSA under a 2008 surveillance law occurred with the knowledge of the companies – both for the internet collection program known as Prism and for the so-called “upstream” collection of communications moving across the internet.

Asked during a Wednesday hearing of the US government’s institutional privacy watchdog if collection under the law, known as Section 702 or the Fisa Amendments Act, occurred with the “full knowledge and assistance of any company from which information is obtained,” De replied: “Yes.”"

[1]: http://www.theguardian.com/world/2014/mar/19/us-tech-giants-...


Surely he means Google's anti-competitive and anti-employee pacts with other companies in the area, their desire to take away privacy via forcing people to use their real name across all of their products, and their commitment to forcing mobile paradigms into a desktop environment.

It would be generous.


Got it. Internet company. Guilty. Sure thing.


It sounds like your interests and focuses differ from others in this area. I'm not sure why you feel it justifies so much snark.


Perhaps. My interests and focuses are in Internet and application security and privacy; that's been my career for the past 15 years. I'm not sure what the incompatible other interest might be.


Well said.


Wow, you've been holding a grudge over Google killing Reader for a long time :)

In general, I'd say Google has done mostly great things for the Internet, and good things for Internet security. They're a huge target -- their biggest sin is that they're inherently centralizing a lot of stuff which used to be decentralized (mail servers, etc.). In general that increases security because Google does a better job in software and operations than virtually anyone else, but it creates a big juicy target.

agl alone probably makes up for any negatives Google has brought to security. The other thing I hate them for was Rubin's hatred of security and thus no platform security on Android, but that's being fixed, and ChromeOS is pretty amazing in contrast.


This is what happens when you piss off such a large number of people. Suddenly end-to-end encrypted e-mail not only becomes a differentiator but something that could get regular <gender-neutral grandparent> excited.

For the hundredth time: Thank you, Snowden!



