
"Which is a nice way of knowing that Google employees aren't randomly snooping on your files."

does it do that? Or does it just show you times that Google is willing to tell you Google employees snooped on your files?



The generic answer to these concerns is usually the following: if a well-known, heavily scrutinized company such as Google writes that kind of promise into public documentation that forms part of a binding contract with paying customers, there is a good chance it won't purposefully break that agreement, and risk being caught by an audit, just for the sake of accessing someone's personal data.


That's great, but it's only true until it isn't. The moments when that idea fails (however rare) are the life-altering, permanent moments that result in irrevocable ruin for whoever dared trust the promises and honor of [faceless corporation].

The truth is twofold.

One: if the barrier can be melted according to magic rules, then it is no real barrier. It is a sweet candy coating that melts in your mouth, not in your hands.

Two: if a corporation is made of many incidental strangers who happen to share an employer for overlapping moments in time, and the system has at least one authorization bypass, then so does the audit trail.

If you don't think corporations implode, suffer from disgruntled criminal employees, sell out to rivals, go completely bankrupt, or land themselves in jail, then bet all of your secrets on the idea that what they tell you is 100% truth.


Yep, a solution exists, though. Here's how you get there:

* Strong identity: employees must be strongly identified before acting.

* Multi-party authz: nobody ever acts alone. One person can't be trusted, two people might be, M of N effectively represents the company.

* Noisy security: making a change to security parameters notifies all relevant parties in a way that intentionally avoids notification fatigue. You can't sneak a change through.

* Full auditability: even after the fact you can readily unravel what was done, seeing what the old state was, what was changed, who made the change, and who approved it.

Get those points right, plus a few other minor details, and this larger problem actually becomes tractable.
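The "multi-party authz" point above can be sketched in a few lines. This is a hypothetical illustration of the M-of-N rule, not any real company's implementation: an action on user data is only permitted once M distinct, authorized approvers, none of them the requester, have signed off.

```python
def action_permitted(requester, approvers, authorized, m=2):
    """Return True only if at least `m` distinct authorized approvers,
    excluding the requester, have approved the action.

    Illustrative sketch of M-of-N multi-party authorization; all names
    and the API shape are made up for this example.
    """
    valid = {a for a in approvers if a in authorized and a != requester}
    return len(valid) >= m


staff = {"alice", "bob", "carol"}

# A lone employee cannot act, even by "approving" their own request:
assert not action_permitted("alice", {"alice"}, staff)

# Two independent authorized approvers satisfy M = 2:
assert action_permitted("alice", {"bob", "carol"}, staff)

# An approver outside the authorized set doesn't count:
assert not action_permitted("alice", {"bob", "mallory"}, staff)
```

The point of the structure is that "one person can't be trusted" is enforced mechanically: there is no input for which the requester alone can make the check pass.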


You know, we, working at Google, are people, right? We have moral and ethical standards just like everyone else. Many (but not all) of us also aren't locked in to Google and can find employment elsewhere easily but choose not to.

The following isn't about Google as such: the thing about disgruntled criminal employees is that they don't usually come in bunches, and they don't collude because they can't easily identify each other. Which means they generally can't commit such acts and then also corrupt a whole 'nother department to cover it up.


Trusting your privacy to the moral and ethical compass of every individual at a giant corporation is incredibly foolish. If this is a widespread belief at Google, it only further erodes my trust in the company.


It's not every employee so much as any employee. As in: any employee with access to user data can check that their accesses are logged correctly.

This doesn't protect against government action, nor against Google leadership specifically targeting you. But it does prevent the (rather common) abuse of such access by regular employees.


> You know, we, working at Google, are people, right? We have moral and ethical standards just like everyone else.

Could’ve fooled me. Or maybe your standards are just particularly low. Do you mind explaining where surveillance capitalism fits into your principled worldview?


"If a well known agency such as the FBI writes that kind of promise in congressional hearings ... "

https://www.theregister.co.uk/2019/10/08/fbi_spying_abuse/

I'm not entirely sure the old generic answers apply these days...


There's at least one major difference here, which is that corporate entities don't have sovereign immunity. The CIA and NSA are immune from consequences when they systematically abuse our rights.


That may be a de facto outcome, but I don’t think there is a de jure basis for it.


Who audits Google? Serious question.


This suggests they are SOC compliant, among other things, and have been audited by an independent accounting firm: https://support.google.com/googlecloud/answer/6056694?hl=en

SOC seems to be the gold standard in terms of what enterprises are asking for these days. Not that it addresses all the concerns discussed here, but it probably does start to answer your question.


According to this: https://support.google.com/googlecloud/answer/6056650?hl=en

E&Y does (apparently), and Google is compliant with some ISO standard for software security. See "Does giving Google access to my data create a security risk? How does Google ensure that its employees do not pose a threat?"


Your assumption is that the company will knowingly access your data, but the more likely scenario is that a rogue employee working for your competitor (or simply looking to start their own startup) will access and steal your data/code/client list.


Right. As an end user, how can I actually verify that, instead of just taking Google's word for it?


This is a difficult argument to counter; for instance, are you sure that Signal can't decrypt your messages? If so, do you remain sure knowing that they can update the app?

As a security person I really can't think of any service (or piece of hardware) which I think satisfies the threat model where the provider is both clever and truly hostile.


Yes, you're pretty sure about Signal, because it's end-to-end encrypted; you can verify what the binary you're running is actually doing (you have to have the expertise to do so, but then, even if Signal was written entirely in browser Javascript, you'd still need the cryptography expertise to verify it). By design, Signal doesn't depend on its serverside deployment for cryptographic security. That's not true of G Suite.


You can verify what the binary did while you were watching. You can't verify what it did before, or what it will do next. OP said hostile and clever, and part of clever is only being hostile when nobody is watching. Apps that don't snoop constantly, delay transmissions, and hide transmissions in existing and expected communication channels are much harder to catch.


No, I'm saying, you can crack open the binary and see what it's capable of doing. If Signal wanted, it could obfuscate itself in various ways to make that hard, but (1) you'd notice that pretty quickly (that the code was hinky) and (2) Signal does not in fact want to do that.

You personally might not be able to do that (but then, you personally might not be able to spot a defective authenticated key exchange either), but people can. Once someone spots the "Signal Backdoor", that's it for Signal. There's a lot of incentive to do that legwork.

In contrast, G Suite could be comprehensively backdoored, and you'd have no way of knowing, no matter what your level of systems programming competence. I'm not saying they are backdoored; I rather doubt that they are, and I myself trust G Suite more than most other applications I use. But the point is, the trust you have to have in G Suite is different and more demanding than the trust you have to have in Signal.


This assumes that everyone gets the same binary, and the binary doesn't get updated. There is no reason that the binary delivered to your phone by the Google Play Store needs to be the same as the binary delivered to a reporter's phone.

Even if we can trust the binary (and I agree, with Signal as the example we probably can), the application distribution mechanism and the underlying OS and its update mechanisms are still a problem.
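One concrete mitigation for the "same binary" concern is digest comparison: Signal's Android builds are intended to be reproducible, so anyone can hash the APK their device received and compare it against a build compiled independently from source. A minimal sketch in Python (the file names are illustrative):

```python
import hashlib


def sha256_of(path):
    """Stream a file in chunks and return its hex SHA-256 digest."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 16), b""):
            h.update(chunk)
    return h.hexdigest()


# Hypothetical usage: hash the APK you were served and an APK built
# independently from the published source. Any difference in the bytes
# produces a different digest, exposing a per-target binary.
#
#   received = sha256_of("Signal-play-release.apk")
#   independent = sha256_of("Signal-built-from-source.apk")
#   assert received == independent
```

This doesn't tell you the code is benign, only that everyone (you, the reporter, the auditor) got the same bytes, which is what defeats the targeted-binary attack described above.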


> There is no reason that the binary delivered to your phone by the Google Play Store needs to be the same as the binary delivered to a reporter's phone.

That's moving the goalposts to individual targeting, though. The individual targeting scenario is not that interesting because, as the winged quote from the technical literature goes, "YOU’RE STILL GONNA BE MOSSAD’ED UPON".


There are still mechanisms to mitigate reliance on trust.

If you truly cared, you wouldn't download it from the Play Store and you wouldn't use a stock Android ROM.

Of course that moves the problem up to the firmware level but the attack space is getting narrower.

With G Suite you rely on trust from the ground up.


> As a security person I really can't think of any service (or piece of hardware) which I think satisfies the threat model where the provider is both clever and truly hostile.

...I can. Disconnect from the internet.

It's a pain, and it won't be useful advice in many cases, but if you're a newsroom doing sensitive investigations on powerful individuals? I could make a case for it. Although, you'd want to ditch G Suite.

(You can certainly think up clever attacks that work without internet, but disconnecting really does remove most vectors.)


The threat model does not need to be that the provider as a whole is truly hostile. It could be "a rogue employee went snooping" or "the server got hacked" or "there was an access control bug."

Instead of "trust us to keep your data," what if Google said "we don't have your data"? That would give me more confidence: it makes a hostile actor's job much harder, and it's also easier to verify.


It may be that we never can say "Google employees aren't randomly snooping on your files."

We shouldn't start saying it about mechanisms that prove nearby but entirely different things, just because we'll never be able to say it definitively.


"Google is willing"? You make it sound like there's a person making a decision. The system is automated, there's no mechanism for employees to be unwilling.

A lot of effort goes into ensuring that audit trails are non-optional.
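One way an audit trail becomes structurally non-optional (a hypothetical sketch, not Google's actual design) is to make the log write and the data access the same code path, so there is no API that returns user data without first appending a record:

```python
import datetime

# Stand-in for an append-only, tamper-evident store. In a real system
# this would live outside the reach of the employee doing the access.
AUDIT_LOG = []


def read_user_file(employee, file_id, datastore):
    """Sole entry point for reading user data in this sketch.

    The log entry is appended before the data is returned, so an
    unlogged access cannot happen through this API; there is no
    "unwilling" branch an employee could take.
    """
    AUDIT_LOG.append({
        "who": employee,
        "what": file_id,
        "when": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    })
    return datastore[file_id]
```

The remaining question, as raised upthread, is whether any path around this entry point exists; the design only helps if direct access to the datastore is itself locked down.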



