The Android Flock client works with any standards-compliant WebDAV/CardDAV/CalDAV server. Nothing stops anyone from running a server of their own, at which point you can do anything you want with it, including charging for its use.
Right now the initial sync does take an unreasonable amount of time; I definitely agree.
After the first sync you will never experience a sync operation anywhere near that long, but it is a bad first experience to have with the app. Very close to the top of my TODO list is "support bulk upload", which will cut the initial import time (and bandwidth) considerably.
Soon we should be able to upload entire address books and calendars in one POST request, working on it :)
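A rough sketch of what that could look like -- note the endpoint path, and the idea of a single concatenated text/vcard body, are my assumptions about a hypothetical bulk API, not anything Flock has published:

```python
# Hypothetical sketch of a bulk upload: instead of one PUT per contact,
# concatenate every vCard into a single body and POST it once.
# The endpoint path and media type below are assumptions, not a real API.

def build_bulk_vcard_body(vcards):
    """Join individual vCard strings into one upload body."""
    return "\r\n".join(v.strip() for v in vcards) + "\r\n"

alice = """BEGIN:VCARD
VERSION:3.0
FN:Alice Example
END:VCARD"""

bob = """BEGIN:VCARD
VERSION:3.0
FN:Bob Example
END:VCARD"""

body = build_bulk_vcard_body([alice, bob])

# One request instead of N round trips, something like:
#   POST /addressbooks/user/contacts/ HTTP/1.1
#   Content-Type: text/vcard
#
#   <body above>
print(body.count("BEGIN:VCARD"))  # 2
```

The win is mostly in round trips: N contacts go from N HTTP requests (each with its own headers and TLS overhead) down to one.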
not to be a bummer, but it doesn't seem like anything special was done with this special-purpose hardware. why go to the trouble of engineering and advertising this as a piece of security-enhancing hardware when it's really just "PrivOS"? also, any plans on open sourcing "PrivOS"?
did I miss something in the writeup? OSS modem firmware, OSS wifi chipset, anything hardware- or firmware-related?
You're missing the fact that this can be sold (at an outrageous markup) to large enterprises and government agencies because it looks secure/private.
Beyond that, it provides literally nothing that you can't install for free on any Android device. I could make you an equally "secure" or "private" device for $300 and an hour's time.
> Beyond that, it provides literally nothing that you can't install for free on any Android device. I could make you an equally "secure" or "private" device for $300 and an hour's time.
Yeah, this was my takeaway from the article. They even link to the Google Play store entries for the software that comes packaged on the phone. Missed opportunity, I think.
I don't think you missed anything. I don't see any reason to trust blackphone more than a properly configured Nexus.
The OS might have some neat UI for privacy stuff, but fundamentally if it's closed source and has a closed baseband (afaik, there's no phone with an open baseband), then there's no real security.
Is there no middle ground? Doesn't a device that changes your threat model from 'passive dragnet' to 'active compromise by a nation state' have some value?
Kind of-- some older code was (is?) copyright WS, newer code is OWS; in both cases everything is GPLv3, and all future code will continue to be OWS & GPLv3.
We've got a browser extension in development, with the possibility of using an email address instead of a phone # as ID-- no promises on timeline though.
There is no way that this attack method is profitable if the attacker is fronting the cost of manufacturing. This leads me to believe that the article is incorrect or fabricated, or that this is a seriously interesting attack on an iron manufacturer.
I'm inclined to agree with you; generally speaking, simplicity is security. Also, I believe that the "secure JS in-browser crypto is impossible" argument is entirely bunk in this context-- people need to stop reciting this compulsively and take the time to think each situation through.
Realize that the SecureDrop document submission client is a web application. The document submitter's browser will run whatever the SecureDrop Source Server provides it, barring the edge case of the submitter verifying the served page source against GitHub before allowing JS in NoScript.
The security of the document submitter is already prone to compromise by way of a malicious web app provided by a malicious Source Server or a MITM. Moving the project to something more JS-heavy on the client side would in no way worsen the threat model.
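For reference, the verification step mentioned above boils down to a digest comparison: before enabling JS for the page, check that the HTML the server actually sent matches the known-good copy in the project's repository. A minimal sketch -- the page contents here are made up for illustration:

```python
# Sketch of the manual "verify page source against GitHub" check:
# hash what the server served, hash the repo's copy, compare.
# The HTML strings below are invented examples, not real SecureDrop pages.
import hashlib

def sha256_hex(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

served_page = b"<html><script src='crypto.js'></script></html>"  # as delivered
repo_page   = b"<html><script src='crypto.js'></script></html>"  # from GitHub

if sha256_hex(served_page) == sha256_hex(repo_page):
    verdict = "match: safe to allow JS in NoScript"
else:
    verdict = "MISMATCH: server may be serving tampered code"
print(verdict)
```

Of course this only pins the top-level page; every script it loads would need the same treatment, which is exactly why it's an edge case rather than a practical defense for most submitters.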
> people need to stop reciting this compulsively and take the time to think each situation through.
I believe that thinking has already been done, and the reasoning published. If you can refute the well-publicized arguments against JS crypto, then of course that would be productive and appreciated. Given the discussions that have already taken place, I believe the burden of proof now rests with those who support JS crypto, not those who oppose it.
To make an analogy: We "compulsively" assert that the earth orbits the sun. But that's not a cargo cult. It's a conclusion which we've confidently accepted based on the weight of the evidence.
> Realize that the SecureDrop document submission client is a web application. The document submitter's browser will run whatever the SecureDrop Source Server provides it, barring the edge case of the submitter verifying the served page source against GitHub before allowing JS in NoScript.
If that's true, then SecureDrop might not be so secure. I'm not pointing to SecureDrop as a gold standard. There are very few cryptosystems I trust.
> The security of the document submitter is already prone to compromise by way of a malicious web app provided by a malicious Source Server or a MITM. Moving the project to something more JS-heavy on the client side would in no way worsen the threat model.
That might be true, but then again it might not. We don't know much about JS crypto. We don't know what attacks are possible. (We know about compromising the JS source, but that's only one threat model. There could be others that are unstudied.) Thus, it's quite possible that there are attacks which depend on the application using a specific browser feature, such as drag-and-drop. Is this likely? Not terribly so. But is it possible? Absolutely.
But I'm sort of nitpicking. Like you said, JS can be MITMed, so there's not much point in debating whether a given JS crypto app is secure or not. The best strategy is to just not use JS crypto.