asutherland's comments | Hacker News

viz.js (https://github.com/mdaines/viz.js, demo at http://viz-js.com/) is Graphviz compiled to JS using Emscripten. graphviz-d3-renderer (https://github.com/mstefaniuk/graph-viz-d3-js, demo at http://graphviz.it/) uses viz.js to render to (x)dot format, then parses that so you can use d3 on top of it. Using those two, you should be able to leverage the layout smarts of Graphviz with the presentation smarts of d3.


Clarification for those who parsed this (incorrectly) like I did: localStorage does work in Firefox in Private Browsing Mode[1], it's IndexedDB and the DOM Cache API that throw. (Although Firefox has private browsing support for IndexedDB in the works. That effort is tracked on https://bugzilla.mozilla.org/show_bug.cgi?id=781982.)

1: But it's exclusively memory-backed and will be forgotten somewhere between 0 and 20 seconds after the last active page for the origin is gone. It is not persistent for the lifetime of the private browsing session. However, because Firefox has a back-forward cache ("bfcache") that keeps pages around (frozen) for a while after you navigate the window to another location, the storage may be kept alive for some time even with no active windows.


If you're willing to try Firefox Nightly, check out the containers experiment: https://wiki.mozilla.org/Security/Contextual_Identity_Projec...

While everything still happens within a single profile, sites in different containers get different storage (cookies, localStorage, IndexedDB, etc.) and cannot see each other.


Cool.

I've been using a combination of NoScript, Self-Destructing Cookies, uBlock, and a personal Vimperator script to wipe out all that nasty stuff when I close the window.

https://github.com/liloman/dotfiles/blob/master/vimperator/....

https://github.com/liloman/dotfiles/blob/master/Scripts/Scri...

By the way, Firefox is pretty nasty by default and you must do a lot of hard work to evade tracking, especially by Google:

https://github.com/liloman/dotfiles/blob/master/vimperator/....

Of course you must use custom fonts if you don't want to be tracked by Google on every single webpage you enter.

I think I will play with containers for Firefox when I try Firefox 50. Something like a new container for every tab; I reckon it must be pretty easy to set up with Vimperator (or whatever plugin you like).

Put your vimperator/penta/vimium/X to work for you. :)


That looks way more powerful than what Chrome is doing. I'll give it a try. Thanks for the suggestion.


I think this is great, but is there going to be a simple "sign in" feature similar to Chrome? People just love having their bookmarks / toolbars / etc follow them wherever they go.


You can sign in to a Firefox Account to sync your bookmarks, passwords, history, etc.:

https://www.mozilla.org/firefox/accounts/


Two quick, hopefully informative nits:

It's better to consult the living standard of the editor's draft than the TR ("TR is for the TRash", as they say). The security section has been fleshed out a lot, for example: https://w3c.github.io/ServiceWorker/#security-considerations

In Firefox, about:serviceworkers is in the process of being replaced by about:debugging. The bug is https://bugzilla.mozilla.org/show_bug.cgi?id=1220747 if you want to follow along, but start re-training your muscle memory now! :)


So Mozilla is making it harder for end users to find and kill service workers?

The main difference between the TR and the suggested document is a weak "privacy" section, suggesting that the data stored by service workers locally should be flushed on user request. Mozilla currently does not allow service workers to run in incognito windows, which sort of complies with that.


> So Mozilla is making it harder for end users to find and kill service workers?

about:serviceworkers was added as a hacky debugging/introspection tool with the expectation that the developer tools team would integrate the functionality into more full-featured, supported tooling. about:debugging is that tooling.

For example, about:debugging's "Workers" tab (on Nightly):

- Displays all registered Service Workers as well as whether they are stopped, enables starting them if desired, and attaching a debugger to them.

- Also displays any open (dedicated DOM) Worker instances as well as SharedWorker instances and provides debugging support.

- Listens for changes and updates the UI in real time, covering whether the ServiceWorker is stopped/running/installing/etc. (about:serviceworkers renders a single snapshot).

I'm parsing your underlying concern to be that ServiceWorkers are a powerful new capability for websites and that you want to make sure they're not being brushed under the rug as a "don't worry about it, nothing could possi-blye go wrong" (https://frinkiac.com/caption/S06E04/442491) situation.

It's indeed the case that ServiceWorkers are powerful and they open up new avenues of potential misbehavior. While the fundamental event-driven design of SWs means they're only spawned by explicit user action (browsing the site providing the service worker) or user-approved site actions (push notifications, which require explicit user approval), there's still the potential for bad actors. Implementers share these concerns, which is where efforts like https://github.com/beverloo/budget-api and (potential? not sure if this is actually implemented?) heuristics like Chrome generating a (desktop) notification if a ServiceWorker gets woken up by a push and doesn't generate its own notification come from.

And I believe it's in these goals, having browsers proactively inhibit bad behavior and making users aware of the behavior of sites they have implicitly or explicitly granted permissions to, that the best defense of users is found. Which is to say, there will always be UI that helps users find and kill service workers, but the design goals are likely to center on a) debugging, b) performance ("what on earth is making my browser/computer run so slow?!"), or c) storage ("where did all my disk space go?") rather than manual gardening of service workers as if they were weeds that spring up and users are responsible for cleaning up after.


The SQLite numbers are not going to be realistic here given that no SQLite transactions are used and the write-ahead log is probably not enabled. Assuming defaults, that means "PRAGMA synchronous=FULL" and "PRAGMA journal_mode=DELETE" (so no write-ahead log). This means every mutating statement is potentially going to result in multiple fsync() operations. See https://www.sqlite.org/pragma.html for context.
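To make the difference concrete, here's a rough sketch using Python's stdlib sqlite3 (the table name, key format, and row count are made up for illustration): it switches to WAL with relaxed syncing and batches all the inserts into one transaction, so there's one commit instead of a thousand.

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "bench.db")
conn = sqlite3.connect(path, isolation_level=None)  # manage transactions explicitly

# Defaults are journal_mode=DELETE and synchronous=FULL, so every
# standalone INSERT is its own transaction and can cost multiple fsync()s.
# WAL plus synchronous=NORMAL moves the fsync()s to checkpoints instead.
conn.execute("PRAGMA journal_mode=WAL")
conn.execute("PRAGMA synchronous=NORMAL")
conn.execute("CREATE TABLE kv (k TEXT PRIMARY KEY, v BLOB)")

# Batch the writes: one commit instead of 1000 per-statement commits.
conn.execute("BEGIN")
for i in range(1000):
    conn.execute("INSERT INTO kv VALUES (?, ?)", (f"key{i}", b"value"))
conn.execute("COMMIT")
```

The WAL commit appends to the log instead of churning a rollback-journal file per statement, which is typically where the order-of-magnitude throughput difference comes from.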


This is one example of the pitfalls of making decisions "based on data", as is now the fad. Turns out that your data probably sucks or doesn't measure what you think it does. It's better to not know than to know the wrong thing. A posteriori data analysis can be great, but nothing can beat a priori understanding. Ideally, you should be using your data to get to that understanding, not replace it.


Great point. The phrase "garbage in, garbage out" seems relevant.

I always like reading comments on these types of analyses, since they often use the author's data to help me understand what might have been overlooked (which I often would have overlooked also!).


I didn't use any of the sophisticated stuff at all. As the article states, I did it the way a typical programmer might if they wanted some basic key-value storage.


Please put a note in your article explaining this. SQLite is literally 10x faster using WAL vs. the standard journal mode. And while you're at it, please consider adding indexes to the SQL Server and SQLite tables, so the reads are faster as well.

I would love to see the article updated using SQLite's PRAGMA journal_mode = WAL and PRAGMA synchronous = NORMAL. Then it would be a much more fair comparison.

Please don't give programmers a license to be lazy and not learn about their tools!!! If this article is trying to inform, or give benchmarks, it should not come to invalid conclusions without explaining the tradeoffs.


Yep, I have tried it with WAL and it still doesn't beat LevelDB. Thanks for the "lazy" suggestion; I added a disclaimer to the top of the page right away!


But what is the point of your page? If it's to make recommendations then go a step further and include changing a setting or two for better recommendations.

Considering your background and endorsement of NoSQL databases, it comes off as more of a puff piece if you don't actually try to make the other DBs run fast. It especially seems that way when you tout it as "blazing fast compared to any other storage solutions" and then proceed to test it in one narrow setup.

That noted, you said that you used the default settings, and I'd be curious if there are settings for LevelDB that could be changed to make it faster.

I'd be interested if you did more tests exploring more than just one particular setup.

Edit: I realized that this may have come off a little negative. Overall I think it's nice, but seems less applicable than you make it seem.


So, this is really more a statement of how the typical programmer doesn't RTFM than a comparison of the capabilities of the technologies.


SQLite is NOT a key/value store; it's a full-featured SQL database, and hence the default parameters are set to serve that purpose.

Comparing it to LevelDB is like comparing Redis to Postgres with default settings.

You can use Postgres as Redis: use a key/value storage pattern and tweak the settings to get very nice performance out of it. But it's not the default behavior.


Here's the relevant Gecko (Firefox) code, which tries to use the max-age and Expires headers first and then sets the lifetime to forever if the response code was 300, 301, 308, or 410. Note that I'm going by a somewhat shallow code reading after a recent investigation. There's a lot of stuff going on in Gecko/Necko that could potentially apply some failsafe time limit, maybe in the cache implementation, so I wouldn't take this as 100% for sure. Breaking on this function with gdb and tracing the flow is probably a better idea if you really want to know.

https://dxr.mozilla.org/mozilla-central/source/netwerk/proto...

The 301, 308 stuff comes from IsPermanentRedirect which is here: https://dxr.mozilla.org/mozilla-central/source/netwerk/proto...
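For illustration, the precedence described above can be sketched in Python. This is a simplified model, not Gecko's actual code; `redirect_cache_lifetime`, its header-dict input, and its None-means-forever convention are all made up for this sketch.

```python
import email.utils
import time

# Response codes Gecko treats as permanently cacheable (per the
# IsPermanentRedirect check plus 300/410 described above).
PERMANENT_CODES = {300, 301, 308, 410}

def redirect_cache_lifetime(status, headers, now=None):
    """Cache lifetime in seconds; None means 'cached forever'.

    Precedence: Cache-Control max-age, then Expires, then 'forever'
    for permanent response codes, otherwise not cached.
    """
    now = time.time() if now is None else now
    # 1. Cache-Control: max-age=N wins if present.
    for part in headers.get("cache-control", "").split(","):
        name, _, value = part.strip().partition("=")
        if name.lower() == "max-age" and value.isdigit():
            return int(value)
    # 2. Fall back to the Expires header.
    expires = headers.get("expires")
    if expires:
        try:
            dt = email.utils.parsedate_to_datetime(expires)
        except (TypeError, ValueError):
            dt = None
        if dt is not None:
            return max(0, int(dt.timestamp() - now))
    # 3. Permanent response codes are cached forever.
    if status in PERMANENT_CODES:
        return None
    return 0
```

So a 301 with `max-age=3600` expires in an hour, but a bare 301 with no caching headers sticks around indefinitely, which is the surprising behavior under discussion.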


http://kangax.github.io/compat-table/es6/ and http://kangax.github.io/compat-table/es7/ are particularly good for ES since they break things out into the nitty-gritty features, providing the code samples that are tested.


SQLite addressed writers blocking readers in 3.7.0 with its Write-Ahead Log. See https://www.sqlite.org/wal.html for more details, but point 2 at the top is "WAL provides more concurrency as readers do not block writers and a writer does not block readers. Reading and writing can proceed concurrently."

(Writes will still block other writes, of course.)
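A small sketch of that behavior using Python's stdlib sqlite3 against a throwaway temp database (the table name and values are made up): under WAL, a reader is not blocked by an open write transaction and sees a consistent snapshot that excludes the uncommitted row.

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "wal-demo.db")

writer = sqlite3.connect(path, isolation_level=None)
writer.execute("PRAGMA journal_mode=WAL")  # WAL persists in the database file
writer.execute("CREATE TABLE t (x INTEGER)")
writer.execute("INSERT INTO t VALUES (1)")

reader = sqlite3.connect(path, isolation_level=None)

# Open a write transaction and leave it uncommitted.
writer.execute("BEGIN IMMEDIATE")
writer.execute("INSERT INTO t VALUES (2)")

# The reader is not blocked and sees only committed data.
rows_during_write = reader.execute("SELECT COUNT(*) FROM t").fetchone()[0]  # 1

writer.execute("COMMIT")
rows_after_commit = reader.execute("SELECT COUNT(*) FROM t").fetchone()[0]  # 2
```

With the default rollback journal, the reader's SELECT could instead fail with "database is locked" while the writer commits.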


How would this be a witch hunt? The third definition from https://en.wiktionary.org/wiki/witch-hunt is "A public smear-campaign against an individual" which seems to better describe the actions of https://www.reddit.com/user/aoiyama. One could try to be pedantic about the second definition, "An attempt to find and publicly punish a group of people perceived as a threat, usually on ideological or political grounds.", but if you read the posts at the reddit link and the Mozilla Community Participation Guidelines at https://www.mozilla.org/en-US/about/governance/policies/part..., even if you don't agree with the Mozilla ideology, the aoiyama one is clearly incompatible with the Mozilla one.

If the only course of action is to say "Oh no! They used a throwaway account, so there's nothing we can do about the toxic environment posts like this create for members of our community!" that doesn't bode well for having a non-toxic environment. And since I do need to disclaim that I am a MoCo employee (but 100% speaking for myself alone), I should also mention that you will find in that list of posts a link to https://www.reddit.com/r/MensRights/comments/35u1yp/an_email... where the contents of a Mozilla Corporation internal-mailing list post were reposted. So even if one wanted to write this off as just a random internet troll, the situation is that there is either a MoCo employee intentionally harassing another MoCo employee and Mozillian or a MoCo employee passing information to someone doing the same thing.

I do want to be clear that I am not attributing these hypotheticals to your one-sentence reply. But I also want to be clear that we can't write off this type of toxic behaviour as acceptable because of the risk of being perceived as engaging in witch hunts/star chambers/other hyperbolic misapplications of known-bad-terms.


"If the only course of action is to say "Oh no! They used a throwaway account, so there's nothing we can do about the toxic environment posts like this create for members of our community!" that doesn't bode well for having a non-toxic environment."

How on Earth was that comment "creating a toxic environment"? It was added a week and a half after the original post went up. By that time nobody but the most obsessive is still reading the same Reddit post.

Calling it "toxic" and having the CEO himself declaim from the pulpit about how this random throwaway account on Reddit with all of six posts to its name over the span of three months must be destroyed seems like using a nuclear weapon to kill a fly.


The flies add up. There's not a harassment noise floor beneath which we should ask the harassed to just shrug off harassment by their coworkers.


There's not a harassment noise floor beneath which we should ask the harassed to just shrug off harassment by their coworkers.

I'm not gonna argue what side this is on, just that there is a floor. They're called "haters." Conventional wisdom is to shake it off. (Just think about the time spent getting down and out. You could have been getting down to sick beats.)

Different people will have different opinions on whether this is above the floor and whether actions should be taken in response. People are going to disagree about when actions cross boundaries, so simply labeling actions as harassment doesn't cut it to make everybody agree with your view that it has crossed boundaries.


Upvoting partly for partial agreement, but mostly for the Taylor Swift quoting.


This then assumes that authentication is handled by some combination of device proximity, physically pushing a "grant admin access" button on the device, or falling back to password management (possibly on a sticker on the bottom of the device).

There is something to be said for tying the device into an existing strong authentication infrastructure.

