Frencil's comments | Hacker News

> One gotcha is that you won't get any CSS this way, so all styling has to be done via SVG props.

There actually is a way to bake stylesheets directly into the SVG in a way that at least Inkscape can read, but it's tricky. We're doing this successfully in LocusZoom, a graphing library for genetic data built on top of d3.

The first step is to get the content of the stylesheet into a string. We do that here with a CORS request in case the stylesheet is loaded from a CDN, since JavaScript can't read the rules of cross-origin stylesheets through document.styleSheets:

https://github.com/statgen/locuszoom/blob/v0.4.0/assets/js/a...
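A rough sketch of that step using the modern fetch API (illustrative only; the linked v0.4.0 code predates fetch, and the function name here is made up):

```javascript
// Illustrative sketch: load a stylesheet as raw text with a CORS-capable
// request so its rules can later be embedded in the SVG. This is needed
// because document.styleSheets won't expose the rules of cross-origin
// (e.g. CDN-hosted) stylesheets.
function fetchCSSString(url) {
  return fetch(url).then(function (response) {
    if (!response.ok) {
      throw new Error("Could not load stylesheet: " + url);
    }
    return response.text(); // the stylesheet content as a plain string
  });
}
```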

Since the stylesheet doesn't change once the page is loaded, this only needs to happen once. From there, in our case, we have a dynamic/interactive SVG plot, so the download needs to bundle up the current state of the SVG on request. Here's the method that does that:

https://github.com/statgen/locuszoom/blob/v0.4.0/assets/js/a...

Beyond the no-doubt-familiar code for pulling the raw SVG markup into a string, we also insert a tag right after the opening <svg> tag that looks like this:

"<style type=\"text/css\"><![CDATA[ " + this.css_string + " ]]></style>"

Where this.css_string is our stylesheet content. We then base64 encode the string and stick it in the href of an <a> tag styled to look like a download button, as seen here:

https://github.com/statgen/locuszoom/blob/v0.4.0/assets/js/a...
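Put together, those two steps (injecting the style tag after the opening <svg> tag, then building a base64 data URI for the link's href) might look like this sketch; the helper names are made up, not LocusZoom's actual API:

```javascript
// Illustrative helpers, not LocusZoom's actual code.

// Insert a CDATA-wrapped <style> block right after the opening <svg ...>
// tag of serialized markup so viewers like Inkscape pick up the rules.
function embedCSS(svgMarkup, cssString) {
  var styleTag = '<style type="text/css"><![CDATA[ ' + cssString + ' ]]></style>';
  return svgMarkup.replace(/(<svg[^>]*>)/, '$1' + styleTag);
}

// Encode the final markup as a base64 data URI suitable for the href of
// an <a> tag styled as a download button. btoa() only handles Latin-1,
// so the markup is run through a common UTF-8 workaround first.
function toDataURI(svgMarkup) {
  return 'data:image/svg+xml;base64,' + btoa(unescape(encodeURIComponent(svgMarkup)));
}

// Browser usage (hypothetical element id):
// document.getElementById('download-svg').href = toDataURI(embedCSS(markup, css_string));
```

An <a> element whose href is that data URI (with the download attribute set) gives the purely client-side download button.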

The end result is a download SVG link/button that is 100% client-side with our chosen stylesheet embedded.

Now, I have noticed that some styles defined in the stylesheet are valid in the browser but fail in Inkscape and other SVG viewers. A prime example is setting a stroke or fill to "transparent": Inkscape doesn't interpret this properly, and you can end up with solid black, just about the exact opposite of transparent. With that in mind, I stick to defining explicit "fill-opacity" and "stroke-opacity" values in the stylesheet for SVG elements, and everything plays nicely.

A working demo of this can be seen here: http://statgen.github.io/locuszoom/plot_builder.html


Xively was one of the services we used as inspiration for Phant / data.sparkfun.com. Before they were Xively they were Pachube, and things were easier to work with and free. Then Xively gobbled up Pachube and things got all business-y. Thus Xively has been an example of how we didn't want to build it.


Glad to hear it! I've been looking for a viable Pachube alternative since Pachube got gobbled up.


Try ThingSpeak on GitHub... lots of active development and a growing community.


It makes me very happy to hear this. I've always been sad that the initial promise of Pachube being an open place to store and exchange data was subsumed by corporate overlords.

This is as close to the mindset of the original Pachube as I've seen in a long while.

Nice work!


Very nice and thanks! Writing a replacement for Xively/Cosm/Pachube was on my TODO list, but now I don't have to. I'll take this for a spin later, and hope to contribute if there is anything I have to offer.


This is a bit of a false dichotomy, as we do have occasional work from home, which varies between positions and departments. That perk is a separate concern from the dog perk. I know of some employees who do choose the former to opt out of the latter (working from home to be with their dog) because they don't want to bring their dog to work for whatever reason.


In all fairness, it used to be more of a waste of time (as mentioned in the post, it got progressively worse and was widely viewed as a waste of time) until the tribunal came along. Under that setup it's 30 minutes of five people's time once a month, which is pretty easy to swing.


This is perhaps one of the more succinct ways of summarizing our core reasons for the switch. There's a lot more to it than that, of course, but in a nutshell we saw the upcoming scaling of our data footprint and complexity and wanted to use (what we evaluated to be) the better tool for the job.


Inventory location is one of many, many problems our ERP system is tasked with. And we've looked at a lot of software packages over the years, both proprietary and open. In terms of open software, the options are actually pretty limited, and they have the potential to introduce more problems with integration and customization than they would purport to solve by bringing in new suites of features. What is less limited, and truly abundant, are examples of other companies in our position that went full-bore into an off-the-shelf solution, severely disrupted their business operations, and ended up aborting the transition.


Well, I'm biased since I'm paid to build those systems (we are OpenERP partners), but if it works for you, I won't argue ;)


If nothing else, it's probably time we give OpenERP another look just to get an idea of how problems are being handled there. Pointers to useful reading would be welcome, if you've got any.


The best introduction is probably Integrate Your Logistic Processes with OpenERP, a free ebook written by OpenERP S.A. themselves [1]. It's slightly outdated, since it was published in 2011 and version 7.0 has since been released, but the changes to the stock area were limited (except for the UI).

We have a demo instance for version 7.0 [2], but it's in Portuguese (though you can change the language). OpenERP S.A. offers a free trial.

[1] http://www.brain-tec.ch/ebusiness-de-2/openERP/logistic-proz...

[2] https://demo7.thinkopensolutions.com/?db=demo&user=demo&pass...


I presume you're recommending PostGIS because of the allusions to the inventory location problem. While better modelling of geo data would be nice, it's kind of a different concern. For us, inventory location isn't about where on Earth our inventory is, or even where it is geometrically relative to something else; it's about where it is in a hierarchy of named locations. For instance: we have 10 of widget X in the storefront stock room, row F, rack 17, shelf 3, bin 1. Or we have 22 of widget Y in the receiving room on the quarantine table. When modelling that many semantic nested locations, and all of the transfers of items between them, the primary issue is scalability and speed, not so much modelling geographic data.
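The kind of structure described above can be sketched as a parent-pointer hierarchy (illustrative only; this is not the actual ERP schema):

```javascript
// Illustrative sketch: each named location points at its parent, and an
// item's full location is recovered by walking the chain to the root.
var locations = {
  1: { name: "storefront stock room", parent: null },
  2: { name: "row F", parent: 1 },
  3: { name: "rack 17", parent: 2 },
  4: { name: "shelf 3", parent: 3 },
  5: { name: "bin 1", parent: 4 }
};

// Render the full path for a location id, e.g. for a stock report line.
function locationPath(id) {
  var parts = [];
  for (var node = locations[id]; node; node = locations[node.parent]) {
    parts.unshift(node.name);
  }
  return parts.join(" > ");
}
```

At ERP scale the same idea typically lives in a database table with a parent_id column (or a materialized path), which is where the scalability and speed concerns come in.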


Thanks for the reply. I see in another comment thread you agreed with the sentiment, "From my experience with PostgreSQL and MySQL, it is a bit like the difference between python and php, one is a great work of craftmanship that is reliable, predictible, coherent, enjoyable to work with and minimise the wtf/mn rate (which is the best quality measure in software), while the other is a bunch of hacks knit together to make it work asap and its wtf/mn is skyscraping."

Makes sense. I guess I've always assumed that while MySQL has gotchas, so too would database X. They come with the territory of databases, being such a large part of any app, yet separate from it and usually infinitely more complex than the apps that use them. To that end, I think it's why some people end up writing their own simpler data stores or caching middleware between databases/OLAP and front-ends. Such written-from-scratch solutions can be easier to debug when things are mission-critical.

From what I've seen with databases, the really complicated part is "setting them up correctly". If you start out with most defaults, something will eventually become a gotcha forcing a schema rewrite or a database transplant using an SQL dump. But that's just been my experience: with databases, expect to not get it 100% right the first time (unless it's not your first time with the technology, of course). And from that perspective, the technology used is less important, it will equally have "gotchas".

That all said, I too will be playing around more with Postgres to see how it compares to the InnoDB/XtraDB I have a love/hate relationship with. ;)


These are all very good questions, and they underscore the need to come to an interview prepared to ask no less.

As a manager who frequently interviews developers, nothing is more deflating than an applicant who, when asked if they have any questions, fumbles an awkward "no, not really," bringing the interview to an abrupt end.


Amazing how the insides of these chips are composed of traces and microcomponents that look similar to those of the PCBs housing them, just an order of magnitude smaller.


I'm booked for a tour in April and had no idea it was scheduled to be shut down. What a stroke of luck! Here's hoping we get a more in-depth look than we would have during normal operation.

