henryprecheur's comments | Hacker News

That policy is a bit risky: it might diminish the use of other things that significantly reduce the spread of HIV, like condoms. It's still unknown how effective the gel is (maybe 90%, maybe 5%); condoms are about 90% effective AFAIK[1].

[1] http://www.advocatesforyouth.org/publications/416


I doubt that a contraceptive with 5% effectiveness would make it this far in trials. Even "pulling out" is more effective than 5%.


Also, contraceptive success rates are usually annualized: with correct condom use, pregnancy occurs in less than 3 percent of couples per year. I assume there is a standardized level of sexual activity behind that figure, too.


I think he was talking about HIV.


If you just look at the number of hours, it's "just" a full-time job for a year:

8 hours a day * 200 working days = 1600 hours

With 30 people working just a week (5 days), we're already at 1200 hours.

Maybe they got their hours wrong? Or it's not that impressive (I'm a grumpy old man).
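
A quick sanity check of the arithmetic in Python, using the numbers above:

    hours_per_day = 8
    working_days_per_year = 200
    full_time_year = hours_per_day * working_days_per_year  # 1600 hours

    people = 30
    days = 5  # a single work week
    one_week_all_hands = people * days * hours_per_day       # 1200 hours

    print(full_time_year, one_week_all_hands)  # 1600 1200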


They didn't say the 1300 figure was person-hours, they just said hours. So each of those hours probably represents many more person-hours.


Excellent point. I bet we (many of us professional software developers) are so used to the concept of man-hours that we may be grossly misinterpreting the 1300 hours figure. It could well be up to 1300 hours times 30 people.

Spreading 1300 man-hours across 30 people over 22 months seems a bit thin as well, if you ask me.


Sometimes shitwork can be a very good way to find new possibilities. A lot of people do shitwork that's immensely useful: editors on Wikipedia, moderators on reddit and forums, people who enter all the data into IMDb. I don't see how those people could be replaced by algorithms with what we know now.

Sometimes shitwork needs to be done because it can't be simplified away. I'm doubtful that Facebook's auto-group feature would work for me. Maybe doing the shitwork myself on Google+ is what works for me, because I value my freedom to control my information online.


> Who pays what in taxes is really a diversion.

The "who" is actually pretty important.

Tax the working man more, and you'll discourage work. Tax the rich, and they might move to another country.

There are many ways to raise money for the state, and none is really "fair". There will always be somebody who gets screwed in some way.


I don't know if I like it or not. If you know SQL well and want to switch to a NoSQL database, what's easier to learn: the "proprietary" API of the database (like Redis, or MongoDB) or the limitations of UnQL?

I can't speak for other NoSQL databases, but UnQL doesn't seem to expose most of Redis' features, like lists & sets.
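
For example, a minimal sketch with the redis-py client (assuming a Redis server running on localhost) of the kind of list & set operations I mean:

    import redis  # pip install redis; assumes a server on localhost:6379

    r = redis.Redis()

    # Lists: ordered, push/pop at either end; no direct SQL analogue.
    r.rpush('queue', 'job1', 'job2')
    print(r.lrange('queue', 0, -1))      # [b'job1', b'job2']

    # Sets: membership tests and set algebra done server-side.
    r.sadd('tags', 'python', 'redis')
    print(r.sismember('tags', 'redis'))  # True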


"If people invested as much in learning to tune MySQL or Postgres as they did in working around MongoDB flaws they wouldn't need MongoDB." ~Benjamin Black


This is false: the writer of THE book on MySQL tuning (Jeremy Zawodny) is also the guy who has been converting Craigslist from MySQL to MongoDB.

But let me interrupt your propaganda. We wouldn't want to address the reality that not all data and working sets fit well in a relational database.

Example: http://blog.zawodny.com/2010/05/22/mongodb-early-impressions...


Calm down a bit, man. I've got a MongoDB mug sitting on my desk. I think you'd agree that even if the data structure fits into a key-value model more than a relational one, that still doesn't mean you need NoSQL, and if you take a common definition of "big data" to mean "your data needs exceed the capabilities of a single machine", then a lot of people don't need all that scalability. (And if it did, you'd use Hadoop anyway. =P ) (There are also CAP theorem considerations for added fun. http://en.wikipedia.org/wiki/CAP_theorem )

There was a presentation up here a few months ago on how the guys at http://wordsquared.com/ used MongoDB; they basically made the choice because they knew it already, instead of using Postgres with its great geo libraries. And that's fine. What's stupid is when people who know one or the other pretty well spend a lot of time learning the other for a use case where it's most likely not necessary anyway, or which their current choice could handle with tweaks.

Of course, once public CS starts moving forward into innovative big analytics rather than just managing big data storage (such as the theta-join paper I linked elsewhere on this page), things may start shifting in favor of one of the NoSQL systems and the above quote would be equally suitable when comparing the Hadoop ecosystem or Mongo with some fancy new relational DB.


I wish people would stop linking CAP theorem, as if it proves something about one database or the other.

It doesn't.

It expresses some useful things about trade-offs, but those aren't necessarily binary properties, and it doesn't say anything about the underlying data structures or features of a database.

> then a lot of people don't need all that scalability. (And if it did, you'd use Hadoop anyway. =P

Maybe, not necessarily.


what's easier to learn: the "proprietary" API of the database (like Redis, or MongoDB) or the limitations of UnQL?

That's what bothers me about this, too.

While I can understand the goal of making NoSQL databases more accessible to people who already know SQL, as well as the need to unify the commands across all the different flavors of NoSQL databases, there's something fundamentally flawed about accessing unstructured data via logic intended for structured data.


The data is not necessarily unstructured, it just doesn't have a strictly enforced schema. So I'm very intrigued by this if they're adding it as a layer on top of CouchDB views. If they are, you could use your views to selectively filter on documents with a known structure, and then safely operate across a subset of your docs with UnQL. We'll see where they go with this, though.
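
A rough sketch of that pattern over CouchDB's HTTP API (the database, design doc, and view names here are hypothetical):

    import requests  # 'mydb', 'app', and 'by_type' are made-up names

    # The view's map function (JavaScript, stored in the design doc)
    # emits only docs with a known structure, e.g.:
    #   if (doc.type == 'user') emit(doc._id, null);
    resp = requests.get(
        'http://localhost:5984/mydb/_design/app/_view/by_type',
        params={'include_docs': 'true'},
    )
    docs = [row['doc'] for row in resp.json()['rows']]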


To clarify for the downvoters, I'm talking about this in the context of CouchDB, which I believe was a fair assumption to make seeing as Damien Katz is one of the principal people involved and it was announced today at CouchConf.

In CouchDB, running a filter on top of view results is something you can only do in list functions or client side, so I am very curious to see how they incorporate this into CouchDB.


The data is not unstructured, though. It is in JSON format, which can be taken advantage of.


I think a good programmer should aim to know both SQL & NoSQL.

The backend should be chosen because of the data characteristics, not because of someone's experience.


Keep Your Crises Small, http://www.youtube.com/watch?v=k2h2lvhzMDc

It's a lecture from Ed Catmull. I highly recommend it, many gems from a brilliant developer turned brilliant manager.


Husky & HD (another StarCraft caster) both have a healthy revenue stream from their YouTube channels. There are at least 12 other casters/players in North America/Europe making a living casting StarCraft II games.

Husky wasn't the first to cast. Guys like Day9, Artosis, & Tasteless cast too, and are quite popular. But HD & Husky were the first to focus mostly on "virtual" tournaments. The other successful casters focused more on "real life" tournaments. Artosis & Tasteless even went to Korea for a year to commentate StarCraft II games on GomTV. Day9 regularly travels to big tournaments.

And yet, none of them make as much money as Husky & HD (supposedly, I don't have hard numbers.)

It shows the power of the web. The old media are here to stay, but most of the creativity & growth today comes from the web.


Husky and HD appeal to the more casual audience. Their casts are more for entertainment, and they don't do too much in-depth analysis. Artosis, Day9 and Tasteless among others are pretty friendly for the average viewer, but they are definitely more knowledgeable and insightful.

Just a slight correction. Artosis and Tasteless are actually still living in Korea and casting fulltime.


Just a slight correction. Artosis and Tasteless are actually still living in Korea and casting fulltime.

Aye; this past weekend at MLG they even identified themselves as "from South Korea", not America. They've been there for a few years and haven't announced any plans to move.


Anyone wanting to see their stuff can go to http://www.gomtv.net/

You'll need to pay $10 to watch all matches from a 'season', but many are free.


I'd have to say that Day9's "Funday Monday" casts are among the most entertaining videos I've ever seen. The guy is absolutely hilarious and remarkably nerdy even for StarCraft casters, qualities which don't show as much in his more analytic casts.


It's in fact entertaining enough to appeal to non-StarCraft 2 players.


It really is like watching a very fast-paced sport match, except without all the constant in-your-face product branding. Like I remember college basketball used to be, except more cerebral.

I'd not played SC or SC2 much but I bought a (replacement - long story) copy last night.


It’s pretty cool that you can be so successful with something that’s utterly inaccessible to me and pretty much everyone I know.


AFAIK, they still both have day jobs, so I'm not sure they're making enough to be casting full time?


I don't like SPDY. It's trying to solve a transport problem at the application level. Plus it seems to be quite complex.

I'd love to see Google promote a transport protocol like SCTP[1], and do HTTP over SCTP instead. If Google pushed SCTP a little bit, we might see it pop up on Linux and Windows within a few years.

[1]:http://en.wikipedia.org/wiki/Stream_Control_Transmission_Pro...


"Q: What about SCTP?

A: SCTP is an interesting potential alternate transport, which offers multiple streams over a single connection. However, again, it requires changing the transport stack, which will make it very difficult to deploy across existing home routers. Also, SCTP alone isn't the silver bullet; application-layer changes still need to be made to efficiently use the channel between the server and client."

http://www.chromium.org/spdy/spdy-whitepaper


I agree that SPDY would be a "quicker" solution. Just modify the browsers and wait a year or two, and 30%+ of the people browsing the web will have it.

SCTP has the nice side effect of improving things like streaming and games.

As for application-layer changes, I don't think they would be too difficult to do, kind of like IPv6 (I don't have anything to back this up, it's just a hunch).

SCTP deployment will take much longer than SPDY's, but SCTP seems to be the "right thing to do". Not only for the web, but for other things that use the network. The Internet is not only http://.

UPDATE: I just realized that saying that the transition to SCTP will be like IPv6 isn't necessarily a good point for SCTP :-D ... I guess I'm a purist and not a pragmatist.


Same thoughts.

Present-day GNU/Linux should work with IPPROTO_SCTP; it's only Windows that lacks the implementation out of the box.
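
A minimal sketch of what that looks like from Python on Linux (assumes the kernel's SCTP support is available; the server address is hypothetical, since almost nothing speaks HTTP over SCTP today):

    import socket

    # One-to-one style SCTP socket; CPython exposes IPPROTO_SCTP on Linux.
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM, socket.IPPROTO_SCTP)
    s.connect(('198.51.100.1', 80))  # hypothetical SCTP-speaking HTTP server
    s.sendall(b'GET / HTTP/1.1\r\nHost: example.com\r\n\r\n')
    print(s.recv(4096))
    s.close()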


Correct me if I'm wrong, but as far as I'm aware, GNU has nothing to do with SCTP. Linux supports SCTP.

My non-GNU Linux system talks SCTP just fine.


SCTP addresses some of the same needs (primarily the multiplexing), but the major hurdle there is the fact that we're ditching TCP. As far as an upgrade path goes: it is much harder to get existing servers to support SCTP (think about Apache, Nginx, etc.) than to bolt on support for a different application-level protocol (replace/augment HTTP with SPDY).


I disagree. It's much, much easier to support SCTP (since it's already provided by the operating system) than it is to support SPDY with all the server pushes, etc.

But to be honest, I don't know what the source of such a comparison was... It's apples to oranges.


SCTP is not available on Windows by default, and you need administrative privileges to deploy it. I'm not trying to start an OS war debate, but this is a very practical problem.

Mobile, oddly enough, may be the best route to bring both IPv6 and SCTP to life.


Good point... But we were talking about servers ;)


Wouldn't SCTP need to be installed on both the client and the server?


Yes (for adoption) and no (for having server support).

But those are two very different problems ;)


Heh - I didn't think a server-side SCTP implementation was very interesting if you don't have clients to use it.


The source of the comparison is that the main thrust of both SCTP and SPDY is multiplexing multiple independent data streams on a single connection.


Is it, really? For me multiplexing is just a nice addition to SPDY.

Personally, I consider "server pushes" the main feature and "full encryption and compression" nice improvements.


Well, the primary reason for the existence of SPDY is encryption everywhere and speed, something that doesn't really exist with current HTTP/HTTPS. HTTPS is still slower than HTTP, and HTTP is slower than SPDY.

A lot of the speed increase comes from the use of multiplexing, so without it, SPDY wouldn't be able to achieve most of its goals.


AFAIK, nginx does not have support for SPDY


SCTP is a good start for sure, and someday may make sense. The problem is a deployment one: it can't pass through NAT, making it off limits to most users today.

As for this being solvable only at the transport layer, that is not true. I assume you're suggesting that streams can only be tackled there, but SPDY's compression is clearly an app-level endeavor.
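
A minimal sketch of the app-level point: header compression needs nothing from the transport, just zlib at both ends. (SPDY also seeds zlib with a predefined dictionary, omitted here.)

    import zlib

    headers = b'GET / HTTP/1.1\r\nHost: example.com\r\nAccept: text/html\r\n'

    compressed = zlib.compress(headers)
    assert zlib.decompress(compressed) == headers
    print(len(headers), '->', len(compressed))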


Indeed, just to make it clear: there's no more C++ code in OpenBSD's base, but g++, the C++ compiler, is still present.

I don't think that g++ will be removed. Too many things are written in C++, and as long as GCC is present in OpenBSD, there's no reason to remove g++.


And what will they do when GCC moves over to C++?


I believe they are working on pcc (for various reasons).


Unless that version of GCC is released under GPL 2, nothing.


The main reasons for replacing groff were:

1. Groff is buggy

2. Groff is slow to compile & execute

3. Groff is not BSD licensed

mandoc has none of these problems. If you want a "generic" roff system, groff is available as a package.

According to http://mdocml.bsd.lv/

> groff amounts to over 5 MB of source code, most of which is C++ and all of which is GPL. It runs slowly, produces uncertain output, and varies in operation from system to system. mdocml strives to fix this (respectively small, C, ISC-licensed, fast and regular).


Those are reasons why you wouldn't use groff; they still don't explain why, if you're making the effort, you're not building a proper roff replacement instead of some subset bastard program, and then hacking stuff like tbl into the main executable. (Never mind that there's Plan 9 roff or the Heirloom version.)

I get it, nobody really seems to use non-man roff anymore. Still, subsetting and reinventing the wheel (partially) seems an odd solution for this.


Resources, need, and code audits are the reasons to build a simple tool that does the job you want.


Why is reducing the scope of the problem in order to simplify the solution so odd?

