That policy is a bit risky: it might diminish the use of other measures that significantly reduce the spread of HIV, like condoms. It's still unknown how effective the gel is: maybe 90%, maybe 5%. Condoms are about 90% effective, AFAIK[1].
Also, contraceptive effectiveness is usually annualized: with correct condom use, pregnancy occurs in less than 3 percent of couples per year. I assume there's a standardized level of sexual activity behind that figure, too.
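For concreteness, here's a back-of-the-envelope sketch of how a per-act failure probability annualizes, assuming (purely hypothetically) 100 acts per year and independence between acts. The numbers are my own illustration, not from any source:

```python
# Annualizing a per-act failure probability, assuming independence
# between acts and a hypothetical 100 acts per year.
def annualized(per_act_failure, acts_per_year=100):
    # Probability of at least one failure over the year.
    return 1 - (1 - per_act_failure) ** acts_per_year

# A per-act failure rate of 0.03% annualizes to roughly 3% per year:
print(round(annualized(0.0003), 3))  # 0.03
```

This is why the standardized activity level matters: double the assumed frequency and the annual figure nearly doubles too.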
Excellent point. I bet we (many of us professional software developers) are so used to the concept of man-hours that we may be grossly misinterpreting the 1300 hours figure. It could well be up to 1300 hours times 30 people.
Spreading 1300 man-hours across 30 people over 22 months seems a bit thin as well, if you ask me.
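The arithmetic makes the ambiguity obvious; this is just my own sanity check of the two readings of the figure:

```python
# If 1300 is the total man-hours split across 30 people over 22 months,
# each person averages about two hours a month. If it's 1300 hours
# *per person*, the total is ~39,000 man-hours -- a 30x difference.
total_hours, people, months = 1300, 30, 22

per_person_per_month = total_hours / people / months
print(round(per_person_per_month, 1))  # 2.0

print(total_hours * people)  # 39000 man-hours if "1300 hours" is per person
```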
Sometimes shitwork can be a very good way to find new possibilities. A lot of people do shitwork that's immensely useful: editors on Wikipedia, moderators on reddit and forums, the people who enter all the data into IMDb. I don't see how those people could be replaced by algorithms with what we know now.
Sometimes shitwork needs to be done because you can't simplify it away. I'm doubtful that Facebook's auto-group feature would work for me. Maybe doing the shitwork myself on Google+ is what works for me, because I value the freedom to control my information online.
I don't know if I like it or not. If you know SQL well and want to switch to a NoSQL database, what's easier to learn? The "proprietary" API of the database (like Redis, or MongoDB) or the limitations of UnQL?
I can't speak for other NoSQL databases, but UnQL doesn't seem to expose most of Redis's features, like lists & sets.
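To illustrate what I mean, here's a rough Python sketch (not real Redis code; the comments name the actual Redis commands these mimic). Lists and sets are first-class data structures in Redis, and it's not obvious how a SQL-ish query layer would express operations like set intersection over them:

```python
# Rough sketch of Redis-style list and set operations using plain
# Python structures; comments name the real Redis commands mimicked.
store = {}

def lpush(key, val):
    # Redis LPUSH: prepend a value to the list at key.
    store.setdefault(key, []).insert(0, val)

def sadd(key, val):
    # Redis SADD: add a member to the set at key.
    store.setdefault(key, set()).add(val)

def sinter(k1, k2):
    # Redis SINTER: intersection of two sets.
    return store[k1] & store[k2]

lpush("recent", "event-2")
lpush("recent", "event-1")
sadd("tags:a", "x"); sadd("tags:a", "y")
sadd("tags:b", "y"); sadd("tags:b", "z")

print(store["recent"])             # ['event-1', 'event-2']
print(sinter("tags:a", "tags:b"))  # {'y'}
```

In SQL you'd model each of these as a table plus queries; Redis just gives you the structure and its operations directly, which is exactly what a generic query language tends to paper over.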
"If people invested as much in learning to tune MySQL or Postgres as they did in working around MongoDB flaws they wouldn't need MongoDB." ~Benjamin Black
Calm down a bit, man. I've got a MongoDB mug sitting on my desk. I think you'd agree that even if the data structure fits a key-value model better than a relational one, that still doesn't mean you need NoSQL. And if you take a common definition of "big data" to mean "your data needs exceed the capabilities of a single machine", then a lot of people don't need all that scalability. (And if they did, they'd use Hadoop anyway. =P ) (There's also CAP theorem considerations for added fun. http://en.wikipedia.org/wiki/CAP_theorem )
There was a presentation on here a few months ago on how the guys at http://wordsquared.com/ used MongoDB; they basically made the choice because they already knew it, instead of using Postgres with its great geo libraries. And that's fine. What's stupid is when people who know one or the other pretty well spend a lot of time learning the other for a use case that's most likely not really necessary anyway, or that their current choice could handle with tweaks.
Of course, once public CS starts moving forward into innovative big analytics rather than just managing big data storage (such as the theta-join paper I linked elsewhere on this page), things may start shifting in favor of one of the NoSQL systems and the above quote would be equally suitable when comparing the Hadoop ecosystem or Mongo with some fancy new relational DB.
I wish people would stop linking CAP theorem, as if it proves something about one database or the other.
It doesn't.
It expresses some useful things about trade-offs but they aren't necessarily binary properties and it doesn't say anything about the underlying data structures or features of a database.
> then a lot of people don't need all that scalability. (And if it did, you'd use Hadoop anyway. =P
> what's easier to learn? The "proprietary" API of the database (like Redis, or MongoDB) or the limitations of UnQL?
That's what bothers me about this, too.
While I can understand the goal of making NoSQL databases more accessible to people who already know SQL, as well as the need to unify the commands across all the different flavors of NoSQL databases, there's something fundamentally flawed about accessing unstructured data via logic intended for structured data.
The data is not necessarily unstructured, it just doesn't have a strictly enforced schema. So I'm very intrigued by this if they're adding it as a layer on top of CouchDB views. If they are, you could use your views to selectively filter on documents with a known structure, and then safely operate across that subset of your docs with UnQL. We'll see where they go with this, though.
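For example, a CouchDB-style map function can filter down to docs with a known shape. This is just a sketch: `emit()` is normally provided by CouchDB's view server, stubbed here so it runs standalone, and the doc fields are made up:

```javascript
// Sketch of a CouchDB-style map function that emits only docs with a
// known structure ("type": "user"), giving a uniform view over a
// schema-less database. emit() is stubbed to collect rows standalone.
const rows = [];
function emit(key, value) { rows.push({ key, value }); }

function map(doc) {
  if (doc.type === "user" && doc.name) {
    emit(doc.name, { created: doc.created });
  }
}

const docs = [
  { _id: "1", type: "user", name: "alice", created: 2011 },
  { _id: "2", type: "post", text: "hello" },  // different shape: skipped
  { _id: "3", type: "user", name: "bob", created: 2010 },
];
docs.forEach(map);

console.log(rows.map(r => r.key));  // [ 'alice', 'bob' ]
```

A query layer running over the view's output would then only ever see rows with the fields the map function guaranteed.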
To clarify for the downvoters, I'm talking about this in the context of CouchDB, which I believe was a fair assumption to make, seeing as Damien Katz is one of the principal people involved and it was announced today at CouchConf.
In CouchDB, running a filter on top of view results is something you can only do in list functions or client side, so I am very curious to see how they incorporate this into CouchDB.
Husky & HD (another StarCraft caster) both have a healthy revenue stream from their YouTube channels. There are at least a dozen other casters/players in North America and Europe making a living casting StarCraft II games.
Husky wasn't the first to cast. Guys like Day9, Artosis, & Tasteless cast too, and are quite popular. But HD & Husky were the first to focus mostly on "virtual" tournaments; the other successful casters focused more on "real life" tournaments. Artosis & Tasteless even went to Korea for a year to commentate StarCraft II games on GomTV. Day9 regularly travels to big tournaments.
And yet, none of them make as much money as Husky & HD (supposedly, I don't have hard numbers.)
It shows the power of the web. The old media are here to stay, but most of the creativity & growth today comes from the web.
Husky and HD appeal to the more casual audience. Their casts are more for entertainment, and they don't do too much in-depth analysis. Artosis, Day9 and Tasteless among others are pretty friendly for the average viewer, but they are definitely more knowledgeable and insightful.
Just a slight correction. Artosis and Tasteless are actually still living in Korea and casting fulltime.
Aye; this past weekend at MLG they even identified themselves as "from South Korea", not America. They've been there for a few years and haven't announced any plans to move.
I'd have to say that Day9's "Funday Monday" casts are among the most entertaining videos I've ever seen. The guy is absolutely hilarious and remarkably nerdy even for Starcraft casters, qualities which don't show as much in his more analytic casts.
It really is like watching a very fast-paced sport match, except without all the constant in-your-face product branding. Like I remember college basketball used to be, except more cerebral.
I'd not played SC or SC2 much but I bought a (replacement - long story) copy last night.
I don't like SPDY. It's trying to solve a transport problem at the application level. Plus it seems to be quite complex.
I'd love to see Google promote a transport protocol like SCTP[1], and do HTTP over SCTP instead. If Google pushed SCTP a little bit, we might see it pop on Linux and Windows within a few years.
> SCTP is an interesting potential alternate transport, which offers multiple streams over a single connection. However, again, it requires changing the transport stack, which will make it very difficult to deploy across existing home routers. Also, SCTP alone isn't the silver bullet; application-layer changes still need to be made to efficiently use the channel between the server and client.
I agree that SPDY would be a "quicker" solution. Just modify the browsers and wait a year or two, and 30%+ of the people browsing the web will have it.
SCTP has the nice side effect of improving things like streaming and games.
As for application-layer changes, I don't think they would be too difficult to make, kind of like IPv6 (I don't have anything to back this up; it's just a hunch).
SCTP deployment will take much longer than SPDY's, but SCTP seems to be the "right thing to do". Not only for the web, but for everything else that uses the network. The Internet is not only http://.
UPDATE: I just realized that saying the transition to SCTP will be like IPv6 isn't necessarily a good point for SCTP :-D ... I guess I'm a purist, not a pragmatist.
SCTP addresses some of the same needs (primarily the multiplexing), but the major hurdle there is the fact that we're ditching TCP. As far as an upgrade path goes: it is much harder to get existing servers to support SCTP (think about Apache, Nginx, etc.) than to bolt on support for a different application-level protocol (replace/augment HTTP with SPDY).
I disagree. It's much, much easier to support SCTP (since it's already provided by the operating system) than it is to support SPDY with all the server pushes, etc.
But to be honest, I don't know what the source of that comparison was... It's apples to oranges.
SCTP is not available on Windows by default, and you need administrative privileges to deploy it. I'm not trying to start an OS war debate, but this is a very practical problem.
Mobile, oddly enough, may be the best route to bring both IPv6 and SCTP to life.
Well, the primary reasons SPDY exists are encryption everywhere and speed, neither of which current HTTP/HTTPS really delivers. HTTPS is still slower than HTTP, and HTTP is slower than SPDY.
A lot of the speed increase comes from the use of multiplexing. Without it, SPDY wouldn't be able to achieve most of its goals.
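A toy sketch of the idea (my own illustration, not SPDY's actual framing format): frames from several logical streams are tagged with a stream id and interleaved on one connection, then reassembled per stream on the other end, so one big response doesn't block the others the way pipelined HTTP/1.x does:

```python
# Toy stream multiplexing: chunk each stream into (stream_id, data)
# frames, interleave the frames on one "wire", then reassemble.
def frames(stream_id, data, chunk=4):
    return [(stream_id, data[i:i + chunk]) for i in range(0, len(data), chunk)]

def interleave(*streams):
    queues = [list(s) for s in streams]
    out = []
    while any(queues):
        for q in queues:          # round-robin one frame per stream
            if q:
                out.append(q.pop(0))
    return out

wire = interleave(frames(1, "AAAAAAAA"), frames(3, "BBBB"))

# Receiver side: group frames back into per-stream payloads.
rebuilt = {}
for sid, chunk in wire:
    rebuilt[sid] = rebuilt.get(sid, "") + chunk

print(wire)     # [(1, 'AAAA'), (3, 'BBBB'), (1, 'AAAA')]
print(rebuilt)  # {1: 'AAAAAAAA', 3: 'BBBB'}
```

The short stream (id 3) finishes after the first round-robin pass instead of waiting behind all of stream 1, which is the whole point.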
SCTP is a good start for sure, and someday may make sense. The problem is a deployment one: it can't pass through NAT, making it off limits to most users today.
As for solving problems from the transport, that is not true. I assume you're suggesting that streams can only be tackled at the transport layer, but SPDY's compression is clearly an app-level endeavor.
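For instance, SPDY's header compression is plainly an application-layer trick: headers of successive requests on one connection share a single compression context, so repeated headers nearly vanish. A minimal zlib sketch of the shared-context effect (my illustration; SPDY's actual dictionary and framing differ):

```python
import zlib

# Two identical header blocks pushed through one shared compressor:
# the second becomes almost entirely back-references to the first.
req = (b"GET / HTTP/1.1\r\nHost: example.com\r\n"
       b"User-Agent: Mozilla/5.0 (X11; Linux)\r\n"
       b"Accept: text/html,application/xhtml+xml\r\n\r\n")

c = zlib.compressobj()
first = c.compress(req) + c.flush(zlib.Z_SYNC_FLUSH)
second = c.compress(req) + c.flush(zlib.Z_SYNC_FLUSH)

# The second "request's" headers compress far smaller than the first.
print(len(req), len(first), len(second))
```

TCP or SCTP can't do this for you, because the transport has no idea which bytes are redundant headers; only the application does.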
> groff amounts to over 5 MB of source code, most of which is C++ and all of which is GPL. It runs slowly, produces uncertain output, and varies in operation from system to system. mdocml strives to fix this (respectively small, C, ISC-licensed, fast and regular).
Those are reasons why you wouldn't use groff, but it still doesn't explain why – if you're making the effort – you're not building a proper roff replacement rather than some bastard subset program, and then hacking stuff like tbl into the main executable. (Never mind that there's Plan 9 roff, or the heirloom version.)
I get it, nobody really seems to use non-man roff anymore. Still, subsetting and (partially) reinventing the wheel seems an odd solution for this.
[1] http://www.advocatesforyouth.org/publications/416