I've taken a look at both Backbone and Knockout over the past week or so, and I definitely prefer the Knockout approach.

Backbone... I just don't get it. It seems like there's too much complexity. I'm mainly a back-end developer, so perhaps it's just me.


Like you, I'm more of a back-end developer. I initially tried to pick up Backbone for my SaaS, but I wasn't very successful with it.

Later I discovered Knockout; within 2 hours I was up and running, and it hasn't gotten in the way so far.

I can only encourage people to at least try the tutorials; it will be time well spent:

http://learn.knockoutjs.com/


  > It seems like there's too much complexity
Backbone helps you do the things you understand. Knockout lets you do things you don't necessarily understand. Sometimes a black-box approach is OK. Sometimes not.


Knockout, while it seems magical at first, is actually not so hard to reason about.

It's not really a "black box". It's quite easy to understand and extend, too.

For instance, you can create your own custom bindings: http://knockoutjs.com/documentation/custom-bindings.html

This is about the only place where it gets down and dirty and has you writing DOM manipulation code yourself.
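
For the curious, here's a minimal sketch of what a custom binding looks like, using the ko.bindingHandlers API from the docs linked above (the binding name and behavior are illustrative, not taken from the docs):

  // Hypothetical "toggleVisible" binding: shows or hides an element
  // based on a bound observable.
  ko.bindingHandlers.toggleVisible = {
    init: function (element, valueAccessor) {
      // Runs once when the binding is first applied: set the initial state.
      var visible = ko.utils.unwrapObservable(valueAccessor());
      element.style.display = visible ? "" : "none";
    },
    update: function (element, valueAccessor) {
      // Re-runs automatically whenever the observable changes -- this is
      // the spot where you hand-write the DOM manipulation.
      var visible = ko.utils.unwrapObservable(valueAccessor());
      element.style.display = visible ? "" : "none";
    }
  };

You'd then use it like any built-in binding: <div data-bind="toggleVisible: isOpen"></div>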


I'm not familiar with Knockout, but it sounds like my experience when we were getting into the Flex framework at work a couple of years ago. We thought the data binding was so magical, but once I started to work with it, I developed a very good mental model of how the MXML/ActionScript compilers were doing their magic.

It was much more pleasurable to work with when it was no longer magical.


I'm not familiar with either framework, but at least I can understand the intent (if not mechanics) behind Knockout code. With Backbone, it's like reading a whole different language.


Good. User-Agent detection has always been a hack.


How is this good? How is this considered to be a hack? You make this sound like something that can be achieved in a more sophisticated way. How?


It's good because it's not something that should be done.

Ask yourself why you need to detect the user agent.


Why? Because it can help improve the user experience (page payload, image sizes, etc.). There's a good reason why Google, Facebook, Netflix, eBay, Yahoo, BBC, Amazon, etc. do it. It's also part of RFC 1945 (HTTP 1.0) and RFC 2616 (HTTP 1.1). Not exactly a hack -- it's been designed into the HTTP protocol since 1996.
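
For a rough idea of what that looks like server-side, here's an Express-style sketch (the regex and image paths are placeholders; real sites use maintained user-agent databases rather than a one-off regex):

  var express = require("express");
  var app = express();

  app.get("/hero.jpg", function (req, res) {
    // Crude check: send small-screen clients a lighter image payload.
    var ua = req.get("User-Agent") || "";
    var isMobile = /Mobile|Android|iPhone/i.test(ua);
    res.redirect(isMobile ? "/img/hero-small.jpg" : "/img/hero-large.jpg");
  });

  app.listen(3000);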


To allow folks to read legible text. To ensure that elements aren't so small that you tap on the wrong thing. To provide a reasonable user experience to your audience.

How exactly would you resolve the problem?


Is it a framework for making a site that looks like it was designed by a pre-schooler? Then, yes, good job.

Otherwise ...


No, that's unlikely. PHP's main advantage is its ease of deployment on shared web hosts: upload your files and you're done.


Just keep your mouth shut about what you do in your own time and you'll be fine.


What? It's the absolute truth. If someone's wall post or tweet goes missing, it's totally inconsequential.

"Social graph" - LOL.


You think FB doesn't consider durability and consistency essential? With almost one billion users?!

Wow.

Whether it is consequential or not is completely beside the point. As is whether you understand the "social graph".


Eventual consistency, yes. What I'm saying is if a few pics disappear for a half hour while things resolve themselves, nobody is going to care... or likely notice.

It's not a bank.

By the way, I believe social networking is mostly bullshit, hence my "LOL" at the "social graph".


Eventual consistency, however, provides no guarantee of consistency unless it is backed by absolute local consistency on every node. Don't you run into CAP theorem problems otherwise?
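
For what it's worth, the usual way to make that tradeoff concrete is the quorum rule for N replicas with W write acks and R read acks: reads are guaranteed to overlap the latest write only when R + W > N. A toy sketch (the numbers are illustrative, not anyone's actual configuration):

  // Quorum overlap check: the read and write replica sets must intersect
  // for a read to be guaranteed to see the latest acknowledged write.
  function readsSeeLatestWrite(n, w, r) {
    return r + w > n;
  }

  console.log(readsSeeLatestWrite(3, 2, 2)); // true  -- consistent reads
  console.log(readsSeeLatestWrite(3, 1, 1)); // false -- eventual only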


Sure. But what if a person or a photo data object goes missing?

Facebook has different data stores for each of its features, so I imagine the loss of a person would be pretty nasty, as would a photo data object that multiple people have tagged or commented on.

My point is that when you are the size of Facebook, Twitter, etc., a loss of data in 0.001% of cases equates to a LOT of data and relationships.


I have my own /24 routed to my basement. Back in the early/mid '90s, anyone could get their own provider-independent block.

If I'd known then what I know now, I would've gone for a class B block. lol.
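
For scale, the arithmetic on the block sizes mentioned above:

  // Number of addresses in an IPv4 block with a given prefix length.
  function addressCount(prefix) {
    return Math.pow(2, 32 - prefix);
  }

  console.log(addressCount(24)); // 256    -- a /24
  console.log(addressCount(16)); // 65536  -- a class B (/16)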


I witnessed one guy implement an OAuth 2.0 provider completely wrong (he was accepting user credentials as client credentials, or something similar). This guy was smart, and he just couldn't understand the spec.

Upon reading the spec, it seemed that OAuth 2.0 is really just some rough guidelines: pick and choose what you need for the particular flow you're implementing.
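
For reference, here's a rough sketch of the two grant types that are easiest to mix up, per the OAuth 2.0 spec (the endpoint and credential values are placeholders):

  // Client credentials grant: the *application* authenticates as itself.
  var clientGrant = {
    grant_type: "client_credentials",
    client_id: "my-app",
    client_secret: "app-secret"
  };

  // Resource owner password grant: the app forwards the *user's* credentials.
  // Accepting one kind of credential where the other belongs is the sort of
  // mistake described above.
  var passwordGrant = {
    grant_type: "password",
    username: "alice",
    password: "user-password",
    client_id: "my-app"
  };

  // Both are form-encoded and POSTed to the provider's token endpoint,
  // e.g. POST https://auth.example.com/token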


If it's 1999 again, you know what's around the corner, right? pop


A weasel? I'm confused.


Bubble.


A terrorist attack?


I hope not. Every time one of those happens a gaggle of retarded politicians take their pants off, put them on their head and start running around enacting egregious laws.

The aftermath is much worse than the event.


This is what I assumed the headline meant. Like he was telling us all to brace for the inevitable crash and try to be Amazon instead of Pets.com.

That would be more accurate advice in the present climate. imo.


Hasn't lying always been a business model?


It's been a business practice, certainly. And it's been part of law and lore since the dawn of history. The Code of Hammurabi includes punishment for false witness: http://www.jstor.org/stable/3153879?origin=JSTOR-pdf

In Dante's Inferno, the penultimate (8th) circle of hell is for the fraudulent (the 9th and innermost is for traitors -- another form of fraud, if you like): http://www.wolfram.demon.co.uk/rp_dante_hell.html

