The Cost of JavaScript Frameworks (timkadlec.com)
154 points by elorant on May 8, 2020 | hide | past | favorite | 169 comments


I've been working at large F100 financial services companies as an engineer and technical manager for 10 years. I've seen front ends built using a myriad of techniques and frameworks.

Leaving bytes and bundle weight aside, in 2020 JS frameworks like React are far and away the best approach for front-end engineering from a large organizational standpoint.

1) Not difficult to hire for. JS frameworks are currently in the sweet spot where they're both widely popular and "cool". Consequently it's not hard to find good candidates. That wasn't the case 5 years ago, when React/Angular developers were still uncommon.

2) The HUGE and helpful community of developers. There are so many excellent resources for learning and solving problems with JS frameworks. JS developers are eager to show off on Medium and similar sites. I've never seen such a large and enthusiastic community. This will inevitably subside as JS falls out of favor, but I hope it doesn't for a long time.

3) Rapid UI development. The JS ecosystem makes it easy to quickly build UIs that don't completely suck and look good for the ever-important demos to senior leadership.

4) Opinionated frameworks naturally enforce best practices and ensure relatively consistent architecture. This makes it easy to move engineers from one team to another and makes grokking new codebases a lot easier. New engineers are productive quickly and begin creating value for the company. This was not the case with the inevitable jQuery monstrosities that resulted after a critical mass of code or # of engineers.

5) Fewer bugs. The JS ecosystem makes it very easy to write unit and integration tests.

Companies with IT budgets that are 9+ figures will happily pay for extra bytes across the wire in exchange for those benefits.


One of the main things I argue strongly for on any new project is "don't do anything weird". Don't use weird fringe languages, don't use weird fringe frameworks. If the thing you want to do doesn't exist, think real hard about why it does not.

At my office, we've somehow ended up with a critical GraphQL server that's written entirely in Kotlin, which is cool but completely impossible to hire for or get any kind of community support for. It feels like absolutely everything you want it to do, you have to build yourself, whereas if we'd used a normal off-the-shelf GraphQL server it would all already exist.

I'm so sick of maintaining old weird forks and unnecessary custom code that doesn't need to exist.


I think the advice is sound, but Kotlin and Graphql are pretty mainstream. This sounds like a local market problem.


I think if you're putting out "Senior dev with 3+yrs experience in GraphQL and Kotlin" ads you're going to have real trouble in most markets, if not all.

Of course you shouldn't be doing that, is the thing, but that's how most job ads read.


Yeah, this is what I mean.

Kotlin is fine, and GraphQL is fine, but when you combine those, you have a very small niche.


Kotlin is really very similar to C#. Java developers shouldn't have much of a problem either (now that Java finally has lambdas), and I hear Scala is mostly a superset of Kotlin.

GraphQL and (Kotlin or C# or Java or Scala) probably isn't that small of a niche.

Edit: Also, Kotlin does have near total compatibility with Java and can use any Java library (aside from the most magical bytecode-rewriting annotation processors) so I don't understand what you're saying about having to rewrite things that exist.


I'd put it the other way around, Kotlin is a subset of Scala that the masses might be ready for. I would always prefer the kind of candidate who's excited about stuff like Kotlin over others.


The nice thing about Kotlin is that you can just bind to the existing Java GraphQL libs. Clojure is similar in this respect. Sure, it means you have to also be familiar with Java and the Java ecosystem, but it's actually great to interact with Java-the-ecosystem when you don't have to write any Java. ^^


Why are people even hiring specifically for such a niche subset? These are not huge complex systems, anyone can become familiar with GraphQL in days.


I would further describe 'weird' as being out of the norm for the company and where hires come from. In general, I have a 'weird' budget of 1.0 thing(s) for non-critical projects.

Recently, I've been spending it on Vuejs/TypeScript (with Go backend) at a company that uses React/TypeScript, GraphQL, Rails/Ruby, and Go.


Absolutely agreed. I have never in my life had a stakeholder get on my case because our site was 3mb when it could have been 1mb. I have had many stakeholders get on my case because we couldn't turn around features fast enough, or the site was buggy, or our UX was worse than our competitors. Those are the things that the people who pay me and my team care about and if I could 10x our bundle size for a 1.5x improvement in development speed I would do it in a heartbeat.


That depresses me because it's clear who is being ignored here: the user.

A 3MB bundle size instead of 1MB does make a difference, and if you're working on the kind of site that relies on conversions/purchases to succeed then you're absolutely leaving money on the table. Not to mention that Google factors in page load time to rankings.

> our UX was worse than our competitors

If your competitor has a < 1MB bundle and yours is 3MB, then any user on a low-end Android phone is absolutely going to have a worse UX. Maybe stakeholders aren't aware of all this. Maybe you should tell them.


Most of the apps I've worked on don't have conversions and aren't indexed by search engines. One app I worked on recently did have a mobile version that received a small fraction of all traffic, and even then overwhelmingly from relatively modern iOS devices.

It's like all engineering: use the right tools for the job. If low-end Android users are a significant portion of your target audience you have to design your system around that.


Dev experience and productivity are key, but don't forget that a 10x bundle size has effects you can't simply dismiss. It does cause a UX problem, since a slower pageload is part of the UX.

SEO, too: Google also takes page load and paint times into account.

These are the things your stakeholders probably don't know about but you should.


As a Frontend Engineer I can assure you there are ways to counter those issues, but they're not exactly popular: Server Side Rendering. It adds another step to the build process in which HTML is prerendered, so you get the speed of loading plain HTML. It then loads the actual JS in the background. It's a PITA to implement.
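The SSR flow described above can be sketched framework-free; the names here (`Greeting`, `renderPage`, `/bundle.js`) are illustrative, not any particular framework's API.

```javascript
// Sketch of the SSR idea: the same component function can run on the
// server (emitting plain HTML the browser paints immediately) and later
// be "hydrated" on the client once the JS bundle arrives.

function Greeting({ name }) {
  return `<h1>Hello, ${name}!</h1>`;
}

// Server/build step: prerender the component into a full HTML document.
function renderPage(component, props) {
  return `<!doctype html><body><div id="app">${component(props)}</div>` +
         `<script src="/bundle.js" defer></script></body>`;
}

const html = renderPage(Greeting, { name: 'world' });
// The browser can paint this before bundle.js finishes loading; the
// client runtime then re-renders into #app and attaches event handlers
// ("hydration").
console.log(html);
```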


With tools like Next.js or Gatsby it's not that bad. We use Next at work on a large scale and it works for us. But do you know what language gives you instant SSR? PHP :) It's funny how much effort we had to invest to replicate what another language supports out of the box. I've built a blog in a few hours with Craft CMS, the perfect tool for the job in this case; I didn't even have to touch PHP files, just the templates.


The majority of the projects I work on target users on desktops with modern browsers. You have to get into the massive 10mb range before noticing any lag, and even then only on an empty cache. SEO isn't a concern either, all of our marketing pages (when we have them) are static sites.

It's all about picking the right tools for the job. If low latency is a business requirement by all means don't use a framework.


Wait another year or two and your stakeholders will smarten up - users are noticing. Svelte JS, for example, has everything you listed minus the costs. React is very far from the end of the road.


Maturity matters, unfortunately. I've been looking for a chance to try out Sapper but I wouldn't feel comfortable recommending that we start a major new project with it.


Also don't miss out on S-js and Surplus.js. They are helpful tools for building reactive UIs without a VDOM.


[flagged]


For what it's worth I've been a native mobile dev for the past 10 years, barely touching any sort of web stuff (maybe updating my resume here and there). I was able to pick up Svelte very quickly, about 2-3 weeks tops.


It's about a 4 hr learning curve from React - but I'm not shilling Svelte, just showing that there are already superior models. Pre-compiling is really objectively better in every way except popularity, and that will change because of its betterness.


What if you're compiling to JavaScript from another language, like e.g. Kotlin/JS? Does the Svelte compiler rely on some very particular JavaScript syntax, or could you run it on the JavaScript output from another compiler?


Svelte source code is not Javascript; it's a mix of JS, HTML, CSS and mustache-like syntax, so you wouldn't necessarily be able to compile to Svelte using out-of-box tooling from other ecosystems. I suppose you could compile to Svelte output, but then you'd just be reimplementing it in a different language.


Ah, that's unfortunate. At least with React (and Mithril) you can use it from pure JavaScript, which lets you replace JSX with Kotlin's DSL[1] for HTML.

[1] https://github.com/Kotlin/kotlinx.html/blob/master/README.md


I find using HTML for HTML much more pleasant.


I do too, for simple things. But what do you do when you have repeating patterns and want to eliminate duplication? Usually a template language would be introduced, and now it's not HTML anymore; tools that work on plain HTML don't work with the special template language; commenting out code gets tricky because the same file has multiple kinds of syntax intermixed. And in addition to your HTML and your template language you still have your programming language.

Kotlin solves this by using the programming language for the whole thing, allowing you to extract repeating parts using plain functions as you would with any other code. Of course it's not as well suited as HTML itself for writing HTML, but it might be better than HTML + some template language + some programming language.
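The same "use the programming language instead of a template language" idea that Kotlin's DSL embodies can be sketched in plain JavaScript as well; `tag` and `navItem` here are hypothetical helpers, not a real library.

```javascript
// HTML built by ordinary functions: repetition is factored out with
// plain functions rather than template partials, so there is only one
// language in the file.
function tag(name, attrs, ...children) {
  const attrStr = Object.entries(attrs)
    .map(([k, v]) => ` ${k}="${v}"`)
    .join('');
  return `<${name}${attrStr}>${children.join('')}</${name}>`;
}

// A repeating pattern extracted as a plain function, no template syntax.
const navItem = (href, label) => tag('li', {}, tag('a', { href }, label));

const nav = tag('ul', { class: 'nav' },
  navItem('/home', 'Home'),
  navItem('/about', 'About'));

console.log(nav);
// <ul class="nav"><li><a href="/home">Home</a></li><li><a href="/about">About</a></li></ul>
```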

Another way to go, to avoid having a mysterious mixture of almost-HTML embedded within some other syntax, is to do what Thymeleaf does[1] and have the template language be embedded inside HTML (rather than the other way around) so that the template file is actually valid HTML.

[1] https://www.thymeleaf.org/doc/tutorials/2.1/usingthymeleaf.h...


Superior in what way? Do you really trust that SvelteJS is going to be a better product in 5 years over something like React, which Facebook invests millions of dollars in every year?


I don't think anyone is going to sell you on something else, you seem very ingrained in the React ecosystem. More dollars doesn't equate to "better product" down the road. Angular came and is still a part of Google. Would anyone starting a new project use it over something like VueJS or React today? Probably not. Right? So that sort of tosses your argument out the window.

I don't do JavaScript professionally anymore, but I've worked with jQuery, BackboneJS, lots of AngularJS, some Elm, some VueJS, and some React. Being the curious person that I am, in search of greener pastures, I checked out Svelte, by first watching Rich Harris' video https://www.youtube.com/watch?v=AdNJ3fydeao on "Rethinking Reactivity", then seeing the benchmarks: https://twitter.com/Rich_Harris/status/1200807516529147904, and watching Rich and Dan Abramov spar a bit over their challenges. From an outsider with no skin in the JS game, Svelte seems far superior now and into the future...

1. It is a compiler (it's like comparing C/C++ to Python)

2. The joy of out-of-the-box animations that use the GPU via CSS (reminds me of the joy of using jQuery for the first time with its OOB animations)

3. You can drop in any other JS lib (it doesn't have to be in a particular JS framework ecosystem) and have it inter-op! This is huge! It's because it's literally like writing vanilla JS

4. No need to learn some new syntax like JSX (not a huge fan)

Just take a peek. If you don't like Svelte, so be it, but at least you know what it is that you don't like rather than turning a blind eye because you don't want to be convinced -- that's tunnel vision.

I was trying to decide between React/Vue/Svelte for a project I started recently, and after having attempted to build to-do apps in each, I landed on Svelte.


I am not at all a React fanboy; I'm just using it as a stand-in for Vue/Angular/React. You're selling Angular short: it's still a great framework that is very popular, even better than React for some use cases, as it has more out of the box and is more opinionated.


Your actions say otherwise.

You want to turn this into a framework/lib debate? I'm checking out. Angular is popular in the same vein that PHP is popular. People still use it. In fact, CodeIgniter is still very popular. It certainly has some use cases that certainly beat things like Rails, too. I see where you're going with this -- you care about popularity. Got it. Well, Svelte will check out now.

I couldn't care less for any framework. I'm looking for the least friction, the highest expressiveness, maintainability, and, last but not least, portability.

I just left a company that did React, having spent the last year ripping out Nuclear for Redux, and now there are React Hooks and Contexts. Hiring massive teams to spin wheels rather than develop features and chalking it all up to tech debt sounds like fun.

WhatsApp would not exist if people didn't use more obscure technologies. 50 person team on Erlang for nearly a billion-user product while some organizations want to run 500 person teams to refactor the code written only a year ago be it Java or JavaScript.


A hello world react app with everything you would typically need (redux, router, etc.) starts at 100K+. Svelte is a couple K.

A React app carries a massive abstraction that runs on every render; Svelte compiles the framework away and emits straight vanilla JS that updates the DOM directly (it wins all the perf tests).

Pre-compiling allows for beautiful syntax and more flexible app structure, since you aren't limited to the runtime.

SSR and code-splitting are significantly easier and more straightforward.

React is designed for a billion-user site and it shows.

There really is zero cost to pre-compiling except mind-share, and only benefits. Of course an F100 company has many concerns above tech awesomeness, so no judgement on your selection criteria. But I assure you this movement is coming, react/vue/angular will adapt/change and/or people will move to Svelte-like solutions.

You can already see it happening in the article facebook released today [1]:

"By using rems, we can respect user-specified defaults and are able to provide controls for customizing font size without requiring changes to the stylesheet. Designs, however, are usually created using CSS pixel values. Manually converting to rems adds engineering overhead and the potential for bugs, so we have our build tool do this conversion for us."

[1] https://news.ycombinator.com/item?id=23116300
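The px-to-rem build-step conversion described in that quote can be sketched as a tiny transform; this is an illustration of the idea, not Facebook's actual tooling.

```javascript
// Authors write CSS pixel values; a build step rewrites them as rems,
// relative to the browser default of 16px per rem, so user font-size
// preferences are respected without manual conversion.
const PX_PER_REM = 16;

function pxToRem(css) {
  return css.replace(/(\d+(?:\.\d+)?)px/g, (_, px) =>
    `${parseFloat(px) / PX_PER_REM}rem`);
}

console.log(pxToRem('font-size: 12px; margin: 8px 24px;'));
// font-size: 0.75rem; margin: 0.5rem 1.5rem;
```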


I'm not convinced that the 100kb is what's causing the perf problems, though. It's not that hard to make a React app that loads almost instantly. There are lots of huge, slow React apps, but that's due to poor engineering choices, not the framework.


You have not looked at Svelte. You still have the overhead of a virtual DOM and its diffing. People who turn a blind eye to Svelte will be blind-sided. It's like writing vanilla JS without all the baggage, because it's a compiler (think C++ versus Python).

I highly doubt that one can build a ReactJS app that loads faster than a Svelte one. I encourage you to try and share it. This is coming from someone who is a backend engineer and doesn't really do JS, but I've had to for my own personal project, and both perf and the subjective bits of expressiveness and joy of writing are important. Contrary to popular belief, Svelte isn't trading perf for ugliness and lack of expressiveness.
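For illustration, here is a toy version of the per-update diffing work a virtual DOM does (the overhead a compiler like Svelte avoids by emitting direct DOM updates); this is highly simplified and not how React actually diffs.

```javascript
// On every state change a virtual-DOM framework re-creates the whole
// node tree and diffs it against the previous one to find what changed.
// A compiled framework instead emits code that touches only the one
// affected node, skipping this walk entirely.
function diff(oldNode, newNode, path = 'root', patches = []) {
  if (JSON.stringify(oldNode) !== JSON.stringify(newNode)) {
    if (oldNode.tag !== newNode.tag) {
      patches.push({ op: 'replace', path });
    } else if (oldNode.text !== newNode.text) {
      patches.push({ op: 'setText', path, text: newNode.text });
    } else {
      (newNode.children || []).forEach((child, i) =>
        diff(oldNode.children[i], child, `${path}.${i}`, patches));
    }
  }
  return patches;
}

const before = { tag: 'div', children: [{ tag: 'span', text: 'count: 0' }] };
const after  = { tag: 'div', children: [{ tag: 'span', text: 'count: 1' }] };
console.log(diff(before, after));
// [ { op: 'setText', path: 'root.0', text: 'count: 1' } ]
```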


So I had (an admittedly brief) look into Svelte on the back of your comment. And one thing that stands out to me is that it appears to be using string templating. This isn't a complete dealbreaker, but it's definitely a step back from the React world where everything is JavaScript. This was by far the biggest pain point in other frameworks like Angular, so I'd be pretty reluctant to go back to it.

Looking at the framework benchmarks (https://krausest.github.io/js-framework-benchmark/current.ht...), svelte is indeed a good bit faster than React. However, Inferno which takes a react-like virtual DOM approach but better optimised is faster still. So I'd be more inclined to go with that if I were willing to give up the React library ecosystem.


I see TypeScript support is on the road map. I haven't used it yet, but that's exciting.


I love react. But when I hear developers say their react app loads instantly I think:

1) Their app is pretty small

2) Their machine is really fast


That may be the case for React, but the vast majority of your stack (e.g. react-router, react-redux, styled-components, etc.) certainly doesn't have millions of dollars being poured into it. The base frameworks are very, very similar in terms of what types of features they provide.


The author of redux works at Facebook


Both Dan Abramov and Andrew Clark (co-creators of Redux) were indeed hired by Facebook to work on the React core.

However, Dan and Andrew haven't actively touched Redux library code since mid-2016. (Dan occasionally drops into an issue to comment, like his suggestion to not ship a `useActions` hook for React-Redux, but that's it.)

The _current_ maintainers (Tim Dorr and myself) do not work for Facebook, and Redux has never been a Facebook-sponsored project. We're an independent open source project.

We chat with the React team frequently via Github and Twitter and such, but that's just OSS collaboration work.

See https://blog.isquaredsoftware.com/2018/03/redux-not-dead-yet... for more details.


AFAIK gaeron hasn't touched redux stuff at all for a while, and acemarke/timdorr are not FB employees.

Also same ecosystem story there: redux-saga, redux-form, etc are all low/no budget projects.


And also server side rendered SPA (keeping the VDOM on the server)


I was lucky at one point to work with product owners who understood that latency translated to lost revenue and so they helped champion the idea that engineering excellence was actually valuable, and that squeezing out performance was itself a worthy goal. It helped that we could compare our metrics to comparable sites, and could give upper management bragging rights when our pages loaded faster than bigger higher staffed “competitors”.

But otherwise, I think you tend to be right, in a lot of areas even an extra second of latency doesn’t matter, and more features will move the needle more.


2 more MBs are going to matter to some users with limited data or bandwidth.


Sure, that's a trade-off you have to make a decision on when designing the core architecture. If your users are sitting behind corporate T1 connections near an AWS data center bundle size doesn't matter at all. If your users are still on 3G phones it matters a ton. For somewhere in between it becomes a business decision: how much performance are you willing to trade for developer productivity?


Your comparison implies that T1 is faster than 3G, but that's probably not the case except in really poor coverage areas.

T1's maximum data transmission rate is 1.544 megabits per second. [0] While 3G's theoretical maximum is 21.6 Mbit/s for HSPA+. [1]

[0] https://en.wikipedia.org/wiki/T-carrier#Transmission_System_...

[1] https://en.wikipedia.org/wiki/3G#Data_rates
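For a rough sense of scale, here are back-of-envelope transfer times for a 3 MB bundle at those theoretical maximums (ignoring latency, protocol overhead, and real-world throughput, which for 3G is usually far below spec):

```javascript
// Time to move `megabytes` of data over a link rated in megabits/sec.
// 1 byte = 8 bits, so MB * 8 gives megabits.
function transferSeconds(megabytes, megabitsPerSec) {
  return (megabytes * 8) / megabitsPerSec;
}

console.log(transferSeconds(3, 1.544).toFixed(1)); // T1 at 1.544 Mbit/s: ~15.5 s
console.log(transferSeconds(3, 21.6).toFixed(1));  // HSPA+ at 21.6 Mbit/s: ~1.1 s
```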


In practice, 3G is quite a bit slower than that.


>I have never in my life had a stakeholder get on my case because our site was 3mb when it could have been 1mb.

As cloud costs become tougher to swallow, this is going to change.


The common thread in a lot of this is disregard for end user experience. But I think the biggest reason that these discussions never go anywhere is that people are using the same tools for wildly different purposes.

At a guess, I'd say that a webapp for a financial services company is going to be loaded once then interacted with heavily? Single Page App model. In that instance the time taken to download, parse and execute JS upfront isn't a big deal. Same as Gmail: when it loads, it takes a couple of seconds, then I leave the tab active for who knows how long. Plus in this situation the audience is relatively monolithic: a lot of desktops, and most of them fast.

The problem is people taking lessons learned in that environment and applying them elsewhere. Shopping sites, news sites, essentially anything that people arrive to from a Google search pay a much bigger price for a giant payload. Users can and will leave your site if it's taking too long to load. They'll be running on all kinds of devices, including low-end Androids with very underpowered CPUs.

Whenever I see a site like that implementing a full React page with Redux and who knows what else bolted on, I cringe. It's not an appropriate use of tools. But these days people only look at developer productivity (we can add so many more features so much faster!) rather than focusing on the experience in the user's hand.


The extra bytes across the wire come at a cost of worse search engine performance (extremely costly for ecommerce) and bad accessibility for people using screen readers. #2 I also don't agree with. The JS ecosystem seems like the opposite of "build UIs quickly". You can't swing a stick without hitting like 5 tools you have to use.


> This will inevitably subside as JS falls out of favor but I hope it doesn't for a long time.

I don't think JS will fall out of favor for a long time.

Languages that compile to wasm are unlikely to be competitive until native dom access has broad browser support, which could be years. On top of that, compilers will have to be written to do code splitting and compile times will have to drop.

Languages with strong type systems are unlikely to be competitive because TypeScript is good enough that JS projects are now manageable.


> 1)Not difficult to hire for. JS frameworks are currently in the sweet spot where they're both widely popular and "cool". Consequently it's not hard to find good candidates

If someone is good at JS why not hire him? Wouldn't a professional be up & running with ANY framework you throw at him in a month or less?


I know JS like the back of my hand, I’ve been working with it off and on throughout the years since it was first in beta for Netscape Navigator. I’ve had to do some relatively complicated JS logic on the front end and back end.

But, I wouldn’t work for any company crazy enough to hire me as a modern front end developer.


This is also true for other well-established web (not JS) frameworks like Django or Rails.


Not only this, but eventually we will ditch JS in favor of something more "isomorphic" with whatever the back-end is using.

Tech like Blazor and related asm.js-type thinking will eventually replace JS when it bypasses asm.js completely and ships bytecode that the browser executes directly.

I doubt these frameworks will be much different compared to what we are doing today, because we will still need to deal with layouts, alignments, the color and shape and proportion of things, event handlers etc. The usual stuff.

Ragging on React, Vue, Angular is misguided. These frameworks allow developers to write front-ends as if they are applications, because in a lot of cases, this is what the modern "website" really is. They are training the next generation of front-end developers much like how jQuery did in mid 2000s.


> Not only this, but eventually we will ditch JS in favor of something more "isomorphic" with whatever the back-end is using.

Why not use JavaScript itself for this? That is the strategy employed by Next.js/Nuxt.js/Sapper/etc. Your React/Vue/Svelte components are rendered on the server and then hydrated on the client.


Why not just have everyone code in Javascript now and forever?

Same reason we didn't settle with everyone coding in C. Or C++. Or Java. Or Python.

The web is just a method of distribution. Not every problem or domain takes equally well to each programming model or language. It would be nice to be able to pick a tech stack that works for my domain and the characteristics of my project, rather than picking a tech stack because of how users access it.


Because Node.js isn't appropriate for a lot of back-end applications, or for legacy back-end applications that already exist.


Mostly because, for all the development done on it, JavaScript is still a 50% accurate name; it ain't Java, but it is definitely a scripting language. Node may be popular, but it's lightyears away from being able to say that it's chasing all the other languages out of the server niche, and it's been around long enough that we can judge it's unlikely ever going to. In 5 years it's going to be a lot easier to get a non-scripting language compiled on to the client than it is to make a scripting language work as your sole backend language.


That dream is already here if you use Clojure, and it's great! Web assembly makes me very bullish on this kind of setup. Maybe we won't even be targeting the DOM in the future?


> Maybe we won't even be targeting the DOM in the future?

Unless the other thing you target instead is similarly well-developed, you're gonna have real trouble with accessibility.


That's a really good point. Accessibility is super important and a shame that many devs (including myself) forget about it.


> eventually we will ditch JS in favor of something more "isomorphic" with whatever the back-end is using.

I think you are looking for Nim. It compiles to assembly (through C), javascript, wasm and more.


It’s like Java... tons of orgs will trade off perf to enable an engineering team of mixed quality to deliver projects.


What is wrong with Java performance? I think the garbage collection in Java is probably one of the best, if not the best.

Start-up times are an issue. But once warmed up, the JVM is very performant, provided it's tuned well.


Except Java is extremely performant, and if your engineering team doesn't know exactly what they're doing, there's a good chance they'll both use more time and create a slower product with a low-level language.


umm wat.

> Except Java is extremely performant, and if your engineering team doesn't know exactly what they're doing, there's a good chance they'll both use more time and create a slower product with a low-level language.

So agreed that it'll probably take more time if the team doesn't know what they're doing. But as for a slower product? Ehh, I think that was the point GP was making: yeah, it's an overengineered product that results at times, but it's got just enough guardrails that you hopefully just get a slow, or at worst buggy, product, versus a product that doubles as a loot box of future CVEs.


The point is: for the majority of programmers - including a number who can write both reasonable C and reasonable Java - they will create better, faster programs with Java than with C, and in a shorter time.

There are absolutely times when Java doesn't cut it but at that point your options are getting increasingly limited in many ways: who can build and maintain it, what kind of languages and libraries you can use, the hardware you use, tuning etc.


Exactly. Ensure the average developer can contribute. Of course top tier devs can deliver better products using bare JS.


It’s completely different. Java is a back end language - even if we accept that Java is 20% slower (just making up a number. I don’t know whether it’s slower or not), hardware is cheap and you control it. You don’t have any control of the client’s computer or their bandwidth.


Ok, so I generally agree with the content of the article.

I want to pick on the up front assumption though. He lists 4 costs that are paid -

---

The cost of downloading the file on the network

The cost of parsing and compiling the uncompressed file once downloaded

The cost of executing the JavaScript

The memory cost

---

What isn't mentioned is WHO'S paying those costs. And I think that generally matters when you're trying to convince companies with a profit motive that they're doing the wrong thing.

My issue is that of his 4 costs, 3 are paid by the consumer of the product. The ONLY cost paid by the company is the extra network traffic to download the JS bundle. Even that can be optimized to a VERY small number if you're using good caching and have a CDN in place. The individual consumer might still wait for that network payload, but the company's objective cost still goes down.
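The caching point can be sketched like this; the content-hash filename pattern and header choices here are illustrative assumptions, not any specific CDN's configuration.

```javascript
// Content-hashed bundle filenames (e.g. main.3f7a9c21.js) change whenever
// their contents do, so they can be cached "forever": returning visitors
// never re-download them and the company's transfer cost approaches zero.
function cacheHeadersFor(filename) {
  const contentHashed = /\.[0-9a-f]{8,}\.js$/.test(filename);
  return contentHashed
    ? { 'Cache-Control': 'public, max-age=31536000, immutable' }
    : { 'Cache-Control': 'no-cache' }; // HTML must revalidate to pick up new hashes
}

console.log(cacheHeadersFor('main.3f7a9c21.js'));
// { 'Cache-Control': 'public, max-age=31536000, immutable' }
```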

So in general, what's the compelling argument that I should care about the rest of the article? None of it is bad advice. It just seems that the only real benefits I might get are two user features

- Faster sites

- Better battery usage

Users like those things, but they don't put them anywhere close to the top of the priority list for most products.

So I'm really walking away from this seeing a savings still. As a developer, I don't really love that opinion (I want to make good products that are fast and small), but as a company stakeholder with a lot of possible things to prioritize... this sounds like bad advice.


So basically we're getting shit slow apps as users because they're cheap to make.

As a user I thank you.


Would it be better to not have the app at all? Time to market and dev costs are key considerations for startups. Many don't have the option or funding to do things perfectly.


90% of the cases... yes, it would be better.


Last I checked no one is making you use something you don't want to.

If you think you're better off without a product, don't use it and let the market work itself out.

Frankly, I mostly even agree with you. I rarely install more than 5 or so apps on my phone these days, and I visit a pretty paltry number of sites consistently, but that doesn't mean someone, somewhere, isn't getting value from those products.

One man's trash is another man's treasure.


> Last I checked no one is making you use something you don't want to.

I just checked, my boss is making me use slack, and nobody is writing a non-electron competitor to slack...


Welcome to capitalism... If what you're building is solving a valuable problem, and you don't build it fast enough, someone else will build it quicker and shittier but still solve the problem. You're getting shit slow apps because you use them when they're delivered to you.


I mean, same reason high fructose corn syrup is in most of the foods we eat...


"WHO's paying the cost" - it's not even always the end user.

On Wednesday night, Anna (9 years old) said that Epic Books wasn't displaying properly (Chrome v49, WinXP). She wouldn't even have a computer if it weren't office trash a few months ago, and I brought it home to give to her. Now she depends on it for homeschooling during the lockdown.

Epic Books tech support insist that they will not provide support for old browsers. Anna has homework to do. Should the school district provide laptops? Someone has to make a decision about what hardware is needed for thousands of students.

Why does the school district need to spend $100k on brand new iPads? Because Epic Books want to use the latest JS framework.


Oh wait, I know this one!

>Why does the school district need to spend $100k on brand new iPads?

Because the school chose to use Epic Books.


The problem is a complex client-provider relationship. The end user (children) don't make the purchasing decision. The ones choosing the software (teachers) have no control over the hardware. The ones choosing the hardware (school tech admins) have no idea what the teachers really want or need.

But the child will be punished if they don't do their homework. I think that's a grave injustice. It's not her fault that the school didn't provide the tools. We should be encouraging people to learn when they want to, not making it hard for them!

Healthcare is another area where there's a complex client relationship between doctors, insurance companies, and patients. I really think that developers in education and health are doing important work to save lives, but the capitalist approach of prioritising the person who pays isn't that simple.

With those fields, it shouldn't be just about money. On that note, I liked this comment yesterday: "hey, will you help me move? I'll pay you $21.58 for your time," I'd probably bristle. Even though that might be the equivalent price of a few slices of pizza and a beer" https://news.ycombinator.com/item?id=23106932


I don't disagree with you, I just think you're placing blame in the wrong spot.

The decision that harmed the child was made by the school. The entity that has the power in this situation is NOT the company selling the product, it's the purchaser that chose to use them.

The school is the immediate and obvious place to resolve the issue, and that's why the school is responsible for the new costs (or alternatively - Choosing a new provider).

I think it's an incredible oversimplification to assume that a company is updating "just because", and that they've done it without any consideration.

I'm not bullshitting when I say I've seen multi-year discussions about when/where we can afford to drop support for a particular devices/system.

It's a constant analysis of

- What does the upgrade buy us?

- What does losing those users cost?

- What relationships will be damaged?

- What impact does it have on company morale?

And the answers to all of those questions vary over time, as the user base changes and new systems are released.

Basically - I promise the company acted in a way that they believe will help themselves and their customers most. They might be wrong, but who cares? No one is omniscient, we're humans not gods.

So does updating happen to hurt this one kid on an older device? Yes. No one is happy about it, but they believe that the compromise was justified by the benefits it brings to all their OTHER users. Who they are also responsible for thinking about.

----

Now, I strongly agree that products that are purchased by people who don't use them tend to be inferior. But again, I find it hard to blame the developers of those products because the unfortunate reality is that when the person buying and the person using are different, usually the buyer just comes in with a checklist of "features" and selects the product that checks the most boxes. That means the company selling the product is forced to act in a way that checks boxes instead of making happy users.

But again, the solution to that isn't to blame the developer - The solution is to make actually using the damn product a requirement of the purchasers job description. And again, only the school can do that.

Now, the final ugly bit of this is as follows - You cannot wave a magic wand and make all problems disappear. There will always be better/worse outcomes for different parties after a decision is made. Maybe the school purchaser DID actually use the product, and DOES actually want the features a new upgrade brings, and has done exactly the same calculations for their school district that the company made, and has STILL decided to use that product even though it happens to hurt you, because they believe it helps more folks.

I can't help you with that, but it does bring to mind the quote from Churchill

>Many forms of Government have been tried, and will be tried in this world of sin and woe. No one pretends that democracy is perfect or all-wise. Indeed it has been said that democracy is the worst form of Government except for all those other forms that have been tried from time to time.…

It's a shitty situation and if you know how to make it better, let me know...


What you are describing is an ethics problem common to software. Developers are putting their self-interest before the product, arguing it's necessary because otherwise it would take too long, be too challenging, or be too expensive. Those could be valid arguments, but they are entirely unqualified, and without supporting data remain invalid and self-serving to the developer.


I can't honestly take this opinion seriously.

The problem is that - as currently structured - your ideology demands that another party enter into an exchange or agreement that they don't believe is beneficial to them.

I don't actually give a flying fuck whether the "developer" you've outlined in your post is right or wrong with their opinion. They have the right to make and sell the product they believe will benefit them the most. Could they be wrong? SURE! And if they're wrong often enough or long enough they usually go out of business (usually... some caveats here around monopolies/markets/regulation/etc).

So I'm not really sure why you see an ethics problem. If it truly is self-serving and invalid reasoning, why not put your money where your mouth is and do it better?

Basically - Step up or shut up.


> They have the right to make and sell the product they believe will benefit them the most.

If you honestly believed that you would advocate for more qualified, data-driven decisions about your software to ensure the decisions at present aren’t wasteful and inefficient.

And that’s what makes software unique: ethics aren’t of value in this line of work. Developers frequently make unqualified decisions in their own self-interest, in conflict with their employer’s profit motive and with their users’ experience. If ethics were important you would serve as an advocate for your product as qualified with evidence.

Consider the difference if your medical doctor acted primarily in their own self interest for profit instead of primarily in the interest of your health. Consider how that same reasoning can extend to software developers who produce poorly tested medical equipment.

Yet somehow this reasoning seems to have offended you. I am unclear as to some ideology that you speak of.


You're just filling comments with strawman arguments now.

Good luck with life buddy.


When the article mentions costs it means performance costs for the user. Faster sites and better battery life have been objectively shown to matter. Users don't know to vocalize it, but they bounce from fat, slow sites.


i think the battery cost of websites and even apps is fairly well invisible to users. there's certainly facilities which say how much battery some app has used, but if it's a frequently used app it's a tautology.

users don't care because they can't. they can pick battery life when they buy their phone (at the expense of important features), but they cannot optimise at any time thereafter. most people who would care are rational, and since they can't, they don't

therefore, it is browser and os developers who need to act to stop these costs being externalised. show current battery consumption rate for this app/page in the status bar. then, people will start complaining about dirty webpages


if your js is small, you can avoid two of the costs by inlining it.


And download it every time? It is better to download it once and tell the browser to cache it.


If it's inlined into another static resource, why not just cache that? HTML with an inlined script is just as valid a cache object as the plain script itself.


In practice, performance is rarely the bottleneck.

On the job, most developers care about shipping on time and spending less time rewriting things as they learn via experience.

Anyone who avoids frameworks is going to be in for a world of hurt when they have to explain to their boss "I know the project is 3 months late but I saved 100kb in bundle size"


Counter point: I've seen projects that were blowing their budgets/deadlines due to:

- people not being familiar w/ the chosen framework and being unable to figure out how to accomplish some of the deliverables

- wrong framework for the job

- analysis paralysis figuring out which frameworks to use

- using the framework wrong and then drowning in code debt

The point of the article, I think, is that there's a tangible difference in performance between _similar_ frameworks, but which isn't often considered when choosing frameworks.

You'll often see developers say that they prioritize developer experience over things like runtime characteristics of a framework and there's a lot of handwaving about "maintainability". This despite there being extremely complex apps out in the wild written in obscure frameworks (e.g. lichess uses mithril.js and the code looks just fine).

There's clearly a disconnect somewhere when, for example, facebook announces a homepage redesign using the latest and greatest tools and immediately people start complaining about its performance.

I would not buy a slightly wobbly table from a carpenter that wants to use a more ergonomic hammer, over a not wobbly table from a carpenter that uses a cheap hammer. Similarly, I think what the article illustrates is that a developer should also act like a craftsman and critically think about the effectiveness of their tools in terms of end results, rather than blindly accepting tool marketing/status quo and disregarding the quality of the end product.


I like your table analogy. I would think that a more ergonomic hammer would also have tangible benefits like the ability to work longer and faster, creating more tables per day and lowering the overall costs of production resulting in cheaper tables.

Would customers be willing to buy a cheaper table that's wobbly vs a more expensive table that's not?


I'm not comparing a wooden stick with a high end hammer. I would think a competent carpenter would have a decent enough hammer, but not necessarily the shiny cool x-titanium-3000(tm), because any difference, if they exist at all, would be well into diminishing returns territory.

At the end of the day, the wobbliness of the table is a result of whether the carpenter cares about table wobbliness more than which hammer they use. However, it may say something about the level of craftsmanship and competence of the carpenter if they got suckered into buying a well advertised x-titanium-3000(tm) to compensate for an unwillingness to put in the effort to master core carpentry skills.

Another similar analogy is an audiophile buying gold plated HDMI cables for better sound quality, oblivious that the reality is that the signal going through said cable is using a digital protocol with error correction.


Yes, but when the framework continues to grow 100kb release after release we blame the person who uses it for doing so.

If you don't like it, stop using it.

You can't win, and there will always be a developer ready and willing to kick you while you're down.


While there’s some truth to this, 3 years down the road when the codebase is a spaghetti nightmare, all the original authors have moved onto other companies and the impending refactor is looming, management starts to wonder: was it really worth it?


I wish that were what happens. Then maybe management would learn over time and the industry would smarten up.

Instead what happens is that the old developers built big things on-time so they were "good developers," and then the new developers struggle with the code and so management assumes the new devs are less competent.


Probably the old developers did so also because being on time meant skimping on technical debt due to management setting unrealistic deadlines.


> Anyone who avoids frameworks is going to be in for a world of hurt when they have to explain to their boss "I know the project is 3 months late but I saved 100kb in bundle size"

Perhaps they failed to educate management on the consequences.

3 billion people in the world have slow Internet access and pretty old phones and PCs.

Many websites can be too slow for every day use and this leads to increasing digital divide and inequality.


Why would you think avoiding frameworks means rewriting things? Most of the arguments people advance supporting frameworks seem rather hollow, as in based upon imaginary assumptions.


This is interesting data, but unless there's a way to normalize this against the feature set of the sites in question, then there's way too much selection bias to use this to draw any meaningful conclusions.



Looking at the framework-less ES6 implementation makes me hope I never have to maintain code like this.

https://github.com/tastejs/todomvc/blob/gh-pages/examples/va...


Yeah, this is what web dev was like before these frameworks came about (except web standards were worse 8 years ago). If you're building a complex application with more than 2 or 3 devs, a framework is a must.


That looks like fairly standard code for defining UI behavior in native frameworks as well, it's not particularly nightmarish.


... what's so bad about it?


It's a minefield of potential bugs:

    this.$todoList = qs('.todo-list');
This bit will only work if there's only one `.todo-list` on the page and it'll break if you try to use more than one component.

    const listItem = target.parentElement.parentElement;
This will break the second you change the template and have to add a wrapper for any reason.

    input.value = target.innerText;
Using the DOM to store your data is just wrong. Browsers can and will change the contents of your nodes if they feel like it. They can strip whitespace, for example. This would be a problem if you're trying to use some kind of change detection.

    this.$todoList.innerHTML = this.template.itemList(items);
Better hope that whoever wrote itemList knows to escape the data properly. Have you noticed how XSS used to be a much more common issue until the advent of view frameworks? Experience has taught us that at least for security you simply cannot depend on the developers to always do the right thing, especially since their experience level might be all over the place.

    const elem = qs(`[data-id="${id}"]`);
    
    if (elem) {
        this.$todoList.removeChild(elem);
    }
Global selector again, it also depends on the elem being the child of $todoList which is not guaranteed if there's any change to the template.

    listItem.className = completed ? 'completed' : '';
This will clear any other classes that the element might have on it. You might add some styling to it and then wonder why it breaks after you click a button.

    target.dataset.iscanceled = true;
Storing data on elements is also risky, since there's a lot of other code that updates innerHTML wholesale so your element might not be there anymore.

This is just one file. The point I'm trying to make is that this direct DOM manipulation makes for brittle code and only works for very simple cases. Once you have even a small amount of behavior, any sane developer will want to abstract it out a bit and you'll end up with a framework anyway. Better to choose a low overhead one from the beginning.
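For illustration, here's the kind of escaping a view framework gives you for free. This is my own sketch, not the todomvc code under discussion:

```javascript
// Minimal HTML-escaping helper (illustrative sketch, not the todomvc code).
// A view framework applies this to every interpolated value by default.
function escapeHtml(value) {
  return String(value)
    .replace(/&/g, '&amp;')   // must run first, or it double-escapes
    .replace(/</g, '&lt;')
    .replace(/>/g, '&gt;')
    .replace(/"/g, '&quot;')
    .replace(/'/g, '&#39;');
}

// A template helper that escapes everything it interpolates:
const itemList = (items) =>
  items
    .map((i) => `<li data-id="${escapeHtml(i.id)}">${escapeHtml(i.title)}</li>`)
    .join('');

console.log(itemList([{ id: 1, title: '<script>alert(1)</script>' }]));
```

The point isn't that this is hard to write; it's that every hand-rolled template function has to remember to call it, every time, which is exactly where hand-written DOM code slips up.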


Grouping by feature-set complexity is going to be a subjective exercise as well.

Random sampling would be better.


The article sticks to performance expenses upon the end user where it has the data.

I am more curious how much frameworks cost at development time. Framework advocates always claim their pet framework makes development so much faster and cheaper, but there is no data on that. That raises a few questions:

Do frameworks reduce hours of labor to build an application? If so how much?

Do frameworks lower the maintenance expense of a given application? If so how much? That number should include any sunset or revisions due to changes in the framework not associated to changes in the business requirements.

Do frameworks reduce the team size or business support staff to ship an application? If so how much?

Do frameworks lower developer hiring expenses and wages? If so how much?


There are almost no good studies on any question of software engineering. We all go our entire days based on feelings and anecdotes, it's not just about Javascript frameworks.

It would be good to change this, yet, software engineering is a very hard field to study.


Too hard to know is not a business justification. Imagine if you floated that argument as a lawyer, doctor, engineer, or even a truck driver. Your license would be revoked.


Hum... Are you candidating yourself?


After having worked in other professions where ethics are such an early foundational principle it is rather interesting to see people so hopelessly lost and foreign to the concept.


So, if I'm reading it right, no you are not candidating yourself, and you think not candidating oneself is an ethical failure?

By the way, that argument that something being difficult is not a business justification is flawed on every level.


If vanilla HTML/CSS/JS is "Assembly", and heavy frameworks like React/Angular are "Java", what's the web app equivalent of C (light abstraction and close to the metal)? Svelte?


I like your analogy, but I think it's a little off. HTML/CSS/JS are "Java" and React/Angular are "Spring."

I love writing vanilla js, but the problem that the frameworks are really solving for is stripping away the history of HTML/CSS/JS for a 'document oriented web' and present a "re-abstraction" for an crossplatform 'application oriented web' that just happens to live in the most dominant VM platform (the browser).

In the analogy, Assembly is WebAssembly, and C is C. We just don't have the frameworks and libraries written for them yet!

--- Edit: ...but yeah you're right it's Svelte. Although I hear that Angular also compiles down to something pretty "close to the metal" these days.



Well with wasm, this analogy doesn't really make much sense.


wasm doesn’t have the ability to access the DOM, which is the majority of the purpose of JS in the browser.


It is a very very loose analogy ;)


Not quite in my opinion as a Web and Java developer. Java is a high performance, beautiful language. React/Angular/Vue are more like Java Server Side frameworks such as Spring etc which are Overly Complicated and Bulky.


I think I failed to emphasize the "If". :) I wasn't offering a comparison, so much as posing a framing device in order to ask a question: if minimal abstraction is 1, and highly abstracted is 10, what's the best-practice/modern web equivalent of 2-3?


I have a Moto G5 Plus from 2017. It does almost everything I need, and performs perfectly fine doing those things: making phone calls, checking my calendar, moving files, and sending text messages. While performance in these areas has mainly remained static, one area where my phone has gotten progressively worse over time is web browsing. Sites that I used to visit often are now bloated, resource-intensive webapps that overheat my phone and crash Firefox Focus.

I know it's unrealistic to impose strict dependency criteria, framework choices, and bundle-size limits on the job. But if you're the one in charge, even if it's in your hobby project, please consider your consumers who may not have the latest and greatest hardware. I find it absurd that a phone that's not even 3 years old struggles to run most popular sites.


I created a framework a year ago and have been battle testing it since. It uses no virtual dom, but still has state management.

I just wanted to say that, because I've been holding it in so long.


Seriously. I'm kind of excited about it. I definitely discovered something amazing here... I can do more than React, Vue, and Angular combined with a very tiny "framework". It doesn't have a virtual dom, so it's not really a framework, it just makes it quicker and easier to make auto-updating HTML Elements. You could mix it with anything else that uses HTML Elements no problem. Also, it leverages native code for everything, doing as little as possible in JS.

The real trick was something I call Source Derivation (SDx). I'm certain it will be a language level feature 10 years from now.

I'd like to release it soon, but I'm afraid of doing it alone. It's depressing to put a project out there, then have to be alone as nobody cares.


Technically, the project is alone and nobody cares right now (obviously due to it being anonymous). So the only way is up!


Lol. Thanks.

I released some solo albums a long time ago and learned the hard way that sometimes it's better to just keep your passion projects to yourself.


where can I read more about SDx? never heard of that term before


EDIT: I made a really simple demo of SDx. It's missing all of the features for nicer syntax, better performance (derivation batching), and garbage collection features (avoiding direct references). You also can't run derivers inside of derivers. But it gets the idea across.

https://codepen.io/SephReed/pen/gOaeQLv?editors=1010

--------

I made it up, but MobX has some rudimentary implementations of it:

https://mobx.js.org/README.html
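The gist, stripped down to almost nothing (my own simplified sketch of the idea, not the actual SDx code), is a derived value that recomputes whenever one of its sources changes, much like a MobX computed value:

```javascript
// Simplified sketch of "source derivation" (illustration, not the real SDx):
// sources hold state and notify watchers; a derived value re-runs its
// function whenever any of its sources changes.
function source(initial) {
  let value = initial;
  const watchers = new Set();
  return {
    get: () => value,
    set(v) { value = v; watchers.forEach((fn) => fn()); },
    watch: (fn) => watchers.add(fn),
  };
}

function derive(sources, fn) {
  let value = fn(); // compute once up front
  sources.forEach((s) => s.watch(() => { value = fn(); })); // recompute on change
  return { get: () => value };
}

const price = source(10);
const qty = source(3);
const total = derive([price, qty], () => price.get() * qty.get());
qty.set(5); // total recomputes automatically
console.log(total.get()); // 50
```

The real implementations add dependency auto-tracking, batching, and cleanup, but this is the core shape.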


Thanks, that's a pretty cool demo with almost no code. Seems like a lightweight observable as you say.


At least with jQuery, I could at least understand what you were getting for all that performance cost. I haven't had a chance to work with React or Vue, but I did spend a lot of time working with Angular and I was always left scratching my head as to what benefit it was supposed to provide beyond making my pages take longer to develop, download and run.


Ironically, the standard and nicer API that was jQuery’s value proposition has mostly already arrived in browsers.


Yup, I can no longer recommend jquery at all unless you happen to need to support IE10 and below (and please, honestly, for the love of god, don't bother).


Nah. Nowadays JQuery is a simple and small library with high-level abstractions and convenient synonyms for the standard in browser functionality.

Unless you need IE support. If so you need a different version that is quite large.


Eh... I still wouldn't recommend adding it to new projects. Why add a new framework that's mostly just a synonym for the standard browser functionality?

I'm certainly not going to suggest you stop using it if you've already pulled it in (like you said, it's pretty tiny these days), but most times I just don't think it's worth it. More so if you're not already familiar with it.


AngularJS gives you something no other framework has: two-way binding. Change a variable in the frontend and the backend changes; change it in the backend and the frontend changes.


That seems a bit misleading. Two way data binding simply means your changes are propagated from UI -> Model and Model -> UI

Also vue, backbone and many others provide two way data binding. React is the cool pure kid.

Some people might think of meteor js from your description.


I hope no one thinks of meteor js.

I should have been more exact.

AngularJS uses two-way binding between scopes, while Vue enforces a one-way data flow between components. This makes the flow of data easier to reason about in non-trivial applications.
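To make the distinction concrete, here's a framework-free sketch (my own illustration, not Angular or Vue internals). Model -> view happens via subscriptions; the two-way part is that view event handlers write straight back into the model, which is exactly what one-way frameworks forbid (there, the view only emits an event and the state owner applies the change):

```javascript
// Framework-free sketch of two-way binding (illustration only).
function bindable(initial) {
  let value = initial;
  const subscribers = [];
  return {
    get: () => value,
    set(v) { value = v; subscribers.forEach((fn) => fn(v)); }, // model -> view
    subscribe: (fn) => subscribers.push(fn),
  };
}

const name = bindable('Anna');
let viewText = '';
name.subscribe((v) => { viewText = `Hello, ${v}`; }); // the "view" side

name.set('Tim'); // a model change updates the view...
// ...and an input handler would call name.set() directly, closing the loop.
console.log(viewText); // "Hello, Tim"
```

In a one-way system, that last `name.set()` call would instead be an emitted event handled wherever the state lives, which is what makes the flow easier to trace in big apps.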


Why do you hope no one thinks of meteor? It's a fantastic framework.


Was gonna comment this as well.


Turbolinks is a decent compromise for many sites.


Honestly I found turbolinks to be an abomination, even the newer version.

It's "ok", but doesn't play nicely with any 3rd party anything.


I would like the same test done without ads (let's say behind a PiHole). In my experience that makes much more of a difference than the framework used


Am I reading this right? Of the frameworks studied, Vue consistently has the best numbers by far?


There are lies, damned lies and statistics.

The data is comparing frameworks along percentile, which is misleading *when you realise that performance is a function of total payload*, not relative percentage.

If the data of various frameworks was compared against total userland code & total size (user + framework), then we could have some meaningful data.

As it stands, I can say with confidence that most of the larger apps use Angular or React, because they are the most popular.

Considering that angular itself is heavier than React, angular apps will be heavier than React ones.

While smaller apps (or pages) will use Vue, because:

1. Vue is the only one of the top 3 which can scale down to a script tag. So, for a large class of web apps, Vue is the only option.

2. Vue (&vuex) is smaller in size. And does not require boilerplate like react and redux. So I would expect Vue apps to be generally smaller than alternatives.

Given these situations, I would be surprised if Vue apps (on both average, and relative percentile) do not perform better than Angular & React


You can't really draw any conclusions from any of this because of selection bias.


Very interesting report of something that everyone is experiencing without being able to quantify it.

A big issue with big site operators and companies is that everyone is looking at their own site's consumption and performance, but looking at it alone, on an otherwise empty computer. The real issue is the impact of their single page on a computer that is already loaded with a few applications and tabs. Suddenly, a website using 5% of CPU and 250 MB of memory starts to have a very big impact on the user.


No idea if the author will read this, but it would be nice to break out sites that have none of the frameworks. It's a tiny fraction of them, but still could be interesting.



Honestly, it's kind of a confusing footnote. It provides a subset of the stats, while sorta-but-maybe-not arguing that they're not relevant?


I'm loving React at the moment, but I'm also learning Svelte for this very reason too.


> JavaScript Bytes Served to Desktop Devices, by Percentile

Is this gzipped or uncompressed bytes?

Article doesn't specify, but based on the word "Served", I'd assume gzipped? As in bytes over the wire? Would love clarification.


Bytes over the wire (I can clarify in the post too).

So _most_ of that weight is gonna have gzip or brotli applied. (Last I looked, around 17% of JS requests recorded by HTTP Archive are uncompressed.)


Isn't the relative comparison the key point here? Absolutes don't mean much.


Would be interesting to see this normalized by say DOM elements per page. Also Svelte and other lean bundle approaches should be included.


What about the browser cache? Doesn't it solve this problem to a large extent?


On the bright side, the performance tax we had to pay before the current generation of frameworks was often the "full page reload" tax, needing to parse megabytes of CSS, JS for almost every user interaction.


Aren't those usually cached? Page reloads these days are fast.


They said parse which is true regardless of caching


The JS parsing would be less.


Yeah, parsing of megabytes of CSS and JS is not necessary on each page load, but you still need a round trip to a server on each interaction, and a full page of HTML, even if only a small part of it has changed.


And then the Phoenix (Elixir) framework introduces LiveView in 2020...

If you haven't already figured out, everything old is new again. This was an old pattern when someone taught it to me at my first job. It felt like it wasn't happening for a few years but only because everything was new to me. Then I started seeing people win arguments arguing for the things that we had just finished replacing, and the most I've ever seen since is brief respites as the pendulum approaches the apex on each swing.


So is the round-trip to the server going to be noticeably slower to the user than an SPA on average?


This is a very good point. Ironically, SPAs feel slower to me than most server rendered apps.


SPAs have to do quite a bit before they're ready to start doing what the SSR page was doing as soon as the last byte landed on the client.

Old-school AJAX shipped HTML to be dumped straight into the document. I suspect that was much faster than the modern approach of sending, typically, JSON, which then requires deserialization, a bunch of transforming, memory allocation, shifting data around, template application, etc. before any DOM changes are handed over to the browser engine, at which point you've just finally caught up with the approach of shoving HTML into the DOM directly.
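The two styles side by side, with the DOM insertion stubbed out as a plain object so the sketch is self-contained (my own illustration):

```javascript
// Side-by-side sketch of the two response-handling styles (illustration only).
// DOM insertion is stubbed with a plain object so this runs anywhere.
const view = { html: '' };

// Old-school AJAX: the server already rendered HTML; the client drops it in.
function applyHtmlResponse(body) {
  view.html = body;
}

// SPA style: deserialize JSON, transform, run a template, then render --
// extra work on the client before the browser sees any markup.
function applyJsonResponse(body) {
  const items = JSON.parse(body);
  view.html = `<ul>${items.map((i) => `<li>${i.title}</li>`).join('')}</ul>`;
}

applyHtmlResponse('<ul><li>Hello</li></ul>');
console.log(view.html);
applyJsonResponse('[{"title":"Hello"}]');
console.log(view.html); // same markup, reached the long way around
```

Same result either way; the JSON path just pays for parsing and templating on every interaction, on the user's CPU.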


You could do some Ajax request with jQuery and update the page.


Meanwhile, "basic HTML" gmail reloads its full page while the AJAXy versions are still showing a loading state for whatever you clicked on.


With Static Site Generation, you can use React to generate plain HTML/CSS/JS that you then deploy as your actual website. Seriously, this tech is very very fast. Just check out how fast these websites are:

https://spectrum.chat https://vercel.com https://gatsbyjs.org



