
I'm not sure whether to find it amusing or concerning that 20MB now qualifies as a reasonable size for an email client. My first PC that ran Linux had only 8MB of RAM and an 850MB disk, and it felt like a lot at the time.


I was thinking the same thing. What happened? I started programming on machines with 16K of memory or less. It was a point of pride for developers to make programs that did the most with the least; the whole demoscene movement grew out of that spirit. It's as if the contest changed from who can write the leanest code to who can get away with packing the most bloat into one app without breaking the device. We went from shaving off individual machine instructions to bundling entire browsers with super-simple apps.


What happened is that hardware got orders of magnitude more powerful, user expectations for performance stayed the same, and developers got lazy.


I don’t think that’s entirely it. All of our programs got significantly more powerful at the same time. Your email program, for example, can handle email in hundreds of encodings plus several varieties of Unicode. Properly handling Unicode text requires detailed information about each character. The Unicode data tables alone are multiple megabytes, though the exact size varies depending on which properties you need. Figure on at least a megabyte just to handle bidirectional text rendering, and add a bit more so that you can distinguish between word characters and punctuation well enough that double-clicking can reliably select a whole word. And then there are emoji!
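
Just to make the double-click point concrete, here is a rough sketch in C of the kind of classification a word-selection routine needs. It leans on the standard <wctype.h> functions and the user's locale, which is only an approximation; a real editor consults the Unicode word-break properties (UAX #29) shipped in those multi-megabyte tables.

    #include <locale.h>
    #include <wchar.h>
    #include <wctype.h>

    /* Rough check: does this character belong to the "word" a double-click
       should select? iswalnum() is a locale-dependent stand-in for the real
       Unicode word-break rules. */
    static int is_word_char(wchar_t c) {
        return iswalnum((wint_t)c) || c == L'_';
    }

    int main(void) {
        setlocale(LC_ALL, "");  /* pick up the user's (ideally UTF-8) locale */
        const wchar_t *text = L"naïve word-break test!";
        for (const wchar_t *p = text; *p; p++)
            wprintf(L"%lc -> %s\n", (wint_t)*p, is_word_char(*p) ? "word" : "other");
        return 0;
    }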

Most people just see text, and not the complexity that supporting all of that text requires. Most computers from 20 years ago could barely handle it, and you can forget about doing it on the computers from 40 years ago.

There are plenty of other features that have invisible complexity too.

The article we’re all discussing talks about the symbols as if they are completely unnecessary, but that may not entirely be true. They are certainly unnecessary to _run_ the app, but it is probable that all of the error reporting and logging done by the app uses those symbols to explain where the errors and log messages came from. This makes it possible for the developers to actually fix problems. Granted, they could strip those symbols from the distributed app while still keeping them available to developers, but unfortunately that’s easier said than done.
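
For what it's worth, the mechanics of the split are straightforward, at least on Linux with the GNU toolchain (this is a sketch of that one case; other platforms differ). The hard part is keeping the detached symbols organized and matched to the exact build you shipped.

    # keep a full copy of the debug info for the developers...
    objcopy --only-keep-debug myapp myapp.debug
    # ...strip it out of the binary that actually ships...
    strip --strip-debug --strip-unneeded myapp
    # ...and leave a link so a debugger can find the detached symbols later
    objcopy --add-gnu-debuglink=myapp.debug myapp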

One of the few ways that developing on Windows is better than on other platforms is that MSVC makes it very, very easy to build a symbol server that collects all of the symbols from all of the applications you have released. If you get a minidump from one of your programs crashing, the debugger will automatically load the correct symbols from the symbol server, as well as the correct version of the source code. It will even download symbols for Windows itself, to make those dump files as easy to debug as possible. Linux is only very gradually gaining similar features, and I have no idea about Android or iOS.


Developers have always been lazy. I think some of the best developers are the laziest developers, the ones who are motivated enough to find innovative ways to do less work.

When I was programming for DOS, if I wanted to write to the screen in color I had a few options. I could use Borland's conio.h, I could invoke INT 21h and rely on ANSI.SYS to render my color, I could use INT 10h to write a character at a time in the color I want, or I could write directly to screen memory. Writing directly was really the only way to get the performance I needed, but it added complexity, since I needed knowledge of the hardware I was running on (not all video adapters map character cells to segment B800h).
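
For anyone who never had the pleasure, the direct-write version looked roughly like this (Borland/Turbo C-style code with far pointers and the MK_FP macro from <dos.h>, so it only makes sense on DOS-era compilers, and it assumes an 80-column color text mode):

    #include <dos.h>

    /* Color text mode lives at segment B800h; each cell is two bytes:
       the character, then an attribute byte (colors). */
    void put_char_color(int row, int col, char ch, unsigned char attr) {
        unsigned char far *cell =
            (unsigned char far *) MK_FP(0xB800, (row * 80 + col) * 2);
        cell[0] = ch;
        cell[1] = attr;
    }

    int main(void) {
        put_char_color(0, 0, 'A', 0x1E);  /* bright yellow on blue, top-left */
        return 0;
    }

Fast, but you had to know your hardware; a monochrome adapter expects segment B000h instead.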

Today, for a similar "text mode" application, I can use ncurses, or I can write CSI codes directly to the terminal. I can write in C or Python or even JavaScript. There are more moving parts, but it would be less work to dust off my old 386SX than it would be to get those moving parts out of the way.
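
And the "write CSI codes directly" option really is just a couple of escape sequences these days; a minimal sketch in C, assuming an ANSI/VT100-compatible terminal:

    #include <stdio.h>

    int main(void) {
        /* CSI sequences: ESC [ 1;31 m = bold red text, ESC [ 0 m = reset */
        printf("\x1b[1;31mhello in color\x1b[0m\n");
        /* ESC [ row;col H moves the cursor before printing */
        printf("\x1b[5;10Hpositioned text\n");
        return 0;
    }

No ANSI.SYS, no BIOS interrupts, no poking at segment B800h.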

No matter what, I'm always going to do the minimal amount of work I need to get the desired result. If someone comes up with a way to do less work and get better performance, that's the ticket to high speed.


It's less about the minimal amount of work and more about what you can get away with. You now tolerate less performant and lazier solutions because hardware is so much faster, and you can get away with putting them in front of your users. For example, Microsoft Excel feels no faster, and doesn't really do anything more, than the version I was using in 1995, which shipped on CD-ROM or even floppies; today it is a couple-of-GB over-the-air download that still takes time to open on modern hardware. Microsoft gets away with this because users expect the same features as 30 years ago and also expect Excel to remain as slow as it felt 30 years ago.

You could be lazy back in the day, but you still had to ensure your program was small enough to actually ship on physical media and could actually run on hardware several orders of magnitude slower than what developers expect users to be running today. So today you can get away with being lazier, less optimized, and less performant than the constraints of decades ago would ever have allowed, and still have a job shipping 'functional' software.



