Brilliant - I've long observed that almost everything we are doing with our GHz computers, we were doing previously with old MHz systems. We have more pixels (but use them for the same result), more memory (but everything has grown to fill it), more bandwidth (but websites packed with junk bytes).
The only "new" thing enabled by our bigger systems is "big data", which is largely a process of finding patterns that will fool the user/purchaser/customer.
> I've long observed that almost everything we are doing with
> our GHz computers, we were doing previously with old MHz
> systems.
I do also, but then I have to pinch myself and remember some of the cutting-edge software that really does make use of the hardware - e.g. games, video editing, neural networks, etc. I know we could already do these things on lesser machines, but there is no doubt that the level at which they are currently done could not be replicated on a lesser machine. And also remember the efficiency of systems such as web servers: today's average usage would have been a DoS attack in the past.
I do look in disappointment, though, at text editors, window managers, file viewers, etc. - despite having much more computing power, they offer few additional features yet still eat tonnes of resources.
I'm currently (slowly) working on a Window Manager for X11 which tries to bring back some of these ideas but for modern devices: https://github.com/danielbarry/oakwm/ It's built on top of wm2 (which itself has roots going back to Bell Labs' Plan 9). A lot of the work is in ripping out the unnecessary features and making it touch-friendly. The idea is to run it on the PineTab Linux tablet.
I'd like a browser that is just a nice browser. It should fit easily into a few tens of MB and start instantly. It wouldn't waste hundreds of MB on implementing dangerous/pointless/wasteful JavaScript APIs, like battery state or physical screen APIs; it wouldn't support (literally) 4300 options; wouldn't save 200 MB of config data; and wouldn't load many MB of data just to start and display a blank page!
Dillo/NetSurf may be of interest to you. They fit the size requirement as well as the "lack of JS" requirement (i.e. no JS support at all). "appsites" will definitely not work in them, but the document-oriented web and things like HN are fine.
I've used Dillo, but unfortunately most web pages are seriously malformed. It would be nice if it were possible to get some better CSS support.
I would like to see better JS support, but the scope of JS is simply insane for such a small browser. It's unfortunate that much of the web is completely unusable without running JS. Perhaps it's possible to first run the page through a larger browser engine and then send the processed content to the small browser (such as Dillo); that would massively widen the scope of what it could display.
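One way that pre-processing idea might look in practice (a sketch only - it assumes a `chromium` binary on the PATH, and the output path is arbitrary) is to let a headless full engine execute the page's JavaScript and dump the resulting DOM as static HTML:

```shell
# Let a full engine load the page and execute its JavaScript,
# then dump the resulting post-JS DOM as static HTML.
chromium --headless --dump-dom 'https://example.com/' > /tmp/page.html

# Hand the now-static document to a lightweight browser.
dillo /tmp/page.html
```

A proxy built around this would pay the JS cost once, server-side, while the small browser only ever sees plain documents.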
For the record, Chrome on Android is ~60MB, less than half the size of desktop Chrome (~150MB). Android Chrome supports the same set of JavaScript APIs. Clearly that isn't the whole picture.
Actually, you can select which Chrome to use for the WebView in Android developer settings if you have installed multiple versions of Chrome. Chrome stands on its own.
So Chrome for Android is 10MB larger than the entirety of Damn Small Linux, which includes a desktop environment with three web browsers (Firefox, Dillo and Netrik).
While 60MB is still humongous, it does make sense that Google would at least care slightly on Android, as the majority of the Android market are low-to-mid-tier smartphones with reasonable performance instead of the "desktop-in-a-pocket" that are the current flagship phones.
But while they care slightly on Android, they don't care at all on desktops.
Depends largely on the state of code caches, microcode and bus structure. Loop unrolling can be the worst thing you can do, on some architectures.
See, the hardware folks have listened in on the compiler people and their problems. They've done things like identifying loops and rewriting them in microcode for optimization. If a short loop fits entirely within the CPU's loop buffer, speed goes way up. Unroll the loop and blow past the buffer's capacity, and you defeat the optimization and lose all that.
What you want isn't possible with the web being like MTV and moving far beyond its original intention. The best you can do is get one of the small browsers and live with broken pages. Some people do just that. I think you should think in terms of relativity though. Think about how much a "10 MB" only browser took up in main memory back in the day.
> I think you should think in terms of relativity though.
> Think about how much a "10 MB" only browser took up in main
> memory back in the day.
This is exactly the point, software has swelled to use the resources available to it, so with each new iteration your machine gets faster but what it runs gets slower. It doesn't feel like this is something we should settle for.
Well that and high quality video. One and only one tangible benefit of this new web is better quality video. Everything else sucks as much or worse than in the days of Altavista and punch the monkey ads.
But apart from the HD video, the search, the aqueducts, the roads, the education, the browsers that don't crash all the time, and the wine... what have the Romans ever done for us?
I disagree. Searching for keywords is practically impossible nowadays, with all the smart AI-powered™ search engines that unhelpfully return results that are not relevant to your query. And SEO spam has killed the niche, high signal-to-noise handcrafted websites in favour of content aggregators.
Over the last 2-3 years, I have seen a dramatic deterioration in search.
Whenever I am looking for anything that is not very mainstream, I get pages and pages of irrelevant results / SEO-hacking sites.
Remember "popup spam" and then how enough windows would open that none would respond?
I'm guessing anyone who didn't get browser crashes in the 90s was only browsing a couple of sites or something. Certainly anything before IE5.5 was crash city - in particular Netscape 2.0!
Chrome still crashes all the time for me - in fact, it seems to be crashing more than it used to. It's just that now the crash is isolated to a particular tab.
I've been a developer for a while, starting from QBasic, then Pascal, Delphi, C/C++, Flash, Qt, etc. happily switching from one technology to the next when it makes sense.
Today I use Electron and React Native, which needless to say are not very popular, but for sure I could never have developed the kind of cross-platform software I write today with the technologies I used many years ago. Partly for lack of skills, but also of time (cross-platform development was way more difficult), or simply because computers back then were not powerful enough.
I don't have any special nostalgia for old technologies; some of it was good, some not so much. And as OP is showing, you're still free to use old software if you don't like what's being done today.
MHz level stuff works for 'work' that normal people do - writing documents, emailing, etc. I dreamed for a while about creating a MHz-level processor from the ground up with an open hardware specification and creating a simple kernel + OS to run on top of it, with new secure protocols for networking (websites, not reinventing TCP/IP) and communications (email). Right now every single computer is backdoored by the governments of the most powerful countries: we have Intel MEs and AMD PSPs in the hardware, and the frameworks everything uses - Linux, OpenSSL, etc. - are almost all huge complex codebases that the NSA has tons of zero days for. If you could get open MHz-level hardware and get OpenBSD or something to run on it, and develop applications with a focus on secure code, it would be pretty great.
But ultimately this would all fail. Aside from the network effects - nobody's going to use it - the fast processors and tons of memory are necessary. We just don't think about it unless we're running out of it.
- high resolution photo and video - photo editing programs like GIMP and Darktable can spend some time processing photos; these days, even with complex effects, you rarely experience lag of more than a few hundred ms on many-megapixel photos, because our hardware is fast. Same for video: the memory, storage space and bandwidth are hard requirements, and going back to 360p is not really acceptable.
- high resolution monitors - no point in having great-looking 4K 10-bit color video without a 4K monitor, and now you're stuck having to push 20 Gbps through your DisplayPort cable on a 500 MHz processor. And text also looks much better at 4K. You could say this is a bit unnecessary, but compositing window managers are pretty great and I would say a core requirement of modern GUIs, and having eye candy like wobbly windows, window shadows (those are actually very helpful, try turning them off), etc. is expensive. My Dell XPS laptop from 2017 couldn't handle wobbly windows without visible stutter at 4K@60Hz; my desktop with a $400 GPU from 2019 can keep up at 4K@144Hz.
- new technologies - VR/AR, fast voice recognition and neural networks - this is all cutting edge stuff but the use cases are obvious and they have started to be applied more commonly. Also the failures like eye tracking, Kinect - they may have failed commercially but they were good ideas and a valid use of fast computers. Also online meetings with many participants, each with their own video streams that require decompression.
- obviously, video games - not that you can't have fun games with shitty graphics, but good graphics are nice.
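The display-bandwidth figure above is easy to sanity-check with back-of-envelope arithmetic. A sketch for 4K, 10-bit color at 60 Hz (active pixels only; real links add blanking intervals and line-code overhead on top, which is how you end up near the ~20 Gbps mark):

```python
# Back-of-envelope uncompressed video bandwidth for 4K, 10-bit, 60 Hz.
width, height = 3840, 2160
bits_per_pixel = 3 * 10          # RGB, 10 bits per channel
fps = 60

bandwidth_bps = width * height * bits_per_pixel * fps
print(f"{bandwidth_bps / 1e9:.1f} Gbit/s")  # ~14.9 Gbit/s before overhead
```

None of that can be usefully compressed away on the wire without dedicated silicon at both ends, which is the point: the pixels alone saturate what a MHz-era machine could move in total.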
I thought of a few more but can't remember them at the moment. Oh, also, Bluetooth audio - 192kbps audio, but you need to decode it, since it takes compression to get it to that level, and then you have the additional overhead of sending it over a digital protocol instead of just having your DAC do the work. You would need expansion cards or a dedicated audio core in your CPU to accomplish this if your computer weren't fast.
And of course, science needs big data and big hardware to process all of that data.
If you're only working with text then slow computers are fine, but as soon as media gets involved they are not practical.
Don't get me wrong - I like hi-res images, and Bluetooth (when it works), and I'm still amazed that my phone has more grunt than a Cray.
But the stuff we do day-to-day (excepting graphics) is appallingly inefficient. And most of the look-at-this stuff (speech recognition, and you mention VR and eye-tracking) is not actually used for anything day-to-day; we don't routinely talk to our computers ("Hello Computer" https://youtu.be/v9kTVZiJ3Uc?t=10 ).
That's perhaps because doing so in an office or cafe environment isn't great.
Kids are using speech-to-text more while doing their homework. Word has been beefing up its transcription feature for a while now[1].
I got to see this happen during lockdown, where teachers I know recommended transcription for younger kids who don't have touch typing skills and were still expected to turn in work on a computer. Talking to your computer is much more natural while doing your homework in your room.
When I was a young person with executive function issues (late 90s/early 2000s) voice to text was a lifesaver for my ability to write coherently. I found it much more natural (even then) to "write" by speaking my mind to a computer than to struggle to focus while typing slowly. It wasn't until I was older and using IM heavily as a social outlet (which was much more in the "bursty" configuration that works well for my brain) that my typing ability caught up with my pace of dictation.
Saddest part of speech recognition is with all the gigahertz and storage we have, we send it out to the internet to be processed by something else then get the result.
> Right now every single computer is backdoored by the governments of the most powerful countries: we have Intel MEs and AMD PSPs in the hardware, and the frameworks everything uses - Linux, OpenSSL, etc. - are almost all huge complex codebases that the NSA has tons of zero days for.
You're giving the gubment too much credit. It's easy to throw up your hands and say "aww geez, I guess everything is backdoored anyway, why bother?" -- this is exactly what they want. The truth is a lot more complex, and as a lot of leaks have revealed, their capabilities are far from the supernatural omnipotence you seem to be implying.
Not perceived: actual, effective latency is higher. Today's hardware is capable of better latencies, but most software developers don't care about latency all the way from firmware to OS to apps to websites. It does matter to some degree in games and vehicles, but it's hard to find an industry that values latency outside of toys or industrial applications. Practitioners say this is because "99% of users don't care", but that's only because 99% of users have either (1) never experienced low latencies or (2) have forgotten them because latencies have gradually worsened over the last three decades.
All large systems of systems have "duct tape", I'd go as far as to say it's an emergent property of systems of systems.
It comes about because unless you design every sub-component system in lockstep with every other (which is impossible - the Romans didn't lay out London's streets for cars, the Victorians didn't put in sewers with respect to where we'd want to run fiber), you end up with an impedance mismatch at the boundaries.
It's why back-hoe operators from a gas network rip up fiber depressingly often, why the UK transport system has choke points between road, rail and air, and on and on.
Modern computers aren't a unified system; they are lots of separate systems that talk to each other, and frankly, having some (minimal) understanding of what has to happen for GNOME to appear on my screen when I press my power button, I'm amazed that it ever works, never mind mostly without fuss.
Exactly. Even if you do care, you can only exert so much influence on the platform you're developing for, unless you're working for a company like Apple or Sony. Even then, there are many, many layers to the latency problem.
Our systems are doing so much now, and all that "extra" lets us do numerous things at once without blinking an eye, and it allows newcomers to step up to a computer and almost immediately start using it after some exploring - not clicking something and waiting 20-30 seconds to see what the program is, or staring at a blinking green cursor wondering what to do next.
> Sure you might be able to do some of those things at a very low resolution on a Mhz system, but not in any meaningful way.
> You must be really young.
Maybe they are, maybe they aren't, but that's an unfounded assumption. There are plenty of middle-aged people who haven't got a solid idea of the limits of pre-Ghz-scale hardware for use cases that weren't all that mainstream back then, and I'm sure there will also be young people who've tinkered with some old Windows 98 box and could give us a pretty solid opinion.
At any rate, I've used a succession of Mhz-scale systems (starting with a 386 that struggled valiantly to run a copy of Windows 3.1 that someone had installed on it) and I'm definitely not very young.
> >* Graphics design
> We did
With rough and simple tools, limited to low-res designs, very limited brushes, super crude brush simulations (if any). Procreate on the iPad blows all of it out of the water effortlessly with its worst brushes using a finger for a stylus, and that realtime brush simulation has been taxing even for earlier iPads, which have always been orders of magnitude above MHz systems. I used Photoshop CS and CS2 a lot back in the day; they were resource hogs and still very crude compared to current entry-level apps. We've really gained a great deal in that department.
> >* Video editing
> Ditto
At what, 640x480, 15fps? I remember having annoying render times at such resolutions, maybe 800x600, for things that would hardly even count as filters nowadays, but I'm sure that ball-shaped Logitech webcam could do no more than that. Snapchat replaces whole faces in realtime with eerie accuracy at what must be much higher resolutions.
Color grading Full HD at 30fps? As if. I guess you couldn't even play that back without stutters unless you had dedicated silicon to begin with.
> >* Animation
> Ditto
Low-res, crude, and anything halfway detailed and interactive: Low-fps. Manipulating anatomy models like those at https://www.zygotebody.com/ in realtime? I doubt my Core 2 Duo laptop would have been up to that. I had similar software for Windows 98, and it was absolutely primitive and still taxing the CPU and GPU.
> >* 3D CAD
> Older than you think
With less precision, much, much lower model complexity, and much cruder tooling. A large number of components, complex shapes, joints, ... that's going to hit hard limits very quickly. I'm hitting hard limits with that sort of thing nowadays, but I'm hitting them somewhat later than even a few years ago.
> >* 3D Rendering
> 3D rendering was born here, and we were playing 3D games perfectly in 1996-2002
I don't play a lot, but between Gothic 1 and Witcher 3, graphics have improved by an incredible amount - it's night and day - and I can't even go to full details in Witcher due to my aging GPU. Technically, those systems could do it, sure – but only with super short visibility, very crude models and extremely limited shaders and animations, crude collision detection, ... Gothic required at least a 700 MHz Pentium III, so it's pretty representative, I think. Of course, those limitations work better for some games than for others, but they were still brutal limitations.
> >* Multitasking
> Linux, KDE3.
Also quite limited, though. Just what fits on a single Mhz-scale core. On my Windows 98 PC, I had to close CPU-intensive applications all the time because things would start to stutter; I believe Windows 95 would sometimes just bluescreen under such conditions. Things got a lot better on Windows 2000, but I think that may have been on my 1Ghz Athlon already. Those early Linux desktops were pretty unstable with lots of multitasking as well, I faintly remember lots of freezing. Things did slow down perceptibly at any rate when doing multiple resource-intensive things. Technically possible, sure – but it absolutely helps to have lots of fast cores.
Of course, a lot of the legwork to make those use cases perform is nowadays done by GPUs or huge amounts of RAM, and they profit greatly from multiple cores, but I'd say a MHz-scale system should have a period video card, too, and MB-scale RAM, and be single-core, otherwise it's kinda pointless. And under those conditions, all of the above things were technically possible, but really severely limited – still are, in some cases (CAD...), but wayyy less than back then.
Does that qualify as "not in any meaningful way"? I guess it depends. It was meaningful for me back then, and it made possible things that hadn't been possible before, and of course we were always content with what we had, not like those young'uns nowadays, and walked uphill to school, in the snow, both ways, every day – but looking back, the capabilities of my MHz computers feel incredibly crude and primitive by today's standards, and even a lower-end gaming PC has little problem running Autodesk Fusion 360 (for which there's even a free hobbyist license) with models of surprising complexity, and I'm sure that enables many many more things that wouldn't have been feasible on MHz hardware.
TBH, W98 was a joke at multitasking. You can't even compare NT4 or W2000 (not to mention Linux) with Windows 98, which struggled even on a Pentium 4.
On freezing, Mandrake was a joke, but Slackware and Debian were rock solid.