
For me the interesting alternate reality is where CPUs got stuck in the 200-400MHz range for speed, but somehow continued to become more efficient.

It's kind of the ideal combination in some ways. It's fast enough to competently run a nice desktop GUI, but not so fast that you can get overly fancy with it. Eventually you'd end up with OSes that look like highly refined versions of System 7.6/Mac OS 8 or Windows 2000, which sounds lovely.



I loved System 7 for its simplicity yet all of the potential it had for individual developers.

Hypercard was absolutely dope as an entry-level programming environment.


The Classic Mac OS model in general I think is the best that has been or ever will be in terms of sheer practical user power/control/customization thanks to its extension and control panel based architecture. Sure, it was a security nightmare, but there was practically nothing that couldn’t be achieved by installing some combination of third party extensions.

Even modern desktop Linux pales in comparison, because although it's technically possible to change anything imaginable about it, to do a lot of the things that extensions did you're looking at, at minimum, writing your own DE/compositor/etc., and at worst needing to tweak a whole stack of layers or wade through kernel code. Not really general user accessible.

Because extensions were capable of changing anything imaginable, often did so with tiny niche tweaks, and all targeted the same system, any moderately technically capable person could stack extensions (or conversely, disable system-provided ones which implemented a lot of stock functionality) and have a hyper-personalized system without ever writing a line of code or opening a terminal. It was beautiful, even if it was unstable.


I'm not too nostalgic for an OS that only had cooperative scheduling. I don't miss the days of Conflict Catcher, or having to order my extensions correctly. Illegal instruction? Program accessed a dangling pointer? A bomb message held up your whole computer and you had to restart (unless you had a non-stock debugger attached and could run ExitToShell, but no promises there).


It had major flaws for sure, but also some excellent concepts that I wish could've found a way to survive through to the modern day. Modern operating systems may be stable and secure, but they're also far more complex, inflexible, generic, and inaccessible and don't empower users to anywhere near the extent they could.


> unless you had a non-stock debugger attached and could run ExitToShell

You could also directly jump into the ExitToShell code in ROM (G 49F6D8, IIRC). Later versions of Minibug had an "es" command that more or less did the same thing (the direct jump always lands in the ROM code; "es" would, I think, jump to any patched versions).


> The Classic Mac OS model in general I think is the best that has been or ever will be in terms of sheer practical user power/control/customization

A point for discussion is whether image-based systems are the same kind of thing as OSes where system and applications are separate things, but if we include them, Smalltalk-80 is better in that regard. It doesn’t require you to reboot to install a new version of your patch (if you’re very careful, that’s sometimes possible in classic Mac OS, too, but it definitely is harder) and is/has an IDE that fully supports it.

Lisp systems and Self also have better support for it, I think.


Smalltalk missed the opportunity to incorporate more sophisticated versioning, including distributed versioning built on current state-of-the-art ideas.

Of course, modern Smalltalks or ST-inspired systems could still incorporate these ideas.


Perhaps there already was "more sophisticated versioning" for Smalltalk implementations, decades ago:

2001 "Mastering ENVY/Developer".

https://www.google.com/books/edition/Mastering_ENVY_Develope...


ENVY suffered from a problem that many other Smalltalk technologies suffered from: a conflict between a culture of proprietary zeal as a business model and the powerful network effects of adoption. VisualAge in general was plagued by this. I used to blame the successes of Microsoft and Apple for the pervasive push for lock-in and "integration" as a feature that defined the era so strongly.

On the one hand you had a technology that desperately needed adoption to build a culture and best-practices documentation, and on the other hand you had a short-term profit motive seriously getting in the way. So what had been completely cutting edge for decades eventually wasn't anymore - or the world moved in another direction and your once-revolutionary technology became an ill fit for it.

By the 2000s, with monotone and darcs but especially with the rise of Git, other standards for versioning had superseded what could have been. By the 2010s, Smalltalkers should have been wise enough to incorporate what is clearly a standard now; instead, a bunch of invented-here systems for versioning and repositories and hybrids developed in their place. And by incorporate I don't mean "let's make X for ST" but making it core to the implementation, so that the system itself is more easily understood and used - even if that means taking pieces of it away and using them elsewhere, which is actually a strength and not a weakness, contrary to some brand of 90s-era beliefs.

Generally speaking, to this very day it's regarded as cool and as a feature in ST world that something is ST-only, conveniently "integrated" into the system as tightly as possible and, implicitly but insidiously and glaringly, near-impossible to use elsewhere except maybe as a concept and laundered of its origin.


> Not really general user accessible.

Writing a MacOS classic extension wasn’t exactly easy. Debugging one could be a nightmare.

I’m not sure how GTK themes are done now, but they used to be very easy to make.


Right, but my point is that users didn’t have to write extensions because developers had already written one for just about any niche use one could think of.

And it wasn't just theming. Classic Mac OS extensions could do anything, from adding support for new hardware, to overhauling the text rendering system entirely, to giving dragged desktop icons gravity and inertia, to adding a taskbar or a dock. The sky was the limit, and having a single common target for all of those things (vs. being split between the kernel and a thousand layers/daemons/DEs/etc) meant that if it could be done, it probably had been.


You’d need to touch many different parts of the OS to write those extensions. The difference is that, on MacOS classic, there isn’t much of a boundary between user space and kernel space.

I've done a couple of MITM toys on Windows 3.x, and the trick is always exposing the same interface as the thing you want to replace; even if you only do something like inverting mouse movements on odd minutes, you just pass everything else down to the original module.
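
To make that concrete, here's a minimal sketch in C with an entirely hypothetical handler signature (not the real Windows 3.x interface): the shim keeps the same interface as the thing it replaces, tweaks the one case it cares about, and forwards everything else to the original.

    #include <time.h>

    /* Hypothetical signature for the original driver's mouse handler;
       illustrative only, not any real Windows 3.x API. */
    typedef void (*mouse_handler_t)(int dx, int dy, unsigned buttons);

    static mouse_handler_t original_handler; /* saved when the shim is installed */

    /* Drop-in replacement: same interface as the original module. */
    void shim_mouse_handler(int dx, int dy, unsigned buttons) {
        time_t now = time(NULL);
        struct tm *t = localtime(&now);
        if (t->tm_min % 2 == 1) {   /* invert movement on odd minutes */
            dx = -dx;
            dy = -dy;
        }
        original_handler(dx, dy, buttons); /* pass everything else down */
    }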


I sometimes drop my CPU down to the 400MHz-800MHz range. 400 is rough; 800, not so bad. It runs fine with something like i3 or sway.

If we really got stuck in the hundreds of MHz range, I guess we’d see many-core designs coming to consumers earlier. Could have been an interesting world.

Although, I think it would mostly be impossible. Or maybe we're in that universe already. If you are getting efficiency but not speed, you can always add parallelism. One form of parallelism is pipelining. We're at something like 20 pipeline stages nowadays, right? So in the ideal case, if we weren't able to parallelize in that dimension, we'd be at something like 6 GHz / 20 = 300 MHz. That's pretty hand-wavey, but maybe it's a fun framing.
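
If you want to play with that hand-wavey arithmetic, here it is as a runnable C snippet; the 6 GHz clock and 20-stage depth are assumed round numbers, not measurements:

    #include <stdio.h>

    int main(void) {
        double clock_ghz = 6.0; /* assumed pipelined clock */
        int stages = 20;        /* assumed pipeline depth */
        /* Collapsing the pipeline into one stage stretches the
           cycle time by roughly the stage count. */
        printf("~%.0f MHz unpipelined\n", clock_ghz * 1000.0 / stages);
        return 0;
    }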


The alternative reality I wish we could move to, across the universe, is the one where SGI were the first to build a titanium laptop and became the world's #1 Unix laptop vendor ..


I love the IRIX look, but they’d need to update it past the 1990s. It’d look very dated to current audiences.


NeXTSTEP looked pretty dated too, but it went through a nice evolution to bring it up to modern design standards .. if SGI had made that laptop and increased their marketshare, I'm pretty sure Irix would've gotten a face-lift.

Anyway, it's all about that alternative universe, where the success of the SGI tiBook has everyone running Irix in their pockets ..


When NeXT acquired Apple (for one Steve Jobs, getting $400 million as change), OPENSTEP was not dated - it still looked impressive next to Mac OS 9 and Windows. And CDE, of course, but that's a very low bar.


It didn't look as great as Irix did back then, though ..


No, but it had text anti-aliasing. That looked pretty neat, the one thing I wish SGI had done.

That, and porting their GUI to Linux.


Given enough power and space efficiency you would start putting multiple CPUs together for specialized tasks. Distributed computing could have looked different.


This is more or less what we have now. Even a very pedestrian laptop has 8 cores. If 10 years ago you wanted to develop software for today’s laptop, you’d get a 32-gigabyte 8-core machine with a high-end GPU. And a very fast RAID system to get close to an NVMe drive.

Computers have been "fast enough" for a very long time now. I recently retired a Mac not because it was too slow but because the OS is no longer getting security patches. While their CPUs haven't gotten twice as fast for single-threaded code every couple of years, cores have become more numerous, and extracting performance requires writing code that distributes functionality well across increasingly large core pools.
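
As a minimal sketch of what that distribution looks like in practice, here's a plain pthreads fan-out in C; the worker count and the toy summation workload are placeholder assumptions:

    #include <pthread.h>
    #include <stdio.h>

    #define N_WORKERS 8           /* e.g. the 8 cores of a pedestrian laptop */
    #define N_ITEMS   (1 << 20)

    static double data[N_ITEMS];
    static double partial[N_WORKERS];

    static void *worker(void *arg) {
        long id = (long)arg;
        long chunk = N_ITEMS / N_WORKERS;
        double sum = 0.0;
        for (long i = id * chunk; i < (id + 1) * chunk; i++)
            sum += data[i];
        partial[id] = sum;        /* no sharing: each worker owns one slot */
        return NULL;
    }

    int main(void) {
        pthread_t tid[N_WORKERS];
        for (long i = 0; i < N_ITEMS; i++)
            data[i] = 1.0;
        for (long i = 0; i < N_WORKERS; i++)
            pthread_create(&tid[i], NULL, worker, (void *)i);
        double total = 0.0;
        for (long i = 0; i < N_WORKERS; i++) {
            pthread_join(tid[i], NULL);
            total += partial[i];
        }
        printf("sum = %.0f\n", total);  /* prints 1048576 */
        return 0;
    }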


Half my Linux machines are Macs “retired” for exactly this reason.


This was the Amiga. Custom coprocessors for sound, video, etc.


The Commodore 64 and Ataris had intelligent peripherals. Commodore's drive knew about the filesystem and could stream the contents of a file to the computer without the computer ever becoming aware of where the files were on the disk. They could also copy data from one disk to another without the computer being involved.

Mainframes are also like that - while a PDP-11 would be interrupted every time a user at a terminal pressed a key, IBM systems offloaded that to the terminals, which kept one or more screens in memory and sent the data to another computer, a terminal controller, which would then, and only then, disturb the all-important mainframe with the mundane needs of its users.


Ya...IBM and CDC both had/have architectures that heavily distributed tasks to subprocessors of various sorts. That pretty much dates to the invention of large-scale computers.

You also have things like the IBM Cell processor from PS3 days: a PowerPC 'main' processor with 7 "Synergistic Processing Elements" that could be offloaded to. The SPEs were kinda like the current idea of 'big/small processors' a la ARM, except SPEs are way dumber and much harder to program.

Of course, specialized math, cryptographic and compression processors have been around forever. And you can even look at something like SCSI, where virtually all of the intelligence for working the drive was offloaded to the drive controller.

Lots of ways to implement this idea.


This is what the Mac effectively does now - background tasks run on low-power cores, keeping the fast ones free for the interactive tasks. More specialised ARM processors have 3 or more tiers, and often have cores with different ISAs (32 and 64 bit ones). Current PC architectures are already very distributed - your GPU, NIC/DPU, and NVMe SSD all run their own OSs internally, and most of the time don’t expose any programmability to the main OS. You could, for instance, offload filesystem logic or compression to the NVMe controller, freeing the main CPU from having to run it. Same could be done for a NIC - it could manage remote filesystem mounts and only expose a high-level file interface to the OS.

The downside would be we’d have to think about binary compatibility between different platforms from different vendors. Anyway, it’d be really interesting to see what we could do.
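
On Linux you can at least approximate that kind of placement by hand with CPU affinity. A minimal sketch, assuming cores 0-3 happen to be the low-power ones on a given part (which core IDs are "little" is entirely platform-specific; on macOS you'd instead tag work with a QoS class and let the kernel place it):

    #define _GNU_SOURCE
    #include <sched.h>
    #include <stdio.h>

    int main(void) {
        cpu_set_t set;
        CPU_ZERO(&set);
        /* Assumption: cores 0-3 are the efficiency cores on this part. */
        for (int cpu = 0; cpu < 4; cpu++)
            CPU_SET(cpu, &set);
        /* pid 0 = the calling process */
        if (sched_setaffinity(0, sizeof(set), &set) != 0) {
            perror("sched_setaffinity");
            return 1;
        }
        printf("background work now confined to cores 0-3\n");
        return 0;
    }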


The GameBoy Advance could run 2D games (and some 3D demos) on 2 AA batteries for 16 hours. I wonder if we could get something more efficient with modern tech? It seems research made things faster but more power hungry. We compensate with better batteries instead. I guess we can, and it's a design-goal problem; I also do love a backlit screen.


> It seems research made things faster but more power hungry

No, modern CPUs are far more power efficient for the same compute.

The primary power draw in a simple handheld console like that would be the screen and sound.

Putting an equivalent MCU on a modern process into that console would make the CPU power consumption so low as to be negligible.


As a consumer product example: e-ink readers. (Of course, it helps as well that the GameBoy had no radios etc...)


E-ink uses energy when changing state. A 30fps 3D game would require a lot of energy. Also, e-ink is electromechanical in nature, so there would be a lot of wear as well.


Yes; yet... I thought the efficiency per compute has more to do with the nm process shrinking the die than anything else. That, and power use is divided by so many more instructions per second.


My alternate-reality "one of these days" project is to have a RISC-V RV32E core on a small FPGA (or even emulated by a different SoC) that sits on a 40- or 64-pin DIP carrier board, ready to be plugged into a breadboard. You could create a Ben Eater-style small computer around this, with RAM, a UART, maybe something like the VERA board from the Commander X16...

It would probably need a decent memory controller, since it wouldn't be able to dedicate 32 pins to a data bus; loads and stores would need to be done either 8 or 16 bits at a time, depending on how many pins you want to use for that.
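
To make the narrow-bus idea concrete, here's a C sketch of a memory controller splitting 32-bit accesses into byte-wide cycles over an 8-bit external bus; the tiny RAM model and helper names are made up for illustration:

    #include <stdint.h>
    #include <stdio.h>

    /* Stand-in for the external 8-bit bus: a 64 KiB RAM model.
       On real hardware these would be pin-level bus cycles. */
    static uint8_t ram[65536];
    static uint8_t bus_read8(uint32_t addr)             { return ram[addr & 0xFFFF]; }
    static void    bus_write8(uint32_t addr, uint8_t v) { ram[addr & 0xFFFF] = v; }

    /* One 32-bit load from the core becomes four byte-wide bus
       cycles, lowest byte first (little-endian, as RISC-V is). */
    static uint32_t mem_read32(uint32_t addr) {
        uint32_t v = 0;
        for (int i = 0; i < 4; i++)
            v |= (uint32_t)bus_read8(addr + i) << (8 * i);
        return v;
    }

    static void mem_write32(uint32_t addr, uint32_t val) {
        for (int i = 0; i < 4; i++)
            bus_write8(addr + i, (uint8_t)(val >> (8 * i)));
    }

    int main(void) {
        mem_write32(0x100, 0xDEADBEEF);
        printf("0x%08X\n", mem_read32(0x100)); /* prints 0xDEADBEEF */
        return 0;
    }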


Have you thought about building a RISC-V “fantasy computer” core for the MiSTer FPGA platform? https://github.com/MiSTer-devel/Wiki_MiSTer/wiki

From a software-complexity standpoint, something like 64 MiB of RAM, possibly even 32 MiB for a single-tasking system, seems sufficient.

Projects such as PC/GEOS show that a full GUI OS written largely in assembly can live comfortably within just a few MiB: https://github.com/bluewaysw/pcgeos

At this point, re-targeting the stack to RISC-V is mostly an engineering effort rather than a research problem - small AI coding assistants could likely handle much of the porting work over a few months.


The really cool thing about RISC-V is that you can design your own core and get full access to a massive software ecosystem.

All you need is RV32I.


Or if 640k was not only all you'd ever need, it was all we'd ever get.


Ya, but that means no high-res GUI. And pretty annoying limits on data set size.


There's something to this. The 200-400MHz era was roughly where hardware capability and software ambition were in balance: the OS did what you asked, no more.

What killed that balance wasn't raw speed, it was cheap RAM. Once you could throw gigabytes at a problem, the incentive to write tight code disappeared. Electron exists because memory is effectively free. An alternate timeline where CPUs got efficient but RAM stayed expensive would be fascinating: you'd probably see something like Plan 9's philosophy win out, with tiny focused processes communicating over clean interfaces instead of monolithic apps loading entire browser engines to show a chat window.

The irony is that embedded and mobile development partially lives in that world. The best iOS and Android apps feel exactly like your description: refined, responsive, deliberate. The constraint forces good design.


> What killed that balance wasn't raw speed, it was cheap RAM. Once you could throw gigabytes at a problem, the incentive to write tight code disappeared. Electron exists because memory is effectively free.

I dunno if it was cheap RAM or just developer convenience. In one of my recent comments on HN (https://news.ycombinator.com/item?id=46986999) I pointed out the performance difference on my 2001 desktop between an `ls` program written in Java at the time and the one that came with the distro.

Had processor speeds not increased at that time, Java would have been relegated to history, along with a lot of other languages that became mainstream and popular (Ruby, C#, Python)[1]. There was simply no way that companies would continue spending 6-8 times more on hardware for a specific workload.

C++ would have been the enterprise language solution (a new sort of hell!), and languages like Go (native code with a GC) would have been created sooner.

In 1998-2005, computer speeds were increasing so fast there was no incentive to develop new languages. All you had to do was wait a few months for a program to run faster!

What we did was trade off efficiency for developer velocity, and it was a good trade at the time. Since around 2010, performance increases have been tapering off, and when faced with stagnant hardware performance, new languages were created to address that (Rust, Zig, Go, Nim, etc.).

-------------------------------

[1] It took two decades of constant work for those high-dev-velocity languages to reach some sort of acceptable performance. Some of them are still orders of magnitude slower.


> Had processor speeds not increased at that time, Java would have been relegated to history, along with a lot of other languages that became mainstream and popular (Ruby, C#, Python)[1].

I'd go look at the start date for all these languages. Except for C#, which was a direct response to the Sun lawsuit, all these languages spawned in the early 90s.

Had processor speed and memory advanced more slowly, I don't think these languages would have gone away; they'd just have ended up being used for different things or in different ways.

JavaOS, in particular, probably would have had more success. Seeing an entire OS written in and for a language with a garbage collector to make sure memory isn't wasted would have been much more appealing.


> I'd go look at the start date for all these languages. Except for C#, which was a direct response to the Sun lawsuit, all these languages spawned in the early 90s.

I don't understand your point here - I did not say those languages came only after 2000, I said they would have been relegated to history if they didn't become usable due to hardware increases.

Remember that Java was not designed as an enterprise/server language. Sun pivoted when it failed at its original task (set-top boxes). It was only able to pivot due to hardware performance increases.


> I said they would have been relegated to history if they didn't become usable due to hardware increases.

And I disagree with this assessment. These languages became popular before they were fast or the hardware support was mature. They may have taken different evolution routes, but they still found themselves useful.

Python, for example, entered a world where Perl was being used for one-off scripts in the shell. Python replacing Perl would still have happened, because its performance characteristics (and those of what Perl replaced, bash scripts) are similar. We may not have used Python or Ruby as web backends because they were too slow for that purpose. That, however, doesn't mean we wouldn't have used them for all sorts of other tasks, including data processing.

> Remember that Java was not designed as a enterprise/server language. Sun pivoted when it failed at its original task (set top boxes). It was only able to pivot due to hardware performance increases.

Right, but the Java of old was extremely slow compared to today's Java. The JVM for Java 1 to 1.4 was dogshit. It wasn't hardware that made it fast.

Yet still, Java was pretty popular even without a fast JVM and JIT. HotSpot would still likely have happened, but maybe the GC would have evolved differently, as the current crop of GC algorithms trade memory for performance. In a constrained environment Java may never have adopted moving collectors and instead relied on Go-like collection strategies.

Java applets were a thing in the 90s even though hardware was slow and memory constrained. That's because the JVM was simply a different beast in that era. One better suited to the hardware at the time.

Even today, Java runs on hardware that is roughly 80s quality (see Java Card). It's deployed on very limited hardware.

What you are mistaking is the modern JVM's performance characteristics for Java's requirements for running. The JVM evolved with hardware and made tradeoffs appropriate for Java's usage and hardware capabilities.

I remember the early era of the internet. I ran Java applets in my Netscape and IE browsers on a computer with 32MB of RAM and a 233MHz processor. It was fine.


I remember running Java applets under Netscape 3.x and 4.x on System 7.5 on a 200MHz PPC 603ev with 16MB of RAM. It was "fine" mostly, but loading was slow as mud (though that might've just been the 28k dialup) and they crashed Netscape or the whole system a lot more than the rest of the web did. Technically usable, but practicality was questionable.


As you say, the trade-off is developer productivity vs resources.

If resources are limited, that changes the calculus. But it can still make sense to spend a lot on hardware instead of development.


Lots of good practices! I remember how aggressively iPhoneOS would kill your application when you got close to running out of physical memory, or how you had to quickly serialize state when the user switched apps (no background execution, after all!). And, for better or for worse, it was native code, because you couldn't and still can't get a "good enough" JITing language.



