The 286 had a segmented memory model that effectively limited you to accessing RAM in 64K segments at a time. The 68000 had a flat memory model, and that and other differences would have made a port of AmigaOS to a 286 very difficult because of the downgrade in capabilities (video controller architecture is another huge issue, too). In many ways the 286 should have been awesome, but wasn't. The 386 was more of a 68000-series competitor, and I believe it came out about the same time as the 68030 - so by the time Intel chips could compete with Motorola, four years had passed (and in that era, four years was like two decades).
It blew my mind when I was able to set the viewport pointer to where my program code was, and SEE THE CODE AS IT RAN. Like, watch the bits flip as counters incremented, etc etc.
It was a wonderful, impactful lesson in foundations of software engineering, before I even knew what any of that meant.
A lot of that "before I knew what that meant" going on for me when I look back at that time. It seems like even though we have better technology now, we've somehow backslid into the worst uses for it.
Indeed, a 286 would cause some breakage, some OS-level APIs would need more complicated code, and some programs would be difficult to port because HGA, VGA, and SVGA graphics were very different from the Amiga's modes (although somewhat superior).
Still, would be fun to have a 386 version of the Amiga OS.
I say this with respect, but I don't think it would be. The x86 architecture had an absolute paucity of registers, the mixture of MMIO and PMIO was insane, and it had no PC-relative addressing modes, so relocation for a flat address space would have been an annoyance. Before PCs became about a billion times faster and the compact encoding came into its own, x86 code was not fun to write.
Saying "some programs would be difficult to port" is putting it mildly; a huge part of the Amiga's software catalogue was written in assembler, and pretty much every single game wrote directly to the hardware, in a way that couldn't be abstracted without a complete emulation of that hardware. The Commodore graphics APIs were simply too slow for anything except productivity software. There were retargetable graphics APIs (e.g. CyberGraphX, Picasso 96), introduced to let well-behaved software use the expensive graphics cards you could buy for big-box Amigas, but generally you wouldn't use them for game graphics (unless it was a very simple game).
The Amiga really held together well; the custom chips were well designed to fit with the 68000. Perhaps too well, and that coupling became the downfall when the computer hardware moved faster than Amiga software could be brought along. It was destroyed once games became more about per-pixel operations (1990s 3D games) than blitting large areas (1980s platform games); the existence of VGA mode 13h made the former so much easier to write. Commodore even added a chip (Akiko) to provide a form of hardware-accelerated chunky-to-planar conversion, but it came too late, shipped only in the CD32, and just couldn't compete with a native chunky framebuffer.
> that coupling became the downfall when the computer hardware moved faster than Amiga software could be brought along.
Both hardware and software moved ahead while Commodore invested in the wrong things. The Amiga was late to adopt VGA monitors, and PCs quickly surpassed its capabilities. When I mentioned porting AmigaDOS and Intuition to 286/386s, the idea was to keep the OS compatible at the C source level as much as possible, with workarounds where the hardware differs, but always at the OS API level, so that non-game software (or non-Amiga-like games) could be ported from source.
I don't think it'd have saved Commodore or the Amiga (I think the Sun deal, where Sun would sell 3000/UX boxes, was a better bet for that - or maybe not ditching the CBM 900 but instead building the Amiga on Coherent instead of AmigaDOS).
I don't think most of it was originally implemented in C.
> at OS API level
That's the problem. Keep the original non-protected-memory APIs and you can't have memory protection; add memory protection, and it can't be API-compatible.
They could have packaged up the Amiga chipset on an ISA card: an all-in-one video/audio/I/O gizmo. Sell that to 386 owners and give the OS away. Bonus points for a ROM socket to insta-boot AmigaOS with no disk.
By the time they would've decided to do this, the Amiga chipset was looking pretty dated. 386s with SuperVGA and Sound Blaster cards were becoming commonplace in the early '90s, and once the 486 came out (1989), the price of 386 hardware dropped fast. Commodore seemed more interested in going the other direction: putting an x86 into an Amiga (the BridgeBoard).
I was an Amiga fan from roughly 1988 through 1994, started with an A500, expanded it, then moved on to an A3000. The platform was incredible. I learned a ton from it and taught myself C on that system. But by 1994, I wanted a Linux system, so a 486 it was...
ISTM that if Microsoft had managed to add preemptive multitasking to Windows 3.x in a backward-compatible way, this would've given us as close to an Amiga-like OS as one could ever hope to run on 8086 and 286 hardware. Perhaps even more so if hardware could have been expected to include multimedia capabilities as standard, without relying on third-party drivers - the PCjr had this, as did the Tandy 1000 line later.
> ISTM that if Microsoft had managed to add preemptive multitasking to Windows 3.x in a backward-compatible way, this would've given us as close to an Amiga-like OS as one could ever hope to run on 8086 and 286 hardware.
The 8086 didn't have the hardware support to do preemption safely, and the 286's memory model was 64 KB segments... barely good enough. The enabler for MPC (Multimedia PC) was the combination of the 386 (and 386SX) + a 16-bit sound card + a CD-ROM drive + VGA or better.
Just for accuracy: Win3.x did have preemptive multitasking, but it could only preempt DOS apps (in 386 Enhanced mode). Win16 apps could not be preempted.
This compares quite closely to the FOSS RISC OS for Arm machines, which can preempt CLI apps in a "task window", while GUI apps only multitask cooperatively.
Co-op multitasking is faster and more memory-efficient; that's why Acorn and MS chose it. But it's very vulnerable to a single app failing to relinquish control and locking up the OS.