Back in 2000 I got the M1 Air with 8GB of RAM (I needed the cheapest Mac to test some arm64 stuff) and that laptop served me very well. I never felt RAM-limited. I was always expecting to run out of memory during a big Bazel build or something, but never did.
It isn't the most powerful computer in the world, but I never ran into any problems... so it's probably an OK compromise for most people, especially in a world where RAM is scarce because of AI datacenter buildouts.
The M1 Air would have blown people’s minds in 2000. 128MB of RAM was luxurious at the time for a laptop. In 2003 I borrowed and bought several sticks of RAM for a presentation (my senior thesis on 3D presentation software), got to 1GB in my desktop, and felt like I’d broken some law of physics.
Shortly after, I had a TiBook (PowerBook G4) that was _only_ 1-inch thick! Compared to the 1.75” Dells my coworkers had, it seemed like the future. DVD drive, modem, Ethernet, full-sized DVI port, FireWire, WiFi, Bluetooth, optical audio in and out, gigantic display with a bezel that was unrivaled for years, even among Macs. What a beast!
(I know you meant 2020, but it’s fun to think about the air in 2000).
In the year 2000, an M1 MacBook Air would have been the world's fastest supercomputer (or second fastest if you had the base model with the 7-core GPU).
Impressive, of course; but not quite that impressive.
Only true if all you're running is matmul (a supercomputer has general-purpose CPUs, so it's more flexible than the M1's GPU) - also, those FLOPS are probably FP64 in supercomputer ratings and FP32 on the M1.
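For a rough sense of scale, here's a back-of-envelope sketch. The figures are approximate numbers from Apple's specs and the 2000 TOP500 lists (ASCI Red led the June 2000 list, ASCI White the November one), and the M1 value is the 8-core GPU's FP32 rating, so the comparison is apples-to-oranges on purpose:

```python
# Back-of-envelope FLOPS comparison -- all figures approximate,
# from public specs and the June/November 2000 TOP500 lists.

m1_gpu_fp32_tflops = 2.6       # Apple M1, 8-core GPU, single precision (approx.)
asci_red_linpack_fp64 = 2.38   # ASCI Red, #1 in June 2000, sustained Linpack
asci_white_linpack_fp64 = 4.9  # ASCI White, #1 in November 2000, sustained Linpack

print(f"M1 GPU vs ASCI Red:   {m1_gpu_fp32_tflops / asci_red_linpack_fp64:.2f}x")
print(f"M1 GPU vs ASCI White: {m1_gpu_fp32_tflops / asci_white_linpack_fp64:.2f}x")

# Caveat from the comment above: the M1 number is FP32 and GPU-only,
# while TOP500 scores are FP64 Linpack on general-purpose CPUs, so
# this framing flatters the laptop considerably.
```

By these (rough) numbers the M1 edges out the mid-2000 leader but not the late-2000 one, and only if you accept FP32 matmul as the benchmark.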
As a smart man I knew used to say, supercomputers are about I/O, not raw compute. Those machines have terabytes of RAM, not 8GB.
Your question hits directly at the latency vs. throughput distinction. It depends on which you mean by "fast."
Throughput-wise, the supercomputer is competitive because it has a lot of local RAM spread across lots of independent nodes, which, in aggregate, is comparable to a modern laptop's RAM throughput (and still much more than disk), with the caveat that you can only leverage the supercomputer's bandwidth if your workload is embarrassingly parallel and running on all nodes[1] (rough numbers sketched below the footnote). Latency-wise, old RAM still beats NVMe by two or three orders of magnitude.
[1]: There's another advantage the supercomputer has: lots more local SRAM cache. If the workload is parallel and can benefit from cache locality, it blows away the modern microprocessor.
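To make the aggregate-bandwidth point concrete, here's a minimal sketch. The M1 figure is Apple's published ~68 GB/s unified-memory bandwidth; the node count and per-node bandwidth are purely illustrative assumptions for a circa-2000 cluster, not measurements of any particular machine:

```python
# Illustrative aggregate-bandwidth comparison. The M1 figure is from
# Apple's spec (~68 GB/s unified memory); the cluster numbers below
# are hypothetical stand-ins, not real measurements.

m1_mem_bandwidth_gbs = 68.25     # Apple M1 unified memory bandwidth

nodes = 512                      # assumed node count
per_node_bandwidth_gbs = 1.0     # assumed local RAM bandwidth per node

aggregate_gbs = nodes * per_node_bandwidth_gbs
print(f"Aggregate cluster bandwidth: {aggregate_gbs:.0f} GB/s "
      f"({aggregate_gbs / m1_mem_bandwidth_gbs:.1f}x the M1)")

# The catch: that aggregate is only reachable if the workload is
# embarrassingly parallel, so every node streams from its own RAM.
# Any cross-node traffic goes over the interconnect, which is far
# slower than local memory on either machine.
```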
As someone who wasn't around for the PowerPC Mac era (I was alive, but I didn't have internet and only knew Apple for the iPod and the Apple II): did non-artist people use FireWire for anything other than synchronizing their first-generation iPods? Was it common to have a FireWire external drive, and were there any devices other than cameras, film scanners, and audio interfaces that used FireWire?
There were FireWire HDDs too. Non-artist people also used FireWire for their DV camcorders for home videos. It wasn't really common, though, because most PCs didn't have FireWire.
It was also used by the PS2 for local multiplayer between multiple consoles, although Sony eventually removed that port.
I have a 2008 iMac with (I think) 16GB of RAM which is used just for Firefox. I've been meaning to upgrade it to Linux, but that generation didn't boot from USB, so I need to burn a CD.
All our Intel MacBooks now run Linux just fine. The oldest is from 2012, with 4GB, but most have 8 or 16GB.
I would always recommend more RAM first over a faster processor; back when I built desktop machines for Windows, I would use the second-best CPU and put the savings into RAM.