As someone in the midst of transitioning to Linux for the first time ever: I still kinda hate Unix, but my AI friends (Claude Code / Codex) are very good at Unix/Linux, and the "everything is a file" nature of it is amenable to AI helping me make my OS do what I want, in a way that Windows definitely isn't.
On UNIX, "everything is a file" quickly breaks down once networking or features added after UNIX System V come into play, but the meme apparently still holds.
If you want "everything is a file" for real, that was fixed by the UNIX authors in Plan 9 and Inferno.
Yeah, I was really confused when I learned that every device was simply a file in /dev, except the network interfaces. I never understood why there is no /dev/eth0 ...
That was back in the mid-90s, but even today I still don't understand why network interfaces are treated differently from other devices.
It's probably because Ethernet and early versions of what became TCP/IP were not originally developed on Unix and weren't tied to its paradigms; they were ported to it.
Plan 9 does exactly this but all networking protocols live in /net - ethernet, tcp, udp, tls, icmp, etc. The dial string in the form of "net!address!service" abstracts the protocol from the application. A program can dial tcp!1.2.3.4!7788 or maybe udp!1.2.3.4!7788. How about raw Ethernet? /net/ether1!aabbccddeeff!12345. The dial(2) routine takes a dial string and returns an fd you read() and write(). Very simple networking API.
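For a feel of how little an application needs to know under this scheme, here is a rough Python sketch of a dial()-style helper on top of Berkeley sockets. The names `parse_dial` and `dial` are mine, not Plan 9's actual implementation, and only the tcp/udp cases are handled:

```python
import socket

def parse_dial(dial_string):
    """Split a Plan 9 style dial string 'net!address!service' into its parts."""
    net, address, service = dial_string.split("!")
    return net, address, service

def dial(dial_string):
    """Return a connected socket for a tcp!/udp! dial string.

    A loose imitation of Plan 9's dial(2) on Berkeley sockets; the real
    thing returns a plain fd that you read() and write().
    """
    net, address, service = parse_dial(dial_string)
    kind = {"tcp": socket.SOCK_STREAM, "udp": socket.SOCK_DGRAM}[net]
    s = socket.socket(socket.AF_INET, kind)
    s.connect((address, int(service)))
    return s

# The protocol is just another field in the string:
print(parse_dial("tcp!1.2.3.4!7788"))   # ('tcp', '1.2.3.4', '7788')
```

The point is that swapping tcp for udp changes only the string, not the calling code, which is exactly the abstraction the dial string buys you.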
What would it mean to write to a network interface? Blast everyone as multicast? Not that useful. But Plan9 had connections as files, though I’ve never tried.
That's a bad argument. What does it mean to write to a mouse device? To the audio mixer? To the i2c bus device? To a raw SCSI device (scanner or whatever)? Those are all not very useful either.
Especially since there actually is a very useful thing that writing to /dev/eth0 would do: Put a raw frame on the wire, and reading from it would read raw frames.
You haven't thought through what you're asking. That's the bad argument. Network packets are not viable without a destination address. Nor does anyone want unaddressed (garbage) packets on their network.
Network packets don't need a destination address. Broadcast addresses exist. Also, packets to invalid/unknown destinations exist. You can send network packets with invalid source or destination addresses already anyway.
Taking a raw chunk of data and putting it on the wire as-is is the most logical interpretation of "writing to the ethernet device". Does it make sense to allow everyone to do that? Certainly not, that's why you restrict access to devices anyway.
The fact that not every chunk of data "makes sense" for every device in /dev is certainly nothing new, since that is the case for all other devices already (I mentioned a few in my post above).
Packets don't need to be routed. Sometimes you just want to communicate with a host on the same Layer-2 network. I said "Broadcast" (not Multicast) on purpose.
Sometimes you don't even want TCP/IP on the wire. Heck, sometimes you maybe don't even want DIX Ethernet on the wire.
Anyway, this discussion is going nowhere. Handcrafting packets is possible (it's basically what the kernel does anyway), sometimes it's useful, and a user-space program that could just open /dev/eth0 and write its own handcrafted packets to that stream would be helpful.
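On Linux today the closest thing to writing /dev/eth0 is an AF_PACKET raw socket, which needs CAP_NET_RAW. A minimal sketch of the handcrafting part; the source MAC, the interface name, and the choice of EtherType (0x88B5, the local-experimental one) are all illustrative:

```python
import struct

def ethernet_frame(dst_mac, src_mac, ethertype, payload):
    """Build a raw Ethernet II frame: 6-byte dst, 6-byte src, 2-byte type, payload."""
    header = dst_mac + src_mac + struct.pack("!H", ethertype)
    frame = header + payload
    # Pad to the 60-byte minimum (64 on the wire, minus the 4-byte FCS).
    if len(frame) < 60:
        frame += b"\x00" * (60 - len(frame))
    return frame

broadcast = b"\xff" * 6                  # Layer-2 broadcast destination
src = bytes.fromhex("aabbccddeeff")      # made-up source MAC
frame = ethernet_frame(broadcast, src, 0x88B5, b"hello")

# Actually putting it on the wire (root / CAP_NET_RAW required), roughly:
#   import socket
#   s = socket.socket(socket.AF_PACKET, socket.SOCK_RAW)
#   s.bind(("eth0", 0))
#   s.send(frame)
```

A hypothetical /dev/eth0 would collapse those last four lines into an open() and a write(), which is the whole argument.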
Well, it depends on what "file" means. The Linux interpretation would be that a file is anything you can get a file descriptor for. And then the "everything is a file" mantra holds better again.
Windows is actually much closer to this limited, meaningless, form of the "everything is a file" meme. In Windows literally every kernel object is a Handle. A file, a thread, a mutex, a socket - all Handles. In Linux, some of these are file descriptors, some are completely different things.
Of course, this is meaningless, as you can't actually do any common operation, except maybe Close*, on all of them. So them being the same type is actually a hindrance, not a help - it makes it easier to accidentally pass a socket to a function that expects a file, and will fail badly when trying to, for example, seek() in it.
* to be fair, Windows actually has WaitForSingleObject / WaitForMultipleObjects as well, which I think does do something meaningful for any Handle. The closest Linux has is poll()/select()/epoll on file descriptors, and newer fd types (pidfd, eventfd, timerfd) extend what you can wait on that way, but not every kernel object has an fd.
You can call write() and read() on any file descriptor, but it won't necessarily do something meaningful. For example, calling them on a socket in listen mode won't do anything meaningful. And many special files don't implement at least one of read or write - for example, reading or writing to many of the special files in /proc/fs doesn't do anything.
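A quick illustration of that on Linux (assuming CPython on a POSIX system): the same os.read()/os.write() calls work fine on a connected socket's fd, but read() on a listening socket is defined yet not meaningful, so it just fails:

```python
import os
import socket

# A connected pair of sockets: plain read()/write() on the raw fds works.
a, b = socket.socketpair()
os.write(a.fileno(), b"hello")
data = os.read(b.fileno(), 5)        # b"hello"

# A TCP socket in listen mode: the fd exists, but read() has nothing sane to do.
srv = socket.socket()
srv.bind(("127.0.0.1", 0))
srv.listen()
try:
    os.read(srv.fileno(), 1)
    readable = True
except OSError:                      # ENOTCONN on Linux
    readable = False

a.close(); b.close(); srv.close()
```

So "everything is a file descriptor" buys you a uniform type, but not uniform semantics: what read() means still depends on what is behind the fd.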
You can try to read/write the same on Windows: ReadFile (and friends) take a HANDLE.
It won't make sense to try to read from all things you can get a HANDLE to on Windows either, but it's up to what created the HANDLE/object as to what operations are valid.
I was recently thinking that object orientation is kind of "everything is a file" 2.0, in the form of "everything is an object". Of course, that didn't pan out so well either.
I haven't yet googled what people have already said about that.
P.S. Big fan of your comments.
> object orientation is kind of everything is a file 2.0 in the form everything is an object
That is why I love Plan 9. 9P serves you a tree of named objects that can be byte addressed. Those objects are on the other end of an RPC server that can run anywhere, on any machine, thanks to 9p being architecture agnostic. Those named objects could be memory, hardware devices, actual on-disk files, etc. Very flexible and simple architecture.
I'd rather pick Inferno, as it improved on Plan 9's lessons, e.g. the safe userspace in the form of Limbo, after concluding that throwing away Alef wasn't so great in the end.
Inferno was a commercial attempt at competing with Sun's Java. The Plan 9 folks had to shift gears, so they took Plan 9 and built a smaller, portable version of it in about a year. The Plan 9 and Inferno kernels share a lot of code and build system, so moving code between them is pretty simple.
The really interesting magic behind Plan 9 is 9P and its VFS design, so that leaves Inferno with one thing going for it: Dis, its user-space VM. However, Dis does not protect memory, as it was developed for MMU-less embedded use. It implicitly trusts the programmer not to clobber other programs' memory. It is also hopelessly stuck in 32-bit land.
These days Inferno is not actively maintained by anyone. There are a few forks in various states and a few attempts to make Inferno 64-bit, but so far no one has succeeded. You can check: https://github.com/henesy/awesome-inferno
Alef was abandoned because they needed to build a compiler for each arch and they already had a full C compiler suite. So they took the ideas from Alef and made the thread(2) C library. If you're curious about the history of Alef and how it influenced thread(2), Limbo and Go: https://seh.dev/go-legacy/
These days Plan 9 is still alive and well in the form of 9front, an actively developed fork. I know a lot of the devs, and some of them daily drive their work via 9front running on actual hardware. I also daily drive 9front via drawterm to a physical CPU server that also serves DNS and DHCP, so my network is managed via ndb. Super simple to set up vs other clunky operating systems.
And lastly, I would like to see a better Inferno but it would be a lot of work. 64 bit support and memory protection would be key along with other languages. It would make a better drawterm and a good platform for web applications.
> I would like to see a better Inferno but it would be a lot of work. 64 bit support and memory protection would be key along with other languages. It would make a better drawterm and a good platform for web applications.
Doesn't Wasm/WASI provide these same features already? That doesn't seem like "a lot of work", it's basically there already. Does dis add anything compelling when compared to that existing technology stack?
Inferno was initially released in 1996, 21 years before WASM existed.
An Inferno built using WASM would be interesting. Though WASI would likely be supplanted by a Plan 9/Inferno interface, possibly with WASI compatibility. Instead of a hacked-up hypertext viewer, you start with a real portable virtual OS that can run hosted or native. Then you build whatever you'd like on top, like HTML renderers, JS interpreters, media players/codecs, etc. Your profile is a user account, so you get security for free using the OS mechanisms. Would make a very interesting platform.
I am well aware of that. My point is that a web browser, originally a hypertext viewer, is now a clunky runtime for all sorts of ad-hoc standards, including a WASM VM. So instead, start with a portable WASM VM that is a lightweight OS, and build a browser inside of it composed of individual components, like Lego. You get all the benefits of having a real OS, including process isolation, memory management, a file system, security, and tooling. WASI is a POSIX-like ABI/API that does not fit the Plan 9/Inferno design, as they thankfully aren't Unix.
The WASI folks are accepting new API proposals. If the existing API does not fit an Inferno-like design, you can propose tweaked APIs in order to improve that fit.
All of that prose doesn't change the fact that at the time Inferno was built, it was an improvement over Plan 9, taking Plan 9's experience into account.
I know the history pretty well, I was around at the time after all, and Plan 9 gets more attention these days exactly because most UNIX heads usually ignore Inferno.
"We've painted a dim picture of what it takes to bring IPEs to UNIX. The problems of locating, user interfaces, system seamlessness, and incrementality are hard to solve for current UNIXes--but not impossible. One of the reasons so little attention has been paid to the needs of IPEs in UNIX is that UNIX had not had good examples of IPEs for inspiration. This is changing: for instance, one of this article's authors has helped to develop the Smalltalk IPE for UNIX (see the adjacent story), and two others of us are working to make the Cedar IPE available on UNIX.
What's more, new UNIX facilities, such as shared memory and lightweight processes (threads), go a long way toward enabling seamless integration. Of course, these features don't themselves deliver integration: that takes UNIX programmers shaping UNIX as they always have--in the context of a friendly and cooperative community. As more UNIX programmers come to know IPEs and their power, UNIX itself will inevitably evolve toward being a full IPE. And then UNIX programmers can have what Lisp and Smalltalk and Cedar programmers have had for many years: a truly comfortable place to program."
Some GOSIP (remember that?) implementations on some Unices did have files for network connections, but that was very much in the minority. Since BSD was the home of the first widely usable socket() implementations for TCP/IP, it became the norm: sockets are files, just not linked to any filesystem, and control happens via connect()/accept() and setsockopt(), the networking equivalent of the Unix system-call dumping ground, ioctl().
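You can actually see that "file, but not in any filesystem" status on Linux: the socket is a perfectly good fd, but its /proc entry points at an anonymous socket inode rather than a path. A small, Linux-specific sketch:

```python
import os
import socket

s = socket.socket()          # a real fd: works with read()/write()/close()/poll()
link = os.readlink(f"/proc/self/fd/{s.fileno()}")
print(link)                  # e.g. "socket:[123456]" - an inode, not a filesystem path
s.close()
```

Compare with a regular file, whose /proc/self/fd link resolves to its actual path; the socket has an identity in the kernel's inode space but no name anywhere you can ls.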
Linus finally relented and changed it to "everything is a stream of bytes." Still, it's a useful metaphor and way to think about interacting with bits of the OS.
Having observed my fair share of beginners transition from Windows to Linux, the most common source of pain I've seen is getting used to file permissions, and playing fast and loose with sudo.