The reasoning can be simplified to two things.
1. The bhyve hypervisor has not been ported to Linux.
2. Maintaining a Linux distribution will require more effort and have more churn than illumos.
Because Linux is just a kernel, and users have to provide all of their own user space and system services, there is a lot of opportunity for churn. Illumos is a traditional operating system that covers everything from the kernel up to the service-management layer (its equivalent of systemd). Illumos is also very stable at this point, so most of the churn is managed up front.
The choice is between porting a handful of apps to illumos or jumping onto the Debian treadmill while pioneering a hypervisor that is new to Linux. Would Linux have enabled a faster development cycle, or just an easier MVP?
There's no churn in a graveyard, either. Debian's not much of a treadmill on stable; it's famous for it.
The justifications for bhyve over KVM are similarly inscrutable; you can simply not build the code you don't want. Nobody's forcing you to use shadow paging. Comments like "reportedly iffy on AMD" are bizarre. What does "iffy" mean? This wasn't worth testing? Why should I, a potential customer, believe that these people are better at this than the established companies who have been producing nearly-identical products for twenty years? And at the level of development they're describing, why bother using an x86_64 processor from a manufacturer who does not bother to push code into the kernel you've chosen?
Again, it's their company, and if they (as I suspect) chose these tools because they're familiar, that's a totally supportable position. I just can't understand why we get handwaving and assurances instead of any meat.
You may disagree with our rationale, but it is absolutely absurd to complain that RFD 26[0] does not have "any meat." It is in fact dense technical content (10,000+ words!), for which I would expect a thorough read to take on the order of an hour. Not that I think you read it thoroughly: you skimmed parts of it, perhaps -- but certainly glossed over aspects that are assuredly not your domain of expertise (or, to be fair, of interest to you): postmortem debuggability, service management, fault management, etc. These things don't matter to you, but they matter to us quite a bit -- and they are absolutely meaty topics.
Now, in your defense, an update on RFD 26 is likely merited: the document itself is five years old, and in the interim we built the whole thing, shipped it to customers, are supporting it, etc. In short, we have learned a lot, and it merits elucidating. Of course, given the non-attention you gave to the document, it's unlikely you would read any update either, so let me give you the tl;dr: in addition to the motivation outlined in RFD 26, there are quite a few reasons -- meaty ones! -- that we didn't anticipate that give us even greater resolve in the decision that we made.
I did indeed read your document (twice, as I explicitly reported). I didn't address those parts because I found them better-supported. Instead, I addressed the parts I found confusing, and since your rebuttal here is just whining about what you think my behavior is, I continue to be mystified. That's okay; nobody expects you to explain yourself to me. If I thought it would help, I would suggest that perhaps a more effective defense would involve answering literally any of the questions I already asked. However, I don't appreciate accusations of bad faith based on your unwarranted assumptions about what I did or did not do and, bizarrely, what you imagine my motivations are. I'll just assume that the answers to the "why" questions I asked are rooted in similar wild-ass speculation.
There is a reasonable explanation for the "foregone conclusion" flavour of the RFD that doesn't cast aspersions (quite as much as you are) on the authors:
It is simultaneously an assertion of the culturally determined preferences of a group of people steeped in Sun Microsystems engineering culture (and Joyent trauma?), and a clinical assessment of the technology. The key is that technology options are evaluated against values of that culture (hence the outcome seems predictable).
For example, if you value safety over performance, you'll prioritise the safety of the DTrace interpreter over "performance at all costs" JIT of eBPF. This and many other value judgements form the "meat" of the document.
The ultimate judge is the market. Does open firmware written in Rust result in higher CSAT? This is one of the many bets Oxide is making.
Frankly, I don't think Oxide would capture so much interest among technical folks if it was just the combination of bcantrill fandom + radically open engineering. The constant stream of non-conformist/NIH technology bets is why everyone is gripping their popcorn. I get to shout "Ooooooh, nooo! Tofino is a mistake!" into my podcast app, while I'm feeding the dog, and that makes my life just a little bit richer.
I'm not sure the DTrace interpreter was safer than eBPF. In theory it should be, because a JIT is just extra surface area, but I'm not sure about in practice; both eBPF and DTrace have had bugs. Also, I always thought the eBPF JIT was just a translation to machine code without any kind of optimization pass, so it should be very similar to how DTrace works. They both ship bytecode to the kernel.

But I guess the big difference is that eBPF relies more on a verification pass, while most of DTrace's safety checking is performed while executing the bytecode. I remember there was a lot of machinery in eBPF where the verifier was meant to statically determine that you were only accessing memory you were allowed to, and I think there were a lot of bugs around this because the verifier would assume slightly different behaviour than what the runtime was producing. But that is not necessarily a JIT problem: you could have an interpreter that relied on a static safety pass as well.
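To make the distinction concrete, here's a toy sketch (not real DTrace or eBPF, just an illustration of the two safety models): one VM checks every memory access as it executes, the other rejects unsafe programs up front and then runs them unchecked.

    // Toy tracing VM with one instruction: load a byte from a fixed buffer.
    #[derive(Clone, Copy)]
    enum Op {
        Load(usize),
    }

    const BUF_LEN: usize = 16;

    // DTrace-style: no up-front pass; every access is checked as it executes.
    fn interpret_checked(prog: &[Op], buf: &[u8; BUF_LEN]) -> Option<u8> {
        let mut acc = 0;
        for op in prog {
            match *op {
                Op::Load(off) => {
                    if off >= BUF_LEN {
                        return None; // abort the probe rather than read out of bounds
                    }
                    acc = buf[off];
                }
            }
        }
        Some(acc)
    }

    // eBPF-style: a static verifier rejects unsafe programs before they run,
    // and the runtime then executes (or JITs) them with no per-access checks.
    // The bug class described above appears when the verifier's model and the
    // runtime's actual behaviour disagree.
    fn verify(prog: &[Op]) -> bool {
        prog.iter().all(|op| match *op {
            Op::Load(off) => off < BUF_LEN,
        })
    }

    fn main() {
        let buf = [7u8; BUF_LEN];
        let bad = [Op::Load(99)];
        assert_eq!(interpret_checked(&bad, &buf), None); // caught at runtime
        assert!(!verify(&bad));                          // caught statically
    }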
...but your top post didn't ask any questions; certainly not ones that would justify a detailed answer.
It was several assertions, plus your admission of confusion. I mean, there are no stupid questions, but there wasn't even a question there, so I don't blame anyone for thinking you're communicating poorly.
I wasn't accused of communicating poorly. I was accused of lying about reading a text file and having some kind of ulterior motive for my own opinions.
Furthermore, advanced readers are generally able to infer from "I am not sure why x" that a similar flow of discussion might be as feasible as if it were phrased "why x?".
As though that were necessary. There's plenty of room in this comment box for questions better fleshed out than "why x?". Are you expecting advanced readers, or clairvoyant ones?
Because 2^128 is too big to be reasonably filled even if you give an IP address to every grain of sand. 64 bits is good enough for network routing, and 64 bits for the host to auto-configure an IP address is a bonus feature. The reason it's 64 bits is that it's large enough to avoid collisions when picking an ephemeral random number, and it can fit your 48-bit MAC address if you want a consistent number.
With a fixed-size host identifier, compared to IPv4's variable-size host identifier, network renumbering becomes easier. Because the host part of the IP address is separated out, a network operator can change IP ranges by simply replacing the top 64 bits with prefix translation, and other computers can still be routed to via the unique bottom 64 bits in the new network.
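A rough sketch of what that renumbering looks like, using Rust's u128 conversions for IPv6 addresses (the addresses here are documentation examples, not anything real):

    use std::net::Ipv6Addr;

    // Keep the host's unique bottom 64 bits (the interface identifier)
    // and swap in a new 64-bit routing prefix.
    fn renumber(addr: Ipv6Addr, new_prefix: u64) -> Ipv6Addr {
        let host = u128::from(addr) & 0xFFFF_FFFF_FFFF_FFFF;
        Ipv6Addr::from(((new_prefix as u128) << 64) | host)
    }

    fn main() {
        let old: Ipv6Addr = "2001:db8:aaaa:1::52".parse().unwrap();
        // The operator hands out a new prefix; hosts keep their interface IDs.
        let new = renumber(old, 0x2001_0db8_bbbb_0001);
        println!("{} -> {}", old, new); // 2001:db8:aaaa:1::52 -> 2001:db8:bbbb:1::52
    }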
This is what you do if you start with a clean sheet and design a protocol where you don't need to make address scarcity the first priority.
If your software has no bugs, then unikernels are a straight upgrade. If your software has bugs, then the blast radius for issues is now much larger. When was the last time you needed a kernel debugger for a misbehaving application?
With a standard Windows Server license you are only allowed two Hyper-V virtual machines, but unlimited "Windows containers". The design is similar to Linux, with namespaces bolted onto the main kernel, so they don't provide any better security guarantees than Linux namespaces.
Very useful if you are packaging trusted software and don't want to upgrade your Windows Server license.
There have been countless articles claiming the demise and failure of the F-35, but that is just one side of the story. There has been an argument, started 50 years ago in the 1970s, about how to build the best next-generation fighter jets. One of these camps was called the "Fighter Mafia"[0], figureheaded by John Boyd. The main argument they bring is that the only thing that matters for a fighter jet is how well it performs in one-on-one, short-range dogfighting. They claim that stealth, beyond-visual-range missiles, electronic warfare and sensor/datalink systems are useless junk that only hinders dogfighting capability and bloats the cost of new jets.
The evidence for this claim was found in testing where an F-35 dogfought an older F-16. The result was that the F-35 won almost every scenario except one, in which a lightly loaded F-16 was teleported directly behind an F-35 weighed down by heavy missiles and won the fight. This one loss has spawned hundreds of articles about how the F-35 is junk that can't dogfight.
In the end the F-35 has a lot of fancy features that are not optional for modern operations. The jet has now found enough buyers across the West for economies of scale to kick in, and the cost is around $80 million each, which is cheaper than retrofitting stealth and sensors onto other airframes, which is what you get with the F-15EX.
Yeah, unfortunately no amount of manoeuvring is a substitute for a kill chain where a distributed web of sensors, relays and weapon carriers can result in an AAM being dispatched from any direction at lightspeed.
It's that as well, but that part of the description doesn't capture how objects are automatically freed once the last reference to them (the owning one) is dropped.
Meanwhile my description doesn't fully capture how it guarantees unique access for writing, while yours does.
> but that part of the description doesn't capture how objects are automatically freed once the last reference to them (the owning one) is dropped.
You're confusing the borrow checker with RAII.
Dropping the last reference to an object does nothing (and even the exclusive &mut is not an "owning" reference). Dropping the object itself is what automatically frees it. See also Box::leak.
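A small illustration of the distinction, in plain Rust (nothing beyond std):

    // Freeing is tied to dropping the *owner*, not the last reference.
    struct Thing(&'static str);

    impl Drop for Thing {
        fn drop(&mut self) {
            println!("freed {}", self.0);
        }
    }

    fn main() {
        let owner = Thing("a");
        {
            let _borrow = &owner; // shared reference
            // _borrow goes out of scope here: nothing is freed
        }
        println!("borrow is gone, value still alive");
        drop(owner); // dropping the owner is what prints "freed a" (RAII)

        // Box::leak trades the owner for a &'static mut reference; when
        // that reference goes out of scope, nothing is freed either.
        let leaked: &'static mut Thing = Box::leak(Box::new(Thing("b")));
        let _ = leaked; // "freed b" is never printed
    }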
The reason for detecting the orientation of the connector is higher-speed communication. USB-C 20 Gbps uses both sets of pins on the connector to shotgun two USB 3.2 10 Gbps lanes together into 20 Gbps. That is why the technical spec name for 20 Gbps is "USB 3.2 Gen 2x2": that is what the "x2" means.
Knowing that USB has this feature, it follows that USB-C needs to be self-orienting in case the two ends of the cable are plugged in with different orientations.
You say Ethernet got this part right; well, it got this part right by not having a reversible connector. Ethernet has four TX/RX pairs, and USB-C has two RX/TX pairs per USB 3 connection, with four in total for 20 Gbps. The difference is reversibility. Is it worth the tradeoff?
That might work for Ethernet, but how would you do that for any unidirectional USB-C alternate mode without protocol-level feedback such as analog audio or DisplayPort video?
If you want to allow all of
- Reversible connectors
- Passive (except for marking), and as such relatively cheap, adapters and cables
- Not wasting 50% of all pins on a completely symmetric design connected together in the cable or socket
there's no way around having an asymmetrical cable design that lets the host know which signal to output on which pins.
That’s basically how USB-C does it too (except that the chip isn’t strictly necessary; an identifying resistor does the job for legacy adapters and cables).
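A sketch of the detection logic as I understand it: only one CC wire runs through the cable, so the host just checks which of its two CC pins sees the sink's pull-down. The 5.1 kΩ Rd value is from the Type-C spec; the threshold window here is illustrative, not the spec's exact figures.

    // Orientation detection on a USB-C host port: whichever CC pin sees
    // the sink's ~5.1 kOhm Rd pull-down marks the plug orientation.
    #[derive(Debug)]
    enum Attach {
        Normal,  // Rd seen on CC1
        Flipped, // Rd seen on CC2
        None,    // no sink attached
    }

    fn detect(cc1_kohm: f64, cc2_kohm: f64) -> Attach {
        // Illustrative window around the nominal 5.1 kOhm Rd.
        let looks_like_rd = |r: f64| (4.0..=6.5).contains(&r);
        match (looks_like_rd(cc1_kohm), looks_like_rd(cc2_kohm)) {
            (true, false) => Attach::Normal,
            (false, true) => Attach::Flipped,
            _ => Attach::None,
        }
    }

    fn main() {
        // Plug inserted "upside down": Rd shows up on CC2.
        println!("{:?}", detect(f64::INFINITY, 5.1)); // Flipped
    }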
One misconception that everyone keeps repeating is that the Pi 5 expects and needs a 5 V / 5 A power supply to work. The CPU and all the I/O will work as expected with any USB PD charger that can do at least 15 watts (5 V at 3 A). The only issue you will have is a power limit on USB peripherals that use a lot of power, like hard drives. Keyboards, mice and webcams will work just fine with the 600 milliamp power limit.
Previous Raspberry Pis had low USB power limits and people did not consider those products dead on arrival. Now that they are trying to address a limitation of the original product, people are discovering that the Raspberry Pi was always a very limited platform to begin with, and that the next step is not an incremental bump to the specs but just buying a regular computer.
Except as soon as you have some issue, the first comment will be "are you using the official power supply?" I hate such comments with a passion. It feels very much like corporate support.
Quick summary of the technology: there are two software parts to virtualization, the hypervisor and the virtual machine monitor.
First is the hypervisor, which uses the hardware virtualization features of your CPU to virtualize hardware interrupts and memory paging. This part is usually built into the operating system kernel, and one will be preferred per operating system. Common ones are Hyper-V on Windows, Virtualization.framework on macOS and KVM on Linux.
With the kernel handling the low-level virtualization, you need a virtual machine monitor (VMM) to handle the higher-level details. The VMM manages which VM image is mounted and how the packets in and out of the VM are routed. Some examples of VMMs are QEMU, VirtualBox and libvirt.
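The split is visible if you poke KVM directly. A minimal sketch, assuming the kvm-ioctls crate from the rust-vmm project and a Linux host with /dev/kvm; everything past this point (guest memory, device emulation, the run loop) is the VMM's job:

    use kvm_ioctls::Kvm;

    fn main() {
        // The in-kernel hypervisor: KVM exposes the CPU's hardware
        // virtualization features through /dev/kvm.
        let kvm = Kvm::new().expect("open /dev/kvm");
        let vm = kvm.create_vm().expect("create a VM");
        let _vcpu = vm.create_vcpu(0).expect("create a vCPU");

        // A real VMM (QEMU, VirtualBox, ...) would now map guest memory,
        // load a disk image, wire up networking, and drive the vCPU run
        // loop, emulating devices on every VM exit.
        println!("KVM is up; the rest is the VMM's job");
    }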
Flint, the app being shown, is a vibe-coded web app wrapper around libvirt. On the bright side, this app should be safe to use, but it also does not do much beyond launching pre-made virtual machines. As a developer, the work you still need to do is provide a Linux distribution (Ubuntu, etc.), a container manager (Kubernetes, Docker) and launch your own containers or pre-made ones from the internet (Dev Containers).