There's no zfs grenade. It's CDDL, feel free to use it wherever you want. Oracle can't come after you for violating the gpl even if somehow using zfs on linux violates the gpl.
Everything I have read says the CDDL is not compatible with binary deployments of ZFS on Linux, so wouldn't that mean yes, they could press that if you bundled it with GPL code? Actual lawyers have said yes, it could, which is what I am referring to. However, I think the practical answer is that Oracle has created a laches defense through its inaction on this subject for so long now.
CDDL is more permissive than gpl. It's not a violation of cddl to intermingle with code under a different license. GPL is the issue and it's the individual contributors to linux that _could_ sue.
I'm not a lawyer. I don't know what Oracle's lawyers can and can't do. Even if I'm legally in the right, Oracle's lawyers could break me if they wanted. I can't know if there is a ZFS grenade, and neither can you. But we can choose not to deal with Oracle.
At that point, if they wanted to, they could sue mort96 for saying something bad about Oracle. It's unlikely they'll do that and perhaps a bit less unlikely they'll sue over ZFS.
Most of their legal shenanigans appear to be restricted to companies that already license some software from them.
AWS loop a long while back wanted me to design a playlist system so my dumbass brain snapped to m3u files or w/e people were using back then and designed a system to host/share playlist files. The teenager (ok probably in their 20s) interviewing me seemed more and more confused as we went on but he never tried to redirect me to what he really intended.
What are your discovery mechanisms? I don't know what exists for automatic peer management with wg. If you're doing bgp evpn for vxlan endpoint discovery then I'd think WG over vxlan would be the easier to manage option.
If you actually want to use vxlan ids to isolate l2 domains, like if you want multiple hypervisors separated by public networks to run groups of VMs on distinct l2 domains, then vxlan over WG seems like the way to go.
> This leads to authors having to re-explain their thinking in detail, covering points that they’d omitted for brevity or because they are obvious to those with a good understanding of the problem.
There's nothing wrong with this. Being able to explain your thinking in detail to someone who doesn't necessarily understand the problem is a pretty good exercise to make sure you yourself fully understand the problem _and your thinking._ Of course, this can't turn into a lecture on basic things people should know or have looked up before commenting.
Sure, now imagine answering all the questions from 10 different people. It's the largest hindrance I have ever seen, but I agree with the above comment that it largely depends on the team.
My take is that an RFC should be very early in the engineering process, like as part of a proof of concept phase, and should not block progress towards completing a design proposal. The design proposal should list any legitimate alternatives to overall or component designs discussed during RFC along with the reasoning for not using them in a "designs not chosen" appendix. This at least gives your engineering leadership an opportunity to evaluate the general design ideas before anyone is prepared to die on the hill of those ideas.
Architecture / Design review happens post proof of concept but still before any significant development work and major action items are blockers to beginning development. Further discussion about designs not chosen can happen here, especially when a flaw is uncovered that would be addressed by one of those not chosen.
What a timely article and comment. I've been watching a lecture series over the last few days about quantum mechanics and the many worlds interpretation. And I have questions.
I may have missed it, or didn't understand it when I heard it explained. What underpins the notion that when a particle transitions from a superposed state to a definite one, the other basis states continue to exist? If they have to continue to exist, then okay, many worlds, but why do we think (or know?) they must continue to exist?
Double slit experiment has been done with electrons which are, afaik, much easier to detect and send single file. It's been done with molecules. It's not a thought experiment.
Quantum superposition is real. There's no doubt about that.
Not a physicist, just here to observe that single photons weren't reliably emitted until the modern era, like the 1970s. The double-slit experiment pre-dates this; it's from 1801. The one which confirms "self interaction" was 1974. I was in high school 1973-78, so the stuff we did was comparatively "new" physics in that sense. Not a message I remember receiving at the time.
From the pop-sci history reading I do, "detecting" reliable generation of single-photon STREAMS in the early days depended on mechanisms which would inherently release a sequence of photons on a known time base, and then gating the detection accurately enough to have high confidence in that time base, so you could discriminate an "individual" photon from the herd.
I don't doubt quantum theory. I only observe that it's mostly, for young students (like almost all received wisdom), grounded in experiments which don't actually do what people think they do. The ones you run in the school lab are illustrative, not probative.
What people do in places like the BIPM in Paris, or CERN, isn't the same as that experiment you did with a ticker-tape and a weighted trolleycar down a ramp. "it's been confirmed" is the unfortunate reality of received wisdom, and inherently depends on trust in science. I do trust science.
Now that we have quantum dots, and processes which depend on reliably emitting single photons and single electrons, the trust has moved beyond "because they did it at CERN" to "because it's implemented in the chipset attached to the system I am using." QC will need massive amounts of reliably generated single-instance signals.
> just here to observe single photons weren't reliably emitted until the modern era.
A dim light bulb from a few feet away emits on the order of 1k photons/sec, which is low enough that you can count individual emissions using fairly simple analog equipment [0] [1].
> The double slit experiment pre-dates this. it's from 1801. The one which confirms "self interaction" was 1974.
There's an experiment from 1909 that demonstrated the double-slit experiment with single(ish) photons [2].
> I only observe it's mostly for young students (like almost all received wisdom) grounded in experiments which don't actually do what people think they do. The ones you run in the school lab are illustrative not probative.
> What people do in places like the BIPM in Paris, or CERN, isn't the same as that experiment you did with a ticker-tape and a weighted trolleycar down a ramp. "it's been confirmed" is the unfortunate reality of received wisdom, and inherently depends on trust in science. I do trust science.
The double-slit experiment is actually fairly easy and cheap to run [3]. Certainly more complicated than ticker tape, but not by much.
It's difficult to quantify the value of "I know the shit out of linux" to a prospective employer when they're looking for cog developer #471.
In my experience it's the network of people you've worked with that know how beneficial you are and want to work with you again (this is key) that will keep you in demand regardless of the market conditions.
Victim-blaming is not necessary in this hiring environment. In the last decade only small companies have been available to me, which means there are under five folks I can turn to directly for jobs, and none of them are hiring now.
I've made quite a career out of knowing how linux works and not reinventing the wheels it provides. I read man pages. I sometimes run `systemctl list-unit-files` and say, "hmm what is that??" then go find out what it is. I've been at this for decades and curiosity keeps pushing me to learn new things and keep up with recent developments.
But how did you get your first Linux job? That's where I'm stuck. Where I live, there are literally zero entry-level Linux roles, and the couple of Linux roles that are available require centuries' worth of enterprise experience with Kubernetes, Openshift, Ansible, Chef, Puppet, Terraform, etc...
I was a windows guy at a large auction site and started bringing linux in to my workflows and solutions. I'd already been gaining personal experience with linux and the BSDs, solaris, etc. That was my last "windows job."
I'd say there's really no "linux roles" out there. Entry level or not. Everyone collectively decided "devops" was a big bright beautiful tomorrow and made devs learn release management and made ops people get lost (or become the developer they never wanted to be). Everyone shifted their focus towards "as code" solutions because the SRE book said nobody should log in to servers or something. So we hire people that know the abstractions instead and assume nobody really needs to go deeper than that.
It sucks, but learning the abstractions is how you're gonna have to get started. If you're already a linux nerd then you may benefit from understanding what the abstraction is doing under the hood.
If I was starting out right now, I'd go work through Kelsey Hightower's 'Kubernetes The Hard Way' and build functional kubernetes clusters from scratch on a couple of the cloud providers. Do not copy&paste anything from the blog. Every line, every command, by hand. Type out those CSRs and manifests! Recognize what each component you're setting up is and what it's used for. Like "what is the CCM and what's it responsible for?" Or "What's going on with every step of the kubelet bootstrapping process? What controllers are involved and what are they doing?" Read EVERYTHING under kubernetes.io/docs. Understand the relationships between all the primitives.
If you already have some linux, networking, and containers knowledge to build on top of, I think you could work through all of that in less than 4 weeks and have a better understanding of kubernetes than 80%+ of engineers at any level and crush a kubernetes focused interview.
Thanks, but my point still stands: there are no entry-level roles, whether it's a "Linux" role or a Linux-based "DevOps" role. I'm actually working in a windows-based, mostly-DevOps type role, but we use almost zero opensource tools and it's very Microsoft-centric.
The closest Linux-y roles that I might have a shot at getting into are "cloud engineer" type roles, with a heavy emphasis on AWS - and I hate AWS with a passion (just as much as I hate Azure).
Regardless, the biggest issue is getting that interview call. Now, in the age of AI, people are faking their CVs and companies are getting flooded with hundreds or thousands of junk applications, so getting an interview, especially when you don't meet their professional experience requirements, is next to impossible. I could have all the Kubernetes certs in the world, but what's the point if I get filtered out right at the first stage?
Start introducing it where you are. I was an early advocate for WSL2/Docker, and along with that a push towards deploying to Linux as a cost saving, as projects shifted away from .Net Framework and into .Net Core and Node, which were actually easier to deploy to Linux... WSL/Docker became a natural fit as it was "closer to production" for the development workflow.
It's not always possible, but there are definitely selling points that can help you introduce these things. Hell, scripting out the onboarding chores from a clean windows install (powershell to bootstrap in windows, then bash, etc for the WSL environment) with only 3-4 manual steps... and you get a new dev onboarded in a couple hours with a fully working environment and software stack, including an initialized database... You can raise some eyebrows.
Do the same for automated deployments on your existing projects... shift the testing environments to Linux as a "test" or "experiment" ... you can eat away at both directions.
Before you know it, developers can choose windows or mac instead of one or the other, and can use whatever editor they like. Maybe still stuck with C# or MS-SQL, maybe PostgreSQL for green projects.
It's been 17 years since I got my first Linux job in 2008. Where I live, that's rare: 99% of the industry here is a "Microsoft shop," and the biggest player in town is practically married to them.
I started out at a small Linux company working with Plone CMS. The pay wasn’t great, but it was the perfect place to learn Linux and Python. Since then, I’ve used Linux every single day, become a Java developer, and started a few businesses. Using Linux of course.
But lately, things are changing. Companies are realizing that when it comes to Data Engineering and Science, C# just can't compete with Python's ecosystem. Now that they need to pivot, they're looking for help, and there are very few people in this area with the experience to answer that call.
I was working in a Windows-centric environment and started using Proxmox as the hypervisor instead of Windows Server. This, combined with my self-hosting hobby (a Proxmox mini PC cluster, network diagrams of VLANs, self-hosting my own blog website, a handful of small tools in my git repos), was what sold my current company on hiring me, more than my resume of working in tech.
You can make almost any job into a Linux job. Use a linux VM on your desktop to solve a problem for the company. Things change once your employer knows it's essential.
I've also seen Linux make inroads in "windows only" enterprises when it became essential for performance reasons. A couple of times, towards the start of a project, windows APIs were discovered to be too slow to meet requirements:
In one case, a customer needed us to send a report packet every 40ms. But even when we passed 0 to the windows Sleep() function, it would sometimes stall our program for 100ms at a time. The sleep functions on linux were highly accurate, so we shipped linux. Along the way, 5-6 devs switched to linux, or got a second PC to run it.
In another case, we needed to saturate a 10GbE link with a data stream. We evaluated windows with a simple program:
while (1) send(sock, buffer, sizeof buffer, 0);
... but we found windows could only squeeze out 10% of the link capacity. Linux, on the other hand, could saturate the 10GbE link before we had done any performance tuning. On linux, our production program met all requirements while using only 3% CPU. Windows simply couldn't touch this. More devs learned linux to support this product.
Those companies still don't require linux skills when hiring, because everyone there was once a windows guy who figured it out on the job. But when we see linux abilities on the resume it gives candidates a boost because we know they'll be up to speed faster.