I agree. .NET is the opposite of Go. Calls to System.Random use xoshiro256** under the hood (as of .NET 6, I believe, for the unseeded constructor). Calls to RandomNumberGenerator.GetBytes(), on the other hand, are cryptographically secure, using the kernel cryptographic provider on Windows, /dev/urandom (ChaCha20) on Linux, and arc4random_buf() on macOS (which also uses ChaCha20 under the hood).
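To make that split concrete, here's a minimal sketch of the two entry points (assuming .NET 6 or later; the printed values are just for illustration):

```csharp
// Minimal sketch of the two .NET entry points (assumes .NET 6+).
using System;
using System.Security.Cryptography;

class RandomDemo
{
    static void Main()
    {
        // Non-cryptographic: fast and fine for games/simulations, not for secrets.
        var rng = new Random();
        Console.WriteLine(rng.Next(0, 100));

        // Cryptographically secure: keys, nonces, salts, tokens.
        byte[] key = RandomNumberGenerator.GetBytes(32);   // 256 bits of key material
        int roll = RandomNumberGenerator.GetInt32(1, 7);   // unbiased integer in [1, 7)
        Console.WriteLine($"{Convert.ToHexString(key)} {roll}");
    }
}
```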
I ported around 20 RNGs to C# (all non-cryptographically-secure), and there are tons of uses for non-cryptographic RNGs, so I'm a little torn. I guess in modern development most people who need an RNG need it for crypto purposes (I would guess salts, keys and nonces mostly), but I'd hate to see all the Xoshiros, Mersenne Twisters, PCGs, MWCs, etc. go the way of the dodo simply because they are not deemed fit for crypto purposes. Games, simulations, and non-cryptographic hashes all need deterministic, high-performance RNGs, and don't need all of the cryptographic guarantees.
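For a flavor of what these fast, deterministic generators look like, here's a compact xoshiro256** sketch written from the published algorithm; it's purely illustrative (not the .NET implementation) and obviously not suitable for anything security-related:

```csharp
// Illustrative xoshiro256** port: deterministic, fast, NOT cryptographically secure.
using System;

sealed class Xoshiro256StarStar
{
    private ulong _s0, _s1, _s2, _s3;

    public Xoshiro256StarStar(ulong s0, ulong s1, ulong s2, ulong s3)
    {
        if ((s0 | s1 | s2 | s3) == 0)
            throw new ArgumentException("State must not be all zeros.");
        (_s0, _s1, _s2, _s3) = (s0, s1, s2, s3);
    }

    private static ulong RotL(ulong x, int k) => (x << k) | (x >> (64 - k));

    public ulong NextUInt64()
    {
        // The "**" scrambler over the xoshiro linear engine.
        ulong result = RotL(_s1 * 5, 7) * 9;
        ulong t = _s1 << 17;

        _s2 ^= _s0;
        _s3 ^= _s1;
        _s1 ^= _s2;
        _s0 ^= _s3;
        _s2 ^= t;
        _s3 = RotL(_s3, 45);

        return result;
    }
}
```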
To top it off, there is no standard definition of what makes an RNG cryptographically secure, so it's a slightly loaded question anyway. Everything I've read says an algo needs the following properties: forward secrecy (seeing past outputs shouldn't let you predict future outputs), backward secrecy (knowing current outputs shouldn't let you recover previous internal state or previous outputs), and the output must be indistinguishable from true random bits, even under a chosen-input attack. This is where I politely defer to the expert mathematicians and cryptographers, because I'm not equipped to perform such an analysis.
I can understand why things have developed this way, though- people have needed random numbers far longer than they've needed cryptographically secure random numbers, so the default is the non-cryptographically-secure variant. A language created tomorrow would likely follow in Go's footsteps and default to the cryptographically secure option.
No, CSPRNG vs. RNG isn't a loaded question. Every RNG that says "this isn't an according-to-Hoyle cryptographically random number generator, but..." isn't one. Most modern CSPRNGs are designed with well-understood cryptographic primitives, so they draft off those security properties. Establishing those properties for a novel set of primitives is a major undertaking.
It's a little frustrating, because there are definitely fast RNGs that have tried to blur this line. A reasonable first approximation of the current situation is that a CSPRNG should have somewhere in its core a mixing function based on an actual cryptographic hash or permutation function; if the design has to explain what that function is and how it works (as opposed to just saying "this is ChaCha20"), it's not secure. These fast RNGs, like xoshiro and PCG, all get there by not having cryptographically strong mixing functions at their core.
For what it's worth, I think the "GetBytes() means secure, IntN() means it's not secure" split is a clever misfeature. Just make all the standard library random interfaces back onto a real CSPRNG, and let people pull in PCG or whatever if they have specialized needs for fast, insecure RNGs.
> Just make all the standard library random interfaces back onto a real CSPRNG
That's what OpenBSD has done for the traditional C and POSIX randomness APIs.
Also, re your earlier comment, OpenBSD's arc4random API is everywhere now except Linux/musl and Windows. POSIX now has getentropy, which on recent Linux kernels will be as fast as arc4random_buf. But it would be nice if musl got the arc4random API, which includes arc4random_uniform for generating 32-bit numbers in the interval [0, N), minimizing the risk of people screwing that up.
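To illustrate what arc4random_uniform protects against: a naive value % N over-represents small results whenever N doesn't evenly divide the generator's range. Below is a rejection-sampling sketch of the idea in C# (just an illustration of the technique, not OpenBSD's code; in .NET you'd normally call RandomNumberGenerator.GetInt32(n), which does this for you):

```csharp
// Rejection sampling for an unbiased value in [0, n), in the spirit of
// arc4random_uniform. Illustration only; prefer RandomNumberGenerator.GetInt32.
using System;
using System.Security.Cryptography;

static class UniformRandom
{
    public static uint NextUniform(uint n)
    {
        if (n == 0) throw new ArgumentOutOfRangeException(nameof(n));

        // Values below 2^32 mod n fall in the biased "remainder" region; reject them.
        uint threshold = (uint)(0x1_0000_0000UL % n);
        Span<byte> buf = stackalloc byte[4];
        while (true)
        {
            RandomNumberGenerator.Fill(buf);
            uint r = BitConverter.ToUInt32(buf);
            if (r >= threshold) return r % n;   // now uniform over [0, n)
        }
    }
    // e.g. UniformRandom.NextUniform(6) gives a fair 0..5 "die roll".
}
```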
It's unlikely vendors will take OpenBSD's approach to the historic PRNG APIs, but they're almost all there for the arc4random API. Also, the former approach is less ideal than it sounds; the latest versions of PUC Lua, for example, use an included non-CSPRNG rather than merely binding the historic C APIs. Explicitly using the arc4random API means the semantics are explicit, too, and you can more easily audit code. It's conspicuously missing an API for floating point intervals, but perhaps that'll come along.
What's your threshold for "high performance"? A modern CPU can use a secure algorithm and produce more than one byte per cycle. Xorshift is a bit faster but not much faster.
LINQ? Just throwing it out there; obviously not everybody can or wants to run a C#/.NET stack, but Entity Framework (Core) is about as close as you can get to the Perl and regex integration. I think Ruby on Rails gets there too, but I'm not a RoR guy, so I can't comment.
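For anyone who hasn't seen it, here's a tiny LINQ-to-objects sketch of what query-as-part-of-the-language looks like; the Post record is made up for illustration, and EF Core translates the same query shape to SQL:

```csharp
// LINQ-to-objects sketch; EF Core would translate the same query shape to SQL.
using System;
using System.Linq;

var posts = new[]
{
    new Post("Why I still use Windows", DateTime.Now.AddDays(-2), true),
    new Post("Unpublished draft",       DateTime.Now.AddDays(-1), false),
    new Post("RNGs in C#",              DateTime.Now,             true),
};

var recentTitles = posts
    .Where(p => p.Published)
    .OrderByDescending(p => p.CreatedAt)
    .Select(p => p.Title)
    .Take(10)
    .ToList();

Console.WriteLine(string.Join(", ", recentTitles));

record Post(string Title, DateTime CreatedAt, bool Published);
```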
This is disappointing. 4o has been performing great for me, and now I see I only have access to the 5-level models. Already they're not as good: more verbose, with more technical wording, but adding very little for what I'm using GPT for.
I hear this a lot, and I do seem to remember back when I first got Windows 11 I might have seen something stupid like Candy Crush, but I'll be honest, I literally never see ads anywhere in the OS. Truth be told I hardly ever use the Start Menu since they ruined it, but this complaint about ads everywhere makes it sound like the OS is a typical webpage. I just don't see it. Maybe because I'm on Win11 Pro?
I am in the middle of my third AI assisted project. I disagree ~90%.
If you prompt an LLM like an architect and feed it code rather than expecting it to write your code, both ChatGPT 4o and Claude 3.7 Sonnet do a great job. Do they mess up? Regularly. But the key is to guide the LLM and not let the LLM guide you, otherwise you'll end up in purgatory.
It takes some time to get used to what types of prompts work. Remember, LLMs are just tools; used in a naive way, they can be a drain, but used effectively they can be great. The typing speed alone is something I could never match.
But just like anything, you have to know what you're doing. Don't go slapping together a bunch of source files that they spit out. Be specific, be firm, tell it what to review first, what's important and what is not. Mention specific algorithms. Detail exactly how you want something to fit together, or describe shortcomings or pitfalls. I'll be damned if they don't get it most of the time.
> Be specific, be firm, tell it what to review first, what's important and what is not [...] Detail exactly how you want something to fit together [...]
You mean like in a fast food chain?
I know how to use it. All of that was already implied in my original comment. Sometimes though, I want to cook without a rigid mindset.
If I had to guess, probably a couple hundred lines a day, maybe more if I get in a groove or have a deadline.
But with an LLM, that number goes to about 500 or so, 200 of which are real code and not definitions of some kind. Truthfully, that’s where the LLMs shine. I have this enum with 50 variants, and need to build a dictionary (with further constraints and more complex objects). That shit takes forever even with cut & paste, unless you code with code, and those one-offs aren’t my cup of tea anymore.
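The kind of boilerplate I mean looks roughly like this (the OpCode enum and Handler record are made up for illustration; now imagine 50 variants instead of 4):

```csharp
// Hypothetical example of enum-to-dictionary boilerplate that LLMs churn out well.
using System;
using System.Collections.Generic;

enum OpCode { Add, Sub, Mul, Div /* ...imagine dozens more variants... */ }

record Handler(string Mnemonic, int Operands, Func<int, int, int> Eval);

static class OpTable
{
    public static readonly IReadOnlyDictionary<OpCode, Handler> Handlers =
        new Dictionary<OpCode, Handler>
        {
            [OpCode.Add] = new("add", 2, (a, b) => a + b),
            [OpCode.Sub] = new("sub", 2, (a, b) => a - b),
            [OpCode.Mul] = new("mul", 2, (a, b) => a * b),
            [OpCode.Div] = new("div", 2, (a, b) => a / b),
            // ...one entry per remaining variant...
        };
}
```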
There's also going to be quite a big ecosystem / standard library difference between languages that had fundamental type system features since the beginning vs. languages that added fundamental features 23 years later.
Imagine all the functions that might return one thing or another, which was inexpressible in C# (until this proposal ships). Will all those functions release new versions to express what they couldn't express before? Will there be an ecosystem split between static typing and dynamic typing?
Having reviewed the proposal (with no guarantee, of course, that that's what discriminated unions (aka product types aka algebraic data types) will look like), it appears that it integrates very nicely with the language.
I don't suspect they'll make too many changes (if any) to the existing standard library itself, but rather will put functions into their own sub-namespace moving forward like they did with concurrent containers and the like.
Given their penchant for backwards compatibility, I think old code is safe. Will it create a schism? In some codebases, sure. Folks always want to eat the freshest meat. But if they do it in a non-obtrusive way, it could integrate nicely. It reminds me of tuples, which had a similar expansion of capabilities, but the integration went pretty well.
>that's what discriminated unions (aka product types aka algebraic data types)
Just an FYI, discriminated unions are not product types, they are sum types. It's named so because the total number of possibilities is the sum of the number of possibilities for each variant.
Ex: if a sum type Color is a PrimaryColor (Red, Blue, Yellow) OR a SecondaryColor (Purple, Green, Orange), the total number of possibilities is 3 + 3 = 6.
For a product type, if a ColorPair is a PrimaryColor AND a SecondaryColor, the total number of possibilities is 3 * 3 = 9.
Both sum types and product types are algebraic types, in the same way that algebra includes both sums and products.
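If it helps to see the counting argument in code, here's a sketch in today's C# with records standing in for the eventual union syntax (illustration only, not the proposal's syntax):

```csharp
// Sum type: a Color is a Primary OR a Secondary -> 3 + 3 = 6 possible values.
enum PrimaryColor { Red, Blue, Yellow }
enum SecondaryColor { Purple, Green, Orange }

abstract record Color
{
    public sealed record Primary(PrimaryColor Value) : Color;
    public sealed record Secondary(SecondaryColor Value) : Color;
}

// Product type: a ColorPair is a Primary AND a Secondary -> 3 * 3 = 9 possible values.
record ColorPair(PrimaryColor Primary, SecondaryColor Secondary);
```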
For the standard library, I'm curious about the family of TryParse-style functions, since those are well modeled by a sum type. Maybe adding an overload without an `out` argument would give you back a DU to `switch` on.
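Purely speculative, but such an overload might end up being consumed something like this; the ParseResult type and its cases are invented for illustration and approximated with records, since the union syntax isn't final:

```csharp
// Hypothetical DU-style TryParse alternative, approximated with records today.
using System;

public abstract record ParseResult<T>
{
    public sealed record Ok(T Value) : ParseResult<T>;
    public sealed record Error(string Message) : ParseResult<T>;
}

public static class Numbers
{
    public static ParseResult<int> ParseInt(string s) =>
        int.TryParse(s, out var v)
            ? new ParseResult<int>.Ok(v)
            : new ParseResult<int>.Error($"'{s}' is not an integer");
}

class Demo
{
    static void Main()
    {
        var message = Numbers.ParseInt("42") switch
        {
            ParseResult<int>.Ok(var value)  => $"got {value}",
            ParseResult<int>.Error(var why) => $"failed: {why}",
            _ => "unreachable",
        };
        Console.WriteLine(message);
    }
}
```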
I make that mistake more often than I care to, but you are 100% spot-on. They are sum types, not product types. Thank you for making me walk the walk of shame!
I have points to burn, so I'll post, because I know this will rub some folks the wrong way- apologies in advance.
I use Windows. In fact, I like Windows. I know lots of (ok, more than 5) greybeards who feel exactly the same way. I don't want Linux to be Windows, but I also don't want Linux on my personal desktop either.
I have a Mac Mini M1 on my desk, and I use that for the things it's good for, mainly videoconferencing. It's also my secondary Adobe Creative Suite machine.
On my Win11 desktop, I have WSL2 with Ubuntu 24.04 for the things it is good for- currently that's Python, SageMath, CUDA, and ffmpeg. For my Unix fix, I use Git Bash (MSYS2) for my "common, everyday Unix-isms" on Windows.
I also use PowerShell and Windows Scripting on my box when I need to.
Why? Well, firstly, it's easy and I've got stuff to do. Secondly, cost is not really an issue- I bought my Windows Pro license back with Win7, and it was about $180. That was maybe 15 years ago. They have graciously upgraded me at every step- Win7 -> Win10 -> Win11, all at no cost. Even if I had had to buy it, my Taco Bell tab is higher in any given month than a Windows license (love that inflation).
Why else? Everything works. I get no annoying popups, and I really no longer sweat garbage living on my drive, because that ship has sailed; wanna waste 50GB? Sure, go ahead.
But the most important reason? My hardware is supported. My monitors look great; printers, scanners, mice and USB drives & keys all work. In fact, >90% of the time, everything just works. Further, I can share effortlessly with my Mac, all my Linux servers speak SMB (CIFS), Wireshark works, and my programs are all supported including most open source software. And I do run apps that are 20+ years old from time to time.
Truth be told, I have tried the dance of daily driving Linux, and it's a laundry list of explanations to others why my stuff is different or deficient in some way. The kicker is that my clients don't care about purity or coolness factors.
Linux has its place. But please don't put it on my main machine, and please don't give it to my family members. They're only being nice by living with a sub-par desktop experience. It will always take a herculean effort to stay on par with Windows or MacOS, and no one really wants to put their money where their mouth is.
Please don't misunderstand. I admire and respect authors of open source software, and my servers thank them. But being a contrarian and dogfooding what KDE and GNOME put out, fighting with Nvidia and AMD, dealing with constant driver interface changes, and not having proper commercial software support is not my idea of fun. It was 30 years ago. Today? I'd rather hang with my daughter or write some code.
These distros have had 35 years. I don't know what else to say.
I have the same experience. I've tried to use Linux as a desktop since 2000. And tried and retried. Year after year and distro after distro.
Until I realized that the desktop experience on Linux will never be on par with Windows, and that I need things to just work instead of constantly fiddling to make them work.
I discovered that Gimp is not Photoshop and Libre Office is not MS Office. And I discovered that running things under Wine is not always great.
I discovered I need and want to run Windows software.
I discovered that I like the hardware to work out of the box.
For me, Windows is great as a desktop. And I develop microservice based apps that run under Linux containers/Kubernetes in cloud.
Docker Desktop, WSL and Hyper-V are taking care of all of my potential Linux needs.
I also have a MacBook Pro, but I don't care much about the OS, I mainly bought it for the good battery life and use it to browse the web and watch movies in bed or on the couch or while traveling.
> But the most important reason? My hardware is supported. My monitors look great; printers, scanners, mice and USB drives & keys all work. In fact, >90% of the time, everything just works.
The only things I have had issues with are one printer and one graphics card, across many machines over 20 years, so I would say Linux manages better than 95% "just works".
I strongly disagree. Linux (KDE) is a far superior desktop experience these days, compared to Windows 11. Have you even seen the new Win11 taskbar and the shitty Start Menu - they ruined something which they perfected in Win7. The overall UX has taken a deep dive - like with the unwanted removal of classic Control Panel applets like "Window Color and Appearance" (which doesn't have a replacement), and the continued bolting-on of unwanted crap like Copilot and forced MS Accounts - like, even the CLOCK app requires you to sign in (why?) [1]. There are even ads in MS PAINT [2]! Tell me if this is acceptable?
> It will always take a herculean effort to stay on par with Windows or MacOS, and no one really wants to put their money where their mouth is.
I also disagree with this; in fact, Linux has surpassed Windows and macOS in many areas.
Take updates for instance: especially on distros with atomic updates, they are far more reliable and a pleasant experience compared to Windows. Atomic transactions mean updates either apply or they don't - there's no partial/failed state, so no chance of an update failing and potentially borking your PC. Plus, distros which offer atomic updates also offer easy rollbacks - right from the boot menu - in case of any regressions. Updates also do not interrupt you, nor force you to reboot unexpectedly - you reboot whenever YOU want to, without any annoying nag messages.
Most importantly, updates do not hold your PC hostage like Windows does - seeing that "please wait, do not turn off your computer" has got to be the #1 most annoying thing about Windows.
It's amazing that even with 40 years of development + trillions of dollars at their disposal, Microsoft still can't figure out how to do updates properly.
Finally, your PC will continue to receive updates/upgrades for its entire practical lifespan, unlike Windows (regular versions) which turns a perfectly capable PC into e-waste. Win11 blocking Kaby Lake and older CPUs is a perfect example of planned obsolescence and honestly, it's disgusting that people like you find this acceptable.
There are several other areas where Linux shines, like immutable distros, Flatpak apps, sched_ext schedulers, x86_64 microarchitecture optimisations, low resource usage... I could write an entire essay about this, but that will make this lengthy post even lengthier.
> But being a contrarian and dogfooding what KDE and GNOME put out
Please don't put KDE in the same sentence as GNOME. The GNOME foundation have lost the plot and have betrayed their fans, ever since they released the abomination that is GNOME 3. KDE on the other hand, still delivers what users want (ignoring the KDE 4 era). KDE v6 has been a near-flawless experience, and still has a classic, familiar desktop UX that any old time Windows user would love and feel right at home with, unlike Win11.
> fighting with Nvidia and AMD
Please don't put nVidia and AMD in the same sentence. nVidia sucks and that's completely nVidia's fault for not supplying a full opensource driver stack (their new open kernel module is an improvement, but many driver components are still proprietary and rely on their GSP).
AMD on the other hand, has been a super-pleasant experience over the past few years. Ever since Valve got involved with their Steam Deck efforts, AMD drivers, KDE, Wine and several other related areas have seen massive improvements. I seriously doubt you would have any major complaints with AMD GPUs if you've tried them on a recent distro.
> not having proper commercial software support
What sort of commercial software does your family require? Mine don't need any (and nor do I). The family members who are still working have their own work-supplied Windows/macOS laptops, so that takes care of the commercial side of things, and outside of work we don't need any commercial software - and we do everything most normal PC users do - surfing the web, basic document/graphics/video editing, printing/scanning, file/photo backups etc. Everything works just fine under Linux, so I'm not sure what we're missing out on by not using commercial software.
> These distros have had 35 years. I don't know what else to say.
Maybe don't use an ancient distro that's stuck in the past? Try a modern immutable distro like Aurora [3] or Bazzite [4] and see for yourself how much things have changed.
> the unwanted removal of classic Control Panel applets like "Window Color and Appearance" (which doesn't have a replacement)
Annoys the fsck out of me too. Hmm... Just occurred to me: What if one copied the correct .cpl file into the C:\Windows\System32 directory of a Win 10/11 box and just ran it (if nothing else, from the command line)?
Probably won't help. :-( AFAICT, the Control Panel applets just put the values you specify into the correct place in the Registry, and the latest versions of Windows just don't care what's there any more. Source: I used to edit the Windows Border Width value directly with RegEdit, but after a while it stopped using that value (and even started setting it back IIRC).
"Maybe don't use an ancient distro that's stuck in the past? Try a modern immutable distro like Aurora [3] or Bazzite [4] and see for yourself how much things have changed."
This has always been the riposte to Linux-for-normies sceptics - "you haven't tried these modern distros, X, Y, Z".
I've gone down that route several times and they always have issues, from drivers to config settings to just being too different compared to Windows or even MacOS.
Non-tech (and especially older) people will generally have expectations that obscure Linux distros (despite their good intentions) cannot meet; they may well suit users who are more confident and curious about sorting things out themselves, but this idea that somehow "this time it's different" is ultimately on the distro-champions to prove; they've been wrong too many times in the past.
> I've gone down that route several times and they always have issues, from drivers to config settings to just being too different compared to Windows or even MacOS.
You really should give KDE-based distros a try, the UI isn't that much different from the traditional Windows UI paradigm. In fact I'd say KDE is more similar to the Windows 7 UI, than Windows 11 is.
Also, drivers aren't really a problem with compatible hardware. As the person recommending/installing Linux, it is your duty to ensure that they've got compatible hardware. In my experience, anything older than a couple of years, from mainstream brands, works fine. The only couple of cases where I've had to manually install a driver were for printers, but even that is almost a non-issue these days thanks to driverless/IPP printing.
> Non-tech (and especially older) people will generally have expectations that obscure linux distros (despite their good intentions) cannot meet
I'm surprised you mentioned non-tech and older people, because that's exactly who my ideal targets for Linux are: their needs are simple, predictable and generally unchanging. It's usually the tech-savvy and younger people who've got complex software needs and specific workflows that find it hard to adjust to Linux. This was also the case for me: I had over a decade's worth of custom AutoHotkey scripts + mental dependencies on various proprietary software that I had to wean myself off of, before I ultimately switched.
Older, non-techy folks are mostly fine with just a browser and a document editor. This was the case with my mum, and pretty much most of my older relatives. As long as you set up the desktop in a way it looks familiar (aka creating shortcuts on the desktop), they don't cause too much of a fuss. Initially there may be a "how do I do this" or "where did xxxx go?" depending on their needs/workflow. At least in my mum's case, there wasn't much of an issue after showing her the basics.
I'm curious what needs the older folks you know have, which can't be met with an atomic KDE-based distro like Aurora.
> Have you even seen the new Win11 taskbar and the shitty Start Menu - they ruined something which they perfected in Win7.
Yes, one of the biggest visible downgrades of a core feature in W11! It's awful and buggy, but then Windhawk mods and menu alternatives and app launchers exist, so it can be tweaked to be good again (though they didn't perfect anything in W7 or any other version; there is not a single perfect UI component).
> The overall UX has taken a deep dive - like with the unwanted removal of classic Control Panel applets like "Window Color and Appearance" (which doesn't have a replacement)
Again, bad stuff, though the classic Control Panel was also bad; the only consolation is that at steady state you don't use those often.
> CLOCK app requires you to sign-in (why?) [1]. There are even ads in MS PAINT [2]! Tell me if this is acceptable?
It isn't, but then why would you ever use these bad stock apps even if they had no ads? Much better options exist!
But all of those mostly fixable annoyances pale in comparison with the inability to have a great file manager like Directory Opus, or to find any file anywhere instantly with Everything, or to have a bunch of other apps (and then you'd have plenty of other issues tweaking the OS UI, or the sleep or hardware compatibility issues people keep complaining about).
Windows is still enshittified and everyone needs an exit plan.
For family users I recommend macOS. For Windows apps I have a virtualized Win11 IoT with a passed-through GPU. My monitor has multiple inputs and I can't even tell it's not native.
Had we had better process isolation in the mid-90s, I assume web application development would mostly be Java apps, with a mini-VM for each one (sort of a Qubes-like environment).
We just couldn't keep apps' hands out of the cookie jar back then.
Java tried to, and mostly successfully did, run trusted and untrusted code in the same VM. Your applet code ran in the same VM as all the code for managing applets. However, holes were frequent enough that they abandoned the whole idea. (Instead of sandboxing the VM as a whole? Why?)