But land is not distributed equally in the two hemispheres. In the southern hemisphere it's generally concentrated near the equator, where it gets more sunlight.
Re: specialization and the comptime/reflection initiative
Since they allow observing whether a trait is implemented or not in the current crate, they would probably become unsound if impls can later be declared in downstream crates. They're a partial solution, but they also make other solutions harder to implement soundly (and vice versa).
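To make the "observing" part concrete, here's a minimal sketch using the unstable specialization feature (nightly-only; `Marker` and `Probe` are made-up names for illustration):

```rust
#![feature(specialization)] // nightly-only, unstable

trait Marker {}

trait Probe {
    fn is_marker(&self) -> bool;
}

// Blanket impl: the fallback answer for every type.
impl<T> Probe for T {
    default fn is_marker(&self) -> bool {
        false
    }
}

// Specialized impl: chosen whenever `T: Marker` is known to hold.
impl<T: Marker> Probe for T {
    fn is_marker(&self) -> bool {
        true
    }
}
```

Code compiled in the upstream crate may already have baked in the `false` answer for some type; if a downstream crate could later add `impl Marker for ThatType`, the two answers would disagree, which is the soundness hazard.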
Rather than login failures, I would monitor login successes. A sharp decrease in successes likely points to some issue, whereas an increase in login failures might easily be someone trying tons of random credentials on your website (still not ideal, but much harder to act on).
As long as IPv4x support was just something you got via software update rather than a whole separate configuration you had to set up, the vast majority of servers probably would have supported IPv4x by the time addresses got scarce.
However, if it did become a problem, it might be solvable with something like CGNAT.
It would also be easier on routers than CGNAT, since with CGNAT they need to maintain a table mapping the port being used to the destination IP and port, whereas with IPv4x the routing information can be determined from the packet itself and no extra memory would be required.
That's only true when forwarding IPv4x -> IPv4. When you're going the reverse direction and need to forward IPv4 -> IPv4x, well, you still need a table then.
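To make the asymmetry concrete, a rough sketch (all names and types hypothetical, since IPv4x itself is hypothetical):

```rust
use std::collections::HashMap;
use std::net::Ipv4Addr;

/// Toy translator; `u64` stands in for an extended "IPv4x" address.
struct Translator {
    // (public IPv4 address, port) -> extended address of the internal host
    table: HashMap<(Ipv4Addr, u16), u64>,
}

impl Translator {
    // IPv4x -> IPv4 direction: the mapping is derivable from the packet
    // itself, but we record it so replies can find their way back.
    fn outbound(&mut self, ext_src: u64, public: Ipv4Addr, port: u16) {
        self.table.insert((public, port), ext_src);
    }

    // IPv4 -> IPv4x direction: the plain IPv4 header only carries 32-bit
    // addresses, so the extended destination has to come from the table.
    fn inbound(&self, public: Ipv4Addr, port: u16) -> Option<u64> {
        self.table.get(&(public, port)).copied()
    }
}
```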
Re: async in traits, the feature was delayed because it relied on the "Generic Associated Types" and "Impl Trait in Traits" features. If Rust had delayed the whole `async` feature to work on those pretty type-theoretic features, what would you have thought?
Well, in practice the async_trait crate worked just fine. If Rust had delayed the whole async feature, I'd have thought they'd have been better able to handle the function coloring problem via something like OCaml's algebraic effects, rather than following the trend set by JS and C# back then, since OCaml's effects came along much later, after more research into the model.
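For reference, this is roughly what the stopgap looked like with the async_trait crate (a minimal sketch; trait and type names are made up). The macro boxes the returned future behind the scenes so the trait method can be `async` on stable Rust:

```rust
use async_trait::async_trait;

#[async_trait]
trait Store {
    async fn get(&self, key: &str) -> Option<String>;
}

struct MemStore;

#[async_trait]
impl Store for MemStore {
    async fn get(&self, _key: &str) -> Option<String> {
        None // placeholder body, just to show the shape
    }
}
```

The cost is an allocation and dynamic dispatch per call, which is what native async-in-traits eventually avoided.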
What do you mean? Editions don't require full access to source code. Rust in general relies heavily on having access to source code, but that has nothing to do with how editions work
You can write a binary library that exposes a C ABI using Rust (which is indistinguishable from an ordinary C/C++ library) and then provide source for a Rust wrapper crate that provides a "safe" interface to it, much like a C header file.
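A minimal sketch of that setup (crate and function names like `mylib_add` are made up). The binary-only crate is built as a `cdylib` or `staticlib`:

```rust
// Crate A: the binary-only library, built with crate-type = ["cdylib"]
// (or "staticlib"). 2021-edition syntax; the 2024 edition spells this
// `#[unsafe(no_mangle)]`.
#[no_mangle]
pub extern "C" fn mylib_add(a: i32, b: i32) -> i32 {
    a + b
}
```

The wrapper crate ships as source and links against the prebuilt artifact, exposing a safe API much like a header would in C:

```rust
// Crate B: source-distributed wrapper around the prebuilt binary.
// (In the 2024 edition the block is spelled `unsafe extern "C"`.)
#[link(name = "mylib")]
extern "C" {
    fn mylib_add(a: i32, b: i32) -> i32;
}

pub fn add(a: i32, b: i32) -> i32 {
    unsafe { mylib_add(a, b) }
}
```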
> when mixing crates from various editions and how changes interact together.
Could you elaborate more on this? It's not obvious to me right now why (for example) Crate A using the 2024 edition and Crate B using the 2015 edition would require full access to both crates' source, beyond the standard lack of a stable ABI.
Because in order to allow breaking standard library changes across editions, where the affected types are exposed in a crate's public API or change their semantics across editions, the compiler has to be able to translate between them when generating code.
See the Rust documentation on what editions are allowed to change, and the advanced migration guide for examples of manual code migration.
Not so much what has happened thus far, but rather the limitations imposed on what it's actually possible to break across editions.
Or put another way, a hypothetical feature that you made up in your head is the thing that requires source access. Editions do not let you change the semantics of types.
To be fair, Rust tooling does tend toward build-from-source. But this is for completely different reasons than the edition system: if you had a way to build a crate and then feed the binary into builds by future compilers, it would require zero additional work to link it into a crate using a different edition.
Exactly, which is why people should stop talking about editions as if they sort out all of Rust's evolution problems; in your own words, they don't allow changing type semantics.
I think you're too stuck on the current implementation. Work is going into investigating how to evolve the standard library over editions. The "easiest" win would be to have a way to do edition-dependent re-exports of types.
What you're describing sounds more like a potential issue with editions if/when they allow breaking stdlib changes more than a problem with editions as they exist today, which is more what I took the original comment to be talking about.
OK, sure, but again what breaking changes editions do/don't currently allow is independent from what SkiFire13/I was responding to, which was the "requires full access to source code" bit.
Depends on the change. Obviously the compiler doesn't need to care about cross-edition compatibility between crates if the changes in question don't impact the public API. Otherwise, I'd expect the compiler to canonicalize the changes, and from what I understand that is precisely how edition changes are chosen/designed/implemented.
I don't fully agree with this. Yes, shared mutable state can surely suffer from similar issues; however, the cooperative nature of coroutines makes this much easier to handle. OS threads are preemptive and actually run in parallel, so you have to be aware of CPU concurrency and always be ready for a context switch.
Hard disagree. Co-routines are utter hell. They should have never become popular.
With traditional locking, the locked segment is usually very clear. It's possible to use race detectors to verify that objects are accessed with consistent locking. The yield points are also clear.
With async stuff, ANY await point can change ANY state. And await points are common, sometimes even for things like logging. There are also no tools to verify that the implicit "locking" is consistent.
So I often spend hours staring blankly at logs, trying to reconstruct a possible sequence of callbacks that could have led to a bug. E.g.: https://github.com/expo/expo/issues/39428
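A minimal Rust sketch of the failure mode (single-threaded executor, made-up names; `audit_log` stands in for any awaited I/O). The check before the await and the update after it look adjacent, but every other task on the executor can run in between:

```rust
use std::cell::RefCell;
use std::rc::Rc;

async fn withdraw(balance: Rc<RefCell<u64>>, amount: u64) {
    if *balance.borrow() >= amount {
        // Yield point: any other task may run here and spend the same funds,
        // even though there is no data race and no second thread involved.
        audit_log(amount).await;
        *balance.borrow_mut() -= amount; // the invariant checked above may no longer hold
    }
}

async fn audit_log(_amount: u64) {
    // stand-in for an awaited call, e.g. writing to a log service
}
```

With a plain mutex held across the whole section, the check and the update couldn't interleave; here the "critical section" silently ends at the `.await`.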
Memory access performance depends on the _size of the memory you actually need to address_, i.e. the working set. You can clearly see it in the graph in that article at the point where L1, L2, L3 and RAM are no longer big enough to fit the linked list; while the working set still fits in them, performance scales much better. So as long as you give priority to the working set, you can fill the rest of the larger memory with whatever you want without affecting performance.
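A rough way to reproduce that graph yourself (a sketch, not a rigorous benchmark): chase pointers through a single random cycle, so the prefetcher can't help, and vary the working-set size past each cache level.

```rust
use std::time::Instant;

// Average latency of one dependent load for a working set of `len` slots
// (8 bytes each, since the slots hold usize indices).
fn chase(len: usize, steps: usize) -> f64 {
    // Sattolo's algorithm: builds a single random cycle over 0..len,
    // using a simple LCG instead of a real RNG (illustrative only).
    let mut next: Vec<usize> = (0..len).collect();
    let mut rng: u64 = 0x9E37_79B9_7F4A_7C15;
    for i in (1..len).rev() {
        rng = rng.wrapping_mul(6364136223846793005).wrapping_add(1);
        let j = (rng >> 33) as usize % i;
        next.swap(i, j);
    }

    let mut idx = 0usize;
    let start = Instant::now();
    for _ in 0..steps {
        idx = next[idx]; // each load depends on the previous one
    }
    std::hint::black_box(idx);
    start.elapsed().as_nanos() as f64 / steps as f64
}

fn main() {
    // ~32 KiB, ~512 KiB, ~8 MiB, ~128 MiB working sets.
    for &len in &[1 << 12, 1 << 16, 1 << 20, 1 << 24] {
        println!("{:>10} slots: {:.2} ns/load", len, chase(len, 10_000_000));
    }
}
```

While `len` fits in L1/L2/L3 the per-load cost stays at a few nanoseconds; once it spills to RAM it jumps, regardless of how much other memory the machine has.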
Italy's digital ID (SPID) works by having multiple trusted providers that can attest to your identity. You can sign up with several of them, and if one is not available you can use another. Not perfect (it's still centralized in the hands of 10-20 providers) but better than nothing. Unfortunately most people only ever signed up with one provider, and the government is now pushing for a more centralized digital ID instead (CieID).
All of these IDs in the EEA are based on a common set of EU requirements, and in theory that means multiple providers, but in practice in many countries the set of providers is small and with feature gaps. E.g. Norway has several providers, but they provide different levels of security and features, which means in practice most people rely on BankID...
10-20 is fantastic in comparison. Even if people don't have more than one, it at least reduces the blast radius.