The article is actually pretty interesting. The only project mentioned by name is curl, and that's because of abuse by uneducated developers using AI with no idea what they're doing.
I actually think that's the central thesis of the article, especially the last example, which discusses the LLVM compiler project getting raked over the coals after not engaging with a non-developer who had used AI to make pull requests and admitted he had no idea what the code did.
Buried in the middle of the article is a paragraph that I think sums up the main point well.
> More broadly, the very hardest problem in open source is not code, it’s people — how to work with others. Some AI users just don’t understand the level they simply aren’t working at.
The point being that without a good programmer, AI is not very useful.
One complaint I have about IPv6 and MAC addresses is that they use hex separated by colons. Not only is an IPv6 address way longer than an IPv4 address, you can't rattle one off using a number pad. Back when I did full-time IT, it would have been a nightmare if IPv6 addresses had to be entered as often as IPv4 addresses are.
I think 1 IP address per human was short-sighted; we ran out before the human population doubled. But I think a billion per human was just someone liking powers of two, nothing more. An “IPv5” with 48-bit addressing would have done pretty well, written as 6 octets or 4 base-12 groups. For humans you could reserve all ambiguous addresses and still have about 50k times as many addresses as IPv4 while people sorted themselves out. You could still see at a glance that they were IPv5 addresses:
1047.258.300.0/24
v4 is 32-bit, v6 is 128-bit. I think 64 bits would have been a more obvious happy medium.
Conveniently, 2^13 = 8192 allows you to use most of the information available in four decimal digits. And 64 = 13•5 - 1 means that you get a roughly even division into five address tiers (with either the first or last one half the size). 4095.8191.8191.8191.8191 is a bit worse than 255.255.255.255 but not nearly as bad as ffff:ffff:ffff:ffff:ffff:ffff:ffff:ffff.
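The arithmetic behind that 4095.8191.8191.8191.8191 format can be sketched in a few lines of Python (a toy formatter for the hypothetical scheme above, not any real protocol): one 12-bit top group followed by four 13-bit groups, 12 + 4·13 = 64 bits.

```python
def format_addr64(addr: int) -> str:
    """Render a 64-bit integer as five dotted decimal groups, high bits first.

    The low four groups are 13 bits each (max 8191); the remaining
    top group is 12 bits (max 4095).
    """
    assert 0 <= addr < 2**64
    groups = []
    for _ in range(4):
        groups.append(addr & (2**13 - 1))  # peel off a 13-bit group
        addr >>= 13
    groups.append(addr)                     # the 12 high bits that remain
    return ".".join(str(g) for g in reversed(groups))

print(format_addr64(2**64 - 1))  # -> 4095.8191.8191.8191.8191
```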
While I do agree with you, I think we should be long past the point of needing to manually enter IP addresses. We have several good service-discovery protocols, plus DHCP and DNS, which are less great but still have pretty good tooling these days.
I am using IPv6 on my home network and I don't know any of the addresses. Everything picks up its configuration and gets an address assigned automatically. To access the hosts that matter, I use their mDNS names.
No way, they're a dime a dozen million. NCP addresses are worth much more, because there were only 256 of them. That's where the big money's at. I saw the classic NCP address 134 (MIT-AI) appraised for millions on the Antique Information Superhighway Road Show.
IPv6 is such a failure. Even if it eventually 'takes over', it's still a failure. It's over-engineered for something customers didn't ask for (an IP address for every grain of sand in the universe).
There's something to be said for a human-readable IP address. With IPv6 you can't tell a person the address; you have to copy and paste. You can't infer any information about the address just by looking at it. It adds unnecessary overhead to small packets, etc.
Is a human-readable IP really that important, though? Even though I know about CIDR, it's still hard for me to think about intuitively, and by far the part of Kubernetes I dislike most is figuring out the networking-layer design. I feel like the only reason we need human-readable IPs in the first place is to understand all this NAT stuff, which we wouldn't particularly need if we had bigger IP ranges.
NAT means we can keep working with IPv4, which makes IPv6 moot. You just supported my argument.
Honestly, all they had to do was add alpha characters. That alone would have given us more IP addresses than we'd ever need while keeping them human-readable.
10^53 kg (roughly the ordinary-matter mass of the observable universe) / 2^128 ≈ 10^14 kg per address, though I have no idea what fraction of the universe is sand.
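As a quick sanity check of that division (assuming the ~10^53 kg figure):

```python
# Mass of the observable universe divided among all possible IPv6 addresses.
universe_mass_kg = 10**53        # rough ordinary-matter estimate
addresses = 2**128               # size of the IPv6 address space
kg_per_address = universe_mass_kg / addresses

print(f"{kg_per_address:.1e} kg per address")  # on the order of 10^14 kg
```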
In practice, the number of allocations is much smaller because IPv6 is effectively a 64 bit address space, with the second half reserved for edge networks.
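That split is easy to see with Python's standard `ipaddress` module, using an address that appears elsewhere in this thread (a rough illustration; prefix lengths vary in practice, but /64 subnets are the norm, with the bottom 64 bits used as the interface identifier):

```python
import ipaddress

# Split an IPv6 address into its routed /64 network and interface identifier.
addr = ipaddress.IPv6Address("2a01:4f8:1c1c:f6aa::1")
value = int(addr)

prefix = value >> 64                 # top 64 bits: the routed network
interface_id = value & (2**64 - 1)   # bottom 64 bits: the host's identifier

print(hex(prefix))        # -> 0x2a0104f81c1cf6aa
print(hex(interface_id))  # -> 0x1
```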
> There's something to be said about a human readable IP address.
Is 2a01:4f8:1c1c:f6aa::1 really so unreadable, given that every device needs a different number?
The large address space, and thus the long addresses, is an artifact of optimizing the routing.
I think just eliminating the endless discussions of "my Xbox only got NAT type 2, how do I change it?" alone saves more lifetimes than are lost to copy-and-pasting addresses (not to speak of the large infrastructure cost of maintaining CGNAT at scale).