People posting have mentioned that IPv4 is working for what they use the internet for. But of course it is. When NAT has been required for your whole life, how could the internet have built features that needed p2p routing? Just convince businesses to build something that requires special router configuration? And that still wouldn't work on phones or with ISPs that require CG NAT? You got what worked out of the box. You obviously couldn't use what didn't exist.
Even if NAT is gone one day, the stateful firewalls won't be. Every home router would still ship with "deny all incoming" by default, and every corporate network would have the same setting as well.
Same as with IPv4, serving over IPv6 would still need registration with the border device, either manually by the user or via a UPnP equivalent.
UDP hole punching works when you don't have symmetric NAT. So e.g. voice and video calls don't need a proxy and can be higher quality. You only need a third party to locate/signal your peer.
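Roughly, a minimal sketch of the punching step, assuming the signaling already happened and both sides know each other's public ip:port (the peer address below is just a placeholder):

```python
# Minimal UDP hole punching sketch. Assumes an out-of-band signaling step
# already told us the peer's public ip:port; the address below is a
# documentation placeholder, not a real peer.
import socket, time

PEER = ("203.0.113.7", 40000)   # learned from the third-party signaling server (placeholder)
LOCAL_PORT = 40000              # the port we advertised for ourselves

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("0.0.0.0", LOCAL_PORT))
sock.settimeout(1.0)

for _ in range(10):
    sock.sendto(b"punch", PEER)           # outbound packet creates/refreshes our NAT mapping
    try:
        data, addr = sock.recvfrom(1500)  # once the peer does the same, its packets get through
        print("hole punched:", data, "from", addr)
        break
    except socket.timeout:
        time.sleep(0.5)                   # retry; both sides need overlapping attempts
```

With symmetric NAT this falls over, because the mapping the signaling server observed isn't the one your NAT uses toward the peer.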
"everything gets a global IP, no more NAT headaches" was one of marketing talking points for IPv6. Not necessarily the case nor welcomed by everyone, but that was the intent.
Wide scale deployment of NAT (the "home router" that allowed you to connect multiple devices) was the greatest leap in internet security we ever made. I remember the days when we had "everything gets a global IP," and we do NOT want to go back to that. Look up Conficker, Code Red, Blaster, etc.
People naively assume the large IPv6 address space somehow hides your computer on the internet. That isn't true, both because v6 host discovery is a solved-ish problem for attackers and because worms have near-unlimited resources to throw at the wall.
NAT is technically not a firewall in itself; I believe some early NAT implementations used deterministic mappings from an external address range to internal ip:port pairs. They can be made more transparent if that is the goal.
But as for what the proliferation of cheap Wi-Fi routers with cheap dynamic NAPT, in conjunction with UPnP, did for XP-era PC security - 100% agreed, it was like sunlight self-disinfecting brass door handles.
They had to do with computers being directly addressable, routable, and reachable by the entire Internet, which was the default prior to widespread deployment of NAT. NAT isn't the best way to do it, but it probably is the single biggest factor in reducing the external reachability of endpoint IPs.
NAT deployment here is only tangential to the real differentiator: the firewall. I mean, you can make a case that NAT is a poor man's firewall but you should know that it's not a substitute for a security model. Zero trust is now the dominant philosophy, and it allows for firewall rules to be derived procedurally.
It's a shame the likes of Microsoft only care about "zero trust" insofar as it ticks compliance checkboxes with the US government. They see it as a chore, contrary to Google, Cloudflare, et al.
With how trivial generating new addresses in IPv6 is, it'd be cool to have a host block all incoming traffic on its own and have each service that deserves to be reached over the network listen on an address unique to that service.
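A rough sketch of what that could look like, assuming the host has a routed /64 and the chosen address has already been added to an interface (the address below is a placeholder from the documentation prefix):

```python
# Sketch: bind one service to its own IPv6 address inside the host's /64.
# The address is a placeholder from the 2001:db8::/32 documentation range
# and must already be configured on an interface for bind() to succeed.
import socket

SERVICE_ADDR = "2001:db8:1234:5678:9a3f:2c1:77de:41b5"  # random interface ID for this one service

srv = socket.socket(socket.AF_INET6, socket.SOCK_STREAM)
srv.bind((SERVICE_ADDR, 8080))   # reachable only via this address, not the host's other ones
srv.listen()
print("listening only on", SERVICE_ADDR)
```

The host-level "block everything incoming" rule would then just carve out exceptions per address.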
In a /64, enumerating all hosts will not be as practical as enumerating all ports on a single IP. Further, you will not be able to tell that two services are running on the same host just from the IP.
I can do more with the Internet today than I could with a static /22 assigned over my ISDN BRI back in the mid-1990s. A lot of things I would do back then, I would do differently today; running a chat system by connecting directly out to 6667/tcp feels pretty silly now, for instance. It's rough to build protocols that work that way today, but you're not missing much. Things were not better before the advent of presumptive NAT.
p2p was simpler. The NAT epidemic has totally suffocated P2P because no one can host anything anymore.
You can't trivially host your own blog, for example, without going to your ISP and requesting a static address, and then configuring port forwarding. This is why everyone got stuck on social media: they essentially need someone else to run their website.
That's a retcon. People used Blogger because it was more convenient than setting up Apache and PHP on a webserver of their own. Linux nerds for whom doing that is no big deal are an infinitesimal fraction of everyone who blogged.
why does it have to be such a big ordeal? A blog is pretty much just a static site.
Is it unimaginable that someone uses an HTML editor like Microsoft Word or something to write a blog and then copies it into the folder of a static web server? I'm sure it would be way simpler if people had the time to figure out P2P and the associated UI; it's not fundamentally super complicated versus client-server.
Just the idea of having an always-on computer anywhere in your home excludes probably more than 80% of everyone who has ever written a blog. IPv4 is not why people use hosted services.
> Just the idea of having an always-on computer anywhere in your home excludes probably more than 80% of everyone who has ever written a blog.
I have yet to meet someone who turns off the router at night, although I have heard of such people.
Then if you think about it, TVs, washing machines, etc.: people are too lazy to turn them off, and OLED TVs even need to stay powered while not being used.
Because you have to have a TV for movies (unless you want to watch them on a small laptop screen)[1]. Whereas there is very little practical reason to self-host a blog.
[1]: Yes I know cinemas exist but they are very expensive and don't show content on demand.
Well sure, I’m not trying to say that the internet is less capable generally now than in the past.
I’m suggesting that the way you build an app is shaped by the prevalence of NAT, the same way the apps you build are shaped by how much bandwidth home users have on their devices.
Some types of apps benefit from p2p functionality, and those hit obstacles for normal users due to port forwarding requirements, and are largely impossible with CG NAT. I don’t think NAT is a villain, just something that does affect what and how we build stuff.
Did something new happen with paint.net? Or just a post to remind us?
I love paint.net. Recently purchased a Windows Store license for it. Clearly a winner for most of the image editing needs I have, for things like basic cropping, dpi changes, or changing formats. I treat it like I did GraphicConverter on Mac. Just a beloved image tool.
Lately I’ve been using it for simple file conversion with roll20, to hand-tune my assets for small downloads with webp.
Do you have any evidence to back up the claim that the async efforts have taken away from other useful async features?
Also, lots of major rust projects depend on async for their design characteristics, not just for the significant performance improvements over the thread-based alternatives. These benefits are easy to see in just about any major IO-bound workload. I think the widespread adoption of async in major crates (by smart people solving real world problems) is a strong indicator that async is a language feature that is "actually useful".
The fight is mostly on hackernews and reddit, and mostly in the form of people who don't need async being upset it exists, because all the crates they use for IO want async now. I understand that it isn't fun when that happens, and there are clearly some real problems with async that they are still solving. It isn't perfect. But it feels like the split over async that is apparent in forum discussions just isn't nearly as wide or dramatic in actual projects.
Yeah, the rod did nothing. I also think it would be odd to assume that this thing is single-phase.
As others have suggested, compaction is potentially needed to transport waste from the deeper parts. But there are several security considerations with just ejecting trash as-is from a high-security area. I imagine a trash compactor is also a way to destroy the trash to prevent the old “spy tosses a data device into the prisoner cell block waste chute and has his allies follow the Death Star to pick it up during ejection” trick.
The privacy issue is that your local WiFi provider, direct ISP, and all the intermediate ISPs can see not only which site you visit, but all your activity within that site (like which pages you visit or things you download).
The security part is that any of those who can view can also do a “man in the middle” attack. Comcast could decide to send you a different version of the website that was more favorable to their company, or inject ads (ISPs have been known to inject ads on sites they don’t own before https was big).
A hacker could send you a version that gets you to download malware by replacing content or links. They can see and affect everything you do and see on such a site if they can intercept your request.
Yeah, but there are so many reasons not to torrent. Cable upload speeds still suck because of a combo of technical limitations and very slow investment in new equipment, and not enough people have fiber yet.
Worse, many are getting pushed deeper into CG NAT on IPv4, possibly with no IPv6 available. Combined with households having more people and devices sharing the same router than ever before, that makes it less likely someone is just going to figure out how to forward a port.
Sure, there are some workarounds, but the real failure here is in not moving to IPv6 and fiber fast enough to support any p2p tech. There’s basically no generation that ever had p2p-ready internet.
Upload speed seems like a non-concern, just based on time. Even very active people are going to spend like 1% of available time downloading. So effective net upload bandwidth is going to be 100x bigger (times your connection's up/down speed ratio).
So long as people seed for some time, there tends to be amazing availability and absurdly fast speeds.
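Back-of-the-envelope for the numbers above, with assumed inputs (1% active download time, a 10:1 down/up connection):

```python
# Rough seeding-capacity estimate; both inputs are assumptions, not measurements.
active_fraction = 0.01   # fraction of the day spent actually downloading
down_up_ratio = 10       # download speed divided by upload speed

# Bytes you can upload per byte you download, if you keep seeding the rest of the day:
seed_ratio = (1 / active_fraction) / down_up_ratio
print(seed_ratio)        # 10.0 -> each downloader can feed roughly ten others over time
```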
I'd love exact numbers, but my impression is 80%+ of consumer wifi routers ship with UPnP-IGD and often Apple's NAT-PMP, which means any torrent program worth its salt will port-forward just fine. I can't think of the last household wifi I was on that didn't have UPnP-IGD, it's been so long.
In some ways p2p has been underinvested in because it has worked so well for so long. Various public & private trackers have come and gone but there's been a variety of good-enough options. Tribler pioneered p2p search over BitTorrent a long time ago & some users report that it works surprisingly well for them & I think maybe that technique is semi-widely done in clients now.
I think the main thing is just getting new folks in the door and set up to find stuff. Bandwidth & connectivity seem pretty great. But we keep having major trackers collapse, and it's unclear how to get people started & successful.
Torrenting from your home connection is a fool's errand. Shared infra in a friendly jurisdiction costs less than a Netflix subscription, and comes with one-click web UIs for your favorite torrent client.
Clicking around in my provider's web panel yesterday, I discovered they went so far as to serve up my downloads directory over HTTP behind basic auth, so I can download things directly to my phone while on the go.
A little bit more secret sauce: I have a script that rsyncs everything from the downloads/kids dir into a folder the kids can access on the Linux machine plugged into the TVs (and idem for the adults dir for wifey and me).
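Something like this, roughly (paths are made up; assumes the provider's storage is mounted locally and rsync is installed):

```python
# Rough sketch of the sync job; directory names are placeholders.
import subprocess

SYNC_DIRS = {
    "/mnt/seedbox/downloads/kids":   "/srv/media/kids",    # folder the kids' profile can see
    "/mnt/seedbox/downloads/adults": "/srv/media/adults",  # for the grown-ups
}

for src, dst in SYNC_DIRS.items():
    # -a keeps permissions/timestamps, --ignore-existing skips files already copied
    subprocess.run(["rsync", "-a", "--ignore-existing", f"{src}/", dst], check=True)
```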
It is always the year of Linux on the desktop if you're willing to make your family suffer! Freedom from the copyright subscription fascists etc...
Good point re: investment not being made in p2p tech. Maybe consumers never demanded it, or it could be that the ISPs are also the broadcasters who want to protect their copyrights.
(Thinking about it, it's not clear this had to be the case: the phone lines were made for symmetric communication, but dial-up was subsumed by cable providers. We got higher speeds, but only because it was in the broadcasters' interest to sell us media packages; they were never going to help us send files to each other.)
Worse, a lot of ISPs have implemented transfer caps that disincentivize seeding. I dropped my residential-class service and switched to business-class which has no caps, but that's probably not an option for everyone.
The author doesn’t know what perfectionism is. They are using the word in the “I like to do things well” sense, which is basically never the version people are concerned with.
I interpreted her statements about free will and climate change very differently.
My perception was that she thought she was getting pushback from climate groups when she said that free will didn’t exist, and that these groups were effectively accusing her of being opposed to climate action, on the basis of them thinking their free will was the only way to fix climate change.
Yeah, so this is best thought of as a backup tool in which the pieces are kept separately from each other, more than just a different way to encrypt a file.
If you give a plaintext file to a friend for safe keeping, the friend can see the file. You could encrypt the file, but encrypting a file requires a key or password, which you could forget or lose.
The idea here is: split the file into three parts. Put one part in a bank deposit box, give one to a friend, and keep one in a safe at home. Now nobody can restore the original without getting (for example) two of the three pieces.
This is nice because your friend and the bank can’t do anything with their pieces alone, and normally wouldn’t trust each other with them. But if you lose your piece, you can still ask the friend and the bank for theirs.
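The usual construction behind this kind of 2-of-3 split is a threshold scheme (Shamir-style secret sharing). A minimal sketch, not the actual tool's implementation, assuming the secret fits in one integer below the prime; real tools work byte-by-byte over GF(256) and handle whole files, padding, and integrity checks:

```python
# Minimal 2-of-3 Shamir-style secret sharing sketch over a prime field.
import secrets

P = 2**127 - 1  # a Mersenne prime, large enough for a short secret

def split(secret: int, n: int = 3, k: int = 2):
    # Degree-(k-1) polynomial with the secret as the constant term.
    coeffs = [secret] + [secrets.randbelow(P) for _ in range(k - 1)]
    def f(x):
        acc = 0
        for c in reversed(coeffs):      # Horner evaluation mod P
            acc = (acc * x + c) % P
        return acc
    return [(x, f(x)) for x in range(1, n + 1)]

def restore(shares):
    # Lagrange interpolation at x = 0 recovers the constant term.
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % P
                den = (den * (xi - xj)) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

shares = split(int.from_bytes(b"attack at dawn", "big"))
recovered = restore(shares[:2])        # any two of the three pieces suffice
print(recovered.to_bytes(14, "big"))   # b'attack at dawn'
```

With only one share, every possible secret is still equally likely thanks to the random coefficient, which is why neither the bank nor the friend learns anything on their own.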