What's especially strange to me is that in the more distant past, he was a pretty normal guy - at least as normal as any other Linux user. Heck, he had a super great podcast (Linux Action Show).
Something changed in the 2014-ish time frame, when he got more and more politically extreme.
As much as I like to hate on a new OS like the next person, I think it's worth pointing out that we're probably not seeing the full picture here:
When I try to reproduce the problem shown in the article by resizing the Safari window currently displaying it, the drag cursor changes shape at the visible border of the window, not at the shadow, and consequently dragging works as expected.
It wasn't meant as a rebuttal. Just as a point of thought: By showing that at least one application doesn't exhibit the problem, I thought I was showing that the problem might not be related to the Tahoe redesign at all but might have other causes.
It definitely serves to show that this is not a design issue but just a simple bug, and thus has at least some chance of being fixed.
FWIW, I cannot reproduce the issue demonstrated in the original article with any window of any application on my machine (M1 Mac Studio), but I thought that listing a very commonly used application alone would be enough to challenge the article's assertion ("the macOS designers are stupid because they make me do something that doesn't make sense in order to resize windows").
This is absolutely true. The demo in the original article seems quite deceptive in that respect. Nobody would attempt to resize a window by launching their cursor at the corner with great speed as the demo shows. The resize pointer seems to show in exactly the right place, and allows for an extra hit area slightly outside the rounded corner — I don’t see any problem with that.
As for the fact that one cannot resize from inside the window, it makes absolute sense for every other corner of the window, where the user would instead be clicking an icon or some other button (try the top right corner of the Finder, where the search button sits).
So, while I agree on the whole that Tahoe is a huge step backwards in terms of design, this seems like an odd gripe to point out, as it doesn’t in fact seem to be an issue at all.
> As for the fact that one cannot resize from inside the window,
if you check the screencast I posted, you'll see that you can indeed resize from inside the window. Not by a huge margin, but definitely from inside the actual window boundaries.
Indeed, just enough. And the correct resize pointer shows all along the rounded edge, so I agree, this doesn’t seem like the problem it’s made out to be.
I’m referring to the demo in the original article. The mouse pointer moves rather rapidly onto the inside of the window. You can just about see the resize pointer flashing as the user does so. I don’t think I ever attempted to resize a window with such erratic mouse movements. Approaching the corner at reasonable speed shows the resize pointer where expected.
> I’m referring to the demo in the original article.
The article from noheger.at? I am also referring to it. My guess is that the pointer speed is exaggerated due to the zoom of the GIF, and/or that we are using the mouse in different ways.
Yes, that demo. You can clearly see the resize pointer flashing briefly, but the user continues aiming right inside the window. I’m not sure why he’s not stopping when the resize pointer appears. It seems erratic.
Arguably the cursor change is feedback to help you learn, like the icons that appear in the close / minimise / zoom buttons, or stickers on the keys of a musical instrument. You pretty quickly learn which one is which, or you can't use them effectively. At some point you'd hope that common actions become muscle memory.
So if it was something that was learned whilst using the previous version, and worked, I'd argue it wasn't 'erratic'.
USD money market funds from Vanguard pay about 3.7% now. Personally, I would recommend a 50/50 split between a Bloomberg Agg bond ETF and a high-yield bond ETF. You can easily boost that yield by 100bps with a modest increase in risk.
Another thing overlooked in this debate: data center costs normally increase at the rate of inflation. This is not included in most estimates. That said, I still agree with the broad sentiment here: USD 400K is plenty of money to run a colo server for 10+ years on the risk-free interest rate alone.
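To make that arithmetic concrete, here's a rough back-of-envelope sketch. The 3.7% and the USD 400K come from the thread; the colo bill and inflation rate are assumptions purely for illustration:

```python
# Back-of-envelope: does the risk-free yield on 400K cover a colo bill
# that grows with inflation? colo_cost and inflation are assumed figures.
principal = 400_000   # USD, figure from the thread
risk_free = 0.037     # ~3.7% money market yield mentioned above
inflation = 0.03      # assumed annual inflation
colo_cost = 6_000     # assumed annual colo bill (rack + power + bandwidth)

balance = principal
for year in range(1, 11):
    balance += balance * risk_free   # interest earned on the remaining balance
    balance -= colo_cost             # pay this year's (inflated) bill
    colo_cost *= 1 + inflation
    print(f"year {year:2}: balance ~ ${balance:,.0f}")
```

With those assumed numbers, year-one interest (~USD 14.8K) comfortably exceeds the bill, so inflation eats into the surplus rather than the principal.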
Stupid question from me: What are their other costs? I'm a total newbie about data center colo setups, but as I understand it, they include power and internet access with ingress and egress. Are you thinking their egress will be very high, so they need to pay additional bandwidth charges?
One thing that's not quite clear to me is how safe it is to generate v7 UUIDs on the client.
That's one of the nice properties of v4 UUIDs: you can make up a primary key for a new entity directly on the client and the database can use it directly. Sure, there is a tiny collision risk, but it's so small that you can get away with mostly ignoring it.
With v7, however, such a large chunk of the UUID is based on the time, so I'm not sure whether it's still safe to ignore collisions in any application, especially when you consider that clients' clocks are probably quite inaccurate.
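For reference, here's a minimal sketch of the v7 layout from RFC 9562 (a 48-bit Unix-millisecond timestamp plus 74 random bits), roughly what a client would generate:

```python
import os
import time
import uuid

def uuid7() -> uuid.UUID:
    """Minimal UUIDv7 per RFC 9562: 48-bit ms timestamp + 74 random bits."""
    ts_ms = time.time_ns() // 1_000_000
    rand_a = int.from_bytes(os.urandom(2), "big") & 0x0FFF          # 12 random bits
    rand_b = int.from_bytes(os.urandom(8), "big") & ((1 << 62) - 1)  # 62 random bits
    value = (ts_ms << 80) | (0x7 << 76) | (rand_a << 64) | (0b10 << 62) | rand_b
    return uuid.UUID(int=value)

print(uuid7())
```

Even when two clients land in the same millisecond, there are still 74 random bits left (versus v4's 122), so for realistic ID volumes ignoring collisions remains about as safe as with v4; an inaccurate clock mostly hurts the sort order, not uniqueness.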
Yes. Obviously dumb but also nearly 100% successful at the current point in time.
And likely going to stay successful as the non-protected internet still provides enough information to dumb crawlers that it’s not financially worth it to even vibe-code a workaround.
Or in other words: Anubis may be dumb, but the average crawler that completely exhausts some sites' resources is even dumber.
And so it all works out.
And so the question remains: how dumb was it exactly, when it works so well and continues to work so well?
I understand this as an argument that it’s better to be down for everyone than have a minority of users switch browsers.
I'm not convinced that makes sense.
Now ideally you would have the resources to serve all users and all the AI bots without performance degradation, but for some projects that’s not feasible.
does it work well? I run Chromium controlled by Playwright for scraping and typically have Gemini implement the script for it because it's not worth my time otherwise. But I'm not crawling the Internet generally (which I think there is very little financial incentive to do; it's a very expensive process even ignoring Anubis et al); it's always that I want something specific and am sufficiently annoyed by the lack of an API.
regarding authentication mentioned elsewhere, passing cookies is no big deal.
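For illustration, a minimal sketch of that kind of script using Playwright's Python API; the URL, selector, and cookie values below are placeholders:

```python
from playwright.sync_api import sync_playwright

with sync_playwright() as p:
    browser = p.chromium.launch(headless=True)
    context = browser.new_context()
    # Reuse an authenticated session by injecting a cookie copied from a real login
    context.add_cookies([{
        "name": "session",
        "value": "copied-from-browser",
        "domain": "example.com",
        "path": "/",
    }])
    page = context.new_page()
    page.goto("https://example.com/some/listing")
    page.wait_for_selector("article")              # wait until the content is rendered
    print(page.locator("article h2").all_inner_texts())
    browser.close()
```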
Anubis is not meant to stop single endpoints from scraping. It's meant to make it harder for massive AI scrapers. The problematic ones evade rate limiting by using many different IP addresses, and make scraping cheaper on themselves by running headless. Anubis is specifically built to make that kind of scraping harder, as I understand it.
And of all the high-profile projects implementing it, like the LKML archives, none have backed down yet, so I'm assuming the initial improvement in numbers has continued, or it would have been removed since.
I run a service under the protection of go-away[0], which is similar to Anubis, and can attest it works very well, still. Went from constant outages due to ridiculous volumes of requests to good load times for real users and no bad crawlers coming through.
the workaround is literally just running a headless browser, and that's pretty much the default nowadays.
if you want to save some $$$, you can spend like 30 minutes making a cracker like the one in the article. Just make it multi-threaded, add a queue, and boom, your scraper nodes can go back to their cheap configuration. Or, since these are AI orgs we're talking about, write a GPU cracker and laugh as it solves challenges far faster than any user could.
custom solutions aren't worth it for individual sites, but with how widespread Anubis is, it's become worth it.
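As a sketch of what that multi-threaded cracker could look like, assuming an Anubis-style proof of work (find a nonce such that SHA-256 of challenge + nonce starts with N zero hex digits); the challenge string, difficulty, and worker count are made-up placeholders:

```python
import hashlib
from concurrent.futures import ProcessPoolExecutor, as_completed

CHALLENGE = "example-challenge-from-server"   # placeholder
DIFFICULTY = 4                                 # required leading zero hex digits

def search(start: int, stride: int) -> int:
    """Each worker walks its own arithmetic progression of nonces."""
    target = "0" * DIFFICULTY
    nonce = start
    while True:
        digest = hashlib.sha256(f"{CHALLENGE}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce
        nonce += stride

if __name__ == "__main__":
    workers = 8
    with ProcessPoolExecutor(max_workers=workers) as pool:
        futures = [pool.submit(search, i, workers) for i in range(workers)]
        # Take the first nonce any worker finds; the rest finish shortly after,
        # since valid nonces are plentiful at this difficulty.
        print("nonce:", next(as_completed(futures)).result())
```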
Reading the report you reference and other issues linked there, I would say that multiple attempts were made to fix it, all of which unfortunately required some heuristics and all of which have broken something else that was deemed worse.
It seems to boil down to an issue in the underlying X11 machinery and it would need to be fixed there first to build a basis on which proper fixes can be implemented.
Given that X11 is in maintenance mode (and as its fans keep saying: It works perfectly fine and doesn't need any more work done on it), it's not likely that's happening.
So, yes, given that information (and I just arrived at that bug report through your post), I would indeed say that waiting for Wayland is the only option they have. All other attempts ended up causing worse issues.
So I guess the "fix" would be to have two completely separate input handlers on X11, one of which supporting smooth scrolling and multitouch, the other not and then offering users a toggle in the style of
[ ] do not ignore the first scroll input after focus loss, but disable smooth scrolling and multitouch
Plus handling all the potential issues caused by having two separate input handlers.
That's asking a bit much for this particular issue and strongly smells like a case of XKCD 1172.