pilif's comments

what's especially strange to me is that in the more distant past, he was a pretty normal guy, at least as normal as any other Linux user. Heck, he had a super great podcast (Linux Action Show).

Something changed in the 2014-ish time frame, when he got more and more politically extreme.


what do you think changed culturally around 2014 (I'd say it started a little earlier, maybe 2011)?

His views are the normal ones.

As much as I like to hate on a new OS like the next person, I think it's worth pointing out we're probably not seeing the full picture here:

When trying to reproduce the problem shown in the article by resizing the Safari window currently displaying it, the drag cursor changes shape at the visible border of the window, not at the shadow, and consequently dragging works as expected.

https://youtu.be/kNovjjvYP8g

This might be an application- or driver-specific issue, not necessarily a general Tahoe issue.


I'm not sure "it works this way in Application A, and this other way in Application B" is a particularly strong rebuttal.

It wasn't meant as a rebuttal. Just as a point of thought: By showing that at least one application doesn't exhibit the problem, I thought I was showing that the problem might not be related to the Tahoe redesign at all but might have other causes.

It definitely serves to prove that this is not a design issue but just a simple bug, and thus has at least some chance of being fixed.

FWIW, I cannot reproduce the issue demonstrated in the original article with any window of any application on my machine (M1 Mac Studio), but I thought that listing a very commonly used application alone would be enough to challenge the article's assertion ("the macOS designers are stupid because they make me do something that doesn't make sense in order to resize windows").


> It wasn't meant as a rebuttal.

“As much as I like to *” is a common way to start a rebuttal (the subsequent “I’m not going to see/do that” is implied by that turn of phrase).

> but I thought that listing a very commonly used application alone would be enough to challenge the article's assertion

So it was a rebuttal? Why the disingenuous doublethink?


This is absolutely true. The demo in the original article seems quite deceptive in that respect. Nobody would attempt to resize a window by launching their cursor at the corner with great speed as the demo shows. The resize pointer seems to show in exactly the right place, and allows for an extra hit area slightly outside the rounded corner — I don’t see any problem with that.

As for the fact that one cannot resize from inside the window, it makes absolute sense for every other corner of the window, where the user would instead be clicking an icon or some other button (try the top right corner of the Finder, where the search button sits).

So, while I agree on the whole that Tahoe is a huge step backwards in terms of design, this seems like an odd gripe to point out, as it doesn’t in fact seem to be an issue at all.

Edit: clarification


> As for the fact that one cannot resize from inside the window,

if you check the screencast I posted, you'll see that you can indeed resize from inside the window. Not by a huge margin, but definitely from inside the actual window boundaries.


Indeed, just enough. And the correct resize pointer shows all along the rounded edge, so I agree, this doesn’t seem like the problem it’s made out to be.

> Nobody would attempt to resize a window by launching their cursor at the corner with great speed as the demo shows.

... great speed? Interpolating from the zoom, I would say it's not fast at all.


I’m referring to the demo in the original article. The mouse pointer moves rather rapidly onto the inside of the window. You can just about see the resize pointer flashing as the user does so. I don’t think I ever attempted to resize a window with such erratic mouse movements. Approaching the corner at reasonable speed shows the resize pointer where expected.

> I’m referring to the demo in the original article.

The article from noheger.at? I am also referring to it. My guess is that the pointer speed is exaggerated by the zoom of the gif, and/or that we are using the mouse in different ways.

Yes, that demo. You can clearly see the resize pointer flashing briefly, but the user continues aiming right inside the window. I’m not sure why he’s not stopping when the resize pointer appears. It seems erratic.

Arguably the feedback via the cursor change is there to help you learn, like the icons that appear on the close / minimise / zoom buttons, or stickers on the keys of a musical instrument. You pretty quickly learn which one is which, or you can't use them effectively. At some point you'd hope that common actions become muscle memory.

So if it was something that was learned whilst using the previous version, and worked, I'd argue it wasn't 'erratic'.


Judging by this comment https://news.ycombinator.com/item?id=46599464

It seems to be common.


400k would last me 13 years for a rack, power, and 10Gbit/s bandwidth at my colo place (Switzerland, traditionally high prices).


Yes, but that's not their only expense.


Yes, but that’s not the last or only donation they’re receiving either.


Don't bet on receiving money in the future.


It's a community donation-supported project. That's kind of the whole deal.

Regardless, the ongoing interest on $400K alone would be enough to pay colo fees.


Since you've already done the math: does the interest on $400k cover the colo costs?


At a (fairly modest) 3.3%, it's about $1,100/month ($400,000 × 0.033 ÷ 12).

I don't know what kind of rates are available to non-profits, but with $400k in hand you can find nicer rates than 3.3% (as of today, at least).

that covers quite a few colo possibilities.


USD money market funds from Vanguard pay about 3.7% now. Personally, I would recommend a 50/50 split between a Bloomberg Agg bond ETF and a high-yield bond ETF. You can easily boost that yield by 100bps with a modest increase in risk.

Another thing overlooked in this debate: data center costs normally increase at the rate of inflation, which is not included in most estimates. That said, I still agree with the broad sentiment here: $400K is plenty of money to run a colo server for 10+ years on the risk-free interest rate alone.


Stupid question from me: what are their other costs? I'm a total newbie about data center colo setups, but as I understand it, they include power and internet access with ingress and egress. Are you thinking their egress will be very high, and they will thus need to pay additional bandwidth charges?


Becky was so good for participating in mailing lists. I could slip by as a Unix user even though I was still mostly using Windows as my client OS.


Ha!

I have a Becky backup on an Iomega Zip disk that I have to check one day :D


One thing that's not quite clear to me is how safe it is to generate v7 UUIDs on the client.

That's one of the nice properties of v4 UUIDs: you can make up a primary key for a new entity directly on the client, and the database can use it directly. Sure, there is a tiny collision risk, but it's so small you can mostly get away with ignoring it.

With v7, however, such a large chunk of the UUID is based on the time, so I'm not sure whether it's still safe to ignore collisions in any application, especially when you consider that clients' clocks are probably very inaccurate.

Am I overthinking things here?


How many client requests do you get in the same millisecond?

With UUIDv7 it's split into:

- 48 bits: Unix timestamp in milliseconds

- 12 bits: Sub-millisecond timestamp fraction for additional ordering

- 62 bits: Random data for uniqueness

- 6 bits: Version and variant identifiers

So >4,600,000,000,000,000,000 (2^62) IDs per sub-millisecond fraction.

And imprecise time on the client doesn't matter: some clocks are ahead and some behind, but that doesn't make the IDs any more likely to clash.
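
For illustration, a minimal sketch of that layout in Python (my own sketch based on the split above, not any official implementation; a production version would also need monotonicity guarantees within a single timestamp):

    import os
    import time

    def uuid7() -> str:
        """Sketch of a UUIDv7: 48-bit ms timestamp, 4-bit version,
        12-bit sub-ms fraction, 2-bit variant, 62 random bits."""
        ns = time.time_ns()
        ms = ns // 1_000_000                         # 48 bits: Unix time in ms
        frac = (ns % 1_000_000) * 4096 // 1_000_000  # 12 bits: sub-ms fraction
        rand = int.from_bytes(os.urandom(8), "big") & ((1 << 62) - 1)  # 62 random bits
        value = (ms << 80) | (0x7 << 76) | (frac << 64) | (0b10 << 62) | rand
        h = f"{value:032x}"
        return f"{h[:8]}-{h[8:12]}-{h[12:16]}-{h[16:20]}-{h[20:]}"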


Does that factor in the birthday paradox?
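
The raw count above doesn't, but a back-of-envelope birthday bound still comes out negligible: the chance of any collision among n IDs that land in the same sub-millisecond slot is roughly n^2 / 2^63. Even at a million IDs in a single slot, that's about 10^12 / 9.2×10^18, i.e. roughly 1 in 9 million, so the paradox doesn't change the conclusion much.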


If the client can generate a UUIDv4, they can also reuse a known UUIDv4.


They are the same because both projects are inspired by Norton Commander for DOS, which also used those keys.


> They may not be as energy efficient as using more exotic materials

yes, and given that the energy you put in is practically free, it doesn't matter that it's not as efficient.


> This was obviously dumb when it launched:

Yes. Obviously dumb but also nearly 100% successful at the current point in time.

And it's likely going to stay successful, as the non-protected internet still provides enough information to dumb crawlers that it's not financially worth it to even vibe-code a workaround.

Or in other words: Anubis may be dumb, but the average crawler that completely exhausts some sites' resources is even dumber.

And so it all works out.

And so the question remains: how dumb was it exactly, when it works so well and continues to work so well?


> Yes. Obviously dumb but also nearly 100% successful at the current point in time.

Only if you don't care about negatively affecting real users.


I understand this as an argument that it’s better to be down for everyone than have a minority of users switch browsers.

I’m not convinced that makes sense.

Now ideally you would have the resources to serve all users and all the AI bots without performance degradation, but for some projects that’s not feasible.

In the end it’s all a compromise.


does it work well? I run Chromium controlled by Playwright for scraping, and typically make Gemini implement the script for it because it's not worth my time otherwise. But I'm not crawling the Internet generally (which I think there is very little financial incentive to do; it's a very expensive process even ignoring Anubis et al); it's always that I want something specific and am sufficiently annoyed by the lack of an API.

regarding authentication mentioned elsewhere, passing cookies is no big deal.
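
For illustration, a minimal sketch of that kind of setup using Playwright's Python API (the site, cookie name, and cookie value here are hypothetical placeholders):

    from playwright.sync_api import sync_playwright

    with sync_playwright() as p:
        browser = p.chromium.launch()
        context = browser.new_context()
        # reuse an authenticated session by injecting its cookie
        # (name/value/domain are placeholders, not from any real site)
        context.add_cookies([
            {"name": "session", "value": "copied-from-a-logged-in-browser",
             "domain": "example.com", "path": "/"},
        ])
        page = context.new_page()
        page.goto("https://example.com/the-data-i-want")
        print(page.content())
        browser.close()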


Anubis is not meant to stop single endpoints from scraping. It's meant to make things harder for massive AI scrapers. The problematic ones evade rate limiting by using many different IP addresses, and make scraping cheaper on themselves by running headless. Anubis is specifically built to make that kind of scraping harder, as I understand it.


Does it actually? I don't think I've seen a case study with hard numbers.


Here’s one study

https://dukespace.lib.duke.edu/server/api/core/bitstreams/81...

And of all the high-profile projects implementing it, like the LKML archives, none have backed down yet, so I'm assuming the initial improvement in numbers has held up, or it would have been removed since then.


I run a service under the protection of go-away[0], which is similar to Anubis, and can attest that it still works very well. We went from constant outages due to ridiculous volumes of requests to good load times for real users and no bad crawlers coming through.

[0]: https://git.gammaspectra.live/git/go-away


Great, thanks for the link.


the workaround is literally just running a headless browser, and that's pretty much the default nowadays.

if you want to save some $$$, you can spend like 30 minutes making a cracker like the one in the article: just make it multi-threaded, add a queue, and boom, your scraper nodes can go back to their cheap configuration. Or, since these are AI orgs we're talking about, write a GPU cracker and laugh as it solves challenges far faster than any user could.

custom solutions aren't worth it for individual sites, but with how widespread Anubis is, it's become worth it.
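
For illustration, a minimal sketch of such a cracker, assuming the common Anubis-style scheme of finding a nonce so that SHA-256(challenge + nonce) has a given number of leading zero bits (the exact challenge format here is an assumption):

    import hashlib
    from concurrent.futures import ProcessPoolExecutor, as_completed

    def search(challenge: str, difficulty: int, start: int, stride: int) -> int:
        # scan an interleaved slice of the nonce space until a digest
        # has `difficulty` leading zero bits
        target = 1 << (256 - difficulty)
        nonce = start
        while True:
            digest = hashlib.sha256(f"{challenge}{nonce}".encode()).digest()
            if int.from_bytes(digest, "big") < target:
                return nonce
            nonce += stride

    def solve(challenge: str, difficulty: int, workers: int = 8) -> int:
        # processes rather than threads, since hashing is CPU-bound;
        # the first worker to find a valid nonce wins
        with ProcessPoolExecutor(workers) as pool:
            futures = [pool.submit(search, challenge, difficulty, i, workers)
                       for i in range(workers)]
            return next(as_completed(futures)).result()

    if __name__ == "__main__":
        print(solve("example-challenge", difficulty=20))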


color me surprised to see an article that's mostly constructive and positive about systemd use the wrong capitalization of the project name.

Normally, people who mis-capitalize the name are critical of the project.

It's systemd, not SystemD


Reading the report you reference and the other issues linked there, I would say that multiple attempts were made to fix it, all of which unfortunately required some heuristics, and all of which broke something else that was deemed worse.

It seems to boil down to an issue in the underlying X11 machinery and it would need to be fixed there first to build a basis on which proper fixes can be implemented.

Given that X11 is in maintenance mode (and as its fans keep saying: It works perfectly fine and doesn't need any more work done on it), it's not likely that's happening.

So, yes, given that information (and I just arrived at that bug report through your post), I would indeed say that waiting for Wayland is the only option they have. All other attempts ended up causing worse issues.


Other toolkits don't seem to have this issue on X11.


those toolkits do not support smooth scrolling.

The issue comes from XInput2 (https://www.x.org/releases/X11R7.7/doc/inputproto/XI2proto.t...)

So I guess the "fix" would be to have two completely separate input handlers on X11, one supporting smooth scrolling and multitouch, the other not, and then to offer users a toggle in the style of

[ ] do not ignore the first scroll input after focus loss, but disable smooth scrolling and multitouch

Plus handling all the potential issues caused by having two separate input handlers.

That's asking a bit much for this particular issue and greatly smells like a case of XKCD 1172.

