I’ve been running CachyOS for a year now as my daily driver (non-work) on my ancient desktop from 2019 with an ancient Nvidia card. It is very fast and smooth. I mainly use it for development with LLM sidekicks and it doesn’t break a sweat. I use XFCE and just love how fast the experience is.
Which sucks, because it's not exactly a fantastic competitor. There are still very, very noticeable performance differences and render speed/pattern differences that, after you've been using a Chromium-based browser for a long time, give Firefox a feeling of being slow (it's not; it's absolutely just a perception thing, but it's enough to put you off using it).
It is a service available to Cloudflare customers and is opt-in. I fail to see how they’re being gatekeepers when site owners have the option not to use it.
>They ended up with a final sample size of 813 people.
I want BlueSky to succeed but this sampling bias is simply too much to ignore.
This comment (by nunobrito) from a few days ago on a similar topic is the best analysis of this subject.
> This news is awfully similar to click-bait stating "the science is settled" by surveying a small subset of the group and then pretending it represents the whole.
> The paper failed both to identify the overall number of scientists using X and to account for the cases where multiple platforms are used (the most common scenario). Therefore the paper seems biased at best, or downright propaganda at worst.
> NOSTR and Mastodon should never be left out of any serious research.
If the poll was done _properly_, that sample size is _fine_; there’ll be a decent margin of error, but not as much as you might expect. 1k people is a fairly standard size for polls, and even very high-quality ones rarely go over a few thousand.
The real question is whether the poll was done properly.
The article itself acknowledges the self-selection/sampling bias, on top of a minuscule sample size of 813 people. Reducing the “science community” to such a small sample is not convincing.
Self-selection, sure, that's a risk with any and all surveys and questionnaires, and it must be mentioned.
But 813? Why wouldn't that be enough? Basic stats puts that at a very healthy number for most questions, and the researchers don't raise any concerns about the number itself.
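For a rough sense of the numbers: the standard 95% margin of error for a simple random sample (worst case, p = 0.5) works out to roughly ±3.4 percentage points at n = 813. A minimal sketch (assuming simple random sampling, which self-selection of course violates):

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95% margin of error for a proportion from a simple random sample of size n."""
    return z * math.sqrt(p * (1 - p) / n)

# n = 813 gives roughly a +/- 3.4 percentage point margin
print(round(margin_of_error(813) * 100, 1))   # -> 3.4
# For comparison, a few thousand respondents only shaves off about a point
print(round(margin_of_error(3000) * 100, 1))  # -> 1.8
```

This is why pollsters rarely bother going far past a few thousand respondents: the margin shrinks with the square root of n, so the gains taper off quickly. None of it helps if the sample is self-selected, though, since the formula assumes random sampling.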