It did not feel off at all. I read every single word and that is all that counts.
I think what you are getting wrong is thinking that the reader cares about your effort. The reader doesn't care about your effort. It doesn't matter if it took you 12 seconds or 5 days to write a piece of content.
The key thing is people reading the entirety of it. If it is AI slop, I just automatically skim to the end and nothing registers in my head. The combination of em dashes and the sentence structure just makes my mind tune it out.
So, your thesis is correct. If you put in the custom visualization and put in the effort, folks will read it. But not because they think you put in the effort. They don't care. But because right now AI produces generic fluff that's overly perfectly correct. That's why I skip most LinkedIn posts as well. Like, I personally don't care if it's AI or not. But mentally, I just automatically discount and skip it. So, your effort basically interrupts that automatic pattern recognition.
Ironically, being anti-science is pro-science. Skepticism of institutions and consensus is the scientific method.
The main reason is that scientific consensus can lag reality significantly, especially when career incentives discourage dissent. The history of science includes many cases where the consensus was wrong and critics were marginalized rather than engaged.
Deference to science as an authority is the opposite.
Feynman has a quote on this:
"Science is the belief in the ignorance of experts. When someone says, 'Science teaches such and such,' he is using the word incorrectly. Science doesn't teach anything; experience teaches it. If they say to you, 'Science has shown such and such,' you might ask, 'How does science show it? How did the scientists find out? How? What? Where?' It should not be 'science has shown' but 'this experiment, this effect, has shown.' And you have as much right as anyone else, upon hearing about the experiments — but be patient and listen to all the evidence — to judge whether a sensible conclusion has been arrived at."
Somewhere there's a quote about how the old guard has to literally die out before certain new ideas can take root, even if the new idea is obviously correct.
I think we've been pampered by a few hundred years of rapid "scientific advancement," and now we're firmly in an era where things are not grade-school-science-fair easy to see or prove.
"A new scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die and a new generation grows up that is familiar with it." - Max Planck
>Ironically, being anti-science is pro-science. Skepticism of institutions and consensus is the scientific method
skepticism is necessary, but not sufficient.
if they merely nay-say institutions and then go with their gut, it's certainly not.
science is only done when someone attempts to rationally disprove a position, offering alternative testable theories and actually performing those tests.
if you suspect an institution is wrong, that's fine, but it's just a hunch until someone does a test.
I have witnessed doctors prescribing the wrong medication many times. For example, Novalgin given to a mother being discharged from the hospital after a painful birth. That drug is not suitable for breastfeeding mothers!
> Skepticism of institutions and consensus is the scientific method.
Which is why one of the core tenets of practicing Science is “trust, but verify”.
Science is based on the trust of what came before.
But the fallible, ego-driven, and dishonest nature of humanity means that trust alone cannot be relied upon. Hence the “but verify”. That is why replication studies and falsification tests exist - to cull that which cannot be reliably replicated.
Unfortunately, capitalism has stepped in and f*ked up even that, when for-profit universities that rely on public funding place "publish or die" mandates on researchers. This makes repeat experiments untenable because they take researchers away from publishing new data. So they just cite prior papers and chase the latest shiny -- because their continued employment is predicated upon publishing.
We have perverse incentives in place that have distorted science, sure. And almost all of these distortions come directly down to a violently coercive economic system that forces you to be profitable to someone else lest you suffer homelessness, destitution, and even death.
But what else is there? Belief in an insane, evil, and omnicidal sky-daddy?
Sorry, but no. We should counteract the sources of distortions by crushing capitalism and the corrosive influence of money, not switching over to systems that have always proven themselves to be supremely untrustworthy.
Skepticism needs to be calibrated based on the weight of the evidence. There's a broad spectrum from being skeptical about the latest overhyped study in subfield X to being skeptical about quantum mechanics. If you want to challenge established science, you need to bring the receipts. To quote Carl Sagan, "extraordinary claims require extraordinary evidence".
Here is the uncomfortable truth. Only a small group of people are capable of operating at an elite level. The talent pool is extremely small and the companies want the absolute best.
It is the same thing in sports as well. There will only ever be one Michael Jordan, one Lionel Messi, one Tiger Woods, one Magnus Carlsen. And they are paid a lot because they are worth it.
>> Meta seem to be spending so much so they don't later have to fight a war against an external Facebook-as-chatbot style competitor
Meta moved on from Facebook a while back. It has been years since I last logged into Facebook, and hardly anybody I know actually posts anything there. It's a relic of the past.
> Here is the uncomfortable truth. Only a small group of people are capable of operating at an elite level. […] It is the same thing in sports as well.
It’s not just uncomfortable but might not be true at all. Sports is practically the opposite type of skill: easy to measure, known rules, an enormous amount of repetition. Research is the unknown. A researcher who guarantees results is not doing research. (Coincidentally, the increasing rewards in academia for incrementalist, result-driven work are a big factor in the declining overall quality, imo.)
I think what’s happening is kind of what happened on Wall Street. Those with a few documented successes got disproportionately more business based, in large part, on initial conditions and timing.
Not to take away from AI researchers specifically, I’m sure they’re a smart bunch. But I see no reason to think they stand out against other academic fields.
Occam’s razor says it’s panic in the C-suites, and they perceive it as an existential race. It’s not important whether it actually is; what matters is that’s how they feel. And they have such enormous amounts of cash that they’re willing to play many risky bets at the same time. One of them is hiring/poaching the hottest names.
Hot fucking take - but if these 100 (or whatever small number is being thrown around these days) elite researchers disappeared overnight, the world would go on and little of it would be noticed. New people in the field would catch up, and things would be back up to speed quickly enough.
It is not a question of exquisitely rare intellect, but rather the opportunity and funding/resources to prosper.
Hmmmm, I think that only holds if those 100 have not been accurately identified. In pretty much every field I am familiar with, the ability distribution seems to approximate a power law near the top: the gap between the best and the 20th best can be absolutely gigantic.
(And while there are certainly those who could have been the best who did not have the opportunity to succeed, or just didn't actually want to pursue it, I think usually this is way at the edges, i.e. removing the top would not make room for these people, because they're probably not even on anyone's radar at all, like the 'Einstein toiling in a field')
I think sports is even more susceptible to the influence of capital.
Athletes need the following:
- talent/potential
- ability (talent that has been realized)
- work ethic
- luck (could be something as simple as avoiding injuries, supportive family / friends / guardians, etc.)
That will usually get you on the radar. You'll be identified by your coach, talent agents, etc.
Once you cross a certain threshold, usually by the time you've been picked out by talent agents / joined a youth academy, and signed for a sports club with the financial means, you get access to a whole infrastructure that has one goal, and one goal only: To unlock your full potential, and make you the best athlete you can be.
And it is not that dissimilar to how AI researchers are brought up. If you look at pretty much any of the top AI talent, they have the following pedigree:
Very gifted HS students who went to feeder schools / academies and/or participated in some STEM Olympiad -> prestigious universities or top-ranking schools in their field -> well-funded and prestigious research group -> top internships and post-grad employment (or they dropped out to join/found a startup)
You could be the smartest researcher in the world, but if you're stuck at some dinky school with zero budget and can't relocate (or don't get the chance to), you're going to be stuck in the B/C/D leagues.
I think it would be about the same, to be honest. There is always someone who is 90% as good as Stephen Curry and we'll rescale our expectations to match.
While I don’t doubt that these people have great experience and skills, what they really have that others don’t is connections and the ability to market themselves well.
All you need is to publish a couple of the right papers and/or contribute significantly to a couple of the right projects. If you have the brains for that, you’ll be noticed.
Using AI for autocomplete is like using a racecar to pick up groceries.
This is exactly what the author says about avoiding LLMs out of some ideological or psychological refusal.
"Context Engineering" is all the rage now, picked up by AI influencers.
Basically, you give the AI models context. For example, if you want to build a social network, you give the AI all the relevant documentation so it can follow best practices, guidelines, and so on.
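At its core it's just prompt assembly. Here's a minimal sketch of the idea, assuming nothing about any particular tool (the function name, doc strings, and character budget are all illustrative, not a real API):

```python
# Minimal sketch of "context engineering": instead of sending a bare question
# to a model, bundle relevant reference material into the prompt, subject to a
# size budget, so the model can follow it. All names here are illustrative.

def build_prompt(question: str, docs: list[str], max_chars: int = 8000) -> str:
    """Concatenate reference docs (up to a character budget) ahead of the question."""
    context = ""
    for doc in docs:
        # Skip remaining docs once the budget would be exceeded.
        if len(context) + len(doc) > max_chars:
            break
        context += doc.strip() + "\n\n"
    return (
        "Use only the reference material below to answer.\n\n"
        f"--- REFERENCE ---\n{context}"
        f"--- QUESTION ---\n{question}"
    )

# Hypothetical example: building a social network, feeding in design docs.
prompt = build_prompt(
    "How should follower relationships be modeled?",
    [
        "Design doc: followers are a directed edge (follower_id, followee_id).",
        "Style guide: all tables use snake_case and UUID primary keys.",
    ],
)
print(prompt)
```

Real tools layer retrieval, summarization, and tool outputs on top, but the end product sent to the model is still one assembled context like this.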