The 0.1% thing... is that even the right label? I doubt one in a thousand people globally is using these mechanisms. The article spends several paragraphs on the world's richest person and his company's tax strategy. Is the millionaire next door quietly doing these things, or is this about billionaires, in which case it's more like one in a million?
ok so it seems pretty bad that they changed the index rules both to let SpaceX in early and to do the wonky weighting stuff.
But if one already holds index-based funds that are likely to be captive on the wrong side of this, and one wanted to benefit, or at least balance things out, then (to check my limited understanding) the goal would be, as sketched below:
- buy shortly after the IPO, ideally within 15 days
- and sell before the ~6-month mark, when lockups end and insiders are set to cash out?
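A minimal sketch of those timing rules as I understand them; the 15-day and 6-month figures come from the discussion above, and `trade-window` is just an illustrative helper, not anything from the article:

```clojure
(import 'java.time.LocalDate)

;; Hypothetical helper: given an IPO date, compute the window --
;; buy before the index likely adds the stock, sell before the
;; insider lockup expires.
(defn trade-window [^LocalDate ipo-date]
  {:buy-before  (.plusDays ipo-date 15)
   :sell-before (.plusMonths ipo-date 6)})

;; (trade-window (LocalDate/parse "2024-06-01"))
;; => buy before 2024-06-16, sell before 2024-12-01
```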
I think the "Leave them Behind" section at the end sort of ignores the whole "they will ruthlessly copy your material, and put aggressive extra load on your server while repeatedly stealing your work" dimension.
You can try to avoid consuming AI-generated material, but partway through a lot of things you may start wondering whether they're partly AI-generated, and we don't yet have a credible "human-authored" stamp. What you can't really do is keep them from using your work to make cheap copies of you, or at least from shrinking your audience by feeding information and insights from your work into the chat sessions of people who might otherwise have read it.
> Microsoft bought it for OpenAI only, to train Copilot on the vast amount of code.
I think this gets the timeline wrong. Microsoft acquired GH in 2018 and started the partnership with OpenAI in summer 2019.
I'm sure there was some strategy to extract value from it that wouldn't serve its users, but I think OpenAI was not initially meant to be the beneficiary.
Maybe MS just got extremely lucky, like winning-the-lottery-lucky.
Your timeline is off, however. Their partnership started in 2016 [1]. In 2019 MS started to invest publicly in OpenAI, but by then they already had some history.
To me, this is at least suspicious. Granted, I have no hard proof.
While I agree that we keep reinventing stuff, in CS doesn't the ease of creating isomorphisms between different ways of doing things mean that canonicalization will always be a matter of some community choosing their favorite form, perhaps based on aesthetic or cultural reasons, rather than anything "universal and eternal"?
We can still speak of equivalence classes under said isomorphisms and choose a representative from each, up to the aesthetic preferences of the implementor. But we are nowhere near finding equivalence classes or isomorphisms between real representations, because the things being compared are probably not equal, thanks to all the burrs and rough corners of incidental (non-essential) complexity.
I worked for a startup that used Clojure and found it so frustrating because, following the idiomatic style, pathways passed maps around, added keys to maps, etc. For any definition that received some such map, you had to read the whole pathway to understand what was expected to be in it (and therefore how you could call it, modify it, etc.).
I think the thing is that, yes, `[a] -> [a]` tells you relatively little about the particular relationship between input and output lists, but unlike in Clojure, such a signature tells you _everything_ about:
- what you need to invoke it
- what the implementation can assume about its argument
i.e., how to use or change the function is much clearer.
I think the pipeline paradigm you speak of is powerful, and some of the clarity issues you describe can be mitigated through clear and consistent use of keyword destructuring in function signatures (sketched below). Function naming conventions ('add-service-handle' etc.) and grouping functions with additive dependencies into threading forms can also help with these frustrations.
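For what it's worth, a minimal sketch of the destructuring convention; the domain names here (`add-service-handle`, `add-billing-info`, `:service-id`, `:endpoint`) are made up for illustration:

```clojure
;; Keyword destructuring puts the expected keys in the signature,
;; so a reader no longer has to trace the whole pathway to learn
;; what the map must contain.
(defn add-service-handle
  [{:keys [service-id endpoint] :as request}]
  (assoc request :handle (str service-id "@" endpoint)))

;; (add-service-handle {:service-id "auth" :endpoint "10.0.0.1"})
;; => {:service-id "auth", :endpoint "10.0.0.1", :handle "auth@10.0.0.1"}
```

In a threading form like `(-> request add-service-handle add-billing-info)`, the additive dependencies then read top to bottom.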
Do publishers really have fact-checkers? My understanding was that support for authors is now relatively minimal, even for established authors, and no one really has the time or resources to second-guess everything an author has claimed. I take as a key example Naomi Wolf learning after her book was "done" that a significant chunk of it was based on a misunderstanding of an admittedly confusing 19th century British legal phrase.
https://nymag.com/intelligencer/2019/05/naomi-wolfs-book-cor...
I think maybe the model of a single author spending months or years on their research and then publishing it as a single bound and polished work is misguided -- an academic doing similar work across multiple articles would have gotten peer review on each one, and hopefully would not have spent so much time working under a correctable misunderstanding.
Fact checking as a separate job is more for journalism than books. But editors have fact checking as part of their jobs. (It is not copy-editing, which is a different job.)
Many nonfiction authors will hire a fact checker separately. They don't want to look like they missed something. Errors still happen, of course.
This paper describes finding security-related concepts and using them to steer at generation time. While this is an interesting contribution on its own, the approach could also be applied to a range of other concerns -- e.g., can we use this to steer away from performance problems? Can we make LLM code generation anticipate maintainability or readability issues?
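I haven't verified the paper's exact mechanism, but if it's the common activation-addition style of steering, the core operation is tiny: add a scaled concept direction to the hidden state at each decode step. A minimal sketch (the names `h`, `v`, and `alpha` are mine, not the paper's):

```clojure
;; Nudge a hidden-state vector h by alpha along a concept
;; direction v; swapping in a "readability" or "performance"
;; direction would be the generalization suggested above.
(defn steer [h v alpha]
  (mapv + h (map #(* alpha %) v)))

;; (steer [0.2 -0.1 0.5] [1.0 0.0 -1.0] 0.3)
;; => [0.5 -0.1 0.2]
```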
If people want to try untested peptides, I think society should use that as the engine to _test those peptides_. Instead of buying something that's supposed to be, but may not be, the peptide you want, you should pay 50+k% + data and get something that has a 50% chance of being the peptide and a 50% chance of being a placebo, and you're _required_ to submit a report on effects and side effects before you can get a refill.
Rather than complain about how these things have not yet gone through real experiments and are marketed as having been "studied" rather than "effective", I would love to see society use the obvious demand for some of these to actually test them.
So I'm actually confused: in the little image of his run in the article, he often seems to be making absolute progress in the direction opposite to the ship's motion for part of each lap. Like, was the ship going unusually slowly?