The thing is, I don't think I made any conscious decision to hop on AI; it's just quite addictive, and my nerd ADHD took over. But we are nerds, and nerds are often naturally inclined to be early adopters. Others are naturally hesitant about new tech and suspicious of changes to their workflows.
Also there’s an unusually high downside potential to AI, so I don’t even think that hesitancy is necessarily unwarranted. The new slop economy, the “they’re coming for our jobs” effect, “mechahitler”, Palantir, possible extinction-level rogue AI events…
History will attend to itself, as will the discourse. In the meantime I feel that it’s incumbent on us as nerds to try to do AI right, because there are definitely bad actors out there trying to do it wrong.
On the downside: as with anything, the downside and the upside here are gargantuan in equal proportion.
And the hesitancy is absolutely not unwarranted; it is warranted. But I guess a big part of what I'm trying to push back on, in the hearts and minds of everyday people, is this idea that maybe the right answer is to just say no. When in reality, I would argue, the solution lives inside the problem, if that makes sense.
I think it's not hard for people to imagine a future where autonomous intelligence is kind of the key to a brighter tomorrow. It's just that, in the same way people tend to remember the bad things that happened in their lives in an outsized way relative to the good things, people are a little bit more scared of the downside risks here than excited about the upside.
And maybe they're not fully realizing that within the tools themselves exists the solution. Obviously we can talk about what those solutions could be; some of it is fantastical future-tech yet to be invented. But in this nascent moment there's so much still left to do, and, to your point, it's incumbent on us all to try and do AI right.
And I think the good thing is, as long as we can stay on track and eventually hit a moment where AI is curing cancer, you and I essentially have no work left to do as AI nerds, right? The hearts-and-minds issue is solved. Even someone who is talking about "they're coming for our jobs," who really, really loves their job and is scared about what AI means, is probably going to be okay with the idea that if it means my cancer is going to be cured, let's keep going down that track. And let's continue to pepper the ideas of freedom, of liberty, of individual thinking into AI and intelligence tools as a kind of core ingredient here, right?
And I think if we can do that, directionally we're on the right track in a big way. Thanks, lewdwig.
The standard skeptical position ("LLMs have no theory of mind") assumes a single unified self that either does or doesn't model other minds. But this paper suggests models have access to a space of potential personas (steering away from the default persona increases the model's tendency to identify as other entities), which they traverse based on conversational dynamics. So it's less "no theory of mind" and more "too many potential minds, insufficiently anchored."
A language which is not yet 1.0, and which has repeatedly changed its IO implementation in a non-backwards-compatible way, is certainly a courageous choice for production code.
So, I'm noodling around with writing a borrow checker for Zig. You don't get to appreciate this working with Zig on a day-to-day level, but the internals of how the Zig compiler works are AMAZING. Also, the IO refactor will (I think) let me implement aliasing checking (alias xor mutable).
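For anyone unfamiliar with "alias xor mutable": it's the rule at the heart of Rust's borrow checker, so Rust is the easiest place to see it in action. A minimal sketch (the `demo` function is just an illustration, not anything from the Zig work):

```rust
// "Alias xor mutable": at any point a value may have either any number of
// shared (&T) borrows, or exactly one exclusive (&mut T) borrow, never both.
fn demo() -> (usize, Vec<i32>) {
    let mut v = vec![1, 2, 3];

    // Shared borrows may alias freely, but are read-only.
    let a = &v;
    let b = &v;
    let shared_len = a.len().max(b.len()); // both usable here
    // The shared borrows end at their last use above...

    // ...so an exclusive borrow is now allowed to mutate.
    let m = &mut v;
    m.push(4);
    // println!("{}", a.len()); // would NOT compile: `a` would alias `m`

    (shared_len, v)
}

fn main() {
    let (len, v) = demo();
    println!("{len} {v:?}");
}
```

The commented-out line is exactly the kind of aliasing a checker like this has to reject: a shared borrow used while an exclusive borrow is live.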
In my experience, migrating small-scale projects takes from minutes to single digit hours.
The standard library is changing. The core language semantics, not so much. You can update from std.ArrayListUnmanaged to std.array_list.Aligned with two greps.
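The "two greps" claim is roughly this kind of mechanical rename. A sketch (the exact new name and its extra alignment argument have shifted across Zig versions, so treat the replacement target here as an assumption; the `/tmp/zig_mig` tree is just a demo fixture):

```shell
# Demo fixture: a file using the old unmanaged array list type.
mkdir -p /tmp/zig_mig/src
echo 'list: std.ArrayListUnmanaged(u8) = .{},' > /tmp/zig_mig/src/main.zig

# Grep for every file mentioning the old name, then sed-rewrite it to the
# new namespaced type, threading the element type through and passing
# null alignment (GNU sed syntax).
grep -rl 'std\.ArrayListUnmanaged' /tmp/zig_mig/src \
  | xargs sed -i 's/std\.ArrayListUnmanaged(\([^)]*\))/std.array_list.Aligned(\1, null)/g'

cat /tmp/zig_mig/src/main.zig
```

Run `zig build` afterwards and let the compiler point out any call sites the textual rewrite missed.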
I don’t hate git either but you’ll meet very few people who will claim its UX is optimal. JJ’s interaction model is much simpler than git’s, and the difficulty I found is that the better you know git, the harder it is to unlearn all its quirks.
To Broadcom you’re not a customer, you’re a mark, a patsy, a stooge, a _victim_. Their aim is to establish exactly what they can get away with, how far they can abuse you, before you’ll just walk away.
But this is where all/most “platforms” go. As the product offering flounders over time, your quality talent (engineering and business) boils off to other opportunities. Then the short term value extraction methodologies show up, and everyone looks on in horror as the platform is “destroyed” through “mismanaged” consumer relationships.
Working in agtech, I’ve always wondered if this isn’t just the disenfranchised farmer story.
Give a farmer 1,000 acres to farm, and if they’re playing the long game, they’ll intermix their high value crops with responsible crop rotations. Managed well, this business can go on indefinitely.
But tell them they have 5 years left to farm the ground, and that the land will be of no value after that, they’ll grow the most expensive crop they can every year, soil quality be damned. It makes the most sense from a value extraction point of view.
Broadcom seems to be the kind of farmer that buys up forsaken land and extracts as much value as possible before it finally fails.
I have noticed that LLMs are actually pretty decent at redteaming code, so I’ve made it a habit of getting them to do that for code they generate periodically. A good loop is (a) generate code, (b) add test coverage for the code (to 70-80%) (c) redteam the code for possible performance/security concerns, (d) add regression tests for the issues uncovered and then fix the code.
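The (a)–(d) loop above can be sketched as plain Python. Every helper here is a hypothetical stand-in for whatever agent/LLM calls you actually make; only the control flow is the point:

```python
# Sketch of the generate -> test -> redteam -> regress loop described above.
# All helpers are hypothetical stubs standing in for real LLM/agent calls.

COVERAGE_TARGET = 0.75  # aim for the 70-80% band

def generate_code(spec):
    # (a) ask the model for an implementation
    return {"src": f"# impl of {spec}", "tests": [], "coverage": 0.0}

def add_tests(module):
    # (b) grow the test suite until coverage is adequate
    while module["coverage"] < COVERAGE_TARGET:
        module["tests"].append(f"test_{len(module['tests'])}")
        module["coverage"] += 0.2  # pretend each test adds coverage
    return module

def redteam(module):
    # (c) ask the model to attack its own output; returns example findings
    return ["unbounded input length", "missing auth check"]

def fix_and_regress(module, findings):
    # (d) one regression test plus one fix per uncovered issue
    for finding in findings:
        module["tests"].append(f"regression: {finding}")
        module["src"] += f"\n# fixed: {finding}"
    return module

module = generate_code("rate limiter")
module = add_tests(module)
module = fix_and_regress(module, redteam(module))
print(len(module["tests"]), round(module["coverage"], 2))
```

The useful property of this shape is that every redteam finding leaves a regression test behind, so the next (a) step can't silently reintroduce the same issue.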
The glaring thing most people seem to miss is that LLM-generated code is like a ToS: unless you work in a more enterprise team setting, you are not going to catch 90% of the issues...
If this had been used before the tea spill fiasco (to name only one), it would never have been a fiasco.
Just saying..
I’m sure this’ll be misreported and wilfully misinterpreted because of the current fractious state of the AI discourse. But the lawsuit was to do with piracy, not the copyright-compliance of LLMs, and in any case they settled out of court, thus presumably admit no wrongdoing, so conveniently no legal precedent is established either way.
I would not be surprised if investors made their last round of funding contingent on settling this matter out of court precisely to ensure no precedents are set.
TBH I’m surprised it’s taken them this long to change their mind on this, because I find it incredibly frustrating to know that current gen agentic coding systems are incapable of actually learning anything from their interactions with me - especially when they make the same stupid mistakes over and over.
Okay, they're not going to be learning in real time. It's not like your data gets taken and then you get something out of it right away; you don't. What you're talking about is context.
Data gathered for training still has to be used in training, i.e. a new model that, presumably, takes months to develop and train.
Not to mention your drop-in-the-bucket contribution will have next to no influence in the next model. It won't catch things specific to YOUR workflow, just common stuff across many users.
> Not to mention your drop-in-the-bucket contribution will have next to no influence in the next model. It won't catch things specific to YOUR workflow, just common stuff across many users.
I wonder about this. In the future, if I correct Claude when it makes fundamental mistakes about some topic like an exotic programming language, wouldn't those corrections be very valuable? It seems like it should consider the signal to noise ratio in these cases (where there are few external resources for it to mine) to be quite high and factor that in during its next training cycle.
It's actually pretty clever (albeit shitty, borderline evil): start off by saying you're different from the competitors because you care a lot about privacy and safety, and that's why you're charging higher prices than the rest. Then, once you have a solid user base, slowly turn up the heat, step by step, until you end up with higher prices yet the same benefits as the competitors.