So if one were to buy all their clothes at Decathlon (clothes for sports and other outdoor activities) and Zara (everyday wear as well as fancier clothing), and found a reader that can read the RFID tags they use, one would save the time needed to add RFID tags to one’s clothes ;)
There might be other stores that have RFID tags on all of their products too. I only mention these two in particular because I have purchased products from both of them using their RFID-based self-checkout in their stores and thus seen it first-hand.
However, I am not sure if all of the products have the RFID label embedded in the actual fabric or if some or most have the RFID label attached to paper labels that you’d remove before using the clothes. So that would also need to be determined before deciding to replace one’s whole wardrobe with clothes exclusively from these stores.
As someone who is not much of a sports person, now I was wondering what CTE means in sports.
Seems to be this:
> Chronic traumatic encephalopathy (CTE) is a progressive neurodegenerative disease […]
> Evidence indicates that repetitive concussive and subconcussive blows to the head cause CTE. In particular, it is associated with contact sports such as boxing, American football, Australian rules football, wrestling, mixed martial arts, ice hockey, rugby, and association football.
The NFL in the US has famously gone to great lengths to downplay the impact of CTE on current and retired players. And there have been several famous players who literally lost their minds as they aged, and we now know that was due to CTE. Something like 90% of ex-NFLers have it. The number is still really bad for collegiate players. And even high school players are at risk.
Yeah - Muhammad Ali is the most famous victim (or at least likely victim, I don’t think he was officially diagnosed with CTE as it wasn’t well understood back then). In the UK, it’s gradually becoming recognised as a serious problem in rugby.
I assumed the C stood for Concussion. Wrong but also partly right!
What is CC and TC? I have not heard these abbreviations (except for CC to mean credit card or carbon copy, neither of which is what I think you mean here).
> […] and also the way Claude injects itself as a co-author.
> Seeing them is an easy signal to recognize work that was submitted by someone so lazy they couldn’t even edit the commit message. You can see the vibe coded PRs right away.
I was doing the opposite when using ChatGPT. Specifically manually setting the git commit author as ChatGPT complete with model used, and setting myself as committer. That way I (and everyone else) can see what parts of the code were completely written by ChatGPT.
For changes that I made myself, I commit with myself as author.
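A minimal sketch of that workflow (the names, emails, and file here are all made up for the demo; there's no standard identity for a model):

```shell
# Record the model as git author and yourself as committer.
# Throwaway demo repo; in practice you'd run just the commit line.
cd "$(mktemp -d)" && git init -q demo && cd demo
git config user.name  "Jane Dev"            # committer identity
git config user.email "jane@example.com"
echo 'print("hello")' > script.py           # pretend the LLM wrote this
git add script.py
git commit -q --author="ChatGPT (gpt-4o) <noreply@example.com>" \
    -m "Add hello script (generated by ChatGPT)"

# Author and committer are recorded separately in the history:
git log -1 --format='author:    %an <%ae>%ncommitter: %cn <%ce>'
```

Anyone running `git log --format='%an / %cn'` then sees at a glance which commits the model authored and who committed them.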
Why would I commit something written by AI with myself as author?
> I think we should continue encouraging AI-generated PRs to label themselves, honestly.
"Why would I commit something written by AI with myself as author?"
Because you're the one who decided to take responsibility for it, and actually choose to PR it in its ultimate form.
What utility do the reviewers/maintainers get from you marking what's written by you vs. ChatGPT? Other than your ability to scapegoat the LLM?
The only thing that actually affects me (the hypothetical reviewer) and the project is the quality of the actual code, and, ideally, the presence of a contributor (you) who can actually answer for that code. The presence or absence of LLM-generated code by your hand makes no difference to me or the project, why would it? Why would it affect my decision making whatsoever?
It's your code, end of story. Either that or the PR should just be rejected, because nobody is taking responsibility for it.
As someone mostly outside of the vibe coding stuff, I can see the benefit in having both the model and the author information.
Model information for traceability and possibly future analysis/statistics, and author to know who is taking responsibility for the changes (and, thus, has deeply reviewed and understood them).
As long as those two pieces of information are present in the commit, I guess which commit field should hold which information is for the project to standardise (but it should be normalised within a project, otherwise the "traceability/statistics" part cannot be applied reliably).
Yeah, nothing wrong with keeping the metadata - but "Authored-by" is both credit and an attestation of responsibility. I think people just haven't thought about it too much and see it mostly as credit and less as responsibility.
I disagree. “Authored by” - and authorship in general - says who did the work. Not who signed off on the work. Reviewed-by me, authored by Claude feels most correct.
> Before AI, did you credit your code completion engine for the portions of code it completed?
Code completion before LLMs helped me type faster by completing variable names, variable types, function arguments, and that's about it. It was faster than typing it all out character by character, but the auto-completion wasn't doing anything outside of what I was already intending to write.
With an LLM, I give brief explanations in English to it and it returns tens to hundreds of lines of code at a time. For some people perhaps even more than that. Or you could be having a “conversation” with the LLM about the feature to be added first and then when you’ve explored what it will be like conceptually, you tell it to implement that.
In either case, I would then commit all of that resulting code with the name of the LLM I used as author, and my name as the committer. The tool wrote the code. I committed it.
As the committer of the code, I am responsible for what I commit to the code base, and everyone is able to see who the committer was. I don’t need to claim authorship over the code that the tool wrote in order for people to be able to see who committed it. And it is in my opinion incorrect to claim authorship over any commit that consists for the very most part of AI generated code.
True. Might also vary depending on how one uses the LLM.
For example, in a given interaction the user of the LLM might be acting more like someone requesting a feature, and the LLM is left to implement it. Or the user might be acting akin to a bug reporter providing details on something that’s not working the way it should and again leaving the LLM to implement it.
While on the other hand, someone might instruct the LLM to do something very specific with detailed constraints, and in that way the LLM would perhaps be more along the line of a fancy auto-complete to write the lines of code for something that the user of the LLM would otherwise have written more or less exactly the same by hand.
Claude adds "Co-authored by" attribution for itself when committing, so you can see the human author and also the bot.
I think this is a good balance, because if you don't care about the bot you still see the human author. And if you do care (for example, I'd like to be able to review commits and see which were substantially bot-written and which were mostly human) then it's also easy.
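That trailer also makes such commits easy to filter out of history later. A sketch, assuming the trailer text Claude Code typically appends (which may vary by version; the demo repo here is a throwaway):

```shell
# Set up a throwaway repo with one bot-co-authored commit.
cd "$(mktemp -d)" && git init -q . \
    && git config user.name "Jane Dev" \
    && git config user.email "jane@example.com"
echo x > f && git add f
printf 'Fix parser edge case\n\nCo-Authored-By: Claude <noreply@anthropic.com>\n' \
    | git commit -q -F -

# List commits carrying the trailer:
git log -i --grep='Co-Authored-By: Claude' --format='%h %s'

# Or print just the trailer for the latest commit:
git log -1 --format='%(trailers:key=Co-authored-by)'
```

In a real repo you'd only run the two `git log` lines; trailer key matching is case-insensitive, so the varying capitalisations of "Co-authored-by" all match.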
> I'd like to be able to review commits and see which were substantially bot-written and which were mostly human
Why is this, though? I'm genuinely curious. My code-quality bar doesn't change either way, so why would this be anything but distracting to my decision making?
Personally it would make the choice to say no to the entire thing a whole lot easier if they self-reported on themselves automatically and with no recourse to hide the fact that they've used LLMs. I want to see it for dependencies (I already avoid them, and would especially do so with ones heavily developed via LLMs), products I'd like to use, PRs submitted to my projects, and so on, so I can choose to avoid them.
Mostly this is because, all things considered, I really do not need to interact with any of that, so I'm doing it by choice. Since it's entirely voluntary I have absolutely no incentive to interact with things no one bothered to spend real time and effort on.
If you choose not to use software written with LLM assistance, you'll be using, to a first approximation, 0% of the software produced in the coming years.
Even excluding open source, there are no serious tech companies not using AI right now. I don't see how your position is tenable, unless you plan to completely disconnect.
This is shouting at the clouds I'm afraid (I don't mean this in a dismissive way). I understand the reasoning, but it's frankly none of your business how I write my code or my commits, unless I choose to share that with you. You also have a right to deny my PRs in your own project of course, and you don't even have to tell me why! I think on github at least you can even ban me from submitting PRs.
While I agree that it would be nice to filter out low effort PRs, I just don't see how you could possibly police it without infringing on freedoms. If you made it mandatory for frontier models, people would find a way around it, or simply write commits themselves, or use open weight models from China, etc.
Accountability. Same reason I want to read human-written content rather than obvious AI: both can be equally shit, but at least with a human there's a decent chance of the aspirational quality of wanting to be considered "good".
With AI I have no way of telling if it was from a one line prompt or hundreds. I have to assume it was one line by default if there's no human sticking their neck out for it.
LLMs can make mistakes in different ways than humans tend to. Think "confidently wrong human throwing flags up with their entire approach" vs. "confidently wrong LLM writing convincing-looking code that misunderstands or ignores things under the surface."
Outside of your one personal project, it can also benefit you to understand the current tendencies and limitations of AI agents, either to consider whether they're in a state that'd be useful to use for yourself, or to know if there are any patterns in how they operate (or not, if you're claiming that).
Burying your head in the sand and choosing to be a guinea pig for AI companies by reviewing all of their slop with the same care you'd review human contributions with (instead of cutting them off early when identified as problematic) is your prerogative, but it assumes you're fine being isolated from the industry.
Sure, the point about LLM "mistakes" etc. being harder to detect is valid, although I'm not entirely sure how to compare this with hard-to-detect human mistakes. If anything I find LLM code shortcomings often a bit easier to spot, because a lot of the time they're just unneeded dependencies, useless comments, useless replication of logic, etc. This is where testing comes into play too, and I'm definitely reviewing your tests (obviously).
>Burying your head in the sand and choosing to be a guinea pig for AI companies by reviewing all of their slop with the same care you'd review human contributions with (instead of cutting them off early when identified as problematic) is your prerogative, but it assumes you're fine being isolated from the industry.
I mean, listen: I wish with every fiber of my being that LLMs would disappear off the face of the earth for eternity, but I really don't think I'm "isolating myself from the industry" by not simply dismissing LLM code. If I find a PR to be problematic I just cut it off; that's how I review in the first place. I'm telling some random human who submitted the code to me that I am rejecting their PR because it's low quality, I'm not sending Anthropic some long detailed list of my feedback.
This is also kind of a moot point either way, because everyone can just trivially hide the fact that they used LLMs if they want to.
> If anything I find LLM code shortcomings often a bit easier to spot because a lot of the time they're just unneeded dependencies, useless comments, useless replication of logic, etc.
By this logic, it's useful to know whether something was LLM-generated or not because if it was, you can more quickly come to the conclusion that it's LLM weirdness and short-circuit your review there. If it's human code (or if you don't know), then you have to assume there might be a reason for whatever you're looking at, and may spend more time looking into it before coming to the conclusion that it's simple nonsense.
> This is also kind of a moot point either way, because everyone can just trivially hide the fact that they used LLMs if they want to.
Maybe, but this thread's about someone who said "I'd like to be able to review commits and see which were substantially bot-written and which were mostly human," and you asking why. It seems we've uncovered several feasible answers to your question of "why would you want that?"
I'm not against putting AI as co-author, but removing the human who allowed the commit to be pushed/deployed from the commit would be a security issue at my job. The only reason we're allowed to deploy code with a generic account is that we tag the repo/commit hash, and we wrote a small piece of code that retrieves the author UID from git, so that the log says 'user XXXNNN opened the flux xxx' (or something else depending on what our code does).
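A rough sketch of that idea: a generic deploy account can still log the human behind a release by reading the author back from the tagged commit. Everything here (the tag name, the UID, the log message) is made up for the demo:

```shell
# Throwaway repo standing in for the real one; the author's corporate
# UID is stored as their git user.name here.
cd "$(mktemp -d)" && git init -q . \
    && git config user.name "XXX1234" \
    && git config user.email "x@example.com"
echo x > f && git add f && git commit -q -m "release prep"
git tag v1.4.2                                   # the deployed tag

# At deploy time, resolve the human from the tag and write the log line:
AUTHOR_UID="$(git log -1 --format='%an' v1.4.2)"
echo "user ${AUTHOR_UID} opened the flux v1.4.2"
# -> user XXX1234 opened the flux v1.4.2
```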
If it contributed significantly to the design and execution, and was a major contributing factor yes. Would you say a reserve parachute saved your life or would you say you saved your own life? What about the maker of the parachute?
I'd be thanking the reserve and the people who made it, and credit myself with the small action of slightly moving my hand, as much as it's worth.
Also, text editors would be a better analogy if the commit message referenced whether it was created in the web ui, tui, or desktop app.
I suppose that for me the tool rarely contributes to the design and execution. At work and for any project I care about, I prompt once I know what I want, in terms of both function and the shape of the program to do it. If the model's generation matches the shape closely enough, I accept, otherwise iterate from there. To me this is authorship.
When I vibe code - which for me, means using very high level prompts and largely not reading the output - then I could see attributing authorship to a model; but then I wonder what the purpose of authorship attribution is to begin with. Is it to tell you who to talk to about the code? Is it personal attestation to quality, or to responsibility? Is it credit? Some combination of these certainly, but AI can hold none except the last, and the last is, to me, rather pointless. Objects don't have feelings and therefore are unaffected by whether credit is given or not; that's purely a human concern.
I suppose the dividing line is fuzzy and perhaps best judged on the basis of the obscenity rule, that is, I know it when I see it.
> Why would I commit something written by AI with myself as author?
I don't use any paid AI models (for all my use cases, free models usually work really well), so for some small scripts/prototypes I usually just use the free Gemini models; aistudio.google.com is a good option too.
I then sometimes manually paste the result and just hit enter.
These are prototypes though, although I build in public. Mostly done for experimental purposes.
I am not sure how many people might be doing the same though.
But in some previous projects I have had notes stating "made by gemini" etc.
Maybe I should write a commit message/description stating that AI wrote this, but I really like having the message be something relevant to the creation of the file etc. There is also the fact that GitHub Copilot itself sometimes generates commit messages for you, so you have to manually edit them if you wish to change what the commit says.
Copyright violation would happen before LLMs yes, but it would have to be done by a person who either didn’t understand copyright (which is not a valid defence in court), or intentionally chose to ignore it.
With LLMs, future generations are growing up being handed code that may or may not be a verbatim copy of something that someone else originally wrote with specific licensing terms, but with no mention of any license terms or origin being provided by the LLM.
It remains to be seen if there will be any lawsuits in the future specifically about source code that is substantially copied from someone else indirectly via LLM use. In any case I doubt that even if such lawsuits happen they will help small developers writing open source. It would probably be one of the big tech companies suing other companies or persons and any money resulting from such a lawsuit would go to the big tech company suing.
I’m also pretty sure that on an episode of The Standup, one of the Neovim core maintainers, TJ DeVries (Teej), said that it is a good idea to prove new ideas in the form of a plugin rather than submitting pull requests for Neovim itself with new ideas that have not yet been tested out and proven in the real world. The implication being that Neovim is indeed open to bringing features from plugins into core, if they are proven to be useful for a lot of people.
Unfortunately I don’t remember what episode it was or even if it was specifically on an episode of The Standup, or if it was some other video that he and ThePrimeagen did outside of The Standup.
This is essentially how the new package manager got done. `mini.deps` was created as basically a proposal for a built in package manager (beyond also just being its own thing), sat in the wild for a year or two then a derived version got imported.
Seems a clever and fitting name to me. A poison pit would probably smell bad. And at the same time, the theory that this tool would actually cause “illness” (bad training data) in AI is not proven.
> The closest set up for x11 would be to use x11 forwarding with xpra.
Older versions of macOS even had an X server distributed by Apple that you could install on your machine, and if memory serves right you were then easily able to forward X11 from a remote Linux host (or other operating systems running X11 applications) using ssh and have it render to your macOS desktop.
From a quick Google search there is apparently still an Apple-supported, third-party open-source project called XQuartz one can use.
X11 forwarding with ssh and XQuartz looks to work the same way that I remember using the Apple distributed X server in the past. Install the X server and then use the -X flag of ssh. Same way that you forward X11 between two Linux computers, or Sun workstations or whatever with an X11 desktop, over ssh.
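A sketch of that workflow (the host name and app below are placeholders, and the connecting commands are illustrative only):

```shell
# With XQuartz installed and running on macOS:
#
#   ssh -X user@linux-host    # enable X11 forwarding for the session
#   xclock &                  # run on the remote; window opens on the Mac
#
# -X just sets ssh's ForwardX11 option, which you can confirm without
# actually connecting (-G prints the resolved config and exits):
ssh -G -X user@linux-host | grep -i '^forwardx11 '
```

If `-X` is too restrictive for a given app (some fail X security-extension checks), `-Y` requests trusted forwarding instead.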
I tried that a few times back in the day, but I found it so jarring and ugly against the macOS GUI. The problem was that it rendered each application window alone, aiming for seamless integration. I don't remember if there was even an option to run a compositor or window manager such that you had a proper window with its own background and the Linux apps showed up inside that (like the cocoa-way example).
Used to use XQuartz often years ago for (I think?) forwarding Firefox running in containers for browser-facing integration testing. It was pretty slow IIRC. Switched to VNC, which worked much better.
Decathlon and Zara both have RFID tags in their products.
https://sustainability.decathlon.com/product-traceability-an... (Decathlon)
https://www.inditex.com/itxcomweb/so/en/press/news-detail/7f... (Inditex is the parent company of Zara. Link is a press release from 2014.)