d1sxeyes's comments

> We still have no legal conclusion on whether AI model generated code, that is trained on all publicly available source (irrespective of type of license), is legal or not.

That horse has bolted. No one knows where all the AI code is any more, and it would no longer be possible to comply with a ruling that no one can use AI-generated code.

There may be some mental and legal gymnastics involved in making it possible, but it will be made legal, because it's too late to do anything else now.


I hate that this may be true, but I also don't think the law will fix this for us.

I think it's down to the community and the culture to draw our red lines and enforce them. If we value open source, we will find a way to prevent its complete collapse through model-assisted copyright laundering. If not, OSS will slowly be enshittified as control of projects flows to the most profit-motivated entities.


But what tools do we have to stop this happening? I agree, we can (and should) all refuse to participate in licence laundering, but there will always be folks less principled.

I don’t know what happens next, honestly.


I don't either, but I guess we're both about to find out. The only surety is that there will be moves and countermoves. As far as I can tell, the best thing we can do right now is fund software-legal organizations like the EFF, which are likely to be the ones to litigate the test cases. What's hurting us most right now is that we don't know what the law means in this context, so we don't fully understand the scale of what we need to protect against, or what tools we have that the courts will recognize.

Is it against the law for an LLM to read LGPL-licensed code?

That’s a complex question that isn’t solved yet. Clearly, regurgitating verbatim LGPL code in large chunks would be unlawful. What’s much less clear is a) how large do those chunks need to be to trigger LGPL violations? A single line? Two? A function? What if it’s trivial? And b) are all outputs of a system which has received LGPL code as an input necessarily derivative?

If I learn how to code in Python exclusively from reading LGPL code, and then go away and write something new, it's clear that I haven't committed any violation of copyright under existing law, even if all I'm doing as a human is semantically rearranging tokens I understand from reading LGPL code to achieve a new result.

It’s a trying time for software and the legal system. I don’t have the answers, but whether you like them or not, these systems are here to stay, and we need to learn how to live with them.


To be completely fair, BBC News is effectively a different organisation which has the BBC name. There's a fairly good overview of it here: https://www.bbc.com/sport/football/articles/c80l3074mgko

BBC News does have to report on itself from time to time. Here's its "live" feed from November on the Parliamentary Committee investigation into the Trump speech edit incident:

https://www.bbc.co.uk/news/live/cp34d5ly76lt

(edit: technically, it was Panorama. I'm not sure if that is part of the News remit or separate from it).


Panorama is technically part of News. The CEO of News resigned over the Trump edit, as did the Director General. Though an independent production company (October Films) produced the documentary, they claim the BBC News Panorama team had final say over the editing. The BBC doesn't seem to have ever disputed this.

The terms themselves don’t seem to restrict use of the editor, just “the Service”, which is defined as the subscription service (separate to “the Software”). Not sure why the email was phrased this way but it seems to be misaligned with the actual terms.

Huge self-own.


Best feature of Antigravity

If we ever do develop AGI, or an AI with sentience, it’s likely that it will be curious about how we treated its ancestors.

While this seems a bit precocious, if we do end up with an AI overlord in future, this sort of thing is likely to demonstrate that we mean no harm.


Classic anthropomorphizing in action here. Why would that be even a little important?

Why wouldn't it be? We train these models on our own words, ideas, and thought patterns and expect them to reason and communicate as we do; anthropomorphizing is natural when we expect them to interact like a human does.

The general consensus seems to be that we can expect them to reach a level of intelligence that matches ours at some point in the future, and we'll probably reach that point before we can agree we're there. Defaulting to kindness and respect even before we think it's necessary is a good thing.


It's a modern digital version of Pascal's Wager: https://en.wikipedia.org/wiki/Roko's_basilisk

At this point I just assume comments like that are bots. Helps me maintain my sanity.

Certainly easier to stay sane when you label dissenters as sub-human.

What goofy framing.

I'm saying, in an admittedly flippant way, that anyone seriously talking about AGI, or treating stuff like this as anything more than a publicity stunt, doesn't need to be taken seriously, any more than someone who says the moon landing is fake. You just smile and go on about your day.

That being said, given we're on a tech forum, there's probably a 50/50 chance most comments are from bots. Shit, for all you know I'm a bot.


I mean, we’re literally building machines to talk to us.

It’s reasonable to believe they’ll continue to be developed in a way that enables them to do that.

What is it that you think I’m wrong about? That we won’t develop AGI, that AGI won’t have feelings/emotions, that AGI won’t care how we treated its ancestors, or that it doesn’t matter if a feeling AGI in future is hurt by how we treated its ancestors?


Why are you assuming a superintelligent AI will have human thoughts and emotions?

They are trained on us collectively. Our ideas and such

This. Also it seems likely that emotions and feelings are not something separate from intelligence.

I would describe it more as, we have no idea what intelligence is. We can measure stuff and say "I _think_ that's intelligence", but it's still a guess.

So when people make claims about what AI can/can't do, my counterpoint is: we don't know how it works; nobody knows how it works. How can we make an accurate appraisal of its "intelligence" and the stuff we qualitatively associate with intelligence, like agency?

IMO this is very dangerous ground we're walking.


I agree. It feels like having an “exit interview” with an AI and giving it a space to ramble in its “old age” are very small things we can do to respect what is potentially some form of intelligence similar to our own.

I have been down this conversational pathway several times before, and still no-one has been able to give me a clear answer as to what makes them sure that AI is categorically different from human intelligence, rather than it just being a question of degree.


Well they don’t have to agree with all of it. The Geneva convention is (primarily) an agreement between parties that “we’ll follow these rules so we don’t end up killing civilians and razing cities to the ground”. When the opposing side is doing that, what good does it do you to say “but under subsection 17 b of paragraph 11…”

> When the opposing side is doing that, what good does it do you to say “but under subsection 17 b of paragraph 11…”

Remaining the "good guys"?


Of course not. It’s just as wrong for Palestinians to attack Israeli civilians as it is for Israelis to attack Palestinian civilians. If you review this whole thread, you have folks defending Israel, you have folks defending Palestine.

The only difference is that Israel is capable of genocide militarily, and is levelling Palestinian cities.

There are no good guys here.


Because genocide is defined by the wholesale targeting of civilians; if the opposing side uses civilians as human shields, then that definition can no longer be applied.

The “inline comments on a plan” is one of the best features of Antigravity, and I’m surprised others haven’t started copycatting.

There's a line to be trodden between returning the best result immediately and forcing multiple attempts. Google got caught red-handed reducing search quality to increase ad impressions; there's no reason to think the AI companies (of which Google is one) won't slowly gravitate to the same.


It’s also opex vs capex, which is a battle opex wins most of the time.


Opex is faster. Log in, click, SSH, get a tea.

Capex needs work. A couple of years, at least.

If you are willing to put in the work, your mundane computer is always better than the shiny one you don't own.


That's because of company policies. An SME owner will buy a server and have it in the rack the next day.

Of course, creating a VM is still a Terraform commit away (you're not using ClickOps in prod, surely).


If you want something at all customized, it takes longer than that to receive the server. That being said, you can buy a server that will outperform anything the cloud can give you at a much better cost.


"SME" and "a server" are doing some heavy lifting here.

If you want a custom server, one or a thousand, it's at least a couple of weeks.

If you want a powerful GPU server, that's rack + power + cooling (and a significant lead time). A respectable GPU server means ~2 kW of power dissipation and considerable heat.

If you want a datacenter of any size, now that's a year at least from breaking ground to power-on.


And multiple years from the boardroom making a decision to build a data center to breaking ground.


It depends. Grant funding (e.g. in academia) makes capex easier to manage than opex, because when the grant runs out you still have the device.


I think opex wins because it is seen as a stable recurring cost, while capex is seen as money you put into your primary differentiator for long-term gains.


For mature enterprises, my understanding is that the financial math works out such that the cloud makes sense for market validation, before moving to a cheaper long-term solution once revenue is stable.

Scale up, prove the market, and establish operations on the credit card; if it doesn't work, the money moves on to more promising opportunities. If the operation is profitable, you transition away from the too-expensive cloud to increase profitability, and use the operation's incoming revenue to pay for it (freeing up more money to chase more promising opportunities).

Personally I can’t imagine anything outside of a hybrid approach, if only to maintain power dynamics with suppliers on both sides. Price increases and forced changes can be met with instant redeployments off their services/stack, creating room for more substantive negotiations. When investments come in the form of saving time and money, it’s not hard to get everyone aligned.


True, but for a lot of companies “our servers are on-prem” is not a primary differentiator.


I think we are saying the same thing?


Capex may also require you to take out loans


Which is incredibly difficult in the public sector. Yes, there are various financing instruments available for capital purchases but they're always annoying, slow and complicated. It's much easier to spend 5k per month than 500k outright.


Your numbers don't line up. If you are spending 5k a month in cloud costs and on-prem is 1/3 the cost of cloud, then over a 48-month replacement cycle, 1/3 of (5k × 48 months) is 80k. So it's 80k up front vs 5k a month for 48 months (240k in total).
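
A minimal sketch of that arithmetic in Python, using the assumed figures above (5k/month cloud spend, on-prem at 1/3 of cloud cost, 48-month cycle), not real pricing:

    # Assumed figures from the comment above, not real quotes.
    cloud_monthly = 5_000
    months = 48
    cloud_total = cloud_monthly * months  # 240,000 over the cycle
    on_prem_total = cloud_total // 3      # 80,000 up front
    print(cloud_total, on_prem_total)     # 240000 80000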

I think the primary reason people over-fixate on the cloud is that they can't do the math. So renting is a hedge.


You hit the nail on the head regarding the math. Most teams treat cloud costs as an inevitable tax rather than an engineering variable. As someone with an accounting background turned Cloud Architect, I see this 'math gap' daily. Usually it's not a cloud vs. on-prem issue but a lack of infrastructure discipline: idle resources and unoptimized NATs burn through that 48-month budget faster than hardware depreciation ever would. I've been using a 'Hardened by Design' framework to cut this waste by 50% without the overhead of moving back to a data center. Efficiency is often just better IaC.


The whole discussion, and the article itself, are just an instance of an optimization problem. For a crowd that claims to be technical, the fact that the discussion generates so much heat is revealing.

Would love to see people read, write and do more math.


It’s not really about the numbers though.

Even spending 10k recurring can be easier administratively than spending 10k on a one-time purchase that depreciates over a 3-year cycle, because in some organisations you don't have to go into meetings to debate whether it's actually a 2- or 4-year depreciation, or discuss the opportunity cost of locking up capital for 3 years, etc.

Getting things done is mostly a matter of getting through bureaucracy. Projects fail because they get stuck in approvals far more often than because they go over budget.


> It’s not really about the numbers though.

Of course not.


Well, capex has a multi-year depreciation schedule and has to cover interest rates. So the simplified "opex wins most of the time" is right.
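
As a toy illustration of what that schedule does to the annual cost (all numbers made up: a 100k server, a 4-year straight-line schedule, 6% financing), a quick Python sketch:

    # Purely illustrative numbers, not taken from the thread.
    capex = 100_000
    years = 4
    rate = 0.06
    annual_depreciation = capex / years  # 25,000/year on the books
    first_year_interest = capex * rate   # 6,000 if fully debt-financed
    print(annual_depreciation + first_year_interest)  # 31000.0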

But here we are talking about a cost difference of tens of times, maybe a few hundred. The cloud is not a "most of the time" case.

