
Why is ChatGPT legal? Obviously the United States has no ability to regulate its ass into a pair of trousers atm, but why aren't European or Asian nations taking a stand to start regulating a technology with such clear potential for harm?




If governments went around banning any technology with a "clear potential for harm" it would be bad news for laptops, cell phones, kitchen knives, automobiles, power tools, bleach and, well, you get the idea.

But government does regulate each of those things.

What type of kitchen knife regulations are there? I don't think I've ever even seen a "knives are sharp" disclaimer.

https://www.akti.org/age-based-knife-laws/

NYC has a four inch limit on knives carried in public, even kitchen knives. https://www.nyc.gov/site/nypd/about/faq/knives-faq.page

And you can't display that knife. "New York City law prohibits carrying a knife that can be seen in public, including wearing a knife outside of your clothing."

(You can take one to work. "This rule does not apply to those who carry knives for work that customarily requires the use of such knife, members of the military, or on-duty ambulance drivers and EMTs while engaged in the performance of their duties.")

"Knives are sharp" disclaimers are easy to find. https://www.henckels.com/us/use-and-care.html

(The CPSC is likely to weigh in if you make your knife unusually unsafe, too.)


>https://www.akti.org/age-based-knife-laws/

From ChatGPT's terms of use: >Minimum age. You must be at least 13 years old or the minimum age required in your country to consent to use the Services. If you are under 18 you must have your parent or legal guardian’s permission to use the Services.

>NYC has a four inch limit on knives carried in public, even kitchen knives. https://www.nyc.gov/site/nypd/about/faq/knives-faq.page

>And you can't display that knife. "New York City law prohibits carrying a knife that can be seen in public, including wearing a knife outside of your clothing."

Not relevant to this case (i.e. self-harm), because someone intent on harming themselves obviously isn't going to follow such regulations. You could substitute "bleach" for "knife" here and the same point would hold.

>"Knives are sharp" disclaimers are easy to find. https://www.henckels.com/us/use-and-care.html

That proves my point? That information is on a separate page of their website, and the point about the knife being sharp is buried halfway down the page. For someone who just bought a knife, there's zero chance they'll find it unless they're specifically seeking it out.


Ah, you weren't actually hoping for an answer.

It's an answer, but it's fair to point out that these regulations seem fairly useless.

We could certainly apply similar rules to AI, but would that actually change anything?


Governments don't ban any of those things.

I wish I could argue the "regulate" point, but you haven't provided even a single example of an AI regulation you want to see enforced. My guess is that the regulation you want enacted for AI is nowhere close to analogous to the regulation currently in place for knives.


> Governments don't ban any of those things.

And the poster upthread used "regulate" for that reason, I presume.

> I wish I could argue the "regulate" point but you failed to provide even a single example AI regulation you want to see enforced.

It's OK to want something to be regulated without a proposal. I want dangerous chemicals regulated, but I'm happy to let chemical experts weigh in on how rather than guessing myself. I want fecal bacterial standards for water, but I couldn't possibly tell you the right level to pick.

If you really need a specific proposal example, I'd like to see a moratorium on AI-powered therapy for now; I think it's a form of human medical experimentation that'd be subject to licensing, IRB approval, and serious compliance requirements in any other form.


Agent evangelists are really using the “you can’t ban kitchen knives” comparison on a murder-suicide coverup story? Unreal.

Any coverup should certainly not be legal.

I'm not sure how you regulate chatbots so they do NOT encourage this kind of behavior; it's not like the principal labs aren't trying to prevent it - see the unpopular reining in of GPT-4o.


They are absolutely not trying to prevent this. They made GPT-5 more sycophantic because of user backlash. It became less useful for certain tasks because they needed to keep their stranglehold on the crazy user base whom they hope to milk as whales later.

Well, children are banned from driving cars, for instance. I don't think anybody really has issues with this? But the current laissez-faire attitude is killing people; idk, this seems bad.

Using the same logic... why aren't automobiles illegal?


