Why is ChatGPT legal? Obviously the United States has no ability to regulate its ass into a pair of trousers atm, but why aren't European or Asian nations taking a stand to start regulating a technology with such clear potential for harm?
If governments went around banning any technology with a "clear potential for harm" it would be bad news for laptops, cell phones, kitchen knives, automobiles, power tools, bleach and, well, you get the idea.
And you can't display that knife. "New York City law prohibits carrying a knife that can be seen in public, including wearing a knife outside of your clothing."
(You can take one to work. "This rule does not apply to those who carry knives for work that customarily requires the use of such knife, members of the military, or on-duty ambulance drivers and EMTs while engaged in the performance of their duties.")
From ChatGPT's terms of use:

> Minimum age. You must be at least 13 years old or the minimum age required in your country to consent to use the Services. If you are under 18 you must have your parent or legal guardian's permission to use the Services.
>And you can't display that knife. "New York City law prohibits carrying a knife that can be seen in public, including wearing a knife outside of your clothing."
Not relevant to this case (i.e. self-harm), because someone intent on harming themselves obviously isn't going to follow such regulations. You can substitute "bleach" for "knife" here.
That proves my point. That information is on a separate page of their website, and the point about it being sharp is buried halfway down the page. For someone who just bought a knife, there's zero chance they'll find it unless they're specifically seeking it out.
I wish I could argue the "regulate" point, but you failed to provide even a single example of an AI regulation you want to see enforced. My guess is the regulation you want enacted for AI is nowhere close to analogous with the regulation currently in place for knives.
And the poster upthread used "regulate" for that reason, I presume.
> I wish I could argue the "regulate" point but you failed to provide even a single example AI regulation you want to see enforced.
It's OK to want something to be regulated without a proposal. I want dangerous chemicals regulated, but I'm happy to let chemical experts weigh in on how rather than guessing myself. I want fecal bacterial standards for water, but I couldn't possibly tell you the right level to pick.
If you really need a specific proposal example, I'd like to see a moratorium on AI-powered therapy for now; I think it's a form of human medical experimentation that'd be subject to licensing, IRB approval, and serious compliance requirements in any other form.
I'm not sure how you regulate chatbots to NOT encourage this kind of behavior; it's not like the principal labs aren't trying to prevent this - see the unpopular reining in of GPT-4o.
They are absolutely not trying to prevent this. They made GPT-5 more sycophantic because of user backlash. It became less useful for certain tasks because they needed to keep their stranglehold on the crazy user base who they hope to milk as whales later.
Well, children are banned from driving cars, for instance, and I don't think anybody really has issues with that. But the current laissez-faire attitude is killing people; this seems bad.