
Chiefly, scale and accountability.

The work of a single person can be mitigated, and a person can be held accountable for their actions.

Much of our society operates on the idea that, for these reasons, we don't need to codify and enforce every single good or bad thing; having such an underpinning affords us greater personal freedom.



This does not actually answer the question of why it is bad (in your opinion) in the first place; it just states that bad things can be mitigated. I am looking for a concrete answer to the former, not a justification of the latter. The former is what AI opponents usually can never answer; they assume prima facie that AI is bad, for whatever reason.


I answered your question plainly, but I'll try to go into more detail. I suspect that you don't see this as the philosophical issue that AI detractors do, and perhaps that hasn't been clearly communicated to you in the answers you've received, leading to your distaste for them or confusion about why they don't meet your criteria.

I believe that this kind of generative AI is bad because it approximates human behavior at an inhuman scale and cannot be held accountable in any way. This upends the entire social structure upon which humans have relied to keep each other in check since the advent of the modern concept of "justice", beginning with the Code of Hammurabi.

In essence: Because you cannot punish, rehabilitate or extract recompense from a machine, it should not be allowed in any way to approximate a member of society.

This logic does not apply to machines that "automate" labor, because those machines do not approximate human communication; they do not pretend to be us.


Your argument can be applied to the printing press or the automatic loom. Before you say that AI operates at a much greater scale, I do not think it is any more at scale than cheaply producing billions of books and garments. If you instead say that AI is more autonomous than those machines, which required human operation, I will remind you that no AI today (and likely none in the future) produces outputs autonomously with no human input; indeed, many humans tweak those outputs further, making the process more like photo editing than an end-to-end solution. Even if an AI could perfectly read your mind and produce output end to end, you would still have to think first for it to do what you desire.

Should those machines then be subject to your same philosophy? I suspect you'd say "that's different" somehow, but that is only because you are alive at this moment and these machines have been normalized to you, so you do not care about them. Were you born a few centuries from now, you would likely feel about today's AI the way most people feel about those earlier machines; indeed, you'd be hard-pressed to find anyone who thinks a future generation's AI (probably simply called "technology" by then) is as problematic as you find today's. Recency bias is one hell of a drug.



