
Safety is a valid concern in general, but avoidance is not the right way to approach it. Democratizing access to such tools (and developing a somewhat open ecosystem around them) for researchers and the general public is the better way, IMO. That way, people with more knowledge (not necessarily technical; philosophers, for example) can experiment, explore this space, and guide its development going forward.

Also, the base assumption of every prospering society is a population that cares about and values its freedom and rights. If a society drifts toward becoming averse to learning about these virtues, there will be consequences. And yes, we are heading that way: look at the current state of politics, wealth distribution, and labor rights in the US. People would have been far more resentful of this in the 1960s or '70s.

The same is true of AI systems. If the general public (or at least a good percentage of researchers) studies them well enough, they will force alignment with true human values. By contrast, censorship, less equitable or harder access, and delayed evaluation are really detrimental to this process: more sophisticated and hazardous models will be developed without any feedback from intellectuals or society, and those misaligned models can then cause a lot of harm in the hands of a rogue actor.
