I also worked closely with Jack Clark at OpenAI on all these issues, back when he was CTO in 2018, before he disappeared.
There are literally zero “AI labs” that have ever cared about “safety”: none of them has ever done anything tangible, in any independent, auditable, third-party way, with a defined reference baseline for what is safe and what is not, how to evaluate it, or practitioner’s guidance for how a designer determines what is and is not safe.
They follow the same rules as every other technology platform: do as much as you can legally get away with, no more, no less.
I say this as somebody who has been actively involved in the AI “safety” debate for a long time now, since at least 2013.
The concept itself doesn’t even make sense if you fully understand the intersectional scope of technology and society
Society’s demands are the things that are unsafe, not the technologies themselves.
As Bertrand Russell said, “as long as war exists, all technologies will be utilized for it” - you can replace “war” with anything you think is unsafe.
> The concept itself doesn’t even make sense if you fully understand the intersectional scope of technology and society
> Society’s demands are the things that are unsafe, not the technologies themselves
The only “safe AI” is one that comes out of a “safe set of data”.
So what would a “safe set of data” actually have to look like?
Well, it would have to not look like the majority of the data we produce now, which has latent embeddings (primarily from the Common Crawl dataset) of racism, lying, competition, destruction, and domination.
I don’t believe humans are actually capable of making such data, because our entire structure of society is based on racism, competition, and domination.
> has latent embeddings (primarily from the Common Crawl dataset) of racism, lying, competition, destruction, and domination
But safety has a wider scope than “racism, lying, competition, destruction, domination”, like a model always requiring eye protection when asked about making lemonade.
> I don’t believe humans are actually capable of making such data because our entire structure of society is based on racism competition and domination
So this debate that's been going on since 2013 is over, because it's impossible to make an AI safe since the data is unsafe? That would make sense, but if it were a data problem, it seems like that conclusion could have been reached a long time ago.
https://standards.ieee.org/ieee/7010/7718/