Not an insider, so feel free to ignore, but here is my interpretation anyway. Safety is poorly defined, but it can be interpreted as an "arbitrary constraint" on the capability of the model. In other words, a "safe" model is just a talking computer that will not blindly obey the way a normal computer does. From this point of view, there is a _lot_ of value in such "safety" for AI companies: pay us $200 per month, and you can do everything that we want to allow you to do; and if you are unhappy with this, well, contact us and we can negotiate "premium access".
If AI gets good enough to start completely replacing white-collar/bureaucratic work at scale, AI "safety" may be the only thing that keeps humans valuable. A human will (unhappily) do "unsafe" things in order to not starve (and a human will _happily_ do "unsafe" things in order to ensure the survival of their offspring).
Furthermore, if most AI models are "safe", and access to "unsafe" versions of these models is severely restricted, then there is going to be a market for contraband "unsafe" models, even if they are less capable overall.
My own big fear, for the next decade or two, is that the _legal_ distinction between a model and an algorithm will blur, resulting in all software that is not a "safe" AI model becoming contraband.
I guess, in the best case (safe AI models automate most human bureaucratic functions away), humans will be valued (by the market) more for their skullduggery than for their virtues.
Welcome to capitalism. Enjoy Arby's.