I wish this was available as a tool for people to use! It's neat to see their list of pregenerated examples, but it would be more interesting to be able to try things out. Personally, I get a better sense of the powers and limitations of a technology when I can brainstorm some functionality I might want, and then see how close I can come to creating it. Perhaps at some point someone will make an open source version.
I wish so too! I don't expect them to release the code (they rarely do), and they wield their usual "it might have societal impact, let us decide what's good for the world" line:
> We recognize that work involving generative models has the potential for significant, broad societal impacts
The community did rise to the challenge of re-implementing their work (sometimes better) in the past, so I'm hopeful.
I don't think the goal is for them to "decide what's good for the world". You can classify the disruptiveness/risk of a piece of tech fairly objectively.
Delaying release is to give others (most clearly social media) time to adjust and ensure safety within their own platforms/institutions (of which they are the arbiters). It also gives researchers and entrepreneurs a strong motivation of "we have to solve these risk points before this technology starts being used". While there are clearly incentive issues and gatekeeping in the research/startup community, this is a form of decentralized decision-making.
I don't see a strong case for why access should be open-sourced at announcement time, especially if the work is reproducible. Issues will arise when their tech costs billions of dollars to train, making it impossible to reproduce for 99.99% of labs/users. At that point, OpenAI will have sole ownership and discretion over their tech, which is an extremely dangerous world. GPT-3 is the first omen of this.
They note that the model has 12B parameters, which in order of magnitude sits right between GPT-2 and GPT-3 (1.5B and 175B respectively). With some tricks, you can run GPT-2 on good personal hardware, so this might be reachable as well with the latest hardware.
EDIT: I'm assuming you mean for inference; for training it would be another kind of challenge and the answer would be a clear no.
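For a rough sense of whether 12B parameters fits on personal hardware, here's a back-of-the-envelope estimate of the weight memory alone (a sketch: the parameter count is from the announcement, the precisions and everything else are my assumptions; activations and runtime overhead would add more on top):

```python
# Rough memory-footprint estimate for 12B parameters, weights only.
# Actual requirements depend on architecture, runtime, and activations.
PARAMS = 12e9  # parameter count quoted in the announcement

BYTES_PER_PARAM = {
    "fp32": 4,
    "fp16": 2,
    "int8": 1,  # assumes quantization is tolerable for inference
}

for dtype, nbytes in BYTES_PER_PARAM.items():
    gib = PARAMS * nbytes / 2**30
    print(f"{dtype}: ~{gib:.0f} GiB just for the weights")
```

At fp16 that's roughly 22 GiB of weights, which is beyond a single consumer GPU of today but plausible with quantization, CPU offloading, or splitting the model across cards.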