I doubt you'll ever again see a public release of a model as capable as this.
I can't say I fully understand the mechanisms by which they achieve that, but it's clear that the powers that be have decided that the public cannot be trusted with powerful AI models. Stable Diffusion and LLaMA were mistakes that won't be repeated anytime soon.
AI models with obvious, broad, real-world applications always seem to get reproduced in public. Nvidia's result is obviously great, but it's still a long way from being useful. It reminds me of image generation models maybe 4 years ago.
We need a killer application for video generation models first; then I'm sure someone will throw $100k at training an open source version.
I am going to guess: making "dog-nature-like" humanoids that aim to please lonely people. Companions that are nicer to be around than people, and easier than real relationships.
The current generation of GPT-3, which started with text-davinci-003, was actually released in November 2022, not 3 years ago. I'm not even sure the model that was released 3 years ago is still available to test, but it was much less impressive than more recent models. I wouldn't be surprised if LLaMA were actually better.
The model trained 3 years ago was only trained on 300B tokens, heavily undertrained by Chinchilla scaling-law standards, which is why LLaMA models can easily beat it on most benchmarks (they were trained on 1T to 1.4T tokens). As for the current GPT-3.5 models, who knows; OpenAI is not very open about it.
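To make the undertraining claim concrete, here's a back-of-the-envelope check using the commonly cited ~20-tokens-per-parameter rule of thumb from the Chinchilla paper (the exact ratio and GPT-3's parameter count are assumptions for illustration; the real scaling fits are more nuanced):

```python
# Rough Chinchilla-style sanity check. Assumption: the ~20 tokens per
# parameter heuristic for compute-optimal training.
def chinchilla_optimal_tokens(params: int, tokens_per_param: int = 20) -> int:
    """Approximate compute-optimal training tokens for a model size."""
    return params * tokens_per_param

gpt3_params = 175_000_000_000       # original GPT-3: 175B parameters
trained_tokens = 300_000_000_000    # actually trained on ~300B tokens

optimal = chinchilla_optimal_tokens(gpt3_params)
print(f"Chinchilla-optimal: ~{optimal / 1e12:.1f}T tokens")      # ~3.5T
print(f"Actually trained:    {trained_tokens / 1e9:.0f}B tokens")
print(f"Undertrained by roughly {optimal / trained_tokens:.0f}x")
```

By this estimate, the original 175B-parameter GPT-3 saw about a tenth of the tokens a compute-optimal run would use, which is why much smaller models trained on 1T+ tokens can match or beat it.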
The tragedy of the commons is at play here. We could get amazing ML models rivaling the best if interested people could pool together money for a $100k or $1 million training run. But there doesn't seem to be the willingness, so we humbly rely on the benevolence of companies like Stability and Meta to release models to the public.
Kickstarters like the one you linked don't suffer from the tragedy of the commons because people are essentially just pre-paying for a product. With funding an open source ML model, there's little incentive not to be a free-rider.
> but it's clear that the powers that be have decided that the public cannot be trusted with powerful AI models.
The most dangerous AI model today (in a practical sense, as people are actually using it for shady stuff) is ChatGPT, which is closed source but open to the public, so anyone can cheat on their exams, write convincing fake product reviews, generate SEO spam, etc.
The fact that a model is closed source doesn't change anything as long as it's available for use. Bad actors don't care about running the code on their own machine…
But they're still showing us that the results exist. They're trying to have it both ways, by showing the results are tangible progress while implicitly admitting that that progress is too powerful in the hands of the public.
Is there anything that incentivizes Nvidia to publish these results? Is it just the need to get papers out in public for the academic clout? Something tells me that all this accomplishes is setting expectations for everyone who sees the possibilities, that "this will be the future", and a third party without Nvidia's moral framework will become motivated to develop and openly release their own version at some point.
That's just good marketing, isn't it? "Our product is amazing! In fact, it's too good; no, you can't have it. Unless, just maybe, we might let you buy access to it." Oh wow, if it's so good that they won't let me have it, then I definitely want it!