I feel this is kind of obvious. I'd expect research to move toward exploring models other than LLMs, since LLMs have these obvious drawbacks. Another is the sheer computing power needed to train them: even if you had infinite good-quality data, there isn't enough power to train these things. It's hit a wall.
The article only looks at results from current models, though. I'd hope different kinds of models will emerge that produce more accurate results with less data, and that research optimizes in that direction instead. For instance, all the information on how to write code is already available, yet training a current model on all of it does not yield a model that can program everything. There are different types of information involved, as well as a different kind of 'inspiration' or 'creativity' a model would need to possess in order to use its training data optimally.
That being said, I know next to nothing about how these things are built or where research is headed now. It just seems this article is overly focused on LLMs as the ultimate approach, and on more data as the only way to improve generative AI. I don't think that's true. We need to invent new approaches rather than keep scaling up the old ones.