"frontier" as in the frontier of using everybody else's code, books, and art for a purpose that was never intended. Not even open source projects ever imagined LLMs becoming a thing, and their licenses reflected as much.
Not saying we shouldn't be careful with AGI. But the glib tone of "who cares if these companies die?" is where one needs to consider the consequences of AGI not happening or being delayed.
I struggle with the idea that AGI (which I don't think is coming via LLMs, but sidebar) will improve the outcomes of lives and not end up as a tool of privilege and control.
Pitch me on this utopian outlook, because nothing about any of the frontier companies points away from dystopia to me.
I thought we gave up on AGI and turned to making sex chatbots and simulated porn instead. Wasn't that what Sam was pushing for all along when he went all in on Sora and the erotic modes?
- you are assuming that an AGI will prevent more deaths than it would cause
- you are assuming that AGI is just around the corner and that scaling up language models is the path to get there
- you can make this argument about basically anything (nuclear power, tuberculosis medication, free healthcare). I’d say the burden of proof is on you to back up your extraordinary claim with extraordinary evidence.
> every delay to AGI results in deaths that AGI could have prevented
Sure, that's what AGI would be used for /s
In other news, we are not even close to AGI, and even with the current experimental technology, frontier AI model companies are already competing to help departments of war, the activity that actually causes the most deaths. What makes you think AGI wouldn't be used the same way, leading to the same millions of deaths?