The benefit of LLMs is that the checkpoint already "understands" a lot by default. Finetuning is relatively cheap and makes many tasks like this one perform decently well simply by feeding some data into the model.
The base checkpoint takes a lot of compute to produce, but that's what holds most of its "knowledge", so to speak.
Making a NN from scratch means you'll have to somehow map the cards into inputs. I have limited knowledge of how MTG works, but most TCGs have text descriptions and complex effects. Mapping text to logic is what LLMs are really good at; otherwise you're starting from scratch and will also need a relatively large amount of compute before the model displays any kind of decent behaviour.
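To make the "map the cards into inputs" problem concrete, here's a minimal sketch of the hand-rolled encoding a from-scratch NN would need. The field names and feature layout are assumptions for illustration, not a real scheme:

```python
# Hand-rolled card encoding for a from-scratch NN.
# Field names ("cost", "power", "text", ...) are assumed, not real MTG data.
def encode_card(card):
    # Fixed-size numeric vector: cost, power, toughness, plus crude
    # one-hot flags for a few effect keywords. Free-form rules text
    # ("deals 3 damage to any target") doesn't fit this mold, which
    # is exactly where hand-mapping gets painful.
    keywords = ["flying", "haste", "trample"]
    flags = [1.0 if kw in card.get("text", "").lower() else 0.0
             for kw in keywords]
    return [float(card.get("cost", 0)),
            float(card.get("power", 0)),
            float(card.get("toughness", 0))] + flags

vec = encode_card({"cost": 2, "power": 2, "toughness": 2,
                   "text": "Flying"})
# vec -> [2.0, 2.0, 2.0, 1.0, 0.0, 0.0]
```

Every new effect the encoder doesn't anticipate needs a new feature, whereas an LLM just reads the rules text as-is.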
It's also easy for most software devs to do this: finetuning mostly consists of collecting text and feeding it into a finetuning script. You don't need to know linear algebra, what a "convolution" is, etc. to finetune a model.
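A minimal sketch of that "collect text and feed it in" step, assuming hypothetical card records and the prompt/completion JSONL layout that many finetuning scripts accept (the exact format depends on your tooling):

```python
import json

# Hypothetical card records -- names and fields are assumptions for illustration.
cards = [
    {"name": "Lightning Bolt", "cost": "R",
     "text": "Lightning Bolt deals 3 damage to any target."},
    {"name": "Counterspell", "cost": "UU",
     "text": "Counter target spell."},
]

def to_training_example(card):
    # Turn one card into a prompt/completion pair a finetuning
    # script can consume.
    return {
        "prompt": f"Card: {card['name']} ({card['cost']})\nEffect:",
        "completion": " " + card["text"],
    }

def write_jsonl(cards, path):
    # One JSON object per line -- the common JSONL dataset layout.
    with open(path, "w") as f:
        for card in cards:
            f.write(json.dumps(to_training_example(card)) + "\n")

write_jsonl(cards, "cards.jsonl")
```

That's essentially the whole data-prep job; the finetuning script itself is usually an off-the-shelf tool you point at the JSONL file.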