I've seen interesting recent papers on reasoning models with training costs ranging from US$6 to US$4,500. The problem is that you need a lot of fast RAM for efficient training. But you can do some limited fine-tuning (QLoRA, etc.) of models up to about 14B parameters on a 24 GB graphics card, and full fine-tunes of 1.5B models.
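A rough back-of-the-envelope check of why a ~14B model fits in 24 GB under QLoRA (the numbers are approximations I'm assuming: 4-bit quantized base weights, bf16 LoRA adapters on roughly 0.5% of parameters, Adam states only for the adapters):

```python
# Rough VRAM estimate for QLoRA fine-tuning of a 14B-parameter model.
# Assumptions (approximate, not measured): 4-bit base weights (0.5 bytes/param),
# bf16 LoRA adapters on ~0.5% of parameters, Adam keeping two fp32 moments
# per trainable (adapter) parameter. Activations and KV cache are extra.

def qlora_vram_gb(n_params, lora_fraction=0.005):
    base_gb = n_params * 0.5 / 1e9              # 4-bit weights: 0.5 bytes/param
    adapter_params = n_params * lora_fraction   # only adapters are trainable
    adapter_gb = adapter_params * 2 / 1e9       # bf16: 2 bytes/param
    optimizer_gb = adapter_params * 8 / 1e9     # Adam: 2 fp32 moments/param
    return base_gb + adapter_gb + optimizer_gb

total = qlora_vram_gb(14e9)
print(f"~{total:.1f} GB before activations")    # ~7.7 GB, leaving headroom on 24 GB
```

The point of the arithmetic: quantizing the frozen base model dominates the savings, and because only the small adapters are trained, the optimizer state (which is what blows up full fine-tuning) stays tiny.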
It's very affordable for a small university research group. And not totally out of reach for hobbyists.