I spent the last day deep-diving into what I can do with MLX and local models. I still feel limited, because you have to use quantized models, but I think it's enough to do /something/, so I bit the bullet and pre-ordered just now. I'm driven partly by concern about memory market pressures over the next 1-3 years, and the feeling that it's a bit now-or-never.
128 GB maximum.
Sigh.