Could Google theoretically calculate the probability that some text on a webpage is extremely hard or impossible for humans to see, based on information in the page source about the background and the text?
Well, I said probability because, depending on how complex the background is (photographs, video, or some odd scrolling behavior), there could be hindrances to having a computer determine this accurately or in a computationally efficient way.
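For the simple case of a solid background color, the check is cheap: compute the WCAG contrast ratio between the text color and the background color and flag anything near 1:1. Below is a minimal sketch of that idea in Python; the `probably_hidden` helper and its threshold are purely illustrative assumptions, not anything Google has published, and the hard part the comment above points at (photographic backgrounds, video, scroll effects) would still require actually rendering the page.

```python
# Minimal sketch: flagging text that is likely invisible against a solid
# background, using the WCAG 2.x relative-luminance and contrast-ratio
# formulas. Complex backgrounds (images, video, overlapping layers) are
# exactly the cases this simple approach cannot handle.

def relative_luminance(rgb):
    """WCAG relative luminance for an sRGB color given as 0-255 integers."""
    def channel(c):
        c = c / 255.0
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (channel(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg, bg):
    """Contrast ratio between two colors, from 1.0 (identical) to 21.0."""
    l1, l2 = sorted((relative_luminance(fg), relative_luminance(bg)), reverse=True)
    return (l1 + 0.05) / (l2 + 0.05)

def probably_hidden(fg, bg, threshold=1.2):
    """Flag text whose contrast against the background is close to 1:1.
    The threshold here is an arbitrary value chosen for illustration."""
    return contrast_ratio(fg, bg) < threshold

# White text on a white background: ratio 1.0 -> flagged.
print(probably_hidden((255, 255, 255), (255, 255, 255)))  # True
# Black text on white: ratio 21.0 -> not flagged.
print(probably_hidden((0, 0, 0), (255, 255, 255)))        # False
```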
Google uses filters, which run at different intervals apart from the main algorithm and can affect either a single page or the whole website. This is why spammy SEO tactics often "work for a while" - until the computationally intensive filter catches them and often punishes the whole website. Prominent examples of these filters are Panda [0] and Penguin [1].
Yeah, and I don't understand why people still do that. It may work. For like a couple of weeks, before you get banned forever.
It's even easier to do things the right way, especially in a one-page design. Just leave the logic up above, then put a long, well-written text below it about math, the importance of brain exercise, and a description of the game, with well-divided H1/H2 headings. That's all the SEO you need. The link from HN and all the sites that scrape it will do the rest.