
This is mad impressive, for sure. With most AI problems, I can at least comprehend the approach, and the necessary combination of models. Not so much here.

My first question: what is the input to the AI? Is it the raw pixel array of the display, or does it get API-level readouts of what's happening? Implementing the CV just to segment the display output in real time would be crazy enough on its own, so I would assume the latter.

I think this basically proves that any problem that can be exhaustively simulated is solvable now. This may mark a tipping point: once every problem for which a simulator exists (essentially infinite labels) is solved, the balance will tip back toward building faster and more accurate sims (think multi-scale, first-principles physics).
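To make the "infinite labels" point concrete, here is a toy loop using Gymnasium's CartPole as a stand-in simulator; the environment choice and random policy are illustrative, not from the thread. The point is just that a programmatic environment yields labelled experience for free, for as long as you care to run it.

  import gymnasium as gym

  # Any environment that can be stepped programmatically produces
  # unlimited (state, action, reward) training tuples at no labelling cost.
  env = gym.make("CartPole-v1")
  dataset = []
  obs, _ = env.reset(seed=0)
  for _ in range(10_000):  # in principle this loop never has to stop
      action = env.action_space.sample()       # placeholder policy
      next_obs, reward, terminated, truncated, _ = env.step(action)
      dataset.append((obs, action, reward))    # one fresh "labelled" example
      obs = next_obs
      if terminated or truncated:
          obs, _ = env.reset()
  print(len(dataset), "examples generated")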



Blizzard released a client whose output is machine-accessible while still preserving fog of war. See here:

https://deepmind.com/blog/deepmind-and-blizzard-open-starcra...
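For context, here is a minimal sketch of what that machine-readable interface looks like through DeepMind's open-source PySC2 wrapper; the map name, screen/minimap sizes, and the single no-op step are illustrative. The agent observes spatial "feature layers" (unit type, player_relative, and so on) rather than raw RGB pixels, and fog of war applies to them.

  from pysc2.env import sc2_env
  from pysc2.lib import actions, features

  # Start a game against an easy built-in bot and inspect one observation.
  with sc2_env.SC2Env(
          map_name="Simple64",
          players=[sc2_env.Agent(sc2_env.Race.terran),
                   sc2_env.Bot(sc2_env.Race.random,
                               sc2_env.Difficulty.very_easy)],
          agent_interface_format=features.AgentInterfaceFormat(
              feature_dimensions=features.Dimensions(screen=84, minimap=64)),
          step_mul=8,
          visualize=False) as env:
      timesteps = env.reset()
      obs = timesteps[0].observation
      print(obs.feature_screen.shape)   # (n_layers, 84, 84) feature stack
      print(obs.feature_minimap.shape)  # (n_layers, 64, 64)
      timesteps = env.step([actions.FUNCTIONS.no_op()])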

Here's what I want to know: were these agents developed from scratch à la AlphaZero in chess, or did they have to create a number of abstractions to get the AI to start learning the game? In the initial demonstration they could hardly get the AI to mine minerals or do anything. How did they make the jump to actually good play?


They mentioned that they initially used imitation learning on human replays.
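Imitation learning on replays is essentially behavioural cloning: supervised training of a policy to predict the human player's action given the observation. A toy sketch below; the observation/action sizes and the tiny MLP are made up for illustration and are not DeepMind's architecture.

  import torch
  import torch.nn as nn

  OBS_DIM, NUM_ACTIONS = 512, 64  # hypothetical sizes

  # Policy network: observation -> logits over the action set.
  policy = nn.Sequential(
      nn.Linear(OBS_DIM, 256), nn.ReLU(),
      nn.Linear(256, NUM_ACTIONS))
  opt = torch.optim.Adam(policy.parameters(), lr=1e-4)
  loss_fn = nn.CrossEntropyLoss()

  def train_step(obs_batch, action_batch):
      """One supervised step: predict the human's action from the observation."""
      logits = policy(obs_batch)
      loss = loss_fn(logits, action_batch)
      opt.zero_grad()
      loss.backward()
      opt.step()
      return loss.item()

  # Dummy tensors standing in for (observation, action) pairs from replays:
  obs = torch.randn(32, OBS_DIM)
  acts = torch.randint(0, NUM_ACTIONS, (32,))
  print(train_step(obs, acts))

Cloning gives the agent a sensible starting policy, after which reinforcement learning can improve on human play.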


It uses the open source API. Please re-watch the start of the presentation.



