
Verifying a candidate solution for these problems is relatively easy, so wrong answers aren't so bad.
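A minimal sketch of why verification is cheap for integration (using sympy; the helper name is my own, not from the paper): checking a candidate antiderivative only requires differentiating it and simplifying the difference, which is far easier than finding the integral in the first place.

```python
import sympy as sp

x = sp.symbols("x")

def is_antiderivative(candidate, integrand):
    """Check symbolically whether d/dx(candidate) equals integrand."""
    return sp.simplify(sp.diff(candidate, x) - integrand) == 0

# x*sin(x) + cos(x) is an antiderivative of x*cos(x) ...
print(is_antiderivative(x * sp.sin(x) + sp.cos(x), x * sp.cos(x)))
# ... but x*sin(x) alone is not.
print(is_antiderivative(x * sp.sin(x), x * sp.cos(x)))
```

Note that simplification-based equality checking is itself only semi-reliable in general, but for typical integration outputs it works well in practice.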


I understand.

To explain: the thing that's super interesting to me about this paper (i.e., "strong result" vs. "best paper contender") is not integration per se. It's the possible applications of the method to problems with much, much, much higher computational complexity than integration. On those problems, validating the correctness of a solution is also intractable. In those cases, a sound function approximation approach would be an absolute game changer for symbolic methods.

(Not that integration isn't interesting as well.)


How are they going to generate training data if verifying solutions is hard?


Some of these decision problems have thousands of examples because they correspond to industrially relevant problems. So, not automatically generated all at once, but gleaned from people who have been using CAS for decades to solve specific problems.

Still, I fear the numbers (mere thousands) are currently too small to get past the information bottleneck. We'll see.


Are these gathered in one place anywhere? I and probably many others, including the authors of this paper, would be interested in these as a test set for models like this.



