
Much of what I did in my work was to reduce or constrain the search space.

1. Don't evolve constants or coefficients; use regression to fit them instead

2. Leverage associativity and commutativity: simplify with SymPy and sort the operands of add/mul into a canonical order
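A minimal sketch of point 1, assuming a hypothetical candidate structure `c0*x + c1*sin(x)` produced by the search: rather than evolving c0 and c1 as part of the genome, a single least-squares call (here SciPy's `curve_fit`) pins them down, so the evolutionary search only has to explore structures.

```python
import numpy as np
from scipy.optimize import curve_fit

def skeleton(x, c0, c1):
    # Hypothetical model structure proposed by the search; only the
    # constants c0, c1 remain to be determined numerically.
    return c0 * x + c1 * np.sin(x)

# Synthetic data with known coefficients (2.5, 1.3) plus small noise.
rng = np.random.default_rng(0)
x = np.linspace(0, 10, 200)
y = 2.5 * x + 1.3 * np.sin(x) + rng.normal(0, 0.01, x.size)

# One nonlinear least-squares fit recovers the coefficients directly.
(c0, c1), _ = curve_fit(skeleton, x, y)
```

This collapses the continuous part of the search space to a deterministic subproblem per candidate structure.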

So much effort in GP for SR is spent evaluating models that are effectively the same even though their "DNA" differs. The cost is both computational (redundant evaluations) and algorithmic (dealing with the resulting loss of population diversity, i.e. premature convergence).
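To illustrate the duplicate-model problem and the fix from point 2: two expression trees with different shapes (different "DNA") can encode the same function, and SymPy's expansion plus its canonical ordering of add/mul operands maps both to one form, so only one needs evaluating. The specific expressions here are made up for illustration.

```python
import sympy as sp

x, y = sp.symbols("x y")

# Two candidate "genotypes" with different tree shapes:
a = x * (x + y) + x * y      # Mul(x, Add(x, y)) plus Mul(x, y)
b = x * x + y * x + x * y    # three separate Mul terms

# Expanding and letting SymPy sort the operands of Add/Mul yields
# one canonical form: both reduce to x**2 + 2*x*y.
same = sp.expand(a) == sp.expand(b)   # True
```

Deduplicating on the canonical form before fitness evaluation avoids paying for structurally distinct but semantically identical individuals.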

I've seen a few papers since pick up on the ideas of local search operators, simplification, and regression while trying to maintain the evolutionary aspect. Every algorithm ends up in local optima and keeps working on effectively the same form by adding useless "DNA". I could see the PGE algorithm doing this too, going down a branch of the search space that did not add meaningful improvement. With the recent (~5y) advances in AI, there are some interesting things to try.


