
A really long time ago, when I was in grad school playing with genetic algorithms and simulated annealing, I implemented something that seems awfully similar to quantum adiabatic annealing. It worked as follows:

1. Assign all states of each variable equal probability.

2. Sample the living crap out of the search space, evaluating an energy function for each sampled configuration and scoring each variable's sampled value by the Boltzmann weight of that configuration's energy.

3. Every so often, update the weights for selecting each variable's potential values using the scores accumulated for each variable during step 2.

4. Repeat steps 2 through 3 until each variable converges to a single state.
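For what it's worth, those four steps read a lot like what's now called an estimation-of-distribution algorithm (the cross-entropy method is a close cousin). Here's a minimal Python sketch of the loop as I understand it — the toy `energy` function, the uniform-mixing term, and all the sizes are my own placeholder assumptions, not the original code:

```python
import math
import random

random.seed(0)  # for reproducibility

# Toy separable energy with its minimum at value 3 for every variable;
# the real energy function would be whatever problem you're optimizing.
def energy(config):
    return sum((v - 3) ** 2 for v in config)

def boltzmann_anneal(n_vars=5, n_values=8, batches=50,
                     samples_per_batch=200, beta=1.0, mix=0.1):
    # Step 1: every value of every variable starts with equal probability.
    weights = [[1.0] * n_values for _ in range(n_vars)]

    for _ in range(batches):
        scores = [[0.0] * n_values for _ in range(n_vars)]
        # Step 2: sample heavily; credit each sampled value with the
        # Boltzmann weight exp(-beta * E) of its whole configuration.
        for _ in range(samples_per_batch):
            config = [random.choices(range(n_values), weights=w)[0]
                      for w in weights]
            boltz = math.exp(-beta * energy(config))
            for i, v in enumerate(config):
                scores[i][v] += boltz
        # Step 3: fold the accumulated scores back into the selection
        # weights, mixing in a little uniform mass to avoid premature
        # collapse (the mixing is my addition, not from the comment).
        for i in range(n_vars):
            total = sum(scores[i]) or 1.0
            weights[i] = [(1 - mix) * s / total + mix / n_values
                          for s in scores[i]]
    # Step 4 would loop until each variable converges; here we just
    # report the modal value of each variable after a fixed budget.
    return [w.index(max(w)) for w in weights]

best = boltzmann_anneal()
print(best, energy(best))
```

On this toy landscape the marginals concentrate around the optimum quickly, which matches point 1 below; since the update only tracks per-variable marginals, it also can't capture the couplings a spin glass needs, which matches point 3.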

I never published anything but I learned three things from this process.

1. It worked like gangbusters at finding the region surrounding the global optima of reasonably complex functions.

2. It worked like crap at refining really good solutions into really great ones.

3. It was horribly dependent on the underlying representation of the variables (e.g., if you mapped the input variables to a spin glass, it was just awful).

That, and I suspect somebody already has a fancy name for exactly what I did back then...




Sounds an awful lot like Monte Carlo to me as well.


From what I just read, it's not quite that; it seems almost halfway between Quantum Monte Carlo and a multi-armed bandit.



