Great question. It's a long answer and it gets sort of involved.
1) It is easier to get adoption of A/B testing -- which many people have heard of, which many agree that they should be doing, and which captures substantially all of the benefits of bandit testing -- than bandit testing, at the typical company. e.g. If I go to a software company's CMO and say "Do you know what A/B testing is?", and the answer is "No", then that CMO is not quite top drawer. "No, and there is no reason why I should know that" is a perfectly acceptable answer for bandit testing.
2) There are some subtleties about actually administering bandit tests, for example in how tests interact with each other or with exogenous trends in your traffic mix, which sound like they could cause operational nightmares (there's a toy sketch of the trend problem at the end of this answer). A/B testing does not have 1-to-1 analogues to these problems, and many of the theoretical problems with A/B testing are addressable in practice via e.g. good software and good implementation practices, both of which exist in quantity.
3) A/B testing has vastly better tool support than bandit testing, which currently has one SaaS startup and zero OSS frameworks that I am personally aware of.
4) On a purely selfish note which I'd be remiss in not mentioning, I'm personally identified with A/B testing in a way that I am not with bandit testing.
5) Again, convincing people to start A/B testing will be better 100 times out of 100 than failing to convince people to start bandit testing, which is the default result. Consider everything A/B testing has going for it operationally at software companies in August 2013, then look at the empirical results: very few companies actually test every week.
(There is also a zeroth answer, which is "I have reviewed the arguments for doing bandit algorithms over A/B testing and frankly don't find them all that credible," but for the purposes of the above answers I assumed that we both agreed bandit was theoretically superior.)
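To make the trend problem in #2 concrete, here is a toy simulation. It is entirely my own sketch with made-up numbers, using the simplest bandit strategy there is (epsilon-greedy) as a stand-in for bandit testing generally: two identical page variants, with the site-wide conversion rate decaying from 10% to 2% over the course of the test (think of a seasonal shift in your traffic mix). Each variant's naive conversion estimate is an average over when that variant happened to receive traffic, so a bandit's adaptive allocation entangles "which variant" with "when"; a fixed 50/50 split samples both variants identically through time.

    # Toy simulation: two IDENTICAL variants under an exogenous trend.
    # All numbers (10% decaying to 2%, epsilon = 0.1) are made up for
    # illustration.
    import random

    random.seed(1)
    HORIZON = 50000

    def base_rate(t):
        # Exogenous trend affecting both variants equally.
        return 0.10 - 0.08 * t / HORIZON

    def simulate(policy, eps=0.1):
        shown, hits = [0, 0], [0, 0]
        for t in range(HORIZON):
            if policy == "ab":
                arm = t % 2                    # fixed 50/50 split
            elif random.random() < eps or 0 in shown:
                arm = random.randrange(2)      # explore (and initialize)
            else:                              # exploit the apparent leader
                arm = max((0, 1), key=lambda i: hits[i] / shown[i])
            shown[arm] += 1
            # Both variants convert at the SAME time-varying rate.
            hits[arm] += random.random() < base_rate(t)
        return [hits[i] / shown[i] for i in (0, 1)], shown

    for policy in ("ab", "bandit"):
        rates, split = simulate(policy)
        print(policy, "estimates:", [round(r, 3) for r in rates],
              "traffic split:", split)

In runs of this sketch, the 50/50 split's two estimates agree closely, while the epsilon-greedy bandit tends to report a gap between identical variants and a lopsided traffic split, with the "winner" depending on the random seed. That is the confound: the bandit's own allocation decisions determine when each variant gets sampled, so trend plus adaptivity can manufacture a difference where none exists.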