Ok... First, a disclaimer: Yes, I program test-first almost all of the time (acceptance tests first, unit tests later), and yes, I do consider this approach better than not writing the tests early. (Just about every nontrivial test fails somewhere between zero and three times during development, and every one of those failures would have been a headache later on.)
So, what are his points?
- Extra, often useless up-front development
This paragraph just makes less and less sense the more I read it. I think he is ranting against testing spikes, because spikes get thrown away. Well, you know what? Spikes are not tested. Spikes are written to see what is possible and are thrown away afterwards.
- Development with blinders on
Well, his point is: you might develop the wrong thing with TDD. Well, duh. If I write my tests in the wrong direction, then my code gets written in the wrong direction. Fine. Throw the tests away. What remains is: my code was written in the wrong direction. Where are the tests evil in that?
- Tests are weighted more than the code
Here he shows that he has pretty much no clue about the second purpose tests serve (besides checking the code a bit): interface design. By using your unit in a unit test, you design its interface, and it is usually easier to arrive at a nice, usable interface because you are already using it. So if you write the test first, you first think about the interface of a unit (or at least a small part of it), and only later do you implement it. Is it really bad to think about nice, clean interfaces first?
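To make that concrete, here is a minimal sketch of what I mean (ShoppingCart and all of its methods are made up for illustration, they are not from his article): in test-first style the test is written before the class exists, and simply by writing the calls I want to make, I am designing the interface.

```python
import unittest

# Hypothetical sketch: ShoppingCart does not exist yet when this test is
# written. Every call below is a decision about the constructor, add(),
# apply_discount() and total() signatures, made from the caller's side.
class TestShoppingCart(unittest.TestCase):
    def test_total_with_discount(self):
        cart = ShoppingCart()
        cart.add("book", price=10.0, quantity=2)
        cart.apply_discount(percent=10)
        self.assertAlmostEqual(cart.total(), 18.0)

# The implementation only comes afterwards, shaped by the interface above
# (included here just so the sketch is runnable as-is).
class ShoppingCart:
    def __init__(self):
        self._items = []
        self._discount = 0.0

    def add(self, name, price, quantity):
        self._items.append((name, price, quantity))

    def apply_discount(self, percent):
        self._discount = percent / 100.0

    def total(self):
        subtotal = sum(price * quantity for _, price, quantity in self._items)
        return subtotal * (1.0 - self._discount)

if __name__ == "__main__":
    unittest.main()
```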
- Coding yourself into a corner
This is another of those points of the kind 'Your development practice is bad, because you might end up in a dead end'. I hate those points, because they are so general that they can be made about ALL development practices, unless you are god.
- Narrowly applicable uses
I can 'just' test-drive my libraries, data structures and such? Well, that is a very, very bad 'just'. I watched my brother go nuts while he was working on a compiler with several optimizations heavily based on binary decision diagrams. Eventually, I was able to convince him to just write a bunch of automated tests for this data structure. He wrote them, found several bugs, and everything was done in a few days.
So, to reiterate: testing the foundation of your application can be of gigantic value, far, far more than his 'just' allows, because if the foundation is broken or shaky, things break down in unexpected and hard-to-debug ways.
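For illustration, here is a sketch of the kind of foundation test I am talking about (FastSet is a hypothetical stand-in, not his actual BDD code): hammer the clever data structure with random operations and compare every answer against a trivially correct reference implementation.

```python
import random
import unittest

class FastSet:
    """Stand-in for the real optimized structure under test (hypothetical)."""
    def __init__(self):
        self._data = set()
    def add(self, value):
        self._data.add(value)
    def discard(self, value):
        self._data.discard(value)
    def contains(self, value):
        return value in self._data
    def items(self):
        return list(self._data)

class TestFastSetAgainstReference(unittest.TestCase):
    def test_random_operations_match_builtin_set(self):
        random.seed(42)                          # reproducible failures
        fast, reference = FastSet(), set()
        for _ in range(10_000):
            value = random.randrange(100)
            op = random.choice(("add", "discard", "contains"))
            if op == "add":
                fast.add(value)
                reference.add(value)
            elif op == "discard":
                fast.discard(value)
                reference.discard(value)
            else:
                # every query must agree with the trivially correct reference
                self.assertEqual(fast.contains(value), value in reference)
        self.assertEqual(sorted(fast.items()), sorted(reference))

if __name__ == "__main__":
    unittest.main()
```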
- Tests solve problems that don't exist
His point: tests don't prevent all bugs in a piece of software. Again, a point I can only answer with 'well, duh'. The only thing that could prevent all bugs would be a correct formal verification of the program (which is surprisingly hard). So this point, once more, simply holds for ALL real-life programming practices.
- Scope creep
Point: separation of concerns is bad? Requirements management is not well done in TDD? Well, if his point is the first one, he disqualifies himself completely. If his point is the second one, he is saying that a code-creation strategy has no clue about managing requirements. Well, duh? Managing requirements is a management concern, TDD is a coding concern, so they are entirely orthogonal.
- Inefficiency
Yes, I have to give him this point. If you have tests, reworking things can take longer when the product changes radically. It has to be noted, though, that such radical changes usually stem from faulty requirements engineering, which will cause major problems with every other development method as well.
- Impossible to develop realistic estimates of work
His point: TDD has no clue about estimations. Well, duh. Estimation is, in the world of Kent and Extreme*, a management and planning problem, and a development strategy has no clue about planning and managing.
Don't get me wrong, I don't want to be one of those "do TDD or I whack you with a flyswatter" guys. I am just a humble fellow programmer who has learned that writing the right tests for the right components early can be a blessing. Certainly, writing the wrong tests or writing too many senseless tests can be bad, I give you that. Tests cannot make all bugs disappear, I give you that (I already had to track down some really nasty bugs that occurred due to subtle assumptions which were not obvious from the tests). I also give you that you might write more code, and I give you that some things cannot be tested.
But please, give me that my tests catch a lot of stupid errors, like off-by-one errors or typos. That my tests help me define the interface I want to use. That my tests tend to encourage modularity, because modularity increases testability. And that user stories, and the acceptance tests for them, help guide me toward what to develop (and yes, once you have the user story and the acceptance test, it is race-horse style ;) ).
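As a tiny, made-up illustration of the off-by-one case (paginate is not from the post, just the kind of slicing code that is easy to get subtly wrong, e.g. by using `size - 1` or overlapping page boundaries), a few boundary-case assertions catch that immediately:

```python
import unittest

def paginate(items, size):
    """Split items into consecutive pages of at most `size` elements."""
    return [items[i:i + size] for i in range(0, len(items), size)]

class TestPaginate(unittest.TestCase):
    def test_boundaries(self):
        self.assertEqual(paginate([1, 2, 3, 4, 5], 2), [[1, 2], [3, 4], [5]])
        self.assertEqual(paginate([1, 2], 2), [[1, 2]])   # exact multiple of size
        self.assertEqual(paginate([], 2), [])             # empty input

if __name__ == "__main__":
    unittest.main()
```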
I don't think you can really give him a point for the inefficiency remark; if something radically changes and you don't have any automated tests, then you're going to have to manually test the changed code and anything related that it impacts. If the change is something core to the app, that means manually testing the whole thing.
If you had a bunch of tests you could just re-run, then re-testing the entire system becomes trivial (it's a given that you'd need to rewrite the tests for the changed parts, but if your tests are written well, then you won't need to rewrite them all).