There are obviously many reasons, but to add a couple:
Fight regression
You don't want to fix the same bug several times, do you?
When a bug is found, I first write a test to repeat the bug. Then I fix the code. Now every time I want to release a new version, that particular bug is tested yet again.
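That workflow can be sketched as a minimal regression test in pytest style (the function, the bug, and the names here are all hypothetical):

```python
# Hypothetical bug report: parse_price("") raised ValueError instead of
# returning None. Step 1: write a test that reproduces the bug.
# Step 2: fix the code. The test now guards every future release.

def parse_price(text):
    """Parse a price string like '19.99' into a float, or None if empty."""
    if not text.strip():
        return None  # the fix: empty input is not an error
    return float(text)

def test_empty_price_regression():
    # Reproduces the original bug report: empty input must not raise.
    assert parse_price("") is None

def test_normal_price_still_works():
    assert parse_price("19.99") == 19.99
```

Once this is in the suite, reintroducing the bug fails the build immediately instead of waiting for a second bug report.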
The 'quick fix' 5 minutes before launch
You're approaching deadline, everything is looking fine. Then, just before launch, a small bug or a tiny new feature has to be added. You can't imagine that would have side effects, can you? Well, that one bit of code, a small module that 'never changes', which you haven't looked at for 12+ months, reacts badly to this change. With good tests in place you catch this before you hit production problems.
The universe tends towards maximum irony, and you are seriously underestimating that maximum. Even with 100% test coverage, fixing something 5 minutes before launch is an awful idea. Either delay the launch, or accept that the bug is small enough to not matter.
Make refactoring easier - I quite often write tests for legacy code if I'm going to refactor it. It's the only way I can know it's still doing the same thing. Sadly, this sometimes means doing some blind (test-less) refactoring to make testing possible, but that's sometimes unavoidable... And still far better than doing the whole thing blind.
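One common shape for this is a characterization test: before refactoring, pin down whatever the legacy code currently does, warts and all (the function below is made up for illustration):

```python
def legacy_slugify(title):
    # Imagine this is untouched legacy code we want to refactor.
    return title.strip().lower().replace(" ", "-")

def test_characterize_slugify():
    # Pin the current behaviour before refactoring, correct or not.
    assert legacy_slugify("  Hello World ") == "hello-world"
    assert legacy_slugify("Already-Slugged") == "already-slugged"
```

The point is not that the behaviour is right, only that the refactored code still does the same thing.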
Have more confidence - I used to dread the day we pushed things live. I would actually cringe while it happened, and just wait for the shit to hit the fan. The more tests I have, the less I cringe and the more I enjoy releasing new, better code.
I am relatively new to HN and have _read about_ but not _seen_ an anti-testing mindset around here. I imagine this article will stir that debate if it exists.
Can anyone tell me why they would not want to write tests? I have no biases here.
What the research team found was that the TDD teams produced code that was 60 to 90 percent better in terms of defect density than non-TDD teams. They also discovered that TDD teams took longer to complete their projects—15 to 35 percent longer.
So, as with all classic engineering debates -- emacs vs vi, MySQL vs MongoDB, et cetera -- there is no definitive answer to the question "should we write these tests or not?", because it's an engineering tradeoff. Writing tests and then throwing them away is foolish: you wasted time. Not writing tests and then fixing 60-90% more bugs is foolish: bug fixing is a waste of time, often costing more than it would have cost to avoid the bugs up-front, and risking breakages in the process.
Which waste of time is better for you? It depends on the business situation. There are situations in startups where it's not worthwhile to write tests. Indeed, there are situations where it is a mistake to write code at all: use a big pile of post-it notes and a cheap outsourced worker to mock up the system while you prove that you actually need it. Once you've found your business plan, however, the more typical mistake is to write too few tests, which is why there's a big popular movement that promotes writing lots of tests: there are more people with too little testing than people with too much testing.
I assume that “not wanting to write tests” comes up when the tests are more complex than the code under test; for example, writing tests for a GUI application built on a closed-source platform framework is complex, if it is possible at all.
You can write tests for the core algorithms and data structures, but that is often only around 10% of the code.
I do not have a solid stance, and I am inconsistent about writing tests. I like to differentiate between hand-testing code and automated-testing code. I firmly believe code written should be verified somehow, but I have not totally bought into writing automated tests. Right now, there is a lot of snake oil about what is better than what, and I would like to see good scientific studies that point in solid directions.
Some of the blur is caused by different kinds of programmer experience. We have to work with the "art" and "creativity" variable here, along with "has seen a similar pattern before". The other problem is asking the right question: What kinds of testing can create a consistent and solid improvement in acquiring money (or some other important pointy-haired boss measurement) over a 10-year period of time?
There are things that can make us feel good, nice placebos. For example, 100% test coverage is known to not equate to good, working, maintainable code: you may be missing features or doing things completely wrong, but your tests validate that your wrong thing or lack of thing is working.
Is there a magic bullet? Probably not... but it is widely believed (and probably justifiably so) that automated testing helps somehow. People will expound on the benefits. Personally, I want numbers. I want to know the pain before the tests and the lesser pain afterward. I want to know confidently that the time spent writing tests as part of development is less than the time spent discovering and fixing problems. Note the careful wording there.
I agree; I like the idea of Selenium for web GUIs. I remember wanting to write my own similar tester for Win32 GUIs a while back, but I ultimately did not see enough value in it. For the same reason, I do not use Selenium: you have to completely "rewrite" the tests to adapt to small interface changes. So I wrote up test plans for humans to run, and they could adapt to the changes; it was faster than writing an AI. ;)
At an old job, we had what was called "random testing". This was a time to relax and take a few hours to pound at the application with whatever we could think of. With luck, someone would go ahead and attempt to automate those tests. One of our favorite "random" tests was to start some complicated process and then shake the window all around the screen to see if it would crash -- sometimes it did! It forces people to rethink their threading approaches. Another fun trick is closing an app mid-processing; does it exit gracefully?
If the project is new and the code is changing frequently, tests slow me down and get thrown away frequently. So I don't write them until the app is on a stable path.
Can anyone tell me why they would not want to write tests?
Laziness, ignorance, rapid prototyping, no budget, more important problems... all the obvious reasons. I can't think of any "good" reasons from an engineering perspective, only business.
Rapid prototyping is the best anti-testing reason I've heard yet. I even halfway agree with it.
But in my experience, even prototypes can benefit from it.
For me, testing actually makes the whole thing go faster. I spend less time worrying and more time developing, because I have the confidence that the rest of my code works exactly like I want it to.
Totally agree. We're following a lean startup approach and have identified our MVP and roadmap. Unit testing allows us to ensure our MVP works and that we have a reliable way to add features per our roadmap and still have a viable product. This sets the stage for us to rapidly and reliably enhance the UI, as well, while still shipping on time and getting customer feedback.
Rapid prototyping gets my vote. I'm working on a prototype for a new web service that's attempting to simplify a very complicated industry (where things are evolving at a rapid pace). I've noticed that we never have any time to create test cases; however, I am working with two others who hire outsiders to perform real user tests, which works well enough for the time being. But testing only a few users, with a minimal number of requests, doesn't really tell you whether your application can handle the load of multiple users. From a functionality standpoint user testing is good, but from a stress-testing, go-live standpoint it's not a good indication of where you stand.
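The load question can at least be smoke-tested cheaply with a throwaway concurrency script; the handler below is a stand-in (in practice you would hit a staging URL with an HTTP client):

```python
from concurrent.futures import ThreadPoolExecutor

def handle_request(user_id):
    # Stand-in for a real request handler; replace with an HTTP call
    # against a staging environment in a real smoke test.
    return {"user": user_id, "status": "ok"}

def smoke_load_test(n_users=100):
    # Fire n_users requests across 20 worker threads and count failures.
    with ThreadPoolExecutor(max_workers=20) as pool:
        results = list(pool.map(handle_request, range(n_users)))
    failures = [r for r in results if r["status"] != "ok"]
    return len(results), len(failures)

total, failed = smoke_load_test()
print(f"{total} requests, {failed} failures")
```

It proves nothing about production capacity, but it catches the class of bug that five manual testers never will.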
If you're paid by the hour and your client wants you to take the least possible time. It's stupid, I agree. You can try to argue with them about the importance of it, but in the end, it's their money, so it's their choice.
I am not a Randroid or very libertarian but I like Howard Roark's way of dealing with clients: you ask me to build X, and I will do it the best way I know how. If you want so damn much control over the how then you build it.
It irks me when clients tell me how I should do the thing they're paying me to do. It seems counterproductive and is insulting.
Hmm, it's more: I can do it in x hours the best way I know how, or I can do it in y hours by writing crappy code. So, in the end, if you tell your client that 10 hours is not enough to build quality software but he still insists, well, it's his money.
It's a little bit like eating at a restaurant when you've got $20. The cook could tell you there's something really good and way better at $50... but if you're happy with something less good at $15, well, it's your body and your money. (I know that development is quite different and that in the long term tests will save money, but depending on the project, that's not always the case.)
I don't even ask anymore. If I'm writing an app for you, I'm starting with tests, before I even write a line of executable code. If you don't want tests for your app, you'll need to find a different developer. It's not an optional part of the process, it's how the app is developed.
> In a way, using tests to check code is like designing two trucks and having them pull against each other; if nothing breaks, you know they both work.
('bad metaphorical thinking' Dijkstra alert)
This is wrong. If the tests pass, it is entirely possible that both the original code and the tests are faulty.
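A minimal made-up illustration: when the code and the test share the same faulty assumption, both trucks pull in the same wrong direction and nothing breaks (the function and numbers here are invented):

```python
# Both the code and its test embed the same off-by-one mistake: the
# author believed day ranges should count both endpoints.

def days_between(start, end):
    return end - start + 1  # faulty: counts both endpoints

def test_days_between():
    # Written from the same wrong mental model, so it passes.
    assert days_between(1, 3) == 3  # the correct answer is 2
```

The passing suite tells you the code matches the author's expectations, not that the expectations were right.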
I'd love to see an article on unit testing that gives reasons they add business value, so hackers like us can convince pointy headed bosses that we should be allowed the budget to write them.
"Improving the quality of our software" is the one I tend to use, but it's so vague. How unit testing will convert into cold, hard cash is what upper management wants to know.
Feigning incomprehension, I will ask, “Why do you need budget to write them?”
If I am estimating how long the work will take, I am estimating how long it will take to get the functionality working correctly with X% confidence it works correctly when the product ships. Unless “X” is less than 50%, automated testing is part of writing the code, not a nice-to-have.
“Improving the quality of our code” does sound vague, kind of like arguing over how many knots are permissible in a plank of wood destined for a kitchen floor. Instead, I would simply explain that the tests I write are part of writing the code, and that if management wants the code without the tests, they need to tell me in writing that they are OK with a confidence level hovering around 50%.
I am not kidding about this number. I may deliver code that appears to do what it is supposed to do, but my experience is that either it is broken in ways that a simple QA test doesn’t show, or somebody (somebody else or even me next week) is going to regress it between now and when the product ships. Automated tests give me the confidence that it works, works correctly, and will continue to work correctly.
You don't give reasons for using an ORM in your code, do you? You just do it as part of your development work. It's nothing that gets questioned by PHBs.
Writing tests is (or should be) part of your development work.
Make sure you include the time it takes to write the tests in your "time estimate".
Sadly, as with marketing, it's really hard to give definite numbers for testing. Or for any other good programming practice, for that matter.
The best you might do would be to come up with examples of people who used unit testing and things came out beautifully because of it. Things rarely turn out that beautifully without testing, so it can be pretty convincing to anyone who has dealt with software development for any length of time.
It's hard enough to convince them faster computers add value. It's almost like they have never heard of the concept of Return On Investment. It's all about cutting costs and not about increasing profitability. We resorted to being very vocal about how much time they were paying us to sit and wait. Our non-tech wrangler got the picture.