But unit testing in itself didn’t sound like a bad idea… in some circumstances. Finding out what those circumstances were was pretty much impossible third hand, when they were being proposed as the floor wax and dessert topping of the programming world. I went through that argument here and realized that yes, mock objects could work, but it might be a slog to do it.
The heart of ATG Dynamo lies in Nucleus. Nucleus is a very powerful IoC container, which can resolve objects at global, session, or request scope. It makes object references very easy. Untangling those object references can be hard. For good or ill, the Law of Demeter is not always observed.
And the meat of ATG Dynamo lies in the repository framework. The Repository API is a distributed persistence framework that can invalidate, lock and update caches concurrently across multiple servers. The idea of writing a mock object for that thing scared me.
Finally, most of the projects I work on are not given much to experimentation. There is rarely, if ever, time given to setting up testing frameworks. I bought a copy of Unit Testing in Java and read through it, but it was difficult to see how to replicate the functionality of the repository without… well, writing the repository. The areas that I was interested in (persistence, threading, complex data structures) were not covered in as much detail as I would like, while the trivial subjects were covered over and over again.
An argument on Orkut ("Why don't people unit test?") brought me back to the mock repository project, and I started playing with the mockobjects library. Seeing how the request and response objects were tested to be in a certain state after being passed through the droplet made the whole thing click into focus. Unit testing is Design By Contract, from the outside.
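Here's a minimal sketch of that idea. These are hand-rolled stand-ins, not the real ATG `DynamoHttpServletRequest` or the actual mockobjects classes, and `GreetingDroplet` is a made-up example: you put the request into a known state, run it through the droplet, and assert the response state matches the contract.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical stand-in for a mock request: just enough state to test against.
class MockRequest {
    private final Map<String, Object> params = new HashMap<>();
    void setParameter(String name, Object value) { params.put(name, value); }
    Object getParameter(String name) { return params.get(name); }
}

// Hypothetical stand-in for a mock response: records what was written to it.
class MockResponse {
    private final StringBuilder body = new StringBuilder();
    void print(String s) { body.append(s); }
    String getBody() { return body.toString(); }
}

// The unit under test: reads a parameter, writes to the response.
class GreetingDroplet {
    void service(MockRequest req, MockResponse res) {
        Object name = req.getParameter("name");
        res.print("Hello, " + (name == null ? "stranger" : name));
    }
}

public class DropletTest {
    public static void main(String[] args) {
        MockRequest req = new MockRequest();
        req.setParameter("name", "Ariel");
        MockResponse res = new MockResponse();

        new GreetingDroplet().service(req, res);

        // Design By Contract, from the outside: for this input,
        // the response must end up in this state.
        if (!res.getBody().equals("Hello, Ariel"))
            throw new AssertionError(res.getBody());
        System.out.println("ok");
    }
}
```

No application server, no database: the mocks carry just enough state for the contract to be checked.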
After arguing with Stephen, I realized that unit testing is an overused term, and not all testing is unit testing. If you make a request to a server and it goes through a servlet pipeline, gets some data from a database, applies some rules and returns you an HTML page, that's end-to-end testing. Cactus, jWebUnit and Solex do not test units; they test the integrated application. As such, they should not be written until most of the coding is done and the interaction with the application can be codified. If the pages are not done, if the design or requirements change, then the application testing has to be changed to match.
Unit testing does not replace integration testing. It does not replace functional testing. It does not cover dependencies or bad data. A unit test only covers a single method in a single class, trying to see if it gives the correct output for correct input.
The big problem with unit tests in general is state. If a method requires that a thousand objects be passed in in a specific order, manipulated, and passed back out, you have to manually create that state, call the method, then check the state afterwards to make sure the postconditions have been observed. The more state or preconditions you have, the harder it is to write a unit test.
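The shape of such a test looks something like this. `Order`, `LineItem` and `reprice` are hypothetical stand-ins, not actual ATG commerce classes; the point is the arrange/act/assert rhythm, where the "arrange" step grows with every precondition.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical domain objects for illustration.
class LineItem {
    final String sku;
    final double price;
    LineItem(String sku, double price) { this.sku = sku; this.price = price; }
}

class Order {
    final List<LineItem> items = new ArrayList<>();
    double total;

    // Method under test: sums item prices into the total.
    void reprice() {
        double t = 0;
        for (LineItem i : items) t += i.price;
        total = t;
    }
}

public class OrderTest {
    public static void main(String[] args) {
        // Arrange: manually build up every bit of state the method expects.
        Order order = new Order();
        order.items.add(new LineItem("sku-1", 10.0));
        order.items.add(new LineItem("sku-2", 2.5));

        // Act.
        order.reprice();

        // Assert: check the postconditions on the state afterwards.
        if (order.total != 12.5)
            throw new AssertionError("total was " + order.total);
        System.out.println("ok");
    }
}
```

Two line items is easy; an order that needs a profile, a shipping group, a payment group, and a price list before the method will even run is where the test-writing gets painful.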
Anyway, writing a mock repository was a bitch, and I’m still not sure that I got it right. But it is actually useful. I can write droplets and formhandlers now without starting the application server. I even wrote some tests on the pipeline processors that parse posts into HTML markup. It hasn’t exposed any bugs, but it’s proven to have some value. Law of Demeter or not, most classes in Dynamo have dependencies on a few central classes or interfaces which can be replaced by mock objects without too much trouble.
However, it's not even close to a complete replacement for other testing. A single test is close to useless. If I were writing commerce pipeline processors, I would still need to run an order through a ton of processors, some of which would have data I could never predict in unit testing. And a droplet may pass all its unit tests and still render incorrectly because it's used in the wrong place or missing some data. It doesn't fix bad design. In some ways, it may even facilitate bad design, because "hey, the code works, right? And who wants to rewrite the unit tests?"
And my discomfort with unit tests is clearer now: they're a crutch. When I wrote unit tests, I was lazy. I only wrote the unit tests that would ensure success, and a few very basic failures. I didn't write unit tests that tried passing in null parameters, partially completed values, or seemingly correct values which were not valid in conjunction. No doubt this was because I'd written the code beforehand and was writing the unit tests afterwards as an experiment. But I believe it goes beyond that.
It’s well known that programmers have a very hard time remembering what happens in a piece of code a few months (or even a few weeks) after it has been written. I believe that the converse is also true. Possible problems with the code will be most apparent at the time that it is written. The obvious point might be made that a unit test should be written then for every possible problem seen. I think that’s unrealistic. A programmer in the middle of writing code is just not going to stop every few lines and write a new unit test. He’s going to write until the end, make a few notes, and then see if he can write the unit tests afterwards. And if they’re esoteric, require a lot of state, or seem really unlikely, they’re going to sink to the bottom of the pile, never to be seen again.
The unit tests show that the code works. The coder will be happy (even if he has this nagging feeling about it). And the code will have undetected bugs that don’t come out until integration or end-to-end testing, or may go out into production.
This is why I prefer defensive programming. I tend to think of defensive programming as an internal design by contract. If I write my preconditions and postconditions into the method itself, and document that breaking the preconditions will result in exceptions, then I can write my code expecting failure. If there's a subtle problem that could arise in the code, then I'll write the test for it inside the method and document it, so it doesn't break my flow. Then, after I'm pretty certain that I have all the state I need and there's no possible way the method could fail, I'll write the success case. It shouldn't be possible to ever fail a unit test in the first place. (And if one does fail, I have the code heavily logged so I can track down where it broke down and what the state was.)
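As a sketch of what I mean, here's the internal-contract style on a made-up example (the class and method names are invented for illustration): preconditions fail loudly at the point of the call, and the postcondition is checked inside the method itself.

```java
// Hypothetical example of defensive programming as internal design by contract.
class AccountService {
    /**
     * Transfers an amount between two balances.
     * Precondition failures throw IllegalArgumentException, as documented.
     *
     * @return a two-element array: { new from-balance, new to-balance }
     * @throws IllegalArgumentException if amount is negative or exceeds
     *         the source balance
     */
    static double[] transfer(double from, double to, double amount) {
        // Preconditions: expect failure, and fail at the point of the call.
        if (amount < 0)
            throw new IllegalArgumentException("negative amount: " + amount);
        if (amount > from)
            throw new IllegalArgumentException("insufficient funds: " + from);

        double[] result = { from - amount, to + amount };

        // Postcondition, checked inside the method: money is conserved.
        if (result[0] + result[1] != from + to)
            throw new IllegalStateException("balance not conserved");
        return result;
    }
}

public class TransferDemo {
    public static void main(String[] args) {
        double[] r = AccountService.transfer(100.0, 40.0, 25.0);
        System.out.println(r[0] + " " + r[1]); // 75.0 65.0
    }
}
```

By the time the contract checks are in place, the external unit test for the success case is almost an afterthought; the interesting failures have already been named and handled inside the method.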
Unit testing on top of that sounds like a pretty good idea. In fact, unit tests written by your worst enemy sound like a great idea. I want to see Antagonist Programming, where instead of code review people just try flat out to break your code into little pieces.
Integration testing on top of that sounds lovely, and I would be positively ecstatic at the idea of automated end-to-end testing if I could figure out how to do that. But I'll settle for the easy case and work upwards as I've got the time.
I’ve submitted the mock repository code to the mockobjects site. Contributions are welcome.