Thursday, 8 September 2011

Write the Tests, Run the Tests

Write tests before you write the code to implement them. This will feel
odd, but you'll get better with time. Soon, it will feel odd not to write
tests first.
If you are like most of the readers of XP Explained, you are already convinced
that having tests is a good thing. You also aren't in the habit of writing tests before
you write code. You're probably a little intimidated or confused by the idea. We
typically hear things like this:
How can I write tests before I've written the code I'm going to test? I
don't even know what the classes are going to look like yet. They're
going to evolve.
Ken used to say stuff like this back in the 80s when he heard some proponents of
Objective-C claim that each class should have associated tests that should be written
first. He has now confessed and repented. You need to write tests first precisely
because you don't know what the classes are going to look like yet, and because
they are going to evolve.
Writing tests first keeps entropy from destroying your code. We write tests.
Then we write just enough code to get the tests to pass, no more. That is the best
way to keep your code clean.
Nothing gives you more confidence when changing code than having immediate
feedback about whether or not your changes broke anything. That's what having
tests does for you. Without the tests, you can't have confidence. Without confidence,
code doesn't get changed, even when it needs it.
Better still, we have found that well-written unit tests are some of the best
documentation possible. They communicate the intent of the code being tested better
than any description on paper ever could. We can kill two birds with one stone.
Keeping Code Clean
Writing the bare minimum code necessary to make the tests run keeps you from
wasting time on extra features. These hooks you might need later usually have to
change once you get to "later," if they're ever used at all. But without the tests
there to guide you while you write code, the temptation to design for the future is
just too great.
Writing the minimum necessary code the first time around ensures that the
refactoring you do later isn't a complete overhaul. You can't get it perfect the first
time all the time. You will refactor your code. But start with the simplest code that
could possibly work. If your refactoring turns into redesign on a regular basis, your
velocity will tank. Moving at top speed is your bread and butter. Guard your velocity
with your life.
Think of this like keeping a room clean. Roy's mother used to say that doing so
is simple: don't let the room get dirty. If you let the dirt build up, cleaning it
becomes a much bigger job that you end up putting off. Pretty soon you've got an
intractable mess. Roy's room is still always impeccably neat, by the way.
Confidence
In 1955, Dr. Jonas Salk administered his experimental polio vaccine to himself,
his wife, and their three sons. In a newspaper interview, he said that he did it
because he knew it would work. That's courage in action. Where did it come from?
Salk himself said, "It is courage based on confidence, not daring, and it is
confidence based on experience."
At least once or twice during most iterations on our projects, someone sees that
they need to make a radical change to some aspect of the system. When they do, a
bunch of tests break. This is expected. The pair works through the failures one by
one, modifying code to get each test to pass. This could take a few minutes or a few
hours. Invariably, when the pair is integrating the changes one member says, "Wow,
can you imagine how hard this would have been to do without these tests?!"
XP depends on courage, not bravado or reckless daring. Having tests to run to
confirm each change to our code keeps us from being reckless. It lets us proceed
bravely, knowing that we can prove to ourselves that our changes work. Instead of
holding lots of meetings to determine if a change will break something else, we can
just try it, run the tests, and find out. It's hard to be bold in the dark. The tests turn
on the lights.
Tests As Documentation
We have found that well-written tests are less painful to produce than other
forms of documentation. Not only that, they're better.
When you write tests first, you don't have to loop back and write documentation
later. It is an inseparable part of writing the code itself. It's sort of like having a
certain dollar amount each month taken out of your paycheck and put into a 401K.
After a while, you just don't notice it anymore. Any speed we lose in writing tests
first, we more than make up in reduced debugging time and reduced documentation
time. This is a great boost to your team's velocity in and of itself, not to mention the
increased velocity you get from having more confidence.
Even if this weren't true, tests still would be worth the effort. They are better
than any other kind of documentation we've ever seen.
Tests are more detailed than verbal descriptions on paper, without being
cluttered by extra words that describe what the code syntax says. Think about it. The
written code documentation we've seen wastes a lot of space saying things like "The
getMeal(PotPie) method on Shopper asks a Store for a PotPie. It casts this as a Meal
and calls addToBasket(Meal), passing it the Meal…" Why not just look at the code?
When you have a test, you can see a discrete Assertion (in JUnit) that tests this very
behavior and displays a message.
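For instance, a test for that behavior might look something like this in JUnit.
(Shopper, Store, PotPie, and Meal are hypothetical classes reconstructed from the
documentation quoted above, stubbed out just enough to make the sketch run.)

    import junit.framework.TestCase;
    import java.util.ArrayList;
    import java.util.List;

    // Hypothetical classes, stubbed just enough for the sketch to run.
    class Meal {}
    class PotPie extends Meal {}
    class Store {
        Meal getPotPie() { return new PotPie(); }
    }
    class Shopper {
        private Store store;
        private List basket = new ArrayList();
        Shopper(Store store) { this.store = store; }
        Meal getMeal(PotPie kind) {
            Meal meal = store.getPotPie();  // ask the Store for a PotPie
            addToBasket(meal);              // and put it in the basket
            return meal;
        }
        void addToBasket(Meal meal) { basket.add(meal); }
        List getBasket() { return basket; }
    }

    public class ShopperTest extends TestCase {
        public void testGetMealPutsPotPieInBasket() {
            Shopper shopper = new Shopper(new Store());
            Meal meal = shopper.getMeal(new PotPie());
            assertTrue("the meal should end up in the basket",
                       shopper.getBasket().contains(meal));
        }
    }

The assertion says exactly what the paragraph of prose says, in a form that can't
drift out of date.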
Tests also are more useful than other forms of documentation we've seen. They
give you a scenario in which the behavior of the code should work as intended.
Want to understand what a class does? Look at its tests to see exactly what you
should expect in certain scenarios. Other documentation struggles to do that. It
usually fails, if it tries at all.
Best of all, tests are absolutely up to date all the time (unless you're cheating).
What other documentation can claim that? We remember life before XP. Nine times
out of ten, whenever we walked onto a new project somebody gave us a document
that supposedly described the system we would be working on. It was usually a very
thick binder. As soon as they put this in our hands, they said something like, "This
should help a little, but it's out of date. Come talk to me after you read it." So what
earthly good was that document? Maybe it built character to put it together, but
that's all it was good for.
Not all non-code documentation is bad. When a customer needs a written
document other than code (say, for regulatory approval), you should produce it. XP
simply says you should produce only the written documents that you absolutely
need. The problem is that traditional approaches tend to substitute documentation for
communication, and tend to exaggerate progress with documentation. Producing a
document may get you closer to your goal, but documents don't run. In the end, it's
the code that counts. Alistair Cockburn says:
Written, reviewed requirements and design documents are "promises"
for what will be built, serving as timed progress markers. There are
times when creating them is good. However, a more accurate timed
progress marker is running tested code. It is more accurate because it
is not a timed promise, it is a timed accomplishment. – Alistair
Cockburn in Jim Highsmith, E-Business Application Delivery, Vol. 12,
No. 2, February 2000 [http://cutter.com/ead/ead0002.html]
Tests are the best form of system documentation because they are the form that
distracts least from producing releasable code. Don't settle for less.
How To Write Tests First
Before you write code, think about what it will do. Write a test that will use the
methods you haven't written yet. Assume the methods exist and use them in the test.
The test won't compile (if you have to compile things in your environment). Write
the class to be tested, and its methods. Just a stub, not all the details. Your test
should compile now.
Add the test to the test suite that holds all of your other tests for related stuff.
Run the test suite. Your new test will cause it to fail. You don't have any
implementation for the methods you're testing, so this shouldn't be a surprise.
Running a test just to see it fail might seem a little strange, but it's important. Failure
validates the test at this point. If it passes, it's clearly wrong.
Now write just enough code to get the test to pass when you run the test suite
again. Doing things this way guarantees the smallest design that could possibly
work. It keeps your code simple. Yes, there will be refactoring to do later, but it will
be small.
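A minimal sketch of that progression, using a made-up Account class (our
illustration, not an example from any particular project):

    import junit.framework.TestCase;

    public class AccountTest extends TestCase {
        public void testDepositIncreasesBalance() {
            Account account = new Account();
            account.deposit(100);
            assertEquals("balance should reflect the deposit",
                         100, account.getBalance());
        }
    }

    // First pass: deposit() was an empty stub and getBalance() returned 0,
    // so the test compiled but failed -- exactly what we wanted to see.
    // Second pass: just enough code to make the test pass, and no more.
    class Account {
        private int balance = 0;

        public void deposit(int amount) {
            balance += amount;
        }

        public int getBalance() {
            return balance;
        }
    }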
Add your test suite to the suite of tests for the entire system and run it. That may
seem redundant, but it isn't. A few days before OOPSLA '99, a project Ken was
heavily involved with was coming to the end of an iteration. He was trying to add a
new feature to the persistence framework. Pairing with someone who was new to the
project, to Java, and to XP, Ken took charge. He coded like a madman for an entire
day, writing tons of code without writing tests first (strike one) and not rerunning the
test suite for the entire persistence framework (strike two). After struggling to get
the new feature to work, he rotated pairs and tried to integrate with a new partner.
They ran the test suite for the persistence framework and got roughly 30 failures.
Thinking out loud, Ken's pair said, "Let's see, they all ran with your changes before
we integrated, right?" Ken felt like an idiot.
Save yourself this pain. Get addicted to running the tests. Made a change? Run
the tests. Made a change that you didn't like and backed it out? Run the tests.
Integrated? Run the tests. Took a break for coffee and just got back to your desk?
Run the tests. You get the point.
Refactor the tests when they don't seem to be giving you the leverage you want.
If you notice that the suite of tests for a given class doesn't include something that
should be tested, write the new test and run the suite to make sure it passes. If you
need to refactor some portion of the code, and you aren't satisfied that the existing
tests help you (maybe they don't cover everything well enough), refactor the tests
first. Keeping your tests clean is just like keeping your code clean, only more
important. Better tests give you more leverage to make things work better and to add
new features quickly.
In XP Explored (which should be published by the time you read this), Bill
Wake gives a simple set of steps for what he called the "Test/Code Cycle" in XP:
- Write a single test.
- Compile it. It shouldn't compile, because you haven't written the
  implementation code it calls.
- Implement just enough code to get the test to compile.
- Run the test and see it fail.
- Implement just enough code to get the test to pass.
- Refactor for clarity and "once and only once."
- Repeat.
Bill claims this process should take about ten minutes per test. If it takes longer,
start writing smaller tests. That may be a bit short, but it's not crazy. In early 2000,
Ward Cunningham stopped by RoleModel Software, Inc. to check out how Ken's
team was implementing XP. He paired with a number of folks on the team. Every
person he paired with made one major observation: Ward took small steps,
sometimes ridiculously small, but he moved like the wind.
Test-first programming is all about building confidence so that you can work at
maximum speed. It only works if you test a lot, which means you have to take small
steps. Run the tests and watch them pass. Make a change. If the tests fail, the change
must have caused it. If you write a few tests, then code for a couple of days, where is
the error when a test fails? You'll be hunting for a while, which will slow you down.
What To Test
The rule of thumb we use is to write tests for non-trivial things that could break.
We tend not to write tests for getter and setter methods. We've also learned to shy
away from writing tests for methods that simply invoke another method, as long as
that method already has a test. You'll come up with your own exceptional cases for
when you don't need to write tests.
Err on the side of having too many tests. When in doubt, write one. If you need
the confidence that your getters and setters absolutely will not break, write tests for
them. There aren't hard and fast rules here. Write the tests you need.
Very few of us like writing tests, but we love having them. It's some extra work,
but it's well worth it. Until you have a bunch of tests and a lot of experience getting
burnt by not having one you need, it's better to have too many.
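To make the rule of thumb concrete, here is a hypothetical Money class: we
would write a test for the arithmetic, but not a separate test for the accessor.

    import junit.framework.TestCase;

    class Money {
        private int cents;
        public Money(int cents) { this.cents = cents; }
        // Non-trivial: this could break, so it gets a test.
        public Money add(Money other) { return new Money(cents + other.cents); }
        // Trivial: just returns a field, so no test of its own
        // (the add() test exercises it anyway).
        public int getCents() { return cents; }
    }

    public class MoneyTest extends TestCase {
        public void testAddCombinesAmounts() {
            Money five = new Money(500);
            Money seven = new Money(700);
            assertEquals("adding should combine the amounts",
                         1200, five.add(seven).getCents());
        }
    }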
How To Start Writing Tests First
You should be convinced that you need to write tests first. You might have no
idea how to do it. Relax. You're in good company. Every time we talk with people
who aren't used to writing tests first, we hear things like
- How exactly do I write a test first?
- How would you write a test first for XYZ?
- Huh?
To be honest, most people "in the know" about XP respond roughly the same
way. They say they can show you how to write tests before you write code, but they
can't describe it to you. This is a cop out. This is not a cop out. That was not a typo.
Asking someone how to write tests first is like asking someone how to write
source code. You can't code-by-number. The code you write depends on a host of
variables. Likewise, there isn't a comprehensive set of rules for you to follow when
writing tests. The tests you write will depend on the code you want to build to
exercise the tests.

Unit tests are just another type of source code. You write them, compile them if
you have to, and run them. If you use xUnit (and we recommend it), it's just like
writing code that employs a small library of components, methods, or functions. The
only real difference is the goal.
Old habits die hard. If you aren't used to writing tests first, it feels awkward.
Take comfort in knowing that
- eventually, it will become as natural as writing source code
- your first tests will stink, but that's okay
- your tests will improve with practice
- even crummy tests are better than none
The key to getting into the habit of writing tests first is to realize that you've
been doing it for years, but you probably didn't know it.
When you write code, you are imagining the tests. You just haven't written them
down yet. If you're writing some code to compute a salary, you're thinking…get the
base salary and bonuses for a given employee from the database…compute FICA…
determine the employee's tax bracket…compute withholding…compute take-home
pay…and so on.
Stop. You already have imagined how the code is supposed to work. The only
thing you have to do now in order to write a test first is to write a test method for the
first step that assumes that step is in the code already. Write a method that calls the
as-yet-unwritten method getEmployeeSalaryData(String employeeName). Give it a
known employeeName and check that the results are what you would expect. The
test will fail, of course, because the method isn't there.
Write the real code for getEmployeeSalaryData(). Run the test again. Fix the
code until it passes all the tests. That's all there is to it.
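A sketch of that first test (the class names, fixture employee, and figures are
invented for illustration). At this point it won't even compile, because
getEmployeeSalaryData() doesn't exist yet. That's the whole idea.

    import junit.framework.TestCase;

    public class SalaryCalculatorTest extends TestCase {
        public void testGetEmployeeSalaryData() {
            SalaryCalculator calculator = new SalaryCalculator();
            // "Fred Smith" is known fixture data we control in the test database.
            SalaryData data = calculator.getEmployeeSalaryData("Fred Smith");
            assertEquals("base salary", 50000, data.getBaseSalary());
            assertEquals("bonuses", 5000, data.getBonuses());
        }
    }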
Some things are harder to test than others (see Testing User Interfaces below).
Sometimes it's hard to imagine the tests first. Keep at it. It will feel more natural in
time.
Testing User Interfaces
Testing user interfaces is a royal pain in the backside. We get asked a lot how we
write tests for these things. We have lots of answers, but they all stink. The bottom
line is that the coverage we get from our user interface unit tests seems significantly
lower than what we get from testing our business logic. The value received from the
tests doesn't seem to be worth the effort put into writing them.
The best way we've found to test user interfaces is to isolate what you're testing
first. Separate the UI from the business logic behind it as much as possible. Test the
business logic with unit tests; that leaves the UI less prone to break. Of course, that
just means there is less to break, not that it's been tested adequately. Try to write unit
tests for what remains in the UI (probably because it belongs there). You probably
won't be able to test everything this way, but that's all right. It's better to have solid
unit tests for 90% of the system than none at all.
One of the best ways we've seen to test user interfaces is to test as much as we
can with unit tests, and then use functional tests to fill in the blanks.
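Here is a rough sketch of what that separation buys you (the classes are
hypothetical): once the take-home-pay arithmetic lives in a plain class with no UI
dependencies, it can be unit tested directly, and the screen becomes a thin shell
with little left in it to break.

    import junit.framework.TestCase;

    // The business logic lives in a plain class that knows nothing about
    // screens, widgets, or events...
    class PaycheckCalculator {
        public int takeHomePay(int grossPay, int withholding) {
            return grossPay - withholding;
        }
    }

    // ...so it can be tested without any UI at all. Whatever screen displays
    // the result just calls takeHomePay() and shows the answer.
    public class PaycheckCalculatorTest extends TestCase {
        public void testTakeHomePaySubtractsWithholding() {
            PaycheckCalculator calculator = new PaycheckCalculator();
            assertEquals("take-home pay", 3800,
                         calculator.takeHomePay(5000, 1200));
        }
    }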
Functional Tests
Functional testing ("acceptance testing" is probably a better term) is actually
more important than unit testing in many ways.
There is no functional testing equivalent of JUnit. You can't just download
something and start testing on day one. Even if you could, you still would need
customers to help define the tests.
Customers often have as much trouble writing functional tests as developers
have writing unit tests. Give them time. Figure out how you can automate these. Run
them nightly. Give your customers the same kind of confidence boost that unit tests
give developers. Without functional tests to prove a customer story "works", it
doesn't exist. It certainly can't ship.
We use a functional testing framework affectionately known as JAccept™.
Customers enter tests into Microsoft Word tables. The tests consist of user actions that
can be performed on the UI being tested. A set of Java classes read the test files each
night (we actually generate HTML from the Word files which the classes then parse)
and run the tests automatically. Results are written to an HTML output file. We're
currently working on porting the entire thing to XML, using Ant as the file
dependency engine, and adding a customized XML editor on the front end.
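We can't reproduce JAccept here, but the heart of any table-driven functional
test framework is small enough to sketch. (This illustrates the general idea only;
it is not JAccept's actual code, and the names are ours.) Each row the customer
writes pairs an action with an expected result, and a driver walks the rows and
reports what passed.

    import java.util.Iterator;
    import java.util.List;

    // A sketch of table-driven functional testing, not JAccept itself.
    class TestRow {
        String action;    // e.g. "deposit 100"
        String expected;  // e.g. "balance is 100"
        TestRow(String action, String expected) {
            this.action = action;
            this.expected = expected;
        }
    }

    class FunctionalTestDriver {
        // Walk the customer's rows, perform each action against the system,
        // and report pass or fail -- the customer's green bar.
        public void run(List rows) {
            for (Iterator i = rows.iterator(); i.hasNext();) {
                TestRow row = (TestRow) i.next();
                String actual = perform(row.action);
                String verdict = actual.equals(row.expected) ? "PASS" : "FAIL";
                System.out.println(verdict + ": " + row.action);
            }
        }

        // A real driver would invoke the system under test here;
        // this is left as a stub.
        private String perform(String action) {
            return "";
        }
    }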
The way you do functional testing isn't as important as doing it. Functional tests
are the only things that prove to your customer that the system "works." Without
these, you're telling your customer to trust you. We hope they can, and we hope they
believe they can. Proof removes all doubt. It is the customer's equivalent of the
xUnit green bar. Nothing gives a programmer a shot in the arm quite like seeing that
wonderful green bar. Give your customer the same confidence.
Passing functional tests measure how close the system is to being "done." This
lets your customer make informed decisions about whether or not a system is ready
for release.
Automate functional tests whenever you can. This makes running them a normal
part of life. Run them daily. This will give your customers the confidence they need to
remain enthusiastically involved with the project.