TDD Anti-Patterns

Recently I began writing a paper on TDD Anti-Patterns, and decided to first quickly jot down some of the most common ones that others or I have encountered “in the wild.” I then posted what I had to the testdrivendevelopment Yahoo Groups mailing list, and got an excellent bit of feedback!

As a side note, keep in mind this is merely a catalog at the moment, hopefully to be expanded in full down the road.

TDD Anti-Pattern Catalogue

The Liar
A unit test that passes all of its test cases and appears valid, but on closer inspection turns out not to really test the intended target at all.
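
As a quick illustration, here is a minimal JUnit-style sketch (the class and method names are invented for the example): the test builds its own expected value and never calls the OrderCalculator it claims to cover, so it passes no matter what that class does.

    import junit.framework.TestCase;

    public class OrderCalculatorTest extends TestCase {
        public void testCalculatesTotal() {
            // Computes the "expected" result by hand and never touches
            // OrderCalculator, so this passes regardless of its behavior.
            double expected = 5 * 9.99;
            assertEquals(49.95, expected, 0.001);
        }
    }
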
Excessive Setup
A test that requires a lot of work setting up in order to even begin testing. Sometimes several hundred lines of code are used to set up the environment for one test, with several objects involved, which can make it difficult to ascertain what is really being tested amid the “noise” of all that setup.
The Giant
A unit test that, although it is validly testing the object under test, can span thousands of lines and contain many, many test cases. This can be an indicator that the system under test is a God Object.
The Mockery
Sometimes mocking can be good, and handy. But sometimes developers can lose themselves in their effort to mock out whatever isn’t being tested. In this case, a unit test contains so many mocks, stubs, and/or fakes that the system under test isn’t being tested at all; instead, data returned from mocks is what is being tested.
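
A hypothetical sketch of the smell (names invented, with a plain hand-rolled stub rather than any particular mocking library): the only thing the assertion can ever verify is the value the stub was told to return.

    import junit.framework.TestCase;

    public class PriceServiceTest extends TestCase {
        interface PriceRepository { double findPrice(String sku); }

        // Hypothetical class under test, inlined to keep the sketch self-contained.
        static class PriceService {
            private final PriceRepository repo;
            PriceService(PriceRepository repo) { this.repo = repo; }
            double getPrice(String sku) { return repo.findPrice(sku); }
        }

        public void testGetPrice() {
            PriceRepository stub = new PriceRepository() {
                public double findPrice(String sku) { return 10.0; }
            };
            PriceService service = new PriceService(stub);
            // Only proves the stub returned 10.0; none of the service's own
            // behavior (discounts, rounding, error handling) is exercised.
            assertEquals(10.0, service.getPrice("ABC"), 0.001);
        }
    }
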
The Inspector
A unit test that violates encapsulation in an effort to achieve 100% code coverage, and knows so much about what is going on inside the object that any attempt to refactor will break the existing test and require the change to be reflected in the unit test.
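
For instance (hypothetical names, with reflection used to make the encapsulation breach obvious), a test that peeks at a private field rather than going through the public API:

    import java.lang.reflect.Field;
    import junit.framework.TestCase;

    public class CounterTest extends TestCase {
        // Hypothetical class under test, inlined to keep the sketch self-contained.
        static class Counter {
            private int hits;
            public void hit() { hits++; }
            public boolean wasHit() { return hits > 0; }
        }

        public void testHitIncrementsInternalCounter() throws Exception {
            Counter counter = new Counter();
            counter.hit();
            // Asserts against the private field instead of wasHit(); renaming
            // 'hits' or changing its type breaks the test even though the
            // observable behavior is unchanged.
            Field field = Counter.class.getDeclaredField("hits");
            field.setAccessible(true);
            assertEquals(1, field.getInt(counter));
        }
    }
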
Generous Leftovers [4]
An instance where one unit test creates data that is persisted somewhere, and another test reuses the data for its own devious purposes. If the “generator” is run afterward, or not at all, the test using that data will outright fail.
The Local Hero [1]
A test case that is dependent on something specific to the development environment it was written on in order to run. The result is the test passes on development boxes, but fails when someone attempts to run it elsewhere.
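
A sketch of the usual culprit (the path is obviously invented): a hard-coded location that exists only on the original author's machine.

    import java.io.File;
    import junit.framework.TestCase;

    public class ConfigLoaderTest extends TestCase {
        public void testConfigFileIsPresent() {
            // Passes on the author's box, fails everywhere else.
            File config = new File("C:\\Users\\jsmith\\projects\\app\\config.xml");
            assertTrue(config.exists());
        }
    }
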
The Nitpicker [1]
A unit test which compares a complete output when it’s really only interested in small parts of it, so the test has to continually be kept in line with otherwise unimportant details. Endemic in web application testing.
The Secret Catcher [1]
A test that at first glance appears to be doing no testing due to the absence of assertions, but as they say, “the devil’s in the details.” The test is really relying on an exception to be thrown when a mishap occurs, and is expecting the testing framework to capture the exception and report it to the user as a failure.
The Dodger [1]
A unit test which has lots of tests for minor (and presumably easy to test) side effects, but never tests the core desired behavior. Sometimes you may find this in database access related tests, where a method is called, then the test selects from the database and runs assertions against the result.
The Loudmouth [1]
A unit test (or test suite) that clutters up the console with diagnostic messages, logging messages, and other miscellaneous chatter, even when tests are passing. Often the output was added during test creation so someone could watch it manually, and was simply left behind even though it’s no longer needed.
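
A minimal sketch (invented names); the println calls were useful once and now just add noise to every run:

    import junit.framework.TestCase;

    public class InvoiceTest extends TestCase {
        public void testTotalsLineItems() {
            double total = 3 * 2.50;
            // Leftover diagnostics that chatter on every run, pass or fail.
            System.out.println(">>> entering testTotalsLineItems");
            System.out.println(">>> total = " + total);
            assertEquals(7.50, total, 0.001);
        }
    }
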
The Greedy Catcher [1]
A unit test which catches exceptions and swallows the stack trace, sometimes replacing it with a less informative failure message, but sometimes even just logging (c.f. Loudmouth) and letting the test pass.
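
A hypothetical sketch of the worst variant, where the exception is merely logged and the test still goes green:

    import junit.framework.TestCase;

    public class GreedyCatcherTest extends TestCase {
        public void testParseAmount() {
            try {
                double amount = Double.parseDouble("not-a-number");
                assertEquals(12.50, amount, 0.001);
            } catch (NumberFormatException e) {
                // Swallows the stack trace; the test passes and the
                // failure is never reported.
                System.out.println("parse failed: " + e.getMessage());
            }
        }
    }
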
The Sequencer [1]
A unit test that depends on items in an unordered list appearing in the same order during assertions.
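
For example (a hypothetical test), asserting on the iteration order of a HashSet, which makes no ordering guarantee:

    import java.util.HashSet;
    import java.util.Iterator;
    import java.util.Set;
    import junit.framework.TestCase;

    public class SequencerTest extends TestCase {
        public void testTagsComeBackInOrder() {
            Set<String> tags = new HashSet<String>();
            tags.add("alpha");
            tags.add("beta");
            // HashSet iteration order is unspecified, so these assertions
            // may pass on one JVM or data set and fail on another.
            Iterator<String> it = tags.iterator();
            assertEquals("alpha", it.next());
            assertEquals("beta", it.next());
        }
    }
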
Hidden Dependency
A close cousin of The Local Hero, a unit test that requires some existing data to have been populated somewhere before the test runs. If that data wasn’t populated, the test will fail and leave little indication to the developer what it wanted, or why… forcing them to dig through acres of code to find out where the data it was using was supposed to come from.
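
A sketch of the shape this often takes (file name and location invented): the test silently assumes someone else already put the data in place, and the bare assertion failure says nothing about what was missing.

    import java.io.File;
    import junit.framework.TestCase;

    public class ExchangeRateTest extends TestCase {
        public void testReadsTodaysRates() {
            // Assumes some other test or process already created this file;
            // if it didn't, the failure gives no hint of what was expected.
            File rates = new File(System.getProperty("java.io.tmpdir"), "rates.csv");
            assertTrue(rates.exists());
        }
    }
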
The Enumerator [2]
A unit test in which each test case method name is just an enumeration, e.g. test1, test2, test3. As a result, the intention of the test case is unclear, and the only way to be sure is to read the test case code and pray for clarity.
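
A hypothetical sketch; nothing in the method names tells you which behavior each one is supposed to cover:

    import junit.framework.TestCase;

    public class AccountTest extends TestCase {
        // Hypothetical class under test, inlined for the sketch.
        static class Account { double getBalance() { return 0.0; } }

        public void test1() { assertEquals(0.0, new Account().getBalance(), 0.001); }
        public void test2() { /* ...equally mysterious... */ }
        public void test3() { /* ...equally mysterious... */ }
    }
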
The Stranger
A test case that doesn’t even belong in the unit test it is part of. It’s really testing a separate object, most likely an object that is used by the object under test, but the test case has gone and tested that object directly instead of relying on the object under test to make use of it for its own behavior. Also known as TheDistantRelative [5].
The Operating System Evangelist [3]
A unit test that relies on a specific operating system environment being in place in order to work. A good example would be a test case that uses the Windows newline sequence in an assertion, only to break when run on Linux.
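
The newline example as a minimal sketch (invented names): the assertion hard-codes the Windows line separator.

    import junit.framework.TestCase;

    public class ReportFormatterTest extends TestCase {
        public void testJoinsHeaderAndBody() {
            String report = "header" + System.getProperty("line.separator") + "body";
            // Hard-codes the Windows newline, so this passes on Windows
            // and breaks as soon as it runs on Linux or a Mac.
            assertEquals("header\r\nbody", report);
        }
    }
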
Success Against All Odds [2]
A test that was written to pass first rather than fail first. As an unfortunate side effect, the test case happens to always pass, even when it should fail.
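
One hypothetical way this shows up: the assertion simply restates the computation, so the test could never have been seen to fail.

    import junit.framework.TestCase;

    public class DiscountTest extends TestCase {
        public void testAppliesTenPercentDiscount() {
            double price = 100.0;
            double discounted = price * 0.9;
            // Compares the value to the same expression that produced it,
            // so the test passes no matter what the production code does.
            assertEquals(price * 0.9, discounted, 0.001);
        }
    }
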
The Free Ride
Rather than write a new test case method to test another feature or functionality, a new assertion rides along in an existing test case.
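
A small sketch with invented names; the clear() check hitches a ride on a test that is supposed to be about adding items.

    import java.util.ArrayList;
    import java.util.List;
    import junit.framework.TestCase;

    public class ShoppingCartTest extends TestCase {
        // Hypothetical class under test, inlined for the sketch.
        static class ShoppingCart {
            private final List<String> items = new ArrayList<String>();
            void add(String item) { items.add(item); }
            void clear() { items.clear(); }
            int size() { return items.size(); }
        }

        public void testAddItemIncreasesCount() {
            ShoppingCart cart = new ShoppingCart();
            cart.add("book");
            assertEquals(1, cart.size());
            // Free rider: a completely different feature (emptying the cart)
            // is verified here instead of in its own test case.
            cart.clear();
            assertEquals(0, cart.size());
        }
    }
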
The One
A combination of several patterns, particularly TheFreeRide and TheGiant: a unit test that contains only one test method, which tests the entire set of functionality an object has. A common indicator is that the test method name is often the same as the name of the unit test class, and the method contains multiple lines of setup and assertions.
The Peeping Tom
A test that, due to shared resources, can see the result data of another test, and may fail even though the system under test is perfectly valid. This has been seen commonly in FitNesse, where static member variables used to hold collections aren’t properly cleaned up after test execution, often popping up unexpectedly in other test runs. Also known as TheUninvitedGuests.
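
A minimal sketch of the shared-state problem (names invented): whichever of these two tests runs second sees the other's leftovers and fails.

    import java.util.ArrayList;
    import java.util.List;
    import junit.framework.TestCase;

    public class PeepingTomTest extends TestCase {
        // Static collection that is never cleaned up between tests.
        private static final List<String> RESULTS = new ArrayList<String>();

        public void testFirstRunRecordsOneResult() {
            RESULTS.add("first");
            assertEquals(1, RESULTS.size());
        }

        public void testSecondRunRecordsOneResult() {
            RESULTS.add("second");
            // Fails when run after the other test, even though nothing is
            // wrong with the code under test.
            assertEquals(1, RESULTS.size());
        }
    }
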
The Slow Poke
A unit test that runs incredibly slowly. When developers kick it off, they have time to go to the bathroom, grab a smoke, or worse, kick the test off before they go home at the end of the day.

Well, that wraps it up for now. I’d like it if people could vote on the top ten anti-patterns listed here so I can go about writing up more detailed descriptions, examples, symptoms, and (hopefully) refactorings to move away from each anti-pattern listed.

If I left anyone out, please let me know, and any additional patterns are highly welcome!

  • [1] Frank Carver
  • [2] Tim Ottinger
  • [3] Cory Foy
  • [4] Joakim Ohlrogge
  • [5] Kelly Anderson

  • http://www.efsol.com/ Frank Carver

    I’ve been mulling over this list and thought of another few I’ve faced
    in the past. It took me a while to think up appropriate names, though.
    Somehow these all seem to have an avian theme :)

    * The Cuckoo – A unit test which sits in a test case with
    several others, and enjoys the same (potentially lengthy) setup
    process as the other tests in the test case, but then discards
    some or all of the artefacts from the setup and creates its own.

    One of a cluster of test case structure anti-patterns along with
    Stranger/Distant Relative, Mother Hen and probably Long Conversation

    This is really a sort of “test case envy” and would probably benefit
    from refactoring into a separate test case. In more subtle cases it
    might also be asking for a shared base class for the two test cases.

    If left unchecked, this can lead to test cases where hardly any tests
    make use of the common setup process, but it’s done for every test
    anyway.

    One thing to avoid is the temptation to just shove *all* the
    initialization into the common setup, which just confuses and slows
    down the whole process. This can lead to a Mother Hen (below).

    * The Mother Hen – A common setup which does far more than the actual
    test cases need. For example creating all sorts of complex data structures
    populated with apparently important and unique values when the tests
    only assert for presence or absence of something.

    This can be a sign that the setup was written before the tests
    themselves, which is a subtle bit of up-front design easily hidden in
    an otherwise fully TDD process.

    One of a cluster of test case structure anti-patterns along with
    Stranger/Distant Relative, Cuckoo and probably Long Conversation

    * The Wild Goose – A unit test which, even though it initially appears
    simple, seems to require an ever increasing amount of the application
    to be created or initialised in order to make it pass.

    This sort of test can take up a disproportionate amount of developer
    time, and lose the benefits of the fast turn-round TDD cycle. In the
    worst case, TDD is effectively abandoned while all or most of the
    application is developed using an untested up-front process to make a
    single test pass.

    This is common among developers new to TDD who are not yet comfortable
    with “faking” a response before moving the focus of design to a lower
    level.

    * The Homing Pigeon – A unit test which (typically because it requires
    non-public access to a class under test) needs to be created and run
    at a particular place in a package hierarchy. This initially seems
    perfectly reasonable, but can unexpectedly fail if the class under
    test is moved to a new location or (worse) split so that its behaviour
    goes to more than one such location.

    One of a cluster of implicit assumption anti-patterns along with Local
    Hero, Operating System Evangelist and Hidden Dependency

    It should probably be noted that this is still an active area of
    discussion, with many people arguing that it is a necessary or even
    beneficial approach to design by allowing testing of non-public access.

    * The Dodo – A unit test which tests behavior no longer required in
    the application. Both the behaviour and the test should probably be
    deleted.

    * Love Birds – A test case which is so closely coupled with a
    particular class under test that it distracts from the need to test
    important interaction between classes. Commonly found in a test package
    structure created for Homing Pigeons where there is no obvious place
    to put tests for class interaction.

    I’m really enjoying this game, can you tell?

  • chuck

    * Old Predictable – A test case that tests multithreaded conditions with the same thread ordering every time, or random data with the same random sample data (or seed) every time.

  • http://blog.james-carr.org James Carr

    Frank: What’s up with the bird themed names? Have you been watching National Geographic all day? ;)

    Chuck: Thanks for including a threading-related test. I have not had much experience doing TDD with threaded applications, but from what I have seen on some of the mailing lists, it can be quite easy to write lousy tests for threaded applications.

    Thanks,
    James

  • http://blog.james-carr.org James Carr

    Hmm… here’s another that came to mind just now while looking at the unit tests for a certain project I just downloaded…

    The Web Surfer
    A unit test that requires a connection to the Internet, with access to the outside world, in order to run.

    Wow! Some of these are really starting to get generalized!

  • http://cime.net/~ricky/ Ricky Clarkson

    I’d like to see some solutions or advice about some of these, as well as just identifying them.

    The Inspector, for example – my solution is to make the class under test package private (but it implements a public interface) and test it that way.. although I find that a test that depends on breaking encapsulation is probably testing behaviour that is impossible via the public interface, so the test has little value. What’s your solution?

    The Loudmouth – I normally just run my tests and delete the logging code when they pass, but I can see the value in keeping it, and only showing it when the tests fail. With the standard exception-based test frameworks, I suppose this would be quite verbose. Thankfully I rolled my own test framework where you return SUCCESS or FAILURE (enum members – slightly clearer than true/false), so I don’t think this would be too verbose. Subjective, I know. A nice feature is that if the test throws an exception instead, it is not just a failure, but ‘blocked’. The test suite stops – of course that might not always be good, but it’s currently my preference.

    Frank Carver’s Wild Goose is surely pushing the word ‘unit’ somewhat. I have my doubts as to the usefulness of unit testing, as opposed to just automated testing. What is the largest thing that you can legitimately call a unit? Is it a method, an object, a package, an entire project?

    I tend to write tests first only when I expect buggy code. This is usually when I find the problem to be solved difficult to understand. I work on a network simulator, and sometimes my boss comes up with requirements that I’m not too sure about. He is accustomed to doing subnet calculations in his head, I’m not, so sometimes his requirements confuse me. Writing the test first, sometimes even in his presence, can help me there. Perhaps my approach is actually an antipattern – Selective Tester. What do you think?

    I also write tests when I find bugs. This is proving difficult with Swing-related code, especially as I try to run my tests in ‘headless’ mode. I’ve (temporarily) broken my application by wrapping JFrame and JDialog so that I can instantiate real versions in the real app and dummy versions in tests. I know that this is because of the behaviour-dependent design of Swing, but either way, it’s annoying.

    Cheers.

  • http://www.agical.com Joakim Ohlrogge

    I didn’t recognise greedy leftovers as something I said. I think The sloppy worker could have that effect but I can’t take credit for Greedy worker :)

    Here are the ones I came up with on the list.

    The overly stateful – A test that uses the filesystem unnecessarily.
    For instance storing a file and reading it with a FileReader when a
    StringReader would suffice to prove your point.

    Which leads me to:

    The sloppy worker – A test that creates persistent resources but
    doesn’t clean up after itself.

    The mime: Developing the code first and then repeating what the code does with expectation mocks. This makes the code drive the tests rather than the other way around. It usually leads to excessive setup and poorly named tests where it is hard to see what they actually do.

    Inversion of design aka Creative setup: This is closely related to
    James’ Excessive Setup anti-pattern and I’ve seen it with mocks.
    Instead of redesigning the subject under test so that it doesn’t
    require excessive setup, the test case is redesigned so that it is
    easier to create a complex setup.

    This one is more about people’s behaviour but can be a cause of interesting patterns:

    The bounty hunter: Making sure to exercise all code in order to reach
    the required level of X% test coverage, but not really testing
    anything.

  • http://blog.james-carr.org James Carr

    Joakim: true… even though I did draw some of it from what you mentioned about sloppy worker. Thought I’d at least give partial credit since you mentioned some of the elements of it on the list. ;)

    Another amusing one someone brought up that was summarized in one statement:

    Old and Busted: Test Driven Development.
    The new hotness: Test Driven Unit Testing.

    I have yet to see a unit test for a unit test, however. :(

  • http://blogs.msdn.com/agilmonkey casper

    A very enlightening and amusing post :) Could you clarify ‘The Secret Catcher’ a bit? I’m not sure exactly what you mean by it, but certainly there are valid NUnit tests that have no Assert() but rather an [ExpectedException] attribute.

  • http://blog.james-carr.org James Carr

    Casper,

    Yeah, there’s been a bit of debate over the Secret catcher on the TDD list… after all, it is valid if you are testing for an exception to be thrown. The Secret Catcher is meant to be the case where you do not expect the exception to be thrown, but rather just rely on the framework to fail when it encounters the exception.

    Essentially, in java code it might look something like this:

    public void testSomething() throws Exception {
        foo.makeCallThatCanThrowTwentyDifferentExceptions();
    }

    I’m sure you get the picture. ;)

  • http://www.chaliy.com Mike Chaliy

    Another anti-pattern with another crazy name :)

    The Initialization Hero

    Occurs when every test method performs its own initialization of the target. It should be refactored into a factory method.

  • http://blogs.msdn.com/agilmonkey casper

    James – thanks for clearing that up. I’m in total agreement. Again, very well-written post :)

  • Srihari Y.

    I’m currently still a novice with TDD, but one I’ve usually encountered in my initial steps is this:

    - Write testcases which mock interfaces/classes and use “setters” to put the mocked objects to the “under test” class and have tests that test the functionality correctly.
    - But finally end up writing code in constructor of that “under test” class without tests or with tests which just test “initialisation” of the real world equivalents of the “mocked” interfaces/classes.

    So would this or some inherent assumption of mine (in above), qualify for a TDD antipattern? :)

    Have been actually wondering what would be the best way to avoid this though? any suggestions?

    If really there is something, then for naming.. well.. seems close to Mockery but actually a bit different.. Cannot come up with any good names.. How about “Forget the constructor”? (ahh.. forget it.. sorry can’t be creative enough! :) )

  • http://www.trajano.net/ Archimedes Trajano

    I am not sure if the “Giant setup” is really an anti-pattern. Some test cases do require a big setup routine because they need a lot of components to be set up before doing anything.

    e.g. in a case management system
    You may have to create the person (there are many mandatory fields for the person to be valid e.g. name, birth date, gender)
    Then you have to create the address for the person
    Then you have to create the case with the person associated

    … at this point you can do operations to the case…

    As you can see the setup is quite big. The teardown is usually simple if we set up the unit test to rollback all the changes.

  • http://community.vuscode.com/archive/2007/02/17/james-carr-blog-archive-tdd-anti-patterns.aspx Nikola malovic

    James Carr gathered an excellent collection of TDD anti-patterns which is a HAVE TO read for every TDD-oriented developer.

    Bookmark it at once! :)

  • http://www.obtiva.com Andy Maleh

    The Giant helps me often in detecting code smells.

    Very interesting anti-patterns. Love the naming. Thanks for cataloging them.

  • http://www.leftturnsolutions.com Jon Herron

    Excellent article. Unfortunately I have been bitten by The Mockery one too many times; now at least I have a good name to refer to it by.

  • http://www.cwk-technologies.co.uk/blog/2007/04/10/How+I+Avoid+TDD+Antipatterns.aspx Andrew Macdonald

    I bumped into this article via JP Boodhoo’s blog the other day (I know the original article has been out in the wild for a while now).

    Anyhow, as I read through the article, realisation hit that I subconsciously follow a set of practices that help avoid the aforementioned anti-patterns ….

  • http://icelava.net Aaron Seet

    The worst anti-pattern of them all:

    The Void

    Not a single test to confirm the system works.

  • http://jupitermoonbeam.blogspot.com jupitermoonbeam

    The Mad Hatter’s Tea Party.
    This is one of those test cases that seems to test a whole party of objects without testing any specific one. This is often found in poorly designed systems that cannot use mocks or stubs and as a result end up testing the state and behaviour of every peripheral object in order to ensure the object under test is working correctly.

  • http://blog.james-carr.org James Carr

    Hi jupitermoonbeam,
    Yes… that’s one of the most annoying test smells I’ve seen.. you wind up with a lot of methods that test other objects that the object under test uses, and the result is quite often along the lines of “The Giant.”

    Thankfully the strategy for refactoring is simple… isolate methods that are testing other objects and extract them out to their own unit tests, and provide mocks/stubs for access to those objects during test execution.

    I’ve also found this to be the result of a code smell, particularly a God Object or one that has too many dependencies which could be isolated.

    Cheers,
    James

  • http://www.google.com LolitochkaBC

    Come on now, folks, let’s vote!!!

    Confess, you pranksters and owners of the site blog.james-carr.org ))))

    WHAT are you going to do this summer?!

  • http://blog.james-carr.org James Carr

    I ran your comment through babelfish, and I got:
    Of aaanukarebyatki we vote!!! Acknowledge mischief-makers the owners of site blog.james-carr.org)))) THAT you bdeteye to make with this summer?!

    Thanks… I guess. ;)

  • Bill Smith

    How about the “Roll the Dice” test? Rather than actually figuring out where the software’s boundary conditions are, the test uses randomly generated data for parameters. Consequently, the test passes one time and fails the next.

  • http://blog.softwarearchitecture.com Brian Sondergaard

    Good work James. I’ll be interested in seeing the final article and the pattern descriptions. You have an important set of principles captured here. As is typically the case at this stage of development, the current list may be lacking in orthogonality, but I’m sure that will get fleshed out.

    I picked up on a couple of your anti-patterns to talk about the value of tests as documentation at http://blog.softwarearchitecture.com/2007/06/test-driven-knowledge-management.html.

    Keep it up.

    Brian
    http://blog.softwarearchitecture.com

  • omo

    Thanks for your work!
    I posted a Japanese translation here:
    http://www.hyuki.com/yukiwiki/wiki.cgi?TddAntiPatterns

    if you have any trouble with this, please let me know.

  • http://www.kevingabbert.com Kevin Gabbert

    It would be good to see some kind of engine that could review a bunch of tests and classify them into some of these categories.

    “Hey Dude, 35% of your tests suck, here is why..”

  • http://bruno-orsier.developpez.com Bruno Orsier

    Hello James, thanks for publishing this list. Here is a French translation:

    http://bruno-orsier.developpez.com/anti-patterns/james-carr/

    Best regards

  • D Lawson

    excellent list – with some amusing names :-)

  • http://www.david-dylan.co.uk/david DavidDylan

    A very useful list – and one that I think would be useful to a much wider audience than just TDD practitioners. Do you think many of these anti-patterns are equally likely to show up in tests written after the code?

    I appreciate that, if you were originally targeting the IEEE Software TDD edition, it made sense to include TDD in the title. But now that your list seems to have a life of its own, perhaps you should consider renaming it: “unit test anti-patterns”.

    In fact I don’t think they’re even limited to unit tests – maybe “xUnit test anti-patterns” to complement the recent book (which I haven’t read, yet) called xUnit Test Patterns would be the best name. I think “automated test patterns” might be going a bit too far, as that takes you too far into the domain of tester/scripters rather than developers writing tests.

  • http://blog.james-carr.org James Carr

    Indeed it has! It’s even been translated to 4 different languages now, so I’m starting to think maybe it’s time to dust this off and see what can be improved on it.

  • http://blogs.it-coder.com Ivan Atanasov

    Excellent list, because we must know “what is wrong” before we start writing unit tests.

    Best Regards

  • http://www.ademiller.com/tech/ Ade Miller

    Great list. This was just what I was looking for to describe some anti-patterns my team has been inadvertently using.

    The Greedy Catcher – The XUnit.net framework supports much better exception catching than NUnit or MSTest.

    A variant of Excessive Setup and The Giant is complex base classes. Test classes derive from a hierarchy of base classes, making the actual test very hard to read. Lots of people fall into this trap when first creating unit tests in an attempt to follow the DRY principle 100%. With tests, sometimes readability is better than no duplication.

    Thanks,

    Ade

  • http://www.ademiller.com/tech/ Ade Miller

    I blogged about another anti-pattern I’ve seen used a fair bit. If you’re still collecting here it is:

    http://www.ademiller.com/blogs/tech/2007/11/tdd-anti-pattern-inherited-test/

    Cheers,

    Ade

  • http://blog.geekdaily.org/ Jim

    Wonderful post! I’m glad it’s resurfaced for those of us who missed it the first time around. The Web Surfer and The Nitpicker are two I keep seeing repeatedly, as well as Excessive Setup which I’ve always thought of as Much Ado About Nothing.

    Thanks again!

  • http://blogs.adobe.com/tomsugden/ Tom Sugden

    Here is another simple anti-pattern that I think has been missed: The Denier or Contradiction. This is a test case where the message parameter of an assertion describes a success rather than a failure. If the test case fails, the test runner will display a contradictory failure message suggesting the test was successful.

  • http://www.codebureau.com/blog Matt

    Love the post..

    How about the ‘Stealth Bomber’? This is probably an extreme extension of the Hidden Dependency. I’ve seen a few tests that would simply ‘never’ fail when run interactively or debugged, or during a full test run in the day. When a supposed ‘fix’ was implemented it would even fool the developer into feeling pleased with themselves by passing for a day or two before bombing again in an overnight run. The logic wouldn’t show anything untoward, and it continued to bomb every few days without warning or trace of ‘why’. More than likely a combination of no-no’s: execution-sequence-dependent, date/time-dependent, database-dependent, context-dependent, etc.

    There’s probably another variation – maybe the ‘Blue Moon’. A test that’s specifically dependent on the current date, and fails as a result of things like public holidays, leap years, weekends, 5-week months, etc. This is again guilty of not setting up its own data.

  • Kevin

    Locale Evangelist – We work with an offshore team, and sometimes we create tests that fail in one locale but pass in another. Our most frequent culprit is a timezone dependency.

  • http://www.nelz.net/roller Nelz

    James, great post!

    Since you asked for votes, I would vote for The Inspector, and The Excessive Setup, as those are the anti-patterns I’m most guilty of.

    I am also interested in The Nitpicker and how to combat its use in web-app testing.

  • http://la-stories.blogspot.com Don Hosek

    For the nitpicker, the best approach is to use regular expression matching on the output that you’re inspecting. This of course can create some potential issues of its own, especially for the regexp newbie, but is probably the best of the possible solutions.

  • PandaWood

    Ricky Clarkson says:
    “Thankfully I rolled my own test framework where you return SUCCESS or FAILURE (enum members – slightly clearer than true/false)”

    That sounds scary, nothing is clearer than true/false!

  • http://www.dataconstructor.com Max Guernsey, III

    Chuck: I’m not sure that “old predictable” is an anti-pattern. A test that is not predictable is, itself, an anti-pattern. If you need good coverage like that, you might consider generating several different tests that order threads differently or use different, predictable, random seeds.

    – Max

  • Adrian Mouat

    Are these really “patterns”? I would argue they are closer to “code smells” as defined by Martin Fowler in “Refactoring”.

  • http://www.ccil.org/~cowan John Cowan

    Sometimes the Giant has its uses.

    A little while ago, I wrote a test for a method that has to take a number of factors into account and return one of two results. For example, if the content is precious, always return A, but if not, then if the version is 3, always return B unless the user has requested A, etc. etc.

    The giant test I wrote does the full combinatorial explosion of assertEquals tests: version 1, 2, or 3; user requested A or B; content is precious or not; etc. There end up being 72 calls.

    Now I could refactor that to 72 tests, but I don’t think that would help anyone much. And doing “just enough” testing would have turned out to be not enough: just one of the 72 tests actually failed after I wrote the code, but it exposed a deep logic error. (Now I’ve had to change the code to satisfy new requirements, and half the tests are failing! Arrgh. But I’ll get it fixed eventually.)

  • Graham Lenton

    How about the ‘Honest, Guv’ where the expected outcome is so entropic that the developer simply asserts true with a comment ‘this works, honestly..’

  • http://madcoderspeak.blogspot.com Gishu Pillai

    I started a thread on stackoverflow.com, which is what I associate with voting nowadays :)
    Anyways you may want to take a look at an informal anti-pattern survey of sorts here
    http://stackoverflow.com/questions/333682/tdd-anti-patterns-catalogue#333697

    and of course.. uber post. Thanks for taking the time to put this down.

  • http://carlosble.com Carlos Ble

    Excellent article, James!
    I’ve translated it to Spanish: http://www.carlosble.com/?p=225

    Thanks

  • fact

    TDD is an anti-pattern.

  • Peter Hickman

    Having just sat down and read through a bunch of unit tests that can take 15 minutes to run I would like to propose the “test everything” anti-pattern. In this anti-pattern the unit tests test the functionality of the programming language along with the application.

    For example I have just seen a load of tests to see if a subclass has its superclass as an ancestor and that the methods of the superclass are all inherited. The only way in which this can break is if the language changes!

    I fully expect to see tests to make sure that 1 + 1 still equals 2, you know, just in case!

  • Andrej

    The problem of the Nitpicker is that it is not a Unit test at all. A unit test is supposed to test a Unit using the API of the unit. If the system is split into units well, an API of a unit should not change often or contain too many unimportant details.
    Testing a Web interface is not unit testing, because instead of testing units we test the whole application via its user interface.

  • http://blog.james-carr.org James Carr

    @Andrej

    Indeed! A good unit test would not test the entire web application through the user interface, but rather test a single individual unit of the interface or system.

    However, it is not uncommon in web projects that span multiple teams to come across a JUnit test (referred to by other devs as a “unit test”) that uses Selenium, HtmlUnit, or some other means to test a specific unit in the whole bang of the system. The problem is that it’s an integration test or acceptance test, but the developers end up having it run with the developer tests.

  • http://mcherm.com/ Michael Chermside

    The Time Switcher:

    A test case that passes or fails depending on the time zone of the local machine (and/or whether or not daylight savings is in effect). Often fixed by switching the hour so that the test breaks twice a year.

  • andreifeld

    Hi all,
    I’m thinking maybe the following question will be appropriate here in a TDD blog article…

    Let’s presume I’m running unit tests against a MySQL DB.
    The DB is required to be up and running, and I’m using my application APIs in order to load and save from the DB.

    Now this is very slow, as using this API against a persistent DB is very expensive.

    I am not a DBA, but is there a simple way that I can use my existing SQL tables as ‘in-memory tables’? That would make my testing framework far more robust.

    thanks

  • http://AgileInAFlash.blogspot.com/ Tim Ottinger

    James:

    We produced an index-card-sized version of the list, reduced for space considerations, but any attempt to write up a quick statement on each would be about the same size as your article. Any chance you would agree with me posting this article, or as much as fits, on the site at http://agileinaflash.blogspot.com/2009/06/tdd-antipatterns.html?

    Feel free to use this posting’s email

  • http://www.machete.ca Allain Lalonde

    Excellent Post

  • http://argehaber.blogspot.com argehaber

    I could not translate some sentences exactly, but in general it was a useful article. I read it with admiration. Good work.

  • http://www.casualmiracles.com Lance Walton

    This is a good catalog. But they’re not actually TDD anti-patterns; they are unit-test anti-patterns.

  • http://thedance.net Charles Roth

    I’d love to be able to automatically *detect* some of these anti-patterns.

    For example… I work on a large (java) code base, where many of the unit-tests were written by monkeys (who were apparently taking a break from typing Shakespeare).

    There are numerous “Liar” anti-pattern tests that:
    1. Have no asserts at all (!)
    2. Have EasyMock.replay() without a matching EasyMock.verify()

    I’d love to find (or sigh, write) a tool that will find such methods, with at least a reasonable degree of probability.

    Any thoughts?

  • Marcelino Deseo

    Any effort to classify these anti-patterns? It seems overwhelming. I’ll go over these during the weekend and avoid them. Thanks for the post.

  • TTRZ

    The virus..

    A test that changes system settings.