Talk:Test-driven development

Limitations section

The "Limitations" section says that TDD "is currently immature". I don't understand what is meant by that.

It also lists "GUI" as one of the limitations. As someone who works on complex Swing applications for a living and has even written a whole book chapter about TDDing Swing GUIs, I guess I simply disagree. It would be nice if the original author could clarify why he thinks it is a limitation.

Thanks!

Ilja

I would like to hear your thoughts on testing GUIs. Are you talking about abstracting functionality away from the GUI into a business logic layer, and just using the GUI as a thin interface layer on top? If so, technically you still aren't testing the GUI. Testing GUIs and graphics in general just isn't very feasible right now. Computers get lost in the details, not "seeing the forest for the trees". One can use tests to check pixel values on the screen, but what good is that if the test can't determine *what* the picture shows? capnmidnight 16:18, 30 April 2006 (UTC)
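
For illustration, here is a minimal sketch of the "thin interface layer" approach described above, assuming JUnit 3; the class and method names (LoginPresenter, submit) are hypothetical, not taken from any of the sources discussed. The Swing layer would only forward button clicks and field values to the presenter, which can then be exercised without rendering anything on screen:

    import junit.framework.TestCase;

    public class LoginPresenterTest extends TestCase {

        /** Hypothetical presenter holding the logic a Swing login dialog would delegate to. */
        static class LoginPresenter {
            private String error;

            /** Returns true when both fields are non-empty; records an error message otherwise. */
            boolean submit(String user, String password) {
                if (user == null || user.length() == 0
                        || password == null || password.length() == 0) {
                    error = "User name and password are required";
                    return false;
                }
                error = null;
                return true;
            }

            String getError() {
                return error;
            }
        }

        public void testEmptyPasswordIsRejected() {
            LoginPresenter presenter = new LoginPresenter();
            assertFalse(presenter.submit("alice", ""));
            assertEquals("User name and password are required", presenter.getError());
        }

        public void testValidCredentialsAreAccepted() {
            LoginPresenter presenter = new LoginPresenter();
            assertTrue(presenter.submit("alice", "secret"));
            assertNull(presenter.getError());
        }
    }

Whether that counts as "testing the GUI" is exactly the point in dispute above: the wiring from the Swing widgets to the presenter is still untested by a sketch like this.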

Non-functional link

The link to "Test-driven Development using NUnit" in the References is non-functional in my browser (Firefox 1.0, Windows 2000). I suggest this link be eliminated, and along with it the Jason Gorman vanity page as non-encyclopedic. --Ghewgill 18:35, 13 Jan 2005 (UTC)

Works fine on my Linux box with Firefox. He probably didn't have a PDF viewer installed. --Nigelj 20:34, 30 August 2006 (UTC)

Merge

A merge with Test Driven Development is in order. Actually that article frames the generic issues better, though it's a short stub, and this article seems (in the section Test Driven Development Cycle) to imply that there's a proper-named methodology here.

Ummm, the Test Driven Development page is just a redirect to this page: they are the same article. --Nigelj 20:31, 30 August 2006 (UTC)

Where is "Test Driven Development" from?

Please, who is the author of the first article about "Test Driven Development"? Who advocates for "Test Driven Development"? Etc. I think it would be very helpful for everyone who is researching it. --200.144.39.146 17:16, 10 March 2006 (UTC)

From the article: "the book Test-Driven Development by Example [Beck, K., Addison Wesley, 2003], which many consider to be the original source text on the concept in its modern form."--Nigelj 20:31, 30 August 2006 (UTC)
It predates 2003. We used the phrase in Java Development with Ant in 2002, and took it from one of Kent's earlier XP books. I think I first saw it in 2000, associated with JUnit work by Kent and Erich Gamma. SteveLoughran 10:41, 17 April 2007 (UTC)

Merge with Tester Driven Development

I read Tester Driven Development and it seems appropriate to move it into a "criticisms" section of this page. --Chris Pickett 22:32, 30 November 2006 (UTC)

No, I think that it is a play on similar words, but as a concept it is entirely different. I've never heard what that article describes called 'tester driven development', but I have heard it called 'feature-creep', 'buggy spec' etc. It's an example of one way a project manager can begin to lose control of the project, not anything to do with the development methodology the developers may or may not be using to produce the actual code. --Nigelj 22:04, 30 November 2006 (UTC)
I realize that "Tester Driven Development" is not the same thing as TDD. But it seems to me like this anti-pattern might actually describe TDD done badly. "It refers to any software development project where the software testing phase is too long." Clearly that includes TDD, since you can't get "longer" than "since before the project begins"! :) Well, at the very least, there should be a disambiguation page or a "not to be confused with" note or something, so that people looking for Tester Driven Development when they mean TDD don't get the wrong impression. --Chris Pickett 22:32, 30 November 2006 (UTC)
Tester-Driven Development is clearly a pun on TDD, but a different concept. I think Tester-Driven Development could be pulled into an anti-patterns-in-testing article, which could include other critiques of TDD and of common mistakes in testing (like not spending any time on it). SteveLoughran 10:42, 17 April 2007 (UTC)

Limits of computer science

Automated testing may not be feasible at the limits of computer science (cryptography, robustness, artificial intelligence, computer security etc).

What's the problem with automated testing of cryptography? This sentence is weird and needs to be clarified. — ciphergoth 11:08, 19 April 2007 (UTC)

Automated testing is pretty tricky today for checking that a web site looks OK on mobile phones, especially if it's a limited-run phone in a different country/network from where the dev team is; I'd worry more about that than about AI limitations. Security is a hard one because a single defect can make a system insecure; testing tries to reassure through statistics (most configurations appear to work), but can never be used to guarantee correctness of a protocol or implementation. 'Robustness' is getting easier to test with tools like VMWare and Xen...you can create virtual networks with virtual hardware and simulate network failures. SteveLoughran 21:24, 19 April 2007 (UTC)
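
At the unit-test level, a lighter-weight cousin of that kind of failure simulation is to inject the failure through a stub rather than through virtual hardware. A minimal sketch, assuming JUnit 3, with entirely hypothetical names (Connection, RetryingFetcher):

    import java.io.IOException;

    import junit.framework.TestCase;

    public class RetryingFetcherTest extends TestCase {

        /** Minimal abstraction over the network call being exercised. */
        interface Connection {
            String fetch() throws IOException;
        }

        /** Hypothetical code under test: retries a failed fetch up to maxAttempts times. */
        static class RetryingFetcher {
            private final Connection connection;
            private final int maxAttempts;

            RetryingFetcher(Connection connection, int maxAttempts) {
                this.connection = connection;
                this.maxAttempts = maxAttempts;
            }

            String fetchWithRetry() throws IOException {
                IOException last = null;
                for (int attempt = 0; attempt < maxAttempts; attempt++) {
                    try {
                        return connection.fetch();
                    } catch (IOException e) {
                        last = e;   // remember the failure and try again
                    }
                }
                throw last;         // all attempts failed
            }
        }

        public void testRecoversAfterTwoSimulatedFailures() throws IOException {
            // Stub connection: fails twice, then succeeds.
            Connection flaky = new Connection() {
                private int calls;
                public String fetch() throws IOException {
                    if (++calls <= 2) {
                        throw new IOException("simulated network failure");
                    }
                    return "payload";
                }
            };
            assertEquals("payload", new RetryingFetcher(flaky, 3).fetchWithRetry());
        }
    }

This only shows how one class reacts to the failure; the VM-level approach described above is still what exercises whole-system robustness.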

Benefits need some citation

Test-driven development is a great idea, but some of the claims in the Benefits section are unsupported. I think at the least they need to be cited, or downgraded to opinions (with references to those who hold those opinions). Take, for example, the claim that even though more code is written, a project can be completed more quickly. Has anyone documented an experiment with two identical teams, one using TDD and the other not?

Similarly, the claim that TDD programmers hardly ever need to use a debugger sounds ludicrous to me (how about debugging the tests?). So does the claim that it's more productive to delete code that fails a test and rewrite it than it is to debug and fix it. Here's the current text from the article:

Programmers using pure TDD on new ("greenfield") projects report they only rarely feel the need to invoke a debugger. Used in conjunction with a Version control system, when tests fail unexpectedly, reverting the code to the last version that passed all tests is almost always more productive than debugging.

Mike Koss 21:57, 28 July 2007 (UTC)

I agree, the claim that "reverting" is the best approach to a test failure is, to use a UK technical term, bollocks. Rolling back all changes the moment a test fails implies you cannot add new features to a program, because it is inevitable that changes break tests; regression testing is one of their values. Once a test fails, you have a number of options:
  • roll back the change, hit the person who made it with a rolled-up copy of a WS-* specification
  • run the tests with extra diagnostics turned on, and try to infer why the recent change(s) are breaking it
  • attach a debugger to the test case and see what happens when the test runs
Debugging a test run is a fantastic way of finding out why a test fails. The test setup puts you into the right state for the problem, and once you've found it, you can use the test outside the debugger to verify it has gone away. This is another reason why you should turn every bug report into a test case (sketched at the end of this thread); it should be the first step to tracking down the problem. The only times you wouldn't use a debugger are when logging and diagnostics are sufficient to identify the problem, or when you can't debug the test run (it's remote, embedded, or the debugger affects the outcome).
I would argue strongly for removing that claimed benefit entirely. SteveLoughran 14:27, 18 October 2007 (UTC)
In fact, modern revision control systems are adding features to make this process more convenient. For example, the git revision control system contains a git-bisect command which takes you on a binary search between the current broken revision and a known working revision, driven by the results of your unit tests to find the exact commit where your test failed. --IanOsgood 19:43, 28 September 2007 (UTC)
Nice. Continuous Integration tools do a good job of catching problems early, provided they are running regularly enough and developers check in regularly, instead of committing a week's worth of changes on a Friday. SteveLoughran 14:29, 18 October 2007 (UTC)
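
Since the thread above keeps coming back to turning every bug report into a test case, here is a minimal sketch of what that step can look like, assuming JUnit 3; the Invoice class and the bug it describes are entirely hypothetical:

    import junit.framework.TestCase;

    public class InvoiceTotalBugTest extends TestCase {

        /**
         * Hypothetical class under test, shown here after the fix. The (equally
         * hypothetical) bug report said that a line with quantity zero was still
         * being charged once.
         */
        static class Invoice {
            private int totalCents;

            void addLine(int priceCents, int quantity) {
                totalCents += priceCents * quantity;
            }

            int getTotalCents() {
                return totalCents;
            }
        }

        /**
         * Written directly from the bug report, before the fix: it failed then,
         * drives the fix, and stays in the suite afterwards as a regression test.
         * It can also be run under a debugger to step straight into the failing
         * scenario.
         */
        public void testZeroQuantityLineDoesNotChangeTotal() {
            Invoice invoice = new Invoice();
            invoice.addLine(500, 1);  // one item at 5.00
            invoice.addLine(250, 0);  // a line with quantity zero
            assertEquals(500, invoice.getTotalCents());
        }
    }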


Low-quality external links

I'm pretty unimpressed with the quality of half the external links here...they primarily point to various blog entries in .NET land. Those aren't the in-depth references that Wikipedia likes to link to. I've gone through and (a) cut out anything that wasn't very educational or required logins to read and (b) limited links to one per blog. Furthermore, I'm not convinced by the referenced articles that VS2005 is capable of test-first development, at least if you use the built-in framework, but I left the articles in.

Given that TDD originated in Smalltalk and Java, I'd expect to see some more there. Here are some options:

  • Push all product listings to the separate List of unit testing frameworks
  • Split stuff up by .NET, Java, Ruby and other platforms, and try to keep coverage of all of them.
  • Create separate pages for testing in Java and testing in .NET, where we can cover the controversy (the JUnit 4 problem, TestNG vs JUnit, NUnit versus the MS Test tools...)

If there is value in all the remaining blog entries, this content could be used as the inspiration for a better Wikipedia article. —Preceding unsigned comment added by SteveLoughran (talk • contribs) 20:46, 17 January 2008 (UTC)

The links haven't improved much. Unless anyone else wants to, I'm going to take a sharp knife to the links. Those that make good articles to reference should become references; those that don't get cut. I don't want to impose a 'no blog' rule, because it is how some of the best TDD proponents get their message out. But we have to have high standards: it has to be something in the class of Martin Fowler's blog to get a mention. SteveLoughran (talk) 21:32, 25 May 2008 (UTC)
I've just moved some of the links around and purged any that weren't current or particularly good. Another iteration trimming out the .NET articles is needed. SteveLoughran (talk) 22:12, 25 May 2008 (UTC)

Code visibility

In this edit, an anonymous user at 65.57.245.11 completely rewrote the Code visibility section without comment or discussion. S/he introduced a discussion of black-box, white-box and glass-box testing. As far as I know, these concepts relate to testing, but are quite alien to test-driven development. In my experience, if you are unit-testing as you develop, you often have to test the details of code whose function is vital, but that will end up 'private' in the final deployment.

I have been re-reading some of 'TDD by example' by Kent Beck, but can't find any discussion of these issues. Have other editors got some reputable references that can help us get to the bottom of what should be in this section? —Preceding unsigned comment added by Nigelj (talk • contribs) 23:04, 25 January 2008 (UTC)

Certainly black-box and white-box testing are used in testing generally, but not usually in TDD, where the tests are written before the code. That said, there is always the issue of whether to test the internals of the code or just the public API. Internals: more detailed checking of state. Externals: guarantees stability of the public API and provides good code examples for others. SteveLoughran (talk) 22:10, 25 May 2008 (UTC)

I think that the discussion of black-box, white-box and glass-box testing is out of place here as it has no relevance to TDD. In my experience, you have to write tests at potentially every level of the code, depending on your confidence level and therefore the size of the TDD 'steps' that you are taking at the time. If that means you end up making encapsulated members public just to test them, it can ruin the proper encapsulation of the actual design. That's why I originally wrote this section: that's important, and there are tricks that help with it that are non-obvious at first (one is sketched at the end of this section). The section's fairly meaningless now, but now that every entry in WP has to be referenced, I'm afraid I don't have time to track down references for all that was there before this edit. It's a shame that the section's full of such irrelevant and useless tosh at the moment, though. --Nigelj (talk) 17:21, 26 May 2008 (UTC)
Having read it again, it reads like almost academic, paper-style definitions of things, apparently not written by a practitioner of TDD; some projects clearly scope test cases to have access to the internal state of the classes. As you say, it needs to be massively reworked. We can put the cross-references in afterwards. Overall, I'm worried about this article...it seems to have got a bit long and become a place for everyone with a test framework or blog on testing to put a link. If we trim the core text, we'd be making a start at improving it. SteveLoughran (talk) 09:51, 27 May 2008 (UTC)
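
For what it's worth, one of the non-obvious tricks alluded to above, in Java terms: rather than making an internal member public just so a test can reach it, give it package-private (default) access and put the test class in the same package. A minimal sketch, assuming JUnit 3, with entirely hypothetical names (DiscountCalculator, roundToNearestFive); in a real project the class under test would live in its own file in the same package:

    package com.example.billing;   // hypothetical package, shared by class and test

    import junit.framework.TestCase;

    /*
     * Hypothetical class under test. It is kept in this file only so the sketch
     * is self-contained; in a real project it would be a separate file in the
     * same package (this file would be DiscountCalculatorTest.java).
     */
    class DiscountCalculator {

        /* package-private: reachable from tests in the same package, hidden from other packages */
        int roundToNearestFive(int percent) {
            return ((percent + 2) / 5) * 5;
        }

        public int discountFor(int orderTotalCents) {
            return roundToNearestFive(orderTotalCents > 1000 ? 12 : 4);
        }
    }

    public class DiscountCalculatorTest extends TestCase {

        public void testInternalRoundingHelper() {
            // Exercises the package-private helper directly; no need to make it public.
            assertEquals(10, new DiscountCalculator().roundToNearestFive(12));
        }

        public void testPublicApi() {
            assertEquals(10, new DiscountCalculator().discountFor(2000));
        }
    }

The production class stays hidden from every other package; only the co-located test gains the extra access, so the deployed design keeps its encapsulation.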