Maybe not an XP challenge, but certainly a question whose answer is fuzzy to me: the customer has no time to write AcceptanceTests. Customers are busy people with many business concerns outside the development group. So they create a QA department (RJ's "separate team" from the AcceptanceTests page) to write the functional tests. How does QA make sure they have enough resources to keep pace with the developers? What do you do if a lag appears? What if the IdealDays required to develop a story don't map well onto the ones required to test it? How does XP control these variables?
That arrangement is in direct conflict with the XP practice of having an on-site customer. XP doesn't control the variables as stated above - it eliminates them when you follow the practices. If a project isn't doing it that way, the principles and practices of XP will still be helpful, but the team is doing a variant process, not XP.
If the customer doesn't have time to specify the AcceptanceTests, you aren't doing ExtremeProgramming. You can do something similar, but you lose the communication of requirements that CustomerDefinesTests provides, and you lose the confidence that seeing the tests run generates.
In XP, QA doesn't run the AcceptanceTests; the programmers do. If you have QA run them, you lose the feedback that comes from running them all the time, and you lose the communication that the programmers get from seeing the tests.
In XP, there is only one specialization of roles, that being Customer / Programmer. There is no QA department in XP. If you have a QA department, you are doing something else. It might be better or worse, but it's different for sure.
If the AcceptanceTests come out a long time after the code is written, the inevitable result will be poor communication on requirements and slower development due to backtracking to fix defects when the programmers' minds have moved on to later stories.
Language differences still leave room for fuzziness in this answer. You say Customers specify the AcceptanceTests. Good, as expected. You say programmers run them. Also kosher. But who writes them? If customers, why shouldn't we regard QA as customers here, and then how do you answer the questions when you substitute these terms? If programmers, you lose one of the fundamental tenets of QA, that developers are too close to the problem to write good end-to-end tests. Or do you happily lose that?
For the rest, sure, naturally.
There is no QA department in XP. Customers specify the tests. Programmers code the tests, or write the tools that allow the customer, or someone clerical, to type them in. There are many good uses for a QA department if you have one, such as making sure the system works on all platforms, etc. AcceptanceTests are part of the requirements, and it makes no sense to me to imagine the customers specifying the stories but not the test results.
"Write a program that computes the foozle function. See QA for the details." Huh??
There's no problem with programmers typing in the tests that the customers specify, except that it's boring and they should build a tool. Programmers might not be able to specify the tests, since they don't know the requirements, but why couldn't they type them in?
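For concreteness, here is a minimal sketch of what such a tool could look like, borrowing the foozle example above. Java is just one plausible choice, and the foozle implementation and the table values are made up; the point is only that the customer fills in the rows and the programmers wire them to the system.

 // Hypothetical sketch: the customer fills in the rows (input and expected result),
 // the programmers supply the code that drives the real system.
 public class FoozleAcceptanceTest {

     // The "foozle function" stands in for whatever the story requires.
     static int foozle(int x) {
         return x * x + 1;   // made-up placeholder implementation
     }

     public static void main(String[] args) {
         // Each row: { input, expected } -- specified by the customer, typed in by anyone.
         int[][] table = { { 0, 1 }, { 2, 5 }, { 10, 101 } };
         int failures = 0;
         for (int[] row : table) {
             int actual = foozle(row[0]);
             if (actual != row[1]) {
                 failures++;
                 System.out.println("FAIL: foozle(" + row[0] + ") = " + actual + ", expected " + row[1]);
             }
         }
         System.out.println(failures == 0 ? "All acceptance tests passed." : failures + " acceptance test(s) failed.");
     }
 }

A real version of the tool would read the table from a file or spreadsheet that the customer owns, so typing the tests in stops being the programmers' chore.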
Whoa. On the AcceptanceTests page RonJeffries writes:
Is there some reason this separate team shouldn't be called QA? And what kind of tool do you use for your FT scripts? Something commercial, or do you just extend the StarUnit framework? And do you CodeAcceptanceTestFirst like you CodeUnitTestFirst?
Really Ron, you're being very confusing here, and all of the original challenge questions above remain unanswered. Please be more direct.
Peter, a former colleague of mine, has put forward similar questions about the role of QA, etc. See WhatAboutTesting. -- StuartBarker
The question down the bottom of that page is very good. If XP is placing a lot of what was QA onto the programmers, we need to more closely define the role of QA on an XP development: ExtremeQuality -- PeterMerel
As for the role of QA and the question about its existence, I think it's logical to first identify what QA departments actually do. I do agree that development should be all over unit and acceptance tests, getting feedback early. But believing you are done when the customer's acceptance tests pass is interesting, and it reflects a common misunderstanding about what TDD doesn't do and what QA does. Everyone is an advocate for quality and the customer, but to dismiss the need for QA (some are going to hate me saying this) is still the proverbial "wolf watching the henhouse".
QA departments actually expect development to test the code at the unit and functional level before subjecting QA to it; when that doesn't happen, QA is usually vocal about it. Mapping unit tests, functional tests, and coverage tests into the Ant scripts for quick feedback and continuous integration? Yes, QA will help you get started! But QA departments do something else: stress testing, concurrency testing, security verification. Their goal is also customer advocacy, but their approach is to see if they can break it, and that is as much an art and a science as programming is. QA is often the one who makes deployment really work, by paying attention to and capturing all the little things that must happen along with the move of a tarball. Someone needs to convey to the operations people how the new app is made available and what config edits must be included in the critical once-a-month, four-minute, 3 AM Sunday maintenance window. You would be surprised how often development manages to skip a little detail in passing on deployment info. When you have seen the last bug cleared and the last screen looking beautiful, it is easy to forget that, oh yeah, we made a property file change for that too. (Yes, your CI mechanics can help here, if you have attended to the detail.)
Now, granted, most "testers" may not be amazing at all this... but usability and less technical tests need to be designed and run as well. I am very much on Alan Cooper's side here. I have seen very few developers (even working in pairs or on a team) who can produce a decent user interface, and the customers and support people are left to deal with the fallout. And I can tell you, customers don't usually know a good user interface ahead of time either; they mostly just know a bad one when it is dumped on them. Production is a late point at which to discover you have a clunky UI, because your customer advocate or customer was not cognizant of all the things a good usability person would be.
Best practices may help you avoid SQL injection and concurrency problems, but until you put on the hat that says "I am going to find bugs, because they always exist", and until you devote significant focus to QA, you will get a lot of weak testing through simple functional tests. Minor things can make a lot of angry customers. Passing a functional test can be easy. Put a thousand users on it, including a dozen hackers bent on breaking it; add a five-nines SLA with an 80% sub-6-second response-time expectation; add ongoing encryption and re-encryption demands for MasterCard/Visa compliance; and subject the back-end database to a couple of concurrent reporting runs and a few table locks. If you aren't doing that in your tests, you will get the resulting surprises in production instead.
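To make that concrete, here is a rough sketch of the load-and-latency side of such testing, assuming a hypothetical callSystemUnderTest() and purely illustrative numbers (a thousand simulated users, 80% of responses under six seconds):

 import java.util.ArrayList;
 import java.util.Collections;
 import java.util.List;
 import java.util.concurrent.Callable;
 import java.util.concurrent.ExecutorService;
 import java.util.concurrent.Executors;
 import java.util.concurrent.Future;

 // Rough sketch of a load test: many concurrent "users" hit the system and we
 // check the 80th-percentile response time against the SLA. The called method
 // and the numbers are stand-ins, not a real benchmark.
 public class LoadTestSketch {

     static void callSystemUnderTest() throws InterruptedException {
         Thread.sleep(50);   // placeholder for a real request to the system
     }

     public static void main(String[] args) throws Exception {
         int users = 1000;
         ExecutorService pool = Executors.newFixedThreadPool(100);

         Callable<Long> oneUser = () -> {
             long start = System.nanoTime();
             callSystemUnderTest();
             return (System.nanoTime() - start) / 1_000_000;   // elapsed millis
         };

         List<Future<Long>> results = new ArrayList<>();
         for (int i = 0; i < users; i++) {
             results.add(pool.submit(oneUser));
         }
         List<Long> latencies = new ArrayList<>();
         for (Future<Long> f : results) {
             latencies.add(f.get());
         }
         pool.shutdown();

         Collections.sort(latencies);
         long p80 = latencies.get((int) (latencies.size() * 0.8) - 1);
         System.out.println("80th percentile latency: " + p80 + " ms");
         System.out.println(p80 < 6000 ? "SLA check passed" : "SLA check FAILED");
     }
 }

A real load test would also ramp users up gradually, mix in the hostile and malformed traffic mentioned above, and run against a production-like environment rather than the development lab.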
Not all tests can be automated or run in a tight CI loop. There are long-running tests that poke and prod, fishing for memory leaks, connection pool issues, resistance to bad data, and more; there are exploratory UI tests. And for some businesses, the right error is a company-killer. (I can tell you that customers have no idea how to ask for what it takes to deliver such systems. Their acceptance tests will not include the kind of scrutiny good QA does, but they might sue, or move elsewhere, or at least become more hesitant about bringing new business when reliability falters in the face of real-world events. It's not cool software to them; it's their business.)
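As a toy illustration of the long-running kind, a soak test might exercise the system for hours and watch heap usage for a steady climb. exerciseSystem() here is a hypothetical stand-in, and serious leak hunting would use a profiler, but the shape is the same:

 // Toy soak test: repeat the same operation for hours and log heap usage so a
 // slow, steady climb (a likely leak) becomes visible. Everything here is illustrative.
 public class SoakTestSketch {

     static void exerciseSystem() {
         StringBuilder sb = new StringBuilder();   // stand-in workload
         for (int i = 0; i < 1000; i++) {
             sb.append(i);
         }
     }

     public static void main(String[] args) throws InterruptedException {
         Runtime rt = Runtime.getRuntime();
         long hours = 8;
         long end = System.currentTimeMillis() + hours * 60 * 60 * 1000;
         while (System.currentTimeMillis() < end) {
             for (int i = 0; i < 10_000; i++) {
                 exerciseSystem();
             }
             System.gc();   // only a hint, but good enough to watch the trend
             long usedMb = (rt.totalMemory() - rt.freeMemory()) / (1024 * 1024);
             System.out.println("heap in use: " + usedMb + " MB");
             Thread.sleep(60_000);   // sample once a minute
         }
     }
 }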
"No QA" shows a limited understanding of what real applications do; the lab of the development shop is not production. I don't think the question is whether QA should exist. Programmers are not skeptical enough of their own code, or willing to assume the burden described above, nor do they typically know how to really make code fail. Design goals and grand intentions do not produce perfect execution; we are human. I had 16 years in development before my recent switch to QA. Developer practice alone is not enough; we need to see quality in this industry. I have yet to see flawless execution from development, and some domains actually need a lot better. Quality does not end with developers. (Have you met any developers who thought they were part of the problem? And you know they exist somewhere.) The discipline and experience in our QA organization regularly kicks up large defects from development, because of this. The question is how to integrate good QA into the agile endeavor. If Agile values people over process, I have news: our QA people seem to know more about design patterns, testability, and the customer than our developers do, so maybe recognizing the value in good QA would be a good start.
It took a little prodding, but Kent Beck managed to acknowledge the value of true QA in a recent webinar; I don't think he ever actually consciously excluded them. But given QA's absence in much of the material, the complete focus on code practice, and the perhaps-regular adversarial relationship (development versus QA), do we now decide that developers (even good ones) are simply above scrutiny? Perhaps you just haven't met any good test engineers? (Agile coding is high discipline; believe me, so is good testing. And I have yet to see any code that would not benefit from another good set of eyes.) Not everything that is not code is overhead.
Ultimately there is one more thing that I think should be understood: XP has not yet stabilized, and neither has the competency of its practitioners. I love where the very good teams are going, but while the theory is good, I see almost no one working 40-hour weeks yet. (I can tell you about the long-term effects of working too much overtime.) There is no perfect execution of XP, or even consistency about what that would mean. So how does a company risk millions or billions on something that essentially leaves its life blood to programmers who simply say "trust us, our kung fu is good", when we do not yet have a history of truly delivering? And now let's throw in a new manager who breaks the team; should that risk the whole company? QA is good CYA. I believe that, for development work, a developer-oriented view is right... as long as it's not myopic and self-serving to a fault.
The question - what about QA? - really needs a better answer.
-- Jeff Smith, a developer who believes in QA