Can you really freeze code? Can you thaw it? Melt it? ...
One could argue that all you can really do is stop changing it for a while.
...This typically happens when a team is already late going into a testing phase.
In order to avoid CodeFreeze, an XP project has SmallReleases, automated UnitTests, and automated AcceptanceTests. If a project does not have some of these elements, that is not an argument against CodeFreeze being an AntiPattern in XP. Further, on ExtremeProgrammingCorePractices you will find the phrase "Continuous process rather than batch"; CodeFreeze tends to produce batch rather than continuous. Branches are one way to try to deal with the issue, but they can be a violation of OnceAndOnlyOnce. It may sometimes be better to violate OnceAndOnlyOnce for brief intervals (for example, to avoid CodeFreeze), but in XP you would prefer not to, and you would try to keep such violations as short as possible. Cheers, -- jn
I think that the idea on a non-XP project is that testing will proceed faster if the pesky developers would just stop changing the code.
But, I haven't worked on a team that did this in a while, so I can't remember why it seemed like a good idea to the team at the time...
From my current perspective, it seems really strange when teams prefer to add bugs to the ListOfKnownBugs? rather than adding an AcceptanceTest or UnitTest that demonstrates what needs to be fixed.
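To make that concrete, here is a minimal sketch of the idea, using Python's unittest purely for illustration; parse_price, the "$1,299.00" input, and the bug itself are all invented, not anyone's actual code:

 import unittest

 def parse_price(text):
     # Imagined production code with the reported bug: it chokes on the
     # thousands separator, so "$1,299.00" raises ValueError.
     return float(text.lstrip("$"))

 class KnownBugsAsTests(unittest.TestCase):
     def test_price_with_thousands_separator(self):
         # This test documents the reported bug; it stays red until the
         # parsing code is actually fixed.
         self.assertEqual(parse_price("$1,299.00"), 1299.00)

 if __name__ == "__main__":
     unittest.main()

The point is only that a failing test is executable and unambiguous, while an entry on a bug list is neither.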
Cheers, -- JasonNocks
It's not an AntiPattern; it's critical at certain times, although people do get confused about what the appropriate definition of "freeze" is.
But the basic idea is that one should be shipping a known entity, not something that was still in the process of changing, whether for better or for worse, 20 seconds before shipping. That's just sloppy, and you end up with software in the field that is in a completely unknown state, and there is no exact copy of it remaining inside the company.
I have seen that state of affairs many times, this is not hypothetical, and it truly sucks.
So instead, you do a CodeFreeze, which means absolutely no new features added, period, no exceptions, and you do a final acceptance test. Exactly how to proceed at this point depends on the market segment and on business policies, but typically critical bugs found in this phase are allowed to be fixed, while non-critical bugs are merely recorded for later fixing. Once the acceptance test is passed (which means, among other things, no remaining critical bugs), then the product is allowed to be shipped.
The definition of "critical" can vary, of course. Lots of things can vary. But don't ever think that shipping an unknown quantity is a good thing.
Development can meanwhile proceed, but on a separate fork of the code. Thus it is indirectly important to have all your ducks in a row on things like source code control, forking and merging methodologies, etc.
-- DougMerritt
To me, CodeFreeze is definitely an AntiPattern in the context of ExtremeProgramming. One of the primary concepts is EmbraceChange. IMHO, CodeFreeze is fairly opposed to this.
Thanks for your reply. You provided a nicely detailed response that fits fairly well with my recollection. So, I understand where you are coming from and I'm not saying that I think the approach you are describing is bad. If it works for you, great.
I've just become used to doing things a little differently. Branching code in SourceControl? is OK, but I prefer to use it only when truly necessary. It kind of violates OnceAndOnlyOnce. I would prefer to just label the code that gets released.
Some of what you are talking about may also be a difference of size and organization of teams, teams working on different releases (QA/QC working on getting the current release out, developers starting to work on the next release), etc. In my experience, ExtremeProgramming teams tend to be smaller and work directly with the people writing AcceptanceTests, etc. So, there seems to be less need for CodeFreeze, or CodeSlush (I like the term).
The AcceptanceTests are written up front. The code is in a releasable state at the end of each iteration. It's up to the customer to decide when a sufficient number of AcceptanceTests have been satisfied.
Cheers, -- JasonNocks
I'm primarily talking about projects that deliver software to many customers, not to a single client, for instance, shrink-wrapped software sold to millions of people (or even merely dozens). In that context, I think CodeFreeze is critical, and offhand I don't see that you said anything that contradicts that.
If you are saying CodeFreeze isn't necessary when you have a single client for whom you're doing custom work for hire, that may be true. My experience on that is that the whole world tends to stop while the client examines current efforts; there's no need to do concurrent development on some next version. Presumably it still depends on the details of the situation, though.
Actually I'm not sure that I understand your objections to CodeFreeze; maybe I'm missing something.
P.S. CodeSlush is typically a phase that precedes CodeFreeze, rather than being something that replaces it outright.
-- DougMerritt
No, I don't think that you're missing anything. I'm not objecting to non-XP practices. I think two different approaches are both valid here. I think we're on our way to ViolentAgreement. :) Cheers, -- JasonNocks
''Development can meanwhile proceed, but on a separate fork of the code. Thus it is indirectly important to have all your ducks in a row on things like source code control, forking and merging methodologies, etc.''
It becomes an AntiPattern when there is no concept of forking or parallel development: for example, on an internal development team where all development stops just before testing, typically as part of a very literal waterfall process. -- StevenNewton
Agreed. -- dm
Only a simple product can keep a branch in a releasable state at all times. Unit tests are not nearly sufficient to guarantee anything. Take Cisco, for example: one of their system tests on a release probably takes a month. You cannot possibly say a branch is always releasable unless the product is relatively simple. That's not the kind of thing a lot of us work on.
I've done contract work at Cisco, but I don't quite follow your line of thought here. Nor did I mention the term "unit test".
As for guarantees, as is well known, you can't prove the absence of bugs. The point of testing is to find their presence. Once you stop finding them, you call it good enough to ship. That has nothing to do with some mythical state of bug-free, which is of course never achieved.
Methodology is important, but one should not mistake it for unattainable perfection. -- dm
Jason mentioned XP. So at any given time the only tests known to pass on a branch will be the unit tests. The point is, on a complicated product like a router the system tests take days, to weeks, to months. Clearly these tests can't be run for every change. A product can only be GA ready when these tests pass.
And that leads up to the point that....what?
That you can't get by with a single branch and no freeze just because you are using XP. The branch isn't release-ready at all times, because many products require long and complicated system tests. Unit testing is not sufficient for GA quality. Nor are acceptance tests.
Yes, UnitTests or AcceptanceTests alone are typically insufficient. I'm not stating otherwise. If your team does not have automated AcceptanceTests in place up front, then you will be correct: you cannot stay in a releasable state. -- jn
Is GA just a typo for Quality Assurance? Or what? GA stands for general availability; it's a synonym for public release used by many commercial software vendors. Also, CA, or controlled availability, is a kind of non-public beta, with software given out to trusted customers for evaluation and testing.
''I would prefer to just label the code that gets released.''
Hate labels. Branches are much simpler and cleaner. But your source control system may dictate this somewhat.
Branches are not simpler if you have to merge. Labels can easily be ignored, merging cannot.
When you branch for a release, fixes go on that branch; there's no need to merge.
So the fixes for the release are thrown away before the next release?
The ease with which labels go unmaintained is the problem. A branch always has the correct source, and there's nothing to maintain.
Labels don't need to be maintained. They indicate the state of the code at an instant in time.
I prefer avoiding having to merge if I don't see a need. There are times where it makes sense, but that does not indicate that it should be the general case. That's just my opinion. Cheers, -- JasonNocks
A branch can only be said to "always" have "the correct source" if it is the only branch. "Nothing to maintain" indicates the same. If I never intend to merge again, that is typically called a fork, not a branch.
Some SourceCodeControl? programs actually implement branches more like tags than conventional branches. In this sense, a branch tag is almost the same thing as a label.
The main difference is that if you are creating two or more versions of the same file, you are creating work in the future for some perceived immediate gain. That work will be realized when merging takes place. If there is no real immediate gain, then you have created unnecessary work that will have to be done in the future. Consider the CostOfBranching. Cheers, -- JasonNocks
Your model seems to involve working on only one release at a time. You can't use labels when developing on multiple releases with different feature sets. Even in a single-release model, managing labels to match the current release set is a major pain. Just branch, and all the changes in that branch are for your release. Merging is up to you.
My model is to prefer one codebase whenever it makes sense, which for me does tend to be more often. If I always branch the code, even when I don't really need to, then when I merge the code later, a certain portion of that work is unnecessary and wasted effort. If I only branch when I have a compelling reason, I am trying to avoid doing unnecessary work down the road (merging later). Cheers, -- JasonNocks
We used to CodeFreeze for several weeks during final acceptance test. We'd fix only serious bugs, and we had a process in place to determine which bugs were serious enough to get fixed during CodeFreeze. One reason why we didn't address non-serious issues is that changes to the code tend inadvertently to introduce other bugs. So we only changed the code when absolutely necessary. We couldn't have implemented this using labels. If we'd wanted to continue development during CodeFreeze, we'd need branches: one for the frozen code, one for ongoing development. -- TimKing
You are freezing people from checking in for weeks. That's not acceptable in my book. Branch and let people develop in parallel. I'd like to submit changes for the next release. Not wait.
If the entire team works on testing this isn't a problem. No one is submitting changes for the next release during this time.
If you want to move the changes to the new release, then that is a merge. Only on a small team would everyone be working on the same thing. Often we have people working on more than 3 branches at a time. Even on small teams, not everyone will be actively working on the release; they can get on with the next release.
It isn't a merge if the next release doesn't start development until the current release ships. It has nothing to do with team size; it's about team organization. I've worked on large teams that worked this way. The danger in "getting on with the next release" is that current work may depend on code that will change as part of this release's bug fixes. I've worked both ways and found the benefit of finishing one release before beginning the next is greater than doing it the other way.
If you can waste several weeks not working on the next release then more power to you. That's a lot of time. The current release will likely have dot releases and code changes will be made independently of the new release as well, so you have to always deal with this issue. I think you are too afraid of merging.
[Of course, there are organizations that have truly poor merge tools, so what they're afraid of is actually the tools. In environments with good merge tools, merging is automatic and trivial about 95% of the time, and only needs careful human eyeballs the other 5% of the time.]
Even an automatic trivial merge with the best tools can take a couple of minutes. If everyone is branching and merging all the time, that cost starts to add up. And the 5% can take quite a bit longer to resolve (sometimes tracking down someone else). If there are legitimate reasons for branching, fine, but consider the CostOfBranching. Why not only branch when necessary?
[Another thing that can screw up merging is if the team is in chaos, and 10 people are working on exactly the same lines of code all the time, so that automatic merging is inherently impossible. This does indeed happen, but in such cases, merging is the least of their problems.]
I'm not afraid of merging. I do it all the time. I'm relating my experience. I've had better results from finishing one release before starting the next. I don't waste any weeks. Everyone works on the current release until it is shipped. Then everyone starts to work on the next release.
[As others have already said, bully for you that you have had the sheer luxury of being able to do things sequentially. There are a zillion reasons why that can't work for all companies, but one of the obvious ones is if the QA phase lasts for 6 months and is done by a different department than the one that did the development. This is common in a lot of the biggest companies, e.g. IBM.]
If your QA phase lasts 6 months, then you are probably not doing XP (particularly SmallReleases). CodeFreeze is an anti-pattern in XP, possibly not in other practices. -- jn
I'm not saying it works for all organizations. I'm saying it isn't an anti-pattern at all organizations.
[Parts of the pre-ship process must be a little like that (developers not needed) in some cases, e.g. when the "testing" that's going on involves internationalization in 15 languages and locales, and testing and fixing bugs in that i18n stuff.]
[The environment you describe is in the minority, but it certainly is easier to do it that way when possible.]
One of the shops I worked at did a lot of i18n work and they used the technique I'm describing. The entire development team became testers (and tester assistants) for several weeks before the CDs were pressed. The i18n work was outsourced as a set of resource files after the US version shipped. I suppose the i18n shops could have reported bugs, but they never did it while I worked there.
The resource files branched, but the code didn't. The code was frozen. Each international version was made from the latest US release, so there was no need to merge the internationalized resource files.
I think CodeFreeze is fine in the context of "Hold on, everyone. I've broken the repository. Stop checking things in while I sort this out. Sorry. I'll notify you when you can commit changes again." TheMozillaProject closes when the build breaks, until things are fixed or the changes that broke the build are backed out. The MozillaTinderbox tracks the build.
It might be that this reflects a more serious process problem. How did the repository get broken? That's bad. That's one of the advantages of a change-based process with regression tests, such as the one AeGis implements: the repository doesn't get broken. I've even used this process manually in a ConcurrentVersionsSystem environment, with good results.
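For what it's worth, "using this process manually" can be as little as a small wrapper that refuses to check anything in unless the regression suite is green. Here is a rough sketch, assuming the suite can be run as a single command; run_regression_suite.py is a made-up name, and the cvs invocation is only an example:

 import subprocess
 import sys

 def suite_is_green():
     # Run the regression suite; a zero exit status means every test passed.
     return subprocess.call([sys.executable, "run_regression_suite.py"]) == 0

 def guarded_commit(message):
     # Only reach the ConcurrentVersionsSystem commit when the suite is green,
     # so the code in the repository is never knowingly broken.
     if not suite_is_green():
         sys.exit("Regression suite failed; nothing was committed.")
     subprocess.call(["cvs", "commit", "-m", message])

 if __name__ == "__main__":
     guarded_commit(sys.argv[1] if len(sys.argv) > 1 else "checked in with a green suite")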
I'm not sure if you did, but I assume that he meant "the code in the repository is broken", rather than literally that the repository itself was broken.
If you both meant that, still, you can't claim that any methodology can 100% prevent code breakage, because there are too many ways it could be considered to be broken that tests could not possibly catch, such as that a global search and replace broke a large number of comments... "broke" could mean they are no longer in esthetic format, or that they now say something not quite right.
Or any of zillions of other possibilities. Regression tests are essential, but they can never be a panacea. -- DougMerritt
''Indeed. I never said a good process was a panacea. I only implied that most developers don't use one and that we shouldn't be nonchalant about Repository Freeze.
If you already use a good process, then of course your RepositoryFreeze? can't be helped by adopting one. (This is not a comment on TheMozillaProject; I've never worked on TheMozillaProject. It's a comment about projects I've worked on.)
In the case of global search-and-replace, that you had to do a global search-and-replace might mean you need to refactor in order to implement OnceAndOnlyOnce. Is RepositoryFreeze? beneficial while that sort of global refactoring is going on?
I'm having difficulty understanding (or possibly parsing) that paragraph. Global search-and-replace is sometimes a means of refactoring (with or without regard to OnceAndOnlyOnce). -- dm
If I have to search and replace, even within a single file, much more across multiple files, I automatically ask whether I need a new ShieldPattern, whether I need to shield the changes behind some less volatile symbol or language construct. (Even if the problem is just that someone else keeps changing the API I use, and I'm getting ticked at him, a shield can make further replacements unnecessary.) The reason I mention OnceAndOnlyOnce is that if you really need to replace globally, that implies some variable or expression appears in many different places, an expression that is changing. You shouldn't usually need to change anything in a zillion places, and if you do, OnceAndOnlyOnceIsNotJustForCode; it's for labor, too.
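As a concrete illustration of the shield idea (every name below is invented; this is a sketch, not anyone's actual design): wrap the volatile thing in one class of your own, and let the rest of the code talk only to that class.

 class _VendorV1:
     # Stand-in for a third-party library whose API keeps changing.
     def submit_txn(self, account, cents):
         return "ok %s %d" % (account, cents)

 class PaymentGateway:
     # The shield: the only place that knows the vendor's current spelling.
     # The rest of the codebase calls charge(); when the vendor renames
     # submit_txn(), only this class is edited.
     def __init__(self, vendor=None):
         self._vendor = vendor if vendor is not None else _VendorV1()

     def charge(self, account_id, amount_in_cents):
         return self._vendor.submit_txn(account=account_id, cents=amount_in_cents)

 print(PaymentGateway().charge("acct-42", 1999))

When the vendor does rename submit_txn, the change is made OnceAndOnlyOnce, inside the shield, and the global search-and-replace never happens.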
''Should we move some of this discussion to a different page, perhaps?''
[Simplicity isn't the determining factor; development process is. The most complex product can have its code frozen.]
See CodeSlush
HasWantedPages?