Productivity Measurement

ProductivityMeasurement is a top management concern.

To measure productivity, one needs a metric for value. To date there is no widely recognised method for this outside of FunctionPointAnalysis.

FPA isn't a metric for value. It's a method for estimating effort (FunctionPoints per man-hour). Using that as a productivity measure assumes function points correlate with value. They probably do, but I reckon only weakly.

Doing something utterly pointless with half the resources makes you twice as productive

Shouldn't do. Productivity should be value delivered / effort. If it's utterly pointless, it should have zero value and so you have (at best) zero productivity.

I'm not sure that we're not violently agreeing here. How do you measure the output to determine what is "the most" - I think you use a metric that involves value. If I spend an entire working day entering things at an enormous rate into WardsWiki, there's a lot of output, but I'm not being highly productive from a work perspective.

Another thing to consider: what are we trying to measure the value of here?


I work with FunctionPointAnalysis daily, being a CFPS (certified function point specialist; you can search my name at http://www.ifpug.org/other/search.htm). So I want to clarify one thing: FPA is used to measure the functional size of an application (i.e. it's a scope metric). That's it. People can use mathematical models like CoCoMo to correlate a scope quantity with other quantities they wish to measure (e.g. time, cost, quality, resources), but the result will only be as correct as the model is. For example, we can assess that a set of requirements amounts to 150 FunctionPoints. Using a simple model we can estimate the cost of developing an application that fulfils them:

 estimated_cost = function_points * cost_per_function_point * knowledge_of_business_factor * technology_proficiency_factor
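That formula is just a product of a size, a rate and two adjustment factors. A minimal Python sketch, where the cost rate and both factor values are invented for illustration (they are not figures from this page):

```python
# Hypothetical figures: the $/FP rate and both factors below are
# invented for illustration, not taken from the text above.
def estimated_cost(function_points, cost_per_function_point,
                   knowledge_of_business_factor,
                   technology_proficiency_factor):
    """Simple cost model: functional size times rate, adjusted by two factors."""
    return (function_points * cost_per_function_point
            * knowledge_of_business_factor * technology_proficiency_factor)

# 150 FP at $500/FP, a team new to the business (factor 1.2)
# but proficient in the technology (factor 0.9).
cost = estimated_cost(150, 500.0, 1.2, 0.9)
print(cost)  # 81000.0
```

The interesting part is exactly what the text warns about: the two factors are the model, and the estimate is only as good as they are.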

Some people use a simple model to make ProductivityMeasurements: pick some completed projects (with known effort to complete), assess their functional size using FunctionPoints, measure the productivity per function point for each one and use the average productivity as the company's sacred productivity number. There are several mistakes in this approach.

ProductivityMeasurement is like any other kind of measurement: you need a model, a hypothesis and statistical knowledge to ensure that your measurements and your results are accurate. Having experience in experimental physics helps a lot.

-- DanielYokomiso


IMHO ProductivityMeasurement is a misplaced concern. If we study ProjectManagement theory we'll see that, in a project, we need to manage scope, time, costs, quality and risks (according to the PMI we can also manage resources, communication and acquisitions, but they're minor concerns). Good productivity is just a side-effect of good management.

-- DanielYokomiso

This is simplistic. Do you not care, as a project manager, how long it will take someone to produce a deliverable? Improving this over time reduces your costs.

How long it takes is a time estimate, and time is something project managers should manage. The correlation between the time needed to produce a deliverable and its cost isn't linear. When people focus on productivity they are collapsing two dimensions into one. There are three things we need to be aware of at any given time: how much of our budget we have spent (AC), how much of our initial scope we have delivered (EV) and how much we should have delivered so far (PV). For those we need good estimates, not something like: we can deliver four FunctionPoints / (man * week). If we assume that the time needed to deliver some scope is a simple equation (i.e. time = scope / (productivity * people)) we'll have wrong intermediate dates in our project, because producing software isn't a simple linear equation (i.e. delivering 3 FunctionPoints may sometimes take one day and sometimes a month; the ProductivityMeasurement just gives us an average rate over longer periods).
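The three quantities named above (AC, EV, PV) are the standard EarnedValue inputs. A minimal Python sketch, with all project numbers invented for illustration, showing the variances and performance indices they yield:

```python
# Hypothetical project snapshot; all figures invented for illustration.
actual_cost = 120_000.0    # AC: how much of the budget we have spent
earned_value = 100_000.0   # EV: budgeted cost of the scope actually delivered
planned_value = 130_000.0  # PV: budgeted cost of the scope we should have delivered by now

cost_variance = earned_value - actual_cost        # negative means over budget
schedule_variance = earned_value - planned_value  # negative means behind schedule
cpi = earned_value / actual_cost                  # cost performance index
spi = earned_value / planned_value                # schedule performance index

print(cost_variance, schedule_variance)  # -20000.0 -30000.0
print(round(cpi, 3), round(spi, 3))      # 0.833 0.769
```

Note that none of these three quantities is a productivity number; they track spending and delivered scope separately, which is exactly the point being made.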

That there is not a linear equation is not a reason for dismissing a measure. Even if you think, as a project manager, productivity is not your concern, you should be hoping it is *someone's* concern.

Productivity isn't a number you can apply to some scope size value and get an accurate time estimate. It's something much more complex than that, and trying to reduce it to a simple number is futile.

"Good estimates" require a knowledge of productivity - or you'll be ignoring the fact that one of your developers may be able to deliver something in 2 weeks that would take another 10 weeks

A developer may estimate that he can deliver 10 FunctionPoints worth of functionality in one week, but later estimate 4 weeks to deliver another set of 10 FunctionPoints. If we assume productivity is a simple number we will find an average of 6.25 FP/week with a standard deviation of 5.3. Which should you use to estimate, his experience or your imprecise ProductivityMeasurement? Trying to find a productivity number for a developer is doomed to failure, because he may be more proficient at some tasks than at others, and usually you will know this only for things that are very similar to other things he has done.
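The arithmetic in that example can be checked directly. A small Python sketch using the two observations above (10 FP in one week and 10 FP in four weeks):

```python
import statistics

# The two observations from the example above, expressed as FP per week:
# 10 FP in 1 week, then 10 FP in 4 weeks.
productivities = [10 / 1, 10 / 4]

mean = statistics.mean(productivities)    # 6.25
stdev = statistics.stdev(productivities)  # sample standard deviation

print(mean, round(stdev, 1))  # 6.25 5.3
```

A standard deviation almost as large as the mean is the point: the "average productivity" of this developer tells you almost nothing about his next task.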

OK. Survey all your developers for estimates for the same set of project deliverables, and look for patterns. I'll bet some developers will fairly consistently (think that they will) deliver things faster than others. That's _a_ measure of productivity. If you think they can't estimate accurately enough for that, why are you basing your project plans on their estimates?

You assume that one developer can be consistently more productive than another for all tasks. That isn't true for most developers.

No, I don't assume that. That's why I said "fairly consistently". And that is true in my experience. Where I've worked, some developers have been clearly more productive than others.

Also you can't say that keeping track of estimates is a productivity measure; you're measuring estimates. I do think that developers can have good estimating skills, but these estimates aren't necessarily correlated with your deliverable size measure (be it FunctionPoints, UseCasePoints?, GummiBearsOfComplexity). Try documenting the realized times and costs for the tasks you measured and do the statistical analysis.

It may not be a concern of a project manager for one particular project, but between projects, if you don't care about productivity (and improving it) at all, I think you've got a problem.

Again, what we need is to reduce our costs or the time to produce a deliverable. Even if we could create a valid model for productivity, how could we manage it? You can't just say "Be more productive" or "Just write perfect code" and expect productivity to improve. Trying to measure productivity is like trying to make complex numbers comparable and assuming the result is a total order relation.

Of course you can manage it, or at least affect it. Working conditions, the kind of people you employ, the pay and benefits you give them, the training offered, the range of experience they have and can gain can all affect productivity. And you can measure it - in my experience, it's been pretty easy after a few projects to say which developers are more productive than others. Again, if this is the way your organisation treats (or seemingly ignores) productivity, then I'd say you have a problem. Do you treat your people as PlugCompatibleInterchangeableEngineers?

And how are you making a ProductivityMeasurement? Looking at how developers performed in other projects and guessing who is better than whom isn't a measurement.

Yes it is. It is _a_ measure. How well it correlates with actual productivity can be argued about. But it's a measure.

It isn't a measure. Measurements aren't based on guessing. If you guess, you're guessing; to measure, you need actual numbers (i.e. quantities), not just a feeling (i.e. a quality). Read papers and books on metrology, especially the Guide to the Expression of Uncertainty in Measurement and the International Vocabulary of Basic and General Terms in Metrology, to understand what a measurement is.

It's not necessarily a guess - you labelled it that, not me. I claim it's an indicator of productivity.

If you want to measure, you need numbers to compare, and I'd bet the numbers in your organization aren't as precise as you think they are, so using them for estimates would give imprecise estimates:

 scope in fps, productivity in fps/week, time in weeks
 time = scope / productivity
 (stdev(time) / time)^2 = (stdev(scope) / scope)^2 + (stdev(productivity) / productivity)^2
 scope = 15, stdev(scope) = 1.5, productivity = 10, stdev(productivity) = 2.5, time = 1.5, stdev(time) = 0.4

That is, you'll have a variance coefficient of 27% in your time estimate (i.e. about 2 days of uncertainty). In this example we had a very precise measurement of developer productivity; my experience indicates a much larger standard deviation. Managing with 27% uncertainty (without considering any other risks) isn't really managing.
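That propagation can be checked numerically. A small Python sketch using the figures above (time is scope divided by productivity, so the relative uncertainties of a quotient add in quadrature):

```python
import math

# Figures from the example above: scope 15 FP (stdev 1.5),
# productivity 10 FP/week (stdev 2.5).
scope, scope_sd = 15.0, 1.5
productivity, productivity_sd = 10.0, 2.5

time = scope / productivity  # 1.5 weeks

# For a quotient, relative uncertainties add in quadrature.
rel = math.sqrt((scope_sd / scope) ** 2
                + (productivity_sd / productivity) ** 2)
time_sd = time * rel

print(round(time, 1), round(time_sd, 1))   # 1.5 0.4
print(f"{rel:.0%} variance coefficient")   # 27% variance coefficient
```

Even with an optimistically tight 25% spread on productivity, the time estimate inherits a 27% spread, which is the argument being made.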

I'm curious: what's your measured error for developer estimates?

In the organizations that measured it, we verified variance coefficients ranging from 10% to 150%, with lower values for more experienced developers and better development processes. My own variance coefficient is under 5% today, but I'm much more experienced in estimating and metrics than the average developer.

Well, that's my point. With 150% error, you're not "managing" either. With a collection of developers with 5% errors, a pretty good idea of productivity can be calculated.

I trust the developers with low variance coefficients. The others should be trained in estimating and in assessing their own skills. I use numbers to back my claims on who can make good estimates and who can't, not guesses (no matter how educated they are).

I don't believe in PlugCompatibleInterchangeableEngineers, I believe we need to use science to manage projects, not guessing which developers are better and trying to figure out our time and budget estimates.

And in the absence of those exact figures you crave, you do nothing. I think you're throwing the baby out with the bath water.

It's easy to find those figures (which, by the way, aren't exact, because they have uncertainties associated with them); it's the first thing I do when I'm involved in any development project, and it usually takes only a couple of days.

How can you get an idea of the accuracy of estimates of something that might take several weeks in a couple of days?

Looking at past projects' data.

If you want to use ProductivityMeasurement, do it correctly (i.e. formulate a productivity hypothesis, make several measurements and find the average and standard deviation). IME you can't find a productivity number for a single developer that isn't too imprecise (usually they have a variance coefficient larger than 40%), so it's useless for time and cost estimates. It's better to have developers who can make good estimates (which aren't based on a simple equation), and to use productivity numbers for teams or even your entire organization, to make whole-project time and cost estimates or decisions involving several projects at once (i.e. PortfolioManagement).

Amusing. A "good estimate" - if you want it to be based on science and hard numbers - requires a developer to have a good idea of their own productivity, of course. If you're prepared to accept the developer's subjective best estimate, you've not got your science. Why accept that his or her attempt to produce a number for an uncertain estimate is OK, but say that trying to do the same for productivity isn't?

A productivity value is a ratio scope / time; I'm talking about good time estimates for development tasks. You can measure the accuracy of someone's time estimates by comparing the ratio realized / estimated and applying basic statistical analysis to the values. It's very interesting because you can assess whether a developer has systematic errors (i.e. always estimates better/worse than reality). The problem with productivity is that it can vary wildly: experience with one task earlier in a project can result in better productivity in similar activities later in the same project, so we can't use a single productivity number to estimate both tasks.
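That realized / estimated ratio analysis takes only a few lines. A Python sketch; the task history below is invented for illustration:

```python
import statistics

# Hypothetical history of one developer's estimates vs. realized
# times (in days); all numbers invented for illustration.
estimated = [2.0, 5.0, 3.0, 8.0, 4.0]
realized = [2.5, 6.0, 3.5, 10.0, 4.5]

ratios = [r / e for r, e in zip(realized, estimated)]
mean_ratio = statistics.mean(ratios)  # systematic bias if far from 1.0
spread = statistics.stdev(ratios)     # precision of the estimates

# A mean ratio consistently above 1.0 suggests systematic
# underestimation; the spread shows how (im)precise the estimates are.
print(round(mean_ratio, 2), round(spread, 2))
```

In this invented history the developer underestimates by about 20% but does so very consistently, so his estimates are still useful once the bias is corrected for - exactly the kind of systematic error the analysis is meant to expose.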

I've never had the luxury of working with developers over a long enough time and with similar enough projects to get anything statistically valid - and as you say, it will change anyway within the project and as developers get more experienced. So I think your reliance on accurate estimates is a bit of a house of cards - your reported errors would tend to support that.

The problem with ProductivityMeasurement is that we don't know how to model productivity; we just have gut feelings about what affects it (e.g. technology, experience, kind of business, management, development process, salary, weather). Trying to measure something when we don't know (and don't have a hypothesis about) how it works is doomed to failure. It's like correlating salary with shoe size, age and religion: we're mixing qualitative and quantitative data that we hope are part of our model, but we don't know how they influence the result.

I think we'll have to AgreeToDisagree. I'm talking about something objective: ProductivityMeasurement, not productivity (which is something we don't have a testable definition for). I'm providing verifiable data and methods for correctly assessing one's estimating process, using standard definitions for terms (e.g. measurement, variance coefficient, uncertainty). As you refuse to refute what I'm saying and keep talking about the importance of productivity (which I never denied), I don't think we can reach any sensible conclusion to this.

I think we'll have to AgreeToDisagree. I think we will. I believe I am refuting what you're saying. I think you are measuring productivity, but refusing to recognise it as that. If you're not seeing that, we're not going to get anywhere.


See XpProductivityMeasurementProblem


CategoryEnterpriseComputingConcerns, CategoryMetrics


EditText of this page (last edited June 3, 2005) or FindPage with title or text search