# Function Point

Function points are a measure of the size of computer applications and the projects that build them. The size is measured from a functional, or user, point of view. It is independent of the computer language, development methodology, technology or capability of the project team used to develop the application.

from http://ourworld.compuserve.com/homepages/softcomp/fpfaq.htm

I think I've finally figured out why I dislike function points, at least how they are currently calculated. Function points violate the RepresentationCondition and the PrincipleOfParsimony.

Jeffery, D.R., Low, G.C. and Barnes, M., "A comparison of function point counting techniques," IEEE Transactions on Software Engineering, 19(5), pp. 529-532, 1993.

This was an empirical study that suggests that the adjusted function point measure is no better at predicting effort than the unadjusted function point. So why then bother with the technical complexity factor? -- JasonYip

Function Points have strayed a bit from their original RepresentationCondition as their practitioners have striven to make them "better". The original "software science" paper established a pretty stable relation between the number of data types handled by a program and the number of instructions in the program. The logical extension of that observation is modern FP.

As for the PrincipleOfParsimony, I recall asking my instructor why bother counting nineteen ILF's when the result of the computation was the same as if the count were one. He said, "you count nineteen to make sure there aren't twenty". I was looking for a precision that simply isn't in the model.
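To make the nineteen-versus-one point concrete: the unadjusted count is just a weighted sum over the five IFPUG component types. A minimal sketch using the standard IFPUG complexity weights (the component counts themselves are invented for illustration):

```python
# Hypothetical sketch of an unadjusted function point count.
# Weights are the standard IFPUG values; all counts are made up.

WEIGHTS = {  # (low, average, high) weight per component type
    "EI":  (3, 4, 6),    # external inputs
    "EO":  (4, 5, 7),    # external outputs
    "EQ":  (3, 4, 6),    # external inquiries
    "ILF": (7, 10, 15),  # internal logical files
    "EIF": (5, 7, 10),   # external interface files
}

def unadjusted_fp(counts):
    """counts maps (type, complexity index 0..2) -> number of components."""
    return sum(WEIGHTS[t][c] * n for (t, c), n in counts.items())

# Nineteen average-complexity ILFs versus one: the totals differ,
# but each ILF contributes the same fixed weight of 10.
print(unadjusted_fp({("ILF", 1): 19}))  # 190
print(unadjusted_fp({("ILF", 1): 1}))   # 10
```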

If the adjusted function point measure is no better than the unadjusted one (in your particular, repeatable context), then don't bother with the adjustment. Just like everything else, it works better if you use your head. FP won't just work off the shelf because there are too many variables. It has to be calibrated to your environment through a bit of trial and error and remembering what happened last time.

-- WaldenMathews
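For reference, the adjustment being debated above is mechanical: fourteen general system characteristics, each rated 0 to 5, scale the unadjusted count by at most plus or minus 35%. A minimal sketch of the standard IFPUG formula (the ratings below are invented):

```python
# Sketch of the IFPUG value adjustment factor. The formula is the
# standard one; the example ratings are made up.

def adjusted_fp(ufp, gsc_ratings):
    """gsc_ratings: fourteen general system characteristics, each 0..5."""
    assert len(gsc_ratings) == 14 and all(0 <= r <= 5 for r in gsc_ratings)
    tdi = sum(gsc_ratings)        # total degree of influence, 0..70
    vaf = 0.65 + 0.01 * tdi       # value adjustment factor, 0.65..1.35
    return round(ufp * vaf, 2)

print(adjusted_fp(100, [3] * 14))  # 107.0
```

If the adjustment adds no predictive power in your environment, as the Jeffery study above suggests, this is the step you would simply skip.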

Do FunctionPoints measure UserValue?, or the effort/cost of a feature? If they measure UserValue?, why do they focus on operational aspects (number of database queries, number of dialogs, etc.) when these have only a weak relation to user value? If they measure effort/cost, how are they better than LinesOfCode? Or do FunctionPoints measure something different yet? -- JohannesBrodwall (confused)

The confusion arises from the fundamental (and difficult) question "How do you decide on the size of a software program?" Actually, it is assumed that operational aspects are a pretty good indicator of how much function (and therefore value) an end user will see. FP proponents prefer it to LOC (lines of code) for two main reasons. First, it can be applied at very early stages of requirements. Second, it is immune to LOC perturbations caused by switching programming languages. But to use it to estimate effort, you still have to translate FP's into LOC first, using conversion tables.
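The FP-to-LOC translation mentioned above works through per-language "gearing factors". A hedged sketch, using ballpark figures of the kind often quoted in published backfiring tables (these are not authoritative values, and real tables vary widely):

```python
# Rough "backfiring" sketch: converting a function point count into a
# LOC estimate via a per-language gearing factor. The factors here are
# ballpark figures, not authoritative values.

LOC_PER_FP = {"C": 128, "Java": 53, "Smalltalk": 21}

def estimate_loc(fp, language):
    return fp * LOC_PER_FP[language]

print(estimate_loc(100, "Java"))  # 5300
```

The same 100 FP of function costs very different amounts of code depending on the language, which is exactly why FP proponents prefer it to raw LOC for sizing.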

FP analysis is essentially about interfaces, not "files", although the nomenclature still in use is confusing. If you're good at object analysis and design, you'll be good at FP. It's not magic. Doing a decent FP analysis of a proposed software system gets you asking many of the right questions, causing your familiarity with the system to skyrocket, and your estimate to improve correspondingly.

-- WaldenMathews

I suddenly realised one metric that FunctionPointAnalysis would be valuable for: Average LOC/FunctionPoint. The lower the value, the more concisely the code delivers its function. For a more realistic version: Average Statement Count/FunctionPoint. Could this be a valuable metric for Refactoring or for comparing designs?
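The proposed ratio can be sketched directly; the project numbers below are invented, comparing two designs that deliver the same counted functionality:

```python
# Sketch of the LOC-per-function-point ratio proposed above.
# All figures are invented for illustration.

def loc_per_fp(loc, function_points):
    return loc / function_points

before = loc_per_fp(12000, 150)  # 80.0 LOC per FP
after = loc_per_fp(9000, 150)    # 60.0 after refactoring the same function
print(before, after)
```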

Does FunctionPointAnalysis have to be performed all at once, or can it be done in stages of incremental refinement? It seems to me that FP analysis assumes a BigDesignUpFront.

-- JohannesBrodwall

Very nice questions. If choice of programming language is held constant, then big fluctuations in LOC/FP probably mean something, but beyond that I wouldn't venture. I believe, along with others, that macro management is best done with the senses, not with instruments. To make the point more sharply, try steering towards well-refactored code looking only at your metric, not at the code and not at what people think about the code.

FP supports both incremental and "big bang" approaches. You might use "big bang" on an existing product to establish an FP baseline. For enhancements, FP analysis looks at added, modified and deleted functionality. The incremental model of FP is more complex (as is the incremental model of anything), and therefore FP falls into WaterfallSyndrome.

-- WaldenMathews
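A minimal sketch of sizing an enhancement as described above, counting added, modified, and deleted functionality; it deliberately ignores the adjustment factors the full IFPUG enhancement formula applies, and the counts are invented:

```python
# Simplified sketch of enhancement counting: an enhancement project is
# sized by the unadjusted function points it adds, changes, and deletes.
# The full IFPUG enhancement formula also applies adjustment factors,
# omitted here; all counts are made up.

def enhancement_fp(added, changed, deleted):
    return added + changed + deleted

print(enhancement_fp(added=30, changed=12, deleted=5))  # 47
```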

With due respect, you count the FunctionPoints that your use cases or user stories reveal at any given point in time. If you have an evolutionary approach to functional requirements capture, your FunctionPointsAnalysis follows the same pattern. FunctionPoints measure what you know about the SystemUnderDiscussion?. -- GastónNusimovich

The AdjustedFunctionPoints? algorithm tries to estimate a “how” metric, which is strongly contextual, from an UnadjustedFunctionPointsCount?, a “what” metric that is context-agnostic.

The AdjustedFunctionPoints? algorithm is an initial attempt to identify the key drivers of a “real-world” algorithm, one that must be based on “strongly contextual” parameters in every case, which means on the history of your own past projects.

When you apply a “real-world” algorithm like SLIM with data from past projects of your own development organization, you get really good predictors. This means that you must keep accurate and detailed records of key metrics from every project. You may think that all that effort is not worth the outcome. Well, that depends on perspective. -- GastónNusimovich

CategoryMetrics

EditText of this page (last edited May 4, 2004) or FindPage with title or text search