Payroll Example Two Discussion

Discussion of PayrollExampleTwo, kept on its own page to avoid making that one long, since it already has code.

One immediate drawback I see compared to PayrollExample is that a "power user" (administrator) cannot change the formulas. A Java programmer has to be hired or rented to make any changes. (I will agree that for a bigger organization that may be less of an issue, but it matters more the smaller the org.) -t

The client explicitly set a requirement that all payroll updates be made by a "payroll specialist", i.e., me or a member of my staff. However, we did make versions of the system that allowed users to change various constants such as the EI & CPP rates and yearly maximums. Providing updates to formulae, however, was considered too technical for even administrative users, and was both a source of revenue for us and a source of comfort for our users who knew that they were getting a tested and validated payroll system without having to worry about potential (expensive) mistakes.

Understood. I agree they both have their place. Generally I was thinking closer to the SAP model, where you have programmers and "configurators". The configurators are sort of halfway between programmers and business analysts. That way they can focus on payroll rules and customer relations without having to learn the ins and outs of Java and compilers.

At one point, we considered developing a scripting language on top of other parts of the production system (it isn't just a payroll program), specifically to make it easier for "configurators" to customise it. However, the business case for this was rather weak, as: (a) our client didn't want anyone outside of a professional development team mucking about with code (for good reason -- the system was used by small, mostly-independent offices to support in-home nursing provision, so there are financial, patient care, and liability justifications for this), and (b) the reality was that any "configurator" of suitable ability would be a programmer working for us in-house, and therefore able to understand C++ (I only used Java for PayrollExampleTwo) enough to code on the business side and use our build system, etc. However, I would be interested in seeing a ProceduralProgramming & TableOrientedProgramming equivalent to my ObjectOriented example, in order to compare the two approaches. In particular, I'd like to see how you tackle the requirement to override certain federal constants and formulae on a province-by-province basis.

I'll work on it. But first let me take this opportunity to rant about how bloated typical Java apps are. Compare these test stubs:

		//Example P-1
		Employee julie = new Employee(52) {
			public double TC() {
				return 26467;
			}			
			public double TCP() {
				return 16422;
			}
			public double Invest() {
				return 2000;
			}
			public int getDisabledDependents() {
				return 1;
			}
			public int getDependentsUnder19() {
				return 2;
			}
		};
Versus:
 // Example P-2
 e = new Employee(52);
 e.name = "Julie";
 e.TC = 26467;
 e.TCP = 16422;
 e.invest = 2000;
 e.disabDependants = 1;
 e.dependantsUnder19 = 2;
Maybe you have fast eyes that can read verbose code fast. I don't. I like info compact and clean such that it reads almost like pseudo-code. Why people tolerate such shit, I don't know, but I hope it falls off the edge of the Earth and goes the way of the starter crank. We finally got away from COBOL to come back to THAT???? Damn! No wonder scriptish languages are making a comeback. Do we really have to nearly triple the size of our code to get "type safety"? Maybe anal domains need it, I don't know, but I don't want to be part of it. -t

Whilst Java is verbose, the above is an uncharacteristic example. Remember that Employee is a stub. It has only one member attribute (because its value varies for each test), and the return values of the methods default to what's required to support the majority of tests. That approach means two tests are rather verbose -- due to having to override methods -- but the rest are not. In production, Employee would have the same methods, but would have attributes populated appropriately via lookup from a database, set via constructor, etc. If Java supported properties (like C#), which I wish it did, the syntax for setting attributes could be almost exactly what you've suggested.

And, yes, I can read verbose code fast.
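For what it's worth, plain Java can get fairly close to the compact form of Example P-2 without C#-style properties by using a fluent builder. This is only a hypothetical sketch -- the fields, defaults, and setter names below are illustrative, not the actual PayrollExampleTwo stub:

```java
// Hypothetical sketch: a fluent builder makes Java test setup nearly as
// compact as the field-assignment version, without needing C# properties.
class Employee {
    int payPeriods;
    String name = "";
    double tc = 26467, tcp = 16422, invest = 0;   // illustrative defaults
    int disabledDependents = 0, dependentsUnder19 = 0;

    Employee(int payPeriods) { this.payPeriods = payPeriods; }

    // Each setter returns 'this' so calls can be chained.
    Employee name(String v)            { name = v; return this; }
    Employee tc(double v)              { tc = v; return this; }
    Employee tcp(double v)             { tcp = v; return this; }
    Employee invest(double v)          { invest = v; return this; }
    Employee disabledDependents(int v) { disabledDependents = v; return this; }
    Employee dependentsUnder19(int v)  { dependentsUnder19 = v; return this; }
}

public class BuilderDemo {
    public static void main(String[] args) {
        Employee julie = new Employee(52)
                .name("Julie").tc(26467).tcp(16422).invest(2000)
                .disabledDependents(1).dependentsUnder19(2);
        System.out.println(julie.name + " " + julie.tcp);
    }
}
```

The trade-off is a handful of one-line setters in the stub itself, in exchange for compact call sites in every test.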

I did limit my criticism to Java because, for one, Eiffel also offers shortcuts.


(Note: I have inserted various "post-edits" to the discussion below to clarify information and add details. -top)

As it is, PayrollExample doesn't have enough "grouping power" to do these kinds of calculations in a streamlined way. But such could be added by making "PayItemID" be non-unique in table "payItems" and perhaps adding a new "payItemGroups" table. This would allow bunches of calculations to be performed based on region-hood. But such could complicate configuration for items that don't need grouping. It's a trade-off between optimizing the design to be a "glue" tool or a heavy-duty rules & math processing tool. At this point I'd probably leave most of the above kinds of calculations in code, whether it be procedural, OOP, heavy-typed, light-typed, spreadsheets, etc.

The best tools are often those that let you mix the best tools (recursive problem here? Hmmm). My approach is an aggregator that allows different tools and paradigms to do what they do best. This reflects a realistic business environment where different tools and sources of info must work together to solve a problem. Some info may come from custom spreadsheets, others from MS-Access, and yet others from "enterprise" languages like Java, COBOL, or Ada. In many cases one cannot control where the info or calculations are done. You are hired to make them work together, not overhaul the company with your Great Master Paradigm. I'd like to see how alternatives address such coordination.

I'm not quite following you here. The example presented simply performs tax deduction calculations. It would inevitably be part of something bigger, such as payroll and cost forecasting, which could be written in Java, COBOL, Ada, C#, VB, VBA, you-name-it. The intent was not to show some "Great Master Paradigm" (whatever that is) at work, but merely to be an example of a real business problem that benefited from OO. I understand you've been looking for such an example for a while.

The specific scenario you stated doesn't seem like something that TOP could help with, by my assessment, except possibly to make it easier for non-programmers to set up formulas. I won't explore that further at this time, other than to float a what-if to ponder and perhaps examine the approaches used in related configuration-centric tools such as SAP Payroll.

PageAnchor: non_top_procedural

As far as whether a non-TOP procedural or OO version is "better", it's hard to tell without an analysis of possible and likely future change scenarios and seems to be a classic case-statement-versus-subclassing debate found in other topics. My guess of future change is that it's close to a wash: OO will not offer any huge effort savings over time, only marginal improvement at best. Canada is not likely to gain a huge quantity of new territories, barring a world-changing event that may do away with or overhaul its current tax system anyhow.

I do see a very rough pattern such that if you have up to a dozen or two "nodes" which are a variation on a theme (such as tax regions), then old-fashioned CASE statements are sufficient and the simplest. Between about 20 and 75, OOP may have a slight advantage[1]. Past 75, TOP is more helpful so that one can sift, sort, print, search, cross-reference, etc. the larger volume of variation nodes. --top

Let's look at using CASE statements. E.g:
        // Provincial non-refundable personal tax credit
        double K1P(int taxregion) {
                switch (taxregion) {
                        case AB: {
                                double TCP = employee.TCP();
                                if (TCP == -1)
                                        TCP = 16775;
                                return 0.10 * TCP;
                        }
                        case QC:
                                return 0;
                        case ON: {
                                double TCP = employee.TCP();
                                if (TCP == -1)
                                        TCP = 8881;
                                return 0.0605 * TCP;
                        }
                        default:
                                throw new InvalidRegionException("K1P");
                }
        }
Is this better? I suppose if the Canada Payroll documents were structured to resemble CASE statements, with the formulae provided in (say) tables organised by factor (K1P, K2P, etc.) and region within factor, then it might make sense. But, the Payroll documents are structured by region, and formulae within region. Thus, the CASE statements are conceptually more distant from the source document than using the OO approach. In other words, it's more effort to translate the source document into CASE statements than into classes.
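For comparison, there is also a middle ground between the CASE statement and full subclassing: a dispatch table keyed by region. The sketch below is hedged -- the region codes and default values are copied from the CASE example above, but the class and method names are made up:

```java
import java.util.Map;
import java.util.function.DoubleUnaryOperator;

// Hypothetical sketch: each region's K1P formula is an entry in a dispatch
// table, so adding or changing a region means touching one row rather than
// editing the shared body of a switch statement.
public class K1PTable {
    // A TCP of -1 means "not supplied"; fall back to the region default.
    static double orDefault(double tcp, double dflt) {
        return tcp == -1 ? dflt : tcp;
    }

    static final Map<String, DoubleUnaryOperator> K1P = Map.of(
        "AB", tcp -> 0.10   * orDefault(tcp, 16775),
        "QC", tcp -> 0.0,
        "ON", tcp -> 0.0605 * orDefault(tcp, 8881)
    );

    static double k1p(String region, double tcp) {
        DoubleUnaryOperator f = K1P.get(region);
        if (f == null)
            throw new IllegalArgumentException("K1P: unknown region " + region);
        return f.applyAsDouble(tcp);
    }

    public static void main(String[] args) {
        System.out.println(k1p("ON", -1));  // uses the ON default of 8881
    }
}
```

Like the CASE version, this groups by factor rather than by region, so it inherits the same "conceptual distance from the source document" objection discussed below; it mainly removes the fall-through and scoping hazards.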

Are you saying the OO approach happens to be closer to the documentation layout? That may be true, but it's merely a happenstance argument. In this particular situation the documentation may indeed be noun-grouped instead of verb-grouped, and sub-classing may indeed fit that. I won't dispute such, but a different writer could just as well have done it verb/task-grouped in an alternative universe. I've also organized code and/or tables to match the specific layout of source documents (not necessarily OOP). -t

Yes, the OO approach in this case happens to be closer to the documentation layout, but it's also closer to the real-world, er, noun-grouping (the tax regions), rather than the abstract verb-grouping (the individual factors).

CASE statements also increase the likelihood of inadvertently changing formulae in the wrong region. When tax updates occur, they are documented on a per-region basis. This month, Quebec and Ontario might change. In six months, BC and New Brunswick might change. Every six months, at least one region changes. Grouping the factors by tax region, as is done in the OO version, reflects the real world and facilitates making the changes that regularly occur in the real world. A CASE-statement approach does not. It's true that new regions are unlikely to be added (it only occurred once during a decade), but OO doesn't just facilitate adding new regions, it facilitates maintaining existing regions.

There's always going to be adjacent code of some kind regardless of whether you use case statements or sub-classing. "Bump thy neighbor" can happen regardless, so the "wrong region" accident argument can work both ways. I suppose with sub-classing, one is more likely to change the wrong method. As far as "reflects the real world", I'd like more details on that. Taxes are a mental concept, not a physical concept. (For something as sensitive as payroll, I'd hope you have unit tests that would detect adjacent breakage anyhow.)

Taxes may be a mental concept (essentially, but try not paying your "concept"), but the real world structure the system needs to reflect are the individual tax regions. These have a physical reality, for tax purposes, that the system reflects. As for changing the wrong method, I'd argue that's far less likely -- because the methods are visually and functionally very distinct from each other -- than accidentally changing a very similar implementation of a given factor in region MB when you should have changed region NB immediately below it. Yes, there are extensive unit tests to detect breakage, but they won't tell you where the breakage lies, only that the final figures for a given region are wrong. Interestingly, an early version of this system was procedural and relied on CASE statements. Not only was it more difficult to maintain than the OO version, the length of time it took to debug -- due to accidental mis-changes and the like -- was higher than the OO version. Indeed, everyone on the development team agreed that the OO version was a vast improvement in terms of readability, maintainability, and elegance. Switching to an OO approach, in this case, demonstrated all benefits and no downsides.

It's hard to know how much of that difficulty was due to the stupid way the C syntax family of languages implements CASE statements (IsBreakStatementArchaic). C's CASE syntax is so poor that it probably pollutes any such test. In other words, we don't have enough info to separate language shortcomings from paradigm shortcomings. I'd need to see what changes were actually made and what the coder was looking at when they allegedly hit the wrong keys. Your abbreviations may be too short. OO code-unit names using those same conventions of "MB" and "NB" may have caused confusion also. (More on this below.) But if sub-classing works for you in that scenario in that language, then fine. Let success be your guide.

IsBreakStatementArchaic seems, not surprisingly, to be more about the 'break' statement than CASE statements in general. 'Break' hasn't been an issue here, though I'd have no objection to CASE statements that don't fall through. How would you change CASE statements in general to improve over the inheritance/polymorphism example I've shown?

I thought you said they were using a C-like syntax? If that's the case, then C-style syntax is indeed a possible or contributing factor. It's a bad syntax design. That being said, I don't have enough details about the psychological (WetWare) causes of the errors your staff makes, and thus cannot work around their mental barriers without guessing. Like I said elsewhere, editing the wrong case row is not a high-frequency error of mine compared to other errors. I fix what's broke long before I fix what's not broke; and case slot confusion is not a real problem for me. Your mind/fingers may differ. And again the flip side is accidentally changing the wrong method which it should be weighed against because methods don't float in space all by themselves.

Surely an improved CASE statement would apply to everyone, not just my staff and their mental barriers? Or are you saying that apart from 'break', CASE is good enough, and we don't need inheritance/polymorphism-based solutions?

I believe the utility of polymorphism is exaggerated. As far as fixing CASE statements, first I need to know what is wrong with them, and whether it's universal or a brain-specific thing. And we'd have to compare to polymorphism-related errors, such as editing the wrong sub-class or method.

Perhaps the utility of polymorphism is exaggerated. That is a fair hypothesis. Can you suggest how we might quantitatively and objectively test it?

As for fixing CASE statements, didn't you write, above, that it's "hard to know how much of that difficulty was due to the stupid way the C syntax family of languages implements CASE statements"? As such, do you have suggestions for improving them (other than the obvious step of removing fall-through) and thereby eliminating that difficulty? Or are you proposing that there is some, as yet unknown difficulty that could be resolved within something that is still recognisably a CASE statement without going as far as inheritance & polymorphism?

Further, the CASE grouping makes it easier to spot similarities and differences in algorithm patterns for the different regions. It may help one recognize a flaw and/or suggest re-factorings. If they are all spread apart, patterns are not readily visible. One can see that "AB" and "ON" have similar algorithms. In larger code sections, this ability grows more valuable. -t

I don't disagree with that. However, I'd argue the maintenance benefits of having the regions in separate classes outweighs the benefits of having them in CASE groupings merely to identify possible refactorings. Such refactorings can still be identified if one decides to go looking for them.

Let's not forget that if we cast aside all the OO philosophy, inheritance and polymorphism simply provide an implicit CASE mechanism that makes it easy to maintain each tax region's calculations in distinct, independent, region-based units.
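That "implicit CASE" can be made concrete in a few lines. The sketch below is illustrative only -- these are not the actual PayrollExampleTwo classes -- but it shows the dispatch a CASE statement does explicitly being done instead by virtual method lookup, with each region's formulae kept in their own unit:

```java
// Hypothetical sketch of the "implicit CASE": the selector is the object's
// class rather than a switch expression. The federal default lives in the
// base class; provinces override only what differs.
abstract class TaxRegion {
    double K1P(double tcp) { return 0; }  // default (e.g., Quebec uses it as-is)
}

class Alberta extends TaxRegion {
    @Override double K1P(double tcp) { return 0.10 * (tcp == -1 ? 16775 : tcp); }
}

class Ontario extends TaxRegion {
    @Override double K1P(double tcp) { return 0.0605 * (tcp == -1 ? 8881 : tcp); }
}

class Quebec extends TaxRegion { }  // inherits the zero default unchanged

public class ImplicitCase {
    public static void main(String[] args) {
        TaxRegion r = new Ontario();  // "case selection" happens here, once
        System.out.println(r.K1P(-1));
    }
}
```

The practical difference from a CASE statement is where the grouping lives: all of Ontario's factors can sit together in one class, matching the per-region structure of the tax updates.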

I will agree that perhaps sometimes the problem is best solved using a single tool or language. The decision devil is in the specifics. Maybe Canada's payroll rules are not sufficiently complex or dynamic to test a multi-tool/multi-source scenario where TOP may shine more. I hinted at a sales-commission engine that may need to be integrated on top of hourly payroll. Suppose it is canned software such that integrating it directly into your Java app is not entirely practical. If we have a lot of these kinds of things, then the utility of a multi-source example may start to become more recognizable. I set out to solve an integration problem, not merely a tax computation problem. --top

The intent was that you re-implement PayrollExampleTwo using TableOrientedProgramming and/or ProceduralProgramming, not (necessarily) implement it using the PayrollExample framework.

If I do add grouping, it may make an interesting exercise, but won't necessarily show any advantages that I could identify to readers. Some portions of tax calculations are simply best implemented as textual code (although I personally don't like your coding style). My framework allows us to take this into consideration, and to me that is one of the benefits of my payroll framework. Maybe another portion is best done by custom spreadsheets because they involve fast-changing business requirements, and my framework can accommodate that also. -top

Is your PayrollExample a definitive illustration of TableOrientedProgramming? What don't you like about my coding style?

As MentalMasturbation, I've also kicked around ideas for a "set-based formula engine" that allows sharing of formulas based on set theory instead of inheritance. It could do combos that would be difficult or messy to replicate using "traditional" inheritance. -top


PageAnchor: Calc01

Here's a table that could map features and formulas to provinces/states:

 Prov./St.|Feature-A|feB|feC|feD|Formula-A|ForB|Etc...
 ---------|---------|---|---|---|---------|----|------
 Califor..|.........|.Y.|...|.Y.|foo+bar()|....|
 Texas....|....Y....|...|...|.Y.|.........|....|    
 Nevada...|.........|.Y.|...|...|A*B-C....|V(x)|
 New York.|....Y....|.Y.|.Y.|.Y.|fed-hls..|....|
 Vermont..|....Y....|...|.Y.|...|.........|....|
 Etc......

"feX" is an abbreviation for "Feature-X", where X is a letter. A similar shortcut is done for formulas. These abbreviations and letterings are merely done to simplify and size the illustration. More meaningful names would likely be used in practice. (Dots to prevent TabMunging.)

We could have it all in one table, or for more flexibility (and complexity) we could split out the entities and use many-to-many tables to link them:

 table: province_state   // province or state
 ----------
 ps_ID
 ps_title

 table: ps_features       // features of province/state
 ------------
 feature_id               // could be an integer or abbrev.
 feature_title
 has                      // Boolean
 implementation           // optional

 table: ps_feature_links
 ----------------
 ps_Ref                   // f.k. to province_state table
 feature_ref

 table: formulas
 ---------------
 formula_id
 formula_title

 table: ps_formula_links
 ---------------
 ps_ref
 formula_ref

Oh my. I suggest you try it with PayrollExampleTwo.

Again, I don't claim it would make example 2 "better", other than possibly improving the grade of the "non-programmer configuration" feature. As-is, it would only serve as a "teaching guide" at best, and a failed experiment at worst.

I've seen a packaged production software product (outside of ExBase) that kept formulas in tables not too different from PayrollExample, although they were broken down to their elementary level of two operands and one operator instead of expressions, similar to an AbstractSyntaxTree, although more "linear" and reference-based than tree-like. It was an IBM AS/400 billing system for civil engineering firms from a company called McCosker? (as I remember it). Generally a power-user created the formula templates for the non-power-users (billers) to use. -t
Attempted Summary

Here is my attempt at a summary of this discussion. Feel free to describe alternative interpretations.

Approach A may result in human errors/difficulties G, H, I, etc., and approach B may result in human errors/difficulties J, K, L, etc. However, without "lab" or field observations, it's difficult to verify which are actually more common, which are the more costly errors, and how much variation there is between individuals.

--top
Footnotes

[1] In my observation, when you start to have a lot of variations on a theme, such as maybe cities, then there is often a lot of similarities among them such that they don't fit a clean pattern of one algorithm per one "sub-type". The pattern becomes much more complex and "fractured". This particular example doesn't have enough variations to see how the pattern scales out in practice. I often lean toward using TOP to manage the sub-variations. See Example "Pete-82" in HofPattern for a related discussion on patterns of variations. -t

PayrollExampleTwo exemplifies the pattern you describe. Commonalities go into the base class, differences go into derived classes. This can be extended to arbitrarily-complex hierarchies, with delegation (or multiple inheritance, where available) handling even non-hierarchical commonalities and differences. If you feel TOP can do this better, I invite you to start demonstrating its efficacy by converting PayrollExampleTwo to a TOP equivalent.

DeltaIsolation can grow into a sprawling mess and is arguably unnatural because we generally don't think about things that way in our head, at least not on a large scale. It's essentially hierarchical instead of set-based, meaning it scales ugly. The TOP equivalent is roughly that blank "cells" get the default (common) behavior, by the way. PayrollExampleTwo does not have enough variations and no "nested variations", and thus does not make a very good example either way. It doesn't "stress test" the patterns.

I see. Then I'll have to take that as meaning polymorphism and inheritance are superior for handling situations like PayrollExampleTwo. Fortunately, the majority of my work in the past three decades (or so) has been more like PayrollExampleTwo than not, so I'll keep using polymorphism and inheritance instead of TOP until a suitable demonstration or two gives me enough evidence to consider changing.

I didn't agree to that. I only agree we don't have enough dissectable info about the change patterns of the said domain to make an informed determination. It may be that in that domain poly is better. By the way, enough others have agreed that OopNotForDomainModeling such that I don't feel that compelled to fight that fight anymore. -t

Indeed, I wrote a goodly bit on OopNotForDomainModeling and OopBizDomainGap, and if I recall correctly, several other related pages. I guess we'll have to forgo seeing any working benefit of TOP, then? As for the change patterns of PayrollExampleTwo, they are explained in PayrollExampleTwoDiscussion -- some individual provinces change every six months, and the federal calculations can change every six months.

I can see patterns in that example that could make use of TOP if there were more or larger occurrences, such as range tables. In bigger systems/orgs, power users will want to edit those ranges without calling a programmer, and tablizing them would facilitate that. And the add/subtract sequences (A + B - C + E - F, etc.) could also perhaps be tablized. It would almost be like building a domain-specific spreadsheet processor, kind of a more constrained version of PayrollExample (one).

Please, build it so we can see it in action for PayrollExampleTwo, which is real business code unlike PayrollExample. Surely if it works for "more or larger occurrences, such as range tables" (what are "range tables"?), then it can work -- and be easily implemented -- for the relatively simple PayrollExampleTwo.

It would be overkill for PayrollExampleTwo as it currently is. It would be of no net benefit unless there is a need for non-programmers to change the values/constants. A range table is something equivalent to code in the example that resembles "If a<300 then x1() elseif a<450 then x2() elseif a<600 then...". I once worked on a postal rate calculation system that used many range tables. Maybe I'll kick around the idea some as MentalMasturbation ("Tablebation") and let you know what I come up with.

Yes, I'm sure it would be overkill for a production application, but its simplicity could make for an excellent example. It's not unusual for CPP/QPP and EI rates to be changeable by non-programmers, so you could illustrate that. By "range table", do you mean something like this?
	// Additional tax on taxable income (Ontario Health Premium)
	double V2() {
		if (A() <= 20000)
			return 0;
		else if (A() <= 36000)
			return theLesserOf(300, 0.06 * (A() - 20000));
		else if (A() <= 48000)
			return theLesserOf(450, 300 + 0.06 * (A() - 36000));
		else if (A() <= 72000)
			return theLesserOf(600, 450 + 0.25 * (A() - 48000));
		else if (A() <= 200000)
			return theLesserOf(750, 600 + 0.25 * (A() - 72000));
		else
			return theLesserOf(900, 750 + 0.25 * (A() - 200000));
	}
That's taken from PayrollExampleTwo.

It's not overkill for all applications, only for this one, unless there is a stated need to make the constants non-programmer-changeable.

Sorry, I didn't mean TOP was overkill for all applications. It is perhaps overkill for this one, but as stated above, its simplicity could make for an excellent example. It's not unusual for CPP/QPP and EI rates to be changeable by non-programmers, so you could illustrate that.

I'm trying to find a TOP design that's generic enough to flex yet not too confusing and/or has too many layers of indirection. So far I haven't, but will keep kicking around draft ideas.

Cool. I look forward to seeing it when it's done.

Dammit, you tricked me into thinking about such gizmos :-) Now various designs keep involuntarily spinning around in my head like an ABBA elevator tune.

When I generalize my drafts, they tend to resemble the back end of what would or could end up being very similar to "graphical business rule" tools, as can be found at: http://decisions.com

How well such tools integrate with existing systems, I don't know. (Related: BusinessRulesMetabase.)

If you want to make the constants power-user-editable but specific to the existing system, then assign constant names in a ConstantTable of some sort. The entry form could resemble:

		If A <= [20000] then
			result = [0]
		else if A <= [36000] then
			result = theLesserOf([300], [0.06] * (A - [20000]))
		else if A <= [48000] then
			result = theLesserOf([450], [300] + [0.06] * (A - [36000]))
                Etc...
The "[.....]" represent input boxes. I can envision "in between" designs in terms of variations between a "generic" formula engine and something "hard wired" like above that only allows changes to the constants instead of new ranges, new formulas, etc.

It would be trivial to replace the constants with variables, or an array of elements to allow for an arbitrary number of conditions, and parametrise them so they can be edited and retrieved elsewhere in the program. I presume that's not what is meant by TableOrientedProgramming, however. Or is it?
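For illustration, here is one way that "array of elements" parametrisation might look, using the V2 brackets from the code above. The class name and array layout are assumptions -- a real system would presumably load the arrays from a table or data-entry form rather than hard-code them:

```java
// Hypothetical sketch: V2's bracket constants live in parallel arrays, so
// an editing form (or one table row per bracket) could change them, or add
// brackets, without touching the control flow.
public class V2Brackets {
    static final double[] CEILING = { 20000, 36000, 48000, 72000, 200000, Double.MAX_VALUE };
    static final double[] CAP     = {     0,   300,   450,   600,    750,    900 };
    static final double[] RATE    = {     0,  0.06,  0.06,  0.25,   0.25,   0.25 };
    static final double[] BASE    = {     0,     0,   300,   450,    600,    750 };
    static final double[] FLOOR   = {     0, 20000, 36000, 48000,  72000, 200000 };

    // Same result as the if/else chain: find the bracket, apply its formula.
    static double v2(double a) {
        for (int i = 0; i < CEILING.length; i++)
            if (a <= CEILING[i])
                return Math.min(CAP[i], BASE[i] + RATE[i] * (a - FLOOR[i]));
        throw new IllegalStateException("unreachable: a <= Double.MAX_VALUE");
    }

    public static void main(String[] args) {
        System.out.println(v2(30000));
    }
}
```

The first bracket uses a cap, base, and rate of zero so that a single formula shape covers the "return 0" case too.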

There are various design approaches, each with various levels of what I'd consider TOP. For example, we could put the formulas in range tables:

 Group | ...From... | .....To..... | Formula
 -----------------------------------------------
 ..V2..| ......0.00 | ....20000.00 | 0
 ..V2..| ..20000.01 | ....36000.00 | lesserOf(300, 0.06 * (A() - 20000))
 ..V2..| ..36000.01 | ....48000.00 | lesserOf(450, 300 + 0.06 * (A() - 36000))
 ..V2..| ..48000.01 | ....72000.00 | lesserOf(600, 450 + 0.25 * (A() - 48000))
 ..V2..| ..72000.01 | ...200000.00 | lesserOf(750, 600 + 0.25 * (A() - 72000))
 ..V2..| .200000.01 | 999999999.99 | lesserOf(900, 750 + 0.25 * (A() - 200000))
(Dots to prevent TabMunging) An "eval" would be used to process the formula. This approach is programmer-friendly, but may not be power-user friendly. There are different ways to make range tables, but since they are usually published as "from/to" charts in my experience, I chose the from/to approach here. An alternative is to specify only the "floor" of the range to avoid the ugly "999999999.99" thing. The floor approach would be similar to only having "from", but could be "higher than" or "higher than or equal". The query would then select the lowest "match". But I'll leave those tradeoff choices to the reader, for cases can be made for each.
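Java (the language of PayrollExampleTwo) has no "eval", so a sketch of the same range table in that language might store the formula column as a function value rather than a string. The names below are hypothetical; a database-backed version could store an expression string and interpret it instead:

```java
import java.util.List;
import java.util.function.DoubleUnaryOperator;

// Hypothetical sketch of the from/to range table, with the Formula column
// held as a lambda in place of an eval'd expression string.
public class RangeTable {
    static final class Row {
        final String group;
        final double from, to;
        final DoubleUnaryOperator formula;
        Row(String group, double from, double to, DoubleUnaryOperator formula) {
            this.group = group; this.from = from; this.to = to; this.formula = formula;
        }
    }

    static double lesserOf(double x, double y) { return Math.min(x, y); }

    static final List<Row> ROWS = List.of(
        new Row("V2",      0.00,     20000.00, a -> 0),
        new Row("V2",  20000.01,     36000.00, a -> lesserOf(300, 0.06 * (a - 20000))),
        new Row("V2",  36000.01,     48000.00, a -> lesserOf(450, 300 + 0.06 * (a - 36000))),
        new Row("V2",  48000.01,     72000.00, a -> lesserOf(600, 450 + 0.25 * (a - 48000))),
        new Row("V2",  72000.01,    200000.00, a -> lesserOf(750, 600 + 0.25 * (a - 72000))),
        new Row("V2", 200000.01, 999999999.99, a -> lesserOf(900, 750 + 0.25 * (a - 200000)))
    );

    // Select the row whose [from, to] range contains the amount, then apply it.
    static double eval(String group, double a) {
        for (Row r : ROWS)
            if (r.group.equals(group) && a >= r.from && a <= r.to)
                return r.formula.applyAsDouble(a);
        throw new IllegalArgumentException("no range matches " + group + "/" + a);
    }

    public static void main(String[] args) {
        System.out.println(eval("V2", 30000));
    }
}
```

The lambda column loses the power-user editability of a true eval'd string, but keeps the table's other advantages: the brackets can be sifted, sorted, and validated as data.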

If this approach is considered programmer-friendly, but not power-user friendly, I have two questions: (1) As a programmer-friendly approach, how is it better than encoding the above in source code? (2) As a non power-user friendly approach, given that the goal is to expose the formulae to power-users, how could you change it to make it more power-user friendly?

I find the tabular approach much more readable than the case version, at least to my WetWare. If I followed WorkBackwardFromPseudoCode, I'd start with a similar table, and then case-ify it if I cannot find a practical way to implement it as a direct table.

I said the above wasn't "power-user-friendly" largely because it doesn't protect them from typos etc. To make it more PUF, we'd probably have to make a controlled UI around it. To do so, we could have the following tables:

 ranges (table)
 --------------
 rangeID
 groupRef     // such as the "v2" group
 from
 to           // perhaps optional, see notes above
 formulaRef   // f.k. to formula table  Ex: "v2x"

 parameters (table)   // AKA "constants"
 ------------------
 paramID
 rangeRef     // f.k. to "ranges" table
 formulaVar   // a letter code defined by the formula
 paramValue   // double-precision

 formula (table)
 ------------
 formulaID        // Ex: "v2x"
 descript
 implementation   // [design choice] actual formula if using EVAL approach
 userNotes

 formulaParamDict (table)   // parameter dictionary
 ----------------
 formulaRef
 varName
 varDescript   // prompt text
 type          // integer (see note), number (double), yes/no (1,0)
 // potential validation columns, not shown

The formula would be associated with each given range row. The parameter values (constants) would then be entered via some kind of data entry form. Note how this is edging closer to what I called "the back end of what would or could end up being very similar to graphical business rule tools" above. However, the formulas themselves would still probably be hard-wired, while the parameters (constants) would be alterable by power-users. Our sample formula may be defined something like this in the back-end code:

 function v2x(p[]) {
	return lesserOf(p['a'], p['b'] + p['c'] * (A() - p['d']));
 }

However, the choice of using formula "v2x" could be in a pull-down list for the front end targeted to power-users. Thus, power users could select which formula applies to a given range and edit its parameter values without programmer involvement.

In our particular example, all 6 "case" rows can use the same formula, but in the above I'm not assuming this will always be the case (no pun intended).

Note that for the "integer type" in the formulaParamDict table, this is a virtual type in that the destination field is actually Double (or maybe Decimal if the tools/languages support it). Checking floating-point values for such can sometimes get sticky (RealNumbersAreNotEqual). A possible validation operation for integer could resemble:

    if (round(fld.foo, 4) != round(fld.foo, 0)) {  // validate for integer
       frm.addErr("Foo can't have decimal places");
    }
The second parameter of the "round" function is the number of decimal places to round to. The value of "4" may not be appropriate for certain application uses.

--top
See also: CompilingVersusMetaDataAid
CategoryConditionalsAndDispatching CategoryPolymorphism CategoryExample
AprilZeroNine SeptemberZeroNine JuneThirteen

Last edited November 29, 2014.