Techniques and issues for dealing with "computational abstractions" using various paradigms.
Based on a conversation that began in PolymorphismLimits
Have you ever written an object-oriented program that employed inheritance and polymorphism?
Only for relatively simple things, where the difference in impact between polymorphic and non-polymorphic techniques is relatively small (and thus I was not impressed). For mass variation management, I have always used a database. And even many OO proponents suggest switching to something "more powerful" than polymorphism, such as composition, when things scale in complexity. The "pattern movement" is partly a response to the limitations of basic inheritance-based polymorphism.
That is an interesting, and not unexpected, response. I've seen similar complaints about OO from programmers who attempt to use OO to model domain objects -- which many OO texts naively seem to encourage. The best OO programmers use OO to implement computational objects that facilitate manipulation of domain data; there are few or no "domain" objects per se. As a trivialised example, a poor approach to building an object-oriented accounts receivable package would be to implement Customer, Invoice, Payment, Creditmemo, Debitmemo and Journal classes. The good OO programmer will instead build computational abstractions -- perhaps things like Entryform, Report, Menu, Datastorage, Debit, Credit, Query and so on -- where the domain data itself is modelled and manipulated via run-time instances of dynamic objects. The former will certainly be subject to complexity and maintenance issues as the project scales up or needs modification; at best it becomes an OO database, at worst an awkward OO mirror of the real SQL database. The latter tends to scale well, and adapts not only to changes within a given project, but supplies classes that can be reused in other projects.
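A minimal sketch of the contrast (all names here are hypothetical, not from any real package): the domain data stays as plain records, while the classes are generic computational tools -- a Query and a Report -- that work on any such records, rather than an Invoice class hierarchy.

```python
# Hypothetical sketch of "computational objects": domain data is plain data,
# and the classes are generic tools that manipulate any record set.

class Query:
    """Generic filter over a list of record dicts (no domain classes)."""
    def __init__(self, records):
        self.records = records

    def where(self, **criteria):
        matches = [r for r in self.records
                   if all(r.get(k) == v for k, v in criteria.items())]
        return Query(matches)

class Report:
    """Generic column report over any record set -- no Invoice class needed."""
    def __init__(self, query, columns):
        self.query, self.columns = query, columns

    def render(self):
        return [tuple(r[c] for c in self.columns) for r in self.query.records]

# The "invoices" are just data; new record kinds need no new classes.
invoices = [
    {"cust": "Acme", "amount": 120.0, "paid": False},
    {"cust": "Acme", "amount": 75.0,  "paid": True},
]
unpaid = Report(Query(invoices).where(paid=False), ["cust", "amount"])
# unpaid.render() -> [("Acme", 120.0)]
```

Note how adding a new domain concept (say, credit memos) means adding data and reusing Query and Report, not growing a parallel class hierarchy.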
This is more or less the view that some raised in OopBizDomainGap. I sort of agree, but the thing is that "computational abstractions" tend to be small and isolated enough that procedural techniques are usually perfectly fine for them (such as found in HelpersInsteadOfWrappers). The difference between the two approaches is not large enough to fuss over much. The "domain objects" issue is the bigger thorn in my side, such that we can perhaps half agree on this computational-abstractions thing. Some places OO fits well (outside of domains I work in), some places the difference is a wash such that personal preference is the tie breaker, and some places it makes a mess. (I will agree that OO probably makes it easier to apply heavier type-checking to computational abstractions, but then again I prefer a scriptish approach such that I don't explore that area much.)
I was the one who advocated the computational-objects approach in OopBizDomainGap. I am surprised, though, by your statement that "'computational abstractions' tend to be small and isolated enough," etc. My business applications tend to consist almost entirely of such abstractions. Certainly there are OO messes in some places, just like there are functional, declarative, and procedural messes. I wouldn't blame the paradigm for that; I'd blame bad programmers. Bad programmers can make a mess out of anything, and no paradigm is immune from it or more prone to it than any other.
I've found I get easier reuse if I keep CAs small. There may be many of them and I may use them a lot, but letting any single CA sprawl in size is something I've learned to avoid when practical. If a given app needs a complex one, then I make it complex only for that app rather than trying to make it feature-pack-rat generic across apps and projects.
However, on a larger team, keeping track of which project used which sub-feature in order to cross-borrow may be difficult, such that some pack-rat-ism may make more sense there. When scaling up that far, using sets to track bunches of features is usually better than trees, in my opinion.
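A rough sketch of the set-based tracking idea (the project and feature names are invented for illustration): each project maps to the set of sub-features it borrows, and ordinary set operations answer the cross-borrowing questions directly, with no hierarchy to maintain.

```python
# Hypothetical feature-usage tracking via sets rather than a tree.
uses = {
    "billing_app":   {"sql_gen", "data_dict", "dropdowns"},
    "report_tool":   {"sql_gen", "dropdowns"},
    "admin_console": {"data_dict"},
}

# Which features do all sql_gen users have in common?
sql_users = [feats for feats in uses.values() if "sql_gen" in feats]
common = set.intersection(*sql_users)        # {"sql_gen", "dropdowns"}

# Which projects would a change to data_dict affect?
affected = {p for p, feats in uses.items() if "data_dict" in feats}
```

A tree would force each feature into one fixed place in a hierarchy; the flat sets let features belong to any combination of projects.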
As an example of size reduction, I used to combine the SQL generators for Insert/Update statements with data-dictionary traversal, because they shared a lot of similar info. Over time I found this to be over-coupling: simpler projects didn't need a data dictionary but still needed the SQL generators. Now they are decoupled. I've also decoupled querying the ConstantTable from generating drop-down lists, for similar reasons. But they do share similar conventions to make communication between them easier if and when I do use them together.
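The decoupling might look something like this (function names and shapes are hypothetical, not the author's actual code): the SQL generator stands alone on a plain column-to-value map, and the data-dictionary traversal is a separate piece that merely emits the same map convention when both happen to be used together.

```python
# Hypothetical sketch of the decoupled pieces.

def gen_insert(table, field_map):
    """Build an INSERT statement from a plain {column: value} map.
    Needs no data dictionary -- usable on its own in simple projects.
    (repr() quoting is naive; see the escape-option discussion below.)"""
    cols = ", ".join(field_map)
    vals = ", ".join(repr(v) for v in field_map.values())
    return "INSERT INTO %s (%s) VALUES (%s)" % (table, cols, vals)

def fields_from_dict(data_dict, table):
    """Separate data-dictionary traversal. Returns the same {column: value}
    convention, so the two cooperate when both are present -- but neither
    imports the other."""
    return {f["name"]: f.get("default") for f in data_dict[table]}

sql = gen_insert("customers", {"name": "Acme", "region": "West"})
# "INSERT INTO customers (name, region) VALUES ('Acme', 'West')"
```

The shared convention (the field map) is the only coupling left, which is the point: conventions travel between projects more cheaply than code dependencies do.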
In other words, you apply YouArentGoingToNeedIt.
Kind of. I am not a YagNi extremist. I sometimes go for low-hanging future-proofing, or apply predictions of common occurrences based on past experience. Software engineering is about balancing shitloads of competing goals, not purity of rules. For example, my original SQL generators (for Update/Insert-like statements) hard-wired in the quoting escape sequences. After changing them for different vendors repeatedly, the escape sequence is now an option passed into the set-up map (with a default if none is given) so that I don't have to hunt down the hard-wiring. (I suppose this can still be called YagNi on an inter-project scale.)
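A sketch of that change (hypothetical names; the set-up map shape is assumed, not the author's actual code): the quote-escape sequence is read from the setup map with a sensible default, so a vendor difference becomes a one-line option instead of a hunt through hard-wired strings.

```python
# Hypothetical generator taking a set-up map with a defaulted escape option.

def gen_insert(table, field_map, setup=None):
    setup = setup or {}
    esc = setup.get("quote_escape", "''")   # default: SQL-standard doubling
    def q(v):
        return "'" + str(v).replace("'", esc) + "'"
    cols = ", ".join(field_map)
    vals = ", ".join(q(v) for v in field_map.values())
    return "INSERT INTO %s (%s) VALUES (%s)" % (table, cols, vals)

# Default (standard SQL quote doubling):
gen_insert("t", {"note": "it's"})
# "INSERT INTO t (note) VALUES ('it''s')"

# A backslash-escaping vendor needs only an option, not a code change:
gen_insert("t", {"note": "it's"}, {"quote_escape": "\\'"})
```

The default keeps simple callers simple, which is the "low-hanging future-proofing" being described: the flexibility costs one map lookup.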
See Also: OopNotForDomainModeling