I'm going to define a unified data model as a model of data architecture that is necessarily rooted in actual digital hardware (otherwise it would just be a simple DataModel or naive ObjectModel), one that can encompass every data relationship and scale arbitrarily large. Ironically, in order to hold any kind of data relationship, one must constrain the problem with, and within, machine types. The foundation of such an ability would necessarily lie in a DataStructure like a FractalGraph, rooted in a universal and "atomic" integer: 1.
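The page doesn't define a FractalGraph, so here is one speculative sketch of the idea: a self-similar node type where every node carries the atomic integer 1 as its base weight and every edge points at another node of the same shape. The names (`FractalNode`, `link`, `total`) are illustrative, not from any actual project.

```python
# Hypothetical sketch of a FractalGraph node. Each node carries an integer
# weight (the "atomic" unit, 1) and named edges whose targets are themselves
# nodes of the same type -- the same shape at every scale of the structure.

class FractalNode:
    def __init__(self, weight=1):
        self.weight = weight   # everything counts up from the atomic 1
        self.edges = {}        # label -> FractalNode (self-similar sub-graph)

    def link(self, label, node=None):
        """Attach (or create) a child node under a relationship label."""
        self.edges[label] = node or FractalNode()
        return self.edges[label]

    def total(self):
        """Aggregate weight of this node and every reachable sub-node.
        (Assumes an acyclic graph, as in this small example.)"""
        return self.weight + sum(child.total() for child in self.edges.values())

root = FractalNode()
animals = root.link("animals")
animals.link("dog")
animals.link("cat")
print(root.total())  # 4: root + animals + dog + cat
```

The point of the sketch is only that constraining every node to the same machine type is exactly what lets arbitrary relationships nest inside it.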
How would you scale such a data model infinitely large? CrowdSourcing, of course, taking a quadratic (O(n^2)) problem and turning it back into a linear one. (Such a project is over at PangaiaProject.)
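A back-of-envelope sketch of that scaling claim: curating every pairwise relationship among n items is O(n^2) work overall, but if each of n contributors handles only the pairs involving "their" item, no one person does more than O(n) work. The partitioning scheme below is illustrative, not taken from PangaiaProject.

```python
# Illustrative work-splitting: n^2-ish total pairs, but at most n-1 per person.
from itertools import combinations

def total_pairs(n):
    """Number of distinct pairwise relationships among n items: O(n^2)."""
    return n * (n - 1) // 2

def pairs_for_contributor(n, i):
    """Pairs assigned to contributor i: those whose lower index is i."""
    return [(i, j) for j in range(i + 1, n)]

n = 1000
per_person = max(len(pairs_for_contributor(n, i)) for i in range(n))
print(total_pairs(n), per_person)  # 499500 pairs in total, at most 999 each
```

So the total work is still quadratic; crowdsourcing just spreads it so that each participant's share stays linear.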
Somewhere on this wiki it's pointed out that tools are often defined by their limits as well as by what they can do. Standards impose limits; otherwise they wouldn't be standards. It could be called a form of discipline or regimentation in order to "tame" things. The trick is to find something that's flexible and can cover a lot of domains, yet not so open-ended that it's RAM-like mush.
That's an interesting concept, although I would argue that the limit you're suggesting *is* imposed: the atomic unit imposes the constraint. The VotingModel does the rest and prevents the data from becoming mush. See DataEcosystem. The other limit is external: when people corrupt the data or the organization, they don't get the love.