Overall, I liked this paper. I found the exposition a bit terse, although it might have been clearer to someone with more experience in the financial domain (it took a couple of readings for some of the statements to sink in).
The patterns seem to operate at different levels of specificity, or "size". Whole Value, for example, works at the level of designing individual data items (and interface widgets), while the later patterns involve larger considerations. Diagnostic Query, in particular, seems to be the kind of pattern that needs to be considered all through the design (rather like Alexander's Light from Two Sides). I'd like to see a reformulation of this language working from the larger patterns to the smaller.
The patterns also vary in their specificity to the financial domain. Again, Whole Value could easily apply in many application areas, while Forecast Confirmation seems to be pretty specific. What would an information integrity language look like, for instance, in the CAD domain?
You say of Whole Values: "Make sure that these objects capture the whole quantity with all its implications beyond merely magnitude, but, keep them independent of any particular domain...these objects do not have an identity of importance". I'm not sure here whether I disagree or just don't understand; could you elaborate somewhat on the rationale here, and give an example of a whole quantity, independent of domain?
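To make my confusion concrete: here is how I currently read the pattern, as a sketch (Python, all names mine; the paper doesn't commit to a language, so this is only my interpretation). The object carries the whole quantity, magnitude plus units, is compared by value, and no instance has an identity of importance:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Quantity:
    """A Whole Value as I read it: magnitude plus units, nothing domain-specific."""
    amount: float
    units: str

    def __add__(self, other):
        # Refuse to mix units; the whole quantity includes its units.
        if not isinstance(other, Quantity) or other.units != self.units:
            return NotImplemented
        return Quantity(self.amount + other.amount, self.units)

# Equal values are interchangeable -- identity of the instances doesn't matter.
assert Quantity(5, "USD") == Quantity(5, "USD")
assert Quantity(5, "USD") is not Quantity(5, "USD")
```

Is this roughly what is meant by "independent of any particular domain", or does the pattern intend something broader than units-carrying quantities?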
I particularly like Exceptional Value and Meaningless Behavior. I'm used to two different approaches to integrity here: suspicious, in which data are constantly checked for validity, and trusting, in which the assumption is that invalid data have been weeded out before reaching this section of code. These patterns provide yet a third approach: make invalid data "pass through" the code without adverse effects (at least at this level).
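If I've understood the two patterns correctly, the "pass through" approach might look like the following sketch (Python; the names `ExceptionalValue` and `total` are mine, not the paper's). An invalid datum is represented by a value that absorbs operations, so downstream code needs no validity checks at all:

```python
class ExceptionalValue:
    """Stands in for missing/invalid data; any arithmetic on it yields itself."""
    def __init__(self, reason="invalid"):
        self.reason = reason

    def __add__(self, other):
        return self  # the exceptional value propagates quietly

    # Absorb the other common operations the same way.
    __radd__ = __sub__ = __rsub__ = __mul__ = __rmul__ = __add__

def total(values):
    # Meaningless Behavior: no suspicious checking, no trusting assumption --
    # the code simply computes, and exceptional inputs fall through harmlessly.
    result = 0
    for v in values:
        result = result + v
    return result

t = total([3, ExceptionalValue("missing price"), 4])
assert isinstance(t, ExceptionalValue)  # the bad datum surfaced at the end
assert total([1, 2, 3]) == 6            # valid data behaves normally
```

If that reading is right, the attraction is that validity becomes a property of the data rather than a burden on every piece of code that touches it.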