Model First

Often a system is about the model, not about the GUI. Take payroll for example, because it happens to be on my mind. The database structure is complex and full of historical information, the payment rules are baroque, the reporting requirements comprehensive. We take in three SQL datasets, a handful of different file formats, and export around 12000 bytes per person. The risk in payroll is in not getting people paid correctly.

Similarly for programs like air traffic control, process control, telephone switching, and operating systems. The risk is that the model won't work. In such a system, concentrate your work initially on the model. Have no GUI at all if you can get away with it - otherwise a SpartanUserInterface. -- RonJeffries

See EarlyPrototyping. -- RaySchneider


I know that this can be hard for new (and maybe not so new) OOers to accept at first.

Not too long ago I was working through the VW tutorial with a newbie. It was a pretty simple problem - implement a checking account with all the normal stuff that a checking account has and knows how to do.

The problem was that the tutorial started with the GUI. The focus was very clearly on the user interface - not on the domain.

So I skipped all of the sections until I got to the ones involving the domain. Then we (the newbie and I) implemented the domain without any GUIs. We used workspaces, inspectors, and debuggers.

When we were finished, the system was working. But my partner still had a hard time separating his idea of the system from the user interface.

For him, the system was the user interface. -- AnnAnderson
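
To make the "domain without any GUIs" point concrete, here is a minimal sketch of a checking-account domain object exercised directly, the way a workspace or inspector session would drive it. It is in Python rather than Smalltalk, it is not the tutorial's actual code, and the class and method names are invented for illustration.

  # A bare domain object: it knows balances and transactions,
  # and knows nothing about windows, widgets, or screens.
  class CheckingAccount:
      def __init__(self, owner):
          self.owner = owner
          self.balance = 0
          self.transactions = []

      def deposit(self, amount):
          self.balance += amount
          self.transactions.append(("deposit", amount))

      def withdraw(self, amount):
          if amount > self.balance:
              raise ValueError("insufficient funds")
          self.balance -= amount
          self.transactions.append(("withdraw", amount))

  # Workspace-style exercise of the model, no GUI anywhere.
  account = CheckingAccount("newbie")
  account.deposit(100)
  account.withdraw(30)
  assert account.balance == 70
  print(account.transactions)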


At KSC I use an analogy when I talk about the order of building things. First of all, imagine that Applications are like hamburgers -- there's the bun and there's the meat. The "bun" is the GUI and database code that supports the "meat" -- which is the domain. When you make a hamburger, first you must prepare the meat, then you place it in the bun. After all, who can argue with McDonald's? -- KyleBrown

P.S. Seriously, this is something we cover in the very first day of the SmalltalkApprenticeProgram. A student has to understand this to "grok" the way in which we teach Smalltalk (domain coding, then GUI coding). The pattern FourLayerArchitecture describes some of the logic behind this. A paper I wrote [1] speaks about it in more depth.
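
One way to read the hamburger ordering in code: the domain (the "meat") is written and exercised first, and the GUI and persistence (the "bun") are thin layers wrapped around it later. A minimal sketch, with an invented toy pay rule standing in for a real domain, not anything from the apprentice program:

  # Domain layer, "the meat": imports nothing from GUI or database code.
  class Employee:
      def __init__(self, name, hourly_rate):
          self.name = name
          self.hourly_rate = hourly_rate

      def pay_for(self, hours):
          overtime = max(0, hours - 40)
          return self.hourly_rate * (hours + 0.5 * overtime)

  # Presentation layer, "the bun", added afterwards: it only formats
  # and forwards; every rule lives in the domain object.
  def show_pay(employee, hours):
      print(f"{employee.name}: {employee.pay_for(hours):.2f}")

  show_pay(Employee("Alice", 20), 45)   # Alice: 950.00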


I like the McDonald's metaphor!

I'm currently on a project where the initial design was driven completely by a committee that designed the GUI screens. The committee is made up of some of the best subject-matter experts around, but contains no one who knows how to design a GUI. Regardless, they did it and the current release of the system shows it. Not only is the GUI itself unfriendly and non-standard, but all the business logic wound up contained within the GUI code. The development team simply took the screens and coded them up. As the GUI was initially coded "for a demo", all of the navigation worked early, giving the impression that the project was much further along than it was. This put the team in the unenviable position of having something that appeared to "work", while having to justify the time required to put in the actual smarts.

Had the SMEs, helped by a good OO-experienced engineer, concentrated on a model, the system would not be in the shape it is now. In fact there was no one on the committee with Windows GUI design knowledge. The only software experience on the committee consisted of two guys who had developed the DOS version of the system 10+ years ago.

While I can agree that early GUI prototypes and throw-away GUIs for the developers are both reasonable tools, it's clear to me (not only from this project, but from others) that if you don't know how you are going to cook the meat, there is no sense in dressing the bun.

-- BryanGlennon


While I agree about the postponement of the full blown GUI, I'd like to point out that a user interface is frequently useful. That is, a user interface written for the developers.

For example: right now, I am working on a set of algorithms for data processing. Basically, they take in lots of time-stamped "truck at location (x,y)" tuples and they try (well, that may be a bit anthropomorphic) to spot patterns in the data.

The first thing I wrote was a visualization component. Being able to graph data and to have the system throw up some overlays when it thinks it's found something is *invaluable* in telling me when ideas are good or not (and for spotting logical errors in code).

The visualization component has nothing to do with the eventual applications. It is a piece of GUI code, and throwaway GUI code at that. But it lets me peer down into the model in a coherent way and is an enormously useful tool.

-- WilliamGrosso
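
A harness like that can stay very small. A minimal sketch, assuming matplotlib is available; the tuple format and the "stationary truck" pattern it highlights are invented for illustration and are not William's actual algorithms:

  import matplotlib.pyplot as plt

  # Time-stamped "truck at location (x, y)" tuples: (t, x, y).
  readings = [(0, 0.0, 0.0), (1, 1.0, 0.1), (2, 1.05, 0.12),
              (3, 1.1, 0.1), (4, 2.0, 0.5), (5, 3.0, 1.0)]

  # Toy "pattern": consecutive readings that barely move, i.e. a stop.
  def stops(points, tolerance=0.1):
      return [b for a, b in zip(points, points[1:])
              if abs(a[1] - b[1]) + abs(a[2] - b[2]) < tolerance]

  xs = [x for _, x, _ in readings]
  ys = [y for _, _, y in readings]
  plt.plot(xs, ys, "o-", color="lightgray")        # the raw track
  for _, x, y in stops(readings):                  # overlay what was found
      plt.scatter([x], [y], s=200, facecolors="none", edgecolors="red")
  plt.title("truck track with suspected stops")
  plt.show()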


I always start to worry when I hear development strategies that place the user interface last (after designing/implementing the guts). You just can't tack on interfaces... On the other hand, for applications that involve mostly background work (where the user is peripheral), it makes sense to concentrate on the guts first. There is a balance... we (programmers), historically, tend to focus on the guts side. Nowadays, we are told we need to pay more attention to the user's perspective.

There is no one hard and fast rule. It really depends on the application at hand. Here is a good rule of thumb:

Any application that involves DirectManipulation, and that has the end user as a fundamental part of the system's operation (i.e. the user is the controlling entity), should be developed from the user's perspective.

As a programmer, I care about the engine behind my spreadsheet or word processor. But, ultimately, as a user I really only care about my interface with the application. (The user also uses the interface to judge whether the software is correct. The criterion is usually usability.)

If your air traffic control system has the user in an important role (such as determining who should land, and when), then that is the place to start. Of course the guts should work, but the guts are useless if the interface is not correct. If the interface is simple, work it out (let the users play with it) and put it away until later. Now you can work on the guts.

-- ToddCoram


we are told we need to pay more attention to the user's perspective

This does not contradict ModelFirst. The balance to be struck is that the "right" system to develop results from a dialog between what is easy or hard to develop (the developer's perspective) and what is valuable or useless to develop (the client's perspective). And you can't know either dimension perfectly before you start. Worse, the act of developing changes both dimensions. As developers get experience, what was hard may become easy (or impossible). As clients get experience, what seemed valuable may be seen as worthless, and vice versa.

I think Ward calls this "matching the desirable with the possible".

ModelFirst in conjunction with ExtremePlanning concentrates on getting the most valuable stuff done first and in refining the metaphor and software design early, while there isn't a load of UserInterfaceInertia slowing evolution.

-- KentBeck

I have trouble visualizing a team doing ModelFirst in the early stages of a project if they follow UserStories. Would a customer ever be satisfied with "OK, the system can do X as you asked but you can't really *see* it doing X right now"? Specifically, I want to know how XP manages to do ModelFirst and still satisfy their UserStories. -- PieterNagel

The user stories are turned into AcceptanceTests. The customers see the tests run. -- DonWells

There should not be a conflict between ModelFirst and UserStories. CRC cards are driven from stories. If you don't have a story, you can't CRC card. So you work on the stories you have. The CRC cards don't particularly care about the GUI, they care about partitioning of the object model, discovery and evaluation of the classes. So the cards represent the model, not the UI. ergo. -- AlistairCockburn
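
For instance, a card for the checking account discussed earlier might read something like this (purely illustrative); nothing on it mentions windows, menus, or screens:

  Class: CheckingAccount
  Responsibilities:                    Collaborators:
    knows its balance                    Transaction
    records deposits and withdrawals
    refuses overdrafts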

My problem is that, to me, UserStory implies *Use*. I envision the user saying "I want to keep track of my outstanding orders". So now you go and enhance the "Order" model, but without a GUI you can only tell the client: "Yup, the *system* can keep track of the orders, and we can prove it by running dummy orders from a text file, but until we have a GUI we can't prove to you that *you* can actually *use* the system to keep track of orders."

See where I'm coming from? Unless clients are particularly smart, they might not see ModelFirst as having satisfied any UserStory yet. -- PieterNagel

But it's easier to tell a user that the guts of the application work and you just can't easily get at them yet than it is to explain that, although the user interface is apparently everything they wanted, half of it doesn't work, or worse, doesn't work properly. -- John Burton

See UserStoryExamples for some stories that are quite important but don't smack much of *Use*.

Perhaps the problem is with your conception of UserStories. I can't imagine estimating a story like "keep track of orders". By the time this story got into the schedule it would be turned into one or more much more specific stories, each of which would be more likely to be coverable by AcceptanceTests. -- KentBeck

Thinking in terms of "proof" is a good start. Remember that if you have a GUI, it might prove on Monday that the system works, but how will they prove it again on Friday? Pretty soon you're buying GUI testing tools and trying to record clicks and develop coverage tests all through the GUI. This is nearly impossible. So how can you REALLY prove, week in and week out, that the old stuff works and the new stuff has started to work? Once you get the answer to that question, you also have the answer to the question of why you can wait for most GUIs.

However, there are usability issues that do need addressing, and I've seen that when you train users and developers to be GUI-shy, there's some relearning to do when you finally get around to doing more than the rudimentary GUIs to keep things going. Also, remember that doing things in Inspectors is not as efficient for anyone as having a GUI. So there's trading off to be done. Many more projects do too much GUI than too little. That's why we push so hard on the "do less" side. -- RonJeffries

If the system is all GUI (i.e., not model), the GUI may be necessary to see whether it works. On the other hand, if the system has a model, it's your AcceptanceTests that let you see whether it works. Effectively, you model so that you can test.
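
A sketch of what such an AcceptanceTest might look like for the "keep track of my outstanding orders" story above, driving the model directly with no GUI in sight; the class and method names are invented for illustration:

  # Just enough model to run the test against.
  class Order:
      def __init__(self, number):
          self.number = number
          self.filled = False

  class OrderBook:
      def __init__(self):
          self.orders = []

      def place(self, order):
          self.orders.append(order)

      def fill(self, number):
          for order in self.orders:
              if order.number == number:
                  order.filled = True

      def outstanding(self):
          return [o for o in self.orders if not o.filled]

  # The acceptance test the customer watches run and pass.
  book = OrderBook()
  book.place(Order(1001))
  book.place(Order(1002))
  book.fill(1001)
  assert [o.number for o in book.outstanding()] == [1002]
  print("story 'keep track of outstanding orders': PASS")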


I'm trying to get into extreme programming, but instead of user-stories, I've been given prototypes of the GUI (done in PhotoShop). Looking at these six mocked-up windows, I've come up with more than 40 user stories.

Maybe my stories are too small -- for example, a story, "convert remotely-scanned image to black and white", corresponds to a popup menu in a dialog dealing with network scanning. Another story is "when user drags a rectangle on the image preview, add an entry to the scrolling-list of selections".

While I will do the model first (actually test first, model second, gui third), I would not consider the story completely implemented until the user can click that pop-up menu in that dialog and get the color image saved as a black and white file.

-- KeithRay
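
As a sketch of that ordering for the "convert remotely-scanned image to black and white" story: the model-level conversion and its test can exist and pass long before the pop-up menu does. The image representation and threshold below are invented for illustration, not the project's actual code:

  # Model first: an image here is just rows of (r, g, b) pixels.
  def to_black_and_white(image, threshold=128):
      def bw(pixel):
          r, g, b = pixel
          luminance = 0.299 * r + 0.587 * g + 0.114 * b
          return (255, 255, 255) if luminance >= threshold else (0, 0, 0)
      return [[bw(p) for p in row] for row in image]

  # Test first, actually: this passes with no dialog or menu in sight.
  image = [[(250, 250, 250), (10, 10, 10)]]
  assert to_black_and_white(image) == [[(255, 255, 255), (0, 0, 0)]]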


I do believe this page is confused. It seems the phrase "the model" is used to mean an underlying business or computational model of the data that is distinct from direct user interaction. This sends me in a couple of directions.

  1. You can't possibly know what the computational model is unless you already understand the semantics of the user interaction. For example, when the user says "I want the system to keep track of...", your analysis of interaction is incomplete. In this case, the user is doing your design for you. You need to ask how and when this stuff being "kept track of" will be of interest in the future. That leads to the discovery of new interaction semantics. If the data won't be used in the future, there's no need to store it, or to worry about data models for doing so. If the data will be used with other data in esoteric computations to yield results interesting to the Users, then someone has to be able to describe the semantics of those computations for you to build it, even if they cannot construct the formulae from memory.

  2. In evolutionary development especially, there are Users and there are "users". The Users are the people who will interact with the new system. The "users" are the older parts of the system that will interact with the new system. From the perspective of those developing the new system, both of these users are significant in that they are problem context givens. This fact gives rise to the new observation that sometimes part or all of your data and business models already exist in the older parts of the system, and their presence exerts influence back through the development process to the Users to accommodate the system as it is, rather than correct those parts that actually conflict with new requirements. If you have an existing implementation, but no way to convey to your new Users what it does, then it could be time to develop some model of that to communicate existing semantics to your Users. Or a simple verbal description might do as well.

-- WaldenMathews (did I miss the window of interest in this topic?)

YesAnd: there's a third "User" - the User who contributes the UserStories may well not be one of the people who will actually use the system - EndUsers. So UserStories should specify EndUser views, and the "business model" (whether it's BigDesignUpFront or an emergent property of XP development) should be meta enough to encompass these. -- TomRossen


The model is usually derived from the concepts the customer mentions when he explains his business. Do we scrub those concepts then, like we ScrubTheRequirements?

Are you suggesting we should do a big design (model) up front?


See UiDrivenDesign


CategoryAnalysis

