Study The Source With A Debugger

The act of acquiring knowledge about third-party libraries and frameworks by tracing the execution of the program in a symbolic debugger or other interactive run-time environment. This brings the code and design to life in ways that simply reading the source can never achieve. It is helpful to have a goal in mind while doing this, such as figuring out why your incremental evolution of the library/framework is not working.

Challenge

I believe the premise has the relationship backwards. One should use a debugger for debugging, i.e., isolating problems. Yes, use a debugger if an "incremental evolution to the library/framework is not working," but if one is using the debugger just for "acquiring knowledge about third-party libraries and frameworks," the reasonable reviewer can only conclude there is no benefit. The onus is really on this page to present a case of why StudyTheSourceWithaDebugger is a beneficial activity. -- anonymous

In the absence of further refinements, this "pattern" is likely to be viewed as an AntiPattern. It is certainly a strong CodeSmell, if nothing else, that a source has to be studied with a debugger. In the absence of any justifying or compensating factors, such a source should be regarded as very poor quality. At the opposite end of the spectrum, see TexTheProgram. -- CostinCozianu

Because, prima facie, if you have two sources doing the same thing, and you need only read one, while you have to study the other with a debugger, then obviously the first one is better. The burden then rests with the proponents of StudyTheSourceWithaDebugger to show that for certain classes of problems, or within certain engineering constraints, the only sources that can be written are ones for which StudyTheSourceWithaDebugger is the best option, i.e., that it is not possible to write good sources by classical standards. And may we also quote the venerable TonyHoare on this subject:

"I conclude that there are two ways of constructing a software design: One way is to make it so simple that there are obviously no deficiencies and the other way is to make it so complicated that there are no obvious deficiencies."

When you StudyTheSourceWithaDebugger, you're begging for the non-obvious deficiencies, and you are not likely to find them anyway.

Real code, like real DNA, contains a large volume of information, of which only a tiny percentage is relevant at any one time for any one task. In mature, well-factored, singly-inherited object-oriented code, things that are closer to the root of the inheritance hierarchy tend to be more relevant than things near the leaves (because heavily-reused fragments tend to migrate towards the root), but even there MOST things don't matter for any given task at hand. It is well-nigh impossible to determine, from a hard-copy or online view of the source, which parts of the source are relevant. It can be done, of course. But the debugger supplies run-time context that provides invaluable information about what does and doesn't matter. Here are some specific things that crop up:

By the way, I realize that I'm making a HUGE assumption, namely, that the debugger presents the source. All of my comments above assume a Smalltalk-like inspector (VenkmanDebugger for JavaScript, Komodo for Perl, VisualAgeJava, etc.) where I'm looking at source code in context, with variable state, at the same time. I contrast this with the various Unix and assembler debuggers that require the developer to emit line-noise hieroglyphics to extract information.
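
To make the run-time-context point concrete, here is a hedged sketch (hypothetical Java classes, not from any real framework): reading the call site alone tells you only the abstract type near the root of the hierarchy, while a breakpoint inside handle() shows which concrete leaf class actually runs, and with what state.

 // Hypothetical sketch: static reading of main() shows only the abstract type;
 // pausing inside handle() reveals the concrete receiver and its live state.
 abstract class Validator {
     abstract boolean handle(String input);
 }

 class LengthValidator extends Validator {
     boolean handle(String input) {
         // Breakpoint here: the debugger shows that 'this' is a LengthValidator
         // and shows the actual value of 'input'.
         return input != null && input.length() <= 80;
     }
 }

 class RegexValidator extends Validator {
     private final java.util.regex.Pattern pattern =
             java.util.regex.Pattern.compile("\\w+");
     boolean handle(String input) {
         return input != null && pattern.matcher(input).matches();
     }
 }

 class Demo {
     public static void main(String[] args) {
         // From the source alone, 'v' could be either subclass; only at run
         // time (e.g., paused in the debugger) do you see which one it is.
         Validator v = args.length > 0 ? new RegexValidator() : new LengthValidator();
         System.out.println(v.handle("hello_world"));
     }
 }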

See Also: UseTheSourceLuke, ExperimentStudyRefine, or for an alternate viewpoint, ForgetTheDebugger


Yes, but if you must do this, it kinda' says something about the quality of the product's documentation, and how (un)readable the source is, doesn't it?

No. It says something about the complexity of the requirements and the extent of refactoring. Sure, it's easy to read flat procedural functions (as long as there isn't too much invention of home-built objects going on), and you really don't have to, because they're easy to document. But an object-oriented framework is another matter. Understanding these without a debugger is like understanding a video tape without a VCR. :-)

What do you mean by "understanding?" I can use lots of things without having to understand very much about them. I would worry more about the code I write than the code I use.

I guess it is most useful when you encounter code that relies on a non-obvious side effect of certain methods in order to work. You have very little hope of seeing this at first glance unless you run the code first. Of course, this way of coding should not have been used in the first place ...

Why should one be concerned if the code "relies on a non-obvious side effect of certain methods to work"? How would one even know where to look to find such things? My admitted preference is to investigate code only on a need-to-know basis, i.e., when it is not working or the way it is working needs to be changed, and I fail to see the rationale for just plowing through code, whether with a debugger or a text editor.
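
As a hedged illustration of the kind of non-obvious side effect being discussed (a hypothetical Java sketch, not taken from any particular library): a getter that quietly initializes state, which a later method silently depends on. Nothing in describe() hints at the required ordering; running the code (or stepping through it) is what exposes the dependency.

 // Hypothetical sketch: getConfig() lazily initializes a cache as a side
 // effect, and describe() silently relies on that having already happened.
 import java.util.Properties;

 class Settings {
     private Properties cache;

     Properties getConfig() {
         if (cache == null) {
             cache = new Properties();          // side effect: lazy initialization
             cache.setProperty("mode", "prod"); // stands in for loading a file
         }
         return cache;
     }

     String describe() {
         // Works only if some earlier caller already triggered getConfig();
         // reading this method alone gives no hint of that ordering requirement.
         return "mode=" + cache.getProperty("mode");
     }
 }

 class SideEffectDemo {
     public static void main(String[] args) {
         Settings s = new Settings();
         s.getConfig();                       // remove this call and describe() fails
         System.out.println(s.describe());
     }
 }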


No to you, too. "There is no substitute for understanding." Unfortunately, the debugger doesn't provide understanding, it only gives clues. Careful thinking about what actually happened (for the examples we followed in the debugger) will eventually lead to understanding, but is much more difficult without the source. And, in any case, the debugger always shows you exactly what the code did, not what it was supposed to do. The source code (which is not always available, by the way) is often much more informative about the intent. -- StevePowell

Seems to me that this would not work so well for code with lots of delegation going on (jumping all over the place). I just recently had this technique recommended to me, but I haven't tried it yet, so please correct me if I'm wrong.

For me, studying code in a debugger is like doing biology in vivo. All sorts of useful and valuable information can be learned from texts and in vitro experiments. In order to really understand what's going on, however, you have to be looking at the real thing, in context. Similarly, if you're trying to get an automobile engine running, there's only so much you can learn from plans, manuals, and TV shows. At some point, you HAVE to get under the hood while it's running (or not running).


There seem to be many different contexts.


Lots of people recommend:
"Immediately after writing/changing a method, UseTheDebugger to single-step through it."

Do UnitTests make this recommendation obsolete?

I would say that TestFirstDesign, as opposed to merely having ProgrammerTests (née UnitTests), eliminates the rationale for always single-stepping through a change. If the test fails initially and then passes, there is no need to look at it any further: mission accomplished, go on to the next item to be addressed. If, however, the test initially and unexpectedly passes, or if the test still fails after implementing the necessary code, then it is time to use debugging tools. (A minimal sketch of this loop appears below.)

That being said, I also have to point out that I have worked with a programmer who always single-stepped through his code after writing it. It personally drove me crazy; I wanted to see the result rather than wade through how the program got there, but he was effective using his style. I would not recommend always single-stepping through code as a standard practice, but I would not forbid it either.
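
A minimal sketch of the test-first loop described above, assuming JUnit 4 and a hypothetical PriceCalculator class: the test is written first and fails, the production code is written until it passes, and the debugger only comes out if the result is surprising.

 // Assumed: JUnit 4 on the classpath; class and method names are made up.
 import org.junit.Test;
 import static org.junit.Assert.assertEquals;

 public class PriceCalculatorTest {
     @Test
     public void appliesTenPercentDiscountAtOrAboveOneHundred() {
         PriceCalculator calc = new PriceCalculator();
         // Written first; stays red until the production code below
         // exists and behaves as specified.
         assertEquals(90.0, calc.discountedPrice(100.0), 0.001);
         assertEquals(50.0, calc.discountedPrice(50.0), 0.001);
     }
 }

 class PriceCalculator {
     double discountedPrice(double price) {
         // Written after the failing test; once the bar goes green there is
         // no need to single-step through it. Reach for the debugger only if
         // the test passes or fails unexpectedly.
         return price >= 100.0 ? price * 0.9 : price;
     }
 }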


When I came to this page, I expected to see something along the lines of "Given the choice between understanding a code base by either using a debugger in conjunction with code reading, or using code reading alone, you should prefer to use the debugger."

Instead, I see a lot of complaining that code shouldn't need a debugger to be understood, which is another point entirely!

Is there anyone who will seriously argue that when trying to understand code that you don't currently understand, you're actually worse off if you use a debugger? (And I assume you will apply as much intelligence to the use of the debugger as your code reading, so no fair implicitly invoking some mythical code wizard who can read kilobytes of crappy code with single-char variable names and lying comments with instant comprehension but is somehow too stupid to set a conditional breakpoint or jump out of a loop when needed.) Is there anyone who can seriously argue that seeing the code running and alive is somehow less useful than only seeing it dead?

I'd submit that there are lots of words here from people who spend more time studying code and writing pages like this than actually making things work. I'm also guessing that most of the brickbats come from those who have the least experience with source-level debuggers as mentioned above (Smalltalk, Venkman, Wing/Komodo, etc.). -- TomStambaugh
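
On the conditional-breakpoint aside above, one hedged sketch (hypothetical Java, independent of any particular IDE): when the tool at hand has no conditional breakpoints, a temporary guard gives you a single line to park an ordinary breakpoint on, so it fires only for the interesting case instead of on every iteration.

 // Hypothetical sketch: a temporary guard as a stand-in for a conditional
 // breakpoint. Delete the guard once the investigation is over.
 import java.util.List;

 class OrderScanner {
     static void process(List<Double> totals) {
         for (double total : totals) {
             if (total < 0) {
                 // Park an ordinary breakpoint on the next line; it now fires
                 // only when the suspicious condition holds.
                 System.err.println("suspicious total: " + total);
             }
             // ... normal processing continues here ...
         }
     }

     public static void main(String[] args) {
         process(List.of(10.0, 25.5, -3.0, 7.25));
     }
 }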

The lack of imagination and cultural background (other than Smalltalk, mind you) that affects the SmugSmalltalkWeenie community never ceases to amaze. So, here are some facts:

So the contention that "StudyTheSourceWithaDebugger brings the code and design to life in ways that simply reading the source can never achieve" is simply false as a matter of fact. It may survive if you restrict its generality to some Smalltalk corners of the world.

For the record (in case TomStambaugh is still curious or just guessing), I do make things work, and I do use debuggers much more often than I'd want to. Almost every time I use a debugger, I end up wishing that I hadn't (because it is the lazy way out of having to think harder, and it is a very wasteful activity in terms of time for any non-trivial piece of code). Not infrequently, I end up with no immediate results from the debugging session, but after stressing my brain for 5-15 minutes I end up solving the problem. -- CostinCozianu

