Lisp Lacks Visual Cues

(Moved from WhyWeHateLisp because of the size of that topic.)

To explore the allegation that Lisp supplies fewer visual cues to know what is what.


I know I already said this somewhere, but I cannot find it anymore. The homoiconic nature of Lisp means that one cannot tell by looking at the code alone whether something is a data structure or a function. You have to read and understand the context. Other languages make it pretty clear because they use different symbols and syntax for each. They have consistent visual clues about what is a function and what is data. It is roughly analogous to only being able to tell whether somebody is male or female by studying their surroundings instead of directly seeing clues on the individual themselves. I realize the advantage is dynamic swappability between data and code, but is that really done (and advantageous) often enough to justify losing visual cues?

I think that the premise is mostly false. When looking at source code, data is usually easily distinguished by syntactic clues: double-quotes for strings, single-quote for lists-as-data, and various prefixes starting with the "#" standard reader macro character to introduce what can be thought of as literals that represent most data objects other than symbols, lists, and strings (e.g. pathname objects start with "#P"). Unquoted parens introduce lists, but those lists are almost always code that is intended to be compiled or interpreted, unless marked by one of these constructs. The only case I can think of where this is not true is when arguments to special operators or macros are lists which will be treated as data rather than code. Such cases are not infrequent, but they're also rarely confusing because the first elements of such a list usually make it immediately apparent that it's not code, even if one is not familiar with the particular operator or macro being called.

But where things end is not always clear. Other languages use parentheses, square brackets, curly braces, etc. to give clues about the nature of things. Lisp has a one-size-fits-all bracket. The variations complicate the syntax, but give sign-posts in exchange.

This is brought up frequently, and the answer is: Use a minimally capable editor that can highlight the matches for you, and indent properly (preferably with the help of same editor). Indentation clarifies where things end. As mentioned above, syntax does in fact usually make it clear what a thing is. The lack of matched differing bracketing pairs means that you can't scan forward for a matching closing character, as your eye might be accustomed to do from reading other languages. Instead, learn to rely on indentation, as one would in Python. I suspect that's why modern CommonLisp style insists that closing parens never be placed on a line by themselves; this deemphasizes their importance visually, and emphasizes instead the indentation. The skills for reading Lisp code printouts are perhaps similar to those used in reading Python.

Python's reliance on white-space indentation is also annoying, in my opinion. TabMunging can drive one crackers. Even indentation does not visually indicate what is what, only how deeply nested it is. And reliance on the editor means that studying printed code (paper is easier on my eyes, and easier to mark notes and lines) is tough to do. I will admit that maybe I am spoiled by the syntactic diversity, and could perhaps eventually get used to just parentheses. But relying on such a long learning curve to untrain from common language patterns makes Lisp a CareerLanguage.

I feel the same way when I look at Python code, but it didn't take me long to get comfortable reading Lisp, and I trust that I could get used to Python quickly. TabMunging can indeed be a headache - in any programming language. (In Python it's worse because the compiler cares, too, but Lisp isn't like that.) For me at least, the learning curve was not long. I'm surprised that you would find it so difficult as to warrant the CareerLanguage comment. (As an aside, I consider C++ a CareerLanguage because it has many quirks to learn that don't teach anything fundamentally new or transferable. With Lisp, I find that many of the unfamiliar bits are truly different and useful ways of thinking about programming issues, which I can to some extent apply to problems via other languages. So, to my way of thinking, Lisp is not a CareerLanguage, because much of what one learns to understand it well has wider application.)

We look forward to such lessons in LispLessonsForOtherLanguages.

Sorry, but I'm not talking about things that I, at least, could easily show in examples. I'm talking about fundamental aspects of how the language works. For example, the concept of symbols as something distinct that you get your hands on, where symbols are usually something that only a compiler or interpreter knows about. This seems trivial at first glance, when the most notable aspect of a symbol is its common role as a variable, but the implications of the variable-as-object and the other uses of symbols as efficient keys or monikers for things are subtle and affect both problem-solving style and program performance. Many concepts in Lisp exist in other languages, but in Lisp they often seem more distinct and decoupled from other concepts, making them clearer and enhancing understanding of how other languages work. (Multimethods are a good example of this, separating polymorphism from encapsulation.) At least so it has seemed to me.

I'm not skilled enough in Lisp to know how to convey these things in discrete examples; they would just seem like strange and impenetrable idioms if you didn't take the time to explore the language yourself and internalize them. Honestly, time spent asking for examples and analyzing answers would probably be better spent exploring the language yourself and working through some tutorials. Nibble at the strange parts until you basically understand them. Write some non-essential projects in Lisp to get a decent feel for it. Then walk away from it and see if anything from Lisp starts to intrude on your programming in other languages.

Perhaps another way of saying something similar is that other languages use a richer set of symbols to convey information about the "kind" of construct. If you see "{" you know you are looking at a block. If you see "(" you know you are probably looking at programmer-defined expressions. If you see "[" you know you are looking at an array, etc. You see "." between tokens and know you are probably looking at an object path. In Lisp one relies too much on context, because parentheses are used for everything. If different symbols carry more meaning, then the symbol alone conveys information; you don't need to rely on context, which in Lisp you do. (Of course, some languages get carried away with symbols, IMO.) I am a visual thinker and like visual cues in languages.

If one spends a lot of time, as Lisp users do, writing code that manipulates other code, then one should make that as effortless as possible. One way of doing that is to make the syntax as simple and consistent as possible. "Visual cues" are only needed in languages where all the complexity is piled up on the surface and the programmer is forced to deal with it directly. ("SyntacticSugar causes cancer of the semicolon." -- AlanPerlis) Experienced Lispers don't rely on "visual cues" to manage complexity; they instead try to abstract it away, leaving simplicity in its place.

"Block", "Array", "Object path", and "Method call" are all lower-level concepts than the problem you are working on. What visual cue does the language give you for a Transaction? What visual cue does the database give you for a State Diagram? Lisp, on the other hand, frees you from thinking "To use a transaction, invoke the method call, passing a block", like this

 .... // gathering data
 doTransaction(new Transaction() {
     void run(Transaction tran) { .... }
 });
to the Lisp version, "do the following statements in a transaction":

 (in-transaction-do tran
   (action1)
   (action2)
   ....)
(It's actually not quite fair that I picked Java, since it's verbose; Ruby gives the closest look to Lisp.)

 withTransaction do |tran|
   statements...
 end
But for a whole new control structure like LOOP, you can see how far the language can be extended. The point is that, at the high level of your system, in other languages, all you get are the visual cues your language provides: Block, Array, Object path. What about Hashtable? What about Tree? What about Network Diagram? Everything left out of your language's primitives shows up as too much detail. In Lisp, at the high level of your system, you mostly have something that looks declarative, and the implementation can mostly be changed at any time, because there is no such thing as "From the new version on, please use () instead of [], because we are not using an array any more". All in all, when you accept that your language cannot provide visual cues for everything except blocks, arrays, method calls, and object paths, you should consider whether it is appropriate to have those exceptions at all. -- PisinBootvong

You seem to be suggesting that the reduction of visual cues results in flexibility which outweighs the loss. But, I have not seen a convincing example. I asked for one in ChallengeSixVersusFpDiscussion, but no takers. As far as the transaction example, I am not sure what kind of transaction you are illustrating. Transaction code has not been a significant source of bloat in my experience. Is this a contest to produce the shortest transaction blocks/code? Regarding your "declarative" statement, if we are going to declaratize an app, I would rather the info be in tables, not nested lists. I find nested lists an inferior data structure.

But what I am really trying to say is that Lisp does not take advantage of the potential to have a richer set of symbols. It is not using the "full spectrum" of available symbols to convey information. Yes, one can get used to black-and-white, but why not take advantage of the existence of colors to help out the eyes? You seem to be suggesting that one must live with black-and-white to get the full power of programmer-built abstractions. I am not convinced yet that such a tradeoff is hard-wired into the universe (why would it be?). I am not ready to give up the idea of having both.

For example, one cannot tell what is a function name/block, what is a data block, and what is a control block without knowing the context. Unlike my earlier examples, those are things that Lisp does use regularly. If they exist in Lisp code, why not use syntax to indicate the difference between them to the reader? Even if one learned to recognize them quickly by context alone, visual cues are almost always faster for the human mind to process than "context parsing". Thus, it is not just a matter of "getting used to it". Contexting is inherently a slower mental process for most, at least for me. In context processing, the mind generally has to study the left side, the middle, and the right side (and perhaps further), and then do a look-up for relationship information. With direct visual cues, a mental lookup can be done without first checking the left and right neighborhood. A single index key usually performs quicker than a compound key; or, another way of saying this is that the fewer predicates (factors) in a query, the quicker the result on average. True, the context will eventually have to be checked anyhow, but at least the initial guess comes quicker and can be put into the processing hopper at an earlier stage.

This is somewhat analogous to branch prediction in some microprocessors. If the processor can guess the most likely path ahead of time (based on past patterns and other prediction algorithms), then it can compute the probable result in parallel so that it is ready as soon as the path is verified. A catering truck might start brewing Bob's favorite coffee as soon as they see him walk into the building, because in the past he always comes out half an hour later for his coffee. The truck might feel it is better business to serve him quickly and risk wasting coffee the few times he doesn't come back out, rather than make Bob wait, because repeat customers are usually the most profitable. Maybe 5% of the time a curly brace is not really part of a control block, but the ability to guess right quickly for the other 95% speeds up the net result enough that we can eat the cost of the occasional bad guess, which means performing a reversal or redo.

Do we really have to give up this fundamental nature of psychology and processing to get better abstraction and power? And even if there was a universal law making them mutually-exclusive (I doubt it), I suspect choosing the visual route is the net better, at least for me, a visual thinker for the most part.

I suspect you're suggesting that to allow something to be mutable from data to function call to control structure, it cannot be "pre-marked". Otherwise, we would have to keep changing or removing the markers to use them in different contexts. I agree that there might be a penalty for such, but is cross-purposing really that common in Lisp, or better yet, is it really necessary for significant abstraction and code reduction? (Just because it can be done does not necessarily mean it should be.) Links about demonstrations of this alleged power already exist above, so are demos the current waiting point on this issue?

-- top

The basic criticism of a lack of visual cues seems valid, but it's just not that big a deal in practice. For instance, you say: "... the mind generally has to study the left side, the middle, and the right-side (and perhaps further)...". By far most of the time in typical Lisp code, only the beginning of an expression (the operator name) has to be studied, and proper indentation makes it very easy to find the operator without counting parenthesis.

But if you roll your own operators and block types, as higher abstraction allegedly needs, then you still have to mentally look up what kind of operator it is to know what follows. It is a two-step process. And the larger the library, the longer this process will be. Hard-wiring symbol conventions into a language means that you almost always know what you are looking at without a mental lookup to find out. (You can still roll your own blocks, etc.; you just must stay within the conventions to do it.) And indentation can only tell you a limited amount of information. The fact that it depends so heavily on indentation is itself a suspicious sign.

CommonLisp makes it easy to define additional syntax via macro characters (cf. http://www.lispworks.com/reference/HyperSpec/Body/02_.htm). Programmers use them occasionally to make domain-specific programming more concise. But AFAICT, this capability is used very rarely; most programmers are content to use the standard predefined macro characters (which include the parentheses!). The fact that it is so easy to create additional syntax, but so rarely done, seems to support the notion that most Lisp programmers find it unnecessary to have additional syntax. (One could also argue that people avoid doing this because it would be non-standard, and that may be one reason. But no additional syntax definitions have come into common usage, despite some seemingly attractive ideas.)

In favor of simple syntax, I give you a quote from someone on comp.lang.lisp: "Lisp's minimal contextualization of and regularization of syntactic and semantic constructs makes writing macros in Lisp much easier. It makes a big difference." As you say, the tradeoff of harder macro writing might be worthwhile if it eases reading - the reasoning seems valid. But in practice, after decades of use, additional syntax is not commonly added to Lisp, despite the presence of mechanisms that empower programmers to do so.

-- DanMuller

It is possible that those who want such ability would switch to TCL or another language better designed for that kind of thing, or at least more acceptable to its "culture". Like minds tend to congregate around similar languages.

And, designing your own syntax can be hairy when it comes to such things as precedence and escaping. It is often better to pick a language with the conventions already built-in and tested so that you get "symbol spectrum" without the pain of invention. -- top

From what I saw, the complaints so far about lack of visual cues seem to be about ways of accessing elements, which seems like a minor point to me, because it is a minor thing in programming. When you develop an application, what is your main problem? Is it accessing arrays, hashtables, and sets? Or is it the business problem domain? Complaining about how Lisp lacks "[]" for accessing arrays is like complaining about Lisp having a "car" function and using prefix notation, which is bad for "+ - * /". How many symbols is that? Four, if I count them correctly, right? How much does "+ - * /" spread through your application? How much does "[]" spread through your application? Instead of nitpicking about how every line containing "+ - * / []" must look the way you please, consider whether your code looks the way you want when it is at a higher level, like

(I am not sure what you mean by "spread".)

 (with-transaction-do transaction :if-error :rollback
     (.... statement...))
About "distinguishing between code and data is hard": personally, I think that is good. Good Lisp code tends to be declarative. What you pass to a function is usually a closure (in the case of a macro, it's the source code). So it makes the resulting interface look cleaner. Consider this

(my-if predicate
     :then "do if true"
     :else "do if false")

(my-if predicate :then #'(lambda () "do if true") :else #'(lambda () "do if false"))
The code above uses a home-grown IF structure which evaluates the predicate, runs the first block if the predicate is true, and runs the other block otherwise. You can see that the first version is cleaner because it hides the explicit closures.
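The contrast can be sketched in Python, where the explicit closures cannot be hidden, because ordinary functions (unlike Lisp macros) always evaluate their arguments first. This `my_if` is a hypothetical illustration, not from any library:

```python
def my_if(predicate, then, orelse):
    """Home-grown IF: evaluate the predicate, then call exactly one of
    two zero-argument closures. Without a macro layer, the lambdas must
    be written out explicitly at every call site."""
    return then() if predicate else orelse()

result = my_if(2 + 2 == 4,
               then=lambda: "do if true",
               orelse=lambda: "do if false")
```

A Lisp macro version can accept the two branches as raw source code and wrap them in closures itself, which is exactly what the cleaner first form above is doing.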

I agree that a declarative approach is good; however, I would rather manage declarative code via attribute browsers, table browsers, etc. If your Lisp code is mostly declarative, then Lisp is serving mostly as a data structure converted to text, and if we are going to have a one-size-fits-all data structure, again I would pick something besides nested lists as the King structure. Do declarative right. -- top

No, you cannot put everything in a DB table. And I said nothing about a one-size-fits-all data structure.


Another example, from CollectionOrientedProgramming. Suppose I wanted to make a little array-handling sub-language or functions that looked something like this:

  average(myArray, columns=3:8, rows=2:5)
In Lisp one might end up with something like:

  (average myArray (columns 3 8) (rows 2 5))
It says the same thing, but colons are a nice visual cue to indicate array ranges. One could live with the Lisp version, but the colonated version makes nice use of symbols to convey a range. True, most languages don't offer such out-of-the-box, but something like TCL makes it at least as easy to do as Lisp does, and probably easier because, as you say above, it is not done very often in Lisp, and so Lisp has less mature libraries and training material for such. And the culture around TCL won't balk at it. If we did not have a powerful string-processing system/language, we could at least do something like this:

  average(myArray, columns="3:8", rows="2:5")
And then parse by splitting on the colons (noting they might also be variable names, so caller-scope-grabbing ability may be needed). I don't want just semantic abstraction in this case, but syntactic abstraction. When I make a sub-language, generally I first draft up the ideal, and then work backwards to make it practical in the given language. Lisp is further from the ideal. I want to use a fuller spectrum of the keyboard when designing a sub-language. Lisp makes the same mistake as BrainfuckLanguage with regard to readability, just to a lesser extent. -- top
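The string-based fallback just described could be sketched in Python along these lines. All names here (`parse_range`, this `average`) are hypothetical illustrations, not from any library; ranges are taken as inclusive, and the optional `scope` dict stands in for the "caller scope grabbing" mentioned above:

```python
def parse_range(spec, scope=None):
    """Parse a "lo:hi" range string; each side may be a literal integer
    or a variable name looked up in a caller-supplied scope dict."""
    lo, hi = spec.split(":")
    def resolve(token):
        return int(token) if token.lstrip("-").isdigit() else scope[token]
    return resolve(lo), resolve(hi)

def average(matrix, columns, rows):
    """Average the sub-rectangle of a matrix named by colon-range strings,
    e.g. average(m, columns="3:8", rows="2:5")."""
    c_lo, c_hi = parse_range(columns)
    r_lo, r_hi = parse_range(rows)
    cells = [matrix[r][c]
             for r in range(r_lo, r_hi + 1)
             for c in range(c_lo, c_hi + 1)]
    return sum(cells) / len(cells)
```

This buys the colon notation at the cost of quoting the ranges and parsing strings at run time - which is the tradeoff the Lisp reply below is arguing a macro avoids.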

For the record, (average myArray columns=3:8 rows=2:5) is exactly one macro away in lisp, and it is in fact very common to do such things. The entire point of lisp macros is to make it possible to draft up the ideal, and leave it at that. -- WilliamUnderwood

Also for the record, I like the Lisp version better. This may just be my Lisp background speaking (though I doubt it, because Lisp is about the 5th syntax family I've learned), but the parentheses provide a much better visual cue than the colon, and they don't distract from the numbers. When I see :, I think "cons". But that's just Haskell speaking (syntax family #9!).

I could go for average(myArray, columns = 3..8, rows = 2..5) though.

Which just shows how arbitrary syntax is. -- JonathanTang

Yes, it often is. But at least it *can* offer visual variety to serve visual cues even if one does not always agree with the choice. It is variety versus non-variety. Again, Lisp does not use the full "spectrum" of available symbols. Generally more information can be conveyed if the full spectrum is used. It is the same for the keyboard as it is for diagrams. Even if the colors on a printed chart are not our first choice, having colors usually helps over black-and-white.


Colon example continued...

 // non-Lisp
 x = average(myArray, columns=frompi:topi, rows=bento:mento)
 y = zarkagg(troGaxi, mognoto(flig, biko), trogows(glig, misk))

 ; Lisp or Lisp-like
 (set x (average myArray (columns frompi topi) (rows bento mento)))
 (set y (zarkagg troGaxi (mognoto flig biko) (trogows glig misk)))
In the non-Lisp version, the syntax is giving visual cues to what it is. In the second Lisp-ish set, the first line's syntax pattern is identical to the second. My head can simply process symbols faster than it can with function names. I can spot the colons and equal signs pretty quick and start making out domain-specific patterns, such as the ranges. I cannot instinctively do that with the second.

Actually, your second example is not identical to the former. To make it identical, you'd need this:

 (set x (average myArray (columns (range frompi topi)) (rows (range bento mento))))
And, given that we are now using a consistent function name range, we can rename it:

 (set x (average myArray (columns (: frompi topi)) (rows (: bento mento))))
Since this convention is used more than once, we can refactor this into a reader macro somewhere, and end up writing this instead:

 (set x (average myArray (columns #:frompi:topi) (rows #:bento:mento)))
But, I am digressing.

I suppose you could argue again that one gets used to it after a while using Lisp, but I have been working with function names for more than a decade in mixed symbol and function-API languages, and symbols still process faster overall. Being forced to use only function names (or list starters) may improve my function name recognition ability a little bit, but it is not going to significantly improve on a decade of experience. Why would symbols "stick" faster anyhow in those hybrid languages? I had to recognize function names in them also, so it is not like I am a newborn in that area. Symbols simply process faster overall in my head. Do you understand what I am trying to say here? I am not sure it is coming out right.

Let me try to explain it this way. When I see a function name, my mind sort of does a hash search on each letter. In practice, to some extent I tend to process the shape of the whole word as a kind of long symbol rather than as phonetic pieces. However, a "long symbol" still takes longer to mentally look-up than a shorter one. Thus, when I see symbols such as equal signs, colons, etc, the lookup is quick and I can use that information to quickly narrow down what I am looking at. It is a quicker mental hash than names. We still need to rely on names, but the scope of what their context is has already been greatly reduced by looking at their placement relative to the quickly-spotted symbols.

It can sort of be compared to sign colors of road signs. The color of the road sign narrows down what you are looking at. Bright colors generally mean caution. Green means location-related information (at least in the US) such as town names and distances, and white means traffic rule statements, such as "No Left Turn". These colors offer a nice "pre-parse" about the signs. If we had to read the content to know whether something is cautionary, location-related, or rule indicative, then it would slow down the processing, at least for me, and I suspect others because they came up with that convention even without my input.

Without the colors, "No Left Turn" could mean either that a turn is not permitted, or as a warning that the left turn lane is missing or closed due to construction. If you are driving a birth-giving lady to the hospital, you may want to know which rules can be broken without the end-result being to crash down into a piping ditch. To be clear, such a sign might have to word it differently, and longer, such as "Left Turn Not Permitted".

Symbols are a lot like the sign colors - they quickly narrow the context of the sign content to help you know how to read them (content classification) and know what to ignore when looking for something specific. Lisp is a lot like having traffic signs all be the same color; you have to read the content before you have any clue about what they are.

Further, content lookups and visual pattern recognition are generally handled by more or less separate parts of the brain. If one can use both at the same time, then some parallel processing takes place, speeding up the reading process. Lisp tends to let the visual side sit idle while overtaxing the content-lookup portion of the brain. In other words, it is not doing LoadBalancing? well. (Yes, I know that indentation plays some part, but it does for just about all languages, so it is not a difference maker.) Being able to use both in conjunction uses more parts of the brain to come up with an answer faster. It is like going hunting with a dog versus by yourself; the dog uses its sniffer and you use your eyes. Together the two triangulate the prey faster.

-- top

Actually, the extra syntax in "average(myArray, columns=frompi:topi, rows=bento:mento) " tells me nothing at all. What clues me in to the meaning are the names: columns, rows, from*, to*. Syntax can only give you small clues; reading what's written is essential. That's why I think that the difference with or without the extra syntax is small. In either case, the identifiers tell you what's going on. The gibberish versions are both equally gibberish to my eye.

I don't dispute that the names eventually carry the full meaning, but generally, when reading code, one is looking for something specific rather than running the entire thing in one's head.

There are a few things about the examples that I wonder about. myArray is obviously not an array, but a matrix, yes? Is columns a function that returns something designating a column range, or is average supposed to be a macro that interprets its arguments as a "little language"? In the former case, "columns" and "rows" might be useful functions in and of themselves. In the latter case, they would just be symbols, and must be recognized by the average macro - and average must be a macro, otherwise the system would try to treat "columns" and "rows" as names of functions. It would be typical to use keyword symbols for this. Keyword symbols are distinguished by starting with a colon, thus: (average myMatrix (:columns 2 5))

This points to one of the reasons that macros should be used somewhat sparingly in Lisp. The evaluation rules are usually very simple: all argument expressions are evaluated before a function is called. Macros, on the other hand, can use whatever rules they wish for evaluating or not evaluating their arguments. Giving a pedestrian function like "average" (however that's defined for a matrix) unusual evaluation rules would, I think, usually be avoided.

Some other styles that one might use within the context of Lisp are:

 ; functional, returning slices of the matrix/table/whatever (hardly an array)
 (set x (average (rows (columns myArray frompi topi) bento mento)))

 ; using simple keyword/arg pairs
 (set x (average myArray :from-col frompi :to-col topi :from-row bento :to-row mento))
In this first example, rows, columns, and average are all functions. In the second, average is a function that takes keyword arguments. (Keyword arguments can appear in any order, are optional, and can have default values.)
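For non-Lispers, the keyword/arg-pair style in the second example maps fairly directly onto Python's keyword arguments, which are likewise optional, reorderable, and defaultable. This `average` is a hypothetical sketch, with ranges taken as inclusive:

```python
def average(matrix, from_row=0, to_row=None, from_col=0, to_col=None):
    """Average a sub-rectangle of a matrix. The keyword arguments may be
    given in any order, may be omitted, and have defaults (the whole
    matrix) -- mirroring Common Lisp keyword parameters."""
    to_row = len(matrix) - 1 if to_row is None else to_row
    to_col = len(matrix[0]) - 1 if to_col is None else to_col
    cells = [matrix[r][c]
             for r in range(from_row, to_row + 1)
             for c in range(from_col, to_col + 1)]
    return sum(cells) / len(cells)
```

A call like `average(myArray, from_col=2, to_col=5)` then reads much like the Lisp `(average myArray :from-col 2 :to-col 5)`.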

Regarding the value of additional cues above and beyond indentation: Phooey, I say. Go read some Perl and tell me how wonderful it is to have a bunch of additional syntactic symbols embedded in the code. I use Perl pretty often, because it's good for quick file-handling and search tasks. But it's damn near a write-only language.

-- DanMuller

Like I said earlier [where? not on this page], Perl is probably the extreme on the other end. But I won't dispute that Perl jibes well with Perl fans' brains. I won't dictate to people how best to think about computer languages as long as they don't make objectively measurable claims. I can only be sure about what makes my brain happy and fast, not a Perler's brain. EverythingIsRelative; I am only the messenger.

-- top

"I can only be sure about what makes my brain happy..." True. And a good reason not to waste time on topics like this, I suppose. As you're fond of saying, there's no objective evidence, and if the difference is on the order of a few percent of reading comprehension speed, then the syntactic differences by themselves should neither discourage nor encourage use of Lisp over some other language.

-- DanMuller

Re: "And a good reason not to waste time on topics like this" - The problem is that software engineering depends largely on psychology with no known way to escape that fact. Thus we would have to delete 3/4 of this wiki if we go with that guideline. -- top

Perhaps 3/4 of the wiki could stand to be deleted. :) I don't agree that software engineering is mostly about psychology. The effect you're claiming on this page (Lisp's lack of punctuation slows comprehension) is, if it exists at all, negligible and unimportant compared to actual software engineering issues. People have become comfortable and productive with languages that use all kinds of different syntactic forms - APL's panoply of symbols, FORTRAN's whitespace-agnosticism, FORTH's read-me-backwards stack-based approach. It's the ideas represented by the notation, not the notation itself, that make up the meat of software engineering. This page is an amusing side-excursion into a detail of language design.

Unfortunately, code is primarily how we have to communicate with the system, so problems with it add up. Further, just because it is a minor issue does not make it not worth documenting. Personally, I would like to deal with code as a big data structure rather than as syntax. That way the presentation can be tailor-made for each reader/user (with some common ready-made templates) rather than forcing one presentation approach on everybody. See SeparateMeaningFromPresentation. If we do this, then we will fight over which is the best data structure to hold the meaning, and nested lists won't win in my book.

Some of your analogies regarding "single key" and "multiple key" lookups strike me as particularly flawed. In Lisp, the operator name is usually the key to understanding what follows. The operator name is always easily found at the beginning of a list. (This has been mentioned a few times, I think, but you've never acknowledged it.

Like I said above, I just find that symbols often result in faster look-ups in my head than names, regardless of how easy they are to locate in the first place. The mileage in your head may vary. I don't know what else to say about it. It is a truth for my head, period. Whether that is the way my head "should" work is a separate issue. Some programmers are visual thinkers and some are linguistic thinkers, it seems. Visual thinkers probably prefer more symbol variety to indicate information.

You only express a dislike for relying on indentation to help find it - an assertion that I view with scepticism, since indentation and alignment are used by programmers in most any language to help visually locate important keywords and punctuation.)

Like I said above, I excluded indentation from the discussion because most languages use it, so it is not a difference to consider.

As far as recognizing operator names goes, protestations about the speed of recognizing words versus punctuation are bogus. (I use the word "punctuation" instead of "symbol", since the latter means something different in Lisp than what you are using it for. In fact, operator names are symbols in Lisp terminology.) Any adult with a reasonable reading ability recognizes complete words, and often even entire phrases, without examining individual letters. (This is shown to be true, at least as the norm, by a lot of psychological research.)

I would like to see references on this. If symbol preference is a minority preference, does this mean that Perlers are also "abnormal"? Does LeftHandedSyndrome apply here? Above you disagreed that software engineering was mostly about psychology, but the above seems to contradict this view. Is it the only exception?

The idea that longer words necessarily take longer to recognize is incorrect, and the notion that "{my code}" is easier to recognize as an unnamed block of code than "(lambda () my code)" is dubious. Add to that the fact that curly braces can be hard to distinguish from other kinds of brackets, and that languages which rely heavily on punctuation often have to reuse the same punctuation for different purposes, due to a paucity of glyphs. (APL being the notable exception.)

Nothing is perfect. Some fonts/browsers display curly braces better than others, I would note. Bad fonts/browsers do indeed result in more confusion for me. Regarding your example, I would prefer curly braces over having "lambda" stated repeatedly all over the place. For one, it is more compact. Note that if we follow my SeparateMeaningFromPresentation suggestion above, then how such blocks are presented is an end-user presentation detail. The internal structure would simply have a way to indicate that a block is a lambda block, and whether it is visually demarcated with braces, names such as "lambda", or bleeding gerbil GIF icons is demoted down to a user presentation issue. That is the ideal we should strive for. That way we no longer have to bicker for these kinds of things. -- top

On the subject of storing code in structured binary form (tables, or whatever), I'm lukewarm. Plain text allows you to organize code (and data) in meaningful ways that don't necessarily fit into the language structure -- comments, alignments of similar items, etc. I find the free-form formatting of modern languages to be a feature, not a hindrance.

But again how it is presented, as text or line diagrams or whatever, would be your personal choice. I am not suggesting a presentation, thus I have not presented an alternative to text as a representation, so you have nothing to complain about yet. One advantage to comments in a structure is that it may be more clear about what the comment applies to. Example:

  x = 0;  // initialize
  y = 0;
  count = 1;
Here the "initialize" comment applies to all 3 code items, but this is only implied. Comments could be made to indicate the entire range of code which they apply to (as an option).
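As a sketch of this idea, the code could be stored as a data structure in which each comment node records the range of statements it covers. All of the node names here (block, assign, :covers) are invented notation for illustration, written as Lisp data:

```lisp
;; Hypothetical code-as-data form: the comment node explicitly
;; covers the next 3 nodes, so "initialize" is no longer implied.
'(block
   (comment "initialize" :covers 3)
   (assign x 0)
   (assign y 0)
   (assign count 1))
```

How such a structure is then rendered (comment beside the first line, a bracket spanning three lines, etc.) would be the presentation layer's choice.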

I'm sceptical that a system could be built with enough flexibility and ease of use to rival free-form text. Yes, you can theorize a system, and such language systems have been discussed and even built in the past, but nobody has built one that caught on, to my knowledge.

Run-time engines such as Java's and Microsoft's are a step in that direction. I think it is something that will require more powerful browsers that have yet to be invented or perfected.

Here's what really puzzles me about the whole punctuation-versus-words thing: How many punctuation glyphs do you typically have available? How many concepts can you actually assign to them, without overloading them to the point of counterproductivity? Given that the answer is "relatively few", what concepts would you pick in a language to map to these limited symbols, and how much would this really speed code comprehension? (Also, note that I'll repeat here that I'm not arguing that you're wrong, just that you are giving this issue undue importance.)

-- DanMuller

Like the different colors of road signs or folder names (below), the "first level classification" does not need very many symbols. The trick is to find a good general classification system, as the road-sign system appears to have done. Again, some languages over-do symbols. I don't dispute that (although some minds might dig it for yet unexplained reasons).


To build on top of the above "road sign" example, symbols tend to be used (or work best) as follows.

Symbols:

Lisp:

Under the symbol approach, the symbols tend to indicate context, similar to the road-sign colors. When one sees a symbol, they can narrow down in their head what context they are looking at. With the Lisp approach one has to first mentally do the name look-up and then translate that into its context or "type" of feature/block. Symbols reduce the size of the list that has to be mentally searched. It is a kind of mental hierarchical index. It is like first going into a folder so that we have fewer files to look at. Lisp names are like looking through a giant list of names that have not been pre-classified into folders. (Yes, I know I rant about LimitsOfHierarchies all the time, but the hierarchy here is somewhat informal, and I have agreed that trees make pretty good informal or small-scale organization tools. They just don't scale to massive structures very well.)

Again, this is a very rough general description and not meant to be precise. The symbols are just (pretty good) "hints" about what is around them.

Further, the symbol lookup often applies to multiple statements/names, not just one. Thus, one lookup can be done for many other parts. For example, if I see:

  foo = bar(glob) * snorg(grog);
I am pretty sure that everything to the right of "=" is part of the "right" side of the assignment statement. The equal sign "classifies" the whole right side as something. -- top

I wish you would stop using gibberish examples. The above makes no sense to me whatsoever. I can't even begin to guess what you were trying to get at. More time taken, and more realistic examples, would be greatly appreciated.

-- DanMuller

I am just trying to communicate general thought patterns. If I failed, I apologize. I will try to bring specific examples as I encounter them. Also, I don't consider existing languages to necessarily be the ideal. One of the things that makes me curious about TCL is that it makes it fairly easy to create one's own symbol system. Of course the down-side is that one person's symbol system may not be favored by another.


More Examples

 (setfoo 
     (glob "nob")
     (sob "rag")
     (snib 7.2))

  glob := "nob"
  sob := rag
  snib := 7.2
The multiple ":=" catch one's eye and indicate that we have a bunch of assignment statements (Pascal-style) and help indicate where the "list" of assignments ends. In the Lisp version only "setfoo" tells us that we have assignments (and there are lots of different "set" names in Lisp, adding to the confusion and mental parsing slowdowns).

  (foo zog (glot "dork") (nark 7) (can trog))

  foo(zog, glot="dork", nark=7, can=trog)
In the second example a bunch of "=" between names and commas suggests a bunch of named parameters. We know that without knowing anything about what "foo" does. The mind recognizes the visual pattern of:

  *(... *=*, *=*, *=*...)  // where * is a name or constant

and usually guesses correctly that we are looking at named parameters.

The Lisp version's parameters could be anything, including variable assignments like example 1A.

We would have to know what "foo" is to know whether they are named parameters or something else. If it is an application-specific function, it will not necessarily be hard-wired into our brain the way that named parameter syntax generally is. This is because the named parameter syntax is the same for the entire language, while in the "foo" function it is entirely determined by how "foo" wants to interpret it. It is a "local standard" you could say where we have to know or lookup the local culture to know how to interpret stuff. It is like the road sign colors being different in each county/state/province.
From examples 1A and 1B, the appropriate Lisp version would be

 (setfoo (glob "nob"))
 (setfoo (sob "rag"))
 (setfoo (snib 7.2))
And I don't know why you called the Lisp function SETFOO while the 2B version uses :=. If you called it SETFOO, it seems to me that you are making your own version of a multiple-assignment operator. The assignment operator in Lisp is SETF. How in the world can you define a function named ":=" in your non-Lisp language? If := and = are already keywords in that language, what symbol would you use? For a fair comparison, the 1B version must also assign all the variables in a single statement. And since := is already a reserved word, you would have to name that function something else, probably SETFOO. So the 1B version becomes

  setfoo ( {glob, "nob"},
           {sob, rag},
           {snib, 7.2})
In the above version I assume that the language supports variable-length argument lists and uses {} as an easy way to create arrays or tuples. And that still disregards the fact that in most languages, which don't support macros, you won't be able to create the above multiple-assignment operator if the language hasn't already provided it for you, because what you pass to a function is the value of the variable, not the variable itself.

Also, for the record, I could define a ":=" macro in Lisp to call SETF, and then the Lisp version will be

 (:= glob "nob")
which is as good; := even stands out more than in the infix version.
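A minimal sketch of such a ":=" macro in Common Lisp; it simply expands into SETF (the variable GLOB is invented for the example):

```lisp
;; Hypothetical ":=" macro: expands directly into SETF.
(defmacro := (place value)
  `(setf ,place ,value))

(defvar glob nil)
(:= glob "nob")   ; macroexpands to (SETF GLOB "NOB")
```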

For examples 2A and 2B, there is a standard way for Lisp to pass named parameters, which is

  (foo zog :glot "dork" :nark 7 :can trog)
Where the colon indicates a keyword name. We know that without knowing anything about what "foo" does. The mind recognizes the visual pattern of:

  (* ... :* * :* * :* *)  // where * is a name or constant
and usually guesses correctly that we are looking at named parameters.
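For completeness, here is roughly how such a FOO would be defined with standard keyword parameters; the body is invented purely for illustration:

```lisp
;; &key gives callers the :glot / :nark / :can keyword syntax.
(defun foo (zog &key glot nark can)
  (list zog glot nark can))

(foo 1 :glot "dork" :nark 7 :can 'trog)  ; => (1 "dork" 7 TROG)
```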

You should have studied Lisp before presenting examples which demonstrate your misunderstanding of it.

I'm tempted to give up on this page, since Top's "Lisp" examples are all so uninformed and unresearched as to make my head hurt. I do wish he'd take the time to be more accurate. There is no such thing as "setfoo"; if you invent your own operators, they can have whatever bastardized syntax you want. If you meant to illustrate assignment in Lisp, it's probably about five minutes' worth of work to look it up in a copy of the Hyperspec on the Web. OK, maybe ten if you're unfamiliar with the Hyperspec.

Lisp has exactly three assignment operators, all with very similar names and syntax. Of the three, two are treated by many people as deprecated. The remaining one is:

  (setf place value [place value]...)
SETF is a macro. I think it's relatively rare to see more than one place/value pair, just as it's rare to see chained assignments in C or C++ (as in "var1 = var2 = var3 = var4 = value").

It would seem that most people agree with Top that ganging up multiple assignments together (in any language that supports it) is not stylistically desirable. (I also agree, in general.) The argument "place" is a "generalized place", which leads to a longer topic of little interest here. However, when place is a symbol (Lisp's analogue to variables), then setf works just like you'd expect:

  (setf a 5)  ; returns 5
  a           ; returns 5
  (setf b a)  ; returns 5
  b           ; returns 5
The other assignment forms use the operators SET and SETQ, and have similar syntax but slightly different semantics. Both are subsumed by SETF for almost all purposes, which is why they're not used as often in new code. (SET is a function instead of a macro, which I guess makes it marginally useful for some esoteric applications.)
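A small sketch illustrating the difference (the variable names are invented):

```lisp
(defvar a nil)
(defvar *pair* (cons 1 2))
(defvar *tbl* (make-hash-table))

(setq a 5)                   ; SETQ assigns to symbols only
(setf a 6)                   ; SETF does the same for a symbol...
(setf (car *pair*) 10)       ; ...but also accepts generalized places
(setf (gethash :k *tbl*) 42) ; e.g. a hash-table entry
```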

-- DanMuller

If you make the assignments separate, you still have ugly syntax. It takes up more space than equal-based. (However, I suppose that is a separate topic.) Even without the parenths, it still suffers from the same problem if in-lined. Complaints about my "made-up Lisp" are based on trivia not relevant to this topic, other than the fact that I should have pointed out there are alternative variations that I did not show. "setf" still does not stand out as readily as ":=". -- top

The word "setf" in the leftmost position of a block of code is actually probably easier to find than an assignment operator buried further to the right. I don't know what you mean by "in-lined". Your made-up setfoo didn't have at all the same syntax as SETF; it had extra parentheses, which is a convenient skew when complaining about a lack of compactness. Also, your example didn't take into account common usage; the harder-to-read multiple-pairs assignment is not often used. There are not lots of different "set" names in Lisp; there are exactly three, of which most people nowadays use exactly one. (You can hardly fault a language as old as Lisp for retaining some history for backwards-compatibility.) Please stop creating strawmen. It would cost you very little to be more accurate and fair.

-- DanMuller

PageAnchor: peripheral_alpha

  snarp(tabbo, niff)
  glob := "nob"
  sob  := rag
  snib := 7.2
  grub(lark, park)
  neb(slick, "yip", 8)

  (snarp tabbo niff)
  (setf glob "nob")
  (setf sob  "rag")
  (setf snib 7.2)
  (grub lark  park)
  (neb slick "yip" 8)

I see a syntactic pattern that separates the 2nd, 3rd, and 4th lines from the rest in 3A. In 3B they tend to visually blend together. I have to look almost directly at the "setf" lines to know where they are and how many there are, and if I move my eye away, I have to do it almost all over again to pick up the boundary (it is quicker the second time, but still more work than the symbol version). I can pick up the ":=" grouping with peripheral vision out to a radius almost three times that of "setf". I don't claim everyone is the same, but that is how my eyes work, for good or bad. I cannot give you a physiological reason for the radius difference; it just exists. The lack of parentheses around the assignments also tends to set them apart, helping to build yet more visual distinction. -- top

Yeah, OK, I can see that. If the assignments are not grouped together, the effect is a bit less pronounced. And if the right-hand sides included expressions with parentheses, as they often would in real code, then the lack of parentheses wouldn't be a feature of these lines. (Note: Fixed my confusion over which hand holds the fork in this and the next paragraph.)

Expressions on the right side? Then the symbols are even more important for indicating what they are. Note that I tend to align such assignments vertically if possible.

But, should assignments stand out starkly versus other code? This goes back to the question I asked earlier: "... what concepts would you pick in a language to map to these limited symbols, and how much would this really speed code comprehension?"

-- DanMuller

Being that assignments are common, standing out can be used often. Perhaps we have not reached the optimum assignments of symbols with the typical languages in use, but even with that flaw the symbols now still help common patterns stand out. I don't want to give this up unless the sacrifice brings fairly strong benefits in other areas. And if so, I would like to check to see if they are mutually-exclusive. -- top

To add another data point, I can pick out the setf's in my peripheral vision at about 50% farther away than the :=. As for why your brain works that way, there's a really simple explanation: that's what you're familiar with. If you spent an equal amount of time with Lisp as with infix languages, you'd pick out the setf's too. I'm hardly a hard-core Lisp programmer (my day job now is Java, and I did mostly PHP during the semester), but I've read enough Lisp code to make them about equally comfortable.

Like I stated above, I have dealt with both names and punctuation in other languages, including commonly-used names in such languages. The punctuation still stands out more to me even after longer use. I agree that the difference might shrink, but still exists, at least for me. Also, the lack of parentheses on the left side of assignments adds further visual cues to differentiate. -- top

There is no "optimum assignment of symbols". It depends on linguistic idioms, paradigms, and most importantly the syntax that the programmer is most familiar with.

And assignments are not common in Lisp. You use rebinding far more often. Besides that, most Lisp code is a DomainSpecificLanguage, so there is no "common language feature". Probably that's why Lisp users eschew punctuation, even though there're reader macros and infix parsers for Lisp. -- JonathanTang

On the one hand you say that frequency of use improves recognition, and on the other you say here that "setf" is not used much in practice. That appears to be a bit of a contradiction. Also, the position of the parentheses should not change one's ability to recognize the name itself, and all languages use names, so it is not a new skill we are looking at here. It is about name recognition, not infix-versus-prefix, at least not in this example.

Regarding DomainSpecificLanguage, I think you mean SubLanguage; for a DomainSpecificLanguage is generally considered hard-wired for a domain. -- top

'setf' is not used much in practice. Names are. That leads to familiarity in recognizing names. It also leads to the perception that := is just another name (it is, in DylanLanguage), which probably leads to the disparity in recognizing it.

And I do mean DomainSpecificLanguage, as you can see on that page. LispMacros are very much a DSL, they are hardwired at macroexpansion-time, and the fact that they're easy to change at EditTime? is one of the big features of Lisp. -- JonathanTang

Which brings to mind the fact that Lisp would be an excellent vehicle for prototyping your own language. Rather than arguing its merits or shortcomings, one could simply apply it as a means to an end, and in the process learn it well enough to critique it more convincingly. -- DanMuller

As a test, let's see how easily Lisp can emulate C or SmallTalk.

Feel free to test whatever you like, if you feel the need. C will require a complete and complex parser, but I certainly hope your own language designs won't have the ambiguities of C syntax. What little I know of SmalltalkSyntax indicates a small, simple syntax that one could probably implement in Lisp in a fairly straightforward fashion. -- DanMuller

Everybody would probably pick a different approach to their ideal language. I personally wouldn't go the SmallTalk route because it lacks the option of compact positional parameters.

I thought Smalltalk was first prototyped in Lisp, wasn't it?


Re: "But, should assignments stand out starkly versus other code? This goes back to the question I asked earlier: "... what concepts would you pick in a language to map to these limited symbols, and how much would this really speed code comprehension?""

Even if somewhat arbitrary things stand out, at least it provides a variety in the "landscape" on which to hitch mental points of reference. They are landmarks. Landmarks do not necessarily provide heavy meaning by themselves, but help in orienting oneself. Lisp's neighborhood is too bland. It does not matter if there are great things inside the houses; finding and navigating to the houses in the first place is more difficult. (I am running out of metaphors, guys.) I just plain find visual variety helpful. -- top

LOL. I get more than my share of variety from strategic use of comments and whitespace, without having to resort to metaphors. -- DanMuller

Those are not enough. Most languages use those anyhow, so they are not an issue to compare here. If you don't need landmarks, I am happy for you. You have highly evolved eyes.

If you can't write a comment block that will draw attention to something important in the code, then you have an under-evolved ability to write code. As to whether comments and whitespace are an issue or not, of course they are.

PageAnchor: Difference

They are not an issue to compare because they exist in both things being compared. I keep saying this, but it does not seem to sink in. Am I saying it wrong? I am comparing A and B by performing A - B. Or, more specifically (A + C) - (B + C) where C are comments. Since both sides have it, we remove them from the comparison to simplify the comparison process.

The claim on this page is that "Lisp lacks visual cues". It quite categorically does not. It has some of the same visual cues that are available in most languages - whitespace and comments. It lacks only the more complex syntax that varies considerably among other languages, and offers benefits by virtue of its simpler syntax. That is all simple, plain, indisputable fact. Those particular visual cues, it lacks. Whether one can come to appreciate the syntax, or will forever loathe it, is an entirely personal question. But the bald statement of the page's title is simply incorrect.

As an absolute statement, you are entirely correct. But it was a comparative statement, as illustrated above with algebraic analogies (PageAnchor: Difference). Perhaps you would rather have the topic named "LispHasFewerVisualCues?" to make this clear. However, brevity was used instead. As far as benefits that counter the loss of visual cues, such as benefits that allegedly accrue from a simpler syntax, that is another topic, such as ObjectiveAdvantagesOfLisp. I like a lot of the concepts of Lisp from an intellectual standpoint, but it just does not make my eyes happy, for the reasons given. I would like to see language techniques that SeparateMeaningFromPresentation so that I can see "it" the way I want to see it and you can see it the way you want, without imposing One Right View on each other. Whether that is achievable or not, well, at least let's try. I see too many fights over syntax that are perhaps in theory unnecessary, because syntax can be a person-specific thing. --top


The "Get Used to it" Defense

The Lisp defense seems to be "you will eventually get better at reading Lisp". But isn't the opposite also true? One can get used to languages with more complicated syntax. Why isn't time the solution there also? I suppose one could argue that such languages lack enough meta ability, but being syntactically complicated and having meta ability are probably orthogonal (and another topic). Nor does being syntactically complicated necessarily mean it requires more code to do the same thing. -- AnonymousDonor

The difference is precisely in the meta ability. With Lisp, you get used to the parentheses, and then get the benefits of LispMacros as payment for putting up with the lack of syntax. With other languages, you get used to the syntax, but don't get anything for it.

Being syntactically complicated and having meta ability aren't quite orthogonal. It's possible to design a language with a structural macro system but infix syntax: DylanLanguage and CamlPeeFour? are good examples. But the resulting macro systems are significantly harder to use. Writing a macro in Dylan or ObjectiveCaml requires some knowledge of the language grammar and how the parser translates the program text. In Lisp, this is all explicit in the parentheses. So at the meta level, it's not so much LispLacksVisualCues as Lisp provides the visual cues that parsers hide. -- JonathanTang

Of course this invites one to ask for examples of practical uses of Lisp's meta ability that other languages cannot match. I am a bit skeptical, but there are existing topics on that [to be inserted]. --top

To say "no other language [at ALL] can match" is asking for too much, since Lisp has had 45 years to influence other languages, but if we reinterpret that as "no language that Top knows, or indeed that the majority of professional programmers know" -- why then, yes indeed, Lisp meets that without question, no need for skepticism.

But you are comparing features, not necessarily practical ability. However, this is not the topic to debate the alleged merits of Lisp.

Compared with all of the most popular languages of the last 3-4 decades, all of which are Algol-family languages, it suffices to note that none of them allow first class functions, but Lisp does, and it is well known (and easy to demonstrate) that first class functions add a new and deep dimension to programming, and that is not a subjective issue, it is objectively more powerful than languages which do not allow such.

That's not to say that Lisp is the ultimate language, which cannot be improved upon -- but that was not the question at hand. -- Doug

Whether first-class functions are a GoldenHammer or not should probably go to another topic (see indent note), for EssExpressions are not the only way to include them syntactically. Having a feature and how the feature "looks" is probably orthogonal.

I just need to point this out. I recently found myself simultaneously taking graduate-level Scheme and teaching undergrad introductory Scheme. I had some previous experience with ML, and I hold no real grudge against functional languages. However, this was my first experience with anything Lisp-like. I now spend 1-8 hours nearly every day of the week debugging Scheme code.

So, I need to point this out. Loudly. Now. Optimizing a programming language does *not* mean using the minimal amount of ink when printing out your programs. Apostrophes, commas, and back quotes are *hard* *to* *see*. All ()'s look alike, and they're hard to count/sort out when there's more than 3 or 4 in a row: ))))). It is neither a *healthy* nor a *sane* idea to build a programming language whose significant syntax consists entirely of: ,`,().

Thank you. -- GoingBlind?
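For reference, the three marks being complained about do each mean something distinct; a quick sketch (the variables B and XS are invented):

```lisp
(defvar b 2)
(defvar xs (list 3 4))

'(a b)      ; quote: literal data; nothing inside is evaluated => (A B)
`(a ,b)     ; backquote is a template; the comma evaluates B   => (A 2)
`(a ,@xs)   ; comma-at splices a list into the template        => (A 3 4)
```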

So you are explicitly talking about a scenario where someone who is not a fan of, and not otherwise deeply experienced with, Lisp/Scheme, ends up full-time trying to correct the errors of students who are (obviously) also not fans of/experienced with Lisp/Scheme.

So we not only have the classic situation of "the blind leading the blind", we also have someone complaining that the language makes no sense, when what they see all day long are examples of the language which indeed make no sense (that is, horrid student programs).

I'm afraid that you've gone blind for perfectly reasonable reasons.

When I took Scheme, we wrote programs out by hand on paper before typing them in. In order to make quotes more visible, we wrote them as little triangles. To make nested parens more clear, we wrote the parens for certain expressions, such as cond, as square brackets. ISTR the Scheme system we used actually permitted those and enforced matching on them.


"The Homoiconic nature of Lisp means that one cannot tell by looking at the code alone whether something is a data structure or a function."

Lisp doesn't seem to offend in this area worse than C and all the languages derived from it. They all use { & } to delimit both code blocks & data structures. It takes me roughly the same amount of time to identify code vs. data in both Lisp & C.

Note also that some Lisp-inspired languages do syntactically distinguish code & data. e.g. Qi, which uses [ & ] to delimit lists while ( & ) are only used for code.

(Using < & > instead of ( & ) in ZIL seems to mean something as well, but I'm not sure what.)

--RobertFisher

Consider the following chunk of Lisp:

 (foo (bar baz) (llama pants))
You can't tell if foo is a function or a macro. If it's a function, then bar and llama are also functions or macros. But if foo is a macro, the other symbols don't even need to be defined -- they could be some magic words used by foo. Indeed, the whole thing could be effectively a piece of data: foo could mean "add this binding to some data structure somewhere".

Lisp allows you to use symbols (identifiers) for things other than just naming variables and functions. This is a huge feature, and dynamic languages that don't have it (such as Python) end up using kludges to replace it (such as using empty classes). But it means that you can't tell by looking at the surface syntax whether a given symbol is going to get evaluated or used as a piece of data.
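To illustrate, here is a hypothetical macro (WITH-BINDING is invented for this example) in which a symbol in the call is used purely as data:

```lisp
(defmacro with-binding ((name value) &body body)
  ;; NAME arrives as an unevaluated symbol: data, not a variable reference
  `(let ((,name ,value)) ,@body))

(with-binding (bar 5)
  (* bar bar))   ; => 25; BAR needed no prior definition
```

From the surface, (with-binding (bar 5) ...) looks just like (foo (bar baz) ...); only knowing that with-binding is a macro tells you BAR is a magic word rather than a function call.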

--AnonymousDonor

Let me read from the book of Genesis:

 The path is clear,
     though no eyes can see,
 the course
     that was laid out long before...

whoops, wrong passage. Let's try this one:

 And God said, "Lo, and let there be coding conventions!"  And, there were.  
 God saw that all was good, committed his changes to the repository, and took 
 the remainder of the afternoon off.

This is why we have a Sabbath, actually.

Seriously, Lisp has a number of coding conventions intended to address these issues. Global and/or special variables have asterisks surrounding them. Constants have plus-signs. Lisp is case insensitive, so it is entirely possible to have macros use all-caps, like C coders do. Or, surround them with ampersands or some such. (I prefer all-caps myself.) The point is, it is not Lisp's job to make this stuff patently obvious -- it's the management's job.
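Sketched out, the conventions mentioned look like this (the names are invented):

```lisp
(defvar *current-connection* nil)  ; "earmuffs" mark a special/global variable
(defconstant +max-retries+ 3)      ; plus signs mark a constant
;; The reader upcases symbols by default, so DEFVAR, defvar, and DefVar
;; all name the same symbol; writing macro calls in ALL-CAPS is purely
;; a team convention, not enforced by the language.
```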

Some would argue that with proper functional support, you wouldn't need macros anyway. To a large extent, they're right, but even Haskell folks have found a need to introduce TemplateHaskell, which restores some aspects of macros.

So, as far as I can see, it's best to not complain about a language because it offers no visual cues of its own, but rather to have your organization debate on the best coding convention to make them stand out. If, indeed, it is at all desired in the first place. Most macros are pretty self-evident by virtue of their name. If it looks like it is a control construct, it's likely a macro. If you can't tell, use your IDE to jump quickly to the definition and find out.

And, yes, I'm going to pull the IDE excuse out of the hat, because it's what all the Java hackers do whenever you complain about how badly Java sucks too. So, there. --SamuelFalvo?

BrainFsck is wonderful if you have a really powerful IDE for it.

In SQL sometimes I consciously forgo an "inline" conditional, such as the DECODE function, in favor of an explicit "block" conditional, such as a CASE...WHEN clause, because inline conditionals don't stand out. Conditionals can be important business logic and I don't want to bury them in trivia. I'll use the inlines for things like simple zeros-for-nulls substitutions, but not for business rules such as "IF INVENTORY_COUNT < MIN_STOCK_ALLOWED THEN ORDER_STATUS = 'HOLD'" (pseudo-code). It's intentional verbosity, or at least using a readily-visible and recognizable code pattern. --top

Shrug. I started Lisp sometime in 2000 with no prior Lisp exposure (but a varied background, including a lot of strength in C and C++). Around January 2003, I rewrote CLISP's backquote implementation. Year before that, I developed MetaCVS in Common Lisp, using CLISP. I found Lisp code to be easy to work with from the beginning.

The trick is to read enough of the language spec so that you have a feeling for the vocabulary. Fact is that we don't know the meaning of (foo bar) in the form (xyzzy (foo bar)) if we do not know what xyzzy is. xyzzy could be a special operator or macro which gives an arbitrary meaning to (foo bar). You can't read Lisp like you read non-extensible languages with lots of surface syntax. You have to have the vocabulary. You have to read Lisp forms from the outside in, and use the ones you know to help you guess facts about the ones you don't know. What is helpful about Lisp is that there is no difficult grammar. Even if we don't know what a form means, we know what its tree structure is. In (xyzzy (foo bar)), we know that bar is not a direct constituent of xyzzy, but in the (foo bar) clause. In many languages, we do not know stuff like this. Some functional languages, for instance, have numerous operators (much more than C) with crazy precedence rules. The Haskell report has a special exponential notation for dealing with precedence levels.

Example: (let ((x (blah))) (foo (bar x))). We do not know what blah, foo, and bar are. However, reading from the outside in, we know let. If we don't know let, we're an utter newbie and need to read a tutorial. From the syntax and semantics of let it is obvious that (blah) is being evaluated: it is a macro or function call. Moreover, x is established as a binding. (foo ...) is obviously a form to be evaluated, since it is in the body of the let, not wrapped in anything. Now foo could be a macro or a function call, so we cannot infer much about (bar x), but we can guess that x refers to the lexical variable x created by the let. Either (bar x) does something with the value of x, or with the place x itself, or else foo is a macro such that (foo (bar x)) gives a meaning to the form (bar x) in which the x refers to the binding in scope. All this flashes through my head in less than a second. Then, since I'm sitting in the Vim editor and have ctags, I just follow tags to the definitions of blah, foo and bar. Aha, foo is just a function; oh, and bar is a macro which reads and updates x; and so on.
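Spelled out as code (blah, foo, and bar remain hypothetical names), the inferences above attach to the form like this:

```lisp
(let ((x (blah)))   ; (blah) is evaluated: BLAH is a function or macro;
                    ; X is established as a lexical binding.
  (foo (bar x)))    ; body form, so (foo ...) is evaluated; whether
                    ; (bar x) is an ordinary call depends on FOO --
                    ; if FOO is a macro, (bar x) could denote a place,
                    ; a pattern, or anything else it chooses.
```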

Once you work with a code base for a while, you absorb its vocabulary, so together with the vocabulary from the language, you can read it easily. --KazKylheku

But isn't one still retraining their head to think differently from all the AlgolFamily experience they have built up? For many, flipping one's mindset is neither easy nor quick. Now maybe a Lisp-only shop could leverage the alleged advantage of those who have already made the mental change-over, to demonstrate the advantage of Lisp-prepared minds; but so far such attempts haven't flown: IfFooIsSoGreatHowComeYouAreNotRich. I still suspect that the strong point of the AlgolFamily is hard-wiring certain control and flow structures in a way that is consistent across shops, so that the mind learns to read them in a quick, reflex-like fashion. Perhaps memorizing the patterns of Lisp's library can eventually do the same, but it requires an org to bet its medium-term life on a supply of talent that knows those libraries well, which will be a far smaller pool than AlgolFamily-trained heads.


Having encountered Clojure as my first Lisp, I was amazed by its simplicity and barely, if at all, struggled with the syntax (at least far less than with EVERY other language). But when encountering Scheme or CL I can see why people complain: ordinary parentheses are used not only for function calls, but also for ANY kind of grouping (especially prevalent in macros). The creator of Clojure has also stated that he intentionally added visual cues (for example, through vectors). Do you think Clojure (please read a good amount of it first) solves the problem, or at least moves considerably in that direction? I would love to hear some opinions on this, since I'd like to confirm or deny the impression that there are few complaints and adoption is strong 'despite being a Lisp'.
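For concreteness, here is a sketch of the bracket distinction in question (the rough Scheme/CL shape is shown in a comment for comparison):

```clojure
;; Clojure reserves distinct brackets for distinct roles, so grouping
;; is visually separated from function calls:
(let [x 1              ; [] is a binding vector, not a call
      y (+ x 2)]
  {:x x :y y})         ; {} is a map literal
;; => {:x 1, :y 3}

;; The Scheme/CL equivalent spells the binding list with the same
;; parentheses used for calls:
;; (let ((x 1) (y (+ x 2))) ...)
```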

bh: Clojure has certainly helped me learn to read Lisp. The visual cues inherent in the different literal forms help to separate them out and make them more obvious than when you're dealing with apostrophes and parentheses alone.


See also: NestedListsAsDictionaries, UniversalStatement, ChallengeSixLispVersionDiscussion (more examples)
CategoryHumanFactors, CategoryLisp

EditText of this page (last edited April 10, 2014) or FindPage with title or text search