Ideal Programming Language

Surely every experienced software developer daydreams about his or her IdealProgrammingLanguage for general-purpose programming. Don't you have echoes of that college class on "Comparative Programming Languages" or "Compiler Design and Construction" ringing in the nether regions of your brain when you're struggling with some obtuse syntax in your daily lingua franca? Surely I'm not the only one suffering this affliction...

So what would be the main elements of your own ideal programming language?

-- KirkBailey

These points seem to be addressed by a NaturalProgrammingLanguage.

Why is top down better? Any real programming language has to allow both - I need to be able to do stepwise refinement, and build reusable components once I see the need.

Also what is meant by NaturalProgrammingLanguage? I just see an ACM article with very little meat in it... -- PaulMorrison

This page seems divided between a description of Dylan, if you like static typing, and Self, if you like dynamic typing. The two are surprisingly similar in other respects: both are completely object-oriented, have modules, and have multiple inheritance. The rest is in DylanVsSelf.

Self does not support RealMacros, and Dylan only supports DefineSyntax-style macros (but see <> for a more expressive macro extension).

My ideal language would have built-in support for a set of paradigms that cover the most common application areas. Built-in support implies convenient and efficient syntax. Alternatively, the language syntax could be seamlessly extensible, but this is very hard to achieve and might complicate the language excessively. My experiences with building APIs that implement 'sublanguages' in C++ make me think that there are practical limits to this approach.

Here's my current list of favored features for an IdealProgrammingLanguage:

I would not hobble a language with requirements to fit the classic 'separate compilation and linking' model, which I think has hampered C++'s development. BertrandMeyer was right to jettison this for the EiffelLanguage. In fact, there are a number of good ideas that could be cribbed from Eiffel, including the requirement for a tool that shows a stripped-down definition of a type and its interface. In my IdealProgrammingLanguage, a type's interface would include all functions in a program that take a value of that type as an argument, which follows current thinking in the C++ world on what constitutes a class' interface.

-- DanMuller

My ideal language would look very different from yours, but I'm interested in your ideas. In particular, what kinds of features are you looking for in a packaging system? (What is it about Modula-2's and C++'s namespaces that you like?) And what kind of primitive support would be required for TableOrientedProgramming? (Why does it require a separate language feature, rather than just a library?) -- AdamSpitz

Some form of package-oriented naming system seems essential to me for the development of large programs. Identifier name clashes are a fact of life, especially when you start using third-party software. I think that C++ programmers have learned a lot about such systems as namespaces have evolved. The KoenigLookup algorithm, for instance, makes GenericProgramming usable in the presence of namespaces.

Regarding TableOrientedProgramming: Given the other features mentioned, e.g. procedures as first-class values, it might not take much to support it. I just want to make sure that it is convenient to use. (But storing procedures in persistent tables could take some doing! {FileSystemAlternatives}) Language design is in large part about making it as easy as possible (but no easier) to express common constructs. The desire to have RelationalModel support built in, for instance, comes from observing the discontinuity between current general purpose programming languages and database systems. Embedding or constructing SQL strings in code and dealing with issues of data translation between result queries and native variables is hardly seamless! The bookkeeping activity seriously distracts from 'getting the job done'.
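With first-class procedures, the in-memory half of this is indeed cheap to get. A minimal Python sketch (all names here are invented for illustration; persisting the procedures themselves, as the parenthetical notes, is the genuinely hard part and is not attempted):

```python
# Hypothetical "table" whose rows are plain dicts; the 'pricing' column
# holds a first-class procedure rather than a data value.

def standard_discount(total):
    return total * 0.95

def bulk_discount(total):
    return total * 0.85 if total > 1000 else total

customers = [
    {"name": "Acme",  "pricing": bulk_discount},
    {"name": "Smith", "pricing": standard_discount},
]

for row in customers:
    # Call the procedure stored in the table row:
    print(row["name"], row["pricing"](2000.0))
```

Storing the *name* of a procedure in a persistent table and looking it up in a registry at load time is one common workaround for the persistence problem.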

Some people would argue that you should just use different languages for different parts of your program, but I can't help but think that many general-purpose languages come close to covering the spectrum of needs already. It seems silly not to continue pushing the envelope. But having said that, I'd have to admit that my pace of new language acquisition has slowed in recent years. There may be less popular languages out there with features that I'm not familiar with which come closer to my ideal than I know.

So - tell us what your ideal language would look like!

-- DanMuller

I agree with all of your general opinions. :) Avoiding name clashes is important. Making features seamless, rather than just possible-but-ugly, is important. And I do believe that it ought to be possible to have one general-purpose language that does everything well, rather than needing to glue multiple languages together.

My favourite approach to namespaces is the one taken by the SelfLanguage. Self has no extra rules regarding namespaces at all - we just use ordinary objects as namespaces. Now that I've seen how it's done, I can't imagine doing it any other way. Self's approach gives me name-collision protection, but it's also ridiculously flexible. I can override a name at any level - globally, or for a whole subsystem, or for just one class of objects, or for just one object. I can make a name call a method instead of just referring to a particular object. And it's dynamic, too - swapping a namespace is as easy as assigning to a slot; I can do it at runtime if I want to.

(I don't often need all that flexibility, of course. The point is that I get the flexibility for free, without any extra rules added to the language, because Self's ordinary objects are so simple and so flexible that they make wonderful namespaces. I can describe the system in more detail, if you want.)
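The flavor of objects-as-namespaces can be loosely approximated in Python, though Self's actual mechanism (parent slots and message lookup) is far richer than this analogy suggests. All names below are invented:

```python
from types import SimpleNamespace

# Two interchangeable "namespaces" - just ordinary objects with slots.
real_gfx = SimpleNamespace(color=lambda r, g, b: (r, g, b))
stub_gfx = SimpleNamespace(color=lambda r, g, b: ("stub", r, g, b))

app = SimpleNamespace(gfx=real_gfx)
print(app.gfx.color(1, 2, 3))   # (1, 2, 3)

# Swapping a whole namespace is just assigning to a slot, at runtime:
app.gfx = stub_gfx
print(app.gfx.color(1, 2, 3))   # ('stub', 1, 2, 3)
```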

I don't know much about TableOrientedProgramming, and it's been a while since I've had to use a database at all. Could a SQL query be represented by an object? Could translation between result queries and native variables be done by a library, if the language was reasonably reflective? (If it sounds like I don't understand what I'm talking about, could you give me an example of the kind of ugly bookkeeping activity you mean?)

Ever since I met the SelfLanguage last summer, my tastes have changed. A year ago, I could have described my IdealProgrammingLanguage to you (it would have been something like a cross between Ruby and Lisp); today, I'm not sure. If I figure it out, I'll write something here.

-- AdamSpitz

Thanks, I'll take a look at Self. A brief initial perusal, however, shows me that it's heavily object-oriented. The OO paradigm, with single-dispatch methods at the heart of its polymorphism, is more cumbersome than I'd like for many problems that I encounter; I'd like to see it replaced by MultiMethods, which can give you all the benefits of OO polymorphism with one easy-to-understand and far more flexible mechanism. Also, the dynamism of Self gives me pause; I worry that such languages can never be efficient enough to be truly general-purpose. (OTOH, being able to store procedures in tables and databases as I would like to is probably very difficult to reconcile with the notion of a compiled-to-native-code language.) I'm intrigued by whole-program analysis methods as used by Eiffel, as a means of improving program correctness checks and enabling optimizations. Such analysis seems like it would be prohibitively difficult in languages like Self. As you say yourself, the extreme flexibility of, say, swapping namespaces is rarely needed - and such flexibility comes with a cost attached, with respect to program optimization and static correctness checks. Personally, being a static-checking kind of guy, I wouldn't be willing to pay that price in large systems.
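For readers unfamiliar with MultiMethods: the idea is to dispatch on the types of *all* arguments, not just a privileged receiver. A toy Python sketch of the mechanism (a real implementation would handle inheritance and ambiguity; this one requires exact type matches, and every name is invented):

```python
# Minimal multimethod registry keyed on (operation name, argument types).

_registry = {}

def multimethod(name, *types):
    def register(fn):
        _registry[(name, types)] = fn
        return fn
    return register

def dispatch(name, *args):
    key = (name, tuple(type(a) for a in args))
    return _registry[key](*args)

class Asteroid: pass
class Ship: pass

@multimethod("collide", Asteroid, Ship)
def _(a, s):
    return "ship destroyed"

@multimethod("collide", Asteroid, Asteroid)
def _(a, b):
    return "asteroids merge"

print(dispatch("collide", Asteroid(), Ship()))      # ship destroyed
print(dispatch("collide", Asteroid(), Asteroid()))  # asteroids merge
```

Neither argument is the "owner" of collide - which is exactly the flexibility single-dispatch OO makes awkward.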

The improvements that Self seems to promote versus class-based systems are less of an issue if you get rid of classes and methods, and return the notion of 'types' to what they once were, namely data. The ability to treat namespaces and types as first-class objects (the latter being, in a sense, what prototypes get you) is not that important to me. It enables introspection, which is useful for some types of problems, but I suspect that MultiMethods are also a sufficient and convenient way to address such problems. The dynamism of being able to modify object behavior on-the-fly is no doubt useful during development; in my ideal language, it would be nice to be able to interactively add procedures to a namespace, for instance. But I see this as more of a quality-of-implementation issue for a development environment, and I'd be nervous about using a language that encouraged this sort of programming for large system development. With first-class procedures you could, of course, get this sort of behavior when you need it without too much fuss, but I think that it would be more explicit in the code when it was happening.

Regarding interacting with databases: Yes, you can certainly encapsulate queries and ease the translation of data types with suitable libraries. Where things get cumbersome, though, is in writing queries. If the query is not dependent on run-time data in your program, you typically define it in the database and refer to it using a name string (or a trivial SQL query string that refers to it) in your program (encapsulated or not). If your query depends on run-time data in the program (which is often the case), you have to build the SQL string - and string manipulation and formatting is not the most straightforward way of expressing a data query! The query string is then parsed and interpreted at run-time by your SQL engine.
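The bookkeeping being complained about looks something like this in Python (using the standard sqlite3 module for a self-contained demo; the table and column names are invented). The query reaches the engine as a string assembled at runtime, and the result columns have to be translated back to native types by hand:

```python
import sqlite3

def find_orders(conn, min_total, status):
    # The query is a string, parsed and checked only at runtime by the
    # SQL engine - the host language's compiler knows nothing about it.
    sql = "SELECT id, total, placed_on FROM orders WHERE total >= ? AND status = ?"
    rows = conn.execute(sql, (min_total, status)).fetchall()
    # Manual translation from DB values to native types:
    return [(int(r[0]), float(r[1]), str(r[2])) for r in rows]

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, total REAL, placed_on TEXT, status TEXT)")
conn.execute("INSERT INTO orders VALUES (1, 250.0, '2003-01-01', 'open')")
conn.execute("INSERT INTO orders VALUES (2, 50.0, '2003-01-02', 'open')")
print(find_orders(conn, 100.0, "open"))   # [(1, 250.0, '2003-01-01')]
```

Parameter placeholders at least avoid pasting values into the string, but any structural part of the query (choice of columns, extra conditions) still has to be built by string manipulation.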

I've been working quite hard on this problem at work, building a relational query 'sublanguage' in C++. I've done a reasonably good job of making it easy to write even complex relational queries in straightforward C++-like syntax. The result is an expression tree which I later translate to SQL. One of the downsides to my approach (IMO) is that the types of the results (i.e. the types of the tuples retrieved from a query, which are collections of name/type/value triples) are all determined at run-time. Getting such a system to work with static typing might be possible using advanced template techniques, but it would be (to say the least) a bear, since the relational expressions are defining new types from base relations in the database. I do use reflection in some places, particularly in the code that moves data out of the DBMS API into my own system.

It would be interesting to see your SQL-creating API. I have heard about many failed attempts to make such APIs cross-platform unless a small subset of SQL is assumed. Is that even your goal? Also, IMO dynamic languages work best with database-intensive stuff, so that you don't have to worry as much about converting DB types to language types. I like it when languages allow one to have dictionary arrays with dot syntax, so that one can say "row.firstName" instead of "row['firstName']" (in the ObjectsAreDictionaries spirit). The second form is only needed when there are funny characters in the field name. See NotesOnaCeePlusPlusRdbmsApi.
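The dot-syntax convenience is easy to sketch in a reflective language. A hypothetical Python wrapper (class and field names invented):

```python
class Row:
    """Wrap a result dict so fields read as attributes."""
    def __init__(self, data):
        self._data = data

    def __getattr__(self, name):
        # Called only when normal attribute lookup fails.
        try:
            return self._data[name]
        except KeyError:
            raise AttributeError(name)

    def __getitem__(self, name):      # subscript form still available,
        return self._data[name]       # e.g. for odd field names

row = Row({"firstName": "Ada", "last name": "Lovelace"})
print(row.firstName)      # "Ada" - the common, readable case
print(row["last name"])   # subscripting where dot syntax can't reach
```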

No doubt this sort of thing would be far easier in a language without compile-time typing. :-) But I value strict compile-time typing. If I change a base relation to remove a field or change its type, I would like the compiler to tell me about code that's still referring to what no longer exists, or that makes now-incorrect assumptions about its type. Unit tests don't help much with this sort of thing; having a unit test for every field in a large database schema would be an incredible maintenance overhead, and wouldn't tell you a damn thing about actual application code that might be wrong.

-- DanMuller

It sounds to me like you've thought this through, and that you know what you like. I still don't think I'd like your language very much, though. Here are the kinds of questions I'd want you to ask yourself:

I'm asking these questions because I think that the two things I value most in a language are simplicity and, um, humanness (for lack of a better word). Some language features just add so much complexity that they hurt more than they help (StaticTyping is the big one, though I know that half the world disagrees with me). And some language features have really cool mathematical properties, but don't fit my brain very well (I think MultiMethods might be like this), or else compromise some of the psychological properties that I want the language to have (I'm pretty sure LispMacros are like this).

I've written a little bit about the humanness thing on the RealObject page, though I don't think I did a very good job of explaining myself.

-- AdamSpitz

P.S. If you're worried about performance, take a look at some of the papers on the SelfLanguage website. These guys invented a whole bunch of really cool VM techniques.

I suspect that most people who dislike strict typing put an emphasis on the ease of writing code. Although that's important, I prefer to balance it against maintainability. Strict typing can find problems early, which is important for large systems that undergo change. The additional complexity (provided you don't get too baroque with type definitions) is worthwhile, IMO. I'll happily turn to a language like Perl for a quick program - but I shudder at the idea of writing an entire accounting application in it. Perhaps a truly general-purpose language should offer both strict and weak forms of typing?

Making programming languages "human" goes only so far. Programming is not a natural human activity, like gardening or jogging. Every technical area of expertise develops its own language that must be mastered by practitioners, for good reasons: The need for unambiguous precision and the ability to express concepts that have no clear analogue to everyday experience. This necessarily introduces some complexity. It's all a balancing act.

I'm not ignoring ease-of-maintenance in favour of ease-of-writing; I think DynamicTyping is a win for both. We can debate the actual merits of StaticTyping elsewhere (there are plenty of pages for that on this wiki), but I promise that this isn't just a case where we dynamic-typers are ignoring the big picture; we've seen the big picture, from both sides, and we like it better from the dynamic side.

(Of course, I wouldn't write a large system in Perl, either. ;)

Given that we're both interested in large systems and both concerned with maintenance, why do you think our IdealProgrammingLanguages are so different? (That's an honest question. I really don't know the answer.)

About the human thing: I don't know what to tell you. All I can really say is that I believe it's more important than you seem to believe it is. At the very least, I think that this psychological stuff is worth understanding, even if you decide to compromise it in the pursuit of some other desirable quality.

-- AdamSpitz

Regarding our ideal languages being different: Most likely due more to our experiences rather than our reasoning abilities. :-) In my twenty years of programming, I've mostly used procedural, imperative, strongly-typed languages with compile-time type checking. Defining and using types is an abstraction process that I'm thoroughly comfortable with, and I find that when the compiler gives me a type error, I more often than not have in fact not written what I intended to. I see debugging such errors at run-time as being more time-consuming than diagnosing a compile-time error. (Although I'll admit that compile times are typically longer for compile-time checked languages, so I do pay a price for this.) Since I've worked mostly on 'shrink-wrapped' products, and diagnosing errors that occur in the field is difficult to impossible, I'm constantly paranoid about finding errors as early as possible. Although I value and use unit testing, I don't believe that it can give me adequate coverage to find all type errors without inordinate effort.

To some extent, the difference between strong and weak typing may be a chimera. At some level, all languages have to check types. It's more a matter of when the type checks occur, what conversions may be done implicitly for you, and how much information the compiler wants the human to provide. I've just started looking at ML a little, and it looks interesting - strictly typed, allowing the definition of new types, but the compiler infers the types of variables for you. (I'm downloading OCaml as I write this.) This sounds like a good allocation of tasks between human and computer!

Regarding 'the human thing': I agree to the extent that computers should do the tedious bookkeeping, and humans should be allowed to deal in abstractions that are relevant to the problem domain. I don't think that necessarily means coming up with abstractions that are metaphors for the 'real world'; sometimes those are useful, sometimes they're not appropriate. Modern physics advances in part because people come up with new mathematical tools (or rediscover old ones) that allow physicists to work with powerful abstractions that often have little to do with the real world. Without that topic-specific expressiveness, they'd be severely hobbled. Providing powerful abstractions is the most useful thing that a language designer can do. The object metaphor has been taken to an extreme that is becoming harmful; it's time to refine and separate some of the useful abstractions that tend to get bundled tightly inside it. -- DanMuller

"At some level, all languages have to check types. It's more a matter of when the type checks occur..."

Yes! I definitely do value strong typing (as opposed to StaticTyping) - I think it's important that if I try to call a method that doesn't exist, the system gives me an error, rather than silently producing garbage.

Or if I try to divide a string by an integer, or ...

We agree on strong typing, then. I could definitely see making static typing optional. Perhaps our preferences are not so far removed from one another.

But I'm not willing to pay the price of reduced flexibility and greater complexity just so that I can catch those errors at CompileTime rather than TestTime?. CompileTime is only a few seconds or minutes before TestTime?, anyway.

I think that flexibility is a matter of what kinds of implicit type conversions a language allows, and the complexity allowed in user-defined types. (E.g. multiple inheritance introduces a form of flexibility. Actually, someday I need to delve into the similarities between convertibility and inheritance relationships.) Some conversions are always safe - for instance, a number can always be converted to some string representation. But the inverse of this example is data-dependent. Perhaps here is where we might differ - I have no problem with implicit safe conversions, but I prefer a language to require my explicit say-so in order to apply an unsafe conversion. I want to explicitly identify possible failure points in a program.
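The asymmetry between the two directions can be shown in a few lines of Python (the helper name is invented):

```python
# Safe direction: every number has some string representation.
s = str(42)                    # always succeeds -> "42"

# Unsafe, data-dependent direction: not every string is a number, so
# the conversion is an explicit, visible failure point in the program.
def parse_int(text):
    try:
        return int(text)
    except ValueError:
        return None

print(parse_int("42"))         # 42
print(parse_int("forty-two"))  # None
```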

Runtime is a really great time to check for type correctness, because we have perfect information. The system doesn't have to do any reasoning at all to determine what types an object could be; it just looks at the object and sees what type it is.

See comments above re safe vs. unsafe conversions. The system may have to do some reasoning, with a run-time cost. Another reason to explicitly identify such places in your program, in my mind. Also, there are times when it's possible for a compiler to determine from the code when a type error is definitely being made. Surely you wouldn't object to the compiler alerting you to such cases when it can determine them?

The main scary thing is that my testing might not actually cover all of my code. But that thought is just as scary whether I have static type-checking or not, because most of the mistakes I make are things that a type-checker can't possibly catch. If someone shows me a piece of code and says, "This code isn't covered by any of your tests, but all the types are right," I don't feel particularly confident about it at all.

-- AdamSpitz

But you would feel slightly more confident than if it didn't check at all! When you're writing software to send outside your company, every bug that makes it to the field is terribly costly. It involves Support time, Development time to reproduce the problem in their own environment instead of the user's (often very, very hard, and involving getting a copy of parts of their environment), and some intangible measure of goodwill lost on the customer's part. Quite aside from the fact that I would much rather be writing new software than fixing bugs that come in from the field.

I understand that bugs from the field are expensive, and I used to be a StaticTyping advocate for exactly that reason. Then I started programming in Smalltalk, and my bug rates went way down, not up. And the reason they went down is: StaticTyping causes more errors than it catches.

I didn't realize this before I tried Smalltalk, because the errors that StaticTyping causes are a particularly insidious kind: they're just ordinary errors, and you make them because you're human and your brain can't do everything perfectly all the time. And so when you discover one of these errors, you don't think, "This static type system made my life more complex, which caused me to create this bug." You just smack your forehead and laugh at how stupid you are.

Our industry is churning out some really lousy programs, and I don't think it's because we don't have enough static safety guarantees. I think it's because we're trying to build some really complex things, and our brains aren't up to it. And so when I hear people say, "This big complex system lets me end up with fewer bugs," I get suspicious. I'm pretty sure that static typing causes a lot more bugs than it catches - but we don't notice, because we don't realize that we wouldn't have created the bugs if the system were a lot simpler.

-- AdamSpitz

Perhaps Perl's 'strict' option is on the right track. Let me have implicit unsafe conversions while I'm messing around. Before I commit my code, make me identify the places where I'm relying on these.

Are you saying that static typing distracts a programmer, causing him/her to make more mistakes? If so, I agree it's possible, but I'm skeptical that it's a big factor. I've programmed some medium-sized stuff in Perl (which has neither a truly static nor a truly dynamic type system) and didn't notice a lower bug rate. It would be interesting to see if anyone's studied this objectively. Or are you saying that there are bugs specific to static typing that you can't make in a dynamically typed language? If so, could you describe them? -- DanMuller

No. It's not just that it's distracting; it's that it makes everything more complicated.

A static type system is a very non-orthogonal language feature. If you add a static type system to a language, you have to go through virtually all of your other language features and make up rules for how they interact with the type system. You have to add auxiliary features to the language (like a namespace system, or your TableOrientedProgramming stuff) because your core language features aren't expressive enough to represent those things anymore. Some of the features you'll need to invent (like ContraVsCoVariance, or generics, or Java's interfaces) will exist solely for the purpose of making the type system more livable. And some features have so much trouble fitting into the type system that they'll just get left out of the language altogether. (Why isn't a Java class or a C++ class a first-class value, for example?)

This complexity doesn't just manifest itself in the language rules. Code written in the language becomes more complex, too. You have to write extra declarations and extra interfaces to get the compiler to bless your program. You have to write extra code to work around the times when the concepts in your head don't fit into the type system. You have to write extra code to work around the lack of the features that were left out of the language because the type system made them too complex.

And the main reason for all this complexity, the fundamental underlying purpose of it all, is to catch some bugs at compile time.

I don't buy it. I don't think it's a net win. I think it's too much complexity, and I think that type errors just aren't important enough. Type errors aren't a particularly common kind of error, or a particularly hard kind of error to debug; they just happen to be a kind that we know how to invent static checking systems for. But the cost of the complexity of these systems is just too high.

Some of my objections don't apply to languages with clever TypeInference systems. I like those a lot better. If I'm going to work in a language with a static type system, I want it to stay out of my way, be flexible enough to let me express my ideas directly, let me change my mind about things without penalty, and not force me to spew type declarations all over my code. The more flexible and simple it is, the happier I am.

But I'm still not convinced that any static type system is worth the trouble, even a very well-behaved one like that, because, frankly, my Smalltalk programs aren't riddled with type errors. They're just not. All of these type errors that everybody's worried about - they're just not much of a problem. The cure is worse than the disease.

StaticTypingRepelsElephants. :)

-- AdamSpitz

I think you've got it backwards. A static type system typically isn't added to a language. When you set out to build your language you make an early decision: either give it a static type system or give it dynamic typing. I'm not a language designer but my guess is it's easier to add a static type system than to add support for efficient dynamic typing. There are many, many facets to the perceived advantages of a dynamically typed system and I think by far the biggest one is that dynamically typed languages attract good programmers (this said by a C++ programmer.) -- AndrewQueisser

I'm sure you're right. I wasn't really talking about retrofitting a static type system onto an already-implemented language; all I mean is that if you look at the list of features for a statically-typed language (even an unimplemented one like the one DanMuller described at the top of this page), it's scary to realize how many of the features wouldn't need to be there if you gave up the static type system. -- AdamSpitz

The ideal programming language is probably one that fits the way I think and work, but probably would be distasteful to many others. I have been in enough language debates to know that many things are probably subjective preferences.

Many things are subjective, but not all. Details of syntax certainly are, and engender endless, and relatively uninformative, discussion. These issues interest me little, and weren't the point behind my creation of this page. Some semantic issues are also subject to preferences, such as the choice between strict and weak typing - which primarily affects the semantics of changes to variables. (Even weakly-typed languages check types at some point!)

I have kicked around a draft language with virtually *no* typing (ThereAreNoTypes). If you apply an operation that means "compare as number", then the variables are parsed to see if they can be interpreted as numbers. Thus, it is more like "validation" than "type checking". The concepts of validation and type checking are linked, in my observation, especially if you go the ultra-dynamic route.
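One way to read "validation instead of type checking" is as a Python sketch like this (the function is invented for illustration): operands are parsed on demand, and failure happens at the point of use rather than as a declared-type mismatch.

```python
# An operation that means "compare as number": parse both operands,
# raising a validation error if either can't be read as a number.

def num_equal(a, b):
    try:
        return float(a) == float(b)
    except (TypeError, ValueError):
        raise ValueError(f"not interpretable as numbers: {a!r}, {b!r}")

print(num_equal("3.0", 3))   # True - "3.0" validates as the number 3.0
print(num_equal("007", 7))   # True
```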

You might want to take a look at TheThirdManifesto's type system. They define type inheritance via "specialization by constraint", which has similarities to the validation you're talking about. The difference between the two approaches seems to be, still, mostly a matter of when you check type (== adherence to validation criteria), not whether you do or not. My issues with weak typing have to do with maintainability of large systems. Although I'm a unit test advocate, I'm not convinced that unit tests are an efficient or effective way to test for type errors in large systems.

I'm more interested in getting at the bigger issues of code and data organization, addressed by things like the packaging mechanisms implied by OOP, and integration of things like relational database models. (Are there any good models of UI design? I'm less experienced with UI programming.)

One solution may be to make the syntactic representation customizable. The actual internal representation would be something more machine-friendly (CLR/.NET-style?) or parse-friendly, or perhaps a data structure of some sort (DataStructureCentricViewDiscussion) that is converted to code in a favored format when one goes to edit it.

Certainly an interesting topic. Using a non-textual format as your primary means of storing source code has the disadvantage of invalidating a whole universe of text-based tools, however. It's been tried, but never caught on.

Editing it as text does not necessarily preclude editing it some other way also. (I am assuming 2-way translation.) Further, do you have any links to prior attempts and possible reasons for their failures, BTW?

Agreed. I'm just pointing out that if your primary storage means is not text, then the use of text-based tools to manipulate source code becomes more complex. (Although I worded this more strongly than warranted above.) Regarding previous attempts, no, I have only vague memories of various projects that tried this in the past. I was also interested in this possibility at one time, so I was attentive to such things when they came to my attention. But it's been a long time.

There are actually probably quite a few proprietary systems that store source code in binary form today. One that I'm vaguely familiar with is Crystal Reports. There is no convenient way to move this to and from a textual representation, unfortunately, and this fact really complicates refactoring tasks.

I would like to see an OpenSource Crystal-like tool.

Why don't you check out DataVision? on SourceForge at

I did, thank you! Looks very interesting. I'll bring it to the attention of my co-workers that are involved with the report generating portion of our product. -- DanMuller

My ideal language would have:

TCL/TK is easy because it is arguably broken; it was never meant to scale, but it has. C is fast because you have to do low-level junk, which makes it a PITA. Perl is "powerful on the internet" because:

  1. It has good text-munging facilities.
  2. You can write code fast in it (need website up by tomorrow! okay, perl "%incomprehensible($_ ~ /390(#&*()@/": okay, website's up!)
  3. It was there first.

So what you're basically asking for, when you boil it all down, is a language that's dynamically typed, has familiar Algolish syntax, good texty facilities and can be byte-compiled and run on a VM. That's three of your four goals; interestingly maintainability wasn't among them. Also left out was object orientation or sophisticated support for FunctionalProgramming; I assume you did this deliberately.

SchemeFortyEight and many brands of SmallTalk would fulfill your goals, but violate your unstated anti-goals. The ParrotVirtualMachine?, which is the backend of the forthcoming Perl6, would fulfill all of them. It's also supposed to target Python and Ruby. So far (Perl6 isn't out) it's been made to work with SchemeLanguage, IntegerBasic?, and a subset of Python. Keep an eye out for it as it develops. -- AnonymousDonor

Contrast having lots of specialized LittleLanguages - though I would like to see general languages become more standardized. Most languages agree that a = b (or a := b) is an assignment, a > b a comparison, and a*b^c+d an expression (except ones like Lisp and Scheme), and even function definitions are fairly similar; but when it comes to expressing classes, inheritance, etc., things diverge. Perhaps mathematical syntax should be a guide, as it was for expressions - but then what really are classes, closures, etc.? See TheoryOfObjects, CategoryTheory. We should be able to write things like Set x = {"a","b","c"} or Vector v = (1,2,3) in the IdealProgrammingLanguage, but right now most languages use "{}" to mean ordered arrays. Mathematicians have given a lot of thought to their symbol systems, so I think they should be followed where possible within the constraints of a QWERTY keyboard. Perhaps that too should evolve (and be accepted).

You're right that there are some basic mathematical notations that we can build on, but only to a point: there is no agreement about the right way to express exponentiation, and even standard mathematical precedence rules break down when you have too many different operators. The designers of APL had so many operators that they settled on a uniform right-to-left precedence rule for everything, with brackets for overriding. Once you get beyond mathematics, as you said, every language goes its own way - I can never remember whether it's 'while', 'for' or 'do', 'leave' or 'break'. Given that humans have difficulty learning different syntaxes, I'd like to see multiple small languages, but with a fairly standard syntax, and it has to support Unicode - we can no longer force everyone to speak English! Is that unrealistic?! -- PaulMorrison
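To make the APL point concrete, here is a small worked comparison (in Python, simulating the APL parse by hand) of how the same expression evaluates under the two precedence regimes:

```python
# Standard mathematical precedence: multiplication binds tighter than
# addition, so 2*3+4 parses as (2*3)+4.
standard = 2 * 3 + 4       # = 10

# APL-style evaluation: strictly right to left with no precedence levels,
# so the same expression parses as 2*(3+4).
apl_style = 2 * (3 + 4)    # = 14

assert standard == 10
assert apl_style == 14
```

The same string of symbols denotes two different values, which is exactly why agreement stops once the operator set grows beyond basic arithmetic.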

I don't wish for much. I just want a good, standard Scheme implementation that works well with Unix and C, has a full-featured set of libraries, makes it not too hard to port apps to Windows, and has an O'Reilly book about how to do useful things in it. There are a couple of contenders (GuileScheme, SchemeFortyEight), but they each have problems, and none of them is really "standardized" yet. The trouble is that Scheme is so "easy" to implement that lots of people do, and many of the major Scheme implementations are meant as teaching languages (MzScheme, MitScheme), so the authors have no interest in standardization or lots of libraries. (Feel free to RefactorByMerging with some other page)

Here is my idea for a good Programming Language: -- AaronBlack

ROTFL - but I'm intrigued about puding types?

Good idea. Now we just need a big fat committee to make it completely unusable.


Why not just take a Java/C#-style language and make it morphable into a JavaScript-like language? Perhaps have an optional strong-typing setting that, when set, forces things to be declared as a specific type (other than Object or Variant). VB was able to be both scriptish and type-declared, and was accepted by corporate for the most part (although I did not like the right-side auto-nil feature). I don't know whether that offers enough meta-features (like closures) for some, but it would allow a single language to satisfy both strong-typing fans and scriptish fans while still being familiar. Perhaps only arrays could be of type Object/Variant when the forced-typing setting is on, since the need to store collections of different types always comes up. Note that if forced typing is not selected, one can still declare types; it is just not a requirement. Also, please have a string concatenation character besides the plus sign. Let's call such a language JavaMaybeScript?, eh?
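This "one language, typing optional" idea has since appeared in practice as gradual typing. A hedged sketch of the idea using Python's optional annotations (the static check itself comes from an external tool such as mypy, not the runtime):

```python
# Gradual typing: annotations are optional and ignored at runtime, but a
# static checker can be switched on per project -- roughly the "optional
# strong-typing setting" described above.

def greet(name: str) -> str:   # declared types, checkable statically
    return "Hello, " + name

def describe(x):               # no annotations: behaves "scriptish"
    return str(x)

assert greet("world") == "Hello, world"
assert describe(42) == "42"
# A checker like mypy would reject greet(42) before the program runs;
# plain Python would only fail when the concatenation actually executes.
```

TypeScript takes the same approach for the JavaScript side of the proposal: one syntax that can be used loosely or with types enforced.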

Is there a page for IdealProgrammingEnvironment?? Because I see this as an ease-of-development issue on par with the actual language. For instance, Eclipse has partially changed the way I approach Java programming, much as the Turbo Pascal IDE did when I switched from BASIC. I now make certain programming decisions that I wouldn't have previously, because Eclipse makes it feasible to do certain kinds of code transformations. And I can go faster, because I don't have to stress about the names of variables and so forth - I know that Eclipse can easily change them across my entire code base later.

I would like to see an environment more like a spreadsheet app, one more geared toward experimentation and immediately seeing the results of your changes (or your unit tests). I would also like to see an environment that is not text-file-centric, but supports tables, images, charts, equations, flow-diagrams, GUI forms, and other domain-specific representations. Kind of like LiterateProgramming, but more like a playground than a tome. Now I admit I haven't really tried SelfLanguage or SmalltalkLanguage, but I still see the text-centric approach in these languages. Maybe a closer analog is a more general-purpose MathematicaPackage.

I don't have it all thought out yet, but here are some features I'd like to see:

-- DougKing

(From a discussion in WhyIsTheFirstArgSpecial): The sort of intertwingled impure functional approach to accomplishing this is what creates much confusion and violates a number of assumptions for useful optimizations. Lisp and OCaml are good examples of how one shouldn't go about achieving both OO and functional in one language.

That's a strong statement, considering that CLOS is considered by some to be the best object system around. Which functional languages do achieve the right blend of OO, in your opinion?

It's a statement not that CLOS is bad, but, rather, that we could do better still. I'm not alone in thinking we could do better (e.g. see AlanKayOnMessaging), and I have some thoughts on how to get there, starting with rejecting 'EverythingIsa object' and scrapping entirely the impure functional paradigm in favor of interleaving pure functional with a procedural or workflow paradigm. I am quite convinced that doing so would be better for meeting a very wide variety of useful NonFunctionalRequirements (including persistence, performance, distribution, mobility, modularity, concurrency, security, and simplicity) while still allowing the encapsulation benefits of OO. And before I go on, I'll note that the issue of 'impure functional' applies equally well to OCaml, Ruby, Python, C++, Java, and Smalltalk. I'm not picking on CLOS in particular.

There are no mainstream functional languages that I'm satisfied with, but ErlangLanguage at least has the right philosophy. It uses FirstClass processes as objects and communicates between them via immutable values, and thus makes at least one of the valuable distinctions. There is a short discussion of OO on the ErlangLanguage page that is worth perusing. However, Erlang still has 'impure' functions, and while its best practices avoid mixing communication and calculation, even Erlang would have greater potential as a language if this were enforced semantically.

I'll try to summarize the problems I see with Impure Functional (As compared to Pure Functional interleaved with Procedural): One can get around all of these problems to a degree, but the lack of support for (and enforcement of) the distinction at the language level makes it almost impossible to implement them in a manner transparent or orthogonal to the language application and existing language libraries. Essentially, one must shatter the uninteresting and SimplySimplistic 'EverythingIsa' object paradigm by recognizing, at the language level, a few very useful distinctions (behavior object = FirstClass process, and message object = immutable value). To get the other features one sprinkles in some SymmetryOfLanguage (e.g. procedure-description = value for which 'execute' primitive works; process identity is also a value (e.g. a URI, preferably unique and unforgeable); state of FirstClass process can be described in terms of a value and thus mobilized or persisted; state is managed by special FirstClass processes that by default provide such features as transactions, versioning, etc.).

Erlang doesn't have all those SymmetryOfLanguage features, but it is a good step in the right direction. There is a lot of simplicity to be gained by making various NonFunctionalRequirements easier to achieve, which is done by making and enforcing useful distinctions at the language design level. CLOS may be a pretty decent object system, but that is true only so long as your NonFunctionalRequirements don't force you to hand-cut a path through the glaring forest of DiscontinuitySpikes at the edges of the Lisp virtual machine.

Of course, why stop with just communication vs. calculation? There are yet more useful distinctions one can make to simplify and optimize things even further. For example, FirstClass process-object mobility is somewhat painful because it essentially provides a service from one location at a time. But mobility is only required insofar as the processes are stateful and long-lived. If one were instead to distinguish between process and service, saying, for example, that a service is a stateless value describing a process to spin off upon receiving a message and a process is a uniquely scheduled continuation, then one can easily distribute copies of the service (which is a value and therefore immutable) to any number of machines, each of which can spin off the necessary process when it needs that service. Where one requires shared state, one simply has URI references to common state objects as part of the service description. Where one requires local state, it is created at need within the individual FirstClass process.
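The service/process distinction above can be sketched concretely. In this hypothetical Python sketch, a service is nothing but an immutable description (so any machine can hold a copy), and a process is only spun up from it on demand:

```python
from dataclasses import dataclass
from typing import Callable

# Hypothetical sketch: a service is an immutable *description* of
# behaviour; a process is created from it only when a message arrives.

@dataclass(frozen=True)              # frozen = immutable, safely copyable
class ServiceDescriptor:
    name: str
    state_uri: str                   # shared state lives behind a URI
    handler: Callable                # stateless behaviour

def spawn(service, message):
    # Any machine holding a copy of the descriptor runs this locally;
    # local state, if needed, would be created here inside the process.
    return service.handler(message)

echo = ServiceDescriptor("echo", "uri:state/none", lambda m: m.upper())
# The descriptor can be shipped to any number of machines unchanged:
assert spawn(echo, "hello") == "HELLO"
```

Because the descriptor is a value, "mashing up" services reduces to constructing new descriptor values, with no stateful object ever needing to migrate between machines.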

Suddenly such things as cheap (LDAP) service registries, associated matchmaker services, and even cheap mashups by functionally or procedurally (or by hand) combining or constructing new service-description values, all become cheap and extremely viable. One can create 'abstract services' that specialize and optimize via partial-evaluation based on what is already inside the 'local' and 'domain' service-registries. One avoids the hassles and complexities of 'jumping beans' with stateful 'objects' (or 'processes') moving from one machine to another; one simply constructs and shares a 'service-descriptor' value (or a functionally abstracted service-descriptor value). Communication overhead for services can be reduced, often even eliminated, because far more functionality can simply be embedded in the shared service-description and handled locally. As with the distinction between calculation and communication above, this is win for almost every NonFunctionalRequirement one might care to name.

Once again, one doesn't need language-support for this sort of distinction between service and process, but it is extremely useful to have it already embedded in standard libraries and language methodologies in a manner that makes it a common part of the whole-language system and hassle-free rather than specific to applications. Erlang could do all of this, but it doesn't because the language designers didn't think to make that particular useful distinction.

In any case, my IdealProgrammingLanguage will:

How about some real features:

Why should I tell the computer how to sort? It should figure it out on its own!
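The quip above is the declarative ideal in miniature: state *what* ordering you want and let the runtime pick the algorithm. Python's built-in sorted() is a small, real instance of it:

```python
# Declarative: name the criterion (sort by length); the runtime chooses
# and implements the algorithm.
words = ["pear", "fig", "banana"]
by_length = sorted(words, key=len)
assert by_length == ["fig", "pear", "banana"]

# Imperative: *you* spell out the algorithm (a hand-written selection sort).
imperative = list(words)
for i in range(len(imperative)):
    j = min(range(i, len(imperative)), key=lambda k: len(imperative[k]))
    imperative[i], imperative[j] = imperative[j], imperative[i]
assert imperative == by_length
```

GoalBasedProgramming pushes this further: ideally you would state only the postcondition ("the output is ordered") and the system would derive the rest.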

See GoalBasedProgramming, DeclarativeMetaprogramming.

See also QuestForThePerfectLanguage, FutureOfProgrammingLanguages, MyFavouriteProgrammingLanguage

View edit of July 8, 2010 or FindPage with title or text search