William Underwood

Just a naive programmer from Saskatoon, Saskatchewan. Give me a shout some time, if you wanna talk patterns, programming or whatever... it's not like I sleep these days anyway :)

Be aware that I occasionally use the short form of 'cwillu' rather than repeating WilliamUnderwood many times over, just so ya' know. And like MichaelBolton, I'm not related... (there's a reason I don't typically use my first name online :p)

Archived Conversations and Bloggish Material

:)

[I'm still not quite sure what this one was about]

Yay for hardware failures...

What happened?

Laptop died... pain in the arse to backup everything off of it onto my desktop, server died at work, so we have fun with the hex editor, another computer's power supply crapped out, and yet another one's bios chip got a scratch in it... I didn't even want to ask them how that one happened. It's been a fun week or two :)
Pages I'm Currently Interested In

RE: GoedelNumbering comment above: you do know that Goedel numbers are inherently composite, formed by multiplying primes raised to powers, yes? So literally speaking, there cannot be a prime Goedel number. If you meant something more metaphorical, you might want to expand on that. There are probably things I could say about your clarification. -- dm
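For the benefit of third parties, dm's point is easy to see in code: under the standard construction, a Gödel number is a product of prime powers, so for any sequence of length two or more it is always composite. A toy sketch (all names here are hypothetical, not from the discussion):

```python
def primes(n):
    """First n primes, by trial division (fine for tiny n)."""
    found = []
    candidate = 2
    while len(found) < n:
        if all(candidate % p for p in found):
            found.append(candidate)
        candidate += 1
    return found

def goedel_number(symbols):
    """Encode a sequence of symbol codes as 2^s0 * 3^s1 * 5^s2 * ..."""
    g = 1
    for p, s in zip(primes(len(symbols)), symbols):
        g *= p ** s
    return g

# The sequence (1, 2, 3) encodes as 2^1 * 3^2 * 5^3 = 2250 -- a product of
# several primes, hence never itself prime for any nontrivial sequence.
```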

I sure don't know much about them... it was just a sense I had, which I might go into more detail about.

In that case, yes, it would be more metaphorical. Something along the lines of general operations which are primitive, that cannot be factored into more primitive general operations.

For example, I think I would consider addition and multiplication to be relatively prime in a mathematical domain: there's no factoring of the logic which can take place that doesn't leave some remainder in a third function.

My thinking is admittedly unclear at the moment; hopefully it's clear enough to either convey the sense of it, or to make clear the uselessness of it (if this has been noted, explored and abandoned already)

Or is it the case that a GoedelNumbering of a programming language is already what I'm thinking of...

-- cwillu

I suspected it was something like that. The traditional primitive is simply successor: addition is built from repeated successors, and multiplication is built from repeated addition. Along the way, closure requirements give rise to the negative and then the rational numbers, starting from the initial positives (actually, from the initial zero).
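That tower of primitives -- successor, then addition from successor, then multiplication from addition -- can be sketched in a few lines (a toy illustration, not anything from the discussion):

```python
def succ(n):
    """The single traditional primitive."""
    return n + 1

def add(a, b):
    """Addition as b repeated successors."""
    for _ in range(b):
        a = succ(a)
    return a

def mul(a, b):
    """Multiplication as b repeated additions."""
    total = 0
    for _ in range(b):
        total = add(total, a)
    return total

# mul(3, 4) -> 12, built from nothing but succ
```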

You may also be interested to know that unique factorization, although possible for integers, is the exception, not the rule, for systems in general; it is not possible for multivariable polynomials, for instance. When composite, there are multiple possible non-equivalent factorizations. So most things cannot be cast into a normal form.

This is also true for interesting things like graphs and programs: there is no unique factorization into a normal form, which is a large part of what makes many otherwise-interesting algorithms untenable.

Many, many, many people have attempted to form a universal set of primitives for human language or human thought, famously beginning with Leibniz, continuing with Frege, Boole, et al. BasicEnglish attempted this. Many AI projects have attempted this. Loglan and LojbanLanguage attempted this in some ways. The SemanticWeb sort of attempts this. You'll know when there's a satisfactory one, because it will effectively enable strong AI.

For programming languages, the Lambda Calculus does this. TuringMachines do this. CombinatoryLogic does this. (HaskellCurry, EssAndKayCombinators)

Any further clarification of where you're going?

-- dm

Yay... more reading to do, namely, getting my head around the universal set of primitives effectively enabling strong AI, and unique factorization. Is 'normal form' in the same sense as Dr. Codd's relational normal forms, i.e., eliminating ambiguity/duplication?

-- cwillu

[For the sake of third parties, let] me be careful: Yes and no. No, not exactly in the way casual database people mean it. Yes, exactly, in the sense that it was meant when the name was coined in database theory (by Codd, IIRC). In database theory, 5th normal form means fully factored, in quite the same sense that it means to factor an integer. "Normal" really just means "nominal": a selection of some form to which all others can be reduced. Which is a factorization. Or refactoring, as is mentioned all the time around here. Even integers can be partially factored, if they are composed of more than 2 primes, but again, for complex systems, the issue of factoring gets much more complex than it does for integers. -- dm
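dm's aside that integers composed of more than two primes can be partially factored is easy to make concrete (a toy sketch, nothing from the conversation itself): 30 has exactly one prime factorization but several partial ones.

```python
def prime_factors(n):
    """Full factorization into primes -- unique for integers."""
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)
    return factors

def partial_factorings(n):
    """All ways to split n into two nontrivial factors -- not unique."""
    return [(d, n // d) for d in range(2, n) if n % d == 0 and d <= n // d]

# 30 fully factors exactly one way: [2, 3, 5]
# but partially factors three ways: (2, 15), (3, 10), (5, 6)
```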

Tangentially related, my sense is that (TuringCompleteness? aside) any operation can be written more easily given one set of primitives versus another. Lisp's eval construct comes to mind. And I think this is a general phenomenon, and that the primitives which make a given operation easier to write aren't necessarily computable from the ones which make those primitives easier to write. This is what I intended by the analogy with primes.

-- cwillu

It's a dirty little secret that computational complexity depends on the primitives on which algorithms are based. In theory. In practice, everything is based on the same primitives: electronic circuits that implement boolean operators as primitives.

But aside from complexity, expressiveness certainly varies wildly depending on the base set of primitives. This is exactly what a lot of SmalltalkBestPracticePatterns is about; this is what Lisp programmers mean when they talk about programming by creating new languages in Lisp; and this is what ChuckMoore means when he talks about Forth programming as finding just the right set of Forth words to create for a project. (He is actually the clearest about the importance of this, but he can't describe how to do it, whereas the Lisp and Smalltalk communities do talk a lot about how, but tend to regard the motivation as self-evident, so it is described less, and less well.)

Finding the right set of primitives for a project is, nonetheless, an art, and again, will remain so until we nail the subject sufficiently to understand the mechanism of thought. But there is lots of guidance for the art, and many sets of non-universal primitives. PatternLanguagesOfProgramDesign can be regarded as a facet of the same topic.

-- dm

Fair enough (by which I mean, that's what I mean :p)

Now, the next stop in my train of thinking is that the language which provides the ability to deal with different factorings dynamically is the next step.

AOP strikes me as a way of trying to deal with this (although it somehow seems a bit too static to work in a general way; the extent of my experience being AspectJ, that might just be an artifact of my experience).

Relational (specifically excluding SQL) also strikes me as a good approach: the ability to define constraints is not limited to a particular predefined view of data, and given a more powerful execution model (i.e., multi-methods), it seems pretty damn close to providing that arbitrary refactorization.

-- cwillu

[meta: adding sigs, hope you don't mind, makes it a bit easier when the inevitable dynamic factoring occurs :)]

Incidentally, an awful lot of the mainstream of mathematics for the last few centuries also was directly concerning a search for primitive building blocks.

Polynomials were discovered as a widely useful normal form, particularly when limited to powers of a single variable each multiplied by a different coefficient and added. That normal form seems intuitive today, but it took a long time to develop. And it took a long time to discover/prove that, unfortunately, they couldn't all be solved, at least, not in terms of the primitives being used.

Infinite series and infinite polynomials were explored to expand those limits, and thereby, new primitives were found: power series. These were sufficient for all "analytic functions", which is a lot.

But still not enough; many integrals, for example, still had no solution in the existing toolbox of primitives, which led to Fourier series and integrals as new primitives, allowing a much larger set of things to be solved.

This in turn was problematic, and required redefinition of "function" to resolve, so then "generalized functions"/"distributions" were added as primitives to the toolbox, along with a generalization of Fourier transforms into general transforms with any kernel, some aspects of which have recently been high profile for engineering problems: wavelets.

There remain many issues on that path of mainstream development that are still unsolvable with the current toolbox, particularly with nonlinear functions.

The search goes on. -- dm

Oh, I forgot to mention: that development in math isn't just parallel, it's on the same topic. Part of the problem is that we do in fact keep inventing new primitives via a bootstrapping process on top of systems that use the old primitives. So finding the perfect set of primitives for all thought would be an unending process. It would, however, be great if we could just find a set that covered everything the human race knew as of, say, 12,000 BCE as a core, with add-on sets for specialty areas that are updated every year. -- dm

Doesn't that imply that the set of primitives is not sufficient for strong AI, or at the very least, that strong AI is a red herring, seeing as it requires as a prerequisite an infinite number of primitives? -- cwillu

No, it's a red herring for the notion that successful AI will be inherently vastly smarter than humans (which a large number of silly but otherwise rational-seeming people believe).

People of circa 12,000 BCE were exactly as intelligent as we are today (skipping some interesting side issues), but we know vastly more than they did. The core set of primitives should (at least in a handwaving sense) be sufficient to make strong AI as smart as them, and therefore as smart as us, but not knowledgeable about advanced math concepts by the standards of the 21st century.

You'll complain that I'm mixing up knowledge and intelligence. Up to a certain basic human level, effective intelligence requires a minimum amount of knowledge. "Common sense" is, in part, knowing "obvious" things about the world and the effects of actions, but is completely lacking in AI to date, Cyc notwithstanding.

Humans don't have an infinite set of primitives; we do invent new ones, but at a slow rate (compared with infinity) -- and they are layered on top of the primitives that are hardwired into our brains. -- dm

I'll complain that you're making intelligence meaningless. People of circa 1950 had some pretty ------ emotional problems which limited their capacity for learning and adapting in practice. IOW, their intelligence was lower than ours. People of circa 12,000 BCE? Screw them; their emotional problems (not to mention their lack of consciousness) made it impossible for them to exercise any intelligence. Do you really think it's intelligent to just sit there waiting for someone to kill you because you haven't been ordered to defend yourself yet?

The human mind is infinitely more malleable than you seem to believe. I'll make just one observation about it, and you'll be in a position to tell me if I've got the facts straight or if I'm basing my conclusions on an apocryphal story. The ancient Egyptians discarded the brain when making mummies because they didn't have any idea what purpose it served. Now, if you asked any modern person what purpose the brain serves, it would be pretty astonishing to hear them say that they don't know. And not for the trivial reason that we've advanced in medical science.

No, for the far more important reason that modern people are conscious and consciousness is something that's metaphorized to occur between the ears and behind the eyes, exactly where the brain is. IOW, the crudest observation would lead any modern person to believe that the brain is the seat of consciousness, the seat of the modern human mind. And it follows implacably that any peoples who didn't make this simple realization simply didn't possess any consciousness behind their eyes and between their ears.

Consciousness is among the most basic intellectual tools available to every modern human being. The fact that such a fundamental tool had to develop puts the lie to your many ridiculous claims. Chief among those ridiculous claims is the idea that the human mind is some sort of Turing machine, able to uplift itself to any theoretically reachable abstraction / understanding / knowledge, and that there are no hard limits to any one mind's malleability.

Oh, and it's true that strong AI will be vastly more intelligent than human beings. Pure technical considerations make this inevitable. Something that's at least as smart as the smartest human being in human history but with thousands of times more knowledge, life experience, and fine control over its own knowledge base can't help but be vastly more intelligent. -- RK

Postscript added at top: Ah! I had forgotten you have a thing about neural nets vs Turing machines. Some neural nets are exactly as powerful as Turing machines, others are less powerful, others (that no one knows how to implement) are more powerful. To some extent, that's an implementation detail; I wasn't talking about that, either. Some of the mathematical primitives I mentioned above are more powerful than any machine we know how to build; when applied to the real world, they are translated to less powerful approximations, so far.

The Egyptians, actually, did have a semi-modern view of the brain, which was forgotten by the Greeks, Romans, and others later. Their funerary rites did not treat the brain with the respect you might expect because... well, actually, trying to make their beliefs about the afterlife seem logical by modern standards is futile. For one thing, though, they knew that the brain, as an organ related to thought, was as useless after death as were the other internal organs they removed. But they had very complex beliefs about three aspects of the soul: the ka, the ba, and the akh. It should be immediately striking that that, all by itself, is rather different from any modern view, whether religious or scientific.

If you want to really understand the Egyptians' view of things, including on this topic, you have to first understand their view of ma'at, which was the basis of just about everything in their culture, and is one of the least-translatable words known in any language.

As for the rest -- once again, you've misunderstood me. Nothing you said seems to contradict what I was attempting to talk about. My point is merely that we cannot, as yet, make a computer be even as intelligent as people were 12,000 BCE, that's all. Being able to do so would be a big breakthrough, no matter how low your opinion of neolithic man. Hell, making a computer as smart as a dog would be a huge breakthrough.

The issue of intelligence is a very complicated one, and I am not even attempting, anywhere on this page, to say much in particular about it. I have been talking about operational primitives in various areas, and several times touched on the fact that a good set is a prerequisite to strong AI.

Among other things that I did not mean: I certainly did not mean to imply that a good set of primitives is sufficient for strong AI, but it may well have sounded like I meant that. It just wasn't the topic.

So, as I say regularly, please try not to jump to conclusions like that; very often I'm not even trying to talk about what you think I am. -- dm

P.S. "it's true that strong AI will be vastly more intelligent than human beings" -- yes, eventually. I was giving a reason why the first one won't be. You don't have to believe me, but I did offer a reason.

What I meant about human minds not being Turing equivalent is this: though you can use a primitive Turing-equivalent language running on a computer to uplift yourself to any other programming language in the same computer, no matter how abstract and powerful it might be, you can't do that with any particular human mind. The human mind may be able to use a set of operational primitives to construct a more powerful set of such, but it isn't flexible enough to accept the constructed set sufficiently to construct the still higher set. It doesn't have the necessary reflection, either in software or hardware, to accomplish such a feat.

I don't know anyone who seriously claims that the very first AI, version 1.0, will be smarter than human beings right off the bat. That seems like a red herring to me.

The story about the Egyptians' funerary rites wasn't that they took out the brain along with the other internal organs, but rather that they discarded the brain and not the other internal organs. The Greeks also divided a person into three parts roughly associated with thought, emotions and physical needs. And the only way to understand such an ancient belief system is with a heavy psychological basis. -- rk

Wow. I leave for 3 hours, and come home to a treatise on artificial intelligence and its relation to early human intelligence. I gotta go see movies more often (or is that less often?)! :p
-- cwillu

It occurs to me that RK no doubt perceived major disagreement with me when I said

"People of circa 12,000 BCE were exactly as intelligent as we are today (skipping some interesting side issues), but we know vastly more than they did"

But when I said "skipping some interesting side issues", I meant all nitpicks about ways in which neolithic man was not as intelligent as modern man, because that wasn't the topic; it was a side issue that I didn't want to get into. In terms of effective intelligence, obviously neolithic man suffers by comparison: he'd be hopeless trying to drive a car, for instance. But that's all far, far afield from what I was trying to talk about.

In terms of genetic traits, neolithic man is considered 100% modern (there is in fact a possible counter-argument that some degree of evolution may have happened very quickly since then, but that is not the majority consensus of specialists). Their hard-wired intelligence was, as I said, exactly the same as modern humans.

It has now taken two paragraphs to make a truly minimal distinction between effective intelligence and genetic intelligence, and there are already dozens of obvious points to be made in followup -- and that is why I was trying to skip all that as an "interesting side issue". The primary point is that we don't have AI as smart as neolithic man, but it would constitute strong AI if we did; it wouldn't have to equal Beethoven or Einstein to count as strong AI. -- dm

Hey guy, just in case you don't check your email, which I dunno, you may, but. the schedule is up at getdominos.com/staff/

Hey there, this is Travis. I'm hoping you actually check this; I don't know where you look first for new messages, so I'm putting it topish. If you have email and it's easier, let me know; otherwise I was just looking for the program you told me about that collected entire sites. I can't remember it. Anyways, yeah, let me know; email is travis xianity net, or you could leave it here. I'll look back in a couple of days, and yeah, just wondering what's up. Is this thing in Ottawa going to work?

Hey Guy! Yep, this is a reasonably good way of getting hold of me.... In general, you can put it anywhere; clicking the date at the bottom of the page shows changes, of which any new messages will display prominently. Quite handy, actually.

The tool is "HTTrack Website Copier, Offline Browser for Windows and Unix". I've got a copy I can send you (messenger, ftp, or name your pleasure), or it should be reasonably easy to get hold of as well (GPL'd, so it's legal :). There may be some better programs out there as well [anyone lurking have any suggestions?], but ya. Also, some discretion is probably in order using it, although I'm sure you already know this (e.g., I'd avoid running it on a wiki)...

Things are going reasonably well here, although the combo of my sleep patterns and his work schedule does limit the number of useful hours of work that get done. But we've got enough groundwork laid that it should be fairly easy to make progress once I'm home. Good to hear from ya :) -- cwillu

Well, it's me again, I have a day off so I'm spending it on my computer. What was that type of logic called that used a yes, no and muh? I was just thinking about it while driving today.

From RK

From LanguageIsAnOs:

O William, where were you when I needed aid and wise counsel? You're now a member of the SyndicateOfInitiative. Help, advise, criticize, demand, whine, bitch, moan and complain; I'll take whatever I can get. -- RK (the devil)

[It's RK wanting to say hello and wanting you to contact him. RK is certainly not the mean guy everyone sees in his nightmares. He tends to be a little unreasonable sometimes though! -- ra]

Really, I hadn't noticed :p


Hi, I'm a friend of your dad's friend (long story, don't ask me about it). I currently live in Saskatoon as well. Anyway, I'm working on a Java Desktop Project called JD4X (jdx.sourceforge.net). Drop me a mail if you are interested in a project like this. -- Tay.

Hi, I'd love to see some of the code behind your thoughts on TransparentRmi. Also, I didn't understand the comment regarding the packing of the source files -- it's plain old .tar.gz, unless I'm missing something. -- GuyGurAri

Hmm... I must have gotten a slightly corrupted download... Cygwin and Mandrake couldn't unpack the uncompressed file, although I was able to extract the contents by hand with a text editor (weird).

From/to SunirShah

SunirShah, your worst nightmare? :)

You haven't met my roommate. Avid java fan, expert troll. Which is terrible (and entertaining) because I hate java, and I'm also an expert troll. -- ss

Out of curiosity, why do you hate java? (I can't seem to find much on that around here)

I'm fine with Java as long as people don't claim that it's technically good. There's always a better technical solution than Java, but I'd almost always recommend Java for anything mind-numbing because it's the lingua franca of programming. Nonetheless, it's an instant indicator of how little someone knows about the art if they claim that Java 1 (Java 2 is different) has any technical merit. Java was made quick and dirty. Now that the Java Community Process has been established, things are improving, although it's probably too late for much of it. If you read what the engineers of Java say, even they have serious regrets about how it's built--and praise, but I'm responding to zealots here. I get pinned as a Java hater because I disdain Java zealots who boast hollow opinions.

If you want to understand further what was wrong with JamesGosling's Oak project, just consider that Sun Research had already developed the most advanced VirtualMachine ever invented to date--SELF. I'm not sure whether Gosling was aware of this project, but I question whether he suffered from NotInventedHere syndrome. Then again, considering his goal was to build a prototype mobile network, and thus write "a better C++" rather than a better language, maybe he didn't care. Marketing got hold of it, and so there we are. Don't forget the ultimate goal was not to create EnterpriseJavaBeans but to create JINI, which failed. I think they lost control once JINI got out of the bottle, so to speak.

Is it hatred to keep perspective? An unqualified yes. This is the software industry after all. 0 or 1. -- ss

[note, this is the original conversation, not quoted from elsewhere]


The Link List

Wiki Links

Interesting / Amusing Links

Library / Tool Links

Musings

When you're up in the clouds writing classes and lambdas and relations, don't forget that 'main(args)' is where the rubber meets the road.

My take on OO

I like object oriented code. But, from what I've seen around here, probably not for the same reasons you do (or don't). It's really something quite similar to why lispers like their macros. I use objects to describe my problem, and then use procedural code to solve them. The trick is, that if that solution becomes the description of a new problem, then it will probably find its way into an object as well... the procedural code looks deceptively like object oriented code :).

Changing the World

By the way, BlueAbyss has implications for wikis, which have implications for BlueAbyss. -- rk (1900 11/21/03)

Myself, I'm quite eager to get some OS hacked up, although I'm finding that Intel seems to have some unhealthy assumptions regarding OSes as well.

Seriously, if you want to talk, give me a shout. Rumour has it around here that I don't sleep anyway. But then again rumour also has it I repair hardware with my mere presence... I dunno :p

What are some of those unhealthy assumptions regarding OSes?

Mainly regarding the instructions and structures geared towards OS writers and security. Nothing fatal, really, but it could (will?) cause a performance hit when the manner in which one uses system calls, or one's concept of privilege/security, differs from what Intel intended. Not a huge surprise, of course :) It mainly means that, as usual, it's not as straightforward as it could be, which is of course true in pretty much any useful endeavour. It's been a few weeks since I read through the specs, so I'm not terribly clear on the details right now.

I loved the analogy you posted on WikiSquatting. Perfect! -- TimLesher

ThankYou - I wasn't sure if nobody noticed, nobody cared, or whether it was going to be disappearing soon :)


From SelfLanguage:

The only really interesting thing is the collaborative features of the Self environment. When we program in pairs, instead of both sitting at the same computer, we sometimes sit at our own individual computers (which are right beside each other) and pop up the Self window on both screens. Self gives us each a mouse pointer and a keyboard focus (so it's not like we're constantly fighting over control of the mouse cursor). Most of the time it's pretty much the same as sharing a single computer, but occasionally it smooths the process a little bit. (Instead of trying to get me to notice a typo - "Complier? Look, you spelled it complier instead of compiler. Over there. Up one line." - my partner can just fix it himself.) And sometimes there's the occasional boring task (renaming a bunch of methods, for example) that we can divide up and work on simultaneously.


Hi Liam,

cwillu.com doesn't seem to work.

Check out what I ran across (http://images.linspire.com/screenshots/ss_coho_big.jpg); the background image should be interesting for obvious reasons. -- RK

Sorry... the computer running behind cwillu.com's been off for a couple days; I really shouldn't run the http server off my testbench, for obvious reasons :). Now we just need the realtime wave of objects... and technically, the photo's pointing the wrong way... should be into the abyss, not out into the sky, eh? -- cwillu

Actually, I'd want a 3D image. The surface above would be above and would let you distinguish between up and down. Would like some kind of anisotropy in the horizontal plane too. But this is always how I imagined it. -- RK


So I've been thinking. 'anObject.anOperation(theParameters...)' This is infix. Two parameters. 'anObject', and the tuple '(theParameters...)'.

'foo, bar, baz doSomethingTo bing, bang, boom'

I'm already considering a language consisting of multi-methods which are passed a tuple straight out of relational theory (dispatching therefore occurs according to column names).

Relating back to capability theory... anything that you can do with a method, you can do with the data that method has access to. It should be possible (maybe?) to limit the data provided to a function such that one is limited to exactly the access provided by an arbitrary method (lambda, function, whatever). Providing a tuple should therefore be sufficient context for a method to operate on (no other context... not sure that this can jive with some nice things that lambdas do). This is probably trivially true if functions can be put in the same tuples (and this is probably necessary anyway)...
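The multimethod idea a few paragraphs up -- dispatching on the column names of a tuple rather than on a class -- can be sketched like this (all names hypothetical, not an existing library):

```python
# Registry mapping frozensets of column names to implementations.
_methods = {}

def defmethod(*columns):
    """Register a function to handle tuples having exactly these columns."""
    def register(fn):
        _methods[frozenset(columns)] = fn
        return fn
    return register

def dispatch(tuple_):
    """Pick the implementation by the tuple's column names, not by class."""
    fn = _methods.get(frozenset(tuple_))
    if fn is None:
        raise TypeError(f"no method for columns {sorted(tuple_)}")
    return fn(**tuple_)

@defmethod("x", "y")
def _point(x, y):
    return f"point at ({x}, {y})"

@defmethod("x", "y", "z")
def _point3d(x, y, z):
    return f"point at ({x}, {y}, {z})"

# dispatch({"x": 1, "y": 2}) -> 'point at (1, 2)'
```

Because the tuple itself names its columns, the tuple is also the whole context the method receives, which is the capability-style limit described above.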

I'm also considering some practical security issues: no one has demonstrated success in encrypting an algorithm in such a way that one can safely give it to someone else to execute without them knowing what they're executing.

An Informal, Incomplete and hopefully Incorrect Disproof of Encrypted Computation
Data is Code, Code is Data. One can be converted to the other. You could in theory consider the viewing of a JPEG to be the execution of the JPEG code. Now, consider an encrypted JPEG. Quite useless, unless you can decrypt it. There's intuitively (red flag?) no way to view an encrypted JPEG in a sensible way without decrypting first. I guess the real question is this: by moving to a turing-complete language, do we gain anything on that front?

It strikes me that the whole HaltingProblem (the 'no shortcuts to computation' aspect of it) seems to make it impossible.

Now, a possibility might be along the lines of a shared compilation/execution model, such that neither party reveals the entirety of their secrets. I don't know though...

You can encrypt the algorithm, and you can encrypt the data, but you can't expect to use either of them in that form. That's the intent.

What state is acceptable to store in a lambda, as opposed to in a relational tuple (assuming that performance of the database is not an issue)?

Ever notice that (other factors aside) the value of a wiki page is inversely proportional to the highest level of nesting in a 'discussion'? If both parties are reasonable, some extremely clear and lucid descriptions tend to develop. If one or both aren't, large trees start obstructing the view.

I never said there was any value to this page :p -- cwillu


You mentioned implementing the RelationalModel. Care to give more details? Your homepage seems to be down.

The details on TupleOrientedProgramming are what are available right now (yay for balancing between two jobs, etc...). The most striking details (to my mind) are the updating of every view implicitly unless explicitly copied, the use of those copies to provide transactions, and the philosophy that any tuple is an object.

And the homepage is hosted on a computer that's sitting in a few pieces; I'm in the process of moving the appropriate servers off my test bench (not sure what I was thinking) to one of the old clunkers I have lying around (which will be more than sufficient given the amount of traffic I draw + the amount of bandwidth I have). -- WilliamUnderwood

Liam, here's an idea I want to run by you considering I'm too internet challenged these days to do the proper research.

You can do things by creating a log and then doing all the legwork of distribution over TCP: marshalling, serializing, deserializing, et cetera. Or, if I recall correctly, you can use a distribution mechanism (SOAP??) wherein you create your objects directly in a distributed store and then let the stores on different machines handle all the details. The question then: is there such a distributed store that will suit? That is, is there one with partial replication, where you only get the objects you request? -- rk

Update: searching is utterly trivial as long as you don't involve esoteric functions like "find X such that title of X contains string Z" and "find X such that author of X is also author of Y such that Y is in category Z".

Simple queries are all about ORing and ANDing large bitmaps whose lookup is always trivial as long as categories keep two maps (one immediate, the other all descendants) and the log itself is publication date structured.

I don't anticipate more than one esoteric function in a query. Either way, there is no need for threading or parallelism of any kind. It's really straightforward; build a map then apply the esoteric function. It's only the final step, the application of the esoteric function, that even requires starting/stopping and management of results. And of course, the only way to speed even a few of them up is to build a search index like google's over titles / summaries / contents.

The categories themselves don't need any of that stuff. It's beautiful how you can AND and OR categories to see whatever you want. -- rk

Not completely related, but in my implementation of the relational model so far, selection and projection are done quite like this... the functions accept an arbitrary predicate which is applied to each tuple (with specific provisions for maintaining state within a run). -- cwillu

Here's how I'd do "find all X such that X is not in category Z, X has author of Y such that Y is author of W, W is in category Y";

First, you resolve all of the categories, dates, authors, and all other basic terms. So you look up the map for category Y, then you compute the map for category NOT Y. Then you take a sample of the maps you've got to find out which is most sparse. In this case it's almost always gonna be category Y. Then you apply the function Y is author of W to that map to build a new map consisting of all authors that wrote something in category Y. Then you apply the function X is written by Y to that map. Then you AND that final map with category NOT Y. Now you've got your results.

The key point here is that there aren't twenty different ways to do this. There's only ever one way that makes sense. The only trick involved is to pipeline all the functions and be able to stop and start the pipeline.
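The shape of that plan can be mocked up in a few lines; everything here (the per-entry function, the entry numbers, the class name) is a stand-in I made up, not a real implementation:

```java
import java.util.BitSet;
import java.util.function.IntUnaryOperator;

// Toy version of the plan: resolve the basic maps, start from the sparsest
// one, push it through the expensive per-entry function, then finish with a
// cheap AND against the remaining precomputed maps.
public class QueryPlan {
    // expand a map by applying a per-entry function to each set bit
    static BitSet map(BitSet in, IntUnaryOperator f) {
        BitSet out = new BitSet();
        for (int i = in.nextSetBit(0); i >= 0; i = in.nextSetBit(i + 1))
            out.set(f.applyAsInt(i));
        return out;
    }

    public static void main(String[] args) {
        int universe = 8; // entries 0..7 exist in the log
        BitSet categoryZ = new BitSet();
        categoryZ.set(1); categoryZ.set(4); // the sparse map: only two entries

        // NOT Z, computed over the known universe of entries
        BitSet notZ = (BitSet) categoryZ.clone();
        notZ.flip(0, universe);

        // apply the "esoteric" step only to the sparse map's two entries
        // (a stand-in: each entry maps to some related entry, i + 1)
        BitSet related = map(categoryZ, i -> i + 1);

        // finish with a cheap AND against the precomputed map
        BitSet result = (BitSet) related.clone();
        result.and(notZ);
        System.out.println(result); // {2, 5}
    }
}
```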

Of course, this is what Prolog does, which makes me wonder.... -- rk


You know what's disturbing? Working all day. Watching a movie. Working some more. Lamenting the fact that it's 6:30, and so it's too late to buy the new tires for your car today. Working a bit more. Cursing the sun glaring off your monitor.

And then remembering that your window faces east.
DisagreeByDeleting. I don't mean to offend (as I've said three times now (hence the yawn)), I already apologized for not posting an explanation on your homepage first, and I'd rather have a broken window boarded up than leave it broken until a stained-glass window replaces it, if you get my drift.

Ok. Thanks for having me reconsider something ill-advised.


Stuck in Linux for a while. Actually installed it on an empty partition I forgot I had (which I had reserved long ago for guess what?). And GAIM doesn't seem to work well with MSN, so I can see you're online but I don't think you can receive my messages. -- RK


William, I reverted your recent change to WikiSquatting. "cum" is the correct term, in the sense of "together with". "Come" in this context is incorrect. -- MarkIrons?, 2004-07-25


Hey William, quick question. I don't recall where, but I recall reading something you wrote about how you've reimplemented half of common lisp in Java in your own personal library in order to feel productive. I'd love to see a brief summary of what pieces you've implemented, maybe a brief description of your api if you have time. I enjoy your writings and I'd find this valuable as a learning experience, since I'm currently trying to learn Lisp/Scheme and am interested in those parts that you felt you must have.

Sure (although that was almost a tongue-in-cheek remark :))

First, a set of classes giving me HigherOrderFunctions. The basic form borrows heavily from the structure recommended in BlocksInJava: users of the classes use only the interfaces, allowing (for instance) a function accepting no arguments and returning an Object to be used in place of a function returning void and taking a parameter of type Object. There's a utility class which then provides basic operations with those functions.
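A minimal sketch of that substitution idea; the interface and method names here are my guesses for illustration, not the actual library's:

```java
// Callers depend only on the interfaces, so an adapter can let a no-argument
// function stand in where a one-argument function is expected, simply by
// ignoring the argument.
public class Functors {
    interface NullaryFunction { Object call(); }
    interface UnaryFunction { Object call(Object arg); }

    // adapt a no-arg function into a one-arg position
    static UnaryFunction ignoringArgument(final NullaryFunction f) {
        return new UnaryFunction() {
            public Object call(Object arg) { return f.call(); }
        };
    }

    public static void main(String[] args) {
        NullaryFunction hello = new NullaryFunction() {
            public Object call() { return "hello"; }
        };
        System.out.println(ignoringArgument(hello).call("ignored")); // hello
    }
}
```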

Sweet, already did that too using the same info from BlocksInJava.

A set of wrappers for java's collection classes, which makes use of the above functions. This lets me have a common interface to many different types of collections/implementations, with the bonus of almost faking list comprehensions in a language which doesn't support them. Map, fold, reduce, etc are all now possible. There's again a utility class which lets me do pretty much arbitrary conversions between several common classes and lists (i.e., asList() type things for String, StringBuffer, streams, arrays, etc).
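Roughly the flavour of it, sketched with invented names (the real wrapper api presumably differs):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Once collections sit behind a common interface, map and fold become
// ordinary static methods taking functor objects.
public class CollectionOps {
    interface Fn<A, B> { B apply(A a); }
    interface Fn2<A, B, C> { C apply(A a, B b); }

    static <A, B> List<B> map(List<A> xs, Fn<A, B> f) {
        List<B> out = new ArrayList<B>();
        for (A x : xs) out.add(f.apply(x));
        return out;
    }

    static <A, B> B fold(List<A> xs, B init, Fn2<B, A, B> f) {
        B acc = init;
        for (A x : xs) acc = f.apply(acc, x);
        return acc;
    }

    public static void main(String[] args) {
        List<Integer> xs = Arrays.asList(1, 2, 3, 4);
        List<Integer> doubled = map(xs, new Fn<Integer, Integer>() {
            public Integer apply(Integer x) { return x * 2; }
        });
        Integer sum = fold(doubled, 0, new Fn2<Integer, Integer, Integer>() {
            public Integer apply(Integer acc, Integer x) { return acc + x; }
        });
        System.out.println(doubled); // [2, 4, 6, 8]
        System.out.println(sum);     // 20
    }
}
```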

That's a good idea, I have my own collection API, but stopped short of wrapping the built in types, guess that's next.

Then there are my personal conventions to fake such things as adding third-party methods to a class, and the wonderful tool support of EclipseIde to give me a ReadEvalPrintLoop, to make the boilerplate code from the above quick and easy to work with (which is a major point in itself), etc.

This sounds interesting, can you expand on this a bit more. I'd love to fake up a ReadEvalPrintLoop for CSharp.

All of this, and I still miss simple code-rewriting (i.e., macros); my first attempt at implementing a table based collections api failed due to some complexity that couldn't be sufficiently abstracted in pure java. I was using an 'aList.atElement(<someIndex>).put(Object)|get()|exists()' style api, which was nice as the public api of the collections themselves was extremely small, and a lot of commonality in dealing with individual elements between different implementations/classes was easily factored out. The problem was that the boilerplate code for the more involved structures (namely tables) started getting stupidly complicated: boilerplate code inside of boilerplate code inside of boilerplate code. What would have been simply a couple of parentheses with macros became repeated blocks of slightly but predictably different code, which then became the boilerplate for the next level up. I take full credit for failing to find the appropriate class-based abstraction for this; I have no doubt that it was there. But it sure wasn't easy to find, and I, being the lazy thing that I am, went looking for another approach.
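For the curious, here's a guess at what that style of element api looks like over a plain Map; the class and method names (other than atElement/put/get/exists, which come from the description above) are invented:

```java
import java.util.HashMap;
import java.util.Map;

// The chained 'atElement(key).put(x) | get() | exists()' style: the
// collection's public api stays tiny, and per-element behaviour lives in a
// small view object returned by atElement().
public class AtElement {
    interface ElementView<V> {
        void put(V value);
        V get();
        boolean exists();
    }

    static class MapView<K, V> {
        private final Map<K, V> backing;
        MapView(Map<K, V> backing) { this.backing = backing; }

        ElementView<V> atElement(final K key) {
            return new ElementView<V>() {
                public void put(V value) { backing.put(key, value); }
                public V get() { return backing.get(key); }
                public boolean exists() { return backing.containsKey(key); }
            };
        }
    }

    public static void main(String[] args) {
        MapView<String, Integer> view =
            new MapView<String, Integer>(new HashMap<String, Integer>());
        view.atElement("answer").put(42);
        System.out.println(view.atElement("answer").exists()); // true
        System.out.println(view.atElement("answer").get());    // 42
    }
}
```

The trouble described above shows up when the element views themselves need element views (tables of tables), which is where the boilerplate nests.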

I've never had macros, so I don't know what I'm missing here. But thanks for the extra info.

-- WilliamUnderwood


Um, yeah.

I'm replacing piecemeal an app from the dark ages. I can't help it. The users, who don't know the difference between a hard-drive and the big box that contains it, can list off the update phenomena from experience, and can't imagine a world without the word "re-index".

So, I'm busy ripping out a JDBC -> ODBC -> dBaseIII bridge which, compliments of microsoft, doesn't actually update the index files associated with the dBase files. It knows they're there, makes use of them for its own queries, but doesn't actually keep them up to date. Oh well, nothing terribly unexpected considering the provider (microsoft), the format (obsolete for how many years now?), and the simple fact that there's really no good way to update four b-trees in a concurrent environment while avoiding any of the usual update phenomena. I'd settle for a best-effort basis; that's already better than the users are used to.

I track down a JDBC library which can handle dBase (including indices) natively. I'm pleasantly surprised to see a good amount of my replacement app works unmodified. Of course, this is actually a side-effect of the previous bridge, as I was half-way through moving the relational algebra into my app from the SQL when I realized the killer issue was the index files not updating.

But I still have a problem. You see, "SELECT * FROM order" happily spits out all the orders. Of course, SQL has this nice feature "WHERE [conditional-clause]". Tends to simplify code. But "SELECT * FROM order WHERE ord_num = [some order number]" doesn't do anything. Doesn't return anything. Nothing. Check, double-check, triple-check, the column name is correct, the table name is correct, the order number actually exists, etc...

On a hunch, I try "SELECT * FROM order WHERE NOT ord_num = [...]". And this happily returns all the rows except the one in question.

Which of course means that "SELECT * FROM order WHERE NOT NOT ord_num = [...]" gives the correct result.

Um, yeah.

-- WilliamUnderwood


Hey William, by any chance do you have any of your code available for reading, any work on open source tools, or maybe any private programs you'd be willing to share? I dig your style and just want to read some of your code to see some more ways you have bent Java to be more lispy. Also, I'm curious how you think 1.5's variable parameters will affect your BlocksInJava api, specifically, the binders?

Sure. I maintain an SVN repository; I'll explore the possibilities of exposing read-only access on my server, plugging that into a http server, and possibly opening up write access if I can easily solve the issues of specifying version numbers for libraries, etc.

As far as the impact of 1.5, I haven't put enough thought into it yet. I can see several possibilities, but the proof of the technique is in its use; I haven't used it enough to know what helps and what only moves things around...

*thinking*

-- WilliamUnderwood

That'd be awesome, I'll check back here on occasion for an address when you get it up. Thanks.

Well, I've used 1.5 a bit more now, and unsurprisingly have developed a few opinions.

First off, generics. Quite useful for keeping one's head clear, once you get your head around a couple of the glitches. Major gripe: why oh why isn't there any way of making an array? The hole that it would 'create' in the type system is already there, and it would make many pieces of api much easier to deal with.

The one thing I don't like is how it bloats the declarations of any non-trivial variables; try this on for size: "public ASet<AMap<String, AMap<String, Object>>> table = new ASet<AMap<String, AMap<String, Object>>>();". Now imagine this nested a couple times, in order to support my AutoVivication? in java... not pretty. Of course, it beats the equivalent mental mess ("is this internal iterator iterating over a Map or a Set, and what's inside it anyway?"), but it'd be nice if there were an easier way of collapsing it other than declaring a new class.

Likewise, there's no easy way to specify a list of types of arbitrary length without reverting to a rather disturbing <A, <B, <C, <D, <E, ?>>>>> data structure. Don't get me wrong, I have nothing against lisp, but java simply doesn't have the means to do anything sensible with that, optimization wise.

In any case, as implemented, they're still powerful enough to make my table classes an order of magnitude simpler, simply because I can now actually implement a table as a set of tuples without fear of rampant unintentional misuse or extensive use of FlyweightPattern and FacetPattern (i.e., for when you really need to treat one table as a set of maps, and another as a map of sets). Suffice to say that the 3 levels of nested anonymous inner classes implementing various non-trivial interfaces in non-trivially different ways have disappeared. Completely. See the smile on my face :).

VarArgs?: as I suspected, this makes a lot of techniques viable, not least of which is what is rapidly becoming a rather elegant query interface to my rewritten collections api.

Favourite bug: public <TYPE> TYPE[] evilButNecessary(TYPE... array) { return array; }
This should work, it might work, but at least under eclipse's compiler, it sure brings the whole shebang to its knees. Oh well, maybe in another month.

Enums. They are about the closest thing you can get to using generics to specify the keys to a map, for instance, but such a usage precludes set arithmetic; in other words, you cure the disease by killing the patient. This means that there is still a whole bunch of casting that needs to be done, even if you aren't directly using a Table structure. Any non trivial collections work will involve collections of collections, and if any one of those collections is a map type structure (i.e., we have Collection<Collection<?>> where ? is constant for any one collection) then casting is completely unavoidable. In other words, I was hoping for a lot more than was delivered. They do open up some interesting possibilities, but seeing as I've never found myself missing enumerations period, I find it hard to get excited about them.

Metadata. Oh boy, this is a can of worms :). All sorts of neat things possible, but yet every time I make use of one of them, I keep getting the sense that I want to write them in the same language as the rest of my code (which, for my purposes, is close enough), but I also want my users to be able to configure these things (or even do it myself in code), without having to know how to deal with scary java code. Folks, it's at this point that one realizes that as nice as you make java, it ain't ever going to be lisp. A moment of silence for the dearly departed. Or maybe it just means that I should get my act together and get writing a reasonable Lisp editor which is both free and not emacs.

Autoboxing. Can't live without it. Can't remember how I lived without it before. Oh, wait, I didn't. It's a convenience, and combined with varargs makes frequent use of reflection much more practical, but for day to day, I never had much of an issue with simply doing an "int[] var = new int[]{ 10 };". And I'll probably continue to find myself doing the same: var[0]++ seems better than the alternatives (can't always return the new value, and any extra boxing defeats the purpose of the autoboxing in the first place).

-- WilliamUnderwood

Hi Cwillu,

It's Jared from 8th St. We put another computer in the front of the store, we cannot get it to work, it always says "Too many files open" (you know what I mean) Could you call the store and give us some ideas on how to fix the problem? I don't need to leave any phone numbers, you have em all....

Thanks --Jared

Ya, there's some voodoo involved: namely, you need a "FILES=50" and "BUFFERS=50" line in the config.sys file (if they're there already, make sure they're set to the right value). If it's a winXP machine, they need to go in the config.nt used by the RMPlus shortcut, and you can do it without a restart; on Win9x, put them in the live config.sys file, give it a reset, and you should be good to go.

-- WilliamUnderwood

Thank you for that help William. Have a great weekend. Gardi still hates you. So does Hydro! --Jared

Don't worry, if you have to come back to the world of low-pay, you can work at my store, I don't hate you. ;)

?? :(

Sent you another email. Z) Is Innisfil outside the cell network? -- rk

Responded...

Yes, kinda, it's hit and miss, and seems to be time-of-day related as well. To say nothing of the financial situation, although it's much nicer to actually have a reasonable income again :)

-- WilliamUnderwood


Hey from WikiBattlebots?, I would like to say I'm making the announcement today about the wiki to the club. If that page does not work there we will move it somewhere else, but for now we are going to try it with Wiki. -- Andy If you would like to reply to me just e-mail me at sutak@aol.com

CC: sutak@aol.com

Hmm... well, it sounds like 'telling the club' is a done deal, but there are a few things you might consider:

We only have a very loose mechanism for forwarding pages that have moved off the wiki: First, if Ward Cunningham consents, a new wiki could be added to the interwiki, which would give a link on the bottom of the page if it gets deleted. Second, simply leaving a link on an otherwise blank page. Unfortunately, this second option is the one usually available, and is also very vulnerable to deletion.

There are several sites which will allow you free access to your own wiki engine, I'd be pleased to help you get set up on one of those.

On the topic of seemingly off-topic pages that are well-established on the wiki, they tend to have extensively available information, and a reasonably complete set of that information is generally available on any one site. The effect is that it isn't this Wiki supporting those communities; it simply provides a place for this community to discuss them.

And from a purely pragmatic point of view, do you _really_ want the entire world to be able to edit your plans, schematics, public face in any way they see fit? A major reason why wikis can stand up to that force is because the vast majority of the readers care about the content, and fix it when malicious edits happen. I fear that on the c2 wiki, you would experience the worst of both worlds: you're going to have the same exposure to vandals that we have, without the exposure to interested readers to repair the damage.

With that said, there's really very little we can do to stop you if you really want to use the c2 wiki. But I would personally advise against it.

-- WilliamUnderwood


Hey, this is Travis, having problems with networking, thought you could help. I got a workstation for the store, (never did get around to putting that server in) and can't get it to connect. Have windows xp on it, it can find station 2,4, and 6 (all the front computers) but it can't find the makeline, or routing computer, or server. I tried connecting to //server/c but it says something about it not being a valid resource (can't remember.. didn't sleep enough.) anyways, if you know what my issue is, let me know, email is usually easiest. (But I'm at the store til about 8 today too.)

(The joys of mixing netbeui, tcp, multiple nics per machine and windows xp (which doesn't come with netbeui support out of the box). And we haven't even mentioned the dos-mode application issues. -- cwillu)


Hi! Just wanted to talk to you. This is your sister Michelle. How are you doing? I'm doing fine myself really.

I'm not doing too bad, we just got our first snow... well, we had one before, but it only stuck around for about a day; it looks like this one will be sticking around for a while. I hear you've had snow for a couple months now :p -- WilliamUnderwood

In case anybody else gets really uptight when sun says that arrays can't be created generically because it would open up a hole in the type system (that's already there, I might add), here's a workaround invented in my continuing inept quest to turn java into lisp:
 public class ArrayUtils<T> {
	// needs: import java.lang.reflect.Array;
	public static <T> T[] cdr(T[] array) {
		T[] copy = (T[]) Array.newInstance( array.getClass().getComponentType(), array.length - 1 );
		System.arraycopy( array, 1, copy, 0, copy.length );
		return copy;
	}

	public Class type() { return test().getClass().getComponentType(); }
	private T[] test(T... array) { return array; }
 }
Don't ask.

-- cwillu

--

Last edited January 12, what's the matter? Did you get busy? Anyways, this is Travis, just seeing if you remembered where you got that capture card for 8th st. We've got money issues, and I'm ready to break down and get a system. Did you ever get your program going? (I've seen a few on the internet that don't look too bad - for the price) Anyways, just looking for a little input before I get screwed out of a lot of money. (either by our thief, or by a guy trying to sell me a system.) Let me know - calling me likely works best, but I'm usually at the store or asleep by midnight now..

Travis
William,

Can I put your name under the bulletin board analogy in WikiSquatting? That was "a real good one" as observed by another person elsewhere -- DavidLiu

thinking still?

The thought was that leaving it unsigned was the closest I could come to "KillYourDarlings" without killing my darlings. More to the point, to simply cede that writing to the community.

If there is an actual benefit to my signing it...

If there is no detrimental effect from signing it...

In other words, I don't want to make this decision :p

One more thing :). Have you got any information on Ejb and its suitability (compared to other common methods) for handling distributed transactions in a J2EE shop? -- dl

I can't say I know much of anything about EnterpriseJavaBeans. My impression is that they end up involving an unfortunate trade-off between scalability and integrity, but again, this is coming from somebody with no direct experience working with them.

Achieving global consensus in truly distributed databases (think peer-to-peer database links vs client-server) is an undecidable problem, and so it is quite possible that some trade-off is necessary to be able to do anything at all. But this probably isn't what you're talking about. Given some centralized control (i.e., a transaction monitor of some kind), all sorts of things become possible, efficient even.

But even if I knew specifics on EJB's, it would be hard to give any solid advice without knowing the nature of the transactions that you're distributing / the nature of your application. There are likely all sorts of trade-offs as to transaction granularity, and given that, all sorts of solutions may be feasible.

-- cwillu


William,

I saw you happened to remark in the infamous OnDavidLiuDiscussion (ODLD) and hope you weren't misled by what I did there.

I moved previous CoreWannabes to ODLD because the ODLD page was there. It has "nothing to do with previous ODLD material". (Sidetrack: BTW ODLD did not "directly involve me" in the beginning). The ODLD version, together with the CoreWannabes stuff, actually had Costin's edits left there on purpose (I find the best way to relate to Costin is to let him have the last word most of the time, so you will find lots of smearing and unresponded attacks).

So if the source CoreWannabes page survives, your edits will be moved back to CoreWannabes, and will not stay in ODLD page.

If your observations were meant for a different part of ODLD then please relocate out of the CoreWannabes sections.

-- DavidLiu DeleteWhenRead 8Nov05

I wasn't kidding when I said I didn't have time to get properly involved; I don't particularly care what happens to the page, I just wanted to state for (some) record that silence doesn't imply or deny support for the individual.


Question: What library (if any) contains the AMap, NullFunctor? and UnaryClosure? classes shown in your Java example at the top of the ComplexBagSumInJava page? [moved from BagSumInManyProgrammingLanguages] -- JeffGrigg

Personal library based in part on code found around the wiki. Although I can see where people might have a problem, I don't really consider it cheating. Saying 'show all the code not in the standard library' strikes me as missing the point: I read the intent of the challenge as being to provide the cleanest code, to the extent that a language's mechanisms allow it. It's just an example of a possible approach to the problem in java.

As for the library itself, it's nothing terribly clever. An implementation of a system of functor classes described on this wiki, wrapper classes for common tasks (collections, utility functions), and some convenience methods. I can probably package it up neatly (email me if you're interested) but it'd probably be more fulfilling to write your own :)

--cwillu


Last edited September 15, 2008