? The thesis of this page is "Hell, yes!"
: This is a ThreadMess
. If you do not resolve it yourself, I will do a SurfaceRefactoring (because I do not want to become involved in useless cross-talk).
Almost all of it centers on rants that are provably, factually incorrect, as pointed out multiple times below, but of course ignored by the ranters. Any refactoring should bring things together by topic, to make it clear that each has been rebutted.
Initially in HowToQuashMicrosoft, now branched into its own area:
A good start to helping Linux become a better user-oriented OS would be: Take X. Stab it repeatedly until it dies. Then burn it, and jump on the ashes.
You forgot to bury the ashes at a cross-roads. Unless you do that it'll come back.
I've never seen anything fill up a vacuum and still suck...
-- The X-Windows Disaster, Chapter 7 of TheUnixHatersHandbook
And replace it with what?
A new windowing system - obviously. Without all the baggage.
Is the baggage really such a big problem? The performance isn't much of a problem any more and you're not likely to have to use the X APIs directly today.
Many of the critiques of "baggage" and "bloat" are rebutted on http://linuxfinances.info/info/xbloat.html. And that was in 2002 before the XRender extension and other enhancements became widespread.
Lots of people talk of replacing X11 as if it were a GUI system, and don't realize what they are really replacing. The argument for a *different* single-user stand-alone GUI server for workstations is completely separate. Pretending you can replace X11 with something like that is, however, ill-conceived at best. All the 'baggage' is completely necessary for many people. What the complainer is usually really saying is 'I don't want all of X11's capabilities, so why can't I have something simpler?'
SteveJobs is famous for saying "X Windows is brain-dead", way back when he was starting up NextStep, Inc. On MacOS X today, we've got a minimum of three GUI APIs - Carbon (based on Classic MacOS APIs), Cocoa (based on NextStep), and AWT/Swing (Java). How's GnuStep for Linux coming along?
I should hope not. I still don't understand how the client/server nature of the whole thing works, but I do remember people trying to explain it to me, and me asking them "um, isn't the XwindowServer doing what the client usually does?"
One of the reasons they say Microsoft Windows is so crashy and kludgy is because it's based on a legacy GUI from the late 80's, and a legacy OS from the early 80's. But at least its legacy GUI and legacy OS were designed with mortal users in mind. Most X-Windows applications I've seen, with their big windows and small fonts, think they're running on a 22-inch monitor at 1280x1024. And every application seemed to use the mouse buttons for something different. I do hope this has changed since 1999.
I don't buy that excuse for Windows. Windows 2000 is steady as a rock. There is little excuse for the dreadful 9x series.
TomStambaugh proposes a SmalltalkExtensibleWindowServer as a replacement for X windows, along the lines of the NetworkExtensibleWindowSystem.
I'm hoping that the FrescoFramework (formerly known as TheBerlinProject) will eventually replace X.
<Digression> I have a feeling that the story of X is something like the story of Fortran. When I had to learn Fortran II, it was a terrible letdown from Algol -- it had no block structure, no if-then-else, it required line numbers (IIRC), etc. Over the years, Fortran incorporated most of the advances that other languages innovated and is now very little like the Fortran I learned, significantly better than Algol, and has characteristics of Pascal, C, etc. (IIUC, there seems to be one defining underlying characteristic that makes people who are aware of it choose between C and Fortran, but I forget what it is.) I suspect X has gone the same way, continuing to incorporate improvements without a significant enough name change (for me) to facilitate discourse about the characteristics of X.
IIRC, X does (or did) one thing that I think is bass-ackwards. If it needs to render, e.g., a combobox on a CRT display, the application generates the bits and X carries them from the application to the display. In today's world, where bandwidth is a bigger bottleneck than processing power on the computer driving the display, it would be more effective for the application to tell X that it needs a combobox displayed, and let X just carry that message and generate the combo box at the computer that is driving the CRT display (whether you call it the server or the client). IIUC, this is what Fresco is trying to do.
This was the fundamental premise of NeWS and STEWS -- the application requests that the display perform a graphical operation. The argument with the X community, at the time, was about what graphical vocabulary the display server offered the application. The X community insisted that the vocabulary should be fixed, that specific "extensions" could be installed into the server, and that applications choose from whatever server they were connected to. The NeWS/STEWS community argued that any application should be able to load any vocabulary it wished into the display, and then invoke it. The approach of NeWS was to use DisplayPostscript for both image model and behavior. The STEWS approach (unbuilt, though) was to specify behavior with Smalltalk and use DisplayPostscript as the image model. It looks to me as though Berlin has perhaps rediscovered the principle that it is often compellingly more efficient to move the program to the data than to attempt to deliver processed data, especially when bandwidth is more precious than processing power. It appears to me that DisplayPostscript is still a more robust and portable image model than the baroque pixel-oriented approaches we still live with at the moment. --TomStambaugh
PS: To argue that there is no room for confusion about the client/server terminology used by X is silly or worse <not sure of the word I want -- non-perceptive, intolerant, condescending, ..., or some combination thereof>. In the ordinary user's mind, the client is the computer he is using (touching, typing into, viewing the display) -- the server is a remote computer that is providing data or services for his computer. At some level of detail, the opposite terminology of the X definition of client/server is technically correct in some contexts, but just needlessly confuses everyone -- why bother -- we should be making things easier, not more difficult.
In the best of all possible worlds, I suppose they would start calling it the "X Window Service", since it is analogous to a service in MicrosoftWindows.
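The pixels-versus-requests trade-off described above can be sketched as two toy wire formats. This is purely illustrative -- neither function models the real X11, NeWS, or Fresco encodings -- but it shows why shipping a short, named request to a display-side renderer can beat shipping finished pixels when bandwidth is scarce:

```python
# Toy illustration (not the real X11 or NeWS protocols) of the two
# models discussed above: shipping rendered pixels versus shipping a
# high-level request that the display-side server renders itself.

def pixels_on_wire(width, height, bytes_per_pixel=3):
    """Model 1: the application renders and ships the finished bitmap."""
    return width * height * bytes_per_pixel

def request_on_wire(name, *args):
    """Model 2: the application ships only a named request plus 32-bit
    arguments; the display server holds the code that knows how to draw it."""
    payload = name.encode() + b"".join(int(a).to_bytes(4, "little") for a in args)
    return len(payload)

# A hypothetical 100x20 combobox at 24-bit depth:
bitmap_bytes = pixels_on_wire(100, 20)                         # 6000 bytes
request_bytes = request_on_wire("combobox", 10, 10, 100, 20)   # 24 bytes
print(bitmap_bytes, request_bytes)
```

The gap widens as windows get larger and deeper, which is the bandwidth argument for server-side rendering made in the digression above.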
"I'm hoping that Berlin will eventually replace X."
Don't bet on it. Berlin embeds the toolkit into the server. Imagine where X11 would be today if it had embedded 1980's widgets into the server. Do you really believe that X11 would have survived with Athena widgets?
[This is a misunderstanding. In Berlin, a toolkit is just an object factory. Since objects are network transparent, it doesn't need to be part of the display server, though it should be colocated on the same machine. In fact, had Berlin started with Athena widgets, it would have been easy to derive the GTK+ interface from the Athena interface and then transparently replace the toolkit for all applications without recompiling anything. It's also easy for an application to supply completely new widgets, though that could incur more network roundtrips. However, Berlin is dead (again), so none of this matters any more.]
"If it needs to render, e.g., a combobox on a CRT display, the application generates the bits and X carries them from the application to the display."
Wrong. X11 has high-level graphics primitives and display-side storage, and it is getting display-side stored vector graphics.
- Wrong again. X11 is a middle-level language of sorts -- it has the graphical rendering sophistication of a good dialect of BASIC, but not much more. The raw primitives are based on those of Postscript, but unlike Postscript, you cannot define your own functions for the display server to interpret on your behalf. There is no "high level" capability of any kind. When drawing a combobox, for example, X11 sees a list of commands to execute (draw a white line here and there, draw a black line here and there, use polyfill to draw a downward arrow, and we're done). Considering that most combo-boxes are only 16x16 pixels (if that) even on large, color-deep displays, and considering how much space each graphics command takes, it very often is faster to just transmit the dang bitmap than it is to draw it using vector commands. I can't help but notice how, in most usage scenarios (but not all), using VNC is just plain faster for accessing a box remotely than X11. This is particularly true when using a cable modem with limited upload performance. --SamuelFalvo?
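The small-bitmap-versus-vector-commands claim can be sanity-checked with rough arithmetic. The request sizes below are hedged approximations from the X11 core protocol encoding (requests carry a 4-byte header; PolyLine costs roughly 12 bytes plus 4 per point; PutImage roughly a 24-byte header plus image rows padded to 32 bits) -- check the protocol specification before relying on exact figures:

```python
# Back-of-envelope X11 core protocol request sizes (approximate).

def polyline_bytes(npoints):
    # ~12-byte fixed part (header, drawable, GC) + 4 bytes per point.
    return 12 + 4 * npoints

def putimage_bytes(width, height, bits_per_pixel):
    # ~24-byte fixed part + image data, each row padded to 32 bits.
    row = (width * bits_per_pixel + 31) // 32 * 4
    return 24 + height * row

# A 16x16 arrow glyph drawn as, say, six short line segments:
vector_cost = 6 * polyline_bytes(2)          # 120 bytes of requests
mono_bitmap = putimage_bytes(16, 16, 1)      # 88 bytes at 1-bit depth
true_color  = putimage_bytes(16, 16, 32)     # 1048 bytes at 32-bit depth
print(vector_cost, mono_bitmap, true_color)
```

So the arithmetic supports the claim for shallow bitmaps (88 < 120 bytes) but reverses for deep ones, which is why depth matters in this argument.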
"It appears to me that DisplayPostscript is still a more robust and portable image model than the baroque pixel-oriented approaches we still live with at the moment"
Postscript's imaging model was never in doubt--it's a great imaging model, provided you have the resolution and color depth to display it; what sucks about Postscript is the language. Fortunately, X11 now has the Postscript imaging model without the language (the de facto standard Render extension). Same, incidentally, for Macintosh, which also got rid of the language and adopted PDF instead.
- Postscript and PDF are completely orthogonal technologies. PDF is awesome at describing static renderings. But that's about all it's good at. When you're running an interactive and highly dynamic environment, putting the display server on the same machine as your application, you can harness the local IPC mechanism for breakneck-fast inter-process event notification, bitmap transfers, etc. In this case, your programming language replaces Postscript as it passes static images around to the display server. But when you've got a bottleneck in your IPC, that model breaks down very fast indeed. To handle user interaction with acceptable performance, you then need the ability to manipulate these static display descriptions right on the display server itself. This just screams for a language at least as powerful as Postscript. -- SamuelFalvo?
"To argue that there is no room for confusion about the client/server terminology used by X is silly"
Of course, there is room for confusion. But if you called the XwindowServer what it actually is called, namely the "DISPLAY server", you would not be confused.
Overall, X11's design decisions have stood the test of time. All its contemporaries are dead, while X11 has been able to keep up with changes in tastes, imaging models, toolkits, and hardware (all the while remaining backward compatible). GUIs built on top of X11 are every bit as modern as those built on MacOS or Windows.
Is that supposed to be a hilarious joke? Have you even heard of the developments in graphics technology in the last 25 years? I suggest educating yourself with some GTA3 or something.
[X11 has its problems, but 99% of the people who complain about it, and about 99.9% of the people who complain about the X11 protocol, have never studied the protocol in question, so typically the two sides of the discussion are talking past each other. Nor has X11 stood still over time, so implying that it is 25 years behind the times implies lack of knowledge of enhancements made over time.]
[The impressiveness of certain apps, like GTA3, doesn't mean such things are impossible under X11 with direct rendering and good vendor drivers, they're just expensive to develop, and for the most part companies don't see enough market share under Unix/Linux.]
[Most of the "Let's replace X11!!!" projects that have cropped up over time begin by deciding to outright delete important features of X11, such as network support. -- DougMerritt]
"Let's replace X" rants seem to come from one of several quarters:
- People overly concerned about "efficiency" (and thus opposed to network support), who have little idea of just how minimal an impact that is--especially on modern machines with modern graphics hardware.
- People who object to the low-level nature of X (and the C implementation of xlib), and want to replace it with something "higher-level"; often something which will require use of some non-mainstream programming language of which they are an advocate.
- People who are still upset about NeWs? (or similar systems) losing out in the marketplace, and wish to rewrite history.
- People who think that because X can (and was originally designed to) support low-level graphics hardware from the 80s and early 90s; that it must not be able to support modern hardware.
- People who saw an ugly GUI on a Linux/UNIX box once (using Athena widgets or something else which is similarly godawful) and thus believe that X is inherently ugly.
- People who have used X without a proper session manager/UI toolkit (e.g., Gnome or KDE), noted that their apps behaved differently, and think that X is inherently unusable.
- People who dislike X because of its UNIX heritage.
People who defend X completely misunderstand how graphics work in the modern world. (Modern, as in "developments in the last 25 years".)
More precisely, you need to differentiate the different and separate layers of a graphics architecture:
- The display driver -- something that simply pushes instructions to the GPU and bitmap data to texture RAM.
- The composition engine -- for composing lines, circles, polygons, rendering fonts, etc.
- The network protocol and GUI toolkit.
- The 3D and physics engine.
All four parts are absolutely necessary and at the same time are built from different principles and use completely different APIs.
X tries to meld all four into one steamingly monstrous pile of crud. While this is a typical approach for a college sophomore term project like X, trying to build a modern system capable of "supporting GTA3" on top of X is simply laughable to anybody with any experience in the industry at all.
In short, if you defend X, you really do need to go educate yourself: it is a sure-fire sign of misunderstanding how multimedia software works. -- IvanTkatchev
- Sorry, but no. The above things are absolutely necessary for some things, like a good 3D game, but the total number of things that X11 is used for is far broader than that in one sense, and X11 doesn't even pretend to support a physics engine, so saying that X rolls all that together misunderstands X11.
- Also, when I say that doing a game like GTA3 with X11 is possible, I specifically mean using the direct rendering interface that was added as an extension to it (direct rendering is always how best performance is achieved for the fast graphics on all platforms), and that was architected, designed, and implemented primarily by commercial companies, not by random hobbyists, so you're making lots of assumptions that are factually incorrect. -- Doug
[I don't know how old the above comments are, but I just wanted to mention that I can run GTA3 fine on X (KDE) under wine, and ut2003 is playable on my old pentium 633 under fluxbox (it wouldn't even start under Windows). I'm sure X has problems, but being able to support modern applications is not one of them.]
It should be pointed out that X has most of these things, in a well-defined place in the SoftwareStack?. Low-level bit-twiddling with your frame buffer, GPU, or whatever else is handled in the XwindowServer. X provides many low-level 2D primitives for things like lines, circles, etc. The Render extension, once an add-on but now part of X, allows server-side composition of more complicated documents, including full support for modern scalable, anti-aliased fonts. OpenGl handles 3-D rendering: in hardware on platforms that support it, or in software on the server if the hardware doesn't have good 3-D acceleration. The network protocol is a strength of X; the asynchronous nature of the X protocol makes X usable even over high-latency links. Numerous good GUI toolkits exist on top of Xlib (a common complaint is that X doesn't force a particular app framework on the user; instead it gives the user a choice, hence the KdeVsGnome situation). As Doug points out, X doesn't have a "physics engine", but that's probably not a major liability.
I'm curious--what do you consider a good example of a GUI architecture? The Mac? DirectX?
- The above comments are not old, they were made in 2005, as was your response. I didn't notice that GTA3 had been ported to Linux, which is why I limited myself to saying simply that it was "possible".
- And that pretty much proves the bottom line: Ivan's critiques were all 100% wrong, despite his bluster.
I haven't had much experience with the Mac, but I do consider DirectX a better alternative to X. (Though it has its own problems, of course.) That said, if somebody took the time to refactor X, throw out old cruft and integrate extensions and third-party libraries into the standard core, it just might work as a viable alternative. Oh, and the frame buffer driver must reside in the kernel; I consider this to be a major blocking point for X adoption. -- IvanTkatchev
Why must the frame buffer driver reside in the kernel? For performance reasons? Seems to me that if you want direct access to the hardware, you mmap() it into user space - which is what X does. For "normal" X clients, the frame buffer is mapped into the server process. For DRI, the clients can mmap it directly.
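The mmap() mechanism just mentioned can be sketched as follows. This is illustrative only: a real X server maps actual video memory (a PCI aperture or /dev/mem) into its user-space process, whereas here an ordinary temporary file stands in for the framebuffer so the sketch runs anywhere:

```python
# Illustrative sketch of mapping "framebuffer" memory into user space.
# A plain file stands in for real video memory.
import mmap, os, tempfile

WIDTH, HEIGHT, BPP = 64, 48, 4        # a tiny pretend framebuffer, RGBA
size = WIDTH * HEIGHT * BPP

fd, path = tempfile.mkstemp()
os.ftruncate(fd, size)                 # give the "framebuffer" its size

fb = mmap.mmap(fd, size)               # map it into this user process

# Plot one red pixel at (x, y) by writing straight into the mapping,
# exactly the kind of direct store an X server does after mmap()ing VRAM:
x, y = 10, 20
offset = (y * WIDTH + x) * BPP
fb[offset:offset + BPP] = b"\xff\x00\x00\xff"

print(fb[offset:offset + BPP])
```

Once the mapping exists, pixel writes are ordinary memory stores with no system call per pixel, which is the performance point being argued.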
Keeping the graphics subsystem out of the kernel is rather nice for system security/stability. When Microsoft released Windows NT 4.0; system stability went down compared to NT 3.5. One major reason is they moved the frame buffer support into kernel space; with the attendant result that buggy drivers and the like were now in kernel space and thus able to crash the whole system. X server crashes generally don't bring down Unix boxes. (That does assume that users know how to change virtual terminals and/or log in remotely; as the primary UI may be hosed).
Are you aware that every other single driver resides in the kernel? Microkernel designs and whatnot are nice, but all important consumer OSs (including various Unixes) that I am aware of are not microkernels.
Anyways, your comment perfectly highlights the incredibly braindead design decisions of X. (Read: complete lack of any semblance of architecture.) The framebuffer/VRAM driver needs to be completely separate from the graphics engine, living in separate spaces; the framebuffer needs to be part of the kernel, whereas a graphics engine like X is simply a user process. The only way to crash a framebuffer driver is by crashing the GPU, and there is no way a user process could accomplish something like that through sending commands to the composition library.
Of course X, in their infinite wisdom, decided to act on opposing principles and amalgamated all the code they could get their hands on into one huge mass of a spaghetti-code super-user process. Brilliant, what can I say. -- IvanTkatchev
These things are not "necessary" - rather, they highlight how a proper graphics architecture stack must work. You can ignore the last 25 years of industrial experience that made modern graphics possible and persist in bolting new ugly hacks on top of old ugly hacks, (c.f. "direct rendering" or whatnot) but please be aware that in doing so you basically lag several years behind decent architectures in features while at the same time costing significantly more and requiring disproportionately larger resources.
There is a real reason why everybody in the world does things differently from X. The Unix crowd really needs to swallow its pride and face reality, otherwise the next generation of GPU technology will relegate Unix to the dustbin of history for good. -- IvanTkatchev
- And exactly what "Unix crowd" would that be? Would that be the vast majority of us who couldn't care less what "next generation of GPU technology" runs on our HTTP/FTP/SMTP, database, file and application servers, our firewalls and routers, our software development workstations, and our office productivity boxen? If IvanTkatchev has an issue with X and believes X to be the make-or-break point for the future of Unix (an idea that makes me laugh, but obviously I work in a different domain and have a very different perspective) maybe IvanTkatchev should lead a project to replace it? -- DaveVoorhis
The Unix crowd that is living in an arrested-development fantasy of 1970's hardware "boxen". Like it or not, modern "application servers" and "office productivity machines" are running on modern hardware, and supporting modern hardware means absolutely supporting things like GPUs. Currently, it is still viable to emulate a 1970s-fantasy-land just to satisfy Unix bigots, but there will come a time when paying through the nose to placate bigotry and ill education will become unprofitable. And at that point Unix will die, unless there comes along someone smart enough to understand reality. Unfortunately, for most "geeks" wearing their unixness as a badge of merit is more important than solving real problems efficiently. Such is life, and I don't really want to dedicate my life towards the goal of saving Unix. It is high time for progress already anyways. -- IvanTkatchev
I think you've missed my point, which is that the majority of applications for which Unix/Linux/*x boxes are used do not require "next generation GPU technology." Indeed, many Unix boxen are run headless in racks. Do you plan on plugging a 27 inch plasma screen into every 1U rackmount server so the computer room techs can reconfigure Apache in full 3D? Do you believe every application demands high performance graphical capability? The world, in fact, does not consist solely of games, simulators, and scientific visualization. There is a big place outside of these domains, and most of it is non-graphical -- let alone something that demands "next generation GPU technology." Certainly modern hardware will come with such things on-board, but will it be needed for anything except displaying a minimal UI? For data processing and serving (I'll make a wild guess that's the majority of Unix boxen are doing) X is sufficient for the task and nicely handles the common case where the host is headless and the display is geographically remote.
Now I'm not defending the limitations of X, and I agree that X is behind the times for (say) running games on Linux, but I doubt that's an area of interest except for a small handful of zealots. I hardly think failing to solve their "real problems" is going to matter even half a whit to the rest of the Unix community, and it certainly isn't about to relegate Unix to the dustbin. At some point in the future, it may come to pass that a minimal UI (for some new definition of "minimal") will require advanced GPU technology, but I'm sure Unix developers will accommodate it when it happens, just as they have accommodated real needs in the past instead of catering to unrealistic fantasies about the death of Unix because GTA3 won't run quite fast enough.
I think you missed my point. The reason things like GPUs exist is not for playing games or doing visualizations; they exist to:
- lessen computational load (imposed by the modern-day gaming industry, first set off by Wolfenstein 3D. Prior to this game, the most you'd find on a VGA card was a 2-D blitter, with a level of sophistication barely exceeding that used in the Commodore-Amiga, and even then, only in high-end video cards intended for CAD work),
- promote good parallelization practices (imposed by the modern-day gaming industry, otherwise 60fps or better at detailed resolutions wouldn't be possible; prior to this market, the only way you'd find hardware-accelerated OpenGL was on multi-thousand dollar SGI workstations, for use in science applications, and in the high-end video post-production industry), and, in the end,
- for making computing more efficient (imposed by modern-day flashy eye-candy UIs, but only because they appeal to those suckered in by Hollywood effects and, you guessed it, those who play high-powered games on their PCs. Remember, "sex sells.").
No matter how "minimal" your UI is, it will always run more effectively on a GPU; after all, two processors are always better than one. The point is that modern computing is moving towards modern hardware architectures (read: advanced parallelism) and Unix is quickly losing pace thanks to bigotry and failure to understand basic industry practices. -- IvanTkatchev
This is patently and absurdly incorrect. Right now, all research into massive parallelization of programming languages, and exploiting GPUs to perform scientific-grade computation are all being spearheaded on Unix-based systems. --SamuelFalvo?
[Your use of rude phrasings is irritating; IsYourRudenessNecessary? You're on the verge of triggering some of my own.]
[Also, you are simply factually incorrect. First off, modern GPUs are supported with X11 (go look at Nvidia's site, for instance, amongst many other places that prove similar points). Secondly, consider that no practical GPU (read: ATI or Nvidia) is as generally capable as a general-purpose CPU, which means that GPUs CANNOT run an entire UI by themselves.]
Turing says that any program is capable of supporting any other program. (More or less :)) That's not really the point. The point is that the X 'architecture' is not at all equipped for dealing with things like GPUs. (Read: using X comes at a significant resource cost without any gain whatsoever.) As for the second point -- well, duh. That's exactly why it's called a GPU and not a CPU.
- To your first and your last point, yes, sort of, but then again, no; you said "No matter how "minimal" your UI is, it will always run more effectively on a GPU", and maybe that phrasing isn't truly what you meant, but nonetheless, as phrased, it was incorrect. You can't run an entire GUI on a GPU. Now it sounds like you agree with that, so perhaps it was just an unfortunate phrasing.
- As for your middle point: It certainly is true that many kinds of graphics displays, including many parts of GUIs, will perform better if GPUs are supported. But X11 does support GPUs. And as Dave said below, the X11 "cost" and "gain" you are critiquing is questionable, for one thing, but actually, too vague to mean anything. Make it more specific.
[I happen to be pretty well acquainted with the entire history of graphics technology, hardware and software both, and you are talking about what was first called by Ivan Sutherland "the great wheel of reincarnation". If you are unaware that current GPUs are a matter of history repeating itself, or what that first history was, and therefore what inevitably follows, then you are ill-equipped to call the rest of us ignorant about the whole topic.]
Ah, the "great wheel of reincarnation". Nice name, but does having a fancy name excuse the inability to exploit modern hardware fully?
- No, it wouldn't, but the "fancy name" is merely a brief reference to a complex phenomenon that was first observed in the 1960s, that the central CPU is augmented by increasingly complex GPUs (yes, in the 1960s!), and that this then was observed to result in...I'm challenging you here, do you know your history of what happens as GPUs grow increasingly powerful? This is not the first time in history that it has happened.
[What you are actually doing is just tossing out flaming opinions and calling them fact. You would only be doing that if you were assuming that you know more than your audience; otherwise you'd realize you can't get away with that, it won't work. -- DougMerritt]
Not really. X lacks a well-defined graphics architecture stack. I think this is a fact, no?
- It has an "architecture", which was the previous contention. Now that you have fleshed out the phrase to "graphics architecture stack", I must ask you to define what you mean by stack. If you mean "includes a physics engine", we are already agreed that X11 does not include that, for instance. But neither does DirectX.
Ivan, you wrote "... using X comes at a significant resource cost without any gain whatsoever." On the notebook I'm using right now, X is consuming 3.0% CPU and 4.3% RAM. It's certainly consuming some resources, and I'll let you decide whether they are significant or not, but I consider the gain in productivity -- at least over the immediately available alternative, which is a raw text console -- to be worth it. What do you suggest I replace X with that would improve this situation? -- DaveVoorhis
No disagreement here from me. I'm speaking more along the lines of an ideal not-yet-existing system that would be designed along proper guidelines. IMO, this is a serious issue and the Unix people should start thinking about it now.
- As far as it goes, that's fine, but (A) Unix people have long since started thinking about it, but you're speaking as if they hadn't, and (B) none of those -- let's call them "renegades", for the fun of it -- have come up with an adequate replacement for X11, let alone something superior, even after many years of work (in a number of different, but ambitious, projects). The reason is primarily that X11 is more capable than people usually think, so it's much harder than they would have guessed to even so much as match it, let alone surpass it, despite the fact that it has true defects (which we still haven't discussed very much; mostly it's illusory defects that are raised) -- DougMerritt
Nice idea, but who are "the Unix people" that should start thinking about it? Sun? The SCO Group? Redhat? XFree86.org? Linus Torvalds? IBM? An ambitious new OpenSource project inspired by this WikiPage? The phrase "uphill battle" comes to mind... This sort of "you guys oughta build a better mousetrap because the existing mousetrap sucks"-type complaint/recommendation seems to be popular these days -- especially among folks who probably can't code their way out of wet tissue paper -- but the answer is always the same: If you don't like it, you should do something about it. If you're convinced the world needs an X++ (Y?), maybe you should stop whinging and create it. -- DaveVoorhis
No, not really. DirectX suits me fine for now, thanks. It's not ideal, but at least it lets me access the hardware I bought. -- IvanTkatchev
- In what way? What hardware do you have, and what access is X preventing you from accomplishing? And why do you suppose that is? If the answers to the first two questions are reasonable and the answer to the third question is "because my chipset vendor refuses to support X11 themselves, nor will they publish specs so someone else can write a decent X11 driver", then I suggest the fault lies with your chipset vendor and not with X11.
Is anyone else bothered by this comparison? It seems rather silly. X may have problems, one of them possibly being the lack of widget standards leading to different applications looking different, depending on the API they use, but that has nothing to do with DirectX vs X. In fact the whole comparison is somewhat meaningless. DirectX is an API for accessing direct rendering capabilities, input devices and the like. In the X world the nearest comparison would be with the direct rendering interface, but even that does not make much sense. A more sensible comparison would be to SimpleDirectmediaLayer. Both work fine with X. -- KristofferLawson
The X11 design breaks a very important principle: optimize for the most common case. 99% of computers display graphics on the local machine. Windows GDI, Direct3D, OpenGl are good designs for this situation, while X11 sucks big time. Performance numbers and market forces have spoken, and consumers voted with their feet. The rest of the pro-X11 arguments are largely a matter of handwaving and sheer speculation.
The common case is rather optimized (at least on Unix machines): Unix-domain sockets, which the X11 protocol runs over when client and server are on the same machine, are a very fast IPC mechanism. OpenGL is an orthogonal issue--it happily runs on top of X. If you think that the performance bottleneck in an X deployment is the socket layer, you ought to look elsewhere.
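The local-transport case can be sketched as follows. A connected socketpair() stands in for the named Unix-domain socket a real X server listens on (under /tmp/.X11-unix on typical systems); the point is only that same-machine traffic is an in-kernel byte pipe, never touching the network stack:

```python
# Sketch of the same-machine transport X11 uses between client and server.
import socket

# A real X server listens on a named AF_UNIX socket; socketpair() gives
# us an equivalent pre-connected pair for illustration.
server, client = socket.socketpair(socket.AF_UNIX, socket.SOCK_STREAM)

# Pretend request: the client writes a drawing command...
client.sendall(b"PolyLine 0,0 15,15")

# ...and the "display server" reads it straight from kernel buffers.
request = server.recv(64)
print(request)

client.close()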
With the shared-memory extension, AFAIK, the bulk image data travels through shared memory and the Unix-domain socket carries only the requests and notifications. That makes things even faster still!
Pray tell, what do "domain sockets" have to do with pumping texture data to VRAM? Welcome to the '90s, dude. We have these magical things called "GPU"s now.
- Indeed so. And for best performance, as I said above, a high quality 3D game using X11 would use the direct rendering interface, OpenGl, or SimpleDirectmediaLayer, not domain sockets.
- Domain sockets are, however, a great way to do normal everyday graphics that don't need the raw speed and the accompanying programming complexity and lack of hardware independence that go along with direct rendering -- the latter is simply inappropriate for all of the other purposes.
- There's a myth that domain sockets in X11 are slower than other possible approaches, but this is in fact highly dependent on the hardware and OS platform. Some implementations of X11 have used shared memory instead, for instance, and on some platforms that was faster, but on other platforms it was slower (there's always the cost of synchronization on top of the shared memory itself). -- Doug
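Whether the local socket transport is a bottleneck at all is easy to probe empirically, which is exactly the platform-dependence point above. A minimal sketch (numbers vary wildly across kernels and hardware; `socketpair` here stands in for the local X11 transport):

```python
import socket
import time

def socketpair_throughput(total_mb=16, chunk=16 * 1024):
    """Rough one-way throughput over an AF_UNIX socketpair, the same
    kind of transport a local X11 connection uses.  Returns MB/s;
    results depend heavily on platform and kernel."""
    a, b = socket.socketpair()
    payload = b"x" * chunk
    rounds = (total_mb * 1024 * 1024) // chunk
    start = time.perf_counter()
    for _ in range(rounds):
        a.sendall(payload)        # queue one chunk...
        got = 0
        while got < chunk:        # ...and drain it from the other end
            got += len(b.recv(chunk - got))
    elapsed = time.perf_counter() - start
    a.close(); b.close()
    return total_mb / elapsed

if __name__ == "__main__":
    print("unix socketpair: %.0f MB/s" % socketpair_throughput())
```

On most modern machines the measured figure dwarfs what ordinary 2D widget traffic needs, which is the point being argued.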
You miss the issue. X lacks any concept of architecture. The issue of whether or not to use "domain sockets" resides in a completely different and separate architectural layer. (c.f. above.) -- IvanTkatchev
- I started out by saying "indeed so" -- I agreed with you that they are two different issues. You just can't stand being agreed with? Saying "X lacks any concept of architecture" is merely a flame.
(Just a thought -- maybe the reason people get angry with X11 is precisely because it supports so many different architectures at once. They fail to see the forest (the meta-architecture) for all the trees (the architectures).) -- SamuelFalvo?
On modern Windows machines, most graphics rendering is done in the kernel -- which also requires a context switch.
Actually, that's false. Windows rendering requires only a privilege elevation, from user to kernel mode, which is far cheaper than a context swap. The most expensive thing about a context swap is the TLB flush -- the VM hardware has to dump all of its cached VM mappings and rebuild them on each context swap. This does not occur with U/K or K/U transitions. Thus, Windows GDI U/K/U transitions cost quite a bit less than even a single process-to-process-and-back context swap. -- ArlieDavis
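The distinction ArlieDavis draws can be measured on the Unix side too. A crude sketch (POSIX only; `os.getppid` stands in for a cheap user/kernel/user transition, and a pipe ping-pong between two processes forces genuine context switches -- absolute numbers depend heavily on CPU and kernel):

```python
import os
import time

def syscall_cost(n=100_000):
    """Average cost of a cheap user/kernel/user transition.
    os.getppid is a real syscall on Linux (not cached by CPython)."""
    start = time.perf_counter()
    for _ in range(n):
        os.getppid()
    return (time.perf_counter() - start) / n

def context_switch_cost(n=10_000):
    """Approximate a process-to-process context switch by ping-ponging
    one byte between parent and child over two pipes; each round trip
    forces at least two switches (POSIX only, uses fork)."""
    r1, w1 = os.pipe()
    r2, w2 = os.pipe()
    pid = os.fork()
    if pid == 0:                    # child: echo every byte back
        for _ in range(n):
            os.read(r1, 1)
            os.write(w2, b"x")
        os._exit(0)
    start = time.perf_counter()
    for _ in range(n):
        os.write(w1, b"x")
        os.read(r2, 1)
    elapsed = time.perf_counter() - start
    os.waitpid(pid, 0)
    return elapsed / (2 * n)        # per-switch estimate
```

Run both and the per-switch figure typically comes out well above the bare syscall cost, which is the asymmetry the paragraph above describes.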
Which "performance numbers" are you referring to? If you are going to mention any statistics or research, a pointer to same would be nice. As far as customers voting, might I suggest that the selection of Mac|Windows|UNIX/Linux has little to do with X, and much more to do with other concerns, such as availability of applications, (perceived) ease of use, security, political concerns, and compatibility/familiarity? I know of nobody who says "I was thinking about Linux but switched to Windows because the GUI is slow."
- So, one context switch on Windows versus how many on Linux/X11? Is it 4, 5, or more? I don't know how happily OpenGL runs on top of X11 (very), but certainly OpenGL vendors don't submit performance results obtained by happily running on top of X11. And by the way, if you already have OpenGl, which is a far better contract between the client app and the graphics provider than X11 could ever have hoped to be, why would anyone want to run OpenGL on top of X11? The Mac OSX designers didn't think much of this idea.
Linux can hold on to this nonsense at its own peril. One can certainly claim that Gnome or KDE are good window managers, but people who tried the latest Gnome or KDE on a 1Ghz PC with lots of RAM and felt the pain of immensely bloated and performance insensitive software packages burning CPU cycles mindlessly just to move a few bytes around to no avail, surely know better.
One would be foolish to claim that either Gnome or KDE are WindowManagers -- they're not; at least not as the term is defined in X11 parlance. (Both provide window-management functionality, but both are intended to be session managers; with Gnome at least you can replace the default WM with one of your own.) I've got SUSE 9.3 installed on my PC, and it runs fine. Perhaps the performance issues are elsewhere?
KDE (more specifically Qt), it should be pointed out, will happily run on top of Windows as well as X.
Actually, one thing that has long caused performance problems with KDE/Gnome on Linux has nothing to do with X. Microsoft, for all their faults, has developed a very fast method of loading DLLs (at the expense of complicating their generation) and other binary components; the dynamic loader on Linux (ld.so) has for a while been behind the curve. Recent Linux distros have improved the toolchain greatly. However, if you think that applications take too long to start under KDE/Gnome, chances are it's ld.so and not X11 that's at fault.
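On glibc systems you can see the loader's share of start-up directly. A sketch (glibc-specific: `LD_DEBUG` is a glibc ld.so feature and is ignored or behaves differently under other libcs):

```python
import os
import subprocess

def loader_statistics(program="/bin/true"):
    """Run a dynamically linked program with glibc's LD_DEBUG=statistics
    set, and return the dynamic loader's own timing report.  ld.so
    writes its diagnostics to stderr before the program even starts."""
    env = dict(os.environ, LD_DEBUG="statistics")
    result = subprocess.run([program], env=env,
                            capture_output=True, text=True)
    return result.stderr

if __name__ == "__main__":
    # Show only the timing lines (total startup time in the loader,
    # time spent on relocation processing, etc.).
    for line in loader_statistics().splitlines():
        if "relocation" in line or "startup" in line:
            print(line)
```

Comparing these figures before and after prelinking (or across distro releases) shows how much of "slow KDE/Gnome start-up" is really the loader rather than X.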
One other longstanding problem with Linux is chipset manufacturers who don't support Linux with drivers. This is also going away as Linux increases its market share; but there are a few chipsets out there that still have better performance under Windows than Linux because the vendor won't write a driver or publish the specs.
- It runs maybe good enough for you. It does not in the general case. The performance issues that I've seen are in drawing to the screen.
How you update the screen and get user interaction at a distance is an altogether different problem, and one of the classic papers coming from the Unix heritage dispels the myth that you can design the same way for distributed computing as you design for local computing -- AnoteOnDistributedComputing. X11 violates this design principle just the same as it breaks the other. Maybe, just maybe, X11 is a good design for the networked-UI problem.
Perhaps. In the case of X, I think the correct decision was made. The distributed nature simply isn't a performance bottleneck for X applications. It's just not.
- Nonsense. The new Mac OSX, although coming with a healthy Unix/BSD inheritance, sent the idea of X11 on local unix-domain sockets to the /dev/null of history.
- Laughable. I guess you didn't notice the standard X11 server that Apple ships with every copy of their OS.
- I guess you either didn't notice that they ship it just for compatibility with old unix apps, while it's not part of their display architecture, or you're just trolling for a lost cause. Pitiable.
- What you do here is ShiftingTheBurdenOfProof by handwaving useless claims about the performance problem maybe being elsewhere but not X11, etc, etc. Find me performance numbers for display intensive applications running on top of X11 if you can. In comparison you can browse a little through http://www.spec.org/benchmarks.html#gpc, as well as consider why the whole PC gaming industry is disconnected from X11, or find the accounts of people who tried to port performance sensitive games to Linux, and get an idea that you are defending a lost cause, and all you do is useless speculation because you cannot find any relevant facts to back up your claims.
- Again, somewhat dubious arguments. Some of the biggest and meanest graphics hardware in the world (SiliconGraphics machines) runs with X in the mix. Sure, the intensive stuff will use OpenGl and direct rendering, but so what? They still interface with X. How many graphically intensive games have you seen using a Windows canvas widget for drawing? --KristofferLawson
- And how many games run directly on Windows GDI or Aqua? Games on all platforms tend to bypass the standard graphics layers in favor of direct interfaces (DirectX, SimpleDirectmediaLayer, OpenGl, etc.). Nothing to see here, move along.
- Whoa there, DirectX is the standard graphics layer since 1995 or so, I think. -- Not so. It is the standard direct access/direct rendering pipeline, but definitely not the standard graphics layer. Most Windows apps still use the GDI to do everything else-- just as a game that would be written on top of X11 would use the DRI and OpenGL, while OpenOffice.org would use standard X11 calls.
- Nope. DirectX is just for games. There have been 9 incompatible versions since then too, also making it only suitable for short-shelf-life games. The more stable Windows GDI is what real applications use.
- The fact is: X11 is already stabbed and burnt. And nobody's willing to resurrect it.
- Hogwash. Every major video card manufacturer maintains X drivers. The freedesktop.org group is actively pursuing many different acceleration options, such as running X on top of OpenGl.
- Ah, I didn't notice how freedesktop.org is taking the world by storm. It's such a superior solution to both Windows and Aqua, but the idiotic users just don't get it.
- So how many video card manufacturers are maintaining X drivers, and for how many of their products? And of those negligible quantities, how many compare in quality and performance with their Windows counterparts? Maybe you can try to sell this BS to the next Linux user who googles hard for X problems before buying his next computer or his next high-end video card?
- Quit trolling. X11 is stabbed and burnt; it's just vegetating on life support, and you try to pretend that it's alive. And this will only last until a bunch of Linux hackers decide that it's way past time to move on. Just like the Mac OsX designers did. The only difference is that the Mac OsX hackers could afford to start with a clean sheet of paper and ditched X11 without blinking, while Linux hackers live on a dime, so they have to suffer the iniquities of X11.
[What is the claim here, that there is an alternative for Unix workstation users? It's not just about Linux/BSD hobbyist home users, after all. What alternative is that? I don't see that a difference of opinion constitutes "trolling".]
- The claim was that X11 is broken by design, and this has been known for a long time (longer than TheUnixHatersHandBook?, which just happened to make it popular). What constitutes trolling is to contest the fact that the designers of MacOsx ditched X11 from their architecture, based on the presence of an X server in their distribution (which is bypassed by most MacOsx applications). Or to claim that every major video card manufacturer maintains X drivers. Or to imply that people who think that the X design is broken don't know what they're talking about. Or to claim, without any concrete facts, that X suffers from no performance problems, when performance problems related to X are well known and documented, and there's no known X station that can compare with similar hardware running Windows.
- X11 is broken: it is known that X11 has problems, anything beyond that is open to discussion.
- Mac OS X vs. X11: yes, but that's just a matter of emphasis. Mac OS X is a fusion of the older Mac OS, which naturally never included X11, with NextStep, which as you pointed out started with a blank slate. It is interesting that they bother to include X11 at all, considering that history; some users want it, despite the various aspects in which the native OS X stuff is superior.
- Major video card manufacturers: Those who decide to market to Unix/BSD/Linux maintain X11 drivers. Many do not decide to thus market, and it is absolutely true that not all major manufacturers support X11 -- if some phrasing above implies otherwise, it is wrong.
- Implying that people who critique X11 don't know what they're talking about: perhaps I overstated the percentage, but note I said they don't know the protocol, my point being that X11 is a large subject, and most looking at it casually don't know that. I know the X11 protocol (and the Xlib internals, etc etc), and my observation over the years is that most critics do not, and hence inappropriately lump together a dozen subjects rather than taking them one at a time. If you personally do, then I will apologize to you personally. If you prefer not to say one way or the other, note that I did not claim you fall into that class, just that in my experience, that's largely true of people in general.
- X suffers no performance problems: as a flat claim, that is purely false, you are right, but the topic is nontrivial. Does it suffer fewer performance problems than people usually think? Yes. Does it depend on details? Yes. Are there some really fast X11 apps on some platforms? Yes. Does X11 always win speed contests? Hell no.
- No X11 platform is as fast as the fastest Windows platform at graphics: this sort of thing obviously changes over time. Once upon a time, the SGI workstations certainly outperformed all Windows platforms, and they integrated IRIS-GL with X11. This year, I really don't know, you may be right. No doubt you're right when it comes to game performance. Is it true for scientific visualization using OpenGL on all Unix workstations? I don't know, but that seems less certain (and OpenGL, note, is an X11 extension in some cases, not always just a complete alternative, which is one of the complexities of the topic).
- What is the alternative for Unix workstation users? If you count Mac OsX as Unix, then you can see there are alternatives. If you don't count OsX as Unix, then the lack of alternatives is attributable to the high barrier to entry that discourages independent open source developments, and to the need to maintain compatibility with older apps. If you refer to Sun, HP, and SGI engineering workstations, those are a disappearing species precisely because Wintel provides more horsepower, and cheaper horsepower at that; X11 may be part of the problem, but it can be argued it's not the worst of the many problems killing the high-end Unix workstation species.
- Mac OS X as an alternative for Unix workstation users: A nice one, if switching to Apple hardware is an option, but you can't just ignore e.g. all of the Sun workstations in the world. The point is that, unless one is willing to switch to PC or Apple hardware, the Unix workstations have just one primary graphics solution, X11.
- You can keep your opinion that X is nice and good all you want, but to classify the people who think the opposite into the stupid categories you put above is trolling.
- I thought it was the other guy you accused of trolling, but in any case, I'm trying to say that people who don't understand the X11 protocol will be "talking past" those who do, was the way I put it, and that's not trolling, that's just common sense.
- If you rephrased each of your sharp critiques into a simple statement of the issue, rather than insisting that you had the final word on the topic, then each could be addressed one at a time, as I just attempted to do. -- Doug
I was unclear about one of my major points in the above: this page's title includes the word "protocol", but there's extremely little discussion of X11's protocol per se here, and what little discussion there is, is not only wrong but also essentially completely uninformed.
- Sounds like we need to talk about specific XwindowServer implementations.
The rest of the discussion is about things that are not necessarily related to X11's protocol at all (except that some have expressed the clear but unsubstantiated opinion that perhaps it shouldn't even exist). There's a lot concerning the inappropriateness of X11 for top-speed, highest-quality 3D games, and that is partially but not completely true, since those who say so are apparently unaware of the direct rendering interface; any criticism of X11 in this area should single that out directly for a technical critique.
Similarly there seems to be a misunderstanding of OpenGl versus X11, whereas actually it has been incorporated into X11. So has e.g. DisplayPostscript (which does indeed have some very nice features, and like Tom, I was sad when Sun's NeWS lost the marketing war). Etc., etc., etc. X11 has a vast number of extensions, and certainly has not stood still for 20 years.
As I keep saying, X11 does indeed have problems (from the start, I've always been unhappy with the sheer size and also clumsiness of the overall API), but it also has strengths that are misunderstood by non-X11 programmers.
And then finally, even if it did completely suck, there is no complete replacement for it suitable for the same range of purposes on Unix/Linux systems, and telling people to change their hardware platforms and OSes just doesn't cut it; on those platforms, it will not be dead unless and until there is an alternative.
Note that one of X11's strengths is that it is uniquely platform independent, and that is not similarly true of any of X11's competitors on any platform. To miss this point is to miss the entire point of X11 from the beginning of time. -- Doug
I know little of the internals of X11. My opinions of it have been formed mostly as an end user. Every instance I've worked with has seemed slow and clunky compared to other window systems on similar or identical hardware. It's hard not to conclude that the fault lies in X itself and not specific implementations or uses of X. Can anyone point to modern X implementations that don't suffer by comparison with Windows? -- EricHodges
I'm currently running Suse 9.3 on an AMD Athlon 64 of respectable-but-not-great speed. X runs fine. KDE runs fine as well; my only complaint is that OpenOffice takes forever to start up; which I suspect has little to do with X. Unfortunately, X often gets blamed for things which are not really its fault (such as slow load times for applications, poorly-tuned virtual memory systems, inefficient dynamic loaders, etc). The Linux community (and this means both the KDE and Gnome camps, as well as those working on distros, the Gnu toolchain and the kernel itself) could do quite a bit to improve the performance of interactive GUIs on Linux. I don't think that scrapping X itself is the answer. At any rate, the situation has improved quite a bit in the last couple of years; there's a night and day difference between my previous PC and my current one. Part of that may be MooresLaw; but part of that is improvements on the SW side. Of course, all of this is anecdotal evidence and speculation; but your experiences are also anecdotal. (Which doesn't make them invalid; when a customer complains it isn't good to tell him "but 80% of customers are happy; what's wrong with you?")
Is that XFree86? What graphics card? How does it compare to Windows? Ignore start up times and look at basic responsiveness. I agree that X is "fine" compared to X a few years ago, but it never seems "fine" compared to Windows today. -- EH
[At the moment, I'm using an ancient HP OmniBook XE2 with a PII 400. I have Fedora Core 4 on one partition and XP Pro on another. I mainly use it to do 'net things or program in Java using Eclipse, so I can use either XP or FC4. I have no particular operating system bias -- as far as I'm concerned, from an OS architecture point of view the similarities between XP and Linux far outweigh the differences. Yet I prefer to use FC4 for two reasons: (1) The vast gaggle of utilities, applications and goodies that come pre-installed with FC4 make it worth enduring the relatively slow start-up time and sluggish OpenOffice load-time. (2) The fonts look better under FC4. The difference in graphics performance between XP and FC4, on the built-in Silicon Motion Lynx EM, is not sufficiently different for me to notice, let alone make me choose one OS over the other. -- DaveVoorhis]
Interesting. Do you notice a difference between Java Swing apps and native Windows apps? -- EH
I'd like to hear about that, too. Also, I'm surprised that fonts in general look better under FC4; I'm aware that they could, but historically there are issues about which particular fonts are included in a distribution, so I'd like to hear more about that, too. -- Doug
[Java Swing apps are noticeably sluggish compared to native Windows apps and, for example, GTK apps under FC4. I wish I could recall which RedHat/Fedora release significantly improved the font quality, but I can't. It's been there for a while, for some unknown value of "a while". If you want to see what I see, go to http://shark.armchair.mb.ca/~dave/FC4Desktop/ and tell me whether that's better, worse, or the same as XP. Maybe I've simply gotten used to it. -- DaveVoorhis]
See also: RemoteGuiProtocols LetsBlowUpTheUniverse