An operating system is interposed between applications and the physical hardware. Therefore, its structure has a dramatic impact on the performance and the scope of applications that can be built on it. Since its inception, the field of operating systems has been attempting to identify an appropriate structure: previous attempts include the familiar monolithic and MicroKernel operating systems as well as more exotic language-based and VirtualMachine operating systems. ExoKernels dramatically depart from this previous work. An ExoKernel eliminates the notion that an operating system should provide abstractions on which applications are built. Instead, it concentrates solely on securely multiplexing the raw hardware: from basic hardware primitives, application-level libraries and servers can directly implement traditional operating system abstractions, specialized for appropriateness and speed.
See the MIT slide presentation (http://pdos.lcs.mit.edu/exo/exo-slides/) to navigate through all 45 slides on the subject.
Also see the paper "Exokernel: An Operating System Architecture for Application-Level Resource Management" (1995) by Dawson R. Engler, M. Frans Kaashoek, and James O'Toole Jr. (http://citeseer.nj.nec.com/engler95exokernel.html).
Exokernels are not so far-fetched today: for instance, Xen is a kind of exokernel.
Almost every single response on this page would have been avoided if people had just read the paper.
So, how does this differ from a MicroKernel?
A micro-kernel gives you abstractions to the hardware; you only contact the kernel, which usually gives you files, sockets, graphics etc. An exokernel gives you almost raw access to the hardware.
I was under the impression that such abstractions were provided by "application-level libraries and servers" in the wonderful world of micro-kernels as well... The only real difference I can turn up from the above linked presentation is http://pdos.lcs.mit.edu/exo/exo-slides/sld015.htm -- apparently, while micro-kernels traditionally use some sort of InterProcessCommunication for apps to talk to the servers providing the abstraction layers, in ExoKernel land one is expected to put everything (or just as much as you need) into a library which is called in-process, thus avoiding messaging / context-switching overhead. So it's not so much a difference in the kernel as in how userland is organized. Right?
Well, I'd say it's both. The notion of putting the "kernel" into a library to be directly linked into applications seems to me a fairly radical departure from the typical concept of "kernel", while also strongly blurring the distinction between "user space" and "kernel space". Side question - what ramifications does this have for security?
Precisely none. Because excluding some trivial performance issues, the difference between kernel code and user libraries means nothing. Nothing prevents a fat kernel from securely providing dynamic access to kernel space, making kernel functions as easy to override as user libraries. So if exokernel design amounts to no more than moving the kernel to user libraries, then it's all just pointless sleight of hand.
Fortunately, that's not all it amounts to because there is a powerful and far-reaching principle of system design involved. Unfortunately, the exokernel team is too blinded by the endless minutiae of implementation details to grasp the insight they've made, so you can't exactly expect them to be able to express it very well.
The crucial insight is to separate abstraction from secure multiplexing in different layers, and to make sure that beneath every abstraction layer there exists a multiplexing layer. This principle can be seen in well-designed high-level software like PlanNine's window manager; rio performs nothing but secure multiplexing and as a result is recursive.
On one hand, PlanNine's rio shows that good systems design has nothing to do with kernel versus libraries. On the other hand, the fact that this fundamental principle of good design is so seldom used shows just how novel and important it is.
Finally, in addition to botching the "lessons learned" from exokernels, the team involved made a further mistake when they wrote that exokernels do nothing but "secure multiplexing and arbitration" since arbitration is a type of abstraction.
How does the ExoKernel concept differ from a virtualizer such as IBM VM/370 (of VM/CMS infamy) or VmWare?
Excellent question. To the team that came up with the concept, there is no difference between an exokernel and something like XenVirtualMachineMonitor?. To me, there's a very large difference.
That's because the team who came up with it is under the severely mistaken impression that a kernel is an operating system and that libraries aren't de facto parts of the operating system. Of course, this is false. As a result, an exokernel OS must stick to the exokernel principle (the separation of abstraction from secure multiplexing) at all levels of the OS. A layer can perform abstraction, or it can perform secure multiplexing, but never both.
An exokernel also mixes up abstraction layers by putting everything into libraries. Basic functions such as process management, address space management, context switches and interprocess communication are all performed cooperatively, each process using a library to implement the functionality for its children. This kind of distribution makes it impossible to say that the functionality exists in a distinct layer.
One critique of the ExoKernel
I wrote this critique of the exokernel concept some years ago, based on what I knew of them at the time. I have been since told that most of the issues I bring up are addressed in current exokernel designs, but was never given any details. I would be quite interested in any rebuttals. -- JayOsako
[2008 Update: I have since learned more about IPC and shared memory than I had known at the time when I wrote this (2002, roughly two years before I re-posted it here), and can see many of the errors I made in this critique. SamuelFalvo?'s comments are actually quite useful, though I still am puzzled by certain things, as my newer comments will show. -- JayOsako]
Responses to Jay's update will appear in thread-mode, at the bottom of this page, because it's getting to be a bit messy. In light of the discussions and my position on exokernels, I refrain from otherwise refactoring the page due to potential allegations of biasing the refactoring. -- SamuelFalvo?
It appears to me that the exokernel as it's normally presented is aimed at a false economy - most applications don't exist in a vacuum, especially in PCs and workstations. A disk manager for a database (so the theory goes) should be different from one for a hypertext or a mail server - except that you will inevitably need to access the database records from the hypertext manager, or import a hypertext chunk into a mail message, or what have you. In an exokernel design, this can only be done by either
- providing either IPC or library sharing in the exokernel, which directly contradicts the goal of eliminating all non-essential kernel-level access, (If you're using libraries, the kernel is 100% taken out of the picture. A single JSR instruction direct to the code in question involves, last I checked, no system call intervention. -- SamuelFalvo?)
- This highlights one of my main misunderstandings at the time I wrote this: I could not see how userland shared libraries could work, because I wasn't familiar enough with certain aspects of shared memory between protected processes. I had had the bizarre notion that shared libraries were, in effect, separate processes which the caller would need to IPC to. In retrospect, I can't see how I thought that, even given my limited knowledge at the time. -- JayOsako
- requiring all libraries to include the needed IPC and translation code, a ridiculous waste of disk and memory space, (Then you find libc (IPC with the kernel, plus a buttload of convenience functions) and Xlib (IPC with the X11 server, plus a buttload of convenience functions) to be a gratuitous waste of disk space? -- SamuelFalvo?)
- requiring the libraries to contain a version of the code to access the shared resources directly, an even greater waste of space. (You are aware that shared libraries are invented and used for saving space (both disk and core), right? -- SamuelFalvo?)
Since the latter is the design concept for exokernels in general (as I understand it), we see that, while the code for accessing, say, a database, may run at an order of magnitude faster and take up an order of magnitude less space, the size of the code grows exponentially with each application run on the system as a whole (since each application needs not only the code for the application, but also the optimized - and therefore unshared - code for the system, and the code for accessing and communicating with all other applications which the app shares data with or may share data with in the future). (This is patently false. You are using, right this very moment, an operating system that already employs some exokernel-like concepts. Windows, Linux, BSD, OS/2, it doesn't matter; all these OSes require client-server architecture when a single service must be provided to a large number of processes. This requires IPC, and that means proxy and stub libraries (to use CORBA terminology). And those libraries consume a ridiculously small space on disk, and when loaded into RAM, are loaded precisely once. Are you seriously trying to convince me that you are still statically linking everything on your computer? -- SamuelFalvo?)
- To clarify: my concern was primarily with the unshared application-specific driver code, which I had (erroneously, I now think) assumed would be the bulk of the application's system-level code. I was also significantly overestimating the size of said code: I was envisioning the equivalent of dozens of vmlinuz-sized libraries, one for each application! In any case, this objection has been overtaken by events: many, if not most, major applications these days have significantly larger memory footprints than the operating systems they run under do. Come to think of it, that would have been true in 2002, as well. -- JayOsako
Furthermore, the 'economy' it offers is of little value in a single-user system. Virtually every process in a modern workstation is 'user-event bound'; that is to say, the computer does very, very little except for those periods when the user requests something directly. Much of the inefficiency of commercial OSes today, ironically enough, comes from poor usage of this 'free time'. Many of the housekeeping and optimization tasks that currently require operator intervention could easily be automated and run as background tasks, in an OS designed to allow such. However, an exokernel design would not only make such services impossible (extraordinary claims require extraordinary evidence. -- SamuelFalvo?)
, it would also make many of the tasks currently handled as background tasks - such as mail and print spooling - much less efficient. Distributed processing could also fill such slack time, but again, the exo design would make such operations unmanageable. (Since you've never used an exokernel-based system before, you have no basis on which to make these statements. This is FUD. -- SamuelFalvo?)
- I think that the problem I saw - and to some extent still see - is that you would still need some mechanism for scheduling timed events, and for multiplexing hardware other than the CPU, such as a printer. While I was certainly wrong about it being impossible, it seemed to me that the services in question would, in effect, become the equivalent of an operating system from the perspective of both users and processes. What I now realize is that this objection wasn't relevant; I was confusing the idea of 'no kernel' with the idea of 'no operating system', and was at a loss as to how that would be possible for a general-purpose system with memory-protected processes. Given that I already knew that most micro-kernel operating systems handle most such things in userland, I am not sure why I couldn't see this as the logical extension of the concept. -- JayOsako
Servers face different problems - they are usually network- or IPC-bound, and thus have to have very rapid communication service - but here, too, virtually everything depends on trap/interrupt services, which on most platforms are always kernel level, and thus are the slowest operations on an exokernel (which must make two level changes, one to handle the interrupt and one to pass the result back to the 'unsecure driver', even before it is handled - admittedly, this is a platform-dependent issue, and microkernel designs suffer the same flaw). (Actually, only the notification that an interrupt occurred takes this route; and this need only be flagged precisely once. The driver software, which runs in userspace and has direct access to the relevant hardware registers, suffers no performance penalty. There is a latency from the interrupt to the first packet, but subsequent packets are handled even faster than in a traditional kernel. -- SamuelFalvo?)
This is less of a problem than in other areas, so it may be that dedicated servers would be a suitable application for the exokernel design, but it is still a serious flaw.
Don't look at me. I never gave a damn about the supposed performance boost. And I never put the kernel's boundary at the edge of kernel space. I only cared about the exokernel principle because it's elegant, self-evident and extremely flexible. As a result, I'm far more worried about the question of whether multiplexing a sensitive resource means that you expose other users of it or whether you abstract away the other users.
My response to your critique would be that it's simply irrelevant to anything I'm concerned about. -- RK
I'm with you RK, practical considerations shouldn't limit the exploration of principles ... but it is always relevant whether a principle delivers in practice. In theory, there is no difference between theory and practice. In practice, there is. Musings like JayOsako's critique above draw this difference to the fore and ensure exploration is undertaken without bloated expectations. For example, the UniversalTuringMachine is elegant, self-evident, flexible, and a good choice for exploring ideas about computation, but not a good basis for a working system. I tend not to be worried about whether a concept will deliver in practice, but it is relevant. -- SH
This page is quite dated and I abandoned a recursive process architecture several months ago. It turned out to be extremely inelegant even in principle. The idea was a darling and I followed KillYourDarlings.
The UTM concept is weak and powerless from an interaction design point of view, so it's a bad example. I suspect you wanted to say that implementation constraints can matter?
Thanks for the response though. Who is SH though? -- RK
Following the ExoKernel principle, there must be a multiplexer at every layer of abstraction.
This is a gross misunderstanding of the ExoKernel concept. Talking of 'abstraction layers' with regard to an ExoKernel system is an oxymoron - it is not about abstracting the operating system, it's about eliminating it (hence the synonymous term 'no-kernel system'). The whole point of the ExoKernel design is to have no abstractions, so that each program can have its own specially-tailored low-level software running on the bare metal (modulo the multiplexing). The issue of sharing libraries was always a secondary consideration stipulated for pragmatic reasons; they were not a necessary part of the idea at all, and in principle each program should be written as much as possible from scratch, with the libraries only used for those aspects which did not impact performance. The only goal - the only one - of the ExoKernel design is to micro-optimize hardware usage, a design approach you have explicitly rejected.
This approach worked really well on the Commodore 64.
- Are you sure you aren't confused with the Amiga?
While your 'multiplexed abstraction layers' concept may have been inspired by the ExoKernel concept, it is decidedly not an ExoKernel in the classic sense; for all I know, it is a genuinely new idea, though I would want to check the literature before coming to that conclusion. You would be wise to find another name for it, or else those familiar with the MIT work may be confused. -- JayOsako

Leaving this off before LaynesLaw eats this page.
Okay, I have to ask... An exo-kernel more or less exposes bare hardware, okay. So... how is any program written for an exo-kernel supposed to be portable? Does that burden fall to the higher-level libraries? But the whole advantage of an exokernel is supposedly the ability to poke at the bare hardware when necessary. I suppose the solution would be to write your program able to do both, interfacing directly with the hardware on processors where it knows how, and using a library otherwise. Yes? -- AnonymousDonor
It's portable the same way POSIX is portable: by specifying a library interface instead of a system call interface. You write your application to link against a specific libOS. That libOS provides the familiar process/paging/filesystem/IPC/syscall interfaces that applications expect. When you switch hardware platforms, you rewrite the libOS, but the interface remains the same, so applications don't even know it's changed. Applications usually would talk to a "standard" libOS that looks like Linux or FreeBSD or Windows, but have the option of replacing it with a custom interface if the existing abstractions get in the way. For example, a database has no need for a filesystem, because its view of the world consists of database tables, which can be mapped more efficiently to disk blocks. So it would replace the filesystem libOS and provide its own, which would need to be ported to each different hardware platform. And yet you could use that database on the same machine as a webserver that implements its own process library, a game that writes directly to the video hardware, and a UNIX app that's talking to the exokernel equivalent of Cygwin. -- JonathanTang
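The libOS arrangement described above can be sketched in a few lines. This is a conceptual illustration, not real exokernel code: every name in it (FilesystemLibOS, DatabaseLibOS, read_record) is invented for the example, and real libOSes are linked C libraries rather than Python classes.

```python
# Sketch: two hypothetical "libOS" implementations behind one interface.
# The application links against the interface; only the libOS changes
# when the underlying storage strategy (or hardware platform) changes.

class LibOS:
    """Interface an application links against; the exokernel sees only raw blocks."""
    def read_record(self, key):
        raise NotImplementedError

class FilesystemLibOS(LibOS):
    """Implements records on top of a conventional file abstraction."""
    def __init__(self, files):
        self.files = files
    def read_record(self, key):
        return self.files[key]

class DatabaseLibOS(LibOS):
    """Maps records straight to 'disk blocks', bypassing any filesystem."""
    def __init__(self, blocks, index):
        self.blocks, self.index = blocks, index
    def read_record(self, key):
        return self.blocks[self.index[key]]

def app(libos):
    # The application code is unchanged regardless of which libOS it links against.
    return libos.read_record("users")

fs = FilesystemLibOS({"users": b"alice"})
db = DatabaseLibOS([b"alice", b"bob"], {"users": 0})
assert app(fs) == app(db) == b"alice"
```

The point of the sketch is only that portability lives at the library interface: swap the libOS, keep the application.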
1. What happens when app1 thinks the disk is ntfs, whereas app2 thinks that the disk is extfs2? (An application can never do that, because the exokernel provides the multiplexing services that prevent mutual corruption. What an application does on its own "view" of the disk, however, is entirely up to the application. -- SamuelFalvo?)
2. What happens if app1 listens on IP port 4000 and app2 listens on IP port 5000? Who's in charge of the multiplexing and arbitration?
As far as I remember, the Exokernel answer was a hand-waving along the lines 'one-machine-one-app'. Please correct me if I'm wrong. -- cristipp
1. They can't write to the same (virtual) disk. The job of the OS is to multiplex the hardware: hence, it assigns certain disk blocks to libOS ntfs, and assigns other disk blocks to libOS extfs2. If applications share a libOS they might be able to write to the same filesystem; otherwise, they would need to link against both libOSes to manually transfer data. It's as if the two apps had access to different partitions.
2. As I understand it, the exokernel would only export the hardware interface to the network card, and the TCP/IP libOS would be responsible for multiplexing ports (and creating the TCP stack abstraction, for that matter). I'm still unclear how the kernel resolves conflicts between different networking libOSes trying to access the NIC. Probably through the timeslicing mechanism: each process gets the use of the NIC turning the timeslices it's signed up for. But then how does buffering work, and what happens if a packet arrives when the program that wants it is descheduled? -- JonathanTang (This is done by exokernel-level filters. If a Unix app is listening on port 4000, then the Unix libOS installs a filter that matches Ethernet packets with an IP and TCP header addressing port 4000. The libOS therefore sees ONLY those packets it has filters for. Other libOSes install their own filters. This is precisely how AmigaOS handled raw user input events, and it works wonderfully. -- SamuelFalvo?)
What I'm trying to get to is that Exokernel does not give any service that one expects from a (multitasking/multiuser) OS. If every (potentially malicious) app needs its own hardware resources that can't be shared, then there is not a practical thing that can be done with it, not even a mere cat | grep bogus_claims. Exokernel is little but a stretched overgeneralisation of a cute memory management scheme. Given that memory pages are not usually shared, it works pretty well for that. -- AnonymousDonor
I suggest you check out Xok (http://pdos.csail.mit.edu/exo/). They've built an actual exokernel for x86, along with a libOS that mimics UNIX. I haven't tried it myself (I'm worried about drivers...Linux on laptops is hard enough, let alone Xok), but according to them, gcc/Perl/Apache/tcsh/telnet all work unmodified on it. -- JonathanTang
Hmmm, assuming a single user and memory protection (which exokernel gives), then it is almost as good as a regular OS. What it lacks is a serious mechanism to protect anything but the memory, thus a malicious/buggy app has ample space to completely trash all the data on the machine. I'd rather pay 5% performance penalty and go with SELinux in an attempt to keep my data healthy. (False!! Where did this conclusion come from, in light of all the evidence to the contrary? Please re-read the top portions of this page. The whole purpose for an exokernel is safe multiplexing of resources. To reuse the X11 motto (itself a good example of a "graphical exokernel"), "mechanism, not policy." -- SamuelFalvo?)
Some more questions, just to clear up my remaining gaps in understanding about the process/philosophy:
- Can two apps using two different libOSes interact at all? I suppose libOSes could adhere to some kind of standard about sharing a common disk drive space to pass messages (or not?). (RAM is more likely; why use disks? They're slow. Exokernel applications can share RAM just as easily as they can protect them. Using this RAM buffer approach, you're emulating a virtual network of sorts. Since you're essentially emulating a network, you can even use a finite sized ring buffer for message passing between libOSes, treating messages as datagrams. -- SamuelFalvo?)
- Is it expected that users will run applications linked to different libOSes concurrently? I'm trying not to ask about UI issues, as that is obviously far from the kernel, but if there is no way for different libOSes to communicate it would probably mean that users would be locked into a specific group of applications all linked to the same libOS chosen at boot. Which really isn't any different from how my laptop (for example) works right now. (Does IBM's VM handle this at all? Can someone switch from a CMS (or something) guest OS to a Linux (or something) guest OS on the fly?)
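The shared-RAM ring buffer suggested above for passing datagrams between libOSes might look something like this. It's a conceptual sketch only: in a real exokernel the buffer would live in a physical page mapped into both address spaces, whereas here a plain Python list stands in for that shared page.

```python
# A minimal fixed-size ring buffer of "datagrams" for inter-libOS
# messaging over shared RAM. Like UDP over the emulated "virtual
# network", a send into a full ring simply drops the datagram.

class DatagramRing:
    def __init__(self, slots):
        self.buf = [None] * slots
        self.head = self.tail = 0
        self.slots = slots

    def send(self, msg):
        if (self.tail + 1) % self.slots == self.head:
            return False                    # ring full: datagram dropped
        self.buf[self.tail] = msg
        self.tail = (self.tail + 1) % self.slots
        return True

    def recv(self):
        if self.head == self.tail:
            return None                     # ring empty
        msg, self.buf[self.head] = self.buf[self.head], None
        self.head = (self.head + 1) % self.slots
        return msg

ring = DatagramRing(4)                      # 3 usable slots; 1 marks "full"
assert ring.send(b"hello") and ring.send(b"world")
assert ring.recv() == b"hello"
```

Because the producer only advances the tail and the consumer only advances the head, two libOSes can use such a ring without any kernel-mediated locking, which is what makes it attractive for this setting.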
I read the original description of the OperatingSystem and I thought "minimalist", but that isn't the case. It's for multiplexing hardware really, really, really well, which is quite complicated, so the system itself is naturally complicated in order to do it.

I briefly read some of the documentation of Xok. As far as I can tell, in order not to "abstract" away access to certain resources, one of the things they did is place a LittleLanguage in the exokernel for some kinds of resource handling, so that part of the OperatingSystem itself is a program submitted to the kernel by a user-level program. -- JesseMillikan
This is largely correct, but the language doesn't have to be interpreted (and you'd be insane to interpret it). To reuse an example from earlier in this page, network filters can be defined as longest-match AND/XOR matching. The idea is, a list of filters exists, where each list node is a (process, AND mask, XOR mask) tuple. If a network packet comes in, you AND the packet contents with the AND-mask, then XOR it with the XOR-mask. If the result consists entirely of zeros, then the filter must match, and the packet must be of interest to the process stated. A bytecode is more flexible than this "declarative"-style system, but it shows the range of possibilities available to the exokernel implementer. -- SamuelFalvo?
Servers face different problems - they are usually network- or IPC-bound, and thus have to have very rapid communication service - but here, too, virtually everything depends on trap/interrupt services, which on most platforms are always kernel level, and thus are the slowest operations on an exokernel (which must make two level changes, one to handle the interrupt and one to pass the result back to the 'unsecure driver', even before it is handled - admittedly, this is a platform-dependent issue, and microkernel designs suffer the same flaw).
I must not have seen this before, but I'd like to respond to it now. The accusation you raise demonstrably applies to microkernels due to their dogmatic approach that everything relevant to the functioning system be handled by user-space applications. However, if the microkernel in question did not transcend user/kernel spaces, would you still consider the process of "pinging a driver process" flawed? Remember that just because a peripheral flags an interrupt doesn't mean its driver possesses the highest priority on the system. Given the choice between an IDE driver and an RS-232 driver, the IDE driver clearly has the lower priority, because the RS-232 device may actually operate interactively with a user.
An exokernel pushes no such dogma, however. I like to use the analogy with X11 because I find accuracy in it.

X11 provides the means by which a GUI is constructed. It does not enforce particular policies on this GUI, however. For example, no rule explicating the necessity for a menu bar across the top of the screen exists. While it does place special significance on top-level windows, it does not dictate that these windows need titlebars, resize gadgets, depth-arrangement capabilities, or even any visual indication that they exist as such at all. The responsibility for all this gets pushed onto third-party packages called window managers. The widget libraries and (optionally) desktop environment you use dictate the policies of the GUI you use.
Similarly, the exokernel concept provides no equivalent to policy-setting. Exokernels exist solely to multiplex hardware safely, and an exokernel is free to employ any means necessary to make good on its policy-free "policy." This is why you'll often see such dirty tricks as dynamically downloaded chunks of code.
Taking interrupts for an example, suppose we run multiple virtual OSes on the exokernel environment. They'll all need network access, so they obviously express an interest in the Ethernet card's interrupt signal. But how does this happen safely?
The best approach that I can think of, which I use in my OpenAX.25 program, and which is used by Xok, seems to involve a system of uploadable predicates.
These predicates can, once uploaded, exist in a data-driven form (as in my OpenAX.25 program), or the exokernel can compile them into native code that runs directly at the highest processor privilege, after performing sanity checks (like Xok does).
Xok uses this approach for managing competing filesystem access on a disk too. Although disk management is substantially more sophisticated than network management, it is nonetheless doable with stellar performance levels. The results of running a webserver under Xok indicated over 8x the I/O throughput of a typical web server on a monolithic kernel environment, including when accessing many small files. This strongly suggests the method is viable and produces superlative performance.
The reader may now think that these techniques can just as easily appear in traditional microkernel or monolithic environments as well. Yes, they absolutely can, and with sufficient refactoring of the services involved, your code will naturally evolve towards an exokernel environment. When I started to write OpenAX.25, I didn't set out to reproduce Xok's network infrastructure at all. The overhead involved with using the BSD sockets API to perform IPC between the network's core and its applications forced me to this architecture. That, and TestDrivenDevelopment.
With all that having been said, let's observe that most peripherals have been marketed for operation with operating systems such as Windows or Unix, where the "interrupt, pulse a process, then return" pattern appears with amazing fidelity. It just so happens that on these operating systems, the process being pulsed may exist as a kernel process rather than a user process. But it's still a process all the same. Therefore, peripherals tend to contain some built-in tolerances for scheduling latency, and the flaws you observe lack the significance you ascribe to them.
I'd also like to respond to your comment that servers might enjoy the benefits of exokernels, but not user workstations. Oddly, servers do not appear to benefit the most from exokernel or microkernel architectures, because their dynamism is lost in the data center environment. Configuration changes of hardware almost universally accompany bringing the server down for maintenance (the sole exception I can think of being big-iron mainframes). Therefore, monolithic operating systems, such as Unix and post-Windows NT 3.1 systems, thrive in these environments because of their "set it and forget it" construction. Wise service providers will have backup servers online when configuration management needs to occur on a given server, so customers are never the wiser. Efficiency losses in the operating system tend to get amortized across multiple servers. Administrators find it cheaper to just use more machines to cover their losses (which aids in redundancy anyway) than to switch to an exotic new OS technology like exokernels, despite their potential performance advantages.
Astonishingly, the user's own workstation benefits the most from exokernel/microkernel designs, where reconfiguring the computer occurs seemingly on a daily basis. Back in the mid-80s, this occurred every time you inserted or removed a floppy disk. Today, it appears with the removal or insertion of CDs, USB devices, Firewire peripherals, etc. Some server-grade equipment even supports hot-swapping PCI-Express devices. In short, today's user workstation environment easily lays claim to the most monolithic-hostile environment yet experienced. You'd think that we'd have the problem utterly licked at this point, but we don't. Linux and Windows systems both need explicit instruction to unmount USB storage devices, for example. Unlike floppies, you cannot just pop the thing out. If a flaw exists in anything, this is it, because I can't tell you how many times I've had the contents of a USB storage device corrupted by someone accidentally pulling the USB stick out while I wasn't looking. ("Oh, you weren't using this, were you?")
Compare today's environmental sloths to the ever-agile operating systems of yesteryear, such as AmigaOS or BeOS. These systems clearly demonstrate a degree of environmental agility where these kinds of problems are reduced, if not eliminated. How do they do it? AmigaOS quite explicitly takes the form of a microkernel, and never once did you have to drag a disk icon to a trashcan to eject it. The worst you had to do was wait for five seconds, because that was the disk driver's flushback timeout. One must wonder why modern architectures do not follow a similar approach.
After observing the sheer difficulty Linux experiences in the desktop OS market, is it any wonder that even Linux systems now adopt a more dynamic architecture to deal with the user's experiences? With the increasing prevalence of hotswapping peripherals, the support for dynamic reconfiguration made possible by microkernel (or similar) architectures becomes utterly necessary. But in Linux, at least, perfection eludes all; while kernel modules make a nice substitute for exokernel-like downloadable code fragments, the udev environment stands in its way outright. The need to preserve the now-40-year-old device semantics, built for a monolithic kernel environment during the days of static, data-center-resident computers, interferes with the dynamism many users want from their PCs. Knowing that a PC can handle a dynamic environment, the fact that they don't (at least, not well) is an issue of user-space policy, not fundamental technology.
Example: Every time I upgrade udev (a process I have now come to associate with reinstalling the OS), I invariably lose my sound card. Why? Sometimes I'll even lose the mouse, along with assorted other USB peripherals. It very often takes hours or days of tweaking and Googling to find whatever new udevd configuration-syntax changes occurred, and I have even had cases where I've just given up, gone back to a prior version of udevd, restored its original configuration files, and been left without access to whatever device I was trying to reach in the first place.
"It's a bug in the udev, so you should file a bug report with them!" While technically correct, this only addresses the symptom, and not the cause. No, it's a bug in the architecture
that gave birth to udev in the first place. No other operating system on the face of this planet has a contraption as remotely RubeGoldberg as Linux's udev system.
I'm not saying this because I hate Linux; I'm actually saying it because I like Linux, and don't want to see it fall apart on me. All because people, logically trying to add support for a more dynamic computing experience, attempt to make Linux's Unix architecture do things it was never designed to do, by imposing a new policy on top of an old policy.
All that complexity would utterly disappear if drivers were just plain, normal, everyday user-level processes that a program not unlike inetd would kick off automatically. I recognize that this is more or less what udevd tries hard to mimic, but it fails miserably at it.
Anyway, hopefully I've responded to the technical issues surrounding your concerns, as well as the concept of what is and is not flawed about the architecture.
Since someone seems hell-bent on including this big ball of non sequitur, I now must address it point by point. I apologize to the other readers who are trying to actually learn something new about the concept. I emphasize new, because nothing disclosed herein isn't already addressed elsewhere.
writes and SamuelFalvo?
The whole separation of UserSpace and KernelSpace? was created initially to address issues arising from the lack of safety in the programming languages of the day.
Non sequitur. We already know this.
I.e., programmers kept trashing other processes and the kernel like a bunch of evil, hungry little gremlins with sharp little pointers with which they could take bytes out of everything.
What is your point?
In a computing environment where safety was enforced at the language level (where the base language the OS accepts is higher-level, and the OS safety-checks it, compiles it down, signs, and caches the binaries), the need for this separation of spaces wouldn't be nearly so great. User and kernel processes alike could simply run as communicating services, accessing hardware via libraries of functions that might well be inlined and assembled directly into the binaries, and even use shared memory and garbage collection for message passing (after all, LanguageIsAnOs).
Like the OberonSystem
. I'm already well aware of this, as is most anyone reading this page. What is your point?
But I don't believe the ExoKernel will ever succeed in the user/desktop market until it has such a firm basis for safety.
What is your point?
An exokernel has been demonstrated. It is as safe as any other existing microkernel architecture, which is to say, extremely safe, because it, like a microkernel, exercises the memory management unit of the system. It is not possible to employ an exokernel on a system without an MMU; it just degenerates into a glorified library, since the fundamental requirement, that of safe multiplexing of hardware, is not enforceable. Now, if your applications choose to abuse the memory management facilities, then sure, you can cause memory corruption, but only among other applications engaging in similar skulduggery.
The exokernel will not permit your faulty application to interfere with otherwise sanely-written applications. Contrast this with Linux, the world's favorite underdog, where a sufficiently privileged process can mmap() /dev/kmem and wreak all sorts of havoc on an otherwise stable system.
BTW, an exokernel application will 99% of the time be dynamically linked against a shared object containing what you'd normally think of as an operating system; hence the term "libOS". So, you could port a Linux application to an exokernel environment by recompiling it and linking it against liblinux.so or some such. (Provided such a library existed, of course!) Knowing that this is the case, it should be patent that exokernels support the LanguageIsAnOs philosophy far better than any existing OS technology (including microkernels).
So, I have to ask, did you read the link at the top of the page? Did you read any of the comments on this page? Have you read any of the research papers on exokernels?
The same problems that drove us towards separating UserSpace and KernelSpace? still exist today, and there is a lot of code written in C and other languages with sharp little pointers.
Yes, and I note that all applications written for exokernels are written in C. And that Xok itself is written in ... C. And that the safety concerns you raise are no worse than any other OS written in ... C.
Again, what is your point?
I deleted this chunk of non sequitur because not one statement you made is unaddressed elsewhere on this page, or on the linked-to page on exokernels. As such, it was just useless clutter.
Now that I have wasted everyone else's time, do you have something more to contribute, or are we just going to flog a dead horse?
BTW, I'm not sure what you were trying to get at with the constant emphasis on pointers. I'd like to point out that Oberon exposes pointers to the programmer, yet is still safe. Uncertainties only come into play when you import the SYSTEM module, and even then you can't do pointer arithmetic without consciously writing some incredibly ugly code to do it. In other words, it's impossible to accidentally write unsafe code in Oberon, unlike in C.
Well, I found it useful, Sam. Then again, maybe I just don't want to read the original docs. -- CalebWakeman
Posit: An exokernel cannot exist (or is meaningless) outside a PerfectSystem