A revolutionary idea for tomorrow’s PCs

PCs are complex largely because of how their underlying hardware is organised. The consequences include difficulty in modifying or upgrading a PC, bloated operating systems and software stability issues. Is there an alternative that wouldn’t involve scrapping everything and starting over? I will describe one possible solution, along with both its benefits and drawbacks.

What (most) users want

What do you want from your computer? This is quite an open question, with a large number of possible answers. Computer users can generally be placed between two extremes. There are those who just want to type documents, write email, play games and look at porn: all the wonderful things modern PCs make possible. They want their computer to work like their TV/DVD/Hi-Fi: turn it on, use it, then turn it off. If something goes wrong they’re not interested in the why or how; they just want the problem resolved. At the other end of the scale there are those who love to tinker, to dig into the bowels of their machine for kicks, to eke out every last ounce of performance, or to learn. If something goes wrong they’ll read the documentation, roll up their sleeves and plunge back in.

Where do you as a user belong on this scale? As a Free Software Magazine reader you’re probably nearer the tinkerer end of the spectrum (and you’re probably also asking what any of this has to do with free software; bear with me). The majority of users, however, are nearer the indifferent end of the scale: they neither know nor care how a PC works. If their PC handles the tasks they perform and runs smoothly, all is well; if it doesn’t, they can either call up an IT helpline or twiddle their thumbs (usually both). It’s easy to dismiss such users, since many problems that occur on any system are well documented or have solutions easily found with a bit of googling. Is it fair to dismiss them? Sometimes, but more often than not you just can’t expect an average user to solve software or hardware problems themselves. The users in question are the silent majority: they have their own jobs to do and are unlikely to have the time, requisite knowledge or patience to troubleshoot; to them documentation, no matter how well indexed or prepared, is tedious and confusing. They don’t often change their settings and their desktops are crammed with icons. They are the target demographic of desktop operating systems. Ensnared by Windows, afraid of GNU/Linux, they were promised easy computing but have never quite seen it. Can an operating system appeal to these users without forcing everyone into the shackles of the lowest common denominator? Can developers make life easier for the less technically oriented, as well as themselves, without sacrificing freedom or power? Can we finally turn PCs into true consumer devices while retaining their awesome flexibility? I believe we can, and the answer lies (mostly) with GNU/Linux.

This article loosely describes an idealised PC, realised through changes in how the underlying hardware is organised, along with the consequences, both good and bad, of such a reorganisation. It is NOT a HOW-TO guide; you will not be able to build such a PC after reading this article (sorry to disappoint). The goal is to try to encourage some alternative thinking, and to explore whether we truly are being held back by the hardware.

The goal is to try to encourage some alternative thinking, and to explore whether we truly are being held back by the hardware

The problem with PCs and operating system evolution

So what are the problems with PCs that make them fundamentally difficult to use for the average person? The obvious answer would probably be the quality of software, including the operating system that is running on the PC. Quality of software covers many areas, from installation through everyday use to removal and upgrading, as well as the ability to modify the software and re-distribute it. A lot of software development also necessitates the re-use of software (shared libraries, DLLs, classes and whatnot), so the ability to re-use software is also an important measure of quality, albeit for a smaller number of users. Despite their different approaches, the major operating systems in use on the desktop today have all reached a similar level of usability for most tasks: they have the same features (at least from the user’s point of view), similar GUIs, and even the underlying features of the operating systems work in a similar manner. Some reasons for this apparent “convergence” are user demand, the programs people want to run, the GUI that works for most needs and the devices that users want to attach to their PC to unlock more performance or features. While this is evidently true for the “visible” portions of a PC (i.e. the applications), it doesn’t explain why there are so many underlying similarities at the core of the operating systems themselves. The answer to this, and by extension to the usability problem, lies in the hardware that the operating systems run on.

Am I saying the hardware is flawed? Absolutely not! I’m constantly amazed at the innovations being made in the speed, size and features of new hardware, and in the capability of the underlying architecture to accommodate these features. What I am saying is there is a fundamental problem in how hardware is organised within a PC, and that this problem leads to many, if not all, of the issues that prevent computers from being totally user friendly.

There is a fundamental problem in how hardware is organised within a PC, which leads to many, if not all, of the issues that prevent computers from being totally user friendly

The archetypal PC consists of a case and power supply. Within the case you have the parent board, which forms the basis of the hardware and defines the overall capabilities of the machine. Added to this you have the CPU, memory, storage devices and expansion cards (sound/graphics etc.). Outside of the case you’ve got a screen and a keyboard/mouse so the user can interact with the system. Cosmetic features aside, all desktop machines are fundamentally like this. The job of the operating system is to “move” data between all these various components in a meaningful and predictable way, while hiding the underlying complexity of the hardware. This forms the basis for all the software that will run on that operating system. The dominant desktop operating systems have all achieved this goal to a large extent. After all, you can run software on Windows, GNU/Linux or Mac OS without worrying too much about the hardware; they all do it differently, but the results are essentially the same. What am I complaining about then? Unfortunately, hiding the underlying complexity of the hardware is, in itself, a complex task: data whizzes around within a PC at unimaginable speed; new hardware standards are constantly evolving; and the demands of users (and their software) constantly create the need for new features within the operating system. The fact that an operating system is constantly “under attack” from both the underlying hardware and the demands of the software means these systems are never “finished” but are in a constant state of evolution. On the one hand this is a good thing: evolution prevents stagnation and has got us to where we are today (in computing terms, things have improved immeasurably over the 15 years or so that I’ve been using PCs). On the other hand, it does mean that writing operating systems, and the software that runs on them, will not get any easier. Until developers have the ability to write reliable software that interacts predictably with both the operating system/hardware and the other software on the system, PCs will remain an obtuse technology from the viewpoint of most users, regardless of which operating system they use.

How do operating systems even manage to keep up at all? The answer is they grow bigger. More code is added to accommodate features or fix problems (originating from either hardware, other operating system code or applications). Since their inception, all the major operating systems have grown considerably in the number of lines of source code. That they are progressively more stable is a testament to the skill and thoughtfulness of the many people that work on them. At the same time, they are now so complex, with so many interdependencies, that writing software for them isn’t really getting any easier, despite improvements in both coding techniques and the tools available.

Device drivers are a particularly problematic area within PCs. Drivers are software, usually written for a particular operating system, which directly control the various pieces of hardware on the PC. The operating system, in turn, controls access to the driver, allowing other software on the system to make use of the features provided by the hardware. Drivers are complicated and difficult to write, mainly because the hardware they drive is complicated, but also because you can’t account for all the interactions with all the other drivers in a system. To this day most stability problems in a PC derive from driver issues.

The devices themselves are also problematic from a usability standpoint. Despite advances, making any changes to your hardware is still a daunting prospect, even for those that love to tinker. Powering off, opening your case, replacing components, powering back on, detecting the device and installing drivers if necessary can turn into a nightmare, especially when compared to the ease of plugging in and using say, a new DVD player (though some people find this tough also). Even if things go smoothly there’s still the problem of unanticipated interactions down the line. If something goes wrong the original parts/drivers must be reinstated, and even this can lead to problems. In the worst case, the user may be left with an unbootable machine.

The way PCs are organised, with a parent board and loads of devices crammed into a single case, not only makes them difficult to understand and control physically, but also has the knock-on effect of making the software that runs on them overly complex and difficult to maintain. More external devices are emerging and these help a bit (particularly routers), but they don’t alleviate the main problem. An operating system will never make it physically easier to upgrade your PC, while the driver interaction issues and the fact that operating systems are always chasing the hardware (or that new hardware must be made to work with existing operating systems) mean that software will always be buggy.

Powering off, opening your case, replacing components, powering back on, detecting the device and installing drivers if necessary can turn into a nightmare

If we continue down the same road, will these problems eventually be resolved? Maybe: the people working on these systems, both hard and soft, are exceptionally clever and resourceful. Fairly recent introductions such as USB do solve some of the problems, making a subset of devices trivial to install and easy to use. Unfortunately, the internal devices of a PC continue to be developed along the same lines, with all the inherent problems that entails.

We could build new kinds of computer from scratch, fully specified with a concrete programming model that takes into account all we’ve learned from hardware and software (and usability) over the past half century. This, of course, wouldn’t work at all: with so much invested in current PCs, both financially and intellectually, we’d just end up with chaos, and there would be no guarantee that we wouldn’t be in the same place 20 years down the line.

We could lock down our hardware, ensuring a fully understandable environment from the devices up through the OS to the applications. This would work in theory, but apart from limiting user choice it would completely strangle innovation.

More realistically, we can re-organise what we have, and this is the subject of the next section.

Abstracted OS(es) and consumer devices

What would I propose in terms of reorganisation that would make PCs easier to use?

Perhaps counter-intuitively, my solution would be to dismantle your PC. Take each device from your machine and stick it in its own box with a cheap CPU, some memory and a bit of storage (flash or disk based, as you prefer). Add a high-speed/low-latency connection to each box and get your devices talking to each other over the “network” in a well defined way. Note that network in this context refers to high-performance serial bus technology such as PCI Express and HyperTransport, or switched fabric networking I/O such as Infiniband, possibly even SpaceWire. Add another box, again with a CPU, RAM and storage, and have this act as the kernel, co-ordinating access and authentication between the boxes. Finally, add one or more boxes containing some beefy processing power and lots of RAM to run your applications. Effectively, what I’m attempting to do here is turn each component into a smart consumer appliance. Each device is controlled by its own CPU, has access to its own private memory and storage, and doesn’t have to worry about software or hardware conflicts with other devices in the system, immediately reducing stability issues. The creator of each device is free to implement whatever is needed internally to make it run, from hardware to operating system to software; it doesn’t matter, because the devices would all communicate over a well-defined interface. Linux would obviously be a great choice in many cases; I imagine you could run the kernel unmodified for many hardware configurations.
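
To make this a little more concrete, here is a rough sketch, in Python and purely for illustration, of the kind of registration handshake a device box might perform against the kernel box. Every class name and message field here is my own invention, not an existing protocol; it only shows the shape of the idea.

    # Illustrative sketch only: a made-up registration handshake between a
    # device box and the kernel box. All names and fields are hypothetical.

    class KernelBox:
        """Co-ordinates access between the boxes on the interconnect."""
        def __init__(self):
            self.registry = {}  # device name -> advertised services

        def register(self, announcement):
            name = announcement["device"]
            self.registry[name] = announcement["services"]
            return {"status": "ok", "assigned_path": f"/dev/{name}"}

        def lookup(self, service):
            """Return every device that advertises a given service."""
            return [dev for dev, svcs in self.registry.items() if service in svcs]

    class DeviceBox:
        """A self-contained device: its own CPU, RAM, storage and 'driver'."""
        def __init__(self, name, services):
            self.name = name
            self.services = services

        def announce(self, kernel):
            # In a real system this message would travel over the high-speed
            # interconnect (PCI Express, HyperTransport, Infiniband...).
            return kernel.register({"device": self.name, "services": self.services})

    kernel = KernelBox()
    audio = DeviceBox("audiobox", ["pcm-output", "mixer"])
    storage = DeviceBox("storagebox", ["block-read", "block-write"])
    print(audio.announce(kernel))       # {'status': 'ok', 'assigned_path': '/dev/audiobox'}
    print(storage.announce(kernel))
    print(kernel.lookup("pcm-output"))  # ['audiobox']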

What are the advantages of this approach? Surely replacing one PC with multiple machines will make things more difficult. That is true to some extent, but handled correctly this step can remove many headaches that occur with the current organisation. Here are just some of the advantages I can think of from this approach (I’ll get to the disadvantages in the next section).

Testing a device is simpler: no need to worry about possible hardware or software conflicts; as long as the device can talk over the interconnect, things should be cool

Device isolation

Each device is isolated from all the others in the system; they can only interact over a well defined network interface. Each device advertises its services/API via the kernel box, and other devices can access these services, again authenticated and managed by the kernel box. Device manufacturers ship their goodies as a single standalone box over whose internals they have full control; there are no other devices in the box (apart from the networking hardware and storage) to cause conflicts. Each device can be optimised by its manufacturer, and there’s the upside that the “drivers” for the device are actually part of the device. Testing a device is simpler: no need to worry about possible hardware or software conflicts; as long as the device can talk over the interconnect, things should be cool. Note that another consequence of this is that internally the device can use any “operating system” and software the manufacturer (or the tinkering user) needs to do the job, as well as whatever internal hardware is required (though of course all devices would need to implement the networking protocol). Consequently, integrating new hardware ideas becomes somewhat less painful too.

Additional benefits of isolation include individual cooling solutions and better power management on a per-device basis. On the subject of cooling, the individual devices should run cool through the use of low-frequency CPUs and the fact that there isn’t a lot of hardware in the box. As well as consuming less power, this is also great for the device in terms of stability. Other advantages include complete autonomy of the device when it comes to configuration (all info can be stored on the device), update information (again, relevant info can be stored on the device), documentation, including the API/capabilities of the device (guess where this is kept!), and simplified troubleshooting (a problem with a device can immediately be narrowed down, excluding much of the rest of the system, which isn’t the case in conventional PCs). Another upside is that a removed device would not leave orphaned driver files on the host system. Snazzy features such as self-healing drivers are also more easily achieved when they don’t have to interfere with the rest of the system (the device “knows” where all its files are, without the need to store such information elsewhere).

Many architectures

Device isolation means the creator can use whatever hardware they desire to implement the device (aside from the “networking” component, which would have to be standardised, scalable and easily upgraded). RISC, CISC, 32 bit, 64 bit, intelligent yoghurt, whatever is needed to drive the device. All these architectures could co-exist peacefully as long as they’re isolated with a well defined communication protocol between them.

No driver deprecation

As long as the network protocol connecting the boxes does not change, the drivers for the device should never become obsolete due to OS changes. If the network protocol does change, more than likely only the networking portion of the device “driver” will need an update. The worst-case scenario is the network “interface” (i.e. connector) changing; this would necessitate either a new device with the appropriate connection or a modification to the current device.

Old devices can be kept on the system to share workload with newer devices, or the box can be unplugged and given to someone who needs it

No device deprecation

Devices generally fall out of use for two reasons: a) lack of driver support in the latest version of the operating system; and b) no physical connector for the device in the latest boxes. Neither problem would exist here because, as mentioned in “No driver deprecation” above, the driver is always “with” the device, and the device can always be connected through the standard inter-device protocol. Old devices can be kept on the system to share workload with newer devices, or the box can be unplugged and given to someone who needs it; the new user can just plug the box into their system and start using the device. That’s much better than the current system of farting around trying to find drivers, carrying naked expansion cards around and stuffing them into already overcrowded cases. As users upgrade their machines it is often the case that some of their hardware is not compatible with the upgrades. For example, upgrading the CPU in a machine often means a new motherboard. If a new standard for magnetic storage emerges, it very often requires different connectors to those present on a user’s motherboard, thus necessitating a chain of upgrades to use the new device. Isolated devices remove this problem. When a device is no longer used in a PC it is generally sold, discarded or just left to rot somewhere. Obtaining drivers for such devices becomes impossible as they are phased out at the operating system and hardware level, and naked expansion cards are so very easily damaged. In this system there would be no need to remove an old device, even if you have obtained another device that does the same job. The old device could remain in the system, continuing to do its job for any applications that need it and taking some load from the new device where it can. As long as there is versioning on the device APIs there is no chance of interference, since each device has its own self-contained device drivers. Devices can be traded easily: they come with their own device drivers so can just be plugged in, and they are less likely to be damaged in transit. One of the great features of GNU/Linux is its ability to run on old hardware; this would be complemented greatly by the ability to include old hardware along with new in a running desktop system, something that would normally be very difficult to achieve.
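
As a quick sketch of how versioned device APIs might let old and new hardware share the same work, consider the following Python fragment; the version strings and device names are made up purely for illustration.

    # Hypothetical sketch: each device box advertises the API versions it
    # implements, so old and new devices can serve the same requests side by side.

    DEVICES = [
        {"name": "oldstorage", "apis": {"storage/1"}},
        {"name": "newstorage", "apis": {"storage/1", "storage/2"}},  # backwards compatible
    ]

    def devices_for(required_api):
        """Return every device that implements the API version the caller asked for."""
        return [d["name"] for d in DEVICES if required_api in d["apis"]]

    # An application written against the old interface can use either box,
    # so the old device keeps sharing the workload instead of being retired.
    print(devices_for("storage/1"))   # ['oldstorage', 'newstorage']
    print(devices_for("storage/2"))   # ['newstorage']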

Security

Security is a big issue, and one which all operating systems struggle with for various reasons. While most operating systems are now reasonably secure, keeping intruders out and user data safe is a constant battle. This system would potentially have several advantages in the area of security. The fact that the kernel is a self-contained solution, probably running in firmware, means that overwriting the operating system code, either accidentally or maliciously, would be very difficult if not impossible, and would likely require physical access to the machine. For the various devices, vendors are free to build in whatever security measures they see fit without worrying about the effect on the rest of the system. They could be password protected, keyed to a particular domain or whatever else prevents their use in an unauthorised system. Additional measures such as tamper-proofing on the container could ensure that it would be extremely difficult to steal data without direct access to the machine and user account information. Of course, care must be taken not to compromise the user’s control of their own system. There are several areas of security (denial of service attacks for example) where this system would be no better off than conventional systems, though it may suffer less, as the network device under attack could be more easily isolated from the rest of the system. In fact it would be easier to isolate points of entry into the system (i.e. external network interfaces and removable media); this could be used to force all network traffic from points of entry through a “security box” with anti-virus and other malware tools, allowing for the benefits of on-demand scanning without placing a burden on the main system. It is likely that a new range of attacks specific to this system would appear, so security will be as much a concern as it is on other systems and should be incorporated tightly from the start.

Despite the introduction of screw-less cases, jumper-less configuration and matching plug colours, installing hardware remains a headache on most systems

Device installation

Despite the introduction of screw-less cases, jumper-less configuration and matching plug colours, installing hardware remains a headache on most systems. With, for example, a new graphics card, the system must be powered down and the case opened, the old card removed and the new one seated. Then follows a reboot and detection phase, followed by the installation (if necessary) of the device driver. Most systems have made headway in this process, but things still go wrong on a regular basis, the system must often still be powered down while new hardware is installed, and when something does go wrong it can be very difficult to work out what the exact problem is. By turning each device into a consumer appliance, this system would allow for hot swapping of any device. Once power and networking are plugged in, the device would automatically boot and be integrated into the “kernel”. No device driver installation would be necessary, as all the logic for driving the device is internal to the device. If the kernel is unable to speak to the device over the network, this would generally signify a connectivity problem, a power problem or a fault with the device itself. Because the boxes are wrappers for the “actual” devices there is no need to touch the device’s circuitry, removing the risk of damage that can occur with typical expansion cards, as well as the danger of electric shocks to the user.
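
Hot swapping would boil down to the kernel box noticing a new box on the interconnect and asking it to describe itself. The following Python sketch shows the shape of such a loop; probe_interconnect() and fetch_manifest() are stand-ins I have invented for whatever discovery mechanism the real bus would provide.

    # Illustrative only: a polling loop the kernel box might run to integrate
    # hot-plugged device boxes. probe_interconnect() is a stand-in for whatever
    # discovery the real bus (PCI Express, Infiniband, ...) would provide.

    import time

    def probe_interconnect():
        """Pretend to scan the bus; here we just return a fixed set of addresses."""
        return {"audiobox", "storagebox"}

    def fetch_manifest(address):
        """Ask the device to describe itself (the driver lives on the device)."""
        return {"device": address, "services": ["..."]}

    def hotplug_loop(poll_seconds=2.0, cycles=3):
        known = set()
        for _ in range(cycles):                      # a real loop would run forever
            present = probe_interconnect()
            for addr in present - known:
                manifest = fetch_manifest(addr)
                print(f"integrating {manifest['device']} -- no driver install needed")
            for addr in known - present:
                print(f"{addr} unplugged -- nothing orphaned on the kernel box")
            known = present
            time.sleep(poll_seconds)

    hotplug_loop(poll_seconds=0.1)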

Increased performance

This is admittedly a grey area; true performance changes could only be checked by building one of these things. Latency over the network between the devices will be a bottleneck for performance, the degree of which is determined by whatever bus technology is used. Where performance will definitely increase, though, is through the greater parallelism of devices. With each device having its own dedicated processor and memory there will be no load on the kernel or application boxes when the device is in use (except for message passing between the boxes, which will cause some overhead). It should also be easier to co-ordinate groups of similar devices, if present, for even greater performance. In the standard PC architecture, device activity generates interrupts, sometimes lots of them, which result in process switching, blocking, bus locking and several other performance-draining phenomena. With the interrupt-generating hardware locked inside a box, the software on the device can make intelligent decisions about data moving into and out of the device, as well as handling contention for the device. More performance tweaks can be made at the kernel level: per-device paging and scheduling algorithms become a realistic prospect, instead of the one-size-fits-all algorithms necessarily present in the standard kernel.

Device driver development

It’s easier to specify a kernel, list of modules and other software in a single place (on the device), along with update information and configuration for the device, than it is to merge the device driver either statically or dynamically into an existing kernel running on a system with several other devices and an unknown list of applications.
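
By way of illustration, the self-description kept on a device might look something like the following; this is a hypothetical manifest, and every field name (and the placeholder update URL) is made up.

    # A made-up example of the kind of self-description a device box could carry
    # on its own storage: kernel, modules, configuration and update source all in
    # one place, instead of scattered through a host operating system.

    DEVICE_MANIFEST = {
        "device": "audiobox",
        "kernel": "linux (unmodified)",
        "modules": ["snd_core", "vendor_dsp"],
        "api": {"name": "audio", "versions": ["1"]},
        "config": {"sample_rate": 48000, "channels": 2},
        "updates": {"channel": "https://example.invalid/audiobox/stable"},  # placeholder URL
        "docs": "/docs/api.html",   # documentation shipped on the device itself
    }

    def describe(manifest):
        return (f"{manifest['device']}: {manifest['api']['name']} API "
                f"v{'/'.join(manifest['api']['versions'])}, "
                f"runs {manifest['kernel']}")

    print(describe(DEVICE_MANIFEST))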

With device isolation it should be a lot simpler to build features that rely on that device, without forcing too much overhead on the rest of the system

Features

With device isolation it should be a lot simpler to build features that rely on that device, without forcing too much overhead on the rest of the system. For example, a box dedicated to storage could simply consist of an ext3 file system that could be mounted by other boxes in the system. On the other hand, you could build in an indexing service or a full database file system. These wouldn’t hog resources from the rest of the system as they do in a conventional PC, and would be easier to develop and test. Experimentation with new technologies would benefit too: the only constraint for interoperability with the system is the device interconnect (which would be specified simply as a way of moving data between devices), so developers could experiment with new hardware and see the results in situ on a running system (as opposed to having to connect potentially unstable hardware directly to a parent board alongside other devices). Users could pick and choose from a limitless array of specialised appliances, easily slotted together, which expose their capabilities to the kernel and applications. Without a parent board to limit such capabilities, the possibilities really are endless.

Accessibility

This is still a difficult area in all major operating systems, mainly because they are not designed from the ground up to be accessible. Most accessibility “features” in current operating systems seem like tacked-on afterthoughts. I’m not sure if this is to do with the difficulty of incorporating features for vision- or mobility-impaired users, whether a lot of developers think it’s not worth the hassle accounting for “minorities”, or whether they don’t think about it at all. The fact that the population as a whole is getting older means that over the next few decades there are going to be many more “impaired” users, and we should really be making sure they have full access to computing. As things stand now it is still too difficult for these users to perform general computing, never mind such things as changing hardware or programming. I see a distributed device architecture as being somewhat beneficial in this regard: apart from making hardware easier to put together for general and impaired users, the system is also conducive to device development, with the possible emergence of more devices adapted for those who need them. The many and varied software changes such a system would require would also offer a good opportunity to build accessibility into the operating system from the start, which certainly doesn’t seem to be the case with current systems.

The ability to upgrade the processing power of a machine in this way is just not possible with a conventional PC organisation

Applications

The actual programs, such as email, word-processing and web browsers, would run on the application boxes described earlier. To re-iterate, these boxes would generally consist solely of processors, memory and storage to hold the paging file and possibly applications, depending on overall system organisation. More hardware could be included for specialised systems. All hardware and kernel interactions take place over the device interconnect, so applications running on one of these boxes can do so in relative peace. Because devices in the system can be packaged with their own drivers and a list of capabilities, it should be fairly easy to detect and make use of desired functionality, both from the kernel and from other devices. Another advantage would be the ability to keep adding application boxes (i.e. more CPU and RAM) ad infinitum; the kernel box would detect the extra boxes and automatically make use of them. The ability to upgrade the processing power of a machine in this way is just not possible with a conventional PC organisation.
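
A toy sketch of the “just plug in another application box” idea might look like this in Python; the least-loaded placement rule is invented purely for illustration.

    # Sketch of the "add more application boxes" idea: the kernel box keeps a
    # list of compute boxes and places each new application on the least loaded
    # one. The load-balancing rule here is invented purely for illustration.

    class ApplicationBox:
        def __init__(self, name, cores):
            self.name, self.cores, self.running = name, cores, []

        @property
        def load(self):
            return len(self.running) / self.cores

    class KernelBox:
        def __init__(self):
            self.app_boxes = []

        def attach(self, box):
            # A newly plugged-in box is detected and used automatically.
            self.app_boxes.append(box)

        def launch(self, app_name):
            box = min(self.app_boxes, key=lambda b: b.load)
            box.running.append(app_name)
            return f"{app_name} -> {box.name}"

    kernel = KernelBox()
    kernel.attach(ApplicationBox("appbox-1", cores=2))
    print(kernel.launch("email"))
    print(kernel.launch("browser"))
    kernel.attach(ApplicationBox("appbox-2", cores=4))   # upgrade: just plug in another box
    print(kernel.launch("word-processor"))               # lands on the new, emptier box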

Figure 1: Rear view of an imaginary distributed PC

Figures 1 and 2 show front and back views of a mock-up of a distributed PC, mimicking somewhat the layout of a stacked hi-fi system. While this is a workable and pleasing configuration in itself, there are literally endless permutations in the size and stacking of the components. The figures also illustrate the potential problems with cabling. Only one power cable is shown in the diagram, but there could be one for every box in the system unless some “pass-through” mechanism is employed for the power (as in many hi-fi systems). The data cables (shown as ribbon cables in the diagram) would also need to be more flexible and thinner to allow for stacking of components in other orientations. Could the power and networking cables be combined without interference? The boxes shown are also much larger than they’d need to be; for many components it would be possible to achieve box sizes of 1-3 inches with the single board computer (SBC) or PC/104 modules available today. The front view also shows an LCD display listing an audio track playing on the box; this highlights another advantage, in that certain parts of the system could be started up (i.e. just audio and storage) while the rest is offline, allowing the user, for example, to play music or watch DVDs without starting the whole system.

Figure 2: Front view of an imaginary distributed PC

So, we have a distributed PC consisting of several “simplified” PCs, each potentially built on different hardware and running a different “conventional” operating system. Upgrading or modifying the machine is greatly simplified, device driver issues are less problematic, and the user has more freedom in the “layout” of the hardware, in that it is easier to separate out the parts they need nearby (removable drives, input device handlers) from the parts they can stick in a cupboard somewhere (application boxes, storage boxes). What about the software? How can such a system run current applications? Would we need a new programming model for such a system?

There are many possibilities for the software organisation on such a system. One obvious possibility is running a single Linux file system on the kernel box, with the devices mapped into the tree through the Virtual File System (VFS). The /dev portion of the tree would behave somewhat like NFS, with file operations being translated across the interconnect in the background. Handled correctly, this step alone would allow a lot of current software to work. An application requesting access to a file would do so via the kernel box, which could translate the filename into the correct location on another device; that adds a bit of overhead, but authentication and file system path walking would be occurring at this point anyway. Through the mapping process it would be possible to make different parts of a “single” file appear as different files spread throughout the file system. Virtual block devices are a good way to implement this, with a single file appearing as a full file system when mounted. This feature could be utilised to improve the packaging of applications, which are generally spread throughout the file system; I’ve always been of the belief that an application should consist of a single file or package (not the source, just the program). Things are just so much simpler that way. (While I’m grumbling, I also think Unicode should contain some programmer-specific characters; escaping quotes, brackets and other characters used in conventional English is tedious, error prone and an overhead. Wouldn’t it be great if programmers had their own delimiters which weren’t a subset of the textual data they usually manipulate?)
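
As a sketch of the mapping idea, assuming the kernel box simply resolves a path prefix to a device box and forwards the operation, the routing could look something like this; the paths, device names and in-memory “transport” are all stand-ins for illustration.

    # Illustrative sketch of the VFS idea from the text: the kernel box maps a
    # path prefix onto a device box and forwards file operations over the
    # interconnect, somewhat like NFS. The transport here is just a dictionary.

    DEVICE_FILESYSTEMS = {
        "/dev/storagebox": {"/home/user/notes.txt": b"remember the milk\n"},
        "/dev/audiobox":   {"/status": b"playing: track 7\n"},
    }

    def read(path):
        """Resolve which device box owns the path, then ask it for the data."""
        for prefix, fs in DEVICE_FILESYSTEMS.items():
            if path.startswith(prefix):
                remote_path = path[len(prefix):]
                # In a real system this would be a request over the interconnect;
                # the device is free to satisfy it however it likes internally.
                return fs[remote_path]
        raise FileNotFoundError(path)

    print(read("/dev/storagebox/home/user/notes.txt").decode())
    print(read("/dev/audiobox/status").decode())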

There is no shortage of research into multicomputer and distributed systems in general, again all very relevant

Software would also need to access the features of a device through function calls, both as part of the VFS and for specific capabilities not covered by the common file model of the VFS. Because the API of each device, along with its documentation, could easily be included as part of the device’s “image”, linking to and making use of a device’s API should be relatively simple, with stubs in both the application and the device shuttling data between them.
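
Here is a minimal sketch of the stub idea, with invented method names and message format: the application calls what looks like a local function, the application-side stub packages the call, and the device-side stub unpacks and executes it.

    # A minimal sketch of the stub idea. Method names and the message format
    # are invented; in a real system the message would cross the interconnect.

    class DeviceSideStub:
        """Runs on the device box; dispatches incoming calls to the real hardware logic."""
        def __init__(self):
            self.handlers = {"set_volume": self.set_volume}

        def set_volume(self, level):
            return {"ok": True, "level": max(0, min(100, level))}

        def handle(self, message):
            return self.handlers[message["call"]](**message["args"])

    class ApplicationSideStub:
        """Runs on an application box; turns method calls into interconnect messages."""
        def __init__(self, transport):
            self.transport = transport      # here a direct reference, really the network

        def set_volume(self, level):
            return self.transport.handle({"call": "set_volume", "args": {"level": level}})

    audio = ApplicationSideStub(DeviceSideStub())
    print(audio.set_volume(85))    # {'ok': True, 'level': 85}
    print(audio.set_volume(250))   # clamped by the device, not by the application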

A programming model eminently suitable for a distributed system like this is that of tuple spaces. I won’t go into detail here (you can find many resources on Google under “tuple spaces”, “Linda” and “JavaSpaces”). Tuple spaces allow easy and safe exchange of data between parallel processes, and have already been used in JINI, a system for networking devices and services in a distributed manner pretty similar to what I’m proposing. Both Sun (who developed JINI) and Apache River (though they’ve just started) have covered much ground with the problems of distributed systems, and most of their ideas and implementation would be directly relevant to this “project”. The client/server model, as used by web servers, could also serve as a good basis for computing on this platform; Amoeba is an example of a current distributed OS which employs this methodology. There is no shortage of research into multicomputer and distributed systems in general, again all very relevant.
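
To give a flavour of the tuple space model, here is a toy, single-process version with the classic Linda operations: out() publishes a tuple, rd() reads a matching one and in() reads and removes it. Real systems such as JavaSpaces add transactions, leases and blocking reads, all omitted in this sketch.

    # A toy, single-process tuple space in the Linda style, for illustration only.

    class TupleSpace:
        def __init__(self):
            self.tuples = []

        def out(self, tup):
            self.tuples.append(tup)

        def _match(self, pattern, tup):
            # None in the pattern acts as a wildcard.
            return len(pattern) == len(tup) and all(
                p is None or p == t for p, t in zip(pattern, tup))

        def rd(self, pattern):
            return next(t for t in self.tuples if self._match(pattern, t))

        def in_(self, pattern):          # 'in' is a Python keyword, hence the underscore
            t = self.rd(pattern)
            self.tuples.remove(t)
            return t

    space = TupleSpace()
    space.out(("decode-job", "track07.ogg"))            # the storage box posts work
    job = space.in_(("decode-job", None))               # the audio box claims it
    space.out(("decoded", job[1], b"...pcm data..."))   # and posts the result
    print(space.rd(("decoded", "track07.ogg", None))[1])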

I’m being pretty terse on the subject of running software, partly because the final result would be achieved through a lot of research, planning and testing, and mainly because I haven’t thought of specifics yet (important as they are!). The main point is that applications (running on application boxes) would have a nicer environment within which to run, free from device interruptions and the general instability that occurs when lots of high speed devices are active on a single parent board. I’m also confident that Linux can be made to run on anything, and this system is no exception. I’m currently writing more on the specific hardware and software organisation of a distributed PC, based on current research as well as my own thoughts; if I’m not battered too much for what I’m writing now, I could maybe include this in a future article. Of course, if anyone wants to build one of these systems I’m happy to share all my thoughts! None of this is really new anyway; there’s been a lot of research and development in the area of distributed computing and networked devices, especially in recent years. Looking at sites such as linuxdevices.com shows there is a keen interest among many users in building their own devices; extending this idea into the bowels of a PC, and allowing efficient interoperation between such devices, seems quite natural. With the stability of the Linux kernel (as opposed to the chaos of distros and packaging), advances in networking and the desire from consumers for portable, easy to use devices, I believe this is an idea whose time has come.

The platinum bullet

At this stage you may or may not be convinced that this organisation will make computing easier. What you are more likely to be aware of are the disadvantages and obstacles faced by such a system. In my eyes, the three most difficult problems to overcome are cost, latency and cabling. There are, of course, many sticky issues revolving around distributed software, particularly timing and co-ordination, but I’ll gloss over those for the purposes of this article and discuss them in the follow-up.

The three most difficult problems to overcome are cost, latency and cabling

Obviously, giving each device its own dedicated processor, RAM and storage, as well as a nice box, is going to add quite a bit to the price of the device. Even if you only need a weak CPU and a small amount of RAM to drive the device (which is the case for most devices) there will still be a significant overhead, even before you consider that each device will need some kind of connectivity. For the device interconnect to work well we’re talking serious bandwidth and latency constraints, and these aren’t cheap by today’s standards (Infiniband switches can be very expensive!). Though this seems cataclysmic, I really don’t think it’s a problem in the long term. There are already plenty of external devices around which work in a similar way, and these are getting cheaper all the time. There is also the fact that it should be cheaper and simpler to develop hardware and drivers, which should help reduce costs, particularly if the system works well and a lot of users take it on. Tied in with cost are the issues of redevelopment. While I don’t think it would take too much effort to get a smooth running prototype up, there may have to be changes in current packages, or some form of emulation layer to allow current software to run.

The second issue, latency, is also tied up quite strongly with cost. There are a few candidates around with low enough latency (microsecond latency is a must) and high enough bandwidth to move even large amounts of graphical data around fairly easily. The problem is that all of these technologies are currently only seen in High Performance Computing clusters (which this system “sort of” emulates; indeed, a lot of HPC research would be relevant here) and cost a helluva lot. Again, this is a matter of getting the technology into the mainstream to reduce costs. Maybe in ten years’ time it’ll be cheap enough! By then Windows will probably be installed in the brains of most of the population and it’ll be too late to save them.

Interconnecting several devices in a distributed system will involve a non-trivial amount of cabling, with each box requiring both power and networking cables as well as any device specific connectors. Reducing and/or hiding this, and making the boxes look pretty individually as well as when “combined” will be a major design challenge.

Another possible problem is resistance: people really don’t like change. The current hardware organisation has been around a long time and continues to serve us well; why should we rock the boat and gamble on an untested way of using our computers?

Integration of devices on a parent board was a practical decision at the time it was made, reducing the cost and size of PCs as well as allowing for good performance. With SBCs, tiny embedded devices and cheap commodity hardware, none of the factors which forced us down this route still apply. Linux doesn’t dictate hardware, and even though this system would give more freedom with regard to hardware, it does require a change in thinking. A lot of work would also be needed to build this system (initially), and the benefits wouldn’t be immediately realisable in comparison to the cost. My main point, though, is that once the basics are nailed down, what we’d have is an easy to use and flexible platform inclusive of all hardware (even more so than Linux is already). Interoperability would exist from the outset at the hardware level, making it much easier to build interoperability at the software level. With such a solid basis, usability and ease of development should not be far behind. The way things are going now, it seems that current systems are evolving toward this idea anyway (though slowly and with a lot of accumulated baggage en route). One of the first articles I read on FSM was how to build a DVD player using Linux; this is the kind of hacking I want this system to encourage.

Imagine a PC where all the hardware is hot-swappable, with drivers that are easily specified, modified and updated, even for the average user

Welcome to the world of tomorrow

With Linux we have an extremely stable and flexible kernel. It can run on most hardware already and can generally incorporate new hardware easily. The organisation of the hardware that it (and other operating systems) runs on, however, forces a cascade effect, multiplying dependencies and complexities between device drivers and hence the software that uses those devices. The underlying file paradigm of Linux is a paragon of beauty and simplicity that is unfortunately being lost, with many distros now seemingly on a full-time mission of maintaining software packages and the dependencies between them.

Imagine a PC where all the hardware is hot-swappable, with drivers that are easily specified, modified and updated, even for the average user; a system where the underlying hardware organisation, rather than forcing everything into a tight web of easily broken dependencies, promotes modularity and interoperability from the ground-up. A system that can be upgraded infinitely and at will, as opposed to one which forces a cycle of both software and hardware updates on the user. A system truly owned by the user.

The Linux community is in a fantastic position to implement such a system; there isn’t any other single company or organisation in the world with the collective knowledge, will or courage to embrace and develop such an idea. With the kernel acting as a solid basis for many devices, along with the huge amount of community software (and experience) that can be leveraged in the building of devices, all the components needed for a fully distributed, Linux-based PC are just “a change in thinking” away.

What would be the consequences if distributed PCs were to enter the mainstream? Who would benefit and who would suffer? Device manufacturers would definitely benefit from greatly reduced constraints as opposed to those currently encountered by internal PC devices. Users would have a lot more freedom with regards to their hardware and in many cases could build/modify their own devices if needs be, extending the idea of open source to the hardware level.

Large computer manufacturers (such as Dell) could stop releasing prebuilt systems with pre-installed operating systems and instead focus on selling devices with pre-installed drivers. This is a subtle but important distinction. One piece of broken hardware (or software) in a pre-built desktop system usually means a lot of talking with tech-support or the return of the complete system to the manufacturer to be tested and rebuilt. A single broken device can be diagnosed quickly for problems and is much less of a hassle to ship.

What would be the impact on the environment? Tough to say directly, but I suspect there would be far fewer “obsolete” PCs going into landfill. With per device cooling and power saving becoming more manageable in simplified devices things could be good (though not great, even the most energy efficient devices still damage the environment, especially when you multiply the number of users by their ever-growing range of power-consuming gadgets).

A single broken device can be diagnosed quickly for problems and is much less of a hassle to ship

Big operating system vendors could lose out big-time in a distributed Linux world. In some ways this is unfortunate; whether you like them or not, those companies have made a huge contribution to computing in the last couple of decades, keeping pace with user demand while innovating along the way. On the other hand, such companies could start channelling their not inconsiderable resources into being part of the solution rather than part of the problem. It might just be part of the Jerry Maguire-esque hallucination that inspired me to write this article, but I’d really love to see a standardised basis for networked devices that everyone would want to use. I think everyone wants that; it’s just a shame no-one can agree on how.

Comments

Submitted by WoundedChin (not verified)

Brilliant Idea! With this you could re-arrange a computer into a media player, then arrange it back again. You've stumbled onto something big. I hope the hardware makers are reading this.

Submitted by Dave Guard

Great idea Matthew and well thought out.

I can picture these PCs looking like CD/DVD racks - each component slotting into the rack and taking up one or more slots. No need for cables. How easy would the hot-swapping of components be if you just have to push-and-pop each one in/out? If you run out of room you buy a bigger rack.

I'm sure I've seen some sort of prototype or mock-up of this idea somewhere but they used books on a shelf rather than CDs in a rack.

Submitted by Matthew Roley

Found this link regarding the Asus Modular PC, possibly the mock-up you were thinking of?

http://www.reghardware.co.uk/2006/02/23/asus_concept_shelf_pc/

*Very* similar to what I've described in the article (and nice to find something backing up what I've written, I'm getting kinda excited now :)). Though still in the concept stage the idea of powering the devices by induction from the shelf is brilliant, though I have to admit I personally don't like the bookshelf layout (but admit again I can't think of anything better atm). Pure wireless connectivity (as opposed to a combination of wired and wireless which I assumed), though very conducive to usability, was something I discounted for the immediate future due to latency and bandwidth (I wanna play games on one of these things!), but is something I'd love to see just to get rid of those damn cables :)

I'm currently writing a follow-up on exactly how the components of a modular system could work together with a heavy slant on simplicity (and using open source components).

Submitted by Dave Guard

That's definitely where I saw it. The CD rack picture I have in my mind is prettier than their shelf design. I can also imagine people providing skins for the individual units so you can "theme" your machine.

Submitted by Anonymous visitor (not verified)

I'd wondered a while ago why no one has done this, especially as home media systems started to emerge.

My solution to those ungainly wires at the back is simple: have a protrusion from the top that slots into the bottom, basically acting as a bus; probably using removable cartridge-style items that can also be pulled out to push in a nice top/bottom plate. A stack of interlocking bricks.

My plan was to make use of AMD's HyperTransport, intended to provide high-speed communication between servers, or to extend the PCI bus up.

Submitted by Taavi Lõoke (not verified)

That is a really great idea, but I think that there are some problems.

The idea that an indexing service could be integrated into the storage box seems to be a good one at first. On closer examination it reveals a weak point. I imagine there would be a communication protocol that regulates how to talk to storage devices. At first it probably wouldn't include indexing-service-specific commands, but when a storage box integrates such a service it would still need drivers to provide the new commands over the old standard. Finally, the protocol would probably be updated to facilitate those kinds of commands, but every time something new is implemented, drivers would still be needed. So it is kind of the same as at the moment. What do you think, is there a way to relieve that problem?

Another thing: I have nothing against Linux, but reading the article, the Linux evangelism was just a bit annoying. This kind of article should be written without preferring any present operating system, as this system would probably need a new kind of software anyway.

Submitted by Matthew Roley

While this is a very important question that needs to be answered early in the design of such a system, I believe it is a non-issue. Think of the way the HTTP protocol works in conjunction with a browser and a server. This protocol, while it's been extended a bit over the years, has remained pretty stable. All the user does (via the browser, or even the command line) is request a resource with a unique id (its URL, which covers the protocol used, http or ftp for example, its location, the host, and finally the requested resource). When you make such a request, your browser identifies the host for the resource (through DNS) then makes a connection to the host (through whatever protocol is identified in the URL) requesting the resource. From the point of view of the browser it doesn't matter *how* the server provides that resource; it will either get the data in the expected format (e.g. HTML) or it will get an HTTP error code. From the point of view of the server, it gets requests for resources which it either honours or returns an error for (or is timed out on the browser or server side). Now the server is a black box: the resource requested could be a static file, a virtual file or even a proxy request to another server. That image your browser requested could just be a static .png or generated from a script; it doesn't matter to the browser, all it gets is what it asked for. Well run web servers update their internal configuration all the time, particularly as their userbase increases: updating their version of Apache (or IIS), changing from SQLite to MySQL (or from Access to SQL Server), adding an indexing daemon, even splitting into a cluster. The list of things that can be done internally to a webserver is endless, and the point is (aside from downtime and potential bugs) these changes have no impact on the end user (browser). Things may be going on differently behind the scenes, but the browser makes requests in the same way and gets what it asks for. And note specifically that NO change to the HTTP protocol is required, even though the services and data offered by the webserver may have changed.

Extending the web analogy to the distributed system, I imagine all the devices in the system acting as servers (and clients), with the 'kernel box' in my article acting in a way similar to a DNS server, in that it routes requests for a resource to the proper device or application, but looking like a file system, in that instead of domain names you'd have a (virtual) file (e.g. /dev/storagebox). All requests for files under /dev/storagebox (e.g. /dev/storagebox/myfile) would be processed by the relevant device (or application); in this case the storage box would receive a request for /myfile, which it can process internally any way it wants. A shorter way of saying this is that it would probably work in a manner similar to NFS, but with optimisations for the fact that the network is very fast and very local. Management and configuration on the device (such as an indexing service) can be handled in the same way (continuing the web analogy, you have things like webmin which serve exactly this kind of function).

On the issue of updating the drivers for the device: of course there will be software updates required for the devices. Security fixes, performance updates and feature changes are all facts of life in the software world, and they won't disappear whatever system you're using. Where this kind of system really scores, though, is that updates to the device do not impact or interact with the rest of the system. The updates are internal to the device and cannot affect the rest of the system (except indirectly in what the device offers to the system) in the way a driver update might (like, say, clobbering a file, or a buggy interaction with the kernel). Fundamentally, a storage box on this system would work like any NAS (network attached storage) device available today, even in the home market (see http://en.wikipedia.org/wiki/Network-attached_storage). Updating these devices is a doddle, as the update consists of a single image file (containing whatever is going to be running on the device, including operating system and software) that is flashed straight onto the device. So the device can be suspended from the system (not serve any requests temporarily), flashed, then reinstated without having to power down or reboot the entire system. Backup images also mean a rollback in case of problems is trivial; these backup images could be stored elsewhere on the system and/or in a hidden partition of the device (the way many hardware providers provide rescue solutions for home systems now).
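
To illustrate, here's a rough Python sketch of that suspend/flash/rollback flow; every name in it is hypothetical, as a real device would expose this through its own management interface.

    # Sketch of the suspend/flash/rollback workflow described above.
    # All names and fields are made up for illustration.

    def update_device(device, new_image, backup_store):
        backup_store[device["name"]] = device["image"]     # keep the old image for rollback
        device["serving"] = False                           # suspend: stop serving requests
        try:
            device["image"] = new_image                     # flash the single image file
            if not new_image.get("boots", True):            # pretend self-test
                raise RuntimeError("new image failed to boot")
        except RuntimeError:
            device["image"] = backup_store[device["name"]]  # rollback is just re-flashing
        finally:
            device["serving"] = True                        # reinstate; no system reboot
        return device

    storagebox = {"name": "storagebox", "image": {"version": "1.0"}, "serving": True}
    backups = {}
    print(update_device(storagebox, {"version": "1.1", "boots": False}, backups)["image"])
    # -> {'version': '1.0'}  (bad update rolled back; the rest of the system never noticed)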

The real software issues to solve on such a system boil down to parallelism, such as co-ordinating the timing of audio output (on an audio device) with video output (to the screen) when playing, say, a video stream. This would of course be a nightmare if the data were moving over a network with high, variable latency (such as the internet), but the fact that much of the work is local, on a high-speed, low-latency network (essential for such a system), means in my mind that they are solvable in software. Deadlock and general contention issues must also be resolved, though the theory and implementation of those don't really differ that much from current multi-threaded/multi-processor/multi-core systems.

On the issue of Linux evangelism: I'm sorry if my article came across this way. I'm pretty adamant that device manufacturers (including users who build their own devices) should have free choice in what they use to implement the device; whether they choose free software or proprietary should be irrelevant to the system, and will really only affect how they build their device and its costing. This system would never work if vendors were forced down a particular route in their implementation. I'd love to see XBox 360s, washing machines, *any* kind of device working in this system, and the only requirement on those systems to participate should be the ability to talk over the protocol (and appropriate connectivity, either directly or through an adapter). This doesn't exclude other systems; in fact it welcomes them. The reason my article states 'the answer lies (mostly) with GNU/Linux' boils down to the following facts.

-- Linux runs on more architectures than any other operating system. This greatly opens up the implementation choices for vendors, and is the reason why so many consumer devices run Linux internally.

-- The Linux kernel is free as in beer. Many manufacturers already exploit this fact to build cheaper devices. I don't personally have much in the way of cash so if I was knocking something together I'd pretty much *have* to use Linux.

-- The Linux kernel is free as in speech. Source code is available and can be modified for purpose. Essential when building specialised devices, you can take the source code and start from there. Also essential for home hackers building their own devices, if you can't get working source code for your device you aren't gonna get past the first bend.

-- The Linux Kernel marches on. The kernel is constantly being improved with newer versions, but older versions continue to have patches supplied for fixes and improvements, thanks to the open-source community. Proprietary systems generally don't offer this level of support for 'obsolete' versions of their software (as is now happening with Windows XP). This is necessary for a distributed system because a device manufacturer may incorporate a particular kernel version into their device. This version of the kernel may never need to be updated for the device, as it may not need the features offered by newer versions (which would simply add bloat and work for the device manufacturer); however, bugs can and do appear and will need to be fixed, and it's a hell of a lot easier to achieve this if you're running a Linux kernel on the device.

These factors would simply make Linux a great choice for many device makers (as it is now), but they do not in any way prevent the use of other operating systems (as my article says); they would likely result in most of the devices in the system running Linux (hence, mostly Linux) but wouldn't exclude any architecture or device operating system from the party. In fact there's no reason at all why the kernel box of a user's system couldn't run embedded Windows, as long as the 'implementors' don't make subtle changes to how the overall system works to ensure broken interoperability (ok, I'm being a bit bitchy there). Take a look at http://www.windowsfordevices.com/ to see that Windows does just fine in the area of devices, and also check http://www.linuxdevices.com to see the Linux-powered counterparts. More important than any of the factors above is the absolute necessity that this system is completely open and not controlled in a proprietary way by a single company; it should be a free and easy standard for anyone to implement. The 'system' wouldn't really be Linux (or any other OS for that matter); it's really no more than a software layer for connecting devices (with their own internal operating systems) in a well-defined way, possibly over well-defined transport mechanism(s).

You've sort of made me jump the gun a bit: I'm currently writing a follow-up that goes into more detail on this, but that could be a good thing in a way. What I'm essentially hoping for by writing this stuff is that enthusiasts and the big brains out there will embrace the idea (at least in principle) and come up with a cracking protocol that works and works well. In terms of Linux, the kernel and current software are there to be used, and I'm sure at least a proof-of-concept prototype could be cobbled together with a bit of work. I have my own ideas and a relatively broad knowledge base (a bit shallow in places :o), but I'm just one guy and there are many ways this system could be built; and since, as you say, a new programming model is a definite possibility, it's best to get a lot of input on how willing people are to work on this.

Phew, quite a long post, hope it actually answers your questions! Thanks for your input and congratulations, you're one of the first contributors to the cause!

Submitted by Tiffany Rice (not verified)

What a fantastic idea. The modules could be designed by different artists to make an artistic display, either just nicely coloured or with different images/patterns, making your PC a work of art as well as one of genius.

This idea should definitely be invested in by a major player. Let's hope the developer gets his fair share.

Submitted by Matthew Roley

Hehe, turns out this is one of my friends. The point is a great one though: for many people the visual look of the system will be very important, and it would give manufacturers the opportunity to produce a 'suite' of devices with a unified aesthetic, while allowing the pragmatists to stick with a bunch of grey bricks.

Submitted by Ron Phillips

It would be nice if the component computers could communicate optically. It avoids a lot of EMI problems, and the bandwidth ought to be sufficient for almost anything.

Submitted by Matthew Roley

Yeah, this is definitely worth considering. I'm not absolutely certain at this point whether it would be best to just define a protocol and let market forces (and performance requirements) determine the best connectivity options, or to set a connectivity standard from the start (which would make things a lot easier *if* device manufacturers could be convinced). Though I'm a bit wary of telling people what to use (the whole point of this system is that people can use the hardware and software they want), I sort of lean toward the latter. The interoperability of the devices over the 'network' is so important that perhaps it should be set in stone. Optical would be right up there in a list of candidates, but there's definitely a need for wireless connectivity too. Bandwidth is a very important consideration (particularly for the way I anticipate graphical data moving around the system), but there are also the issues of cabling, latency (I'm not sure about the numbers for optical networks, but I imagine they're pretty good) and aesthetic appeal.

Submitted by Benedictarf (not verified)

I was thinking about this and couldn't get past the problem of cost and explaining the concept to your average Joe:

"So I have to buy a new computer if I want to open my email?"

But then I realised that parts for this system already exist, namely the iPod. It works by itself, and I'm sure it would only take a firmware update to turn it into a sound card with USB (or whatever protocol takes your fancy) connectivity. There must be similar consumer devices.

I think a good analogy for the system is the Power Rangers (bear with me). Separately they all have their own skills, but put them together and they make a giant that's greater than the sum of its parts.

Submitted by Matthew Roley

The parts do already exist, you're absolutely right, and putting them together does result in a gestalt system. But this is no different from a current PC. All the same parts will need to be bought (optical drives, graphics cards, sound card, network card for interfacing to any networks 'external' to the system network), as they are in a current PC, and they all add capability to the system (again, as with a current PC). Where this system improves on the current one is that the devices should be a lot easier to manage and a lot more stable, because of their isolation (interaction over a unified, well-defined 'network') and their 'intelligence' (they'll generally have their own processing and storage resources, making them more adaptable than a 'standard' device). The other advantages are that you can keep on adding devices, certainly far more than is achievable on a standard PC, and having a standard communication protocol means integrating new kinds of devices should be easier, as the main compatibility target is support for whatever network standard is decided upon (theoretically you could add a quantum processing box transparently to this system when the technology becomes a reality, whereas upgrading a PC of today would involve scrapping much of your current system). These advantages do come at a price, though, because of the cost overhead of having processing and storage resources in each device. I think I say in the article this could be offset somewhat by a cheaper development process for the device; if not, I've just said it now.
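To make the 'intelligent, isolated device' idea a bit more concrete, here's a very rough sketch (Python, purely for illustration) of how a device might announce what it can do when it joins the system 'network', and how a listener might build a registry of whatever shows up. Nothing here is part of any defined standard; the multicast address, message fields and device id are all made up for this example.

```python
# Hypothetical sketch: a self-describing device announces itself when plugged
# into the shared 'network'. The message format and port are invented for
# illustration; the article does not fix a discovery mechanism.
import json
import socket

ANNOUNCE_ADDR = ("239.255.42.42", 4242)  # assumed multicast group/port

def announce_device():
    """A device (e.g. a storage box) advertises what it can do."""
    capability = {
        "id": "storage-0001",             # hypothetical device id
        "kind": "storage",
        "protocol": "http/1.1",           # how to talk to it afterwards
        "resources": ["/files", "/config", "/manage"],
    }
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 1)
        sock.sendto(json.dumps(capability).encode(), ANNOUNCE_ADDR)

def listen_for_devices():
    """The 'kernel box' builds a registry of whatever announces itself."""
    registry = {}
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        sock.bind(("", ANNOUNCE_ADDR[1]))
        mreq = socket.inet_aton(ANNOUNCE_ADDR[0]) + socket.inet_aton("0.0.0.0")
        sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
        data, _ = sock.recvfrom(4096)
        device = json.loads(data)
        registry[device["id"]] = device
    return registry
```

The point of the sketch is only that a new kind of device brings its own description with it; the real compatibility target is whatever network standard gets decided upon.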

The Power Rangers analogy is a good one; you could probably even build one of these systems to look like one (storage devices for arms, the kernel and application boxes as the body, and a webcam for a head :o)

Submitted by Matthew Roley

Thanks to everyone who's posted (and to those who may post). I'm gonna get back to work and will continue writing the follow-up for anyone who's interested. I'll check back on any comments and incorporate them into my article if possible, but I won't be replying to any more unless they're really pertinent (or a follow-up to any questions I've answered above). As you may have noticed I have a tendency for long expanses of text and I don't want to spend the next three years writing replies :o)

Submitted by John Sargeant (not verified)

And I think I speak for everyone here in saying that I don't see how having a dishwasher plugged into my Xbox 390, or whatever it is in the future, is going to help anyone at all. It's science fact turned into science fiction!!!!

Submitted by tinker

I am already doing this, in a much more inelegant way, with a handful of old conventional PCs. The boxes are, for the most part, hidden away: the central heating control box is in the heating cupboard and linked with CAT5 cable, and the printer and scanner box is also in that cupboard, with the scanner and printer USB cables passing through ducts to the office above. I have 6 other old boxes doing specialist jobs, all out of sight, and 2 main new boxes as workstations for doing 'stuff' and from which I can check/alter the other 8 systems.

As I have not used winblows for years I am not sure whether the system would work in an M$ environment, especially on the older and smaller boxes. Linux is, in my case, currently the most obvious choice, though I will admit that something new, perhaps not yet thought of, may be even better at integrating such a system.

Ideally all these boxes would become like the small modules you talk about and actually be part of the fabric of the house with a plasma display in each room and voice control replacing keyboard and mouse. If I live long enough to see it I will be surprised.

Submitted by Matthew Roley

This post really got my juices flowing; learning of someone who has 'rolled their own' system in this way gives me great hope that a workable distributed PC could be achieved for the masses. I imagine you had some difficulties and a lot of learning to get through to get this stuff working, and I would really love to hear more about it, such as how your boxes are set up to talk to each other and what each box does, even pics, perhaps posted to your blog and linked back here. It would be great for me to have a look through, but only if you can be bothered :o). As I may have mentioned, I'm writing a follow-up detailing how I think the system should work from a user's point of view, stuffed with a bit more technical detail, and it would be great to have the benefit of your experiences to refer to.

You are a pioneer sir and I salute you!

Submitted by JohnMc (not verified)

First, as a concept, yes it's OK. But I have to tell you -- you ain't the first. The very idea you are promoting is the Commodore 64 writ large for today's market. If you are old enough you will recall that you bought a component CPU/keyboard module from Commodore. The Commodore had IO code written to understand device-level components. Add a hard drive, and all the logic and processing power for the drive was in the drive module. Want to dial out? You could buy a modem with its supporting terminal software all built into the unit. You could string all these modules together with what would look like a SATA cable today. The idea has been around since the 1980s.

Finally, up to a point I think this idea is heading in a direction opposite that of the market. I can, for $200, buy a small sealed terminal-like device with all the IO (parallel, serial, USB, etc.) I could want. It boots a remote Linux distro. All my apps reside externally -- out on the network.

The other trend that will help you, though, is virtualization. Soon, when most PCs have 4-8 uCs in them, multiple virtual OSes will be the norm. At that point the view from the end user is that any OS they use is nothing more than a file instance on a drive or network somewhere. When the host layer is very thin indeed, you will have achieved the file-centric OS model to some degree.

Submitted by Matthew Roley

Heh, yeah, the good old Commodore 64, what a joy. Being quite young and totally skint at the time, I never had the joy of attaching any peripherals to my base unit (aside from the included tape player and a joystick), so I've never experienced this particular facet, but it's great to know it worked like that! On the subject of being first, I'm sure you're quite right: Asus posted a mock-up of a similar system a few months before my article; Sun's JINI has been around a while, also taking a device-centric approach; and going back even further to the 60s and 70s you've got the Modular One (http://www.4links.co.uk/The-Origins-of-SpaceWire-colour.pdf), a mini-computer built up from smaller modules. The idea certainly isn't new, and it wasn't particularly feasible for desktop systems until recently.

What makes the idea important now (at least in my opinion) is the huge tide of connectible devices already available, along with the predicted surge of new devices in the near future; if, like today, they all go their own way with their own protocol and hardware interconnect, things are gonna get incredibly messy. Couple this with the already massive interoperability problems that exist today, particularly for software running on 'standard' desktops (WinLinMac), all crammed in a single box and interfering with one another in largely unpredictable ways, and the dozens of different ways software and hardware can talk to each other (again, causing unpredictable interference with other hardware and software), and the need for a simple, clean, STANDARD way of connecting our hardware and software becomes clear.

On your comment regarding thin terminals, with apps out on the network: there's nothing in this design (distributed hardware) that precludes such a setup. Remember, the idea is that a user can build (or buy) the computer they want; if you're blind you probably won't want a graphics device, and if you prefer to work from a thin terminal over the wire you can do that too. All 'my design' stipulates is that the devices and software involved should speak to each other in a 'common tongue'; what those particular devices and software actually do, and how they work internally, is up to the device vendor. I wrote a bit about using a modified version of HTTP for inter-device/application communication at http://roleyology.blogspot.com/2007/06/distributed-desktops-part-six-protocol.html, which discusses this particular topic. I've also been having words at http://www.linuxsucks.org/topic/17035/1/Main/Time-for-an-OS-group-hug.html; if you can make it through this epic thread (long posts, a bit of trolling, a bit of arguing) there's more detail as well as some arguments about specific issues. The fact that apps are external, out on the network, in this model works well with the idea of distributed hardware, and being able to reference anything (hardware features, applications, files) using the URI mechanism (or any mechanism that could be agreed upon) would be a major and compelling step for interoperability.
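As a flavour of what 'referencing anything by URI' could feel like from an application's side, here's a minimal, hypothetical sketch. The device hostnames, paths and response formats are invented for this example; the real protocol is exactly what I'm hoping the community will thrash out.

```python
# A rough illustration (not the actual protocol) of referencing device
# resources by URI over plain HTTP. Device names, paths and response formats
# here are invented purely for this example.
from urllib.request import urlopen, Request

# Hypothetical device URIs on the system 'network'
STORAGE = "http://storage-0001.local"
AUDIO = "http://audio-0001.local"

def list_music():
    """Ask the storage box what music it holds (assumed plain-text listing)."""
    with urlopen(STORAGE + "/files/music/") as response:
        return response.read().decode().splitlines()

def play(track_path):
    """Tell the audio box to stream a track straight from the storage box."""
    request = Request(
        AUDIO + "/player/queue",
        data=(STORAGE + track_path).encode(),
        method="POST",
    )
    with urlopen(request) as response:
        return response.status == 200
```

The interesting property is that the audio box and the storage box talk to each other directly; no single host PC has to shuttle the data between them.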

Virtualisation (also discussed in the thread at linuxsucks.org) could be a useful tool for device vendors. One of the issues discussed involves reconfiguring your machine on the fly, for instance only running your audio and storage devices when playing music, without having the rest of the PC running. Virtualisation would give device vendors an easy way to switch configurations of a device to provide this kind of functionality. Whether a particular vendor used this approach would of course be up to them; for some devices it may be better to switch configurations within whatever OS is running inside the device, rather than switch to another virtual OS within the device.

Submitted by feranick@hotmail.com (not verified)

Laptops? All this works great for desktops, but how do you combine this effort with the need for small, mobile and power-efficient devices?

Submitted by Matthew Roley

As far as I'm concerned a laptop is just a PC with tighter integration and a different shape, so everything I've said in the article regarding desktops is equally applicable to laptops. Remember that it's now possible to buy a complete PC that's only a couple of inches square, so building a series of devices to fit within a laptop isn't much of a stretch; in fact a lot of laptops are already built out of 'modular' components (each in their own box).

The two main concerns with laptops are of course energy efficiency and weight. In terms of energy efficiency the arguments I applied in the article still stand. The fact that each device has its own CPU and 'network' connection does mean that individually they will consume more power than a 'naked' device, but remember that in the modular system the devices could be powered down more intelligently and independently, and there isn't a single large parentboard/CPU in this design, another factor that may reduce overall power consumption. It's hard to tell one way or the other whether *overall* power efficiency would be better or worse (experimentation would be needed to verify that), but personally I'd favour the ability to power down the majority of my machine when those parts weren't in use. Weight is a sticky issue; I certainly imagine a laptop built from distributed devices would weigh in much heavier than the equivalent integrated ones we get now. Aside from filling the modules with helium this may be a tricky one to fix, though given current rates of miniaturisation (the two-inch-square PC I mentioned above) I'm sure a solution could be found.

If laptops did follow a similar pattern to the desktop, it would be great if the components were shareable between them, such as unplugging a storage unit from your desktop and sticking it into your laptop, and it would also be great for the laptop (and all its devices) to integrate with the desktop when docked or in range. This is part of the 'lonely device' problem that I think a standardised distributed system would help with. I've got a PC, DVD player, PS2 and TV; some connect to each other in a standard way, others have to be coerced. If they all used the same interconnect and the same protocol to talk, there'd be no reason why your PS2 couldn't make use of processing cycles on your PC and vice versa (or the DVD player, or any other suitable device). There are several connectors on the back of a TV: SCART, VGA, HDMI. Personally I'd much prefer it if my TV plugged directly into a 'network' of my other devices, so that apps and other devices could just stream data over one interconnect/protocol. I'd like my DVD player, laptop and any device I purchase to work this way; things would just be so much easier if the big players worked on such a standard.

Submitted by Matthew Roley

For anyone who's interested, I've been discussing the article at

http://www.linuxsucks.org/topic/17035/1/Main/Time-for-an-OS-group-hug.html

Some trolling, arguing and a few VERY long posts (my fault, sorry).

Submitted by Johann (not verified)

India is a paradox. You have a fast growing technology sector with a vast majority of the population still wallowing in poverty. They can build nuclear bombs but millions are still unable to feed themselves regularly and have to resort to selling their organs in Europe and America.

As part of the effort to bridge the digital divide, Novatium, working in parallel with Nicholas Negroponte's $100 PC initiative, has developed its own NetPC appliance at $70 without the monitor. The thin-client-style apparatus requires a fairly fast Internet connection to fully optimize its use. An alternative product they manufacture--NetTV--can receive cable signals and will provide PVR functions together with desktop computing. In essence you'd need a central server to use the NetPC, or a software service provided through the user's Internet service provider. Device expansion is provided via USB 2.0 ports. The appliance itself consists of components normally used in cellular telephones.

The system has been used in depressed areas where access to PCs has been difficult because of the cost of the standard desktop. It has also been deployed as a smart home solution with emphasis on home entertainment. Aside from high-end, processor intensive apps, the system will allow you to do anything a normal desktop can. For a lot of people, this is enough.

I imagine this type of paradigm--along with your component-based PC idea--would make up the next generation of computer systems.

Submitted by Matthew Roley

NetTV is one of those ideas that fits in nicely with what I'm proposing, with a user's computer consisting solely of the devices and software needed to do its job. As you say, a device providing for standard home use and entertainment would be enough for many users, in effect meaning they'd only need one device; but in a distributed system more hardware can be added according to the needs of the individual user. So a distributed, standardly (couldn't think of a better word there :) connected paradigm can serve an entire spectrum of users with entirely different computing needs, while still retaining the very important characteristic of simplicity: devices and software are easy to connect and talk to each other in a standard way.

The idea of using components from mobiles is particularly interesting. I discuss in my article the ability to switch configurations so that if, for example, you're just playing music you can reconfigure your machine on the fly so that only the audio and storage (where your music resides, either on an optical drive or a storage device) are powered up and active. Having a small display on such devices would be extremely useful, and the low-power screens used on mobiles are ideal for this kind of function. Thinking of the millions of discarded mobiles every year, it would be fantastic if at least part of those devices could be recycled for use this way, and devices having their own mini display like this would make them more portable and viable as standalone appliances, increasing the flexibility of the overall system into which they're connected.

This brings me to another point I mentioned in the article: because the devices are in effect appliances in their own box (as opposed to naked expansion cards), a device one user considers 'obsolete' in the richer western countries would still be of enormous value to those in poorer or deprived areas, and the fact that such devices would have a standardised connector means discarded devices could still be useful to, say, a user in India, and could be attached to his/her system without difficulty. In the normal course of things an obsolete naked expansion card would probably end up in the bin, but a boxed, connectible device could be shipped off to someone who can make use of it.
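Here's a toy sketch of the 'music-only' reconfiguration idea, assuming (and it is only an assumption) that each device exposes some kind of power-state control over the protocol. The device registry, URIs and the /power endpoint are placeholders, not a proposal.

```python
# A toy sketch of on-the-fly reconfiguration: power down every device except
# the ones a profile needs. The registry, URIs and the /power endpoint are
# assumptions for illustration, not a defined interface.
from urllib.request import urlopen, Request

# Hypothetical registry of devices on the system network
DEVICES = {
    "audio-0001":    "http://audio-0001.local",
    "storage-0001":  "http://storage-0001.local",
    "graphics-0001": "http://graphics-0001.local",
    "optical-0001":  "http://optical-0001.local",
}

PROFILES = {
    # Only the audio and storage boxes stay awake while playing music
    "music-only": {"audio-0001", "storage-0001"},
}

def set_power(device_url, state):
    """Ask a device to change its power state ('on' or 'standby')."""
    request = Request(device_url + "/power", data=state.encode(), method="POST")
    with urlopen(request) as response:
        return response.status == 200

def apply_profile(name):
    """Wake the devices a profile needs and put the rest on standby."""
    wanted = PROFILES[name]
    for device_id, url in DEVICES.items():
        set_power(url, "on" if device_id in wanted else "standby")
```

Because each box manages its own power internally, the 'profile' logic above needs to know nothing about what the hardware inside each box actually is.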

Submitted by Conficio+

Hi there,
I believe the system design is not new from a software/OS standpoint. Bell Labs, the cradle of Unix (and by extension Linux), has developed Plan 9, a distributed OS (http://plan9.bell-labs.com/plan9/).

Another crux is the multitude of cables and power supplies. For this to work you'd need a bus to distribute both power (in a format that is usable by the components) and the communication. Otherwise you have many power supplies, each with its own inefficiency and heat production. In addition, the extra boxes do not make this a very green concept.

However, I'd like to see innovation in terms of distribution. For example, why do I need a set of cables to connect a monitor, keyboard, device, etc. to the main PC box? A simple combination cable would be very helpful and allow me to daisy-chain. In other words, give me a USB hub in every display.

Go a little further in the design and tell me why the graphics card is in the PC instead of in the monitor. I think we need a redesign of the processing/visualization interface, in order to match graphics card and monitor abilities more closely. This means lots of cost savings, because the graphics card does not have to support all kinds of resolutions that the monitor can't display in the first place. It should also cut down on driver development costs.

In today's world the interface should be at the level of drawing vectors, rendering fonts and streaming images (and, for the gamers, some scene rendering). Drawing vectors and fonts is low bandwidth. Streaming images is medium (after all, any TV can do it), and I guess scene rendering can be handled at a decent bandwidth level as well.

K<0>

Submitted by Matthew Roley

Hi Conficio+

A few different topics here so I'll reply to each separately...


"I believe the system design is not new from a software/OS standpoint. Bell Labs, the cradle of Unix (and by extension Linux), has developed Plan 9, a distributed OS (http://plan9.bell-labs.com/plan9/)."

It's quite funny: about three years ago I downloaded Plan 9 and had a play with it, long before I even had the idea of writing this article. I didn't read any of the documentation and had no idea it was a distributed OS until your comment; I guess this aspect isn't readily seen if you're running it on a single PC as I was. I didn't consider it a viable OS alternative at the time so I ended up removing it, but after reading your comment I may give it another try just to see how closely it matches what I've been writing about.

Based on Plan 9's documentation I can see a number of similarities. The basic design principles, consistent naming of resources and a standardised protocol for their access, are strikingly similar; but then they should be, as these two steps are pretty much mandatory if you want a simple distributed system. How these principles are implemented, however, differs to a large extent between what I'm proposing and what Bell Labs have achieved with Plan 9.

Firstly, while Plan 9 has a standard messaging protocol (9P), it makes no assumption about the underlying network (other than that it will be unreliable). To this end a reliable transport protocol needs to be added; Plan 9 uses IL, again a protocol designed by the Plan 9 people, intended to serve a similar purpose to TCP but without the overhead. The upshot is that Plan 9's networking protocol can work over any transport (Ethernet, USB etc.), so the 'devices' in Plan 9 can be connected together by a variety of cables while still ensuring reliable transport. In contrast, what I'm proposing is not only a standard networking protocol (like 9P) but also a standard network connection and transport. The main reason for this is simplicity: rather than having a multitude of connectors, and the necessity of having the correct ports on the machines to be networked (which means extra software inside the boxes to account for all possible 'network' hardware: Ethernet drivers, TCP, Plan 9's IL, USB and so on), there should be a single (read: idiot-proof) way to connect devices together, ideally with as little impact on the host systems as possible in terms of required 'drivers'. One example I cite is InfiniBand: this technology has high enough bandwidth that even uncompressed video could be shunted about with relative ease, low enough latency to be usable, and reliable transport provided at the hardware level rather than through the addition of a transport protocol. If USB could somehow achieve the same performance as InfiniBand it would be an even better choice, as USB incorporates power transfer too.

A second important difference is in the definition of 'distributed' for these two systems. I may be wrong about this, but the impression I get from the Plan 9 docs is that it is more a centralised time-sharing system: users have a terminal or workstation through which they interact, but all the work takes place on centralised CPUs, in a way similar to the standard Unix setups you tend to see in research institutes and academia. To this end all machines (devices) in a Plan 9 distributed computer need to be running the Plan 9 kernel. In contrast, in what I've written, machines (devices) in the network can run any operating system internally; the requirements are instead that the device must implement the network protocol AND the network hardware, but apart from that it can run any software it wants internally that helps it do its job.

For the actual protocol I also consider something such as HTTP more viable than a new protocol; see http://www.linuxsucks.org/topic/17836/1/Main/re-Time-for-an-OS-group-hug.html for more details (there's quite a bit of preamble in that post, but about halfway down there's a more comprehensive description of how I see the system working). HTTP may seem like an odd choice, but in combination with the right networking (i.e. something like InfiniBand) I believe it would expose great power between devices while still being relatively simple both to implement and use. HTTP also has the advantage of combining resource location and namespacing (through URLs) with the networking protocol, as well as already being familiar to a large number of developers and users. Also note in that post the simplicity of the file system; this is another step I consider necessary, and something that in my opinion all current systems have a problem with.

Plan 9 solves this by using namespaces to restrict a user's view of resources to only those that are currently relevant, in contrast with what I'm proposing, where resources are hidden inside the devices and the URLs exposed by a device signify the resources available to users (if you can't see the difference, bear in mind that manipulating the user's namespaced resources requires work on the part of the kernel, whereas using URLs to access internal device functions does not). Also note in the file tree I describe how each device has management and configuration URLs; apart from providing simple, likely HTML-based configuration screens for the device, these URLs would also allow direct access to the underlying device in a similar way to telnet or SSH, for when the user does need to get at the underlying operating system.
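To illustrate the kind of URL tree I mean, here's a bare-bones, hypothetical device server written in Python. The paths (/manage, /config, /files) and responses are placeholders for the sort of thing a storage box might expose; none of it is a defined standard.

```python
# A bare-bones sketch of the URL tree a device might expose, with a management
# page and a simple HTML configuration screen. Paths and responses are
# placeholders; nothing here is a defined standard.
from http.server import BaseHTTPRequestHandler, HTTPServer

class DeviceHandler(BaseHTTPRequestHandler):
    ROUTES = {
        "/": "storage-0001: a hypothetical storage device\n",
        "/manage": "status: ok\nuptime: 42s\nfirmware: 0.1\n",
        "/config": "<html><body><h1>storage-0001 settings</h1>"
                   "<form method='post'><input name='share_name'>"
                   "<input type='submit'></form></body></html>",
        "/files": "music/\nphotos/\ndocuments/\n",
    }

    def do_GET(self):
        body = self.ROUTES.get(self.path)
        if body is None:
            self.send_error(404)
            return
        content_type = "text/html" if self.path == "/config" else "text/plain"
        payload = body.encode()
        self.send_response(200)
        self.send_header("Content-Type", content_type)
        self.send_header("Content-Length", str(len(payload)))
        self.end_headers()
        self.wfile.write(payload)

if __name__ == "__main__":
    # Serve the device's URL tree on the system 'network' (here, localhost)
    HTTPServer(("", 8080), DeviceHandler).serve_forever()
```

The configuration screen is just HTML served by the device itself, which is how any browser-equipped box on the network could manage it without installing drivers.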


"Another crux is the multitude of cables and power supplies. For this to work you'd need a bus to distribute both power (in a format that is usable by the components) and the communication. Otherwise you have many power supplies, each with its own inefficiency and heat production. In addition, the extra boxes do not make this a very green concept."

Yes, this is potentially a major stumbling block, as mentioned in the article, and it's a problem with any distributed system (Plan 9 included). In fact, cabling is a big problem in current PC systems; there are far too many plugs and cables even now (though of course a distributed system would definitely be worse in this respect). Ideally power should be transferred to devices over the same cable as networking (a la USB), though it may prove more difficult to incorporate this into a standard such as InfiniBand. Pass-through power is another option, though of course this doesn't alleviate the cables. Wireless is yet another option for both networking and power (there have been recent demonstrations of wireless power transfer). I would see such a system using both wired and wireless transfer, which possibly further exacerbates the cabling and power problems, and wireless has a long way to go before it could fulfil the latency/bandwidth requirements of a distributed system as I describe it. This is another reason (apart from greater simplicity for the user) why we absolutely need to standardise a networking/power interconnect for devices, both wired and wireless: not only to clean up the cables, but to make it trivial for users to plug things into each other (by far the most important factor in my opinion). On the matter of power, remember that these devices will generally have low power requirements individually, as well as being more cleanly detachable from the system; it may be that overall power usage is reduced on such systems, but we'll never know until one is built.


"However, I'd like to see innovation in terms of distribution. For example, why do I need a set of cables to connect a monitor, keyboard, device, etc. to the main PC box? A simple combination cable would be very helpful and allow me to daisy-chain. In other words, give me a USB hub in every display."

Yep, in my view too there is definitely a need for simpler and easier ways to connect devices; apart from external devices all having differing connectors, the internal devices such as graphics cards have all the problems I describe in the article. I'm not sure about a combination cable though: while it could be useful for a conventional PC, it would introduce great inflexibility into a distributed system as I describe it. What if the user is blind, for instance, and doesn't have a mouse, graphics device or monitor? That combination cable is gonna start looking pretty messy.


"Go a little further in the design and tell me why the graphics card is in the PC instead of in the monitor. I think we need a redesign of the processing/visualization interface, in order to match graphics card and monitor abilities more closely. This means lots of cost savings, because the graphics card does not have to support all kinds of resolutions that the monitor can't display in the first place. It should also cut down on driver development costs."


"In today's world the interface should be at the level of drawing vectors, rendering fonts and streaming images (and, for the gamers, some scene rendering). Drawing vectors and fonts is low bandwidth. Streaming images is medium (after all, any TV can do it), and I guess scene rendering can be handled at a decent bandwidth level as well."

This is one of those grey areas. While having a graphics card in the monitor is ostensibly a logical and good idea, and provides several advantages as you describe, there are also several disadvantages. Firstly, what happens when the graphics card is no longer powerful enough for the latest games? A complete monitor replacement is necessary, which rather eliminates the cost savings for the user. Another option I'd like available in a distributed system is multiple graphics devices, allowing for redundancy and concurrent (and hence faster) usage; if the graphics devices are in the monitor this obviously becomes problematic.

One possible solution I discuss at linuxsucks.org, which matches what you say but in a slightly different way, is to have a 'screen manager' within the monitor. Really this is what you describe, a graphics device in the monitor matched to the capabilities of the monitor; however, this graphics device does not do general graphical operations the way a conventional graphics card does, and instead focuses on composition. Applications on the system send their bitmaps to the monitor, where the internal graphics card composites them for display. The applications themselves could work directly with bitmaps, or use other parts of the system (such as an accelerated graphics device providing vector and font functionality) to generate bitmaps on their behalf. This would allow additional 3D graphics devices to be added to the system; they simply send frames as they are created to the monitor, which composites them into an overall display. You could have several 3D devices generating bitmap data for a single 'window', or each 3D device could be generating bitmaps for different windows. Sending lots of bitmaps around does of course require a good amount of bandwidth, both on the network and inside the devices, so this may not be viable even if the networking (such as InfiniBand) has sufficient bandwidth to cope. If it could work this way there'd be a great amount of flexibility exposed, but again there would be a 'bottleneck' in the graphics card within the monitor, so it may be wise to 'keep it distributed', even in the case of a composition device, so that it can be upgraded or replaced if need be without impacting the rest of the system.
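A toy model of the compositing idea might look like the sketch below. Everything is in one process rather than spread over the device network, and the window structure and 'framebuffer' are invented purely to show the flow of frames into a single composed display.

```python
# A toy model of the 'screen manager' idea: the monitor-side compositor keeps
# the most recent frame submitted for each window and paints them in stacking
# order into a single framebuffer. Real frames would arrive over the device
# network; here everything is in-process and the structures are invented.
from dataclasses import dataclass, field

WIDTH, HEIGHT = 80, 24  # tiny 'screen' so the result is easy to print

@dataclass
class Window:
    x: int
    y: int
    w: int
    h: int
    frame: list = field(default_factory=list)  # rows of pixel characters

class ScreenManager:
    def __init__(self):
        self.windows = {}  # window id -> Window; insertion order = z-order

    def submit_frame(self, window_id, window):
        """Called whenever an application or 3D device sends a new frame."""
        self.windows[window_id] = window

    def composite(self):
        """Paint all windows, bottom to top, into one framebuffer."""
        screen = [[" "] * WIDTH for _ in range(HEIGHT)]
        for win in self.windows.values():
            for row in range(win.h):
                for col in range(win.w):
                    if 0 <= win.y + row < HEIGHT and 0 <= win.x + col < WIDTH:
                        screen[win.y + row][win.x + col] = win.frame[row][col]
        return "\n".join("".join(row) for row in screen)

# Example: two 'devices' each drive their own window
manager = ScreenManager()
manager.submit_frame("desktop", Window(0, 0, 80, 24, ["." * 80] * 24))
manager.submit_frame("game", Window(10, 5, 20, 8, ["#" * 20] * 8))
print(manager.composite())
```

Each frame source only ever needs to know how to hand a finished bitmap to the compositor, which is what keeps the graphics pipeline distributable.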

Submitted by Johann (not verified)

With respect to modularization--the idea of using alternative components for the devices that would be plugged into the system--I mentioned before that NetPC and NetTV use cellular phone technology. This allows both devices to maintain a smaller footprint on the desktop and enables a fan-less design. The power requirements are dramatically reduced as well.

Going beyond mobile phones, other mobile devices--MP3 players, MP4 players, laptops--all consume less power while delivering the computing and entertainment needs that their users expect. A laptop running on AC power consumes approximately 60-90w as compared to a full desktop that can use as much as 400w.

Following the paradigm of familiar home entertainment systems it is possible to build a modular component PC. A stereo for example will play back CDs but the sound will be much more enjoyable with the addition of amplifiers, tweeters and sub-woofers. In the same way, the distributed PC could be built around a basic system--think something similar to Apple's Mac Mini--which can be expanded as needed. A graphics module to enhance the display, a network storage device, audio peripherals, etc. Each with their own processing capability (built on cellular components) to distribute the processing requirements of the applications that the user will run. All with a standard connector--USB.

Bottom-line--it is possible even now to start building the distributed component PC with a little tweaking from off-the-shelf parts.

Submitted by Matthew Roley

As mentioned in the article, power management would be a critical part of the distributed system as I see it; with the addition of potentially limitless peripherals to a system, each having its own processor and memory, minimised power usage is essential for each individual component. Being able to change the power state of components more cleanly is an advantage, but each device would need to strike a balance between the performance required to do its job and the power cost of achieving that performance. For many devices it is likely that extremely low-MHz processors, as used in mobiles, would be sufficient, and with most of the system employing such devices it is possible that power consumption equivalent to a laptop could be achieved. There may, however, be devices requiring a bit more oomph; this is why I think it would be important to allow device vendors to choose hardware appropriate for their device, balancing power consumption against increased performance. With each device in its own box communicating over a protocol, hardware vendors would indeed be able to pick whatever hardware is appropriate for their device, whether it be a 33MHz RISC processor or a 3GHz dual-core monster.

I'm not sure about the Mac Mini as a good example or basis for a distributed system (at least not in the way I describe in the article). The Mac Mini is essentially a small form factor PC, and while it has a whole heap of expansion options in terms of USB, FireWire and Ethernet, it still suffers from the problems of integrated systems as I describe in the article. For starters, the real hardware still resides in the main case, meaning that upgrading memory and components such as the graphics card still requires opening the case (which in this case is actually more difficult than opening a standard PC). Additionally, there's still the problem of drivers for devices having to reside on the hard drive of the main box; while this doesn't present much of a problem for the external peripherals, it is a problem for the devices in the case, as I describe in the article.

With regards to expansion and the use of USB: I really like USB as a connection standard; it's got pretty much the most usable connector available and it handles both data and power transfer. For the kind of system I'm thinking of, however, it would not be adequate (at least not in its current revision). Firstly, USB devices are far too dependent on the host system (i.e. a conventional PC) to be viable as standalone appliances that could be plugged together; the host system is effectively a bottleneck and a single point of failure, and all USB devices have to share bandwidth on the bus (as opposed to switched systems like InfiniBand, where each node gets full-speed, full-duplex transfer rates). Additionally, while the bandwidth available on a Hi-Speed USB connection is reasonable (480Mbps, translating to about 60MB/s in theory but only roughly 30MB/s in practice), it is certainly not enough for transferring more intensive graphical data, particularly because of the bandwidth shared with other USB devices; and AFAIK the latency window of USB is measured in milliseconds, which I think is too high for a distributed system in general (you really need latencies in the microseconds to get respectable responsiveness). For the low-end systems you describe it would certainly be enough, but remember that I'm trying to think of a system with a standard interconnect and protocol that could be used for any device or system, so building low-end systems using, say, USB and high-end systems using, say, InfiniBand would defeat this objective.
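Just to put some back-of-envelope numbers on that bandwidth point (the resolution and frame rate below are example figures of my own choosing, not requirements from the article):

```python
# Back-of-envelope check of the bandwidth claim above: uncompressed frames at
# desktop resolutions dwarf USB 2.0's practical throughput. The resolution and
# frame rate are example figures, not requirements from the article.
WIDTH, HEIGHT = 1280, 1024      # example desktop resolution of the era
BYTES_PER_PIXEL = 3             # 24-bit colour
FPS = 60

frame_bytes = WIDTH * HEIGHT * BYTES_PER_PIXEL
stream_mb_per_s = frame_bytes * FPS / (1024 * 1024)

usb2_practical_mb_per_s = 30    # rough real-world USB 2.0 figure quoted above

print(f"one frame: {frame_bytes / (1024 * 1024):.1f} MB")
print(f"uncompressed stream: {stream_mb_per_s:.0f} MB/s")
print(f"USB 2.0 (practical): ~{usb2_practical_mb_per_s} MB/s")
# 1280x1024x3 bytes is about 3.75 MB per frame, roughly 225 MB/s at 60fps:
# far beyond practical USB 2.0, which is why something like InfiniBand
# (or heavy compression) keeps coming up in this discussion.
```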

For your bottom line I definitely agree: it is eminently possible to build a distributed component PC now by tweaking off-the-shelf parts. Tinker, who commented above, has achieved his own system using Ethernet-connected boxes. There are countless ways to do it, but when you consider interoperability with ANY other device, the requirement of keeping up with the demands of new and as yet unknown devices, and the need to provide performance for those who need it (thinking of gamers here), the number of correct ways to do it gets considerably smaller. Hence the need for a standard protocol and device interconnect, along with a shift from integrated PCs (with all drivers and applications, and the meat of the hardware, locked in a single box) to a TRUE distributed system where each device has its own software and is independent, but can communicate with other devices easily because of a standardised interconnect.
