Linux fragmentation: help or hindrance?

Here is a familiar list for readers: vanilla kernel, custom kernel, debs, rpms, tgzs, source files, Apt-get, Emerge, Yum, Urpmi, Synaptic, Kpackage, Adept, Kyum, Yumex, Smart, Klik and Autopackage. I could go on, but you get the idea.

Every user of GNU/Linux knows the mantra that everything is a file, but they also know that GNU/Linux promotes itself under the banner of choice, and therefore of power and flexibility. That is true, and it is what attracts so many of us to GNU/Linux, in addition to the political freedoms guaranteed by its open source nature and the specifics of the GPL (whichever version you support).

Loose in the Candy Store

Choice, power and freedom: it's quite a heady brew and sometimes it precipitates a veritable chain reaction of events. Distros fork as people have "political" disagreements, new file systems evolve, new package managers (command line and GUI) come out. Sometimes the rate of change can be dizzying and the end user is left reeling from the sheer multiplicity of choice available. Choice implies knowledge, and as the choice expands the learning curve seems to become vertical, which can put off some newcomers. You can sometimes feel like a starry-eyed child in a shop where all the sweets are free. There is always a danger that you might starve to death for want of knowing where to start.

Fragmentation has, perhaps, some negative connotations. It suggests things falling apart, the centre unable to hold, like some Yeatsian falcon flying off at a tangent; but of course in Linuxland fragmentation is another name for freedom and choice. It means that any individual or group is free to develop their distro their way. This is very empowering, as you can basically cherry-pick your components. It can be compared to the difference between buying a suit off the peg and purchasing a bespoke item - the latter will be specifically tailored to your dimensions. In computer terms, you have built your own PC and ignored that shiny new model in the shop, and you will get exactly what you paid for. Why pay through the nose for that top-of-the-range graphics card when you don't play games and all you want is to surf, e-mail and do the odd spreadsheet or word processor document?

Where there's a will

On the software side, if a large corporation has its programming locked down in a proprietary format, it can and does make decisions which affect you as the end user, up to and including control of the release cycle for updates and new products, as well as decisions to discontinue support for legacy systems. In GNU/Linux, if you have the will, determination and technical know-how, you can decide to maintain that legacy system individually or in concert with others. That kind of fragmentation is clearly a good thing, for it allows individuals to exercise a modicum of control over their own computing experience and retain the features of an operating system which they like and use. Additionally, fragmentation can also reflect people developing or extending concepts, programs and ideas to meet new challenges. To take just one example: the live CD.

When Klaus Knopper developed his live CD he could not have imagined how others would take it up and run with it. Many use it to showcase GNU/Linux to newcomers; others are dissatisfied with the lack of certain features they want - a kernel module, support for a particular graphics card, better package management, faster boot times and the ability to run on different computer platforms, to name a few. The live CD has been a boon to those who were wary of installing to the hard drive for fear of wiping another operating system: it can be used to test for hardware compatibility prior to a full install, and now of course it is possible to do the testing first and, if you like what you see, proceed to a full install from within the running live environment. Entire live distros have been built to cater for security, file repair and rescue, clusters, multimedia, graphics and more.

The Bad News?

The downside to all of this is, of course, the dreaded phrase "steep learning curve". It is one of the most common complaints you will hear from those who are reluctant to adopt GNU/Linux, and they will invariably invoke the bogeyman of fragmentation as irrefutable proof. However, a closer look at this familiar objection reveals that it is more of an issue for businesses adopting GNU/Linux than it is for private individuals. This is a fair point, as the requirements of a business will differ markedly from yours or mine. A business will be looking for continuity, reliable upgrade cycles, technical support and all the things that users of closed source, commercial, proprietary operating systems would expect - and there are many enterprise versions of GNU/Linux which render such support and do it well: Suse (sorry, Microvell), Red Hat and Mandriva, to mention a few.

The Good News

For the individual user of GNU/Linux these things are not unimportant, but they are not necessarily critical deal breakers. Unlike a business, you or I can behave like promiscuous distro tarts and change our loyalties more frequently than Winston Churchill changed parties. Indeed, for many, this is precisely the attraction: the novelty of trying out new distros, new features that someone has included in the latest live version which were not available in your distro or its package repository. Take a specific example of the bewildering choice. You have a hard disk install of, say, Mepis Linux (OK, I'm writing this on it!) and you fire up, say, Synaptic (or Kpackage or Adept or...) and do a quick Apt-get update in anticipation of installing that killer piece of software you have promised yourself.

Horror! It isn't available. Or if it is, you discover that it has a series of unmet (or unmeetable) dependencies. Against your better judgement you force it to install. Disaster. It either doesn't work or it generates system instability, so you uninstall it and go off on an internet search for a third-party rpm or deb or whatever is the flavour of your distro. That doesn't work either. Dependency hell. Hmmm... You have a think and change strategy. Of course, why make life difficult for yourself! Give it the old three card trick and install from source using the configure, make and make install method, freeing you from dependence (no pun intended) on package developers - but then you discover that not every version of GNU/Linux installs by default all the development packages which installation from source requires. So, off you go and install them one by one to meet each error message. You've done it at last. The application has installed and it is working beautifully.
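A minimal sketch of that old three card trick, assuming a purely hypothetical tarball called killer-app-1.0.tar.gz:

    # Unpack the source archive and enter the build directory
    tar xzf killer-app-1.0.tar.gz
    cd killer-app-1.0

    # Check for compilers, headers and libraries, and generate the Makefile;
    # each failure here names a development package you still need to install
    ./configure

    # Compile the source, then install system-wide (root required)
    make
    sudo make install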

Being perverse, you decide to remove it and reinstall, just to prove to yourself that it was no fluke and to claim bragging rights on Linux Street. Oops! It was a source install and there is no rpm or deb to uninstall via your chosen package manager. You should have installed Checkinstall and substituted that command at the make install stage. That would have created a binary package registered in the package database, which could have been uninstalled easily by the familiar package manager instead of hunting for the developers' uninstall config files.
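The Checkinstall variation is the same dance with a different final step; a rough sketch, using the same hypothetical tarball:

    ./configure
    make

    # checkinstall runs the install but wraps the result in a .deb, .rpm
    # or .tgz and registers it with the package database, so your usual
    # package manager can remove it cleanly later
    sudo checkinstall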

However, suppose you aren't too technically minded and you just don't fancy all that faffing about on the command line. The "fragmentation" of GNU/Linux gives you several more options. If oranges aren't the only fruit, then rpm and Apt-get are not the only package managers. If you are using Gentoo you have Emerge for installation, and it would keep a configuration tweaker happy to the crack of doom, but its relative complexity puts you off. Enter SMART.

For once it's not yet another Linux recursive acronym. It means just what it says. Its developers claim that it uses a very smart algorithm, superior at solving dependencies and choosing the best, weighted, optimal solutions. They claim that what fails to install under Apt-get, say, will install under SMART. If even SMART should fail you could always go for an off-the-shelf solution: KLIK. For example, I could not install Democracy Player by any of the conventional routes, but KLIK did the business. It not only worked but left my file system intact, and it could be painlessly removed by simply dragging the desktop icon onto the trash icon; you can of course save a klik package to an external, removable medium and use it on other systems. Mileage will vary, of course, but it does represent yet another weapon in the installer's armoury.
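To give a flavour of SMART in action, its command line mirrors the familiar Apt-get routine; something like this (the package name is again hypothetical):

    # Refresh the metadata for all configured channels
    sudo smart update

    # Let SMART's solver work out the best weighted dependency solution
    sudo smart install killer-app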

An Embarrassment of Riches

There are four more solutions. Autopackage, once you have installed the first package, is as near as you are likely to get to a Windows-style install, though the range of available packages is small compared to the repositories for rpm and Debian distros. Debian fans used to joke that Apt-get was what RPM wanted to be when it grew up, and it was subsequently ported to RPM distributions, as was YUM, which is used in Fedora but came from Yellow Dog Linux. Alien too is an ideal solution if you want to do a binary install but cannot find one suitable for your distribution: on the command line you can convert one binary package format to another (deb to rpm and vice versa). Then there is CNR (click and run) from those nice people who brought you Lindows, aka Linspire and latterly Freespire. CNR, if I understand it correctly, is like SMART in that it acts as a meta-manager. It has not been released yet (second quarter of this year, if you are interested) but when it is, it is planned to support the following versions of GNU/Linux: Debian, Fedora, Freespire, Linspire, OpenSuse and Ubuntu, with support for other versions planned for 2008.
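By way of illustration, an Alien conversion is a one-liner in either direction; a sketch with hypothetical file names:

    # Convert a Debian package to rpm format...
    sudo alien --to-rpm killer-app_1.0-1_i386.deb

    # ...or an rpm to a Debian package
    sudo alien --to-deb killer-app-1.0-1.i386.rpm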

It looks extremely interesting and it will be an enormous achievement if it turns out to be the Rosetta Stone of package management. I'm certainly going to try it out when it becomes available and test it to destruction. Doubtless it will bring command-line purists out in hives, but if the Gordian knot of fragmentation is to be cut and mass uptake of GNU/Linux is to happen, then this sort of development is crucial. It does not need to replace existing package managers or deprecate command-line tools (which I love), but if it draws in the Windows technophobes and answers their objections, then perhaps Bill and Steve should start to feel uncomfortable.

All's Well That Ends Well

Finally, although Windows dominates the desktop, this is not necessarily a bad thing for us. When professional phishers and bored script kiddies are targeting networked computers, the sheer preponderance of Windows machines and their lamentable security record ensure relatively easy pickings for the hacker. As if to add insult to injury, most will be using Internet Explorer and Outlook: a hacker programming a malicious script knows that it will catch most users because most will be running these programs. Over at Planet Linux the hacker will have to design for numerous e-mail clients - Sylpheed, Pine, Kmail, Evolution and Thunderbird - and for numerous browsers - Dillo, Firefox, Seamonkey, Epiphany, Flock, Konqueror, Links, Lynx, Galeon and Opera (OK, Opera, Firefox and Flock are available for Windows too). I think that there is an acceptable degree of fragmentation, for in it, paradoxically, lies our best protection. The trick is to expand the base without endangering those aspects of GNU/Linux which attracted us to it in the first place.

This symbiotic balance is at the core of the efforts to spread this operating system without diluting its power or security. Fragmentation may well be the ghost at the banquet, the price to be paid for freedom and empowerment, but if badly handled it might inhibit private and enterprise uptake. As for me, all I can say is that I love GNU/Linux. Hell, why don't I just come out and admit it? If I were a woman, I'd marry it and have all its babies!

Comments

Submitted by schestowitz

It's the Darwinian approach. How will you get the best solution if you do not explore? Fragmentation? No, it's GPLed, so there's reuse. It is vital for innovation. Alternative routes also become a remedy to dead ends (e.g. Mac OS 9, Windows 2003).

Submitted by Kevin Dean

I'll respond to the very general question of "Does fragmentation help or hurt?"

My personal opinion is that fragmentation does in fact hurt GNU/Linux. But this is less an issue of the various overlapping applications or utilities than of the fragmented reasoning behind things.

I'm a GNU/Linux user for the sole purpose of Freedom. The fact that GNU/Linux is more secure, more stable and less popular are wonderful side effects, but they mean little in why I began (and continue) to use it.

So when I see people "sell" GNU/Linux to Windows users on the basis of the security, or the speed, and totally neglect the aspects of Freedom, a small part of me dies. What's even worse is when hardened GNU/Linux users don't care about Freedom.

That said, the fact that people don't value the same things endangers the reason I use GNU/Linux. There are, however, people out there who want Linux adopted mainstream at all costs; they would likely say that the split in views helps Linux.

Who's right? When that question has an answer, this article will have a conclusion.

Submitted by Tyler

I think the idea that installing software on GNU/Linux is hard is overstated. The real issue is that *nixers have access to much more software, and at much earlier stages of development, than the folks using proprietary systems. Any of the top ten or twenty distros will provide near child-proof access to mature mainstream applications. If you can get root user status, that's about the hardest part of installing OpenOffice, Firefox, Gnumeric, Emacs, xpdf, Amarok, Gimp, Inkscape, Abiword etc.

You don't run into problems until you start looking at less polished programs - stuff that either serves a niche too small or too technical to merit deb-ification, or isn't far enough along in development to be released to non-technical users. This is software that *should* be hard to install. If it were easy, you'd get a lot of unhappy users without the skills to understand what went wrong or to contribute bug reports back to the developers. By the time the software is ready, they won't want anything to do with it.

The real difference between installing software in the *nix world and the MS world is not that it's hard for us and easy for them. The real difference is that the common stuff is easy in both worlds, but while cutting-edge software can be difficult to install in GNU/Linux, it is impossible to install in Windows, because it is simply not available.

Ty

Submitted by Anonymous visitor (not verified)

Having a choice is one of the foundation pillars of Capitalism. Saying the Linux market is "fragmented", with too many distros to choose from, is like saying the car market is "fragmented", with too many models to choose from.

The closed-source software market consolidated down to a small handful of suppliers simply because economies of scale--closed-source software being very expensive to develop, but very cheap to distribute--lead to a winner-takes-all situation where everybody apart from the number-one player eventually goes out of business. There are no such constraints in open source because, by spreading the development cost through the freedom to copy other people's work, it becomes feasible to customize your product for very small market niches.

Thus, the situation today, with over 350 active Linux distros, is never going to "consolidate". For every one which goes defunct, two others will spring up in its place.

Welcome to the land of choice.

Lawrence D'Oliveiro

Submitted by Anonymous visitor (not verified)

tl;dr
the user should hire a free software professional

longer version

Choosing what software to use and learning how to use it is just like participating in any other complex activity: it requires an investment if something needs to be done.

The analogy I'll be using is law. Nobody is born with the ability to understand all that is required to handle legal procedures. If a layperson is faced with a legal issue she has a number of choices to deal with it. She can choose to ignore it and get nothing done, she can invest her time and effort into studying the related field, she can get legal advice and/or support from people who aren't in the field (not a good idea), or she can invest her time and money and get legal counsel from a relevant professional.

The same is true with software (free or non-free). The user can choose to do nothing (and get nothing), can choose to invest some time and effort to study what is available, can choose to study software design/computer science to design and code the required app, can get advice and/or support from a friend or can hire the services of an industry professional.

It doesn't matter what the user chooses to do to obtain what is desired; a price needs to be paid if something is to be done. The price is often high, requiring various investments of time, effort and money.

The problem is, people are often blind to the long-term advantages of the freedoms they inherit with free software and to the user-subjugating nature of non-free software. The overwhelming majority of people I know would rather invest in the convenience of non-free software than in the overall superior investment of free software.

Basically, if the user doesn't know which free software to use, she should get some advice from a knowledgeable friend or pay a free software professional. If the user doesn't know how to operate a free program well enough, she should get some advice from a knowledgeable friend or hire a free software professional. If the user finds her free program is inadequate, she should enlist help from a knowledgeable friend or hire a free software professional.

Submitted by Anonymous visitor (not verified)

Your points are incisive, and I can agree with the basic concepts. The problem (for me and many users) is that we often don't know what we're getting into with this whole Linux thing. I have been an MS user for several years now. While I hate Microsoft and their ilk, I have to give them props for making the system relatively easy to use.

The problem isn't just about fragmentation. Most of us are limited in our time or our income, and so we may not have the personal resources available to study each and every single application, let alone all of these distros. From an average user's standpoint, it should not be difficult to invest in a new app. I don't need tailor-made software, just software that installs without having to track down all of these packages that I don't know about or understand.

I agree, freedom has its price, and there does need to be some sort of investment. But consider this: I tried several Linux distros. I was only able to figure out how to get one of them even to boot, much less install it. Then I tried to add a few applications to improve my productivity. It was a royal nightmare. After an initial investment of a full week spent in research, tracking down fixes or workarounds for the various problems that I had, I became frustrated, dumped the entire thing altogether, and went back to Windows.

In contrast, it took me less than a day to get going with Windows. In less than three days, I was going full speed, the worst of the so-called learning curve over and done with.

I agree with the author... things do need to be made a bit easier. I shouldn't have to seek advice or professional help for something that I can already do with ease on a similar system.

I'm willing to invest, but only up to a certain point.

Submitted by Anonymous visitor (not verified)

I'm sorry that you are experiencing such difficulty installing Linux and even getting it to boot.

You do not say what your hardware specification was or what version of Linux you were trying to install. I have an IQ lower than my shoe size and I can assure you that even I could install it. Maybe I just have the devil's own luck, but I have rarely failed with an install, and I have installed all the major and some of the smaller distros on an assortment of new and old desktops and two laptops.

I started using Linux back when Red Hat 9 was just out, and that was the first distro I installed. Since that time it has come a very long way, and installation has become relatively easy, with intuitive graphical user interfaces and installs from running live CDs. For ease of installation, why not try Ubuntu (any flavour) or a Debian-based distro like Mepis? Both have live versions to sample, test hardware and install if you like.

The big mainstream distros like Fedora, Mandriva, Suse etc. are very user friendly indeed and should not require geek-level skills. Even Gentoo is now kind to relative newcomers. If I were to recommend a version of Linux that should run on most PC architectures, I would say that you should give Ubuntu a try, not only for ease of installation but because, once installed, you would have access to almost 20,000 packages via the repositories! If you don't like using Apt-get on the command line, the user-friendly GUI, Synaptic, will ensure easy installation of software with all dependencies taken care of.
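For what it's worth, the command-line equivalent is only two lines (the package here is just an example):

    # Refresh the package lists from the repositories
    sudo apt-get update

    # Install the package, with all dependencies resolved automatically
    sudo apt-get install abiword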

Believe me, once you have it installed and have downloaded a few programmes you will not look back. Honestly. I'm sorry that I cannot be more specific, as you have not indicated the exact nature of your problems getting Linux to boot/install, but I can assure you that if I can get it up and running anyone can.

Gary Richmond

Submitted by sureshkrshukla

Hindrance as a user -
Which distro to choose?
Can XYZ distro detect all my hardware?
When will the setup finally be complete, with all hardware working and basic software in place?

With Windows, setup completes in 2-3 hours with all drivers and functionality.

Hindrance as a programmer -
Which library to choose: GTK or Qt?
Which language to go for? GNU advocates C, but I like C++, etc.
There are so many IDEs, all lacking the same features but each marketing something unique: Codeblocks, Eclipse, KDE, command line, Anjuta, etc.

After endless evaluation of IDEs, I put a feature list and their status on Wikipedia.
http://en.wikipedia.org/wiki/Comparison_of_integrated_development_environments

Surprise! Microsoft Visual Studio came out the winner in the list of C++ IDEs.

Somebody talked about 'reuse', but how many people can or do read others' code?
People can't read others' code so easily, so they start their own version.

Criticism is easy and consolidation is difficult.

Nobody improves existing code; everyone only forks or creates afresh, finally making the same feature-incomplete, difficult-to-use software with a unique combination of {language + library}.

Author information

Gary Richmond

Biography

A retired but passionate user of free and open source software for nearly ten years, novice Python programmer, Ubuntu user, musical wannabe when "playing" piano and guitar. When not torturing musical instruments, rumoured to be translating Vogon poetry into Swahili.