2009: software installation in GNU/Linux is still broken -- and a path to fixing it

GNU/Linux is slowly invading everybody's everyday life. I won't say "The year of the GNU/Linux desktop is here". Been there, done that. But, GNU/Linux is definitely imposing its presence -- think about Android, or the number of people who are currently using GNU/Linux as their main desktop.

And yet, software installation in GNU/Linux is broken. No, not broken... it's terribly broken. Why is that, and what can be done to fix it?

The current story

Most distributions today (including the great Ubuntu) are based on package managers. If you want to install a piece of software, you grab it from one of the official repositories, and your package manager "explodes" it onto your computer's file system: a program will place bits and pieces in /usr/bin, /usr/lib, /etc, and so on. In Ubuntu, for example, you would probably do this through Synaptic. A package manager will normally solve all the "dependency problems" for you. Ah, dependencies... basically, an image viewing program might need, for example, libjpeg to function (libjpeg being a library of functions to open, save, and generally deal with JPEG files). This is a very Unix-ish approach. It works perfectly well for servers, but fails on several levels for clients. Why?
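
To make this concrete, here is a minimal sketch, in Python, of the dependency resolution a package manager performs. The package index here is a made-up toy; real tools like apt and yum also track versions, conflicts and far richer metadata:

```python
# Hypothetical toy index: package -> list of packages it depends on.
INDEX = {
    "image-viewer": ["libjpeg", "gtk"],
    "gtk": ["glib"],
    "libjpeg": [],
    "glib": [],
}

def resolve(package, seen=None, order=None):
    """Return an install order that satisfies all dependencies."""
    if seen is None:
        seen, order = set(), []
    if package in seen:
        return order  # already scheduled; a real resolver must also
                      # detect and report genuine circular dependencies
    seen.add(package)
    for dep in INDEX[package]:
        resolve(dep, seen, order)
    order.append(package)  # dependencies first, then the package itself
    return order

print(resolve("image-viewer"))  # ['libjpeg', 'glib', 'gtk', 'image-viewer']
```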

There are several drastic problems with this approach. Here is a substantial -- but by no means exhaustive -- list (which will probably grow as people e-mail me):

  • Users need to have root access in order to install a piece of software; no per-user installation is allowed;
  • It's very tricky to install several versions of the same piece of software. Just think of the poor graphic designer who needs to install several versions of Opera and Firefox;
  • Users are stuck with the piece of software installed system-wide;
  • The software needs to be downloaded from the official repositories. Well, it doesn't need to, but an average user wants to stay well away from unofficial repositories for technical reasons;
  • In some cases (especially when the user adds repositories or installs packages directly), the dependency-checking mechanism often fails and users end up with circular dependencies. These are nasty;
  • A piece of software is bound to a specific distribution, and -- what's worse -- to a specific version of that distribution too. It's not trivial to install OpenOffice 3.1 on Ubuntu 8.10. You can argue that you can install the bunch of .deb packages from OpenOffice's web site. Tell that to your grandmother or your average inexperienced computer user;
  • It's not trivial to "give" a program to a friend. To the end user, giving a program to a friend should be as simple as dragging an icon onto a memory stick; instead, files are scattered all over the system.

It's 2009, and GNU/Linux is still plagued by all of these problems. Even Ubuntu, a distribution I love, is plagued by all of these issues -- and we are talking about a distribution aimed at end users!

What the story should be

The story should be very simple:

  • Users should be able to install software even without being root
  • Users should be able to install different versions of the same software immensely easily
  • Users should be able to run either their own version of the software, or the one installed on the system (if any)
  • It should be possible to download and install software even though it doesn't come from an official repository
  • Software just needs to work -- unchanged -- even if it's a bit old and it's run on a newer distribution
  • It should be possible to "give" a piece of software to a friend by simply dragging an icon onto a memory stick

All of this is already true of Apple's OS X. They got software installation just right -- although a few programs, lately, seem to come with an ugly installation process.

Where does the problem come from?

Don't get me wrong: I think Ubuntu is a fantastic system, and gets a lot of things right. I think the problem stems from an issue that is more philosophical than anything else.

The issue is at the heart of this article, and deserves to be put in bold.

Every GNU/Linux distribution at the moment (including Ubuntu) confuses system software with end user software, whereas they are two very different beasts which should be treated very, very differently.

I think using dpkg/apt-get or rpm/yum for system-wide software, libraries and so on is the way to go. GNU/Linux's success in the server arena is not a coincidence: a distribution is made up of several independent "bricks" which create a majestic building.

However, using the same philosophy -- and therefore architecture -- for end-user software is just too limiting. My point list above is not "a list of unfortunate drawbacks". It's one of the major reasons why GNU/Linux hasn't achieved mass penetration in the desktop arena.

What bothers me is that while all of the other problems are being solved (vendor support among them), this one remains a persistent thorn in every GNU/Linux user's side. A painful one.

Existing material about this problem

A lot of debate -- as well as software -- exists about this issue. In terms of software, you can get a whole distribution -- GoboLinux -- which follows exactly this principle: one directory per program. There is a problem with GoboLinux's approach: it applies the "one directory per thing" approach to everything -- including system libraries and server-side programs. GoboLinux also goes one step further by completely changing the file system layout -- an idea I am strongly against.

In terms of what's been said, there are several discussions about this in Ubuntu and Debian. A good start is the [Rename Top Directory Names](http://brainstorm.ubuntu.com/idea/6243/) idea in Ubuntu. This link has a long list of duplicates. There are also many, many "blueprint" drafts in Ubuntu's Launchpad. There are so many, in fact, that you will get lost reading them. A lot of them talk about a simplified directory structure for the system, which as a consequence would imply simplified software installation.

What's wrong with GoboLinux?

I don't think GoboLinux's approach is a winner for two reasons:

  • The Unix file system has been around for a long time -- for good reason. It does work extremely well to keep a system sane and working.

  • It would meet too much resistance in the GNU/Linux community -- for good reason.

However, GoboLinux gave us a practical example that this change can be made. It's actually possible.

Four steps to fix the problem

I can't really fix this problem. It will take a lot of effort, and a lot of courage from major players to even start heading in the right direction.

The first step is to face the truth and admit that there is a problem. This is the aim of this article, which -- I hope -- will resonate within the GNU/Linux community.

The second step is to set out a path which might eventually lead to a solution. This is what I will attempt to do in this article. The solution will be generic and I will try to borrow from as much existing software as possible.

The third step is to improve on the proposed solution; this is tricky, because there needs to be the right balance between too little and too much planning. It also requires somebody to coordinate the discussion, somebody able to lead everybody towards a full solution. My secret dream is that somebody from Canonical, or from Red Hat, would do this.

The fourth step is implementation. This is the hard part. I am sure that implementing it will reveal problems, limitations -- and more.

My own semi-technical take

Here is my idea. I haven't programmed in C in years; this means that I might make some silly mistakes. However, I am confident I can provide a good starting point.

Here we go.

  • There should be a comprehensive list of libraries, and versions, expected to be available with the operating system. Today, GNU/Linux has quite a number of desktop installations, and we have a pretty strong idea of what a desktop system is expected to provide. This list should include both GNOME and KDE, and it should be a cross-distribution list. To get there, maybe one distribution (Ubuntu?) might write the list, and others might follow. Every two years or so a new "version" of this base system might come out, with an updated list of libraries and versions. Note that applications should do their best to work on the current base system and on the previous one; this would mean that newer applications would have the potential to work on four year-old systems.

  • There should be a well-defined directory tree that contains a whole application. It should include 1) the executable, 2) the icon, 3) the "lib" directory with extra libraries not listed in the point above, and 4) anything else. This directory should be expected to be read-only. The directory could have the extension .apx and have a file called application.xml in it.

  • In case libraries are provided, the system should add them to the library path before the system ones. So, if a program needs a specific library that is not listed in the first point, or if for some reason it needs a different version of one of the "stock" libraries, it can simply ship that library in its own "lib" directory, where it will be found first (see the launcher sketch after this list).

  • GNU/Linux file managers should show those directories and their icons

  • There should be different directories for the different versions of executables and libraries, according to the processor architecture.

  • The operating system should keep track of the applications available (each folder with the extension .apx and with an application.xml file in it could be expected to be an application) and their locations. This can be done easily with triggers in the kernel when a file is moved, copied and so on (see the scanner sketch after this list). The system should allow two different copies of the same application in two different directories.

  • The operating system should offer a way to upgrade all the existing applications (since it knows what's on the disk and what version it is).

  • There should be a security system where whoever distributes the application is able to "sign it" -- users should be able to view the signature before running it. This can be extended as far as we want to take it.

  • The distribution should have the option to hide end-user applications completely in its package manager. Yum/apt-get/Synaptic and the like should still be used to keep the system up to date -- just not for end-user programs.

  • There should be a "recipe system" like the one available in GoboLinux, where a piece of software is compiled making sure that it works in its own directory. Here, looking at GoboLinux did would be immensely beneficial. Note that providing a working recipe for each piece of software would be a big task, but it would be limited to end-user software.

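An equally rough sketch of the application registry: find every .apx folder that contains an application.xml and record what is installed where. The list above suggests kernel triggers to keep this current; a plain rescan (or inotify on Linux) is the simplest stand-in, and again every file and tag name is an assumption:

```python
# Hypothetical registry scan for .apx bundles.
import os
import xml.etree.ElementTree as ET

def scan_applications(roots):
    """Yield (name, version, path) for every bundle under the given roots.
    Two copies of the same application in two places are both reported;
    knowing every name, version and location is also what makes
    system-wide upgrades of bundled applications possible."""
    for root in roots:
        for dirpath, dirnames, filenames in os.walk(root):
            if dirpath.endswith(".apx") and "application.xml" in filenames:
                m = ET.parse(os.path.join(dirpath, "application.xml"))
                yield (m.findtext("name"), m.findtext("version"), dirpath)
                dirnames[:] = []  # don't descend into the bundle itself

for name, version, path in scan_applications(["/Applications",
                                              os.path.expanduser("~/Apps")]):
    print(name, version, "->", path)
```
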
Whoever manages this system should look closely at what OS X does, because OS X's engineers had the exact same problems to solve -- and solved them successfully.

Conclusions

This article might start a revolution -- or it might just be yet another article complaining about installing software in GNU/Linux.

I have a dream. A dream of a world where people distribute applications as bundled directories, and these bundles work in Ubuntu, Fedora, etc -- and they keep on working when a new version of the operating system is installed. A world where software installation in GNU/Linux is easy and applications can be swapped by simply copying them onto a memory stick.

I wonder if I will ever see this in GNU/Linux.

P.S. Some will say, "if you like the way OS X does things, use OS X". My answer to that is, "I like the way OS X does things; it works, it solves problems. But let's rather be inspired by it and improve on it".

Comments

jabjoe's picture
Submitted by jabjoe on

What you want is called Zero Install http://0install.net/

It is used in ROX to copy RISC OS's Application Directory.

http://en.wikipedia.org/wiki/Application_Directory

Having been a RISC OS user, I can say this isn't a perfect system. Each application folder ends up with everything required to run the application, which sounds good, but isn't.

Like all OSs, RISC OS looked for a DLL in the system first, then in the local folder. But what if a DLL of the same name but a different version was in the system? BANG. So you must name the DLLs differently, or load version info and select from that.

So if that's all working, you have multiple apps using different versions of the DLL all in memory. This is something I, at least, don't want to be the common case!

You want dynamic libs shared by many applications. Which yes means a dependency soup. That's why you need a package manager! (WHY DOES WINDOWS STILL NOT HAVE ONE)

Perhaps for special cases, using Linux custom namespaces, you could knock something up that meant a user could install and run applications without admin. The applications are run in a namespace where the /usr, /etc, /lib and /bin folders are union-mounted from the user's and the system's versions. Or you make the split between user applications and system applications, and user applications run in this custom namespace. But do you want this per user? I guess not. Do you want all users to be able to install/uninstall/modify these apps? I guess not. Then don't you just end up with admin again?

Joe

Tony Mobily's picture

Hi,

I had already replied to this comment, but must have forgotten to submit it after previewing it...

-----------------
What you want is called Zero Install http://0install.net/

It is used in ROX to copy RISC OS's Application Directory.

http://en.wikipedia.org/wiki/Application_Directory
-------------------

See my other comment on Zero Install in my other answer.

-------------
So if that's all working, you have multiple apps using different version of the dll all in memory. This is something I at least don't want to be the common case!

You want dynamic libs shared by many applications. Which yes means a dependency soup. That's why you need a package manager! (WHY DOES WINDOWS STILL NOT HAVE ONE)
---------------

Did you actually read my article...?
This is precisely why I propose a set of common, guaranteed libraries that can be expected.
So, a program would only have the extra ones, or the ones that are not "guaranteed".

-----------------
Perhaps for special cases, you could use the linux custom namespaces, you could knock something up that meant a user could install and run applications without admin. The applications are run in a namespace with the /usr,/etc,/lib,/bin folders unionfs with the user's and system's version. Or you make the split between user applications and system applications, and user applications run in this custom name space.
-------------------

This goes against the important principle in my article, "One application per folder"

----------------
But do you want this per user? I guess not. You want all users to be able install/uninstall/modify these apps. I guess not. Then don't you just end up with admin again?
-----------------

I don't get it. What's stopping people from doing that _today_? If you are saying that you don't want users to execute applications _at all_, then that's a different issue -- something I suspect you will get _very little_ support for...

Merc.

jabjoe's picture
Submitted by jabjoe on

Zero Install I'll cover in the other thread.

I read your article; well, more precisely, I listened to it as I worked. ;-)

No matter how wide your guaranteed lib list is, people will want to use libs not in it. You want a system where applications that use the same lib use the same file for that lib. Even if you did have a system with every possible lib installed just in case, would you want that?

The custom namespace could work however is required; you could make an existing application adhere to "One application per folder", but I admit it's a hack.

Of course I want users to execute applications. I can't see how you thought I meant that. I'll try to be clearer.

You don't want each user to have their own copy of the app; for very few apps are users going to need their own copy. So you want a system of shared applications. So who decides what is installed or not? One user could uninstall another's app. So you end up with permissions.

Tony Mobily's picture

Hi,

> Zero Install I'll cover in the other thread.

OK

> I read your article, well more precisely I listen to it as I worked. ;-)

:-D

--------------------------
No matter how wide your guaranteed lib list is, people will want to use libs not in it. You want a system where applications that use the same lib use the same file for that lib. Even if you did have a system with every possible lib installed just in case, would you want that?
----------------------------

I wrote this in the article... you write a comprehensive list of libraries that you use as a "base" system, and then if a program requires more libraries, it would have them in its own lib directory. If those extra libraries are installed with the system already, the system-wide ones will be found beforehand.

------------
You don't want each user to have their own copy of the app; for very few apps are users going to need their own copy. So you want a system of shared applications. So who decides what is installed or not? One user could uninstall another's app. So you end up with permissions.
------------

...? Just have a shared "application" folder system-wide, available to all users. Only users with administration access will be able to copy apps in that folder.

This really is the least of the problems.

Merc.

jabjoe's picture
Submitted by jabjoe on

------------
I wrote this in the article... you write a comprehensive list of libraries that you use as a "base" system, and then if a program requires more libraries, it would have them in its own lib directory. If those extra libraries are installed with the system already, the system-wide ones will be found beforehand.
------------

That's standard, search PATH then locally.

One thing that helps is to make the DLL interface version part of the name. So your app links to lib_123.o, and maybe later lib_123.o becomes a wrapper of lib_124.o for old apps. Of course, that doesn't 100% work, because sometimes with other implementations bugs show up that were hidden before; but in the open source world this is easily fixed. In the closed world it isn't, so in Windows you have the manifest enforcing it.

With local-only DLLs, you end up in DLL hell, which is another reason MS introduced the manifest, which solves one problem but gives you another -- for instance, "hey, I actually wanted it to use the new DLL, it fixes the bug!". Local libs should be the exception, not the rule.
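
A toy sketch, in Python, of that version-selection idea, with made-up file names (this is in effect what ELF sonames already give GNU/Linux):

```python
# Pick the newest library whose interface (major) version matches
# what the application was linked against.
import re

def pick_library(available, name, wanted_major):
    best = None
    for filename in available:
        m = re.fullmatch(rf"{re.escape(name)}\.so\.(\d+)\.(\d+)", filename)
        if m and int(m.group(1)) == wanted_major:
            minor = int(m.group(2))
            if best is None or minor > best[0]:
                best = (minor, filename)
    return best[1] if best else None

libs = ["libfoo.so.1.2", "libfoo.so.1.3", "libfoo.so.2.0"]
print(pick_library(libs, "libfoo", 1))  # libfoo.so.1.3 -- same ABI, newest
```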

Apps can be written like you want, in a single folder, and some are, at least in ported versions. But in general it's better if the libs are system-wide and brought down only when needed. Which is why they tend to do that.

I don't feel you're pushing for something new; it seems like what existed on RISC OS and can exist now (on any OS, bar being able to "execute" these application folders) but isn't normally done, as it's considered bad practice because it doesn't scale.

----------
...? Just have a shared "application" folder system-wide, available to all users. Only users with administration access will be able to copy apps in that folder.

This really is the least of the problems.
---------

How is that different from now in the area of administration?

Seems like what would make you happy is just a script that makes a folder containing everything an app needs (sucks it from a deb file?), and creates a namespace to run the app in, to make it think it's installed normally. I'm sure there are things that make this kind of portable app already. But you wouldn't want the whole system done like that; you only want it when you want to run something that either doesn't play nice with the rest of the system, or when you don't have admin.

Terry Hancock's picture

You don't want each user to have their own copy of the app; for very few apps are users going to need their own copy. So you want a system of shared applications. So who decides what is installed or not? One user could uninstall another's app. So you end up with permissions.

At first I thought I agreed with you on this, but then I started thinking about how programming languages like Python handle the problem, which is reference-counting. Then I remembered that my current backup system does precisely this using hardlinks, which are a common element of nearly all Unix/Linux filesystems.

One could easily imagine an installation system which shows each user an independent /usr drive (or /usr/lib or /usr/share/lib or whatever is needed), but without taking up (much) more space when the same shared object libraries are installed -- instead, the libraries get hardlinked to the same data. So long as any one user is using a given library, the library is stored on the system, but when the last user abandons it, it automatically goes back into the free space pool. It's a simple reference-counting system, which is how standard Unix/Linux filesystems work normally, so it doesn't require new engineering on that level.
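
That behaviour is easy to demonstrate with nothing but the standard library. The following self-contained snippet simulates two users "installing" the same library as hardlinks; the data is reclaimed only when the last link goes:

```python
import os
import tempfile

tmp = tempfile.mkdtemp()
lib = os.path.join(tmp, "user1-libfoo.so")
with open(lib, "wb") as f:
    f.write(b"pretend shared-object contents")

alias = os.path.join(tmp, "user2-libfoo.so")
os.link(lib, alias)             # a second "install": no extra data on disk

print(os.stat(lib).st_nlink)    # 2 -- two names, one copy of the data
os.remove(lib)                  # the first user uninstalls
print(os.stat(alias).st_nlink)  # 1 -- still on disk for the second user
os.remove(alias)                # last link removed -> space reclaimed
```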

Of course, this won't satisfy the "one app/one file" requirement that Tony suggests (I'm not too sure about the sensibility of that requirement, though I do appreciate the convenience it would provide).

But it does provide a means of supporting multiple library version installations on the same computer.

What would be even nicer is if individual applications could pick and choose which library to bind to. This may be harder, though, if I understand how library-binding works.

As regards the one-file idea, might there be a way to trigger (or suggest) installation of necessary libs from library loading errors? One could imagine an installation system that read those logs to look for failures from applications which are not within the package-management system. Maybe some kind of helper application or plugin for some user-friendly package manager like Synaptic?

morris's picture
Submitted by morris on

Oh yeah! Let's strip the source code shipping requirement from the GPL too, it's too much of a hassle to read all that code. Who needs to be creative?! Even better, who thinks they can be creative?

I also have a few improvements for this article:

* Users can not have root access, unless they ARE root; per-user installation is allowed in $HOME
* SOMETIMES, with SOME distros, it’s very tricky to install several versions of SOME _PACKAGES_ (while the same is not true for the software itself).
* Users are not stuck with software installed system-wide, that's why I have ~/bin/ and even ~/local/bin/, doing so is as simple as exporting $PATH;
* Supported _packages_ usually are downloaded from the official repositories. Pre-compiled binaries for major distros USUALLY can be found on the project's website, testing usually is made using a public repository, and running `make` at $HOME;
* IN SOME CASES, especially when the user adds repositories or installs packages directly without relying on the package manager, the package manager itself is not capable of dependency-checking and users themselves end up with circular dependencies. Nasty Users!
* A PACKAGE (again, not true for software itself) is bound to a specific distribution, which assures it to work on that same distro, and SOMETIMES to a specific version of that distribution, what usually means easier support.
* It’s nontrivial to _give a package_ to a friend, just because giving its name is easier, and even a simple tarball SOMETIMES is too (and it's possible on every single distro);

I also recommend this article (http://mdzlog.alcor.net/2009/06/20/white-boy-test/) for updates; you are still stuck with the _grandma_ thing, that's old!

Tony Mobily's picture

Hi,

-----------
Oh yeah! Let's strip the source code shipping requirement from the GPL too, it's too much of a hassle to read all that code. Who needs to be creative?! Even better, who thinks they can be creative?
-------------

You seem to be implying that with *current* package managers people get the source code when they install binaries... I think your point here is irrelevant.

> I also have a few improvements for this article:

I am all ears!

--------------
* Users can not have root access, unless they ARE root; per-user installation is allowed in $HOME
--------------

This would make system-wide configuration hard. I don't think it's a good idea.

-----------------
* SOMETIMES, with SOME distros, it’s very tricky to install several versions of SOME _PACKAGES_ (while the same is not true for the software itself).
------------------

I am not following you here.

----------------------------
* Users are not stuck with software installed system-wide, that's why I have ~/bin/ and even ~/local/bin/, doing so is as simple as exporting $PATH;
-----------------------------

Common users don't know -- and shouldn't know -- what "exporting $PATH" means. Plus, I very, very much doubt people manage to install Firefox, OpenOffice and Pidgin in $HOME/bin.

------------------
* Supported _packages_ usually are downloaded from the official repositories. Pre-compiled binaries for major distros USUALLY can be found on the project's website, testing usually is made using a public repository, and running `make` at $HOME;
-------------------

I probably wasn't clear enough in the article... I was aiming at common users. At the moment some programs make binaries available, but they have to fight with rpms, debs, and then again different distros, and different versions of the same distro. That's just too hard. The article is about making this simplified.

----------------------------------
* IN SOME CASES, especially when the user adds repositories or installs packages directly without relying on the package manager, the package manager itself is not capable of dependency-checking and users themselves end up with circular dependencies. Nasty Users!
-----------------------------------

I assume you are being sarcastic. Note that users are sometimes _forced_ to add repositories and install extra packages. Try installing OpenOffice 3.x on Ubuntu 8.10...

-----------------------
* A PACKAGE (again, not true for software itself) is bound to a specific distribution, which assures it to work on that same distro, and SOMETIMES to a specific version of that distribution, what usually means easier support.
-----------------------

I assume you missed the part of the article where I specify that there should be a list of packages _cross_ distribution...

----------------------------
* It’s nontrivial to _give a package_ to a friend, just because giving its name is easier, and even a simple tarball SOMETIMES is too (and it's possible on every single distro);
-----------------------------

If by tarball you are referring to the source, then we are talking about two different audiences.

------------------
I also recommend this article (http://mdzlog.alcor.net/2009/06/20/white-boy-test/) for updates; you are still stuck with the _grandma_ thing, that's old!
-------------------

I won't comment on that article.

Thank you!

Merc.

imthefrizzlefry's picture

Concerning: "* Users can not have root access, unless they ARE root; per-user installation is allowed in $HOME"

You replied: "This would make system-wide configuration hard. I don't think it's a good idea."

----------------

It sounds like you are implying users should be capable of making system-wide changes. I think that Windows has demonstrated time and time again that this is a bad idea. It allows for the very typical situation where a user is capable of installing a virus or other application that makes negative system-wide changes.

I admit that nag screens are annoying, however I think the Linux solution of giving some users permission to use the sudo command with the user's password to elevate a process is the best solution. It is an improvement over the solution in Windows Vista where the user is asked to confirm each individual change.

I feel the answer to your original point -- individual users needing to have multiple versions of the same software installed -- is best dealt with by having user copies in the home directory, or by having the super user account make a system-wide change.

bkor's picture
Submitted by bkor on

Time to cancel my FSM 'subscription'. From a magazine I expect research to be done, not articles like this... there are enough blogs to ignore.

Tony Mobily's picture

Hi,

I can't comment on your desire not to read Free Software Magazine -- that's your choice.

Well, I've been a GNU/Linux user for some 15 years, have been a programmer and have been installing GNU/Linux on clients' machines for *years*.

I can only guess a lot of people out here haven't seen common users struggle with Ubuntu (one of the most user-friendly distributions) as much as I have.

Regards,

Merc.

mrak018's picture
Submitted by mrak018 on

It isn't broken. It is your brain that is broken, LameDuck. Didn't Linux Hater guy tell you enough to understand why you're wrong?

You simply want software installation in Mac OS X way, while it is done in Linux way. Both ways have their advantages and disadvantages. Both are fine. Why don't you just build a Mac-like software distribution system for Linux, if you need one? It is easy. Just go and make it.

Tony Mobily's picture

> It isn't broken. It is your brain that is broken, LameDuck.

You are being offensive. I have to ask you to please avoid attacking people like this on this web site. I don't mind being attacked myself -- my skin is definitely thick enough. But, please avoid it if at all possible.

> Didn't Linux Hater guy tell you enough to understand why you're wrong?

I am not very familiar with the Linux Hater guy. However, I used my experience as a programmer, user and advocate as the basis of my opinions and suggestions.

--------------
You simply want software installation in Mac OS X way, while it is done in Linux way.
--------------

That's correct.

> Both ways have their advantages and disadvantages.

My (humble) point is that the disadvantages GNU/Linux currently suffers from are too many -- and that the disadvantages OS X has are possible to overcome.

> Both are fine.

I disagree on this one.

--------------
Why don't you just build a Mac-like software distribution system for Linux, if you need one? It is easy. Just go and make it.
---------------

I have built a lot of software in my life -- I maintain 50,000 lines of code in Drupal (User Karma, Drigg, Friend List, Extra Voting Forms) -- and can tell you, with an open heart, that something like this is *anything but* easy. It requires interaction between distros, and some technical trickery.

Please don't be offensive in your comments.

Bye!

Merc.

Lopo Lencastre de Almeida's picture

«Users should be able to install different versions of the same software immensely easily»

I agree with jabjoe: this could be really messy...

... but on the other hand, I do have Swiftweasel and Firefox both running on the same machine, and I had two different Firefox versions also running on the same machine and using parts of the same configuration, etc. It is tricky and implies tackling the configuration files yourself, but it is doable.

And, usually, COMMON USERS should not have more than one stable and updated version of the software.

«Users should be able to install software even without being root»

I don't know your Ubuntu, but in mine there is a sudoers file and I don't have to be root... I just need to have part of its rights.

Tony Mobily's picture

Hi,

----------------
... but on the other hand, I do have Swiftweasel and Firefox both running on the same machine, and I had two different Firefox versions also running on the same machine and using parts of the same configuration, etc. It is tricky and implies tackling the configuration files yourself, but it is doable.
------------------

Most users would find this impossible...

------------------
And, usually, COMMON USERS should not have more than one stable and updated version of the software.
------------------

...? There are some cases where different versions of the same software are important. A graphic designer will want several versions of Firefox.

-----------------------------
I don't know your Ubuntu, but in mine there is a sudoers file and I don't have to be root... I just need to have part of its rights.
----------------------------

Becoming root *is* like being root. I was talking about installing software in the users' home directories...

Merc.

Lopo Lencastre de Almeida's picture

«However, using the same philosophy — and therefore architecture — for end-user software is just too limiting. My point list above is not “a list of unfortunate drawbacks”. It’s one of the major reasons why GNU/Linux hasn’t achieved mass penetration in the desktop arena.»

Sorry, Tony, to disagree with this.

The major reasons are:

  1. Lack of support for the most popular games running real well
  2. Lack of a real promotion of an easy development language for common apps
  3. Lack of lobbying (with money/goods/other rolling) with some governments to push it to schools

On 1. I have several kids around here that won't use Linux due to its lack of capability to play the games they want without too much fuss.
I know that there is Wine and I know that there is Cedega (which costs €25 / half year)... the first is a pain to make some games barely playable and the second costs money... and they don't want to pay for the game and for something that they don't need to have if playing on MS Windows.

On 2. I can say from my experience -- because I'm that old -- that the real boost that MS Windows had was thanks to Visual Basic 2, and mostly thanks to Visual Basic 3, which allowed the conversion of thousands of MS Access / Clipper apps into real graphical software apps and allowed developers with little experience in developing real graphic interfaces to port their programs to MS Windows. Visual C came later to fight Borland C, and Borland Delphi was the last RAD tool I used before moving away from Microsoft.
There are several around Linux but, in my view, only Gambas could win hearts, and that is not the one you see promoted. Canonical's Ubuntu pushes Python, Novell's SuSE pushes Mono and so on... but none of those are real graphical RAD tools.

On 3. we can see what happens in most countries where Microsoft wants to be the one (http://ur1.ca/6591), and that is not only in developing countries.

Those are the most important. The rest, I think, although important, are cosmetic for the common user. My mother and my nephews don't really care where the software is really installed.

Tony Mobily's picture

-----------------
«However, using the same philosophy — and therefore architecture — for end-user software is just too limiting. My point list above is not “a list of unfortunate drawbacks”. It’s one of the major reasons why GNU/Linux hasn’t achieved mass penetration in the desktop arena.»

Sorry, Tony, to disagree with this.
--------------------

No problem at all :-D

> The major reasons are:
> 1. Lack of support for the most popular games running real well

OK. So, you are a game developer and want to release it. Do you release it as a deb package? Or RPM? For Ubuntu 8.04? Or Ubuntu 8.10? Or maybe Fedora 9? Fedora 10? Or Debian? Or...?

> 2. Lack of a real promotion of an easy development language for common apps

I don't see this as a major problem, but I might well be wrong...

> 3. Lack of lobbying (with money/goods/other rolling) with some governments to push it to schools

I agree!

---------------------
Those are the most important. The rest, I think, although important, are cosmetic for the common user. My mother and my nephews don't really care where the software is really installed.
----------------------

I think users get a sense of being highly empowered when applications are containers and they decide where they are, or who they give them to. That, plus all of the PROs I listed in the article :-D

Merc.

jabjoe's picture
Submitted by jabjoe on

>> 1. Lack of support for the most popular games running real well

> OK. So, you are a game developer and want to release it. Do you release it as a deb package? Or RPM? For Ubuntu 8.04? Or Ubuntu 8.10? Or maybe Fedora 9? Fedora 10? Or Debian? Or...?

Look at Nexuiz and see. You can download a deb file, probably an rpm, but also a zip file containing an executable for each platform.

This is not the problem with game development on Linux, it's the market, or lack of it. If games are a big driver for a platform, it's a chicken and egg problem.

Tony Mobily's picture

Hi,

My feeling, from practical feedback given by software makers and from my experience, is that a lack of an easy, multi-distro, standard way of distributing software discourages software makers from doing so.

Merc.

jabjoe's picture
Submitted by jabjoe on

Developing a game requires fewer libs than an application. You could maybe get away with just OpenGL and SDL.

id's engines have been used as the basis of many games, and they have always been cross-platform. There is even Quake for the Risc PC! There have been many opportunities for many games to be easily developed on, or ported to, Linux. But they are not, and this is because of the lack of a market. As someone who works in the games industry, I can tell you PC games generally aren't that profitable -- and that's Windows games!

Nexuiz is a really good example that it's not the development and distribution method that is the problem.

Tony Mobily's picture

Hi,

Received by email, reproduced with permission of Ian:

> The story should be very simple:
>
> * Users should be able to install software even without being root

Should they be free to install viruses as well ;-)

> * Users should be able to install different versions of the same software immensely easily
> * Users should be able to run either their own version of the software, or the one installed on the system (if any)
> * It should be possible to download and install software even though it doesn't come from an official repository
> * Software just needs to work -- unchanged -- even if it's a bit old and it's run on a newer distribution
> * It should be possible to "give" a piece of software to a friend by simply dragging an icon onto a memory stick

Acorn had this well supported using application directories in RISC OS. Drag !app to any position in the filesystem and it knew where it was and where the libraries etc. that it might need were. Each directory had scope for a custom script, an application icon, and the program plus supporting resources. Shared libraries and system resources used more generally lived in !System, !Fonts or other shared repositories. Delete the application's directory and the app is uninstalled. The big downside is that the app could be a virus or contain a virus.

> However, using the same philosophy -- and therefore architecture --
> for end-user software is just too limiting. My point list above is not
> "a list of unfortunate drawbacks". It's one of the major reasons why
> GNU/Linux hasn't achieved mass penetration in the desktop arena.

Not really, the main reason is that it doesn't have hundreds of billions in marketing budget backing it, and until recently key applications were not available on the Linux platform, most user training is on the Windows GUI, etc. I think that is still an issue, but it is getting easier. For most end users, having an on-line add/remove as with Ubuntu is a major improvement over messing about with CDs and DVDs (not to mention malware). How many PC apps can I simply copy to a USB stick and distribute without InstallShield and worrying about which DLLs are or are not present, etc.? Not many in my experience.

> There are also many, many "blueprint" drafts in Ubuntu's launchpad.
> There are so many in fact that you will get lost reading them. A lot
> of them talk about a simplified directory structure for the system,
> which as a _consequence_ would imply simplified software installation.

Simplifying the directory structure and making it more obvious and consistent what goes where is a good idea, but I don't think it is primarily about end users installing software. It is more about making it easier for techs and system support managers to transition from Windows and manage systems quickly and effectively.

> [...] This would mean that newer applications would have the potential to work on four year-old systems.

Sounds reasonable especially as systems mature.

> The directory could have the extension `.apx` and have a file called
> `application.xml` in it.

Sounds very like the Acorn Risc OS approach.

> I have a dream. A dream of a world where people distribute
> applications as bundled directories,

If you were in a UK school in the early 90s you probably had this on a Risc PC or A3000 :-)

> I wonder if I will ever see this in GNU/Linux?.

I still think virus proliferation would be a significant problem to solve with such a system. However, rationalising the architecture of the file system is a worthy goal irrespective of its relevance to installing applications.

Ian

Tony Mobily's picture

>> * Users should be able to install software even without being root
>
> Should they be free to install viruses as well ;-)

"Digital signatures".
Users should have the freedom to only install applications digitally signed by a trusted source (the distro) -- or not.

And... let's not get into what users end up doing when a piece of software isn't in the main repository -- except that they end up installing untrusted software as _root_...

> Acorn had this well supported using application directories in RISC OS.
> Drag !app to any position in the Filesystem and it knew where it was and
> where libraries etc were that it might need. Each directory had scope
> for a custom script, application icon and program + supporting
> resources. Shared libraries and system resources used more generally
> in !System !fonts or other shared repositories. Delete the applications
> directory and the app is uninstalled. Big downside is that the app could
> be a virus or contain a virus.

See above about viruses...

>> However, using the same philosophy -- and therefore architecture --
>> for end-user software is just too limiting. My point list above is not
>> "a list of unfortunate drawbacks". It's one of the major reasons why
>> GNU/Linux hasn't achieved mass penetration in the desktop arena.
>
> Not really, the main reason is that it hasn't hundreds of billions of
> marketing budget backing it and until recently key applications were not
> available on the Linux platform, most of the user training is on the
> Windows GUI etc.

I disagree on this one... but hey.

> I think that is still an issue but getting easier. For
> most end users having an on-line add/remove as with Ubuntu is a major
> improvement over massing about with CDs and DVDs (not to mention
> malware). How many PC apps can I simply copy to a USB stick and
> distribute without installshield and worrying about which DLLs are or
> are not present etc? Not many in my experience.

None in my experience -- I was referring to Mac, not Windows...

>> In terms of what's been said, there are several discussions about this
>> in Ubuntu and Debian. A good start is the [Rename Top Directory
>> Names][http://brainstorm.ubuntu.com/idea/6243/) in Ubuntu. This link
>> has a _long_ list of duplicates.
>> There are also many, many "blueprint" drafts in Ubuntu's launchpad.
>> There are so many in fact that you will get lost reading them. A lot
>> of them talk about a simplified directory structure for the system,
>> which as a _consequence_ would imply simplified software installation.
>
> Simplifying the directory structure and making it more obvious and
> consistent what goes where is a good idea but I don't think it is
> primarily about end users installing software. More making it easier for
> techs and system support managers to transition from Windows and manage
> systems quickly and effectively.

I wrote this from an OS X perspective... rather than Windows. I think the directory structure can stay as it is, because it shouldn't even matter...

[...]
>> The directory could have the extension `.apx` and have a file called
>> `application.xml` in it.
>
> Sounds very like the Acorn Risc OS approach.

I wish I had seen it...

>> I have a dream. A dream of a world where people distribute
>> applications as bundled directories,
>
> If you were in a UK school in the early 90s you probably had this on a
> Risc PC or A3000 :-)

AAAHHHHHHHH :-D

>> I wonder if I will ever see this in GNU/Linux?.
>
> I still think virus proliferation would be a significant problem to
> solve with such a system. However, rationalising the architecture of the
> file system is a worthy goal irrespective of its relevance to installing
> applications.

That's not quite my point of view... but it's a point of view :-D

Merc.

Tony Mobily's picture

(Received from Brian and republished with his permission)

Interesting article.

Your description of "a new list every two years" is effectively what every Linux vendor already does with long-term support releases. You might want to also consider Gentoo's notion of "slots", which allows multiple versions of libraries and compilers to exist gracefully next to each other for use by other software.

Your dream is not hard technically. It requires either statically linking the program, which results in a very large binary, or distributing the required libraries with the program (with the correct relative path to get to them), which results in a lot of duplication of files.
One solution to the duplication problem is to use a deduplication routine to automagically hard-link duplicate library files. This then adds the complication that the file can't be deleted while *any* (hard or symbolic) link still exists.
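
For illustration, here is a sketch of such a deduplication routine in Python. It is a sketch only: a real tool would have to stay within one filesystem, preserve ownership and permissions, hash large files in chunks, and cope with races:

```python
import hashlib
import os

def dedup(root):
    """Hard-link byte-identical regular files under root to one inode."""
    first_seen = {}
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            if os.path.islink(path):
                continue                 # leave symlinks alone
            with open(path, "rb") as f:
                digest = hashlib.sha256(f.read()).hexdigest()
            original = first_seen.setdefault(digest, path)
            if original != path:
                os.remove(path)          # replace the duplicate...
                os.link(original, path)  # ...with a hard link

dedup(os.path.expanduser("~/Apps"))      # e.g. across bundled applications
```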

For user-space applications, some methods like this might make sense.

edeion's picture
Submitted by edeion on

As for me, I found your article interesting and promising.

I guess system-wide installation of what you call "user software" also has some advantages, such as:
- decreasing disk space loss
- factoring the updates
- making generic software available by default (which is still better for hopeless users)

And this is not true only for huge servers: even on a family computer, more technical users may be asked to manage software updates (and so on) for the whole family.

However, you're right: it's quite frustrating to struggle to install (say) a plain Emacs when you are not root. And this happens especially on servers. Some software pieces could be installed per user even with a package manager.

I would have liked to provide a more in-depth comment but I can't manage to concentrate.

Tony Mobily's picture

Hi,

> As for me, I found your article interesting and promising.

Thanks :-D

--------------
I guess system wide install of what you call "user software" has also some advantages such as:
- decreasing disk space loss
---------------

Probably true. But... we are talking about very negligible space here.

> - factoring the updates

See one of the points -- the system knows where each application is, and can therefore do updates

> - making generic software available by default (which is still better for hopeless users)

Not following you.

-------------------
And this is not true only for huge servers: even on a family computer, more technical users may be asked to manage software updates (and so) for the whole family.
--------------------

That's when you get the applications in a "system-wide" folder.

----------------------
However, you're right: it's quite frustrating to have pain installing (say) a plain emacs when you are not root. And this happens especially on servers. Some software pieces could be installed per user even with a package manager.
-------------------

Yep.

--------------
I would have liked to provide a more in depth comment but I don't manage to concentrate.
---------------

That's quite alright :-D

Merc.

NilsR's picture
Submitted by NilsR on

Interesting post; too bad you felt you had to go negative (to be heard?). There are always ways to improve any kind of software, but the flip side of that coin is that everything is forever more or less "broken".

What I think would be a cool solution to at least some of your issues, would be to add a virtual layer on top of the regular system. The bottom interface of this layer would relate to the file-system as it is. The top interface would show the application only what it needs, a virtual copy of the file-system, made exclusively for that app. No need to actually have more than one copy of (each version of) the libraries, the virtual layer would take care of serving each app with the library version it is built with.

I suspect this is done already in many situations; Wine's "Bottles" and chroot jails are probably worth looking into for lessons learned. I'm sure others could find many more examples. The hardest part would most likely be to automate it and make it as easy, user-friendly and stable as possible.
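
One cheap approximation of that virtual layer, short of unionfs or namespaces, would be a per-application "view" directory of symlinks into a shared library store, pointed at by LD_LIBRARY_PATH at launch time. A sketch, in which the store path and the per-app manifest are entirely hypothetical:

```python
import os

STORE = "/opt/libstore"  # hypothetical pool holding every installed version

def make_view(app_dir, wanted):
    """wanted maps a library name to the exact version this app was built
    against, e.g. {"libjpeg.so": "libjpeg.so.62.0.2"}. Returns a directory
    to prepend to LD_LIBRARY_PATH when launching the application."""
    view = os.path.join(app_dir, ".libview")
    os.makedirs(view, exist_ok=True)
    for name, version in wanted.items():
        link = os.path.join(view, name)
        if not os.path.islink(link):
            os.symlink(os.path.join(STORE, version), link)
    return view
```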

Tony Mobily's picture

> Interesting post, too bad you felt you had to go negative (to be heard?)

I didn't think it was negative... I wasn't "emotional" or anything. I just listed all of the drawbacks of the current installation methods and explained why I thought they were show-stoppers...

----------------------
There's always ways to improve any kind of software, but the flip side of that coin is that everything is forever more or less "broken".
---------------------

Yep. And then somebody says "it's broken!" :-D

-----------------------
What I think would be a cool solution to at least some of your issues, would be to add a virtual layer on top of the regular system. The bottom interface of this layer would relate to the file-system as it is. The top interface would show the application only what it needs, a virtual copy of the file-system, made exclusively for that app. No need to actually have more than one copy of (each version of) the libraries, the virtual layer would take care of serving each app with the library version it is built with.
----------------------------

This is an interesting idea. But... why would you go that far? What clear advantages would there be, compared to adding the directory's own "lib" directory to the LD_LIBRARY_PATH var?

Merc.

code.guru's picture
Submitted by code.guru on

This is a great article, and controversial too :P

I totally agree with Tony on this one, Linux is waaaaaay too hard for people like my mom. If we were to make it like 30% easier, I bet people would flock to it in truckloads :D

Everything Tony said was correct and great. As a programmer myself, I thought I might add these suggestions:

1. Make installing programs like Windows. You download an .EXE, run it, click Next a few times and you're done
2. Rewrite some of the Linux kernel in C++ to make it more OOP; or make later modifications in that language
3. Make the kernel have a well structured API, that way developers can worry less about all that system stuff; if they want to make a new dir, they should be allowed to call a makedir(); function, right?

Hope you liked my suggestions
~ mike

----
Please Visit My Blog:
http://shallweprogram.blogspot.com/

Tony Mobily User's picture

> This is a great article, and controversial too :P

Yep... it was a bit.

-------------------
I totally agree with Tony on this one, Linux is waaaaaay too hard for people like my mom. If we were to make it like 30% easier, I bet people would flock to it in truckloads :D
--------------------

We can only hope!

-------------
Everything Tony said was correct and great, as a programmer myself, I thought I might add these suggestions:
-----------------

----------
1. Make installing programs like Windows. You download an .EXE, run it, click Next a few times and you're done
-----------

Well this is actually exactly what my piece was against... software installation should be a matter of copying the application over!

---------------------
2. Rewrite some of the Linux Kernel in C++ to make it more OOP; or make later modifications in that Language
----------------------

Have you seen the Linux kernel's code...?
I don't think this would be possible, to be honest.

--------------------
3. Make the kernel have a well structured API, that way developers can worry less about all that system stuff; if they want to make a new dir, they should be allowed to call a makedir(); function, right?
------------------------

Lost you here... "mkdir" *is* a current Linux system call -- in fact, it's part of the POSIX standard?

Merc.

northofnowhere's picture

You are a brave man. Every time this subject is broached, the author can expect a large number of responses of the "it works for me, so why change" variety. (And those are the nice ones!)

I agree entirely with your analysis - software installation is broken because it's stuck in the Unix server timewarp, and this "little" problem is indeed the major reason for Linux's anaemic adoption rate. The effects permeate throughout the software creation process, marketing (or lack thereof) of Linux systems, and end user support. It's what condemns Linux to a microscopic share.

Your proposed steps to solution are a good start. I particularly agree that there must be a recognized cross-distribution system base, with regular updates, and applications added on top in their own directories.

Gobo is not the only effort in this direction. One that's taken an approach nearer to your ideal is PC-BSD, which consists of self-sufficient applications installed on top of a FreeBSD base, and allows most of the things you call for, e.g. multiple versions of the same application.

I also agree that the only way something like this is going to get any traction is if there is a major sponsor. But is there a company/organization brave enough to get behind something different, when what we've tried up to now hasn't worked? In some ways, the free software community is very conservative.

Tony Mobily's picture

Hi,

---------------
You are a brave man. Every time this subject is broached, the author can expect a large number of responses of the "it works for me, so why change" variety. (And those are the nice ones!)
---------------

Yeah... we have a variety of those in this very article :-D
But, thank you so much for the PC-BSD link! You are right, they are very close to doing exactly what I proposed.