Accelerated X flame wars!... Maybe not

An advantage of free software is that it is an environment where competition can thrive, choice is always available and different solutions exist for the same problem. However, it is also fair to say that free software is at a disadvantage where that competition breeds conflict, choices are forced on unsuspecting users and diverse technologies fight each other.

Examples of this are GNOME vs KDE, PostgreSQL vs MySQL, VI vs EMACS, and even GNU/Linux vs the BSDs. On the face of it, another flame war seems to be looming, this time in the domain of the accelerated desktop: SuSE's XGL vs RedHat's AIGLX. Various commentators and observers have caught on to this and there are hints at the beginnings of a press feeding frenzy. However, upon closer inspection of these technologies, I have discovered that they are not quite as confrontational as it would first appear.

To explain, I need to go into the background of the current (or old as it may be now) relationship between the X Windows System and accelerated 3D graphics.

The story so far

The X Windows System was originally designed as a mechanism for the kind of 2D graphics needed for web browsing or word processing. It works on a “client-server” relationship—the terminal (screen, keyboard and mouse) being a “server” process, and the X programs (browser, word processor, card games, etc.) being “clients” that connect to the terminal server, creating windows the user can use to interact with the program. It does this using a protocol designed for the purpose. (I have described the X Windows System in quite a lot of detail in my article “What is X”.)

X-Windows also has the capacity to run the X-Server (screen, keyboard and mouse) on one machine, while the X-Clients (word-processor, browser, etc.) run on another; the communication between the two uses the aforementioned protocol and is carried over a TCP/IP network.

[Figure: X-Windows Schematics]
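
To make the client-server idea concrete, below is a minimal sketch of an X client written against Xlib. It is my own illustration rather than anything taken from the article's schematic: it connects to whichever X-Server the DISPLAY environment variable points at (which may well be on another machine), creates a window and waits for a single event.

    /* Minimal Xlib client: compile with -lX11 */
    #include <X11/Xlib.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        /* NULL means "use the DISPLAY environment variable", e.g. ":0" for
           the local server or "otherhost:0" for a server across the network. */
        Display *dpy = XOpenDisplay(NULL);
        if (!dpy) {
            fprintf(stderr, "cannot connect to X server\n");
            return EXIT_FAILURE;
        }

        int screen = DefaultScreen(dpy);
        Window win = XCreateSimpleWindow(dpy, RootWindow(dpy, screen),
                                         10, 10, 200, 100, 1,
                                         BlackPixel(dpy, screen),
                                         WhitePixel(dpy, screen));
        XSelectInput(dpy, win, ExposureMask | KeyPressMask);
        XMapWindow(dpy, win);     /* ask the server to display the window */

        XEvent ev;
        XNextEvent(dpy, &ev);     /* block until the server sends an event */

        XCloseDisplay(dpy);
        return EXIT_SUCCESS;
    }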

Then along came 3D accelerated graphics. This, in short, transferred the processing required to display graphical detail from the main processor to the processor on the graphics card, or the GPU. To facilitate this, the GPU exposes a series of APIs or functions to the program that, when called, perform the necessary operations needed for the user to play first-person shooter games and blow up the appropriate baddies. Or, for the more serious, it would enable you to represent three-dimensional models of certain architectures using computer aided design programs.

In the Unix and POSIX world, the set of APIs used is OpenGL, originally from Silicon Graphics.
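
To give a flavour of what that API looks like, here is a small sketch of 2006-era immediate-mode OpenGL. It assumes a current GL context already exists; providing that context is exactly what the mechanisms described below are about.

    /* Draw a coloured triangle; assumes a current OpenGL context. */
    #include <GL/gl.h>

    void draw_triangle(void)
    {
        glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
        glClear(GL_COLOR_BUFFER_BIT);

        glBegin(GL_TRIANGLES);          /* hand three vertices to the GPU... */
        glColor3f(1.0f, 0.0f, 0.0f); glVertex3f(-0.5f, -0.5f, 0.0f);
        glColor3f(0.0f, 1.0f, 0.0f); glVertex3f( 0.5f, -0.5f, 0.0f);
        glColor3f(0.0f, 0.0f, 1.0f); glVertex3f( 0.0f,  0.5f, 0.0f);
        glEnd();                        /* ...and let it rasterise them */
    }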

To facilitate this, a library suite called “libGL” was created. It has been implemented in a number of ways. One was the birth of the Mesa library. This includes a sort of wrapper library, which exposes the set of OpenGL functions to the graphical program, translates them into X calls and then transmits the data to the X-Server using the X protocol. Although this provides the logical functionality of OpenGL, you lose all of the performance benefits of a proper 3D renderer, making it too slow for the majority of applications requiring graphical acceleration. However, it is still useful, because it implements the entire OpenGL API: any small piece of OpenGL functionality that isn’t provided by the methods described later can be implemented through this library. In short, it can be seen as a fall-back library that guarantees compliance with OpenGL.
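
One way a program can tell whether it has landed on a software fall-back such as Mesa's, rather than a hardware driver, is simply to ask the library to identify itself. The sketch below is illustrative only and assumes a current GL context; the renderer string it prints typically mentions “Mesa” or a software rasteriser when the fall-back path is in use.

    /* Print the identity of the OpenGL implementation in use. */
    #include <GL/gl.h>
    #include <stdio.h>

    void report_renderer(void)
    {
        printf("GL_VENDOR   : %s\n", (const char *)glGetString(GL_VENDOR));
        printf("GL_RENDERER : %s\n", (const char *)glGetString(GL_RENDERER));
        printf("GL_VERSION  : %s\n", (const char *)glGetString(GL_VERSION));
    }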

The second is the use of a mechanism in X called an extension, which enables add-on features to be included in the protocol. An extension intercepts the OpenGL calls made by the client processes (such as CAD programs or 3D games) and generates special packets mirroring those functions. The extension then encodes them and passes them to the server (screen, keyboard), where a module in the X-Server process reads these packets and calls the appropriate functions on the real 3D rendering GPU.

This extension is called “GL-X”, and the mechanism described is known as “Indirect Rendering”. Be careful not to confuse “GL-X” with “XGL”; they are two different things.

[Figure: GLX Schematics]
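
For reference, these are the client-side GLX calls involved. The sketch below is illustrative and assumes the window was created with a matching visual; when rendering is indirect, these calls, and the OpenGL calls that follow them, are what get encoded into GL-X protocol packets and shipped to the X-Server.

    /* Set up a GLX context for an existing window; link with -lGL -lX11. */
    #include <X11/Xlib.h>
    #include <GL/glx.h>

    GLXContext make_context(Display *dpy, Window win)
    {
        int attribs[] = { GLX_RGBA, GLX_DOUBLEBUFFER, GLX_DEPTH_SIZE, 16, None };
        XVisualInfo *vi = glXChooseVisual(dpy, DefaultScreen(dpy), attribs);
        if (!vi)
            return NULL;

        /* The final argument asks for direct rendering if possible; the
           server may still hand back an indirect context. */
        GLXContext ctx = glXCreateContext(dpy, vi, NULL, True);
        glXMakeCurrent(dpy, win, ctx);  /* GL calls now target this window */
        return ctx;
    }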

Although this functionally produces a solution, for many applications it isn’t always a practical one. When you’re in the pits of hell with monsters coming at you from all directions, your ammo down to a handful of rockets, a couple of machine-gun clips and maybe a banana, the game program needs to instruct the screen to update countless thousands of polygons a second or else the game is simply unplayable. The program needs to stream directly to the GPU. Indirect rendering, even when the display is on the same machine as the game program, is simply too slow. And, you will get eaten.

A means of “Direct Rendering” needed to be incorporated into the X windows environment. And it was.

To achieve this, the “networking” functionality of X was put aside. Direct rendering occurs when GPU functionality is called directly from application programs. Memory may well be shared between the renderer and the application. But this is not feasible over a network; if an X-program is run on a separate machine, acceleration either does not occur or the GL-X method is used.

The way this works is that the GPU vendor provides both the X-Server display driver and their own version of the “libGL” library. Windows are created, destroyed and manipulated using the existing X protocol, but when the meat of the program needs to manipulate polygons rapidly within the window itself, the modified “libGL” library accesses the functionality in the GPU directly.

Making this work involves a fairly sophisticated infrastructure, especially when considering the current X protocol mechanisms. To standardise it, something called the Direct Rendering Infrastructure (DRI) has been created. The idea behind it is to give a base from which independent hardware vendors can build 3D accelerated graphical drivers for X.

[Figure: Direct Rendering]

It is worth noting that hardware providers who support OpenGL for X in any way almost certainly support the Direct Rendering method. As the vast majority of X-Servers run on the same physical machine as the X-Clients (the programs that connect to them), GL-X is currently the exception rather than the rule.
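
A program (or a curious user, via the glxinfo utility) can check which path it actually got. The sketch below assumes a display connection and a GLX context created as in the earlier sketch; glXIsDirect() reports whether libGL is talking to the GPU directly or going through the GL-X protocol.

    /* Report whether a GLX context is direct or indirect. */
    #include <GL/glx.h>
    #include <stdio.h>

    void report_rendering_path(Display *dpy, GLXContext ctx)
    {
        if (glXIsDirect(dpy, ctx))
            printf("Direct rendering: GL calls go straight to the GPU driver\n");
        else
            printf("Indirect rendering: GL calls are encoded as GL-X protocol\n");
    }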

Demand for composite managers

The accelerated solution described above provides the functionality to allow a game to run in full-screen mode, or the body of a window to display accelerated graphics. However, as machine power and expectations rise, there is a demand for the window management itself to be accelerated. This would provide effects like placing realistic shadows behind drop-down menus and having the menus themselves fade in and out of existence, or switching from one virtual desktop to another while giving the impression the desktops are on the faces of a cube that you are turning. It also covers things like making the background of a window transparent, enabling you to see what is happening in the windows behind it, or scaling a window, that is making it, and its contents, larger and smaller and zooming in and out of it.

To achieve this the “window manager” needs to be replaced by a “composite window manager”, and greater interaction is required between the X protocol and the 3D renderer to allow this. Two free software solutions have come forward: XGL (X-to-OpenGL) which is sponsored by SuSE/Novell; and AIGLX (Accelerated Indirect GL-X) which is championed by RedHat.

When I first heard about these two competing solutions my immediate response was “Oh no! Not ANOTHER flame war...”. But once I examined the two of them, I found that they’re not as mutually exclusive as it first seemed. To explain why I’ll go through a summary of how each works and the differences and similarities between them.

Both achieve their goals by making good use of the “composite” extension of the X protocol. This allows windows to be drawn to pixmaps in memory rather than to the screen directly. Once this is done, another program can then take the pixmaps and do wonderful manipulations with them, such as: blending a window with its background window, giving the appearance of transparency and translucency; performing three dimensional manipulations, giving the appearance of rotating and flipping; or even simple things like making the entire window bigger and smaller effectively “zooming” it.
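
For the curious, the client-side API for this extension is the libXcomposite library. The sketch below is illustrative only: it redirects all top-level windows into off-screen pixmaps, which a composite window manager can then fetch and manipulate as it pleases.

    /* Redirect all top-level windows off-screen; link with -lXcomposite -lX11. */
    #include <X11/Xlib.h>
    #include <X11/extensions/Xcomposite.h>

    void redirect_everything(Display *dpy)
    {
        int event_base, error_base;
        if (!XCompositeQueryExtension(dpy, &event_base, &error_base))
            return;   /* server has no Composite extension */

        Window root = DefaultRootWindow(dpy);

        /* All children of the root window now draw into off-screen storage. */
        XCompositeRedirectSubwindows(dpy, root, CompositeRedirectManual);

        /* For any individual window "w", the compositor can then fetch its
           pixmap with XCompositeNameWindowPixmap(dpy, w) and use that as the
           source for its blending, rotating and zooming effects. */
    }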

XGL

XGL is conceptually the simpler of the two, though probably the more technically complex. It’s a “traditional” X server that, instead of displaying its results on a monitor, interprets the display and calls the appropriate OpenGL APIs; the OpenGL driver physically interacts with the screen, keyboard and mouse.

The current implementation of XGL is XGLX. This is both an X server and an X client. To use it you need a “real” X server running that supports accelerated OpenGL rendering, and the only program (or X client) that connects to it is XGL—which acts as an X server for all of the other X programs to connect to. In the future, however, the project plan is to replace XGLX with XEGL. This replaces the “real” X-server with a process that writes directly to the Linux frame buffer.

[Figure: XGL Schematic]

SuSE/Novell have created a special “composite window manager” that interacts with XGL so that composites of the windows can be displayed on the screen using the accelerated graphics.

AIGLX

Rather than recreate the wheel—as XGL does by rewriting the X-Server—this project attempts to modify the current X-server mechanism so that it can handle accelerated windows management. This appears to take less work than you might think (though it’s still substantial).

In this model, the “composite window manager” calls a special OpenGL API, which translates into a slightly extended GL-X protocol, even if direct rendering is installed. These are then transmitted to a slightly modified X-server, which handles the actual composites itself and performs all of the fancy effects.

The extensions to GL-X that create AIGLX largely seem to involve the composite elements. At the moment, to create a composite, the X-client program transmits windows and contents to the X-server display, which then generates the composite. This is transmitted back to the X-client program, manipulated, then transmitted back again (usually through the DRI if OpenGL is supported) to be displayed.

An extension that AIGLX provides is the ability to transmit just the pixmap manipulation commands to the X-server display so the manipulation can occur there, eliminating the need for data to be carried back and forth, significantly improving performance.
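
One concrete piece of this machinery, mentioned in the comments below, is the GLX_EXT_texture_from_pixmap extension: it lets a compositing manager bind a redirected window's pixmap as an OpenGL texture without the pixels being copied back to the client. A rough sketch, assuming a GLXPixmap has already been created from the window's Composite pixmap (that setup is elided here):

    /* Bind a redirected window's pixmap as a GL texture via
       GLX_EXT_texture_from_pixmap. */
    #include <GL/gl.h>
    #include <GL/glx.h>
    #include <GL/glxext.h>

    void bind_window_as_texture(Display *dpy, GLXPixmap glxpixmap, GLuint tex)
    {
        PFNGLXBINDTEXIMAGEEXTPROC glXBindTexImageEXT =
            (PFNGLXBINDTEXIMAGEEXTPROC)
            glXGetProcAddress((const GLubyte *)"glXBindTexImageEXT");
        PFNGLXRELEASETEXIMAGEEXTPROC glXReleaseTexImageEXT =
            (PFNGLXRELEASETEXIMAGEEXTPROC)
            glXGetProcAddress((const GLubyte *)"glXReleaseTexImageEXT");
        if (!glXBindTexImageEXT || !glXReleaseTexImageEXT)
            return;   /* extension not available */

        glBindTexture(GL_TEXTURE_2D, tex);

        /* The window contents become the current texture; the compositor
           can now draw them onto a spinning cube, fade them, scale them... */
        glXBindTexImageEXT(dpy, glxpixmap, GLX_FRONT_LEFT_EXT, NULL);

        /* ...draw textured geometry here... */

        glXReleaseTexImageEXT(dpy, glxpixmap, GLX_FRONT_LEFT_EXT);
    }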

Advantages and disadvantages

There are in fact more similarities between the two architectures than differences. The two projects “borrow” code from each other to a very large extent. Both support new effects for composite window managers using resources in the GPU. Both support legacy 2D applications. In short, both have great promise.

At the moment, “compiz” is the composite window manager for SuSE/Novell’s XGL, and a modified “metacity” window manager, extended to include composite functionality, is earmarked for RedHat’s AIGLX solution. However, once any problems are sorted out and they’re complete, any composite window manager should work with either solution. Standards rule here. Both development processes are making use of the free software model, and both will probably be included in the X.ORG framework.

At the time of writing, a test version of XGL has been released into the wild while AIGLX is still very much in the developer’s corner. At the time of reading, that situation may have changed. XGL appears to have been widely welcomed and well received so far.

XGL was designed from the beginning to be as easy as possible for an existing OpenGL graphical card vendor to implement. Currently, as long as the card’s driver can be connected to by an X-client that opens a full screen X window with OpenGL support for the body of the window, it can run XGL. In the future, it’s possible the vendor won’t need to produce an X-server at all, just a means of interpreting OpenGL APIs onto a display and from a keyboard and mouse. I believe it runs on all cards that currently support OpenGL acceleration through the X windows system.

AIGLX is far more an incremental improvement of the existing X-server drivers than a rewrite of them. However, for it to function correctly, alterations often need to be made in the vendor’s OpenGL drivers themselves, many of which are proprietary, so it is harder to implement there than the XGL solution.

Both, as I previously mentioned, support legacy 2D X-client applications. But, currently, only AIGLX can take advantage of OpenGL Direct Rendering for games and the such. XGL is limited to the GL-X mechanism here. The reason for this is that the X-Server and the DRI mechanism need to co-operate with each other. However, in the GL-X solution, the X-Server that the X-client connects to is (currently) not the one that does the actual displaying. It’d probably be a very major hack to include this, and could simply be unfeasible. AIGLX wouldn’t have a problem here because X could implement DRI the same way as it does on systems now.

AIGLX also has the advantage of giving the hardware vendor opportunities to implement more goodies such as FrameLock, Quad-Buffered stereo, Workstation Overlays or SLI. By the way, I got that list from a presentation by Andy Ritger from Nvidia at the 2006 XDevConf. I don’t know exactly what they are or their relevance, but Nvidia obviously want to take this route toward creating a richer experience for GNU/Linux desktop users.

Another kid on the block

Before signing off with a conclusion I would like to draw attention to yet another project that is not unrelated. This is the “Virtual GL” project, which is working at supporting GL accelerated applications over a TCP network. The way this works is that a GL application running on a central server uses that machine’s GPU to perform the 3D rendering, but rather than displaying the resulting pixmap on a screen there (which could be thousands of miles from the person running the program) it reads it back, compresses it and streams it to the remote user’s machine, which decompresses it and displays it on the screen.
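
The core loop is conceptually simple. The sketch below is my own rough illustration of the idea, not VirtualGL's actual code or API: render on the server's GPU, read the frame back with glReadPixels(), compress it (a real implementation would use JPEG or similar; a plain copy stands in here so the sketch is self-contained) and push it down a socket to the remote viewer.

    /* Read back one rendered frame and stream it to a connected socket. */
    #include <GL/gl.h>
    #include <stdlib.h>
    #include <string.h>
    #include <unistd.h>

    /* Placeholder "codec": stands in for real compression such as JPEG. */
    static size_t compress_frame(const unsigned char *rgb, int w, int h,
                                 unsigned char *out)
    {
        size_t len = (size_t)w * h * 3;
        memcpy(out, rgb, len);
        return len;
    }

    void stream_one_frame(int socket_fd, int width, int height)
    {
        size_t len = (size_t)width * height * 3;
        unsigned char *rgb = malloc(len);
        unsigned char *out = malloc(len);
        if (!rgb || !out) { free(rgb); free(out); return; }

        /* Pull the rendered frame back off the server's GPU... */
        glReadPixels(0, 0, width, height, GL_RGB, GL_UNSIGNED_BYTE, rgb);

        /* ...compress it and stream it to the remote viewer, which
           decompresses it and draws it on the user's local screen. */
        size_t n = compress_frame(rgb, width, height, out);
        if (write(socket_fd, out, n) < 0) {
            /* a real program would handle the network error here */
        }

        free(rgb);
        free(out);
    }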

This project is still in its development phase, but it looks promising. Running full blown shoot-em-all, fast-car, whizzing OpenGL games on a remote machine may be possible after all!

Conclusion

Going back to the XGL vs AIGLX confrontation, the news is there really isn’t one. Both complement each other and help each other, and not just in the extensive code sharing the developers are involved in. They share far more similarities than differences. XGL is easier to implement for hardware vendors who want to exercise minimal development effort in GNU/Linux solutions, and AIGLX is good for those who wish to take the desktop experience to new heights.

I am looking forward to witnessing GNU/Linux being taken to new places by both projects.


Comments


From: Tuukka Hastrup
Url:
Date: 2006-02-27
Subject: Some critique

Thanks for a nice overview of the current developments! I think GL-X is usually spelled GLX, however. And it's "compositing window manager" rather than "composite window manager".

I don't know how important it is to describe the exact current state of Xgl and AIGLX as they develop fast, but as you delve into DRI: What matters most is hardware acceleration, and Xglx does provide hardware-accelerated OpenGL to its clients, although indirect. This is different from the old indirect which is software-only Mesa. Further, Xgl provides accelerated XVideo too.

I'm sure you tried to balance your words to be neutral, but in the end the article leaves the feeling "Xgl may be here now but AIGLX will beat it," as heard from Red Hat versus Novell. Perhaps it was the part about Xgl recreating the wheel vs. AIGLX taking less work. Whichever way it is, I hope the better alternative wins! If AIGLX takes less work, it'd be interesting to know why it's lagging behind.

________________________________________

From: Craig
Url:
Date: 2006-02-27
Subject: Not SUSE

It's a bit unfair to attribute XGL to SUSE. It's been developed by the Novell Linux Desktop team, who were previously Ximian. NLD is based on SUSE, but it wasn't a SUSE development.

________________________________________

From: Nobody
Url:
Date: 2006-02-27
Subject: Lame

X Window System, _N_O_T_ X WindowSSSSSSSSSSSSSSSS System.

________________________________________

From: Ananda
Url:
Date: 2006-02-28
Subject: Don't be so picky

I think this was a great article and it taught me a lot. I think that, if you find the need to comment about something so trivial, you must have a lot of time on your hands.

________________________________________

From: anonymous
Url: www.microsoft.com
Date: 2006-03-01
Subject: Great!

Thanks a lot for the info.

There is so little information out there that I have encountered about X.

It is good to know that GLX and AIGLX are good for X performance.

Anyway, my impression of X is still that it is very slow compared to Windows.

Can you create a new article about X performance compared with Windows? Why is Windows so snappy? This is obvious and everyone should know, so that X development will get accelerated.

Here are my bad impression about X:

- Why have a network component if it is not needed? It looks elegant that we can display over the network, but it is not needed most of the time. It is impractical and just slows things down. Just make it a plugin.

- If a kernel component of X could help accelerate things, why not do it so it can beat Windows? This would have a very big impact if X could beat Windows in terms of GUI performance.

- I am willing to forgo compatibility for performance. Anyway, the kernel does rewrites too (from 2.4 -> 2.6). Maybe X should as well, to improve things.

Thanks again for the great article.

=)

________________________________________

From: anonymous2
Url: none
Date: 2006-03-08
Subject: X Networking

Making the networking in X a plugin is simply not possible, as windows on your local machine are also created by connecting to your localhost.

Try this:

Kill all your X-servers currently running (by killing gdm or kdm), then disable your localhost-interface by running "ifconfig lo down".

Now go ahead and start your X-server.

Networking is not a "feature", it's a necessity.

________________________________________

From: Luke
Url:
Date: 2006-03-03
Subject: 2 Questions

Hello, I have got a couple of questions.

1) You say that "An extension that AIGLX provides is the ability to transmit just the pixmap manipulation commands to the X-server display so the manipulation can occur there, eliminating the need for data to be carried back and forth, significantly improving performance."

As opposed to Xgl which uses the GLX_EXT_texture_from_pixmap extension to do the pixmap manipulation on the GPU, right? So here it seems to me (correct me if I'm wrong) that while AIGLX puts the pixmap manipulations at a higher level (which I assume means window compositing), Xgl manages to put the pixmap manipulations straight onto the GPU. If there's no extra overhead, Xgl wins here then?

2) I don't completely understand this paragraph on OpenGL games: "Both, as I previously mentioned, support legacy 2D X-client applications. But, currently, only AIGLX can take advantage of OpenGL Direct Rendering for games and the such. XGL is limited to the GL-X mechanism here. The reason for this is that the X-Server and the DRI mechanism need to co-operate with each other. However, in the GL-X solution, the X-Server that the X-client connects to is (currently) not the one that does the actual displaying. It'd probably be a very major hack to include this, and could simply be unfeasible. AIGLX wouldn't have a problem here because X could implement DRI the same way as it does on systems now."

I think you're explaining why the GL-X solution that Xgl relies on is no good. But why does Xgl rely on the GL-X solution for OpenGL games?

Well, great article! helped me to make some sense of the difference between AIGLX and Xgl.

Luke

Submitted by Anonymous visitor (not verified)

Various reasons in-kernel X servers are bad ideas:
1. There is code in X that is not GPL, only GPL code can go in the Linux Kernel.
2. In-kernel X servers would be more platform specific than the current approach, splitting X developers across platforms.
3. If X crashes, so would the whole OS.
4. Security concerns.
5. Most importantly, the speed increase would be trivial compared to the effort and complications.

XGLX renders to the screen directly and composites _X11 windows_. This prevents the ability to composite DRI windows (accelerated programs like games) (since they draw to the _GPU_ and not to _X11 windows_).

The idea is, AIGLX allows the acceleration of indirect apps. So even games (and other GL apps) get drawn to X11 windows (at acceptable speeds). And since they are drawn as X11 windows, they can be composited.

If you run XGL, you have to use XV for video playback, and OpenGL apps in XGL are slow. But everything is composited, and it works _now_. I guess XGL will also implement indirect rendering (so everything is fast again), but since it uses the same method as AIGLX, my guess is that it will only support graphics cards that AIGLX does.

If you run AIGLX, You can use GL for video, fast GL for games/etc, and everything is composited (the end goal). The only catch is it doesn't work on the proprietary video drivers (yet).

Submitted by Anonymous visitor (not verified)

Actually, according to Wikipedia, AIGLX works on the current nVidia drivers, just not the old "legacy" ones. It's ATI that is lagging behind.

Submitted by Anonymous visitor (not verified)

AFAIK, XGL uses indirect rendering for all GLX applications (including compositing manager). AIGLX uses indirect operations for compositing manager. What about other GLX applications?

Submitted by Anonymous visitor (not verified)

Just to correct the post from old archives about running "ifconfig lo down", but not to comment about the former post asking why networking (I find it very handy to run an application remotely, I do it every day at home).

X doesn't require networking to work. Actually you can even run the X server with the option -nolisten tcp.

Locally it uses unix sockets and not localhost, and it's helped by different extension protocols like X-SHM (shared memory). If *your* gdm/xdm does not start correctly after a ifconfig lo down, that's more because of some not really useful dns resolution that fails, maybe for remote X sessions handling.

Of course if you run something with DISPLAY=localhost:0 instead of DISPLAY=:0 that would force using network for local display, and you'd lose a lot of acceleration.

Submitted by Anonymous visitor (not verified)

I've been using ATI and NVidia cards with their proprietary binary drivers for some time. I am unhappy with having to re-compile/install the proprietary drivers after every kernel update. When I go to re-install I usually put in an updated binary driver with unexpected results. By the time you do a kernel update your video card is too old to be supported in the latest binary driver meaning you have to find the old version somewhere.

I've heard that the new INTEL G965 chipset is open-sourced. Is there anyone here that has actually run one? How is the G965 3D performance after a kernel update?

I've also found an Open hardware implementation video card advertised. I wonder if it has half the 3D performance of my older 9200 radeon or NVidia Gforce4 mx?

ATI and NVidia binary drivers have also had problems with power-management, such as sleep and standby modes (sometimes they don't wake up).

Using the free MESA drivers is the most reliable but the 3D performance is only half way there as of 2006.

Submitted by Terry Hancock

"I've also found an Open hardware implementation video card advertised. I wonder if it has half the 3D performance of my older 9200 radeon or NVidia Gforce4 mx?"

I can't tell for sure if you are referring to the Open Graphics Project. Of course, it doesn't exist yet, but when it does, it's expected to be roughly competitive with the ATI Radeon 9200 or so (it probably won't be much faster, but it shouldn't be slower).

It'll be a while before OGP cards are "price/feature" competitive with proprietary cards (the project officially denies this as a goal, though I think it will be the result in the long run -- eventually the open source advantage will overwhelm the short production-run problem). In the meantime, the principal market for OGP cards will be people who are willing to pay a little extra for the openness of the hardware (specification and implementation).

Submitted by Anonymous visitor (not verified)

you seem to have not yet learn it is called X Window
http://en.wikipedia.org/wiki/X_Window_System

please repeat it about 25 times before you ever write anything on X topic

Submitted by Anonymous visitor (not verified)

In your title... look at the fourth word. What is this? I can't find "REPSECT" in my dictionary. Strange. Maybe you meant RESPECT, am I right? Well... it seems you too lack respect, but not in the same way/meaning as the author of this article.

Also... please repeat the word PONCTUATION about 25 times in your head before your ever write anything on any topic.

Submitted by Anonymous visitor (not verified)

Is it just me who finds this little tizz hugely ironic? Pot, kettle, black, etc.

It's "PUNCTUATION"... ;-)

Author information


Biography

Edward Macnaghten has been a professional programmer, analyst and consultant for in excess of 20 years. His experiences include manufacturing commercially based software for a number of industries in a variety of different technical environments in Europe, Asia and the USA. He is currently running an IT consultancy specialising in free software solutions based in Cambridge UK. He also maintains his own web site.