
Jono Bacon: Unwrapping ‘Dealing With Disrespect’

Planet Ubuntu - Fri, 2014-05-02 05:28

With the growth of the Internet and the ease of publishing content, more and more creative minds are coming online to share videos, music, software, products, services, opinions, and more. While the technology has empowered a generation to build new audiences and share interesting things, an unfortunate side-effect has been a culture in which some consumers of this content have provided feedback in a form that is personalized, mean-spirited, disrespectful, and in some cases, malicious.

We have all seen it…the trolls, the haters, the comment boxes filled with venom and vitriol, typically pointed at people just trying to do good and interesting things.

Unfortunately, this conduct can be jarring for many people, with some going so far as to give up sharing their creative endeavours so as not to deal with the “wrath of the Internet”.

As some of you will know, this has been bothering me for a while now. While there is no silver bullet for solving these issues, one thing I have learned over the years is how to put negative, anti-social, and non-constructive comments and feedback into perspective.

To help others with this I have written a free book called Dealing With Disrespect.

Dealing With Disrespect is a short, simple-to-read, free book that provides a straightforward guide for handling this kind of challenging feedback, picking out the legitimate criticism to learn from, and not just ignoring the haters, but managing them. The book helps put all communication, whether on or offline, into perspective and helps you to become a better communicator yourself.

My goal with the book is that when someone reads something anti-social that demotivates them, a friend can recommend ‘Dealing With Disrespect’ as something that can help put things in perspective.

Go and check out the new website, watch the introductory video, and go and grab the PDF, read it online, or get it for your Kindle. There is also a FAQ.

The book is licensed under a Creative Commons license, and I encourage everyone who enjoys it and finds it useful to share it.

Nicholas Skaggs: Building cross-architecture click packages

Planet Ubuntu - Thu, 2014-05-01 22:21
Building click packages should be easy. And to a reasonable extent, Qt Creator and click-buddy do make it easy. Things can, however, get a bit more complicated when you need to build a package that will run on an armhf device (you know, like your phone!). Since your PC is almost certainly based on x86, you need to use, create, or fake an armhf environment for building the package.

So then what options exist for getting a proper build of a project that will install properly on your device?

A phone can be more than a phone
It can also be a development environment! Although it's not my recommendation, you can always compile the package on the target device itself. The downsides of this are namely speed and storage space. Nevertheless, it will build a click.
  1. shell into your device (adb shell / ssh mydevice)
  2. checkout the code (bzr branch lp:my-project)
  3. install the needed dependencies and sdk (apt-get install ubuntu-sdk)
  4. build with click-buddy (click-buddy --dir .)
Chroot to the rescue
The click tools contain a handy way to build a chroot expressly suited for use with click-buddy to build things. Basically, we can create a nice fake environment and pretend it's armhf, even though we're not running that architecture.

sudo click chroot -a armhf -f ubuntu-sdk-14.04 create
click-buddy --dir . --arch armhf

Most likely your package will require extra dependencies, which for now need to be specified and passed in with the --extra-deps argument. These arguments are package names, just as you would pass to apt-get. Like so:

click-buddy --dir . --arch armhf --extra-deps "libboost-dev:armhf libssl-dev:armhf"

Notice we specified the arch as well, armhf. If we also add a --maint-mode, our extra installed packages will persist. This is handy if you will only ever be building a single project and don't want to constantly update the base chroot with your build dependencies.

Qt Creator, build it for me!
CMake makes all things possible. Qt Creator can not only build the click for you, it can also hold your hand through creating a chroot1. To create a chroot in Qt Creator, do the following:
  1. Open Qt Creator
  2. Navigate to Tools > Options > Ubuntu > Click
  3. Click on Create Click Target
  4. After the click target is finished, add the dependencies needed for building. You can do this by clicking the maintain button.
  5. apt-get install what you need or otherwise set up the environment. Once ready, exit the chroot.
Now you can use this chroot for your project:
  1. Open Qt Creator and open the project
  2. Select armhf when prompted
    1. You can also manually add the chroot to the project via Projects > Add kit and then select the UbuntuSDK armhf kit.
  3. Navigate to Projects tab and ensure the UbuntuSDK for armhf kit is selected.
  4. Build!
Rolling your own chroot
So, click can set up a chroot for you, and Qt Creator can build and manage one too. These are great options for building one project. However, if you find yourself building a plethora of packages, or you simply want more control, I recommend setting up and using your own chroot to build. For my own use, I've picked pbuilder, but you can set up the chroot using other tools (like schroot, which Qt Creator uses).

sudo apt-get install qemu-user-static ubuntu-dev-tools
pbuilder-dist trusty armhf create
pbuilder-dist trusty armhf login --save-after-login

Then, from inside the chroot shell, install a couple of things you will always want available; namely the build tools and bzr/git/etc. for grabbing the source you need. Be careful here and don't install too much. We want to maintain an otherwise pristine environment for building our packages. By default, changes you make inside the chroot will be wiped. That means the package-specific dependencies we'll install each time to build something won't persist.

apt-get install ubuntu-sdk bzr git phablet-tools

On exiting, you'll notice pbuilder updates the base tarball with our changes. Now, when you want to build something, simply do the following:

pbuilder-dist trusty armhf login
bzr branch lp:my-project
apt-get install build-dependencies-you-need

Now, you can build as usual using click tools, so something like

click-buddy --dir .

works as expected. You can even add --provision to send the resulting click to your device. If you want to grab the resulting click yourself, you'll need to copy it out before exiting the chroot, which is mounted on your filesystem under /var/cache/pbuilder/build/. Look for the last line after you issue your login command (pbuilder-dist trusty armhf login). You should see something like:

File extracted to: /var/cache/pbuilder/build//26213

If you cd to this directory on your local machine, you'll see the environment chroot filesystem. Navigate to your source directory and grab a copy of the resulting click. Copy it to a safe place (somewhere outside of the chroot) before exiting the chroot or you will lose your build! 
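If you script this, the chroot path can be pulled straight out of pbuilder's banner. A small helper sketch (the function name is mine, not part of the click or pbuilder tools):

```shell
# Hypothetical helper: pull the chroot path out of pbuilder's
# "File extracted to: ..." line, squeezing the doubled slash it prints.
parse_build_dir() {
    sed -n 's/^File extracted to: //p' | tr -s '/'
}

# e.g. feed it the line pbuilder printed at login time:
printf 'File extracted to: /var/cache/pbuilder/build//26213\n' | parse_build_dir
# prints /var/cache/pbuilder/build/26213
```

From a second terminal, you could then cd into the directory it prints, find the built *.click under your source path, and copy it somewhere safe while the login session is still open.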

But wait, there's more!
Since you have access to the chroot while it's open (and you can login several times if you wish to create several sessions from the base tarball), you can iteratively build packages as needed, hack on code, etc. The chroot is your playground.

Remember, click is your friend. Happy hacking!

1. Thanks to David Planella for this info

Kubuntu: KDE Applications and Development Platform 4.13

Planet Ubuntu - Thu, 2014-05-01 22:04

Packages for the release of KDE SC 4.13 are available for Kubuntu 12.04 LTS, 13.10, and our development release. You can get them from the Kubuntu Backports PPA. It includes an update of Plasma Desktop to 4.11.8.

Bugs in the packaging should be reported to kubuntu-ppa on Launchpad. Bugs in the software to KDE.

Kubuntu: Calligra 2.8.1 is Out

Planet Ubuntu - Thu, 2014-05-01 22:02

Packages for the release of KDE's document suite Calligra 2.8.1 are available for Kubuntu 12.04 LTS and 13.10. You can get them from the Kubuntu Backports PPA (alongside KDE SC 4.13). They are also in our development release.

Ubuntu Podcast from the UK LoCo: S07E05 – The One with the Metaphorical Tunnel

Planet Ubuntu - Thu, 2014-05-01 20:21

Alan Pope, Mark Johnson, Tony Whitmore, and Laura Cowen are in Studio L for Season Seven, Episode Five of the Ubuntu Podcast!


In this week’s show:-

We’ll be back next week, when we’ll interview Michael Meeks and Bjorn Michaelson from The Document Foundation and go through your feedback.

Please send your comments and suggestions to:
Join us on IRC in #uupc on Freenode
Leave a voicemail via phone: +44 (0) 203 298 1600, sip: and skype: ubuntuukpodcast
Follow us on Twitter
Find our Facebook Fan Page
Follow us on Google+

Daniel Pocock: reSIProcate v1.9 WebRTC available for Fedora 20 testing

Planet Ubuntu - Thu, 2014-05-01 20:00

Today I released reSIProcate v1.9 packages into the Fedora 20 testing updates repository.

This means Fedora 20 users can now try WebRTC more easily.

The same version is already available in Debian wheezy-backports and Ubuntu trusty.

Get started today

Install the resiprocate-repro proxy server package using yum.

Set up a DNS entry; here is what we will use in the example:

The example pairs a domain with its WebSocket URL: in JSCommunicator or JsSIP you use a ws:// URL pointing at the server, and the DNS entry will be an IN A record giving the server's IP address.

Notice that in the ws:// URL, we do not specify a port. This means port 80 is used by default. You can also use a non-standard port if you prefer or if you don't have root permissions.
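As an aside, the port-defaulting rule can be sketched as a tiny shell helper (the function and hostname are mine, purely illustrative, not part of repro or JsSIP):

```shell
# return the explicit port from a ws:// URL, or 80 when none is given
ws_port() {
    hostport=${1#ws://}       # drop the scheme
    hostport=${hostport%%/*}  # keep only the host[:port] part
    case $hostport in
        *:*) printf '%s\n' "${hostport##*:}" ;;  # explicit port given
        *)   printf '80\n' ;;                    # ws:// defaults to 80
    esac
}

ws_port ws://example.org/ws        # prints 80
ws_port ws://example.org:8080/ws   # prints 8080
```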

Now edit the file /etc/repro/repro.config, adding a transport definition for WebSockets (WS) and change a few other parameters from their defaults:

# Bind to on port 80
Transport1Interface =
# Use WS (can also use WSS for TLS, see repro.config for full details)
Transport1Type = WS
# if using WSS, must also change the transport param here
Transport1RecordRouteUri =;transport=WS
EnableFlowTokens = true
DisableOutbound = false
# Disable all authentication - just for testing
DisableAuth = false
# allow http admin access on all interfaces,
# default is
# HttpBindAddress =

Now set up a password for the web admin tool using htdigest:

# htdigest /etc/repro/users.txt repro admin

After all that, restart the repro process

# service repro restart

Go to the web interface on port 5080 (only listening on localhost by default), go to the "ADD DOMAIN" page and add your domain.

Now restart repro again so it recognises the new domain.

# service repro restart

Finally, you can test it using JSCommunicator.

Next steps
  • Set up a TURN server for NAT traversal
  • Use WebSockets over TLS (WSS) instead of regular WS mode
  • Set up authentication (see the various options, including client certificate support, in repro.config)
  • Connect to regular SIP infrastructure such as Asterisk

Please come and ask on the repro-users mailing list.

Dustin Kirkland: Double Encryption, for the Win!

Planet Ubuntu - Thu, 2014-05-01 19:33

Upon learning about the Heartbleed vulnerability in OpenSSL, my first thoughts were pretty desperate.  I basically lost all faith in humanity's ability to write secure software.  It's really that bad.

I spent the next couple of hours drowning in the sea of passwords and certificates I would personally need to change...ugh :-/

As the hangover of that sobering reality arrived, I started thinking about the various systems over the years that I've designed, implemented, or was otherwise responsible for, and how Heartbleed affected those services.  Another throbbing headache set in.

I patched within minutes of Ubuntu releasing an updated OpenSSL package, and re-keyed the SSL certificate as soon as GoDaddy declared that it was safe for re-keying.

Likewise, the Ubuntu entropy service was patched and re-keyed, along with all Ubuntu-related https services, by Canonical IT.  I pushed a new package of the pollinate client with updated certificate changes to Ubuntu 14.04 LTS (trusty) the same day.

That said, I did enjoy a bit of measured satisfaction, in one controversial design decision that I made in January 2012, when creating Gazzang's zTrustee remote key management system.

All default network communications, between zTrustee clients and servers, are encrypted twice.  The outer transport layer network traffic, like any https service, is encrypted using OpenSSL.  But the inner payloads are also signed and encrypted using GnuPG.
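The layering can be shown with a toy shell sketch. This is NOT zTrustee's actual code, keys, or formats: two symmetric openssl passes merely stand in for the inner GnuPG layer and the outer TLS layer, to show that peeling the outer layer alone still yields ciphertext.

```shell
# Toy illustration of double encryption (stand-ins only, not zTrustee).
msg="secret key material"

# inner layer (stand-in for the GnuPG-encrypted payload)
inner=$(printf '%s' "$msg" | openssl enc -aes-256-cbc -pbkdf2 -pass pass:innerkey -base64 -A)

# outer layer (stand-in for the TLS transport encryption)
outer=$(printf '%s' "$inner" | openssl enc -aes-256-cbc -pbkdf2 -pass pass:outerkey -base64 -A)

# a Heartbleed-style compromise peels only the outer layer...
peeled=$(printf '%s\n' "$outer" | openssl enc -d -aes-256-cbc -pbkdf2 -pass pass:outerkey -base64 -A)

# ...which is still ciphertext, not the plaintext
[ "$peeled" != "$msg" ] && echo "outer layer alone reveals nothing"
```

Only someone holding the inner key (in zTrustee's case, the GnuPG private key) can recover the plaintext from the peeled payload.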

Hundreds of times, zTrustee and I were questioned or criticized about that design -- by customers, prospects, partners, and probably competitors.

In fact, at one time, there was pressure from a particular customer/partner/prospect to disable the inner GPG encryption entirely and have zTrustee rely solely on the transport-layer OpenSSL, for performance reasons.  Try as I might, I eventually lost that fight, and we added the "feature" (as a non-default option).  That someone might have some re-keying to do...

But even in the face of the Internet-melting Heartbleed vulnerability, I'm absolutely delighted that the inner payloads of zTrustee communications are still protected by GnuPG asymmetric encryption and are NOT vulnerable to Heartbleed style snooping.

In fact, these payloads are some of the very encryption keys that guard YOUR health care and financial data stored in public and private clouds around the world by Global 2000 companies.

Truth be told, the insurance against crypto library vulnerabilities zTrustee bought by using GnuPG and OpenSSL in combination was really the secondary objective.

The primary objective was actually to leverage asymmetric encryption, to both sign AND encrypt all payloads, in order to cryptographically authenticate zTrustee clients, ensure payload integrity, and enforce key revocations.  We technically could have used OpenSSL for both layers and even realized a few performance benefits -- OpenSSL is faster than GnuPG in our experience, and can leverage crypto accelerator hardware more easily.  But I insisted that the combination of GPG over SSL would buy us protection against vulnerabilities in either protocol, and that was worth any performance cost in a key management product like zTrustee.

In retrospect, this makes me wonder why diverse, backup, redundant encryption, isn't more prevalent in the design of security systems...

Every elevator you have ever used has redundant safety mechanisms.  Your car has both seat belts and air bags.  Your friendly cashier will double bag your groceries if you ask.  And I bet you've tied your shoes with a double knot before.

Your servers have redundant power supplies.  Your storage arrays have redundant hard drives.  You might even have two monitors.  You might be carrying a laptop, a tablet, and a smart phone.

Moreover, important services on the Internet are often highly available, redundant, fault tolerant or distributed by design.

But the underpinnings of the privacy and integrity of the very Internet itself are usually protected only once, with transport-layer encryption of the traffic in motion.

At this point, can we afford the performance impact of additional layers of security?  Or, rather, at this point, can we afford not to use all available protection?


p.s. I use both dm-crypt and eCryptFS on my Ubuntu laptop ;-)

Diego Turcios: Ubuntu Honduras in the FLISOL

Planet Ubuntu - Thu, 2014-05-01 19:18
Ubuntu Honduras was present at FLISOL San Pedro Sula. There were several presentations at the FLISOL. Ubuntu Honduras team members talked about the following topics:
  • What is Ubuntu
  • Open Hardware
  • C++ with Cocos2dx
  • Backbone.js
Besides the three speakers, there were members of Informatica Libre helping install Ubuntu and other GNU/Linux distributions for people who wished to have a GNU/Linux version.
There were approximately 300 people at this event. Many of them were high school students. This event helped students understand the importance of Free Software in our daily life.
To know
What is FLISOL? FLISOL, an acronym for Festival Latinoamericano de Instalación de Software Libre (Latin American free software install fest), is the biggest event for spreading Software Libre since 2005, held simultaneously in different countries of Latin America.
FLISOL San Pedro Sula (FLISOL SPS): this is a newspaper article talking about FLISOL.

The FLISOL took place in San Pedro Sula, Honduras, at the university UNAH-VS.

Some Images

Jorge Castro: Deploying and managing workloads to POWER8

Planet Ubuntu - Thu, 2014-05-01 17:45

For the past cycle we have been working with IBM to bring Ubuntu 14.04 to the POWER8 architecture. For my part I’ve been working on helping get our collection of over 170 services to Just Work for this new set of Ubuntu users.

This week Mark Shuttleworth and Doug Balog (IBM) demoed how people would use a POWER8 machine. Since Ubuntu is supported, it comes with the same great tools for deploying and managing your software in the cloud. Using Juju we were able to fire up Hadoop with Ganglia monitoring, a SugarCRM site with MariaDB and memcached, and finally a Websphere Liberty application server serving a simple Pet Store:

No part of this demo is staged; I literally went from nothing to fully deployed in 178 seconds on a POWER8. Dynamic languages just work, and most things in the archive (that are not architecture specific) also just work; we recompiled the archive to ppc64le over the course of this cycle and if you’re using 14.04 you’re good to go.

For reference here’s the entire Juju demo as we did it from behind the curtain:

This is the power of Juju!

Robie Basak: New in Ubuntu 14.04 LTS: nginx 1.4 in main

Planet Ubuntu - Thu, 2014-05-01 17:04

Ubuntu 14.04 LTS ships with nginx 1.4, which is now in main for the first time. Packages in main are covered by the Ubuntu Security Team and generally receive particular focus and attention in Ubuntu. This brings nginx up to par with Apache as a first class citizen in Ubuntu.

This move also led us to closer collaboration with nginx upstream. This is great to see happening in Ubuntu, and can only help to improve quality in our ecosystem.

Note that it is only nginx, nginx-core, and the other support packages nginx-doc and nginx-common that are in main. The other packages (extras, full, light, naxsi, etc.) contain third-party plugins and thus remain in universe. See below for details.


Thomas Ward had been looking after the nginx packages in Ubuntu for quite a while, so when I received requests to get nginx into main, I made sure to contact him. One requirement for main inclusion is a team commitment to look after the package. We concluded that Thomas would carry on looking after nginx in general in Ubuntu, but that the rest of the Ubuntu Server Team would be able to back him as necessary.

Following Jorge's blog post about nginx plans for main, Sarah Novotny from nginx upstream contacted us to see how we might be able to collaborate. We are now all in touch so that we can work together to make the nginx experience better for Ubuntu users. I made sure that we all connected with the Debian nginx team also.

Thomas also blogged about nginx in main as soon as it landed.

Packaging notes

There are a couple of notable differences in Ubuntu's nginx packaging (inherited from Debian):

  1. The default path served is not /var/www/ like it is with Apache. Instead, it is /usr/share/nginx/html/. This directory contains the index.html file that is served by default. However, /usr/share/ is not a suitable location to place your own files to serve, since this area is maintained by packaging. Instead, you should configure nginx to serve from a different path, and then use that. According to the Filesystem Hierarchy Standard, /srv is a suitable path to use for this.

    Placing your own files in /usr/share/nginx/html/ is dangerous, as they can be arbitrarily overwritten by package upgrades. This unfortunate behaviour has been reported in bug 1194074, and there has been some discussion in Debian bug 730382. But as the default document root is a deliberate decision by the Debian nginx maintainers, there isn't yet any solution to stop users falling into this trap, except to know about it. So please heed this warning and make sure that you change your document root appropriately.

  2. The nginx daemon does not start by default as soon as the package is installed. You must do this by hand using service nginx start. This makes sense since you will usually need to reconfigure nginx to use a different document root first (see the previous point).
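Putting both notes together, here is a minimal sketch of a site definition that serves from /srv rather than the packaged default (the paths are examples of mine, not Ubuntu defaults):

```nginx
server {
    listen 80 default_server;
    # your own content lives under /srv, not /usr/share/nginx/html/
    root /srv/www/mysite;
    index index.html;
}
```

With the Debian-style packaging, such a definition would typically go in /etc/nginx/sites-available/ with a symlink in sites-enabled, followed by service nginx start.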

Adjustments for main inclusion

A requirement for main inclusion in Ubuntu is quality and maintainability from a security perspective. The security team reviewed nginx and passed this requirement for nginx itself, noting that "Nginx is high-quality legible code, excellent explanatory comments and platform notes, very useful utility functions, and defensive error checking and logging".

However, some third party modules shipped with nginx in Debian varied in quality, so did not pass this requirement for main inclusion.

Since nginx does not currently support dynamically loadable modules, it is not possible for binary distributions such as Debian and Ubuntu to independently build plugin modules using separate source packages. Since module selection is done at build time, this makes it impossible for users to select the precise set of modules they want to have enabled in their nginx binaries, or to add modules written by third parties afterwards, as the distribution has already built the nginx binaries as part of the distribution.

So instead, Debian supplies a selection of third party modules as part of the nginx packaging. This results in binary packages such as nginx-light, nginx-full and nginx-extras, so that users can at least pick from a list of predefined sets of modules, which include common third party modules.

Since third party modules could not be included in main in Ubuntu, a new binary package nginx-core was created which contains only modules supplied in the nginx source itself. It is nginx-core, generated from the nginx source only, and related support packages nginx, nginx-common, and nginx-doc, that were promoted to main.

nginx 1.6

nginx 1.6 was released on 24 April, which was a week after the release of 14.04 LTS. This means that it will not be available as part of Ubuntu except in future releases. If you need nginx 1.6 on 12.04 or 14.04, you can use a PPA or the upstream-provided package repository. Read on for details.

Multiple package sources

You now have a variety of sources for nginx packages. You can install nginx from the Ubuntu repository itself, use the Launchpad nginx team PPA or use the packages provided by nginx upstream.

When deciding which source to use, I suggest that you consider the differences in release management, how security updates are handled and by whom, and your deployment's external repository dependencies.

This sort of choice in package repository source seems to be becoming increasingly common for key packages as the Free Software ecosystem continues to develop. Ubuntu Server LTS releases remain the stable, solid ground that production server deployments are built on, but the faster development pace of upstreams means that there is constant demand for newer upstream releases to be made available in older LTS releases.

Here are my own personal opinions on the pros and cons of the different approaches, from an nginx perspective.

nginx from Ubuntu itself

nginx as shipped within Ubuntu follows the Ubuntu release cycle and release management. You get the version available at the time the Ubuntu release you're using entered feature freeze, with only high-impact bugfixes and security updates issued as updates, as curated by the Ubuntu Server Team and the Ubuntu Security Team.

This provides a stable platform, where by stable I mean that the package does not functionally change in the lifetime of the Ubuntu release. From a production perspective, this means that if you successfully deployed last week, you can have maximum confidence in performing an identical redeployment next week. If your workflow is to have a validated, consistently reproducible deployment, then this approach minimises the chance of your deployment regressing, by not changing it. More information on Ubuntu's stable release policy can be found on the Stable Release Updates page.

The trade-off to this release stability is that the latest and greatest is not available, except through six-monthly non-LTS releases, whose use is less common on production servers.

You can see which nginx version ships with which Ubuntu release on the Ubuntu nginx package page. If the version of nginx shipped with Ubuntu is suitable for your needs, then I recommend using this option.

nginx from the Launchpad nginx team PPAs

The Launchpad nginx team PPAs are nowadays mainly maintained by Thomas Ward, the same person who generally looks after the nginx packaging in Ubuntu itself. You have a choice of two PPAs, "stable" and "mainline", which follow the two lines of upstream development.

The version of nginx in these PPAs moves along with upstream releases. For example: if you had previously installed nginx 1.4 from this repository on 12.04, it would automatically have upgraded to 1.6 when you performed your regular system updates after the PPA was updated to 1.6. This effectively gives you a "rolling release" of the latest nginx, but based on the stable release of Ubuntu 12.04 LTS.

The advantage of this approach is that you get the latest version of nginx, assuming that this matters to you. For example, if you need a more recent feature that is not present in the version of nginx shipped with the latest LTS release of Ubuntu, then this is useful.

If you want to continue to have the latest version, then this option will work well for you.

However, if you want the latest version but don't want it to change afterwards, then this option is dangerous: once you have used the PPA, not continuing to update from it means that you will not receive security updates, and bugfixes will generally not be available to you unless you also bump to the latest release version.

Assuming that you do stay up-to-date with the PPA, then instead of managing regression risk by not changing things, regression risk must now necessarily be managed by the nginx upstream team's QA process. In this case, I know that they do have a comprehensive test suite that they run before release, but clearly the regression risk is higher than the approach of not changing anything at all.

Also, note that as a PPA this does not receive the attention of the Ubuntu Security Team. Thomas is very good at keeping this PPA up to date, but clearly PPA maintenance primarily by one person does have a very low bus factor in the context of timely updates.

nginx packages from upstream

nginx upstream also publish package repositories for nginx. The trade-offs in using these are quite similar to using the Launchpad nginx team PPA from a release management perspective.

The key difference is in packaging. nginx upstream packaging is designed to appear largely the same regardless of which distribution you are using, for more consistency across distribution families. This is different from the nginx distribution packaging, which is generally designed to follow the patterns commonly used across the Debian and Ubuntu distributions. So if you move from distribution packaging to upstream packaging or vice versa, you will probably need to adapt your deployment configuration.

I assume that the bus factor for timely updates here is much higher than for the PPA, since this repository is managed by the larger upstream nginx team. Security updates generally originate from upstreams anyway, so in general all nginx repositories are to some extent reliant on the upstream nginx team for updates, of course, regardless of their direct source.

Note that if you choose this option, your deployment will additionally rely on the availability of the upstream nginx package archive. If you use many upstream repositories for many different components in your deployment, then this magnifies to many points of failure. You can mitigate this risk entirely by mirroring the packages you are using. Of course, this type of deployment dependency also applies to anything based on the Ubuntu archive, in that your deployment already has a dependency on the Ubuntu archive if you are not mirroring the Ubuntu packages you use. I think that how you consider this trade-off, or whether it is even a trade-off at all, is a matter of opinion.


I will note that the Ubuntu Backports repository exists, but it is not currently used for nginx, so I will not discuss this option further here.

Getting help

As always, see Ubuntu's main page on community support options. #ubuntu-server on IRC (Freenode) and the Ubuntu Server mailing list are appropriate venues.


A big shout out is due to Thomas Ward, who has been looking after both nginx in Ubuntu and the Launchpad nginx team PPA for quite a while now. Thomas was pivotal in getting nginx into main, and blogged about it when it landed.

Thanks also to the Debian nginx packaging team. Ubuntu's nginx packaging is based on their hard work.

And to Sarah Novotny of nginx upstream, for reaching out and collaborating with us to help make the nginx experience of Ubuntu users better.

Costales: PYLang: Practice Languages with Ubuntu

Planet Ubuntu - Thu, 2014-05-01 15:18
A few years ago, for the first Ubuntu App Showdown, I developed PYEnglish, an application for practising English.
Now PYLang is available, an upgrade of that application. You can practice English & Spanish in its first release.
How to play
Easy, just complete the right sentence with random words!
PYLang is a handy utility created with a different attitude in mind: it forces the user to write the correct sentence. The app provides colour-coded word items for the user, with the main goal of combining them into a correct sentence.

Testing your English 
Testing your Spanish

How to install
You can install it on Ubuntu 14.04 or earlier versions.

Just install from the Ubuntu Software Center in Trusty
PYLang web page.

Canonical Design Team: New DVD design for 14.04 LTS

Planet Ubuntu - Thu, 2014-05-01 13:40

The new DVD designs feature:

Desktop Edition
- 14.04 wallpaper
- Modified design of the folded paper numerals

Server Edition
- An integrated, 14 module graphic

Trusty Tahr – hidden reveal within the DVD pocket

Design exploration – folded paper numerals

Design exploration – graphic numerals

Alternative Desktop Edition concepts

Alternative Server Edition concepts

