news aggregator

Ubuntu Classroom: Ubuntu User Days coming up!

Planet Ubuntu - Mon, 2014-01-20 06:03

Next weekend, from Saturday the 25th at 14:30 UTC until Sunday the 26th at 01:00 UTC the Classroom team will be hosting the Ubuntu User Days!

User Days was created as a set of chat-based classes offered over a two-day period to teach beginning and intermediate Ubuntu users the basics to get them started with Ubuntu. Sessions this cycle include:

  • Command line made easy
  • Unity: Tips, tricks and configuration
  • Equivalent Applications
  • Finding Support for Ubuntu

You can check the full schedule here:

The best thing is, everyone can come! If you want to participate, you just need to join #ubuntu-classroom and #ubuntu-classroom-chat in your IRC client, or just click here for the browser-based webchat.

We hope to see you next weekend!

Jono Bacon: Bad Voltage in 2014

Planet Ubuntu - Sun, 2014-01-19 16:44

In 2013 we kicked off Bad Voltage, a fun and irreverent podcast about technology, Open Source, gaming, politics, and anything else we find interesting. The show includes a veritable bounty of presenters including Stuart Langridge (LugRadio, Shot Of Jaq), Bryan Lunduke (Linux Action Show), Jeremy Garcia (LinuxQuestions Podcast), and myself (LugRadio, Shot Of Jaq).

We have all podcasted quite a bit before and we know it takes a little while to really get into the groove, but things are really starting to gel in the show. We are all having a blast doing it, and it seems people are enjoying it.

If you haven’t given the show a whirl, I would love to encourage you to check out our most recent episode. In it we feature:

  • An interview with Sam Hulick who writes music for video games (Mass Effect, Baldur’s Gate) as well as some of the Ubuntu sounds.
  • We discuss the Mars One project and whether it is absolute madness or a vague possibility.
  • We evaluate how Open Source app devs can make money, different approaches, and whether someone could support a family with it.
  • Part One of our 2014 predictions. We will review them at the end of the year to see how we did. Be sure to share your predictions too!

Go and download the show in either MP3 or Ogg format and subscribe to the podcast feeds!

We also have a new community forum that is starting to get into its groove too. The forum is based on Discourse, so it is a pleasure to use, and a really nice community is forming. We would love to welcome you too!

In 2014 we want to grow the show, refine our format, and build our community around the world. Our goal here is that Bad Voltage becomes the perfect combination of informative and really fun to listen to. I have no doubt that our format and approach will continue to improve with each show. We also want to grow an awesome, inclusive, and diverse community of listeners too. Our goal is that people think of the Bad Voltage community as a fun, like-minded set of folks who chat together, play together, collaborate together, and enjoy the show together.

Here’s to a fun year with Bad Voltage and we hope you come and be a part of it.

Kubuntu Wire: Valve’s OpenGL Debugger Developed on Kubuntu

Planet Ubuntu - Sun, 2014-01-19 11:19

Valve make Steam, a platform for distributing computer games, which got a lot of people excited when it was announced a couple of years ago that it was being ported to Ubuntu. One of the developers has just announced a new OpenGL debugger. It’s developed on Kubuntu and uses the Qt Creator IDE. Best of all, it’s going to be completely open source. Lovely to know Kubuntu is helping bring the next generation of games to all platforms. More details on Phoronix.

Eric Hammond: Finding the Region for an AWS Resource ID

Planet Ubuntu - Sun, 2014-01-19 00:03

Use concurrent AWS command line requests to search the world for your instance, image, volume, snapshot, …


Amazon EC2 and many other AWS services are divided up into various regions across the world. Each region is a separate geographic area and is completely independent of other regions.

Though this is a great architecture for preventing global meltdown, it can occasionally make life more difficult for customers, as we must interact with each region separately.

One example of this is when we have the id for an AMI, instance, or other EC2 resource and want to do something with it but don’t know which region it is in.

This happens on ServerFault when a poster presents a problem with an instance, provides the initial AMI id, but forgets to specify the EC2 region. In order to find and examine the AMI, you need to look in each region to discover where it is.


You’ll hear a repeating theme when discussing performance in AWS:

To save time, run API requests concurrently.

This principle applies perfectly when performing requests across regions.

Parallelizing requests may seem like it would require an advanced programming language, but since I love using command line programs for simple interactive AWS tasks, I’ll present an easy mechanism for concurrent processing that works in bash.


The following sample code finds an AMI using concurrent aws-cli commands to hit all regions in parallel.

id=ami-25b01138 # example AMI id
type=image      # or "instance" or "volume" or "snapshot" or ...

regions=$(aws ec2 describe-regions --output text \
           --query 'Regions[*].RegionName')

for region in $regions; do
    (
      aws ec2 describe-${type}s --region $region --$type-ids $id \
        &>/dev/null && echo "$id is in $region"
    ) &
done 2>/dev/null
wait 2>/dev/null

This results in the following output:

ami-25b01138 is in sa-east-1

By running the queries concurrently against all of the regions, we cut the run time by almost 90% and get our result in a second or two.

Drop this into a script, add code to automatically detect the type of the id, and you’ve got a useful command line tool… which you’re probably going to want to immediately rewrite in Python so there’s not quite so much forking going on.
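If you do reach for Python, the same fan-out maps naturally onto a thread pool. The sketch below is purely illustrative: the region list is hard-coded and found_in_region is a stub standing in for a real describe call (boto, or a subprocess running the aws CLI), so the concurrency pattern is visible without needing AWS credentials.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical data: a real script would fetch the region list from the
# DescribeRegions API instead of hard-coding it.
REGIONS = ["us-east-1", "us-west-2", "eu-west-1", "ap-southeast-1", "sa-east-1"]
RESOURCE_ID = "ami-25b01138"

def found_in_region(region):
    # Stub for a real describe-images/describe-instances call that
    # succeeds only in the region actually holding the resource.
    return region == "sa-east-1"

def search(resource_id, regions):
    # Query every region at once, mirroring the backgrounded bash loop.
    with ThreadPoolExecutor(max_workers=len(regions)) as pool:
        hits = list(pool.map(found_in_region, regions))
    return [region for region, hit in zip(regions, hits) if hit]

print(search(RESOURCE_ID, REGIONS))  # -> ['sa-east-1']
```

Swapping the stub for a real API call leaves the structure unchanged: the pool issues all region lookups concurrently, so total time stays close to the latency of a single request.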


Randall Ross: Planet Ubuntu Needs More Awesome - Part 2

Planet Ubuntu - Sat, 2014-01-18 19:31

In Part 1, I presented some of the results of my surveys about Planet Ubuntu from late 2013. Didn't read the summary? There's still time! What better way to start your day?

With that behind us, let's dive into Part 2 of my promised summary along with additional bonus colour-commentary and recommendations not available anywhere else (at any price).

Planet Ubuntu needs a makeover.


Survey Says:
There is a strong indication that people want a "new and improved" Planet Ubuntu.

Colour Commentary:
I'm firmly of the same opinion. Planet Ubuntu looks creaky and awkward. It's a throwback to an earlier era of web design. Interactivity? Not there. It also doesn't present well on different form factors. Have you ever tried reading it on Ubuntu Touch? Were you happy with the result? I could go on and on, but suffice to say there's room for improvement.

Some of you might be thinking "Why bother? There are plenty of other social web platforms that we could use as an Ubuntu blog. Why not just use ______." The problem with the word that's usually on top of that blank is that it's always out of our control, often predatory, and usually a bad idea in the long run. The best chance we have to shape the personality of one of the most prominent sites about Ubuntu is to actually maintain control of it. Planet Ubuntu reflects on Ubuntu whether we want to admit it or not. Let's admit it and make Planet Ubuntu great again.

Randall Concludes:
Let's reboot it.


Survey Says:
Ignoring the fence-sitters, people want Ubuntu stories to have prominence, by a factor of two to one.

Colour Commentary:
I was a little surprised by how many people don't care one way or another. That aside, the majority vote for increased prominence of Ubuntu-related content is encouraging. I think this represents a good compromise for people who are insistent about blogging about non-Ubuntu topics on an Ubuntu site. (Yes, there are some who are.) Give them a small place, but not a place that detracts from the main event. Maybe the "real estate" a story gets should be proportional to the amount of Ubuntu content it contains. The mechanism for determining that would have to be designed, but it's an idea that has merit.

Randall Concludes:
Ubuntu-centric stories should be granted more prominence.


Survey Says:

Colour Commentary:
People have no idea how widely (or not) Planet Ubuntu is read. Some think it's amongst the top sites on the web, and others swear it's nothing but cobwebs and tumbleweeds. This isn't really surprising since the site doesn't publish any stats, and in the absence of data people will make up some wild assumptions. If we want Planet Ubuntu to have as wide a readership as possible, which IS what we want, then perhaps an important first step would be to insert analytics, or even a simple page view counter that can be graphed over time. That way, we'll be able to see if we're as popular as we need to be.

Randall Concludes:
Publish page view stats ASAP. We cannot improve what we cannot measure.


Survey Says:
People want Planet Ubuntu authors to abide by the Ubuntu Code of Conduct.

Colour Commentary:
This was a bit of an accidental poll. While I was in the midst of my polling activities an unfortunate article that was a clear violation of the CoC and in poor taste was posted. What surprised (and disappointed) me is how long it took to take it down. Thankfully it was removed, but who knows how many people saw the article and now associate Ubuntu with something crass and juvenile?

Adding even more disappointment, the article was from someone who wasn't even an Ubuntu Member any more. So, it should never even have been posted in the first place.

And, adding *even more* disappointment, an effort to clean up the list of people who could post to Planet Ubuntu had been languishing for months.

Randall Concludes:
Maintain the site. (Looking in the direction of Community Council). Take down CoC violations with haste (i.e. in minutes, not hours). If you don't have the time/bandwidth, then delegate, or increase your numbers.


Survey Says:
Nearly an even split.

Colour Commentary:
Given that there's a desire to make Ubuntu stories more prominent (see above), I'm curious to know what mechanism the people who don't want up-voting would use to make this happen. Perhaps an algorithm that scans for keywords and adjusts prominence accordingly? Or, maybe we could leave the decision to a panel of experts? I don't think either of these options has merit. I advocate that we use a system of up-voting by a group of people who are passionate about Ubuntu and are actively contributing to it day-in, day-out. Perhaps Ubuntu Members would be a good start for a group of up-voters?

Randall Concludes:
We need a reliable way to make Ubuntu articles prominent. Up-voting is that way.

To be continued...
I will wrap up the series in my next post with general conclusions and a prescription on how to make Planet Ubuntu awesome again. In the meantime, please share your thoughts in the comments.

Colin Watson: Testing wanted: GRUB 2.02~beta2 Debian/Ubuntu packages

Planet Ubuntu - Sat, 2014-01-18 00:48

This is mostly a repost of my ubuntu-devel mail for a wider audience, but see below for some additions.

I'd like to upgrade to GRUB 2.02 for Ubuntu 14.04; it's currently in beta. This represents a year and a half of upstream development, and contains many new features, which you can see in the NEWS file.

Obviously I want to be very careful with substantial upgrades to the default boot loader. So, I've put this in trusty-proposed, and filed a blocking bug to ensure that it doesn't reach trusty proper until it's had a reasonable amount of manual testing. If you are already using trusty and have some time to try this out, it would be very helpful to me. I suggest that you only attempt this if you're comfortable driving apt-get directly and recovering from errors at that level, and if you're willing to spend time working with me on narrowing down any problems that arise.

Please ensure that you have rescue media to hand before starting testing. The simplest way to upgrade is to enable trusty-proposed, upgrade ONLY packages whose names start with "grub" (e.g. use apt-get dist-upgrade to show the full list, say no to the upgrade, and then pass all the relevant package names to apt-get install), and then (very important!) disable trusty-proposed again. Provided that there were no errors in this process, you should be safe to reboot. If there were errors, you should be able to downgrade back to 2.00-22 (or 1.27+2.00-22 in the case of grub-efi-amd64-signed).

Please report your experiences (positive and negative) with this upgrade in the tracking bug. I'm particularly interested in systems that are complex in any way: UEFI Secure Boot, non-trivial disk setups, manual configuration, that kind of thing. If any of the problems you see are also ones you saw with earlier versions of GRUB, please identify those clearly, as I want to prioritise handling regressions over anything else. I've assigned myself to that bug to ensure that messages to it are filtered directly into my inbox.

I'll add a couple of things that weren't in my ubuntu-devel mail. Firstly, this is all in Debian experimental as well (I do all the work in Debian and sync it across, so the grub2 source package in Ubuntu is a verbatim copy of the one in Debian these days). There are some configuration differences applied at build time, but a large fraction of test cases will apply equally well to both. I don't have a definite schedule for pushing this into jessie yet - I only just finished getting 2.00 in place there, and the release schedule gives me a bit more time - but I certainly want to ship jessie with 2.02 or newer, and any test feedback would be welcome. It's probably best to just e-mail feedback to me directly for now, or to the pkg-grub-devel list.

Secondly, a couple of news sites have picked this up and run it as "Canonical intends to ship Ubuntu 14.04 LTS with a beta version of GRUB". This isn't in fact my intent at all. I'm doing this now because I think GRUB 2.02 will be ready in non-beta form in time for Ubuntu 14.04, and indeed that putting it in our development release will help to stabilise it; I'm an upstream GRUB developer too and I find the exposure of widely-used packages very helpful in that context. It will certainly be much easier to upgrade to a beta now and a final release later than it would be to try to jump from 2.00 to 2.02 in a month or two's time.

Even if there's some unforeseen delay and 2.02 isn't released in time, though, I think nearly three months of stabilisation is still plenty to yield a boot loader that I'm comfortable with shipping in an LTS. I've been backporting a lot of changes to 2.00 and even 1.99, and, as ever for an actively-developed codebase, it gets harder and harder over time (in particular, I've spent longer than I'd like hunting down and backporting fixes for non-512-byte sector disks). While I can still manage it, I don't want to be supporting 2.00 for five more years after upstream has moved on; I don't think that would be in anyone's best interests. And I definitely want some of the new features which aren't sensibly backportable, such as several of the new platforms (ARM, ARM64, Xen) and various networking improvements; I can imagine a number of our users being interested in things like optional signature verification of files GRUB reads from disk, improved Mac support, and the TrueCrypt ISO loader, just to name a few. This should be a much stronger base for five-year support.

Stéphane Graber: LXC 1.0: Unprivileged containers [7/10]

Planet Ubuntu - Fri, 2014-01-17 23:28

This is post 7 out of 10 in the LXC 1.0 blog post series.

Introduction to unprivileged containers

The support of unprivileged containers is in my opinion one of the most important new features of LXC 1.0.

You may remember from previous posts that I mentioned that LXC should be considered unsafe because while running in a separate namespace, uid 0 in your container is still equal to uid 0 outside of the container, meaning that if you somehow get access to any host resource through proc, sys or some random syscalls, you can potentially escape the container and then you’ll be root on the host.

That’s why user namespaces were designed and implemented. It was a multi-year effort to think them through and slowly push the hundreds of patches required into the upstream kernel, but finally with 3.12 we got to a point where we can start a full system container entirely as a user.

So how do those user namespaces work? Well, simply put, each user that’s allowed to use them on the system gets assigned a range of unused uids and gids, ideally a whole 65536 of them. You can then use those uids and gids with two standard tools called newuidmap and newgidmap which will let you map any of those uids and gids to virtual uids and gids in a user namespace.

That means you can create a container with the following configuration:

lxc.id_map = u 0 100000 65536
lxc.id_map = g 0 100000 65536

The above means that I have one uid map and one gid map defined for my container which will map uids and gids 0 through 65535 in the container to uids and gids 100000 through 165535 on the host.
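As a purely illustrative aside (this helper is not part of LXC), the translation such a map performs is simple offset arithmetic, which a few lines of Python make concrete:

```python
def map_id(container_id, first_host_id=100000, id_range=65536):
    """Translate a container uid/gid into its host value under a map
    like 'lxc.id_map = u 0 100000 65536'."""
    if not 0 <= container_id < id_range:
        raise ValueError("id is outside the mapped range")
    return first_host_id + container_id

print(map_id(0))   # uid 0 (root) in the container is uid 100000 on the host
print(map_id(33))  # e.g. uid 33 in the container becomes 100033
```

Anything falling outside the mapped range simply has no host identity, which is exactly why a full 65536-id range is recommended: it covers every uid and gid a standard distribution expects to exist.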

For this to be allowed, I need to have those ranges assigned to my user at the system level with:

stgraber@castiana:~$ grep stgraber /etc/sub* 2>/dev/null
/etc/subgid:stgraber:100000:65536
/etc/subuid:stgraber:100000:65536

LXC has now been updated so that all the tools are aware of those unprivileged containers. The standard paths also have their unprivileged equivalents:

  • /etc/lxc/lxc.conf => ~/.config/lxc/lxc.conf
  • /etc/lxc/default.conf => ~/.config/lxc/default.conf
  • /var/lib/lxc => ~/.local/share/lxc
  • /var/lib/lxcsnaps => ~/.local/share/lxcsnaps
  • /var/cache/lxc => ~/.cache/lxc

Your user, while it can create new user namespaces in which it’ll be uid 0 and will have some of root’s privileges over resources tied to that namespace, will obviously not be granted any extra privileges on the host.

One such thing is creating new network devices on the host or changing bridge configuration. To work around that, we wrote a tool called “lxc-user-nic” which is the only setuid binary in LXC 1.0 and which performs one simple task: it parses a configuration file and, based on its content, creates network devices for the user and bridges them. To prevent abuse, you can restrict the number of devices a user can request and which bridge they may be added to.

An example is my own /etc/lxc/lxc-usernet file:

stgraber veth lxcbr0 10

This declares that the user “stgraber” is allowed to create up to 10 veth-type devices and have them added to the bridge called lxcbr0.

Between what’s offered by the user namespace in the kernel and that setuid tool, we’ve got all that’s needed to run most distributions unprivileged.


All examples and instructions I’ll be giving below are expecting that you are running a perfectly up to date version of Ubuntu 14.04 (codename trusty). That’s a pre-release of Ubuntu so you may want to run it in a VM or on a spare machine rather than upgrading your production computer.

The reason to want something that recent is because the rough requirements for well working unprivileged containers are:

  • Kernel: 3.13 + a couple of staging patches (which Ubuntu has in its kernel)
  • User namespaces enabled in the kernel
  • A very recent version of shadow that supports subuid/subgid
  • Per-user cgroups on all controllers (which I turned on a couple of weeks ago)
  • LXC 1.0 beta2 or higher (released two days ago)
  • A version of PAM with a loginuid patch that’s yet to be in any released version

Those requirements happen to all be true of the current development release of Ubuntu as of two days ago.

LXC pre-built containers

User namespaces come with quite a few obvious limitations. For example in a user namespace you won’t be allowed to use mknod to create a block or character device as being allowed to do so would let you access anything on the host. Same thing goes with some filesystems, you won’t for example be allowed to do loop mounts or mount an ext partition, even if you can access the block device.

Those limitations, while not necessarily world-ending in day-to-day use, are a big problem during the initial bootstrap of a container, as tools like debootstrap, yum, … usually try to perform some of those restricted actions and will fail pretty badly.

Some templates may be tweaked to work, and workarounds such as a modified fakeroot could be used to bypass some of those limitations, but the goal of the LXC project isn’t to require all of our users to be distro engineers, so we came up with a much simpler solution.

I wrote a new template called “download” which instead of assembling the rootfs and configuration locally will instead contact a server which contains daily pre-built rootfs and configuration for most common templates.

Those images are built from our Jenkins server using a few machines I have on my home network (a set of powerful x86 builders and a quadcore ARM board). The actual build process is pretty straightforward: a basic chroot is assembled, then the current git master is downloaded and built, and the standard templates are run with the right release and architecture; the resulting rootfs is compressed, a basic config and metadata (expiry, files to template, …) are saved, and the result is pulled by our main server, signed with a dedicated GPG key and published on the public web server.

The client side is a simple template which contacts the server over https (the domain is also DNSSEC-enabled and available over IPv6), grabs signed indexes of all the available images, checks whether the requested combination of distribution, release and architecture is supported and, if it is, grabs the rootfs and metadata tarballs, validates their signature and stores them in a local cache. Any container creation after that point is done using that cache until the cache entries expire, at which point it’ll grab a new copy from the server.

The current list of images is (as can be requested by passing --list):

---
DIST    RELEASE  ARCH   VARIANT  BUILD
---
debian  wheezy   amd64  default  20140116_22:43
debian  wheezy   armel  default  20140116_22:43
debian  wheezy   armhf  default  20140116_22:43
debian  wheezy   i386   default  20140116_22:43
debian  jessie   amd64  default  20140116_22:43
debian  jessie   armel  default  20140116_22:43
debian  jessie   armhf  default  20140116_22:43
debian  jessie   i386   default  20140116_22:43
debian  sid      amd64  default  20140116_22:43
debian  sid      armel  default  20140116_22:43
debian  sid      armhf  default  20140116_22:43
debian  sid      i386   default  20140116_22:43
oracle  6.5      amd64  default  20140117_11:41
oracle  6.5      i386   default  20140117_11:41
plamo   5.x      amd64  default  20140116_21:37
plamo   5.x      i386   default  20140116_21:37
ubuntu  lucid    amd64  default  20140117_03:50
ubuntu  lucid    i386   default  20140117_03:50
ubuntu  precise  amd64  default  20140117_03:50
ubuntu  precise  armel  default  20140117_03:50
ubuntu  precise  armhf  default  20140117_03:50
ubuntu  precise  i386   default  20140117_03:50
ubuntu  quantal  amd64  default  20140117_03:50
ubuntu  quantal  armel  default  20140117_03:50
ubuntu  quantal  armhf  default  20140117_03:50
ubuntu  quantal  i386   default  20140117_03:50
ubuntu  raring   amd64  default  20140117_03:50
ubuntu  raring   armhf  default  20140117_03:50
ubuntu  raring   i386   default  20140117_03:50
ubuntu  saucy    amd64  default  20140117_03:50
ubuntu  saucy    armhf  default  20140117_03:50
ubuntu  saucy    i386   default  20140117_03:50
ubuntu  trusty   amd64  default  20140117_03:50
ubuntu  trusty   armhf  default  20140117_03:50
ubuntu  trusty   i386   default  20140117_03:50

The template has been carefully written to work on any system that has a POSIX-compliant shell with wget. gpg is recommended but can be disabled if your host doesn’t have it (at your own risk).

The same template can be used against your own server, which I hope will be very useful for enterprise deployments to build templates in a central location and have them pulled by all the hosts automatically using our expiry mechanism to keep them fresh.

While the template was designed to work around limitations of unprivileged containers, it works just as well with system containers, so even on a system that doesn’t support unprivileged containers you can do:

lxc-create -t download -n p1 -- -d ubuntu -r trusty -a amd64

And you’ll get a new container running the latest build of Ubuntu 14.04 amd64.

Using unprivileged LXC

Right, so let’s get you started, as I already mentioned, all the instructions below have only been tested on a very recent Ubuntu 14.04 (trusty) installation.
You may want to grab a daily build and run it in a VM.

Install the required packages:

  • sudo apt-get update
  • sudo apt-get dist-upgrade
  • sudo apt-get install lxc systemd-services uidmap

Now a quick workaround while we wait to have our new cgroup manager in Ubuntu, put the following into /etc/init/lxc-unpriv-cgroup.conf:

start on starting systemd-logind and started cgroup-lite
script
    set +e
    echo 1 > /sys/fs/cgroup/memory/memory.use_hierarchy
    for entry in /sys/fs/cgroup/*/cgroup.clone_children; do
        echo 1 > $entry
    done
    exit 0
end script

This trick is required because logind doesn’t configure use_hierarchy or clone_children the way LXC needs them.

Now, reboot your machine for those cgroups to get properly reconfigured.

Then, assign yourself a set of uids and gids with:

  • sudo usermod --add-subuids 100000-165536 $USER
  • sudo usermod --add-subgids 100000-165536 $USER

Now create ~/.config/lxc/default.conf with the following content:

lxc.network.type = veth
lxc.network.link = lxcbr0
lxc.network.flags = up
lxc.network.hwaddr = 00:16:3e:xx:xx:xx
lxc.id_map = u 0 100000 65536
lxc.id_map = g 0 100000 65536

And /etc/lxc/lxc-usernet with:

<your username> veth lxcbr0 10

And that’s all you need. Now let’s create our first unprivileged container with:

lxc-create -t download -n p1 -- -d ubuntu -r trusty -a amd64

You should see the following output from the download template:

Setting up the GPG keyring
Downloading the image index
Downloading the rootfs
Downloading the metadata
The image cache is now ready
Unpacking the rootfs

---
You just created an Ubuntu container (release=trusty, arch=amd64).
The default username/password is: ubuntu / ubuntu
To gain root privileges, please use sudo.

So looks like your first container was created successfully, now let’s see if it starts:

ubuntu@trusty-daily:~$ lxc-start -n p1 -d
ubuntu@trusty-daily:~$ lxc-ls --fancy
NAME  STATE    IPV4     IPV6     AUTOSTART
------------------------------------------
p1    RUNNING  UNKNOWN  UNKNOWN  NO

It’s running! At this point, you can get a console using lxc-console or can SSH to it by looking for its IP in the ARP table (arp -n).

One thing you probably noticed above is that the IP addresses for the container aren’t listed; that’s because, unfortunately, LXC currently can’t attach to an unprivileged container’s namespaces. That also means that some fields of lxc-info will be empty and that you can’t use lxc-attach. However, we’re looking into ways to get that sorted in the near future.

There are also a few problems with job control in the kernel and with PAM, so doing a non-detached lxc-start will probably result in a rather weird console where things like sudo will most likely fail. SSH may also fail on some distros. A patch has been sent upstream for this, but I just noticed that it doesn’t actually cover all cases and even if it did, it’s not in any released version yet.

Quite a few more improvements to unprivileged containers are to come before the final 1.0 release next month, and while we certainly don’t expect all workloads to be possible with unprivileged containers, it’s still a huge improvement on what we had before and a very good building block for a lot more interesting use cases.

Randall Ross: Planet Ubuntu Needs More Awesome - Part 1

Planet Ubuntu - Fri, 2014-01-17 19:47

Could Planet Ubuntu be made more awesome? How would we do it? Where would we start? Perhaps we'd start by seeing who reads it and what those people actually think about it. During the latter weeks of 2013 I conducted a series of polls (on my blog) to determine just that. Going into this effort, I had my opinions. Some of them were validated. Some were not.

What follows is my promised summary of the survey results along with my bonus colour-commentary and recommendations.

Planet Ubuntu has the potential to be *much* more awesome, and we should seriously consider making it *the* place to visit for all things Ubuntu.


Survey Says:
Members outnumber non-members by a factor of two (give-or-take) as readers of Planet Ubuntu.

Colour Commentary:
I was surprised that the margin wasn't a *lot* bigger. I was expecting a factor of at least 10:1, and certainly not 2:1! Planet Ubuntu is an echo chamber - a place where primarily Ubuntu members speak to themselves. Could we do better? Yes! Why not make Planet Ubuntu a place for everyone? Why not make it *the* place where people (of all types, not just Ubuntu members) come to get the latest information about Ubuntu's collaborators and their Ubuntu thoughts? If this were to be our focus, I think we'd see a lot better news and information on other sites too as they would be hard-pressed to slant their stories (or to miss the point entirely) when the people who are actually building Ubuntu are presenting matters clearly and to a wide audience on a very public site.

Randall Concludes:
We need to make Planet Ubuntu appeal to everyone. We need it to be read primarily by non-members. The ratio needs to be at least 1000:1.


Survey Says: By a wide margin, readers perceive that they derive value from Planet Ubuntu.

Colour Commentary:
This is a very surprising result, and I must admit, I don't share the same opinion. To me, Planet Ubuntu has drifted far away from where it was a few years ago. I can recall visiting daily in those days and reading all kinds of really interesting news and commentary, mostly about Ubuntu, and specifically from people who were at the core of a lot of Ubuntu goings-on. Now, when I come to Planet Ubuntu, I am generally disappointed to find that most of it is not about Ubuntu, and the core contributors rarely chime in. Instead, we have people that perhaps were once really interested and involved in Ubuntu who now have new pet-projects and want to showcase them. I'm all for learning about what people are working on these days, but Planet Ubuntu ought to be mainly about Ubuntu.

Randall Concludes:
I'm going to agree to disagree and be in the minority here. Planet Ubuntu is not as useful as it could be. We are aiming too low.


Survey Says: People primarily want Planet Ubuntu posts to be written by "Ubuntu people", but not necessarily Ubuntu members. Then there are some who want to introduce a Canonical relationship, but only for authors who don't work for Canonical.

Colour Commentary:
I'm surprised by the Canonical (or Mark) selected result. This tells me there is pent-up demand for a more official voice, or more core-contributor stories but reluctance to restrict to the direct voice of Canonical employees.

There is also demand to let in authors who are Ubuntu contributors but not necessarily Ubuntu members. Overall the data suggests a need to expand authorship. Perhaps Ubuntu Members aren't pulling their blogging weight.

Randall Concludes:
Let's open up Planet Ubuntu to people who have real passion for Ubuntu and who live and breathe it daily. That might mean forgoing the requirement to be an Ubuntu Member, and replacing that requirement with something along the lines of "must have a demonstrated and sustained passion for Ubuntu".


Survey Says: Most people want Planet Ubuntu to be about Ubuntu.

Colour Commentary:
This is expected, and I fully agree. I want Planet Ubuntu to live up to its name. I would love it if people writing there would keep their articles focused, at least to the point there's a clear Ubuntu tie-in. That's what makes it worthy of a read instead of the gerjillion other sites on the web that aren't about Ubuntu. Do we really need a site that has Ubuntu in its name that is primarily not about Ubuntu?

Here's an anecdote. Back in the early 2000's, 2002 I recall, there was a period of time where the USA (government) was considering an invasion of Iraq. At the same time, there was a large popular movement and demonstrations/marches against the idea. I was in San Francisco at the time, and recall seeing large numbers of people marching on Van Ness Ave, near SF City Hall carrying placards saying "No Invasion", "Peace not War", i.e. stuff one would expect to see at an anti-invasion demonstration. In the same march, I also saw placards with slogans like "End Poverty", "Stop Animal Cruelty", and other noteworthy causes. What struck me about this was just how out of place they were and how obvious it was that some were trying to capitalize on the popularity of the demonstration for other "pet" causes. I was saddened that the main cause was being diluted. (Note that I'm not saying these other causes were not worthy, but I am saying that this was not the place for them. People were trying to stop a war.)

Randall Concludes:
I'm going to advocate that we keep Planet Ubuntu about Ubuntu and encourage everyone who writes here to respect the stated title of the site.


Survey Says: Planet Ubuntu is a "watering hole". Our readers come here often.

Colour Commentary:
This is encouraging in that it shows that we have reader loyalty. People keep coming back. This doesn't say why though. Are people coming back in the hopes that there will be something interesting about Ubuntu? When they arrive are they pleasantly surprised? Or, are they like me and longing for more Ubuntu? This survey question begs for more questions to get at the reasons.

Randall Concludes:
We have loyal readers. Let's find out why.

To be continued...
I will continue with a summary of results 6 through 10 soon. In the meantime, please share your thoughts in the comments.

Steve Conklin: Ubuntu File System Benchmarking

Planet Ubuntu - Fri, 2014-01-17 17:48


I’ve been working to implement file system benchmarking as part of the test process that the kernel team applies to every kernel update. These are intended to help us spot performance issues. The following announcement I just sent to the Ubuntu kernel mailing list covers the specifics:


The Ubuntu kernel team has implemented the first of what we hope will be a growing set of benchmarks which are run against Ubuntu kernel releases. The first two benchmarks to be included are iozone file system tests, with and without fsync enabled. These are being run as part of the testing applied to all kernel releases.

== Disclaimers == READ THESE CAREFULLY

1. These benchmarks are not intended to indicate any performance metrics in any real world or end user situations. They are intended to expose possible performance differences between releases, and not to reflect any particular use case.
2. Fixes for file system bugs reduce performance in some cases. Performance decreases between releases may be a side effect of fixing bugs, and not a bug in themselves.
3. While assessments of performance are valuable, they are not the only criteria that should be used to select a file system. In addition to benchmarks, file systems must be tested for a variety of use cases and verified for correctness under a variety of conditions.

== General Information ==

1. The top level benchmarking results page is located here: This page is linked from the top level index at
2. The tests are run on the same bare-metal hardware for each release, on spinning magnetic media.
3. Test partitions are sized at twice system memory size to prevent the entire test data set from being cached.
4. File systems tested are ext2, ext3, ext4, xfs, and btrfs.
5. For each release, each test is run on each file system five times, and then the results are averaged.

== Types of results ==

There are three types of results. To find performance regressions, we (the Ubuntu kernel team) are primarily interested in the second and third types.

1. The iozone test generates charts of the data for each individual file system type. To navigate to these, select the links under the "Ran" or "Passed" columns in the list of results for each benchmark, then select the test name ("iozone", for example) from that page. The graphs for each run for each file system type will be available from that page in the "Graphs" column. The second and third result sets are generated by the iozone-results-comparator tool, located here:
2. Charts comparing performance among all tested file systems for each individual release. To navigate to these, select the links under the "Ran" or "Passed" columns in the list of results, then select the "charts" link at the top of that page.
3. Charts comparing different releases to each other. These comparisons are generated for each file system type, and are linked at the bottom of the index page for each benchmark. These comparisons include:
3A. Comparison between the latest kernel for each Ubuntu series (i.e. raring, saucy, etc).
3B. Comparison between the latest kernel for each LTS release.
3C. Comparison of successive versions within each series.

James Page: Call for testing: Juju and gccgo

Planet Ubuntu - Fri, 2014-01-17 14:24

Today I uploaded juju-core 1.17.0-0ubuntu2 to the Ubuntu Trusty archive.

This version of the juju-core package provides Juju binaries built using both the golang gc compiler and the gccgo 4.8 compiler that we have for 14.04.

The objective for 14.04 is to have a single toolchain for Go that can support x86, ARM and Power architectures. Currently the only way we can do this is to use gccgo instead of golang-go.

This initial build still only provides packages for x86 and armhf; other architectures will follow once we have sorted out exactly how to provide the ‘go’ tool on platforms other than these.

By default, you’ll still be using the golang gc built binaries; to switch to using the gccgo built versions:

sudo update-alternatives --set juju /usr/lib/juju-1.17.0-gcc/bin/juju

and to switch back:

sudo update-alternatives --set juju /usr/lib/juju-1.17.0/bin/juju

Having both versions available should make diagnosing any gccgo specific issues a bit easier.

To push the local copy of the jujud binary into your environment use:

juju bootstrap --upload-tools

This is not recommended for production use but will ensure that you are testing the gccgo built binaries on both client and server.

Thanks to Dave Cheney and the rest of the Juju development team for all of the work over the last few months to update the codebases for Juju and its dependencies to support gccgo!

Stephen M. Webb: The future looks very small.

Planet Ubuntu - Fri, 2014-01-17 00:00

I have a new toy. I didn’t get it because I’m hip, although I am, I got it because we’re trying to prepare Unity 7 on the Trusty Tahr (Ubuntu 14.04 LTS) for the next generation of hardware that will be sitting on everyone’s desk (or lap, or table in the coffee shop) within a few years. I got a laptop with a high-DPI (dots per inch) 4K display and a sensitive touchscreen.

This particular piece of furniture is a Lenovo Yoga 2 Pro, sporting a 3200×1800 pixel 10-touch touchscreen in a 13 inch form factor. That works out to a pixel density of about 280 pixels per inch, much more refined than my main laptop (a Lenovo ThinkPad T410, 1440×900 at 14 inches) which sits at 120 pixels per inch and the external monitor I have attached to it (a Benq FP22W, 1680×1050 at 22 inches) at 95 pixels per inch. Sure, spec ennui, but it’s germane to the topic here.

The problem is that out of the box, most GUI software assumes it’s running on a display device, regardless of its dimensions, with a dot pitch of 96 dots per inch. It’s true for Microsoft Windows and it’s true for GNU/Linux, although I’ve been out of the Apple Macintosh world long enough to plead ignorance there. I know it’s true of Microsoft Windows because the Yoga 2 Pro came with Microsoft Windows 8.1 preinstalled by the manufacturer, and I had a brief chance to test it out before I got to work. IE displayed web pages in teeny weeny characters, and when I opened COMMAND.COM (or whatever the name of the command console is these days) to create a rescue image, it defaulted to using 8×8 bitmapped fonts. The eyestrain of finding the reconfiguration option felt like it made my corneas bleed.

We have the same problem in Ubuntu. When I installed a prerelease image of Trusty on the Yoga 2 Pro the GRUB2 menu was so tiny I couldn’t read it (bitmapped fonts again). Fortunately, the default was sensible and the system booted OK. Unity 7, of course, was similarly unusable, as was the Terminal, the Browser, and pretty much everything else. Ouch.

The problem is rooted in the invalid assumption that all display devices have a dot pitch of 96 pixels per inch. I already experience this with my dual-monitor setup, although it’s less noticeable with 120 vs. 95 DPI. It is just not a valid assumption.

See, a character in a 12 point font needs to appear to be 12 points. That’s 1 pica. One sixth of an inch. The size of a 12 point character should not vary depending on the resolution of your monitor. There’s a caveat, though, in that when I said ‘appear’ what I meant was at a comfortable viewing distance. Turns out that for the best human interface we do in fact want the absolute size of text to change depending on the viewing distance so that there is a constant angle subtended by the display. Er, that means things that are farther away need to be bigger so they seem the same size. Got it? Think: projectors. Turns out people use phones and tablets up close, so smaller is OK, but they use their laptops and desktops farther away so smaller is no good.
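The constant-angle idea can be sketched numerically. This is only a toy illustration; the 12-inch and 24-inch viewing distances below are my own assumed figures, not measurements from any real device:

```python
import math

def subtended_angle_deg(size_inches, distance_inches):
    """Angle (in degrees) that an object of the given size subtends at the eye."""
    return math.degrees(2 * math.atan(size_inches / (2.0 * distance_inches)))

def size_for_same_angle(size_at_ref, ref_distance, new_distance):
    """Physical size needed at new_distance to subtend the same angle as at ref_distance."""
    angle = subtended_angle_deg(size_at_ref, ref_distance)
    return 2 * new_distance * math.tan(math.radians(angle) / 2.0)

# A 12pt glyph is 1/6 inch tall. If it reads comfortably on a phone held
# at an assumed 12 inches, a laptop viewed at an assumed 24 inches needs
# glyphs twice as tall to subtend the same angle.
phone_glyph = 1.0 / 6.0
laptop_glyph = size_for_same_angle(phone_glyph, 12.0, 24.0)
```

Because the angles involved are small, this collapses to simple proportionality: doubling the viewing distance doubles the physical size required.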

This is where I need a diagram as a visual aid, but I’m afraid my drawing skills have rusted out and are at the shop for repairs. If someone wants to contribute one, that’d be great.

So, what we need to do for Unity running on the desktop is to figure out the physical dot pitch for each physical display connected to the system, calculate the scaling factor that would convert to a fixed 96 pixels per inch, then scale the fonts by that much. Other metrics need to be expressed in terms of ems (another measure based on the current font size — a term that comes from the days of hot metal) and graphics scaled accordingly.
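As a back-of-the-envelope check of the numbers in this post, dot pitch and the font scaling factor can be computed from a panel's resolution and diagonal. The 13.3-inch and 14.1-inch diagonals below are my assumptions for these particular panels:

```python
import math

def dot_pitch_dpi(width_px, height_px, diagonal_inches):
    """Physical pixel density (pixels per inch) from resolution and diagonal size."""
    diagonal_px = math.hypot(width_px, height_px)
    return diagonal_px / diagonal_inches

def font_scale_factor(dpi, reference_dpi=96.0):
    """Factor by which fonts must be scaled so text keeps its physical size."""
    return dpi / reference_dpi

yoga = dot_pitch_dpi(3200, 1800, 13.3)  # roughly 276 DPI, matching "about 280 ppi"
t410 = dot_pitch_dpi(1440, 900, 14.1)   # roughly 120 DPI
```

On the high-DPI panel the scale factor works out to nearly 3×, which is why unscaled 96-DPI text looks so tiny there.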

But wait, we don’t want to scale windows if we don’t have to. We don’t want to waste the “retina” display, we just want text to be readable (and controls to be usable). At this point, we’re looking at making sure the Unity Launcher, the Unity panel, the Quicklists, and the Shortcuts are all usable out of the box on a high DPI display, because I have one and I tell you it’s not too usable right now.

A lot of applications are not going to work perfectly on high DPI, including the browsers and the office suites. We’re thinking of adding some optional window scaling through Compiz to help out with those but time is rapidly flowing and there’s a lot of work to do. Stand by for updates. As always, patches are welcome.

Michael Zanetti: Little sneak previews

Planet Ubuntu - Thu, 2014-01-16 23:28

Recently I’ve been busy working on the right edge gesture in Ubuntu Touch. Here’s a little sneak preview:

Your browser does not support HTML5 video

Note that this is still work in progress and probably won’t hit the image really soon.

If you’ve been paying attention while watching the video you might have noticed that I used Ubuntu Touch to control my living room lighting. That is done with a set of Philips Hue lightbulbs, which I got at the end of last year, and a soon-to-be-released app called “Shine”, running on Ubuntu Touch, MeeGo and Desktops.

Stay tuned!

Jono Bacon: Growing an Active Ubuntu Advocacy Community

Planet Ubuntu - Thu, 2014-01-16 23:13

Like many of you, I am tremendously excited about Ubuntu’s future. We are building a powerful convergence platform across multiple devices with a comprehensive developer platform at the heart of it. This could have a profound impact on users and developers alike.

Now, you have all heard me and my team rattling on about this for a while, but we also have a wonderful advocacy community in Ubuntu in the form of our LoCo Teams who are spreading the word. I want to explore ways to help support and grow the events and advocacy that our LoCo Teams are doing.

I had a conversation with Jose on the LoCo Council about this today, and I think we have a fun plan to move forward with. We are going to need help though, so please let me know in the comments if you can participate.

Step 1: Ubuntu Advocacy Kit

The Ubuntu Advocacy Kit is designed to provide a one-stop shop of information, materials (e.g. logos, brochures, presentations), and more for doing any kind of Ubuntu advocacy. Right now it needs a bit of a spring clean, which I am currently working on.

I think we need to get as many members of our community to utilize the kit. With this in mind we are going to do a few things:

  • Get the kit cleaned up and up to date.
  • Get it linked on and encourage our community to use it.
  • Encourage our community to contribute to the kit and add additional content.
  • Grow the team that maintains the kit.

Help needed: great writers and editors.

Step 2: Advocacy App

The Ubuntu Advocacy Kit works offline. This was a conscious decision with a few benefits:

  1. It makes it easier to know you have all relevant content without having to go to a website and download all the assets. When you have the kit, you have all the materials.
  2. The kit can be used offline.
  3. The kit can be more easily shared.
  4. When people contribute to the kit it feels like you are making something, as opposed to adding docs to a website. This increases the sense of ownership.

With the kit being contained in an offline HTML state (and the source material in reStructured Text) it means that it wouldn’t be that much work to make a click package of the kit that we can ship on the phone, tablet, and desktop.

Just imagine that: you can use the click store to install the Ubuntu Advocacy Kit and have all the information and materials you need, right from the palm of your hand on your phone, tablet, or desktop.

The current stylesheet for the kit doesn’t render well on a mobile device, so it would be handy if we could map the top-level nav (Documentation, Materials etc) to tabs in an app.

We could also potentially include links to other LoCo resources (e.g. an RSS feed view of news from and a list of teams).

If you would be interested in working on this, let me know.

Help needed: Ubuntu SDK programmers and artists.

Step 3: Hangout Workshops

I am going to schedule some hangout workshops to go through some tips of how to organize and run LoCo events and advocacy campaigns, and use the advocacy kit as the source material for the workshop. I hope this will result in more events being coordinated.

Help needed: LoCo members who want to grow their skills.

Step 4: LoCo Portal

We also want to encourage wider use of so our community can get a great idea of the pulse of advocacy, events, and more going on.

Help needed: volunteers to run events.

Feedback and volunteers are most welcome!

Stuart Langridge: Posting to Discourse via the Discourse REST API from Python

Planet Ubuntu - Thu, 2014-01-16 20:05

The Bad Voltage forum is run by Discourse. As part of posting a new episode, I wanted to be able to send a post to the forum from a script. Discourse has a REST API but it’s not very well documented, at least partially because it’s still being worked on. So if you read this post two years after it was written, it might be entirely full of lies. Still, I managed to work out how to post to Discourse from a Python script, and here’s an example script to do just that.

First, you’ll need an API key. If you’re the forum administrator, which I am, you can generate one of these from http://YOURFORUM/admin/api. It is not clear to me exactly what this API key does: in particular, I suspect that it is a key with total admin rights over the forum, so don’t share it around. If there’s a way of making an API key with limited rights to just create posts and that’s it, I don’t know that way; if you do know that way, tell me! Once you’ve got your API key, and your username, fill them into the script as APIKEY and APIUSERNAME.

import requests # apt-get install python-requests
import json # for parsing the JSON response from Discourse
# based on
# log all the things so you can see what's going on
import logging
import httplib
httplib.HTTPConnection.debuglevel = 1
logging.basicConfig()
logging.getLogger().setLevel(logging.DEBUG)
requests_log = logging.getLogger("requests.packages.urllib3")
requests_log.setLevel(logging.DEBUG)
requests_log.propagate = True

# api key created in discourse admin. probably super-secret, so don't tell anyone.
APIKEY = "whatever the api key is"
APIUSERNAME = "your username"
QSPARAMS = {"api_key": APIKEY, "api_username": APIUSERNAME}
FORUM = "http://url for your forum/" # with the slash on the end

# First, get a session cookie
r = requests.get(FORUM, params=QSPARAMS)
SESSION_COOKIE = r.cookies["_forum_session"]

# Now, send a post, passing the session cookie along
post_details = {
    "title": "Title of the new topic",
    "raw": "Body text of the post",
    "category": 7, # get the category ID from the admin
    "archetype": "regular",
    "reply_to_post_number": 0
}
r = requests.post(FORUM + "posts", params=QSPARAMS,
                  data=post_details,
                  cookies={"_forum_session": SESSION_COOKIE})
print "Various details of the response from discourse"
print r.text, r.headers, r.status_code
disc_data = json.loads(r.text)
disc_data["FORUM"] = FORUM
print "The link to your new post is: "
print "%(FORUM)st/%(topic_slug)s/%(topic_id)s" % disc_data

Harald Sitter: Neon 5′s Many PPAs & APT

Planet Ubuntu - Thu, 2014-01-16 07:27

Project Neon 5, the KDE Frameworks 5 version of Kubuntu‘s continuous KDE software delivery system, has more than one package repository, balancing quality and update frequency in different ways. This post is meant to help those of you who, like me, wish to use whatever works best and therefore need to switch PPAs from time to time.

APT really comes in handy for this, as it allows you to move pretty freely between versions of a package by increasing the pinning priority of a different PPA.

First you add all three repositories and create a file /etc/apt/preferences.d/kf5 containing:

Package: *
Pin: release o=LP-PPA-neon-kf5-snapshot-weekly
Pin-Priority: 350

Package: *
Pin: release o=LP-PPA-neon-kf5-snapshot-daily
Pin-Priority: 250

Package: *
Pin: release o=LP-PPA-neon-kf5
Pin-Priority: 150

Now all you need to do is increase the priority of any of the entries to switch to that snapshot and run

sudo apt-get update && sudo apt-get dist-upgrade

APT will then automagically move your entire KDE Frameworks 5 stack to the version in the PPA with the highest priority.

So magic.

David Murphy: HP Chromebook 14 + DigitalOcean (and Ubuntu) = Productivity

Planet Ubuntu - Wed, 2014-01-15 21:09

Although I still use my desktop replacement (i.e., little-to-no battery life) for a good chunk of my work, recent additions to my setup have resulted in some improvements that I thought others might be interested in.

For Christmas just gone my wonderful wife Suzanne – and my equally wonderful children, but let’s face it was her money not theirs! – bought me a HP Chromebook 14. Since the Chromebooks were first announced, I was dismissive of them, thinking that at best they would be a cheap laptop to install Ubuntu on. However over the last year my attitudes had changed, and I came to realise that at least 70% of my time is spent in some browser or other, and of the other 30% most is spent in a terminal or Sublime Text. This realisation, combined with the improvements Intel Haswell brought to battery life made me reconsider my position and start seriously looking at a Chromebook as a 2nd machine for the couch/coffee shop/travel.

I initially focussed on the HP Chromebook 11 and while the ARM architecture didn’t put me off, the 2GB RAM did. When I found the Chromebook 14 with a larger screen, 4GB RAM and Haswell chipset, I dropped enough subtle hints and Suzanne got the message.

So Christmas Day came and I finally got my hands on it! First impressions were very favourable: this neither looks nor feels like a £249 device. ChromeOS was exactly what I was expecting, and generally gets out of my way. The keyboard is superb, and I would compare it in quality to that of my late MacBook Pro. Battery life is equally superb, and I’m easily getting 8+ hours at a time.

Chrome – and ChromeOS – is not without limitations though, and although a new breed of in-browser environments such as Codebox, Koding, and Cloud9 is giving more options to developers, what I really want is a terminal. Enter Secure Shell from Google – SSH in your browser (with public key authentication). This lets me connect to any box of my choosing, and although I could have just connected back to my desk-bound laptop, I would still be limited to my barely-deserves-the-name-broadband ADSL connection.

So, with my Chromebook and SSH client in place, DigitalOcean was my next port of call, using their painless web interface to create an Ubuntu-based droplet. Command Line Interfaces are incredibly powerful, and despite claims to the contrary, most developers spend most of their time with them1. There are a plethora of tools to improve your productivity, and my three must-haves are:

With this droplet I can do pretty much anything I need that ChromeOS doesn’t provide, and connect through to the many other droplets, linodes, EC2 nodes, OpenStack nodes and other servers I use personally and professionally.

In some other posts I’ll expand on how I use (and – equally importantly – how I secure) my DigitalOcean droplets, and which “apps” I use with Chrome.

  1. The fact that I now spend most of my time in the browser and not on the command-line shows you that I’ve settled into my role as an engineering manager! :-) 

The post HP Chromebook 14 + DigitalOcean (and Ubuntu) = Productivity appeared first on David Murphy.

Hollman Enciso: Resizing CloudStack Data and Root Volumes on XenServer

Planet Ubuntu - Wed, 2014-01-15 16:09

As of today, CloudStack in its latest stable version (4.2.1) lacks an important feature: resizing Root volumes, the volume where the instances' operating system is first installed.

For example, suppose we create an instance with certain hardware resources and 20 GB of storage, on which we install either Windows or Linux, regardless of the partitioning scheme. CloudStack does NOT currently allow us to resize this volume that was provisioned at 20 GB from the start (the feature will be available in the next version, 4.3).

CloudStack defines a volume as a unit of storage available to the virtual machines. It defines two types of volumes, root disks and data disks. Root disks contain the root “/” of the filesystem and usually the boot partition; data disks are used as additional storage, for example /home or the D:\ drive on Windows.

To resize DATA volumes, we must first of all have installed the XenServer tools on the instance, then go to the Storage tab, select the DATA volume to resize, choose the Resize Volume option, pick the new size, and that's it.

Resizing Root volumes is more complex because, as I mentioned earlier, CloudStack lacks this feature. First we must stop the instance; once it is in the stopped state, we log in to our XenServer and select our primary storage, then open the properties of the volume to resize. Once in the properties, we go to the Size and Location tab and change the partition size. In this example my partition was 20 GB and I increased it to 30 GB on a Windows Server instance.

Now we must tell our instance, in this case Windows, to extend the root volume, that is, C:\. We do this from the Disk Manager, which should show the 20 GB partition plus the additional 10 GB we added, ready to be extended. Right-click on C, choose Extend, select the number of bytes to extend, and accept.

Now we have a problem: in CloudStack the volume is still reported as a 20 GB volume, not 30 GB.

We must edit CloudStack's “volumes” table and set the new value. We connect with mysql to the cloud database, or whichever one we defined for our management server:

# mysql -u root -p

mysql> use cloud;

We run a select to identify the ID of my instance's volume, which in my case is ROOT-5:

select * from volumes\G

With the id identified, we update the size field to the new value, which must be in bytes:

mysql> update volumes set size=32212254720 where id=6;
Query OK, 1 row affected (0.00 sec)
Rows matched: 1  Changed: 1  Warnings: 0

We stop and start the instance so it reads the new values from the database, and that's it. Check the volume again in CloudStack's Storage tab; it should now show the new value, 30 GB.
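As a sanity check on the size value (just arithmetic, not an official CloudStack tool): the size column holds bytes, and the 30 GB here is treated as 30 GiB:

```python
def gib_to_bytes(gib):
    # CloudStack's volumes.size column holds bytes; 1 GiB = 1024**3 bytes
    return gib * 1024 ** 3

print(gib_to_bytes(30))  # 32212254720, the value used in the UPDATE statement
print(gib_to_bytes(20))  # 21474836480, the old 20 GB size
```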

For more information, read the official documentation.

Related Posts

Hollman Enciso: Installing CloudStack Management Server on Ubuntu

Planet Ubuntu - Wed, 2014-01-15 15:12

After my previous blog post on how to install CloudStack Management Server on CentOS 6.x, some friends have asked me whether it can be installed on Ubuntu, whether the DEBs or a repository are available, and so on. The answer is yes, you can, and it is as simple as on CentOS, straight from the repositories.

It is then up to the administrator which OS to work on, or which one they trust more.

The minimum hardware requirements are:

The hardware requirements recommended in the official CloudStack documentation are:

  • 64-bit x86 CPU (more cores will give better performance)
  • 4 GB of RAM
  • 250 GB of local disk (a minimum of 500 GB is recommended)
  • At least 1 NIC
  • Statically assigned IP address
  • Fully qualified domain name (this guide explains how to set it up)

Create the repository by creating or editing the file /etc/apt/sources.list.d/cloudstack.list and adding the repo:

deb precise 4.2

Now download the repository key and add it to the trusted keys:

$ wget -O - | sudo apt-key add -

Update the package cache:

$ sudo apt-get update

Install the management server:

$ sudo apt-get install cloudstack-management

Once that finishes, we must download vhd-util into /usr/share/cloudstack-common/scripts/vm/hypervisor/xenserver:

$ sudo wget

Install the database engine:

$ sudo apt-get install mysql-server

Now we make a few changes to the configuration file, editing /etc/my.cnf and adding the following lines below datadir=/var/lib/mysql in the [mysqld] section:

innodb_rollback_on_timeout=1
innodb_lock_wait_timeout=600
max_connections=350
log-bin=mysql-bin
binlog-format = 'ROW'

Proceed with creating the database tables for our management server:

cloudstack-setup-databases cloud:<password_database>@localhost --deploy-as=root:<password>

Finally, run the following command:



Related Posts

Ubuntu App Developer Blog: Ubuntu App Heroes interviewed: the Weather app developers

Planet Ubuntu - Wed, 2014-01-15 14:36

We are starting a blog series where we interview our Ubuntu App Heroes. We want to learn more about how developers found the experience writing apps for Ubuntu, what their plans are, what they do and who they are.

Kicking off the series, we had a quick chat with the two guys working on the beautiful Weather app, Martin Borho and Raúl Yeguas.

Can you introduce yourselves?

Raúl: My name is Raúl Yeguas, I’m a frontend developer and I live in Seville in Spain. I studied IT at the University of Jaén where I organised some free software events. I’m a great Qt fan and a proud KDE user.

Martin: My name is Martin Borho, I’m 37 years old and I live in Hamburg, Germany. I work as a freelance programmer, mainly coding Python.

When and how did you get involved in the Ubuntu Core Apps project?

Martin: When Ubuntu Touch was announced, there was a little form on the webpage asking for interested persons willing to contribute. As I was searching for a project I could join at that time, I filled it out…

Raúl: I noticed Canonical’s call for developers on QtPlanet. When I responded to Canonical’s first announcement I thought it was about helping developers write their own apps for the platform; but when I received the emails from them asking me which core app I wanted to work on I was so surprised and excited. I’ve been part of the Core Apps developers from the beginning.

Have you developed apps before?

Martin: Yes, I started with a mobile app, named “Ask Ziggy”, on my Nokia N900 in 2010. In 2011 I built an app for Google News called “NewsG” for WebOS, which I later ported to Qt/QML to get it on my Nokia N9/N950.

Raúl: Yes, mainly C++/Qt apps and HTML/JS webapps.

What was your experience learning everything involved to work on the Weather app?

Martin: Hmm, initially I had no idea what to expect. After all I have learned quite a few things (and still do). Contributing to a large scale project with people from all over the world is one, how various parts have to fit together is another one. It is fascinating to see how Ubuntu Touch has evolved over the last months.

Raúl: I have to say that this team is awesome. I learned a lot from them, mainly about working in a team with people who are far apart and about designing new ways to interact with an app.

Weather App Designs

Is there anything you are proud of or feel is solved very well in the Weather app?

Raúl: Yes, the gestures to change between daily forecast and hourly forecast. I think it is very easy to use and intuitive.

Martin: Hard to say, perhaps: It’s quite easy to add more weather data providers to the app, without having to deal much with the UI part. And having a distinction between fast and slow scrolling, to flip between days, respective hours, is quite nice.

What can new app developers learn from your app?

Martin: Can’t say… as I’m doing Qt/QML only in my spare time I don’t think it’s very sophisticated in that regard.

Raúl: I think that our app has well organised and differentiated graphics components so I think that it could be a good example for learning how to create complex QML components by creating simple parts. It also has a very good API to call weather info providers.

What can users of the app expect in the coming months?

Martin: The integration of Weather Channel as a second weather data provider is nearly finished and will be ready to get merged into trunk very soon. Apart from that, Raúl is currently working on new animated icons, which will be very nice when ready.

Raúl: Yes, expect some new animations for eye-candy and a new weather information provider.

Do you have any other hobbies apart from working on Ubuntu?

Martin: I like biking. And as the stadium of my favourite club is only a 5 minute walk away, I like watching football too …

Raúl: Yes, like non-IT people have. I like watching movies, playing videogames and traveling. When I have enough time I produce electronic music. But I have to confess that sometimes I contribute on other open source projects \o/

Valorie Zimmerman: The Forgetting, and remembering

Planet Ubuntu - Wed, 2014-01-15 08:44
My mother died of Alzheimer's disease, and my dad is disappearing in a similar way. He's not been diagnosed with Alz., but what's the difference? His memory and personality are both diminishing, just as hers did. Looking at all the literature, it seems that he is in the final stage, in fact.

This stage is described as:
Very severe cognitive decline
(Severe or late-stage Alzheimer's disease)
In the final stage of this disease, individuals lose the ability to respond to their environment, to carry on a conversation and, eventually, to control movement. They may still say words or phrases.
At this stage, individuals need help with much of their daily personal care, including eating or using the toilet. They may also lose the ability to smile, to sit without support and to hold their heads up. Reflexes become abnormal. Muscles grow rigid. Swallowing is impaired.

These details are mostly true for my dad, although he still holds up his head. So far, he eats well, although sometimes he must be prompted to do so. He lives in the wheelchair, though, and rarely if ever moves his body.

When my mother was losing her mind, she had some bizarre theories of her problems and quite destructive paranoia, and I mourned her loss constantly. When she died, I found that all that grief was just a waste; I had to grieve her death anyway. So when my dad started down the same road, after my first rebellion against it, I thought it through, and decided to do things differently this time. I uncluttered my life, and started doing that to the house too. I said no quite a bit, and shed jobs and tasks like crazy. Until I got bored! That was the point at which I started reading email again, started visiting IRC more often, and began pitching in as I had time and interest.

So as my dad is slipping away, we're making progress on the house, and life is pretty peaceful and delightful. I'm able to enjoy the time I spend with Dad, and also the days I have to skip my visits, whether it's for a local meeting or a trip to Europe. When he goes, I'll grieve his death, but I'm not finding this process painful. It helps a lot that I alternate visits with my sister, so the pressure is shared, and he's in a nursing home, so neither of us is changing his diapers or brushing his teeth. I don't think suffering is a healthy response to his slow death, so I'm choosing instead to enjoy the time I have left with my dad.

Oddly enough, it helps to read up on Alzheimer's; what it is, and what progress is being made in the fight against it. This is important, because my generation, the baby boomers, are entering their sixties. This disease could bring the health care establishment to its knees, all around the world.

I've just finished a remarkable book, called The Forgetting - Alzheimer's: Portrait of an Epidemic, by David Shenk. It's a really great synthesis of history, experience, research, and thinking about dementia. Just for myself, I'm going to copy part of what he says in Chapter 16, called Things To Avoid, on page 228:
Doctors cannot yet cure Alzheimer's, or prevent it, or even mask its symptoms for very long. But hundreds of studies have begun to produce a pointillist portrait of how people can help themselves--things to do for the body, mind, and spirit that might reduce the risk of getting the disease, or at least delay its onset:
     Avoid head injuries.
     Avoid fatty foods.
     Avoid high blood pressure.
     Eat foods rich in antioxidants, which eliminate damaging free-radical molecules. Eat, specifically, prunes, raisins, blueberries, blackberries, kale, strawberries, spinach, raspberries, brussels sprouts, plums, alfalfa sprouts, broccoli, beets, oranges, red grapes, red peppers, and cherries. (Foods listed according to their antioxidant content, in descending order.)
     Eat foods rich in folic acid, and in vitamin B6, B12, C, and E.
     Eat tuna, salmon, and other foods rich in fatty acids.
     Don't drink too much alcohol. (A moderate amount might be slightly beneficial.)
     Don't skimp on sleep. (Sleep is rejuvenating to the brain and the body; sleep seems to play a very important role in long-term memory formation.)
     Maintain a high level of social contact (and consider marriage--one study shows fewer married people getting Alzheimer's).
     If you are a woman past menopause, consider estrogen replacement therapy. (Some studies suggest it may reduce Alzheimer's incidence by as much as half.)
     If you like to chew gum, continue chewing gum.....
     If you regularly take non-steroidal anti-inflammatory drugs such as ibuprofen for another reason, continue. (Some studies show a benefit.)
     Get a thorough education.
     Keep your mind active. Read, discuss, debate, create, play word games, do crossword puzzles, meet new people, learn new languages. Studies show that people with very high levels of education, while not immune from Alzheimer's, do tend to get the disease later than others.

That last paragraph supports the work I began when we first got our Coleco ADAM computer. I've learned how to use computers to edit newsletters, do genealogy research, and use and administer mailing lists, forums, newsgroups, and websites. I used Macs, some Windows machines, and then found my true home in FOSS and the communities I found there: KDE, Kubuntu, Linuxchix, Ubuntu Women, and Linuxfest Northwest.
