news aggregator

The Fridge: Ubuntu Weekly Newsletter Issue 351

Planet Ubuntu - Mon, 2014-01-20 22:31

Tony Whitmore: Big Finish Day 4

Planet Ubuntu - Mon, 2014-01-20 22:27

I had a great time at the weekend, taking photographs at Big Finish Day 4. The event is for fans of the audio production company, who make audio plays of Doctor Who, The Avengers, Dark Shadows, Blake’s 7 and much more. I’ve listened to Big Finish audio plays for years, mostly their Doctor Who range (of course!). The production standards are superb and one of their recent releases is up for a BBC audio drama award. I’ve been lucky enough to do some work for them over the last few months, and was asked to go along to capture some of the event.

In the morning I was wandering around taking candid shots of people enjoying the convention and the panels. It was rather like taking wedding photographs although slightly more relaxed. There are so many different moments to capture in a short time during a wedding ceremony, but a convention panel is a little more static and a good deal longer. Fortunately the urbane Nick Briggs kept the crowd laughing through the morning, and there was a really great atmosphere through the whole event.

https://twitter.com/DudleyIan/status/424504877109510144

In the afternoon I set up a portable studio to take some photos of various Big Finish actors. I was rather pleased with this set up, especially as it all managed to fit in my car! Apart from the background roll, that is.

I really enjoyed working at Big Finish Day, catching up with some of the very nice people I’ve met at recording sessions, and hope to be asked back again!


Marcin Juszkiewicz: The story of Qt/AArch64 patching

Planet Ubuntu - Mon, 2014-01-20 17:16

In over a year of my work as an AArch64 porter I have seen a lot of patches. But the Qt one has the most interesting history.

Around a year ago, when I was building whatever was possible during my Linaro work, we got to the point where Qt jumped into the queue. The build failed, but fixing it was quite easy — all I had to do was take the “webkitgtk” patch written by Riku Voipio and apply it to Qt 4. The resulting file landed in the “meta-aarch64” layer of OpenEmbedded and is still there.

Time passed. More common distributions like Debian, Fedora, OpenSUSE and Ubuntu (in alphabetical order) started working on their AArch64 ports. And one day Fedora and Ubuntu started working on building Qt 4. I do not know who wrote the QAtomic stuff, but I saw a few versions/iterations of it, and it took me quite a long time (first on a model, then on real hardware) to get it fully working in Fedora — I used the Ubuntu version of the patches.

Up to this moment over 9 months had passed and still no upstreaming had been done. So one day I decided to go for it and opened QTBUG #35442. Then I reopened issue #33 in the “double-conversion” project (which is used in a few places in Qt), as they had a good fix but merged the wrong one (it is fixed now). For that one I opened a request to update Qt to the newer version of “double-conversion” as QTBUG #35528.

But the story did not end there. As Qt 4 development has more or less ended, I was asked to provide fixes for Qt 5. It took me a while. I had to create a graph of build-time dependencies between Qt 5 components (and had to break a few in the meantime), and slowly, module after module, I got most of it built.

There were three components which required patching:

The first one is solved upstream and waits for the Qt developers; I was told that 5.3 will get it updated. The second one is already reviewed and merged. The last one is left, and here there is a problem: it looks like the only person who does QtWebKit code reviews is Allan Sandfeld Jensen, but he cannot review code he sent himself. I am not able to do it either, because the Qt Contributor License Agreement needs to be signed and (due to legal issues) I cannot do that.

So how does it look now? I would say quite good. One third-party project needs an update (in two places in Qt 5) and one patch needs to get through code review. I still need to send package updates to the Fedora bug tracker. Ubuntu will need to merge the patches when they move to version 5.2.

All rights reserved © Marcin Juszkiewicz
The story of Qt/AArch64 patching was originally posted on the Marcin Juszkiewicz website

Related posts:

  1. I miss Debian tools
  2. Going for FOSDEM
  3. AArch64 port of Debian/Ubuntu is alive!

David Planella: A quickstart guide to the Ubuntu emulator

Planet Ubuntu - Mon, 2014-01-20 14:24

Following a recent announcement, the Ubuntu emulator is going to become a primary engineering platform for development. Quoting Alexander Sack, when ready, the goal is to

[...] start using the emulator for everything you usually would do on the phone. We really want to make the emulator a class A engineering platform for everyone

While the final emulator is still work in progress, this month we are also going to see a big push in finishing all the pieces to make it a first-class citizen for development, both for the platform itself and for app developers. However, as it stands today, the emulator is already functional, so I’ve decided to prepare a quickstart guide to highlight the great work the Foundations and Phonedations teams (along with many other contributors) are producing to make it possible.

While you should consider this guide a preview, you can already use it to start getting familiar with the emulator for testing, platform development and writing apps.

Requirements

To install and run the Ubuntu emulator, you will need:

  • 512MB of RAM dedicated to the emulator
  • 4GB of disk space
  • OpenGL-capable desktop drivers (most graphics drivers/cards are)
Installing the emulator

If you are using Trusty Tahr, the Ubuntu development version, installation is as easy as opening a terminal with Ctrl+Alt+T and running this command, followed by Enter:

sudo apt-get install ubuntu-emulator

Alternatively, if you are running a stable release such as Ubuntu 13.10, you can install the emulator by manually downloading its packages first:


  1. Create a folder named emulator in your home directory
  2. Go to the goget-ubuntu-touch packages page in Launchpad
  3. Scroll down to Trusty Tahr and click on the arrow to the left to expand it
  4. Scroll further to the bottom of the page and click on the ubuntu-emulator_* package corresponding to your architecture (i386 or amd64) to download it into the ~/emulator folder you created
  5. Now go to the Android packages page in Launchpad
  6. Scroll down to Trusty Tahr and click on the arrow to the left to expand it
  7. Scroll further to the bottom of the page and click on the ubuntu-emulator_runtime_* package to download it into the same ~/emulator folder
  8. Open a terminal with Ctrl+Alt+T
  9. Change to the directory where you downloaded the packages by typing the following command in the terminal: cd emulator
  10. Then run this command to install the packages: sudo dpkg -i *.deb
  11. Once the installation is successful you can close the terminal and remove the ~/emulator folder and its contents

Installation notes
  • Downloaded images are cached at ~/.cache/ubuntuimage, using the standard XDG_CACHE_HOME location.
  • Instances are stored at ~/.local/share/ubuntu-emulator, using the standard XDG_DATA_HOME location.
  • While an image upgrade feature is in the works, for now you can simply create an instance of a newer image over the previous one.
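A quick way to see what the emulator has put in those two locations is to list them directly. This is only a sketch based on the paths in the notes above (it reads, but never modifies, anything), honoring the XDG environment overrides when they are set:

```shell
# Inspect the image cache and instance store described in the notes above.
# Falls back to the default XDG locations when the variables are unset.
cache_dir="${XDG_CACHE_HOME:-$HOME/.cache}/ubuntuimage"
data_dir="${XDG_DATA_HOME:-$HOME/.local/share}/ubuntu-emulator"
ls "$cache_dir" 2>/dev/null || echo "no cached images yet"
ls "$data_dir" 2>/dev/null || echo "no instances yet"
```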
Running the emulator

The ubuntu-emulator tool again makes it really easy to manage instances and run the emulator. Typically, you’ll be opening a terminal and running these commands the first time you create an instance (where myinstance is the name you’ve chosen for it):

sudo ubuntu-emulator create myinstance
ubuntu-emulator run myinstance

You can create any instances you need for different purposes. And once the instance has been created, you’ll be generally using the ubuntu-emulator run myinstance command to start an emulator session based on that instance.

There are 3 main elements you’ll be interacting with when running the emulator:

  • The phone UI – this is the visual part of the emulator, where you can interact with the UI in the same way you’d do it with a phone. You can use your mouse to simulate taps and slides. Bonus points if you can recognize the phone model the UI is shown in ;)
  • The remote session on the terminal – upon starting the emulator, a terminal will also be launched alongside. Use the phablet username and the same password to log in to an interactive ADB session on the emulator. You can also launch other terminal sessions using other communication protocols — see the link at the end of this guide for more details.
  • The ubuntu-emulator tool – with this CLI tool, you can manage the lifetime and runtime of Ubuntu images. Common subcommands of ubuntu-emulator include create (to create new instances), destroy (to destroy existing instances), run (as we’ve already seen, to run instances), snapshot (to create and restore snapshots of a given point in time) and more. Use ubuntu-emulator --help to learn about these commands and ubuntu-emulator command --help to learn more about a particular command and its options.
Runtime notes
  • At this time, the emulator takes a while to load. During that time, you’ll see a black screen inside the phone skin. Just wait a bit until it’s finished loading and the welcome screen appears.
  • By default the latest built image from the devel-proposed channel is used. This can be changed during creation with the --channel and --revision options.
  • If your host has a network connection, the emulator will use that transparently, even though the network indicator might show otherwise.
  • To talk to the emulator, you can use standard adb. The emulator should appear under the list of the adb devices command. Due to a known bug, you’ll need to run adb kill-server; adb start-server on the host before you can see the emulator listed as a device.
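Picking a channel and revision from the note above could look something like the following. This is a hypothetical sketch: the option names come from that note, but the exact syntax and the values (devel-proposed, 100) are illustrative, and the command is guarded so the sketch is a no-op on machines without the emulator installed:

```shell
# Illustrative only: create an instance from a specific channel and revision,
# per the --channel/--revision note above. Values here are made up.
if command -v ubuntu-emulator >/dev/null 2>&1; then
    sudo ubuntu-emulator create myinstance --channel devel-proposed --revision 100
    created=yes
else
    created=no
    echo "ubuntu-emulator not installed; command shown for illustration only"
fi
```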
Learn more and contribute

I hope this guide has whetted your appetite to start testing the emulator! You can also contribute to making the emulator a first-class target for Ubuntu development. The easiest way is to install it and give it a go. If something is not working you can then file a bug.

If you want to fix a bug yourself or contribute to code, the best thing is to ask the developers about how to get started by subscribing to the Ubuntu phone mailing list.

If you want to learn more about the emulator, including how to create instance snapshots and other cool features, head out to the Ubuntu Emulator wiki page.

And next… support for the tablet form factor and SDK integration. Can’t wait for those features to land in the emulator!

The post A quickstart guide to the Ubuntu emulator appeared first on David Planella.

Riccardo Padovani: Development plans of calc app for Ubuntu Touch

Planet Ubuntu - Mon, 2014-01-20 08:00

As you probably know, on April 17th, 2014 Ubuntu 14.04 ‘Trusty Tahr’ will be released. The second stable version of Ubuntu Touch will also be released, and the main goal for this version, for all the core apps, is desktop convergence.

Convergence is the revolutionary idea behind Ubuntu Touch: all the apps can run on desktop, tablet and phone (and maybe TV), not because they have different implementations of the same interface, but because the interface adapts dynamically to the size of the screen, like a responsive website. You can try this with the SaucyBacon app by randomcpp.

Now, let me explain the jobs to be done for the calculator between now and April:

  • For convergence we need to enable keyboard support on the desktop, so you can use the app with your keyboard and are not forced to use the mouse. A basic implementation of this has landed: you can use the up and down keys to scroll, the number keys to enter numbers and Enter to perform a calculation. But some bugs are still open. We want to enable other shortcuts to edit labels and to tear off calculations, and we also want to implement copy and paste;
  • Also for convergence, we need to create an optimized tablet version and fix some wrong behaviours on the desktop;
  • We need to create a sidestage version of the app, so on a big screen you can place two apps side by side;
  • On our wishlist there is also implementing scientific functions, but it isn’t a priority and I don’t know if and when it will be implemented.

If you want to follow the implementation of all these features, please see our blueprint.

If you want to help us with development (Why you should contribute to Ubuntu Touch), please join #ubuntu-app-devel or #ubuntu-calc-app on IRC and ping boiko, dpm, mihir, popey or me (WebbyIT).

Ciao,
R.

This work is licensed under Creative Commons Attribution 3.0 Unported

Ubuntu Classroom: Ubuntu User Days coming up!

Planet Ubuntu - Mon, 2014-01-20 06:03

Next weekend, from Saturday the 25th at 14:30 UTC until Sunday the 26th at 01:00 UTC, the Classroom team will be hosting the Ubuntu User Days!

User Days was created as a set of chat-based classes offered over a two-day period to teach beginning and intermediate Ubuntu users the basics to get started with Ubuntu. Sessions this cycle include:

  • Command line made easy
  • Unity: Tips, tricks and configuration
  • Equivalent Applications
  • Finding Support for Ubuntu

You can check the full schedule here: https://wiki.ubuntu.com/UserDays

The best thing is, everyone can come! If you want to participate, you just need to join #ubuntu-classroom and #ubuntu-classroom-chat on irc.freenode.net in your IRC client, or just click here for browser-based Webchat.

We hope to see you next weekend!


Jono Bacon: Bad Voltage in 2014

Planet Ubuntu - Sun, 2014-01-19 16:44

In 2013 we kicked off Bad Voltage, a fun and irreverent podcast about technology, Open Source, gaming, politics, and anything else we find interesting. The show features a veritable bounty of presenters, including Stuart Langridge (LugRadio, Shot Of Jaq), Bryan Lunduke (Linux Action Show), Jeremy Garcia (LinuxQuestions Podcast), and myself (LugRadio, Shot Of Jaq).

We have all podcasted quite a bit before and we know it takes a little while to really get into the groove, but things are really starting to gel in the show. We are all having a blast doing it, and it seems people are enjoying it.

If you haven’t given the show a whirl, I would love to encourage you to check out our most recent episode. In it we feature:

  • An interview with Sam Hulick who writes music for video games (Mass Effect, Baldur’s Gate) as well as some of the Ubuntu sounds.
  • We discuss the Mars One project and whether it is absolute madness or a vague possibility.
  • We evaluate how Open Source app developers can make money, the different approaches, and whether someone could support a family that way.
  • Part One of our 2014 predictions. We will review them at the end of the year to see how we did. Be sure to share your predictions too!

Go and download the show in either MP3 or Ogg format and subscribe to the podcast feeds!

We also have a new community forum that is starting to get into its groove too. The forum is based on Discourse, so it is a pleasure to use, and a really nice community is forming. We would love to welcome you too!

In 2014 we want to grow the show, refine our format, and grow our community around the world. Our goal here is that Bad Voltage becomes the perfect combination of informative and really fun to listen to. I have no doubt that our format and approach will continue to improve with each show. We also want to grow an awesome, inclusive, and diverse community of listeners too. Our goal is that people associate the Bad Voltage community with a fun, like-minded set of folks who chat together, play together, collaborate together, and enjoy the show together.

Here’s to a fun year with Bad Voltage and we hope you come and be a part of it.

Kubuntu Wire: Valve’s OpenGL Debugger Developed on Kubuntu

Planet Ubuntu - Sun, 2014-01-19 11:19

Valve makes Steam, a platform for distributing computer games, which got a lot of people excited when it was announced a couple of years ago that it was being ported to Ubuntu. One of the developers has just announced a new OpenGL debugger. It’s developed on Kubuntu and uses the Qt Creator IDE. Best of all, it’s going to be completely open source. Lovely to know Kubuntu is helping bring the next generation of games to all platforms. More details on Phoronix.

Eric Hammond: Finding the Region for an AWS Resource ID

Planet Ubuntu - Sun, 2014-01-19 00:03

use concurrent AWS command line requests to search the world for your instance, image, volume, snapshot, …

Background

Amazon EC2 and many other AWS services are divided up into various regions across the world. Each region is a separate geographic area and is completely independent of other regions.

Though this is a great architecture for preventing global meltdown, it can occasionally make life more difficult for customers, as we must interact with each region separately.

One example of this is when we have the id for an AMI, instance, or other EC2 resource and want to do something with it but don’t know which region it is in.

This happens on ServerFault when a poster presents a problem with an instance, provides the initial AMI id, but forgets to specify the EC2 region. In order to find and examine the AMI, you need to look in each region to discover where it is.

Performance

You’ll hear a repeating theme when discussing performance in AWS:

To save time, run API requests concurrently.

This principle applies perfectly when performing requests across regions.

Parallelizing requests may seem like it would require an advanced programming language, but since I love using command line programs for simple interactive AWS tasks, I’ll present an easy mechanism for concurrent processing that works in bash.

Example

The following sample code finds an AMI using concurrent aws-cli commands to hit all regions in parallel.

id=ami-25b01138 # example AMI id
type=image      # or "instance" or "volume" or "snapshot" or ...

regions=$(aws ec2 describe-regions --output text --query 'Regions[*].RegionName')

for region in $regions; do
    (
        aws ec2 describe-${type}s --region $region --$type-ids $id &>/dev/null &&
            echo "$id is in $region"
    ) &
done 2>/dev/null
wait 2>/dev/null

This results in the following output:

ami-25b01138 is in sa-east-1

By running the queries concurrently against all of the regions, we cut the run time by almost 90% and get our result in a second or two.

Drop this into a script, add code to automatically detect the type of the id, and you’ve got a useful command line tool… which you’re probably going to want to immediately rewrite in Python so there’s not quite so much forking going on.
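The "automatically detect the type of the id" step could be sketched with a simple prefix check. The prefix-to-type mapping below follows the standard EC2 id naming (ami-, i-, vol-, snap-); extend it as needed for other resource types:

```shell
# Sketch: infer the EC2 resource type from the id prefix.
id=ami-25b01138
case $id in
    ami-*)  type=image ;;
    i-*)    type=instance ;;
    vol-*)  type=volume ;;
    snap-*) type=snapshot ;;
    *)      echo "unrecognized id: $id" >&2; exit 1 ;;
esac
echo "$id looks like type: $type"
```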

Original article: http://alestic.com/2014/01/aws-region-search

Randall Ross: Planet Ubuntu Needs More Awesome - Part 2

Planet Ubuntu - Sat, 2014-01-18 19:31

In Part 1, I presented some of the results of my surveys about Planet Ubuntu from late 2013. Didn't read the summary? There's still time! What better way to start your day?

With that behind us, let's dive into Part 2 of my promised summary, along with additional bonus colour commentary and recommendations not available anywhere else (at any price).

TL;DR:
Planet Ubuntu needs a makeover.

6.

Survey Says:
There is a strong indication that people want a "new and improved" Planet Ubuntu.

Colour Commentary:
I'm firmly of the same opinion. Planet Ubuntu looks creaky and awkward. It's a throwback to an earlier era of web design. Interactivity? Not there. It also doesn't present well on different form factors. Have you ever tried reading it on Ubuntu Touch? Were you happy with the result? I could go on and on, but suffice to say there's room for improvement.

Some of you might be thinking "Why bother? There are plenty of other social web platforms that we could use as an Ubuntu blog. Why not just use ______." The problem with the word that's usually on top of that blank is that it's always out of our control, often predatory, and usually a bad idea in the long run. The best chance we have to shape the personality of one of the most prominent sites about Ubuntu is to actually maintain control of it. Planet Ubuntu reflects on Ubuntu whether we want to admit it or not. Let's admit it and make Planet Ubuntu great again.

Randall Concludes:
Let's reboot it.

7.

Survey Says:
Ignoring the fence-sitters, people want Ubuntu stories to have prominence, by a factor of two to one.

Colour Commentary:
I was a little surprised by how many people don't care one way or another. That aside, the majority vote for increased prominence of Ubuntu-related content is encouraging. I think this represents a good compromise for people who are insistent about blogging about non-Ubuntu topics on an Ubuntu site. (Yes, there are some who are.) Give them a small place, but not a place that detracts from the main event. Maybe the "real estate" a story gets should be proportional to the amount of Ubuntu content it contains. The mechanism for determining that would have to be designed, but it's an idea that has merit.

Randall Concludes:
Ubuntu-centric stories should be granted more prominence.

8.

Survey Says:
Huh?

Colour Commentary:
People have no idea how widely (or not) Planet Ubuntu is read. Some think it's amongst the top sites on the web, and others swear it's nothing but cobwebs and tumbleweeds. This isn't really surprising, since the site doesn't publish any stats, and in the absence of data people will make wild assumptions. If we want Planet Ubuntu to have as wide a readership as possible, which IS what we want, then perhaps an important first step would be to add analytics, or even a simple page-view counter that can be graphed over time. That way, we'll be able to see if we're as popular as we need to be.

Randall Concludes:
Publish page view stats ASAP. We cannot improve what we cannot measure.

9.

Survey Says:
People want Planet Ubuntu authors to abide by the Ubuntu Code of Conduct.

Colour Commentary:
This was a bit of an accidental poll. While I was in the midst of my polling activities an unfortunate article that was a clear violation of the CoC and in poor taste was posted. What surprised (and disappointed) me is how long it took to take it down. Thankfully it was removed, but who knows how many people saw the article and now associate Ubuntu with something crass and juvenile?

Adding even more disappointment, the article was from someone who wasn't even an Ubuntu Member any more. So, it should never even have been posted in the first place.

And, adding *even more* disappointment, an effort to clean up the list of people who could post to Planet Ubuntu had been languishing for months.

Randall Concludes:
Maintain the site. (Looking in the direction of Community Council). Take down CoC violations with haste (i.e. in minutes, not hours). If you don't have the time/bandwidth, then delegate, or increase your numbers.

10.

Survey Says:
Nearly an even split.

Colour Commentary:
Given that there's a desire to make Ubuntu stories more prominent (see above), I'm curious to know what mechanism the people who don't want up-voting would use to make this happen. Perhaps an algorithm that scans for keywords and adjusts prominence accordingly? Or, maybe we could leave the decision to a panel of experts? I don't think either of these options has merit. I advocate that we use a system of up-voting by a group of people who are passionate about Ubuntu and are actively contributing to it day-in, day-out. Perhaps Ubuntu Members would be a good start for a group of up-voters?

Randall Concludes:
We need a reliable way to make Ubuntu articles prominent. Up-voting is that way.

---
To be continued...
I will wrap up the series in my next post with general conclusions and a prescription on how to make Planet Ubuntu awesome again. In the meantime, please share your thoughts in the comments.

Colin Watson: Testing wanted: GRUB 2.02~beta2 Debian/Ubuntu packages

Planet Ubuntu - Sat, 2014-01-18 00:48

This is mostly a repost of my ubuntu-devel mail for a wider audience, but see below for some additions.

I'd like to upgrade to GRUB 2.02 for Ubuntu 14.04; it's currently in beta. This represents a year and a half of upstream development, and contains many new features, which you can see in the NEWS file.

Obviously I want to be very careful with substantial upgrades to the default boot loader. So, I've put this in trusty-proposed, and filed a blocking bug to ensure that it doesn't reach trusty proper until it's had a reasonable amount of manual testing. If you are already using trusty and have some time to try this out, it would be very helpful to me. I suggest that you only attempt this if you're comfortable driving apt-get directly and recovering from errors at that level, and if you're willing to spend time working with me on narrowing down any problems that arise.

Please ensure that you have rescue media to hand before starting testing. The simplest way to upgrade is to enable trusty-proposed, upgrade ONLY packages whose names start with "grub" (e.g. use apt-get dist-upgrade to show the full list, say no to the upgrade, and then pass all the relevant package names to apt-get install), and then (very important!) disable trusty-proposed again. Provided that there were no errors in this process, you should be safe to reboot. If there were errors, you should be able to downgrade back to 2.00-22 (or 1.27+2.00-22 in the case of grub-efi-amd64-signed).

Please report your experiences (positive and negative) with this upgrade in the tracking bug. I'm particularly interested in systems that are complex in any way: UEFI Secure Boot, non-trivial disk setups, manual configuration, that kind of thing. If any of the problems you see are also ones you saw with earlier versions of GRUB, please identify those clearly, as I want to prioritise handling regressions over anything else. I've assigned myself to that bug to ensure that messages to it are filtered directly into my inbox.

I'll add a couple of things that weren't in my ubuntu-devel mail. Firstly, this is all in Debian experimental as well (I do all the work in Debian and sync it across, so the grub2 source package in Ubuntu is a verbatim copy of the one in Debian these days). There are some configuration differences applied at build time, but a large fraction of test cases will apply equally well to both. I don't have a definite schedule for pushing this into jessie yet - I only just finished getting 2.00 in place there, and the release schedule gives me a bit more time - but I certainly want to ship jessie with 2.02 or newer, and any test feedback would be welcome. It's probably best to just e-mail feedback to me directly for now, or to the pkg-grub-devel list.

Secondly, a couple of news sites have picked this up and run it as "Canonical intends to ship Ubuntu 14.04 LTS with a beta version of GRUB". This isn't in fact my intent at all. I'm doing this now because I think GRUB 2.02 will be ready in non-beta form in time for Ubuntu 14.04, and indeed that putting it in our development release will help to stabilise it; I'm an upstream GRUB developer too and I find the exposure of widely-used packages very helpful in that context. It will certainly be much easier to upgrade to a beta now and a final release later than it would be to try to jump from 2.00 to 2.02 in a month or two's time.

Even if there's some unforeseen delay and 2.02 isn't released in time, though, I think nearly three months of stabilisation is still plenty to yield a boot loader that I'm comfortable with shipping in an LTS. I've been backporting a lot of changes to 2.00 and even 1.99, and, as ever for an actively-developed codebase, it gets harder and harder over time (in particular, I've spent longer than I'd like hunting down and backporting fixes for non-512-byte sector disks). While I can still manage it, I don't want to be supporting 2.00 for five more years after upstream has moved on; I don't think that would be in anyone's best interests. And I definitely want some of the new features which aren't sensibly backportable, such as several of the new platforms (ARM, ARM64, Xen) and various networking improvements; I can imagine a number of our users being interested in things like optional signature verification of files GRUB reads from disk, improved Mac support, and the TrueCrypt ISO loader, just to name a few. This should be a much stronger base for five-year support.

Stéphane Graber: LXC 1.0: Unprivileged containers [7/10]

Planet Ubuntu - Fri, 2014-01-17 23:28

This is post 7 out of 10 in the LXC 1.0 blog post series.

Introduction to unprivileged containers

The support of unprivileged containers is in my opinion one of the most important new features of LXC 1.0.

You may remember from previous posts that I mentioned that LXC should be considered unsafe because while running in a separate namespace, uid 0 in your container is still equal to uid 0 outside of the container, meaning that if you somehow get access to any host resource through proc, sys or some random syscalls, you can potentially escape the container and then you’ll be root on the host.

That’s what user namespaces were designed and implemented for. It was a multi-year effort to think them through and slowly push the hundreds of patches required into the upstream kernel, but finally with 3.12 we got to a point where we can start a full system container entirely as a user.

So how do those user namespaces work? Well, simply put, each user that’s allowed to use them on the system gets assigned a range of unused uids and gids, ideally a whole 65536 of them. You can then use those uids and gids with two standard tools called newuidmap and newgidmap which will let you map any of those uids and gids to virtual uids and gids in a user namespace.

That means you can create a container with the following configuration:

lxc.id_map = u 0 100000 65536
lxc.id_map = g 0 100000 65536

The above means that I have one uid map and one gid map defined for my container which will map uids and gids 0 through 65535 in the container to uids and gids 100000 through 165535 on the host.
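The arithmetic behind that mapping is simple: with "lxc.id_map = u 0 100000 65536", container uid N corresponds to host uid 100000 + N, for N in the range covered by the map. A tiny sketch:

```shell
# Sketch of the id_map arithmetic: container id N -> host id base + N.
base=100000   # first host uid in the range
count=65536   # number of ids mapped
for n in 0 1000 $((count - 1)); do
    echo "container uid $n -> host uid $((base + n))"
done
```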

For this to be allowed, I need to have those ranges assigned to my user at the system level with:

stgraber@castiana:~$ grep stgraber /etc/sub* 2>/dev/null
/etc/subgid:stgraber:100000:65536
/etc/subuid:stgraber:100000:65536

LXC has now been updated so that all the tools are aware of those unprivileged containers. The standard paths also have their unprivileged equivalents:

  • /etc/lxc/lxc.conf => ~/.config/lxc/lxc.conf
  • /etc/lxc/default.conf => ~/.config/lxc/default.conf
  • /var/lib/lxc => ~/.local/share/lxc
  • /var/lib/lxcsnaps => ~/.local/share/lxcsnaps
  • /var/cache/lxc => ~/.cache/lxc

Your user, while it can create new user namespaces in which it’ll be uid 0 and will have some of root’s privileges against resources tied to that namespace, will obviously not be granted any extra privileges on the host.

One such thing is creating new network devices on the host or changing bridge configuration. To work around that, we wrote a tool called “lxc-user-nic” which is the only setuid binary in LXC 1.0, and it performs one simple task: it parses a configuration file and, based on its content, creates network devices for the user and bridges them. To prevent abuse, you can restrict the number of devices a user can request and to what bridge they may be added.

An example is my own /etc/lxc/lxc-usernet file:

stgraber veth lxcbr0 10

This declares that the user “stgraber” is allowed to create up to 10 veth-type devices and add them to the bridge called lxcbr0.
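In other words, each /etc/lxc/lxc-usernet line is four whitespace-separated fields: user, interface type, bridge, and device count. A small sketch, parsing a literal copy of the example above rather than the real file:

```shell
# Sketch: split an lxc-usernet line into its four fields.
line="stgraber veth lxcbr0 10"
set -- $line   # word-splitting assigns the fields to $1..$4
echo "user=$1 type=$2 bridge=$3 count=$4"
```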

Between what’s offered by the user namespace in the kernel and that setuid tool, we’ve got all that’s needed to run most distributions unprivileged.

Pre-requirements

All examples and instructions I’ll be giving below are expecting that you are running a perfectly up to date version of Ubuntu 14.04 (codename trusty). That’s a pre-release of Ubuntu so you may want to run it in a VM or on a spare machine rather than upgrading your production computer.

The reason to want something that recent is because the rough requirements for well working unprivileged containers are:

  • Kernel: 3.13 + a couple of staging patches (which Ubuntu has in its kernel)
  • User namespaces enabled in the kernel
  • A very recent version of shadow that supports subuid/subgid
  • Per-user cgroups on all controllers (which I turned on a couple of weeks ago)
  • LXC 1.0 beta2 or higher (released two days ago)
  • A version of PAM with a loginuid patch that’s yet to be in any released version

Those requirements happen to all be true of the current development release of Ubuntu as of two days ago.

LXC pre-built containers

User namespaces come with quite a few obvious limitations. For example, in a user namespace you won’t be allowed to use mknod to create a block or character device, as being allowed to do so would let you access anything on the host. The same goes for some filesystems: you won’t, for example, be allowed to do loop mounts or mount an ext partition, even if you can access the block device.

Those limitations, while not necessarily world-ending in day-to-day use, are a big problem during the initial bootstrap of a container, as tools like debootstrap, yum, … usually try to do some of those restricted actions and will fail pretty badly.

Some templates may be tweaked to work, and workarounds such as a modified fakeroot could be used to bypass some of those limitations, but the goal of the LXC project isn’t to require all of our users to be distro engineers, so we came up with a much simpler solution.

I wrote a new template called “download” which, instead of assembling the rootfs and configuration locally, contacts a server that hosts daily pre-built rootfs and configuration tarballs for the most common templates.

Those images are built on our Jenkins server using a few machines I have on my home network (a set of powerful x86 builders and a quad-core ARM board). The actual build process is pretty straightforward: a basic chroot is assembled, the current git master is downloaded and built, and the standard templates are run with the right release and architecture. The resulting rootfs is compressed, a basic config and metadata (expiry, files to template, …) are saved, and the result is pulled by our main server, signed with a dedicated GPG key and published on the public web server.

The client side is a simple template which contacts the server over https (the domain is also DNSSEC-enabled and available over IPv6), grabs signed indexes of all the available images, and checks whether the requested combination of distribution, release and architecture is supported. If it is, it grabs the rootfs and metadata tarballs, validates their signature and stores them in a local cache. Any container creation after that point uses that cache until the cache entries expire, at which point a new copy is grabbed from the server.
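
The caching behaviour can be sketched roughly like this (the `fetch` callable and the seven-day expiry are illustrative assumptions, not the template's actual values; the real template is shell):

```python
import os
import time

def cached_rootfs(cache_path, fetch, max_age_days=7):
    """Return a cached tarball if it is still fresh; otherwise call
    fetch() to download a new copy and refresh the cache."""
    if os.path.exists(cache_path):
        age = time.time() - os.path.getmtime(cache_path)
        if age < max_age_days * 86400:
            return cache_path  # cache hit: no network access needed
    with open(cache_path, "wb") as f:
        f.write(fetch())  # cache miss or expired entry: re-download
    return cache_path
```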

The current list of images (as can be requested by passing --list) is:

---
DIST      RELEASE   ARCH    VARIANT    BUILD
---
debian    wheezy    amd64   default    20140116_22:43
debian    wheezy    armel   default    20140116_22:43
debian    wheezy    armhf   default    20140116_22:43
debian    wheezy    i386    default    20140116_22:43
debian    jessie    amd64   default    20140116_22:43
debian    jessie    armel   default    20140116_22:43
debian    jessie    armhf   default    20140116_22:43
debian    jessie    i386    default    20140116_22:43
debian    sid       amd64   default    20140116_22:43
debian    sid       armel   default    20140116_22:43
debian    sid       armhf   default    20140116_22:43
debian    sid       i386    default    20140116_22:43
oracle    6.5       amd64   default    20140117_11:41
oracle    6.5       i386    default    20140117_11:41
plamo     5.x       amd64   default    20140116_21:37
plamo     5.x       i386    default    20140116_21:37
ubuntu    lucid     amd64   default    20140117_03:50
ubuntu    lucid     i386    default    20140117_03:50
ubuntu    precise   amd64   default    20140117_03:50
ubuntu    precise   armel   default    20140117_03:50
ubuntu    precise   armhf   default    20140117_03:50
ubuntu    precise   i386    default    20140117_03:50
ubuntu    quantal   amd64   default    20140117_03:50
ubuntu    quantal   armel   default    20140117_03:50
ubuntu    quantal   armhf   default    20140117_03:50
ubuntu    quantal   i386    default    20140117_03:50
ubuntu    raring    amd64   default    20140117_03:50
ubuntu    raring    armhf   default    20140117_03:50
ubuntu    raring    i386    default    20140117_03:50
ubuntu    saucy     amd64   default    20140117_03:50
ubuntu    saucy     armhf   default    20140117_03:50
ubuntu    saucy     i386    default    20140117_03:50
ubuntu    trusty    amd64   default    20140117_03:50
ubuntu    trusty    armhf   default    20140117_03:50
ubuntu    trusty    i386    default    20140117_03:50
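
The supported-combination check the template performs against that index can be sketched as follows (a hypothetical Python helper; the actual template does this in shell):

```python
def find_image(index_lines, dist, release, arch):
    """Return the variant and build for a dist/release/arch
    combination, or None if the index doesn't offer it."""
    for line in index_lines:
        fields = line.split()
        if len(fields) == 5 and fields[:3] == [dist, release, arch]:
            return {"variant": fields[3], "build": fields[4]}
    return None
```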

The template has been carefully written to work on any system that has a POSIX compliant shell with wget. gpg is recommended but can be disabled if your host doesn’t have it (at your own risk).

The same template can be used against your own server, which I hope will be very useful for enterprise deployments to build templates in a central location and have them pulled by all the hosts automatically using our expiry mechanism to keep them fresh.

While the template was designed to work around limitations of unprivileged containers, it works just as well with system containers, so even on a system that doesn’t support unprivileged containers you can do:

lxc-create -t download -n p1 -- -d ubuntu -r trusty -a amd64

And you’ll get a new container running the latest build of Ubuntu 14.04 amd64.

Using unprivileged LXC

Right, so let’s get you started. As I already mentioned, all the instructions below have only been tested on a very recent Ubuntu 14.04 (trusty) installation.
You may want to grab a daily build and run it in a VM.

Install the required packages:

  • sudo apt-get update
  • sudo apt-get dist-upgrade
  • sudo apt-get install lxc systemd-services uidmap

Now for a quick workaround while we wait for our new cgroup manager to land in Ubuntu: put the following into /etc/init/lxc-unpriv-cgroup.conf:

start on starting systemd-logind and started cgroup-lite

script
    set +e
    echo 1 > /sys/fs/cgroup/memory/memory.use_hierarchy
    for entry in /sys/fs/cgroup/*/cgroup.clone_children; do
        echo 1 > $entry
    done
    exit 0
end script

This trick is required because logind doesn’t configure use_hierarchy or clone_children the way LXC needs them.

Now, reboot your machine for those cgroups to get properly reconfigured.

Then, assign yourself a set of uids and gids with:

  • sudo usermod --add-subuids 100000-165536 $USER
  • sudo usermod --add-subgids 100000-165536 $USER

Now create ~/.config/lxc/default.conf with the following content:

lxc.network.type = veth
lxc.network.link = lxcbr0
lxc.network.flags = up
lxc.network.hwaddr = 00:16:3e:xx:xx:xx
lxc.id_map = u 0 100000 65536
lxc.id_map = g 0 100000 65536

And /etc/lxc/lxc-usernet with:

<your username> veth lxcbr0 10

And that’s all you need. Now let’s create our first unprivileged container with:

lxc-create -t download -n p1 -- -d ubuntu -r trusty -a amd64

You should see the following output from the download template:

Setting up the GPG keyring
Downloading the image index
Downloading the rootfs
Downloading the metadata
The image cache is now ready
Unpacking the rootfs

---
You just created an Ubuntu container (release=trusty, arch=amd64).
The default username/password is: ubuntu / ubuntu
To gain root privileges, please use sudo.

So looks like your first container was created successfully, now let’s see if it starts:

ubuntu@trusty-daily:~$ lxc-start -n p1 -d
ubuntu@trusty-daily:~$ lxc-ls --fancy
NAME  STATE    IPV4     IPV6     AUTOSTART
------------------------------------------
p1    RUNNING  UNKNOWN  UNKNOWN  NO

It’s running! At this point, you can get a console using lxc-console or can SSH to it by looking for its IP in the ARP table (arp -n).

One thing you probably noticed above is that the IP addresses for the container aren’t listed. That’s because, unfortunately, LXC currently can’t attach to an unprivileged container’s namespaces. That also means that some fields of lxc-info will be empty and that you can’t use lxc-attach. However, we’re looking into ways to get that sorted in the near future.

There are also a few problems with job control in the kernel and with PAM, so doing a non-detached lxc-start will probably result in a rather weird console where things like sudo will most likely fail. SSH may also fail on some distros. A patch has been sent upstream for this, but I just noticed that it doesn’t actually cover all cases and even if it did, it’s not in any released version yet.

Quite a few more improvements to unprivileged containers are coming between now and the final 1.0 release next month, and while we certainly don’t expect all workloads to be possible with unprivileged containers, it’s still a huge improvement on what we had before and a very good building block for a lot more interesting use cases.

Randall Ross: Planet Ubuntu Needs More Awesome - Part 1

Planet Ubuntu - Fri, 2014-01-17 19:47

Could Planet Ubuntu be made more awesome? How would we do it? Where would we start? Perhaps we'd start by seeing who reads it and what those people actually think about it. During the latter weeks of 2013 I conducted a series of polls (on my blog) to determine just that. Going into this effort, I had my opinions. Some of them were validated. Some were not.

What follows is my promised summary of the survey results along with my bonus colour-commentary and recommendations.

TL;DR:
Planet Ubuntu has the potential to be *much* more awesome, and we should seriously consider making it *the* place to visit for all things Ubuntu.

1.

Survey Says:
Members outnumber non-members by a factor of two (give-or-take) as readers of Planet Ubuntu.

Colour Commentary:
I was surprised that the margin wasn't a *lot* bigger. I was expecting a factor of at least 10:1, and certainly not 2:1! Planet Ubuntu is an echo chamber - a place where primarily Ubuntu members speak to themselves. Could we do better? Yes! Why not make Planet Ubuntu a place for everyone? Why not make it *the* place where people (of all types, not just Ubuntu members) come to get the latest information about Ubuntu's collaborators and their Ubuntu thoughts? If this were to be our focus, I think we'd see a lot better news and information on other sites too as they would be hard-pressed to slant their stories (or to miss the point entirely) when the people who are actually building Ubuntu are presenting matters clearly and to a wide audience on a very public site.

Randall Concludes:
We need to make Planet Ubuntu appeal to everyone. We need it to be read primarily by non-members. The ratio needs to be at least 1000:1.

2.

Survey Says: By a wide margin, readers perceive that they derive value from Planet Ubuntu.

Colour Commentary:
This is a very surprising result, and I must admit, I don't share the same opinion. To me, Planet Ubuntu has drifted far away from where it was a few years ago. I can recall visiting daily in those days and reading all kinds of really interesting news and commentary, mostly about Ubuntu, and specifically from people who were at the core of a lot of Ubuntu goings-on. Now, when I come to Planet Ubuntu, I am generally disappointed to find that most of it is not about Ubuntu, and the core contributors rarely chime in. Instead, we have people that perhaps were once really interested and involved in Ubuntu who now have new pet-projects and want to showcase them. I'm all for learning about what people are working on these days, but Planet Ubuntu ought to be mainly about Ubuntu.

Randall Concludes:
I'm going to agree to disagree and be in the minority here. Planet Ubuntu is not as useful as it could be. We are aiming too low.

3.

Survey Says: People primarily want Planet Ubuntu posts to be written by "Ubuntu people", but not necessarily Ubuntu members. Then there are some who want to introduce a Canonical relationship, but only for authors who don't work for Canonical.

Colour Commentary:
I'm surprised by the Canonical (or Mark) selected result. This tells me there is pent-up demand for a more official voice, or more core-contributor stories but reluctance to restrict to the direct voice of Canonical employees.

There is also demand to let in authors who are Ubuntu contributors but not necessarily Ubuntu members. Overall the data suggests a need to expand authorship. Perhaps Ubuntu Members aren't pulling their blogging weight.

Randall Concludes:
Let's open up Planet Ubuntu to people who have real passion for Ubuntu and who live and breathe it daily. That might mean forgoing the requirement to be an Ubuntu Member, and replacing that requirement with something along the lines of "must have a demonstrated and sustained passion for Ubuntu".

4.

Survey Says: Most people want Planet Ubuntu to be about Ubuntu.

Colour Commentary:
This is expected, and I fully agree. I want Planet Ubuntu to live up to its name. I would love it if people writing there would keep their articles focused, at least to the point there's a clear Ubuntu tie-in. That's what makes it worthy of a read instead of the gerjillion other sites on the web that aren't about Ubuntu. Do we really need a site that has Ubuntu in its name that is primarily not about Ubuntu?

Here's an anecdote. Back in the early 2000's, 2002 I recall, there was a period of time where the USA (government) was considering an invasion of Iraq. At the same time, there was a large popular movement and demonstrations/marches against the idea. I was in San Francisco at the time, and recall seeing large numbers of people marching on Van Ness Ave, near SF City Hall carrying placards saying "No Invasion", "Peace not War", i.e. stuff one would expect to see at an anti-invasion demonstration. In the same march, I also saw placards with slogans like "End Poverty", "Stop Animal Cruelty", and other noteworthy causes. What struck me about this was just how out of place they were and how obvious it was that some were trying to capitalize on the popularity of the demonstration for other "pet" causes. I was saddened that the main cause was being diluted. (Note that I'm not saying these other causes were not worthy, but I am saying that this was not the place for them. People were trying to stop a war.)

Randall Concludes:
I'm going to advocate that we keep Planet Ubuntu about Ubuntu and encourage everyone who writes here to respect the stated title of the site.

5.

Survey Says: Planet Ubuntu is a "watering hole". Our readers come here often.

Colour Commentary:
This is encouraging in that it shows that we have reader loyalty. People keep coming back. This doesn't say why though. Are people coming back in the hopes that there will be something interesting about Ubuntu? When they arrive are they pleasantly surprised? Or, are they like me and longing for more Ubuntu? This survey question begs for more questions to get at the reasons.

Randall Concludes:
We have loyal readers. Let's find out why.

---
To be continued...
I will continue with a summary of results 6 through 10 soon. In the meantime, please share your thoughts in the comments.

Steve Conklin: Ubuntu File System Benchmarking

Planet Ubuntu - Fri, 2014-01-17 17:48


I’ve been working to implement file system benchmarking as part of the test process that the kernel team applies to every kernel update. These are intended to help us spot performance issues. The following announcement I just sent to the Ubuntu kernel mailing list covers the specifics:

——————————————————————————————

The Ubuntu kernel team has implemented the first of what we hope will be a growing set of benchmarks which are run against Ubuntu kernel releases. The first two benchmarks to be included are iozone file system tests, with and without fsync enabled. These are being run as part of the testing applied to all kernel releases.

== Disclaimers ==

READ THESE CAREFULLY

1. These benchmarks are not intended to indicate any performance metrics in any real world or end user situations. They are intended to expose possible performance differences between releases, and not to reflect any particular use case.

2. Fixes for file system bugs reduce performance in some cases. Performance decreases between releases may be a side effect of fixing bugs, and not a bug in themselves.

3. While assessments of performance are valuable, they are not the only criteria that should be used to select a file system. In addition to benchmarks, file systems must be tested for a variety of use cases and verified for correctness under a variety of conditions.

== General Information ==

1. The top level benchmarking results page is located here: http://kernel.ubuntu.com/benchmarking/ This page is linked from the top level index at kernel.ubuntu.com

2. The tests are run on the same bare-metal hardware for each release, on spinning magnetic media.

3. Test partitions are sized at twice system memory size to prevent the entire test data set from being cached.

4. File systems tested are ext2, ext3, ext4, xfs, and btrfs.

5. For each release, each test is run on each file system five times, and then the results are averaged.

== Types of results ==

There are three types of results. To find performance regressions, we (the Ubuntu kernel team) are primarily interested in the second and third types.

1. The iozone test generates charts of the data for each individual file system type. To navigate to these, select the links under the "Ran" or "Passed" columns in the list of results for each benchmark, then select the test name ("iozone", for example) from that page. The graphs for each run for each file system type will be available from that page in the "Graphs" column.

The second and third result sets are generated by the iozone-results-comparator tool, located here: http://code.google.com/p/iozone-results-comparator/

2. Charts comparing performance among all tested file systems for each individual release. To navigate to these, select the links under the "Ran" or "Passed" columns in the list of results, then select the "charts" link at the top of that page.

3. Charts comparing different releases to each other. These comparisons are generated for each file system type, and are linked at the bottom of the index page for each benchmark. These comparisons include:

3A. Comparison between the latest kernel for each Ubuntu series (i.e. raring, saucy, etc).
3B. Comparison between the latest kernel for each LTS release.
3C. Comparison of successive versions within each series.
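
The averaging step described in General Information (five runs per file system, results averaged) amounts to something like the following sketch. This is illustrative only, not the kernel team's actual tooling:

```python
def average_runs(runs):
    """Average each measured metric (e.g. per-test iozone throughput)
    across repeated runs of the same benchmark."""
    metrics = runs[0].keys()
    n = float(len(runs))
    return {m: sum(run[m] for run in runs) / n for m in metrics}
```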

James Page: Call for testing: Juju and gccgo

Planet Ubuntu - Fri, 2014-01-17 14:24

Today I uploaded juju-core 1.17.0-0ubuntu2 to the Ubuntu Trusty archive.

This version of the juju-core package provides Juju binaries built using both the golang gc compiler and the gccgo 4.8 compiler that we have for 14.04.

The objective for 14.04 is to have a single toolchain for Go that can support x86, ARM and Power architectures. Currently the only way we can do this is to use gccgo instead of golang-go.

This initial build still only provides packages for x86 and armhf; other architectures will follow once we have sorted out exactly how to provide the ‘go’ tool on platforms other than these.

By default, you’ll still be using the golang gc built binaries; to switch to using the gccgo built versions:

sudo update-alternatives --set juju /usr/lib/juju-1.17.0-gcc/bin/juju

and to switch back:

sudo update-alternatives --set juju /usr/lib/juju-1.17.0/bin/juju

Having both versions available should make diagnosing any gccgo specific issues a bit easier.

To push the local copy of the jujud binary into your environment use:

juju bootstrap --upload-tools

This is not recommended for production use but will ensure that you are testing the gccgo built binaries on both client and server.

Thanks to Dave Cheney and the rest of the Juju development team for all of the work over the last few months to update the codebases for Juju and its dependencies to support gccgo!


Stephen M. Webb: The future looks very small.

Planet Ubuntu - Fri, 2014-01-17 00:00

I have a new toy. I didn’t get it because I’m hip, although I am, I got it because we’re trying to prepare Unity 7 on the Trusty Tahr (Ubuntu 14.04 LTS) for the next generation of hardware that will be sitting on everyone’s desk (or lap, or table in the coffee shop) within a few years. I got a laptop with a high-DPI (dots per inch) 4K display and a sensitive touchscreen.

This particular piece of furniture is a Lenovo Yoga 2 Pro, sporting a 3200×1800 pixel 10-touch touchscreen in a 13 inch form factor. That works out to a pixel density of about 280 pixels per inch, much more refined than my main laptop (a Lenovo ThinkPad T410, 1440×900 at 14 inches) which sits at 120 pixels per inch and the external monitor I have attached to it (a Benq FP22W, 1680×1050 at 22 inches) at 95 pixels per inch. Sure, spec ennui, but it’s germane to the topic here.

The problem is that out of the box, most GUI software assumes it’s running on a display device, regardless of its dimensions, with a dot pitch of 96 dots per inch. It’s true for Microsoft Windows and it’s true for GNU/Linux, although I’ve been out of the Apple Macintosh world long enough to plead ignorance there. I know it’s true of Microsoft Windows because the Yoga 2 Pro came with Microsoft Windows 8.1 preinstalled by the manufacturer, and I had a brief chance to test it out before I got to work. IE displayed web pages in teeny weeny characters, and when I opened COMMAND.COM (or whatever the name of the command console is these days) to create a rescue image, it defaulted to using 8×8 bitmapped fonts. The eyestrain finding the reconfiguration option felt like it caused my corneas to bleed.

We have the same problem in Ubuntu. When I installed a prerelease image of Trusty on the Yoga 2 Pro the GRUB2 menu was so tiny I couldn’t read it (bitmapped fonts again). Fortunately, the default was sensible and the system booted OK. Unity 7, of course, was similarly unusable, as was the Terminal, the Browser, and pretty much everything else. Ouch.

The problem is rooted in the fact that it’s an invalid assumption that all display devices have a dot pitch of 96 pixels per inch. I already experience this with my dual monitor setup, but it’s less noticeable with 120 vs. 95 DPI. This is just not a valid assumption.

See, a character in a 12 point font needs to appear to be 12 points. That’s 1 pica. One sixth of an inch. The size of a 12 point character should not vary depending on the resolution of your monitor. There’s a caveat, though, in that when I said ‘appear’ what I meant was at a comfortable viewing distance. Turns out that for the best human interface we do in fact want the absolute size of text to change depending on the viewing distance so that there is a constant angle subtended by the display. Er, that means things that are farther away need to be bigger so they seem the same size. Got it? Think: projectors. Turns out people use phones and tablets up close, so smaller is OK, but they use their laptops and desktops farther away so smaller is no good.

This is where I need a diagram as a visual aid, but I’m afraid my drawing skills have rusted out and are at the shop for repairs. If someone wants to contribute one, that’d be great.

So, what we need to do for Unity running on the desktop is to figure out the physical dot pitch for each physical display connected to the system, calculate the scaling factor that would convert to a fixed 96 pixels per inch, then scale the fonts by that much. Other metrics need to be expressed in terms of ems (another measure based on the current font size — a term that comes from the days of hot metal) and graphics scaled accordingly.
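
The dot-pitch arithmetic itself is simple; a rough sketch (this helper is illustrative, not Unity's actual code, and real displays report physical size via EDID rather than a diagonal measurement):

```python
import math

def dpi_and_scale(width_px, height_px, diagonal_in, reference_dpi=96.0):
    """Compute a display's physical DPI from its resolution and
    diagonal size, plus the factor needed to scale content that
    assumes 96 DPI so it appears at its intended physical size."""
    diagonal_px = math.hypot(width_px, height_px)
    dpi = diagonal_px / diagonal_in
    return dpi, dpi / reference_dpi

# The Yoga 2 Pro's 3200x1800 13" panel works out to roughly 280 DPI,
# so 96-DPI-assuming content needs to be scaled by nearly 3x.
dpi, factor = dpi_and_scale(3200, 1800, 13.0)
```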

But wait, we don’t want to scale windows if we don’t have to. We don’t want to waste the “retina” display, we just want text to be readable (and controls to be usable). At this point, we’re looking at making sure the Unity Launcher, the Unity panel, the Quicklists, and the Shortcuts are all usable out of the box on a high DPI display, because I have one and I tell you it’s not too usable right now.

A lot of applications are not going to work perfectly on high DPI, including the browsers and the office suites. We’re thinking of adding some optional window scaling through Compiz to help out with those but time is rapidly flowing and there’s a lot of work to do. Stand by for updates. As always, patches are welcome.


Michael Zanetti: Little sneak previews

Planet Ubuntu - Thu, 2014-01-16 23:28

Recently I’ve been busy working on the right edge gesture in Ubuntu Touch. Here’s a little sneak preview:




Note that this is still work in progress and probably won’t hit the image really soon.

If you’ve been paying attention while watching the video you might have noticed that I used Ubuntu Touch to control my living room lighting. That is done with a set of Philips Hue lightbulbs which I got at the end of last year and a soon to be released app called “Shine”, running on Ubuntu Touch, MeeGo and Desktops.

Stay tuned!

Jono Bacon: Growing an Active Ubuntu Advocacy Community

Planet Ubuntu - Thu, 2014-01-16 23:13

Like many of you, I am tremendously excited about Ubuntu’s future. We are building a powerful convergence platform across multiple devices with a comprehensive developer platform at the heart of it. This could have a profound impact on users and developers alike.

Now, you have all heard me and my team rattling on about this for a while, but we also have a wonderful advocacy community in Ubuntu in the form of our LoCo Teams who are spreading the word. I want to explore ways to help support and grow the events and advocacy that our LoCo Teams are doing.

I had a conversation with Jose on the LoCo Council about this today, and I think we have a fun plan to move forward with. We are going to need help though, so please let me know in the comments if you can participate.

Step 1: Ubuntu Advocacy Kit

The Ubuntu Advocacy Kit is designed to provide a one-stop shop of information, materials (e.g. logos, brochures, presentations), and more for doing any kind of Ubuntu advocacy. Right now it needs a bit of a spring clean, which I am currently working on.

I think we need to get as many members of our community to utilize the kit. With this in mind we are going to do a few things:

  • Get the kit cleaned up and up to date.
  • Get it linked on loco.ubuntu.com and encourage our community to use it.
  • Encourage our community to contribute to the kit and add additional content.
  • Grow the team that maintains the kit.

Help needed: great writers and editors.

Step 2: Advocacy App

The Ubuntu Advocacy Kit works offline. This was a conscious decision with a few benefits:

  1. It makes it easier to know you have all relevant content without having to go to a website and download all the assets. When you have the kit, you have all the materials.
  2. The kit can be used offline.
  3. The kit can be more easily shared.
  4. When people contribute to the kit it feels like you are making something, as opposed to adding docs to a website. This increases the sense of ownership.

With the kit being contained in an offline HTML state (and the source material in reStructured Text) it means that it wouldn’t be that much work to make a click package of the kit that we can ship on the phone, tablet, and desktop.

Just imagine that: you can use the click store to install the Ubuntu Advocacy Kit and have all the information and materials you need, right from the palm of your hand on your phone, tablet, or desktop.

The current stylesheet for the kit doesn’t render well on a mobile device, so it would be handy if we could map the top-level nav (Documentation, Materials etc) to tabs in an app.

We could also potentially include links to other LoCo resources (e.g. a RSS feed view of news from loco.ubuntu.com) and a list of teams.

If you would be interested in working on this, let me know.

Help needed: Ubuntu SDK programmers and artists.

Step 3: Hangout Workshops

I am going to schedule some hangout workshops to go through some tips of how to organize and run LoCo events and advocacy campaigns, and use the advocacy kit as the source material for the workshop. I hope this will result in more events being coordinated.

Help needed: LoCo members who want to grow their skills.

Step 4: LoCo Portal

We also want to encourage wider use of loco.ubuntu.com so our community can get a great idea of the pulse of advocacy, events, and more going on.

Help needed: volunteers to run events.

Feedback and volunteers are most welcome!

Stuart Langridge: Posting to Discourse via the Discourse REST API from Python

Planet Ubuntu - Thu, 2014-01-16 20:05

The Bad Voltage forum is run by Discourse. As part of posting a new episode, I wanted to be able to send a post to the forum from a script. Discourse has a REST API but it’s not very well documented, at least partially because it’s still being worked on. So if you read this post two years after it was written, it might be entirely full of lies. Still, I managed to work out how to post to Discourse from a Python script, and here’s an example script to do just that.

First, you’ll need an API key. If you’re the forum administrator, which I am, you can generate one of these from http://YOURFORUM/admin/api. It is not clear to me exactly what this API key does: in particular, I suspect that it is a key with total admin rights over the forum, so don’t share it around. If there’s a way of making an API key with limited rights to just create posts and that’s it, I don’t know that way; if you do know that way, tell me! Once you’ve got your API key, and your username, fill them into the script as APIKEY and APIUSERNAME.

import requests  # apt-get install python-requests
import json

# based on https://github.com/discoursehosting/discourse-api-php/blob/master/lib/DiscourseAPI.php

# log all the things so you can see what's going on
import logging
import httplib
httplib.HTTPConnection.debuglevel = 1
logging.basicConfig()
logging.getLogger().setLevel(logging.DEBUG)
requests_log = logging.getLogger("requests.packages.urllib3")
requests_log.setLevel(logging.DEBUG)
requests_log.propagate = True

# api key created in discourse admin. probably super-secret, so don't tell anyone.
APIKEY = "whatever the api key is"
APIUSERNAME = "your username"
QSPARAMS = {"api_key": APIKEY, "api_username": APIUSERNAME}
FORUM = "http://url for your forum/"  # with the slash on the end

# First, get a session cookie
r = requests.get(FORUM, params=QSPARAMS)
SESSION_COOKIE = r.cookies["_forum_session"]

# Now, send a post using that session cookie
post_details = {
    "title": "Title of the new topic",
    "raw": "Body text of the post",
    "category": 7,  # get the category ID from the admin
    "archetype": "regular",
    "reply_to_post_number": 0,
}
r = requests.post(FORUM + "posts", params=QSPARAMS, data=post_details,
                  cookies={"_forum_session": SESSION_COOKIE})

print "Various details of the response from discourse"
print r.text, r.headers, r.status_code

disc_data = json.loads(r.text)
disc_data["FORUM"] = FORUM
print "The link to your new post is:"
print "%(FORUM)st/%(topic_slug)s/%(topic_id)s" % disc_data

Harald Sitter: Neon 5′s Many PPAs & APT

Planet Ubuntu - Thu, 2014-01-16 07:27

Project Neon 5, the KDE Frameworks 5 version of Kubuntu‘s continuous KDE software delivery system, has more than one package repository, balancing quality and update frequency in different ways. This post is supposed to help those of you who, like me, wish to use whatever works best and therefore need to switch PPAs from time to time.

APT really comes in handy here, as it allows you to move pretty freely between versions of a package by increasing the pinning priority of a different PPA.

First you add all three repositories and create a file /etc/apt/preferences.d/kf5 containing:

Package: *
Pin: release o=LP-PPA-neon-kf5-snapshot-weekly
Pin-Priority: 350

Package: *
Pin: release o=LP-PPA-neon-kf5-snapshot-daily
Pin-Priority: 250

Package: *
Pin: release o=LP-PPA-neon-kf5
Pin-Priority: 150

Now all you need to do is increase the priority of any of the entries to switch to that snapshot and run

sudo apt-get update && sudo apt-get dist-upgrade

APT will then automagically move your entire KDE Frameworks 5 stack to the version in the PPA with the highest priority.
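
A toy model of that selection (a deliberate simplification; APT's real rules also involve version comparison, the 100/500/1000 priority thresholds, and more):

```python
def pick_origin(candidates):
    """Among PPAs offering a package, pick the one whose pin
    priority is highest, mirroring the preferences file above."""
    return max(candidates, key=lambda c: c["priority"])

pins = [
    {"origin": "LP-PPA-neon-kf5-snapshot-weekly", "priority": 350},
    {"origin": "LP-PPA-neon-kf5-snapshot-daily", "priority": 250},
    {"origin": "LP-PPA-neon-kf5", "priority": 150},
]
```

Bump the weekly snapshot's priority above the others and every Frameworks 5 package follows it on the next dist-upgrade.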

So magic.
