
Ubuntu Podcast from the UK LoCo: S07E04 – The One with All the Haste

Planet Ubuntu - Thu, 2014-04-24 19:30

We’re back with Season Seven, Episode Four of the Ubuntu Podcast! Alan Pope, Mark Johnson, Tony Whitmore, and Laura Cowen are drinking tea and eating Simnel cake in Studio L.


In this week’s show:

We’ll be back next week, so please send your comments and suggestions to:
Join us on IRC in #uupc on Freenode
Leave a voicemail via phone: +44 (0) 203 298 1600, sip: and skype: ubuntuukpodcast
Follow our twitter feed
Find our Facebook Fan Page
Follow us on Google Plus

Daniel Pocock: Android betrays tethering data

Planet Ubuntu - Thu, 2014-04-24 17:06

When I upgraded an Android device the other day, I found that tethering completely stopped working. The updated CyanogenMod had inherited a new bug from Android, informing the carrier that I was tethering. The carrier, Vodafone Italy, had decided to make my life miserable by blocking that traffic. I had a closer look and managed to find a workaround.

There is absolutely no difference, from a technical perspective, between data transmitted by an application running on the mobile device itself and data transmitted through tethering. Revealing the use of tethering to the carrier is a massive breach of privacy - yet comments in the Google bug tracker suggest it is a feature rather than a bug. This little table helps put that logic in perspective:

Product: the person who carries the handset
User: the mobile network, which wants to discriminate against some types of network traffic to squeeze more money out of the Product
Feature: revealing private information about the way the Product uses his/her Internet so the real User can profit

It is also bad news for the environment: many people are being tricked into buying unneeded USB dongle modems that will end up in the rubbish in 1-2 years' time, when their contract expires and the company pushes them to upgrade to the next best thing.

Behind the scenes

What does this mean in practice? How does Android tell your carrier which data comes from tethering?

As my device is rooted and as it is my device and I will do what I want with it, I decided to have a look inside.

The ip command revealed that there are now two network devices, rmnet_usb0 and rmnet_usb1. The basic ip route command reveals that traffic from different source addresses is handled differently and split over the two network devices:

shell@android:/ # ip route
dev tun0 scope link
default via dev rmnet_usb0
via dev rmnet_usb1
via dev rmnet_usb1
dev rmnet_usb0 proto kernel scope link src
dev rmnet_usb0 scope link
dev rmnet_usb1 proto kernel scope link src
dev rmnet_usb1 scope link
dev tun0 scope link
dev rndis0 proto kernel scope link src

I then looked more closely and found that there is also an extra routing table; it can be found with ip rule:

shell@android:/ # ip rule show
0:      from all lookup local
32765:  from lookup 60
32766:  from all lookup main
32767:  from all lookup default
shell@android:/ # ip route show table 60
default via dev rmnet_usb1
dev rmnet_usb1
dev rndis0 scope link

In this routing table, it is obvious that data from the tethering subnet is sent over the extra device rmnet_usb1.

Manually cleaning it up

If the phone is rooted, it is possible to very quickly change the routing table to get all the tethering traffic going through the normal rmnet_usb0 interface again.

It is necessary to get rid of that alternative routing table first:

ip rule del pref 32765

and then update the iptables entries that refer to interface names:

iptables -t nat -I natctrl_nat_POSTROUTING -s -o rmnet_usb0 -j MASQUERADE
iptables -I natctrl_FORWARD -i rmnet_usb0 -j RETURN

This immediately resolved the problem for me on the Vodafone network in Italy.


If Google can be bullied into accepting this kind of discriminatory routing in the stock Android builds and it can even propagate into CyanogenMod, then I'm just glad I'm not running one of those Android builds that has been explicitly "enhanced" by a carrier.

It raises big questions about who really is the owner and user of the device and who is receiving the value when a person pays money to "buy" a device.

Canonical Design Team: Latest from the web team — April 2014

Planet Ubuntu - Thu, 2014-04-24 13:57

Ubuntu 14.04 LTS is out and it’s great! The period after release tends to be slightly less hectic than the lead up to it, but that doesn’t mean that the web team is not as busy as ever.

In the last few weeks we’ve worked on:

  • Ubuntu 14.04 LTS release: we’ve published the latest updates to the website that go alongside the latest release of Ubuntu
  • The site is now responsive! Stay tuned for a more in-depth post on this, and keep following our series on how we made the site responsive; we’ve also launched a new and improved cloud section
  • Juju GUI: we’ve moved the inspector to the left of the screen, which should be live in the coming weeks, and we’re finalising user research
  • Fenchurch: we moved downloads, contributions and search to Fenchurch, so we’re now effectively off our old Drupal site, with a better geolocation solution for download mirrors
  • Ubuntu Resources: we’ve released the beta version for large screen sizes of Ubuntu Resources
  • Future of Web Design: I attended and spoke at the Future of Web Design conference, in London, where I talked about letting mechanisation into our work as web designers, and how we can move further in our profession

And we’re currently working on:

  • Responsive site: we’re currently working on tweaks and improvements following the release on 17 April
  • Web style guide: we’re updating the Ubuntu web style guide (still in alpha) to reflect the changes from making the site responsive
  • Ubuntu Resources: we’re currently working on making the transition from Ubuntu Resources to Ubuntu Insights, after that we’ll be working on creating a press centre on the new Ubuntu Insights
  • Fenchurch: we’re working on a new front-end for our asset server and upgrading the CMS to the version running
  • Las Vegas sprint: a few of us are travelling to the USA next week for some intense Juju planning and work
  • Legal pages: we’re in the process of defining the information architecture and wireframing for a new hub that will hold all our legal information
  • Partners: we’re finalising wireframes and content for a new Ubuntu partners site

And, if you’d like to join the web team, we are currently looking for an experienced user experience designer to join us! Send us an email if you’d like to apply.

Delicious treats on release day

Do you have any questions or suggestions for us? Would you like to hear about any of these projects and tasks in more detail? Let us know your thoughts in the comments.

Robie Basak: New in Ubuntu 14.04: Apache 2.4

Planet Ubuntu - Thu, 2014-04-24 13:13

Ubuntu 14.04 ships with Apache 2.4, which is a significant upgrade over Apache 2.2 as found in 12.04.

Apache 2.4 actually first appeared in 13.10, though if you are doing an LTS-to-LTS upgrade, you won't have seen it until now.

If you have a default configuration, then everything should upgrade automatically.

Of course, server deployments typically do not run on defaults. In this case, there are significant changes of which you should be aware. Expect the apache2 postinst script to fail to restart Apache after the upgrade. You'll need to fix up your own customisations to meet the requirements in Apache 2.4 and then run sudo dpkg --configure -a and sudo apt-get -f install to recover. Be sure to back up your system before you begin.

Instead of upgrading, you may want to consider this as an opportunity to enter the new world of automated deployments. Codify your deployment, and then test and deploy a fresh instance of Apache on 14.04 instead, using virtual machines as needed. This is far less stressful than trying to upgrade an existing production system!

Upstream changes

You will need to update any custom configuration according to latest upstream configuration syntax.

See upstream's document "Upgrading to 2.4 from 2.2" for details of required configuration changes. Authorization and access control directives have changed, and will likely need adjustment. Various defaults have also changed.
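To give a sense of the most common adjustment: the old Order/Allow/Deny access control directives are replaced by Require in 2.4. A sketch (the directory path is a placeholder):

```apache
# Apache 2.2 style: fails in 2.4 unless the compatibility module
# mod_access_compat is enabled
<Directory /srv/example>
    Order allow,deny
    Allow from all
</Directory>

# Apache 2.4 equivalent
<Directory /srv/example>
    Require all granted
</Directory>
```

Likewise, "Deny from all" becomes "Require all denied", and "Allow from 10.0.0.0/8" becomes "Require ip 10.0.0.0/8".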

Significant packaging changes

The default path to served files has changed from /var/www to /var/www/html, mainly for security reasons. See the debian-devel thread "Changing the default document root for HTTP server" for details.

The packaging has been overhauled quite significantly. /etc/apache2/conf.d/ is now /etc/apache2/conf-available/ and /etc/apache2/conf-enabled/, to match the existing sites-enabled/ and mods-enabled/ mechanisms.

Before you upgrade, I suggest that you first make sure that everything in /etc/apache2/*-enabled is correctly a symlink to the corresponding file in /etc/apache2/*-available. Note that all configurations in sites-enabled and conf-enabled need a .conf suffix now.
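To audit the suffix requirement before upgrading, a small helper can flag offenders. This is my own sketch, not part of the Apache packaging tools:

```shell
# Sketch: report entries in an Apache "*-enabled" directory that lack the
# ".conf" suffix required by the 2.4 packaging. The directory is passed as a
# parameter so it can be pointed at sites-enabled or conf-enabled.
check_conf_suffix() {
    for f in "$1"/*; do
        # skip if the glob matched nothing (neither a file nor a symlink)
        [ -e "$f" ] || [ -L "$f" ] || continue
        case "$f" in
            *.conf) ;;                              # has the required suffix
            *) echo "missing .conf suffix: $f" ;;   # needs renaming
        esac
    done
}
```

For example, run check_conf_suffix /etc/apache2/sites-enabled and rename anything it reports.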

Make use of the a2enmod, a2ensite and a2enconf tools! These help you easily manage the symlinks in *-enabled that point to *-available.

See Debian's apache2 packaging NEWS file for full details.

Other Notes

Debian changed the default "It works!" page into a comprehensive page explaining where to go after an initial installation. Initially, I imported this into Ubuntu without noticing this change. Thank you to Andreas Hasenack for pointing out that the page referred to Debian and the Debian bug tracker in a misleading way, in bug 1288690. I fixed this in Ubuntu by essentially doing a s/Debian/Ubuntu/g and crediting Debian appropriately instead.


I think the Apache 2.4 packaging is a shining example of complex packaging done well. All credit is due to Stefan Fritsch and Arno Töll, the Debian maintainers of the Apache packaging. They have done the bulk of the work involved in this update.

Getting help

As always, see Ubuntu's main page on community support options. #ubuntu-server on IRC (Freenode) and the Ubuntu Server mailing list are appropriate venues.

Costales: "Folder Color" app: Change the color of your folders in Ubuntu

Planet Ubuntu - Wed, 2014-04-23 17:29
A simple, easy, fast and useful app! Change the color of your folders in Nautilus in a really easy way, so that you can get a better visual layout!

Folder Color in Ubuntu
How to install? Just enter this command into a Terminal, logout and enjoy it!
sudo add-apt-repository ppa:costales/folder-color ; sudo apt-get update ; sudo apt-get install folder-color -y

More info.

Mark Shuttleworth: U talking to me?

Planet Ubuntu - Wed, 2014-04-23 17:16

This upstirring undertaking Ubuntu is, as my colleague MPT explains, performance art. Not only must it be art, it must also perform, and that on a deadline. So many thanks and much credit to the teams and individuals who made our most recent release, the Trusty Tahr, into the gem of 14.04 LTS. And after the uproarious ululation and post-release respite, it’s time to open the floodgates to umpteen pent-up changes and begin shaping our next show.

The discipline of an LTS constrains our creativity – our users appreciate the results of a focused effort on performance and stability and maintainability, and we appreciate the spring cleaning that comes with a focus on technical debt. But the point of spring cleaning is to make room for fresh ideas and new art, and our next release has to raise the roof in that regard. And what a spectacular time to be unleashing creativity in Ubuntu. We have the foundations of convergence so beautifully demonstrated by our core apps teams – with examples that shine on phone and tablet and PC. And we have equally interesting innovation landed in the foundational LXC 1.0, the fastest, lightest virtual machines on the planet, born and raised on Ubuntu. With an LTS hot off the press, now is the time to refresh the foundations of the next generation of Linux: faster, smaller, better scaled and better maintained. We’re in a unique position to bring useful change to the ubiquitary Ubuntu developer, that hardy and precise pioneer of frontiers new and potent.

That future Ubuntu developer wants to deliver app updates instantly to users everywhere; we can make that possible. They want to deploy distributed brilliance instantly on all the clouds and all the hardware. We’ll make that possible. They want PAAS and SAAS and an Internet of Things that Don’t Bite; let’s make that possible. If free software is to fulfil its true promise it needs to be useful for people putting precious parts into production, and we’ll stand by our commitment that Ubuntu be the most useful platform for free software developers who carry the responsibilities of Dev and Ops.

It’s a good time to shine a light on umbrageous if understandably imminent undulations in the landscape we love – time to bring systemd to the centre of Ubuntu, time to untwist ourselves from Python 2.x and time to walk a little uphill and, thereby, upstream. Time to purge the ugsome and prune the unusable. We’ve all got our ucky code, and now’s a good time to stand united in favour of the useful over the uncolike and the utile over the uncous. It’s not a time to become unhinged or ultrafidian, just a time for careful review and consideration of business as usual.

So bring your upstanding best to the table – or the forum – or the mailing list – and let’s make something amazing. Something unified and upright, something about which we can be universally proud. And since we’re getting that once-every-two-years chance to make fresh starts and dream unconstrained dreams about what the future should look like, we may as well go all out and give it a dreamlike name. Let’s get going on the utopic unicorn. Give it stick. See you at vUDS.

Svetlana Belkin: vBlog Teaser

Planet Ubuntu - Wed, 2014-04-23 13:54

I’m thinking of doing a vBlog about Ubuntu and other things:

Adam Stokes: new juju plugin: juju-sos

Planet Ubuntu - Wed, 2014-04-23 05:52

Juju sos is my entryway into Go code and the juju internals. This plugin will execute and pull sosreports from all machines known to juju, or from a specific machine of your choice, and copy them to your local machine.

An example of what this plugin does, first, some output of juju status to give you an idea of the machines I have:

┌[poe@cloudymeatballs] [/dev/pts/1]
└[~]> juju status
environment: local
machines:
  "0":
    agent-state: started
    agent-version:
    dns-name: localhost
    instance-id: localhost
    series: trusty
  "1":
    agent-state: started
    agent-version:
    dns-name:
    instance-id: poe-local-machine-1
    series: trusty
    hardware: arch=amd64 cpu-cores=1 mem=2048M root-disk=8192M
  "2":
    agent-state: started
    agent-version:
    dns-name:
    instance-id: poe-local-machine-2
    series: trusty
    hardware: arch=amd64 cpu-cores=1 mem=2048M root-disk=8192M
services:
  keystone:
    charm: cs:trusty/keystone-2
    exposed: false
    relations:
      cluster:
      - keystone
      identity-service:
      - openstack-dashboard
    units:
      keystone/0:
        agent-state: started
        agent-version:
        machine: "2"
        public-address:
  openstack-dashboard:
    charm: cs:trusty/openstack-dashboard-0
    exposed: false
    relations:
      cluster:
      - openstack-dashboard
      identity-service:
      - keystone
    units:
      openstack-dashboard/0:
        agent-state: started
        agent-version:
        machine: "1"
        open-ports:
        - 80/tcp
        - 443/tcp
        public-address:

Basically, what we are looking at is 2 machines running various services, in my case OpenStack Horizon and Keystone. Now suppose I have some issues with my juju machines and OpenStack, and I need a quick way to gather a bunch of data on those machines and send it to someone who can help. With my juju-sos plugin, I can quickly gather sosreports on each of the machines I care about with as little typing as possible.

Here is the output from juju sos querying all machines known to juju:

┌[poe@cloudymeatballs] [/dev/pts/1]
└[~]> juju sos -d ~/scratch
2014-04-23 05:30:47 INFO juju.provider.local environprovider.go:40 opening environment "local"
2014-04-23 05:30:47 INFO juju.state open.go:81 opening state, mongo addresses: [""]; entity ""
2014-04-23 05:30:47 INFO juju.state open.go:133 dialled mongo successfully
2014-04-23 05:30:47 INFO juju.sos.cmd cmd.go:53 Querying all machines
2014-04-23 05:30:47 INFO juju.sos.cmd cmd.go:59 Adding machine(1)
2014-04-23 05:30:47 INFO juju.sos.cmd cmd.go:59 Adding machine(2)
2014-04-23 05:30:47 INFO juju.sos.cmd cmd.go:88 Capturing sosreport for machine 1
2014-04-23 05:30:55 INFO juju.sos main.go:119 Copying archive to "/home/poe/scratch"
2014-04-23 05:30:56 INFO juju.sos.cmd cmd.go:88 Capturing sosreport for machine 2
2014-04-23 05:31:08 INFO juju.sos main.go:119 Copying archive to "/home/poe/scratch"
┌[poe@cloudymeatballs] [/dev/pts/1]
└[~]> ls $HOME/scratch
sosreport-ubuntu-20140423040507.tar.xz  sosreport-ubuntu-20140423052125.tar.xz  sosreport-ubuntu-20140423052545.tar.xz
sosreport-ubuntu-20140423050401.tar.xz  sosreport-ubuntu-20140423052223.tar.xz  sosreport-ubuntu-20140423052600.tar.xz
sosreport-ubuntu-20140423050727.tar.xz  sosreport-ubuntu-20140423052330.tar.xz  sosreport-ubuntu-20140423052610.tar.xz
sosreport-ubuntu-20140423051436.tar.xz  sosreport-ubuntu-20140423052348.tar.xz  sosreport-ubuntu-20140423053052.tar.xz
sosreport-ubuntu-20140423051635.tar.xz  sosreport-ubuntu-20140423052450.tar.xz  sosreport-ubuntu-20140423053101.tar.xz
sosreport-ubuntu-20140423052006.tar.xz  sosreport-ubuntu-20140423052532.tar.xz

Another example of juju sos just capturing a sosreport from one machine:

┌[poe@cloudymeatballs] [/dev/pts/1]
└[~]> juju sos -d ~/scratch -m 2
2014-04-23 05:41:59 INFO juju.provider.local environprovider.go:40 opening environment "local"
2014-04-23 05:42:00 INFO juju.state open.go:81 opening state, mongo addresses: [""]; entity ""
2014-04-23 05:42:00 INFO juju.state open.go:133 dialled mongo successfully
2014-04-23 05:42:00 INFO juju.sos.cmd cmd.go:70 Querying one machine(2)
2014-04-23 05:42:00 INFO juju.sos.cmd cmd.go:88 Capturing sosreport for machine 2
2014-04-23 05:42:08 INFO juju.sos main.go:99 Copying archive to "/home/poe/scratch"

Fancy, fancy

Of course this is a work in progress and I have a few ideas of what else to add here, some of those being:

  • Rename the sosreports to match the dns-name of the juju machine
  • Filter sosreport captures based on services
  • Optionally pass arguments to the sosreport command in order to run only the specific plugins I want, e.g.

    $ juju sos -d ~/sosreport -- -b -o juju,maas,nova-compute

As usual, contributions are welcome, and installation instructions are located in the README.

Mario Limonciello: IR Receiver extension for Ambilight raspberry pi clone

Planet Ubuntu - Wed, 2014-04-23 05:48
After working with my ambilight clone for a few days, I discovered the biggest annoyance was that it wouldn't turn off after turning off the TV.  I had some ideas on how I could remotely trigger it from the phone or from an external HTPC but I really wanted a self contained solution in case I decided to swap the HTPC for a FireTV or a Chromecast.

This brought me to trying to do it directly via my remote.  My HTPC uses a mceusb, so I was tempted to just get another mceusb for the pi.  That would have been overkill, though: the pi has tons of unused GPIOs, so it can be done far more simply (and cheaply).

I looked into it and discovered that someone actually already wrote a kernel module that directly controls an IR sensor on a GPIO.  The kernel module is based off the existing lirc_serial module, but adapted specifically for the raspberry pi.  (See for more information)

Hardware

All that's necessary is a 38 kHz IR sensor.  You'll spend under $5 on one of them on Amazon (plus some shipping), or you can get one from Radio Shack if you want something quick and local.  I spent $4.87 on one at my local Radio Shack.

The sensor is really simple: 3 pins, all of which are available in the pi's header.  One goes to the 3.3V rail, one to ground, and one to a spare GPIO.  There are a few places on the header that you can use for each; just make sure you match up the pinout to the sensor you get.  I chose to use GPIO 22 as it's most convenient for my lego case.  The lirc_rpi module defaults to GPIO 18.

Some notes to keep in mind:

  1. While soldering it, be cognizant of which way you want the sensor to face so that it can be accessed from the remote.  
  2. Remember that you are connecting to 3.3V and Ground from the Pi header.  The ground connection won't be the same as your rail that was used to power the pi if you are powering via USB.  
  3. The GPIO pins are not rated for 5V, so be sure to connect to the 3.3V.

LIRC is available directly in the raspbian repositories.  Install it like this:

# sudo apt-get install lirc

Manually load the module so that you can test it.
# sudo modprobe lirc_rpi gpio_in_pin=22
Now use mode2 to test that it's working.  Once you run the command, press some buttons on your remote.  You should see output about space, pulse and other stuff.  Once you're satisfied, press ctrl-c to exit.
# mode2 -d /dev/lirc0
Now, add the modules that need to be loaded to /etc/modules.  If you are using a different GPIO than the default of 18, specify it here again.  This will make sure that lirc_rpi loads on boot.

lirc_rpi gpio_in_pin=22

Now modify /etc/lirc/hardware.conf to match this configuration to make it work for the rpi:
/etc/lirc/hardware.conf:

# /etc/lirc/hardware.conf
#
# Arguments which will be used when launching lircd
LIRCD_ARGS="--uinput"

# Don't start lircmd even if there seems to be a good config file
#START_LIRCMD=false

# Don't start irexec, even if a good config file seems to exist.
#START_IREXEC=false

# Try to load appropriate kernel modules
LOAD_MODULES=true

# Run "lircd --driver=help" for a list of supported drivers.
DRIVER="default"
# usually /dev/lirc0 is the correct setting for systems using udev
DEVICE="/dev/lirc0"
MODULES="lirc_rpi"

# Default configuration files for your hardware if any
LIRCD_CONF=""
LIRCMD_CONF=""
Next, we'll record the buttons that you want the pi to trigger the backlight toggle on.  I chose to do it on the event of turning the TV on or off.  For me I actually have a harmony remote that has separate events for "Power On" and "Power Off" available.  So I chose to program KEY_POWER and KEY_POWER2.  If you don't have the codes available for both "Power On" and "Power Off" then you can just program "Power Toggle" to KEY_POWER.
# irrecord -d /dev/lirc0 ~/lircd.conf
Once you have the lircd.conf recorded, move it into /etc/lirc to overwrite /etc/lirc/lircd.conf, and start lirc:

# sudo mv /home/pi/lircd.conf /etc/lirc/lircd.conf
# sudo /etc/init.d/lirc start
With lirc running, you can verify that it's properly recognizing your key event using the irw command.  Once irw is running, press the button on the remote and make sure your pi recognizes it.  When you're done, press ctrl-c to exit.
# irw
Now that you've validated the pi can recognize the command, it's time to tie it to an actual script.  Create /home/pi/.lircrc with contents like this:
/home/pi/.lircrc:

begin
    button = KEY_POWER
    prog = irexec
    repeat = 0
    config = /home/pi/ off
end

begin
    button = KEY_POWER2
    prog = irexec
    repeat = 0
    config = /home/pi/ on
end
My script looks like this:
#!/bin/sh
ARG=toggle
if [ -n "$1" ]; then
    ARG=$1
fi
RUNNING=$(pgrep hyperion-v4l2)
if [ -n "$RUNNING" ]; then
    if [ "$ARG" = "on" ]; then
        exit 0
    fi
    pkill hyperion-v4l2
    hyperion-remote --color black
    exit 0
fi
if [ "$ARG" = "off" ]; then
    hyperion-remote --color black
    exit 0
fi
# spawn hyperion remote before actually clearing channels to prevent extra flickers
hyperion-v4l2 --crop-height 30 --crop-width 10 --size-decimator 8 --frame-decimator 2 --skip-reply --signal-threshold 0.08 &
hyperion-remote --clearall

To test, run irexec and then press your remote button.  With any luck irexec will launch the toggle script and change your LED status.
# irexec
Lastly, you need to add irexec to your /etc/rc.local so that it starts when the pi boots.  Make sure you put these lines before the exit 0:
/etc/rc.local:

su pi -c "irexec -d"
su pi -c "/home/pi/ off"
Reboot your pi, and make sure everything works together.  
# sudo reboot

Charles Profitt: Ubuntu 14.04: Subtle shades of success

Planet Ubuntu - Wed, 2014-04-23 03:10

I just completed upgrading four computers to Ubuntu 14.04 tonight. My testing machine has been running 14.04 since the early alpha phase, but in the last two days I upgraded my work Lenovo W520, my personal Lenovo T530, and the self-assembled desktop with a Core 2 Duo and an Nvidia 8800 GTS that I handed down to my son.

Confidence In Ubuntu
On Friday of this week I will be involved in delivering training to a group of Boy Scout leaders at a Wood Badge course. I will use my primary laptop, the T530, to give a presentation and produce the Gilwell Gazette. I completed a great deal of prep work on Ubuntu 13.10, and if I did not have complete confidence in Ubuntu 14.04 I would have waited until after the weekend to upgrade. I needed to be confident that the multi-monitor functionality would work, and that documents produced in an earlier version of LibreOffice would not suddenly change their page layouts. In short, I was depending on Ubuntu being dependable and solid more than I usually do.

Subtle Changes Add Flexibility and Polish
Ubuntu added some very small tweaks that truly add to the overall user experience. The borderless windows, new lock screen, and smaller minimum size of launcher icons all add up to slight, but pleasant changes.

Here is a screen shot of the 14.04 desktop on the Lenovo T530.

14.04 desktop

Daniel Pocock: Automatically creating repackaged upstream tarballs for Debian

Planet Ubuntu - Tue, 2014-04-22 20:34

One of the less exciting points in the day of a Debian Developer is the moment they realize they have to create a repackaged upstream source tarball.

This is often a process that they have to repeat on each new upstream release too.

Wouldn't it be useful to:

  • Scan all the existing repackaged upstream source tarballs and diff them against the real tarballs to catalog the things that have to be removed and spot patterns?
  • Operate a system that automatically produces repackaged upstream source tarballs for all tags in the upstream source repository, or for all new tarballs in the upstream download directory? The DD could then take any of them and package it when they want to, with less manual effort.
  • Apply any insights from this process to detect non-free content in the rest of the Debian archive and when somebody is early in the process of evaluating a new upstream project?
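The first idea above, cataloguing what gets removed, can be sketched in a few lines of shell. This is my illustration, not code from the project, and the tarball names in the usage comment are hypothetical:

```shell
# Sketch: catalog the files that were stripped when an upstream tarball was
# repackaged, by comparing the file lists of the pristine tarball and the
# repackaged (+dfsg) tarball.
list_removed() {
    # $1 = pristine upstream tarball, $2 = repackaged tarball
    comm -23 <(tar -tf "$1" | sort) <(tar -tf "$2" | sort)
}

# Usage (hypothetical names):
#   list_removed foo_1.0.orig.tar.gz foo_1.0+dfsg.orig.tar.gz
```

Running this across the archive and clustering the output would reveal the patterns (minified JavaScript, bundled binaries, and so on) mentioned above.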
Google Summer of Code is back

One of the Google Summer of Code projects this year involves recursively building Java projects from their source. Some parts of the project, such as repackaged upstream tarballs, can be generalized for things other than Java. Web projects including minified JavaScript are a common example.

Andrew Schurman, based near Vancouver, is the student selected for this project. Over the next couple of weeks, I'll be starting to discuss the ideas in more depth with him. I keep on stumbling on situations where repackaged upstream tarballs are necessary and so I'm hoping that this is one area the community will be keen to collaborate on.

Martin Pitt: Booting Ubuntu with systemd: Test packages available

Planet Ubuntu - Tue, 2014-04-22 16:54

On the last UDS we talked about migrating from upstart to systemd to boot Ubuntu, after Mark announced that Ubuntu will follow Debian in that regard. There’s a lot of work to do, but it parallelizes well once developers can run systemd on their workstations or in VMs easily and the system boots up enough to still be able to work with it.

So today I merged our systemd package with Debian again, dropped the systemd-services split (which wasn’t accepted by Debian and will be unnecessary now), and put it into my systemd PPA. Quite surprisingly, this booted a fresh 14.04 VM pretty much right away (of course there’s no Plymouth prettiness). The main two things which were missing were NetworkManager and lightdm, as these don’t have an init.d script at all (NM) or it isn’t enabled (lightdm). Thus the PPA also contains updated packages for these two which provide a proper systemd unit. With that, the desktop is pretty much fully working, except for some details like cron not running. I didn’t yet go through /etc/init/*.conf with a fine-toothed comb to check which upstart jobs need to be ported; that’s now part of the TODO list.

So, if you want to help with that, or just test and tell us what’s wrong, take the plunge. In a 14.04 VM (or real machine if you feel adventurous), do

sudo add-apt-repository ppa:pitti/systemd
sudo apt-get update
sudo apt-get dist-upgrade

This will replace systemd-services with systemd, update network-manager and lightdm, and a few libraries. Up to now, when you reboot you’ll still get good old upstart. To actually boot with systemd, press Shift during boot to get the grub menu, edit the Ubuntu stanza, and append this to the linux line: init=/lib/systemd/systemd.

For the record, if pressing shift doesn’t work for you (too fast, VM, or similar), enable the grub menu with

sudo sed -i '/GRUB_HIDDEN_TIMEOUT/ s/^/#/' /etc/default/grub
sudo update-grub

Once you are satisfied that your system boots well enough, you can make this permanent by adding the init= option to /etc/default/grub (and possibly remove the comment sign from the GRUB_HIDDEN_TIMEOUT lines) and run sudo update-grub again. To go back to upstart, just edit the file again, remove the init= option, and run sudo update-grub again.
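The permanent setup in /etc/default/grub could look like the following fragment. This is a sketch based on a default Ubuntu install; your file will contain other settings, and only these two lines are relevant here:

```
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash init=/lib/systemd/systemd"
#GRUB_HIDDEN_TIMEOUT=0
```

Remember to run sudo update-grub after editing the file.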

I’ll be on the Debian systemd/GNOME sprint next weekend, so I feel reasonably well prepared now.

Jonathan Riddell: Favourite Twitter Post

Planet Ubuntu - Tue, 2014-04-22 12:11
KDE Project:

There's only 1 tool to deal with an unsupported Windows XP...

Michael Rooney: Easily sending postcards to your Kickstarter backers with Lob

Planet Ubuntu - Tue, 2014-04-22 12:00

Recently my friend Joël Franusic was stressing out about sending postcards to his Kickstarter backers and asked me to help him out. He pointed me to the excellent Lob service, which provides a very developer-friendly API around printing and mailing. We quickly had some code up and running that could take a CSV export of Kickstarter campaign backers, verify addresses, and trigger the sending of customizable, actual physical postcards to the backers.

We wanted to share the project such that it could help out other Kickstarter campaigns, so we put it on Github:

Below I explain how to install and use this script to use Lob to send postcards to your Kickstarter backers. The section after that explains how the script works in detail.

Using the script to mail postcards to your backers

First, you’ll need to sign up for a free account with Lob, then grab your “Test API Key” from the “Account Details” section of your account page. At this point you can use your sandbox API key to test away free of charge and view images of any resulting postcards. Once you are happy with everything, you can plug in credit card details and start using your “Live” API key. Second, you’ll need an export from Kickstarter for the backers you wish to send postcards to.

Now you’ll want to grab the kickstarter-lob code and get it set up.

These instructions assume that you’re using a POSIX compatible operating system like Mac OS X or Linux. If you’re using Mac OS X, open the “Terminal” program and type the commands below into it to get started:

git clone
cd kickstarter-lob
sudo easy_install pip # (if you don’t have pip installed already)
pip install -r requirements.txt
cp config_example.json config.json
open config.json

At this point, you should have a text editor open with the configuration information. Plug in the correct details, making sure to maintain quotes around the values. You’ll need to provide a few things besides an API key:

  • A URL of an image or PDF to be used for the front of the postcard.
    This means that you need to have your PDF available online somewhere. I suggest using Amazon’s S3 service to host your PDF.

  • A message to be printed on the back of the postcard (the address of the receiver will automatically show up here as well).

  • Your return address.

Now you are ready to give it a whirl. Run it like so, making sure to include the filename of your Kickstarter export:

$ python ~/Downloads/your-kickstarter-backer-report.csv
Fetching list of any postcards already sent...
Verifying addresses of backers...
warning: address verification failed for, cannot send to this backer.
Already sent postcards to 0 of 161 backers
Send to 160 unsent backers now? [y/N]: y
Postcard sent to Jeff Bezos! (psc_45df20c2ade155a9)
Postcard sent to Tim Cook! (psc_dcbf89cd1e46c488)
...
Successfully sent to 160 backers with 0 failures

The script will verify all addresses, and importantly, only send to addresses not already sent to. The script queries Lob to keep track of who you’ve already sent a postcard to; this important feature allows you to download new Kickstarter exports as people fill in or update their addresses. After downloading a new export from Kickstarter, just run the script against the new export, and the script will only send postcards to the new addresses.
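The dedupe logic is worth seeing in isolation. Below is a minimal, hedged sketch of the idea — not the script's actual code — where each address is normalized into a single string key and already-sent keys live in a set (the sample backers are invented for illustration):

```python
# Sketch of the dedupe idea: normalize each address into one string
# key, and keep the keys of already-sent postcards in a set.
def addr_key(addr):
    fields = ("name", "address_line1", "address_city", "address_zip")
    # Upper-casing makes comparison insensitive to capitalization
    # differences between Kickstarter's export and Lob's records.
    return "|".join(addr.get(f, "") for f in fields).upper()

def unsent(backers, sent_keys):
    """Return only the backers whose address key is not in sent_keys."""
    return [b for b in backers if addr_key(b) not in sent_keys]

already_sent = {addr_key({"name": "Ada", "address_line1": "1 Main St",
                          "address_city": "Springfield", "address_zip": "12345"})}
backers = [
    {"name": "ADA", "address_line1": "1 Main St",
     "address_city": "Springfield", "address_zip": "12345"},  # duplicate
    {"name": "Grace", "address_line1": "2 Oak Ave",
     "address_city": "Portland", "address_zip": "97201"},     # new backer
]
print([b["name"] for b in unsent(backers, already_sent)])  # ['Grace']
```

This is why re-running the script against a fresh Kickstarter export is safe: previously mailed addresses simply filter out.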

Before anything actually happens, you’ll notice that you’re informed of how many addresses have not yet received postcards and prompted to send them or not, so you can feel assured it is sending only as many postcards as you expect.

If you were to run it again immediately, you’d see something like this:

$ python ~/Downloads/your-kickstarter-backer-report.csv
Fetching list of any postcards already sent...
Verifying addresses of backers...
warning: address verification failed for, cannot send to this backer.
Already sent postcards to 160 of 161 backers
SUCCESS: All backers with verified addresses have been processed, you're done!

After previewing your sandbox postcards on Lob’s website, you can plug in your live API key in the config.json file and send real postcards at reasonable rates.

How the script works

This section explains how the script actually works. If all you wanted to do is send postcards to your Kickstarter backers, then you can stop reading now. Otherwise, read on!

Before you get started, take a quick look at the script’s source on GitHub:

We start by importing four Python libraries: “csv”, “json”, “lob”, and “sys”. Of those four libraries, “lob” is the only one that isn’t part of Python’s standard library. The “lob” library is installed by using the “pip install -r requirements.txt” command I suggest using above. You can also install “lob-python” using pip or easy_install.

#!/usr/bin/env python
import csv
import json
import lob
import sys

Next we define one class named “ParseKickstarterAddresses” and two functions, “addr_identifier” and “kickstarter_dict_to_lob_dict”.

“ParseKickstarterAddresses” is the code that reads in the backer report from Kickstarter and turns it into an array of Python dictionaries.

class ParseKickstarterAddresses:
    def __init__(self, filename):
        self.items = []
        with open(filename, 'r') as csvfile:
            reader = csv.DictReader(csvfile)
            for row in reader:
                self.items.append(row)

The “addr_identifier” function takes an address and turns it into a unique identifier, allowing us to avoid sending duplicate postcards to backers.

def addr_identifier(addr):
    return u"{name}|{address_line1}|{address_line2}|{address_city}|{address_state}|{address_zip}|{address_country}".format(**addr).upper()

The “kickstarter_dict_to_lob_dict” function takes a Python dictionary and turns it into a dictionary we can give to Lob as an argument.

def kickstarter_dict_to_lob_dict(dictionary):
    ks_to_lob = {'Shipping Name': 'name',
                 'Shipping Address 1': 'address_line1',
                 'Shipping Address 2': 'address_line2',
                 'Shipping City': 'address_city',
                 'Shipping State': 'address_state',
                 'Shipping Postal Code': 'address_zip',
                 'Shipping Country': 'address_country'}
    address_dict = {}
    for key in ks_to_lob.keys():
        address_dict[ks_to_lob[key]] = dictionary[key]
    return address_dict

The “main” function is where the majority of the logic for our script resides. Let’s cover that in more detail.

We start by reading in the name of the Kickstarter backer export file, loading our configuration file (“config.json”), and then configuring Lob with the API key from the configuration file:

def main():
    filename = sys.argv[1]
    config = json.load(open("config.json"))
    lob.api_key = config['api_key']

Next we query Lob for the list of postcards that have already been sent. You’ll notice that the “processed_addrs” variable is a Python “set”; if you haven’t used a set in Python before, it is sort of like an array that doesn’t allow duplicates. We only fetch 100 results from Lob at a time, and use a “while” loop to make sure that we get all of the results.

print("Fetching list of any postcards already sent...")
processed_addrs = set()
postcards = []
postcards_result = lob.Postcard.list(count=100)
while len(postcards_result):
    postcards.extend(postcards_result)
    postcards_result = lob.Postcard.list(count=100, offset=len(postcards))

Once we have fetched all of the postcards, we print out how many were found:

print("...found {} previously sent postcards.".format(len(postcards)))

Then we iterate through all of our results and add them to the “processed_addrs” set. Note the use of the “addr_identifier” function, which turns each address dictionary into a string that uniquely identifies that address.

for processed in postcards:
    # The attribute name below is an assumption based on the lob-python
    # client: each postcard carries the "to" address it was sent to.
    identifier = addr_identifier(
    processed_addrs.add(identifier)

Next we set up a handful of variables that will be used later on: configuration for the postcards that Lob will send, the addresses from the Kickstarter backers export file, and lists to keep track of who we’ve already sent postcards to and who we still need to send postcards to.

postcard_from_address = config['postcard_from_address']
postcard_message = config['postcard_message']
postcard_front = config['postcard_front']
postcard_name = config['postcard_name']
addresses = ParseKickstarterAddresses(filename)
to_send = []
already_sent = []

At this point, we’re ready to start validating addresses. The code below loops over every line in the Kickstarter backers export file and uses Lob to see if the address is valid.

print("Verifying addresses of backers...")
for line in addresses.items:
    to_person = line['Shipping Name']
    to_address = kickstarter_dict_to_lob_dict(line)
    try:
        to_name = to_address['name']
        to_address = lob.AddressVerify.verify(**to_address).to_dict()['address']
        to_address['name'] = to_name
    except lob.exceptions.LobError:
        msg = 'warning: address verification failed for {}, cannot send to this backer.'
        print(msg.format(line['Email']))
        continue

If the address is indeed valid, we check whether we’ve already sent a postcard to it. If so, the address is added to the “already_sent” list. Otherwise, it’s added to the list of addresses we still need “to_send” postcards to.

    if addr_identifier(to_address) in processed_addrs:
        already_sent.append(to_address)
    else:
        to_send.append(to_address)

Next we print out the number of backers we’ve already sent postcards to and check to see if we need to send postcards to anybody, exiting if we don’t need to send postcards to anybody.

nbackers = len(addresses.items)
print("Already sent postcards to {} of {} backers".format(len(already_sent), nbackers))
if not to_send:
    print("SUCCESS: All backers with verified addresses have been processed, you're done!")
    return

Finally, if we do need to send one or more postcards, we tell the user how many postcards will be mailed and then ask them to confirm that those postcards should be mailed:

query = "Send to {} unsent backers now? [y/N]: ".format(len(to_send))
if raw_input(query).lower() == "y":
    successes = failures = 0

If the user enters “Y” or “y”, then we start sending postcards. The call to Lob is wrapped in a “try/except” block: calls to the Lob library that raise a “LobError” exception are handled and counted as a “failure”. Other exceptions are not handled and will result in the script exiting with that exception.

for to_address in to_send:
    try:
        rv = lob.Postcard.create(to=to_address, name=postcard_name,
                                 from_address=postcard_from_address,
                                 front=postcard_front, message=postcard_message)
        # is the new postcard's identifier (e.g. "psc_45df20c2ade155a9")
        print("Postcard sent to {}! ({})".format(to_address['name'],
        successes += 1
    except lob.exceptions.LobError:
        msg = 'Error: Failed to send postcard to'
        print("{} for {}".format(msg, to_address['name']))
        failures += 1

Lastly, we print a message indicating how many messages were sent and how many failures we had.

    print("Successfully sent to {} backers with {} failures".format(successes, failures))
else:

(If the user pressed a key other than “Y” or “y”, this is the message that they’ll see)

    print("Okay, not sending to unsent backers.")

And there you have it: a short script that uses Lob to send postcards to your Kickstarter backers, sends only one postcard per address, and gracefully handles errors from Lob.

I hope that you’ve found this useful! Please let us know of any issues you encounter on GitHub, or send pull requests adding exciting new features. Most importantly, enjoy easily bringing smiles to your backers!

The Fridge: Ubuntu Weekly Newsletter Issue 364

Planet Ubuntu - Mon, 2014-04-21 21:30

Welcome to the Ubuntu Weekly Newsletter. This is issue #364 for the week April 14 – 20, 2014, and the full version is available here.

In this issue we cover:

This issue of the Ubuntu Weekly Newsletter is brought to you by:

  • Elizabeth Krumbach Joseph
  • Paul White
  • Emily Gonyer
  • Tiago Carrondo
  • Jose Antonio Rey
  • Jim Connett
  • And many others

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!

Except where otherwise noted, content in this issue is licensed under a Creative Commons Attribution-ShareAlike 3.0 License.

Michael Hall: Make Android apps Human with NDR

Planet Ubuntu - Mon, 2014-04-21 18:54

Ever since we started building the Ubuntu SDK, we’ve been trying to find ways of bringing the vast number of Android apps that exist over to Ubuntu. As with any new platform, there’s a chasm between Android apps and native apps that can only be crossed through the effort of porting.

There are simple solutions, of course, like providing an Android runtime on Ubuntu. On other platforms, these have been shown to present Android apps as second-class citizens that can’t benefit from a new platform’s unique features. Worse, they don’t provide a way for apps to gradually become first-class citizens, so the chasm between Android and native still exists, which means the vast majority of apps supported this way will never improve.

There are also complicated solutions, like code conversion, that try to translate Android/Java code into the native platform’s language and toolkit, preserving logic and structure along the way. But doing this right is such a monumental task that making a tool to do it is virtually impossible, and the amount of cleanup and checking an actual developer must do quickly rises to the same level of effort as a manual port. This approach also fails to take advantage of differences between the platforms, and will re-create the old way of doing things even when it doesn’t make sense on the new platform.

NDR takes a different approach: it doesn’t let you run your Android code on Ubuntu, nor does it try to convert your Android code to native code. Instead, NDR will re-create the general framework of your Android app as a native Ubuntu app, converting Activities to Pages, for example, to give you a skeleton project on which you can build your port. It won’t get you over the chasm, but it’ll show you the path to take and give you a head start on it. You will just need to fill it in with the logic code to make it behave like your Android app. NDR won’t provide any of the logic for you, and chances are you’ll want to do it slightly differently than you did in Android anyway, due to the differences between the two platforms.
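The skeleton-generation idea can be illustrated with a toy sketch. This is not NDR's actual code — the naming scheme and output format below are invented purely to show the Activities-to-Pages mapping described above:

```python
# Toy illustration of the NDR idea: map Android Activity names to
# Ubuntu SDK Page stubs for the developer to fill in with logic.
def activity_to_page(activity_name):
    # "SettingsActivity" -> "SettingsPage"
    suffix = "Activity"
    if activity_name.endswith(suffix):
        activity_name = activity_name[:-len(suffix)]
    return activity_name + "Page"

def page_stub(page_name):
    # Emit a bare QML Page skeleton (an invented output format).
    qml_id = page_name[0].lower() + page_name[1:]
    return 'Page {\n    id: %s\n    title: i18n.tr("%s")\n}\n' % (qml_id, page_name)

for activity in ["MainActivity", "SettingsActivity"]:
    print(page_stub(activity_to_page(activity)))
```

A real tool would of course read the activity list from the app's manifest and layout XML rather than a hard-coded list.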

To test NDR during development, I chose the Telegram app because it was open source, popular, and largely used Android’s layout definitions and components. NDR will be less useful for apps such as games that use their own UI components and draw directly to a canvas, but it’s pretty good at converting apps that use Android’s components and UI builder.

After only a couple of days of hacking I was able to get NDR to generate enough of an Ubuntu SDK application that, with a little manual cleanup, it was recognizably similar to the Android app.

This proves, in my opinion, that bootstrapping an Ubuntu port based on Android source code is not only possible, but is a viable way of supporting Android app developers who want to cross that chasm and target their apps for Ubuntu as well. I hope it will open the door for high-quality, native Ubuntu app ports from the Android ecosystem. There is still much more NDR can do to make this easier, and people with more Android experience than me (that would be none) would certainly make it a more powerful tool. So I’m making it a public, open source project on Launchpad and inviting anybody who has an interest in this to help me improve it.

Rick Spencer: Adding Interactivity to a Map with Popovers

Planet Ubuntu - Mon, 2014-04-21 15:30
On Friday I started my app "GetThereDC". I started by adding the locations of all of the Bikeshare stations in DC to a map. Knowing where the stations are is great, but it's a bummer when you go to a station and there are no bikes, or there are no empty parking spots. Fortunately, that exact information is in the XML feed, so I just need a way to display it.  
The way I decided to do it is to make the POI (the little icons for each station on the map) clickable and, when the user clicks the POI, to use the Popover feature in the Ubuntu Components toolkit to display the data.
Make the POI Clickable

When you want to make anything "clickable" in QML, you just use a MouseArea component. Remember that each POI is constructed as a delegate in the MapItemView as an Image component. So all I have to do is add a MouseArea inside the Image and respond to the click event. Now my image looks like this:
sourceItem: Image {
    id: poiImage
    source: "images/bike_poi.png"
    MouseArea {
        anchors.fill: parent
        onClicked: {
            print("The POI was clicked! ")
        }
    }
}
Add the Roles to the XmlListModel

I already know that I'll want something to use as a title for each station, the address, as well as the number of bikes and the number of parking slots. Looking at the XML I can see that the "name" property is the address, so that's a bonus. Additionally, I can see the other properties I want are called "nbBikes" and "nbEmptyDocks". So, all I do is add those three new roles to the XmlListModel that I constructed before:
XmlListModel {
    id: bikeStationModel
    source: ""
    query: "/stations/station"
    XmlRole { name: "lat"; query: "lat/string()"; isKey: true }
    XmlRole { name: "lng"; query: "long/string()"; isKey: true }
    XmlRole { name: "name"; query: "name/string()"; isKey: true }
    XmlRole { name: "available"; query: "nbBikes/string()"; isKey: true }
    XmlRole { name: "freeSlots"; query: "nbEmptyDocks/string()"; isKey: true }
}
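As a sanity check, the same three fields can be pulled out of the feed with a few lines of plain Python. This is a hedged sketch, not part of the app: the sample XML below is invented, but it mimics the element names (name, nbBikes, nbEmptyDocks) that the XmlRole queries above rely on:

```python
import xml.etree.ElementTree as ET

# Invented sample mimicking the structure of the bike share feed.
sample = """<stations>
  <station>
    <lat>38.8575</lat><long>-77.0515</long>
    <name>14th &amp; D St SE</name>
    <nbBikes>5</nbBikes><nbEmptyDocks>6</nbEmptyDocks>
  </station>
</stations>"""

root = ET.fromstring(sample)
stations = [{
    "name": s.findtext("name"),
    "available": int(s.findtext("nbBikes")),
    "freeSlots": int(s.findtext("nbEmptyDocks")),
} for s in root.iter("station")]
print(stations)  # one station dictionary with bikes and free slots
```

If the queries in the QML model ever come up empty, checking the raw feed this way quickly shows whether the element names changed.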
Make a Popover Component

The Ubuntu SDK offers some options for displaying additional information. In old school applications these might be dialog boxes or message boxes. For the purposes of this app, Popover looks like the best bet. I suspect that over time the popover code might get a little complex, so I don't want it to be too deeply nested inside the MapItemView, where the code would become unwieldy. So, instead, I decided to add a file called BikeShareStationPopover.qml to the components sub-directory. Then I copied and pasted the sample code in the documentation to get started.

To make a popover, you start with a Component tag, and then add a Popover tag inside that. Then, you can put pretty much whatever you want into that Popover. I am going to go with a Column and use ListItem components because I think it will look nice, and it's the easiest way to get started. Since I already added the XmlRoles, I'll just use those roles in the construction of each popover.

Since I know that I will be adding other kinds of POI, I decided to add a Capital Bikeshare logo to the top of the list so users will know what kind of POI they clicked. I also added a close button just to be certain that users don't get confused about how to go back to the map. So, at the end of the day, I just have a column with ListItems:
import QtQuick 2.0
import Ubuntu.Components 0.1
import Ubuntu.Components.ListItems 0.1 as ListItem
import Ubuntu.Components.Popups 0.1

Component {
    id: popoverComponent
    Popover {
        id: popover
        Column {
            id: containerLayout
            anchors {
                left: parent.left
                right: parent.right
            }
            ListItem.SingleControl {
                control: Image {
                    source: "../images/CapitalBikeshare_Logo.jpg"
                }
            }
            ListItem.Header { text: name }
            ListItem.Standard { text: available + " bikes available" }
            ListItem.Standard { text: freeSlots + " parking spots available" }
            ListItem.SingleControl {
                highlightWhenPressed: false
                control: Button {
                    text: "Close"
                    onClicked: PopupUtils.close(popover)
                }
            }
        }
    }
}
Make the Popover Component Appear on Click

So, now that I have made the component, I just need to add it to the MapItemView and make it appear on click. I add a BikeShareStationPopover tag with an id to the MapQuickItem delegate, and change the onClicked handler of the MouseArea to open the popover:
delegate: MapQuickItem {
    id: poiItem
    coordinate: QtPositioning.coordinate(lat, lng)
    anchorPoint.x: poiImage.width * 0.5
    anchorPoint.y: poiImage.height
    z: 9
    sourceItem: Image {
        id: poiImage
        source: "images/bike_poi.png"
        MouseArea {
            anchors.fill: parent
            // The open() call here is a reconstruction: it opens the
            // popover component defined in BikeShareStationPopover.qml
            onClicked:, poiItem)
        }
    }
    BikeShareStationPopover {
        id: bikeSharePopover
    }
}
And when I run the app, I can click on any POI and see the info I want! Easy!

Code is here

Mario Limonciello: Ambilight clone using Raspberry Pi

Planet Ubuntu - Mon, 2014-04-21 05:39
Recently I came across and thought it was pretty neat. The lights were expensive, however, and required your phone or tablet to be in use every time you wanted them on, which seemed sub-optimal.

I've been hunting for a useful project to do with my Raspberry Pi, and found out that there were two major projects centered around getting something similar set up.


I set out to put a similar setup together for my TV.  I purchased:

Hardware Setup

Once everything arrived, I soldered a handful of wires to a prototyping board so that I could house more of the pieces in the Raspberry Pi case. I used a cut-up micro USB cord to provide power from the 5V rail and ground to the Pi itself and then also to one end of the 4-pin JST adapter. The same 5V 10A power supply is used to power both of them.
Prototyping board, probably this size is overkill,
but I have flexibility for future projects to add on now.

Once I got everything put into the pi properly, I double checked all the connections and closed up the case.
My pi case with the top removed and an
inset put in for holding the proto boardWhole thing assembledI proceeded to do the TV.  I have a 46" set, which works out to 18 LEDs on either side and 30 LEDs on the top and bottom.  I cut up the LED strips and used double sided tape to affix to the TV.  Once the LED strips are cut up you have to solder 4 pins from the out end of one strip to the in end of another strip.  I'd recommend looking for some of the prebuilt L corner strips if you do this.  I didn't use them and it was a pain to strip and hold such small wires in place to solder in the small corners.

Back of TV w/ LEDs attached
Corner with wires soldered on
Software Setup

Once all the HW was together, I proceeded to set up the software. I originally had an up-to-date version of Raspbian wheezy installed, which included an updated kernel (version 3.10). I managed to set everything up with it except the grabber, but then discovered that there were problems with the USB grabber: plugging it in caused the box to kernel panic. The driver for the USB grabber made it upstream in kernel version 3.11, so I expected it should be usable in 3.10 with some simple backporting tweaks, but I didn't narrow it down entirely.
I did find out that kernel 3.6.11 did work with an earlier version of the driver however, so I re-did my install using an older snapshot of raspbian.  I managed to get things working there, but would like to iron out the problems causing a kernel panic at some point.
USB Grabber instructions

The USB grabber I got is dirt cheap but not based on the really common chipsets already supported by the kernel versions in Raspbian, so it requires some extra work.
  1. Install Raspbian snapshot from 2013-07-26.  Configure as desired.
  2. git clone ambi-tv
  3. cd ambi-tv/misc && sudo sh ./
  4. cd usbtv-driver && make
  5. sudo mkdir /lib/modules/3.6.11+/extra
  6. sudo cp usbtv.ko /lib/modules/3.6.11+/extra/
  7. sudo depmod -a
Hyperiond Instructions

After getting the grabber working, installing Hyperion is a piece of cake. This will set up hyperiond to start on boot.
  1. wget -N
  2. sudo sh ./
  3. Edit /etc/modprobe.d/raspi-blacklist.conf using nano.  Comment out the line with blacklist spi-bcm2708
  4. sudo reboot
Hyperion configuration file

From another PC that has Java (OpenJDK 7 works on Ubuntu 14.04):
  1. Visit and fetch the jar file.
  2. Run it to configure your LEDs.
  3. From the defaults, I particularly had to change the LED type and the number of LEDs around the TV.
  4. My LEDs were originally listed at RGB but I later discovered that they are GRB.  If you encounter problems later with the wrong colors showing up, you can change them here too.
  5. Save the conf file and scp it into the /etc directory on your pi
  6. sudo /etc/init.d/hyperiond restart
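The RGB-versus-GRB mix-up mentioned in step 4 is just a channel-order swap. Here's a tiny illustrative sketch (hyperion itself handles the reordering via its configuration, so this is only to show what goes wrong when the order is wrong):

```python
# To display an (r, g, b) color on a strip wired in GRB order, the
# bytes must be sent as (g, r, b). Without this reordering, the strip
# reads the red byte as green, so pure red lights up green.
def rgb_to_grb(color):
    r, g, b = color
    return (g, r, b)

print(rgb_to_grb((255, 0, 0)))  # (0, 255, 0): the byte order to send for red
```

If your test pattern shows red where you expect green (and vice versa), this channel swap is almost certainly the culprit.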
Test the LEDs
  1. Plug in the LEDs and install the test application at
  2. Try out some of the patterns and color wheel to make sure that everything is working properly.  It will save you problems later diagnosing grabber problems if you know things are sound here (this is where I found my RGB/GRB problem).
Test pattern
Set up things for Hyperion-V4L2

I created a script in my home directory that runs the V4L2 capture application and sets the LEDs accordingly; invoking it again turns the LEDs off. As a future modification I intend to control this with my Harmony remote or some other method. If someone comes up with something cool, please share.
#!/bin/sh
ARG=toggle
if [ -n "$1" ]; then
        ARG=$1
fi
RUNNING=$(pgrep hyperion-v4l2)
if [ -n "$RUNNING" ]; then
        if [ "$ARG" = "on" ]; then
                exit 0
        fi
        pkill hyperion-v4l2
        exit 0
fi
hyperion-v4l2 --crop-height 30 --crop-width 10 --size-decimator 8 --frame-decimator 2 --skip-reply --signal-threshold 0.08 &
That's the exact script I use to run things. I had to modify the crop height from the defaults given in the directions elsewhere to avoid flicker at the top. To diagnose problems here, I'd recommend using the --screenshot argument of hyperion-v4l2 and examining the output.
Once you've got it working well, add it to /etc/rc.local so it starts on boot:
su pi -c /home/pi/
Test It all together

Everything should now be working.
Here's my working setup:

Lubuntu Blog: Lubuntu 14.04

Planet Ubuntu - Sun, 2014-04-20 19:52
First of all, my apologies for disappearing for a few days due to personal reasons. But it's here: Lubuntu 14.04, codename Trusty Tahr. The missing links for PowerPC machines have been recovered. Feel free to go to the Downloads section and grab it. If you need more info, check the release page.

Ubuntu Classroom: Ubuntu Open Week for Trusty: Starting Soon!

Planet Ubuntu - Sun, 2014-04-20 03:15

And the Ubuntu Open Week for this cycle is just around the corner! This will be three days full of excitement, where you will be able to learn what different teams and people in the community do. Whether you are a developer, a designer, a tester, or a community member, this is the right event if you want to get involved with the community and are looking for a starting point.

The event will take place from April 22nd to April 24th 2014, from 15 to 19 UTC each day. During these three days we will have people from various teams, such as the Server, Documentation, and Juju teams. There are twelve different sessions scheduled, so make sure to find which ones interest you and write the times down in your calendars! The full schedule can be found at the Open Week Wiki page.

All sessions will take place at #ubuntu-classroom and #ubuntu-classroom-chat on (click here to join from your web browser). There are three sessions in the schedule which are labeled with the [ON AIR!] tag, which means the session will be streamed live at the Ubuntu on Air! webpage.

In case you cannot attend the event, logs will be linked in the schedule as soon as they become available. For On Air! sessions, recordings will be available at the Ubuntu on Air! YouTube Channel. Hope to see you all there!

