news aggregator

Elizabeth K. Joseph: The Ubuntu Weekly Newsletter needs you!

Planet Ubuntu - Thu, 2014-08-14 04:41

On Monday we released Issue 378 of the Ubuntu Weekly Newsletter. The newsletter has thousands of readers across various formats from wiki to email to forums and discourse.

As we creep toward the 400th issue, we’ve been running a bit low on contributors. Thanks to Tiago Carrondo and David Morfin for pitching in these past few weeks while they could, but the bulk of the work has fallen to José Antonio Rey and myself and we can’t keep this up forever.

So we need more volunteers like you to help us out!

We specifically need folks to let us know about news throughout the week (email them to us) and to help write summaries over the weekend. All links and summaries are stored in a Google Doc, so you don’t need to learn any special documentation formatting or revision control software to participate. Plus, everyone who participates is welcome to add their name to the credits!

Summary writers. Summary writers receive an email every Friday evening (or early Saturday) with a link to the collaborative news links document for the past week, which lists all the articles that need 2-3 sentence summaries. These people are vitally important to the newsletter. The time commitment is limited, and it’s easy to get started from the first weekend you volunteer. No need to be shy about your writing skills: we have style guidelines to help you on your way, and all summaries are reviewed before publishing, so it’s easy to improve as you go.

Interested? Email us and we’ll get you added to the list of folks who are emailed each week, and you can help as you have time.

Costales: Destination Ubuconla 2014 - #1 Cartagena de Indias

Planet Ubuntu - Thu, 2014-08-14 04:32
Stepping off the plane, we were hit by a wave of muggy heat... Of course :) We're in the heart of the Caribbean!

At the airport exit we were met by José Luís Ahumada (from now on Bart, as he wants us to call him), ever helpful and a great host.

After dropping our gear at the hostel we went over to his 'office': Vivelab, an incubator that provides technology facilities for entrepreneurs, where Sergio Meneses was working; I finally got to meet him in person after so many years of collaborating on Ubuntu ;)
Breathing in the spirit of tech entrepreneurship at Vivelab
Leaving Vivelab we bumped into Rodny by chance, a super-enthusiastic colleague who will be working to make this the best Ubuconla possible.

We headed into the city centre, and I have to say that Cartagena has to be lived: wandering through its nooks and crannies, dreaming of times past among its old stone buildings crowned with spectacular wooden balconies, immersing yourself in the hubbub of its daily life, with its markets, chaotic traffic, and street vendors with the most ingenious tricks... as I was saying, you have to live Cartagena! :D

After changing money we had lunch at a restaurant I loved: grilled fish and some delicious guanábana and lulo juices.

Sergio, Bart and Costales. Over the after-lunch conversation, Sergio had to leave for work and Bart told us about the extraordinary work of CaribeMesh, a Cartagena organization bringing the Internet to the most disadvantaged neighbourhoods, places the network of networks doesn't reach and where there's no commercial interest in it ever arriving. Hats off to CaribeMesh! ;D

After lunch and that interesting chat with Bart, we went to a travel agency to book the following night at Las Islas del Rosario. Just as on our trips to Peru, haggling is the rule here too. With the next day organized, we asked Bart to take us back to the hostel, since jet lag was still taking its toll on us.

After a nap, with our strength recovered, Sergio and Bart came by around 20:30 to go out for dinner.
After the huge lunch we weren't very hungry, and a single pizza in an almost-empty shopping centre was enough for us. Whereas in Spain we tend to have lunch around 2 and dinner around 9:30, here lunch is usually around 12 and dinner around 7 :O

Back at the hostel I chatted for almost an hour with Sergio in his room (he's staying in the room next to ours). It was striking to compare views on so many topics, mainly the LoCo Council: by email it would have meant exchanging dozens of messages and getting nowhere, whereas in person, face to face is still best :)

Keep reading for more about this trip.

Sergio Meneses: Everything Ready for the Ubucon LatinAmerica! :D

Planet Ubuntu - Thu, 2014-08-14 00:08

UbuconLA T-shirts

UbuconLA Labels

Working Hard!

More news soon….

Harald Sitter: Volume Configuration

Planet Ubuntu - Wed, 2014-08-13 21:21

Because sometimes a widget in the system tray is not enough:

A volume and sound routing configuration module for the Plasma workspace. Based on PulseAudio, using QtQuick2 (because, why not?). Made in Randa.

Disclaimer: this is very much a work in progress thing.

Daniel Pocock: WebRTC in CRM/ERP solutions at xTupleCon 2014

Planet Ubuntu - Wed, 2014-08-13 18:29

In October this year I'll be visiting the US and Canada for some conferences and a wedding. The first event will be xTupleCon 2014 in Norfolk, Virginia. xTuple make the popular open source accounting and CRM suite PostBooks. The event kicks off with a keynote from Apple co-founder Steve Wozniak on the evening of October 14. On October 16 I'll be making a presentation about how JSCommunicator makes it easy to add click-to-call real-time communications (RTC) to any other web-based product without requiring any browser plugins or third party softphones.

Juliana Louback has been busy extending JSCommunicator as part of her Google Summer of Code project. When it's finished, we hope to quickly roll out the latest version of JSCommunicator to other sites, including the WebRTC portal for the Debian Developer community. Juliana has also started working on wrapping JSCommunicator into a module for the new xTuple / PostBooks web-based CRM. Versatility is one of the main goals of the JSCommunicator project, and it will be exciting to demonstrate this in action at xTupleCon.

xTupleCon discounts for developers

xTuple has advised that they will offer a discount to other open source developers and contributors who wish to attend any part of their event. For details, please contact xTuple directly through this form. Please note it is getting close to their deadline for registration and discounted hotel bookings.

Potential WebRTC / JavaScript meet-up in Norfolk area

For those who don't or can't attend xTupleCon there has been some informal discussion about a small WebRTC-hacking event at some time on 15 or 16 October. Please email me privately if you may be interested.

Kubuntu Wire: Editor on Kubuntu

Planet Ubuntu - Wed, 2014-08-13 16:24

There is a website which covers open source news and has long been KDE friendly; just yesterday, for example, it was leading with the story of Plasma 5 being updated. Recently I got an e-mail from its editor:

From: Swapnil Bhartiya
To: Jonathan Riddell
Subject: Kubuntu Feedback

Hi Jonathan,

Just wanted to tell you that I have been using Kubuntu as my primary OS
for a while now and I am really impressed how stable and bug-free it has
become. Earlier random crashes was a normal thing for Kubuntu so I never
used it as main distros. Though I still triple boot with openSUSE and
Arch all running Plasma, I really started to love what you are doing
with Kubuntu.

Keep up the great work and let us know what else I should do to further
promote Plasma/KDE

Swapnil Bhartiya

Jonathan Riddell: Upstream and Downstream: why packaging takes time

Planet Ubuntu - Wed, 2014-08-13 16:18
KDE Project:

Here in the KDE office in Barcelona some people spend their time on purely upstream KDE projects, and some of us are primarily interested in making distros work, which means our users can get all the stuff we make. I've been asked why we don't just automate the packaging and go and do more productive things. One view of working on a distro like Kubuntu is that it's just a way to package up the hard work done by others and take all the credit. I don't deny that, but there's quite a lot to the packaging of all that hard work; for a start, there's a lot of it these days.

"KDE" used to be released once every nine months or less frequently. But yesterday I released the first bugfix update to Plasma, to make that happen I spent some time on Thursday with David making the first update to Frameworks 5. But Plasma 5 is still a work in progress for us distros, let's not forget about KDE SC 4.13.3 which Philip has done his usual spectacular job of updating in the 14.04 LTS archive or KDE SC 4.14 betas which Scarlett has been packaging for utopic and backporting to 14.04 LTS. KDE SC used to be 20 tars, now it's 169 and over 50 langauge packs.


If we were packaging it without any automation, as used to be done, it would take an age, but of course we do automate the repetitive tasks; the KDE SC 4.13.97 status page shows all the packages and highlights obvious problems. But with 169 tars even running the automated script takes a while, and then you have to fix any patches that no longer apply. We have policies to dissuade carrying patches: any patches should be upstream in KDE or on their way upstream, but sometimes it's unavoidable that we have some to maintain, and these often need small changes for each upstream release.


Much of what we package are libraries, and if one small bit changes incompatibly in a library, any application which uses it will crash. This is ABI, and the rules for binary compatibility in C++ are nuts. Not infrequently someone in KDE will alter a library's ABI without realising. So we maintain symbols files listing all the exported symbols. These can often feel like more trouble than they're worth: they need to be updated when a new version of GCC produces different symbols, or when symbols disappear and on investigation turn out to be marked private, so nobody could have been using them anyway. But if you miss a change and apps start crashing, as nearly happened in KDE PIM last week, then people get grumpy.
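As an illustration, a dpkg symbols file is just a list of mangled C++ symbols, each paired with the earliest package version that provided it; the library name and symbols below are made up:

```
libfoo.so.5 libfoo5 #MINVER#
 _ZN3Foo7refreshEv@Base 5.0.0
 _ZN3Foo9setVolumeEi@Base 5.1.0
 _ZN3FooC1EP7QObject@Base 5.0.0
```

At build time dpkg-gensymbols diffs the symbols the freshly built library actually exports against this file, so a symbol that vanishes (or appears) shows up as a packaging warning rather than a runtime crash.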


Debian, and so Ubuntu, documents the copyright licence of every file in every package. This is a very slow and tedious job, but it's important that it's done both upstream and downstream, because if you don't, people won't want to use your software in a commercial setting, and at worst you could end up in court. So I maintain the licensing policy and not infrequently have to fix bits which are incorrectly or unclearly licenced, and answer questions; just today I was reviewing for Eike whether a kcm in Frameworks had to be LGPL licenced. We write a copyright file for every package, and again this can feel like more trouble than it's worth; there's no easy way to automate it, but by some readings of the licence texts it's necessary in order to comply with them, and it's just good practice. It also means that if someone starts making claims like requiring licensing for already-distributed binary packages, I'm in an informed position to correct such nonsense.
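A minimal machine-readable debian/copyright in the DEP-5 format looks roughly like this; the package, names and licence split here are invented for illustration, and a real file would also carry the full licence texts in License paragraphs:

```
Format: https://www.debian.org/doc/packaging-manuals/copyright-format/1.0/
Upstream-Name: libfoo

Files: *
Copyright: 2014 Jane Upstream <jane@example.com>
License: LGPL-2.1+

Files: cmake/*
Copyright: 2013 Some Contributor <someone@example.com>
License: BSD-3-clause
```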


When we were packaging KDE Frameworks from scratch we had to find a description of each Framework. Despite policies for metadata, some were quite underdescribed, so we had to go and search for a sensible description for them. In fact, not infrequently we'll need to package a new library which doesn't even have a sensible paragraph describing what it does. We need to be able to give each package something of a human face.


A recent addition to the world of .deb packaging is MultiArch, which allows i386 packages to be installed on amd64 computers, as well as some even more obscure combinations (powerpc on ppc64el, anyone?). This lets you run Skype on your amd64 computer without messy kludges like the ia32-libs package. However, it needs quite a lot of attention from packagers of libraries: marking which packages are multiarch, which depend on other multiarch or architecture-independent packages, and so on. Even after packaging KDE Frameworks I'm not entirely comfortable doing it.
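For a library package, the marking itself is only a couple of extra fields in debian/control (the package shown is hypothetical); deciding which packages can safely carry them is the time-consuming part:

```
Package: libfoo5
Architecture: any
Multi-Arch: same
Pre-Depends: ${misc:Pre-Depends}
Depends: ${shlibs:Depends}, ${misc:Depends}
```

Multi-Arch: same tells dpkg that the amd64 and i386 builds of libfoo5 may be installed together, provided any files they share are byte-for-byte identical.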

Splitting up Packages

We spend lots of time splitting up packages. When, say, Calligra gets released, it's all in one big tar, but you don't want all of it on your system: you may just want to write a letter in Calligra Words, while Krita ships lots of image and other data files which take up space you don't care for. So for each new release we have to work out which of the installed files go into which .deb package. It takes time, and occasionally we even get it wrong, but if you don't want heaps of stuff on your computer you don't need, it has to be done. It's also needed for library upgrades: if there's a new version of libfoo and not all the programs have been ported to it, you can install libfoo1 and libfoo2 on the same system without problems. That's not possible with distros which don't split up packages.
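In debhelper-based packaging the split is driven by per-package .install files, each listing which built files land in which .deb; the package and file names here are illustrative:

```
# debian/libfoo5.install: just the runtime library
usr/lib/*/libfoo.so.5*

# debian/libfoo-dev.install: headers and the unversioned symlink
usr/include/foo/
usr/lib/*/libfoo.so
```

Keeping the versioned .so.5* files in their own package is what makes the libfoo1/libfoo2 co-installation described above possible.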

One messy side effect of this is that when a file moves from one .deb to another .deb built from the same sources (maybe Debian chose to split it another way and we want to follow them), a Breaks/Replaces/Conflicts entry needs to be added. This is a pretty messy part of .deb packaging: you need to specify which version it Breaks/Replaces/Conflicts with, and depending on the type of move you need some combination of these three fields, but even experienced packagers seem to be unclear on which. And then if a backport (with files in their original places) is released with a newer version than the one you specified in the Breaks/Replaces/Conflicts, it just refuses to install and stops halfway through until a new upload is made that updates the Breaks/Replaces/Conflicts version in the packaging. I'd be interested in how this is solved in the RPM world.
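For instance, when a data file moves from a hypothetical libfoo-data into libfoo5, the receiving package declares something like the following in debian/control, with the version bound chosen as the first upload shipping the file in its new home:

```
Package: libfoo5
Replaces: libfoo-data (<< 4.13.3-0ubuntu1)
Breaks: libfoo-data (<< 4.13.3-0ubuntu1)
```

The backport problem is exactly this versioned bound going stale: a backported libfoo-data newer than 4.13.3-0ubuntu1 still carries the file in its old place, but no longer matches the << relation.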

Debian Merges

Ubuntu is forked from Debian, and to piggyback on their work (and add our own bugs while taking the credit) we merge in Debian's packaging at the start of each cycle. This is fiddly work involving going through the diff (and for patches that's often a diff of a diff) and the changelog to work out why each alteration was made. Then we merge them together. It takes time and it's error prone, but it's what allows Ubuntu to be one of the most up-to-date distros around, even while much of the work that went into maintaining universe packages not part of some flavour has slowed down.

Stable Release Updates

You have Kubuntu 14.04 LTS but you want more? You want bugfixes too? Oh, but you want them without the possibility of regressions? Ubuntu has quite a strict definition of what's allowed in after an Ubuntu release is made, because once upon a time someone uploaded a fix for X which had the side effect of breaking X on half the installs out there. So updates can only get into the archive for certain packages with a track record of making bugfix releases without sneaking in new features or breaking bits. They need to be tested, have some time pass to allow for wider testing, be tested again using the versions compiled in Launchpad, and then released. KDE makes bugfix releases of KDE SC every month, and we update them in the latest stable and LTS releases, as 4.13.3 was this week. But it's not a process you can rush, and it will usually take a couple of weeks. That 4.13.3 update was even later than usual because we were busy with Plasma 5 and whatnot. And it's not perfect: a bug in Baloo did get through with 4.13.2. But it would be even worse if we did rush it.


Ah, but you want new features too? We don't allow new features into the normal updates because they have more chance of introducing regressions. That's why we make backports, either in the kubuntu-ppa/backports archive or in the Ubuntu backports archive. This involves running the package through another automation script to change whatever needs changing for the backport, then compiling it all, testing it and releasing it. Maintaining and running that backport script is quite faffy, so sending your thanks is always appreciated.

We have an allowance to upload new bugfix (micro) releases of KDE SC to the Ubuntu archive, because KDE SC has a good track record of fixing things and not breaking things. When we come to wanting to update Plasma we'll need to argue for another allowance. One controversial decision in KDE Frameworks is that there are no bugfix releases, only monthly releases with new features. These are unlikely to get into the Ubuntu archive; we can try to argue the case that with automated tests and other processes the quality is high enough, but it'll be a hard sell.

Crack of the Day
Project Neon provides packages of daily builds of parts of KDE from Git, and there are weekly ISOs made from this too. These guys rock. The packages are monolithic and install into /opt so they can live alongside your normal KDE software.


You should be able to run KDELibs 4 software on a Plasma 5 desktop. I spent quite a bit of time ensuring this is possible by making sure there are no overlapping files in kdelibs/kde-runtime, KDE Frameworks and some parts of Plasma. This wasn't done primarily for Kubuntu: many of the files could have been split out into .deb packages shared between KDELibs 4 and Plasma 5, but other distros which just install packages in a monolithic style benefitted too. Some projects like Baloo didn't ensure they were co-installable; that's fine for Kubuntu, as we can separate the libraries that need to be co-installed from the binaries, but other distros won't be so happy.

Automated Testing
Increasingly KDE software comes with its own test suite. Test suites are something that has come late to free software (and maybe software in general), but now that they're here we can have higher confidence that the software works as intended. We run these test suites as part of the package compilation process, and not infrequently find that the test suite doesn't run; I've been told that in the past packagers weren't expected to use it. And of course tests fail.
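In a dh-style debian/rules the upstream test suite can be hooked into the build with an override target; this sketch assumes a CMake/CTest-based project:

```
%:
	dh $@

override_dh_auto_test:
	# Run the upstream tests, printing the log of any test that fails.
	dh_auto_test -- ARGS+=--output-on-failure
```

A failing test then fails the package build, which is how those broken or failing test suites get noticed in the first place.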

Obscure Architectures
In Ubuntu we have some obscure architectures. 64-bit ARM is likely to be a useful platform in the years to come. I'm not sure why we care about 64-bit PowerPC; I can only assume someone has paid Canonical to care about it. Not infrequently we find software compiles fine on normal PCs but breaks on these obscure platforms, and we need to debug why. This can be a slow process on ARM, which takes an age to do anything, or slower still where I don't even have access to a machine to test on, but it's all part of being part of a distro with many use cases.

Future Changes
At Kubuntu we've never shared infrastructure with Debian, despite having 99% the same packaging. This is because Ubuntu, to an extent, defines itself as the technical awesomeness of Debian with smoother processes. But for some time Debian has used Git while we've used the slower Bzr (there was an early plan to make Ubuntu take over the world of distributed revision control with Bzr, but then Git came along and turned out to be much faster, even if harder to get your head around), and Debian has also moved to team maintainership, so at last we're planning shared repositories. That'll mean many changes in our scripts, but it should remove much of the headache of merges each cycle.

There's also a proposal to move our packaging to daily builds, so we won't have to spend a lot of time updating packaging at every release. I'm skeptical that the hassle of the infrastructure for this, plus fixing packaging problems as they occur each day, will be less work than doing it for each release, but it's worth a try.

ISO Testing
Every 6 months we make an Ubuntu release (which includes all the flavours, of which Ubuntu [Unity] is the flagship and Kubuntu is the most handsome), and there are alphas and betas before that, which all need to be tested to ensure they actually install and run. Some of the pain of this has been reduced since we did away with the alternate (text-mode debian-installer) images, but we're nowhere near where Ubuntu [Unity] or openSUSE is with OpenQA, where automated installs are running all the time in various setups and some magic detects problems. I'd love to have that set up.

I'd welcome comments on how any of the workflow here can be improved, or how it compares to other distributions. It takes time, but in Kubuntu we have a good track record of contributing fixes upstream, and we are all part of KDE as well as Kubuntu. As well as the tasks I list above, like checking copyright or co-installability, I currently do Plasma releases; I just saw Harald do a Phonon release, and Scott has just applied for a KDE account for fixes to PyKDE. And as ever we welcome more people to join us: we're in #kubuntu-devel, where free hugs can be found, and we're having a whole day of Kubuntu love at Akademy.

Sergio Meneses: System76 Stickers at the UbuconLatinAmerica 2014

Planet Ubuntu - Wed, 2014-08-13 15:18

If you want to have the most beautiful stickers on your laptop, this is the opportunity!

You can find the amazing System76 stickers at Ubuconla 2014 for free!!!! :D

System76 Stickers

Thanks to System76 for this amazing gift!



Ronnie Tucker: GUN Linux: On the range with TrackingPoint’s new AR-15s

Planet Ubuntu - Wed, 2014-08-13 09:00

Since first running into TrackingPoint at CES 2013, we’ve kept tabs on the Austin-based company and its Linux-powered rifles, which it collectively calls “Precision Guided Firearms,” or PGFs. We got to spend a few hours on the range with TrackingPoint’s first round of near-production bolt-action weapons last March, when my photojournalist buddy Steven Michael nailed a target at 1,008 yards—about 0.91 kilometers—on his first try, in spite of never having fired a rifle before.

A lot of things have changed in the past year for TrackingPoint. The company relocated its headquarters from within Austin to the suburb community of Pflugerville, constructed an enormous manufacturing and testing lab to scale up PGF production, shed some 30 employees (including CEO Jason Schauble and VP Brett Boyd, the latter of whom oversaw our range visit in 2013), and underwent a $29 million Series D round of financing. It also sold as many PGFs as it could make, according to Oren Schauble, TrackingPoint’s director of marketing and brother of former CEO Jason Schauble.


Submitted by:Lee Hutchinson

David Tomaschik: DEF CON 22 Recap

Planet Ubuntu - Wed, 2014-08-13 05:45

I'm back and recovering with typical post-con fatigue. This year, I made several mistakes, not the least of which was trying to do BSides, Black Hat, and DEF CON. Given the overlapping schedules and the events occurring outside the conferences, this left me really drained, not to mention spending more time transiting between the events than I'd like.

BSides Las Vegas

B-Sides was a blast, but I spent most of the time I was there playing in the Pros vs Joes CTF run by Dichotomy. This is a particularly nice Capture the Flag competition, since it's based on defending (and attacking) "real world" networks, rather than the typical Jeopardy-style "crack this binary" competitions. Most of the problems seen in the real world aren't 0-days produced by talented hackers, but configuration weaknesses, outdated software, and insecure practices exploited by script kiddies. PvJ forces you to consider how to harden a "corporate" environment while still providing the same services. You get a Cisco ASA as your firewall and can reconfigure services as needed to establish your perimeter and secure your systems. On Day 2, you also get to see just how good you are at breaking in, and just how good (or bad) your opponents are at securing their network.

Black Hat

There were a couple of interesting talks to see at Black Hat, but some of the ones that I hoped would be more groundbreaking seemed to just scratch the surface and didn't provide enough depth. (Or working demos! I'm looking at you, USB firmware!) The Black Hat business hall was an incredible letdown, as basically none of the booths had anyone with technical depth for discussion; they just had sales people who wanted to sell things that probably don't work anyway. [Cynical mode off.]

In all honesty, Black Hat continues to be a venue for government & corporate security managers, and consultants and contractors that work for those entities. There's absolutely nothing community about it, but so long as you go in with that expectation, you won't be disappointed by that.


DEF CON

So much to do, so little time! Every year, I'm plagued by the same problem: which of the 7 amazing things going on right now do I want to do? This year, the problem got even more complicated for me due to an event run by my employer.

The badge was, as usual, pretty awesome, thanks to 1o57's work. Apparently he even worked on it during his honeymoon, so a big thanks to @NelleBot for not yelling at him too much, so that we all got to play with some awesome hardware. Once again, the badge features a Parallax Propeller chip, which is sort of unfortunate, as the toolkit for it is closed-source and Linux is not a first-class citizen. Between that and time constraints, I didn't spend any time working on the badge challenge, but maybe I'll play around with it now that I'm home. I believe I've spotted (and heard of) an IR transmitter/receiver pair, similar to the DC20 badge. I also have some IR LEDs and receivers at home, so I wonder if they're in a similar range. Maybe I'll break out a Digispark as an IR transceiver to play around with.

Thursday night was theSummit, an annual fundraiser run by Vegas 2.0 to raise money for the Electronic Frontier Foundation. It's an incredible event, with lots of great people in attendance, and a good opportunity to meet many of the BSides and DEF CON speakers. The fact that there's a raffle, auction, and open bar is just the icing on the cake. (Donating to the EFF makes it such a good cause that I wouldn't miss it for anything!) As you can see at the top, the VIP badge for theSummit was pretty awesome. I love the LED shining through the acrylic to make the text glow.

I was really happy to see the Crypto & Privacy village, and even though I only got a little time there, it was great to see that playing more of a role at DEF CON. I attended the OpenPGP keysigning on Friday, but didn't make it back for Saturday's. They also seemed to have some good introductory crypto talks, and it'll be interesting to see how that evolves over the next year.

Despite losing a lot of time to a work event and teaching at the R00tz Asylum, I managed to play in Capture the Packet with another member of DC404 (my DEF CON group from when I lived in Atlanta) and we won the round, qualifying for the finals. Unfortunately, he wasn't able to make it to the finals due to his flight arrangements, so another DC404 member (and current coworker) stepped in, and we managed a 2nd place overall finish, which I was extremely happy with. (Not that a black badge wouldn't have been cool... There's always next year.)

Of course, work events aren't so bad when they come with this view. We took some interesting people on a little trip around the High Roller, the tallest Ferris Wheel in the world, right off the strip! It was incredible to get to talk with some of them, and the view didn't hurt things either.

If you haven't heard, this was the final year at the Rio. It's time to pack our bags and head across the freeway to Paris. And Bally's. That's right, it's going to take 2 hotels to contain all the hackers. Apparently we'll have room blocks at several more of the area hotels. Makes sense given this year's reported 16,000 attendance.

Harald Sitter: Phonon + GStreamer + VLC 4.8 Beta

Planet Ubuntu - Tue, 2014-08-12 21:58

Today in Randa: Phonon 4.8 Beta got released, making the GStreamer backend use the GStreamer1 API and improving robustness in all parts of Phonon.

Phonon is a most excellent multimedia library for Qt.

For more information on this new beta release head on over to our releases page.

New Phonon GStreamer maintainer Daniel Vrátil is a close friend of Konqi! Picture kindly provided by Martin Klapetek. Also, no dragons were harmed in the making of this picture (we think).

The Fridge: Ubuntu Global Jam 14.10

Planet Ubuntu - Tue, 2014-08-12 17:33

With the timing getting a bit tight and no serious objections to the suggested dates, we’d like to plan the next Ubuntu Global Jam for

UGJ 14.10: 12-14 September 2014

To get the planning going, we’d like to invite all available LoCo enthusiasts, LoCo contacts and LoCo Council to join us for a

Planning hangout
Thursday 14th Aug, 14:00 UTC

You are all invited, we’ll get everyone on the hangout who wants to participate.

If you’re new to the party, have a look at for some reading.

Originally posted to the loco-contacts mailing list on Tue Aug 12 14:01:17 UTC 2014 by Daniel Holbach

Ubuntu Kernel Team: Kernel Team Meeting Minutes – August 12, 2014

Planet Ubuntu - Tue, 2014-08-12 17:13
Meeting Minutes

IRC Log of the meeting.

Meeting minutes.


20140812 Meeting Agenda

Release Metrics and Incoming Bugs

Release metrics and incoming bug data can be reviewed at the following link:


Status: Utopic Development Kernel

The Utopic kernel has been rebased to v3.16 final and uploaded to the
archive, i.e. linux-3.16.0-7.12. Please test and let us know your results.
Important upcoming dates:
Thurs Aug 21 – Utopic Feature Freeze (~1 week away)
Thurs Sep 25 – Utopic Final Beta (~6 weeks away)

Status: CVE’s

The current CVE status can be reviewed at the following link:

Status: Stable, Security, and Bugfix Kernel Updates – Trusty/Saucy/Precise/Lucid


cycle: 08-Aug through 29-Aug
08-Aug Last day for kernel commits for this cycle
10-Aug – 16-Aug Kernel prep week.
17-Aug – 23-Aug Bug verification & Regression testing.
24-Aug – 29-Aug Regression testing & Release to -updates.

cycle: 29-Aug through 20-Sep
29-Aug Last day for kernel commits for this cycle
31-Aug – 06-Sep Kernel prep week.
07-Sep – 13-Sep Bug verification & Regression testing.
14-Sep – 20-Sep Regression testing & Release to -updates.

Status for the main kernels, until today (Aug. 12):

  • Lucid – Kernels being prep’d
  • Precise – Kernels being prep’d
  • Trusty – Kernels being prep’d

    Current opened tracking bugs details:


    For SRUs, SRU report is a good source of information:


Open Discussion or Questions? Raise your hand to be recognized

No open discussion.

Elizabeth K. Joseph: Fosscon 2014

Planet Ubuntu - Tue, 2014-08-12 16:43

Flying off to a conference on the other side of the country 2 weeks after having my gallbladder removed may not have been one of the wisest decisions of my life, but I am very glad I went. Thankfully MJ had planned on coming along to this event anyway, so I had companionship… and someone to carry the luggage :)

This was Fosscon‘s 5th year, 4th in Philadelphia and the 3rd one I’ve been able to attend. I was delighted this year to have my employer, HP, sponsor the conference at a level that gave us a booth and track room. Throughout the day I was attending talks, giving my own and chatting with people at the HP booth about the work we’re doing in OpenStack and opportunities for people who are looking to work with open source technologies.

The day started off with a keynote by Corey Quinn titled “We are not special snowflakes” which stressed the importance of friendliness and good collaboration skills in technical candidates.

I, for one, am delighted to see us as an industry moving away from BOFHs and kudos for antisocial behavior. I may not be a social butterfly, but I value the work of my peers and strive to be someone people enjoy working with.

After the keynote I did a talk about having a career in FOSS. I was able to tell stories about my own work and experiences and those of some of my colleagues. I talked about my current role at HP and spent a fair amount of time giving participation examples related to my work on Xubuntu. I must really enjoy this topic, because I didn’t manage to leave time for questions! Fortunately I think I made up for it in some great chats with other attendees throughout the day.

My slides from the talk are available here: FOSSCON-2014-FOSS_career.pdf

Some other resources related to my talk:

During the conference I was also able to visit with my friends at the Ubuntu booth. They had brought along a couple copies of The Official Ubuntu Book, 8th Edition for me to sign (hooray!) and then sell to conference attendees. I brought along my Ubuntu tablet which they were able to have at the booth, and which MJ grabbed from me during a session when someone asked to see a demo.

After lunch I went to see Charlie Reisinger’s “Lessons From Open Source Schoolhouse” where he talked about the Ubuntu deployments in his school district. I’ve been in contact with Charlie for quite some time now since the work we do with Partimus also puts us in schools, but he’s been able to achieve some pretty exceptional success in his district. It was a great pleasure to finally meet him in person and his talk was very inspiring.

I’ve been worried for quite some time that children growing up today will only have access to tablets and smart phones that I classify as “read only devices.” I think back to when I first started playing with computers: the passion for them grew out of the ability to tinker and discover, and if my only exposure had been a tablet, I don’t think I’d be where I am today. Charlie’s talk went in a similar direction, particularly as he revealed that he controversially allows students to have administrative (sudo) access on the Ubuntu laptops! The students feel trusted and empowered, and in the time the program has been running, he’s been able to put together a team of student apprentices who are great at working with the software and can help train other students, and teachers too.

It was also interesting to learn that after the district got so much press the students began engaging people in online communities.

Fosscon talks aren’t recorded, but check out Charlie’s TEDx Lancaster talk to get a taste of the key points about student freedom and the apprentice program he covered: Enabling students in a digital age: Charlie Reisinger at TEDxLancaster

The GitHub organization for Penn Manor School District is here:

The last talk I went to of the day was by Robinson Tryon on “LibreOffice Tools and Tricks For Making Your Work Easier” where I was delighted to see how far they’ve come with the Android/iOS Impress remote and work being done in the space of editing PDFs, including the development of Hybrid PDFs, which can be opened either by LibreOffice for editing or by a PDF viewer and contain full versions of both formats. I also hadn’t realized that LibreOffice retained any of the command line tools, so it was pretty cool to learn about soffice --headless --convert-to for CLI-based conversions of files.
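To give a feel for that CLI conversion, here is a minimal dry-run sketch: the file name is hypothetical, and the echo only prints the command line instead of running it (drop the echo, with LibreOffice installed, to convert for real).

```shell
# Dry-run sketch of LibreOffice headless conversion; remove "echo" to
# actually convert (requires LibreOffice). "report.odt" is a hypothetical
# input file; converted output lands in ./pdf with the same basename.
convert_to_pdf() {
    echo soffice --headless --convert-to pdf "$1" --outdir ./pdf
}

convert_to_pdf report.odt
```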

Huge thanks to the volunteers who make Fosscon happen. The Franklin Institute was a great venue and aside from the one room downstairs, I think the layout worked out well for us. Booths were in common spaces that attendees congregated in, and I was even able to meet some tech folks who were just at the museum and happened upon us, which was a lot of fun.

More photos from the event here:

Kubuntu: KDE Applications and Development Platform 4.13.3

Planet Ubuntu - Tue, 2014-08-12 13:48

Packages for the release of KDE SC 4.13.3 are available for Kubuntu 14.04 LTS. You will receive them through the regular update channel.

Bugs in the packaging should be reported on Launchpad; bugs in the software itself should be reported to KDE.

Dustin Kirkland: Learn Byobu in 10 minutes while listening to Mozart

Planet Ubuntu - Tue, 2014-08-12 12:44
If you're interested in learning how to more effectively use your terminal as your integrated devops environment, consider taking 10 minutes and watching this video while enjoying the finale of Mozart's Symphony No. 40, the Allegro Assai (part of which is rumored to have inspired Beethoven's 5th).

I'm often asked for a quick-start guide to using Byobu effectively.  This wiki page is a decent start, as is the manpage, and the various links on the upstream website.  But it seems that some of the past screencast videos have left the longest-lasting impressions on Byobu users over the years.
I was on a long, international flight from Munich to Newark this past Saturday with a bit of time on my hands, and I cobbled together this instructional video. That recent international trip to Nuremberg inspired me to rediscover Mozart, and I particularly like this piece, which Mozart wrote in 1788, but sadly never heard performed.  You can hear it now, and learn how to be more efficient in command line environments along the way :-)


Benjamin Kerensa: UbuConLA: Firefox OS on show in Cartagena

Planet Ubuntu - Tue, 2014-08-12 09:30

If you are attending UbuConLA I would strongly encourage you to check out the talks on Firefox OS and Webmaker. In addition to the talks, there will also be a Firefox OS workshop where attendees can go more hands on.

When the organizers of UbuConLA reached out to me several months ago, I knew we really had to have a Mozilla presence at this event so that Ubuntu Users who are already using Firefox as their browser of choice could learn about other initiatives like Firefox OS and Webmaker.

People in Latin America have always had a very strong ethos around supporting and using Free Software, and we have an amazingly vibrant community there in Colombia.

So if you will be anywhere near Universidad Tecnológica De Bolívar in Cartagena, Colombia, please go see the talks and learn why Firefox OS is the mobile platform that makes the open web a first class citizen.

Learn how you can build apps and test them in Firefox on Ubuntu! A big thanks to Guillermo Movia for helping us get some speakers lined up here! I really look forward to seeing some awesome Firefox OS apps getting published as a result of our presence at UbuConLA as I am sure the developers will love what Firefox OS has to offer.


¡Feliz Conferencia!

Ronnie Tucker: Peppermint OS 5: Light, Refreshing Linux

Planet Ubuntu - Tue, 2014-08-12 08:00

The Peppermint OS is built around a concept that may be unique among desktop environments. It is a hybrid of traditional Linux desktop applications and cloud-based apps.

Using the Ice technology in the Peppermint OS is much like launching an app on an Android phone or tablet. For example, I can launch Google Docs, Gmail, Twitter, Yahoo Mail, YouTube, Pandora or Facebook as if they were self-contained apps on a mobile device — but these pseudo apps never need updating. Ice easily creates a menu entry to launch any website or application as if it were installed.
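Under the hood, a launcher of this sort is just a standard freedesktop.org .desktop entry pointing a browser at a URL in app mode. Here is a sketch of the kind of file Ice generates; the directory, file name, browser command, and URL are all illustrative, and Peppermint's actual output may differ.

```shell
# Write an illustrative site-specific-browser launcher, in the style of
# Peppermint's Ice tool. Everything here (paths, browser, URL) is a
# hypothetical example, not Ice's literal output.
APPDIR=./ice-launchers
mkdir -p "$APPDIR"
cat > "$APPDIR/gmail.desktop" <<'EOF'
[Desktop Entry]
Type=Application
Name=Gmail
Comment=Gmail running as a windowed web app
Exec=chromium-browser --app=https://mail.google.com
Categories=Network;
EOF
```

Dropping a file like this into ~/.local/share/applications is what makes the site show up in the desktop menu like a native app.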

This innovative approach puts the latest release of Peppermint OS 5, which appeared in late June, well ahead of the computing curve. It brings cloud apps to the Linux desktop with the ease and flexibility of a Chromebook. It marries that concept to the traditional idea of having installed software that runs without cloud interaction.


Submitted by: Jack M. Germain

The Fridge: Ubuntu Weekly Newsletter Issue 378

Planet Ubuntu - Tue, 2014-08-12 01:51

Welcome to the Ubuntu Weekly Newsletter. This is issue #378 for the week August 4 – 10, 2014, and the full version is available here.

In this issue we cover:

This issue of The Ubuntu Weekly Newsletter is brought to you by:

  • Elizabeth K. Joseph
  • Jose Antonio Rey
  • And many others

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!

Except where otherwise noted, content in this issue is licensed under a Creative Commons Attribution ShareAlike 3.0 License.

Adam Stokes: Containerize juju’s local provider

Planet Ubuntu - Mon, 2014-08-11 23:05
Current approach

Juju’s existing providers (except the manual provider) do not allow you to containerize the bootstrap node. With the manual provider, however, this is possible using something like the following in your environments.yaml file, with bootstrap-host set appropriately:

manual:
    type: manual
    # bootstrap-host holds the host name of the machine where the
    # bootstrap machine agent will be
    bootstrap-host:
    # bootstrap-user specifies the user to authenticate as when
    # connecting to the bootstrap machine. It defaults to
    # the current user.
    # bootstrap-user: joebloggs
    # storage-listen-ip specifies the IP address that the
    # bootstrap machine's Juju storage server will listen
    # on. By default, storage will be served on all
    # network interfaces.
    # storage-listen-ip:
    # storage-port specifies the TCP port that the
    # bootstrap machine's Juju storage server will listen
    # on. It defaults to 8040
    # storage-port: 8040

Cool, that will allow me to bootstrap juju on something other than my host machine. But that machine needs to be configured appropriately for a non-interactive deployment (ssh keys, passwordless sudo, etc.).
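Concretely, a filled-in minimal stanza might look like this (the host name and user below are hypothetical placeholders, not values from the post):

```yaml
environments:
  manual:
    type: manual
    # hypothetical host already reachable over ssh with passwordless sudo
    bootstrap-host: bootstrap.example.com
    # hypothetical user to authenticate as
    bootstrap-user: ubuntu
```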

A different approach

In my particular case, we wanted our Openstack Installer to be fully containerized, from juju bootstrap to the deployment of compute nodes. In order to achieve this we need to configure an existing container to be our bootstrap agent while still allowing for our mixture of kvm/lxc environments within the Openstack deployment.


Create a container named joojoo that will be used as our Juju bootstrap agent:

ubuntu@fluffy:~$ sudo lxc-create -t ubuntu -n joojoo

Update the container’s lxcbr0 to be on its own network:

ubuntu@fluffy:~$ cat <<-EOF | sudo tee /var/lib/lxc/joojoo/rootfs/etc/default/lxc-net
USE_LXC_BRIDGE="true"
LXC_BRIDGE="lxcbr0"
LXC_ADDR=""
LXC_NETMASK=""
LXC_NETWORK=""
LXC_DHCP_RANGE=","
LXC_DHCP_MAX="253"
EOF

Create the necessary character device files for kvm support within lxc via mknod, and persist them across reboots:

ubuntu@fluffy:~$ cat <<-EOF | sudo tee /var/lib/lxc/joojoo/rootfs/etc/rc.local
#!/bin/sh
mkdir -p /dev/net || true
mknod /dev/kvm c 10 232
mknod /dev/net/tun c 10 200
exit 0
EOF

Start the container:

ubuntu@fluffy:~$ sudo lxc-start -n joojoo -d

Pre-install libvirt and uvtool:

ubuntu@fluffy:~$ sudo lxc-attach -n joojoo -- apt-get update
ubuntu@fluffy:~$ sudo lxc-attach -n joojoo -- apt-get install -qyf \
    libvirt-bin uvtool uvtool-libvirt software-properties-common

Make sure our ubuntu user is in the libvirtd group:

ubuntu@fluffy:~$ sudo lxc-attach -n joojoo -- usermod -a -G libvirtd ubuntu

Now that you have a containerized environment ready for Juju, let’s test!

The LXC container should now be ready for a juju deployment. Let’s use our Openstack Cloud Installer to test this setup. I want to make sure everything deploys into its appropriate containers/kvm instances and that I can still access the Horizon dashboard to deploy a compute instance.

First, ssh into your container; you can get its IP with the lxc-ls -f command:

ubuntu@fluffy:~$ sudo lxc-ls -f joojoo
NAME    STATE    IPV4  IPV6  AUTOSTART
-------------------------------------------
joojoo  RUNNING        -     NO

ubuntu@fluffy:~$ ssh ubuntu@
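If you want to script this step, the IP can be pulled out of the lxc-ls listing. A sketch, with the assumption that the IPv4 address is the third column of `lxc-ls -f` output; the sample line and address below are canned so the parsing is visible without lxc installed:

```shell
# Extract a container's IPv4 address from an `lxc-ls -f` data row.
# Assumption: IPv4 is the third whitespace-separated column.
# The sample row (and its 10.0.3.55 address) is made up for illustration;
# in real use you would pipe `sudo lxc-ls -f joojoo` through the same awk.
sample='joojoo   RUNNING  10.0.3.55  -  NO'
ip="$(echo "$sample" | awk '{print $3}')"
echo "$ip"
```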

Within the container add our PPA and perform the installation:

ubuntu@joojoo:~$ sudo apt-add-repository ppa:cloud-installer/experimental
ubuntu@joojoo:~$ sudo apt-add-repository ppa:juju/stable
ubuntu@joojoo:~$ sudo apt update && sudo apt install cloud-installer
ubuntu@joojoo:~$ sudo cloud-install

Note that I’m using our experimental PPA for the Openstack Cloud Installer; our next major release will automate the previous steps for putting juju within a container.

For this test I’m using the Single Install method, so select that and enter an Openstack password of your choice. Now sit back and wait for the installation to finish.


First, we created an LXC container to be used as our entry point for juju to bootstrap itself into. This required some configuration changes to how the container handles bridged connections, along with making sure the character devices required by KVM are available.

Next, we installed some prerequisites for libvirt and uvtool.

From there we logged in to the newly created container, then installed and ran the Openstack Cloud Installer. This installs juju-core and lxc as dependencies, and automatically configures lxc-net with our predefined lxc-net template, as seen in the latest lxc-ls output (showing eth0, lxcbr0, and virbr0):

ubuntu@fluffy:~$ sudo lxc-ls -f
NAME    STATE    IPV4  IPV6  AUTOSTART
-------------------------------------------------------------------
joojoo  RUNNING  ,,    -     NO

Once the installer is finished we verify that our LXC container was able to facilitate the deployment of services in both LXC (nested) and KVM (also nested within LXC).

It’s a long list, so here is the pastebin. What you’ll notice is that all machines/services are now bound to the 10.0.4.x network, which is what was defined in the lxc-net configuration above. We have KVMs running within our host container, which also houses containers for the Openstack deployment.

Just to give a more visual representation of the setup:

Baremetal Machine
  - LXC Container - runs the juju bootstrap agent
    - KVM (machine 1) - houses a bunch of LXC's for the openstack services
    - KVM (machine 2) - houses nova-compute
    - KVM (machine 3) - houses quantum-gateway

Why is this a good thing?

ubuntu@fluffy:~$ sudo lxc-stop -n joojoo
ubuntu@fluffy:~$ sudo lxc-destroy -n joojoo

And it’s like it never happened …


Thanks to a colleague, Robert Ayres, who provided the necessary information for getting KVM to run within an LXC container.

