Planet Ubuntu

Robbie Williamson: Canonical’s Office of The CDO: A 5 Year Journey in DevOps

Mon, 2014-03-10 19:00

I’m often asked what being the Vice President of Cloud Development and Operations means, when introduced for a talk or meeting, or when someone happens to run by my LinkedIn profile or business card.

The office of the CDO has been around in Canonical for so long that I forget the approach we’ve taken to IT and development is either foreign or relatively new to a lot of IT organizations, especially in what is commonly thought of as the “enterprise” space. I was reminded of this when I gave a presentation at an OpenStack Developer Summit entitled “OpenStack in Production: The Good, the Bad, & the Ugly” a year ago in Portland, Oregon. Many in the audience were surprised that Canonical not only uses OpenStack in production, but uses our own tools, Juju and MAAS, created to manage these cloud deployments. Furthermore, some attendees were floored by how well our IT and engineering teams actually worked together to leverage these deployments in our production deployment of globally accessible and extensively used services.

Before going into what the CDO is today, I want to briefly cover how it came to be. The story of the CDO goes back to 2009, when our CEO, Jane Silber, and Founder, Mark Shuttleworth, were trying to figure out how our IT operations team and web services teams could work better…smarter together. At the same time our engineering teams had been experimenting with cloud technologies for about a year, going so far as to provide the ability to deploy a private cloud in our 9.04 release of Ubuntu Server.

It was clear to us then that cloud computing would revolutionize the way in which IT departments and developers interact and deploy solutions, and if we were going to be serious players in this new ecosystem, we’d need to understand it at the core. The first step to streamlining our development and operations activities was to merge our IT team, who provided all global IT services to both Canonical and the Ubuntu community, with our Launchpad team, who developed, maintained, and serviced the core infrastructure for hosting and building Ubuntu. We then added our Online Services team, who drove our Ubuntu One related services, and this new organization was called Core DevOps…thus the CDO was born.

Soon after the formation of the CDO, I was transitioning between roles within Canonical, going from acting CTO to Release Manager (10.10 on 10.10.10..perfection!), then landing as the new manager for the Ubuntu Server and Security teams. Our server engineering efforts continued to become more and more focused on the cloud, and we had also begun working on a small, yet potentially revolutionary, internal project called Ensemble, which was focused on solving the operational challenges system administrators, solution architects, and developers would face in the cloud when going from managing hundreds of machines and associated services to thousands.

All of this led to a pivotal engineering meeting in Cape Town, South Africa early 2011, where management and technical leaders representing all parts of the CDO and Ubuntu Server engineering met with Mark Shuttleworth, along with the small team working on Project Ensemble, to determine the direction Canonical would take with our server product.

Until this moment in time, while we had been dabbling in cloud computing technologies with projects like our own cloud-init and the Amazon EC2 AMI Locator, Ubuntu Server was still playing second fiddle to Ubuntu for the desktop. Being derived from Debian (the world’s most widely deployed and dependable Linux web hosting server OS) certainly gave us credibility as a server OS, but the truth was that most people thought of desktops when you mentioned Ubuntu the OS. Canonical’s engineering investments were still primarily client focused, and Ubuntu Server was little more than new Debian releases at a predictable cadence, with a bit of cloud technology thrown in to test the waters. But this weeklong engineering sprint was where it all changed. After hours and hours of technical debates, presentations, demonstrations, and meetings, two major decisions were made that week that would catapult Canonical and Ubuntu Server to the forefront of cloud computing as an operating system.

The first decision was that OpenStack was the way forward. The project was still in its early days, but it had already piqued many of our engineers’ interest, not only because it was being led by friends of Ubuntu and former colleagues of Canonical – Rick Clark, Thierry Carrez, and Soren Hansen – but because its development methods, project organization, and community were derived from Ubuntu, and thus it was something we knew had the potential to grow and sustain itself as an open-source project. While we still had to do our due diligence on the code, and discuss the decision at UDS, it was clear to many then that we’d inevitably go that direction.

The second decision was that Project Ensemble would be our main technical contribution to cloud computing, and more importantly, the key differentiator we needed to break through as the operating system for the cloud. While many in our industry were still focused on scale-up, legacy enterprise computing and the associated tools and technologies for things like configuration and virtual machine management, we knew orchestrating services and managing the cloud were the challenges cloud adopters would need help with going forward. Project Ensemble was going to be our answer.

Fast forward a year to early 2012. Project Ensemble had been publicly unveiled as Juju, the Ubuntu Server team had fully adopted OpenStack and plans for the hugely popular Ubuntu Cloud Archive were in the works, and my role had expanded to Director of Ubuntu Server, covering the engineering activities of multiple teams working on Ubuntu Server, OpenStack, and Juju. The CDO was still covering IT operations, Launchpad, and Online Services, but now we had started discussing plans to transition our own internal IT infrastructure over to an internal cloud computing model, essentially using the very same technologies we expected our users, and Canonical customers, to depend on.

As part of the conversation on deploying cloud internally, our Ubuntu Server engineering teams started looking at tools to adopt that would give our internal IT teams and the wider Ubuntu community the ability to deploy and manage large numbers of machines installed with Ubuntu Server. Originally, we landed on creating a tool based on Fedora’s Cobbler project, combined with Puppet scripts, and called it Ubuntu Orchestra. It was perfect for doing large-scale, coordinated installations of the OS and software, such as OpenStack; however, it quickly became clear that the install was just the beginning…and unfortunately, the easy part. Managing and scaling the deployment was the hard part. While we had called it Orchestra, it wasn’t able to orchestrate much beyond machine and application install. Intelligently and automatically controlling the interconnected services of OpenStack or Hadoop in a way that allowed for growth and adaptability was the challenge. Furthermore, the ways in which you could describe deployments were restricted to Puppet and its scripting language and approach to configuration management…what about users wanting Chef?…or CFEngine?…or the next foobar configuration management tool to come along?
If we only had a tool for orchestrating services that ran on bare metal, we’d be golden….and thus Metal as a Service (MAAS) was born.

MAAS was created for the sole purpose of giving Juju a way to orchestrate physical machines the same way Juju managed instances in the cloud. The easiest way to do this was to create something that gave cloud deployment architects the tools needed to manage pools of servers like a cloud. Once we began this project, we quickly realized it was good enough to stand on its own as a management tool for hardware, and so we expanded it into a full-fledged project. MAAS grew a rich API and a user-tested GUI, so that Juju, Ubuntu Server deployment, and Canonical’s Landscape product could all leverage the same tool for managing hardware…allowing all three to benefit from the learnings and experience of a shared codebase.

The CDO Evolves

In the middle of 2012, the then VP of CDO decided to seek new opportunities elsewhere. Senior management took this opportunity to look at the organizational structure of Core DevOps and adjust it according to both what we had learned over the past 3 1/2 years and where we saw the evolution of IT and server/cloud development heading. The decision was made to focus the CDO more on cloud and scale-out server technologies, so the Online Services team was moved over to a more client-focused engineering unit. This left Launchpad and internal IT in the CDO; however, the decision was also made to move all server and cloud related project engineering teams and activities into the organization. The reasoning was pretty straightforward: put all of server dev and ops into the same team to eliminate “us vs them” siloed conversations, and streamline the feedback loop between engineering and internal users to accelerate both code quality and internal adoption. Seeing a chance for career growth, I applied to lead the CDO, was fortunate enough to get it, and thus became the new Vice President of Core DevOps.

My first decision as the new lead of the CDO was to change the name. It might seem trivial, but while I felt it was key to keep our roots in DevOps, the name Core DevOps no longer applied to our organization because of the addition of so much more server and cloud/scale-out computing focused engineering. We had also decided to scale back internal feature development on Launchpad, focusing more on maintenance and reviewing/accepting outside contributions. Out of a pure desire to reduce the overhead that department name changes usually cause in a company, I decided to keep the acronym and go with Cloud and DevOps at first. However, the name (and quite honestly the job title itself) seemed a little too vague…I mean, what does VP of Cloud or VP of DevOps really mean? I felt like it would have been analogous to being the VP of Internet and Agile Development…heavy on buzzwords and light on actual meaning. So I made a minor tweak to “Cloud Development and Operations”, and while arguably still abstract, it at least covered everything we did within the organization at a high level.

At the end of 2012, we internally gathered representation from every team in the “new and improved” CDO for a week-long strategy session on how we’d take advantage of the reorganization. We reviewed team layouts, workflows, interactions, tooling, processes, development models, and even which teams individuals were on. Our goal was to avoid duplicating effort unnecessarily, share best practices, eliminate unnecessary processes, break down communication silos, and generally come together as one true team. The outcome: some teams were broken apart, others newly formed, processes adapted, missions changed, and some people left because they no longer felt they fit.

Entering into 2013, the goal was to simply get work done:

  • Work to deploy, expand, and transition developers and production-level services to our internal OpenStack clouds: CanoniStack and ProdStack.
  • Work to make MAAS and Juju more functional, reliable, and scalable.
  • Work to make Ubuntu Server better suited for OpenStack, more easily consumable in the public cloud, and faster to bring up for use in all scale-out focused hardware deployments.
  • Work to make Canonical’s Landscape product more relevant in the cloud space, while continuing to be true to its roots of server management.

All this work was in preparation for the 14.04 LTS release, i.e. the Trusty Tahr. Our feeling was (and still is) that this had to be the release when it all came together into a single integrated solution for use in *any* scale-out computing scenario…cloud…hyperscale…big data…high performance computing…etc.  If a computing solution involved large numbers of computational machines (physical or virtual) and massively scalable workloads, we wanted Ubuntu Server to be the de facto OS of choice.  By the end of last year, we had achieved a lot of the IT and engineering goals we had set, and felt pretty good about ourselves.  However, as a company we quickly discovered there was one thing we had left out of our grand plan to better align and streamline our efforts around scale-out technologies…professional delivery and support of these technologies.

To be clear, Canonical had not forgotten about growing or developing our teams of engineers and architects responsible for delivering solutions and support to customers. We had just left them out of our “how can we do this better” thinking when aligning the CDO. We were initially focused on improving how we developed and deployed, and we were benefiting from the changes made.  However, as we began growing our scale-out computing customer base in hyperscale and cloud (both below and above), we began to see that the same optimizations made between Dev and Ops needed to be made with delivery. So in December of last year, we moved all hardware enablement and certification efforts for servers, along with the technical support and cloud consultancy teams, into the CDO.  The goal was to strengthen the product feedback loop, remove more “us vs them” silos, and improve response times to customer issues found in the field.  We were basically becoming a global team of scale-out technology superheroes.

It’s been only 3 months since our server and cloud enablement and delivery/support teams have joined the CDO, and there are already signs of improvement in responsiveness to support issues and collaboration on technical design.  I won’t lie and say it’s all been butterflies and roses, nor will I say we’re done and running like a smooth, well-oiled machine because you simply can’t do that in 3 months, but I know we’ll get there with time and focus.

So there you have it.

The Cloud Development and Operations organization in Canonical is now 5 years strong.  We deliver global, 24×7 IT services to Canonical, our customers, and the Ubuntu community.  We have engineering teams creating server, cloud, hyperscale, and scale-out software technologies and solutions to problems some have still yet to even consider.  We deliver these technologies and provide customer support for Canonical across a wide range of products, including Ubuntu Server and Cloud.  This end-to-end integration of development, operations, and delivery is why Ubuntu Server 14.04 LTS, aka the Trusty Tahr, will be the most robust, technically innovative release of Ubuntu for the server and cloud to date.

Canonical Design Team: Ubuntu 14.04 LTS wallpaper

Mon, 2014-03-10 17:11


For the last couple of weeks we’ve been working on the new Ubuntu wallpaper. It is never easy, trust me. The most difficult part was to work out the connection between the old wallpapers and the new look and feel – Suru. The wallpaper has become an integral part of the Ubuntu brand; the strong colours and gradated flow are powerful, important elements. We realised this when looking from a distance at someone’s laptop – it really does shout UBUNTU.

We spent some time looking at our brand guidelines as well as previous wallpapers, thinking about how to connect the old with the new and how to make the transition smooth. I did start with simple shapes and treated them as separate sheets of paper. After a while we moved away from that idea, simply because Suru is about simplicity and minimalism.
When we got the composition right we started to play with colours. We tried all our Ubuntu complementary colours, but we were not entirely happy. Don’t get me wrong ;) they did look nice, but it didn’t feel like the next step from our last wallpaper…

And here are some examples of the things I was exploring…

Svetlana Belkin: Teaching A Classroom Session: Lessons Learned

Sun, 2014-03-09 23:52

Last Sunday, I taught the “Getting started contributing to the Wiki docs” classroom session of the Doc Team’s Ubuntu Documentation Day. This was the first time I taught one and I learned some lessons:

  • Have an outline of your lesson ready
  • If possible, have the outline reviewed by someone else in the team to check for errors
  • Have an example to go over, and let everyone who wants to try it work through it themselves
  • Have enough time for pauses and use them for questions
  • It’s okay to still have five (5) to ten (10) minutes left in your session; it can be used for questions
  • You have to PM Classbot with the commands in order for it to pick up questions from the chat channel and post them into the session

I hope these lessons can help others who want to teach a session.

Mark Shuttleworth: The very best edge of all

Sat, 2014-03-08 17:56

Check out “loving the bottom edge” for the most important bit of design guidance for your Ubuntu mobile app.

This work has been a LOT of fun. It started when we were trying to find the zen of each edge of the screen, a long time back. We quickly figured out that the bottom edge is by far the most fun, by far the most accessible. You can always get to it easily, and it feels great. I suspect that’s why Apple has used the bottom edge for its quick control access on iOS.

We started in the same place as Apple, thinking that the bottom edge was so nice we wanted it for ourselves, in the system. But as we discussed it, we started to think that the app developer was the one who deserved to do something really distinctive in their app with it instead. It’s always tempting to grab the tastiest bit for oneself, but the mark of civility is restraint in the use of power and this felt like an appropriate time to exercise that restraint.

Importantly you can use it equally well if we split the screen into left and right stages. That made it a really important edge for us because it meant it could be used equally well on the Ubuntu phone, with a single app visible on the screen, and on the Ubuntu tablet, where we have the side stage as a uniquely cool way to put phone apps on tablet screens alongside a bigger, tablet app.

The net result is that you, the developer, and you, the user, have complete creative freedom with that bottom edge. There are of course ways to judge how well you’ve exercised that freedom, and the design guidance tries to leave you all the freedom in the world while still providing a framework for evaluating how good the result will feel to your users. If you want, there are some archetypes and patterns to choose from, but what I’d really like to see is NEW patterns and archetypes coming from diverse designs in the app developer community.

Here’s the key thing – that bottom edge is the one thing you are guaranteed to want to do more innovatively on Ubuntu than on any other mobile platform. So if you are creating a portable app, targeting a few different environments, that’s the thing to take extra time over for your Ubuntu version. That’s the place to brainstorm, try out ideas on your friends, make a few mockups. It’s the place you really express the single most important aspects of your application, because it’s the fastest, grooviest gesture in the book, and it’s all yours on Ubuntu.

Have fun!

Paul Tagliamonte: Donate to MediaGoblin!

Sat, 2014-03-08 14:27
If you’re the sort that cares about federation, the MediaGoblin project is for you!

MediaGoblin is a media hosting platform where you can post all sorts of things, like video, images or even 3D models. It’s a nice replacement for things like Flickr and YouTube, and it’s super easy to set up.

If I wasn’t such a lazy person, I’d have uploaded it to Debian, but alas. Soon. Soon!

It’s an official GNU project, its maintainer, Chris, is a totally awesome guy, and MediaGoblin is really important work.

If you feel the urge to support federation on the web, Support MediaGoblin! Help us take back the net!

Benjamin Mako Hill: V-Day

Sat, 2014-03-08 00:50

My friend Noah mentioned the game VVVVVV. I was confused because I thought he was talking about the visual programming language vvvv. I went to Wikipedia to clear up my confusion but ended up on the article on VVVVV, which is about the Latin phrase “vi veri universum vivus vici” meaning, “by the power of truth, I, while living, have conquered the universe”.

There is no Wikipedia article on VVVVVVV. That would be ridiculous.

Eric Hammond: Cost of Transitioning S3 Objects to Glacier

Fri, 2014-03-07 23:29

how I was surprised by a large AWS charge and how to calculate the break-even point

Glacier Archival of S3 Objects

Amazon recently introduced a fantastic new feature where S3 objects can be automatically migrated over to Glacier storage based on the S3 bucket, the key prefix, and the number of days after object creation.

This makes it trivially easy to drop files in S3, have fast access to them for a while, then have them automatically saved to long-term storage where they can’t be accessed as quickly, but where the storage charges are around a tenth of the price.

…or so I thought.

S3 Lifecycle Rule

My first use of this feature was on some buckets where I store about 350 GB of data that fits the Glacier use pattern perfectly: I want to save it practically forever, but expect to use it rarely.

It was straightforward to use the S3 Console to add a lifecycle rule to the S3 buckets so that all objects are archived to Glacier after 60 days:

(Long time readers of this blog may be surprised I didn’t list the command lines to accomplish this task, but Amazon has not yet released useful S3 tools that include the required functionality.)

Since all of the objects in the buckets were more than 60 days old, I expected them to be transitioned to Glacier within a day, and true to Amazon’s documentation, this occurred on schedule.
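For the record, the rule the console creates boils down to a small piece of configuration. Here is a sketch of it as a data structure, roughly what the console sets up behind the scenes (the rule ID and bucket name are hypothetical, and the commented boto3 call is just one way such a rule could be applied programmatically):

```python
# Sketch of the S3 lifecycle rule described above: transition every object
# in the bucket to Glacier 60 days after creation.
lifecycle = {
    "Rules": [{
        "ID": "archive-after-60-days",  # hypothetical rule name
        "Filter": {"Prefix": ""},       # empty prefix = apply to all objects
        "Status": "Enabled",
        "Transitions": [{"Days": 60, "StorageClass": "GLACIER"}],
    }]
}

# Applying it with a modern SDK would look something like:
# import boto3
# boto3.client("s3").put_bucket_lifecycle_configuration(
#     Bucket="my-archive-bucket", LifecycleConfiguration=lifecycle)
```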

Surprise Charge

What I did not expect was an email alert from my AWS billing alarm monitor on this account letting me know that I had just passed $200 for the month, followed a few hours later by an alert for $300, followed by an alert for a $400 trigger.

This is one of my personal accounts, so a rate of several hundred dollars a day is not sustainable. Fortunately, a quick investigation showed that this increase was due to one-time charges, so I wasn’t about to run up a $10k monthly bill.

The line item on the AWS Activity report showed the source of the new charge:

$0.05 per 1,000 Glacier Requests x 5,306,220 Requests = $265.31

It had not occurred to me that there would be much of a charge for transitioning the objects from S3 to Glacier. I should have read the S3 Pricing page, where Amazon states:

Glacier Archive and Restore Requests: $0.05 per 1,000 requests

This is five times as expensive as the initial process of putting objects into S3, which is $0.01 per 1,000 PUT requests.

There is one “archive request” for each S3 object that is transitioned from S3 to Glacier, and I had over five million objects in these buckets – something I hadn’t worried about previously, because my monthly S3 charges were based on the total GB, not the number of objects.

Overhead per Glacier Object

josh.monet has pointed out in the comments that Amazon has documented some Glacier storage overhead:

For each S3 object migrated to Glacier, Amazon adds “an additional 32 KB of Glacier data plus an additional 8 KB of S3 standard storage data”.

Storage for this overhead is charged at standard Glacier and S3 prices. This makes Glacier completely unsuitable for small objects.

Break-even Point

After stopping to think about it, I realized that I was still saving money in the long term by moving objects in these S3 buckets to Glacier storage. This one-time, up-front cost would slowly be compensated for by my monthly savings, because Glacier is cheap even compared to the reasonably cheap S3 storage costs, at least for larger files.

Here are the results of my calculations:

  • Monthly cost of storing in S3: 350 GB x $0.095/GB = $33.25

  • Monthly cost of storing in Glacier: $8.97

    • 350 GB x $0.01/GB = $3.50
    • Glacier overhead: 5.3 million * 32 KB * $0.01/GB = $1.62
    • S3 overhead: 5.3 million * 8 KB * $0.095/GB = $3.85
  • One time cost to transition 5.3 million objects from S3 to Glacier: $265

  • Months until I start saving money by moving to Glacier: 11

  • Savings per year after first 11 months: $291 (73%)
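The arithmetic above is easy to check with a few lines of Python (using the us-east-1 prices quoted in this post, and the object count and sizes from my buckets):

```python
# Recompute the cost figures above (us-east-1 prices quoted in the post).
GIB = 2 ** 30
objects = 5_306_220                   # S3 objects transitioned to Glacier
data_gb = 350                         # GB of actual data stored
s3_rate, glacier_rate = 0.095, 0.01   # $/GB-month
transition_rate = 0.05 / 1000         # $ per archive request

s3_monthly = data_gb * s3_rate
glacier_monthly = (
    data_gb * glacier_rate
    + objects * 32 * 1024 / GIB * glacier_rate  # 32 KB Glacier overhead/object
    + objects * 8 * 1024 / GIB * s3_rate        # 8 KB S3 overhead/object
)
transition_cost = objects * transition_rate
monthly_saving = s3_monthly - glacier_monthly

print(f"S3/month: ${s3_monthly:.2f}")                  # $33.25
print(f"Glacier/month: ${glacier_monthly:.2f}")        # $8.97
print(f"One-time transition: ${transition_cost:.2f}")  # $265.31
print(f"Months to break even: {transition_cost / monthly_saving:.1f}")  # 10.9
```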

For this data’s purpose, everything eventually works out to an advantage, so thanks, Amazon! I will, however, think twice before doing this with other types of buckets, just to make sure that the data is large enough and is going to be sitting around long enough in Glacier to be worth the transition costs.

As it turns out, the primary factor in how long it takes to break even is the average size of the S3 objects. If the average size of my data files were larger, then I would start saving money sooner.

Here’s the formula… The number of months to break even and start saving money when transferring S3 objects to Glacier is:

break-even months = 631,613 / (average S3 object size in bytes - 13,011)

(units apologies to math geeks)

In my case, the average size of the S3 objects was 70,824 bytes (about 70 KB). Applying the above formula:

631,613 / (70,824 - 13,011) = 10.9

or about 11 months until the savings in Glacier over S3 covers the cost of moving my objects from S3 to Glacier.

Looking closely at the above formula, you can see that any object 13 KB or smaller is going to cost more to transition to Glacier than to leave in S3. Files approaching that size will save too little money to justify the transfer costs.

The above formula assumes an S3 storage cost of $0.095 per GB per month in us-east-1. If you are storing more than a TB, then you’re into the $0.08 tier or lower, so your break-even point will take longer and you’ll want to do more calculations to find your savings.
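The constants in the formula fall straight out of the per-object charges. Here is the derivation in code (same us-east-1 prices as above; the tiered-pricing caveat still applies):

```python
# Derive the break-even formula from the per-object charges.
GIB = 2 ** 30
s3_rate, glacier_rate = 0.095, 0.01   # $/GB-month
transition_per_object = 0.05 / 1000   # $ per archive request

# Monthly overhead cost per object once it sits in Glacier
# (32 KB billed at Glacier rates plus 8 KB billed at S3 rates):
overhead_per_object = (32 * 1024 * glacier_rate + 8 * 1024 * s3_rate) / GIB

# Monthly saving per byte of actual data moved from S3 to Glacier:
saving_per_byte = (s3_rate - glacier_rate) / GIB

# An object only saves money above this size -- the 13,011 in the formula:
min_size = overhead_per_object / saving_per_byte

# Months to recoup the one-time transition fee for an object of a given size:
def break_even_months(avg_size_bytes):
    return transition_per_object / (saving_per_byte * (avg_size_bytes - min_size))

print(round(min_size))                                  # 13011
print(round(transition_per_object / saving_per_byte))   # 631613
print(round(break_even_months(70824), 1))               # 10.9
```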

[Update 2012-12-19: Included additional S3 and Glacier storage overhead per item. Thanks to josh.monet for pointing us to this information buried in the S3 FAQ.]

[Update 2013-03-07] Amazon S3 documentation now has a section on Glacier Pricing Considerations that has some good pointers.


David Henningsson: Headset jacks on newer laptops

Fri, 2014-03-07 10:17

Headsets come in many sorts and shapes. And laptops come with different sorts of headset jacks – there is the classic variant of one 3.5 mm headphone jack and one 3.5 mm mic jack, and the newer (common on smartphones) 3.5 mm headset jack which can do both. USB and Bluetooth headsets are also quite common, but that’s outside the scope for this article, which is about different types of 3.5 mm (1/8 inch) jacks and how we support them in Ubuntu 14.04.

You’d think this would be simple to support, and for the classic (and still common) version of having one headphone jack and one mic jack that’s mostly true, but newer hardware comes in several variants.

Take the typical TRRS headset: for the headset itself there are two competing standards, CTIA and OMTP. CTIA is the more common variant, at least in the US and Europe, but the result is that some laptop jacks support only one of the variants, while others support both by autodetecting which sort has been plugged in.

Speaking of autodetection, hardware differs there as well. Some computers can autodetect whether a headphone or a headset has been plugged in, whereas others can not. Some computers also have a “mic in” mode, so they would have only one jack, but you can manually retask it to be a microphone input.
Finally, a few netbooks have one 3.5 mm TRS jack where you can plug in either a headphone or a mic but not a headset.

So, how would you know which sort of headset jack(s) you have on your device? Well, I found the most reliable method is to actually look at the small icon present next to the jack. Does it look like a headphone (without mic), a headset (with mic), or a microphone? If there are two icons separated by a slash “/”, it means “either or”.

For the jacks where the hardware cannot autodetect what has been plugged in, the user needs to do this manually. In Ubuntu 14.04, we now have a dialog:

In previous versions of Ubuntu, you would have to go to the sound settings dialog and make sure the correct input and output were selected. So still solvable, just a few more clicks. (The dialog might also be present in some Ubuntu preinstalls running Ubuntu 12.04.)

So in userspace, we should be all set. Now let’s talk about kernels and individual devices.

Quite common on Dell machines manufactured in the last year or so is the variant where the hardware can’t distinguish between headphones and headsets. These machines need to be quirked in the kernel, which means that for every new model somebody has to insert a row in a table inside the kernel. Without that quirk the jack will still work, but with headphones only.
So if your Dell machine is one of these and does not currently support headset microphones in Ubuntu 14.04, here’s what you can do:

  • Check which codec you have: We currently can enable this for ALC255, ALC283, ALC292 and ALC668. “grep -r Realtek /proc/asound/card*” would be the quickest way to figure this out.
  • Try it for yourself: edit /etc/modprobe.d/alsa-base.conf and add the line “options snd-hda-intel model=dell-headset-multi”. (A few instead need “options snd-hda-intel model=dell-headset-dock”, but it’s not that common.) Reboot your computer and test.
  • Regardless of whether you manage to resolve this or not, feel free to file a bug using the “ubuntu-bug audio” command. Please remove the workaround from the previous step (and reboot) before filing the bug. This might help others with the same hardware, as well as helping us upstreaming your fix to future kernels in case the workaround was successful. Please keep separate machines in separate bugs as it helps us track when a specific hardware is fixed.
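Putting the steps above together as a copy-pasteable sketch (the model= value depends on your machine, and remember to revert the edit before filing the bug):

```shell
# 1. Check the codec: the quirk exists for ALC255, ALC283, ALC292 and ALC668
grep -r Realtek /proc/asound/card*

# 2. Add the quirk (a few machines need model=dell-headset-dock instead)
echo 'options snd-hda-intel model=dell-headset-multi' | \
    sudo tee -a /etc/modprobe.d/alsa-base.conf

# 3. Reboot and test the headset mic, then file a bug either way
#    (with the workaround removed first)
ubuntu-bug audio
```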

Notes for people not running Ubuntu

  • Kernel support for most newer devices appeared in 3.10. Additional quirks have been added to even newer kernels, but most of them are with CC to stable, so will hopefully appear in 3.10 as well.
  • PulseAudio support is present in 4.0 and newer.
  • The “what did you plug in”-dialog is a part of unity-settings-daemon. The code is free software and available here.

Michael Hall: Work begins on Qimo 3.0

Fri, 2014-03-07 09:00

Today I started work on the third release of Qimo.  I’m slowly but surely learning how to do it better.

In the first release of Qimo I unpacked an Ubuntu ISO by hand, changed things by hand, and then re-packed the ISO….by hand. It was really quite a bit of work.

By the second release I had learned to build Debian packages to install Qimo’s artwork, configurations and dependencies.  This made things a good bit easier, but I was still unpacking Ubuntu’s ISO, ripping things out, installing my new packages, hacking a few more things together, and then packing it up again.

Now, for the third release, I’m learning how to use debootstrap to create a brand new ISO, just for Qimo, with just the things that Qimo needs to run. I’m still learning how it all works, but I’m hoping it will reduce the time and effort it takes to spin up a new CD image, which will let me iterate faster on the actual development of Qimo, and maybe even provide regular image downloads during development.
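As a rough sketch of what this new flow looks like (the suite, architecture, and package name below are illustrative guesses, not Qimo’s actual build scripts):

```shell
# Bootstrap a minimal Ubuntu system into a fresh directory
sudo debootstrap --arch=i386 trusty qimo-chroot http://archive.ubuntu.com/ubuntu

# Then install Qimo's packages inside the chroot and build a live image
# from the resulting tree (package name is a placeholder)
sudo chroot qimo-chroot apt-get install --yes qimo-session
```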

Who knows, if all goes well, I might even have time to fix up the website.

Ubuntu GNOME: LTS Proposal

Fri, 2014-03-07 06:32


Today, the Ubuntu GNOME team submitted a proposal to the Ubuntu Technical Board asking that the Trusty Tahr cycle be an LTS release.

It means, if the Ubuntu Technical Board approves our proposal, Ubuntu GNOME 14.04 will be LTS – fingers crossed.

What does LTS mean?
Please see this Wiki Page.

Whether or not Ubuntu GNOME 14.04 becomes an LTS release, we shall definitely need more active contributors and manpower. The community is growing fast, in both quality and quantity, which is very promising and has absolutely increased our confidence that we can do more and go the extra mile. Our need for more contributions will never stop, since we have decided to go down that path.

There are lots of areas where you can help and support Ubuntu GNOME. Please have a read of:
Getting Involved with Ubuntu GNOME.

If you have any questions, please don’t hesitate to Contact Us.

“I am because we are.”

“All of us are smarter than any one of us.”

Thank you for choosing and supporting Ubuntu GNOME!

Jono Bacon: Open Source Think Tank Community Leadership Summit Soon

Thu, 2014-03-06 23:03

As some of you will know, I founded the Community Leadership Summit that takes place in Portland, Oregon every year. The event brings together community leaders, organizers and managers and the projects and organizations that are interested in growing and empowering a strong community. Each year we discuss, debate and continue to refine the art of building an effective and capable community, structured as a set of presentations and attendee-driven unconference sessions.

This year’s event is happening on 18th – 19th July 2014 (the two days before OSCON), and is shaping up to be a great event. We have over 180 people registered already, with a diverse and wide-ranging set of attendees. The event is free to attend; you just need to register first. We hope to see you there!

In a few weeks though we have an additional sister-event to the main Community Leadership Summit at the Open Source Think Tank.

The Community Leadership Summit and Open Source Think Tank have partnered to create a unique event designed for executives and managers involved in community management planning and strategic development. While the normal annual Community Leadership Summit serves practicing community managers and leaders well, this unique event is designed to be very focused on executives in a strategic leadership position to understand the value and process of building a community.

I have been wanting to coordinate a strategic leadership event such as this for some time, and the Think Tank is the perfect venue; it brings together executives across a wide range of Open Source organizations, and I will be delivering the Community Leadership Summit track as a key part of the event on the first day.

The event takes place on 24th March 2014 in Napa, California. See the event homepage for more details – I hope to see you there!

The track is shaping up well. We will have keynote sessions; break-out groups discussing gamification, metrics, hiring community managers, and more; and a dedicated case study (based on a real organization with the identity anonymized) to exercise these skills.

If you want to join the Community Leadership Summit track at the Open Source Think Tank, please drop me an email as space is limited. I hope to see you there!

Jono Bacon: Ubuntu Developer Summit Next Week

Thu, 2014-03-06 22:51

Next week we have our Ubuntu Developer Summit, taking place online from Tues 11th March 2014 – Thurs 13th March 2014. Go and see the schedule – we still have lots of schedule space if you want to run a session. For details of how to propose a session, see this guide.

I just want to highlight a session I would like to really invite input on in particular.

Today the online Ubuntu Developer Summit is largely based on the formula from the physical UDSs we used to have, a formula that goes back to 2004. While these events have traditionally served the project well, I am cognizant that our community is much bigger and more diverse than it used to be, and our current Ubuntu Developer Summit doesn’t serve our wider community as well as it could; there is more to Ubuntu than rigorous software engineering.

UDS is great if you are a developer focused on building software and ensuring you have a plan to do so, but for our translators, advocates, marketeers, app developers, and more…the format doesn’t suit those communities as well.

As such, I would like to discuss this and explore opportunities where UDS could serve our wider community better. The session is here and is on Wed 12th March at 19:00 UTC. I hope you can join me!

Nicholas Skaggs: Keeping ubuntu healthy: Manual Image Testing

Thu, 2014-03-06 22:26
Continuing our discussion of the testing within ubuntu, today's post will talk about how you can help ubuntu stay healthy by manually testing the images produced. No amount of robots or automated testing in the world can replace you (well, at least not yet, heh), and more specifically your workflow and usage patterns.

As discussed, every day new images are produced for ubuntu for all supported architecture types. I would encourage you to follow along and watch the progression of the OS through these images and your testing. Every data point matters and testing on a regular basis is helpful. So how to get started?
Settle in with a nice cup of tea while testing!
The Desktop
For the desktop images everything you need is on the image tracker. There is a wonderful video and text tutorial to help you get started. You report your results on the tracker itself in a simple web form, so you'll need a Launchpad account if you don't have one.
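If you test daily, zsync saves a lot of bandwidth by downloading only the blocks that changed since your previous ISO; a sketch using the usual cdimage daily-live layout:

```shell
# Update a local copy of the current Trusty daily desktop image in place;
# only the changed blocks are fetched.
zsync http://cdimage.ubuntu.com/daily-live/current/trusty-desktop-amd64.iso.zsync
```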

The secondary way to help keep the desktop images in good shape is to install and run the development version of ubuntu on your machine. Each day you can check for updates and update your machine to stay in sync. Use your pc as you normally would, and when you find a bug, report it! Bugs found before the release are much easier to fix than after release.
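Staying in sync on a development install is the usual update dance:

```shell
# Refresh package lists and pull in the day's development updates.
sudo apt-get update
sudo apt-get dist-upgrade
```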

Now for the phablet images you will need a device capable of running the image. Check out the list. Grab your device and go through the installation process as described on the wiki. Make sure to select the '-proposed' channel when you install so you will receive the latest development images as they are built. From there you can update every day. Use the device and when you find a bug, report it! Here's a wiki page to help guide your testing and help you understand how and where to report bugs.
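As a rough sketch of the flashing step (the tool and channel names below are assumptions that have changed over time; the wiki pages above are authoritative):

```shell
# Install the flashing tools on the desktop machine.
sudo apt-get install phablet-tools android-tools-adb android-tools-fastboot

# With the device in developer mode and connected over USB,
# flash the proposed channel (channel name is an assumption).
phablet-flash ubuntu-system --channel trusty-proposed
```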

Don't forget there's a whole team of people within ubuntu dedicated to testing just like you. And they would love to have you join them!

Lubuntu Blog: Ubuntu community survey 2014

Thu, 2014-03-06 21:42
Another year, our fellow (and Ubuntu member) Nathan Heafner has prepared the Ubuntu community survey, a simple and quick test (no more than 2 minutes, trust me) whose results will greatly help the entire community of those (like us!) who make Ubuntu and its flavours more secure, usable and beautiful. You'll be asked about your preferred desktop (I'm sure it's LXDE for Lubuntu), web browser, if you…

Canonical Design Team: Protected: Loving the bottom edge

Thu, 2014-03-06 16:09

This content is password protected. To view it please enter your password below:


Canonical Design Team: Thanks for all your submissions!

Thu, 2014-03-06 15:33

Happy by Sergei Pozdnyak

The submissions process for Ubuntu 14.04 is now closed. If you’d like to look at the images, head over to the Flickr Group. From here on a group of dedicated and splendid individuals will get together to select the images that are going to go into the next release of Ubuntu. We’ll be hanging out on #1404wallpaper on Freenode and you can come listen in.

We generally welcome discussion, but please remember that a decision is needed within the time people have volunteered, so please keep additional debate to a minimum.

We’ll start with a meeting tomorrow, Friday 7th March, at 19:00 GMT.

Ahmed Sghaier: Ubuntu Touch development phone - Nexus 4

Thu, 2014-03-06 04:35
Ubuntu Touch has been under the spotlight for the Ubuntu community, interested users and third parties. But following the steps and participating in the development of this revolutionary platform has been a real challenge for the Tunisian LoCo Team.

Currently the official development device, the Google/LG Nexus 4, is not available on the Tunisian market, which makes it harder to get hold of one for testing purposes.

On the other hand, regardless of the hardware availability problems for the Tunisian LoCo team, our community has kept a good pace following and mastering the latest Ubuntu technologies.

Actually, the Tunisian community has supported Ubuntu Touch since the beginning and dedicated time to introducing it during Ubuntu Global Jam 13.03. A few months later, in September 2013, came the first serious Ubuntu Touch development experience: the Ubuntu Touch coding sprint held at the GNU30 event (organized by Esprit Libre in collaboration with more than ten local FLOSS communities and clubs, including Ubuntu-TN).

Later, in 2014, three Ubuntu Touch training sessions were delivered by three of the developer members from the GNU30 sprint (organized by Ubuntu-TN in collaboration with CLLFSM and ISIMUX).

Moreover, many upcoming events in Tunisia will concentrate on Ubuntu Touch, such as the local Ubuntu Dev Challenge and other Ubuntu Touch conferences and workshops. Hence the need for a hardware device to showcase Ubuntu Touch and to test serious applications.

Fortunately, I was lucky to get the support of Mr Amjed Abdejlil, the owner of a little store in Arizona (USA) but a big tech enthusiast and a supporter of the Tunisian community. Mr Amjed was very happy to send me the official Ubuntu Touch development phone (a Nexus 4), and I am very thankful for his great support.

So, I only got the phone last week; I flashed Android 4.4 with the radio from Android 4.3, then installed Ubuntu Trusty in dual boot.
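For the curious, the radio swap comes down to a few fastboot commands with the device in its bootloader; the image file names below are purely illustrative placeholders for the files from Google's factory image archive:

```shell
# Unlock the bootloader if not already unlocked (this wipes the device!).
fastboot oem unlock

# Flash the Android 4.3 radio, then the Android 4.4 system images
# (file names are illustrative; use the ones from your factory archive).
fastboot flash radio radio-mako-4.3.img
fastboot reboot-bootloader
fastboot -w update image-mako-4.4.zip
```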

Below is an album with some pictures of the phone.

I would also be happy to post a review and a little tutorial when possible.

More importantly, I intend to provide "best effort" Ubuntu Touch application testing for local community developers. I can promise this only for Ubuntu-TN FreedomFighter members, who will get priority testing. (Please work hard to become an FF member in order to gain more community privileges.)

To conclude, this is very exciting news for me, and I expect it to be just as exciting for our LoCo team. I hope this will be another push forward towards seeing Tunisian Ubuntu developers soon.

Kubuntu: Calligra 2.8 is Out

Wed, 2014-03-05 22:37

Packages for the release of KDE's document suite Calligra 2.8 are available for Kubuntu 12.04 LTS and 13.10. You can get it from the Kubuntu Backports PPA (alongside KDE SC 4.12). They are also in our development release.
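If you are on 12.04 LTS or 13.10, installing from the backports PPA looks like this (the PPA name is assumed to match the Kubuntu Backports PPA linked above):

```shell
# Enable the Kubuntu Backports PPA and install Calligra 2.8.
sudo add-apt-repository ppa:kubuntu-ppa/backports
sudo apt-get update
sudo apt-get install calligra
```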

Harald Sitter: Kubuntu Testing and You

Wed, 2014-03-05 15:05

With the latest Kubuntu 14.04 Beta 1 out the door, the Kubuntu team is hard at work to deliver the highest possible quality for the upcoming LTS release.

As part of this we are introducing basic test cases that every user can run to ensure that core functionality such as instant messaging and playing MP3 files is working as expected. All tests are meant to take no more than 10 minutes and should be doable by just about everyone. They are the perfect way to get some basic testing done without all the hassle testing usually involves.

If you are already testing Beta 1, head on over to our Quality Assurance Headquarters to get the latest test cases.

Feel free to run any test case, at any time.

If you have any questions, drop me a mail or stop by #kubuntu-devel on Freenode.

kitten by David Flores