news aggregator

Mark Shuttleworth: Ubuntu is the leading OpenStack distribution

Planet Ubuntu - Wed, 2014-05-21 09:03

Kudos to all the speakers, panellists, designers and engineers who made ODS Atlanta such a great event last week. And thanks in particular to the team at Canonical that helped pull together our keynote; I received a great many compliments that really belong to all of you!

For those that didn’t make it, here are a few highlights.

First, Ubuntu is the leading OpenStack distribution, with 55% of all production deployments running Ubuntu, nearly 5x the number for RHEL. There is a big squabble at the moment between vendors in the RHEL camp; for the record, Canonical is happy to work with vendors of alternative OpenStack distributions on Ubuntu as long as we have a commercial agreement that enables us to support users. Nonetheless, the standard way to do OpenStack starts with Ubuntu, adds Canonical’s cloud archive, and installs OpenStack from those packages.
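For readers who want the concrete step, the cloud archive is just an extra apt pocket. The entry below is an illustrative example for the Icehouse pocket on a 12.04 LTS host (the `sudo add-apt-repository cloud-archive:icehouse` helper writes an equivalent entry for you; substitute the pocket for the OpenStack release you want):

```
# /etc/apt/sources.list.d/cloudarchive-icehouse.list (example pocket)
deb http://ubuntu-cloud.archive.canonical.com/ubuntu precise-updates/icehouse main
```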

Second, vendors are focused on interoperability through Canonical’s OpenStack Interoperability Lab (OIL). We build OpenStack thousands of ways every month with permutations and combinations of code from many vendors. Bring us a Juju charm of your work, sign up to the OIL program and we’ll tell you which other vendors you need to do more work with if you want to be interoperable with their OpenStack offerings.
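As a rough sketch of what “bring us a Juju charm” means in practice: a charm is, at minimum, a directory containing a metadata.yaml that names the service and the relations it offers. The example below is entirely hypothetical (names and interface are invented; real charms also ship hooks that install and configure the software):

```yaml
# metadata.yaml for a hypothetical vendor charm submitted to OIL
name: acme-openstack-driver
summary: Example storage driver integration for OpenStack
description: |
  Minimal charm metadata for illustration only; a real charm
  also contains a hooks/ directory with install and config
  scripts.
provides:
  storage-backend:
    interface: cinder-backend
```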

Third, Juju and MAAS are growing support for Windows and CentOS, with other operating systems on the horizon too (patches welcome!). Thanks to contributions from CloudBase Solutions, you’ll get amazing orchestration of Windows and Linux apps on any cloud or bare metal. If you have a Windows app that you want charmed up, they are the guys to talk to! We did a live on-stage install of OpenStack with Ubuntu KVM and Windows Hyper-V with the beta code, and expect it to land in production Juju / MAAS in the coming weeks.


I’m particularly excited about a new product we’ve announced, which is a flat-fee fully managed on-premise OpenStack solution. Using our architecture and tools, and your hardware, we can give you a best-of-breed OpenStack deployment with an SLA for a fixed fee of $15 per server per day. Pretty amazing, and if you are considering OpenStack, definitely an option to evaluate. Give us a call!

Ubuntu Kernel Team: Kernel Team Meeting Minutes – May 20, 2014

Planet Ubuntu - Tue, 2014-05-20 17:19
Meeting Minutes

IRC Log of the meeting.

Meeting minutes.


20140520 Meeting Agenda

ARM Status

Nothing new to report this week

Release Metrics and Incoming Bugs

Release metrics and incoming bug data can be reviewed at the following link:

Milestone Targeted Work Items

Here’s our todo list until we formulate a better plan for tracking work
next week at our team sprint:

   apw          core-1405-kernel    2 work items
   ogasawara    core-1405-kernel    2 work items

Status: Utopic Development Kernel

We have uploaded our first v3.15 based kernel, 3.15.0-1.5, to the Utopic
archive. It is currently based on the v3.15-rc5 upstream kernel.
Important upcoming dates:
Mon-Wed June 9 – 11, vUDS (~3 weeks away)
Thurs Jun 26 – Alpha 1 (~5 weeks away)
Fri Jun 27 – Kernel Freeze for 12.04.5 and 14.04.1 (~5 weeks away)

  • NOTE: The PrecisePangolin/ReleaseSchedule notes Kernel Freeze as Aug 9. I believe this should be amended to Jun 27.

Status: CVE’s

The current CVE status can be reviewed at the following link:

Status: Stable, Security, and Bugfix Kernel Updates – Trusty/Saucy/Precise/Lucid

Status for the main kernels, until today (May 20):

  • Lucid – Prep week
  • Precise – Prep week
  • Quantal – Prep week
  • Saucy – Prep week
  • Trusty – Prep week

    Details of currently open tracking bugs:


    For SRUs, the SRU report is a good source of information:



    cycle: 18-May through 07-Jun
    16-May Last day for kernel commits for this cycle
    18-May – 24-May Kernel prep week.
    25-May – 31-May Bug verification & Regression testing.
    01-Jun – 07-Jun Regression testing & Release to -updates.

Open Discussion or Questions? Raise your hand to be recognized

No open discussion.

Sam Hewitt: Strawberry Sorbet

Planet Ubuntu - Tue, 2014-05-20 12:00

Hey, we're heading towards summer: it's time to make ice cream. Okay, you may not be as adventurous as that; sorbet is less work and equally enjoyable.

Now, you may not have an ice cream machine (and neither do I) but a food processor or blender is a way to cheat that.

You can use these methods with almost any fruit (or combinations of fruit), I just happened to have & use fresh strawberries.


    Makes about 1 litre

  • 1 pound / 0.5 kg fresh strawberries, leafy parts removed
  • 1/2 a lemon, juice of
  • 150 mL simple syrup or 250 mL strawberry preserves
  • pinch of salt
  1. In a food processor blend the strawberries, syrup or preserves, lemon juice & salt until smooth.
  2. Transfer to a shallow dish, cover & freeze until completely solid, depending on your freezer: 3-4+ hours.
  3. Break up the frozen sorbet and return to the food processor. Blend until light and smooth.
  4. Return to dish, cover and place back in freezer to set.
  5. If it is too hard before serving, let thaw for 15 minutes or so in your fridge.

Canonical Design Team: Sticky notes and a mobile first approach

Planet Ubuntu - Tue, 2014-05-20 11:10

As the number of Juju users has been rapidly increasing over the past year, so has the number of new solutions in the form of charms and bundles. To help users assess and choose solutions, we felt it would be useful to improve the visual presentation of charm and bundle details.

While we were in Las Vegas, we took advantage of the opportunity to work with the Juju developers and solutions team to find out how they find and use existing charms and bundles in their own deployments. Together we evaluated the existing browsing experience in the Juju GUI and went through JSON files line by line to understand what information we hold on charms.

We used post-its to capture every piece of information that the database holds about a bundle or charm that is submitted to charmworld.


We created small screen wireframes first to really focus on the most important content and how it could potentially be displayed in a linear way. After showing the wireframes to a couple more people we used our guidelines to create mobile designs that we can scale out to tablet and desktop.

With the grouped and prioritised information in mind we created the first draft of the wireframes.


In order to verify and test our designs, we made them modular. Over time it will be easy to move content around if we want to test if another priority works better for a certain solution. The mobile-first approach is a great tool for making sense of complex information and forced us to prioritise the content around users’ needs.

First version designs.

Daniel Pocock: Ganglia welcomes Google Summer of Code students for 2014

Planet Ubuntu - Tue, 2014-05-20 07:09

Ganglia has been granted funding for five students in the 2014 Google Summer of Code

The names of the students chosen for the program were announced on 21 April and the official coding period has started this week.

The students are:

Project                       Student
Data science                  Plamen Dimitrov
NVIDIA GPU monitoring         Md Ali Ahsan Rana
Ganglia/Nagios integration    Chandrika Parimoo
JMXetric                      Ng Zhi An
Internal Ganglia metrics      Oliver Hamm

and the mentoring team consists of Rajat Phull, Bernard Li, Nick Satterly, Robert Kovacs and Daniel Pocock.

The whole Ganglia community congratulates the students on their selection for GSoC this year and is very excited about working with them. We would also like to thank O'Reilly for generously providing the GSoC students with copies of the book Monitoring with Ganglia.

If any other member of the community would like to assist formally or informally in the mentoring program, test any of the projects, or help in any other way, please just get in touch with us through the Ganglia developers mailing list or #ganglia on freenode.

The Fridge: Ubuntu Weekly Newsletter Issue 368

Planet Ubuntu - Mon, 2014-05-19 23:03

Welcome to the Ubuntu Weekly Newsletter. This is issue #368 for the week May 12 – 18, 2014, and the full version is available here.

In this issue we cover:

This issue of the Ubuntu Weekly Newsletter is brought to you by:

  • Elizabeth Krumbach Joseph
  • Paul White
  • Emily Gonyer
  • Penelope Stowe
  • And many others

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!

Except where otherwise noted, content in this issue is licensed under a Creative Commons Attribution ShareAlike 3.0 License.

Dennis Kaarsemaker: Vim trick: detect filetype after typing #!shebang line

Planet Ubuntu - Mon, 2014-05-19 20:47

Vim can detect syntax based on filename or on shebang lines, but if you're creating, for example, a new perl script and you don't want to use the .pl extension, vim will not automatically detect that after you type #!/usr/bin/perl. Instead, you manually have to type :filetype detect. There must be a better way!

But of course there is a better way, there's vimscript!

function! RedetectFiletype()
  " Only re-run detection when the cursor is on line 1 and it starts with #!
  if getpos(".")[1] == 1 && getline(1) =~ '^#!'
    filetype detect
  endif
endfunction
" On Enter in insert mode: leave insert, re-detect, resume insert, add newline
inoremap <silent> <CR> <Esc>:call RedetectFiletype()<CR>a<CR>

So if you write a new shebang line and hit enter, vim will take the hint and try to guess what kind of syntax you will be using.

Tony Whitmore: Mad Malawi Mountain Mission

Planet Ubuntu - Mon, 2014-05-19 16:59

This autumn I’m going to Malawi to climb Mount Mulanje. You might not have heard of it, but it’s 3,000m high and the tallest mountain in southern Africa. I will be walking 15 miles a day uphill, carrying a heavy backpack. I will be bitten by mosquitoes and other flying buzzy things. It’ll be hard work, is what I’m saying.

I’m doing this to raise money for AMECA. They’ve built a hospital in Malawi that is completely sustainable and not reliant on charity to keep operating. Adults pay for their treatments and children are treated for free. But AMECA also support nurses from the UK to go and work in the hospital. The people of Malawi get better healthcare and the nurses get valuable experience to bring back to the UK.

And that’s what the money I’m raising will go towards. There are just 15 surgeons in Malawi for 15 million people, so the extra support is invaluable.

There have been lots of amazing, generous donors already. My family, friends, colleagues, members of the Ubuntu and FLOSS community, Doctor Who fans, random people off the Internet have all donated. Thank you, everyone. I have been touched by the response. But there’s still a way to go. I have just one month to raise £190. So much has been raised already, but I would love it if you could help push me over my target. Or, if you don’t like me and want to see me suffer, help me reach my target and I’ll be sure to post lots of photos of the injuries I sustain. Either way…

Please donate here. Pin It

Jono Bacon: Goodbye Canonical, Hello XPRIZE

Planet Ubuntu - Mon, 2014-05-19 15:15

After nearly eight years of service at Canonical, I will be stepping down as the Ubuntu Community Manager and leaving my fellow warthogs at Canonical on 29th May 2014.

I have always been passionate about two things in my life. Firstly, I want to go to work every day and feel that my efforts are having a wider impact on the world. Secondly, I believe that community and collaboration are at the core of what makes us human and what drives us to create beautiful things.

Canonical has provided room for me to explore both of these areas in droves. Free Software is an undeniable power for good in making technology accessible to all. Ubuntu has been at the forefront of this; focusing on simplicity, elegance, and ease of use to make technology as accessible and widely available as possible. Canonical and the Ubuntu Community has also provided an environment in which I could explore the many facets of community building, leadership, and growth…trying lots of ideas, learning from what worked and what didn’t, and evolving what we do.

This has resulted in me having the opportunity to learn from great people, in fun and challenging situations, and to further the art and science of building great communities.

A new chapter

…and this is where a new chapter in my life opens.

Recently I was presented with the opportunity to go and work at the XPRIZE Foundation.

For those of you unfamiliar with XPRIZE, their focus is to solve the major problems facing humanity. This work is delivered through incentivized competitions to solve these grand challenges.

This started with the $10 million Ansari XPRIZE that spawned the commercial space-flight industry. Other examples include the Qualcomm Tricorder XPRIZE (to create an affordable handheld device to diagnose health issues), the Google Lunar XPRIZE (to achieve the safe landing of a private craft on the surface of the moon), the Wendy Schmidt Ocean Health XPRIZE (improving our understanding of ocean acidification), and the A.I. XPRIZE (to create the first A.I. to walk or roll out on stage and present a TED Talk so compelling that it commands a standing ovation).

XPRIZE is an organization with significant ideas and ambitions to have a profound impact on the world. If you want to get a better feel for this, I recommend you watch this video by founder, Peter Diamandis; it is tremendously inspiring.

Peter believes that competition is in our DNA. I believe that collaboration and community are in our DNA. As you can imagine, these concepts are complementary, which is why this feels like such a natural fit for me.

As such, I will be joining XPRIZE as Senior Director of Community. I will be there to look at the full breadth of what XPRIZE does and inject community and collaboration into its many different layers: how the prizes are picked, how teams are formed, how R&D is done, how technologies go into production, and more. I am tremendously excited about the opportunity.

Difficult decisions

Although XPRIZE is an exciting (if unknown) road forward, leaving Canonical is bittersweet.

To put this in starker terms, Canonical quite literally changed my life. It helped to transform my career from a position of observation of communities to one of structured best practice. It helped me to think differently, challenge myself, and be open to being challenged by others. It afforded me the opportunity to travel the world, meet incredible people, see incredible things, and ultimately led me to meet my wife, Erica, who has become the cornerstone of our family. This was never a job, it was a way of life, and Canonical provided every ounce of support in helping me to achieve what I did here and to be the best that I could be.

Working with the Ubuntu community has not just been a privilege, it has been a pleasure. One of the many reasons why I love what I do is that I am exposed to so many incredible people, minds, and ideas, and the Ubuntu community is a textbook definition of what makes community so powerful and such an agent for making the world a better place. I will be forever thankful for not just the opportunity to meet so many different members of the global Ubuntu family, but to also continue these many friendships into my next endeavour.

Now, some of you reading this may be concerned by this move. Some of you may be worried that my departure is due to a negative experience at Canonical, or that the community is somehow less important than it used to be. I want to be very clear in responding to this.

I am not leaving Canonical due to annoyance, frustration, bureaucracy, lack of support or anything else negative. I have a wonderful relationship with Mark Shuttleworth, Jane Silber, Rick Spencer and the other executives. I have a great relationship with my peers and my team, and I love going to work every single day. These people are not just colleagues, they are friends. I have long said I have the very best job in community management and I feel as strong about that today as I did when I joined.

I am not leaving Canonical due to problems, I am moving on to a new opportunity at XPRIZE. I actually wasn’t looking for a move; I was quite content in my role at Canonical, but XPRIZE came out of nowhere, and it felt like a good next step to move forward to.

Likewise, I can assure you that the relationship with community at Canonical has not changed at all. Mark Shuttleworth and the rest of the leadership team are passionate about our community and they are intimately aware that our community is critical to the success of Ubuntu.

I believe in Ubuntu as much as I did when I joined. I have long talked about how Free Software and Open Source is only truly game-changing if the technology is simple, powerful, and accessible. Ubuntu is the very best place to get Open Source across the desktop, cloud, and now the mobile space too. Canonical has hired a phenomenal team over the years to drive this, and we are seeing the fruits of this success. I look forward to seeing this story unfold more and more and seeing Canonical achieve wider and wider ambitions.

Before I wrap up, I just want to offer some thanks to Mark Shuttleworth, Jane Silber, Rick Spencer, my team, my peers in the Ubuntu Engineering Management Team, my fellow warthogs at Canonical, and everyone in the Ubuntu community for being so supportive over the years. You all helped me turn my dream into a reality and help me become the person I am today.

I also want to say a special thank-you to Mark who gave me a shot in 2006 and has been a constant beacon of support and inspiration for so many years. I consider Mark a mentor, but more importantly a friend.

We have taken on some tough challenges over the years in Ubuntu, challenges that were necessary for us to grow. I have never once questioned Mark’s commitment to our values and our success as a project, and I am thankful to him for leading Ubuntu towards success; successful projects need leaders who can constantly ask new questions and explore new territory.

You don’t get rid of me that easily

Now, I won’t actually be going anywhere. I will still be hanging out on IRC, posting on my social media networks, still responding to email, and will continue to do Bad Voltage and run the Community Leadership Summit. I will continue to be an Ubuntu Member, to use Ubuntu on my desktop and server, and to post about and share my thoughts on where Ubuntu is moving forward. I am looking forward in many ways to experiencing the true Ubuntu community experience now that I will be on the other side of the garden.

As I step out of my position at Canonical, I am hugely proud of the accomplishments of my team (Daniel Holbach, David Planella, Michael Hall, Nicholas Skaggs, Alan Pope (and alumni, Jorge Castro, Kyle Nitzsche, Ahmed Kamal)). I can’t think of a better group of people to continue to help our community to do great work and be successful.

So, here is to fun and fond memories, and here is to a new set of challenges helping to create a better world with XPRIZE. Thanks!

Sebastian Kügler: Locale changes in Plasma Next

Planet Ubuntu - Mon, 2014-05-19 14:37

One of the things that takes care of internationalization in Plasma is the locale. Locale is a container concept; Wikipedia defines it as “a set of parameters that defines the user’s language, country and any special variant preferences that the user wants to see in their user interface”. There have been some changes in this area between Plasma 4.x and Plasma Next. In this article, I will give an overview of some of the changes and what they mean for the user. Although there is some overlap between locale and translations, I’ll concentrate just on the locale for now.

In Qt5, the locale support has seen a lot of improvements compared to Qt4. John Layt has done some fantastic work in contributing the features that are needed by many KDE applications, to a point where, in most cases, KLocale is not needed anymore, and code that used it can now rely on QLocale. This means less duplication of code and API (QLocale vs. KLocale), more compatibility across applications (as more apps move to use QLocale), fewer interdependencies between libraries, and a smaller footprint.
This is one of the areas where porting of applications from KDE Platform 4.x to KDE Frameworks 5 can cause a bit of work, but it has clear advantages. KLocale is also still there, in the kde4support library, but it’s deprecated, and included as a porting aid and compatibility layer.

In Plasma, we have already made this transition to QLocale, and we’re at a point where we’re mostly happy about it. This also means that we had to revisit the Locale settings, which is probably the single component that is most visible to the user. Of course the locale matters everywhere, so the most fundamental thing is that the user gets units, number formats, currencies and all that presented in a way that is familiar and in line with overall regional settings. There’s a bunch of cases where users will want more fine-grained control over specific settings, and that is where the “Formats” settings interface in systemsettings comes in. In Plasma 4.x, the settings were very much based on using a common setting and overriding specific properties of it in great detail. You could, for example, specify the decimal separator as a string. This allows a lot of control, but it’s also easy to get wrong. It also does not cover all necessary cases, as the locale is much more subtle than can be expressed in a bunch of input boxes. Locale also has an impact on sorting and collation of strings, and has its own rules for appending or prepending the currency symbol.
QLocale, as opposed to the deprecated KLocale, doesn’t allow outside users to set specific properties. This is, in my opinion, a valid choice, and can be translated into a fashion that is more useful to the user as well. The Formats settings UI now allows the user to pick a regional/language setting per “topic”. So if you pick, for example, “Netherlands” for currency and “United States” for time, you’ll get euros, but your time will display with AM/PM. The UI has moved, so to speak, to using a region and language combination instead of overriding locale internals.

The mechanism we’ve put behind it is simple, but it has a number of advantages as well. The basic premise is that systemsettings sets the locale(s) for the workspace, and apps obey that. This can be done quite easily, following POSIX rules, by exporting variables such as LANG, LC_MONETARY, LC_TIME, etc. Now if the user has configured the locale in systemsettings, at next login these variables will be exported for apps that are run within that session to pick up. If the user didn’t specify her own locale settings, the default as set by the system is used. QLocale picks up these variables and Does The Right Thing. A “wanted side-effect” of this is that applications that do not use QLocale will also be able to pick up the locale settings, assuming they follow the POSIX standard described above. This means that GTK+ apps will follow these settings as well, just as it should be within the same session. It also means that if you run, for example, LXDE, it will also be able to have apps follow its locale, without doing special magic for Qt/KDE applications.
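A hypothetical session-startup snippet makes the mechanism concrete (the variable values are illustrative examples, not necessarily what systemsettings writes):

```shell
# Per-category POSIX locale overrides, as a session startup script might export them
export LANG=en_US.UTF-8         # fallback for all categories
export LC_TIME=en_US.UTF-8      # US date order, 12-hour clock
export LC_MONETARY=nl_NL.UTF-8  # euro currency, Dutch number formatting
# Any app started in this session (Qt via QLocale, GTK+ via glibc) inherits these
echo "time=$LC_TIME money=$LC_MONETARY"
```

Because these are plain environment variables, a category left unset simply falls back to LANG.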

Xubuntu: Screen locking in Xubuntu 14.04

Planet Ubuntu - Mon, 2014-05-19 11:15

Improving screenlocking (or: sessionlocking) has been on our agenda for a few cycles now. We’ve used the old and proven XScreensaver for a few releases, but people have always complained about its antiquated looks (which are also not customizable). Switching to gnome-screensaver wasn’t an option because of the additional package dependencies. Furthermore, after gnome-screensaver 3.6, locking became more tightly integrated into Gnome-Shell, which is why Ubuntu/Unity kept version 3.6 and has maintained it for a few releases now.

Starting with 14.04, Ubuntu/Unity have switched to a new solution for locking, and so have we.

The solution Xubuntu uses in 14.04 is called light-locker. The light-locker project is a fork of gnome-screensaver 3.6, but cut down to a bare minimum (so no gnome-dependencies), using LightDM’s greeter as the lock (and unlock) screen.

How does the screenlocking work?

There aren’t too many changes for users. The light-locker process operates in the background and people can still lock their session in the ways they used to (e.g. through Whiskermenu’s lock launcher or through a keyboard shortcut invoking “xflock4”).
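Under the hood, xflock4 is a small wrapper that runs the first available locker; the sketch below is a simplification for illustration, not the actual script (which also honours an Xfce session setting):

```shell
# Simplified sketch of xflock4-style behaviour: run the first locker found
lock_session() {
    for locker in "light-locker-command -l" "xscreensaver-command -lock"; do
        cmd=${locker%% *}                  # command name without its arguments
        if command -v "$cmd" >/dev/null 2>&1; then
            $locker                        # hand off to that locker
            return 0
        fi
    done
    return 1                               # no supported locker installed
}
```

This is why having both XScreensaver and light-locker installed can be confusing: whichever the wrapper finds first wins.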

Settings are configurable via a settings dialog developed for Xubuntu 14.04, called Light Locker Settings. The tool can be found in the Settings Manager. It allows you to configure whether your session should be locked automatically after a timeout, as well as the screen blank and power-off times. The dialog is still, for the moment, basic, but it should allow you enough control. Refinements are planned for future cycles/releases.

One thing that changes for users is the fact that locking with LightDM means that a new virtual terminal is opened. In a default single-user session, the user’s X session rests at VT7 (reachable with the keyboard-shortcut Ctrl+Alt+F7). When you lock your session, LightDM sends a lock signal, light-locker locks the session on VT7 and you get forwarded to VT8, where you’re presented with the login greeter, which serves as the unlock dialog.

The aforementioned change introduces one inconvenience you might (or might not) notice: when light-locker switches the VT, there is some screen flickering, and it can take a second or two on older machines.

What happened to my music playback?

As your seat becomes inactive, your audio stream is stopped/paused until you log into your session again. This is one of the known issues of light-locker or locking with LightDM in general.

Currently, when locking, it is assumed you are either:

  1. in a public space of sorts (the desktop at home hardly needs locking) and have walked away from the machine
  2. using a system with more than one user

Stopping/pausing playback in both of these scenarios makes sense.

However, this might be an annoying change for users used to having their music playback continue even when their session locks. If you don’t like this behavior, there are basically three solutions:

  1. Set light-locker to lock the session “When the screensaver is deactivated”
  2. Switch back to using xscreensaver
  3. Add your user to the “audio” group on your computer, and music playback will also continue with light-locker

The first option is a good workaround, because it means that your audio-playback will continue when the screen has been blanked. However, when you wake up your computer, e.g. by touching the mouse, it will pause the music until you log into your session again.
The third solution is mentioned last, because it isn’t advised to add your user to the “audio” group (read The Audio Group wiki page for a comprehensive explanation). However, as long as you’re on a single-user system, this might still be an option for you.

Can I have a screensaver other than the blank screen with light-locker?

In a word – no.

If you need a screensaver for whatever reason, perhaps using a TV for a monitor and don’t want a blank screen, then you will need to remove light-locker and install some alternative, like xscreensaver.


From Xubuntu 14.04 on, we can finally provide a visually consistent way of logging in to and locking your session with light-locker. As mentioned above, there is a conceptual change in how we look at locking in Xubuntu (which to some might seem like a small regression), however, there are still good alternatives for those who don’t agree with our vision.

Known Issues

Currently, you might run into this known issue (that we discovered only when the release was already imminent), which we’re already working on fixing:

  1. Xfce4 Power Manager does not restore screen power (1259339) – see the release notes for details and workarounds

Also, upgraders from previous Xubuntu versions might run into trouble because XScreensaver and light-locker are both installed. Just get rid of one of the two to resolve that.

Canonical Design Team: Making responsive: scoping the work (6)

Planet Ubuntu - Mon, 2014-05-19 08:42

This post is part of the series ‘Making responsive‘.

Following the designers and developers sprint, we had a full web team workshop day to discuss our findings and plan the work for the following weeks.

Planning and scoping was tricky because we had to balance the work required to make the site responsive with incoming work requests from the business — there was a big release of Ubuntu coming up and lots of content updates along with it.

We carried out a few team exercises that helped us to deconstruct the project into smaller chunks that we could prioritise and plan around other commitments, so that it didn’t feel like building the Titanic but rather something more manageable.

Creating a wishlist

Initially, it’s good to have an idea of what each person considers important for the project.

The simplest way to capture everything you want to do in a project is to write all ideas on separate sticky notes. This approach helps to identify common themes and priority tasks.

This is a good opportunity to get the entire team together in a room and give everyone space to say what they hope to achieve in the near, and not so near, future.

At the end, though, it’s only natural that you’ll be left with a huge number of ideas, so it’s necessary to organise them into groups, like projects or topics.

Dividing work into phases

Following the wishlist exercise, and based on the resources and time available, fixed deadlines, and business goals, the list was trimmed down into four time frames:

  • To be completed before the Ubuntu 14.04 LTS release
  • To be completed for the release
  • To be completed soon after release
  • To be completed later

If you have hard deadlines for other projects that are on your team’s plate, start adding those into a calendar overview (we used four sticky note columns) and discuss what can be done within the time that is left available.

For example, we knew that, in preparation for Ubuntu’s presence at the Mobile World Congress at the end of February, the content of the tablet and phone sections of the website would have to be updated ahead of any responsive work going live.

Defining high priority tasks

In terms of the responsive project itself, we defined the priority tasks:

  • Update our CSS with the components that had been created for the two initial responsive projects and that would be useful across our sites
  • Find an initial solution for main, second and third level navigation, and multiple footers
  • Create updated image assets where necessary

And we also defined what we wouldn’t do at this stage:

  • Rewrite copy
  • Restructure and reorder content
  • Change the information architecture of the site
  • Update the site’s visual style

It was important to have these restrictions in place in order to keep the scope of the project as small and as feasible as possible. Most times, it’s impossible to do everything you wanted in the first instance of moving to responsive, so deciding what would be the biggest wins for the amount of time you have available is an important step in planning the work.

At the end of the workshop, we all felt more comfortable with the amount of work ahead: following a few simple exercises, we had identified pain points, set realistic goals and expectations, and established priorities.

Content risk assessment

Matt went through all the pages of the site to assess what, in terms of the design, could become an issue once the site was responsive and once the content had to fit into small screens. These findings were added to a document divided into five different types of content:

  • Images
  • Graphs
  • Tables
  • Layout and behaviour
  • Text

With this document we could see how much work we potentially would have when transitioning all the pages of the site to the updated responsive styles, and which would be the trickier problems to solve.

Estimating time

With the content inventory at hand, Ant estimated the degree of difficulty of converting each page to responsive, using a scale of 1 to 3. He then estimated how many ‘points’ he should be able to get done in one day, which gave us an estimated completion time for the first pass at fixing rendering issues.

Something to bear in mind when estimating times is that, while fixing the rendering issues that came with converting all the pages of the site to the responsive styles proved faster than initially estimated, the testing across different devices and screen sizes that followed was time-consuming for both designers and developers.

The complexity of testing and how long you should allow for it will depend on the site’s design and the CSS being used: for example, when using newer techniques you should allow for enough time to create suitable fallbacks for browsers with fewer capabilities. Another thing to keep in mind is that testing across devices should be done as you go, rather than at the very end of the process. Just a quick look at a couple of different devices and browsers (for example, previous Android versions and Opera Mini) before you start the estimation process will give you a clearer idea of the amount of work that lies ahead.

Even though our time estimates were a little off, creating those spreadsheets and dividing the work into very small blocks made us feel more in control, and, as we ticked pages off, it made us feel motivated.


When you’re working on a large living and breathing website, you know that all the updates and changes that come along with it don’t stop just because you want to make your site responsive. It’s important that everyone involved understands that you should be putting your website first, and that responsive is not necessarily the top priority. That’s why it’s important to be smart about the way you plan the project and give yourself some parameters to work within — the transition isn’t going to happen overnight.

Benjamin Mako Hill: Installing GNU/Linux on a 2014 Lenovo Thinkpad X1 Carbon

Planet Ubuntu - Sun, 2014-05-18 22:58

I recently bought a new Lenovo X1 Carbon. It is the new second-generation, type “20A7” laptop, based on Intel’s Haswell microarchitecture, with the adaptive keyboard. It is the version released in 2014. I also ordered the Thinkpad OneLink Dock which I have returned for the OneLink Pro Dock which I have not yet received.

The system is still very new, challenging, and different, but seems to support GNU/Linux reasonably well if you are willing to run a bleeding edge version and/or patch your kernel and if you are not afraid to spend an afternoon or two tweaking things. What follows are my installation notes for Debian testing (jessie) when I installed it in early May 2014. My general impressions about the laptop as a GNU/Linux system — and overall — are at the end of this write-up.

Systems Description

The X1 Carbon I ordered included the 512GB SSD, the 14.0 inch WQHD (2560×1440) 260 nit touchscreen, and the maximum 8GB of memory. I believe the rest is not particularly negotiable but includes a 720p HD Camera, a 45.2Wh battery, and an Intel Dual Band Wireless 7260AC with Bluetooth 4.0.

For those who are curious, here is the output of lspci on the system:

00:00.0 Host bridge: Intel Corporation Haswell-ULT DRAM Controller (rev 0b)
00:02.0 VGA compatible controller: Intel Corporation Haswell-ULT Integrated Graphics Controller (rev 0b)
00:03.0 Audio device: Intel Corporation Haswell-ULT HD Audio Controller (rev 0b)
00:14.0 USB controller: Intel Corporation Lynx Point-LP USB xHCI HC (rev 04)
00:16.0 Communication controller: Intel Corporation Lynx Point-LP HECI #0 (rev 04)
00:16.3 Serial controller: Intel Corporation Lynx Point-LP HECI KT (rev 04)
00:19.0 Ethernet controller: Intel Corporation Ethernet Connection I218-LM (rev 04)
00:1b.0 Audio device: Intel Corporation Lynx Point-LP HD Audio Controller (rev 04)
00:1c.0 PCI bridge: Intel Corporation Lynx Point-LP PCI Express Root Port 6 (rev e4)
00:1c.1 PCI bridge: Intel Corporation Lynx Point-LP PCI Express Root Port 3 (rev e4)
00:1d.0 USB controller: Intel Corporation Lynx Point-LP USB EHCI #1 (rev 04)
00:1f.0 ISA bridge: Intel Corporation Lynx Point-LP LPC Controller (rev 04)
00:1f.2 SATA controller: Intel Corporation Lynx Point-LP SATA Controller 1 [AHCI mode] (rev 04)
00:1f.3 SMBus: Intel Corporation Lynx Point-LP SMBus Controller (rev 04)

BIOS/Firmware

The BIOS firmware is non-free and proprietary, as is the case with all ThinkPads and nearly all laptops. According to this thread there is a bug in the default BIOS that means that suspend to RAM is broken in GNU/Linux.

You can get an updated BIOS at Lenovo’s ThinkPad X1 Carbon (Type 20A7, 20A8) Drivers and software page by looking in the “BIOS” section. Honestly, the easiest approach is probably to download the Windows BIOS Update utility (documentation is here) which you can use to run the BIOS update from within Windows before you install GNU/Linux.

If that’s not an option (e.g., if you’ve already installed GNU/Linux) the best method is to download the bootable CD ISO from the same page. Of course, since the X1 Carbon has no optical media, you have to find another way to boot the CD image. I struggled to get the ISO to boot from USB using the usually reliable dd method. This message suggests that the issue has to do with the El Torito wrapper:

“I had to dump the eltorito image from the ISO they provide, after that I was able to dd the resulting image to a flash drive and the bios update went well, no cdrom needed.”
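That procedure can be sketched as follows (hedged: the ISO filename is a placeholder for whatever file you downloaded from Lenovo, and geteltorito ships in Debian/Ubuntu’s genisoimage package):

```shell
# Extract the El Torito boot image from the BIOS update ISO
# (bios-update-cd.iso is a placeholder name).
geteltorito -o bios-update.img bios-update-cd.iso

# Write the extracted image to a USB stick.
# Double-check the device name first: this erases /dev/sdX!
sudo dd if=bios-update.img of=/dev/sdX bs=1M
```

After booting from the stick, the BIOS updater runs just as it would from the CD.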

I updated to version 1.13 of the BIOS which fixes the suspend/resume bug. By the time you read this, there may be newer versions that fix other things so check the Lenovo website.

Installing Debian

I installed Debian testing using the March 19, 2014 “Alpha 1” release of the Debian Installer for Jessie (currently testing). I installed in graphical mode. With the WQHD screen, everything was extremely tiny but it worked flawlessly.

I downloaded the amd64 net install image from the normal place and installed the rest of the system using the built-in Ethernet port which required no firmware or extra drivers. I did the normal dd if=FILENAME.iso of=/dev/sdX method of getting the installer onto a USB stick to boot. I turned off restricted boot in BIOS first. In general, the latest version of the Debian installation guide is always a good source of guidance on installing Debian.

I used the Debian installer wizard to partition and selected “Use entire disk and partition it for LVM and encrypted data” which kept the UEFI partitions around. The system installed with no errors or issues and booted up normally afterward. The grub menu is hilariously narrow on the WQHD screen.

If you want to use the built-in wireless and/or Bluetooth, you will need to install the non-free iwlwifi firmware package. It is very lame that we still have to do this to use hardware we have purchased.

What Works and Doesn’t

The following stuff worked the first time I booted into the GNOME 3 desktop and logged in:

  • The WQHD 2560×1440 screen
  • The touchscreen
  • Both the TrackPoint and the touchpad
  • Built-in e1000e Ethernet using the dongle
  • The keyboard plus the “adaptive” row of F1-F12 keys.
  • External monitor using the full HDMI or mini-DisplayPort connectors
  • Audio (both speakers and microphone)
  • The camera/webcam

The following stuff works if you install non-free firmware:

  • Internal Wireless
  • Bluetooth 4.0

The following stuff works with qualifications:

  • Suspend to RAM — Works once you have updated the firmware.
  • The adaptive keyboard — The F1-F12 keys work but the “button” that theoretically lets you switch to different sets of function buttons (e.g., volume, brightness) does nothing.
  • Disabling the touchpad — There is a BIOS option to disable the touchpad. It works in Windows and does nothing at all in GNU/Linux.

I have not tried:

  • The fingerprint reader

OneLink Dock

I also ordered the OneLink Dock. It’s pretty cool and plugs into the side of the laptop with a wide plug that includes the standard power plug inside a big connector for everything else. Everything on the dock worked out of the box including the:

  • HDMI port
  • USB hub
  • USB audio device in the dock
  • USB Ethernet device in the dock

One thing to note: the HDMI output on the standard OneLink has a maximum resolution of 1920×1080, which is not stated in its material on Lenovo’s website. Moreover, if you plug in the OneLink cable, the laptop HDMI port is disabled. This means if you plug in the dock, you simply cannot drive an external monitor over HDMI at anything higher than 1920×1080. Instead, you will need to find a mini-DisplayPort to DisplayPort cord or adapter, which will work. Of course, this will also mean that it’s two connectors instead of one.

Alternatively, you can buy the OneLink Pro dock which apparently works with higher resolutions over its DisplayPort connector. I have exchanged docks but have not received the Pro version so I cannot verify this.

Disabling the touchpad

As a long-term ThinkPad user, I love the TrackPoint pointing stick. If you plan on using this, the built-in touchpad is incredibly aggravating because it is very easy to brush against it while using the TrackPoint.

In BIOS, there is an option to disable the touchpad. Although this works in Windows, it does absolutely nothing in GNU/Linux. Part of the issue is that, unlike the older X1 Carbon and other ThinkPads, there are no TrackPoint buttons. Instead of buttons, there are regions at the top of the touchpad which are configured, in software, to act like buttons. If you want to be able to click, the touchpad can never be truly turned off.

This is not a problem unique to the Haswell X1 Carbon, and a number of people have been struggling with this issue on other Lenovo laptops. Essentially, what you need to do is configure your touchpad so that the buttons are where you want them and so that it ignores any input for the purposes of cursor movement.

There are a few ways of doing this, but this answer to a question has the solution I ended up using:

Open the file /usr/share/X11/xorg.conf.d/50-synaptics.conf for editing.

Find the Section “InputClass” that contains the line Identifier “Default clickpad buttons”.

Set the SoftButtonAreas option to the values 64% 0 1 42% 36% 64% 1 42%; this defines the size of the right and middle buttons.

Enable the AreaBottomEdge option and set its value to 1; this will disable touchpad movement.

If everything is done right, your section should look like this:

Section "InputClass"
    Identifier "Default clickpad buttons"
    MatchDriver "synaptics"
    Option "SoftButtonAreas" "64% 0 1 42% 36% 64% 1 42%"
    Option "AreaBottomEdge" "1"
EndSection

Essentially, the first Option line creates a right button spanning from 64% of the touchpad’s width to the right edge, and a middle button between 36% and 64% of the width, each covering the top 42% of the height. The synaptics manpage (man synaptics) will give you more detail on the general way this works.

Of course, something does feel very wrong about editing a file in /usr/share.

Fixing the Adaptive Keyboard

The most wild feature of the laptop is the adaptive keyboard strip. The strip is a back-lit LCD that looks almost like an E Ink screen and acts as a touchscreen keyboard. The default mode gives you the F1-F12 keys. If you “press” the keys (since they aren’t buttons, you just put your finger on top of them) they act like normal F-keys. You can Ctrl-Alt-F1, etc., to switch to virtual terminals out of the box. There are four modes: “Function” (i.e., normal F-keys), Home, Web, and Chat. The last three overlap quite a bit (e.g., they all have brightness and volume). You can play with an example on the Lenovo homepage.

In Windows, switching programs will apparently change these “keys” so that an appropriate set of buttons is shown for the application you are using. You can also change these keys manually with a big “Fn” button at the far left of the adaptive keyboard strip.

As I write this, released kernels do not support the adaptive keyboard Fn button, which means you cannot use anything other than the F-keys out of the box. I believe it also means that resuming from suspend to RAM breaks these keys.

That said, Shuduo Sang from Canonical has released several versions of a patch to the thinkpad_acpi kernel module which adds support for the Home mode. The other modes (Web and Chat) do not seem to be supported. The latest version of the patch is on the Linux Kernel Mailing List and the relevant commits are:

330947b save and restore adaptive keyboard mode for suspend and resume
3a9d20b support Thinkpad X1 Carbon 2nd generation's adaptive keyboard

Although this is not supported in Debian testing at the time of writing, a bug was filed in Debian and quickly fixed by Ben Hutchings in Debian kernel version 3.14.2-1, which is currently in sid/unstable. As a result, if you install the latest kernel from Debian unstable (3.14.2-1 or later), the adaptive keyboard just works.

If you aren’t using Debian and the kernel you are using does not have support, you might need to patch your kernel.

Ethernet in the OneLink Dock

This was not an issue using the latest kernel in Debian but apparently people have struggled with getting the USB Ethernet device in the OneLink dock to work. For example, this bug suggests:

Many new Thinkpad laptops have a dock (Thinkpad OneLink Dock) containing a usb ethernet chip that is supported by the ax88179 driver. However its USB ID is not included in the driver shipped with the 3.13 kernel used in Trusty. A patch to add this ID has been sent to the LKML (see ) and it would be very convenient for all users of the dock if it could be applied to the Trusty kernel.

If your kernel does not support the USB Ethernet device in the dock, and a newer kernel doesn’t fix it, the patch is straightforward.

General Impressions

As I have described in my interview with The Setup, I have been a user of ThinkPad X-series laptops for many years. This is my sixth X-series ThinkPad.

Overall, I quite like the hardware! Once things mature a little bit, I think that this will be a great laptop for running GNU/Linux. That said, I ordered the laptop without realizing that the X1 Carbon had gone through a major revision! The keyboard was quite a surprise. I think that changing a system so radically without changing the model name/number is a very bad move on Lenovo’s part.

There are two remaining issues with the system I’m still struggling with: (1) the keyboard layout is freaky and weird, and (2) the super high resolution screen breaks many things.

The quality of the keyboard itself is great and worthy of the ThinkPad name. That said, there are two ways in which it is strange. The first is the adaptive keyboard strip. Overall, it works surprisingly well and I think it is a clever idea. My sense is that the strip is more annoying in Windows because it changes out from under you all the time. In GNU/Linux, only manual changing of modes is supported. This, in my opinion, is a feature. I do miss the real feedback you get from pressing keys but for F-keys and volume-keys that I don’t use often this isn’t too important. On the downside, I have realized several times that I had been holding down a “button” for several seconds and not noticed.

The more annoying issue with the keyboard is the way that the other keys have moved around. Getting rid of CapsLock is wonderful! How has this taken so long? Replacing it with split Home and End keys is nuts. I’ve remapped Home and End to put Control back where it should be. My right Control key is now Home, but I still don’t have an End key. The split Backspace and Delete is not a problem for me. The tilde/apostrophe is in a very bad place. There is no Insert, Print Screen/SysRq, Scroll Lock, Pause/Break or NumLock. They are all just gone. Surprisingly, I haven’t missed any of them.

The second issue is the 2560×1440 resolution on the 14 inch screen. I use a 27 inch external monitor with the same native resolution as the laptop but, by my arithmetic, the pixel density on the laptop is 210 DPI instead of the 109 DPI on the external monitor. The result is “the scaling problem” and it’s a huge pain that seems mostly unsolved on any operating system.

Fonts and widgets that look good on the laptop look huge on my external monitor. Stuff that looks good on my external monitor looks minuscule on the laptop. I routinely move windows between my laptop screen and my large monitor. Until I find a display system that can handle this kind of scaling effectively, this requires changing font size and zooming all the time. At the moment, I’m shrinking and expanding my font size using the built in hot keys in Emacs, Gnome Terminal, and Firefox/Iceweasel. I love the high resolution screen but the current situation is crazy-making.

Finally, this setup will not get you into the Church of Emacs and it’s not about to find its way onto the FSF’s list of endorsed hardware. For one, I paid the Windows tax. Beyond that, there is the non-free BIOS and the need for non-free firmware to use the wireless and Bluetooth. This is standard for ThinkPads but it isn’t getting any easier to swallow. There are alternatives in the form of Gluglug’s X60 laptops running CoreBoot, Lemote Yeelong laptops, Bunnie Huang’s Novena and others that are better in these regards. I am very excited for these projects but, for a number of reasons, these just weren’t an option for the laptop I use for my research computing.

Colin King: smemstat: a new tool to report shared memory usage

Planet Ubuntu - Sun, 2014-05-18 19:59
Back in April I wrote smemstat to dump out the per-process shared memory usage based on the memory mapping data from /proc/$pid/smaps. I tried to make smemstat as compact as possible, with minimised changes in heap size, to reduce the impact on the total system shared memory statistics.

smemstat reports the memory utilised per process, taking into consideration pages that are shared with other processes. So, if two processes share 64K of memory, each process will report 32K of that memory.

smemstat has two modes of operation: "dump all current memory stats" and "dump periodic change in memory stats". The former is useful for a single snapshot view of memory utilisation, whereas the latter is useful for observing memory size changes over time. Exiting or new processes that share memory with other processes will cause a change in the reported amount of memory used by the remaining processes, because this memory is shared amongst a different number of processes.

There are various options to smemstat, so consult the man page for more details. One useful option is -o, which collects the smemstat statistics into a JSON formatted output file. This can be useful for memory based tests, as the structured JSON data can be easily parsed and analysed.

smemstat has landed in Ubuntu Utopic 14.10 and is also available in Debian. Examples of smemstat can be found here. I hope it proves to be a useful utility.

Duncan McGreggor: lfest: A Routing Party for LFE Web Apps

Planet Ubuntu - Sun, 2014-05-18 19:50
The Clojure community (in particular, James Reeves) has produced one of the best APIs I've seen for creating RESTful services, or any application using path dispatching (routing), really: Compojure.

I've been really missing this in LFE. Despite the fact that creating RESTful apps in LFE with YAWS is so simple it doesn't even need a framework, the routes can be a bit difficult to read, due to some of the destructuring that's done (e.g., URL paths).

Take the following route declaration from the yaws-rest-starter project:

Note that this example delegates additional routing to other functions, since all the routing in one function was pretty difficult to read. This, however, detracts from the overall readability in a different way: there's not one place to look to see what functions map to the complete URL-space for this service.

I wanted to be able to do something like what you can do in Clojure:

LFE is a Lisp with macros, so why not? Also, nox or mheise (I forget who) on the #erlang-lisp channel had previously noted all the lfe* projects, and suggested that someone create the inevitable "lfest" repo on Github.

Enter lfest. After a weekend of hacking and debugging some tiny macros, surprisingly little code now supports route definitions like the following in LFE+YAWS web apps:

If you're curious to see what this actually expands to before getting compiled in a .beam file, you can view it here.

Bonus for LFE webdevs: this project also has a module of HTTP status codes, for easy re-use in other projects.

The recent work I've been doing with macros has really helped shape the section I've been planning to write in the LFE User Guide. I haven't wanted it to be yet another dry description of macros in a Lisp-2. Instead, my hope is to help jump-start people into how to think about macros, how to start writing them immediately (without years of study first), and how to debug them.

The trick, as with all complicated subjects, is how to remove the barrier to entry and get folks that necessary hands-on experience without dumbing-down the material. It's almost ready...

Also, it's Bay to Breakers today, and Ocean Beach is bumpin. The synchronicity is just eery. May you party just as hard with routes in LFE.

Daniel Pocock: London free VoIP user group this Tuesday

Planet Ubuntu - Sun, 2014-05-18 18:30

A new user group for free and open VoIP and RTC is getting together in London. The first meeting is this Tuesday, 20 May at a central London location.

Free RTC mailing list

Please feel free to join the Free RTC mailing list (kindly sponsored by FSF Europe) if you would like to find out more about the emergence of free software based RTC solutions.

Maia Grotepass: Trusty DVD parcels

Planet Ubuntu - Sun, 2014-05-18 14:21

I am just getting the parcels of DVDs ready. I will take them to the post office this week sometime. Thanks to everyone who is willing to be a distribution point.

The Fridge: Ubuntu 12.10 (Quantal Quetzal) End of Life reached on May 16 2014

Planet Ubuntu - Sat, 2014-05-17 01:46

This is a follow-up to the End of Life warning sent last month to confirm that as of today (May 16, 2014), Ubuntu 12.10 is no longer supported. No more package updates will be accepted to 12.10, and it will be archived in the coming weeks.

The original End of Life warning follows, with upgrade instructions:

Ubuntu announced its 12.10 (Quantal Quetzal) release more than 18 months ago, on October 18, 2012. Since changes to the Ubuntu support cycle mean that Ubuntu 13.04 has reached end of life before Ubuntu 12.10, the support cycle for Ubuntu 12.10 has been extended slightly to overlap with the release of Ubuntu 14.04 LTS. This will allow users to move directly from Ubuntu 12.10 to Ubuntu 14.04 LTS (via Ubuntu 13.10).

This period of overlap is now coming to a close, and we will be retiring Ubuntu 12.10 on Friday, May 16, 2014. At that time, Ubuntu Security Notices will no longer include information or updated packages for Ubuntu 12.10.

The supported upgrade path from Ubuntu 12.10 is via Ubuntu 13.10, though we highly recommend that once you’ve upgraded to 13.10, you continue to upgrade through to 14.04, as 13.10’s support will end in July.

Instructions and caveats for the upgrade may be found at:

Ubuntu 13.10 and 14.04 continue to be actively supported with security updates and select high-impact bug fixes. Announcements of security updates for Ubuntu releases are sent to the ubuntu-security-announce mailing list, information about which may be found at:

Since its launch in October 2004 Ubuntu has become one of the most highly regarded Linux distributions with millions of users in homes, schools, businesses and governments around the world. Ubuntu is Open Source software, costs nothing to download, and users are free to customize or alter their software in order to meet their needs.

Originally posted to the ubuntu-announce mailing list on Sat May 17 01:37:07 UTC 2014 by Adam Conrad

Rafał Cieślak: C++11: std::threads managed by a designated class

Planet Ubuntu - Fri, 2014-05-16 17:54

Recently I have noticed an unobvious problem that may appear when using std::threads as class fields. I believe it is more than likely to be encountered if one is not careful enough when implementing C++ classes, due to its tricky nature. Also, its solution provides an elegant example of what has to be considered when working with threads in object-oriented C++, so I decided to share it.

Consider a scenario where we would like to implement a class that represents a particular thread activity. We would like it to:

  • start a new thread it manages when an instance is constructed
  • stop it when it is destructed

I will present the obvious implementation, explain the problem with it, and describe how to deal with it.

So the outline would certainly look like this:

#include <thread>
#include <chrono>

class MyClass{
public:
    MyClass();
    ~MyClass();
private:
    std::thread the_thread;
    bool stop_thread = false; // Super simple thread stopping.
    void ThreadMain();
};

Okay, but the default std::thread constructor is pretty pointless here, it “Creates new thread object which does not represent a thread”. So a pretty obvious solution is to explicitly use another constructor, which will actually launch the thread:

the_thread(&MyClass::ThreadMain, this)

Then we can implement the thread routines within the ThreadMain method. The destructor would take care of stopping the thread gracefully and waiting for it to terminate.

#include <thread>
#include <chrono>

class MyClass{
public:
    MyClass() : the_thread(&MyClass::ThreadMain, this) {}
    ~MyClass(){
        stop_thread = true;
        the_thread.join();
    }
private:
    std::thread the_thread;
    bool stop_thread = false; // Super simple thread stopping.
    void ThreadMain(){
        while(!stop_thread){
            // Do something useful, e.g:
            std::this_thread::sleep_for( std::chrono::seconds(1) );
        }
    }
};

This seems like an elegantly implemented class managing the thread. It will even [seem to] work correctly!

But there is a critical problem with it.

The new thread will start running as soon as it is constructed. It will not wait for the constructor to finish its work (why should it?). It may happen to work, and sometimes, at random, it will, leading you into a false sense of security and correctness. It will not even wait for the other class fields to be constructed. It has every right to crash when accessing stop_thread. And what if your class has other fields that are not atomic? The thread can start using uninitialized objects, problems guaranteed.

One possible solution would be to have the constructor notify the thread when it is safe to start, from within its body. This might be accomplished with an atomic or with a condition variable. Keep in mind that it would need to be constructed before the std::thread, so that it is ready to use when the thread starts (so the order of construction really does matter!).
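A sketch of the atomic-flag variant (class and member names here are mine, not from the code above; it busy-waits for brevity, where a condition variable would avoid the spinning):

```cpp
#include <atomic>
#include <thread>

class NotifiedStart {
public:
    NotifiedStart() : the_thread(&NotifiedStart::ThreadMain, this) {
        // ... initialise other fields here, then release the thread ...
        ready = true;
    }
    ~NotifiedStart() {
        stop_thread = true;
        the_thread.join();
    }
private:
    void ThreadMain() {
        while (!ready) std::this_thread::yield();       // wait for the constructor
        while (!stop_thread) std::this_thread::yield(); // do the real work here
    }
    // The flags are declared (and thus constructed) before the_thread,
    // so they are safe to touch as soon as the thread starts.
    std::atomic<bool> ready{false};
    std::atomic<bool> stop_thread{false};
    std::thread the_thread; // declared last, so it is started last
};
```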

This might work in the case of the code I use for demonstration. But in the general case it will do no good. Imagine a hierarchy of polymorphic thread-managing classes; imagine that ThreadMain is virtual. In such a scenario its address is derived from the vtable. But the vtable pointer is not set to the final class’s vtable until the most derived constructor runs! This means the starting thread could call the wrong function, leading to a variety of confusing behavior. The idea of notifying it when to start won’t help here.

A universal solution would be to prepare the class in two steps.

  • First, construct it.
  • When it is ready to use, call Start() which will actually launch the thread.

Like this:

MyClass instance;
instance.Start();

So we will need to start with no thread running, and construct it dynamically when Start is called. My first thought was:

class MyClass{
public:
    MyClass(){}
    ~MyClass(){/****/}
    void Start(){
        the_thread = new std::thread(&MyClass::ThreadMain,this);
    }
private:
    std::thread* the_thread; // Or you could use std::unique_ptr<>.
    /****/
};


But this just feels wrong. Pointers? Seriously, are we forced to manually manage memory?

No. std::thread is movable! So we can use that boring default std::thread constructor (not that pointless, right?) to construct it at the beginning, and then, when Start() is called, we will substitute it with an actual running thread! Like this:

class MyClass{
public:
    /* Explicitly using the default constructor to
     * underline the fact that it does get called */
    MyClass() : the_thread() {}
    ~MyClass(){
        stop_thread = true;
        if(the_thread.joinable()) the_thread.join();
    }
    void Start(){
        // This will start the thread. Notice move semantics!
        the_thread = std::thread(&MyClass::ThreadMain,this);
    }
private:
    std::thread the_thread;
    /****/
};

Now this is both safe and elegant.

This scenario has taught me to stay vigilant when mixing asynchronous execution with stuff that C++ does on the lower levels. I do hope that it has opened your eyes too, or that at least you will remember it as a simple yet interesting case!



Sebastian Kügler: Grumpy wizards

Planet Ubuntu - Fri, 2014-05-16 14:51

Oxygen Font Example

In Plasma, we have traditionally relied on the font settings dictated by the distribution we run on. This means that we’ll take whatever “Sans” font the distro has set up (or has left to something else), and work with that. The results of that were sub-optimal at least, as it meant we had almost no control over how things would look for end users. Fonts matter a lot, since they determine how readable the UI is, but also what impression it gives. They also have an effect on sizing, and even more so in Plasma Next.

Many widgets’ size in a UI depend on the font: Will this message actually fit into the allowed space for it? (And then: What about a translated version of this message?) In Plasma Next, we’re relying even more on sensible font settings and metrics in order to improve our support for HighDPI displays (displays that have more than 150 dots per inch). To achieve balance in the UI sizing, and to make sizing based on what really matters (how much content fits in there?), we’ve put a much stronger emphasis on font size as rendered on a given screen. I’ve explained the basic mechanics behind that in an earlier post, so I won’t go into too much detail about that. Suffice to say that the base unit for our sizing is the height of the letter M rendered on the screen. This gives us a good base metric that takes into account the DPI of the screen, but also the preference of the font as set up by the user. In essence, this means that we design UIs to fit a certain number of columns and rows of text (approximately, and with ample dynamic spacing, so also longer translations fit well). It also means that the size of UI elements is not expressed in pixels anymore, and also not relative to the screen resolution, but that you get roughly the same physical size on different displays. This seems to work rather well, and we have gotten few complaints about sizing being off.

Relying on font metrics for low-level sizing units also means that we need the font to actually tell us the truth about its sizing. We need to know, for example, how many pixels a given font on a given screen with a given pointsize will take, and we need this font to actually align with these values. This sounds quite logical, but there are fonts out there that don’t do a really good job of reporting their metrics. This can lead to over- or undersized UIs, alignment and margins being off, and a whole bunch of other visual and usability problems. It also looks bad. I find it personally quite frustrating when I see UIs that I or somebody else has spent quite some time on “getting it juuuuuust right”, and then seeing it completely misaligned and wrongly sized, just because some distro didn’t pay enough attention to choose a well-working (by our standards, of course ;)) font.

Oxygen Font in the Kickoff launcher, rendered at 180 DPI

So, to mitigate these cases, we’ve chosen to be a bit more bold about font selection in Plasma Next. We are now including the Oxygen font and setting it as the default on new installs. This means that we know the defaults work, and that they work well across a range of displays and systems. We’re also defaulting to certain renderer settings, so fonts look as smooth as possible on most machines. This fixes a slew of possible technical issues, and it also has a huge impact on aesthetics. By setting a default font, we provide a clearer idea of “with this setup, we feel it’s going to look just right”.

For this, we’ve chosen the Oxygen font, which was created by Vernon Adams, is released under the SIL Open Font License, and is also available through Google Fonts. It is a really beautifully done, modern, simple and clean typeface. It is optimized for rendering with Freetype, and it mainly targets web browsers, desktops, laptops and mobile devices. Vern created this font for Oxygen in collaboration with some of the Oxygen designers. The font has actually been around for a while already, but we feel it’s now ready for prime time, so limelight it is.

As it happens with Free software, this has been a long-standing itch for me to scratch. One of the first things I had to do with every install of Plasma (or previously, even KDE 3) was to change the fonts to something bearable. Imagine finishing the installer and being greeted with Helvetica. Barf. (And Helvetica isn’t even that bad a font; I’ve seen much worse.) I’m glad we could fix this now in Plasma Next, and I’m confident it will help many users have a nicer-looking desktop without changing anything.

Apart from the technicalities, there will always be users who have a strong preference for a certain font or setting. For those, the font selection in systemsettings is still there, so you can always set up your personally preferred font. We’re just changing the default.
