news aggregator

Valorie Zimmerman: Thank you, Jerry Seinfeld

Planet Ubuntu - Fri, 2014-05-30 06:45
Seinfeld has a stand-up routine about how your party self wants to be wild and crazy, because the person who will suffer the hangover is "Morning Guy." This was how I thought and why I procrastinated for most of my life. Eventually I figured out that my Self today and my Self tomorrow -- same Self.

I value kindness, and try to be kind and thoughtful to others. So why not be kind to my Self too? In my effort to make life more peaceful and happy, I've gradually started to be kinder to my future self, and have reduced procrastination quite a bit. Along the way, I've discovered how much easier it is to just do a small task when I notice it needs doing, rather than putting it off. My attitude about those little tasks has changed too, and they hardly seem like work. Instead, they feel like being kind to my Self.

This all seems like a virtuous circle, since doing small tasks immediately keeps life simple and clean, and living in a clean, calm environment makes it easier to do the work to keep things up. This seems to be valuable everywhere; in the daily schedule, around the house, at work, in relationships.

Perhaps it was easier to fall into this way of working since I've started some practices like making cold-brew coffee and fermented foods, both of which need to be started hours or days before they are ready to consume. Every time I finish preparing a new batch, I feel like I'm 'paying it forward' to my future self. The work we've been doing on our house also helps in a major way.

For those who call this kindness "selfishness," consider the so-called Golden Rule, restated by Jesus as love your neighbor as you love yourself. That final phrase says it all; self-care is at the core of love.

And it all counts.

  • Making your bed when you get up, counts.
  • Brushing and flossing your teeth, counts.
  • Eating breakfast, counts.
  • Washing up, counts.
  • Making a meeting on time, counts.
  • Writing a thoughtful email, counts.
  • Helping someone in IRC, counts.
  • Eating vegetables or fruit instead of junk food, counts.
  • Drinking water instead of a sugary beverage, counts.
  • Walking up a flight of stairs instead of taking the elevator, counts.
  • Picking up a bit of trash, counts.
  • Getting to bed in time to have a full night's sleep, counts.
  • Writing a blog post, counts!

Anyone who has flown in a jetliner has heard the advice: if you are traveling with someone and the oxygen masks drop, do not help them put on their masks first. Put on your own mask, then help others!

Many of the ideas that have helped me get to this place come from unfuckyourhabitat.tumblr.com, which is invaluable. It is one thing to give yourself credit for the progress you make, but it is amazing to have a team of people cheering you on!

Jeremy Kerr: Netbooting with petitboot

Planet Ubuntu - Fri, 2014-05-30 01:12

I've been working on petitboot's netboot code recently. Here's the lowdown on how it all works.

Essentially, everything is intended to be compatible with the de-facto standard pxelinux behaviour. However, there's one major difference: we skip the stage where the machine downloads a binary pxelinux loader (because we're already running the loader, right?). This means that you probably don't want to populate the filename field in the DHCP response header. That said, petitboot should work fine with most current pxelinux configurations.

Netboot configuration process

By default, petitboot will send a DHCP request on any interfaces that have an active link (ie, have a network cable plugged in). The DHCP response will dictate petitboot's behaviour:

Firstly, petitboot will look for a "PXE configuration file" option (DHCP option 209) in the response. If this is specified, then petitboot will download and parse that configuration file. This can be either a full URL, or a file path. See the URLs section below for details on paths and URLs.

If no explicit configuration file is given (ie, there's no option 209 included in the DHCP response), then petitboot will attempt pxelinux-style configuration auto-discovery: it tries a file named after the machine's MAC address, then files named after the hex-encoded IP of the DHCP lease (progressively truncated), and finally falls back to a file named default. For example, for a machine with a MAC of 00:01:02:03:04:05, given a lease IP of 192.168.0.10 (C0A8000A in hex), petitboot will request the following files, in order, stopping at the first successful download (a short sketch of this name-generation logic follows the prefix notes below):

  1. prefix/pxelinux.cfg/01-00-01-02-03-04-05
  2. prefix/pxelinux.cfg/C0A8000A
  3. prefix/pxelinux.cfg/C0A8000
  4. prefix/pxelinux.cfg/C0A800
  5. prefix/pxelinux.cfg/C0A80
  6. prefix/pxelinux.cfg/C0A8
  7. prefix/pxelinux.cfg/C0A
  8. prefix/pxelinux.cfg/C0
  9. prefix/pxelinux.cfg/C
  10. prefix/pxelinux.cfg/default

where prefix will depend on a few things:

  1. If the DHCP response includes a "PXE path prefix" option (DHCP option 210), petitboot will use that value as the prefix. This prefix can be a full URL, or just a path prefix (see the URLs section for details). Note that option 210 should always end with a trailing slash.
  2. Otherwise, TFTP is assumed, the server is determined from the DHCP response, and the files are requested from the top-level directory.
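
To make the name generation concrete, here is a minimal Python sketch of the auto-discovery sequence described above. It is an illustration only, not petitboot code; the helper name is invented for the example, and prefix is assumed to already carry its trailing slash (or to be empty when plain TFTP from the top-level directory is used).

# Illustrative sketch (not petitboot source) of pxelinux-style configuration
# auto-discovery: generate the candidate file names requested for a machine,
# in the order petitboot tries them.

def pxe_config_candidates(mac, ip, prefix=""):
    names = []
    # 1. MAC-based name: hardware type "01" plus the MAC, dash-separated
    names.append(prefix + "pxelinux.cfg/01-" + mac.lower().replace(":", "-"))
    # 2. The leased IP in uppercase hex, truncated one digit at a time
    hex_ip = "%02X%02X%02X%02X" % tuple(int(octet) for octet in ip.split("."))
    for length in range(len(hex_ip), 0, -1):
        names.append(prefix + "pxelinux.cfg/" + hex_ip[:length])
    # 3. Last resort: the file named "default"
    names.append(prefix + "pxelinux.cfg/default")
    return names

for name in pxe_config_candidates("00:01:02:03:04:05", "192.168.0.10"):
    print(name)

Running this for the example machine above prints the same ten names, in the same order.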

Finally, if there is a "file" parameter present in the DHCP header, then that file is added as a binary boot option, to be executed directly by the machine with no initrd or boot arguments. Don't specify a text config file in this manner; it won't work.

Configuration files

Petitboot supports configuration files based on the syslinux configuration format. However, not all keywords are parsed, as some relate to functionality that isn't relevant in a petitboot environment. Currently, petitboot supports the DEFAULT, KERNEL, INITRD and APPEND keywords. Keywords are case-insensitive.

Here's a typical petitboot configuration file that defines a single, default boot option:

default Linux 3.10.4

label Linux 3.10.4
    kernel tftp://boot-server/powerpc/vmlinux-3.10.4
    initrd tftp://boot-server/powerpc/initrd-3.10.4
    append root=/dev/sda1 console=hvc0

URLs, servers and paths

Remote resources, such as configuration files, kernels and initramfs images, can be specified as full URLs (eg. tftp://hostname/path/file) or just paths (eg. /path/file). If a full URL is given, then petitboot will use that as-is. Supported protocols are currently http, ftp, tftp and nfs.

If only a path is given, petitboot will assume the TFTP protocol, and use an appropriate server address based on the DHCP response parameters, in this order:

  1. The "TFTP Server name" option - DCHP option 66
  2. The "Server Identifier" option - DHCP option 54
  3. The "siaddr" field in the DHCP/BOOTP header

Within a configuration file, paths are resolved relative to the location of that file. In keeping with the pxelinux configuration format, absolute paths can be given with a :: prefix - eg. ::/powerpc/vmlinux. Full URLs are always treated as absolute.
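
For illustration, here's a rough Python sketch of those resolution rules. It is an assumption-laden example rather than petitboot's actual implementation: the function name is invented, and it only handles the three cases described above (full URL, ::-prefixed absolute path, and a path relative to the configuration file's location).

import posixpath
from urllib.parse import urlparse

# Illustrative sketch of the resolution rules above (not petitboot code).
# config_url is the full URL the configuration file itself was fetched from.
def resolve(path, config_url):
    if urlparse(path).scheme:
        # already a full URL (http://, ftp://, tftp://, nfs://): use as-is
        return path
    base = urlparse(config_url)
    if path.startswith("::"):
        # pxelinux-style absolute path on the same server
        new_path = path[2:]
    else:
        # relative to the directory holding the configuration file
        new_path = posixpath.join(posixpath.dirname(base.path), path)
    return "%s://%s%s" % (base.scheme, base.netloc, new_path)

print(resolve("::/powerpc/vmlinux", "tftp://boot-server/configs/netboot.cfg"))
# tftp://boot-server/powerpc/vmlinux
print(resolve("initrd-3.10.4", "tftp://boot-server/configs/netboot.cfg"))
# tftp://boot-server/configs/initrd-3.10.4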

DHCP configuration examples

Here are a couple of DHCP server configurations that illustrate how to netboot petitboot machines. These examples are intended for the ISC DHCP server, and only show the configurations relevant to petitboot configuration - you'll need to define the usual subnet, range, etc sections too.

Single, predefined configuration file

This simple example configures all DHCP clients to use a single petitboot configuration file, served over HTTP:

# define a "conf-file" option syntax for the PXE configuration file (opt 209) option conf-file code 209 = text; option conf-file "http://boot-server/petitboot.cfg"; Fixed configuration for multiple architectures

This example shows a configuration that will allow petitboot-based POWER machines to work alongside pxelinux-based x86 machines. We use the DHCP architecture identifier 0x0e to distinguish the POWER OPAL boot clients.

# define a "conf-file" option syntax for the PXE configuration file (opt 209) option conf-file code 209 = text; # define an "arch" option syntax for the DHCP architecture identifier (opt 93) option arch code 93 = unsigned integer 16; # specify separate configuration files for powerpc & x86 machines # and configure x86 machines to use the pxelinux.0 loader. if option arch = 00:0e { option conf-file "powerpc/pxelinux/netboot.cfg"; } else { filename "x86/pxelinux/pxelinux.0"; option conf-file "netboot.cfg"; }

Since we're not specifying full URLs for the configuration files here, petitboot will attempt to download using TFTP, from the same host as the DHCP server.

In the x86 section, note that the config file (netboot.cfg) is specified relative to the pxelinux.0 binary. In this case, pxelinux will request the file from x86/pxelinux/netboot.cfg.

Managed TFTP server

In automated-provisioning environments, a central deployment system may control machine setup and boot. When a newly-racked machine is first booted, we want it to boot to an initial install/provision environment, where it is initialised and registers itself with the provisioning service. Once that registration is complete, we want it to boot to the newly-installed environment.

One way to achieve this is to have the deployment system manage PXE configuration files that are served over TFTP.

For this, we'd rely on the PXE autodiscovery mechanism for any newly-deployed machines (relying on the fallback to the configuration file named 'default'). Once a machine has completed its provisioning process (and registered with the deployment service), a per-machine configuration file can be added to the TFTP server, named after the machine's MAC address. This file will configure the newly-provisioned machine to boot to its standard OS environment, rather than booting through the initial-install process again.

For this scenario, we can use a PXE path prefix parameter to distinguish machines of different architectures:

# define a "path-prefix" option syntax for the PXE path prefix (opt 210) option path-prefix code 210 = text; # define an "arch" option syntax for the DHCP architecture identifier (opt 93) option arch code 93 = unsigned integer 16; # use 192.168.0.3 as our managed TFTP server next-server 192.168.0.3; # provide separate binaries and configuration files depending on # client architecture if option arch = 00:0e { # POWER OPAL option path-prefix "powerpc/"; } else if option arch = 00:07 { # x86-64 EFI option path-prefix "x86-efi/"; filename "pxelinux/bootx64.efi"; } else { # x86 PC-BIOS option path-prefix "x86-pc-bios/"; filename "pxelinux/pxelinux.0"; }

We could just as easily use HTTP instead of TFTP here, by specifying full HTTP URLs as the configuration files. For the non-petitboot machines, we'd need to use a pxe loader that supports HTTP, like gPXE.

Managed DHCP server

This is similar to the previous example, but rather than using per-machine configuration files (served over TFTP), we can implement per-machine configuration directly in the DHCP server configuration.

If our DHCP configuration is managed by the deployment system, we can use host-specific configurations for machines that have been provisioned, and fall back to a default configuration for newly-racked machines. In this scenario, the deployment system is responsible for managing the DHCP configuration by adding a 'host' stanza for each known machine, after installation.

# define a "conf-file" option syntax for the PXE configuration file (opt 209) option conf-file code 209 = text; # define an "arch" option syntax for the DHCP architecture identifier (opt 93) option arch code 93 = unsigned integer 16; # configuration for running installers on unknown hosts: provide separate # binaries and configuration files depending on client architecture if option arch = 00:0e { # POWER OPAL option conf-file "pxe-configs/installer-powerpc.conf" } else if option arch = 00:07 { # x86-64 EFI option conf-file "pxe-configs/installer-x86-64-efi.conf" filename "pxelinux/bootx64.efi"; } # known host configurations, for which we specify an existing config file. These # sections are generated and managed by the deployment process, after each # machine has been provisioned. host server-001 { hardware ethernet 3c:97:0e:3b:85:00; option conf-file "pxe-configs/runtime-powerpc.conf"; } host server-002 { hardware ethernet b4:1e:26:c4:a0:be; option conf-file "pxe-configs/runtime-x86.conf"; }

Ubuntu Podcast from the UK LoCo: S07E09 – The One with the Ick Factor

Planet Ubuntu - Thu, 2014-05-29 19:30

Alan Pope, Mark Johnson, Tony Whitmore, and Laura Cowen are in Studio L for Season Seven, Episode Nine of the Ubuntu Podcast!


In this week’s show:-

We’ll be back next week, when we’ll interview Martin Wimpress from the MATE desktop team and go through your feedback.

Please send your comments and suggestions to: podcast@ubuntu-uk.org
Join us on IRC in #uupc on Freenode
Leave a voicemail via phone: +44 (0) 203 298 1600, sip: podcast@sip.ubuntu-uk.org and skype: ubuntuukpodcast
Follow us on Twitter
Find our Facebook Fan Page
Follow us on Google+

Mark Shuttleworth: Enjoying G+

Planet Ubuntu - Thu, 2014-05-29 18:25

Plussers, I’m now at https://plus.google.com/+MarkShuttleworthCanonical/ and quite enjoying the service. That G+ persona is mostly software and cloud related (so short on beekeeping, kitesurfing or botanical pursuits).

This is probably the best way to keep in touch. I’m not on Facebook and every year Claire and I clear out the oddball fake-me’s that seem to crop up regularly there. To their credit, FB have been really good about sorting those out. I’m told I’m missing out but, srsly, TIME!

G+ seems really clean and well organised; the chatter is mostly thoughtful and good-natured.

Mark Shuttleworth: #9 – Canonical’s cloud-init saves you from image soup, on every cloud

Planet Ubuntu - Thu, 2014-05-29 16:22

This is a series of posts on reasons to choose Ubuntu for your public or private cloud work & play.

We run an extensive program to identify issues and features that make a difference to cloud users. One result of that program is that we pioneered dynamic image customisation and wrote cloud-init. I’ll tell the story of cloud-init as an illustration of the focus the Ubuntu team has on making your devops experience fantastic on any given cloud.

 

Ever struggled to find the “right” image to use on your favourite cloud? Ever wondered how you can tell if an image is safe to use, what keyloggers or other nasties might be installed? We set out to solve that problem a few years ago and the resulting code, cloud-init, is one of the more visible pieces Canonical designed and built, and very widely adopted.

Traditionally, people used image snapshots to build a portfolio of useful base images. You’d start with a bare OS, add some software and configuration, then snapshot the filesystem. You could use those snapshots to power up fresh images any time you need more machines “like this one”. And that process works pretty amazingly well. There are hundreds of thousands, perhaps millions, of such image snapshots scattered around the clouds today. It’s fantastic. Images for every possible occasion! It’s a disaster. Images with every possible type of problem.

The core issue is that an image is a giant binary blob that is virtually impossible to audit. Since it’s a snapshot of an image that was running, and to which anything might have been done, you will need to look in every nook and cranny to see if there is a potential problem. Can you afford to verify that every binary is unmodified? That every configuration file and every startup script is safe? No, you can’t. And for that reason, that whole catalogue of potential is a catalogue of potential risk. If you wanted to gather useful data sneakily, all you’d have to do is put up an image that advertises itself as being good for a particular purpose and convince people to run it.

There are other issues, even if you create the images yourself. Each image slowly gets out of date with regard to security updates. When you fire it up, you need to apply all the updates since the image was created, if you want a secure machine. Eventually, you’ll want to re-snapshot for a more up-to-date image. That requires administration overhead and coordination, so most people don’t do it.

That’s why we created cloud-init. When your virtual machine boots, cloud-init is run very early. It looks out for some information you send to the cloud along with the instruction to start a new machine, and it customises your machine at boot time. When you combine cloud-init with the regular fresh Ubuntu images we publish (roughly every two weeks for regular updates, and whenever a security update is published), you have a very clean and elegant way to get fresh images that do whatever you want. You design your image as a script which customises the vanilla, base image. And then you use cloud-init to run that script against a pristine, known-good standard image of Ubuntu. Et voila! You now have purpose-designed images of your own on demand, always built on a fresh, secure, trusted base image.

Auditing your cloud infrastructure is now straightforward, because you have the DNA of that image in your script. This is devops thinking, turning repetitive manual processes (hacking and snapshotting) into code that can be shared and audited and improved. Your infrastructure DNA should live in a version control system that requires signed commits, so you know everything that has been done to get you where you are today. And all of that is enabled by cloud-init. And if you want to go one level deeper, check out Juju, which provides you with off-the-shelf scripts to customise and optimise that base image for hundreds of common workloads.

Jono Bacon: Last Day Today

Planet Ubuntu - Thu, 2014-05-29 15:34

Recently I announced I am stepping down as Ubuntu Community Manager at Canonical and moving to XPRIZE as Senior Director of Community. Today is my last day at Canonical.

I just want to say how touched I have been by the response. The comments, social media posts, emails, and calls from you have been so kind and supportive. You are all good people, and I am going to miss every single one of you.

The reason why I have devoted my life to understanding communities is that I believe communities bring out the best in people, and all of you are a perfect example of that. I cannot express just how much I appreciate it.

Over the course of the next few weeks my replacement will be sourced and announced, and in the interim my team (Daniel Holbach, Michael Hall, David Planella, Nicholas Skaggs, Alan Pope) will take over my duties. Everything has been transitioned over, and remember, the weekly Q&As will continue at 6pm UTC every Tuesday on Ubuntu On Air with my team filling in for me. As ever, any and all Ubuntu questions are welcome!

Of course, I will still be around. I am going to continue to be a member of the Ubuntu community and an avid Ubuntu user, tester, and supporter. I will continue to be on IRC, you can email me at jono@jonobacon.org, I will continue to do Bad Voltage, and I have a busy schedule at the Community Leadership Summit, OSCON, and more. I am also going to continue to have my own Q&A session every week where you can ask questions about my perspectives on Ubuntu, Canonical, community management, XPRIZE, and more; I will announce this soon.

Ubuntu has a tremendous future ahead of it, built on the hard work and passion of a global community. We are only just getting started with a new era of Ubuntu convergence and cloud orchestration and while I will miss being there in an official capacity, I am just thankful that I can continue to be along for the ride in the very community I played a part in building.

I now have a few weeks off and then my new adventure begins. Stay tuned.

Michael Hall: Community Donations Funding Report

Planet Ubuntu - Thu, 2014-05-29 13:54

Last year the main Ubuntu download page was changed to include a form for users to make a donation to one or more parts of Ubuntu, including to the community itself. Those donations made for “Community projects” were made available to members of our community who knew of ways to use them that would benefit the Ubuntu project.

Every dollar given out is an investment in Ubuntu and the community that built it. This includes sponsoring community events, sending community representatives to those events with booth supplies and giveaway items, purchasing hardware to improve development and testing, and more.

But these expenses don’t cover the time, energy, and talent that went along with them, without which the money itself would have been wasted.  Those contributions, made by the recipients of these funds, can’t be adequately documented in a financial report, so thank you to everybody who received funding for their significant and sustained contributions to Ubuntu.

As part of our commitment to openness and transparency we said that we would publish a report highlighting both the amount of donations made to this category, and how and where that money was being used. Linked below is the first of those reports.

View the Report

Jorge Castro: Real World Juju Troubleshooting

Planet Ubuntu - Thu, 2014-05-29 12:04

The following is a guest post by Ian Booth:

Juju has the ability to set up and change logging levels at a package level so that problems within a particular area can be better diagnosed. Recently there were some issues with containers (both kvm and lxc) started by the local provider transitioning from pending to started. We wanted to be able to inspect the sequence of commands Juju uses to start a container. Fortunately, the Juju code which starts lxc and kvm containers does log the actual commands used to download the requisite image, start the container, and so on. The logging level used for this information is TRACE. By default, Juju machine agents log at the debug level, and that is what is visible when running ‘juju debug-log’. So unfortunately, the information we are interested in is not visible by default in the machine logs.

Luckily we can change the logging level used for particular packages so that the information is logged. This is done using the ‘logging-config’ attribute and can either be done at bootstrap:

juju bootstrap --logging-config=golxc=TRACE

or on a running system:

juju set-env logging-config=golxc=TRACE

As an aside, you can use:

juju get-env logging-config

to see what the current logging-config value is.

The logging-config value above turns on TRACE level debugging for the golxc package, which is responsible for starting and managing lxc containers on behalf of Juju. For kvm containers, the package name is ‘kvm’.

We can use debug-log to look at the logging we have just enabled. As of 1.19 onwards, filtering can be used to just show what we are interested in. Run this command in a separate terminal:

juju debug-log --include-module=golxc

Now we can deploy a charm or use add-machine to initiate a new container. We can see the commands Juju issues to download the image and start the container. This allows us to see what parameters are passed in to the various lxc commands, and we could even manually run the commands if we wish, in order to reproduce and examine in more detail how the commands behave. An example of the logging information when adding TRACE level debugging to lxc startup looks like:

machine-0: 2014-05-27 04:28:39 TRACE golxc.run.lxc-start golxc.go:439 run: lxc-start [--daemon -n ian-local-machine-1 -c /var/lib/juju/containers/ian-local-machine-1/console.log -o /var/lib/juju/containers/ian-local-machine-1/container.log -l DEBUG]
machine-0: 2014-05-27 04:28:40 TRACE golxc.run.lxc-ls golxc.go:439 run: lxc-ls [-1]
machine-0: 2014-05-27 04:28:41 TRACE golxc.run.lxc-ls golxc.go:451 run successful output: ian-local-machine-1
machine-0: 2014-05-27 04:28:41 TRACE golxc.run.lxc-info golxc.go:439 run: lxc-info [-n ian-local-machine-1]
machine-0: 2014-05-27 04:28:41 TRACE golxc.run.lxc-info golxc.go:451 run successful output: Name: ian-local-machine-1
machine-0: 2014-05-27 04:28:45 TRACE golxc.run.lxc-start golxc.go:448 run failed output: lxc-start: command get_cgroup failed to receive response
machine-0: 2014-05-27 04:29:45 TRACE golxc.run.lxc-ls golxc.go:439 run: lxc-ls [-1]
machine-0: 2014-05-27 04:29:45 TRACE golxc.run.lxc-ls golxc.go:451 run successful output: ian-local-machine-1
machine-0: 2014-05-27 04:29:45 TRACE golxc.run.lxc-info golxc.go:439 run: lxc-info [-n ian-local-machine-1]
machine-0: 2014-05-27 04:29:45 TRACE golxc.run.lxc-info golxc.go:451 run successful output: Name: ian-local-machine-1

You can see that when the various commands are logged, the format is cmd [arg arg arg]; to run these manually, leave out the []. You can also see that there was a problem starting the lxc container due to a cgroups issue. This error is shown in juju status, but often it's useful to see what happens leading up to the error occurring.

In summary, Juju's configurable logging output can be used to help diagnose issues and understand what Juju is doing under the covers. It offers the ability to turn on extra logging when required, and to turn it off again when it's no longer needed.

Canonical Design Team: Making ubuntu.com responsive: making our grid responsive (8)

Planet Ubuntu - Thu, 2014-05-29 08:26

This post is part of the series ‘Making ubuntu.com responsive’.

A big part of converting our existing fixed-width desktop site to be responsive was to make sure we had a flexible grid that would flow seamlessly from small to large screens.

From the start, we decided that we were going to approach the move as simply as possible: we wanted the content our grid holds to become easier to read and browse on any screen size, but that doesn’t necessarily mean creating anything too complex.

Our existing fixed-width grid

Before the transition, our grid consisted of 12 columns with 20px gutters.

The width of each column could be variable, but we were working with 57px columns and 40px padding on each side of the main content container.

Our existing fixed-width desktop grid.

Inside that grid, content can be divided into one, two, three or four columns. In extreme cases, we can also use five columns, but we avoid this.

Our grid laid over an existing page of the site.

We also try keeping text content below eight columns, as it becomes harder to read otherwise.

Adding flexibility

When we first created our web style guide, we decided that, since we were getting our hands dirty with the refactoring of the CSS, we’d go ahead and convert our grid to use percentages instead of pixels.

The idea was that it would be useful for the future, while keeping everything looking almost exactly the same, since the content would still be all held within a fixed-width container for the time being.

A fixed-width container holding percentage-based columns.

This proved to be one of the best decisions we made, and it made the transition to responsive much smoother than we initially thought.
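
To give a feel for the arithmetic involved, here's a small Python sketch using the column and gutter sizes quoted above. The figures it prints are my own worked example, computed against the grid's content width (which is how CSS resolves children's percentage widths), not necessarily the exact values in the ubuntu.com stylesheet.

# Worked example (not the actual ubuntu.com stylesheet values): convert the
# 12-column, 57px-column, 20px-gutter fixed grid into percentage widths.

columns = 12
column_px = 57
gutter_px = 20

# total content width of the fixed grid: columns plus the gutters between them
content_px = columns * column_px + (columns - 1) * gutter_px   # 904px

column_pct = 100.0 * column_px / content_px   # ~6.31%
gutter_pct = 100.0 * gutter_px / content_px   # ~2.21%

# sanity check: 12 columns and 11 gutters should fill the content width exactly
assert abs(columns * column_pct + (columns - 1) * gutter_pct - 100.0) < 1e-9

# a span of n columns covers n columns plus the (n - 1) gutters between them
def span_pct(n):
    return n * column_pct + (n - 1) * gutter_pct

print("one column: %.2f%%, four columns: %.2f%%" % (span_pct(1), span_pct(4)))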

The CSS

We used Gridinator to initially create our basic grid, removed any unnecessary rules and properties from the CSS it generated and added others we needed.

The settings we’ve input, in case you’re wondering, were:

  • Number of columns: 12
  • Column width: 57px
  • Column margins: 20px (technically, the gutters)
  • Container margin: 40px
  • Body font size: 16px
  • Option: percentage units

Screenshot of our Gridinator settings.

We could have created this CSS from scratch, but we found this tool saved us some precious time when creating all the variations we needed when using the grid.

You can have a peek into our current grid stylesheet now.

First prototypes

The first two steps we took when creating the initial responsive prototype of ubuntu.com were:

  • Remove the fixed-width container and see how the content would flow freely and fluidly
  • Remove all the floats and positioning rules from the existing CSS and see how the content would flow in a linear manner, and test this on small screen devices

When we removed the fixed-width container, all the content became fluid. Obviously, there were no media queries behind this, so the content was free to grow to huge sizes, with really long line lengths, and equally the columns could shrink to unreasonably narrow sizes. But it was fluid, and we were happy!

Similarly, when checking the float-free prototype, even though there were quite a few issues with background images and custom, absolutely positioned elements, the results were promising.

Our first float-free prototype: some issues but overall a good try.

These tests showed us that, even though the bulk of the work was still ahead of us, a lot had been accomplished simply by making the effort to convert our initial pixel-based columns into percentage-based ones. This is a test that we think other teams could do before moving on to a complete revamp of their CSS.

Defining breakpoints

We didn’t want to relate our breakpoints to specific devices, but it was important that we understood what kind of screen sizes people were using to visit our site on.

To give you an idea, here are the top 10 screen sizes (not window size, mind) between 4 March and 3 April 2014, in pixels, on ubuntu.com:

  1. 1366 x 768: 26.15%
  2. 1920 x 1080: 15.4%
  3. 1280 x 800: 7.98%
  4. 1280 x 1024: 7.26%
  5. 1440 x 900: 6.29%
  6. 1024 x 768: 6.26%
  7. 1600 x 900: 5.51%
  8. 1680 x 1050: 4.94%
  9. 1920 x 1200: 2.73%
  10. 1024 x 600: 2.12%

The first small screen (360×640) comes up at number 17 in the list, followed by 320×568 and 320×480 at numbers 21 and 22, respectively.

We decided to test the common approach and try breakpoints at:

  • Under 768px: small screen styles
  • From 768px to under 986px: medium screen styles
  • 986px and up: large screen styles

This worked well for our content and was in line with what we had seen in our analytics numbers.

At the small screen breakpoint we have:

  • Reduced typographic scale
  • Reduced margins and padding
  • Reduced main content container padding

At this scale, following what we had done in Ubuntu Resources (now Ubuntu Insights), we reused the grid from the Ubuntu phone designs, which divides the portrait screen into 40 squares, horizontally.

The phone grid.

At the medium screen breakpoint we have:

  • Increased (from small screen) typographic scale
  • Increased margins and padding
  • Increased main content container padding

At the large screen break point we have:

  • The original typographic scale
  • The original margins and padding
  • The original main content container padding

Comparison between small, medium and large screen spacing.

Ideas for the future

In the future, we would like to use more component-specific breakpoints. Some of our design components would work better if they reflowed or resized at different points than the rest of the site, so more granular control over specific elements would be useful. This usually depends on the type and amount of content the component holds.

What about you? We’d love to know how other people have tackled this issue, and what suggestions you have to create flexible and robust grids. Have you used any useful tools or techniques? Let us know in the comments section.

Reading list

Jono Bacon: Community Leadership Summit 2014, New Forum, OSCON, Training, and More!

Planet Ubuntu - Thu, 2014-05-29 04:51

As many of you will know, I organize an event every year called the Community Leadership Summit. The event brings together community leaders, organizers and managers and the projects and organizations that are interested in growing and empowering a strong community.

The event pulls together these leading minds in community management, relations and online collaboration to discuss, debate and continue to refine the art of building an effective and capable community.

This year’s event is shaping up to be incredible. We have a fantastic list of registered attendees and I want to thank our sponsors, O’Reilly, Citrix, and LinuxFund.

The event is taking place on 18 – 19 July 2014 in Portland, Oregon. I hope to see you all there, it is going to be a fantastic CLS this year!

I also have a few other things to share too…

Community Leadership Forum

My goal as a community manager is to help contribute to the growth of the community management profession. I started this journey by publishing The Art of Community and ensuring it is available freely as well as in stores. I then set up the Community Leadership Summit as just discussed, and now I am keen to put together a central community for community management and leadership discussion.

As such, I am proud to launch the new Community Leadership Forum for discussing topics that relate to community management, as well as topics for discussion at the Community Leadership Summit event each year. The forum is designed to be a great place for sharing and learning tips and techniques, getting to know other community leaders, and having fun.

The forum is powered by Discourse, so it is a pleasure to use, and I want to thank discoursehosting.com for generously providing free hosting for us.

Be sure to go and sign up!

Speaking Events and Training

I also wanted to share that I will be at OSCON this year and I will be giving a presentation called Dealing With Disrespect that is based upon my free book of the same name for managing complex communications.

This is the summary of the talk:

In this new presentation from Jono Bacon, author of The Art of Community, founder of the Community Leadership Summit, and Ubuntu Community Manager, he discusses how to process, interpret, and manage rude, disrespectful, and non-constructive feedback in communities so the constructive criticism gets through but the hate doesn’t.

The presentation covers the three different categories of communications, how we evaluate and assess different attributes in each communication, the factors that influence all of our communications, and how to put in place a set of golden rules for handling feedback and putting it in perspective.

If you personally or your community has suffered rudeness, trolling, and disrespect, this presentation is designed to help.

This presentation is on Wed 23rd July at 2.30pm in E144.

In addition to this I will also be providing a full day of community management training at OSCON on Sunday 20th July in D135.

I will also be providing full day community management training at LinuxCon North America and LinuxCon Europe. More details of this to follow soon in a dedicated blog post.

Lots of fun things ahead and I hope to see you there!

Jeremy Kerr: powerpc testing without powerpc hardware

Planet Ubuntu - Thu, 2014-05-29 04:42

Want to do a test boot of a powerpc machine, but don't have one handy? The qemu ppc64-softmmu emulator is currently working well with the pseries target, and it's fairly simple to get an emulated machine booting.

You'll need qemu version 1.6.0 or above. If this isn't provided by your distribution, you can build from upstream sources:

git clone git://git.qemu-project.org/qemu.git
cd qemu
./configure --target-list=ppc64-softmmu
make

the resulting qemu binary will be at ./ppc64-softmmu/qemu-system-ppc64.

To run a pseries-like machine, we just need the -M pseries option.

The default qemu device setup is okay, but I tend to configure my qemu a little differently: no video, and console on serial ports. This is what I generally do:

qemu-system-ppc64 -M pseries -m 1024 \
    -nodefaults -nographic -serial pty -monitor stdio \
    -netdev user,id=net0 -device virtio-net-pci,netdev=net0 \
    -kernel vmlinux -initrd initrd.img -append root=/dev/ram0

This will print out a message telling you which PTY is being used for the serial port:

[jk@pablo qemu]$ qemu-system-ppc64 -M pseries -m 1024 \
    -nodefaults -nographic -serial pty -monitor stdio \
    -netdev user,id=net0 -device virtio-net-pci,netdev=net0 \
    -kernel vmlinux -initrd initrd.img -append root=/dev/ram0
char device redirected to /dev/pts/11 (label serial0)
QEMU 1.6.0 monitor - type 'help' for more information
(qemu)

You can then interact with the emulated serial device from a separate terminal, using screen:

screen /dev/pts/11

In the screen session, the sequence ctrl+a, ctrl+k will exit. Typing quit at the (qemu) prompt will terminate the virtual machine.

emulated network devices

The qemu environment above uses virtio-based networking, which may not work if your kernel doesn't include a virtio-net driver. In this case, just replace the -device virtio-net-pci,netdev=net0 argument with:

-device spapr-vlan,netdev=net0

emulated block devices

The qemu example above doesn't define any block devices, so there's no persistent storage available. We can use either the spapr-vscsi (sPAPR virtual SCSI, the virtualised IBM hypervisor interface) or virtio-blk-pci (virtio interface) devices. This choice will depend on your kernel; if it includes drivers for virtio, I'd suggest using that.

For virtio, add something like:

-device virtio-blk-pci,drive=drive0 -drive id=drive0,if=none,file=/path/to/host/storage

For sPAPR virtual SCSI, use something like:

-device spapr-vscsi -device scsi-hd,drive=drive0 -drive id=drive0,if=none,file=/path/to/host/storage

Either of these will define a qemu drive with id "drive0", and attach it to backing storage at /path/to/host/storage - this can be a plain file or block device. If you'd like to define multiple guest block devices, you need to use new ids (drive1, drive2, …) for both -device and -drive arguments.

Lubuntu Blog: Lubuntu Screencast - Network Manager

Planet Ubuntu - Wed, 2014-05-28 10:03
Thanks to SilverLion for his work. This is the easiest way to explain how to fix a bug, for those who didn't read the Release Notes and want to know what's happening with their internet connections. A screencast is worth a thousand words!
