Planet Ubuntu
Planet Ubuntu - http://planet.ubuntu.com/

Valorie Zimmerman: Exciting! Writing another KDE book: Frameworks in Randa

Sat, 2014-05-31 08:54

Returning to Randa is always tremendous, no matter what the work is. The house in Randa, Switzerland is so welcoming, so solid, yet modern and comfortable too.

KDE is in building mode this year. If you think of the software like a house, Frameworks is the foundation and framing. Of course it isn't quite like a house, since the dwelling we are constructing is made from familiar materials, remade in an interactive, modular fashion. I'd better stop with the house metaphor before I take it too far, because the rooms (applications) will be familiar, yet updated. And while we can't call Plasma Next the "paint", the face KDE will be showing the world in our software will be familiar yet completely updated as well.

However, this year will lay the foundation for KDE's newest surge ahead in software, from foundation to roof, to use the house metaphor one more time. I really cannot wait to get there and start work on our next book, the Framework Recipe book (working title). Of course, travel is expensive, and attendees will come from all over the world. So once again, we're raising funds for the Randa Meetings, which will be the largest so far. The KDE e.V. is a major sponsor, but this is one big gathering, and we need community support. Please give generously here:

www.kde.org/fundraisers/randameetings2014/index.php

Bryan Quigley: The Mozilla I want

Fri, 2014-05-30 21:13

This is somewhat a response to:
https://webwewant.mozilla.org/en/
https://blog.mozilla.org/blog/2014/05/14/drm-and-the-challenge-of-serving-users/
https://fsf.org/news/fsf-condemns-partnership-between-mozilla-and-adobe-to-support-digital-restrictions-management

NOTE: These are my personal opinions and not necessarily those of my employer, your employer, or any of the businesses and governments in my town, city, state, or country.

DRM

It is *never* something people want.  Have you ever heard someone say, "I really want this content I bought/'rented' to be harder to share/remix/watch, and to have even greater legal ramifications if I do so"?  People do want content made by Hollywood, but those are different things.

DRM – Mozilla being played?

This reminds me of the time Chrome said it would drop H.264 [1].  From what played out in public, it seemed that Mozilla didn't see the need to add H.264 support sooner because they trusted Google to actually drop H.264 support.

In a somewhat reverse situation, Mozilla has just said they will adopt EME in Firefox before any of the possible benefits are realized by others.  Right now it's implemented only in Chrome and IE 11 [2], and I've only seen it used in Chrome OS and IE 11.  From my point of view, I would have preferred if Mozilla had at least waited to see whether we would get more platform support from major vendors on this (aka Linux support for Netflix).

If so, maybe the increase in Linux market share would provide some balance to DRM's negative effects, making free software overall a net win.  In that case, I would (still reluctantly) support Mozilla's decision here, if they saw it as a means to get Hollywood media onto freer platforms.  But why not wait and see whether this actually happens, with Google Chrome/Netflix on Linux?

[1] http://blog.chromium.org/2011/01/html-video-codec-support-in-chrome.html
[2] http://html5test.com/compare/feature/video-drm.html

Reduce “Hollywood” power

I would like to see Mozilla pushing indie/crowdsourced media, like:

  • Paid streaming sites for indie videos: https://indieflix.com/
  • Public broadcasting: http://www.pbs.org/
  • Better publishing tools: http://www.getmiro.com/

Basically, how can Mozilla use its capabilities to better change the media landscape?  (A slightly "better" form of DRM should not be the answer to this question.)

Abandon the DoNotTrack header, provide actual options

It doesn’t work (and almost certainly never will), and it gives people a false sense of doing something.  The header is just a "DNT: 1" line added to every HTTP request, so you are giving advertisers another data point to track!  It literally does the opposite of what it is supposed to do.

Instead!

  •  Finish blocking 3rd party cookies (https://blog.mozilla.org/privacy/2013/02/25/firefox-getting-smarter-about-third-party-cookies/)
  • Promote (by adding them as search options, etc.) providers that promise not to track ANY of their users – DuckDuckGo being the most obvious example.
    There is so little difference between Yahoo and Bing search anyway, and DuckDuckGo is a damn good search engine [2].
  • Push advertisers off of Flash (generally a good idea, but it will also help with privacy – no Flash cookies, etc.)
    Generally I’m supportive of the click-to-play initiatives Mozilla has taken thus far; Flash is the exception to that rule.  Here’s the outline of a plan to push advertisers off of it, sketched in code after the list (the numbers are obviously made up for illustrative purposes):

    1. Start forcing Click-to-play for Flash when the site has more than 6 plugins running (pick some “high” number, and count all plugins, not just flash)
    2. Reduce the threshold to 5 after some number of Firefox releases, or based on some specific Adobe Flash usage metric.  Repeat, pushing to 4, etc.
    3. Once advertisers get on board and Flash ads aren’t served by the big advertisers, now we can push Flash to click-to-play at 2 instances per page.
    4. Once Flash usage drops under 5% [1], we’d be able to push it to default click-to-play for all Flash.
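A rough Python sketch of that ratchet, just to pin down the mechanics (the thresholds mirror the made-up numbers above; none of this is real Firefox code):

def flash_click_to_play(threshold, plugin_count, flash_share):
    """Decide whether a page's Flash content should be click-to-play."""
    if flash_share < 0.05:
        # step 4: once under 5% of sites use Flash, click-to-play everywhere
        return True
    # steps 1-3: per-release plugin-count threshold, ratcheted 6 -> 5 -> 4 ... -> 2
    return plugin_count > threshold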

[1] http://w3techs.com/technologies/details/cp-flash/all/all
[2] http://libretechtips.com/reviews-internet/duckduckgo-comparison-google-bing-yandex

SSL 3.0 – When will it go away?

You’ve removed the (easy) option to disable it.  When will it go away for good?  And why does Chrome let users see which protocol version (TLS 1.2 vs 1.0, etc.) a site is using, but Firefox doesn’t?

Mobile – Firefox OS

Well, I work at a direct competitor in mobile… though not actually on our phone product.

  • Launching with only Facebook contact sync isn’t very open, and goes against promoting services that respect your values.
  • I get that you can’t magically make the devices more open, but can we at least get public commitments on how long a device will be supported, and how often it will get Firefox OS updates?
  • I wish you had used Thunderbird as Firefox OS’s email client… I think that would let it scale really, really well and give you a new reason to push features there.  Maybe you are using it under the hood?

If you’re reading this and don’t know, you can try out Firefox OS (“Boot2Gecko”) on your desktop too! (https://nightly.mozilla.org/)

End on a positive note..

I love the new Firefox interface.  It’s awesome and makes customizing the browser much better.  I’m a Nightly user and teach courses on Firefox.  I’m not going anywhere (fast) over the DRM decision.  I’m going to keep doing what I do and see how it pans out…

Ronnie Tucker: Full Circle #85 hits the streets…

Fri, 2014-05-30 17:19

This month:
* Command & Conquer
* How-To : Python, LibreOffice, and GRUB2 Pt.1.
* Graphics : Blender and Inkscape.
* Review: Ubuntu 14.04
* Security Q&A
* What Is: CryptoCurrency
* Open Source Design
* NEW! - Arduino
plus: Q&A, Linux Labs, Ask The New Guy, Ubuntu Games, and a competition to win a Humble Bundle!

Get it while it’s hot!
http://fullcirclemagazine.org/issue-85/

Sam Hewitt: Hamburger Buns

Fri, 2014-05-30 16:00

If you're making hamburgers, you need buns. You can buy them, but they are dead simple to make yourself and (like most homemade bread) they're going to taste far superior to anything that comes off a shelf.

Just in case you wanted my basic (and perfect) hamburger mince recipe too:

Hamburger Recipe
    Ingredients

    4 hours of waiting; 10 minutes of work; Makes 8 buns

  • 3 cups all-purpose flour
  • 1 teaspoon salt
  • 1 (1/4 oz) package active dry yeast
  • 1 cup warm water
  • 1 large egg
  • 3 tablespoons butter, melted
  • 3 tablespoons sugar
  • 1 tablespoon milk
  • olive oil
  • 1 egg, beaten
  • 1 tablespoon sesame or poppy seeds, for garnish (optional)
    Directions
  1. Dissolve the yeast in the warm water with 1/4 of the flour. Let stand until quite foamy.
  2. Whisk together the egg with the milk, butter & sugar.
  3. Dump the remaining flour and salt into a food processor (or stand mixer with a dough hook).
  4. Add the wet ingredients:
    • If using a food processor, slowly pour in the yeast mixture & the egg-milk mixture and blitz until the mixture comes together into a ball.
    • If using a stand mixer, add the yeast & egg-milk mixtures and run the machine on low for 5-6 minutes.
    If it is sticky to the touch, add a little more flour.
  5. Dump the dough-mass onto a lightly floured surface and form into a ball.
  6. Coat lightly with olive oil and place in a large bowl. Cover lightly with a towel and let rise until it has doubled in volume – about 2 hours.
  7. Transfer risen dough to a lightly floured surface, and flatten a bit to remove any large bubbles.
  8. Form into a large even circle and divide into 8 even pieces. Form each of those into a ball, flatten slightly and place onto a baking sheet lined with parchment paper.
  9. Lightly drape a sheet of plastic wrap over the buns and let rise again for another hour.
  10. Preheat the nearest oven to 375 degrees Fahrenheit (190 Celsius).
  11. Lightly brush all the buns with the beaten egg and sprinkle with sesame or poppy seeds, if using.
  12. Bake for 15-20 minutes until the exteriors are golden brown.
  13. Remove & let cool completely. Slice in half crosswise & serve.

Canonical Design Team: Making ubuntu.com responsive: adapting our navigation to small screens (9)

Fri, 2014-05-30 14:12

This post is part of the series ‘Making ubuntu.com responsive’.

One of the biggest challenges when making the move to responsive was tackling the navigation in ubuntu.com. This included rethinking not only the main navigation with first, second and third level links, but also a big 3-tier footer and global navigation.

Desktop size main navigation and footer on ubuntu.com.

1. Brainstorming

Instead of assigning this task to a single UX designer, and with the availability of everyone in the team very tight, we gathered two designers and two UX designers in a room for a few hours for an intensive brainstorming session. The goal of the meeting was to find an initial solution for how our navigation could work in small screens.

We started by analysing examples of small screen navigation from other sites and discussing how they could work (or not) for ubuntu.com. This was a good way of generating ideas to solve the navigation for our specific case.

Some of the small screen navigation examples we analysed, from left to right: the Guardian, BBC and John Lewis.

We decided to keep to existing common design patterns for small screen navigation rather than trying to think of original solutions, so we stuck with the typical menu icon on the top right with a dropdown on click for the top level links.

Settling on a solution for second and third level navigation was trickier, as we wanted to keep a balance between exposing our content and making the site easy to navigate without the menus getting in the way.

We thought keeping to simple patterns would make it easier for users to understand the mechanics of the navigation quickly, and assumed that in smaller screens users tend to forgive extra clicks if that means simpler and uncluttered navigation.

Some of the ideas we sketched during the brainstorming session.

2. Prototyping

With little time on our hands, we decided we’d deliver our solution as paper sketches for a prototype to be created by our developers. The starting point for the styling of the navigation was to follow that of Ubuntu Insights closely, with the remaining tweaks designed and built directly in code.

Navigation of Ubuntu Insights on large and small screens.

We briefed Ant with the sketches and some design and UX direction and he quickly built a prototype of the main navigation and footer for us to test and further improve.

First navigation prototype.

3. Improving

We gathered again to test and review the prototype once it was ready, and suggest improvements.

Everyone agreed that the mechanics of the navigation were right, and that visual tweaks could make it easier to understand, providing the user with more cues about the hierarchy of the content.

Initially, when scaling down the screen, the search and navigation overlapped slightly before the small-screen menu icon kicked in, so we also thought it would be nice to animate the change in padding between widths.

We also made sure that if JavaScript is not available, when the user clicks on the menu icon the page scrolls down to the footer, where the navigation is accessible.
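A minimal sketch of that no-JavaScript fallback pattern (the markup and class names here are hypothetical, not ubuntu.com's actual code): the menu icon is a plain anchor pointing at the footer navigation, and JavaScript, when available, intercepts the click to toggle the dropdown instead.

<!-- without JavaScript, the menu icon simply jumps to the footer navigation -->
<a href="#footer-nav" class="menu-toggle">Menu</a>
<nav class="nav-dropdown"><!-- top-level links; hidden by CSS unless .open --></nav>
<footer id="footer-nav"><!-- full site navigation --></footer>

<script>
document.querySelector('.menu-toggle').addEventListener('click', function (e) {
  e.preventDefault(); // cancel the jump to the footer
  document.querySelector('.nav-dropdown').classList.toggle('open');
});
</script>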

Final navigation prototype, after some tweaks.

Some final thoughts

When time is of the essence but you still want to experiment and generate as many ideas as possible, spending a couple of hours in a room with other team members, talking through case studies and how they can apply to your situation, proved a really useful and quick way to advance the project. And time and time again, it has proved very useful to invite people who are not directly involved with the project to give the team valuable new ideas and insights.

We’re now planning to test the navigation in our next quarterly set of usability testing, which will surely provide us with useful feedback to make the website easier to navigate across all screen sizes.


Valorie Zimmerman: Thank you, Jerry Seinfeld

Fri, 2014-05-30 06:45
Seinfeld has a stand-up routine about how your party self wants to be wild and crazy, because the person who will suffer the hangover is "Morning Guy." This was how I thought and why I procrastinated for most of my life. Eventually I figured out that my Self today and my Self tomorrow -- same Self.

I value kindness, and try to be kind and thoughtful to others. So why not be kind to my Self too? In my effort to make life more peaceful and happy, I've gradually started to be kinder to my future self, and have reduced procrastination quite a bit. Along the way, I've discovered how much easier it is to just do a small task when I notice it needs doing, rather than putting it off. My attitude about those little tasks has changed too, and they hardly seem like work. Instead, they feel like being kind to my Self.

This all seems like a virtuous circle, since doing small tasks immediately keeps life simple and clean, and living in a clean, calm environment makes it easier to do the work to keep things up. This seems to be valuable everywhere; in the daily schedule, around the house, at work, in relationships.

Perhaps it was easier to fall into this way of working since I've started some practices like making cold-brew coffee and fermented foods, both of which need to be started hours or days before they are ready to consume. Every time I finish preparing a new batch, I feel like I'm 'paying it forward' to my future self. The work we've been doing on our house also helps in a major way.

For those who call this kindness "selfishness," consider the so-called Golden Rule, restated by Jesus as "love your neighbor as you love yourself." That final phrase says it all; self-care is at the core of love.

And it all counts.

  • Making your bed when you get up, counts.
  • Brushing and flossing your teeth, counts.
  • Eating breakfast, counts.
  • Washing up, counts.
  • Making a meeting on time, counts.
  • Writing a thoughtful email, counts.
  • Helping someone in IRC, counts.
  • Eating vegetables or fruit instead of junk food, counts.
  • Drinking water instead of a sugary beverage, counts.
  • Walking up a flight of stairs instead of taking the elevator, counts.
  • Picking up a bit of trash, counts.
  • Getting to bed in time to have a full night's sleep, counts.
  • Writing a blog post, counts!

Anyone who has flown in a jetliner, has heard the advice: If you are traveling with someone, and the oxygen masks drop, do not help them put on their masks first. Put on your own mask, then help others!

Many of the ideas that have helped me get to this place come from unfuckyourhabitat.tumblr.com, which is invaluable. It is one thing to give yourself credit for the progress you make, but it is amazing to have a team of people cheering you on!

Jeremy Kerr: Netbooting with petitboot

Fri, 2014-05-30 01:12

I've been working on petitboot's netboot code recently. Here's the lowdown on how it all works.

Essentially, everything is intended to be compatible with the de-facto standard pxelinux behaviour. However, there's one major difference, in that we skip the stage where the machine downloads a binary pxelinux loader (because we're already running the loader, right?). This means that you probably don't want to populate the filename field in the DHCP response header. That said, petitboot should work fine with most current pxelinux configurations.

Netboot configuration process

By default, petitboot will send a DHCP request on any interfaces that have an active link (ie, have a network cable plugged in). The DHCP response will dictate petitboot's behaviour:

Firstly, petitboot will look for a "PXE configuration file" option (DHCP option 209) in the response. If this is specified, then petitboot will download and parse that configuration file. This can be either a full URL, or a file path. See the URLs section below for details on paths and URLs.

If no explicit configuration file is given (ie, there's no option 209 included in the DHCP response), then petitboot will attempt pxelinux-style configuration auto-discovery, using the machine's MAC address and the IP of the DHCP lease, falling back to a file named default. For example, for a machine with a MAC of 00:01:02:03:04:05, given a lease IP of 192.168.0.10 (C0A8000A in hex), petitboot will request the following files, in order, stopping at the first successful download (a code sketch of this lookup follows the lists below):

  1. prefix/pxelinux.cfg/01-00-01-02-03-04-05
  2. prefix/pxelinux.cfg/C0A8000A
  3. prefix/pxelinux.cfg/C0A8000
  4. prefix/pxelinux.cfg/C0A800
  5. prefix/pxelinux.cfg/C0A80
  6. prefix/pxelinux.cfg/C0A8
  7. prefix/pxelinux.cfg/C0A
  8. prefix/pxelinux.cfg/C0
  9. prefix/pxelinux.cfg/C
  10. prefix/pxelinux.cfg/default

- where prefix will depend on a few things:

  1. If the DHCP response includes a "PXE path prefix" option (DHCP option 210), petitboot will use that value as the prefix. This prefix can be a full URL, or just a path prefix (see the URLs section for details). Note that option 210 should always end with a trailing slash.
  2. Otherwise, TFTP is assumed, the server is determined from the DHCP response, and the files are requested from the top-level directory.
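To make that lookup order concrete, here is a minimal Python sketch of the candidate-path generation described above (the function name and prefix handling are illustrative, not petitboot's actual code):

def pxe_config_candidates(mac, ip, prefix=""):
    """Candidate configuration paths, in pxelinux discovery order."""
    # 1. MAC-based name: ARP hardware type "01" plus the dash-separated MAC
    candidates = [prefix + "pxelinux.cfg/01-" + mac.lower().replace(":", "-")]
    # 2. IP-based names: upper-case hex lease IP, truncated one digit at a time
    hex_ip = "%02X%02X%02X%02X" % tuple(int(octet) for octet in ip.split("."))
    for length in range(len(hex_ip), 0, -1):
        candidates.append(prefix + "pxelinux.cfg/" + hex_ip[:length])
    # 3. Final fallback
    candidates.append(prefix + "pxelinux.cfg/default")
    return candidates

# For MAC 00:01:02:03:04:05 and lease 192.168.0.10, this yields
# pxelinux.cfg/01-00-01-02-03-04-05, pxelinux.cfg/C0A8000A, pxelinux.cfg/C0A8000,
# ... down to pxelinux.cfg/default.
print(pxe_config_candidates("00:01:02:03:04:05", "192.168.0.10"))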

Finally, if there is a "file" parameter present in the DHCP header, then that file is added as a binary boot option, to be executed directly by the machine with no initrd or boot arguments. Don't specify a text config file in this manner – it won't work.

Configuration files

Petitboot supports configuration files based on the syslinux configuration format. However, not all keywords are parsed, as some relate to functionality that isn't relevant in a petitboot environment. Currently, petitboot supports the DEFAULT, KERNEL, INITRD and APPEND keywords. Keywords are case-insensitive.

Here's a typical petitboot configuration file that defines a single, default boot option:

default Linux 3.10.4

label Linux 3.10.4
kernel tftp://boot-server/powerpc/vmlinux-3.10.4
initrd tftp://boot-server/powerpc/initrd-3.10.4
append root=/dev/sda1 console=hvc0

URLs, servers and paths

Remote resources — such as configuration files, kernels and initramfs images — can be specified as full URLs (eg. tftp://hostname/path/file) or just paths (eg. /path/file). If a full URL is given, then petitboot will use it as-is. Supported protocols are currently http, ftp, tftp and nfs.

If only a path is given, petitboot will assume the TFTP protocol, and use an appropriate server address based on the DHCP response parameters, in this order:

  1. The "TFTP Server name" option - DCHP option 66
  2. The "Server Identifier" option - DHCP option 54
  3. The "siaddr" field in the DHCP/BOOTP header

Within a configuration file, paths are resolved relative to the location of that file. In keeping with the pxelinux configuration format, absolute paths can be given with a :: prefix - eg. ::/powerpc/vmlinux. Full URLs are always treated as absolute.
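As a worked example (with hypothetical paths): suppose a configuration file was downloaded from tftp://boot-server/powerpc/pxelinux.cfg/default. Within it:

# relative path: fetched from tftp://boot-server/powerpc/pxelinux.cfg/vmlinux-3.10.4
kernel vmlinux-3.10.4

# absolute path (:: prefix): fetched from tftp://boot-server/powerpc/vmlinux
kernel ::/powerpc/vmlinux

# full URL: used exactly as given
kernel http://other-server/vmlinux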

DHCP configuration examples

Here are a couple of DHCP server configurations that illustrate how to netboot petitboot machines. These examples are intended for the ISC DHCP server, and only show the configurations relevant to petitboot configuration - you'll need to define the usual subnet, range, etc sections too.

Single, predefined configuration file

This simple example configures all DHCP clients to use a single petitboot configuration file, served over HTTP:

# define a "conf-file" option syntax for the PXE configuration file (opt 209) option conf-file code 209 = text; option conf-file "http://boot-server/petitboot.cfg"; Fixed configuration for multiple architectures

This example shows a configuration that will allow petitboot-based POWER machines to work alongside pxelinux-based x86 machines. We use the DHCP architecture identifier 0x0e to distinguish the POWER OPAL boot clients.

# define a "conf-file" option syntax for the PXE configuration file (opt 209) option conf-file code 209 = text; # define an "arch" option syntax for the DHCP architecture identifier (opt 93) option arch code 93 = unsigned integer 16; # specify separate configuration files for powerpc & x86 machines # and configure x86 machines to use the pxelinux.0 loader. if option arch = 00:0e { option conf-file "powerpc/pxelinux/netboot.cfg"; } else { filename "x86/pxelinux/pxelinux.0"; option conf-file "netboot.cfg"; }

Since we're not specifying full URLs for the configuration files here, petitboot will attempt to download using TFTP, from the same host as the DHCP server.

In the x86 section, note that the config file (netboot.cfg) is specified relative to the pxelinux.0 binary. In this case, pxelinux will request the file from x86/pxelinux/netboot.cfg.

Managed TFTP server

In automated-provisioning environments, a central deployment system may control machine setup and boot. When a newly-racked machine is first booted, we want it to boot to an initial install/provision environment, where it is initialised and registers itself with the provisioning service. Once that registration is complete, we want it to boot to the newly-installed environment.

One way to achieve this is to have the deployment system manage PXE configuration files that are served over TFTP.

For this, we'd use the PXE autodiscovery mechanism for any newly-deployed machines, relying on the fallback to the configuration file named 'default'. Once a machine has completed its provisioning process (and registered with the deployment service), a per-machine configuration file can be added to the TFTP server, named after the machine's MAC address. This file will configure the newly-provisioned machine to boot to its standard OS environment, rather than booting through the initial-install process again.
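For example, once the machine with MAC 3c:97:0e:3b:85:00 is provisioned, the deployment system might write a per-machine file like this (paths and labels hypothetical) to pxelinux.cfg/01-3c-97-0e-3b-85-00 on the TFTP server:

# boot straight into the installed OS instead of the installer
default Installed OS

label Installed OS
kernel tftp://boot-server/powerpc/vmlinux-3.10.4
initrd tftp://boot-server/powerpc/initrd-3.10.4
append root=/dev/sda2 console=hvc0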

For this scenario, we can use a PXE path prefix parameter to distinguish machines of different architectures:

# define a "path-prefix" option syntax for the PXE path prefix (opt 210) option path-prefix code 210 = text; # define an "arch" option syntax for the DHCP architecture identifier (opt 93) option arch code 93 = unsigned integer 16; # use 192.168.0.3 as our managed TFTP server next-server 192.168.0.3; # provide separate binaries and configuration files depending on # client architecture if option arch = 00:0e { # POWER OPAL option path-prefix "powerpc/"; } else if option arch = 00:07 { # x86-64 EFI option path-prefix "x86-efi/"; filename "pxelinux/bootx64.efi"; } else { # x86 PC-BIOS option path-prefix "x86-pc-bios/"; filename "pxelinux/pxelinux.0"; }

We could just as easily use HTTP instead of TFTP here, by specifying full HTTP URLs as the configuration files. For the non-petitboot machines, we'd need to use a PXE loader that supports HTTP, like gPXE.

Managed DHCP server

This is similar to the previous example, but rather than using per-machine configuration files (served over TFTP), we can implement per-machine configuration directly in the DHCP server configuration.

If our DHCP configuration is managed by the deployment system, we can use host-specific configurations for machines that have been provisioned, and fall back to a default configuration for newly-racked machines. In this scenario, the deployment system is responsible for managing the DHCP configuration by adding a 'host' stanza for each known machine, after installation.

# define a "conf-file" option syntax for the PXE configuration file (opt 209) option conf-file code 209 = text; # define an "arch" option syntax for the DHCP architecture identifier (opt 93) option arch code 93 = unsigned integer 16; # configuration for running installers on unknown hosts: provide separate # binaries and configuration files depending on client architecture if option arch = 00:0e { # POWER OPAL option conf-file "pxe-configs/installer-powerpc.conf" } else if option arch = 00:07 { # x86-64 EFI option conf-file "pxe-configs/installer-x86-64-efi.conf" filename "pxelinux/bootx64.efi"; } # known host configurations, for which we specify an existing config file. These # sections are generated and managed by the deployment process, after each # machine has been provisioned. host server-001 { hardware ethernet 3c:97:0e:3b:85:00; option conf-file "pxe-configs/runtime-powerpc.conf"; } host server-002 { hardware ethernet b4:1e:26:c4:a0:be; option conf-file "pxe-configs/runtime-x86.conf"; }

Ubuntu Podcast from the UK LoCo: S07E09 – The One with the Ick Factor

Thu, 2014-05-29 19:30

Alan Pope, Mark Johnson, Tony Whitmore, and Laura Cowen are in Studio L for Season Seven, Episode Nine of the Ubuntu Podcast!


In this week’s show:-

We’ll be back next week, when we’ll interview Martin Wimpress from the MATE desktop team and go through your feedback.

Please send your comments and suggestions to: podcast@ubuntu-uk.org
Join us on IRC in #uupc on Freenode
Leave a voicemail via phone: +44 (0) 203 298 1600, sip: podcast@sip.ubuntu-uk.org and skype: ubuntuukpodcast
Follow us on Twitter
Find our Facebook Fan Page
Follow us on Google+

Mark Shuttleworth: Enjoying G+

Thu, 2014-05-29 18:25

Plussers, I’m now at https://plus.google.com/+MarkShuttleworthCanonical/ and quite enjoying the service. That G+ persona is mostly software and cloud related (so short on beekeeping, kitesurfing or botanical pursuits).

This is probably the best way to keep in touch. I’m not on Facebook and every year Claire and I clear out the oddball fake-me’s that seem to crop up regularly there. To their credit, FB have been really good about sorting those out. I’m told I’m missing out but, srsly, TIME!

G+ seems really clean and well organised, and the chatter is mostly thoughtful and good-natured.

Mark Shuttleworth: #9 – Canonical’s cloud-init saves you from image soup, on every cloud

Thu, 2014-05-29 16:22

This is a series of posts on reasons to choose Ubuntu for your public or private cloud work & play.

We run an extensive program to identify issues and features that make a difference to cloud users. One result of that program is that we pioneered dynamic image customisation and wrote cloud-init. I’ll tell the story of cloud-init as an illustration of the focus the Ubuntu team has on making your devops experience fantastic on any given cloud.

 

Ever struggled to find the “right” image to use on your favourite cloud? Ever wondered how you can tell if an image is safe to use, what keyloggers or other nasties might be installed? We set out to solve that problem a few years ago and the resulting code, cloud-init, is one of the more visible pieces Canonical designed and built, and very widely adopted.

Traditionally, people used image snapshots to build a portfolio of useful base images. You’d start with a bare OS, add some software and configuration, then snapshot the filesystem. You could use those snapshots to power up fresh images any time you need more machines “like this one”. And that process works pretty amazingly well. There are hundreds of thousands, perhaps millions, of such image snapshots scattered around the clouds today. It’s fantastic. Images for every possible occasion! It’s a disaster. Images with every possible type of problem.

The core issue is that an image is a giant binary blob that is virtually impossible to audit. Since it’s a snapshot of an image that was running, and to which anything might have been done, you will need to look in every nook and cranny to see if there is a potential problem. Can you afford to verify that every binary is unmodified? That every configuration file and every startup script is safe? No, you can’t. And for that reason, that whole catalogue of potential is a catalogue of potential risk. If you wanted to gather useful data sneakily, all you’d have to do is put up an image that advertises itself as being good for a particular purpose and convince people to run it.

There are other issues, even if you create the images yourself. Each image slowly gets out of date with regard to security updates. When you fire it up, you need to apply all the updates released since the image was created, if you want a secure machine. Eventually, you'll want to re-snapshot for a more up-to-date image. That requires administration overhead and coordination, so most people don't do it.

That’s why we created cloud-init. When your virtual machine boots, cloud-init is run very early. It looks out for some information you send to the cloud along with the instruction to start a new machine, and it customises your machine at boot time. When you combine cloud-init with the regular fresh Ubuntu images we publish (roughly every two weeks for regular updates, and whenever a security update is published), you have a very clean and elegant way to get fresh images that do whatever you want. You design your image as a script which customises the vanilla, base image. And then you use cloud-init to run that script against a pristine, known-good standard image of Ubuntu. Et voila! You now have purpose-designed images of your own on demand, always built on a fresh, secure, trusted base image.
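To make that concrete, here is a minimal sketch (the package choice and contents are hypothetical): the "script" can be as simple as a shell snippet passed as user data when you launch an instance, which cloud-init executes on the first boot of the pristine base image.

#!/bin/sh
# Hypothetical user-data script: cloud-init runs this on first boot of the
# stock Ubuntu image, so the customisation lives in auditable code rather
# than in an opaque snapshot.
apt-get update
apt-get -y upgrade
apt-get -y install nginx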

Auditing your cloud infrastructure is now straightforward, because you have the DNA of that image in your script. This is devops thinking, turning repetitive manual processes (hacking and snapshotting) into code that can be shared and audited and improved. Your infrastructure DNA should live in a version control system that requires signed commits, so you know everything that has been done to get you where you are today. And all of that is enabled by cloud-init. And if you want to go one level deeper, check out Juju, which provides you with off-the-shelf scripts to customise and optimise that base image for hundreds of common workloads.

Jono Bacon: Last Day Today

Thu, 2014-05-29 15:34

Recently I announced I am stepping down as Ubuntu Community Manager at Canonical and moving to XPRIZE as Senior Director of Community. Today is my last day at Canonical.

I just want to say how touched I have been by the response. The comments, social media posts, emails, and calls from you have been so kind and supportive. You are all good people, and I am going to miss every single one of you.

The reason why I have devoted my life to understanding communities is that I believe communities bring out the best in people, and all of you are a perfect example of that. I cannot express just how much I appreciate it.

Over the course of the next few weeks my replacement will be sourced and announced, and in the interim my team (Daniel Holbach, Michael Hall, David Planella, Nicholas Skaggs, Alan Pope) will take over my duties. Everything has been transitioned over, and remember, the weekly Q&As will continue at 6pm UTC every Tuesday on Ubuntu On Air with my team filling in for me. As ever, any and all Ubuntu questions are welcome!

Of course, I will still be around. I am going to continue to be a member of the Ubuntu community and an avid Ubuntu user, tester, and supporter. I will continue to be on IRC, you can email me at jono@jonobacon.org, I will continue to do Bad Voltage, and I have a busy schedule at the Community Leadership Summit, OSCON, and more. I am also going to continue to have my own Q&A session every week where you can ask questions about my perspectives on Ubuntu, Canonical, community management, XPRIZE, and more; I will announce this soon.

Ubuntu has a tremendous future ahead of it, built on the hard work and passion of a global community. We are only just getting started with a new era of Ubuntu convergence and cloud orchestration and while I will miss being there in an official capacity, I am just thankful that I can continue to be along for the ride in the very community I played a part in building.

I now have a few weeks off and then my new adventure begins. Stay tuned.
