news aggregator

Mark Shuttleworth: #8 – Ubuntu makes useful guarantees on every cloud

Planet Ubuntu - Sat, 2014-05-31 11:41

This is a series of posts on reasons to choose Ubuntu for your public or private cloud work & play.

When you see Ubuntu on a cloud it means that Canonical has a working relationship with that cloud vendor, and the Ubuntu images there come with a set of guarantees:

  1. Those images are up to date and secure.
  2. They have also been optimised on that cloud, both for performance and cost.
  3. The images provide a standard experience for app compatibility.

That turns out to be a lot of work for us to achieve, but it makes your life really easy.

Fresh, secure and tasty images

We update the cloud images across all clouds on a regular basis. Updating the image means that you have more of the latest updates pre-installed so launching a new machine is much faster – fewer updates to install on boot for a fully secured and patched machine.

  1. At least every two weeks, typically, if there are just a few small updates across the board to roll into the freshest image.
  2. Immediately if there is a significant security issue, so starting a fresh image guarantees you to have no known security gotchas.
  3. Sooner than usual if there are a lot of updates which would make launching and updating a machine slow.

Updates might include fixes to the kernel, or any of the packages we install by default in the “core” cloud images.

We also make sure that these updated images are used by default in any “quick launch” UI that the cloud provides, so you don’t have to go hunt for the right image identity. And there are automated tools that will tell you the ID for the current image of Ubuntu on your cloud of choice. So you can script “give me a fresh Ubuntu machine” for any cloud, trivially. It’s all very nice.
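If you want a concrete sketch of that scripting (a hypothetical example, not an official tool: the flags are AWS-specific and the image name pattern varies by release and virtualisation type, though 099720109477 is Canonical’s publishing account), it can be a one-liner:

# Print the ID of the newest official Ubuntu 14.04 LTS amd64 image on EC2
aws ec2 describe-images \
    --owners 099720109477 \
    --filters "Name=name,Values=ubuntu/images/*ubuntu-trusty-14.04-amd64-server-*" \
    --query 'sort_by(Images,&CreationDate)[-1].ImageId' \
    --output text

Feed that ID into your launch tooling and you have the “give me a fresh Ubuntu machine” script described above.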

Optimised for your pocket and your workload

Every cloud behaves differently – both in terms of their architecture, and their economics. When we engage with the cloud operator we figure out how to ensure that Ubuntu is “optimal” on that cloud. Usually that means we figure out things like storage mechanisms (the classic example is S3 but we have to look at each cloud to see what they provide and how to take advantage of it) and ensure that data-heavy operations like system updates draw on those resources in the most cost-efficient manner. This way we try to ensure that using Ubuntu is a guarantee of the most cost-effective base OS experience on any given cloud.

In the case of more sophisticated clouds, we are digging in to kernel parameters and drivers to ensure that performance is first class. On Azure there is a LOT of deep engineering between Canonical and Microsoft to ensure that Ubuntu gets the best possible performance out of the Hyper-V substrate, and we are similarly engaged with other cloud operators and solution providers that use highly-specialised hypervisors, such as Joyent and VMware. Even the network can be tweaked for efficiency in a particular cloud environment once we know exactly how that cloud works under the covers. And we do that tweaking in the standard images so EVERYBODY benefits and you can take it for granted – if you’re using Ubuntu, it’s optimal.

The results of this work can be pretty astonishing. In the case of one cloud we reduced the Ubuntu startup time by 23x from what their team had done internally; not that they were ineffective, it’s just that we see things through the eyes of a large-scale cloud user and care about things that a single developer might not care about as much. When you’re doing something at scale, even small efficiencies add up to big numbers.

Standard, yummy

Before we had this program in place, every cloud vendor hacked their own Ubuntu images, and they were all slightly different in unpredictable ways. We all have our own favourite way of doing things, so if every cloud has a lead engineer who rigged the default Ubuntu the way they like it, end users have to figure out the differences the hard way, stubbing their toes on them. In some cases they had default user accounts with different behaviour, in others they had different default packages installed. EMACS, Vi, nginx, the usual tweaks. In a couple of cases there were problems with updates or security, and we realised that Ubuntu users would be much better off if we took responsibility for this and ensured that the name is an assurance of standard behaviour and quality across all clouds.

So now we have that, and if you see Ubuntu on a public cloud you can be sure it’s done to that standard, and we’re responsible. If it isn’t, please let us know and we’ll fix it for you.

That means that you can try out a new cloud really easily – your stuff should work exactly the same way with those images, and differences between the clouds will have been considered and abstracted in the base OS. We’ll have tweaked the network, kernel, storage, update mechanisms and a host of other details so that you don’t have to, we’ll have installed appropriate tools for that specific cloud, and we’ll have lined things up so that to the best of our ability none of those changes will break your apps, or updates. If you haven’t recently tried a new cloud, go ahead and kick the tires on the base Ubuntu images in two or three of them. They should all Just Work TM.

 

It’s frankly a lot of fun for us to work with the cloud operators – this is the frontline of large-scale systems engineering, and the guys driving architecture at public cloud providers are innovating like crazy but doing so in a highly competitive and operationally demanding environment. Our job in this case is to make sure that end-users don’t have to worry about how the base OS is tuned – it’s already tuned for them. We’re taking that to the next level in many cases by optimising workloads as well, in the form of Juju charms, so you can get whole clusters or scaled-out services that are tuned for each cloud as well. The goal is that you can create a cloud account and have complex scale-out infrastructure up and running in a few minutes. Devops, distilled.

Valorie Zimmerman: Exciting! Writing another KDE book: Frameworks in Randa

Planet Ubuntu - Sat, 2014-05-31 08:54

Returning to Randa is always tremendous, no matter what the work is. The house in Randa, Switzerland is so welcoming, so solid, yet modern and comfortable too.

KDE is in building mode this year. If you think of the software like a house, Frameworks is the foundation and framing. Of course it isn't quite like a house, since the dwelling we are constructing is made from familiar materials, but remade in an interactive, modular fashion. I'd better stop with the house metaphor before I take it too far, because the rooms (applications) will be familiar, yet updated. And we can't quite call Plasma Next the "paint", yet the face KDE will be showing the world in our software will be familiar yet completely updated as well.

However, this year will see the foundation for KDE's newest surge ahead in software, from the foundation to the roof, to use the house metaphor one more time. I really cannot wait to get there and start work on our next book, the Framework Recipe book (working title). Of course, travel is expensive, and most of us will come from all over the world. So once again, we're raising funds for the Randa Meetings, which will be the largest so far. The e.V. is a major sponsor, but this is one big gathering. We need community support. Please give generously here:

www.kde.org/fundraisers/randameetings2014/index.php

Bryan Quigley: The Mozilla I want

Planet Ubuntu - Fri, 2014-05-30 21:13

Somewhat a response to
https://webwewant.mozilla.org/en/
https://blog.mozilla.org/blog/2014/05/14/drm-and-the-challenge-of-serving-users/
https://fsf.org/news/fsf-condemns-partnership-between-mozilla-and-adobe-to-support-digital-restrictions-management

NOTE: These are more personal opinions and not necessarily those of my employer, your employer, or any of the businesses and governments in my town, city, state, or country.

DRM

It is *never* something people want.  Have you ever heard someone say “I really want this content I bought/”rented” to be harder to share/remix/watch and to have even greater legal ramifications if I do so?”  They do want content made by Hollywood, but those are different things.

DRM – Mozilla being played?

This reminds me of the time Chrome said it would drop H264 [1].   From what played out in public it seemed that Mozilla didn’t see the need to push for H264 sooner because they trusted Google to actually drop H264 support.

In a somewhat reverse situation, Mozilla just said they will adopt EME in Firefox before any of the possible benefits are realized by others. Right now it’s being implemented only in Chrome and IE 11 [2], and I’ve only seen it used in Chrome OS and IE 11. From my point of view, I would have preferred if Mozilla had at least waited to see whether we get more platform support from major vendors on this (i.e. Linux support for Netflix).

If so, maybe the increase in Linux market share would provide some balance to DRM’s negative effects, making free software a net win overall. In that case, I would (still reluctantly) support Mozilla’s decision here if they saw it as a means to get Hollywood media onto freer platforms. But why not wait and see whether that actually happens with Google Chrome/Netflix on Linux?

[1] http://blog.chromium.org/2011/01/html-video-codec-support-in-chrome.html
[2] http://html5test.com/compare/feature/video-drm.html

Reduce “Hollywood” power

I would like to see Mozilla pushing Indie/Crowdsourcing media, like:
Paid streaming site for indie videos https://indieflix.com/
Public broadcasting http://www.pbs.org/
Better publishing http://www.getmiro.com/
Basically, how can Mozilla use its capabilities to better change the media landscape? (A slightly “better” form of DRM should not be the answer to this question.)

Abandon the Do Not Track header, provide actual options

It doesn’t work (and almost certainly never will), and it gives people a false sense of doing something. You are giving advertisers another data point to track! It literally does the opposite of what it is supposed to do.

Instead!

  •  Finish blocking 3rd party cookies (https://blog.mozilla.org/privacy/2013/02/25/firefox-getting-smarter-about-third-party-cookies/)
  • Promote (by adding it as a search option, etc) providers that promise to not track ANY of their users.   DuckDuckGo being the most obvious example.
    There is so little difference between Yahoo and Bing search… and DuckDuckGo is a damn good search engine [2].
  • Push advertisers off of Flash (generally a good idea, but also will help with privacy – no flash cookies, etc)
    Generally I’m supportive of the Click-to-play, etc initiatives Mozilla has taken thus far.  Flash is the exception to that rule.   Here’s the outline of a plan to push advertisers off of it. (the numbers are obviously made up for illustrative purposes)

    1. Start forcing Click-to-play for Flash when the site has more than 6 plugins running (pick some “high” number, and count all plugins, not just flash)
    2. Reduce the number of plugins to 5, after some number of Firefox releases or some specific Adobe Flash counting metric.  Repeat pushing to 4, etc.
    3. Once advertisers get on board and Flash ads aren’t served by the big advertisers, now we can push Flash to click-to-play at 2 instances per page.
    4. Once flash usage drops under 5% [1], we’d be able to push it to default click-to-play for all Flash.

[1] http://w3techs.com/technologies/details/cp-flash/all/all
[2] http://libretechtips.com/reviews-internet/duckduckgo-comparison-google-bing-yandex

SSL 3.0 – When will it go away?

You’ve removed the (easy) option to disable it. When will it go away for good? And why does Chrome let users see which protocol version (TLS 1.2 vs 1.0, etc.) they’re using, but Firefox doesn’t?

Mobile – Firefox OS

Well, I work at a direct competitor in mobile… but not actually on our phone product…

  • Launching with just Facebook contact sync, well, isn’t very open, and totally goes against promoting those that share your values.
  • I get that you can’t magically make the devices more open, but can we at least get public commitments for how long a device will be supported, and how often it will get Firefox OS updates?
  • I wish you had used Thunderbird as Firefox OS’s email client… I think that would let it scale really, really well and give you a new reason to push features there… Maybe you are using it under the hood?

If you’re reading this and don’t know, you can try out Firefox OS (“Boot2Gecko”) on your desktop too! (https://nightly.mozilla.org/)

End on a positive note..

I love the new Firefox interface.  It’s awesome and makes customizing the browser much better.   I’m a nightly user and teach courses on Firefox.  I’m not going anywhere (fast) over the DRM decision.  Going to keep doing what I do and see how it pans out…

Ronnie Tucker: Full Circle #85 hits the streets…

Planet Ubuntu - Fri, 2014-05-30 17:19

This month:
* Command & Conquer
* How-To : Python, LibreOffice, and GRUB2 Pt.1.
* Graphics : Blender and Inkscape.
* Review: Ubuntu 14.04
* Security Q&A
* What Is: CryptoCurrency
* Open Source Design
* NEW! - Arduino
plus: Q&A, Linux Labs, Ask The New Guy, Ubuntu Games, and a competition to win a Humble Bundle!

Get it while it’s hot!
http://fullcirclemagazine.org/issue-85/

Sam Hewitt: Hamburger Buns

Planet Ubuntu - Fri, 2014-05-30 16:00

If you're making hamburgers, buns are needed. You can buy them but they are dead simple to make yourself and (like most homemade bread) they're going to taste far superior to anything that comes off a shelf.

Just in case you wanted my basic (and perfect) hamburger mince recipe too:

Hamburger Recipe
    Ingredients

    4 hours of waiting; 10 minutes of work; Makes 8 buns

  • 3 cups all-purpose flour
  • 1 teaspoon salt
  • 1 (1/4 oz) package active dry yeast
  • 1 cup warm water
  • 1 large egg
  • 3 tablespoons butter, melted
  • 3 tablespoons sugar
  • 1 tablespoon milk
  • olive oil
  • 1 egg, beaten
  • 1 tablespoon sesame or poppy seeds, for garnish (optional)
    Directions
  1. Dissolve the yeast in the warm water with 1/4 of the flour. Let stand until quite foamy.
  2. Whisk together an egg, with the milk, butter & sugar.
  3. Dump the remaining flour and salt into a food processor (or stand mixer with a dough hook).
  4. Add the wet ingredients:
    • If using a food processor, slowly pour in the yeast mixture & the egg-milk mixture and blitz until the mixture comes together into a ball.
    • If using a stand mixer, add the yeast & egg-milk mixtures and run the machine on low for 5-6 minutes.
    If it is sticky to the touch, add a little more flour.
  5. Dump the dough-mass onto a lightly floured surface and form into a ball.
  6. Coat lightly with olive oil and place in a large bowl. Cover lightly with a towel and let rise until it has doubled in volume – about 2 hours.
  7. Transfer risen dough to a lightly floured surface, and flatten a bit to remove any large bubbles.
  8. Form into a large even circle and divide into 8 even pieces. Form each of those into a ball, flatten slightly and place onto a baking sheet lined with parchment paper.
  9. Lightly drape a sheet of plastic wrap over the buns and let rise again for another hour.
  10. Preheat the nearest oven to 375 degrees Fahrenheit (190 Celsius).
  11. Lightly brush all the buns with the beaten egg and sprinkle with sesame or poppy seeds, if using.
  12. Bake for 15-20 minutes until the exteriors are golden brown.
  13. Remove & let cool completely. Slice in half crosswise & serve.

Canonical Design Team: Making ubuntu.com responsive: adapting our navigation to small screens (9)

Planet Ubuntu - Fri, 2014-05-30 14:12

This post is part of the series ‘Making ubuntu.com responsive‘.

One of the biggest challenges when making the move to responsive was tackling the navigation in ubuntu.com. This included rethinking not only the main navigation with first, second and third level links, but also a big 3-tier footer and global navigation.

Desktop size main navigation and footer on ubuntu.com.

1. Brainstorming

Instead of assigning this task to a single UX designer, and with everyone’s availability in the team very tight, we gathered two designers and two UX designers in a room for a few hours for an intensive brainstorming session. The goal of the meeting was to find an initial solution for how our navigation could work on small screens.

We started by analysing examples of small screen navigation from other sites and discussing how they could work (or not) for ubuntu.com. This was a good way of generating ideas to solve the navigation for our specific case.

Some of the small screen navigation examples we analysed, from left to right: the Guardian, BBC and John Lewis.

We decided to keep to existing common design patterns for small screen navigation rather than trying to think of original solutions, so we stuck with the typical menu icon on the top right with a dropdown on click for the top level links.

Settling on a solution for second and third level navigation was trickier, as we wanted to keep a balance between exposing our content and making the site easy to navigate without the menus getting in the way.

We thought keeping to simple patterns would make it easier for users to understand the mechanics of the navigation quickly, and assumed that in smaller screens users tend to forgive extra clicks if that means simpler and uncluttered navigation.

Some of the ideas we sketched during the brainstorming session.

2. Prototyping

With little time on our hands, we decided we’d deliver our solution as paper sketches for a prototype to be created by our developers. The starting point for the styling of the navigation was to follow that of Ubuntu Insights closely, with the remaining tweaks designed and built directly in code.

Navigation of Ubuntu Insights on large and small screens.

We briefed Ant with the sketches and some design and UX direction and he quickly built a prototype of the main navigation and footer for us to test and further improve.

First navigation prototype.

3. Improving

We gathered again to test and review the prototype once it was ready, and suggest improvements.

Everyone agreed that the mechanics of the navigation were right, and that visual tweaks could make it easier to understand, providing the user with more cues about the hierarchy of the content.

Initially, when scaling down the screen, the search and navigation overlapped slightly before the small-screen menu icon kicked in, so we also thought it would be nice to animate the change in padding between widths.

We also made sure that if JavaScript is not available, when the user clicks on the menu icon the page scrolls down to the footer, where the navigation is accessible.

Final navigation prototype, after some tweaks.

Some final thoughts

When time is of the essence, but you still want to be able to experiment and generate as many ideas as possible, spending a couple of hours in a room with other team members, talking through case studies and how they can be applied to your situation, proved a really useful and quick way to advance the project. And time and time again, it has proved very useful to invite people who are not directly involved with the project, to give the team valuable new ideas and insights.

We’re now planning to test the navigation in our next quarterly set of usability testing, which will surely provide us with useful feedback to make the website easier to navigate across all screen sizes.

Reading list

Valorie Zimmerman: Thank you, Jerry Seinfeld

Planet Ubuntu - Fri, 2014-05-30 06:45
Seinfeld has a stand-up routine about how your party self wants to be wild and crazy, because the person who will suffer the hangover is "Morning Guy." This was how I thought and why I procrastinated for most of my life. Eventually I figured out that my Self today and my Self tomorrow -- same Self.

I value kindness, and try to be kind and thoughtful to others. So why not be kind to my Self too? In my effort to make life more peaceful and happy, I've gradually started to be kinder to my future self, and have reduced procrastination quite a bit. Along the way, I've discovered how much easier it is to just do a small task when I notice it needs doing, rather than putting it off. My attitude about those little tasks has changed too, and they hardly seem like work. Instead, they feel like being kind to my Self.

This all seems like a virtuous circle, since doing small tasks immediately keeps life simple and clean, and living in a clean, calm environment makes it easier to do the work to keep things up. This seems to be valuable everywhere; in the daily schedule, around the house, at work, in relationships.

Perhaps it was easier to fall into this way of working since I've started some practices like making cold-brew coffee and fermented foods, both of which need to be started hours or days before they are ready to consume. Every time I finish preparing a new batch, I feel like I'm 'paying it forward' to my future self. The work we've been doing on our house also helps in a major way.

For those who call this kindness "selfishness," consider the so-called Golden Rule, restated by Jesus as love your neighbor as you love yourself. That final phrase says it all; self-care is at the core of love.

And it all counts.

  • Making your bed when you get up, counts.
  • Brushing and flossing your teeth, counts.
  • Eating breakfast, counts.
  • Washing up, counts.
  • Making a meeting on time, counts.
  • Writing a thoughtful email, counts.
  • Helping someone in IRC, counts.
  • Eating vegetables or fruit instead of junk food, counts.
  • Drinking water instead of a sugary beverage, counts.
  • Walking up a flight of stairs instead of taking the elevator, counts.
  • Picking up a bit of trash, counts.
  • Getting to bed in time to have a full night's sleep, counts.
  • Writing a blog post, counts!

Anyone who has flown in a jetliner, has heard the advice: If you are traveling with someone, and the oxygen masks drop, do not help them put on their masks first. Put on your own mask, then help others!

Many of the ideas that have helped me get to this place come from unfuckyourhabitat.tumblr.com which is invaluable. It is one thing to give yourself credit for the progress you make, but it is amazing to have a team of people cheering you on!

Jeremy Kerr: Netbooting with petitboot

Planet Ubuntu - Fri, 2014-05-30 01:12

I've been working on petitboot's netboot code recently. Here's the lowdown on how it all works.

Essentially, everything is intended to be compatible with the de-facto standard pxelinux behaviour. However, there's one major difference, in that we skip the stage where the machine downloads a binary pxelinux loader (because we're already running the loader, right?). This means that you probably don't want to populate the filename field in the DHCP response header. That said, petitboot should work fine with most current pxelinux configurations.

Netboot configuration process

By default, petitboot will send a DHCP request on any interfaces that have an active link (ie, have a network cable plugged in). The DHCP response will dictate petitboot's behaviour:

Firstly, petitboot will look for a "PXE configuration file" option (DHCP option 209) in the response. If this is specified, then petitboot will download and parse that configuration file. This can be either a full URL, or a file path. See the URLs section below for details on paths and URLs.

If no explicit configuration file is given (ie, there's no option 209 included in the DHCP response), then petitboot will attempt pxelinux-style configuration auto-discovery, using the machine's MAC address, then the IP of the DHCP lease, and finally falling back to a file named default. For example, for a machine with a MAC of 00:01:02:03:04:05, given a lease IP of 192.168.0.10 (C0A8000A in hex), petitboot will request the following files, in order, stopping at the first successful download:

  1. prefix/pxelinux.cfg/01-00-01-02-03-04-05
  2. prefix/pxelinux.cfg/C0A8000A
  3. prefix/pxelinux.cfg/C0A8000
  4. prefix/pxelinux.cfg/C0A800
  5. prefix/pxelinux.cfg/C0A80
  6. prefix/pxelinux.cfg/C0A8
  7. prefix/pxelinux.cfg/C0A
  8. prefix/pxelinux.cfg/C0
  9. prefix/pxelinux.cfg/C
  10. prefix/pxelinux.cfg/default

- where prefix will depend on a few things:

  1. If the DHCP response includes a "PXE path prefix" option (DHCP option 210), petitboot will use that value as the prefix. This prefix can be a full URL, or just a path prefix (see the URLs section for details). Note that option 210 should always end with a trailing slash.
  2. Otherwise, TFTP is assumed, the server is determined from the DHCP response, and the files are requested from the top-level directory.
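As an illustration of how the hex names above are derived (petitboot computes this internally; the command is just a sketch), the lease IP's octets are printed as uppercase hex, then truncated one character at a time from the right on each retry:

printf '%02X' 192 168 0 10; echo    # prints C0A8000A - the name tried at step 2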

Finally, if there is a "file" parameter present in the DHCP header, then that file is added as a binary boot option, to be executed directly by the machine with no initrd or boot arguments. Don't specify a text config file in this manner; it won't work.

Configuration files

Petitboot supports configuration files based on the syslinux configuration format. However, not all keywords are parsed, as some relate to functionality that isn't relevant in a petitboot environment. Currently, petitboot supports the DEFAULT, KERNEL, INITRD and APPEND keywords. Keywords are case-insensitive.

Here's a typical petitboot configuration file that defines a single, default boot option:

default Linux 3.10.4

label Linux 3.10.4
kernel tftp://boot-server/powerpc/vmlinux-3.10.4
initrd tftp://boot-server/powerpc/initrd-3.10.4
append root=/dev/sda1 console=hvc0

URLs, servers and paths

Remote resources — such as configuration files, kernels and initramfs images — can be specified as full URLs (eg. tftp://hostname/path/file) or just paths (eg. /path/file). If a full URL is given, then petitboot will use that as-is. Supported protocols are currently http, ftp, tftp and nfs.

If only a path is given, petitboot will assume the TFTP protocol, and use an appropriate server address based on the DHCP response parameters, in this order:

  1. The "TFTP Server name" option - DHCP option 66
  2. The "Server Identifier" option - DHCP option 54
  3. The "siaddr" field in the DHCP/BOOTP header

Within a configuration file, paths are resolved relative to the location of that file. In keeping with the pxelinux configuration format, absolute paths can be given with a :: prefix - eg. ::/powerpc/vmlinux. Full URLs are always treated as absolute.

DHCP configuration examples

Here are a couple of DHCP server configurations that illustrate how to netboot petitboot machines. These examples are intended for the ISC DHCP server, and only show the parts relevant to petitboot - you'll need to define the usual subnet, range, etc. sections too.

Single, predefined configuration file

This simple example configures all DHCP clients to use a single petitboot configuration file, served over HTTP:

# define a "conf-file" option syntax for the PXE configuration file (opt 209)
option conf-file code 209 = text;

option conf-file "http://boot-server/petitboot.cfg";

Fixed configuration for multiple architectures

This example shows a configuration that will allow petitboot-based POWER machines to work alongside pxelinux-based x86 machines. We use the DHCP architecture identifier 0x0e to distinguish the POWER OPAL boot clients.

# define a "conf-file" option syntax for the PXE configuration file (opt 209)
option conf-file code 209 = text;

# define an "arch" option syntax for the DHCP architecture identifier (opt 93)
option arch code 93 = unsigned integer 16;

# specify separate configuration files for powerpc & x86 machines
# and configure x86 machines to use the pxelinux.0 loader.
if option arch = 00:0e {
    option conf-file "powerpc/pxelinux/netboot.cfg";
} else {
    filename "x86/pxelinux/pxelinux.0";
    option conf-file "netboot.cfg";
}

Since we're not specifying full URLs for the configuration files here, petitboot will attempt to download using TFTP, from the same host as the DHCP server.

In the x86 section, note that the config file (netboot.cfg) is specified relative to the pxelinux.0 binary. In this case, pxelinux will request the file from x86/pxelinux/netboot.cfg.

Managed TFTP server

In automated-provisioning environments, a central deployment system may control machine setup and boot. When a newly-racked machine is first booted, we want it to boot to an initial install/provision environment, where it is initialised and registers itself with the provisioning service. Once that registration is complete, we want it to boot to the newly-installed environment.

One way to achieve this is to have the deployment system manage PXE configuration files that are served over TFTP.

For this, we'd rely on the PXE autodiscovery mechanism for any newly-deployed machines (relying on the fallback to the configuration file named 'default'). Once a machine has completed its provisioning process (and registered with the deployment service), a per-machine configuration file can be added to the TFTP server, named after the machine's MAC address. This file will configure the newly-provisioned machine to boot to its standard OS environment, rather than booting through the initial-install process again.

For this scenario, we can use a PXE path prefix parameter to distinguish machines of different architectures:

# define a "path-prefix" option syntax for the PXE path prefix (opt 210)
option path-prefix code 210 = text;

# define an "arch" option syntax for the DHCP architecture identifier (opt 93)
option arch code 93 = unsigned integer 16;

# use 192.168.0.3 as our managed TFTP server
next-server 192.168.0.3;

# provide separate binaries and configuration files depending on
# client architecture
if option arch = 00:0e {
    # POWER OPAL
    option path-prefix "powerpc/";
} else if option arch = 00:07 {
    # x86-64 EFI
    option path-prefix "x86-efi/";
    filename "pxelinux/bootx64.efi";
} else {
    # x86 PC-BIOS
    option path-prefix "x86-pc-bios/";
    filename "pxelinux/pxelinux.0";
}

We could just as easily use HTTP instead of TFTP here, by specifying full HTTP URLs as the configuration files. For the non-petitboot machines, we'd need to use a pxe loader that supports HTTP, like gPXE.

Managed DHCP server

This is similar to the previous example, but rather than using per-machine configuration files (served over TFTP), we can implement per-machine configuration directly in the DHCP server configuration.

If our DHCP configuration is managed by the deployment system, we can use host-specific configurations for machines that have been provisioned, and fall back to a default configuration for newly-racked machines. In this scenario, the deployment system is responsible for managing the DHCP configuration by adding a 'host' stanza for each known machine, after installation.

# define a "conf-file" option syntax for the PXE configuration file (opt 209)
option conf-file code 209 = text;

# define an "arch" option syntax for the DHCP architecture identifier (opt 93)
option arch code 93 = unsigned integer 16;

# configuration for running installers on unknown hosts: provide separate
# binaries and configuration files depending on client architecture
if option arch = 00:0e {
    # POWER OPAL
    option conf-file "pxe-configs/installer-powerpc.conf";
} else if option arch = 00:07 {
    # x86-64 EFI
    option conf-file "pxe-configs/installer-x86-64-efi.conf";
    filename "pxelinux/bootx64.efi";
}

# known host configurations, for which we specify an existing config file. These
# sections are generated and managed by the deployment process, after each
# machine has been provisioned.
host server-001 {
    hardware ethernet 3c:97:0e:3b:85:00;
    option conf-file "pxe-configs/runtime-powerpc.conf";
}
host server-002 {
    hardware ethernet b4:1e:26:c4:a0:be;
    option conf-file "pxe-configs/runtime-x86.conf";
}

Ubuntu Podcast from the UK LoCo: S07E09 – The One with the Ick Factor

Planet Ubuntu - Thu, 2014-05-29 19:30

Alan Pope, Mark Johnson, Tony Whitmore, and Laura Cowen are in Studio L for Season Seven, Episode Eight of the Ubuntu Podcast!


In this week’s show:-

We’ll be back next week, when we’ll interview Martin Wimpress from the MATE desktop team and go through your feedback.

Please send your comments and suggestions to: podcast@ubuntu-uk.org
Join us on IRC in #uupc on Freenode
Leave a voicemail via phone: +44 (0) 203 298 1600, sip: podcast@sip.ubuntu-uk.org and skype: ubuntuukpodcast
Follow us on Twitter
Find our Facebook Fan Page
Follow us on Google+

Mark Shuttleworth: Enjoying G+

Planet Ubuntu - Thu, 2014-05-29 18:25

Plussers, I’m now at https://plus.google.com/+MarkShuttleworthCanonical/ and quite enjoying the service. That G+ persona is mostly software and cloud related (so short on beekeeping, kitesurfing or botanical pursuits).

This is probably the best way to keep in touch. I’m not on Facebook and every year Claire and I clear out the oddball fake-me’s that seem to crop up regularly there. To their credit, FB have been really good about sorting those out. I’m told I’m missing out but, srsly, TIME!

G+ seems really clean and well organised, the chatter is mostly thoughtful and good-natured.

Mark Shuttleworth: #9 – Canonical’s cloud-init saves you from image soup, on every cloud

Planet Ubuntu - Thu, 2014-05-29 16:22

This is a series of posts on reasons to choose Ubuntu for your public or private cloud work & play.

We run an extensive program to identify issues and features that make a difference to cloud users. One result of that program is that we pioneered dynamic image customisation and wrote cloud-init. I’ll tell the story of cloud-init as an illustration of the focus the Ubuntu team has on making your devops experience fantastic on any given cloud.

 

Ever struggled to find the “right” image to use on your favourite cloud? Ever wondered how you can tell if an image is safe to use, what keyloggers or other nasties might be installed? We set out to solve that problem a few years ago and the resulting code, cloud-init, is one of the more visible pieces Canonical designed and built, and very widely adopted.

Traditionally, people used image snapshots to build a portfolio of useful base images. You’d start with a bare OS, add some software and configuration, then snapshot the filesystem. You could use those snapshots to power up fresh images any time you need more machines “like this one”. And that process works pretty amazingly well. There are hundreds of thousands, perhaps millions, of such image snapshots scattered around the clouds today. It’s fantastic. Images for every possible occasion! It’s a disaster. Images with every possible type of problem.

The core issue is that an image is a giant binary blob that is virtually impossible to audit. Since it’s a snapshot of an image that was running, and to which anything might have been done, you will need to look in every nook and cranny to see if there is a potential problem. Can you afford to verify that every binary is unmodified? That every configuration file and every startup script is safe? No, you can’t. And for that reason, that whole catalogue of potential is a catalogue of potential risk. If you wanted to gather useful data sneakily, all you’d have to do is put up an image that advertises itself as being good for a particular purpose and convince people to run it.

There are other issues, even if you create the images yourself. Each image slowly gets out of date with regard to security updates. When you fire it up, you need to apply all the updates since the image was created, if you want a secure machine. Eventually, you’ll want to re-snapshot for a more up-to-date image. That requires administration overhead and coordination, and most people don’t do it.

That’s why we created cloud-init. When your virtual machine boots, cloud-init is run very early. It looks out for some information you send to the cloud along with the instruction to start a new machine, and it customises your machine at boot time. When you combine cloud-init with the regular fresh Ubuntu images we publish (roughly every two weeks for regular updates, and whenever a security update is published), you have a very clean and elegant way to get fresh images that do whatever you want. You design your image as a script which customises the vanilla, base image. And then you use cloud-init to run that script against a pristine, known-good standard image of Ubuntu. Et voila! You now have purpose-designed images of your own on demand, always built on a fresh, secure, trusted base image.
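As a minimal sketch of that workflow (the script and the launch command are hypothetical, and the exact flags differ between clouds and clients), the “image design” can be as simple as a shell script passed as user data:

#!/bin/sh
# Hypothetical user-data script: cloud-init runs this on first boot,
# turning the pristine Ubuntu base image into a purpose-built web server.
apt-get update
apt-get install -y nginx

On EC2, for example, you might launch it with aws ec2 run-instances --image-id <current Ubuntu image ID> --user-data file://setup.sh; other clouds expose an equivalent user-data field.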

Auditing your cloud infrastructure is now straightforward, because you have the DNA of that image in your script. This is devops thinking, turning repetitive manual processes (hacking and snapshotting) into code that can be shared and audited and improved. Your infrastructure DNA should live in a version control system that requires signed commits, so you know everything that has been done to get you where you are today. And all of that is enabled by cloud-init. And if you want to go one level deeper, check out Juju, which provides you with off-the-shelf scripts to customise and optimise that base image for hundreds of common workloads.

Jono Bacon: Last Day Today

Planet Ubuntu - Thu, 2014-05-29 15:34

Recently I announced I am stepping down as Ubuntu Community Manager at Canonical and moving to XPRIZE as Senior Director of Community. Today is my last day at Canonical.

I just want to say how touched I have been by the response. The comments, social media posts, emails, and calls from you have been so kind and supportive. You are all good people, and I am going to miss every single one of you.

The reason why I have devoted my life to understanding communities is that I believe communities bring out the best in people, and all of you are a perfect example of that. I cannot express just how much I appreciate it.

Over the course of the next few weeks my replacement will be sourced and announced, and in the interim my team (Daniel Holbach, Michael Hall, David Planella, Nicholas Skaggs, Alan Pope) will take over my duties. Everything has been transitioned over, and remember, the weekly Q&As will continue at 6pm UTC every Tuesday on Ubuntu On Air with my team filling in for me. As ever, any and all Ubuntu questions are welcome!

Of course, I will still be around. I am going to continue to be a member of the Ubuntu community and an avid Ubuntu user, tester, and supporter. I will continue to be on IRC, you can email me at jono@jonobacon.org, I will continue to do Bad Voltage, and I have a busy schedule at the Community Leadership Summit, OSCON, and more. I am also going to continue to have my own Q&A session every week where you can ask questions about my perspectives on Ubuntu, Canonical, community management, XPRIZE, and more; I will announce this soon.

Ubuntu has a tremendous future ahead of it, built on the hard work and passion of a global community. We are only just getting started with a new era of Ubuntu convergence and cloud orchestration and while I will miss being there in an official capacity, I am just thankful that I can continue to be along for the ride in the very community I played a part in building.

I now have a few weeks off and then my new adventure begins. Stay tuned.

Michael Hall: Community Donations Funding Report

Planet Ubuntu - Thu, 2014-05-29 13:54

Last year the main Ubuntu download page was changed to include a form for users to make a donation to one or more parts of Ubuntu, including to the community itself. Those donations made for “Community projects” were made available to members of our community who knew of ways to use them that would benefit the Ubuntu project.

Every dollar given out is an investment in Ubuntu and the community that built it. This includes sponsoring community events, sending community representatives to those events with booth supplies and giveaway items, purchasing hardware to improve development and testing, and more.

But these expenses don’t cover the time, energy, and talent that went along with them, without which the money itself would have been wasted.  Those contributions, made by the recipients of these funds, can’t be adequately documented in a financial report, so thank you to everybody who received funding for their significant and sustained contributions to Ubuntu.

As part of our commitment to openness and transparency we said that we would publish a report highlighting both the amount of donations made to this category, and how and where that money was being used. Linked below is the first of those reports.

View the Report

Jorge Castro: Real World Juju Troubleshooting

Planet Ubuntu - Thu, 2014-05-29 12:04

The following is a guest post by Ian Booth:

Juju has the ability to set up and change logging levels at a package level so that problems within a particular area can be better diagnosed. Recently there were some issues with containers (both kvm and lxc) started by the local provider transitioning from pending to started. We wanted to be able to inspect the sequence of commands Juju uses to start a container. Fortunately, the Juju code which starts lxc and kvm containers does log out the actual commands used to download the requisite image and start the container et al. The logging level used for this information is TRACE. By default, Juju machine agents log at the debug level, and this information can be seen when running ‘juju debug-log’. So unfortunately, this means the information we are interested in is not visible by default in the machine logs.

Luckily we can change the logging level used for particular packages so that the information is logged. This is done using the ‘logging-config’ attribute and can either be done at bootstrap:

juju bootstrap --logging-config=golxc=TRACE

or on a running system:

juju set-env logging-config=golxc=TRACE

As an aside, you can use:

juju get-env logging-config

to see what the current logging-config value is.

The logging-config value above turns on TRACE level dubugging for the golxc package, which is responsible for starting and managing lxc containers on behalf of Juju. For kvm containers, the package name is ‘kvm’.

We can use debug-log to look at the logging we have just enabled. From 1.19 onwards, filtering can be used to show just what we are interested in. Run this command in a separate terminal:

juju debug-log --include-module=golxc

Now we can deploy a charm or use add-machine to initiate a new container. We can see the commands Juju issues to download the image and start the container. This allows us to see what parameters are passed in to the various lxc commands, and we could even manually run the commands if we wish, in order to reproduce and examine in more detail how the commands behave. An example of the logging information when adding TRACE level debugging to lxc startup looks like:

machine-0: 2014-05-27 04:28:39 TRACE golxc.run.lxc-start golxc.go:439 run: lxc-start [--daemon -n ian-local-machine-1 -c /var/lib/juju/containers/ian-local-machine-1/console.log -o /var/lib/juju/containers/ian-local-machine-1/container.log -l DEBUG]
machine-0: 2014-05-27 04:28:40 TRACE golxc.run.lxc-ls golxc.go:439 run: lxc-ls [-1]
machine-0: 2014-05-27 04:28:41 TRACE golxc.run.lxc-ls golxc.go:451 run successful output: ian-local-machine-1
machine-0: 2014-05-27 04:28:41 TRACE golxc.run.lxc-info golxc.go:439 run: lxc-info [-n ian-local-machine-1]
machine-0: 2014-05-27 04:28:41 TRACE golxc.run.lxc-info golxc.go:451 run successful output: Name: ian-local-machine-1
machine-0: 2014-05-27 04:28:45 TRACE golxc.run.lxc-start golxc.go:448 run failed output: lxc-start: command get_cgroup failed to receive response
machine-0: 2014-05-27 04:29:45 TRACE golxc.run.lxc-ls golxc.go:439 run: lxc-ls [-1]
machine-0: 2014-05-27 04:29:45 TRACE golxc.run.lxc-ls golxc.go:451 run successful output: ian-local-machine-1
machine-0: 2014-05-27 04:29:45 TRACE golxc.run.lxc-info golxc.go:439 run: lxc-info [-n ian-local-machine-1]
machine-0: 2014-05-27 04:29:45 TRACE golxc.run.lxc-info golxc.go:451 run successful output: Name: ian-local-machine-1

You can see that when the various commands are logged, the format is cmd [arg arg arg]. So to run these manually, leave out the []. You can also see that there was a problem starting the lxc container due to a cgroups issue. This error is shown in juju status, but often it’s useful to see what happened leading up to the error.

In summary, Juju’s configurable logging output can be used to help diagnose issues and understand what Juju is doing under the covers. It offers the ability to turn on extra logging when required, and it can be turned off again when no longer required.

Canonical Design Team: Making ubuntu.com responsive: making our grid responsive (8)

Planet Ubuntu - Thu, 2014-05-29 08:26

This post is part of the series ‘Making ubuntu.com responsive‘.

A big part of converting our existing fixed-width desktop site to be responsive was to make sure we had a flexible grid that would flow seamlessly from small to large screens.

From the start, we decided that we were going to approach the move as simply as possible: we wanted the content our grid holds to become easier to read and browse on any screen size, but that doesn’t necessarily mean creating anything too complex.

Our existing fixed-width grid

Before the transition, our grid consisted of 12 columns with 20px gutters.

The width of each column could be variable, but we were working with 57px columns and 40px padding on each side of the main content container.

Our existing fixed-width desktop grid.

Inside that grid, content can be divided into one, two, three or four columns. In extreme cases, we can also use five columns, but we avoid this.

Our grid laid over an existing page of the site.

We also try keeping text content below eight columns, as it becomes harder to read otherwise.

Adding flexibility

When we first created our web style guide, we decided that, since we were getting our hands dirty with the refactoring of the CSS, we’d go ahead and convert our grid to use percentages instead of pixels.

The idea was that it would be useful for the future, while keeping everything looking almost exactly the same, since the content would still be all held within a fixed-width container for the time being.

A fixed-width container holding percentage-based columns.

This proved one of the best decisions we made and it made the transition to responsive much smoother than we initially thought.

The CSS

We used Gridinator to initially create our basic grid, removed any unnecessary rules and properties from the CSS it generated and added others we needed.

The settings we’ve input, in case you’re wondering, were:

  • Number of columns: 12
  • Column width: 57px
  • Column margins: 20px (technically, the gutters)
  • Container margin: 40px
  • Body font size: 16px
  • Option: percentage units

Screenshot of our Gridinator settings.

We could have created this CSS from scratch, but we found this tool saved us some precious time when creating all the variations we needed when using the grid.

You can have a peek into our current grid stylesheet now.
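As a rough sketch of what that conversion looks like (the class name here is hypothetical, not our actual stylesheet), a three-column span inside the 904px content area (12 × 57px columns plus 11 × 20px gutters) becomes:

/* before: fixed pixels */
.span-3 { float: left; width: 211px; margin-right: 20px; }

/* after: percentages of the 904px container; inside a fixed-width
   wrapper this renders identically, until the wrapper is removed */
.span-3 { float: left; width: 23.3407%; margin-right: 2.2124%; }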

First prototypes

The first two steps we took when creating the initial responsive prototype of ubuntu.com were:

  • Remove the fixed-width container and see how the content would flow freely and fluidly
  • Remove all the floats and positioning rules from the existing CSS and see how the content would flow in a linear manner, and test this on small screen devices

When we removed the fixed-width container, all the content became fluid. Obviously, there were no media queries behind this, so the content was free to grow to huge sizes, with really long line lengths, and equally the columns could shrink to unreasonably narrow sizes. But it was fluid, and we were happy!

Similarly, when checking the float-free prototype, even though there were quite a few issues with background images and custom, absolutely positioned elements, the results were promising.

Our first float-free prototype: some issues but overall a good try.

These tests showed us that, even though the bulk of the work was still ahead of us, a lot had been accomplished simply by converting our initial pixel-based columns into percentage-based ones. This is a test we think other teams could run before moving on to a complete revamp of their CSS.

Defining breakpoints

We didn’t want to tie our breakpoints to specific devices, but it was important that we understood what kinds of screen sizes people were using to visit our site.

To give you an idea, here are the top 10 screen sizes (not window size, mind) between 4 March and 3 April 2014, in pixels, on ubuntu.com:

  1. 1366 x 768: 26.15%
  2. 1920 x 1028: 15.4%
  3. 1280 x 800: 7.98%
  4. 1280 x 1024: 7.26%
  5. 1440 x 900: 6.29%
  6. 1024 x 768: 6.26%
  7. 1600 x 900: 5.51%
  8. 1680 x 1050: 4.94%
  9. 1920 x 1200: 2.73%
  10. 1024 x 600: 2.12%

The first small screen (360×640) comes up at number 17 in the list, followed by 320×568 and 320×480 at numbers 21 and 22, respectively.

We decided to test the common approach and try breakpoints at:

  • Under 768px: small screen styles
  • Between 768px and under 986px: medium screen styles
  • 986px and up: large screen styles

This worked well for our content and was in line with what we had seen in our analytics numbers.
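In CSS terms, the split looks something like this (a minimal sketch, not our production stylesheet):

/* small screens */
@media only screen and (max-width: 767px) { /* reduced scale, margins and padding */ }

/* medium screens */
@media only screen and (min-width: 768px) and (max-width: 985px) { /* in-between values */ }

/* large screens */
@media only screen and (min-width: 986px) { /* original desktop styles */ }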

At the small screen breakpoint we have:

  • Reduced typographic scale
  • Reduced margins and padding
  • Reduced main content container padding

At this scale, following what we had done in Ubuntu Resources (now Ubuntu Insights), we reused the grid from the Ubuntu phone designs, which divides the portrait screen into 40 squares, horizontally.

The phone grid.

At the medium screen breakpoint we have:

  • Increased (from small screen) typographic scale
  • Increased margins and padding
  • Increased main content container padding

At the large screen break point we have:

  • The original typographic scale
  • The original margins and padding
  • The original main content container padding

Comparison between small, medium and large screen spacing.

Ideas for the future

In the future, we would like to use more component-specific breakpoints. Some of our design components would work better if they reflowed or resized at different points than the rest of the site, so more granular control over specific elements would be useful. This usually depends on the type and amount of content the component holds.

What about you? We’d love to know how other people have tackled this issue, and what suggestions you have to create flexible and robust grids. Have you used any useful tools or techniques? Let us know in the comments section.

Reading list

Jono Bacon: Community Leadership Summit 2014, New Forum, OSCON, Training, and More!

Planet Ubuntu - Thu, 2014-05-29 04:51

As many of you will know, I organize an event every year called the Community Leadership Summit. The event brings together community leaders, organizers and managers and the projects and organizations that are interested in growing and empowering a strong community.

The event pulls together these leading minds in community management, relations and online collaboration to discuss, debate and continue to refine the art of building an effective and capable community.

This year’s event is shaping up to be incredible. We have a fantastic list of registered attendees and I want to thank our sponsors, O’Reilly, Citrix, and LinuxFund.

The event is taking place on 18 – 19 July 2014 in Portland, Oregon. I hope to see you all there, it is going to be a fantastic CLS this year!

I also have a few other things to share too…

Community Leadership Forum

My goal as a community manager is to help contribute to the growth of the community management profession. I started this journey by publishing The Art of Community and ensuring it is available freely as well as in stores. I then set up the Community Leadership Summit as just discussed, and now I am keen to put together a central community for community management and leadership discussion.

As such, I am proud to launch the new Community Leadership Forum for discussing topics that relate to community management, as well as topics for discussion at the Community Leadership Summit event each year. The forum is designed to be a great place for sharing and learning tips and techniques, getting to know other community leaders, and having fun.

The forum is powered by Discourse, so it is a pleasure to use, and I want to thank discoursehosting.com for generously providing free hosting for us.

Be sure to go and sign up!

Speaking Events and Training

I also wanted to share that I will be at OSCON this year and I will be giving a presentation called Dealing With Disrespect that is based upon my free book of the same name for managing complex communications.

This is the summary of the talk:

In this new presentation from Jono Bacon, author of The Art of Community, founder of the Community Leadership Summit, and Ubuntu Community Manager, he discusses how to process, interpret, and manage rude, disrespectful, and non-constructive feedback in communities so the constructive criticism gets through but the hate doesn’t.

The presentation covers the three different categories of communications, how we evaluate and assess different attributes in each communication, the factors that influence all of our communications, and how to put in place a set of golden rules for handling feedback and putting it in perspective.

If you personally or your community has suffered rudeness, trolling, and disrespect, this presentation is designed to help.

This presentation is on Wed 23rd July at 2.30pm in E144.

In addition to this I will also be providing a full day of community management training at OSCON on Sunday 20th July in D135.

I will also be providing full day community management training at LinuxCon North America and LinuxCon Europe. More details of this to follow soon in a dedicated blog post.

Lots of fun things ahead and I hope to see you there!

Jeremy Kerr: powerpc testing without powerpc hardware

Planet Ubuntu - Thu, 2014-05-29 04:42

Want to do a test boot of a powerpc machine, but don't have one handy? The qemu ppc64-softmmu emulator is currently working well with the pseries target, and it's fairly simple to get an emulated machine booting.

You'll need qemu version 1.6.0 or above. If this isn't provided by your distribution, you can build from upstream sources:

git clone git://git.qemu-project.org/qemu.git
cd qemu
./configure --target-list=ppc64-softmmu
make

the resulting qemu binary will be at ./ppc64-softmmu/qemu-system-ppc64.

To run a pseries-like machine, we just need the -M pseries option.

The default qemu device setup is okay, but I tend to configure my qemu a little differently: no video, and console on serial ports. This is what I generally do:

qemu-system-ppc64 -M pseries -m 1024 \
    -nodefaults -nographic -serial pty -monitor stdio \
    -netdev user,id=net0 -device virtio-net-pci,netdev=net0 \
    -kernel vmlinux -initrd initrd.img -append root=/dev/ram0

This will print out a message telling you which PTY is being used for the serial port:

[jk@pablo qemu]$ qemu-system-ppc64 -M pseries -m 1024 \
    -nodefaults -nographic -serial pty -monitor stdio \
    -netdev user,id=net0 -device virtio-net-pci,netdev=net0 \
    -kernel vmlinux -initrd initrd.img -append root=/dev/ram0
char device redirected to /dev/pts/11 (label serial0)
QEMU 1.6.0 monitor - type 'help' for more information
(qemu)

You can then interact with the emulated serial device from a separate terminal, using screen:

screen /dev/pts/11

In the screen session, the sequence ctrl+a, ctrl+k will exit. Typing quit at the (qemu) prompt will terminate the virtual machine.

emulated network devices

The qemu environment above uses virtio-based networking, which may not work if your kernel doesn't include a virtio-net driver. In this case, just replace the -device virtio-net-pci,netdev=net0 argument with:

-device spapr-vlan,netdev=net0

emulated block devices

The qemu example above doesn't define any block devices, so there's no persistent storage available. We can use either the spapr-vscsi (sPAPR virtual SCSI, the virtualised IBM hypervisor interface) or virtio-blk-pci (virtio interface) devices. This choice will depend on your kernel; if it includes drivers for virtio, I'd suggest using that.

For virtio, add something like:

-device virtio-blk-pci,drive=drive0 -drive id=drive0,if=none,file=/path/to/host/storage

For sPAPR virtual SCSI, use something like:

-device spapr-vscsi -device scsi-hd,drive=drive0 -drive id=drive0,if=none,file=/path/to/host/storage

Either of these will define a qemu drive with id "drive0", and attach it to backing storage at /path/to/host/storage - this can be a plain file or block device. If you'd like to define multiple guest block devices, you need to use new ids (drive1, drive2, …) for both -device and -drive arguments.
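
For instance, a sketch of a two-disk virtio setup (the backing file paths are placeholders), appended to the qemu command line above:

-device virtio-blk-pci,drive=drive0 -drive id=drive0,if=none,file=/path/to/disk0.img \
-device virtio-blk-pci,drive=drive1 -drive id=drive1,if=none,file=/path/to/disk1.img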

Lubuntu Blog: Lubuntu Screencast - Network Manager

Planet Ubuntu - Wed, 2014-05-28 10:03
Thanks to SilverLion for his work. This is the easiest way to explain how to fix the bug, for those who didn't read the Release Notes and want to know what's happening with their internet connections. A screencast is worth a thousand words!

Michael Hall: Calling for Ubuntu Online Summit sessions

Planet Ubuntu - Wed, 2014-05-28 08:00

A couple of months ago Jono announced the dates for the Ubuntu Online Summit, June 10th – 12th, and those dates are almost upon us now. The schedule is open, the track leads are on board; all we need now are sessions. And that's where you come in.

Ubuntu Online Summit is a change for us: we're mixing the previous online UDS events with our Open Week, Developer Week and User Days events to bring people from every part of our community together to celebrate, educate, and improve Ubuntu. So in addition to the usual planning sessions we had at UDS, we're also looking for presentations from our various community teams on the work they do, walk-throughs for new users learning how to use Ubuntu, and instructional sessions to help new distro developers, app developers, and cloud devops get the most out of Ubuntu as a platform.

What we need from you are sessions. They're open to anybody, on any topic, any way you want to do them. The only requirement is that you can start and run a Google+ OnAir Hangout, since those provide the live video streaming and recording for the event. There are two ways to propose a session: the first is to register a Blueprint in Launchpad, which is good for planning sessions that will result in work items; the second is to propose a session directly in Summit, which works for any kind of session. Instructions for both are available on the UDS Website.

There will be Track Leads available to help you get your session on the schedule, and to provide some technical support if you have trouble getting your session's hangout set up. When you propose your session (or create your Blueprint), try to pick the most appropriate track for it; that will help it get approved and scheduled faster.

Ubuntu Development

Many of the development-oriented tracks from UDS have been rolled into the Ubuntu Development track, so anything that would previously have been in Client, Core/Foundations or Cloud and Server will now be in this one track. The track leads come from all parts of Ubuntu development, so whatever your session's topic, there will be a lead who is familiar with it.

Track Leads:
  • Łukasz Zemczak
  • Steve Langasek
  • Leann Ogasawara
  • Antonio Rosales
  • Marc Deslauriers
Application Development

Introduced a few cycles back, the Application Development track will continue to focus on improving the Ubuntu SDK, tools and documentation we provide for app developers. We also want to introduce sessions focused on teaching app development using the SDK and the various platform services available, as well as deeper dives into specific parts of the Ubuntu UI Toolkit.

Track Leads:
  • Michael Hall
  • David Planella
  • Alan Pope
  • Zsombor Egri
  • Nekhelesh Ramananthan
Cloud DevOps

This is the counterpart to the Application Development track for those with an interest in the cloud. This track will have a dual focus: planning improvements to DevOps tools like Juju, and bringing DevOps practitioners up to speed on using them in their own cloud deployments. Learn how to write charms, create bundles, and manage everything in a variety of public and private clouds.

Track Leads:
  • Jorge Castro
  • Marco Ceppi
  • Patricia Gaughen
  • Jose Antonio Rey
Community

The community track has been a staple of UDS for as long as I can remember, and it's still here in the Ubuntu Online Summit. However, just like the other tracks, we're looking beyond just planning ways to improve community structure and processes. This time we also want sessions showing users how they can get involved in the Ubuntu community, what teams are available, and what tools they can use in the process.

Track Leads:
  • Daniel Holbach
  • Jose Antonio Rey
  • Laura Czajkowski
  • Svetlana Belkin
  • Pablo Rubianes
Users

This is a new track and one I’m very excited about. We are all users of Ubuntu, and whether we’ve been using it for a month or a decade, there are still things we can all learn about it. The focus of the Users track is to highlight ways to get the most out of Ubuntu, on your laptop, your phone or your server.  From detailed how-to sessions, to tips and tricks, and more, this track can provide something for everybody, regardless of skill level.

Track Leads:
  • Elizabeth Krumbach Joseph
  • Nicholas Skaggs
  • Valorie Zimmerman

So once again, it’s time to get those sessions in.  Visit this page to learn how, then start thinking of what you want to talk about during those three days.  Help the track leads out by finding more people to propose more sessions, and let’s get that schedule filled out. I look forward to seeing you all at our first ever Ubuntu Online Summit.

David Planella: A quickstart guide to the Ubuntu emulator

Planet Ubuntu - Wed, 2014-05-28 06:41

Following the initial announcement, the Ubuntu emulator is going to become a primary engineering platform for development. Quoting Alexander Sack, when ready, the goal is to

[...] start using the emulator for everything you usually would do on the phone. We really want to make the emulator a class A engineering platform for everyone

While the final emulator is still a work in progress, we're continually seeing improvements as all the pieces fall into place to make it a first-class citizen for development, both for the platform itself and for app developers. However, as it stands today, the emulator is already functional, so I've decided to prepare a quickstart guide to highlight the great work the Foundations and Phonedations teams (along with many other contributors) are doing to make it possible.

While you should consider this guide a preview, you can already use it to start getting familiar with the emulator for testing, platform development and writing apps.

Requirements

To install and run the Ubuntu emulator, you will need:

  • Ubuntu 14.04 or later (see installation notes for older versions)
  • 512MB of RAM dedicated to the emulator
  • 4GB of disk space
  • OpenGL-capable desktop drivers (most graphics drivers/cards are)
Installing the emulator

If you are using Ubuntu 14.04, installation is as easy as opening a terminal (press Ctrl+Alt+T) and running the following commands:

sudo add-apt-repository ppa:ubuntu-sdk-team/ppa && sudo apt-get update
sudo apt-get install ubuntu-emulator

Alternatively, if you are running an older stable release such as Ubuntu 12.04, you can install the emulator by manually downloading its packages first:


  1. Create a folder, for example ubuntu-emulator, in your home directory
  2. Go to the goget-ubuntu-touch packages page in Launchpad
  3. Scroll down to Trusty Tahr and click on the arrow to the left to expand it
  4. Scroll further to the bottom of the page and click on the ubuntu-emulator package corresponding to your architecture (i386 or amd64) to download it into the folder you created
  5. Now go to the Android packages page in Launchpad
  6. Scroll down to Trusty Tahr and click on the arrow to the left to expand it
  7. Scroll further to the bottom of the page and click on the ubuntu-emulator-runtime package to download it into the same folder
  8. Open a terminal with Ctrl+Alt+T
  9. Change to the directory where you downloaded the packages by writing the following command in the terminal: cd ~/ubuntu-emulator
  10. Then run this command to install the packages: sudo dpkg -i *.deb
  11. Once the installation is successful you can close the terminal and remove the download folder and its contents

Installation notes
  • Downloaded images are cached at ~/.cache/ubuntuimage – using the standard XDG_CACHE_DIR location.
  • Instances are stored at ~/.local/share/ubuntu-emulator – using the standard XDG_DATA_DIR location.
  • While an image upgrade feature is in the works, for now you can simply create an instance of a newer image over the previous one (see the sketch below).
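
A possible flow, sketched on the assumption that the destroy subcommand takes an instance name just like run does (check ubuntu-emulator --help for the exact syntax):

sudo ubuntu-emulator destroy myinstance   # assumed syntax: drop the old instance
sudo ubuntu-emulator create myinstance    # re-create it, pulling the latest image
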
Running the emulator

The ubuntu-emulator tool makes it really easy to manage instances and run the emulator. Typically, you'll open a terminal and run these commands the first time you create an instance (where myinstance is the name you've chosen for it):

sudo ubuntu-emulator create myinstance --arch=i386
ubuntu-emulator run myinstance

You can create as many instances as you need for different purposes. Once an instance has been created, you'll generally use the ubuntu-emulator run myinstance command to start an emulator session based on that instance.

Notice how in the command above the --arch parameter was specified to override the default architecture (armhf). Using the i386 arch will make the emulator run at a (much faster) native speed.

Other parameters you might want to experiment with are --scale=0.7 and --memory=720. In these examples, we're scaling down the UI to 70% of its original size (useful for smaller screens) and specifying a maximum of 720MB of memory for the emulator to use (on systems with memory to spare).
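
For instance, a possible invocation combining both options (assuming they are passed directly to the run subcommand):

ubuntu-emulator run myinstance --scale=0.7 --memory=720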

There are 3 main elements you’ll be interacting with when running the emulator:

  • The phone UI – this is the visual part of the emulator, where you can interact with the UI just as you would on a phone. You can use your mouse to simulate taps and slides. Bonus points if you can recognize the phone model the UI skin is based on ;)
  • The remote session on the terminal – upon starting the emulator, a terminal is also launched alongside it. Use the phablet username and the same password to log in to an interactive ADB session on the emulator. You can also launch other terminal sessions using other communication protocols – see the link at the end of this guide for more details.
  • The ubuntu-emulator tool – with this CLI tool, you can manage the lifetime and runtime of Ubuntu images. Common subcommands of ubuntu-emulator include create (to create new instances), destroy (to destroy existing instances), run (as we’ve already seen, to run instances), snapshot (to create and restore snapshots of a given point in time) and more. Use ubuntu-emulator --help to learn about these commands and ubuntu-emulator command --help to learn more about a particular command and its options.
Runtime notes
  • Make sure you’ve got enough space to install the emulator and create new instances, otherwise the operation will fail (or take a long time) without warning.
  • At this time, the emulator takes a while to load for the first time. During that time, you’ll see a black screen inside the phone skin. Just wait a bit until it’s finished loading and the welcome screen appears.
  • By default the latest built image from the devel-proposed channel is used. This can be changed during creation with the --channel and --revision options.
  • If your host has a network connection, the emulator will use that transparently, even though the network indicator might show otherwise.
  • To talk to the emulator, you can use standard adb; the emulator should appear in the output of the adb devices command, as shown in the sketch below.
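
For example, a minimal sketch (the serial name varies; emulator-5554 is just illustrative):

adb devices    # the emulator should appear in this list, e.g. as emulator-5554
adb shell      # open a shell session on the running emulator
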
Learn more and contribute

I hope this guide has whetted your appetite to start testing the emulator! You can also contribute to making the emulator a first-class target for Ubuntu development. The easiest way is to install it and give it a go. If something is not working, you can then file a bug.

If you want to fix a bug yourself or contribute to code, the best thing is to ask the developers about how to get started by subscribing to the Ubuntu phone mailing list.

If you want to learn more about the emulator, including how to create instance snapshots and other cool features, head out to the Ubuntu Emulator wiki page.

And next… support for the tablet form factor and SDK integration. Can’t wait for those features to land!

The post A quickstart guide to the Ubuntu emulator appeared first on David Planella.
