Planet Ubuntu
Planet Ubuntu - http://planet.ubuntu.com/

Michael Hall: Community Donations Funding Report

Thu, 2014-05-29 13:54

Last year the main Ubuntu download page was changed to include a form for users to make a donation to one or more parts of Ubuntu, including to the community itself. Those donations made for “Community projects” were made available to members of our community who knew of ways to use them that would benefit the Ubuntu project.

Every dollar given out is an investment in Ubuntu and the community that built it. This includes sponsoring community events, sending community representatives to those events with booth supplies and giveaway items, purchasing hardware to improve development and testing, and more.

But these figures don’t capture the time, energy, and talent that went along with them, without which the money itself would have been wasted. Those contributions, made by the recipients of these funds, can’t be adequately documented in a financial report, so thank you to everybody who received funding for their significant and sustained contributions to Ubuntu.

As part of our commitment to openness and transparency we said that we would publish a report highlighting both the amount of donations made to this category, and how and where that money was being used. Linked below is the first of those reports.

View the Report

Jorge Castro: Real World Juju Troubleshooting

Thu, 2014-05-29 12:04

The following is a guest post by Ian Booth:

Juju has the ability to set up and change logging levels at a package level so that problems within a particular area can be better diagnosed. Recently there were some issues with containers (both kvm and lxc) started by the local provider transitioning from pending to started. We wanted to be able to inspect the sequence of commands Juju uses to start a container. Fortunately, the Juju code which starts lxc and kvm containers does log the actual commands used to download the requisite image, start the container, and so on. The logging level used for this information is TRACE. By default, Juju machine agents log at the DEBUG level, which is what you see when running ‘juju debug-log’. So unfortunately, the information we are interested in is not visible by default in the machine logs.

Luckily we can change the logging level used for particular packages so that the information is logged. This is done using the ‘logging-config’ attribute and can either be done at bootstrap:

juju bootstrap --logging-config=golxc=TRACE

or on a running system:

juju set-env logging-config=golxc=TRACE

As an aside, you can use:

juju get-env logging-config

to see what the current logging-config value is.

The logging-config value above turns on TRACE level logging for the golxc package, which is responsible for starting and managing lxc containers on behalf of Juju. For kvm containers, the package name is ‘kvm’.

We can use debug-log to look at the logging we have just enabled. As of 1.19 onwards, filtering can be used to just show what we are interested in. Run this command in a separate terminal:

juju debug-log --include-module=golxc

Now we can deploy a charm or use add-machine to initiate a new container. We can see the commands Juju issues to download the image and start the container. This allows us to see what parameters are passed in to the various lxc commands, and we could even manually run the commands if we wish, in order to reproduce and examine in more detail how the commands behave. An example of the logging information when adding TRACE level debugging to lxc startup looks like:

machine-0: 2014-05-27 04:28:39 TRACE golxc.run.lxc-start golxc.go:439 run: lxc-start [--daemon -n ian-local-machine-1 -c /var/lib/juju/containers/ian-local-machine-1/console.log -o /var/lib/juju/containers/ian-local-machine-1/container.log -l DEBUG]
machine-0: 2014-05-27 04:28:40 TRACE golxc.run.lxc-ls golxc.go:439 run: lxc-ls [-1]
machine-0: 2014-05-27 04:28:41 TRACE golxc.run.lxc-ls golxc.go:451 run successful output: ian-local-machine-1
machine-0: 2014-05-27 04:28:41 TRACE golxc.run.lxc-info golxc.go:439 run: lxc-info [-n ian-local-machine-1]
machine-0: 2014-05-27 04:28:41 TRACE golxc.run.lxc-info golxc.go:451 run successful output: Name: ian-local-machine-1
machine-0: 2014-05-27 04:28:45 TRACE golxc.run.lxc-start golxc.go:448 run failed output: lxc-start: command get_cgroup failed to receive response
machine-0: 2014-05-27 04:29:45 TRACE golxc.run.lxc-ls golxc.go:439 run: lxc-ls [-1]
machine-0: 2014-05-27 04:29:45 TRACE golxc.run.lxc-ls golxc.go:451 run successful output: ian-local-machine-1
machine-0: 2014-05-27 04:29:45 TRACE golxc.run.lxc-info golxc.go:439 run: lxc-info [-n ian-local-machine-1]
machine-0: 2014-05-27 04:29:45 TRACE golxc.run.lxc-info golxc.go:451 run successful output: Name: ian-local-machine-1

You can see that when the various commands are logged, the format is cmd [arg arg arg]. So to run these manually, leave out the []. You can also see that there was a problem starting the lxc container due to a cgroups issue. This error is shown in juju status, but often it’s useful to see what happens leading up to the error occurring.
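For example, a logged invocation can be turned back into a runnable command by stripping the brackets (the log line below is abridged from the output above):

```shell
# A logged line has the form: cmd [arg arg arg].
# Strip the square brackets to get a command you can run manually
# (log line abridged from the example above):
logged='lxc-start [--daemon -n ian-local-machine-1 -l DEBUG]'
cmd=$(printf '%s' "$logged" | sed 's/\[//; s/\]//')
echo "$cmd"
# prints: lxc-start --daemon -n ian-local-machine-1 -l DEBUG
```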

In summary, Juju’s configurable logging output can be used to help diagnose issues and understand what Juju is doing under the covers. It offers the ability to turn on extra logging when required, and it can be turned off again when no longer required.

Canonical Design Team: Making ubuntu.com responsive: making our grid responsive (8)

Thu, 2014-05-29 08:26

This post is part of the series ‘Making ubuntu.com responsive‘.

A big part of converting our existing fixed-width desktop site to be responsive was to make sure we had a flexible grid that would flow seamlessly from small to large screens.

From the start, we decided that we were going to approach the move as simply as possible: we wanted the content our grid holds to become easier to read and browse on any screen size, but that doesn’t necessarily mean creating anything too complex.

Our existing fixed-width grid

Before the transition, our grid consisted of 12 columns with 20px gutters.

The width of each column could be variable, but we were working with 57px columns and 40px padding on each side of the main content container.

Our existing fixed-width desktop grid.

Inside that grid, content can be divided into one, two, three or four columns. In extreme cases, we can also use five columns, but we avoid this.

Our grid laid over an existing page of the site.

We also try to keep text content to eight columns or fewer, as it becomes harder to read otherwise.

Adding flexibility

When we first created our web style guide, we decided that, since we were getting our hands dirty with the refactoring of the CSS, we’d go ahead and convert our grid to use percentages instead of pixels.

The idea was that it would be useful for the future, while keeping everything looking almost exactly the same, since the content would still be all held within a fixed-width container for the time being.

A fixed-width container holding percentage-based columns.

This proved one of the best decisions we made and it made the transition to responsive much smoother than we initially thought.

The CSS

We used Gridinator to initially create our basic grid, removed any unnecessary rules and properties from the CSS it generated and added others we needed.

The settings we’ve input, in case you’re wondering, were:

  • Number of columns: 12
  • Column width: 57px
  • Column margins: 20px (technically, the gutters)
  • Container margin: 40px
  • Body font size: 16px
  • Option: percentage units

Screenshot of our Gridinator settings.

We could have created this CSS from scratch, but we found this tool saved us some precious time when creating all the variations we needed when using the grid.
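The conversion Gridinator performs is simple enough to check by hand: each pixel measurement is divided by the content width. Assuming percentages are taken relative to the content area implied by the settings above (12 × 57px columns plus 11 × 20px gutters = 904px), a quick sketch:

```shell
# Column and gutter widths as percentages of the content area
# (12*57px + 11*20px = 904px; this figure is an assumption based
# on the Gridinator settings listed above).
awk 'BEGIN {
  content = 12 * 57 + 11 * 20          # 904px
  printf "column: %.3f%%\n", 57 / content * 100
  printf "gutter: %.3f%%\n", 20 / content * 100
}'
```

This prints column: 6.305% and gutter: 2.212%, the kind of values that end up in the percentage-based stylesheet.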

You can have a peek into our current grid stylesheet now.

First prototypes

The first two steps we took when creating the initial responsive prototype of ubuntu.com were:

  • Remove the fixed-width container and see how the content would flow freely and fluidly
  • Remove all the floats and positioning rules from the existing CSS and see how the content would flow in a linear manner, and test this on small screen devices

When we removed the fixed-width container, all the content became fluid. Obviously, there were no media queries behind this, so the content was free to grow to huge sizes, with really long line lengths, and equally the columns could shrink to unreasonably narrow sizes. But it was fluid, and we were happy!

Similarly, when checking the float-free prototype, even though there were quite a few issues with background images and custom, absolutely positioned elements, the results were promising.

Our first float-free prototype: some issues but overall a good try.

These tests showed us that, even though the bulk of the work was still ahead of us, a lot had been accomplished by simply making the effort to convert our initial pixel-based columns into percentage-based ones. This is a test we think other teams could run before moving on to a complete revamp of their CSS.

Defining breakpoints

We didn’t want to relate our breakpoints to specific devices, but it was important that we understood what kind of screen sizes people were using to visit our site on.

To give you an idea, here are the top 10 screen sizes (not window size, mind) between 4 March and 3 April 2014, in pixels, on ubuntu.com:

  1. 1366 x 768: 26.15%
  2. 1920 x 1080: 15.4%
  3. 1280 x 800: 7.98%
  4. 1280 x 1024: 7.26%
  5. 1440 x 900: 6.29%
  6. 1024 x 768: 6.26%
  7. 1600 x 900: 5.51%
  8. 1680 x 1050: 4.94%
  9. 1920 x 1200: 2.73%
  10. 1024 x 600: 2.12%

The first small screen (360×640) comes up at number 17 in the list, followed by 320×568 and 320×480 at numbers 21 and 22, respectively.

We decided to test the common approach and try breakpoints at:

  • Under 768px: small screen styles
  • From 768px up to 986px: medium screen styles
  • 986px and up: large screen styles

This worked well for our content and was in line with what we had seen in our analytics numbers.
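The cut-off logic above can be sketched as a quick check (widths below are examples, not site data):

```shell
# Classify a viewport width into the three breakpoints above.
classify() {
  if [ "$1" -lt 768 ]; then echo small
  elif [ "$1" -lt 986 ]; then echo medium
  else echo large
  fi
}
classify 360    # prints: small
classify 800    # prints: medium
classify 1366   # prints: large
```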

At the small screen breakpoint we have:

  • Reduced typographic scale
  • Reduced margins and padding
  • Reduced main content container padding

At this scale, following what we had done in Ubuntu Resources (now Ubuntu Insights), we reused the grid from the Ubuntu phone designs, which divides the portrait screen into 40 squares, horizontally.

The phone grid.

At the medium screen breakpoint we have:

  • Increased (from small screen) typographic scale
  • Increased margins and padding
  • Increased main content container padding

At the large screen break point we have:

  • The original typographic scale
  • The original margins and padding
  • The original main content container padding

Comparison between small, medium and large screen spacing.

Ideas for the future

In the future, we would like to use more component-specific breakpoints. Some of our design components would work better if they reflowed or resized at different points than the rest of the site, so more granular control over specific elements would be useful. This usually depends on the type and amount of content the component holds.

What about you? We’d love to know how other people have tackled this issue, and what suggestions you have to create flexible and robust grids. Have you used any useful tools or techniques? Let us know in the comments section.

Reading list

Jono Bacon: Community Leadership Summit 2014, New Forum, OSCON, Training, and More!

Thu, 2014-05-29 04:51

As many of you will know, I organize an event every year called the Community Leadership Summit. The event brings together community leaders, organizers and managers and the projects and organizations that are interested in growing and empowering a strong community.

The event pulls together these leading minds in community management, relations and online collaboration to discuss, debate and continue to refine the art of building an effective and capable community.

This year’s event is shaping up to be incredible. We have a fantastic list of registered attendees and I want to thank our sponsors, O’Reilly, Citrix, and LinuxFund.

The event is taking place on 18 – 19 July 2014 in Portland, Oregon. I hope to see you all there, it is going to be a fantastic CLS this year!

I also have a few other things to share too…

Community Leadership Forum

My goal as a community manager is to help contribute to the growth of the community management profession. I started this journey by publishing The Art of Community and ensuring it is available freely as well as in stores. I then set up the Community Leadership Summit as just discussed, and now I am keen to put together a central community for community management and leadership discussion.

As such, I am proud to launch the new Community Leadership Forum for discussing topics that relate to community management, as well as topics for discussion at the Community Leadership Summit event each year. The forum is designed to be a great place for sharing and learning tips and techniques, getting to know other community leaders, and having fun.

The forum is powered by Discourse, so it is a pleasure to use, and I want to thank discoursehosting.com for generously providing free hosting for us.

Be sure to go and sign up!

Speaking Events and Training

I also wanted to share that I will be at OSCON this year and I will be giving a presentation called Dealing With Disrespect that is based upon my free book of the same name for managing complex communications.

This is the summary of the talk:

In this new presentation from Jono Bacon, author of The Art of Community, founder of the Community Leadership Summit, and Ubuntu Community Manager, he discusses how to process, interpret, and manage rude, disrespectful, and non-constructive feedback in communities so the constructive criticism gets through but the hate doesn’t.

The presentation covers the three different categories of communications, how we evaluate and assess different attributes in each communication, the factors that influence all of our communications, and how to put in place a set of golden rules for handling feedback and putting it in perspective.

If you personally or your community has suffered rudeness, trolling, and disrespect, this presentation is designed to help.

This presentation is on Wed 23rd July at 2.30pm in E144.

In addition to this I will also be providing a full day of community management training at OSCON on Sunday 20th July in D135.

I will also be providing full day community management training at LinuxCon North America and LinuxCon Europe. More details of this to follow soon in a dedicated blog post.

Lots of fun things ahead and I hope to see you there!

Jeremy Kerr: powerpc testing without powerpc hardware

Thu, 2014-05-29 04:42

Want to do a test boot of a powerpc machine, but don't have one handy? The qemu ppc64-softmmu emulator is currently working well with the pseries target, and it's fairly simple to get an emulated machine booting.

You'll need qemu version 1.6.0 or above. If this isn't provided by your distribution, you can build from upstream sources:

git clone git://git.qemu-project.org/qemu.git
cd qemu
./configure --target-list=ppc64-softmmu
make

The resulting qemu binary will be at ./ppc64-softmmu/qemu-system-ppc64.

To run a pseries-like machine, we just need the -M pseries option.

The default qemu device setup is okay, but I tend to configure my qemu a little differently: no video, and console on serial ports. This is what I generally do:

qemu-system-ppc64 -M pseries -m 1024 \
 -nodefaults -nographic -serial pty -monitor stdio \
 -netdev user,id=net0 -device virtio-net-pci,netdev=net0 \
 -kernel vmlinux -initrd initrd.img -append root=/dev/ram0

This will print out a message telling you which PTY is being used for the serial port:

[jk@pablo qemu]$ qemu-system-ppc64 -M pseries -m 1024 \
 -nodefaults -nographic -serial pty -monitor stdio \
 -netdev user,id=net0 -device virtio-net-pci,netdev=net0 \
 -kernel vmlinux -initrd initrd.img -append root=/dev/ram0
char device redirected to /dev/pts/11 (label serial0)
QEMU 1.6.0 monitor - type 'help' for more information
(qemu)

You can then interact with the emulated serial device from a separate terminal, using screen:

screen /dev/pts/11

In the screen session, the sequence ctrl+a, ctrl+k will exit. Typing quit at the (qemu) prompt will terminate the virtual machine.

emulated network devices

The qemu environment above uses virtio-based networking, which may not work if your kernel doesn't include a virtio-net driver. In this case, just replace the -device virtio-net-pci,netdev=net0 argument with:

-device spapr-vlan,netdev=net0

emulated block devices

The qemu example above doesn't define any block devices, so there's no persistent storage available. We can use either the spapr-vscsi (sPAPR virtual SCSI, the virtualised IBM hypervisor interface) or virtio-blk-pci (virtio interface) devices. This choice will depend on your kernel; if it includes drivers for virtio, I'd suggest using that.

For virtio, add something like:

-device virtio-blk-pci,drive=drive0 -drive id=drive0,if=none,file=/path/to/host/storage

For sPAPR virtual SCSI, use something like:

-device spapr-vscsi -device scsi-hd,drive=drive0 -drive id=drive0,if=none,file=/path/to/host/storage

Either of these will define a qemu drive with id "drive0", and attach it to backing storage at /path/to/host/storage - this can be a plain file or block device. If you'd like to define multiple guest block devices, you need to use new ids (drive1, drive2, …) for both -device and -drive arguments.
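Since a plain file works as backing storage, one way to set this up is to create a sparse file first (the path below is just an example):

```shell
# Create a 1GiB sparse file to use as guest storage (example path):
truncate -s 1G /tmp/drive0.img
stat -c '%s bytes' /tmp/drive0.img
# prints: 1073741824 bytes

# It would then be attached with the virtio options shown above, e.g.:
#   -device virtio-blk-pci,drive=drive0 \
#   -drive id=drive0,if=none,file=/tmp/drive0.img
```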

Lubuntu Blog: Lubuntu Screencast - Network Manager

Wed, 2014-05-28 10:03
Thanks to SilverLion for his work. This is the easiest way to explain how to fix a bug, for those who didn't read the Release Notes and want to know what's happening with their internet connections. A screencast is worth a thousand words!

Michael Hall: Calling for Ubuntu Online Summit sessions

Wed, 2014-05-28 08:00

A couple of months ago Jono announced the dates for the Ubuntu Online Summit, June 10th – 12th, and those dates are almost upon us now.  The schedule is open, the track leads are on board, all we need now are sessions.  And that’s where you come in.

Ubuntu Online Summit is a change for us: we’re trying to mix the previous online UDS events with our Open Week, Developer Week and User Days events, to bring people from every part of our community together to celebrate, educate, and improve Ubuntu. So in addition to the usual planning sessions we had at UDS, we’re also looking for presentations from our various community teams on the work they do, walk-throughs for new users learning how to use Ubuntu, as well as instructional sessions to help new distro developers, app developers, and cloud devops get the most out of it as a platform.

What we need from you are sessions.  It’s open to anybody, on any topic, any way you want to do it.  The only requirement is that you can start and run a Google+ OnAir Hangout, since those are what provide the live video streaming and recording for the event.  There are two ways you can propose a session: the first is to register a Blueprint in Launchpad, which is good for planning sessions that will result in work items; the second is to propose a session directly in Summit, which is good for any kind of session.  Instructions for how to do both are available on the UDS Website.

There will be Track Leads available to help you get your session on the schedule, and provide some technical support if you have trouble getting your session’s hangout setup. When you propose your session (or create your Blueprint), try to pick the most appropriate track for it, that will help it get approved and scheduled faster.

Ubuntu Development

Many of the development-oriented tracks from UDS have been rolled into the Ubuntu Development track. So anything that would previously have been in Client, Core/Foundations or Cloud and Server will be in this one track now. The track leads come from all parts of Ubuntu development, so whatever your session’s topic there will be a lead there who will be familiar with it.

Track Leads:
  • Łukasz Zemczak
  • Steve Langasek
  • Leann Ogasawara
  • Antonio Rosales
  • Marc Deslauriers
Application Development

Introduced a few cycles back, the Application Development track will continue to have a focus on improving the Ubuntu SDK, tools and documentation we provide for app developers.  We also want to introduce sessions focused on teaching app development using the SDK, the various platform services available, as well as taking a deeper dive into specific parts of the Ubuntu UI Toolkit.

Track Leads:
  • Michael Hall
  • David Planella
  • Alan Pope
  • Zsombor Egri
  • Nekhelesh Ramananthan
Cloud DevOps

This is the counterpart of the Application Development track for those with an interest in the cloud.  This track will have a dual focus on planning improvements to the DevOps tools like Juju, as well as bringing DevOps up to speed with how to use them in their own cloud deployments.  Learn how to write charms, create bundles, and manage everything in a variety of public and private clouds.

Track Leads:
  • Jorge Castro
  • Marco Ceppi
  • Patricia Gaughen
  • Jose Antonio Rey
Community

The community track has been a staple of UDS for as long as I can remember, and it’s still here in the Ubuntu Online Summit.  However, just like the other tracks, we’re looking beyond just planning ways to improve the community structure and processes.  This time we also want to have sessions showing users how they can get involved in the Ubuntu community, what teams are available, and what tools they can use in the process.

Track Leads:
  • Daniel Holbach
  • Jose Antonio Rey
  • Laura Czajkowski
  • Svetlana Belkin
  • Pablo Rubianes
Users

This is a new track and one I’m very excited about. We are all users of Ubuntu, and whether we’ve been using it for a month or a decade, there are still things we can all learn about it. The focus of the Users track is to highlight ways to get the most out of Ubuntu, on your laptop, your phone or your server.  From detailed how-to sessions, to tips and tricks, and more, this track can provide something for everybody, regardless of skill level.

Track Leads:
  • Elizabeth Krumbach Joseph
  • Nicholas Skaggs
  • Valorie Zimmerman

So once again, it’s time to get those sessions in.  Visit this page to learn how, then start thinking of what you want to talk about during those three days.  Help the track leads out by finding more people to propose more sessions, and let’s get that schedule filled out. I look forward to seeing you all at our first ever Ubuntu Online Summit.

David Planella: A quickstart guide to the Ubuntu emulator

Wed, 2014-05-28 06:41

Following the initial announcement, the Ubuntu emulator is going to become a primary Engineering platform for development. Quoting Alexander Sack, when ready, the goal is to

[...] start using the emulator for everything you usually would do on the phone. We really want to make the emulator a class A engineering platform for everyone

While the final emulator is still a work in progress, we’re continually seeing improvements as all the pieces come together to make it a first-class citizen for development, both for the platform itself and for app developers. However, as it stands today, the emulator is already functional, so I’ve decided to prepare a quickstart guide to highlight the great work the Foundations and Phonedations teams (along with many other contributors) are producing to make it possible.

While you should consider this guide a preview, you can already use it to start getting familiar with the emulator for testing, platform development and writing apps.

Requirements

To install and run the Ubuntu emulator, you will need:

  • Ubuntu 14.04 or later (see installation notes for older versions)
  • 512MB of RAM dedicated to the emulator
  • 4GB of disk space
  • OpenGL-capable desktop drivers (most graphics drivers/cards are)
Installing the emulator

If you are using Ubuntu 14.04, installation is as easy as opening a terminal (Ctrl+Alt+T) and running these commands:

sudo add-apt-repository ppa:ubuntu-sdk-team/ppa && sudo apt-get update
sudo apt-get install ubuntu-emulator

Alternatively, if you are running an older stable release such as Ubuntu 12.04, you can install the emulator by manually downloading its packages first:

Show me how

  1. Create a folder in your home directory to hold the downloaded packages
  2. Go to the goget-ubuntu-touch packages page in Launchpad
  3. Scroll down to Trusty Tahr and click on the arrow to the left to expand it
  4. Scroll further to the bottom of the page and click on the ubuntu-emulator package corresponding to your architecture (i386 or amd64) to download it into the folder you created
  5. Now go to the Android packages page in Launchpad
  6. Scroll down to Trusty Tahr and click on the arrow to the left to expand it
  7. Scroll further to the bottom of the page and click on the corresponding package to download it into the same folder
  8. Open a terminal with Ctrl+Alt+T
  9. Change to the directory where you downloaded the packages
  10. Then install the packages with dpkg (for example, sudo dpkg -i *.deb)
  11. Once the installation is successful you can close the terminal and remove the folder and its contents

Installation notes
  • Downloaded images are cached at ~/.cache/ubuntuimage –using the standard XDG_CACHE_DIR location.
  • Instances are stored at ~/.local/share/ubuntu-emulator –using the standard XDG_DATA_DIR location.
  • While an image upgrade feature is in the works, for now you can simply create an instance of a newer image over the previous one.
Running the emulator

The ubuntu-emulator tool makes it really easy to manage instances and run the emulator. Typically, you’ll be opening a terminal and running these commands the first time you create an instance (where myinstance is the name you’ve chosen for it):

sudo ubuntu-emulator create myinstance --arch=i386
ubuntu-emulator run myinstance

You can create any instances you need for different purposes. And once the instance has been created, you’ll be generally using the ubuntu-emulator run myinstance command to start an emulator session based on that instance.

Notice how in the command above the --arch parameter was specified to override the default architecture (armhf). Using the i386 arch will make the emulator run at a (much faster) native speed.

Other parameters you might want to experiment with are also: --scale=0.7 and --memory=720. In these examples, we’re scaling down the UI to be 70% of the original size (useful for smaller screens) and specifying a maximum of 720MB for the emulator to use (on systems with memory to spare).

There are 3 main elements you’ll be interacting with when running the emulator:

  • The phone UI – this is the visual part of the emulator, where you can interact with the UI in the same way you’d do it with a phone. You can use your mouse to simulate taps and slides. Bonus points if you can recognize the phone model where the UI is in ;)
  • The remote session on the terminal – upon starting the emulator, a terminal will also be launched alongside. Use the phablet username and the same password to log in to an interactive ADB session on the emulator. You can also launch other terminal sessions using other communication protocols –see the link at the end of this guide for more details.
  • The ubuntu-emulator tool – with this CLI tool, you can manage the lifetime and runtime of Ubuntu images. Common subcommands of ubuntu-emulator include create (to create new instances), destroy (to destroy existing instances), run (as we’ve already seen, to run instances), snapshot (to create and restore snapshots of a given point in time) and more. Use ubuntu-emulator --help to learn about these commands and ubuntu-emulator command --help to learn more about a particular command and its options.
Runtime notes
  • Make sure you’ve got enough space to install the emulator and create new instances, otherwise the operation will fail (or take a long time) without warning.
  • At this time, the emulator takes a while to load for the first time. During that time, you’ll see a black screen inside the phone skin. Just wait a bit until it’s finished loading and the welcome screen appears.
  • By default the latest built image from the devel-proposed channel is used. This can be changed during creation with the --channel and --revision options.
  • If your host has a network connection, the emulator will use that transparently, even though the network indicator might show otherwise.
  • To talk to the emulator, you can use standard adb. The emulator should appear under the list of the adb devices command.
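As a quick sanity check for the first runtime note, you can verify free space before creating an instance. This is only a sketch; the 4GB threshold comes from the requirements listed above:

```shell
# Warn if the home filesystem has less than ~4GB free
# (the emulator's stated disk requirement).
avail_kb=$(df --output=avail "$HOME" | tail -1 | tr -d ' ')
if [ "$avail_kb" -ge $((4 * 1024 * 1024)) ]; then
  echo "enough free space for an emulator instance"
else
  echo "low disk space: free up room before running ubuntu-emulator create"
fi
```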
Learn more and contribute

I hope this guide has whetted your appetite to start testing the emulator! You can also contribute to making the emulator a first-class target for Ubuntu development. The easiest way is to install it and give it a go. If something is not working you can then file a bug.

If you want to fix a bug yourself or contribute to code, the best thing is to ask the developers about how to get started by subscribing to the Ubuntu phone mailing list.

If you want to learn more about the emulator, including how to create instance snapshots and other cool features, head out to the Ubuntu Emulator wiki page.

And next… support for the tablet form factor and SDK integration. Can’t wait for those features to land!

The post A quickstart guide to the Ubuntu emulator appeared first on David Planella.

Lubuntu Blog: Box 0.46

Tue, 2014-05-27 16:15
Just another update. Box now reaches 0.46 inside the Lubuntu Trusty Tahr artwork package, or as a standalone for your preferred Linux distro. Lots more apps and other things were added. If you want to use it, just add the Daily PPA to your sources and voilà, or download it directly from the Artwork section.

Ubuntu Server blog: Meeting Minutes: May 20th

Tue, 2014-05-27 15:31
Agenda
  • Review ACTION points from previous meeting
  • T Development
  • Server & Cloud Bugs (caribou)
  • Weekly Updates & Questions for the QA Team (psivaa)
  • Weekly Updates & Questions for the Kernel Team (smb, sforshee)
  • Ubuntu Server Team Events
  • Open Discussion
  • Announce next meeting date, time and chair
Minutes

Pretty straightforward meeting given 14.04 release is still pretty fresh in our minds, and ODS was last week. Great demo at ODS! Utopic Unicorn is underway (https://wiki.ubuntu.com/UtopicUnicorn/ReleaseSchedule), with the first Alpha release scheduled for June 26th, and vUDS on June 12th. Server team blueprints are also in progress, with the topic blueprint (https://blueprints.launchpad.net/ubuntu/+spec/topic-u-server) already created and dependencies being posted to it.

The Bugs we covered were:

  • Launchpad bug 1319555 in ec2-api-tools (Ubuntu Utopic) “update out-dated ec2-api-tools for 12.04” [High,New]
  • Launchpad bug 1315052 in lxc (Ubuntu Utopic) “lxc-attach from a different login session fails” [High,Triaged]
  • Launchpad bug 1317587 in clamav (Ubuntu Utopic) “ClamAV 0.98.1 is Outdated” [High,In progress]
  • Launchpad bug 1317811 in linux (Ubuntu) “Dropped packets on EC2, “xen_netfront: xennet: skb rides the rocket: x slots”" [Medium,In progress]
Next Meeting

Next meeting will be on Tuesday, May 27th at 16:00 UTC in #ubuntu-meeting.

Additional logs @ https://wiki.ubuntu.com/MeetingLogs/Server/20140520

Valorie Zimmerman: Problem redimensionated into challenge, thence into success

Tue, 2014-05-27 00:33
Problem: our Dish DVR died. And it died before I backed up the content, and that content was hundreds of old Doctor Who episodes I was collecting until I could watch in some kind of decent order.

Since it would briefly come back to life if allowed to rest for awhile unplugged, I bought a backup hard drive of the proper sort (with its own power supply), and plugged it in. However, the DVR died before backing up commenced.

This DVR is leased, so Dish sent along a new one, asking that I return the broken one within 10 days. Yesterday a technician showed up to be sure that everything was working, since the new machine had taken quite long to get a watchable image. I asked him if it was possible to move the old hard drive into the new machine and do the backup, before sending the old machine back? He said yes, although he couldn't do it for us.

So, new challenge: remove the old hard drive from the old DVR. I found a wikibook about the DVR here: en.wikibooks.org/wiki/VIP_922/Dish_Network. There the author says,
With no A/C power connected - from the old 922, pull the internal hard drive and set it aside. To do this remove 4, back cover screws (black,) then slide the cover back about 1/2 inch and tilt upwards to remove.

Well now, here was my first problem. Screws, no biggie. But "slide the cover back"? It simply would not move for me. However, teamwork to the rescue. My husband Bob used the straight-slot screwdriver and pried with a bit more power than I would have used, and slide, it did. From there on out, the problem was redimensionated (thank you genii for that beautiful term!) and it was only a matter of more screws, unhooking the power and motherboard connection, and sliding out.

After using my husband again to help me slide out the TV cabinet and photograph and then remove the DVR hookups, I used the same procedure again. Remove the cover, then the HD, and then switched the old HD into its place. I left off the cover, and then Bob hooked up the new machine again, and we turned it on to wait. After the backup hard drive was plugged in, this slow beast restarted again, but we had a new option: backup. Just to be safe, we selected only the Doctor Who eps. If there is room once that's done, I'll select the rest of what I want. The backup is now proceeding, and the readout reports that it will take another 10 hours. OMG, usb is slow!

So, redimensionating is cool. I'm going to try to remember to do it more often. Also, many thanks to the dish tech and wikibook author who both shared their information freely, and my husband who supplied support, muscle, and didn't give up!

Sam Hewitt: How to Properly Cook Pasta

Mon, 2014-05-26 20:00

Pasta is a wonderful thing. However many unwonderful things are done to pasta when it's being cooked. Consider this post a how-to about improving your pasta eating experience.

A sub-par pasta experience is one that involves overcooked & bland pasta, with the sauce slopped on top. However, this is very easy to avoid.

How to Improve Your Pasta Experience

Whether boxed or fresh pasta, following (and remembering) these few things will improve your pasta cooking abilities and your life as well as impress your friends & lover(s).*

*these claims may be exaggerated.

1. Use enough salt

Something I frequently see is people boiling water for their pasta and either not salting it at all or adding only a pinch.

Salt is crucial to giving your pasta flavour & lowering the overall amount of salt needed for your dish.

Also, if you've heard that salted water cooks food faster (because of its higher boiling temperature), those claims are a bit exaggerated; the amount of salt you're adding is only enough to raise the temperature about 1 degree.
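As a sanity check on that claim, here is a quick back-of-the-envelope calculation using the standard boiling-point elevation formula ΔT = i·Kb·m. The 30 g of salt per litre figure is my own illustrative "salty like the sea" assumption, not a number from the post:

```python
# Estimate how much heavily salted pasta water raises the boiling point.
# Assumption (illustrative): 30 g of NaCl per litre (~1 kg) of water.
MOLAR_MASS_NACL = 58.44  # g/mol
KB_WATER = 0.512         # degC*kg/mol, ebullioscopic constant of water
VANT_HOFF_NACL = 2       # NaCl dissociates into Na+ and Cl- ions

grams_salt_per_kg_water = 30.0
molality = grams_salt_per_kg_water / MOLAR_MASS_NACL  # mol solute per kg water
delta_t = VANT_HOFF_NACL * KB_WATER * molality

print(f"Boiling point raised by about {delta_t:.2f} degC")
```

Even at that generous salting level, the result is only about half a degree Celsius, i.e. even less than the post's "about 1 degree" estimate, so salt your water for flavour rather than speed.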

2. Don't add oil

There's a bad practice of adding oil to the pasta cooking liquid to keep it from sticking. This only achieves one thing: oily pasta & oily pasta means sauce won't cling to it or be absorbed, which equals flavourless pasta.

Adding oil may also keep the pasta water from bubbling up and boiling over the rim, but this can also be achieved by using a large enough pot and also by reducing the heat a little (but still maintaining a boil).

3. Stir

During the first minute or two of cooking, give the pasta a good stir to keep it from sticking together.

This is crucial, since during this time the pasta is coated with sticky starch. If you don't stir, pieces of pasta that are touching one another literally cook onto one another.

4. Avoid rinsing

Rinsing the pasta after cooking will cool the pasta and prevent it from absorbing sauce. Not to mention it washes away any remaining surface starch, which is advantageous to your cooking of the pasta. The small amount of starch left on the pasta by the cooking water can thicken your sauce slightly when you do incorporate the pasta.

5. Cooking al dente

The term al dente is simply culinary-speak for pasta that is just slightly undercooked, which is considered by many to be the optimal mouthfeel for pasta.

As cooking times vary for various pasta shapes, the only way to truly know is to sample a piece of the cooking pasta and see if it has just a little bite to it when you chew it – this is al dente and considered cooked.

6. Finish cooking in sauce

As it cools, the starch in the pasta crystallizes and becomes insoluble, so the pasta won't absorb as much sauce. As such, I always prepare the sauce first in a large skillet, regardless of its simplicity, before cooking the pasta.

The moment the pasta is done, I scoop it out of the water with a spider and let it drain over the pot for a few seconds. Then I dump it into the hot sauce, stir well, & cover it to let the pasta absorb the sauce for a minute or two, before serving.

Bonus: Quick Tomato Sauce Recipe

It would be appropriate of me to provide a sauce recipe after all of that, so here's a quick tomato sauce.

    Ingredients
  • 1 large (28 oz.) can tomatoes, diced or whole (uncooked)
  • 1 can (12 oz.) tomato paste
  • 1 bulb of garlic (10-12 cloves), minced or thinly sliced
  • 1 onion, diced
  • 1-2 tablespoons, olive oil
  • 2 bay leaves
  • 2 tablespoons dry oregano
  • 250-300mL red wine
  • salt, to taste
  • 1 bag, baby spinach
  • 1 box dry pasta, such as penne or farfalle (bowties)
    Directions
  1. Preheat your skillet over med-high.
  2. Add olive oil and saute the garlic & onion for a few minutes.
  3. Add the wine, canned tomatoes & tomato paste. Stir.
  4. Add the bay leaves & oregano. Season with salt. Simmer for 8-10 minutes.
  5. Cook your pasta and incorporate it into the sauce. Cover & let it absorb the sauce for a few minutes.
  6. Place the bag of spinach atop the incorporated pasta-sauce mixture & cover – the remaining heat will be enough to wilt the spinach.
  7. When spinach is wilted, serve and garnish with grated Parmesan, if desired.

Dustin Kirkland: Influx by Daniel Suarez

Mon, 2014-05-26 17:12
An old friend of mine finally got around to reading Daemon, years after I sent him the recommendation, and that reminded me to dust off this post I've had in my drafts folder for 6 months.

On a whim in September 2008, I blogged a review of perhaps the best techno-thriller I had read in almost a decade -- Daemon, by Leinad Zeraus.

I had no idea that innocuous little blog post would result in a friendship with the author, Daniel Suarez, himself.  Daniel, and his publicist, Michelle, would send me an early preview print of the sequel to Daemon, Freedom™, as well as his next two books, Kill Decision and Influx over the subsequent 6 years.

I read Influx in December 2013, a couple of months before its official release, on a very long flight to Helsinki, Finland.

Predictably, I thoroughly enjoyed it as much as each of Daniel's previous 3 books.  One particular story arc pays an overt homage to one of my favorite books of all time -- Alexandre Dumas' Count of Monte Cristo.  Influx succeeded in generating even more tension, for me.  While it's natural for me to know, intuitively, the line between science and fiction for the artificial intelligence, robotics, and computer technology pervasive in Daemon, Freedom™, and Kill Decision, Influx is in a different category entirely.  There's an active, working element of new-found thrills and subconscious tension not found in the others, built on the biotechnology and particle physics where I have no expertise whatsoever.  I found myself constantly asking, "Whoa shit man -- how much of that is real?!?"  All in all, it makes for another fantastic techno-thriller.

After 5+ years of email correspondence, I actually had the good fortune to meet Daniel in person in Austin during SxSW.  My friend, Josh (who was the person that originally gave me my first copy of Daemon back in 2008), and I had drinks and dinner with Daniel and his wife.

It was fun to learn that Daniel is actually quite a fan of Ubuntu (which made a brief cameo on the main character's computer in Kill Decision).  In fact, Daniel shared that he wrote the majority of Influx on a laptop running Ubuntu!


Cheers,
Dustin

Aurélien Gâteau: Using KApidox

Mon, 2014-05-26 16:07
What is KApidox?

Good libraries come with good documentation. It is therefore essential for KDE Frameworks to provide comprehensive online and offline documentation.

KApidox is a set of tools used to generate KDE Frameworks documentation. These command-line tools use Doxygen to do most of the work. The following tools are provided:

  • kgenapidox: Generate documentation for a single framework
  • kgenframeworksapidox: Generate documentation for all frameworks, as well as the landing page which lets you list and filter frameworks by supported platforms
  • depdiagram-prepare and depdiagram-generate: Generate dependency diagrams (requires CMake and Graphviz)

In this post I am going to talk about kgenapidox, which is the tool you are most likely to run by yourself. While it is often good enough to read documentation online through api.kde.org, it is also useful to be able to generate it locally, for example because your Internet access is slow or unavailable, or because you want to improve the existing documentation. kgenapidox is the tool you want to use for this.

Installing

The first thing to do is to install KApidox. The code is hosted in a Git repository on KDE infrastructure. Get it with:

git clone git://anongit.kde.org/kapidox

KApidox tools are written in Python. In addition to Doxygen, you need to have the pyyaml and Jinja2 Python modules installed. If your distribution does not provide packages for those modules, you can install them with:

pip install --user pyyaml jinja2

KApidox itself can be installed the standard Python way using python setup.py install. You can also run KApidox tools directly from the source directory.

Generating Documentation

You are now ready to generate documentation. Go into any checkout of a framework repository and run kgenapidox:

$ kgenapidox
19:08:48 INFO Running Doxygen
19:08:49 INFO Postprocessing
19:08:50 INFO Done
19:08:50 INFO Doxygen warnings are listed in apidocs/doxygen-warnings.log
19:08:50 INFO API documentation has been generated in apidocs/html/index.html

As you can see from the command output, the documentation is generated by default in the apidocs/html directory. You can now open the documentation with your preferred browser. kgenapidox can also tell Doxygen to generate man pages or Qt compressed help files. Run kgenapidox --help for more details.

Improving the Documentation

If you maintain a framework, contribute to the KDE Frameworks project or want to get involved, open the warning file generated in apidocs/doxygen-warnings.log and start fixing! Improving the documentation of a framework can make it much more useful, so it is a very welcome contribution.

Vim tip: The warnings file can be loaded in the quickfix list with :cfile apidocs/doxygen-warnings.log.

Riccardo Padovani: Canonical Sprint in Malta

Mon, 2014-05-26 07:00

Wow. Just wow.

Last week I went to a Canonical sprint in Malta. The track was about the Client, and we focused on how to make Ubuntu for Phones even better.

We also discussed some new features and new designs we will introduce in the coming weeks. Stay tuned! During the week I fixed many bugs and wrote a lot of code.

But I think the sprint isn't (only) about code and features. IMO it's much more about networking. You know, we work mainly via IRC and mailing lists, and however wonderful an online friendship can be, it will never be the same as drinking some beers together!

So, first of all thanks to Canonical for the invite, and for the awesome adventure I've been living this past year.

But mainly, thanks to all the guys I met. This week allowed me to understand that the people I work with are not only cool developers, but also awesome people!

Six of us from the community joined the sprint: other than me, there were Nik, my roommate, who is making the Clock app rock; Victor and Andrew, who develop the Music app; Kunal, our Calendar wizard; and Adnane, who works on the HTML5 SDK. It's an honour to code side by side with these guys.

Then there was the Canonical community team: Alan and David are my mentors, my points of contact in Canonical, and it was a pleasure to meet them and to have a beer (or 2, maybe 3…) together! Michael is the man who keeps developer.ubuntu.com updated (and he does a lot of other things), and when I report a bug I know it will be fixed in five minutes. Nicholas is a cool guy, but he wants me to use autopilot. No one is perfect. There was also Daniel. I worked with him only on a few patches in December, so I had no chance to speak with him before this week. And you know what? He's so funny! Last but not least, Jono: we had some time to talk, and I'm very happy about that: thanks for all your work at Canonical, and good luck with your future!

But all the other Canonical employees were very kind too, and I met some guys I want to see again soon, and now I know who to ping on IRC to get a bug in the SDK fixed ;-)
Also, other guys I used to know only on IRC (boiko, elopio, mardy, zsombi and others) now have a face! And new guys, from the Design, QA and Online Accounts teams!!!

It was a wonderful week, and I have no words to say how happy I am, and to thank you all!

So, thanks again guys; I hope to see you all soon, and let's continue to make Ubuntu rock!

 

Wow!

Ciao,
R.

This work is licensed under Creative Commons Attribution 3.0 Unported

Andrew Pollock: [debian] Day 117: Fixed hardening issues with simpleproxy, uploaded slack and sma

Sun, 2014-05-25 20:44

I managed to spend a few hours doing Debian stuff again today, which was great.

Today I learned about blhc, which is sadly not mentioned in the wiki page on hardening, which I always refer to. It turns out that it is mentioned in the walkthrough wiki page linked off it though. I'd not read that page until today. Many thanks to Samuel Bronson on IRC for pointing out the tool to me.

Initially I didn't think the tool told me anything I didn't already know, but then I realised it was saying that the upstream Makefile wasn't passing in $(CPPFLAGS) and $(LDFLAGS) when it invoked the compiler. Now that I know all of that, the build warning also mentioned in the PTS made a whole lot more sense. Definitely a case of "today I learned..."

So I made a simple patch to the upstream Makefile.in and simpleproxy is now all appropriately hardened. I'm very happy about that, as it was annoying me that it wasn't Lintian-clean.
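For illustration, the kind of change involved usually looks like the following sketch (the rule and file names here are hypothetical, not taken from the actual simpleproxy Makefile.in):

```makefile
# Before: CPPFLAGS and LDFLAGS from dpkg-buildflags are silently dropped,
# so hardening options such as -D_FORTIFY_SOURCE=2 and -Wl,-z,relro never
# reach the compiler or linker.
simpleproxy: simpleproxy.o
	$(CC) $(CFLAGS) -o $@ simpleproxy.o

# After: pass all three flag variables through, which is what blhc checks for.
simpleproxy: simpleproxy.o
	$(CC) $(CPPFLAGS) $(CFLAGS) $(LDFLAGS) -o $@ simpleproxy.o
```

With a patch along these lines applied, a rebuild followed by running blhc on the build log should come back clean.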

I was able to use the same technique to similarly fix up sma. It's somewhat entertaining when you maintain a package for almost 7 years, and the upstream homepage changes from being the software author's website to what appears to be erotic fiction advertising for London escorts... That made for some entertaining reading this morning.

I've now managed to give all my packages a spring clean. I might do another pass and convert them all to debhelper 9 as a way of procrastinating before I touch isc-dhcp.

Luca Falavigna: Adventures in cross-chroot creations

Sun, 2014-05-25 20:21

I’ve been playing with qemu-user-static a bit to create a set of porterboxes for my Deb-o-Matic build farm. After reading gregoa’s post on how to create cross-chroots with qemu-debootstrap, I was immediately able to create armel, armhf and powerpc boxes with very little effort.

I tried to extend the number of porterboxes available by adding mips* and s390x, in order to have all the Linux-based architectures supported in Jessie, but with no luck. Here’s a summary of my attempts.

MIPS*
Chroot creation fails under both mips and mipsel trying to configure libuuid1. The problem is due to the fact that libuuid1's postinst script calls groupadd and useradd. Those two utilities rely on NETLINK sockets, which apparently are not handled by QEMU at the moment. I raised the question upstream to see whether it is possible to solve this problem.

s390x
Chroot creation used to fail with a SIGSEGV. This particular bug has been fixed recently, but it seems it’s not enough to have a working chroot. It fails with some gzip errors, probably because some portions of dpkg-deb are not fully covered by qemu-s390x-static.

Preparing to unpack .../base-files_7.3_s390x.deb ...
Unpacking base-files (7.3) ...
dpkg-deb (subprocess): decompressing archive member: internal gzip read error: '<fd:5>: incorrect data check'
/bin/tar: Unexpected EOF in archive
/bin/tar: Unexpected EOF in archive
/bin/tar: Error is not recoverable: exiting now
dpkg-deb: error: subprocess tar returned error exit status 2
dpkg: error processing archive /var/cache/apt/archives/base-passwd_3.5.33_s390x.deb (--install):
subprocess dpkg-deb --control returned error exit status 2


Costales: PADRE Program: Filing the 2013 Spanish Income Tax Return (Declaración de la Renta) with Ubuntu 14.04

Sun, 2014-05-25 11:34
The PADRE program from the Spanish Tax Agency (Hacienda) runs on Java; specifically, version 7 is recommended.

INSTALLING JAVA 7 (PREREQUISITE)

To check whether it is already installed, open a Terminal and run "java -version":
costales@dev:~$ java -version
java version "1.7.0_55"
Java(TM) SE Runtime Environment (build 1.7.0_55-b13)
Java HotSpot(TM) Client VM (build 24.55-b03, mixed mode)
costales@dev:~$

If Java is not installed, we can install it by running this in the same Terminal:
sudo add-apt-repository ppa:webupd8team/java
sudo apt-get update
sudo apt-get install oracle-java7-installer


INSTALLING THE PADRE PROGRAM

With Java installed, download the PADRE program from the official page, or more conveniently from the Terminal we already have open:
wget http://www.agenciatributaria.es/static_files/AEAT/Contenidos_Comunes/La_Agencia_Tributaria/Descarga_Programas/Descarga/Java/100/2013/Renta2013_unix_1_21.sh

Now make the file executable:
chmod +x Renta2013_unix_1_21.sh

And run it:
./Renta2013_unix_1_21.sh

All that is left is to follow the wizard and, in the 3rd window, select: Do not create symbolic links.
Step 1
Step 2

Step 3: Select "Do not create symbolic links"
Finishing
The PADRE program running on Xubuntu 14.04 Trusty

Ubuntu Women: Phase 2 of ProjectHarvest

Sat, 2014-05-24 21:11

The Ubuntu Women team has decided that Harvest will be re-started and the project is now at Phase 2, code-named Seeking Out Developers.

The Harvest system aggregates information about low-hanging fruit and aims to visualise which packages of the Ubuntu distribution are in good shape and which are in bad shape.

Harvest is a Django-based web application written in Python, code is available here: https://code.launchpad.net/harvest

Over the next few weeks we’ll be working to get instructions up for developers to stand up test environments, and to define our roadmap for the project based on recent feedback and other outstanding bug reports.

What we need now is you! Python developers who are interested in helping us improve Harvest. Please contact Svetlana Belkin at belkinsa@ubuntu.com if you’re interested in helping out.
