news aggregator

Benjamin Kerensa: Now we do know…

Planet Ubuntu - Tue, 2014-01-21 22:15

Some months back Mark Shuttleworth blogged “At least we know now who belongs to the Open Source Tea Party ;)”, which inspired Lennart, a developer, to print some shirts, and I decided I needed one too, so I had some made. Although the shirt is a tongue-in-cheek expression for me, I think it’s disappointing and absolutely contrary to open source culture to ridicule people who share their opinions on free software.

If we all agreed on how things should be done or on which stack is better, then we wouldn’t have so much great free and open source software. There would be no MariaDB, because everyone would be content with MySQL; there would be no Kubuntu, because everyone would be satisfied with Ubuntu; and so on.

Victor Tuson Palau: [Ubuntu Touch] Update to Logviewer

Planet Ubuntu - Tue, 2014-01-21 19:49

I am pleased to announce that Logviewer is now published in the Ubuntu Touch store. The app no longer runs unconfined, but uses “read_path” pointing to “/var/log/” and “/home/phablet/.cache/”. If you think there is another log path of interest, let me know and I will try to include it.

Also, one feature that landed by popular request is submitting a selected section of a log to pastebin. Thanks to Popey for the image:


Ubuntu Kernel Team: Kernel Team Meeting Minutes – January 21, 2014

Planet Ubuntu - Tue, 2014-01-21 17:15
Meeting Minutes

IRC Log of the meeting.

Meeting minutes.

Agenda

20140121 Meeting Agenda


ARM Status

Nothing new to report this week.


Release Metrics and Incoming Bugs

Release metrics and incoming bug data can be reviewed at the following link:

  • http://people.canonical.com/~kernel/reports/kt-meeting.txt


Milestone Targeted Work Items

No new update this week.


Status: Trusty Development Kernel

No new update this week.


Status: CVE’s

The current CVE status can be reviewed at the following link:

http://people.canonical.com/~kernel/cve/pkg/ALL-linux.html


Status: Stable, Security, and Bugfix Kernel Updates – Saucy/Quantal/Precise/Lucid

Status for the main kernels, until today (Jan. 21):

  • Lucid – Holding
  • Precise – Holding
  • Quantal – Holding
  • Saucy – Holding

    We are in a holding pattern waiting to see if any regressions show up that would cause us
    to respin before the 12.04.4 release goes out.

    Currently open tracking bug details:

  • http://people.canonical.com/~kernel/reports/kernel-sru-workflow.html

    For SRUs, the SRU report is a good source of information:

  • http://people.canonical.com/~kernel/reports/sru-report.html

    Note: Raring hit EOL and is no longer supported. However, the lts-backport-raring kernel
    *WILL* continue to be supported until the first point release of the next LTS (14.04.1).


Open Discussion or Questions? Raise your hand to be recognized

No meeting next week due to the kernel team sprint.
The next meeting is scheduled for February 4th, 2014.

Mattia Migliorini: Melany 1.1: looking for new features

Planet Ubuntu - Tue, 2014-01-21 16:32

Over the last few months, Melany has attracted great interest from WordPress users looking for a simple theme based on Twitter Bootstrap for their blog. Some of them contributed translations, suggestions and bug reports. First of all I’d like to thank all of them (who, I hope, are reading this post).

But now, let’s take it further. I’d like Melany to become a theme fully supported by the community, because we not only need to update Bootstrap, but also to make the theme more complete. The first step in this project is to make it well structured. I’m working on it, and Melany 1.2 will come with complete documentation. But, before that, version 1.1 must come.

I have already selected a list of features to be introduced in Melany 1.1, codenamed Silver Weiro. What I ask of you is to propose new features. What would you like to see in the next version of this Bootstrap-based theme? What would you like to be changed or improved?

Head over to the official website to see a list of proposed features and suggest your own. I need your input to deliver what you really like. Are you ready to help?


Read the official announcement

David Planella: Announcing the first Ubuntu App Dev Schools

Planet Ubuntu - Tue, 2014-01-21 15:30

Following the call for volunteers to organize App Dev Schools across the globe, we’re excited to say that there are already events planned in 3 different countries. Every single App Dev School will help grow our community of app developers and drive adoption of our favourite free OS on all devices, everywhere.

Our LoCo community has got an incredible track record for organizing release parties, Ubuntu Hours, Global Jams, and all sorts of meet-ups for Ubuntu enthusiasts and folks who are new to Ubuntu. Ubuntu App Developer Schools are very new, but in the same way LoCos are, they’re going to become crucial in the new era of mobile devices and convergence. So we would like to see more of them and we need your help!

You can run an App Dev School too

If you’ve already organized an event, you already know the drill, but if it’s your first one, here are some guidelines on how you can put one together:

  1. Find a place to run an event and pick a date when to run it.
  2. Find some other folks in your LoCo who would be interested in helping.
  3. To promote it, remember to add it to the LoCo Directory.
  4. Get the material and tune it for your event if needed.
  5. Promote the event locally and encourage people to join.
  6. Practice the material a few times before the big day, then show up, run the class and have fun.
  7. Take lots of pictures!

The ever awesome José Antonio Rey has made it even easier for Spanish-speaking LoCos to run events by translating the materials into Spanish, so do get in touch with him if you’d like to use them.

And finally, for those of you who don’t have mobile devices to show Ubuntu on, the emulator is a nice alternative to use for app development and presentations. To help you get started, I’ve put together a quickstart guide to the Ubuntu emulator.

If you’re thinking about organizing one and you’ve got questions or need help, get in touch with me at david.planella@ubuntu.com

Looking forward to seeing all your App Dev Schools around the world!

The post Announcing the first Ubuntu App Dev Schools appeared first on David Planella.

Eric Hammond: Installing AWS Command Line Tools from Amazon Downloads

Planet Ubuntu - Tue, 2014-01-21 00:16

When you need an AWS command line toolset not provided by Ubuntu packages, you can download the tools directly from Amazon and install them locally.

In a previous article I provided instructions on how to install AWS command line tools using Ubuntu packages. That method is slightly easier to set up and easier to upgrade when Ubuntu releases updates. However, the Ubuntu packages aren’t always up to date with the latest from Amazon, and there are not yet Ubuntu packages published for every AWS command line tool you might want to use.

Unfortunately, Amazon does not have one single place where you can download all the command line tools for the various services, nor are all of the tools installed in the same way, nor do they all use the same format for accessing the AWS credentials.

The following steps show how I install and configure the AWS command line tools provided by Amazon when I don’t use the packages provided by Ubuntu.

Prerequisites

Install required software packages:

sudo apt-get update
sudo apt-get install -y openjdk-6-jre ruby1.8-full rubygems \
  libxml2-utils libxml2-dev libxslt-dev \
  unzip cpanminus build-essential
sudo gem install uuidtools json httparty nokogiri

Create a directory where all AWS tools will be installed:

sudo mkdir -p /usr/local/aws

Now we’re ready to start downloading and installing all of the individual software bundles that Amazon has released and made available in scattered places on their web site and various S3 buckets.

Download and Install AWS Command Line Tools

These steps should be done from an empty temporary directory so you can afterwards clean up all of the downloaded and unpacked files.
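For example, you could work from a throwaway directory like this (just a sketch; any empty directory will do):

workdir=$(mktemp -d)          # create an empty temporary directory
cd "$workdir"
# ... run the download/install steps below ...
cd ~ && rm -rf "$workdir"     # clean up when done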

Note: Some of these download URLs always get the latest version and some tools have different URLs every time a new version is released. Click through on the tool link to find the latest [Download] URL.

EC2 API command line tools:

wget --quiet http://s3.amazonaws.com/ec2-downloads/ec2-api-tools.zip
unzip -qq ec2-api-tools.zip
sudo rsync -a --no-o --no-g ec2-api-tools-*/ /usr/local/aws/ec2/

EC2 AMI command line tools:

wget --quiet http://s3.amazonaws.com/ec2-downloads/ec2-ami-tools.zip
unzip -qq ec2-ami-tools.zip
sudo rsync -a --no-o --no-g ec2-ami-tools-*/ /usr/local/aws/ec2/

IAM (Identity and Access Management) command line tools:

wget --quiet http://awsiammedia.s3.amazonaws.com/public/tools/cli/latest/IAMCli.zip
unzip -qq IAMCli.zip
sudo rsync -a --no-o --no-g IAMCli-*/ /usr/local/aws/iam/

RDS (Relational Database Service) command line tools:

wget --quiet http://s3.amazonaws.com/rds-downloads/RDSCli.zip
unzip -qq RDSCli.zip
sudo rsync -a --no-o --no-g RDSCli-*/ /usr/local/aws/rds/

ELB (Elastic Load Balancer) command line tools:

wget --quiet http://ec2-downloads.s3.amazonaws.com/ElasticLoadBalancing.zip
unzip -qq ElasticLoadBalancing.zip
sudo rsync -a --no-o --no-g ElasticLoadBalancing-*/ /usr/local/aws/elb/

AWS CloudFormation command line tools:

wget --quiet https://s3.amazonaws.com/cloudformation-cli/AWSCloudFormation-cli.zip
unzip -qq AWSCloudFormation-cli.zip
sudo rsync -a --no-o --no-g AWSCloudFormation-*/ /usr/local/aws/cfn/

Auto Scaling command line tools:

wget --quiet http://ec2-downloads.s3.amazonaws.com/AutoScaling-2011-01-01.zip
unzip -qq AutoScaling-*.zip
sudo rsync -a --no-o --no-g AutoScaling-*/ /usr/local/aws/as/

AWS Import/Export command line tools:

wget --quiet http://awsimportexport.s3.amazonaws.com/importexport-webservice-tool.zip
sudo mkdir /usr/local/aws/importexport
sudo unzip -qq importexport-webservice-tool.zip -d /usr/local/aws/importexport

CloudSearch command line tools:

wget --quiet http://s3.amazonaws.com/amazon-cloudsearch-data/cloud-search-tools-1.0.0.1-2012.03.05.tar.gz
tar xzf cloud-search-tools*.tar.gz
sudo rsync -a --no-o --no-g cloud-search-tools-*/ /usr/local/aws/cloudsearch/

CloudWatch command line tools:

wget --quiet http://ec2-downloads.s3.amazonaws.com/CloudWatch-2010-08-01.zip
unzip -qq CloudWatch-*.zip
sudo rsync -a --no-o --no-g CloudWatch-*/ /usr/local/aws/cloudwatch/

ElastiCache command line tools:

wget --quiet https://s3.amazonaws.com/elasticache-downloads/AmazonElastiCacheCli-2012-03-09-1.6.001.zip
unzip -qq AmazonElastiCacheCli-*.zip
sudo rsync -a --no-o --no-g AmazonElastiCacheCli-*/ /usr/local/aws/elasticache/

Elastic Beanstalk command line tools:

wget --quiet https://s3.amazonaws.com/elasticbeanstalk/cli/AWS-ElasticBeanstalk-CLI-2.1.zip
unzip -qq AWS-ElasticBeanstalk-CLI-*.zip
sudo rsync -a --no-o --no-g AWS-ElasticBeanstalk-CLI-*/ /usr/local/aws/elasticbeanstalk/

Elastic MapReduce command line tools:

wget --quiet http://elasticmapreduce.s3.amazonaws.com/elastic-mapreduce-ruby.zip
unzip -qq -d elastic-mapreduce-ruby elastic-mapreduce-ruby.zip
sudo rsync -a --no-o --no-g elastic-mapreduce-ruby/ /usr/local/aws/elasticmapreduce/

Simple Notification Service (SNS) command line tools:

wget --quiet http://sns-public-resources.s3.amazonaws.com/SimpleNotificationServiceCli-2010-03-31.zip
unzip -qq SimpleNotificationServiceCli-*.zip
sudo rsync -a --no-o --no-g SimpleNotificationServiceCli-*/ /usr/local/aws/sns/
sudo chmod 755 /usr/local/aws/sns/bin/*

Route 53 (DNS) command line tools:

sudo mkdir -p /usr/local/aws/route53/bin
for i in dnscurl.pl route53tobind.pl bindtoroute53.pl route53zone.pl; do
  sudo wget --quiet --directory-prefix=/usr/local/aws/route53/bin \
    http://awsmedia.s3.amazonaws.com/catalog/attachments/$i
  sudo chmod +x /usr/local/aws/route53/bin/$i
done
cpanm --sudo --notest --quiet Net::DNS::ZoneFile NetAddr::IP \
  Net::DNS Net::IP Digest::HMAC Digest::SHA1 Digest::MD5

CloudFront command line tool:

sudo wget --quiet --directory-prefix=/usr/local/aws/cloudfront/bin \
  http://d1nqj4pxyrfw2.cloudfront.net/cfcurl.pl
sudo chmod +x /usr/local/aws/cloudfront/bin/cfcurl.pl

S3 command line tools:

wget --quiet http://s3.amazonaws.com/doc/s3-example-code/s3-curl.zip
unzip -qq s3-curl.zip
sudo mkdir -p /usr/local/aws/s3/bin/
sudo rsync -a --no-o --no-g s3-curl/ /usr/local/aws/s3/bin/
sudo chmod 755 /usr/local/aws/s3/bin/s3curl.pl

AWS Data Pipeline command line tools:

wget --quiet https://s3.amazonaws.com/datapipeline-us-east-1/software/latest/DataPipelineCLI/datapipeline-cli.zip
unzip -qq datapipeline-cli.zip
sudo rsync -a --no-o --no-g datapipeline-cli/ /usr/local/aws/datapipeline/

Now that we have all of the software installed under /usr/local/aws we need to set up the AWS credentials and point the tools to where they can find everything.

Set up AWS Credentials and Environment

Create a place to store the secret AWS credentials:

mkdir -m 0700 $HOME/.aws-default/

Copy your AWS X.509 certificate and private key to this subdirectory. These files will have names that look something like this:

$HOME/.aws-default/cert-7KX4CVWWQ52YM2SUCIGGHTPDNDZQMVEF.pem
$HOME/.aws-default/pk-7KX4CVWWQ52YM2SUCIGGHTPDNDZQMVEF.pem

Create the file $HOME/.aws-default/aws-credential-file.txt with your AWS access key id and secret access key in the following format:

AWSAccessKeyId=<insert your AWS access id here>
AWSSecretKey=<insert your AWS secret access key here>

Create the file $HOME/.aws-default/aws-credentials.json in the following format:

{ "access-id": "<insert your AWS access id here>", "private-key": "<insert your AWS secret access key here>", "key-pair": "<insert the name of your Amazon ec2 key-pair here>", "key-pair-file": "<insert the path to the .pem file for your Amazon ec2 key pair here>", "region": "<The region where you wish to launch your job flows. Should be one of us-east-1, us-west-1, us-west-2, eu-west-1, ap-southeast-1, or ap-northeast-1, sa-east-1>", "use-ssl": "true", "log-uri": "s3://yourbucket/datapipelinelogs" }

Create the file $HOME/.aws-secrets in the following format:

%awsSecretAccessKeys = (
    'default' => {
        id => '<insert your AWS access id here>',
        key => '<insert your AWS secret access key here>',
    },
);

Create a symbolic link so that s3curl can find its hardcoded config file, and secure the file permissions:

ln -s $HOME/.aws-secrets $HOME/.s3curl
chmod 600 $HOME/.aws-default/* $HOME/.aws-secrets

Add the following lines to your $HOME/.bashrc file so that the AWS command line tools know where to find themselves and the credentials. We put the new directories in the front of the $PATH so that we run these instead of any similar tools installed by Ubuntu packages.

export JAVA_HOME=/usr
export EC2_HOME=/usr/local/aws/ec2
export AWS_IAM_HOME=/usr/local/aws/iam
export AWS_RDS_HOME=/usr/local/aws/rds
export AWS_ELB_HOME=/usr/local/aws/elb
export AWS_CLOUDFORMATION_HOME=/usr/local/aws/cfn
export AWS_AUTO_SCALING_HOME=/usr/local/aws/as
export CS_HOME=/usr/local/aws/cloudsearch
export AWS_CLOUDWATCH_HOME=/usr/local/aws/cloudwatch
export AWS_ELASTICACHE_HOME=/usr/local/aws/elasticache
export AWS_SNS_HOME=/usr/local/aws/sns
export AWS_ROUTE53_HOME=/usr/local/aws/route53
export AWS_CLOUDFRONT_HOME=/usr/local/aws/cloudfront
for i in $EC2_HOME $AWS_IAM_HOME $AWS_RDS_HOME $AWS_ELB_HOME \
  $AWS_CLOUDFORMATION_HOME $AWS_AUTO_SCALING_HOME $CS_HOME \
  $AWS_CLOUDWATCH_HOME $AWS_ELASTICACHE_HOME $AWS_SNS_HOME \
  $AWS_ROUTE53_HOME $AWS_CLOUDFRONT_HOME /usr/local/aws/s3
do
  PATH=$i/bin:$PATH
done
PATH=/usr/local/aws/elasticbeanstalk/eb/linux/python2.7:$PATH
PATH=/usr/local/aws/elasticmapreduce:$PATH
PATH=/usr/local/aws/datapipeline:$PATH
export EC2_PRIVATE_KEY=$(echo $HOME/.aws-default/pk-*.pem)
export EC2_CERT=$(echo $HOME/.aws-default/cert-*.pem)
export AWS_CREDENTIAL_FILE=$HOME/.aws-default/aws-credential-file.txt
export ELASTIC_MAPREDUCE_CREDENTIALS=$HOME/.aws-default/aws-credentials.json
export DATA_PIPELINE_CREDENTIALS=$HOME/.aws-default/aws-credentials.json

Set everything up in your current shell:

source $HOME/.bashrc

Test

Make sure that the command line tools are installed and have credentials set up correctly. These commands should not return errors:

ec2-describe-regions
ec2-ami-tools-version
iam-accountgetsummary
rds-describe-db-engine-versions
elb-describe-lb-policies
cfn-list-stacks
cs-describe-domain
mon-version
elasticache-describe-cache-clusters
eb --version
elastic-mapreduce --list --all
sns-list-topics
dnscurl.pl --keyname default https://route53.amazonaws.com/2010-10-01/hostedzone | xmllint --format -
cfcurl.pl --keyname default https://cloudfront.amazonaws.com/2008-06-30/distribution | xmllint --format -
s3curl.pl --id default http://s3.amazonaws.com/ | xmllint --format -
datapipeline --list-pipelines

Are you aware of any other command line tools provided by Amazon? Let other readers know in the comments on this article.

[Update 2012-09-06: New URL for ElastiCache tools. Thanks iknewitalready]

[Update 2012-12-21: Added AWS Data Pipeline command line tools. May break Elastic MapReduce due to Ruby version conflict.]

Original article: http://alestic.com/2012/09/aws-command-line-tools

Sam Hewitt: Basic Pizza Dough Recipe

Planet Ubuntu - Tue, 2014-01-21 00:00

I make pizza fairly often, and I feel I have perfected my dough recipe. Having said that, I never know the exact measurements, as I eyeball it nearly all the time.

The pizza dough I make has the right amount of elasticity that we know and love in pizza, with a hint of sweetness, so I add honey.

    Ingredients

    This recipe yields enough dough for about three 30-centimetre-ish pizzas, depending on how thin you roll it of course.

  • 3 cups all-purpose or bread flour (or a combination thereof)
  • scant 1 tablespoon kosher salt
  • ~1⅓ cup warm water, 40-45 degrees Celsius (104-113 Fahrenheit)
  • ~1 tablespoon active dry yeast
  • ~1 tablespoon sugar
  • ~1/4 cup olive oil
  • ~1 tablespoon honey (optional –for sweetness)
    Directions
  1. If you have a stand mixer with a dough hook: great. Otherwise, mix the salt and flour by another method.
  2. Dissolve the yeast and sugar in the warm water. Let stand until the yeast is activated (~10 minutes).
  3. Make a well in your flour-salt combination and pour in the yeast mixture, olive oil and honey.
  4. Mix together until the dough has formed into a ball and is slightly sticky to the touch.
  5. Remove, shape into a neat ball and lightly oil it.
  6. Loosely cover the dough in a bowl with a dampish cloth and let rise until doubled in volume (1-2 hours depending on the environment).
  7. When risen, ideally divide, and knead*.
  8. Shape each smaller ball into a pizza shape for baking.
*When kneading, be sure to use a consistent folding motion: what you're essentially doing is aligning the gluten protein strands to achieve optimum elasticity.

I have come across an alternative method for making dough that I am now a huge fan of: in a food processor. Not only does it minimize mess, but the food processor does all the kneading for you.

    Alternative Method:
  1. Using a food processor (with the regular blade), combine the flour and salt.
  2. Like above, dissolve the yeast and sugar in the warm water and wait ~10 minutes until the yeast is activated.
  3. Stir the honey and olive oil into the yeast solution.
  4. While the food processor is running, pour the liquid through its feeding tube.
  5. Stop when the dough comes together into a sort-of ball.
  6. Remove, roll into a neater ball and place in a loosely covered container to rise, until it has approximately doubled in size.
  7. When risen, divide and shape into pizza crust(s) for baking.

The Fridge: Ubuntu Weekly Newsletter Issue 351

Planet Ubuntu - Mon, 2014-01-20 22:31

Tony Whitmore: Big Finish Day 4

Planet Ubuntu - Mon, 2014-01-20 22:27

I had a great time at the weekend, taking photographs at Big Finish Day 4. The event is for fans of the audio production company, who make audio plays of Doctor Who, The Avengers, Dark Shadows, Blake’s 7 and much more. I’ve listened to Big Finish audio plays for years, mostly their Doctor Who range (of course!). The production standards are superb and one of their recent releases is up for a BBC audio drama award. I’ve been lucky enough to do some work for them over the last few months, and was asked to go along to capture some of the event.

In the morning I was wandering around taking candid shots of people enjoying the convention and the panels. It was rather like taking wedding photographs although slightly more relaxed. There are so many different moments to capture in a short time during a wedding ceremony, but a convention panel is a little more static and a good deal longer. Fortunately the urbane Nick Briggs kept the crowd laughing through the morning, and there was a really great atmosphere through the whole event.

https://twitter.com/DudleyIan/status/424504877109510144

In the afternoon I set up a portable studio to take some photos of various Big Finish actors. I was rather pleased with this set-up, especially as it all managed to fit in my car! Apart from the background roll.

I really enjoyed working at Big Finish Day, catching up with some of the very nice people I’ve met at recording sessions, and hope to be asked back again!


Marcin Juszkiewicz: The story of Qt/AArch64 patching

Planet Ubuntu - Mon, 2014-01-20 17:16

In over a year of my work as an AArch64 porter I have seen a lot of patches. But the Qt one has the most interesting history.

Around a year ago, when I was building whatever was possible during my Linaro work, we got to the point when Qt jumped into the queue. The build failed, but fixing it was quite easy: all I had to do was take the “webkitgtk” patch written by Riku Voipio and apply it to Qt 4. The resulting file landed in the “meta-aarch64” layer of OpenEmbedded and is still there.

Time passed. More common distributions like Debian, Fedora, OpenSUSE and Ubuntu (in alphabetical order) started working on their AArch64 ports. And one day Fedora and Ubuntu started working on building Qt 4. I do not know who wrote the QAtomic stuff, but I saw a few versions/iterations of it, and it took me quite a long time (first on a model, then on real hardware) to get it fully working in Fedora; I used the Ubuntu version of the patches.

By that point over nine months had passed and still no upstreaming had been done. So one day I decided to go for it and opened QTBUG #35442. Then I reopened issue #33 in the “double-conversion” project (which is used in a few places in Qt), as they had a good fix but merged the wrong one (it is fixed now, though). For that one I opened a request to update to a newer version of “double-conversion” as QTBUG #35528.

But the story did not end there. As Qt 4 development is more or less over, I was asked to provide fixes for Qt 5. It took me a while. I had to create a graph of build-time dependencies between Qt 5 components (and had to break a few in the meantime), and slowly, module after module, I got most of it built.

There were 3 components which required patching:

The first one is solved upstream and waits for the Qt guys; I was told that 5.3 will get it updated. The second one is already reviewed and merged. The last one remains, and here there is a problem: it looks like the only person who does QtWebKit code reviews is Allan Sandfeld Jensen, but he cannot review code he sent himself. I am not able to do the review either, because the Qt Contributor License Agreement needs to be signed first, and (due to legal issues) I cannot sign it.

So how does it look now? I would say quite good. One 3rd-party project needs an update (in two places in Qt 5) and one patch needs to get through code review. I still need to send package updates to the Fedora bug tracker. Ubuntu will need to merge the patches when they move to the 5.2 version.

All rights reserved © Marcin Juszkiewicz
The story of Qt/AArch64 patching was originally posted on the Marcin Juszkiewicz website

Related posts:

  1. I miss Debian tools
  2. Going for FOSDEM
  3. AArch64 port of Debian/Ubuntu is alive!

David Planella: A quickstart guide to the Ubuntu emulator

Planet Ubuntu - Mon, 2014-01-20 14:24

Following a recent announcement, the Ubuntu emulator is going to become a primary Engineering platform for development. Quoting Alexander Sack, when ready, the goal is to

[...] start using the emulator for everything you usually would do on the phone. We really want to make the emulator a class A engineering platform for everyone

While the final emulator is still work in progress, this month we are also going to see a big push in finishing all the pieces to make it a first-class citizen for development, both for the platform itself and for app developers. However, as it stands today, the emulator is already functional, so I’ve decided to prepare a quickstart guide to highlight the great work the Foundations and Phonedations teams (along with many other contributors) are producing to make it possible.

While you should consider this guide as a preview, you can already use it to start getting familiar with the emulator for testing, platform development and writing apps.

Requirements

To install and run the Ubuntu emulator, you will need:

  • 512MB of RAM dedicated to the emulator
  • 4GB of disk space
  • OpenGL-capable desktop drivers (most graphics drivers/cards are)
Installing the emulator

If you are using Trusty Tahr, the Ubuntu development version, installation is as easy as opening a terminal with Ctrl+Alt+T and running this command, followed by Enter:

sudo apt-get install ubuntu-emulator

Alternatively, if you are running a stable release such as Ubuntu 13.10, you can install the emulator by manually downloading its packages first:


  1. Create a folder named emulator in your home directory
  2. Go to the goget-ubuntu-touch packages page in Launchpad
  3. Scroll down to Trusty Tahr and click on the arrow to the left to expand it
  4. Scroll further to the bottom of the page and click on the ubuntu-emulator_* package corresponding to your architecture (i386 or amd64) to download it into the ~/emulator folder you created
  5. Now go to the Android packages page in Launchpad
  6. Scroll down to Trusty Tahr and click on the arrow to the left to expand it
  7. Scroll further to the bottom of the page and click on the ubuntu-emulator_runtime_* package to download it into the same ~/emulator folder
  8. Open a terminal with Ctrl+Alt+T
  9. Change to the directory where you downloaded the packages by writing the following command in the terminal: cd emulator
  10. Then run this command to install the packages: sudo dpkg -i *.deb
  11. Once the installation is successful you can close the terminal and remove the ~/emulator folder and its contents
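If you prefer doing this from the command line, the steps above translate roughly to the following (the URLs are placeholders; copy the actual .deb links from the Launchpad pages):

mkdir ~/emulator && cd ~/emulator
wget <URL of ubuntu-emulator_*.deb> <URL of ubuntu-emulator_runtime_*.deb>
sudo dpkg -i *.deb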

Installation notes
  • Downloaded images are cached at ~/.cache/ubuntuimage –using the standard XDG_CACHE_DIR location.
  • Instances are stored at ~/.local/share/ubuntu-emulator –using the standard XDG_DATA_DIR location.
  • While an image upgrade feature is in the works, for now you can simply create an instance of a newer image over the previous one.
Running the emulator

The ubuntu-emulator tool makes it really easy to manage instances and run the emulator. Typically, you’ll be opening a terminal and running these commands the first time you create an instance (where myinstance is the name you’ve chosen for it):

sudo ubuntu-emulator create myinstance
ubuntu-emulator run myinstance

You can create any instances you need for different purposes. And once the instance has been created, you’ll be generally using the ubuntu-emulator run myinstance command to start an emulator session based on that instance.

There are 3 main elements you’ll be interacting with when running the emulator:

  • The phone UI – this is the visual part of the emulator, where you can interact with the UI in the same way you’d do with a phone. You can use your mouse to simulate taps and slides. Bonus points if you can recognize the phone model the UI is shown on ;)
  • The remote session on the terminal – upon starting the emulator, a terminal will also be launched alongside. Use the phablet username and the same password to log in to an interactive ADB session on the emulator. You can also launch other terminal sessions using other communication protocols –see the link at the end of this guide for more details.
  • The ubuntu-emulator tool – with this CLI tool, you can manage the lifetime and runtime of Ubuntu images. Common subcommands of ubuntu-emulator include create (to create new instances), destroy (to destroy existing instances), run (as we’ve already seen, to run instances), snapshot (to create and restore snapshots of a given point in time) and more. Use ubuntu-emulator --help to learn about these commands and ubuntu-emulator command --help to learn more about a particular command and its options.
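For example, a quick way to explore the tool described above (myinstance is just an example name, and the exact output may vary):

ubuntu-emulator --help               # list all available subcommands
ubuntu-emulator snapshot --help      # details and options for one subcommand
ubuntu-emulator destroy myinstance   # remove an instance (may need sudo, as with create)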
Runtime notes
  • At this time, the emulator takes a while to load. During that time, you’ll see a black screen inside the phone skin. Just wait a bit until it’s finished loading and the welcome screen appears.
  • By default the latest built image from the devel-proposed channel is used. This can be changed during creation with the --channel and --revision options.
  • If your host has a network connection, the emulator will use it transparently, even though the network indicator might show otherwise.
  • To talk to the emulator, you can use standard adb. The emulator should appear under the list of the adb devices command. Due to a known bug, you’ll need to run adb kill-server; adb start-server on the host before you can see the emulator listed as a device.
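Putting the adb workaround above together, a session on the host might look like this (the emulator-5554 name is only an example of how adb typically labels emulators):

adb kill-server
adb start-server
adb devices   # the emulator should now be listed, e.g. as emulator-5554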
Learn more and contribute

I hope this guide has whetted your appetite to start testing the emulator! You can also contribute to making the emulator a first-class target for Ubuntu development. The easiest way is to install it and give it a go. If something is not working, you can then file a bug.

If you want to fix a bug yourself or contribute to code, the best thing is to ask the developers about how to get started by subscribing to the Ubuntu phone mailing list.

If you want to learn more about the emulator, including how to create instance snapshots and other cool features, head out to the Ubuntu Emulator wiki page.

And next… support for the tablet form factor and SDK integration. Can’t wait for those features to land in the emulator!

The post A quickstart guide to the Ubuntu emulator appeared first on David Planella.

Riccardo Padovani: Development plans of calc app for Ubuntu Touch

Planet Ubuntu - Mon, 2014-01-20 08:00

As you probably know, on April 17th, 2014 Ubuntu 14.04 ‘Trusty Tahr’ will be released. The second stable version of Ubuntu Touch will also be released, and the main goal of this version, for all the core apps, is desktop convergence.

Convergence is the revolutionary idea behind Ubuntu Touch: all the apps can run on desktop, tablet and phone (and maybe TV), not because they have different implementations of the same interface, but because the interface adapts dynamically to the size of the screen, like a responsive website. You can try this with the app SaucyBacon by randomcpp.

Now, let me explain the jobs to be done for the calculator from here to April:

  • For convergence we need to enable keyboard support on the desktop, so you can use the app with your keyboard and are not forced to use the mouse. A basic implementation of this has landed: you can use the up and down keys to scroll, the number keys to enter numbers, and Enter to perform a calculation. But some bugs are still open. We want to enable other shortcuts to edit labels and tear off calcs, and we also want to implement copy and paste;
  • Also for convergence, we need to create an optimized tablet version and fix some wrong behaviors on the desktop;
  • We need to create a sidestage version of the app, so that on a big screen you can place two apps side by side;
  • On our wishlist there is also implementing scientific functions, but it isn’t a priority and I don’t know if and when this will be implemented.

If you want to follow the implementation of all these features, please see our blueprint.

If you want to help us with development (Why you should contribute to Ubuntu Touch), please join #ubuntu-app-devel or #ubuntu-calc-app on IRC and ping boiko, dpm, mihir, popey or me (WebbyIT).

Ciao,
R.

This work is licensed under Creative Commons Attribution 3.0 Unported

Ubuntu Classroom: Ubuntu User Days coming up!

Planet Ubuntu - Mon, 2014-01-20 06:03

Next weekend, from Saturday the 25th at 14:30 UTC until Sunday the 26th at 01:00 UTC, the Classroom team will be hosting the Ubuntu User Days!

User Days was created as a set of chat-based classes offered over a two-day period to teach beginning or intermediate Ubuntu users the basics and get them started with Ubuntu. Sessions this cycle include:

  • Command line made easy
  • Unity: Tips, tricks and configuration
  • Equivalent Applications
  • Finding Support for Ubuntu

You can check the full schedule here: https://wiki.ubuntu.com/UserDays

The best thing is, everyone can come! If you want to participate, you just need to join #ubuntu-classroom and #ubuntu-classroom-chat on irc.freenode.net in your IRC client, or just click here for browser-based Webchat.

We hope to see you next weekend!


Jono Bacon: Bad Voltage in 2014

Planet Ubuntu - Sun, 2014-01-19 16:44

In 2013 we kicked off Bad Voltage, a fun and irreverent podcast about technology, Open Source, gaming, politics, and anything else we find interesting. The show includes a veritable bounty of presenters: Stuart Langridge (LugRadio, Shot Of Jaq), Bryan Lunduke (Linux Action Show), Jeremy Garcia (LinuxQuestions Podcast), and myself (LugRadio, Shot Of Jaq).

We have all podcasted quite a bit before and we know it takes a little while to really get into the groove, but things are really starting to gel in the show. We are all having a blast doing it, and it seems people are enjoying it.

If you haven’t given the show a whirl, I would love to encourage you to check out our most recent episode. In it we feature:

  • An interview with Sam Hulick who writes music for video games (Mass Effect, Baldur’s Gate) as well as some of the Ubuntu sounds.
  • We discuss the Mars One project and whether it is absolute madness or a vague possibility.
  • We evaluate how Open Source app devs can make money, the different approaches, and whether someone could support a family that way.
  • Part One of our 2014 predictions. We will review them at the end of the year to see how we did. Be sure to share your predictions too!

Go and download the show in either MP3 or Ogg format and subscribe to the podcast feeds!

We also have a new community forum that is starting to get into its groove too. The forum is based on Discourse, so is a pleasure to use, and a really nice community is forming. We would love to welcome you too!

In 2014 we want to grow the show, refine our format, and grow our community around the world. Our goal here is that Bad Voltage becomes the perfect combination of informative but really fun to listen to. I have no doubt that our format and approach will continue to improve with each show. We also want to grow an awesome, inclusive, and diverse community of listeners too. Our goal is that people associate the Bad Voltage community as a fun, like-minded set of folks who chat together, play together, collaborate together, and enjoy the show together.

Here’s to a fun year with Bad Voltage and we hope you come and be a part of it.

Kubuntu Wire: Valve’s OpenGL Debugger Developed on Kubuntu

Planet Ubuntu - Sun, 2014-01-19 11:19

Valve make Steam, a platform for distributing computer games, which got a lot of people excited when it was announced a couple of years ago that it was being ported to Ubuntu. One of the developers has just announced a new OpenGL debugger. It’s developed on Kubuntu and uses the Qt Creator IDE. Best of all, it’s going to be completely open source. Lovely to know Kubuntu is helping bring the next generation of games to all platforms. More details on Phoronix.

Eric Hammond: Finding the Region for an AWS Resource ID

Planet Ubuntu - Sun, 2014-01-19 00:03

use concurrent AWS command line requests to search the world for your instance, image, volume, snapshot, …

Background

Amazon EC2 and many other AWS services are divided up into various regions across the world. Each region is a separate geographic area and is completely independent of other regions.

Though this is a great architecture for preventing global meltdown, it can occasionally make life more difficult for customers, as we must interact with each region separately.

One example of this is when we have the id for an AMI, instance, or other EC2 resource and want to do something with it but don’t know which region it is in.

This happens on ServerFault when a poster presents a problem with an instance, provides the initial AMI id, but forgets to specify the EC2 region. In order to find and examine the AMI, you need to look in each region to discover where it is.

Performance

You’ll hear a repeating theme when discussing performance in AWS:

To save time, run API requests concurrently.

This principle applies perfectly when performing requests across regions.

Parallelizing requests may seem like it would require an advanced programming language, but since I love using command line programs for simple interactive AWS tasks, I’ll present an easy mechanism for concurrent processing that works in bash.

Example

The following sample code finds an AMI using concurrent aws-cli commands to hit all regions in parallel.

id=ami-25b01138   # example AMI id
type=image        # or "instance" or "volume" or "snapshot" or ...
regions=$(aws ec2 describe-regions --output text --query 'Regions[*].RegionName')
for region in $regions; do
  (aws ec2 describe-${type}s --region $region --$type-ids $id &>/dev/null &&
    echo "$id is in $region") &
done 2>/dev/null
wait 2>/dev/null

This results in the following output:

ami-25b01138 is in sa-east-1

By running the queries concurrently against all of the regions, we cut the run time by almost 90% and get our result in a second or two.

Drop this into a script, add code to automatically detect the type of the id, and you’ve got a useful command line tool… which you’re probably going to want to immediately rewrite in Python so there’s not quite so much forking going on.
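Detecting the resource type from the id prefix is easy in bash; here is a minimal sketch covering the standard prefixes mentioned above:

# Map a standard AWS id prefix to its resource type
case "$id" in
  ami-*)  type=image ;;
  i-*)    type=instance ;;
  vol-*)  type=volume ;;
  snap-*) type=snapshot ;;
  *)      echo "unrecognized id: $id" >&2; exit 1 ;;
esac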

Original article: http://alestic.com/2014/01/aws-region-search

Randall Ross: Planet Ubuntu Needs More Awesome - Part 2

Planet Ubuntu - Sat, 2014-01-18 19:31

In Part 1, I presented some of the results of my surveys about Planet Ubuntu from late 2013. Didn't read the summary? There's still time! What better way to start your day?

With that behind us, let's dive into Part 2 of my promised summary, along with additional bonus colour commentary and recommendations not available anywhere else (at any price).

TL;DR:
Planet Ubuntu needs a makeover.

6.

Survey Says:
There is a strong indication that people want a "new and improved" Planet Ubuntu.

Colour Commentary:
I'm firmly of the same opinion. Planet Ubuntu looks creaky and awkward. It's a throwback to an earlier era of web design. Interactivity? Not there. It also doesn't present well on different form factors. Have you ever tried reading it on Ubuntu Touch? Were you happy with the result? I could go on and on, but suffice to say there's room for improvement.

Some of you might be thinking "Why bother? There are plenty of other social web platforms that we could use as an Ubuntu blog. Why not just use ______." The problem with the word that's usually on top of that blank is that it's always out of our control, often predatory, and usually a bad idea in the long run. The best chance we have to shape the personality of one of the most prominent sites about Ubuntu is to actually maintain control of it. Planet Ubuntu reflects on Ubuntu whether we want to admit it or not. Let's admit it and make Planet Ubuntu great again.

Randall Concludes:
Let's reboot it.

7.

Survey Says:
Ignoring the fence-sitters, people want Ubuntu stories to have prominence, by a factor of two to one.

Colour Commentary:
I was a little surprised by how many people don't care one way or another. That aside, the majority vote for increased prominence of Ubuntu-related content is encouraging. I think this represents a good compromise for people who are insistent about blogging about non-Ubuntu topics on an Ubuntu site. (Yes, there are some who are.) Give them a small place, but not a place that detracts from the main event. Maybe the "real estate" a story gets should be proportional to the amount of Ubuntu content it contains. The mechanism for determining that would have to be designed, but it's an idea that has merit.

Randall Concludes:
Ubuntu-centric stories should be granted more prominence.

8.

Survey Says:
Huh?

Colour Commentary:
People have no idea how widely (or not) Planet Ubuntu is read. Some think it's amongst the top sites on the web, and others swear it's nothing but cobwebs and tumbleweeds. This isn't really surprising, since the site doesn't publish any stats, and in the absence of data people will make some wild assumptions. If we want Planet Ubuntu to have as wide a readership as possible, which IS what we want, then perhaps an important first step would be to add analytics, or even a simple page view counter that can be graphed over time. That way, we'll be able to see if we're as popular as we need to be.

Randall Concludes:
Publish page view stats ASAP. We cannot improve what we cannot measure.

9.

Survey Says:
People want Planet Ubuntu authors to abide by the Ubuntu Code of Conduct.

Colour Commentary:
This was a bit of an accidental poll. While I was in the midst of my polling activities, an unfortunate article that was a clear violation of the CoC, and in poor taste, was posted. What surprised (and disappointed) me is how long it took to take it down. Thankfully it was removed, but who knows how many people saw the article and now associate Ubuntu with something crass and juvenile?

Adding even more disappointment, the article was from someone who wasn't even an Ubuntu Member any more. So, it should never even have been posted in the first place.

And, adding *even more* disappointment, an effort to clean up the list of people who could post to Planet Ubuntu had been languishing for months.

Randall Concludes:
Maintain the site. (Looking in the direction of Community Council). Take down CoC violations with haste (i.e. in minutes, not hours). If you don't have the time/bandwidth, then delegate, or increase your numbers.

10.

Survey Says:
Nearly an even split.

Colour Commentary:
Given that there's a desire to make Ubuntu stories more prominent (see above), I'm curious to know what mechanism the people who don't want up-voting would use to make this happen. Perhaps an algorithm that scans for keywords and adjusts prominence accordingly? Or maybe we could leave the decision to a panel of experts? I don't think either of these options has merit. I advocate that we use a system of up-voting by a group of people who are passionate about Ubuntu and are actively contributing to it day in, day out. Perhaps Ubuntu Members would be a good start for a group of up-voters?

Randall Concludes:
We need a reliable way to make Ubuntu articles prominent. Up-voting is that way.

---
To be continued...
I will wrap up the series in my next post with general conclusions and a prescription on how to make Planet Ubuntu awesome again. In the meantime, please share your thoughts in the comments.

Colin Watson: Testing wanted: GRUB 2.02~beta2 Debian/Ubuntu packages

Planet Ubuntu - Sat, 2014-01-18 00:48

This is mostly a repost of my ubuntu-devel mail for a wider audience, but see below for some additions.

I'd like to upgrade to GRUB 2.02 for Ubuntu 14.04; it's currently in beta. This represents a year and a half of upstream development, and contains many new features, which you can see in the NEWS file.

Obviously I want to be very careful with substantial upgrades to the default boot loader. So, I've put this in trusty-proposed, and filed a blocking bug to ensure that it doesn't reach trusty proper until it's had a reasonable amount of manual testing. If you are already using trusty and have some time to try this out, it would be very helpful to me. I suggest that you only attempt this if you're comfortable driving apt-get directly and recovering from errors at that level, and if you're willing to spend time working with me on narrowing down any problems that arise.

Please ensure that you have rescue media to hand before starting testing. The simplest way to upgrade is to enable trusty-proposed, upgrade ONLY packages whose names start with "grub" (e.g. use apt-get dist-upgrade to show the full list, say no to the upgrade, and then pass all the relevant package names to apt-get install), and then (very important!) disable trusty-proposed again. Provided that there were no errors in this process, you should be safe to reboot. If there were errors, you should be able to downgrade back to 2.00-22 (or 1.27+2.00-22 in the case of grub-efi-amd64-signed).
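In practice that flow might look something like this (the grub package names below are only examples; use the actual names beginning with "grub" from the dist-upgrade output on your own system):

sudo apt-get update
sudo apt-get dist-upgrade        # note the grub* packages listed, then answer "n"
sudo apt-get install grub-common grub2-common grub-pc grub-pc-bin
# ...then disable trusty-proposed again before doing anything else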

Please report your experiences (positive and negative) with this upgrade in the tracking bug. I'm particularly interested in systems that are complex in any way: UEFI Secure Boot, non-trivial disk setups, manual configuration, that kind of thing. If any of the problems you see are also ones you saw with earlier versions of GRUB, please identify those clearly, as I want to prioritise handling regressions over anything else. I've assigned myself to that bug to ensure that messages to it are filtered directly into my inbox.

I'll add a couple of things that weren't in my ubuntu-devel mail. Firstly, this is all in Debian experimental as well (I do all the work in Debian and sync it across, so the grub2 source package in Ubuntu is a verbatim copy of the one in Debian these days). There are some configuration differences applied at build time, but a large fraction of test cases will apply equally well to both. I don't have a definite schedule for pushing this into jessie yet - I only just finished getting 2.00 in place there, and the release schedule gives me a bit more time - but I certainly want to ship jessie with 2.02 or newer, and any test feedback would be welcome. It's probably best to just e-mail feedback to me directly for now, or to the pkg-grub-devel list.

Secondly, a couple of news sites have picked this up and run it as "Canonical intends to ship Ubuntu 14.04 LTS with a beta version of GRUB". This isn't in fact my intent at all. I'm doing this now because I think GRUB 2.02 will be ready in non-beta form in time for Ubuntu 14.04, and indeed that putting it in our development release will help to stabilise it; I'm an upstream GRUB developer too and I find the exposure of widely-used packages very helpful in that context. It will certainly be much easier to upgrade to a beta now and a final release later than it would be to try to jump from 2.00 to 2.02 in a month or two's time.

Even if there's some unforeseen delay and 2.02 isn't released in time, though, I think nearly three months of stabilisation is still plenty to yield a boot loader that I'm comfortable with shipping in an LTS. I've been backporting a lot of changes to 2.00 and even 1.99, and, as ever for an actively-developed codebase, it gets harder and harder over time (in particular, I've spent longer than I'd like hunting down and backporting fixes for non-512-byte sector disks). While I can still manage it, I don't want to be supporting 2.00 for five more years after upstream has moved on; I don't think that would be in anyone's best interests. And I definitely want some of the new features which aren't sensibly backportable, such as several of the new platforms (ARM, ARM64, Xen) and various networking improvements; I can imagine a number of our users being interested in things like optional signature verification of files GRUB reads from disk, improved Mac support, and the TrueCrypt ISO loader, just to name a few. This should be a much stronger base for five-year support.

St&eacute;phane Graber: LXC 1.0: Unprivileged containers [7/10]

Planet Ubuntu - Fri, 2014-01-17 23:28

This is post 7 out of 10 in the LXC 1.0 blog post series.

Introduction to unprivileged containers

The support of unprivileged containers is in my opinion one of the most important new features of LXC 1.0.

You may remember from previous posts that I mentioned that LXC should be considered unsafe because, while running in a separate namespace, uid 0 in your container is still equal to uid 0 outside of the container, meaning that if you somehow get access to any host resource through proc, sys or some random syscalls, you can potentially escape the container, and then you’ll be root on the host.

That’s what user namespaces were designed and implemented for. It was a multi-year effort to think them through and slowly push the hundreds of patches required into the upstream kernel, but finally with 3.12 we got to a point where we can start a full system container entirely as a user.

So how do those user namespaces work? Well, simply put, each user that’s allowed to use them on the system gets assigned a range of unused uids and gids, ideally a whole 65536 of them. You can then use those uids and gids with two standard tools called newuidmap and newgidmap which will let you map any of those uids and gids to virtual uids and gids in a user namespace.
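For illustration, a direct invocation could look like the following (hypothetical; LXC normally calls these tools for you, and $pid stands for the process whose namespace is being mapped):

newuidmap $pid 0 100000 65536   # map uid 0 (in the namespace) onto host uids 100000+
newgidmap $pid 0 100000 65536   # the same mapping for gids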

That means you can create a container with the following configuration:

lxc.id_map = u 0 100000 65536
lxc.id_map = g 0 100000 65536

The above means that I have one uid map and one gid map defined for my container which will map uids and gids 0 through 65536 in the container to uids and gids 100000 through 165536 on the host.

For this to be allowed, I need to have those ranges assigned to my user at the system level with:

stgraber@castiana:~$ grep stgraber /etc/sub* 2>/dev/null
/etc/subgid:stgraber:100000:65536
/etc/subuid:stgraber:100000:65536

LXC has now been updated so that all the tools are aware of those unprivileged containers. The standard paths also have their unprivileged equivalents:

  • /etc/lxc/lxc.conf => ~/.config/lxc/lxc.conf
  • /etc/lxc/default.conf => ~/.config/lxc/default.conf
  • /var/lib/lxc => ~/.local/share/lxc
  • /var/lib/lxcsnaps => ~/.local/share/lxcsnaps
  • /var/cache/lxc => ~/.cache/lxc

Your user, while it can create new user namespaces in which it’ll be uid 0 and will have some of root’s privileges against resources tied to that namespace, will obviously not be granted any extra privileges on the host.

One such thing is creating new network devices on the host or changing the bridge configuration. To work around that, we wrote a tool called “lxc-user-nic” which is the only SETUID binary part of LXC 1.0 and which performs one simple task.
It parses a configuration file and, based on its content, will create network devices for the user and bridge them. To prevent abuse, you can restrict the number of devices a user can request and to what bridge they may be added.

An example is my own /etc/lxc/lxc-usernet file:

stgraber veth lxcbr0 10

This declares that the user “stgraber” is allowed to have up to 10 veth-type devices created and added to the bridge called lxcbr0.

Between what’s offered by the user namespace in the kernel and that setuid tool, we’ve got all that’s needed to run most distributions unprivileged.

Pre-requirements

All examples and instructions I’ll be giving below are expecting that you are running a perfectly up to date version of Ubuntu 14.04 (codename trusty). That’s a pre-release of Ubuntu so you may want to run it in a VM or on a spare machine rather than upgrading your production computer.

The reason to want something that recent is that the rough requirements for well-working unprivileged containers are:

  • Kernel: 3.13 + a couple of staging patches (which Ubuntu has in its kernel)
  • User namespaces enabled in the kernel
  • A very recent version of shadow that supports subuid/subgid
  • Per-user cgroups on all controllers (which I turned on a couple of weeks ago)
  • LXC 1.0 beta2 or higher (released two days ago)
  • A version of PAM with a loginuid patch that’s yet to be in any released version

Those requirements happen to all be true of the current development release of Ubuntu as of two days ago.

LXC pre-built containers

User namespaces come with quite a few obvious limitations. For example, in a user namespace you won’t be allowed to use mknod to create a block or character device, as being allowed to do so would let you access anything on the host. The same thing goes for some filesystems: you won’t, for example, be allowed to do loop mounts or mount an ext partition, even if you can access the block device.
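You can see this for yourself: inside an unprivileged container, something like the following is expected to fail with “Operation not permitted” (a sketch; loop0 is block device major 7, minor 0):

mknod /dev/loop0 b 7 0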

Those limitations, while not necessarily world-ending in day-to-day use, are a big problem during the initial bootstrap of a container, as tools like debootstrap, yum, … usually try to do some of those restricted actions and will fail pretty badly.

Some templates could be tweaked to work, and workarounds such as a modified fakeroot could be used to bypass some of those limitations, but the goal of the LXC project isn’t to require all of our users to be distro engineers, so we came up with a much simpler solution.

I wrote a new template called “download” which, instead of assembling the rootfs and configuration locally, contacts a server which hosts daily pre-built rootfs and configuration for the most common templates.

Those images are built on our Jenkins server using a few machines I have on my home network (a set of powerful x86 builders and a quad-core ARM board). The actual build process is pretty straightforward: a basic chroot is assembled, then the current git master is downloaded and built, and the standard templates are run with the right release and architecture; the resulting rootfs is compressed, a basic config and metadata (expiry, files to template, …) are saved, and the result is pulled by our main server, signed with a dedicated GPG key and published on the public web server.

The client side is a simple template which contacts the server over https (the domain is also DNSSEC-enabled and available over IPv6), grabs signed indexes of all the available images, checks whether the requested combination of distribution, release and architecture is supported and, if it is, grabs the rootfs and metadata tarballs, validates their signature and stores them in a local cache. Any container creation after that point is done using that cache, until the cache entries expire, at which point it’ll grab a new copy from the server.

The current list of images is (as can be requested by passing --list):

---
DIST      RELEASE    ARCH     VARIANT    BUILD
---
debian    wheezy     amd64    default    20140116_22:43
debian    wheezy     armel    default    20140116_22:43
debian    wheezy     armhf    default    20140116_22:43
debian    wheezy     i386     default    20140116_22:43
debian    jessie     amd64    default    20140116_22:43
debian    jessie     armel    default    20140116_22:43
debian    jessie     armhf    default    20140116_22:43
debian    jessie     i386     default    20140116_22:43
debian    sid        amd64    default    20140116_22:43
debian    sid        armel    default    20140116_22:43
debian    sid        armhf    default    20140116_22:43
debian    sid        i386     default    20140116_22:43
oracle    6.5        amd64    default    20140117_11:41
oracle    6.5        i386     default    20140117_11:41
plamo     5.x        amd64    default    20140116_21:37
plamo     5.x        i386     default    20140116_21:37
ubuntu    lucid      amd64    default    20140117_03:50
ubuntu    lucid      i386     default    20140117_03:50
ubuntu    precise    amd64    default    20140117_03:50
ubuntu    precise    armel    default    20140117_03:50
ubuntu    precise    armhf    default    20140117_03:50
ubuntu    precise    i386     default    20140117_03:50
ubuntu    quantal    amd64    default    20140117_03:50
ubuntu    quantal    armel    default    20140117_03:50
ubuntu    quantal    armhf    default    20140117_03:50
ubuntu    quantal    i386     default    20140117_03:50
ubuntu    raring     amd64    default    20140117_03:50
ubuntu    raring     armhf    default    20140117_03:50
ubuntu    raring     i386     default    20140117_03:50
ubuntu    saucy      amd64    default    20140117_03:50
ubuntu    saucy      armhf    default    20140117_03:50
ubuntu    saucy      i386     default    20140117_03:50
ubuntu    trusty     amd64    default    20140117_03:50
ubuntu    trusty     armhf    default    20140117_03:50
ubuntu    trusty     i386     default    20140117_03:50

The template has been carefully written to work on any system that has a POSIX-compliant shell with wget. gpg is recommended but can be disabled if your host doesn't have it (at your own risk).
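
If your host really lacks gpg, recent versions of the template expose a flag to skip validation; it is named --no-validate at the time of writing, but verify the exact name against lxc-create -t download -- --help on your system.

# Skip GPG validation; only do this on hosts without gpg, at your own risk
lxc-create -t download -n p1 -- -d ubuntu -r trusty -a amd64 --no-validate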

The same template can be used against your own server, which I hope will be very useful for enterprise deployments: build the images in a central location and have all the hosts pull them automatically, using our expiry mechanism to keep them fresh.
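
As a sketch of what that could look like (the --server and --keyid options match the template as of this writing, but treat the option names and the example hostname as assumptions to verify against your version):

# Pull images from an internal mirror instead of the default public server
lxc-create -t download -n p1 -- -d ubuntu -r trusty -a amd64 \
    --server images.example.com --keyid <your-signing-key-id>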

While the template was designed to work around limitations of unprivileged containers, it works just as well with system containers, so even on a system that doesn't support unprivileged containers you can do:

lxc-create -t download -n p1 -- -d ubuntu -r trusty -a amd64

And you’ll get a new container running the latest build of Ubuntu 14.04 amd64.

Using unprivileged LXC

Right, so let’s get you started, as I already mentioned, all the instructions below have only been tested on a very recent Ubuntu 14.04 (trusty) installation.
You may want to grab a daily build and run it in a VM.

Install the required packages:

  • sudo apt-get update
  • sudo apt-get dist-upgrade
  • sudo apt-get install lxc systemd-services uidmap

Now for a quick workaround while we wait for our new cgroup manager to land in Ubuntu: put the following into /etc/init/lxc-unpriv-cgroup.conf:

start on starting systemd-logind and started cgroup-lite
script
    set +e
    echo 1 > /sys/fs/cgroup/memory/memory.use_hierarchy
    for entry in /sys/fs/cgroup/*/cgroup.clone_children; do
        echo 1 > $entry
    done
    exit 0
end script

This trick is required because logind doesn’t configure use_hierarchy or clone_children the way LXC needs them.

Now, reboot your machine for those cgroups to get properly reconfigured.
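
Once the machine is back up, a quick read-back of the files the job writes will confirm the workaround took effect; every clone_children file should now contain 1:

# Verify the cgroup settings the upstart job applied at boot
cat /sys/fs/cgroup/memory/memory.use_hierarchy
cat /sys/fs/cgroup/*/cgroup.clone_children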

Then, assign yourself a set of uids and gids with:

  • sudo usermod --add-subuids 100000-165536 $USER
  • sudo usermod --add-subgids 100000-165536 $USER
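
usermod records those ranges in /etc/subuid and /etc/subgid, so a quick grep confirms the allocation; you should see an entry for your user starting at 100000:

# Confirm the subordinate uid/gid ranges were recorded
grep "$USER" /etc/subuid /etc/subgid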

Now create ~/.config/lxc/default.conf with the following content:

lxc.network.type = veth
lxc.network.link = lxcbr0
lxc.network.flags = up
lxc.network.hwaddr = 00:16:3e:xx:xx:xx
lxc.id_map = u 0 100000 65536
lxc.id_map = g 0 100000 65536
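
One convenient way to put that file in place is a heredoc; the mkdir matters since ~/.config/lxc likely doesn't exist yet:

# Create the per-user LXC config directory and default container config
mkdir -p ~/.config/lxc
cat > ~/.config/lxc/default.conf << 'EOF'
lxc.network.type = veth
lxc.network.link = lxcbr0
lxc.network.flags = up
lxc.network.hwaddr = 00:16:3e:xx:xx:xx
lxc.id_map = u 0 100000 65536
lxc.id_map = g 0 100000 65536
EOF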

And /etc/lxc/lxc-usernet with:

<your username> veth lxcbr0 10
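
That entry grants your user the right to create up to 10 veth devices attached to the lxcbr0 bridge. Since the file is root-owned, appending the line takes sudo; a one-liner version:

# Allow this user to attach up to 10 veth interfaces to lxcbr0
echo "$USER veth lxcbr0 10" | sudo tee -a /etc/lxc/lxc-usernet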

And that’s all you need. Now let’s create our first unprivileged container with:

lxc-create -t download -n p1 -- -d ubuntu -r trusty -a amd64

You should see the following output from the download template:

Setting up the GPG keyring
Downloading the image index
Downloading the rootfs
Downloading the metadata
The image cache is now ready
Unpacking the rootfs

---
You just created an Ubuntu container (release=trusty, arch=amd64).
The default username/password is: ubuntu / ubuntu
To gain root privileges, please use sudo.

So it looks like your first container was created successfully. Now let's see if it starts:

ubuntu@trusty-daily:~$ lxc-start -n p1 -d
ubuntu@trusty-daily:~$ lxc-ls --fancy
NAME  STATE    IPV4     IPV6     AUTOSTART
------------------------------------------
p1    RUNNING  UNKNOWN  UNKNOWN  NO

It’s running! At this point, you can get a console using lxc-console or can SSH to it by looking for its IP in the ARP table (arp -n).

One thing you probably noticed above is that the IP addresses for the container aren't listed; that's because, unfortunately, LXC currently can't attach to an unprivileged container's namespaces. That also means that some fields of lxc-info will be empty and that you can't use lxc-attach. However, we're looking into ways to get that sorted in the near future.

There are also a few problems with job control in the kernel and with PAM, so doing a non-detached lxc-start will probably result in a rather weird console where things like sudo will most likely fail. SSH may also fail on some distros. A patch has been sent upstream for this, but I just noticed that it doesn’t actually cover all cases and even if it did, it’s not in any released version yet.

Quite a few more improvements to unprivileged containers are to come before the final 1.0 release next month, and while we certainly don't expect all workloads to be possible with unprivileged containers, this is still a huge improvement on what we had before and a very good building block for a lot more interesting use cases.

Randall Ross: Planet Ubuntu Needs More Awesome - Part 1

Planet Ubuntu - Fri, 2014-01-17 19:47

Could Planet Ubuntu be made more awesome? How would we do it? Where would we start? Perhaps we'd start by seeing who reads it and what those people actually think about it. During the latter weeks of 2013 I conducted a series of polls (on my blog) to determine just that. Going into this effort, I had my opinions. Some of them were validated. Some were not.

What follows is my promised summary of the survey results along with my bonus colour-commentary and recommendations.

TL;DR:
Planet Ubuntu has the potential to be *much* more awesome, and we should seriously consider making it *the* place to visit for all things Ubuntu.

1.

Survey Says:
Members outnumber non-members by a factor of two (give-or-take) as readers of Planet Ubuntu.

Colour Commentary:
I was surprised that the margin wasn't a *lot* bigger. I was expecting a factor of at least 10:1, and certainly not 2:1! Planet Ubuntu is an echo chamber - a place where primarily Ubuntu members speak to themselves. Could we do better? Yes! Why not make Planet Ubuntu a place for everyone? Why not make it *the* place where people (of all types, not just Ubuntu members) come to get the latest information about Ubuntu's collaborators and their Ubuntu thoughts? If this were to be our focus, I think we'd see a lot better news and information on other sites too as they would be hard-pressed to slant their stories (or to miss the point entirely) when the people who are actually building Ubuntu are presenting matters clearly and to a wide audience on a very public site.

Randall Concludes:
We need to make Planet Ubuntu appeal to everyone. We need it to be read primarily by non-members. The ratio needs to be at least 1000:1.

2.

Survey Says: By a wide margin, readers perceive that they derive value from Planet Ubuntu.

Colour Commentary:
This is a very surprising result, and I must admit, I don't share the same opinion. To me, Planet Ubuntu has drifted far away from where it was a few years ago. I can recall visiting daily in those days and reading all kinds of really interesting news and commentary, mostly about Ubuntu, and specifically from people who were at the core of a lot of Ubuntu goings-on. Now, when I come to Planet Ubuntu, I am generally disappointed to find that most of it is not about Ubuntu, and the core contributors rarely chime in. Instead, we have people that perhaps were once really interested and involved in Ubuntu who now have new pet-projects and want to showcase them. I'm all for learning about what people are working on these days, but Planet Ubuntu ought to be mainly about Ubuntu.

Randall Concludes:
I'm going to agree to disagree and be in the minority here. Planet Ubuntu is not as useful as it could be. We are aiming too low.

3.

Survey Says: People primarily want Planet Ubuntu posts to be written by "Ubuntu people", but not necessarily Ubuntu members. There are also some who want to introduce a Canonical relationship, but only for authors who don't work for Canonical.

Colour Commentary:
I'm surprised by the Canonical (or Mark) selected result. This tells me there is pent-up demand for a more official voice, or for more core-contributor stories, but reluctance to restrict authorship to the direct voice of Canonical employees.

There is also demand to let in authors who are Ubuntu contributors but not necessarily Ubuntu members. Overall the data suggests a need to expand authorship. Perhaps Ubuntu Members aren't pulling their blogging weight.

Randall Concludes:
Let's open up Planet Ubuntu to people who have real passion for Ubuntu and who live and breathe it daily. That might mean forgoing the requirement to be an Ubuntu Member, and replacing that requirement with something along the lines of "must have a demonstrated and sustained passion for Ubuntu".

4.

Survey Says: Most people want Planet Ubuntu to be about Ubuntu.

Colour Commentary:
This is expected, and I fully agree. I want Planet Ubuntu to live up to its name. I would love it if people writing there would keep their articles focused, at least to the point there's a clear Ubuntu tie-in. That's what makes it worthy of a read instead of the gerjillion other sites on the web that aren't about Ubuntu. Do we really need a site that has Ubuntu in its name that is primarily not about Ubuntu?

Here's an anecdote. Back in the early 2000s (2002, I recall), there was a period when the US government was considering an invasion of Iraq. At the same time, there was a large popular movement against the idea, with demonstrations and marches. I was in San Francisco at the time and recall seeing large numbers of people marching on Van Ness Ave, near SF City Hall, carrying placards saying "No Invasion" and "Peace not War", i.e. the stuff one would expect to see at an anti-invasion demonstration. In the same march, I also saw placards with slogans like "End Poverty", "Stop Animal Cruelty", and other noteworthy causes. What struck me was just how out of place they were, and how obvious it was that some people were trying to capitalize on the popularity of the demonstration for other "pet" causes. I was saddened that the main cause was being diluted. (Note that I'm not saying these other causes weren't worthy, but I am saying this was not the place for them. People were trying to stop a war.)

Randall Concludes:
I'm going to advocate that we keep Planet Ubuntu about Ubuntu and encourage everyone who writes here to respect the stated title of the site.

5.

Survey Says: Planet Ubuntu is a "watering hole". Our readers come here often.

Colour Commentary:
This is encouraging in that it shows that we have reader loyalty. People keep coming back. This doesn't say why though. Are people coming back in the hopes that there will be something interesting about Ubuntu? When they arrive are they pleasantly surprised? Or, are they like me and longing for more Ubuntu? This survey question begs for more questions to get at the reasons.

Randall Concludes:
We have loyal readers. Let's find out why.

---
To be continued...
I will continue with a summary of results 6 through 10 soon. In the meantime, please share your thoughts in the comments.
