
The Fridge: Ubuntu 13.10 (Saucy Salamander) reaches End of Life on July 17 2014

Planet Ubuntu - Fri, 2014-06-20 20:17

Ubuntu announced its 13.10 (Saucy Salamander) release almost 9 months ago, on October 17, 2013. This was the second release with our new 9 month support cycle and, as such, the support period is now nearing its end and Ubuntu 13.10 will reach end of life on Thursday, July 17th. At that time, Ubuntu Security Notices will no longer include information or updated packages for Ubuntu 13.10.

The supported upgrade path from Ubuntu 13.10 is via Ubuntu 14.04 LTS. Instructions and caveats for the upgrade may be found at:

https://help.ubuntu.com/community/TrustyUpgrades
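
For those upgrading from the command line, the usual route (the page above has the full instructions and caveats) is via update-manager-core:

sudo apt-get install update-manager-core
sudo do-release-upgrade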

Ubuntu 14.04 LTS continues to be actively supported with security updates and select high-impact bug fixes. Announcements of security updates for Ubuntu releases are sent to the ubuntu-security-announce mailing list, information about which may be found at:

https://lists.ubuntu.com/mailman/listinfo/ubuntu-security-announce

Since its launch in October 2004 Ubuntu has become one of the most highly regarded Linux distributions with millions of users in homes, schools, businesses and governments around the world. Ubuntu is Open Source software, costs nothing to download, and users are free to customise or alter their software in order to meet their needs.

Originally posted to the ubuntu-announce mailing list on Fri Jun 20 05:00:13 UTC 2014 by Adam Conrad on behalf of the Ubuntu Release Team

Marco Ceppi: Deploying OpenStack with just two machines. The MaaS and Juju way.

Planet Ubuntu - Fri, 2014-06-20 18:46

A lot of people have been asking lately about the minimum number of nodes required to set up OpenStack, and there seems to be a lot of buzz around setting up OpenStack with Juju and MAAS. Some would speculate it has something to do with the amazing keynote presentation by Mark Shuttleworth; others would concede it’s just because charms are so damn cool. Whatever the reason, my answer is as follows:

You really want 12 nodes to do OpenStack right, even more for high availability, but at a bare minimum you only need two nodes.

So, naturally, as more people dive in to OpenStack and evaluate how they can use it in their organizations, they jump at the thought “Oh, I have two servers lying around!” and immediately want to know how to achieve such a feat with Juju and MAAS. So I took an evening to do just that with my small cluster, and to share the process.

This post makes a few assumptions. First, that you have already set up MAAS, installed Juju, and configured Juju to speak to your MAAS environment. Secondly, that the two-machine allotment refers to nodes available after MAAS itself is set up, and that these two nodes are already enlisted in MAAS.

My setup

Before I dive much deeper, let me briefly show my setup.

I realize the photo is terrible; the Nexus 4 just doesn’t have a super stellar camera compared to other phones on the market. For the purposes of this demo I’m using my home MAAS cluster, which consists of three Intel NUCs, a gigabit switch, a switched PDU, and an old Dell Optiplex with an extra NIC which acts as the MAAS region controller. All the NUCs have been enlisted in MAAS and commissioned already.

Diving in

Once MAAS and Juju are configured you can go ahead and run juju bootstrap. This will provision one of the MAAS nodes and use it as the orchestration node for your Juju environment. This can take some time, especially if you don’t have the fastpath installer selected. If you get a timeout during your first bootstrap, don’t fret! You can increase the bootstrap timeout in the environments.yaml file with the following directive in your maas definition: bootstrap-timeout: 900. During the video I increase this timeout to 900 seconds in the hope of eliminating this issue.
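
For reference, a minimal maas definition in ~/.juju/environments.yaml with the increased timeout looks roughly like this (the server address and API key below are placeholders, not real values):

environments:
  maas:
    type: maas
    maas-server: 'http://192.168.1.10/MAAS'
    maas-oauth: '<your-MAAS-API-key>'
    admin-secret: 'something-secret'
    bootstrap-timeout: 900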

After you’ve bootstrapped it’s time to get deploying! If you care to use the Juju GUI, now would be the time to deploy it. You can do so by running the following command:

juju deploy --to 0 juju-gui

To avoid having Juju spin up another machine, we tell it to simply place the GUI on machine 0.

NOTE: the --to flag is crazy dangerous. Not all services can be safely co-located with each other. This is tantamount to “hulk smashing” services and will likely break things. The Juju GUI is designed to live alongside the bootstrap node, so this particular case is safe. Running this elsewhere will likely result in bad things. You have been warned.

Now it’s time to get OpenStack going! Run the following commands:

juju deploy --to lxc:0 mysql
juju deploy --to lxc:0 keystone
juju deploy --to lxc:0 nova-cloud-controller
juju deploy --to lxc:0 glance
juju deploy --to lxc:0 rabbitmq-server
juju deploy --to lxc:0 openstack-dashboard
juju deploy --to lxc:0 cinder

To break this down, what you’re doing is deploying the minimum number of components required to support OpenStack, only you’re deploying them to machine 0 (the bootstrap node) in LXC containers. If you don’t know what LXC containers are, they are very lightweight Linux containers that behave much like virtual machines but don’t produce a lot of overhead, allowing you to safely compartmentalize these services. After a few minutes these machines will begin to pop online, but in the meantime we can press on because Juju waits for nothing!

The next step is to deploy the nova-compute node. This is the powerhouse behind OpenStack and is the hypervisor for launching instances. As such, we don’t really want to virtualize it, as hypervisors like KVM (or Xen, etc.) don’t work well inside of LXC containers.

juju deploy nova-compute

That’s it. MAAS will allocate the second (and final, if you only have two) node to nova-compute. Now, while all these machines are popping up and becoming ready, let’s create relations. The magic of Juju and what it can do is in creating relations between services. It’s what turns a bunch of scripts into LEGOs for the cloud. You’ll need to run the following commands to create all the relations necessary for the OpenStack components to talk to each other:

juju add-relation mysql keystone
juju add-relation nova-cloud-controller mysql
juju add-relation nova-cloud-controller rabbitmq-server
juju add-relation nova-cloud-controller glance
juju add-relation nova-cloud-controller keystone
juju add-relation nova-compute nova-cloud-controller
juju add-relation nova-compute mysql
juju add-relation nova-compute rabbitmq-server:amqp
juju add-relation nova-compute glance
juju add-relation glance mysql
juju add-relation glance keystone
juju add-relation glance cinder
juju add-relation mysql cinder
juju add-relation cinder rabbitmq-server
juju add-relation cinder nova-cloud-controller
juju add-relation cinder keystone
juju add-relation openstack-dashboard keystone

Whew, I know that’s a lot to go through, but OpenStack isn’t a walk in the park. It’s a pretty intricate system with lots of dependencies. The good news is we’re nearly done! No doubt most of the nodes have turned green in the GUI or are marked as “started” in the output of juju status.

One of the last things to do is configure the cloud. Since this is all running against Trusty, we get the latest OpenStack installed. All that’s left is to configure the admin password in Keystone so we can log in to the dashboard.

juju set keystone admin-password="helloworld"

Set the password to whatever you’d like. Once complete, run juju status openstack-dashboard, find the public-address for that unit, load that address in your browser and navigate to /horizon. (For example, if the public-address was 10.0.1.2 you would go to http://10.0.1.2/horizon.) Log in with the username admin and the password you set on the command line. You should now be in the Horizon dashboard for OpenStack. Click on Admin -> System Panel -> Hypervisors and confirm you have a hypervisor listed.
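
If you just want the address, something like this should pull it out of the status output (assuming a single openstack-dashboard unit):

juju status openstack-dashboard | grep public-address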

Congratulations! You’ve created a condensed OpenStack installation.

Emma Marshall: Colorado Ubuntu Team: Operation 'Spread Ubuntu' is Underway!

Planet Ubuntu - Fri, 2014-06-20 16:39
After another successful Colorado Ubuntu Team meeting this week, it's clear that our LoCo team is gaining momentum for a busy summer of spreading Ubuntu. The purpose of this week's Ubuntu Hour was to finalize a welcome handout with ways to get started with Ubuntu and reasons to use the best operating system in the world. The goal is to provide the handout with Ubuntu CDs, a sticker sheet and Ubuntu key stickers to cover up those Windows keys. The document was completed with final edits Wednesday and the CoLoCo CD kits were prepared to send to local volunteers yesterday.

On top of the incredible response from the team to complete the handout, I received a handful of volunteers for CD distribution throughout Colorado. The volunteers below will be available with install CDs in the following Colorado cities:
  • Neal McBurnett: Boulder
  • Chris Yoder: Longmont
  • Ryan Nicholson: Fort Collins
  • Emma Marshall: Denver & Aurora
Volunteer emails will be included in the Monday announcement to the Colorado Team mailing list. I'm very happy with all the support and teamwork going on within the Colorado Ubuntu Team and can't wait for our upcoming events. In addition to more Ubuntu Hours, we're planning an epic InstallFest this Summer.

Here's a close look at our 2-sided handout:


Thank you to the Colorado Ubuntu Team for helping spread Ubuntu! We are on an excellent path to a successful summer!

Rohan Garg: The KDE Randa meetings need your help

Planet Ubuntu - Fri, 2014-06-20 16:04

The Randa meetings provide an excellent opportunity for KDE developers to come together for a week-long hack session to fix bugs in various KDE components while collaborating on new features.

This year we have some amazing things planned, with contributors working across the board on delivering an amazing KDE Frameworks 5 experience, a KDE frameworks SDK, a KDE frameworks book, the usual bug fixing and writing new features for the KDE Multimedia stack and much much more.

So please, go ahead and donate to our Randa fundraiser here, because when these contributors come together, amazing things happen :)


Kubuntu: KDE Applications and Development Platform 4.13.2

Planet Ubuntu - Fri, 2014-06-20 16:00

Packages for the release of KDE SC 4.13.2 are available for Kubuntu 12.04 LTS, 13.10 and our development release. You can get them from the Kubuntu Backports PPA.

Bugs in the packaging should be reported to kubuntu-ppa on Launchpad; bugs in the software itself should be reported to KDE.

To update, use the Software Repository Guide to add the following repository to your software sources list:
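
Assuming the usual address for the Kubuntu Backports PPA, ppa:kubuntu-ppa/backports (check the PPA page to confirm it covers your release), it can also be added from the command line:

sudo add-apt-repository ppa:kubuntu-ppa/backports
sudo apt-get update && sudo apt-get dist-upgrade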

Harald Sitter: Weekly Plasma 5 Live ISO

Planet Ubuntu - Fri, 2014-06-20 15:07

With KDE Frameworks 5 and Plasma 5 not too far away, our awesome Blue Systems build crew has now increased the cadence at which we publish new Neon 5 Live ISO images for you to test. I am very happy to announce that from now on there will be a new ISO waiting for you every Friday at:

http://files.kde.org/snapshots/

Neon 5 provides daily-built packages of the latest and greatest KDE software based on KDE Frameworks 5, on top of Kubuntu, so what you get every Friday is most certainly no older than 24 hours. This makes the ISOs a perfect way to test and report bugs in the upcoming release, as well as track the overall progress of things.
If you’d like continuous (albeit less trouble-free) updates for an existing Kubuntu 14.04 or Neon 5 installation you can of course use the PPAs directly. Beware! There be dragons ;)

If you would like to support future KDE Development please check out the current fundraiser for the KDE Randa Meetings 2014.

Harald Sitter: Mozilla Firefox Testing

Planet Ubuntu - Fri, 2014-06-20 15:07

Next in Kubuntu:

Mozilla Firefox as web browser.

Kubuntu 14.04 will contain Mozilla Firefox as default browser. The Kubuntu team would like to invite everyone to give the all new Kubuntu 14.04 Alpha 1 a whirl and share their thoughts with us on our Google+ page.

 

Bryan Quigley: Lead (Pb) in the USA

Planet Ubuntu - Fri, 2014-06-20 13:45

Live in the US?  Did you know that we put Lead (Pb), a known neurotoxin, in:

  • Garden hoses (that have been shown to leak Lead into the water)
  • Power cords (including laptop cords)
  • Carseats (mostly to the base, some other toxins have been found in the seat itself)
  • Likely more; it’s apparently not uncommon for it to be added to plastic…

In the EU you aren’t allowed to put Lead in the above.  I think it’s time we joined them!

  • Sign the petition on the White House We the People site.
  • Donate to this Indiegogo campaign to test carseats for toxic chemicals. (They are only asking for $10,000! ~ mostly to buy the carseats)
  • Share this post / the above with friends, family, and any celebrities you happen to know on Twitter, etc.  #NoSafeAmountOfLead.
  • Bonus: Watch episode 7 of the new Cosmos which ends with Neil deGrasse Tyson saying there is no safe amount of lead.

Please let me know if you have trouble doing any of the above.

Anthony Hook: Let's talk about Chromebooks

Planet Ubuntu - Fri, 2014-06-20 08:36

It’s all about purpose. This is the most important thing to keep in mind when you’re attempting to compare something, or judge how useful it is.

What do you look to accomplish with a coupe sports car? Surely you don’t buy one and claim it sucks because you can’t fit your family of four. That isn’t the intended purpose.

There are a lot of people trying to replace their day-to-day machine with Chromebooks and expecting a cheap 1-to-1 replacement. Whelp, good luck, you may end up frustrated. Let’s take a second and scope your expectations properly, so you know what you’re getting into.

Here’s an example of something I’ve witnessed:

ChromeOS sucks, it doesn’t have $application.

This is one of the most prominent types of comments or articles on the internet. This argument is invalid, however, in a proper scoping of intended purpose of the device.

Frankly, if you’re trying to get one of these for very cheap to replace every experience you have on a Windows, Mac, or Linux computer, you’re going to have a bad time. You’re just not going to get that out of a Chromebook, but that’s okay. Or maybe it isn’t, for you, depending on what you’re looking to do.

Are you looking for a cheap Facebook or Pinterest machine? Awesome. Some editing using Google Drive of documents and spreadsheets? Yep. Email? Check. Netflix? Aye, it can do that too. Google Hangouts? Like a champ. Remote access to a VDI environment with VMware Horizon View? Yes! More on that later…

There are a ton of add-ons and ‘web apps’ you can install from the Chrome Web Store, along with alternatives to many familiar desktop applications.

It is built to be a web browser; it is a web browser. That’s it. And if that’s what you want, it is perfect. If you require something else, then it isn’t. And that is okay.

written and posted from a Chromebook.

Ubuntu Scientists: Team Wiki Pages Update: June 19, 2014

Planet Ubuntu - Thu, 2014-06-19 23:02

Today, Svetlana Belkin (belkinsa) did some work on the team wiki pages, mainly the home page.  The home page now has a cleaner look and covers the basics, from the introduction to the team all the way to how to contact the team and how to join it.  Svetlana also removed some of the excess “tabs” on the menu bar and added a “Site Map” tab, where users can see what other pages are there.

There is still work to be done on the home page, mainly with the menu, and a lot of work on the other team wiki pages, as stated here and in the UOS session.  Hopefully the team’s wiki pages will be finished by the end of July 2014, making them clearer for newcomers.

 



Ubuntu Podcast from the UK LoCo: Episode 12 – The One Without the Ski Trip

Planet Ubuntu - Thu, 2014-06-19 19:00

We’re back with Season Seven, Episode Twelve of the Ubuntu Podcast! Alan Pope, Mark Johnson, Tony Whitmore, and Laura Cowen are drinking tea and eating very rich chocolate cake (like this one, only more chocolatey) in Studio L.


In this week’s show:

  • This command, to order file-systems by percent usage while keeping the header in place (or order by file-system size with -k2n):

df -hP | column -t | tee >( head -n1 > /dev/stderr ) | grep -v Filesystem | sort -k5nr

  • And we read your feedback – thanks for sending it in!

We’ll be back next week, so please send your comments and suggestions to: podcast@ubuntu-uk.org
Join us on IRC in #uupc on Freenode
Leave a voicemail via phone: +44 (0) 203 298 1600, sip: podcast@sip.ubuntu-uk.org and skype: ubuntuukpodcast
Follow us on Twitter
Find our Facebook Fan Page
Follow us on Google+

Raphaël Hertzog: Convince your company to contribute to Debian Long Term Support

Planet Ubuntu - Thu, 2014-06-19 13:56

The press picked up the recent press release about Debian LTS but mainly to mention the fact that it’s up and running. The call for help is almost never mentioned.

It’s a pity because while it’s up, it’s not really running satisfactorily yet. As of today (2014-06-19), 36 packages in squeeze need a security update, yet squeeze-lts has only seen 7 updates.

As usual what we lack is contributors doing the required work, but in this specific case, there’s a simple solution: pay people to do the required work. This extended support is mainly for the benefit of corporate users and if they see value in Debian LTS, it should not be too difficult to convince companies to support the project.

With some other Debian developers, we have gone out of our way to make it super easy for companies to support the Debian LTS project. We have created a service offer for Debian-using companies.

Freexian (my company) collects money from all contributing companies (by way of invoices) and then uses the money collected to pay Debian contributors who will prepare security updates. On top of this we added some concrete benefits for contributing companies such as the possibility to indicate which packages should have priority, or even the possibility to provide functional tests to ensure that a security update doesn’t introduce a regression in their production setup.

To do a good job of maintaining Debian Squeeze, our goal is to fund the equivalent of a full-time position. We’re currently far from there, with only 13 hours per month funded by 4 companies. That makes a current average of 3.25 hours/month funded by each contributing company, for a price of 276 EUR/month or 3315 EUR/year.

This is not much if you compare it with the price those companies would have to pay to upgrade all their Debian 6 machines now instead of keeping them for two supplementary years.

Assuming the average contribution level will stay the same, we only need the support of 50 other companies in the world. That’s really not much compared to the thousands of companies using Debian. Can you convince your own company? Grab the subscription form and have a chat with your company management.

Help us reach that goal, share this article and the link to Freexian’s Debian LTS offer. Long Term Support is important if we want Debian to be a good choice for servers and big deployments. We need to make Squeeze LTS a success!

Thank you!


Canonical Design Team: Latest from the web team — June 2014

Planet Ubuntu - Thu, 2014-06-19 00:34

We’re now almost half way through the year and only a few days until summer officially starts here in the UK!

In the last few weeks we’ve worked on:

  • Responsive ubuntu.com: we’ve finished publishing the series on making ubuntu.com responsive on the design blog
  • Ubuntu.com: we’ve released a hub for our legal documents and information, and we’ve created homepage takeovers for Mobile Asia Expo
  • Juju GUI: we’ve planned work for the next cycle, sketched scenarios based on the new personas, and launched the new inspector on the left
  • Fenchurch: we’ve finished version 1 of our new asset server, and we’ve started work on the new Ubuntu partners site
  • Ubuntu Insights: we’ve published the latest iteration of Ubuntu Insights, now with a dedicated press area
  • Chinese website: we’ve released the Chinese version of ubuntu.com

And we’re currently working on:

  • Responsive Day Out: I’m speaking at the Responsive Day Out conference in Brighton on the 27th on how we made ubuntu.com responsive
  • Responsive ubuntu.com: we’re working on the final tweaks and improvements to our code and documentation so that we can release to the public in the next few weeks
  • Juju GUI: we’re now starting to design based on the scenarios we’ve created
  • Fenchurch: we’re now working on Juju charms for the Chinese site asset server and Ubuntu partners website
  • Partners: we’re finishing the build of the new Ubuntu partners site
  • Chinese website: we’ll be adding cloud and server sections to the site
  • Cloud Installer: we’re working on the content for the upcoming Cloud Installer beta pages

If you’d like to join the web team, we are currently looking for a web designer and a front end developer to join the team!

Working on Juju personas and scenarios.

Have you got any questions or suggestions for us? Would you like to hear about any of these projects and tasks in more detail? Let us know your thoughts in the comments.

Robert Ancell: GTK+ applications in Unity 8 (Mir)

Planet Ubuntu - Wed, 2014-06-18 22:09
Ryan Lortie and I have been tinkering away with getting GTK+ applications to run in Unity 8 and, as you can see below, it works!

This shows me running the Unity 8 preview session. Simple Scan shows up as an option and can be launched and perform a scan.

This is only a first start, and there's still lots of work to be done. In particular:

  • Applications need to set X-Ubuntu-Touch=true in their .desktop files to show in Unity 8.
  • Application icons from the gnome theme do not show (bug).
  • GTK+ applications don't go fullscreen (bug).
  • No cursors changes (bug).
  • We only support single window applications because we can't place/focus the subwindows yet (bug). We're currently faking menus and tooltips by drawing them onto the same surface.

If you are using Ubuntu 14.10 you can install the packages for this from a PPA:

$ sudo apt-add-repository ppa:ubuntu-desktop/gtk-mir
$ sudo apt-get update
$ sudo apt-get upgrade

The PPA contains a version of GTK+ with Mir support, fixes for libraries that assume you are running in X and a few select applications patched so they show in Unity 8.
The Mir backend currently lives on the wip/mir branch of the GTK+ git repository. We will keep developing it there until it is complete enough to propose for GTK+ master. We have updated jhbuild to support Mir so we can easily build and test this backend going forward.
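
If you want to try the backend from source, the branch can be checked out directly from the GTK+ repository (assuming the GNOME git hosting of the time):

git clone git://git.gnome.org/gtk+
cd gtk+
git checkout wip/mir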

Canonical Design Team: Making ubuntu.com responsive: final thoughts

Planet Ubuntu - Wed, 2014-06-18 13:08

This post is part of the series ‘Making ubuntu.com responsive‘.

There are several resources out there on how to create responsive websites, but they tend to go through the process in an ideal scenario, where the project starts with a blank slate, from scratch.

That’s why we thought it would be nice to share the steps we took in converting our main website and framework, ubuntu.com, into a fully responsive site, with all the constraints that come from working on legacy code, with very little time available, while making sure that the site was kept up-to-date and responding to the needs of the business.

Before we started this process, the idea of converting ubuntu.com seemed like a herculean task. It was only because we divided the work into several stages and smaller projects, tightened the scope, and kept things simple that it was possible to do it.

We learned a lot throughout this past year or so, and there is a lot more we want to do. We’d love to hear about your experience of similar projects, suggestions on how we can improve, tools we should look into, books we should read, articles we should bookmark, and things we should try, so please do leave us your thoughts in the comments section.

Here is the list of all the posts in the series:

  1. Intro
  2. Setting the rules
  3. Making the rules a reality
  4. Pilot projects
  5. Lessons learned
  6. Scoping the work
  7. Approach to content
  8. Making our grid responsive
  9. Adapting our navigation to small screens
  10. Dealing with responsive images
  11. Updating font sizes and increasing readability
  12. Our Sass architecture
  13. Ensuring performance
  14. JavaScript considerations
  15. Testing on multiple devices

Note: I will be speaking about making ubuntu.com responsive at the Responsive Day Out conference in Brighton, on the 27th June. See you there!

James Page: How we scaled OpenStack to launch 168,000 cloud instances

Planet Ubuntu - Wed, 2014-06-18 09:28

In the run up to the OpenStack summit in Atlanta, the Ubuntu Server team had its first opportunity to test OpenStack at real scale.

AMD made available 10 SeaMicro 15000 chassis in one of their test labs. Each chassis holds 64 servers, each with 4 cores and 2 threads per core (8 logical cores), 32GB of RAM and 500GB of storage attached via a storage fabric controller – creating the potential to scale an OpenStack deployment to a large number of compute nodes in a small rack footprint.

As you would expect, we chose the best tools for deploying OpenStack:

  • MAAS – Metal-as-a-Service, providing commissioning and provisioning of servers.
  • Juju – The service orchestration for Ubuntu, which we use to deploy OpenStack on Ubuntu using the OpenStack charms.
  • OpenStack Icehouse on Ubuntu 14.04 LTS.
  • CirrOS – a small footprint linux based Cloud OS

MAAS has native support for enlisting a full SeaMicro 15k chassis in a single command – all you have to do is provide it with the MAC address of the chassis controller and a username and password.  A few minutes later, all servers in the chassis will be enlisted into MAAS ready for commissioning and deployment:

maas local node-group probe-and-enlist-hardware \
  nodegroup model=seamicro15k mac=00:21:53:13:0e:80 \
  username=admin password=password power_control=restapi2

Juju has been the Ubuntu Server team’s preferred method for deploying OpenStack on Ubuntu for as long as I can remember; Juju uses Charms to encapsulate the knowledge of how to deploy each part of OpenStack (a service) and how each service relates to each other – an example would include how Glance relates to MySQL for database storage, Keystone for authentication and authorization and (optionally) Ceph for actual image storage.

Using the charms and Juju, it’s possible to deploy complex OpenStack topologies using bundles, a yaml format for describing how to deploy a set of charms in a given configuration – take a look at the OpenStack bundle we used for this test to get a feel for how this works.
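
For anyone who hasn’t seen one, here is a heavily trimmed, hypothetical sketch of what a juju-deployer bundle looks like; the service names and unit counts are illustrative only, not the bundle we used:

trusty-icehouse:
  series: trusty
  services:
    mysql:
      charm: cs:trusty/mysql
      num_units: 1
    keystone:
      charm: cs:trusty/keystone
      num_units: 1
  relations:
    - [ keystone, mysql ]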

Starting out small(ish)

Not all ten chassis were available from the outset of testing, so we started off with two chassis of servers to test and validate that everything was working as designed.   With 128 physical servers, we were able to put together a Neutron based OpenStack deployment with the following services:

  • 1 Juju bootstrap node (used by Juju to control the environment), Ganglia Master server
  • 1 Cloud Controller server
  • 1 MySQL database server
  • 1 RabbitMQ messaging server
  • 1 Keystone server
  • 1 Glance server
  • 3 Ceph storage servers
  • 1 Neutron Gateway network forwarding server
  • 118 Compute servers

We described this deployment using a Juju bundle, and used the juju-deployer tool to bootstrap and deploy the bundle to the MAAS environment controlling the two chassis.  Total deployment time for the two chassis, to the point of a usable OpenStack cloud, was around 35 minutes.

At this point we created 500 tenants in the cloud, each with its own private network (using Neutron), connected to the outside world via a shared public network.  The immediate impact of doing this is that Neutron creates dnsmasq instances, Open vSwitch ports and associated network namespaces on the Neutron Gateway data forwarding server – seeing this many instances of dnsmasq on a single server is impressive – and the server dealt with the load just fine!

Next we started creating instances; we looked at using Rally for this test, but it does not currently support using Neutron for instance creation testing, so we went with a simple shell script that created batches of servers (we used a batch size of 100 instances) and then waited for them to reach the ACTIVE state.  We used the CirrOS cloud image (developed and maintained by the Ubuntu Server team’s very own Scott Moser) with a custom Nova flavor with only 64 MB of RAM.

We immediately hit our first bottleneck – by default, the Nova daemons on the Cloud Controller server will spawn sub-processes equivalent to the number of cores that the server has.  Neutron does not do this and we started seeing timeouts on the Nova Compute nodes waiting for VIF creation to complete.  Fortunately Neutron in Icehouse has the ability to configure worker threads, so we updated the nova-cloud-controller charm to set this configuration to a sensible default, and provide users of the charm with a configuration option to tweak this setting.  By default, Neutron is configured to match what Nova does, 1 process per core – using the charm configuration this can be scaled up using a simple multiplier – we went for 10 on the Cloud Controller node (80 neutron-server processes, 80 nova-api processes, 80 nova-conductor processes).  This allowed us to resolve the VIF creation timeout issue we hit in Nova.
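
As a sketch, assuming the charm exposes the knob as worker-multiplier (the option name may differ between charm revisions), applying the setting is a one-liner:

juju set nova-cloud-controller worker-multiplier=10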

At around 170 instances per compute server, we hit our next bottleneck; the Neutron agent status on compute nodes started to flap, with agents being marked down as instances were being created.  After some investigation, it turned out that the time required to parse and then update the iptables firewall rules at this instance density took longer than the default agent timeout – hence why agents kept dropping out from Neutron’s perspective.  This resulted in virtual interface (VIF) creation timing out and we started to see instance activation failures when trying to create more than a few instances in parallel.  Without an immediate fix for this issue (see bug 1314189), we took the decision to turn Neutron security groups off in the deployment and run without any VIF level iptables security.  This was applied using the nova-compute charm we were using, but is obviously not something that will make it back into the official charm in the Juju charm store.

With the workaround in place on the Compute servers, we were able to create 27,000 instances on the 118 compute nodes. The API call times to create instances from the testing endpoint remained pretty stable during this test, however as the Nova Compute servers got heavily loaded, the amount of time taken for all instances to reach the ACTIVE state did increase:

Doubling up

At this point AMD had another two chassis racked and ready for use so we tore down the existing two chassis, updated the bundle to target compute services at the two new chassis and re-deployed the environment.  With a total of 256 servers being provisioned in parallel, the servers were up and running within about 60 minutes, however we hit our first bottleneck in Juju.

The OpenStack charm bundle we use has a) quite a few services and b) a lot of relations between services – Juju was able to deploy the initial services just fine, however when the relations were added, the load on the Juju bootstrap node went very high and the Juju state service on this node started to throw a large number of errors and became unresponsive – this has been reported back to the Juju core development team (see bug 1318366).

We worked around this bottleneck by bringing up the original two chassis in full, and then adding each new chassis in series to avoid overloading the Juju state server in the same way.  This obviously takes longer (about 35 minutes per chassis) but did allow us to deploy a larger cloud with an extra 128 compute nodes, bringing the total number of compute nodes to 246 (118+128).

And then we hit our next bottleneck…

By default, the RabbitMQ packaging in Ubuntu does not explicitly set a file descriptor ulimit so it picks up the Ubuntu defaults – which are 1024 (soft) and 4096 (hard).  With 256 servers in the deployment, RabbitMQ hits this limit on concurrent connections and stops accepting new ones.  Fortunately it’s possible to raise this limit in /etc/default/rabbitmq-server – and as we had deployed RabbitMQ using the rabbitmq-server charm, we were able to update the charm to raise this limit to something sensible (64k) and push that change into the running environment.  RabbitMQ restarted, problem solved.
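
The change itself is tiny; as a sketch, a single line in /etc/default/rabbitmq-server (picked up by the init script when RabbitMQ restarts) raises the limit to 64k:

ulimit -n 65536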

With the 4 chassis in place, we were able to scale up to 55,000 instances.

Ganglia was letting us know that load on the Nova Cloud Controller during instance setup was extremely high (15-20), so we decided at this point to add another unit to this service:

juju add-unit nova-cloud-controller

and within 15 minutes we had another Cloud Controller server up and running, automatically configured for load balancing of API requests with the existing server and sharing the load for RPC calls via RabbitMQ.   Load dropped, instance setup time decreased, instance creation throughput increased, problem solved.

Whilst we were working through these issues and performing the instance creation, AMD had another two chassis (6 & 7) racked, so we brought them into the deployment adding another 128 compute nodes to the cloud bringing the total to 374.

And then things exploded…

The number of instances that can be created in parallel is driven by two factors – 1) the number of compute nodes and 2) the number of workers across the Nova Cloud Controller servers.  However, with six chassis in place, we were not able to increase the parallel instance creation rate as much as we wanted to without getting connection resets between Neutron (on the Cloud Controllers) and the RabbitMQ broker.

The learning from this is that Neutron+Nova makes for an extremely noisy OpenStack deployment from a messaging perspective, and a single RabbitMQ server appeared to not be able to deal with this load.  This resulted in a large number of instance creation failures so we stopped testing and had a re-think.

A change in direction

After the failure we saw in the existing deployment design, and with more chassis still being racked by our friends at AMD, we still wanted to see how far we could push things; however with Neutron in the design, we could not realistically get past 5-6 chassis of servers, so we took the decision to remove Neutron from the cloud design and run with just Nova networking.

Fortunately this is a simple change to make when deploying OpenStack using charms as the nova-cloud-controller charm has a single configuration option to allow Neutron and Nova networkings to be configured. After tearing down and re-provisioning the 6 chassis:

juju destroy-environment maas
juju-deployer --bootstrap -c seamicro.yaml -d trusty-icehouse

with the revised configuration, we were able to create instances in batches of 100 at a respectable throughput of initially 4.5/sec – although this did degrade as load on compute servers went higher.  This allowed us to hit 75,000 running instances (with no failures) in 6hrs 33 mins, pushing through to 100,000 instances in 10hrs 49mins – again with no failures.

As we saw in the smaller test, the API invocation time was fairly constant throughout the test, with the total provisioning time through to ACTIVE state increasing due to loading on the compute nodes:

Status check

OK – so we are now running an OpenStack Cloud on Ubuntu 14.04 across 6 SeaMicro chassis (1,2,3,5,6,7 – 4 comes later) – a total of 384 servers (give or take one or two which would not provision).  The cumulative load across the cloud at this point was pretty impressive – Ganglia does a pretty good job at charting this:

AMD had two more chassis (8 & 9) in the racks which we had enlisted and commissioned, so we pulled them into the deployment as well.  This did take some time – Juju was grinding pretty badly at this point and just running 'juju add-unit -n 63 nova-compute-b6' was taking 30 minutes to complete (reported upstream – see bug 1317909).

After a couple of hours we had another ~128 servers in the deployment, so we pushed on and created some more instances through to the 150,000 mark – as the instances were landing on the servers in the 2 new chassis, the load on the individual servers did increase more rapidly, so instance creation throughput slowed down faster, but the cloud managed the load.

Tipping point?

Prior to starting testing at any scale, we had some issues with one of the chassis (4) which AMD had resolved during testing, so we shoved that back into the cloud as well; after ensuring that the 64 extra servers were reporting correctly to Nova, we started creating instances again.

However, the instances kept scheduling onto the servers in the previous two chassis we added (8 & 9), with the new nodes not getting any instances.  It turned out that the servers in chassis 8 & 9 were AMD-based servers with twice the memory capacity; by default, Nova does not look at VCPU usage when making scheduling decisions, so as these 128 servers had more remaining memory capacity than the 64 new servers in chassis 4, they were still being targeted for instances.

Unfortunately I’d hopped onto the plane from Austin to Atlanta for a few hours so I did not notice this – and we hit our first 9 instance failures.  The 128 servers in Chassis 8 and 9 ended up with nearly 400 instances each – severely over-committing on CPU resources.

A few tweaks to the scheduler configuration, specifically turning on the CoreFilter and setting the CPU overcommit ratio to 32, applied to the Cloud Controller nodes using the Juju charm, and instances started to land on the servers in chassis 4.  This seems like a sane thing to do by default, so we will add this to the nova-cloud-controller charm with a configuration knob to allow the overcommit to be altered.
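
For reference, the equivalent settings in nova.conf on the Cloud Controller look roughly like the following (Icehouse-era option names, shown as an illustration rather than the exact charm change):

scheduler_default_filters = RetryFilter,AvailabilityZoneFilter,RamFilter,ComputeFilter,CoreFilter
cpu_allocation_ratio = 32.0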

At the end of the day we had 168,000 instances running on the cloud – this may have got some coverage during the OpenStack summit….

The last word

Having access to this many real servers allowed us to exercise OpenStack, Juju, MAAS and our reference Charm configurations in a way that we have not been able to undertake before.  Exercising infrastructure management tools and configurations at this scale really helps shake out the scale pinch points – in this test we specifically addressed:

  • Worker thread configuration in the nova-cloud-controller charm
  • Bumping open file descriptor ulimits in the rabbitmq-server charm to enable more concurrent connections
  • Tweaking the maximum number of mysql connections via charm configuration
  • Ensuring that the CoreFilter is enabled to avoid potential extreme overcommit on nova-compute nodes.

There were a few things we could not address during the testing for which we had to find workarounds:

  • Scaling a Neutron-based cloud past 256 physical servers
  • High instance density on nova-compute nodes with Neutron security groups enabled.
  • High relation creation concurrency in the Juju state server causing failures and poor performance from the juju command line tool.

We have some changes in the pipeline to the nova-cloud-controller and nova-compute charms to make it easier to split Neutron services onto different underlying messaging and database services.  This will allow the messaging load to be spread across different message brokers, which should allow us to scale a Neutron based OpenStack cloud to a much higher level than we achieved during this testing.  We did find a number of other smaller niggles related to scalability – check out the full list of reported bugs.

And finally some thanks:

  • Blake Rouse for doing the enablement work for the SeaMicro chassis and getting us up and running at the start of the test.
  • Ryan Harper for kicking off the initial bundle configuration development and testing approach (whilst I was taking a break- thanks!) and shaking out the initial kinks.
  • Scott Moser for his enviable scripting skills which made managing so many servers a whole lot easier – MAAS has a great CLI – and for writing CirrOS.
  • Michael Partridge and his team at AMD for getting so many servers racked and stacked in such a short period of time.
  • All of the developers who contribute to OpenStack, MAAS and Juju!

.. you are all completely awesome!


Kubuntu Wire: Refurbished HP Laptops with Kubuntu

Planet Ubuntu - Wed, 2014-06-18 09:20

It’s always nice to come across Kubuntu being used in the wild.  Recently I was pointed to a refurbished laptop shop in Ireland who are selling HP laptops running Kubuntu 14.04 LTS.  €140 for a laptop? You’d pay as much for just the Windows licence in most other shops.

Michael Hall: A Tale of Two Systems

Planet Ubuntu - Wed, 2014-06-18 08:00

Two years ago, my wife and I made the decision to home-school our two children.  It was the best decision we could have made, our kids are getting a better education, and with me working from home since joining Canonical I’ve been able to spend more time with them than ever before. We also get to try and do some really fun things, which is what sets the stage for this story.

Both my kids love science, absolutely love it, and it’s one of our favorite subjects to teach.  A couple of weeks ago my wife found an inexpensive USB microscope, which lets you plug it into a computer and take pictures using desktop software.  It’s not a scientific microscope, nor is it particularly powerful or clear, but for the price it was just right to add a new aspect to our elementary science lessons. All we had to do was plug it in and start exploring.

My wife has a relatively new (less than a year old) laptop running Windows 8.  It’s not high-end, but it’s all new hardware, new software, etc.  So when we plugged in our simple USB microscope… it failed.  As in, it didn’t do anything.  Windows seemed to be trying to figure out what to do with it, over and over and over again, but to no avail.

My laptop, however, is running Ubuntu 14.04, the latest stable and LTS release.  My laptop is a couple of years old but a classic, a Lenovo X220. It’s great hardware to go with Ubuntu and I’ve had nothing but good experiences with it.  So of course, when I decided to give our new USB microscope a try… it failed.  The connection was fine, the log files clearly showed that it was being identified, but nothing was able to see it as a video input device or make use of it.

Now, if that’s where our story ended, it would fall right in line with a Shakespearean tragedy. But while both Windows and Ubuntu failed to “just work” with this microscope, both failures were not equal. Because the Windows drivers were all closed source, my options ended with that failure.

But on Ubuntu, the drivers were open; all I needed to do was find a fix. It took a while, but I eventually found a 2.5-year-old bug report for an identical chipset to my microscope, and somebody had proposed a code fix in the comments.  Now, the original reporter never responded to say whether or not the fix worked, and it was clearly never included in the driver code, but it was an opportunity.  I’m no kernel hacker, nor driver developer; in fact I probably shouldn’t be trusted to write any amount of C code at all.  But because I had Ubuntu, getting the source code of my current driver, as well as all the tools and dependencies needed to build it, took only a couple of terminal commands.  The patch was too old to apply cleanly to the current code, but it was easy enough to figure out where the changes should go, and after a couple of tries to properly build just the driver (and not the full kernel or every driver in it), I had a new binary kernel module that would load without error.  Then, when I plugged my USB microscope in again, it worked!
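
For the curious, the whole exercise boils down to something like the sketch below; the driver directory and module name here are hypothetical placeholders, since they depend on the chipset of your particular device:

# build tools plus headers and source for the running kernel (needs deb-src enabled)
sudo apt-get install build-essential linux-headers-$(uname -r)
apt-get source linux-image-$(uname -r)
# hypothetical path: many USB microscopes use a gspca-family webcam driver
cd linux-*/drivers/media/usb/gspca
# apply the proposed patch by hand, then rebuild just this driver directory
make -C /lib/modules/$(uname -r)/build M=$PWD modules
# swap in the freshly built module (hypothetical name) and re-plug the microscope
sudo rmmod gspca_main
sudo insmod ./gspca_main.ko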

People use open source for many reasons.  Some people use it because it’s free as in beer, for them it’s on the same level as freeware or shareware, only the cost matters. For others it’s about ethics, they would choose open source even if it cost them money or didn’t work as well, because they feel it’s morally right, and that proprietary software is morally wrong. I use open source because of USB microscopes. Because when they don’t work, open source gives me a chance to change that.

Canonical Design Team: Making ubuntu.com responsive: testing on multiple devices (15)

Planet Ubuntu - Wed, 2014-06-18 07:35

This post is part of the series ‘Making ubuntu.com responsive‘.

When working on a responsive project you’ll have to test on multiple operating systems, browsers and devices, whether you’re using emulators or the real deal.

Testing on the actual devices is preferable — it’s hard to emulate the feel of a device in your hand and the interactions and gestures you can do with it — and more enjoyable, but budget and practicability will never allow you to get a hold of and test on all the devices people might use to access your site.

We followed very simple steps that anyone can emulate to decide which devices we tested ubuntu.com on.

Numbers

You can quickly get a grasp of which operating systems, browsers and devices your visitors are using to get to your site just by looking at your analytics.

By doing this you can establish whether some of the more troublesome ones are worth investing time in. For example, if only 10 people accessed your site through Internet Explorer 6, perhaps you don’t need to provide a PNG fallback solution. But you might also get a few less pleasant surprises and find that a hard-to-cater-for browser is one of the preferred ones by your customers.

When we did our initial analysis we didn’t find any real surprises, however, due to the high volume of traffic that ubuntu.com sees every month even a very small percentage represented a large number of people that we just couldn’t ignore. It was important to keep this in mind as we defined which browsers, operating systems and devices to test on, and what issues we’d fix where.

Browsers (between 11 February and 13 March 2014) – version breakdowns are shares within that browser:

  Browser              Percentage usage
  Chrome               46.88%
  Firefox              36.96%
  Internet Explorer    7.54% (total)
    IE 11              41.15%
    IE 8               22.96%
    IE 10              17.05%
    IE 9               14.24%
    IE 7               2.96%
    IE 6               1.56%
  Safari               4.30%
  Opera                1.68%
  Android Browser      1.04%
  Opera Mini           0.45%

Operating systems (between 11 February and 13 March 2014) – version breakdowns are shares within that OS:

  Operating system     Percentage usage
  Windows              52.45% (total)
    Windows 7          60.81%
    Windows 8.1        14.31%
    Windows XP         13.3%
    Windows 8          8.84%
    Windows Vista      2.38%
  Linux                35.4%
  Macintosh            6.14%
  Android              3.32% (total)
    Android 4.4.2      19.62%
    Android 4.3        15.51%
    Android 4.1.2      15.39%
  iOS                  1.76%

Mobile devices (between 12 May and 11 June 2014) – 5.41% of total visits:

  Device                  Percentage usage (of the 5.41%)
  Apple iPad              17.33%
  Apple iPhone            12.82%
  Google Nexus 7          3.12%
  LG Nexus 5              2.97%
  Samsung Galaxy S III    2.01%
  Google Nexus 4          2.01%
  Samsung Galaxy S IV     1.17%
  HTC M7 One              0.92%
  Samsung Galaxy Note 2   0.88%
  Not set                 16.66%

After analysing your numbers, you can also define which combinations to test in (operating system and browser).

Go shopping

Based on the most popular devices people were using to access our site, we made a short list of the ones we wanted to buy first. We weren’t married to the analytics numbers though: the idea was to cover a range of screen sizes and operating systems and expand as we went along.

  • Nexus 7
  • Samsung Galaxy S III
  • Samsung Galaxy Note II

We opted not to splash out on an iPad or iPhone, as there are quite a few around the office (retina and non-retina) and the money we saved meant we could buy other less common devices.

Part of our current device suite.

When we started to get a few bug reports from Android Gingerbread and Windows phone users, we decided we needed phones with those operating systems installed. This was our second batch of purchases:

  • Samsung Galaxy Y
  • Kindle Fire HD (Amazon was having a sale at the time we made the list!)
  • Nokia Lumia 520
  • LG G2

And, last but not least, we use a Nexus 4 to test on Ubuntu on phones.

We didn’t spend much in any of our shopping sprees, but have managed to slowly build an ever-growing device suite that we can test our sites on, which is invaluable when working on responsive projects.

Alternatives

Some operating systems and browsers are trickier to test on in native devices. We have a BrowserStack account that we tend to use mostly to test on older Windows and Internet Explorer versions, although we also test Windows on virtual machines.

BrowserStack website.

Tools

We have to confess we’re not using any special software that synchronises testing and interactions across devices. We haven’t really felt the need for that yet, but at some point we should experiment with a few tools, so we’d like to hear suggestions!

Browser support

We prefer to think of different levels (or ways) of access to the content rather than browser support. The overall rule is that everyone should be able to get to the content, and bugs that obstruct access are prioritised and fixed.

As much as possible, and depending on resources available at the time, we try to fix rendering issues in browsers and devices used by a higher percentage of visitors: the degree of usage defines the degree of effort we put into fixing rendering bugs.

And, obviously: dowebsitesneedtolookexactlythesameineverybrowser.com.
