news aggregator

Kubuntu Wire: Refurbished HP Laptops with Kubuntu

Planet Ubuntu - Wed, 2014-06-18 09:20

It’s always nice to come across Kubuntu being used in the wild.  Recently I was pointed to a refurbished laptop shop in Ireland that is selling HP laptops running Kubuntu 14.04 LTS.  €140 for a laptop? You’d pay as much for just the Windows licence in most other shops.

Michael Hall: A Tale of Two Systems

Planet Ubuntu - Wed, 2014-06-18 08:00

Two years ago, my wife and I made the decision to home-school our two children.  It was the best decision we could have made, our kids are getting a better education, and with me working from home since joining Canonical I’ve been able to spend more time with them than ever before. We also get to try and do some really fun things, which is what sets the stage for this story.

Both my kids love science, absolutely love it, and it’s one of our favorite subjects to teach.  A couple of weeks ago my wife found an inexpensive USB microscope, which lets you plug it into a computer and take pictures using desktop software.  It’s not a scientific microscope, nor is it particularly powerful or clear, but for the price it was just right to add a new aspect to our elementary science lessons. All we had to do was plug it in and start exploring.

My wife has a relatively new (less than a year old) laptop running Windows 8.  It’s not high-end, but it’s all new hardware, new software, etc.  So when we plugged in our simple USB microscope… it failed.  As in, it didn’t do anything.  Windows seemed to be trying to figure out what to do with it, over and over and over again, but to no avail.

My laptop, however, is running Ubuntu 14.04, the latest stable and LTS release.  My laptop is a couple of years old, but a classic: a Lenovo x220. It’s great hardware to go with Ubuntu and I’ve had nothing but good experiences with it.  So of course, when I decided to give our new USB microscope a try… it failed.  The connection was fine, and the log files clearly showed that it was being identified, but nothing was able to see it as a video input device or make use of it.

Now, if that’s where our story ended, it would fall right in line with a Shakespearean tragedy. But while both Windows and Ubuntu failed to “just work” with this microscope, the two failures were not equal. Because the Windows drivers were all closed source, my options ended with that failure.

But on Ubuntu, the drivers were open; all I needed to do was find a fix. It took a while, but I eventually found a 2.5 year old bug report for a chipset identical to my microscope’s, and somebody had proposed a code fix in the comments.  The original reporter never responded to say whether or not the fix worked, and it was clearly never included in the driver code, but it was an opportunity.  Now, I’m no kernel hacker, nor driver developer; in fact I probably shouldn’t be trusted to write any amount of C code at all.  But because I had Ubuntu, getting the source code of my current driver, as well as all the tools and dependencies needed to build it, took only a couple of terminal commands.  The patch was too old to cleanly apply to the current code, but it was easy enough to figure out where the changes should go, and after a couple of tries to properly build just the driver (and not the full kernel or every driver in it), I had a new binary kernel module that would load without error.  Then, when I plugged my USB microscope in again, it worked!
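A rough sketch of that cycle on Ubuntu (assuming deb-src entries are enabled; the gspca directory is only an illustrative guess at where a USB camera driver lives):

# Fetch the build dependencies and the source of the running kernel
sudo apt-get build-dep linux-image-$(uname -r)
apt-get source linux-image-$(uname -r)
cd linux-*

# Hand-apply the proposed fix, then rebuild only that one driver directory
# against the running kernel's build tree
make -C /lib/modules/$(uname -r)/build M=$PWD/drivers/media/usb/gspca modules

# Load the freshly built module and watch the kernel log
sudo insmod drivers/media/usb/gspca/gspca_main.ko
dmesg | tail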

People use open source for many reasons.  Some people use it because it’s free as in beer, for them it’s on the same level as freeware or shareware, only the cost matters. For others it’s about ethics, they would choose open source even if it cost them money or didn’t work as well, because they feel it’s morally right, and that proprietary software is morally wrong. I use open source because of USB microscopes. Because when they don’t work, open source gives me a chance to change that.

Canonical Design Team: Making ubuntu.com responsive: testing on multiple devices (15)

Planet Ubuntu - Wed, 2014-06-18 07:35

This post is part of the series ‘Making ubuntu.com responsive’.

When working on a responsive project you’ll have to test on multiple operating systems, browsers and devices, whether you’re using emulators or the real deal.

Testing on the actual devices is preferable — it’s hard to emulate the feel of a device in your hand and the interactions and gestures you can do with it — and more enjoyable, but budget and practicability will never allow you to get a hold of and test on all the devices people might use to access your site.

We followed very simple steps that anyone can emulate to decide which devices we tested ubuntu.com on.

Numbers

You can quickly get a grasp of which operating systems, browsers and devices your visitors are using to get to your site just by looking at your analytics.

By doing this you can establish whether some of the more troublesome ones are worth investing time in. For example, if only 10 people accessed your site through Internet Explorer 6, perhaps you don’t need to provide a PNG fallback solution. But you might also get a few less pleasant surprises and find that a hard-to-cater-for browser is one of your customers’ preferred ones.

When we did our initial analysis we didn’t find any real surprises. However, due to the high volume of traffic that ubuntu.com sees every month, even a very small percentage represented a large number of people that we just couldn’t ignore. It was important to keep this in mind as we defined which browsers, operating systems and devices to test on, and what issues we’d fix where.

Browsers (between 11 February and 13 March 2014)
  • Chrome: 46.88%
  • Firefox: 36.96%
  • Internet Explorer (total): 7.54%, of which: IE 11: 41.15%, IE 8: 22.96%, IE 10: 17.05%, IE 9: 14.24%, IE 7: 2.96%, IE 6: 1.56%
  • Safari: 4.30%
  • Opera: 1.68%
  • Android Browser: 1.04%
  • Opera Mini: 0.45%

Operating systems (between 11 February and 13 March 2014)
  • Windows (total): 52.45%, of which: 7: 60.81%, 8.1: 14.31%, XP: 13.3%, 8: 8.84%, Vista: 2.38%
  • Linux: 35.4%
  • Macintosh: 6.14%
  • Android (total): 3.32%, of which: 4.4.2: 19.62%, 4.3: 15.51%, 4.1.2: 15.39%
  • iOS: 1.76%

Mobile devices (between 12 May and 11 June 2014): 5.41% of total visits
  • Apple iPad: 17.33%
  • Apple iPhone: 12.82%
  • Google Nexus 7: 3.12%
  • LG Nexus 5: 2.97%
  • Samsung Galaxy S III: 2.01%
  • Google Nexus 4: 2.01%
  • Samsung Galaxy S IV: 1.17%
  • HTC One (M7): 0.92%
  • Samsung Galaxy Note 2: 0.88%
  • Not set: 16.66%

(Version breakdowns are percentages of that browser’s or operating system’s own total; device percentages are shares of the 5.41% of mobile visits.)

After analysing your numbers, you can also define which combinations to test in (operating system and browser).

Go shopping

Based on the most popular devices people were using to access our site, we made a short list of the ones we wanted to buy first. We weren’t married to the analytics numbers, though: the idea was to cover a range of screen sizes and operating systems and expand as we went along.

  • Nexus 7
  • Samsung Galaxy S III
  • Samsung Galaxy Note II

We opted not to splash out on an iPad or iPhone, as there are quite a few around the office (retina and non-retina), and the money we saved meant we could buy other less common devices.

Part of our current device suite.

When we started to get a few bug reports from Android Gingerbread and Windows Phone users, we decided we needed phones with those operating systems installed. This was our second batch of purchases:

  • Samsung Galaxy Y
  • Kindle Fire HD (Amazon was having a sale at the time we made the list!)
  • Nokia Lumia 520
  • LG G2

And, last but not least, we use a Nexus 4 to test Ubuntu on phones.

We didn’t spend much on any of our shopping sprees, but we have managed to slowly build an ever-growing device suite that we can test our sites on, which is invaluable when working on responsive projects.

Alternatives

Some operating systems and browsers are trickier to test on native devices. We have a BrowserStack account that we tend to use mostly to test on older Windows and Internet Explorer versions, although we also test Windows on virtual machines.

BrowserStack website.

Tools

We have to confess we’re not using any special software that synchronises testing and interactions across devices. We haven’t really felt the need for that yet, but at some point we should experiment with a few tools, so we’d like to hear suggestions!

Browser support

We prefer to think of different levels (or ways) of access to the content rather than browser support. The overall rule is that everyone should be able to get to the content, and bugs that obstruct access are prioritised and fixed.

As much as possible, and depending on resources available at the time, we try to fix rendering issues in browsers and devices used by a higher percentage of visitors: degree of usage defines the degree of effort fixing rendering bugs.

And, obviously: dowebsitesneedtolookexactlythesameineverybrowser.com.


Sebastian Kügler: Five Musings on Frameworks Quality

Planet Ubuntu - Wed, 2014-06-18 01:42

Musing…


In many cases, high-quality code counts more than bells and whistles. Fast, reliable and well-maintained libraries provide a solid base for excellent applications built on top of them. Investing time into improving existing code improves the value of that code, and of the software built on top of it. For shared components, such as libraries, this value is often multiplied by the number of users. With this in mind, let’s have a closer look at how the Frameworks 5 transition affects the quality of the code that so many developers and users rely on.

KDE Frameworks 5 will be released two weeks from now. This fifth revision of what is currently known as the “KDE Development Platform” (or, technically, “kdelibs”) is the result of three years of effort to modularize the individual libraries (and “bits and pieces”) we shipped as the kdelibs and kde-runtime modules as part of KDE SC 4.x. KDE Frameworks contains about 60 individual modules: libraries, plugins, toolchain additions, and scripting extensions (QtQuick, for example).

One of the important aspects that has seen little exposure when talking about the Frameworks 5 project, but which is really at the heart of it, is the set of processes behind it. The Frameworks project, as happens with such transitions, has created a new surge of energy for our libraries. The immediate result, KF5’s first stable release, is a set of software frameworks that induce minimal overhead, are source- and binary-stable for the foreseeable future, are well maintained, get regular updates, and are proven, high-quality, modern and performant code. There is a well-defined contribution process and no mandatory copyright assignment. In other words, it’s a reliable base to build software on, in many different respects.

Maturity

Extension and improvement of existing software are two ways of increasing its value. KF5 does not contain revolutionary new code; in this major cycle, rather than extending it, we’re concentrating on widening the use cases and improving its quality. The initial KDE 4 release contained a lot of rewritten code and changed APIs, and meant a major cleanup of hard-to-scale and sometimes outright horrible code. Even over the course of 4.x, we had a couple of quite fundamental changes to core functionality, for example the introduction of the semantic desktop features, Akonadi, and in Plasma the move to QML 1.x.
All these new things have now seen a few years of work on them (and in the case of Nepomuk, the replacement of its guts with the migration to the much more performant Baloo framework). These things are mature, stable and proven to work by now. The transition to Qt 5 and KF5 doesn’t actually change a lot about that; we’ve worked out most of the kinks of this transition by now. For most application-level code using KDE Frameworks, the porting will be rather easy to do, though not zero effort. The APIs themselves haven’t changed a lot, and the changes needed to make something work usually involve updating the build system. From that point on, the application is often already functional, and can be gradually moved away from deprecated APIs. Frameworks 5 provides the necessary compatibility libraries to ease porting as much as possible.
Surely, with the inevitable and purposeful explosion of the user base following a first stable release, we will get a lot of feedback on how to further improve the code in Frameworks 5. Processes, requirements and tooling for this are in place. Also, being an open system, we’re ready to receive your patches.
Frameworks 5, in many ways, encodes more than 15 years of experience into a clearly structured, stable base for building applications for all kinds of purposes, on all kinds of platforms.

Framework Caretakers

With the modularization of the libraries, we’ve looked for suitable maintainers for them, and we’ve been quite successful in finding responsible caretakers for most of them. This is quite important, as it reduces bottlenecks and single points of failure. It also scales up the throughput of our development process, as the work can be shared across more shoulders more easily. This means quicker feedback for development questions, code review requests, or help with bug fixes. We don’t actually require module maintainers to fix every single bug right away; they act much more as orchestrators and go-to guys for a specific framework.

More Reviews

More peer review of code is generally a good thing. It provides safety nets for code problems, catches potential bugs, and makes sure code doesn’t do dumb things, or smart things in the wrong way. It also allows transfer of knowledge by talking about each other’s code. We have already been using Review Board for some time, but the work on Frameworks 5 and Plasma 2 has really boosted our use of Review Board, and of review processes in general. It has become a more natural part of our collaboration process, and that’s a very good thing, both socially and code-quality-wise.
More code review also keeps us developers in check. It makes it harder to slip in a bit of questionable code; a psychological thing. If I know my patches will be looked at critically, line by line, I take more care when submitting them. The reasons range from saving other developers the time of pointing out issues I could have found myself had I gone over the code once more, to looking better when I submit a patch that is clean and nice and can go in as-is.
Surely, code reviews can be tedious and can slow down development, but in the right dose, they lead in the end to better code, which can be trusted down the line. The effects might not be immediately obvious, but they are usually positive.

Tooling

Splitting up the libraries and getting the build system up to the task introduced major breakage at the build level. In order to make sure our changes would work, and would actually result in buildable and working frameworks, we needed better tooling. One huge improvement in our process was the arrival of a continuous integration system. Pushing code into one of the Frameworks nowadays means that it is built in a clean environment and automated tests are run. The system is also used to build a framework’s dependencies, so problems in the code that might have slipped the developer’s attention are more often caught automatically. Usually, the results of the continuous integration system’s automated builds are available within a few minutes, and if something breaks, developers get notifications via IRC or email. These short turnaround cycles make it easier to fix things, as the memory of the change that led to the problem is still fresh. They also save others time: it’s less likely that I find a broken build when I update to the latest code.
Each build also triggers the autotests, which have been extended already but are still quite far from complete coverage. Having automated tests available makes it easier to spot problems, and increases the confidence that a particular change doesn’t wreak havoc elsewhere.
Neither continuous builds nor autotests can make 100% sure that nothing ever breaks, but they make it less likely, and they save development resources. If a script can find a problem, that’s vastly more efficient than manual testing. (Which is still necessary, of course.)
A social aspect here is that no single person is responsible when something breaks in autobuilds or autotests; it should rather be considered a “stop-the-line” event that needs immediate attention — by anyone.

Continuous Improvement

This harnessing allows us to concentrate more on further improvements. Software in general is subject to continuous evolution, and Frameworks 5.0 is “just another” milestone in that ongoing process. Better scalability of the development processes (including QA) is not about getting to a stable release; it supports the further improvement of the software. As much as we’ve updated code with more modern and better solutions, we’re also “upgrading” the way we work together, and the way we improve our software further. It’s the human build system behind the software.

The circle goes all the way round: the continuous improvement process and its backing tools and processes evolve over time. They do not just pop out of thin air, and they’re not dictated from the top down; rather, they are the result of the same level of experience that went into the software itself. The software as a product and its creation process are interlinked. Much of the important DNA of a piece of software is encoded in its creation and maintenance process, and they evolve together.

David Tomaschik: Parameter Injection in jCryption

Planet Ubuntu - Wed, 2014-06-18 01:00

jCryption is an open-source plugin for jQuery that is used for performing encryption on the client side that can be decrypted server side. It works by retrieving an RSA public key from the server, encrypting an AES key under that RSA key, and sending both the encrypted AES key and the AES-encrypted data to the server. This is not dissimilar to how OpenPGP encrypts data for transmission. (Though, of course, the implementation details are vastly different.)

jCryption comes with PHP and Perl code demonstrating the decryption server-side, and while it is not packaged as a ready-to-use library, it is likely that most users used the sample code for their server-side implementation. While the code used proc_open, which doesn't allow command injection (the command is not being run through a shell, so shell metacharacters aren't relevant), it still allows an attacker to modify the arguments being passed to the command.

Originally, the code used constructs like:

$cmd = sprintf("openssl enc -aes-256-cbc -pass pass:'$key' -a -e");

Because $key can be attacker-controlled, an attacker can close the pass string early, and add additional openssl parameters there. This includes, for example, the ability to read the jCryption RSA private key, allowing an attacker to read any traffic sent with jCryption that they have captured (or capture in the future).
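To see why, here is an illustrative sketch (the key value is hypothetical) of how a crafted $key rewrites the argument list:

key="hunter2' -kfile /path/to/rsa_private.pem '"   # hypothetical attacker-controlled value
echo "openssl enc -aes-256-cbc -pass pass:'$key' -a -e"
# prints: openssl enc -aes-256-cbc -pass pass:'hunter2' -kfile /path/to/rsa_private.pem '' -a -e

The single quote in the key terminates the pass string, and everything after it is parsed as additional openssl options.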

I reported this issue late last night, and Daniel Griesser, the author of jCryption, replied shortly thereafter, confirming he was looking into the matter. By this morning, he had created a fix and pushed a new release out. It speaks very highly of a developer when they're able to respond so quickly to a security concern.

For the curious, it was fixed by escaping the shell argument using escapeshellarg:

$cmd = sprintf("openssl enc -aes-256-cbc -pass pass:" . escapeshellarg($key) . " -a -e");

I'm not releasing a PoC that does the actual crypto steps at this point; I want to make sure sites have had a chance to upgrade.

Ben Howard: SSD and PIOPS AMIs for AWS

Planet Ubuntu - Tue, 2014-06-17 20:17
No doubt many of you have seen the announcement from AWS in regards to the new SSD-backed EBS volumes [1]. I am pleased to announce that we have day one availability of the latest Ubuntu images registered with SSD backed EBS volumes.

Additionally, we have made a new type of image available called "io1". These new images have 500 provisioned IOPS by default.
Even better, both the SSD and io1 images are available for HVM, and for 32-bit and 64-bit paravirtual instances.
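As a rough illustration, launching one of these images with an SSD-backed root volume from the AWS CLI looks something like this (a sketch: the AMI ID, instance type and key name are placeholders):

aws ec2 run-instances \
    --image-id ami-xxxxxxxx \
    --instance-type m3.medium \
    --key-name my-key \
    --block-device-mappings '[{"DeviceName":"/dev/sda1","Ebs":{"VolumeType":"gp2","VolumeSize":8}}]'
# For a provisioned-IOPS volume, use "VolumeType":"io1" and add an "Iops" value.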

As of today, all dailies and future releases will be published with all supported volume types. The latest releases are at [2,3,4] or [5]:

[1] https://aws.amazon.com/blogs/aws/new-ssd-backed-elastic-block-storage/
[2] http://cloud-images.ubuntu.com/releases/trusty/release-20140607.1/
[3] http://cloud-images.ubuntu.com/releases/saucy/release-20140608/
[4] http://cloud-images.ubuntu.com/releases/precise/release-20140606/
[5] http://cloud-images.ubuntu.com/locator/ec2/

David Murphy: Hands-on with Canonical’s Orange Box

Planet Ubuntu - Tue, 2014-06-17 20:05

Ars Technica has a great write up by Lee Hutchinson on our Orange Box demo and training unit.

You can’t help but have your attention grabbed by it!

As the comments are quick to point out – at the expense of the rest of the piece – the hardware isn’t the compelling story here. While you can buy your own, you can almost certainly hand-build an equivalent-or-better setup for less money¹, but Ars recognises this:

Of course, that’s exactly the point: the Orange Box is that taste of heroin that the dealer gives away for free to get you on board. And man, is it attractive. However, as Canonical told me about a dozen times, the company is not making them to sell—it’s making them to use as revenue driving opportunities and to quickly and effectively demo Canonical’s vision of the cloud.

The Orange Box is about showing off MAAS & Juju, and – usually – OpenStack.

To see what Ars think of those, you should read the article.

I definitely echo Lee’s closing statement:

I wish my closet had an Orange Box in it. That thing is hella cool.

  1. Or make one out of wood like my colleague Gavin did! 

The post Hands-on with Canonical’s Orange Box appeared first on David Murphy.

Ubuntu Scientists: Introducing the “Ubuntu Scientists Blog”!

Planet Ubuntu - Tue, 2014-06-17 18:50

At the last UOS, the founder of Ubuntu Scientists (belkinsa, Svetlana Belkin) decided to create a blog where the team will post news, interviews with members, and members’ stories, to help other scientists get a feeling for who we are, how to help us, and how to use FOSS in the science fields.  Other teams have also done this in the past.

Other posts are HERE



Ubuntu Kernel Team: Kernel Team Meeting Minutes – June 17, 2014

Planet Ubuntu - Tue, 2014-06-17 17:16
Meeting Minutes

IRC Log of the meeting.

Meeting minutes.

Agenda

20140617 Meeting Agenda


ARM Status

Nothing new to report this week


Release Metrics and Incoming Bugs

Release metrics and incoming bug data can be reviewed at the following link:

  • http://people.canonical.com/~kernel/reports/kt-meeting.txt


Milestone Targeted Work Items

I’d note that this section of the meeting is becoming less and less
useful to me in its current form. I’d like to take a vote to skip this
section until I/we can find a better solution. All in favor (ie +1)?
I’ll take silence as agreement as well
ppisati: +1
jsalisbury: +1
rtg: +1
ogasawara: ok, motion passed.
(actually the same could be said for the ARM status part, since its support is part of generic now, FWIW)
Dropping ARM Status from this meeting as well.

  • apw: core-1405-kernel (2 work items)
  • ogasawara: core-1405-kernel (2 work items)


Status: Utopic Development Kernel

We have rebased our Utopic kernel to v3.15 final and uploaded
(3.15.0-6.11). As noted in previous meetings, we are planning on
converging on the v3.16 kernel for Utopic. We have started tracking
v3.16-rc1 in our “unstable” ubuntu-utopic branch. We’ll let this
marinate and bake for a bit before we do an official v3.16 based upload
to the archive.
—–
Important upcoming dates:
Thurs Jun 26 – Alpha 1 (~1 week away)
Fri Jun 27 – Kernel Freeze for 12.04.5 and 14.04.1 (~1 week away)
Thurs Jul 31 – Alpha 2 (~6 weeks away)


Status: CVE’s

The current CVE status can be reviewed at the following link:

http://people.canonical.com/~kernel/cve/pkg/ALL-linux.html


Status: Stable, Security, and Bugfix Kernel Updates – Trusty/Saucy/Precise/Lucid

Status for the main kernels, until today (Jun. 17):

  • Lucid – Verification and Testing
  • Precise – Verification and Testing
  • Saucy – Verification and Testing
  • Trusty – Verification and Testing
    Current opened tracking bugs details:
  • http://people.canonical.com/~kernel/reports/kernel-sru-workflow.html

    For SRUs, SRU report is a good source of information:

  • http://people.canonical.com/~kernel/reports/sru-report.html

    Schedule:
    cycle: 08-Jun through 28-Jun
    ====================================================================
    06-Jun Last day for kernel commits for this cycle
    08-Jun – 14-Jun Kernel prep week.
    15-Jun – 21-Jun Bug verification & Regression testing.
    22-Jun – 28-Jun Regression testing & Release to -updates.
    14.04.1 cycle: 29-Jun through 07-Aug
    ====================================================================
    27-Jun Last day for kernel commits for this cycle
    29-Jun – 05-Jul Kernel prep week.
    06-Jul – 12-Jul Bug verification & Regression testing.
    13-Jul – 19-Jul Regression testing & Release to -updates.
    20-Jul – 24-Jul Release prep
    24-Jul 14.04.1 Release [1]
    07-Aug 12.04.5 Release [2]
    [1] These will be the very last kernels for lts-backport-quantal, lts-backport-raring,
    and lts-backport-saucy.
    [2] This will be the lts-backport-trusty kernel as the default in the precise point
    release iso.


Open Discussion or Questions? Raise your hand to be recognized

No open discussion.

Svetlana Belkin: UOS 14.06 Summary and Lessons Learned (as a Track Lead)

Planet Ubuntu - Tue, 2014-06-17 17:11

UOS 14.06 was last week, from June 12 to June 14, and it was the first one that I was able to attend for the whole thing. I was also a track lead for the Community Track, and I feel that I ended up running most of the show along with Daniel Holbach.  To the other track leads of the same track, I mean no offence.  :)  Because this was my first full UOS, I tired myself out quickly after each day (the weather was gloomy all three days too) and had no mood to do anything else afterwards, which is why this blog post is almost a week late.

First thing that I will share with you are the summaries for the Community track:

Introduction to Lubuntu: Phill Whiteside and Harry Webber talked about what Lubuntu is and what is planned.
Ubuntu Women Utopic Goals: To get more women involved in Ubuntu, the team has been looking into adding a “get involved quiz” to the website. The plan is now to get it up on community.ubuntu.com. The women’s team also want to take a look at Harvest and see how it could be improved to show new developers what needs to get done. The team website will also get more stories and updated best practices. More classroom sessions are planned as well.
Community Roundtable: A number of topics were discussed, among them dates for the next events. UOS dates will be picked soon, it was suggested to bring it back in line with the release cycle again. We will work with the LoCo community and Classroom team to organise the Global Jam and other events this cycle.
In the LoCo part of our community we want to look into making it easier to share stories and pictures of LoCo events and publish them on Planet. We also want to look into helping teams to train new coordinators and organisers on their teams.
From fix to image: how your patch makes it into Ubuntu: The CI team has put together an impressive process to get changes automatically built and tested. This makes it a lot easier to land high quality changes in Ubuntu. Łukasz Zemczak gave a great presentation on how this process works.
Ubuntu Documentation Team Roundtable: A number of initiatives were discussed to make it easier for newcomers to get involved with the team: a cleanup of current documentation and referring to it on help.ubuntu.com and elsewhere. Regular meetings are planned again as well.
Kubuntu Documentation Team Roundtable June 2014: They talked about following Ubuntu GNOME’s lead and setting up a Kubuntu Promo team to help promote Kubuntu, gather contributors, and then send them to the right team (Docs, Dev, etc.). They also talked about working on docs.kubuntu.org once the server things get set up, to make it look more in line with the new Kubuntu site.
Introduction to Ubuntu GNOME: Ali Linx talked about Ubuntu GNOME, the web site, and the history of the flavour. He and other team members also talked about plans for the website, mainly about art work.
App development training programme: In the last cycle some of our app developers went out to their LoCo meetings and did some app development workshops. We put together a plan to turn this into a more formal training programme, starting in phase 1 in July.
Ubuntu Scientists June 2014 Roundtable: The team reviewed the team’s wiki page and discussed a few changes to it, to make it more inviting and set clearer tasks for newcomers. Another idea was to interview scientist users about their use of Ubuntu and blog about it.

Thanks to Daniel Holbach for the summaries; the links to the sessions are embedded in the Community Track link above.  Sorry if it’s hard to read; I can’t fix this issue!

I went to other sessions, but this was my favorite:

And I still have some to watch!

Since I was a track lead, I have a few lessons that I learned:

  • Give enough notice to the team or group of people; though I don’t think this was completely my fault, since the UOS organizers didn’t give us a month’s notice
  • Use Chrome not Firefox for Hangouts and if needed, restart your computer before the next Hangout.  I had issues with my netbook and my mic where no one was able to hear me.
  • Even though it’s suggested to set up the Hangout on Air ten minutes before, if you have time do it a bit earlier and check whether you have any problems
  • You can host a session for someone else but you don’t need to say anything

I enjoyed this one, but I think it could have been better; I know that is being worked on, though.


Canonical Design Team: Making ubuntu.com responsive: JavaScript considerations (14)

Planet Ubuntu - Tue, 2014-06-17 10:05

This post is part of the series ‘Making ubuntu.com responsive’.

The JavaScript used on ubuntu.com is very light. We limit its use to small functional elements of the web style guide, which act to enhance the user experience but are never required to deliver the content the user is there to consume.

At Canonical we use YUI as our JavaScript framework of choice. We have many years of experience using it for our websites and web apps, and therefore have a large knowledge base to fall back on. We have a single core.js which contains a number of functions that are called on when required.

Below I will discuss some of the functions and workarounds we have provided in the web style guide.

Providing fallbacks

When considering our transition from PNGs to SVGs across the site, we provided a fallback for background images with Modernizr, resetting the background image with the .no-svg class on the body. Our approach to a fallback replacement for markup images was a JavaScript snippet from CSS Tricks – SVG Fallbacks, which I converted to YUI:

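A minimal sketch of what that YUI conversion can look like (assuming Modernizr is exposed as a global):

YUI().use('node', function (Y) {
    // Fall back to PNGs when the browser can't render SVG
    if (typeof Modernizr !== 'undefined' && !Modernizr.svg) {
        Y.all('img[src*=".svg"]').each(function (img) {
            img.set('src', img.get('src').replace('.svg', '.png'));
        });
    }
});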
The snippet above checks if Modernizr exists in the namespace. It then interrogates the Modernizr object for SVG support. If the browser does not support SVGs we loop through each image with .svg contained in the src and replace the src with the same path and filename but a .png version. This means all SVGs need to have a PNG version at the same location.

Navigation and fallback

The mobile navigation on ubuntu.com uses JavaScript to toggle the menu open and closed. We decided to use JavaScript because it’s well supported. We explored using :target as a pure CSS solution, but this selector isn’t supported in Internet Explorer 7, which represented a fair chunk of our visitors.

The navigation on ubuntu.com, in small screens.

For browsers that don’t support JavaScript we resort to displaying the “burger” icon, which acts as an in-page anchor to the footer which contains the site navigation.

Equal height

As part of the guidelines project we needed a way of setting a number of elements to the same height. We would love to use flexbox to do this, but the browser support is not there yet. Therefore we developed a small JavaScript solution:

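A sketch of what such a function can look like in YUI (the .equal-height class and the child selectors follow the description below):

YUI().use('node', function (Y) {
    Y.all('.equal-height').each(function (group) {
        var tallest = 0,
            children = group.all('> div, > li');
        // Find the tallest child...
        children.each(function (child) {
            tallest = Math.max(tallest, child.get('offsetHeight'));
        });
        // ...then give every child that height
        children.setStyle('height', tallest + 'px');
    });
});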
This function finds all elements with an .equal-height class. We then look for child divs or lis, measure the tallest one, and set all these children to that highest value.

Using combined YUI

One of the obstacles discovered when working on this project was that YUI will load modules from an http (non-secure) domain as the library requires them. This of course causes issues on any site that is hosted on a secure domain. We definitely didn’t want to restrict the use of the web style guide to non-secure sites, therefore we needed to combine all required modules into a single combined YUI file.

To combine your own YUI visit YUI configurator. Add the modules you require and copy the code from the Output Console panel into your own hosted file.

Final thoughts

Obviously we had a fairly easy time of making our JavaScript responsive, as we only use the minimum required as a general principle on our site. But by integrating tools like Modernizr into our workflow and keeping on top of CSS browser support, we can keep what we do lean and current.


The Fridge: Ubuntu Weekly Newsletter Issue 372

Planet Ubuntu - Mon, 2014-06-16 21:38

Welcome to the Ubuntu Weekly Newsletter. This is issue #372 for the week June 9 – 15, 2014, and the full version is available here.

In this issue we cover:

This issue of the Ubuntu Weekly Newsletter is brought to you by:

  • Elizabeth K. Joseph
  • Emily Gonyer
  • Jose Antonio Rey
  • And many others

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!

Except where otherwise noted, content in this issue is licensed under a Creative Commons Attribution ShareAlike 3.0 License.

Chris J Arges: manually deploying openstack with a virtual maas on ubuntu trusty (part 2)

Planet Ubuntu - Mon, 2014-06-16 17:16
In the previous post, I went over how to set up a virtual MAAS environment using KVM [1]. Here I will explain how to set up Juju for use with this environment.

For this setup, we’ll use the maas-server as the juju client to interact with the cluster.

This guide was very useful:
https://maas.ubuntu.com/docs/juju-quick-start.html

Update to the latest stable tools:
sudo apt-add-repository ppa:juju/stable
sudo apt-get update
Next we want to set up juju on the host machine.
sudo apt-get install juju-core
Create a juju environment file.
juju init
Generate a specific MAAS API key by using the following link:
http://192.168.100.10/MAAS/account/prefs/

Write the following to ~/.juju/environments.yaml, replacing ‘<maas api key>’ with what was generated above:
default: vmaas
environments:
  vmaas:
    type: maas
    maas-server: 'http://192.168.100.10/MAAS'
    maas-oauth: '<maas api key>'
    admin-secret: ubuntu # or something generated with uuid
    default-series: trusty
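For example, a random value for admin-secret can be generated with:

uuidgen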
Now let’s sync tools and bootstrap a node. Note, if you have multiple juju environments then you may need to specify ‘-e vmaas’ if it isn’t your default environment.
juju sync-tools
juju bootstrap # add --show-log --debug  for more output
See if it works by using the following command:
juju status
You should see something similar to the following:
~$ juju status
environment: vmaas
machines:
  "0":
    agent-state: down
    agent-state-info: (started)
    agent-version: 1.18.4
    dns-name: maas-node-0.maas
    instance-id: /MAAS/api/1.0/nodes/node-e41b0c34-e1cb-11e3-98c6-5254001aae69/
    series: trusty
services: {}
Now we can do a test deployment with the juju-gui to our bootstrap node.
juju deploy juju-gui
While it is deploying you can type the following to get a log.
juju debug-log
I wanted to be able to access the juju-gui from an ‘external’ address, so I edited /etc/network/interfaces on that machine to give it a static address:
juju ssh 0
sudo vim /etc/network/interfaces
Add the following to the file:
auto eth0
iface eth0 inet static
  address 192.168.100.11
  netmask 255.255.255.0
Bring that interface up.
sudo ifup eth0
The password can be found here on the host machine:
grep pass .juju/environments/vmaas.jenv
If you used the above configuration it should be ‘ubuntu’.

Log into the service so you can monitor the status graphically during the deployment.

If you get errors saying that servers couldn’t be reached, you may have DNS configuration or proxy issues. You’ll have to resolve these before using Juju. I’ve also had intermittent network issues in my lab; in order to work around those physical issues you may have to retry the bootstrap, or increase the timeout values in ~/.juju/environments.yaml with the following:
  bootstrap-retry-delay: 5
  bootstrap-timeout: 1200
Now you’re cooking with juju.
  1. http://dinosaursareforever.blogspot.com/2014/06/manually-deploying-openstack-with.html

Valorie Zimmerman: Primes, and products of primes

Planet Ubuntu - Mon, 2014-06-16 08:25
I was finding it difficult to stop thinking and fall asleep last night, so I decided to count as far as I could, in primes or products of primes. I'm not sure why, but it did stop the thoughts whirling around like a hamster on a wheel.

Note on superscripts: HTML only allows me to show squares and cubes. Do the addition.
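So where a prime power needs an exponent above 3, it appears as a product of squares and cubes, and adding the exponents recovers it:

16 = 2^{2} \times 2^{2} = 2^{2+2} = 2^{4}, \qquad 96 = 2^{2} \times 2^{3} \times 3 = 2^{2+3} \times 3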

1
2
3
2²
5
2×3
7
2³
3²
2×5
11
2²×3
13
2×7
3×5
2²×2²
17
2×3²
19
2²×5
3×7
2×11
23
2³×3
5²
2×13
3³
2²×7
29
2×3×5
31
2²×2³
3×11
2×17
5×7
2²×3²
37
2×19
3×13
2³×5
41
2×3×7
43
2²×11
3²×5
2×23
47
2²×2²×3
7²
2×5²
3×17
2²×13
53
2×3³
5×11
2³×7
3×19
2×29
59
2²×3×5
61
2×31
3²×7
2³×2³
5×13
2×3×11
67
2²×17
3×23
2×5×7
71
2³×3² 
73
2×37
3×5²
2²×19
7×11
2×3×13
79
2²×2²×5
3²×3²
2×41
83
2²×3×7
5×17
2×43
3×29
2³×11
89
2×3²×5
7×13
2²×23
3×31
2×47
5×19
2²×2³×3
97
2×7²
3²×11
2²×5²
101

Please comment if I got the coding or arithmetic wrong! It's been fun to figure all this out -- in my head if I could, on paper if I had to.

Valorie Zimmerman: Emotional Maturity and Free / Open Source communities

Planet Ubuntu - Mon, 2014-06-16 06:32
12 Signs of emotional maturity has an excellent list of the characteristics we look for in FOSS team members -- and traits I want to strengthen in my Self.

1. Flexibility - So necessary. The only constant is change, so survival dictates flexibility.

 2. Responsibility - Carthage Buckley, the author of 12 Signs of emotional maturity says:
You take responsibility for your own life. You understand that your current circumstances are a result of the decisions you have taken up to now. When something goes wrong, you do not rush to blame others. You identify what you can do differently the next time and develop a plan to implement these changes.

The world is a mirror. Sometimes when things go wrong, I mistake what I see as caused by some malevolent force, or even someone being stupid. The human brain is designed to keep us from recognizing our own errors and mistakes, unfortunately. So I need to remember to take responsibility, and seek out evidence of personal shortcomings, in order to improve.

I want my team members to do the same! When someone has caused a mess, I want them to take responsibility, and clean up. I want to learn to more often do the same.

 3. Vision trumps knowledge - If I have a dream and desire, I can get the knowledge I need. Whereas a body of knowledge, by itself, doesn't make anything happen.

Good marketing sells the sizzle, not the steak. In other words, make people hungry, and they will buy your steak. Tell them how great it is, and they'll go somewhere they can smell steak! When working in my team, I need to remember this.

4. Personal growth - A priority every day. Who wants to be around stagnant people?

5. Seek alternative views - This one is so difficult, and so important. The hugely expanded media choices available to people now leads to many of us never interacting with people who disagree with us, or have a different perspective. This leads to groupthink, and even disaster. One way to prevent this in teams is to value diversity, and recruit with diversity as a goal. 

 6. Non-judgmental - Another hard one. Those who seek out alternative views, will more easily recognize how different we all can be, while all being of worth. And when we focus on shared goals rather than positions, we can continue to make shared progress towards those goals.

 7. Resilience - Stuff happens. When it does, we all can learn to pick up, dust off, and get going again. This doesn't mean denying that stuff happens; rather it means accepting that and continuing on anyway.

8. A calm demeanor - I think this results from resilience. Freaking out just wastes time and energy, and gets me further off-balance. Better to breathe a bit, and continue on my way.

 9. Realistic optimism - I love this word pair. Seeing that a glass is half-full, rather than half-empty is a habit, and habits can be created. Bad habits can be changed. Buckley says that success requires effort and patience. Your goals are worth effort and patience, creativity, and perseverance.

10. Approachable - Again, a choice. If I'm open to others, they will feel free to offer their help, encouragement or even warnings. If seeking alternative views is a value, then being approachable is one way to get those views.

11. Self-belief - I think this can be carried too far, but if we've looked for alternative views and perspectives, and created a plan with those views in mind, then criticism will not stop progress. When our goals are deeply desired, we can be flexible in details, and yet continue progress towards the ultimate destination.

12. Humor - Laughter and joy are signs that you are healthy and on your right path. The teams I want to work with are those full of humor, laughter and joy.

PS: I was unable to work the wonderful new word bafulates into this blog post, to my regret. Please accept my apologies.

Elizabeth K. Joseph: Texas Linuxfest wrap-up

Planet Ubuntu - Mon, 2014-06-16 05:23

Last week I finally had the opportunity to attend Texas Linuxfest. I first heard about this conference back when it started from some Ubuntu colleagues who were getting involved with it, so it was exciting when my talk on Code Review for Systems Administrators was accepted.

I arrived late on Thursday night, much later than expected after some serious flight delays due to weather (including 3 hours on the tarmac at a completely different airport due to running out of fuel over DFW). But I got in early enough to get rest before the expo hall opened on Friday afternoon where I helped staff the HP booth.

At the HP booth, we were showing off the latest developments in the high density Moonshot system, including the ARM-based processors that are coming out later this year (currently it’s sold with server grade Atom processors). It was cool to be able to see one, learn more about it and chat with some of the developers at HP who are focusing on ARM.


HP Moonshot

That evening I joined others at the Speaker dinner at one of the Austin Java locations in town. Got to meet several cool new people, including another fellow from HP who was giving a talk, an editor from Apress who joined us from England and one of the core developers of BusyBox.

On Saturday the talks portion of the conference began!

The keynote was by Karen Sandler, titled “Identity Crisis: Are we who we say we are?”, which was a fascinating look at how we all present ourselves in the community. As a lawyer, she gave some great insight into the multiple loyalties that many contributors to Open Source have, and explored some of them. This was quite topical for me, as I continue to do a considerable amount of volunteer work with Ubuntu while working at HP on the OpenStack project as my paid job. But am I always speaking for HP in my role in OpenStack? I am certainly proud to represent HP’s considerable efforts in the community, but in my day-to-day work I’m largely passionate about the project and my work on a personal level, and my views tend to be my own. During the Q&A there was also an interesting discussion about the use of email aliases, which got me thinking about my own: I have an Ubuntu address which I pretty strictly use for Ubuntu mailing lists and private Ubuntu-related correspondence; I have an HP address that I pretty much just use for internal HP work; and everything else in my life pretty much goes to my main personal address – including all correspondence on the OpenStack, local Linux and other mailing lists.


Karen Sandler beginning her talk with a “Thank You” to the conference organizers

The next talk I went to was by Corey Quinn on “Selling Yourself: How to handle a technical interview” (slides here). I had a chat with him a couple weeks back about this talk and was able to give some suggestions, so it was nice to see the full talk laid out. His experience comes from work at Taos where he does a lot of interviewing of candidates and was able to make several observations based on how people present themselves. He began by noting that a resume’s only job is to get you an interview, so more time should be spent on actually practicing interviewing rather than strictly focusing on a resume. As the title indicates, the key take away was generally that an interview is the place where you should be selling yourself, no modesty here. He also stressed that it’s a 2 way interview, and the interviewer is very interested in making sure that the person will like the job and that they are actually interested to some degree in the work and the company.

It was then time for my own talk, “Code Review for Systems Administrators,” where I talked about how we do our work on the OpenStack Infrastructure team (slides here). I left a bit of extra time for questions than I usually do since my colleague Khai Do was doing a presentation later that did a deeper dive into our continuous integration system (“Scaling the Openstack Test Environment“). I’m glad I did, there were several questions from the audience about some of our additional systems administration focused tooling and how we determine what we use (why Puppet? why Cacti?) and what our review process for those systems looked like.

Unfortunately this was all I could attend of the conference, as I had a flight to catch in order to make it to Croatia in time for DORS/CLUG 2014 this week. I do hope to make it back to Texas Linuxfest at some point, the event had a great venue and was well-organized with speaker helpers in every room to do introductions, keep things on track (so nice!) and make sure the A/V was working properly. Huge thanks to Nathan Willis and the other organizers for doing such a great job.

Paul Tagliamonte: Linode pv-grub chaining

Planet Ubuntu - Sun, 2014-06-15 01:40

I've been using Linode since 2010, and many of my friends have heard me talk about how big a fan I am of Linode. I've used Debian unstable on all my Linodes, since I often use them as a remote shell for general-purpose Debian development. I've found my Linodes to be indispensable, and I really love Linode.

The Problem

Recently, because of my work on Docker, I was forced to stop using the Linode kernel in favor of the stock Debian kernel, since the Linode kernel has no aufs support, and the default LVM-based devicemapper backend can be quite a pain.

The btrfs errors are ones I fully expect to be gone soon; I can't wait to switch back to using it.

I tried loading in btrfs support, and using that to host the Docker instance backed with btrfs, but it was throwing errors as well. Stuck with unstable backends, I wanted to use the aufs backend, which, despite problems in aufs internally, is quite stable with Docker (and in general).

I started to run through the Linode Library's guide on PV-Grub, but that resulted in a cryptic error about xen not understanding the compression of the kernel. I checked for recent changes to the compression, and lo, the Debian kernel has been switched to use xz compression in sid. Awesome news, really. XZ compression is awesome, and I've been super impressed with how universally we've adopted it in Debian. Keep it up! However, it appears only a newer pv-grub than the one the Linode hosts have installed will fix this.

After contacting the (ever friendly) Linode support, they were unable to give me a timeline on adding xz support, which would entail upgrading pv-grub. It was quite disappointing news, to be honest. Workarounds were suggested, but I'm not quite happy with them as proper solutions.

After asking in #debian-kernel, waldi was able to give me a few pointers, and the following is very inspired by him, the only thing that changed much was config tweaking, which was easy enough. Thanks, Bastian!

The Constraints

I wanted to maintain a 100% stock configuration from the kernel up. When I upgraded my kernel, I wanted to just work. I didn't want to unpack and repack the kernel, and I didn't want to install software outside main on my system. It had to be 100% Debian and unmodified.

The Solution

It's pretty fun to attach to the lish console and watch bootup pass through GRUB 0.9, to GRUB 2.x, to Linux. Free Software, Fuck Yeah.

Left unable to run my own kernel directly in the Linode interface, the tack here was to use Linode's old pv-grub to chain-load grub-xen, which loads a modern kernel. Turns out this works great.

Let's start by creating a config for Linode's pv-grub to read and use.

sudo mkdir -p /boot/grub/

Now, since pv-grub is legacy GRUB, we can write out the following config to /boot/grub/menu.lst to chain-load grub-xen (which is just GRUB 2.x, as far as I can tell). And to think, I almost forgot all about menu.lst. Almost.

default 1
timeout 3

title grub-xen shim
root (hd0)
kernel /boot/xen-shim
boot

Just like riding a bike! Now, let's install and set up grub-xen to work for us.

sudo apt-get install grub-xen
sudo update-grub

And, let's set the config for the GRUB image we'll create in the next step in the /boot/load.cf file:

configfile (xen/xvda)/boot/grub/grub.cfg

Now, lastly, let's generate the /boot/xen-shim file that we need to boot to:

grub-mkimage --prefix '(xen/xvda)/boot/grub' -c /boot/load.cf -O x86_64-xen /usr/lib/grub/x86_64-xen/*.mod > /boot/xen-shim

Next, change your boot configuration to use pv-grub, and give the machine a kick. It should work great! If you run into issues, use the lish shell to debug them, and let me know what else I should include in this post!

Hack on!

Costales: Review of Ubuntu Touch 1.0 on the Nexus 4 at @xatakamovil by @javipas

Planet Ubuntu - Sat, 2014-06-14 08:06

I really enjoyed this article, because for a long time I have been curious about how the mobile version of Ubuntu would actually behave.

With professional objectivity, Javier shows us a version that is too immature for so much development time, one that possibly suffers from the absence of commercial phones in an excessively innovative and competitive market.

Without further ado, I leave you with the full article at Xataka.

Chris J Arges: manually deploying openstack with a virtual maas on ubuntu trusty (part 1)

Planet Ubuntu - Fri, 2014-06-13 22:05
The goal of this new series of posts is to be able to set up virtual machines to simulate a real-world openstack deployment using maas and juju. This goes through setting up a maas-server in a VM, as well as setting up maas-nodes in VMs and getting them enlisted/commissioned into the maas-server. Next, juju is configured to use the maas cluster. Finally, openstack is deployed using juju.
Overview

Requirements

Ideally, a large server with 16 cores, 32G memory, 500G disk. Obviously you can tweak this setup to work with less, but be prepared to lock up lesser machines. In addition, your host machine needs to be able to support nested virtualization.
Topology

Here are the basics of what will be set up for our virtual maas cluster. Each red box is a virtual machine with two interfaces. The eth0 interface in the VM connects to the NATed maas-internet network, while the VM’s eth1 interface connects to the isolated maas-management network. The number of maas-nodes should match what is required for the deployment; however, it is simple enough to enlist more nodes later. I chose to use a public/private network in order to be more flexible later in how openstack networking is set up.


Setup Host Machine

Install Requirements

First install all required programs on the host machine.
sudo apt-get install libvirt-bin qemu-kvm cpu-checker virtinst uvtool
Next, check if kvm is working correctly.
kvm-ok
Ensure nested KVM is enabled (replace intel with amd if necessary).
cat /sys/module/kvm_intel/parameters/nested
This should output Y; if it doesn’t, do the following:
sudo modprobe -r kvm_intel
sudo modprobe kvm_intel nested=1

Ensure $USER is added to the libvirtd group.
groups | grep libvirtd
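If it isn’t, one way to add it (assuming the Trusty group name libvirtd; log out and back in afterwards):

sudo adduser $USER libvirtd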
Ensure the host machine has SSH keys generated and set up. (Be careful: don’t overwrite your existing keys.)
[ -d ~/.ssh ] || ssh-keygen -t rsa

Virtual Network Setup

This step can be done via virt-manager, but can also be done via the command line using virsh. Set up a virtual network which uses NAT to communicate with the host machine, with the following parameters:
Network Name: maas_internet
Network: 192.168.100.0/24
Do _not_ Enable DHCP.
Forwarding to physical network; Any physical device; NAT
And set up an isolated virtual network with the following parameters:
Network Name: maas_management
Network: 10.10.10.0/24
Do _not_ Enable DHCP.
Isolated network.
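If you would rather script this step with virsh instead of virt-manager, here is a sketch for the NATed network (the XML layout and bridge name are assumptions matching the parameters above; the isolated network is analogous, with the forward element dropped):

cat > maas_internet.xml <<EOF
<network>
  <name>maas_internet</name>
  <forward mode='nat'/>
  <bridge name='virbr10' stp='on' delay='0'/>
  <ip address='192.168.100.1' netmask='255.255.255.0'/>
</network>
EOF
virsh net-define maas_internet.xml
virsh net-start maas_internet
virsh net-autostart maas_internet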
Install the MAAS Server

Download and Start the Install

Ensure you have virt-manager connected to the hypervisor.
While there are many ways to create virtual machines, I chose the tool uvtool because it works well in Trusty and quickly creates VMs based on the Ubuntu cloud image.

Sync the latest cloud trusty cloud image:
uvt-simplestreams-libvirt sync release=trusty arch=amd64
Create a maas-server VM:
uvt-kvm create maas-server release=trusty arch=amd64 --disk 20 --memory 2048 --password ubuntu
After it boots, shut it down and  edit the VM’s machine configuration.
Make the two network interfaces connect to maas_internet and maas_management respectively.

Now edit /etc/network/interfaces to have the following:
auto eth0
iface eth0 inet static
  address 192.168.100.10
  netmask 255.255.255.0
  gateway 192.168.100.1
  dns-nameservers 10.10.10.10 192.168.100.1

auto eth1
iface eth1 inet static
  address 10.10.10.10
  netmask 255.255.255.0
  dns-nameservers 10.10.10.10 192.168.100.1

And follow the instructions here:
http://maas.ubuntu.com/docs/install.html#pkg-install

Which is essentially:
sudo apt-get install maas maas-dhcp maas-dns

MAAS Server Post Install Tasks

http://maas.ubuntu.com/docs/install.html#post-install

First let’s check if the webpage is working correctly. Depending on your installation, you may need to proxy into a remote host hypervisor before accessing the webpage. If you’re working locally you should be able to access this address directly (as the libvirt maas_internet network is already connected to your local machine).

If you need to access it indirectly (and 192.168.100.0 is a non-conflicting subnet):
sshuttle -D -r <hypervisor IP> 192.168.100.0/24
Access the following:
http://192.168.100.10/MAAS
It should remind you that post installation tasks need to be completed.

Let’s create the admin user from the hypervisor machine:
ssh ubuntu@192.168.100.10
sudo maas-region-admin createadmin --username=root --email="user@host.com" --password=ubuntu
If you want to limit the types of boot images that can be created you need to edit
sudo vim /etc/maas/bootresources.yaml
Import boot images, using the new root user you created to log in:
http://192.168.100.10/MAAS/clusters/
Now click 'import boot images' and be patient as it will take some time before these images are imported.

Add a key for the host machine here:
http://192.168.100.10/MAAS/account/prefs/sshkey/add/
Configure the MAAS Cluster

Follow the instructions here to set up the cluster:
http://maas.ubuntu.com/docs/cluster-configuration.html

http://192.168.100.10/MAAS/clusters/
Click on ‘Cluster master’
Click on edit interface eth1.
Interface: eth1
Management: DHCP and DNS
IP: 10.10.10.10
Subnet mask: 255.255.255.0
Broadcast IP: 10.10.10.255
Router IP: 10.10.10.10
IP Range Low: 10.10.10.100
IP Range High: 10.10.10.200

Click Save Interface
Ensure Nodes Auto-Enlist

Create a MAAS key and use that to log in:
http://192.168.100.10/MAAS/account/prefs/
Click on ‘+ Generate MAAS key’ and copy that down.

Log into the maas-server, and then log into maas using the MAAS key:
maas login maas-server http://192.168.100.10/MAAS
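The CLI will prompt for the API key you copied. As a quick sanity check that the profile works, listing nodes should return without error (the list may well be empty at this point):
maas maas-server nodes list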

Now set all nodes to auto accept:
maas maas-server nodes accept-all
Set up keys on the maas-server so it can access the virtual machine host:
sudo mkdir -p ~maas
sudo chown maas:maas ~maas
sudo -u maas ssh-keygen

Add the pubkey in ~maas/.ssh/id_rsa.pub to the virsh server’s authorized_keys and to the MAAS SSH keys (http://192.168.100.10/MAAS/account/prefs/sshkey/add/):
sudo cat ~maas/.ssh/id_rsa.pub
Now install virsh so we can test the connection and allow the maas-server to control the maas-nodes.
sudo apt-get install libvirt-bin
Test the connection to the hypervisor (replace ubuntu with the hypervisor host user):
sudo -u maas virsh -c qemu+ssh://ubuntu@192.168.100.1/system list --all

Confirm MAAS Server Networking

Ensure we can reach important addresses from the maas-server:
host streams.canonical.com
host store.juju.ubuntu.com
host archive.ubuntu.com

And that we can download charms if needed:
wget https://store.juju.ubuntu.com/charm-info

Setup Traffic Forwarding

Set up the maas-server to forward traffic from eth1 to eth0:

You can type the following out manually, or add it as an upstart job so that forwarding is set up properly on each boot. Create /etc/init/ovs-routing.conf with the following contents (thanks to Juan Negron):

description "Setup NAT rules for ovs bridge"

start on runlevel [2345]

env EXTIF="eth0"
env BRIDGE="eth1"

task

script
echo "Configuring modules"
modprobe ip_tables || :
modprobe nf_conntrack || :
modprobe nf_conntrack_ftp || :
modprobe nf_conntrack_irc || :
modprobe iptable_nat || :
modprobe nf_nat_ftp || :

echo "Configuring forwarding and dynaddr"
echo "1" > /proc/sys/net/ipv4/ip_forward
echo "1" > /proc/sys/net/ipv4/ip_dynaddr

echo "Configuring iptables rules"
iptables-restore <<-EOM
*nat
-A POSTROUTING -o ${EXTIF} -j MASQUERADE
COMMIT
*filter
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
-A FORWARD -i ${BRIDGE} -o ${EXTIF} -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
-A FORWARD -i ${EXTIF} -o ${BRIDGE} -j ACCEPT
-A FORWARD -j LOG
COMMIT
EOM

end script
Then start the service:
sudo service ovs-routing start

Setup Squid Proxy

Ensure the squid proxy can access cloud images:
echo "cloud-images.ubuntu.com" | sudo tee /etc/squid-deb-proxy/mirror-dstdomain.acl.d/98-cloud-imagessudo service squid-deb-proxy restartInstall MAAS NodesNow we can virt-install each maas-node on the hypervisor such that it automatically pxe boots and auto-enlists into MAAS. You can adjust the script below to create as many nodes as required. I’ve also simplified things by creating everything with dual nics and ample memory and hard drive space, but of course you could use custom machines per service. Compute-nodes need more compute power, ceph nodes will need more storage, and quantum-gateway will need dual nics. In addition you could specify raw disks instead of qcow2, or use storage pools; but in this case I wanted something simple that didn’t automatically use all the space it needed.

for i in {0..19}; do
virt-install \
--name=maas-node-${i} \
--connect=qemu:///system --ram=4096 --vcpus=1 --hvm --virt-type=kvm \
--pxe --boot network,hd \
--os-variant=ubuntutrusty --graphics vnc --noautoconsole --os-type=linux --accelerate \
--disk=/var/lib/libvirt/images/maas-node-${i}.qcow2,bus=virtio,format=qcow2,cache=none,sparse=true,size=32 \
--network=network=maas_internet,model=virtio \
--network=network=maas_management,model=virtio
done
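Once the loop finishes you can confirm the domains exist on the hypervisor:
virsh list --all | grep maas-node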

Now each node needs to be manually enlisted with the proper power configuration.
http://maas.ubuntu.com/docs/nodes.html#virtual-machine-nodes

Host Name: maas-node-${i}.vmaas
Power Type: virsh
Power Address: qemu+ssh://ubuntu@192.168.100.1/system
Power ID: maas-node-${i}
Here we need to match the machines to their MAC addresses and update the power configuration. You can get the MAC address of each node by running the following on the hypervisor:

virsh dumpxml maas-node-${i} | grep "mac addr"
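To gather all of them at once, a small loop mirroring the creation script works:

for i in {0..19}; do
  echo -n "maas-node-${i}: "
  virsh dumpxml maas-node-${i} | grep 'mac address' | head -1
done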
Here is a script that helps automate some of this process. It can be run from the maas-server (replace the user ubuntu with the appropriate value); it matches the MAC addresses from virsh to the ones in MAAS, then sets up the power configuration accordingly:

#!/usr/bin/python

import sys, os, libvirt
from xml.dom.minidom import parseString

# Make the MAAS Django models importable.
os.environ['DJANGO_SETTINGS_MODULE'] = 'maas.settings'
sys.path.append("/usr/share/maas")
from maasserver.models import Node

hhost = 'qemu+ssh://ubuntu@192.168.100.1/system'

# Build a map of primary MAC address -> libvirt domain name.
conn = libvirt.open(hhost)
nodes_dict = {}
for node_name in conn.listDefinedDomains():
    node = conn.lookupByName(node_name)
    node_xml = parseString(node.XMLDesc(0))
    node_mac1 = node_xml.getElementsByTagName('interface')[0].getElementsByTagName('mac')[0].getAttribute('address')
    nodes_dict[node_mac1] = node_name

# Match each MAAS node to its libvirt domain by primary MAC,
# then set the virsh power parameters accordingly.
for node in Node.objects.all():
    try:
        mac = node.get_primary_mac()
        node_name = nodes_dict[str(mac)]
        node.hostname = node_name
        node.power_type = 'virsh'
        node.power_parameters = {'power_address': hhost, 'power_id': node_name}
        node.save()
    except Exception:
        # Skip nodes with no matching libvirt domain.
        pass

Note: you will need python-libvirt installed, and the script should be run as the maas user, something like:
sudo -u maas ./setup-nodes.py

Setup Fastpath and Commission Nodes

You most likely want to use the fast-path installer on nodes to speed up installation times. Set all nodes to use the fast-path installer using another bulk action on the nodes page.

After you have all of this done, select the nodes and apply the 'Commission' bulk action.
If things are set up properly you should see all of your machines start up; give this some time. All of the nodes should then be in the 'Ready' state in MAAS:
http://192.168.100.10/MAAS/nodes/
Confirm DNS Setup

One common point of trouble is making sure DNS is set up correctly. We can test this by starting a MAAS node and, inside of it, trying the following:
dig streams.canonical.com
dig store.juju.ubuntu.com
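To query the MAAS server's bind directly, rather than whatever /etc/resolv.conf points at:
dig @10.10.10.10 streams.canonical.com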

If we can’t resolve those, we’ll need to ensure the MAAS server is set up correctly.
Go to: http://192.168.100.10/MAAS/settings/
Enter the host machine’s upstream DNS server here if necessary; MAAS will update the bind configuration file and restart the service. After this, re-test.

In addition I had to disable dnssec-validation for bind. Edit the following file:
sudo vim /etc/bind/named.conf.options
And change the following value:
dnssec-validation no;
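In context, the options block ends up looking something like this (a sketch; the other lines are the stock Ubuntu defaults):

options {
        directory "/var/cache/bind";
        dnssec-validation no;
        auth-nxdomain no;    # conform to RFC1035
        listen-on-v6 { any; };
};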
And restart the service:
sudo service bind9 restart
Now you have a working virtual MAAS setup using the latest Ubuntu LTS!

Kees Cook: glibc select weakness fixed

Planet Ubuntu - Fri, 2014-06-13 19:21

In 2009, I reported this bug to glibc, describing the problem that exists when a program is using select, and has its open file descriptor resource limit raised above 1024 (FD_SETSIZE). If a network daemon starts using the FD_SET/FD_CLR glibc macros on fdset variables for descriptors larger than 1024, glibc will happily write beyond the end of the fdset variable, producing a buffer overflow condition. (This problem had existed since the introduction of the macros, so, for decades? I figured it was long over-due to have a report opened about it.)

At the time, I was told this wasn’t going to be fixed and that “every program using [select] must be considered buggy.” Two years later, still more people kept asking for this feature and continued to be told “no”.

But, as it turns out, a few months later after the most recent “no”, it got silently fixed anyway, with the bug left open as “Won’t Fix”! I’m glad Florian did some house-cleaning on the glibc bug tracker, since I’d otherwise never have noticed that this protection had been added to the ever-growing list of -D_FORTIFY_SOURCE=2 protections.

I’ll still recommend everyone use poll instead of select, but now I won’t be so worried when I see requests to raise the open descriptor limit above 1024.

© 2014, Kees Cook. This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 License.
