news aggregator

Ubuntu App Developer Blog: Ubuntu HTML5 apps session in Barcelona

Planet Ubuntu - Fri, 2014-07-04 14:04

Here’s a reminder about next Monday’s (7th of July) Ubuntu HTML5 apps session in Barcelona.

At this free event, I’ll be presenting Ubuntu’s HTML5 development story, together with a live coding session and a Q&A round at the end. You’ll learn how to use the Ubuntu SDK and the UI toolkit to easily reuse your web skills to create stunning Ubuntu apps.

HTML5 is the other side of the coin of the Ubuntu app developer offering, where both web and native are first-class citizens, providing a very flexible yet focused approach to application development. Teaming up with BeMyApp meetups, the session will start at 7 p.m. at Barcelona’s Mobile World Centre.

I look forward to seeing you there!

Register here for the HTML5 session >

Raphaël Hertzog: Tracker.debian.org is live

Planet Ubuntu - Fri, 2014-07-04 10:15

Maybe you remember: last year I mentored a Google Summer of Code project whose aim was to replace our well-known Package Tracking System with something more modern, usable by derivatives, and more easily hackable. The result of this project is a new Django-based software called Distro Tracker.

With the help of the Debian System Administrators, it’s now set up on tracker.debian.org!

This service is also managed by the Debian QA team; it’s deployed in /srv/tracker.debian.org/ (on ticharich.debian.org, a VM) if you want to verify something on the live installation. It runs under the “qa” user (so members of the “qa-core” group can administer it).

That said, you can reproduce the setup on your workstation quite easily, just by checking out the git repository and applying this change:

--- a/distro_tracker/project/settings/local.py
+++ b/distro_tracker/project/settings/local.py
@@ -10,6 +10,7 @@ overrides on top of those type-of-installation-specific settings.
 
 from .defaults import INSTALLED_APPS
 from .selected import *
+from .debian import *
 
 ## Add your custom settings here
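For reference, this is roughly what the relevant part of distro_tracker/project/settings/local.py looks like once that one-line change is applied (a sketch reconstructed from the diff above, not the complete file):

from .defaults import INSTALLED_APPS
from .selected import *
from .debian import *  # the added line: pull in the Debian-specific settings

## Add your custom settings here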

Speaking of contributing, the documentation includes a “Contributing” section to get you up and running, ready to do your first contribution!

Now go use this new service and report any issue against the new tracker.debian.org pseudo-package (BTW tracker.debian.org knows about pseudo-packages, example here).

There are many small things that need to be fixed/improved, if you know Python/Django and would like to start contributing to Debian, here’s your chance!


David Tomaschik: CVE-2014-4182 & CVE-2014-4183: XSS & XSRF in Wordpress 'Diagnostic Tool' Plugin

Planet Ubuntu - Fri, 2014-07-04 07:00

Versions of the Wordpress plugin Diagnostic Tool prior to 1.0.7 contain several vulnerabilities:

  1. Persistent XSS in the Outbound Connections view. An attacker that is able to cause the site to request a URL containing an XSS payload will have this XSS stored in the database, and when an admin visits the Outbound Connections view, the payload will run. This can be trivially demonstrated by running a query for http://localhost/<script>alert(/xss/)</script> on that page, then refreshing the page to see the payload run, as the view is not updated in real time. This is CVE-2014-4183.

  2. Reflected XSS in DNS resolver test page. When a reverse lookup is performed, the results of gethostbyaddr() are inserted into the DOM unescaped. An attacker who (mis-) configures a DNS server to send an XSS payload as a reverse lookup may be able to either trick the administrator into performing a lookup, or (more likely) use the CSRF vulnerability documented below to trigger the XSS.

  3. AJAX handlers do not have any CSRF protection on them. This allows an attacker to trigger the server into sending test emails (low severity), perform DNS lookups (high severity when combined with the reflected XSS above) and request the loading of pages by the server (including URLs that contain XSS payloads, triggering the persistent XSS documented above). Additionally, the last 2 vulnerabilities could be used to trigger an information leak for Wordpress servers that are behind a DDoS protection service (e.g., Cloudflare) or are being run as Tor hidden services, by forcing the server to request a page from the attacker's server or perform a DNS query against the attacker's DNS server, allowing the attacker to learn the real IP of the server hosting Wordpress. This is CVE-2014-4182.

Timeline:

  • 2014/06/15: Vulnerabilities discovered & reported to developers.
  • 2014/06/30: Developers release Diagnostic Tool 1.0.7, fixing issues.
  • 2014/07/04: Public disclosure.

Ubuntu Podcast from the UK LoCo: S07E14 – The One with the Tea Leaves

Planet Ubuntu - Thu, 2014-07-03 20:35

We’re back with Season Seven, Episode Fourteen of the Ubuntu Podcast! Alan Pope, Mark Johnson, Tony Whitmore, and Laura Cowen are drinking tea and eating Fox’s Ginger Crunch Creams biscuits in Studio L.

Download OGG | Download MP3

In this week’s show:

  • The UUPC Big Clock (by @sil)
  • reset
  • mount | column -t
  • df | column -t

We’ll be back next week, so please send your comments and suggestions to: podcast@ubuntu-uk.org
Join us on IRC in #uupc on Freenode
Leave a voicemail via phone: +44 (0) 203 298 1600, sip: podcast@sip.ubuntu-uk.org and skype: ubuntuukpodcast
Follow us on Twitter
Find our Facebook Fan Page
Follow us on Google+

Costales: Firefox Search Engine for Explain Shell web page

Planet Ubuntu - Thu, 2014-07-03 20:20
If you're a Linux sysadmin you'll really like the Explain Shell webpage.
But I was missing a search engine for Firefox...

Firefox Search Engine for Explain Shell
Download from here and save it into your Firefox profile folder like:
~/.mozilla/firefox/<your_profile>/searchplugins/explainshell.xml

Enjoy it! :)

Duncan McGreggor: Uncovering Inherent Structures in Organizations

Planet Ubuntu - Thu, 2014-07-03 16:04
Vladimir Levenshtein

This post should have a subtitle: "Using Team Analysis and Levenshtein Distance to Reveal said Structure." It's the first part of that subtitle that is the secret, though: being able to correctly analyze and classify individual teams. Without that, using something clever like Levenshtein distance isn't going to be very much help.

But that's coming in towards the end of the story. Let's start at the beginning.

What You're Going to See

This post is a bit long. Here are the sections I've divided it into:

  • What You're Going to See
  • Premise
  • Introducing ACME
  • Categorizing Teams
  • Category Example
  • Calculating the Levenshtein Distance of Teams
  • Sorting and Interpretation
  • Conclusion

However, you don't need to read the whole thing to get the main benefits. You can get the Cliff Notes version by reading the Premise, Categorizing Teams, Sorting and Interpretation, and the Conclusion.

Premise

Companies grow. Teams expand. If you're well-placed in your industry and providing in-demand services or products, this is happening to you. Individuals and small teams tend to deal with this sort of change pretty well. At an organizational level, however, this sort of change tends to have an impact that can bring a group down, or rocket it up to the next level.

Among the many issues faced by companies (or rapidly growing orgs within large companies) is this: "Our old structures, though comfortable, won't scale well with all these new teams and all the new hires joining our existing teams. How do we reorganize? Where do we put folks? Are there natural lines along which we can provide better management (and vision!) structure?"

The answer, of course, is "yes" -- but! It requires careful analysis and a deep understanding of every team in your org.

The remainder of this post will set up a scenario and then figure out how to do a re-org. I use a software engineering org as an example, but that's just because I have a long and intimate knowledge of them and understand the ways in which one can classify such teams. These same methods could be applied to a Sales group, Marketing groups, etc., as long as you know the characteristics that define the teams of which these orgs are comprised.



Introducing ACME

ACME Corporation is the leading producer of some of the most innovative products of the 20th century. The CTO had previously tasked you, the VP of Software Development, to bring this product line into the digital age -- and you did! Your great ideas for the updated suite are the new hotness that everyone is clamouring for. Subsequently, the growth of your teams has been fast, and dare we say, exponential.

More details on the scenario: your Software Development Group has several teams of engineers, all working on different products or services, each of which supports ACME Corporation in different ways. In the past 2 years, you've built up your org by an order of magnitude in size. You've started promoting and hiring more managers and directors to help organize these teams into sensible encapsulating structures. These larger groups, once identified, would comprise the whole Development Group.

Ideally, the new groups would represent some aspect of the company, software development, engineering, and product vision -- in other words, some sensible clustering of teams doing related work. How would you group the teams in the most natural way?

Simply dividing along language or platform lines may seem like the obvious answer, but is it the best choice? There are some questions that can help guide you in figuring this out:
  • How do these teams interact with other parts of the company? 
  • Who are the stakeholders in feature development? 
  • Which sorts of customers does each team primarily serve?
There are many more questions you could ask (some are implicit in the analysis data linked below), but this should give a taste.

ACME Software Development has grown the following teams, some of which focus on products, some on infrastructure, some on services, etc.:
  • Digital Anvil Product Team
  • Giant Rubber Band App Team
  • Digital Iron Carrot Team
  • Jet Propelled Unicycle Service Team
  • Jet Propelled Pogo Stick Service Team
  • Ultimatum Dispatcher API Team
  • Virtual Rocket Powered Roller Skates Team
  • Operations (release management, deployments, production maintenance)
  • QA (testing infrastructure, CI/CD)
  • Community Team (documentation, examples, community engagement, meetups, etc.)

Early SW Dev team hacking the ENIAC

Categorizing Teams

Each of those teams started with 2-4 devs hacking on small skunkworks projects. They've now blossomed to the extent that each team has significant sub-teams working on new features and prototyping for the product they support. These large teams now need to be characterized using a method that will allow them to be easily compared. We need the ability to see how closely related one team is to another, across many different variables. (In the scheme outlined below, we end up examining 50 bits of information for each team.)

Keep in mind that each category should be chosen such that it would make sense for teams categorized similarly to be grouped together. A counter example might be "Team Size"; you don't necessarily want all large teams together in one group, and all small teams in a different group. As such, "Team Size" is probably a poor category choice.
Here are the categories which we will use for the ACME Software Development Group:
  • Language
  • Syntax
  • Platform
  • Implementation Focus
  • Supported OS
  • Deployment Type
  • Product?
  • Service?
  • License Type
  • Industry Segment
  • Stakeholders
  • Customer Type
  • Corporate Priority
Each category may be either single-valued or multi-valued. For instance, the categories ending in question marks will be booleans. In contrast, multiple languages might be used by the same team, so the "Language" category will sometimes have several entries.

Category Example

(Things are going to get a bit more technical at this point; for those who care more about the outcomes than the methods used, feel free to skip to the section at the end: Sorting and Interpretation.)

In all cases, we will encode these values as binary digits -- this allows us to very easily compare teams using Levenshtein distance, since the total of all characteristics we are filtering on can be represented as a string value. An example should illustrate this well.

(The Levenshtein distance between two words is the minimum number of single-character edits -- such as insertions, deletions or substitutions -- required to change one word into the other. It is named after Vladimir Levenshtein, who defined this "distance" in 1965 when exploring the possibility of correcting deletions, insertions, and reversals in binary codes.)
Let's say the Software Development Group supports the following languages, with each one assigned a binary value:
  • LFE - #b0000000001
  • Erlang - #b0000000010
  • Elixir - #b0000000100
  • Ruby - #b0000001000
  • Python - #b0000010000
  • Hy - #b0000100000
  • Clojure - #b0001000000
  • Java - #b0010000000
  • JavaScript - #b0100000000
  • CoffeeScript - #b1000000000
A team that used LFE, Hy, and Clojure would obtain its "Language" category value by XOR'ing the three supported languages, and would thus be #b0001100001. In LFE, that could be done by entering the following code into the REPL:


We could then compare this to a team that used just Hy and Clojure (#b0001100000), which has a Levenshtein distance of 1 with the previous language category value. A team that used Ruby and Elixir (#b0000001100) would have a Levenshtein distance of 5 with the LFE/Hy/Clojure team (which makes sense: a total of 5 languages between the two teams with no languages shared in common).
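(The original post does all of this at the LFE REPL; purely as an illustration, here is the same idea sketched in Python. The helper names and the layout of the table below are mine, not from the post.)

# Bit flags for the "Language" category, as listed above.
LANGUAGES = {
    "LFE": 0b0000000001, "Erlang": 0b0000000010, "Elixir": 0b0000000100,
    "Ruby": 0b0000001000, "Python": 0b0000010000, "Hy": 0b0000100000,
    "Clojure": 0b0001000000, "Java": 0b0010000000,
    "JavaScript": 0b0100000000, "CoffeeScript": 0b1000000000,
}

def encode(langs, width=10):
    """XOR together the flags for a team's languages; return a fixed-width bit string."""
    value = 0
    for lang in langs:
        value ^= LANGUAGES[lang]
    return format(value, "0%db" % width)

def levenshtein(a, b):
    """Classic dynamic-programming edit distance between two strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            cost = 0 if ca == cb else 1
            curr.append(min(prev[j] + 1,          # deletion
                            curr[j - 1] + 1,      # insertion
                            prev[j - 1] + cost))  # substitution
        prev = curr
    return prev[-1]

lfe_hy_clj = encode(["LFE", "Hy", "Clojure"])  # '0001100001'
hy_clj     = encode(["Hy", "Clojure"])         # '0001100000'
ruby_ex    = encode(["Ruby", "Elixir"])        # '0000001100'
print(levenshtein(lfe_hy_clj, hy_clj))         # 1
print(levenshtein(lfe_hy_clj, ruby_ex))        # 5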

Calculating the Levenshtein Distance of Teams

As a VP who is keen on deeply understanding your teams, you have put together a spreadsheet with a break-down of not only languages used in each team, but lots of other categories, too. For easy reference, you've put a "legend" for the individual category binary values at the bottom of the linked spreadsheet.

In the third table on that sheet, all of the values for each column are combined into a single binary string. This (or a slight modification of this) is what will be the input to your calculations. Needless to say, as a complete fan of LFE, you will be writing some Lisp code :-)

(If you would like to try the code out yourself while reading, and you have lfetool installed, simply create a new project and start up the REPL: $ lfetool new library ld; cd ld && make-shell. That will download and compile the dependencies for you. In particular, you will then have access to the lfe-utils project -- which contains the Levenshtein distance functions we'll be using. You should be able to copy-and-paste functions, vars, etc., into the REPL from the Github gists.)
Let's create a couple of data structures that will allow us to more easily work with the data you collected about your teams in the spreadsheet:


We can use a quick copy and paste into the LFE REPL for two of those numbers to do a sanity check on the distance between the Community Team and the Digital Iron Carrot Team:


That result doesn't seem unreasonable, given that at a quick glance we can see both of these strings have many differences in their respective character positions.

It looks like we're on solid ground, then, so let's define some utility functions to more easily work with our data structures:


Now we're ready to roll; let's try sorting the data based on a comparison with one of the teams:


It may not be obvious at first glance, but what the levenshtein-sort function did for us is compare our "control" string to every other string in our data set, providing both the distance and the string that the control was compared to. The first entry in the results is our control string, and we see what we would expect: the Levenshtein distance with itself is 0 :-)
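(Again as a rough Python illustration only, reusing the levenshtein() function from the earlier sketch; the bit-strings below are short made-up placeholders rather than the 50-bit values from the spreadsheet.)

# Hypothetical team bit-strings standing in for the spreadsheet data.
team_bits = [
    "0001100001",  # pretend this is the control team, e.g. the DICT
    "0110011010",
    "0000101100",
    "0000101110",
]

def levenshtein_sort(control, strings):
    """Compare the control string against every string and sort by distance."""
    return sorted((levenshtein(control, s), s) for s in strings)

print(levenshtein_sort(team_bits[0], team_bits))
# The control string itself sorts first, with a distance of 0.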

The result above is not very easily read by most humans ... so let's define a custom sorter that will take human-readable text and then output the same, after doing a sort on the binary strings:


(If any of that doesn't make sense, please stop in and say "hello" on the LFE mail list -- ask us your questions! We're a friendly group that loves to chat about LFE and how to translate from Erlang, Common Lisp, or Clojure to LFE :-) )


Sorting and Interpretation

Before we try out our new function, we should ponder which team will be compared to all the others -- the sort results will change based on this choice. Looking at the spreadsheet, we see that the "Digital Iron Carrot Team" (DICT) has some interesting properties that make it a compelling choice:

  • it has stakeholders in Sales, Engineering, and Senior Leadership;
  • it has a "Corporate Priority" of "Business critical"; and
  • it has both internal and external customers.
Of all the products and services, it seems to be the biggest star. Let's try a sort now, using our new custom function -- inputting something that's human-readable: 


Here we're making the request "Show me the sorted results of each team's binary string compared to the binary string of the DICT." Here are the human-readable results:


For a better visual on this, take a look at the second tab of the shared spreadsheet. The results have been applied to the collected data there, and then colored by major groupings. The first group shares these things in common:

  • Lisp- and Python-heavy
  • Middleware running on BSD boxen
  • Mostly proprietary
  • Externally facing
  • Focus on apps and APIs
It would make sense to group these three together.
Next on the list is Operations and QA -- often a natural pairing, and this process bears out such conventional wisdom. These two are good candidates for a second group.
Things get a little trickier at the end of the list. Depending upon the number of developers in the Java-heavy Giant Rubber Band App Team, they might make up their own group. However, both that one and the next team on the list have frontend components written in Angular.js. They both are used internally and have Engineering as a stakeholder in common, so let's go ahead and group them.
The next two are cloud-deployed Finance APIs running on the Erlang VM. These make a very natural pairing.
Which leaves us with the oddball: the Community Team. The Levenshtein distance for this team is the greatest for all the teams ... but don't be misled. Because it has something in common with all teams (the Community Team supports every product with docs, example code, Sales and TAM support, evangelism for open source projects, etc.), it will have many differing bits with each team. This really should be in a group all its own so that structure represents reality: all teams depend upon the Community Team. A good case could also probably be made for having the manager of this team report directly up to you.
The other groups should probably have directors that the team managers report to (keeping in mind that the teams have grown to anywhere from 20 to 40 per team). The director will be able to guide these teams according to your vision for the Software Group and the shared traits/common vision you have uncovered in the course of this analysis.
Let's go back to the Community Team. Perhaps in working with them, you have uncovered a hidden fact: the community interactions your devs have are seriously driving market adoption through some impressive and passionate service and open source docs+evangelism. You are curious how your teams might be grouped if sorted from the perspective of the Community Team.
Let's find out!


As one might expect, most of the teams remain grouped in the same way ... the notable exception being the split-up of the Anvil and Rubber Band teams. Mostly no surprises, though -- the same groupings persist in this model.

To be fair, if this is something you'd want to fully explore, you should bump the "Corporate Priority" for the Community Team much higher, recalculate its overall bits, regenerate your data structures, and then re-sort. It may not change too much in this case, but you'd be applying consistent methods, and that's definitely the right thing to do :-) You might even see the Anvil and Rubber Band teams get back together (left as an exercise for the reader).

As a last example, let's throw caution and good sense to the wind and get crazy. You know, like the many times you've seen bizarre, anti-intuitive re-orgs done: let's do a sort that compares a team of middling importance and a relatively low corporate impact with the rest of the teams. What do we see then?


This ruins everything. Well, almost everything: the only group that doesn't get split up is the middleware product line (Jet Propelled and Iron Carrot). Everything else suffers from a bad re-org.

If you were to do this because a genuine change in priority had occurred, where the Giant Rubber Band App Team was now the corporate leader/darling, then you'd need to recompute the bit values and do re-sorts. Failing that, you'd just be falling into a trap that has beguiled many before you.


Conclusion

If there's one thing that this exercise should show you, it's this: applying tools and analyses from one field to fresh data in another -- completely unrelated -- field can provide pretty amazing results that turn mystery and guesswork into science and planning.

If we can get two things from this, the other might be: knowing the parts of the system may not necessarily reveal the whole (c.f. Complex Systems), but it may provide you with the data that lets you better predict emergent behaviours and identify patterns and structure where you didn't see them (or even think to look!) before.

Alan Pope: Creating Time-lapse Videos on Ubuntu

Planet Ubuntu - Thu, 2014-07-03 15:17

I’ve previously blogged about how I sometimes set up a webcam to take pictures and turn them into videos. I thought I’d update that here with something new I’ve done: fully automated time lapse videos on Ubuntu. Here’s what I came up with:-

(apologies for the terrible music, I added that from a pre-defined set of options on YouTube)

(I quite like the cloud that pops into existence at ~27 seconds in)

Over the next few weeks there’s an Air Show where I live and the skies fill with all manner of strange aircraft. I’m usually working so I don’t always see them as they fly over, but usually hear them! I wanted a way to capture the skies above my house and make it easily available for me to view later.

So my requirements were basically this:-

  • Take pictures at fairly high frequency – one per second – the planes are sometimes quick!
  • Turn all the pictures into a time lapse video – possibly at one hour intervals
  • Upload the videos somewhere online (YouTube) so I can view them from anywhere later
  • Delete all the pictures so I don’t run out of disk space
  • Automate it all
Taking the picture

I’ve already covered this really, but for this job I have tweaked the .webcamrc file to take a picture every second, only save images locally & not to upload them. Here’s the basics of my .webcamrc:-

[ftp]
dir = /home/alan/Pictures/webcam/current
file = webcam.jpg
tmp = uploading.jpeg
debug = 1
local = 1

[grab]
device = /dev/video0
text = popeycam %Y-%m-%d %H:%M:%S
fg_red = 255
fg_green = 0
fg_blue = 0
width = 1280
height = 720
delay = 1
brightness = 50
rotate = 0
top = 0
left = 0
bottom = -1
right = -1
quality = 100
once = 0
archive = /home/alan/Pictures/webcam/archive/%Y/%m/%d/%H/snap%Y-%m-%d-%H-%M-%S.jpg

Key things to note: “delay = 1” gives us an image every second. The archive directory is where the images will be stored, in sub-folders for easy management and later deletion. That’s it; put that in the home directory of the user taking pictures and then run webcam. Watch your disk space get eaten up.

Making the video

This is pretty straightforward and can be done in various ways. I chose to do two-pass x264 encoding with mencoder. In this snippet we take the images from one hour – in this case midnight to 1AM on 2nd July 2014 – from /home/alan/Pictures/webcam/archive/2014/07/02/00 and make a video in /home/alan/Pictures/webcam/2014070200.avi and a final output in /home/alan/Videos/webcam/2014070200.avi which is the one I upload.

mencoder "mf:///home/alan/Pictures/webcam/archive/2014/07/02/00/*.jpg" -mf fps=60 -o /home/alan/Pictures/webcam/2014070200.avi -ovc x264 -x264encopts direct=auto:pass=1:turbo:bitrate=9600:bframes=1:me=umh:partitions=all:trellis=1:qp_step=4:qcomp=0.7:direct_pred=auto:keyint=300 -vf scale=-1:-10,harddup mencoder "mf:///home/alan/Pictures/webcam/archive/2014/07/02/00/*.jpg" -mf fps=60 -o /home/alan/Pictures/webcam/2014070200.avi -ovc x264 -x264encopts direct=auto:pass=2:bitrate=9600:frameref=5:bframes=1:me=umh:partitions=all:trellis=1:qp_step=4:qcomp=0.7:direct_pred=auto:keyint=300 -vf scale=-1:-10,harddup -o /home/alan/Videos/webcam/2014070200.avi Upload videos to YouTube

The project youtube-upload came in handy here. It’s pretty simple with a bunch of command line parameters – most of which should be pretty obvious – to upload to youtube from the command line. Here’s a snippet with some credentials redacted.

python youtube_upload/youtube_upload.py --email=########## --password=########## --private --title="2014070200" --description="Time lapse of Farnborough sky at 00 on 02 07 2014" --category="Entertainment" --keywords="timelapse" /home/alan/Videos/webcam/2014070200.avi

I have set the videos all to be private for now, because I don’t want to spam any subscriber with a boring video of clouds every hour. If I find an interesting one I can make it public. I did consider making a second channel, but the youtube-upload script (or rather the YouTube API) doesn’t seem to support specifying a different channel from the default one. So I’d have to switch to a different channel by default to work around this, and then make them all public by default, maybe.

In addition, YouTube sends me a patronising “Well done Alan” email whenever a video is uploaded, so I know when it breaks: I stop getting those mails.

Delete the pictures

This is easy, I just rm the /home/alan/Pictures/webcam/archive/2014/07/02/00 directory once the upload is done. I don’t bother to check if the video uploaded okay first because if it fails to upload I still want to delete the pictures, or my disk will fill up. I already have the videos archived, so can upload those later if the script breaks.

Automate it all

webcam is running constantly in a ‘screen’ window, so that part is easy. I could maybe detect when it dies and re-spawn it; it has been known to crash now and then. I’ll get to that when it happens.

I created a cron job which runs at 10 mins past the hour, and collects all the images from the previous hour.

10 * * * * /home/alan/bin/encode_upload.sh

I learned the useful “1 hour ago” option to the GNU date command. This lets me pick up the images from the previous hour and deals with all the silly calculation to figure out what the previous hour was.
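(Purely for illustration, here is the same 'previous hour' calculation done in Python; popey's actual script just uses GNU date, e.g. date -d '1 hour ago'.)

from datetime import datetime, timedelta

# Work out which archive directory holds the previous hour's images,
# matching the 'archive' layout from .webcamrc above.
previous_hour = datetime.now() - timedelta(hours=1)
archive_dir = previous_hour.strftime("/home/alan/Pictures/webcam/archive/%Y/%m/%d/%H")
print(archive_dir)  # e.g. /home/alan/Pictures/webcam/archive/2014/07/02/00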

Here (on github) is the final script. Don’t laugh.

Ubuntu App Developer Blog: 100,000 App Downloads

Planet Ubuntu - Thu, 2014-07-03 14:43

It was less than a month ago that we announced crossing the 10,000 users milestone for Ubuntu phones and tablets, and we’ve already reached another: 100,000 app downloads!

Downloads

The new Ubuntu store used by phones, tablets, and soon the desktop as well, provides app developers with some useful statistics about how many times their app was downloaded, which version was downloaded, and what country the download originated from. This is very useful as it lets the developer gauge how many users they currently have for their app, and how quickly they are updating to new versions.  One side-effect of these statistics is that we can see how many total downloads there have been across all of the apps in the store, and this week we reached (and quickly passed) the 100,000th download.

Users

We’re getting close to having Ubuntu phones go on sale from our partners at Bq and Meizu, but there are still no devices on the market that came with Ubuntu.  This means that we’ve reached this milestone solely from developers and enthusiasts who have installed Ubuntu on one of their own devices (probably a Nexus device) or the device emulator.  

The continued growth in the download number validates the earlier milestone of 10,000 users; a large number of them are clearly still using Ubuntu on their device (or emulator) and keeping their apps up to date (the number represents new app installs and updates). This means that not only are people trying Ubuntu already, many of them are sticking with it too.  Yet another datapoint in support of this is the 600 new unique users who have been using the store since the last milestone announcement.

Pioneers

To supply all of these users with the apps they want, we’re continuing to build our community of app developers around Ubuntu. The first of these have already received their limited edition t-shirts, and are listed on the Ubuntu Pioneers page of the developer portal.

There is still time to get your app published, and claim your place on that page and your t-shirt, but they’re filling up fast so don’t delay. Go to our Developer Portal and get started today, you could be only a few hours away from publishing your first app in the store!

Svetlana Belkin: Doc Team Wiki Page Clean Up

Planet Ubuntu - Wed, 2014-07-02 22:36

Today the Wiki Sub-Team of the Ubuntu Doc Team had a mini-sprint over the team’s homepage (https://wiki.ubuntu.com/DocumentationTeam).  We cleaned it up by removing some of the unwanted sub-pages, rewriting some of the pages to make it easier for new people to get involved with us, and much more.

Hopefully this is the best step to help people to understand our team’s workflow.


The Fridge: Community Donations Funding Report, Q1 2014

Planet Ubuntu - Wed, 2014-07-02 21:03

When users visit the Ubuntu download page from our main website, ubuntu.com, they are given the option to make a donation to show their appreciation and help further the work that goes into making and distributing Ubuntu. Donations ear-marked for “Community projects” are made available to members of our community, and to events that our members attend.

In keeping with our core principles of openness and transparency, the way these community funds were spent is detailed in a regular report and made available for everybody to see. These requests for funding are an opportunity for us to invest in our community, and every dollar spent has benefited the Ubuntu project through improved contributions, organization and outreach.

Once again everybody on this list deserves a big thanks for their continued contributions to Ubuntu. The funds they were given may cover immediate expenses, but they in no way cover all of the time, energy, and passion that these contributors have put into our community.

The latest funding report, using the same format as the previous one, can be viewed here.

Published on behalf of Michael Hall of the Community Team

Benjamin Kerensa: Release Management Work Week

Planet Ubuntu - Wed, 2014-07-02 19:46

Team discussing goals

Last week in Portland, Oregon, we had our second release management team work week of the year focusing on our goals and work ahead in Q3 of 2014. I was really excited to meet the new manager of the team, our new intern and two other team members I had not yet met.

It was quite awesome to have the face-to-face time with the team to knock out some discussions and work that required the kind of collaboration that a work week offers. One thing I liked working on the most was discussing the current success of the Early Feedback Release Manager role I have had on the team and discussing ideas for improving the pathways for future contributors in the team while also creating new opportunities and a new pathway for me to continue to grow.

One thing unique about this work week is that we also took some time to participate in Open Source Bridge, a local conference that Mozilla happened to be sponsoring at The Eliot Center and that Lukas Blakk from our team was speaking at. Lukas used her keynote talk to introduce the awesome project she is working on, the Ascend Project, which she will be piloting soon in Portland.

Lukas Blakk Ascend Project Keynote at Open Source Bridge 2014

While this was a great work week and I think we accomplished a lot, I hope future work weeks are either out of town or that I can block off other life obligations to spend more time on-site, as I did have to drop off a few times for things that came up or run off to the occasional meeting or Vidyo call.

Thanks to Lawrence Mandel for being such an awesome leader of our team and seeing the value in operating open by default. Thanks to Lukas for being a great mentor and awesome person to contribute alongside. Thanks to Sylvestre for bringing us French Biscuits and fresh ideas. Thanks to Bhavana for being so friendly and always offering new ideas and thanks to Pranav for working so hard on picking up where Willie left off and giving us a new tool that will help our release continue to be even more awesome.

Mohamad Faizul Zulkifli: Sharing Wireless Internet To Ethernet

Planet Ubuntu - Wed, 2014-07-02 13:55
Sometimes we need to share our wireless internet with ethernet-connected clients or anything else, such as wireless access points.
In my situation, my smartphone can't connect directly to my preferred wireless access point, so I need to set up a simple wireless repeater.
First things first, I need to connect to the main wireless access point using my PC. After that, I configure my PC to relay the internet to my secondary access point, which already has its own DHCP server, so I don't need to configure any DHCP server.
My wireless card connected to the main wireless access point is wlan1. This is how I do it.
internet --> wlan1 (on my pc) --> ethernet (on my pc too) --> network cable --> tp-link wireless access point --> smartphone
For your info, I already configured my tp-link wireless access point to match my ethernet configuration (IPs, subnets, gateways).
All I need to do after connecting to the primary wireless access point is run these commands.
ip link set up dev eth0
ip addr add 192.168.137.1/24 dev eth0
sysctl net.ipv4.ip_forward=1
iptables -t nat -A POSTROUTING -o wlan1 -j MASQUERADE
iptables -A FORWARD -i eth0 -o wlan1 -j ACCEPT
iptables -A FORWARD -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
Take note that my ethernet interface is eth0.

Jos&eacute; Antonio Rey: Juju’ing with t1.micro instances on AWS

Planet Ubuntu - Tue, 2014-07-01 21:13

Back in February, when I decided I didn’t want to use the local provider with Juju because my internet connection has a download speed of 400KBps, I opened an AWS account. This gave me 750 hours per month to use on t1.micro instances, which are awesome for Juju testing… until you hit some problems.

The main problem with the t1.micro instances is that they only have 613MB of RAM. This is good for testing charms which do not require a lot of memory, but there are some (such as nova-cloud-controller) which do require more memory to run properly. Even worse, they require memory to finish installing.

I should note that, in general, my experience with t1.micro instances and the AWS free tier has been awesome, but in these cases there is no other solution than getting a bigger instance. If you are testing in the cloud and you see a weird error you don’t understand, it may be that the machine has run out of memory to allocate, so try a bigger instance. If that doesn’t solve it, go ahead and report a bug. If it’s something in a charm’s code, we’ll look into it.

Happy deploying!


Chris J Arges: using ktest.pl with ubuntu

Planet Ubuntu - Tue, 2014-07-01 20:10
Bisecting the kernel is one of those tasks that's time-consuming and error-prone. Ktest.pl is a script that lives in the linux kernel source tree [1] that helps to automate this process. The script is extremely extensible and as such takes time to understand which variables need to be set and where. In this post, I'll go over how to perform a kernel bisection using a VM as the target machine. In this example I'm using 'ubuntu' as the VM name.

First ensure you have all dependencies correctly setup:
sudo apt-get install libvirt-bin qemu-kvm cpu-checker virtinst uvtool git
sudo apt-get build-dep linux-image-`uname -r`
Ensure kvm works:
kvm-ok
In this example we are using uvtool to create VMs using cloud images, but you could just as easily use a preseed install or a manual install via an ISO. First sync the cloud image:
uvt-simplestreams-libvirt sync release=trusty arch=amd64
Clone the necessary git repository:
git clone git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git linux.git
Copy ktest.pl outside of the linux kernel tree (since bisecting it also changes the script, this way it remains constant):
cd linux
cp tools/testing/ktest/ktest.pl ..
cp -r tools/testing/ktest/examples/include ..
cd ..
Create directories for the script:
mkdir configs build
mkdir configs/ubuntu build/ubuntu
Get an appropriate config for the bisect you are doing and ensure it can reasonably 'make oldconfig' with the kernel version you are using. For example, if we are bisecting v3.4 kernels, we can use an Ubuntu v3.5 series kernel config and run yes '' | make oldconfig to ensure it is very close. Put this config into configs/ubuntu/config-min. For convenience I have put a config that works for this example here:
http://people.canonical.com/~arges/amd64-config.flavour.generic

Create the VM, ensuring you have ssh keys set up on your local machine first:
uvt-kvm create ubuntu release=trusty arch=amd64 --password ubuntu --unsafe-caching
Ensure the VM can be ssh'ed to via 'ssh ubuntu@ubuntu':
echo "$(uvt-kvm ip ubuntu) ubuntu" | sudo tee -a /etc/hosts
SSH into VM with ssh ubuntu@ubuntu.

Set up the initial target kernel to boot on the VM:
sudo cp /boot/vmlinuz-`uname -r` /boot/vmlinuz-test
sudo cp /boot/initrd.img-`uname -r` /boot/initrd.img-test
Ensure SUBMENUs are disabled on the VM, as the grub2 detection script in ktest.pl fails with submenus, and update grub:
echo "GRUB_DISABLE_SUBMENU=y" | sudo tee -a /etc/default/grub
sudo update-grub
Ensure we have a serial console on the VM with /etc/init/ttyS0.conf, and ensure that agetty automatically logs in as root. If you ran with the above script you can do the following:
sudo sed -i 's/exec \/sbin\/getty/exec \/sbin\/getty -a root/' /etc/init/ttyS0.conf
Ensure that /root/.ssh/authorized_keys on the VM contains the host keys so that ssh root@ubuntu works automatically. If you are using the above commands you can do:
sudo sed -i 's/^.*ssh-rsa/ssh-rsa/g' /root/.ssh/authorized_keys
Finally add a test case to /home/ubuntu/test.sh inside of the ubuntu VM. Ensure it is executable.
#!/bin/bash
# Make a unique string
STRING=$(cat /dev/urandom | tr -dc 'a-zA-Z0-9' | head -c 32)
> /var/log/syslog
echo $STRING > /dev/kmsg
# Wait for things to settle down...
sleep 5
grep $STRING /var/log/syslog
# This should return 0.

Now exit out of the machine and create the following configuration file for ktest.pl called ubuntu.conf. This will bisect from v3.4 (good) to v3.5-rc1 (bad), and run the test case that we put into the VM.
# Setup default machine
MACHINE = ubuntu

# Use virsh to read the serial console of the guest
CONSOLE = virsh console ${MACHINE}
CLOSE_CONSOLE_SIGNAL = KILL

# Include defaults from upstream
INCLUDE include/defaults.conf
DEFAULTS OVERRIDE

# Make sure we load up our machine to speed up builds
BUILD_OPTIONS = -j8

# This is required for restarting VMs
POWER_CYCLE = virsh destroy ${MACHINE}; sleep 5; virsh start ${MACHINE}

# Use the defaults that update-grub spits out
GRUB_FILE = /boot/grub/grub.cfg
GRUB_MENU = Ubuntu, with Linux test
GRUB_REBOOT = grub-reboot
REBOOT_TYPE = grub2

DEFAULTS

# Do a simple bisect
TEST_START
RUN_TEST = ${SSH} /home/ubuntu/test.sh
TEST_TYPE = bisect
BISECT_GOOD = v3.4
BISECT_BAD = v3.5-rc1
CHECKOUT = origin/master
BISECT_TYPE = test
TEST = ${RUN_TEST}
BISECT_CHECK = 1

Now we should be ready to run the bisection (this will take many, many hours depending on the speed of your machine):
./ktest.pl ubuntu.conf
  1. http://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/tools/testing/ktest?id=HEAD

Ubuntu Kernel Team: Kernel Team Meeting Minutes – July 01, 2014

Planet Ubuntu - Tue, 2014-07-01 17:17
Meeting Minutes

IRC Log of the meeting.

Meeting minutes.

Agenda

20140701 Meeting Agenda


Release Metrics and Incoming Bugs

Release metrics and incoming bug data can be reviewed at the following link:

  • http://people.canonical.com/~kernel/reports/kt-meeting.txt


Status: Utopic Development Kernel

We have rebased our Utopic kernel “unstable” branch to v3.16-rc3 and
uploaded to our ckt ppa (ppa:canonical-kernel-team/ppa). Please test.
I don’t anticipate an official v3.16 based upload to the archive until
further testing and baking has taken place.
—–
Important upcoming dates:
Thurs Jul 24 – 14.04.1 (~3 weeks away)
Thurs Jul 31 – 14.10 Alpha 2 (~4 weeks away)
Thurs Aug 07 – 12.04.5 (~5 weeks away)


Status: CVE’s

The current CVE status can be reviewed at the following link:

http://people.canonical.com/~kernel/cve/pkg/ALL-linux.html


Status: Stable, Security, and Bugfix Kernel Updates – Trusty/Saucy/Precise/Lucid

Status for the main kernels, until today (Jul. 1):

  • Lucid – Kernel Prep
  • Precise – Kernel Prep
  • Saucy – Kernel Prep
  • Trusty – Kernel Prep

    Current opened tracking bugs details:

  • http://people.canonical.com/~kernel/reports/kernel-sru-workflow.html

    For SRUs, SRU report is a good source of information:

  • http://people.canonical.com/~kernel/reports/sru-report.html

    Schedule:

    14.04.1 cycle: 29-Jun through 07-Aug
    ====================================================================
    27-Jun Last day for kernel commits for this cycle
    29-Jun – 05-Jul Kernel prep week.
    06-Jul – 12-Jul Bug verification & Regression testing.
    13-Jul – 19-Jul Regression testing & Release to -updates.
    20-Jul – 24-Jul Release prep
    24-Jul 14.04.1 Release [1]
    07-Aug 12.04.5 Release [2]

    [1] This will be the very last kernels for lts-backport-quantal, lts-backport-raring,
    and lts-backport-saucy.

    [2] This will be the lts-backport-trusty kernel as the default in the precise point
    release iso.


Open Discussion or Questions? Raise your hand to be recognized

No open discussion.

Jono Bacon: Getting Started in Community Management

Planet Ubuntu - Tue, 2014-07-01 16:31

If there is one question I get more than most, it is the proverbial:

How do I get started in community management?

While there are many tactical things to learn about building strong communities (which I cover in depth in The Art of Community), the main guidance I am keen to share is the importance of leadership.

Last night, while working in my hotel room, I got bored of writing up my thoughts in a blog post and just fired up my webcam instead:

If you want to get involved in community management, be sure to join the awesome community that is forming on the Community Leadership Forum and if possible, join us on the 18th and 19th July 2014 in Portland for the Community Leadership Summit.

Martin Pitt: deb, click, schroot, LXC, QEMU, phone, cloud: One autopkgtest to Rule Them All!

Planet Ubuntu - Tue, 2014-07-01 15:15

We currently use completely different methods and tools of building test beds and running tests for Debian vs. Click packages, for normal uploads vs. CI airline landings vs. upstream project merge proposal testing, and keep lots of knowledge about Click package test metadata external and not easily accessible/discoverable.

Today I released autopkgtest 3.0 (and 3.0.1 with a few minor updates) which is a major milestone in unifying how we run package tests both locally and in production CI. The goals of this are:

  • Keep all test metadata, such as test dependencies, commands to run the test etc., in the project/package source itself instead of external. We have had that for a long time for Debian packages with DEP-8 and debian/tests/control, but not yet for Ubuntu’s Click packages.
  • Use the same tools for Debian and Click packages to simplify what developers have to know about and to reduce the amount of test infrastructure code to maintain.
  • Use the exact same testbeds and test runners in production CI as what developers use locally, so that you can reproduce and investigate failures.
  • Re-use the existing autopkgtest capabilities for using various kinds of testbeds, and conversely, making all new testbed types immediately available to all package formats.
  • Stop putting tests into the Ubuntu archive as packages (such as mediaplayer-app-autopilot). This just adds packaging and archive space overhead and also makes updating tests a lot harder and taking longer than it should.

So, let’s dive into the new features!

New runner: adt-virt-ssh

We want to run tests on real hardware such as a laptop of a particular brand with a particular graphics card, or an Ubuntu phone. We also want to restructure our current CI machinery to run tests on a real OpenStack cloud and gradually get rid of our hand-maintained QA lab with its test machines. While these use cases seem rather different, they both have in common that there is an already existing machine which is pretty much only accessible with ssh. Once you have an ssh connection, they look pretty much the same, you just need different initial setup (like fiddling with adb, calling nova boot, etc.) to prepare them.

So the new adt-virt-ssh runner factorizes all the common bits such as communicating with adt-run, auto-detecting sudo availability, doing SSH connection sharing etc., and delegates the target specific bits to a “setup script”. E. g. we could specify --setup-script ssh-setup-nova or --setup-script ssh-setup-adb which would then get called with open at the appropriate time by adt-run; it calls the nova commands to create a VM, or run a few adb commands to install/start ssh and install the public key. Then autopkgtest does its thing, and eventually calls the script with cleanup again. The actual protocol is a bit more involved (see manpage), but that’s the general idea.

autopkgtest now ships readymade scripts for these two use cases. So you could e. g. run the libpng tests in a temporary cloud VM:

# if you don't have one, create it with "nova keypair-create"
$ nova keypair-list
[...]
| pitti | 9f:31:cf:78:50:4f:42:04:7a:87:d7:2a:75:5e:46:56 |
# find a suitable image
$ nova image-list
[...]
| ca2e362c-62c9-4c0d-82a6-5d6a37fcb251 | Ubuntu Server 14.04 LTS (amd64 20140607.1) - Partner Image | ACTIVE |
$ nova flavor-list
[...]
| 100 | standard.xsmall | 1024 | 10 | 10 | | 1 | 1.0 | N/A |
# now run the tests: please be patient, this takes a few mins!
$ adt-run libpng --setup-commands="apt-get update" --- ssh -s /usr/share/autopkgtest/ssh-setup/nova -- \
    -f standard.xsmall -i ca2e362c-62c9-4c0d-82a6-5d6a37fcb251 -k pitti
[...]
adt-run [16:23:16]: test build: - - - - - - - - - - results - - - - - - - - - -
build PASS
adt-run: @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ tests done.

Please see man adt-virt-ssh for details how to use it and how to write setup scripts. There is also a commented /usr/share/autopkgtest/ssh-setup/SKELETON template for writing your own for your use cases. You can also not use any setup script and just specify user and host name as options, but please remember that the ssh runner cannot clean up after itself, so never use this on important machines which you can’t reset/reinstall!

Test dependency installation without apt/root

Ubuntu phones with system images have a read-only file system where you can’t install test dependencies with apt. A similar case is using the “null” runner without root. When apt-get install is not available, autopkgtest now has a reduced fallback mode: it downloads the required test dependencies, unpacks them into a temporary directory, and runs the tests with $PATH, $PYTHONPATH, $GI_TYPELIB_PATH, etc. pointing to the unpacked temp dir. Of course this only works for packages which are relocatable in that way, i. e. libraries, Python modules, or command line tools; it will totally fail for things which look for config files, plugins etc. in hardcoded directory paths. But it’s good enough for the purposes of Click package testing such as installing autopilot, libautopilot-qt etc.

Click package support

autopkgtest now recognizes click source directories and *.click package arguments, and introduces a new test metadata specification syntax in a click package manifest. This is similar in spirit and capabilities to DEP-8 debian/tests/control, except that it’s using JSON:

"x-test": { "unit": "tests/unittests", "smoke": { "path": "tests/smoketest", "depends": ["shunit2", "moreutils"], "restrictions": ["allow-stderr"] }, "another": { "command": "echo hello > /tmp/world.txt" } }

For convenience, there is also some magic to make running autopilot tests particularly simple. E. g. our existing click packages usually specify something like

"x-test": { "autopilot": "ubuntu_calculator_app" }

which is enough to “do what I mean”, i. e. implicitly add the autopilot test depends and run autopilot with the specified test module name. You can specify your own dependencies and/or commands, and restrictions etc., of course.

So with this, and the previous support for non-apt test dependencies and the ssh runner, we can put all this together to run the tests for e. g. the Ubuntu calculator app on the phone:

$ bzr branch lp:ubuntu-calculator-app
# built straight from that branch; TODO: where is the official download URL?
$ wget http://people.canonical.com/~pitti/tmp/com.ubuntu.calculator_1.3.283_all.click
$ adt-run ubuntu-calculator-app/ com.ubuntu.calculator_1.3.283_all.click --- \
    ssh -s /usr/share/autopkgtest/ssh-setup/adb
[..]
Traceback (most recent call last):
  File "/tmp/adt-run.KfY5bG/tree/tests/autopilot/ubuntu_calculator_app/tests/test_simple_page.py", line 93, in test_divide_with_infinity_length_result_number
    self._assert_result("0.33333333")
  File "/tmp/adt-run.KfY5bG/tree/tests/autopilot/ubuntu_calculator_app/tests/test_simple_page.py", line 63, in _assert_result
    self.main_view.get_result, Eventually(Equals(expected_result)))
  File "/usr/lib/python3/dist-packages/testtools/testcase.py", line 406, in assertThat
    raise mismatch_error
testtools.matchers._impl.MismatchError: After 10.0 seconds test failed: '0.33333333' != '0.3'

Ran 33 tests in 295.586s

FAILED (failures=1)

Note that the current adb ssh setup script deals with some things like applying the autopilot click AppArmor hooks and disabling screen dimming, but it does not do the first-time setup (connecting to network, doing the gesture intro) and unlocking the screen. These are still on the TODO list, but I need to find out how to do these properly. Help appreciated!

Click app tests in schroot/containers

But, that’s not the only thing you can do! autopkgtest has all these other runners, so why not try and run them in a schroot or container? To emulate the environment of an Ubuntu Touch session I wrote a --setup-commands script:

adt-run --setup-commands /usr/share/autopkgtest/setup-commands/ubuntu-touch-session \
    ubuntu-calculator-app/ com.ubuntu.calculator_1.3.283_all.click --- schroot utopic

This will actually work in the sense of running (and succeeding) the autopilot tests, but it will fail due to a lot of libust[11345/11358]: Error: Error opening shm /lttng-ust-wait... warnings on stderr. I don’t know what these mean, just that I also see them on the phone itself occasionally.

I also wrote another setup-commands script which emulates “read-only apt”, so that you can test the “unpack only” fallback. So you could prepare a container with click and the App framework preinstalled (so that it doesn’t always take ages to install them), starting from a standard adt-build-lxc container:

$ sudo lxc-clone -o adt-utopic -n click
$ sudo lxc-start -n click
# run "sudo apt-get install click ubuntu-sdk-libs ubuntu-app-launch-tools" there
# then "sudo powerdown"
# current apparmor profile doesn't allow remounting something read-only
$ echo "lxc.aa_profile = unconfined" | sudo tee -a /var/lib/lxc/click/config

Now that container has enough stuff preinstalled to be reasonably fast to set up, and the remaining test dependencies (mostly autopilot) work fine with the unpack/$*_PATH fallback:

$ adt-run --setup-commands /usr/share/autopkgtest/setup-commands/ubuntu-touch-session \
    --setup-commands /usr/share/autopkgtest/setup-commands/ro-apt \
    ubuntu-calculator-app/ com.ubuntu.calculator_1.3.283_all.click \
    --- lxc -es click

This will successfully run all the tests, and provided you have apt-cacher-ng installed, it only takes a few seconds to set up. This might be a nice thing to do on merge proposals, if you don’t have an actual phone at hand, or don’t want to clutter it up.

autopkgtest 3.0.1 will be available in Utopic tomorrow (through autosyncs). If you can’t wait to try it out, download it from my people.c.c page ☺.

Feedback appreciated!

Jos&eacute; Antonio Rey: FOSSETCON in Florida – Coming Next September!

Planet Ubuntu - Tue, 2014-07-01 04:20

Next September, from the 11th to the 13th, FOSSETCON will be held in the Rosen Plaza, in Orlando, Florida.

FOSSETCON stands for Free and Open Source Software Expo and Technology Conference. Organized by the awesome Bryan Smith, it will have a variety of workshops, talks, certifications, and a HUGE Expo Hall, where even the Ubuntu community will be featured! If you want your company to be featured during the conference, this is the perfect place to apply to be a sponsor. On the other hand, if you want to see many awesome things with regards to the Free and Open Source world and it’s close to you, then it’s the right place for you! Plus, attendees get a special discount on their tickets for Universal Studios parks, with more deals coming ;)

If you are planning to be around make sure to visit the Ubuntu booth, there’s still plenty of time to plan your trip!


Ubuntu App Developer Blog: Core Apps Hack Days Starts Now

Planet Ubuntu - Mon, 2014-06-30 14:45

As previously blogged we’re inviting the community to hack on Core Apps and Community Apps this week.

All the details are in the post above, but here’s the executive summary:-

  • Hack days run from 30th June till 4th July
  • We’re hacking on the Core Apps Music, Calendar, Clock, Weather & Calculator
  • In addition we’re also hacking on community apps including Beru Ebook Reader, OSM Touch mapping software, and Trojita email client
  • Join us in the #ubuntu-app-devel IRC channel on freenode, and on the ubuntu-phone mailing list to get started
  • Get all the details from the hack days wiki page

As always we welcome new contributions during the Hack Days, but also beyond that.

Lubuntu Blog: LXQt now has “full” Qt5 support

Planet Ubuntu - Mon, 2014-06-30 11:17
Repost from LXDE Blog: After the first official public release 0.7, the LXQt team is working on making it better. Our recent focus is fixing existing bugs and migrating from Qt4 to Qt5, which is required if we want to support Wayland. Now we have something to show. The latest source code in our git repository can be compiled with Qt5 (by just passing the -DUSE_QT5=ON flag to cmake). Building with
