Today I am working from Berlin (visiting Daniel Holbach) and took only the Chromebook with me to check how well it works as a laptop replacement for me.
The first issues appeared within minutes, and they were about the keyboard, or rather the keys that are missing from it. Xfce Terminal (my main tool) switches between tabs with Ctrl-PgUp/PgDn, but I lack those keys. The good news is that I can edit GTK shortcuts, but removing one is possible only with the Delete key. And guess what: the Chromebook lacks that key as well ;D So I ended up using some crazy Emacs-like shortcuts (Ctrl-LAlt-Shift-something).
A good thing is the support for 5GHz WiFi. I have to consider such a change at home and provide not only a 2.4GHz but also a 5GHz network (there are around twenty 802.11g networks around me at home).
A terrible issue is power plug detection. I took the Chromebook out of my backpack, booted it and got a “97% charged, AC connected” message while working on battery. That is a serious problem, as no one likes random shutdowns just because the battery went flat.
So there are a few things to do:
- better keymap
- fixed power state detection
And then I can go to Hong Kong (for Linaro Connect Asia) with only the Chromebook.
A nice e-mail I received about KDE SC 4.10...
Wasn't sure where to send this email but wanted to send a huge thank you
to the Kubuntu team. I have no idea what happened in 4.10 but my God
everything is fast!! Everything loads so quickly and so stable so far.
This is by far the best release I have ever used for a linux distribution.
I don't see windows 7 being used in the coming weeks.
Please forward this to those who would really appreciate it!! Thank you
again and please keep up the amazing work!!
A number of people have asked about the availability of OpenStack 2012.2.1 in Ubuntu 12.10 and the Ubuntu Cloud Archive for Folsom; well, it's finally out!
Suffice to say it took longer than expected, so we are making some improvements to the way we manage these micro-releases going forward, which should streamline the process for 2012.2.3.

Cloud Archive Version Tracker
Grizzly g2 is currently working its way into the Ubuntu Cloud Archive (it's already in Ubuntu Raring) and should finish landing into the updates pocket next week.

News from the CI Lab
This highlights the value of the integration and system testing that we do in the Ubuntu OpenStack CI lab (see my previous post for details on the lab). Identifying regressions was high on the list of initial objectives we agreed for this function!
Focus at the moment is on enabling testing of Grizzly on Raring (its already up and running for Precise) and working on an approach to testing the OpenStack Charm HA work currently in-flight within the team. In full this will require upwards of 30 servers to test so we are working on a charm that deploys Ubuntu Juju and MAAS (Metal-as-a-Service) on a single, chunky server, allowing for physical-server-like testing of OpenStack in KVM. For some reason seeing 50 KVM instances running on a single server is somewhat satisfying!
This work will also be re-used for more regular, scheduled testing outside of the normal build-deploy-test pipeline for scale-out services such as Ceph and Swift – more to follow on this…
Ceilometer has also been added to the lab; at the moment we are build testing and publishing packages in the Grizzly Trunk PPA; Yolanda is working on a charm to deploy Ceilometer.

Ceph LTS Bobtail
The next Ceph LTS release (Bobtail) is now available in Ubuntu Raring and the Ubuntu Cloud Archive for Grizzly.
One of the key highlights for this release is the support for Keystone authentication and authorization in the Ceph RADOS Gateway.
The Ceph RADOS Gateway provides multi-tenant, highly scalable object storage through Swift and S3 RESTful interfaces.
Integration of the Swift protocol with Keystone rounds out the storage story that Ceph offers when used with OpenStack.
Ceph can fulfil ALL storage requirements in an OpenStack deployment; it's integrated with Cinder and Nova for block storage and with Glance for image storage, and can now directly provide integrated, Swift-compatible, multi-tenant object storage.
Juju charm updates to support Keystone integration with Ceph RADOS Gateway are in the Ceph charms in the charm store.
At the last UDS we talked quite a bit about LoCo teams during the Leadership Mini Summit. One interesting point was that many seemed to have the impression that events have to be big, and that everything has to follow an established protocol or a rigid process. That's not the case.
I’m sure my friend Jorge Castro would agree with me if I told you to JFDI. The result of not doing things is that things will not get done. Setting up an event is sometimes just a matter of sending a mail to the team and asking everyone to come to a certain place at a certain date and time. Another point discussed was the number of people. Seriously, if it’s just two of you who hang out and make Ubuntu better or just have a good time together, that’s so much better than not meeting at all.
The reason I write all of this is that we're getting closer to Ubuntu Global Jam again, and some of you might be considering setting up an event and adding it to the LoCo Team Portal but might still be a bit unsure. There's really no need to be.
It’s very very likely you don’t need a huge venue with lots of bells and whistles, maybe just meeting in a coffee shop will be good enough? A room in your local university? Or invite people to your place? Just somewhere with internet might be good enough. You might get to know some new local team members and it’s all about having a good time.
I recently read an interesting book about time and how we perceive it, based on recent neuroscience: Time Warped: Unlocking the Mysteries of Time Perception, by Claudia Hammond. The section on planning projects seemed applicable to both mentors and students. Of course, for students, this is the time to get involved with the KDE community, figure out what projects look interesting, and start learning the development process.
The crucial resource is the Handbook, written by participants. There is a Mentor's Guide and Student's Guide, also available in ebook format.
In addition to the Handbook, Time Warped offers some valuable insight into the Planning Fallacy, which is the tendency to believe that a job will take less time than it eventually does. The admins and mentors work with students to create a realistic and detailed timeline, which is one of the important ways to outwit this human tendency. Hammond suggests that you consider your plan and then compare its parts to projects you have done in the past, to fine-tune your time frame to completion. Hammond also warns against the common belief that we will have more time in the future than we have now. This caution is very important for mentors too, and it is one reason KDE always tries to have at least one back-up mentor for each accepted project, as well as the teams for general help.
Finally, since other people make more accurate judgements about our time, Hammond suggests describing the task to a friend and asking them to guess how long it will take you. Those who have mentored before can help new mentors with this, and students can ask those who have seen their previous programming work to help judge the prospective plan.
I look forward to seeing KDE folks, experienced and brand-new, getting to know one another, and digging into the code.
Wow… I just realized how long it's been since I did a blog post, so apologies for that first off. FWIW, it's not that I haven't had any good things to say or write about, it's just that I haven't made the time to sit down and type them out… I need a blog thought transfer device or something. Anyway, with all the talk about Ubuntu doing a rolling release, I've been thinking about how that would affect Ubuntu Server releases and, more importantly, could Ubuntu Server roll as well? In answering this question, I think it comes down to two main points of consideration (beyond what the client flavors would already have to consider).
How Would This Affect Ubuntu Server Users?
We have a lot of anecdotal data and some survey evidence that most Ubuntu Server users mainly deploy the LTS. I doubt this surprises people, given the support life for an LTS Ubuntu Server release is 5 years, versus only 18 months for a non-LTS Ubuntu Server release. Your average sysadmin is extremely risk averse (for good reason), and thus wants to minimize any risk of unwanted change in his/her infrastructure. In fact, most production deployments don't even pull packages from the main archives; instead they mirror them internally to allow control over exactly what and when updates and fixes roll out to internal client and/or server machines. Using a server operating system that requires you to upgrade every 18 months, to continue getting fixes and security updates, just doesn't work in environments where the systems are expected to support 100s to 1000s of users for multiple years, often without significant downtime.

With that said, I think there are valid uses of non-LTS releases of Ubuntu Server, with most falling into two main categories: Pre-Production Test/Dev or Start-Ups, with the reasons actually being the same. The non-LTS version is perfect for those looking to roll out products or solutions intended to be production ready in the future. These releases provide users a mechanism to continually test out what their product/solution will eventually look like in the LTS as the versions of the software they depend upon are updated along the way. That is, they're not stuck having to develop against the old LTS and hope things don't change too much in two years, or use some “feeder” OS, where there's no guarantee the forked and backported enterprise version will behave the same or contain the same versions of the software they depend on.

In both of these scenarios, the non-LTS is used because it's fluid, and going to a rolling release only makes this easier… and a little better, I dare say. For one, if the release is rolling, there's no huge release-to-release jump during your test/dev cycle; you just continue to accept updates when ready. In my opinion, this is actually easier in terms of rolling back as well, in that you have fewer parts moving all at once to roll back if needed. The second thing is that the process for getting a fix from upstream or a new feature is much less involved, because there's no SRU patch backporting, just the new release with the new stuff. Now admittedly, this also means the possibility of new bugs and/or regressions; however, given these versions (or ones built subsequently) are destined to be in the next LTS anyway, the faster the bugs are found and sorted, the better for the user in the long term. If your solution can't handle the churn, you either don't upgrade and accept the security risk, or you smoke test your solution with the new package versions in a duplicate environment. In either case, you're not running in production, so in theory a bug or regression shouldn't be the end of the world. It's also worth calling out that from a quality and support perspective, a rolling Ubuntu Server means Ubuntu developers and Canonical engineering staff who normally spend a lot of time doing SRUs on non-LTS Ubuntu Server releases can now focus efforts on the Ubuntu Server LTS release… where we have the majority of users and deployments.
How Would This Affect Juju Users?
In terms of Juju, a move to a rolling release tremendously simplifies some things and mildly complicates others. From the point of view of a charm author, this makes life much easier. Instead of writing a charm to use a package in one release, then continuously duplicating and updating it to work with subsequent releases that have newer packages, you only maintain two charms… a maximum of three if you want to include options for running code from upstream. The idea is that every charm in the collection would default to using packages from the latest Ubuntu Server LTS, with options to use the packages in the rolling release, and possibly an extra option to pull and deploy directly from upstream. We already do some of this now, but it varies from charm to charm… a rolling server policy would demand we make this mandatory for all accepted charms. The only place where the rules would be slightly different is in the Ubuntu Cloud Archive, where the packages don't roll; instead, new archive pockets are created for each OpenStack release. From a user's perspective, a rolling release is good, yet it is also complicated unless we help… and we will. In terms of the good, users will know every charmed service works and will only have to decide between LTS and rolling as the deployment OS, whereas now they have to choose a release and then hope the charm has been updated to support that release. The reduction in charm-to-release complexity also allows us to do better testing of charms, because we don't have to test every charm against oneiric, precise, raring, “s”, etc., just precise and the rolling release… giving us more time to improve and deepen our test suites.
With all that said, a move to a rolling Ubuntu Server release for non-LTS also adds the danger of inconsistent package versions for a single service in a deployment. For example, you could deploy a solution with 5 instances of wordpress 3.5.1 running, we update the archive to wordpress 3.6, and then you decide to add 3 more units, thus giving you a wordpress service of mixed versions… this is bad. So how do we solve this? It's actually not that hard. First, we would need to ensure that Juju never automatically adds units to an existing service if there's a mismatch in the version of binaries between the currently deployed instances and the new ones about to be deployed. If Juju detected the binary inconsistency, it would need to return an error, optionally asking the user if he/she wanted it to upgrade the currently running instances to match the new binary versions. We could also add some sort of --I-know-what-I-am-doing option to give freedom to those users who don't care about having version mismatches. Secondly, we should ensure an existing deployment can always grow itself without requiring a service upgrade. My current thinking around this is that we'd create a package caching charm that can be deployed against any existing Juju deployment. The idea is much like squid-deb-proxy (except the cache never expires or renews), where the caching instance acts as the archive mirror for the other instances in the deployment, providing the same cached packages deployed in that given solution. The package cache should be run in a separate instance with persistent storage, so that even if the service completely goes down, it can be restored with the same packages in the cache.
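To make the first idea a bit more concrete, here is a minimal sketch of the kind of check described above. It is purely illustrative, not actual Juju code; the function name and the data it works on are assumptions.

    # Purely illustrative sketch, not real Juju code. Assumes we can query the
    # package version deployed on each existing unit of a service.
    def check_add_units(existing_unit_versions, candidate_version, force=False):
        """Refuse to add units whose package version differs from running units.

        existing_unit_versions: dict mapping unit name -> package version
        candidate_version: version the new units would be deployed with
        force: skip the check (the "--I-know-what-I-am-doing" option)
        """
        deployed = set(existing_unit_versions.values())
        if force or not deployed or deployed == {candidate_version}:
            return True
        raise RuntimeError(
            "Version mismatch: existing units run %s but new units would get %s; "
            "upgrade the service first or pass force=True"
            % (sorted(deployed), candidate_version))

The same check could also drive the optional "upgrade the running units first" path mentioned above.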
So…Can Ubuntu Server Roll?
I honestly think we can and should consider it, but I’d also like to hear the concerns of folks who think we shouldn’t.
Of course, as sadly happens all too often, the scope creeps…

Additional pain points
Zope’s test runner runs things that are not tests, but which users want to know about – ‘layers’. At the moment these are reported as individual tests, but this is problematic in a couple of ways. Firstly, the same ‘test’ runs on multiple backend runners, so timing and stats get more complex. Secondly, if a layer fails to setup or teardown, tools like testrepository that have watched the stream will think a test failed, and on the next run try to explicitly run that ‘test’ – but that test doesn’t really exist, so it won’t run [unless an actual test that needs the layer is being run].
OpenStack uses Python's coverage module to gather coverage statistics during test runs. Each worker running tests needs to gather and return such statistics. The current subunit protocol has no space to hand this around without it pretending to be a test [see a pattern here?]. And that has the same negative side effect – test runners like testrepository will try to run that ‘test’. While testrepository doesn't want to know about coverage itself, it would be nice to be able to pass everything around and have a local hook handle the aggregation of that data.
The way TAP is reflected into subunit today is to mangle each TAP ‘test’ into a subunit ‘test’, but for full benefits subunit tests have a higher bar – they are individually addressable and runnable. So a TAP test script is much more equivalent to a subunit test. A similar concept is landing in Python's unittest soon – ‘subtests’ – which will give very lightweight additional assertions within a larger test concept. Many C test runners that emit individual tests as simple assertions have this property as well – there may be 5 or 10 executables each with dozens of assertions, but only the executables are individually addressable – there is no way to run just one assertion from an executable as a ‘test’. It would be nice to avoid the friction that currently exists when dealing with that situation.

Minimum requirements to support these
Layers can be supported via timestamped stdout output, or fake tests. Neither is compelling, as the former requires special casing in subunit processors to data mine it, and the latter confuses test runners. A way to record something that is structured like a test (has an id – the layer, an outcome – in progress / ok / failed, and attachment data for showing failure details) but isn’t a test would allow the data to flow around without causing confusion in the system.
TAP support could change to just show the entire output as progress on one test and then fail or not at the end. This would result in a cognitive mismatch for folk from the TAP world, as TAP runners report each assertion as a ‘test’, and this would be hidden from subunit. Having a way to record something that is associated with an actual test, and has a name, status, attachment content for the TAP comments field – that would let subunit processors report both the addressable tests (each TAP script) and the individual items, but know that only the overall scripts are runnable.
Python subtests could use a unique test for each subtest, but that has the same issue as layers. Python will ensure a top-level test errors if a subtest errors, so strictly speaking we probably don't need an associated-with concept, but we do need to be able to say that a test-like thing happened that isn't actually addressable.
Coverage information could be about a single test, or even a subtest, or it could be about the entire work undertaken by the test process. I don't think we need a single standardised format for coverage data (though that might be an excellent project for someone to undertake). It is also possible to overthink things. We have the idea of arbitrary attachments for tests. Perhaps arbitrary attachments outside of test scope would be better than specifying stdout/stderr as specific things. On the other hand, stdout and stderr are well known things.

Proposal version 2
A packetised length prefixed binary protocol, with each packet containing a small signature, length, routing code, a binary timestamp in UTC, a set of UTF8 tags (active only, no negative tags), a content tag – one of (estimate + number, stdin, stdout, stderr, file, test), test-id, runnable, test-status (one of exists/inprogress/xfail/xsuccess/success/fail/skip), an attachment name, mime type, a last-block marker and a block of bytes.
The stdin/stdout/stderr content tags are gone, replaced with file. The names stdin, stdout and stderr can be placed in the attachment name field to signal those well known files, and any other files that the test process wants to hand over can simply be embedded. Processors that don't expect them can just pass them on.
Runnable is a boolean, indicating whether this packet is describing a test that can be executed deliberately (vs an individual TAP assertion, Python sub-test etc). This permits describing things like Zope layers, which are top level test-like things (they start, stop and can error) though they cannot be run… and it doesn't explicitly model the setup/teardown aspect that they have. Should we do that?
Testid is for identifying tests. With the runnable flag to indicate whether a test really is a test, subtests can just be namespaced by the generator – reporters can choose whether to be naive and report every ‘test’, or whether to use a simple string prefix-with-non-character-separator to infer child elements.

Impact on Python API
If we change the API to:

    class TestInfo(object):
        id = unicode
        status = ('exists', 'inprogress', 'xfail', 'xsuccess', 'success', 'fail', 'error', 'skip')
        runnable = boolean

    class StreamingResult(object):
        def startTestRun(self):
            pass
        def stopTestRun(self):
            pass
        def estimate(self, count, route_code=None, timestamp=None):
            pass
        def file(self, name, bytes, eof=False, mime=None, test_info=None, route_code=None, timestamp=None):
            """Inform the result about the contents of an attachment."""
        def status(self, test_info, route_code=None, timestamp=None):
            """Inform the result about a test status with no attached data."""
This would permit the full semantics of a subunit stream to be represented I think, while being a narrow interface that should be easy to implement.
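As a rough illustration of how a runner might drive such a result object, here is a small, self-contained sketch. The RecordingResult class and the test id are hypothetical; only the method names and signatures mirror the interface above.

    from datetime import datetime

    # Hypothetical, throwaway implementation of the StreamingResult interface
    # sketched above; it simply records the calls it receives.
    class RecordingResult(object):
        def __init__(self):
            self.events = []
        def startTestRun(self):
            self.events.append(('startTestRun',))
        def stopTestRun(self):
            self.events.append(('stopTestRun',))
        def estimate(self, count, route_code=None, timestamp=None):
            self.events.append(('estimate', count, route_code))
        def file(self, name, bytes, eof=False, mime=None, test_info=None,
                 route_code=None, timestamp=None):
            self.events.append(('file', name, bytes, eof, mime,
                                test_info.id if test_info else None))
        def status(self, test_info, route_code=None, timestamp=None):
            self.events.append(('status', test_info.id, test_info.status))

    class TestInfo(object):  # mirrors the attribute sketch above
        pass

    result = RecordingResult()
    result.startTestRun()
    info = TestInfo()
    info.id = u'myproject.tests.test_foo.TestFoo.test_bar'  # hypothetical id
    info.runnable = True
    info.status = 'inprogress'
    result.status(info, timestamp=datetime.utcnow())
    result.file('test.log', b'connecting to server...\n', eof=True,
                mime='text/plain; charset=utf8', test_info=info)
    info.status = 'fail'
    result.status(info, timestamp=datetime.utcnow())
    result.stopTestRun()

A real implementation would serialise each call as a packet instead of appending to a list, but the shape of the conversation would be the same.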
Please provide feedback! I’ll probably start implementing this soon.
- MythTV 0.25 (2:0.25.2+fixes.20120802.46cab93-0ubuntu1)
- Starting with 12.04, the Mythbuntu team will only be doing LTS releases. See this page for more info.
- Enable MythTV and Mythbuntu Updates repositories directly from the Mythbuntu Control Centre without needing to install the mythbuntu-repos package
- This is the first release with the LTS HW enablement stack. It will support newer hardware than the old Mythbuntu 12.04 release. For more information see https://wiki.ubuntu.com/PrecisePangolin/ReleaseNotes/UbuntuDesktop#LTS_Hardware_Enablement_Stack
- Recent snapshot of the MythTV 0.25 release is included (see 0.25 Release Notes)
- Mythbuntu theme fixes
For more detailed feature information please visit us on launchpad.
We appreciate all comments and would love to hear what you think. Please make comments to our mailing list, on the forums (with a tag indicating that this is from 12.04 or precise), or in #ubuntu-mythtv. As always, if you encounter any issues with anything in this release, please file a bug using the Ubuntu bug tool (ubuntu-bug PACKAGENAME), which automatically collects logs and other important system information, or, if that is not possible, directly open a ticket on Launchpad (http://bugs.launchpad.net/mythbuntu/12.04/).
- If you are upgrading and have mythstream installed, please remove it before upgrading, as mythstream is no longer supported.
- If you have used Jamu in the past, you should run "mythmetadatalookup --refresh-all"
- If you are upgrading and want to use HTTP Live Streaming, you need to create a Streaming storage group
The ISO is available here
Edubuntu 12.04.2 LTS is the second Long Term Support (LTS) version of Edubuntu, as part of Edubuntu 12.04's 5-year support cycle.

Edubuntu's Second LTS Point Release
The Edubuntu team is proud to announce the release of Edubuntu 12.04.2. This is the second of four LTS point releases for this LTS lifecycle. The point release includes all the bug fixes and improvements that have been applied to Edubuntu 12.04 LTS since its release. It also includes updated hardware support and installer fixes. If you have an Edubuntu 12.04 LTS system and have applied all the available updates, then your system will already be on 12.04.2 LTS and there is no need to re-install. For new installations, installing from the updated media is recommended since it will be installable on more systems than before and will require far fewer updates than installing from the original 12.04 LTS media.
This release is the first to ship with the backported kernel and X stack. This should be mostly relevant to users of very recent hardware. Current users of Edubuntu 12.04 won't be automatically updated to this backported stack; you can, however, manually install the packages if you want them.
- Information on where to download the Edubuntu 12.04.2 LTS media is available from the Downloads page.
- We do not ship free Edubuntu discs at this time; however, there are 3rd party distributors available who ship discs at reasonable prices, listed on the Edubuntu Marketplace
Although Edubuntu 10.04 systems will ask to upgrade to 12.04.2, it's not an officially supported upgrade path. Testing, however, indicated that this usually works if you're ready to make some minor adjustments afterwards.
To ensure that the Edubuntu 12.04 LTS series will continue to work on the latest hardware, as well as keeping quality high right out of the box, we will release another two point releases before the next long term support release is made available in 2014. More information is available on the release schedule page on the Ubuntu wiki.
The release notes are available from the Ubuntu Wiki.
Thanks for your support and interest in Edubuntu!
Welcome to the Raring Ringtail Alpha 2 release, which will in time become the 13.04 release.
This alpha features images for Kubuntu and Ubuntu Cloud.
At the end of the 12.10 development cycle, the Ubuntu flavour decided that it would reduce the number of milestone images going forward and the focus would concentrate on daily quality and fortnightly testing rounds known as cadence testing. Based on that change, the Ubuntu product itself will not have an Alpha 2 release. Its first milestone release will be the FinalBetaRelease on the 28th of March 2013. Other Ubuntu flavours have the option to release using the usual milestone schedule.
Pre-releases of Raring Ringtail are *not* encouraged for anyone needing a stable system or anyone who is not comfortable running into occasional, even frequent breakage. They are, however, recommended for Ubuntu developers and those who want to help in testing, reporting, and fixing bugs as we work towards getting this release ready.
Alpha 2 is the second in a series of milestone images that will be released throughout the Raring development cycle, in addition to our daily development images. The Alpha images are known to be reasonably free of showstopper CD build or installer bugs, while representing a very recent snapshot of Raring. You can download them here:
- http://cdimage.ubuntu.com/kubuntu/releases/raring/alpha-2/ (Kubuntu)
- http://cloud-images.ubuntu.com/releases/raring/alpha-2/ (Ubuntu Server Cloud)
Alpha 2 includes a number of software updates that are ready for wider testing. This is an early set of images, so you should expect some bugs. For a more detailed description of the changes in the Alpha 2 release and the known bugs (which can save you the effort of reporting a duplicate bug, or help you find proven workarounds), please see:
If you’re interested in following the changes as we further develop Raring, we suggest that you subscribe initially to the ubuntu-devel-announce list. This is a low-traffic list (a few posts a week) carrying announcements of approved specifications, policy changes, alpha releases, and other interesting events.
Originally posted to the ubuntu-devel-announce mailing list on Thu Feb 14 18:35:05 UTC 2013 by Jonathan Riddell
The Ubuntu team is pleased to announce the release of Ubuntu 12.04.2 LTS (Long-Term Support) for its Desktop, Server, Cloud, and Core products, as well as other flavours of Ubuntu with long-term support.
To help support a broader range of hardware, the 12.04.2 release adds an updated kernel and X stack for new installations on x86 architectures, and matches the ability of 12.10 to install on systems using UEFI firmware with Secure Boot enabled.
As usual, this point release includes many updates, and updated installation media has been provided so that fewer updates will need to be downloaded after installation. These include security updates and corrections for other high-impact bugs, with a focus on maintaining stability and compatibility with Ubuntu 12.04 LTS.
Kubuntu 12.04.2 LTS, Edubuntu 12.04.2 LTS, Xubuntu 12.04.2 LTS, Mythbuntu 12.04.2 LTS, and Ubuntu Studio 12.04.2 LTS are also now available. For some of these, more details can be found in their announcements:
- Kubuntu: http://www.kubuntu.org/news/12.04.2-release
- Edubuntu: http://www.edubuntu.org/news/12.04.2-release
- Mythbuntu: http://www.mythbuntu.org/home/news/12042released
- Ubuntu Studio: http://ubuntustudio.org/2013/02/ubuntu-studio-12-04-2-lts-precise-pangolin-release-notes/
In order to download Ubuntu 12.04.2, visit:
Users of Ubuntu 10.04 and 11.10 will be offered an automatic upgrade to 12.04.2 via Update Manager. For further information about upgrading, see:
As always, upgrades to the latest version of Ubuntu are entirely free of charge.
We recommend that all users read the 12.04.2 release notes, which document caveats and workarounds for known issues, as well as more in-depth notes on the release itself. They are available at:
If you have a question, or if you think you may have found a bug but aren’t sure, you can try asking in any of the following places:
- #ubuntu on irc.freenode.net
If you would like to help shape Ubuntu, take a look at the list of ways you can participate at:
Ubuntu is a full-featured Linux distribution for desktops, laptops, clouds and servers, with a fast and easy installation and regular releases. A tightly-integrated selection of excellent applications is included, and an incredible variety of add-on software is just a few clicks away.
Professional services including support are available from Canonical and hundreds of other companies around the world. For more information about support, visit:
You can learn more about Ubuntu and about this release on our website listed below:
To sign up for future Ubuntu announcements, please subscribe to Ubuntu’s very low volume announcement list at:
Originally posted to the ubuntu-announce mailing list on Thu Feb 14 20:02:50 UTC 2013 by Colin Watson
phillw, gema, noskcaj, smartboyhw, primes2h, letozaf, sergiomeneses
Thank you as well to pleia2, JoseeAntonioR and the other classroom team members who helped us schedule and run the sessions.
You all rock!
We're serious about wanting to make sure you are able to contribute and join the community as easily as possible. So for the last couple months, as a team we've been writing tutorials, giving classroom sessions, and hosting testing events. We really do want you as part of the team. Check out some of the resources available to you and consider becoming a part of the team!
- Contributing Cadence Testing results
- Contributing Image Testing results
- Contributing Call for Testing results
- Setting up Launchpad
- Contributing Manual Tests
- Contributing Autopkg Tests
- Contributing Autopilot Tests
It’s this time of the year again, where we express our love for what and whom we like:
This year I want to express my personal thanks to
- the kmail developers for the awesome work during the last year,
- the KDE sysadmins for being so fast and efficient,
- the Amarok team for their love and dedication,
- the Sconcho developer for being so responsive and making my knitting pattern editing so easy,
- the whole FSFE team for their hard work on protecting our Freedom!
Subunit is seven and a half years old now – Conrad Parker and I first sketched it up at a CodeCon – camping and coding, a brilliant combination – in mid 2005.

    revno: 1
    committer: Robert Collins <email@example.com>
    timestamp: Sat 2005-08-27 15:01:20 +1000
    message: design up a protocol with kfish
It has proved remarkably resilient as a protocol – the basic nature hasn't changed at all, even though we've added tags, timestamps, and support for attachments of arbitrary size.
However a growing number of irritations have been building up with it. I think it is time to design another iteration of the protocol, one that will retain the positive qualities of the current protocol, while helping it become suitable for the next 7 years. Ideally we can keep compatibility and make it possible for a single stream to be represented in any format.

Existing design
The existing design is a mostly human readable line orientated protocol that can be sniffed out from the regular output of ‘make’ or other build systems. Binary attachments are done using HTTP chunking, and the parser has to maintain state about the current test, tags, timing data and test progression [a simple stack of progress counters]. How to arrange subunit output is undefined, how to select tests to run is undefined.
This makes writing a parser quite easy, and the tagging and timestamp facility allow multiplexing streams from two or more concurrent test runs into one with good fidelity – but also requires that state be buffered until the end of a test, as two tests cannot be executing at once.

Dealing with debuggers
The initial protocol was intended to support dropping into a debugger – just pass each line read through to stdout, and connect stdin to the test process, and voila, you have a working debugger connection. This works, but the current line based parsers make using it tedious – the line buffered nature of it makes feedback on what has been typed fiddly, and stdout tends to be buffered, leading to an inability to see print statements and the like. All in principle fixable, right?
When running two or more test processes, which test process should stdin be connected to? What if two or more drop into a debugger at once? What is being typed to which process is more luck than anything else.
We’ve added some idioms in testrepository that control test execution by a similar but different format – one test per line to list tests, and have runners permit listing and selecting by a list. This works well, but the inconsistency with subunit itself is a little annoying – you need two parsers, and two output formats.

Good points
The current protocol is extremely easy to implement for emitters, and the arbitrary attachments and tagging features have worked extremely well. There is a comprehensive Python parser which maps everything into Python unittest API calls (an extended version of the standard, with good backwards compatibility).

Pain points
The debugging support was a total failure, and the way the parser depraminates its toys when a test process corrupts an outcome line is extremely frustrating (other tests execute, but the parser sees them as non-subunit chatter and passes the lines on through stdout).

Dealing with concurrency
The original design didn't cater for concurrency. There are several concurrency issues – the corruption issue (see below for more detail) and multiplexing. Consider two levels of nested concurrency: a supervisor process such as testrepository starts 2 (or more, but 2 is sufficient to reason about the issue) subsidiary worker processes (I1 and I2), each of which starts 2 subsidiary processes of their own (W1, W2, W3, W4). Each of the 4 leaf processes is outputting subunit, which gets multiplexed in the 2 intermediary processes, and then again in the supervisor. Why would there be two layers? A concrete example is using testrepository to coordinate test runs on multiple machines at once, with each machine running a local testrepository to broker tests amongst the local CPUs. This could be done with 4 separate ssh sessions and no intermediaries, but that only removes a fraction of the issues. What issues?
Well, consider some stdout chatter that W1 outputs. That will get passed to I1 and from there to the supervisor and captured. But there is nothing marking the chatter as belonging to W1: there is no way to tell where it came from. If W1 happened to fail, and there was a diagnostic message printed, we’ve lost information. Or at best muddled it all up.
Secondly, buffering – imagine that a test on W1 hangs. I1 will know that W1 is running a test, but has no way to tell the supervisor (and thus the user) that this is the case, without writing to stdout [and causing a *lot* of noise if that happens a lot]. We could have I1 write to stdout only if W1's test is taking more than 5 seconds or something – but this is a workaround for a limitation of the protocol. Adding to the confusion, the clock on W1 and W3 may be very skewed, so timestamps for everything have to be carefully synchronised by the multiplexer.
Thirdly, scheduling – if W1/W2 are on a faster machine than W3/W4, then a partition of equal-timed tests onto each machine will leave one machine idle before the other finishes. It would be nice to be able to pass tests to run to the faster machine when it goes idle, rather than having to start a new runner each time.
Lastly, what to do when W1 and W2 both wait for user input on stdin (e.g. passphrases, debugger input, $other). Naively connecting stdin to all processes doesn’t work well. A GUI supervisor could connect a separate fd to each of I1 and I2, but that doesn’t help when it is W1 and W2 reading from stdin.
So the additional requirements over baseline subunit are:
- make it possible for stdout and stderr output to be captured from W1 and routed through I1 to the supervisor without losing its origin. It might be chatter from a noisy test, or it might be build output. Either way, the user probably will benefit if we can capture it and show it to them later when they review the test run. The supervisor should probably show it immediately as well – the protocol doesn’t need to care about that, just make it possible.
- make it possible to pass information about tests that have not completed through one subunit stream while they are still incomplete.
- make it possible (but optional) to pass tests to run to a running process that accepts subunit.
- make it possible to route stdin to a specific currently running process like W1. This and point 3 suggest that we need a bidirectional protocol rather than the solely unidirectional protocol we have today. I don't know of a reliable, portable way to tell when a process is seeking such input, so that will be up to the user I think. (e.g. printing (pdb) to stdout might be a sufficiently good indicator.)
Consider the following subunit fragment:

    test: foo
    starting serversuccess:foo
This is a classic example of corruption: the test ‘foo’ started a server and helpfully wrote to stdout explaining that it did that, but missed the newline. As a result the success message for the test wasn't printed on a line of its own, and the subunit parser will believe that foo never completed. Every subsequent test is then ignored. This is usually easy to identify and fix, but it's a head-scratcher when it happens. Another way it can happen is when a build tool like ‘make’ runs tests in parallel, and they output subunit onto the same stdout file handle. A third way is when a build tool like make runs two separate test scripts serially, and the first one starts a test but errors hard and doesn't finish it. That looks like:

    test: foo
    test: bar
    success: bar
One way that this sort of corruption can be mitigated is to put subunit on its own file descriptor, but this has several caveats: it is harder to tunnel through things like ssh, and it doesn't solve the failing test script case.
I think it is unreasonable to require a protocol where arbitrary interleaving of bytes between different test runner streams will work – so the ‘make -j2’ case can be ignored at the wire level – though we should create a simple way to safely mux the output from such tests when they execute.
The root of the issue is that a dropped update leaves bad state in the parser and it never recovers. So some way to recover, or less state to carry in the parser, would neatly solve things. I favour reducing parser state as that should shift stateful complexity onto end nodes / complex processors, rather than being carried by every node in the transmission path.

Dependencies
Various suggestions have been made – JSON, Protobufs, etc…
A key design goal of the first subunit was a low barrier to entry. We keep that by being backward compatible, but being easy to work with for the new revision is also a worthy goal.

High level proposal
A packetised length prefixed binary protocol, with each packet containing a small signature, length, routing code, a binary timestamp in UTC, a set of UTF8 tags (active only, no negative tags), a content tag – one of (estimate + number, stdin, stdout, stderr, test- + test id), test status (one of exists/inprogress/xfail/xsuccess/success/fail/skip), an attachment name, mime type, a last-block marker and a block of bytes.
The content tags:
- estimate – the stream is reporting how many tests are expected to run. It affects everything with the same routing code only, and replaces (doesn't adjust) any current estimate for that routing code. An estimate packet of 0 can be used to say that a routing target has shut down and cannot run more tests. Routing codes can be used by a subunit aware runner to separate out separate threads in a single process, or even just separate ‘TestSuite’ objects within a single test run (though doing so means that they will need to process subunit and strip packets on stdin). This supersedes the stack of progress indicators that current subunit has. Estimates cannot have test status or attachments.
- stdin/stdout/stderr: a packet of data for one of these streams. The routing code identifies the test process that the data came from/should go to in the tree of test workers. These packets cannot have test status but should have a non-empty attachment block.
- test- + testid: a packet of data for a single test. test status may be included, as may attachment name, mime type, last-block and binary data.
Test status values are pretty ordinary. Exists is used to indicate a test that can be run when listing tests, and inprogress is used to report a test that has started but not necessarily completed.
Attachment names must be unique per routing code + testid.
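To make the shape of such a packet more tangible, here is a toy serialiser in Python. The byte layout (signature value, field order, length prefixes) is entirely made up for illustration, since the post deliberately leaves the exact wire format open; tags and a few other fields are omitted for brevity.

    import struct
    import time

    SIGNATURE = b'\xb3'  # hypothetical one-byte packet signature
    STATUS = {'exists': 0, 'inprogress': 1, 'success': 2, 'fail': 3,
              'skip': 4, 'xfail': 5, 'xsuccess': 6}

    def pack_field(data):
        """Length-prefix a variable-sized field (an assumed convention)."""
        return struct.pack('>H', len(data)) + data

    def make_test_packet(route_code, test_id, status,
                         attachment=b'', name=b'', mime=b''):
        """Build one 'test' packet with an optional attachment block."""
        body = b''.join([
            pack_field(route_code.encode('utf8')),
            struct.pack('>d', time.time()),          # UTC timestamp
            pack_field(test_id.encode('utf8')),
            struct.pack('B', STATUS[status]),
            pack_field(name),                        # attachment name
            pack_field(mime),                        # attachment mime type
            pack_field(attachment),                  # attachment bytes (last block)
        ])
        # signature + total body length + body
        return SIGNATURE + struct.pack('>I', len(body)) + body

    packet = make_test_packet('1/2', 'test_foo', 'inprogress')

The point is simply that every packet is self-delimiting: a reader needs only the signature and the length prefix to skip or forward it without understanding its contents.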
So how does this line up?

Interleaving and recovery
We could dispense with interleaving and say the streams are wholly binary, or we can say that packets can start either after a \n or directly after another packet. If we say that binary-only is the approach to take, it would be possible to write a filter that would apply the newline heuristic (or even look for packet headers at every byte offset). I think mandating that packets start adjacent to another packet or after a \n is a small concession to make, and will avoid tools like testrepository forcing users to always configure a heuristic filter. Non-subunit content can be wrapped in subunit for forwarding (the I1 in the W1->I1->Supervisor chain would do the wrapping). This won't eliminate corruption but it will localise it and permit the stream to recover: the test that was corrupted will show up as incomplete, or with incomplete attachment data.

Listing
Test listing would emit many small non-timestamped packets. It may be useful to have a wrapper packet for bulk amounts of fine-grained data like listing, or for multiplexers with many input streams that will often have multiple data packets available to write at once.

Selecting tests to run
Same as for listing – while passing regexes down to the test runner to select groups of tests is a valid use case, that's not something subunit needs to worry about: if the selection is not the result of the supervisor selecting by test id, then it is known at the start of the test run and can just be a command line parameter to the backend; subunit is relevant for passing instructions to a runner mid-execution. Because the supervisor cannot just hand out some tests and wait for the thing it ran to report that it can accept incremental tests on stdin, supervisor processes will need to be informed about that out of band.

Debugging
Debugging is straightforward. The parser can read the first 4 or so bytes of a packet one at a time to determine if it is a packet or a line of stdout/stderr, and then either read to the end of the line, or the binary length of the packet. So, we combine a few things: non-subunit output should be wrapped and presented to the user. Subunit that is being multiplexed and forwarded should prepend a routing code to the packet (e.g. I1 would append '1' or '2' to select which of W1/W2 the content came from, and then forward the packet; S would append '1' or '2' to indicate I1/I2 – the routing code is a path through the tree of forwarding processes). The UI the user is using needs to supply some means to switch where stdin is attached. And stdin input should be routed via stdin packets. When there is no routing code left, the packet should be entirely unwrapped and presented as raw bytes to the process in question.

Multiplexing
Very straightforward – unwrap the outer layer of the packet, add or adjust the routing code, and serialise a header + adjusted length + the rest of the packet as-is. No buffering is needed, so the supervisor can show in-progress tests (and how long they have been running for).
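As a rough sketch of that rewrite step, using the same made-up byte layout as the toy serialiser earlier in this post (the signature value, field sizes and the '/' path separator are all assumptions):

    import struct

    def reroute(packet, hop):
        """Prepend a hop ('1', '2', ...) to a packet's routing code and re-emit it."""
        assert packet[0:1] == b'\xb3'                    # hypothetical signature
        (body_len,) = struct.unpack('>I', packet[1:5])
        (route_len,) = struct.unpack('>H', packet[5:7])
        route = packet[7:7 + route_len]
        rest = packet[7 + route_len:5 + body_len]        # everything after the route field
        new_route = hop.encode('utf8') + (b'/' + route if route else b'')
        body = struct.pack('>H', len(new_route)) + new_route + rest
        return packet[0:1] + struct.pack('>I', len(body)) + body

The multiplexer never has to parse or buffer the payload; it only rewrites the routing field and the length, which is what keeps in-progress tests visible to the supervisor.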
Parsing / representation in Python or other languages

The parser should be very simple to write. Once parsed, this will be fundamentally different to the existing Python TestCase -> TestResult API that is in use today. However it should be easy to write two adapters: old-style <-> this new-style. old-style -> new-style is useful for running existing test suites and generating subunit, because that way the subunit generation is transparent. new-style -> old-style is useful for using existing test reporting facilities (such as junitxml or html TestResult objects) with subunit streams.
Importantly though, a new TestResult style that supports the features of this protocol would enable some useful features for regular Python test suites:
- Concurrent tests (e.g. in multiprocessing) wouldn’t need multiplexers and special adapters – a regular single testresult with a simple mutex around it would be able to handle concurrent execution of tests, and show hung tests etc.
- The routing of input to a particular debugger instance also applies to a simple python process running tests via multiprocessing, so the routing feature would help there.
- The listing facility and incrementally running tests would be useful too I think – we could go to running tests concurrently with test collection happening, but this would apply to other parts of unittest than just the TestResult
The API might be something like:

    class StreamingResult(object):
        def startTestRun(self):
            pass
        def stopTestRun(self):
            pass
        def estimate(self, count, route_code=None):
            pass
        def stdin(self, bytes, route_code=None):
            pass
        def stdout(self, bytes, route_code=None):
            pass
        def test(self, test_id, status, attachment_name=None, attachment_mime=None, attachment_eof=None, attachment_bytes=None):
            pass
This would support just-in-time debugging by wiring up pdb to the stdin/stdout handlers of the result object, rather than the actual stdin/stdout of the process – a simple matter once written. Alternatively, the test runner could replace sys.stdin/stdout etc with thunk file-like objects, which might be a good idea anyway to capture spurious output happening during a test run. That would permit pdb to Just Work (even if the test process is internally running concurrent tests… until it has two pdb objects running concurrently).

Generating new streams
Should be very easy in anything except shell. For shell, we can have a command line tool that when invoked outputs a subunit stream for one instruction. E.g. ‘test foo completed + some attachments’ or ‘test foo starting’.
Since I've gone through the hassle myself, I thought I'd save you the time and tell you how to easily create .pot files, or translation template files, for WordPress plugins.
Long story short, the command that supports all non-deprecated WordPress translation functions (or, semantically speaking, gettext keywords) is:
xgettext -o OUTPUT.pot -L PHP -k__ -k_e -k_n:1,2 -k_x:1,2c -k_ex:1,2c -k_nx:4c,1,2 -kesc_attr__ -kesc_attr_e -kesc_attr_x:1,2c -kesc_html__ -kesc_html_e -kesc_html_x:1,2c -k_n_noop:1,2 -k_nx_noop:4c,1,2 *.php
You might want to add the --no-wrap argument depending on your needs. After running the command, open OUTPUT.pot and edit the information in the header.
In the foreseeable future, I'm going to finish a simple script that automatically updates the .pot files for plugins and also edits the headers automatically, based on the plugin information in the main PHP file. The script will be available in my WordPress repository on GitHub. Until then, happy hacking on WordPress and translations!
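In the meantime, the idea can be approximated with a few lines of Python that simply wrap the xgettext invocation above. This is only a sketch of what such a script might look like (the paths and the output name are assumptions), not the script mentioned above:

    #!/usr/bin/env python
    # Sketch: run the xgettext command from this post against a plugin directory.
    import glob
    import subprocess
    import sys

    KEYWORDS = [
        '-k__', '-k_e', '-k_n:1,2', '-k_x:1,2c', '-k_ex:1,2c', '-k_nx:4c,1,2',
        '-kesc_attr__', '-kesc_attr_e', '-kesc_attr_x:1,2c',
        '-kesc_html__', '-kesc_html_e', '-kesc_html_x:1,2c',
        '-k_n_noop:1,2', '-k_nx_noop:4c,1,2',
    ]

    def make_pot(plugin_dir, output='plugin.pot'):
        php_files = sorted(glob.glob(plugin_dir + '/*.php'))
        if not php_files:
            sys.exit('no PHP files found in %s' % plugin_dir)
        cmd = ['xgettext', '-o', output, '-L', 'PHP', '--no-wrap'] + KEYWORDS + php_files
        subprocess.check_call(cmd)

    if __name__ == '__main__':
        make_pot(sys.argv[1] if len(sys.argv) > 1 else '.')

Run it from the plugin directory, or pass the directory as the first argument.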
Chances are that if you're a LoCo team contact you've received an email message from me recently *encouraging* you to get your team signed up for the Ubuntu Global Jam. Or maybe you didn't...
Horror of horrors! You have an email issue.
Are you a "Team Contact?" Does your @ubuntu.com address still work? Check it. Fix it!
Are you a "Team Contact?" Are you cloaking your email address? Hmm. That would make you less than "contactable". ;) Fix it!
And by extension, if you are an Ubuntu Member it's a grand idea to check your email settings in Launchpad to ensure that yourlaunchpadID@ubuntu.com routes to some address that you check periodically.
This concludes my Public Service Announcement on routable email addresses. Happy messaging!
The Xubuntu team had an extra meeting on Monday to discuss and decide a possible move to a bigger ISO size and other important issues for Raring.
After a thorough discussion, which included the obvious drawback that the image will no longer fit on a CD and the amount of developer time currently spent keeping the ISO small enough to fit on one, the team decided by a clear 8-0 vote that Xubuntu will have a 1GB ISO. The bigger ISO size will be featured starting from Raring, which is due to be released in April.
With this extra 300MB of space, the team has decided to reintroduce both the Gnumeric spreadsheet application and the GIMP (GNU Image Manipulation Program) image editor, both of which were dropped from the 12.10 release due to space constraints. Discussion about reintroducing more of the most popular language packs and extra artwork will continue on the IRC channel #xubuntu-devel and on the Xubuntu development mailing list in the following weeks. As always, the team will adhere to the tenets of the Strategy Document when discussing these changes and strive not to add things to the ISO just because space is available.
Full logs and minutes from our meeting are available on our wiki here.