Let me just start off by saying that I love iloveubuntu.net, I subscribe to their RSS feed and read it pretty regularly, as regularly as I read any other feed. I have a great deal of respect for razvi and the writing he does for that site, and I would never mean any disrespect to him.
So what happened? When I started my UbBloPoMo project, one of the things I said I would do was write a post in the evening, and schedule it to publish the following day. That is what I’ve done for every post this month, and the last one was no exception. I had written my article, and scheduled it to publish at 4am my local time (9am UTC), and at that point in time I was unaware of razvi’s editorial on the same subject (which may or may not have been published by the time I wrote mine).
My choice in headline was in no way a reference to his. In fact, I chose the wording specifically because it wasn’t similar to any other headline for any other article I had read yesterday, so that nobody would feel like I was singling them out in particular. Of all the different ways a headline could have been worded, it was just a matter of random chance (or dumb luck on my part) that mine and his were as similar as they were.
I still stand by what I said, about Mozilla and about us as a community. But I wanted to apologize to razvi for the confusion and hurt feelings caused by my headline. And to reiterate, I still love iloveubuntu.net, I still have a great deal of respect for razvi, and I will continue making his site part of my regular news reading.
Stuart Langridge, Jeremy Garcia, Bryan Lunduke, and myself wend our troublesome ways down the road of:
- We weigh in on the upstart/systemd brouhaha in Debian and discuss what happened, why it happened, and whether it was a good thing or not.
- Bryan reviews the Lenovo Miix 2 tablet and we get into the nitty gritty of what you can do with it.
- We take a trip down memory lane about how we each got started with Linux, which distributions we used, and who helped us get on our journey.
- We recap and look at community feedback about guns, 3D printing, predictions, Bad Voltage gaming, the Bad Voltage Selfie Competition and more.
Go and listen or download it here.
Be sure to go and share your feedback, ideas, and other comments on the community discussion thread for this show!
Also, be sure to join in the Bad Voltage Selfie Competition to win some free O’Reilly books!
Finally, many thanks to Microsoft for helping us get the Bad Voltage Community Forum up and running, and thanks to A2 Hosting for now hosting it. Thanks also to Bytemark for their long-standing support and helping us actually ship shows.
Finally, finally, I found the time (spread over several months) to refurbish my blog. Not that it actually took that long, but spare time is rare these days.
I decided to stick with Drupal and created a fresh and clean installation of Drupal 7 to replace the old Drupal 6. Drupal is somewhat overkill for a simple blog, but the alternatives did not convince me for various reasons.
I do not want to dive into all the whys and why-nots, but rather point out some notable things with regard to Drupal. Setting up Drupal is straightforward, of course, and everything works fine and smoothly, but the real work comes with adjusting it to become what you want. Its flexibility allows you to realize almost anything; on the other hand, it is also the reason why some things are laborious to accomplish.
Layout and Modules
I chose Bamboo as the base theme and sub-themed it. The blog now looks less stale, long code sections are presented properly, and it has a mode for mobile devices. It required diving into Drupal theming a bit, but not too much; the documentation provided with Bamboo was already very helpful. The big plus of a sub-theme is that you can easily update the base theme without endless patching around. Theoretically it can still break your layout if major changes are applied. In the end, this task was pleasant to complete.
Afterwards it was about selecting modules and doing the configuration. Mostly it was searching, installing, configuring, done. Just three things I want to point out here:
- There's no easy solution for a guest preview as offered by Wordpress. You can achieve this by doing something complicated (I did not follow this) or by using the view_unpublished module. It does not offer the same convenience, but it is good enough.
- Standard elements like captions and buttons are now finally localized into the most common languages, e.g. English, Spanish, Chinese, Russian, Arabic and German. Drupal does not ship translations by default.
- Avoiding blog spam. On the old version I used reCaptcha. I believe the only type of commenters it kept away were authentic people; instead I had the doubtful pleasure of moderating tons of SEO spam. Now I use a honeypot approach and so far (in testing) it works incredibly well and does not get in the way of real people. I am very fond of this.
I wish I could get the latest Drupal from the repositories, either the original Ubuntu ones or a PPA. Web software evolves fast, releases often, and frequently closes security issues. Unfortunately, neither is provided (only older packages in the 12.04 repositories).
So I need to keep Drupal up to date by hand. Anyone who has ever read the update instructions knows that you don't want to do it by hand; there is a lot of stuff to do. A perfect condition for the lazy CS guy and a good opportunity to refresh my shell scripting. I could automate a lot of the ugly and boring stuff. What is left for me is to kick off the script and get in and out of maintenance mode. Even this could be achieved without human interaction, but so far I prefer to keep control. In the end, I need to ensure everything works as expected anyway.
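To give an idea of the boring parts such a script can automate, here is a minimal sketch (hypothetical paths and database name; enabling maintenance mode and running update.php stay manual):

    #!/bin/sh
    # Hypothetical semi-automated Drupal 7 core update:
    # back everything up, unpack the new release, leave sites/ untouched.
    set -e

    DRUPAL_DIR=/var/www/drupal
    BACKUP_DIR=$HOME/backups/$(date +%Y%m%d)
    VERSION=$1                      # e.g. 7.26, passed on the command line

    mkdir -p "$BACKUP_DIR"

    # 1. Back up the database (credentials assumed in ~/.my.cnf) and the files
    mysqldump drupal > "$BACKUP_DIR/drupal.sql"
    tar czf "$BACKUP_DIR/drupal-files.tar.gz" -C "$(dirname "$DRUPAL_DIR")" "$(basename "$DRUPAL_DIR")"

    # 2. Fetch and unpack the new core release
    wget -q "https://ftp.drupal.org/files/projects/drupal-$VERSION.tar.gz" -O /tmp/drupal.tar.gz
    tar xzf /tmp/drupal.tar.gz -C /tmp

    # 3. Overwrite core files; sites/ (modules, themes, files, settings) is excluded and kept
    rsync -a --delete --exclude=sites/ "/tmp/drupal-$VERSION/" "$DRUPAL_DIR/"

    echo "Core updated to $VERSION. Enable maintenance mode and run update.php now."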
Migration
The fun part. First, why did I not upgrade from Drupal 6 to 7, but rebuilt everything from scratch? Because I had made some decisions in the old configuration that were not so useful. Then, there were some modules that were discontinued or replaced, with a lacking upgrade path. And stuck somewhere in my head was the idea that an upgrade was problematic or not recommended, though this is probably a goof of my own memory. Well, in the end almost everything was ready and just waiting for the content.
Migrating the content (blog posts, static pages, comments, tags) from Drupal 6 to 7 was easy in the end, once I had found the way and fixed what was missing.
There is a module that provides exactly this transfer from an old Drupal 6 installation to a new Drupal 7 one, with a GUI. I really did not want to write an upgrade script, because I would have needed to get into those details again, while all the content types were standard ones. So, a GUI was a plus. At that time there was no stable release including the GUI, though, so I took the development version. Took it, ran it, was delighted.
Only a little bit later I found out that the tags were not assigned and that the node and term IDs (tags) were shuffled.
Reassigning the tags worked with an SQL SELECT and INSERT:

    INSERT INTO field_data_field_tags
        (entity_type, bundle, deleted, entity_id, revision_id, language, delta, field_tags_tid)
    SELECT
        'node' AS 'entity_type',
        'blog' AS 'bundle',
        0 AS 'deleted',
        node.nid AS 'entity_id',
        node.nid AS 'revision_id',
        'und' AS 'language',
        (@jDelta := @jDelta + 1) AS 'delta',
        taxonomy_term_data.tid AS 'field_tags_tid'
    FROM taxonomy_term_data, node, oldDatabase.term_data, oldDatabase.node, oldDatabase.term_node,
        (SELECT @jDelta := 0) AS jDelta
    WHERE oldDatabase.term_node.nid = oldDatabase.node.nid
        AND oldDatabase.term_node.tid = oldDatabase.term_data.tid
        AND taxonomy_term_data.name = oldDatabase.term_data.name
        AND node.title = oldDatabase.node.title
    ORDER BY entity_id;
So, the node IDs and term IDs were left. This is a problem, because they are contained in the URLs. From an SEO point of view, keeping them different will confuse search engines. They would likely get it right after a while, but as a former SEO consultant you want to do it the right way. Changing them back would work, but the IDs are used everywhere and there are a lot of tables. Before I decided on the migrate module I had considered migrating the content just by copying it from the old database to the new one, but things had changed, and without really getting down into it, many new tables and columns remained unclear.
The lazy approach was to redirect the old node IDs to the new ones:

    SELECT CONCAT('redirect 301 /node/', oldDatabase.node.nid, ' http://www.arthur-schiwon.de/', alias)
    FROM node, url_alias, oldDatabase.node
    WHERE node.title = oldDatabase.node.title
        AND source = CONCAT('node/', node.nid);
It generates redirects from the old URLs containing the old node IDs to the clean URLs. For some reason, something happened with the canonical tag in Drupal 6, so that the old clean URLs were not used but the ugly ones were, and I do not want those in the search engines. Now this is fixed as well. The result contained duplicate lines, somehow, but they could easily be dropped or the correct alias chosen. In a few cases I needed to update the alias; commas led to some problems. I pasted the result at the beginning of the .htaccess file. The same needed to be done for the term IDs.
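The generated lines end up looking like this in .htaccess (the IDs and aliases here are made up for illustration):

    redirect 301 /node/23 http://www.arthur-schiwon.de/blog/some-old-post
    redirect 301 /node/42 http://www.arthur-schiwon.de/blog/another-post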
It is not the best approach, but given the limited time I could and wanted to spend, it is OK. In the end, it's a private blog for fun and fame, but not for profit.
It is essential to check whether all important old URLs are still reachable, to avoid broken links. Broken links are bad for visitors as well as for search engines. I used linkchecker, available in the Ubuntu repositories, to collect all the URLs from my old site:

    linkchecker -Fcsv/urlstate.csv --stdin -t1 -r0
A lot of data is gathered. I took all the paths pointing to my domain, replaced the domain with my test domain, saved them in a text file, and ran curl against them with a small script I wrote for this:

    #!/bin/sh
    # Fetch every URL listed in urls-new-ws and record its HTTP status
    OUTPUTFILE=new-url-stats.csv
    for url in `cat urls-new-ws`; do
        status=`curl -sI $url | grep "HTTP/1.1"`
        echo "$url,$status" >> $OUTPUTFILE
    done
In the resulting CSV file I had the URL and the status, good enough for me. In LibreOffice, I auto-filtered it and sorted out the faulty or suspicious URLs, i.e. those throwing 4xx errors. If things needed to be fixed, I fixed them and reran the script until I was satisfied.
Future
I wondered whether I should switch away from Drupal but decided to stay with it. The migration should be performed as well as possible while spending as little time as possible. In the end, it took quite some time to investigate and find the right strategy. Maybe it would have been faster with a direct upgrade. Probably it is easier and more straightforward to use software that is dedicated to running blogs. This question will reappear when the next iteration of the blog is done in a few years, and I cannot promise to stay with Drupal, since I really only use a little bit of the whole feature set. But I am a fan of neither Wordpress nor Ghost, so let us see which options will be out in the wild then.
I am satisfied with the result, though there are a few rough edges that can be taken care of later. It really is a huge relief to deliver "Comment" buttons and the like in common languages instead of only German, and to be able to read the blog properly on mobile devices.
Now I only need to find time to blog more often ;)
Just about all of you reading this know that I am a technical writer. One of the things I do to keep up to date with the latest trends in the field is read. I read books, articles, blogs, whatever I can find that relates. I especially enjoy Mark Baker’s blog, Every Page is Page One. Baker consistently posts articles that make me think, and in good ways. When I heard he has a book out, I contacted the publisher immediately.
As a side note, Baker’s publisher, XML Press, consistently produces books that I find useful. Every one I have read is well-written, authoritative, and filled with real-world experience and practicality.
Every Page is Page One: Topic-based Writing for Technical Communication and the Web shares the first part of its title with the blog, but the content is not directly from the blog. Rather than a collection of posts on assorted topics assembled into book form, this is a well-thought-out and well-organized text. In it, Baker observes that documentation projects tend to think about technical writing from a very book-centered paradigm. This was once ideal, but in the age of communicating technical information electronically, it forces limits on the end product that hinder the true goal of technical writing, the goal of delivering the right information at the right moment to the person who is seeking it. As someone who is not only a technical writer, but who also has a degree in information resources and library science, I have multiple reasons for supporting this goal.
What Baker does is give tangible form to thoughts and ideas that he, other technical writers, and even I have had in the abstract. How do we provide needed information to people who seek it in an age where the web makes almost anything searchable? Do manuals still matter? What about other forms of documentation? Are there changes to our style of communication, to our style of writing and presenting information, that will make the information seeker’s task easier? Baker discusses serious and realistic ways we can improve our field. It is all organized around the idea that we can no longer control the order in which information seekers will consume or even find our information, that every page (in a documentation wiki, for example) should be created in a way that enables a user to immediately understand and acquire what they need when they need it. Since we know we do not have this control like we had in a printed book, we must modify how we write and present information to fit the expectations of the seeker.
I enjoyed reading this book. I have benefited personally from reading this book. I am taking this book in to my workplace and sharing it with the other tech writers there and I believe our workplace and our employer and our customers will benefit from this book. If you work in the field, I’m convinced you will, too. The whole book is good, but my favorite parts are Section I, which lays the foundation in five chapters, and Chapter 22, which gives very practical and useful advice for making your case to others when you begin to try to make the changes the book describes.
At the Ubucon at Southern California Linux Expo on Friday, February 21st I’ll be doing a presentation on 5 ways to get involved with Ubuntu today. This post is part of a series where I’ll be outlining ways that regular users with only minimal user-level experience of Ubuntu can get involved.
Interested in having a polished release but not able to contribute in a very technical way? Testing pre-releases is a great way to get started, even Mark Shuttleworth is getting in on the testing fun!
In this post, I’ll walk you through doing an ISO test, but there are also package and hardware/laptop tests that can be done; full details here: https://wiki.ubuntu.com/Testing/QATracker
Testing an ISO
Log in to the testing tracker
You will want to go to http://iso.qa.ubuntu.com/. The page can be a bit overwhelming at first, but there are two sections you’ll want to focus on, the log in button and the list of builds available for testing.
To log in you’ll need an account with https://login.ubuntu.com/; clicking on the “Log in” button will take you to a page where you can set one up or use your existing one.
Select a build to test
Most days the only build that is currently being tested is the “Daily” image – so in the screenshot above that is “Trusty Daily” and you’ll want to click on that link. The “Trusty Alpha2” and “Trusty Alpha1” images have already been released, so ISO testing on those is no longer necessary.
Select something to test
This screen can be a bit overwhelming too since it lists all the possible builds in the ISO tracker, which is a lot! I highly recommend using the Filters on the left hand side of the screen to select only the builds you’re interested in. In this screenshot I selected only Ubuntu and Xubuntu to make the list short.
Then you can look to see what you want to test. Do you have a new computer? You can test the 64-bit ISOs; I circled where you want to click in the screenshot if you want to test the Xubuntu 64-bit ISO (I do!).
Select what test you want to do
At this next screen you will be presented with a series of tests that you can do. The easiest is “Live Session” since it doesn’t require you to install anything, it’s just testing a live session. You then also have various options for Installation-based testing.
But let’s say you have a virtual machine (VirtualBox is free and pretty easy to use for this) or a spare computer you want to run tests on, so for the purposes of this walkthrough we’ll select the “Install (entire disk)” test case.
Download and prepare the ISO
Once you’re on the screen for the test case, there will be a link to download the ISO. There are many options for downloading, including just clicking it to download via http, or downloading via rsync, zsync or torrent; you can read more about all of these options once you learn more about testing. For now, downloading it through http is fine.
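If you do want to try zsync later, the command looks roughly like this (the exact .zsync URL for the image you selected is shown on the download page; the one below is only illustrative):

    # Download, or efficiently update an existing local copy of, the daily ISO
    zsync http://cdimage.ubuntu.com/xubuntu/daily-live/current/trusty-desktop-amd64.iso.zsync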
While you’re waiting for it to download you can click on “Testcase” in the grey box below that to read through what you’ll be doing in the test case.
Once downloaded, either use the image directly in something like VirtualBox, or put it on a USB stick or burn a DVD.
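For the USB stick route, one common approach (Ubuntu's Startup Disk Creator is another) is dd; double-check the device name first, since this overwrites the target:

    # Write the ISO to a USB stick; replace /dev/sdX with your actual USB device
    sudo dd if=trusty-desktop-amd64.iso of=/dev/sdX bs=4M
    sync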
Begin the test case
Scrolling down on the same page, you will see this:
This is where you will report the results of your test. The “Bugs to look for” are a list of bugs that others have reported that you may encounter, so you might want to look at some of those and include those bug numbers in your test if you encounter them too.
A quick rundown of the meaning of each field is as follows:
Result: Whether or not you were able to get through to the end of the test case with no fatal errors
Critical bugs: Bugs that prevented you from finishing the test case (would generally go along with “failed” above)
Bugs: Bugs that exist, but you were able to work around them and finish the test case (it can be marked as “passed” and still have bugs)
Note: Don’t stress too much over whether you believe a test really passed or failed, and whether bugs are critical or not; there is some judgement involved here, and results are reviewed by release managers who decide whether the ISO is ready for release. Just do your best!
Hardware profile: This is an optional field that can give the team an idea of what your hardware is. Using a virtual machine? Actual hardware? How much RAM and what type of graphics card? Put as much information as you can online somewhere and paste your link here. For example, here’s the testing profile I use for my Lenovo G575 and another for when I test in a 1.5G RAM VirtualBox instance. You can also choose to use the “hardinfo” command to generate information about your hardware and put it online somewhere (see the example below).
Comment: You can add any additional comments you may have about doing the test case.
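For the hardware profile mentioned above, hardinfo can generate a report you can upload somewhere; it usually looks something like this (the output filename is just an example):

    # Generate an HTML hardware report with hardinfo
    hardinfo -r -f html > hardware-profile.html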
If you run into any bugs while doing your test, you will need to submit those bugs for them to be recorded. For this you will need an account on launchpad.net; if you don’t have one, get one by clicking here. Once you submit a bug you’ll want to add that bug number to your list of Bugs (or Critical Bugs). Learn more about reporting bugs here.
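If you're comfortable with a terminal, one common way to file a bug with the relevant logs attached automatically is apport's ubuntu-bug tool, run against the package you think is at fault (ubiquity, the installer, is only an example here):

    # File a bug against the installer; apport collects and attaches system data
    ubuntu-bug ubiquity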
Note: Reporting bugs can be hard, particularly determining what package to file them against; even I still struggle with this! My recommendation is to do your best, make sure you add your bug to the tracker so people notice it, and ask for help on the ubuntu-quality mailing list if you’re really unsure.
Done
Click “Submit Result” and you’ll be finished reporting a test case! The Ubuntu community thanks you :)
Learn more about testing
A more thorough walkthrough with more screenshots can be found here: https://wiki.ubuntu.com/Testing/ISO/Walkthrough
You can sign up for and email the ubuntu-quality mailing list to introduce yourself and ask any questions you may have; they’re a friendly bunch.
Visit the Quality Assurance Team wiki for more about other kinds of testing.
So here I am with a static blog.
I was on Wordpress. I like Wordpress; in particular, I like the vitality of it. There’s a large community of people using it and working on it and making plugins and making themes, and it’s become apparent to me over the years that one of the things I care about quite a lot, when using software which is not a core part of what I do, is that I do not have to solve every problem that I have myself. That is: I would like there to be the problem-solving method of “1. google for desired outcome; 2. find someone else has written a plugin to do it”, rather than “1. google; 2. find nothing; 3. write code”. What this means is using some project with a largeish community. So I settled on Pelican, because it’s one of the more popular static blog engines out there, and hence vibrant community.
At this point there will be questions.
If you wanted a vibrant popular community why didn’t you use Jekyll?
I couldn’t work out how to install it.
It says: gem install jekyll. I did that and it says Permission denied - /var/lib/gems/1.9.1. So for some reason a command run as me wants to install things in a system-level folder. No. System level belongs to apt. Let the word go throughout the land.
I’m sure that it’s entirely possible to configure RubyGems so that it installs things in a ~/gems folder or something. But I don’t want that either: I want this stuff to be self-contained, inside the project folder. Node’s npm gets this completely right and I am impressed down to the very tips of my toes with it. Python gets it rightish: you have to use a virtualenv, which I am doing. Is there a virtualenv-equivalent for Ruby and RubyGems? Almost certainly. But I’m not trying to learn about Ruby, I’m trying to set up a blog. Reading up about how to configure Ruby package installation to be in the project folder when you’re trying to set up a blog isn’t just yak-shaving, it’s like an example you’d tell a child to explain what yak-shaving is. So no Jekyll for me, which is a bit annoying, but not too much since Pelican looks good. And I know Python pretty well, and don’t like Ruby very much, so that’s also indicative.
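For reference, the self-contained Pelican setup being described boils down to roughly this kind of thing (a minimal sketch, assuming virtualenv and pip; not necessarily the exact commands used here):

    # Keep everything inside the project folder
    virtualenv env
    . env/bin/activate
    pip install pelican Markdown
    pelican-quickstart                            # answer the prompts to generate the skeleton
    pelican content -o output -s pelicanconf.py   # build the static site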
Why are you using a static blog engine at all? What was wrong with Wordpress?
It got owned. I got an email from a friend of mine saying “hey, did you know that if you look at your blog in an old browser, such as Dillo, there’s a bunch of spam at the top of it?”
I did not know. But it was the case. Sigh.
There are plenty of guides around about how to fix this: dump the DB, reinstall Wordpress, restore the DB, then look for fixes, etc, etc, etc. And I thought: wow, that’s a bunch of effort and what do I get for it? I’m still vulnerable to exactly the same problem, which is that an upgrade to WP happens, it notifies me, I notice thirty nanoseconds later, and in that thirty nanoseconds some script bot somewhere 0wns the blog. I could, in theory, fix this by spending much more time setting up something to auto-update WP, but in practice that’s hard: what do I do, svn update every fifteen seconds in a cron job? Nightmare.
So, what am I getting from Wordpress that I’ll lose if I go static?
My list of plugins contains a bunch of stuff which is only relevant because it is Wordpress and thus dynamic: caching, spam, that sort of thing. Static sites don’t need any of that. I like Jetpack a lot; it gives me a nice mobile theme and stats, and I’ll lose that (as well as comment moderation from a mobile app, which I don’t care about if I don’t have comments; see below). I have a bunch of nice little plugins which throw in features that I like, such as adding footnotes, which I’ll lose. Counterbalance that with how it’s basically impossible to put a <script> element in a Wordpress post, which is incredibly annoying. I won’t be able to create posts if I’m away from my computer (without doing a bunch of setup), but in practice I don’t do that, it turns out. And finally, comments.
Hm, comments. On the one hand, I like my commenters; there have been interesting discussions there, and normally informative. On the other hand, there’s a lot less commenting on blogs going on these days; you get much more real-time discussion on Twitter or G+ about a post than you do on the post itself. That’s a bit worrying — you’re losing the in-one-place nature of the conversation, and their bit of the conversation might vanish from the public discourse if G+ shuts down, which is why I don’t like Disqus — but that’s happening anyway and can’t be stopped. So maybe I can live with it.
Also, themes, but Pelican has a bunch, including rather excellently the same theme I was using on Wordpress! So it looks like nothing changed! Rawk.
So, let’s see how it goes. Possibly I’ll find something else critical that I’m missing and migrate back… and I do still have to write a footnotes plugin… but so far we’re feeling good about it.
Seriously? If you find yourself writing a headline like this, you need to take a break from the internet. If you clicked on this one because you wanted more blood-sport FOSS drama, you’re out of luck (and you should also take a break from the internet). Come on guys, what’s happened to us that we keep jumping from one outrage to another? For a group that loves to make allusions to 1984 when privacy is concerned, you’d think we would put up more resistance to these two-minute hate sessions when they are pushed on us.
Mozilla was the first open source project I ever took interest in. I downloaded the source code for Netscape 5 (remember that?) and was *amazed* that they would just give it away to anybody like that. I am involved in open source now, all these years later, because of that one experience. Firefox is still one of the most well known and most used pieces of open software in the world. They have been consistently open and doing good for us users. You don’t just disregard all of that because of a few ads.
So let’s quit this self-indulgent outrage; Firefox is still a great open source project. Mozilla aren’t selling out, they aren’t turning evil, and they haven’t suddenly stopped caring about users. They have a nice feature that shows thumbnails of previously viewed websites, which is very handy once you have previously viewed websites. For new users those tiles are empty, and empty tiles are useless. So Mozilla wants to pre-populate them until you’ve used the browser enough to fill that space. That’s great: it takes something doing nothing and makes it do something, and that’s a good thing. And they might make some money off it, which for everybody who’s concerned about Google’s dominance and infiltration into our privacy should be a good thing.
Are they ads? Yes. Or No, depending on who you ask. But you’re asking the wrong question. The question should be: do they make Firefox worse for me? or you? or other people? And the answer is we simply don’t know yet, because as of today those tiles are still empty for new users. Mozilla thinks they can make Firefox better, both the web browser and the project behind it, and after 15 years of doing exactly that they’ve more than earned the right to try.
But let’s get back to the bigger problem, this constant stream of flamewars and hate fests. We can’t go on tearing down free and open source projects like this. No matter how much one of these might benefit your favored project at the moment, sooner or later the outrage machine is going to turn on it too. If we care at all about open source as a philosophy, we need to care about the people and projects that live by it.
As Russ Allbery said recently, after the Debian project had just weathered an outrage storm of its own:
But people should also get engaged and interested in understanding other *people* and finding ways to work with other people in difficult situations, since at the end of the day our communities are about people, not software.
Last week I was in Orlando sprinting with my team as well as the platform, SDK, and security teams and some desktop and design folks. As usual after a sprint, I have been slammed catching up with email, but I wanted to provide a summary of some work going on that you can expect to see soon in the Ubuntu app developer platform.
HTML5
In the last few months we have been working to refine our HTML5 support in the Ubuntu SDK.
Today we have full HTML5 support in the SDK but we are working to make HTML5 apps more integrated than ever. This work will land in the next week and will include the following improvements:
- Consolidating everything into a single template and container. This means that when you create a new app in the SDK you have a single template to get started with that runs in a single container.
- Updating our Cordova support to access the full range of sensors and hardware on the device (e.g. camera, accelerometer).
- Adding a series of refinements to the look and feel of the HTML5 Ubuntu components. Previously the components looked a little different from the QML ones, and we are closing that gap.
- Full API documentation for the Cordova and Platform APIs as well as a number of tutorials for getting started with HTML5.
- On a side note, there have been some tremendous speed improvements in Oxide which will benefit all HTML5 apps. Thanks to Chris Coulson for his efforts here.
With these refinements you will be able to use the Ubuntu SDK to create a new HTML5 app from a single template, follow a tutorial to make a truly native-feeling HTML5 app utilizing the Cordova and Platform APIs, then click one button to generate a click package, fill in a simple form, and get your app in the store.
I want to offer many thanks to David Barth’s team for being so responsive when I asked them to refine our HTML5 support in time for MWC. They have worked tirelessly, and thanks also to Daniel Holbach for coordinating the many moving pieces here.
SDK
Our SDK is the jewel in the crown of our app development story. Our goal is that the SDK gets you on your Ubuntu app development adventure and provides all the tools you need to be creative and productive.
Fortunately there are a number of improvements coming here too. This includes:
- We will be including a full emulator. This makes it easy for those of you without a device to test that your app will work well within the context of Ubuntu for smartphones or tablets. This is just a click away in the SDK.
- We are also making a series of user interface refinements to simplify how the SDK works overall. This will include uncluttering some parts of the UI as well as tidying up some of the Ubuntu-specific pieces.
- Device support has been enhanced. This makes it easier than ever to run your app on your Ubuntu phone or tablet with just a click.
- We have looked at some of the common issues people have experienced when publishing their apps to the store and included automatic checks in the SDK to notify the developer before they submit them to the store. This will speed up the submissions process.
- Support for “fat” packages is being added. This means you can ship cross-compiled pieces with your app (e.g. a C++ plugin).
- Last but certainly not least, we are going to be adding preliminary support for Go and QML to the Ubuntu SDK in the next month. We want our app developers to be able to harness Go and with the excellent Go/QML work Gustavo has done, we will be landing this soon.
As ever, you can download the latest Ubuntu SDK by following the instructions on developer.ubuntu.com. Thanks to Zoltan and his team for their efforts.
developer.ubuntu.com
An awesome SDK and a fantastic platform is only as good as the people who know how to use it. With this in mind we are continuing to expand and improve developer.ubuntu.com to be a world-class developer portal.
With this we have many pieces coming:
- A refinement of the navigational structure of the site to make it easier to get around for new users.
- Our refined HTML5 support will also get full Cordova and Platform API documentation on the site. Michael Hall did a tremendous job integrating Ubuntu and upstream API docs in the same site with a single search engine.
- A library of primers that explain how key parts of our platform work (e.g. Online Accounts, Content Hub, App Lifecycle, App Insulation etc). This will help developers understand how to utilize those parts of the platform.
- Refining our overview pages to explain how the platform works, what is in the SDK etc.
- A refreshed set of cookbook questions, all sourced from our standard support resource, Ask Ubuntu.
- We will also be announcing Ubuntu Pioneers soon. I don’t want to spoil the surprise, so more on this later.
Thanks to David, Michael, and Kyle on my team for all of their wonderful efforts here.
Desktop Integration
In the Ubuntu 14.04 cycle we are also making some enhancements to how Ubuntu SDK apps can run on the desktop.
As many of you will know we are planning on shipping a preview session of Unity8 running on Mir. This means that you can open Unity8 from the normal Ubuntu login screen so you can play with it and test it. This will not look like the desktop; that work is on-going to converge Unity 8 into the desktop form-factor and will come later. It will however provide a base in which developers can try the new codebase and hack on it to converge it to the desktop more quickly. We are refreshing our Unity8 developer docs to make this on-ramp easier.
We are also going to make some changes to make running Ubuntu SDK apps on Unity 7 more comfortable. This will include things such as displaying scrollbars, right-click menus etc. More on this will be confirmed as we get closer to release.
All in all, lots of exciting work going on. We are at the beginning of a new revolution in Ubuntu where beautifully designed, integrated, and powerful apps can drive a new generation of Ubuntu, all built on the principles of Open Source, collaboration, and community.
Much has been made of the recent announcement of the Year of Code and the underwhelming Newsnight interview with Lottie Dexter, which contained some selected footage from what appears to be a class on jQuery (possibly by Code First:Girls) in which coding is described twice as gobbledegook, and which went on to have Lottie Dexter announce that she was unable to code. This is not ideal for the director of an organisation that is supposed to inspire and promote the teaching of coding. I don’t demand a string of coding accomplishments from such a position; it is just that without a basic understanding of coding it is hard to articulate how much fun it is. Computing in schools fell apart as a subject in the mid 90s: the emphasis changed from doing programming projects and educational activities to using spreadsheets, word processors and desktop database applications. In many schools the teaching of the foundational skills of computing was replaced by Microsoft Office training. This is not the same thing, and it is something I have been concerned about for many years; it is one reason I was involved with supporting the Open Source Schools project around the time of the end of BECTA, and one reason why we exhibited at BETT and introduced teachers to the OLPC project and the thinking behind it. A couple of years ago, when taking my eldest to an open day at a local secondary school, the first words out of the mouth of the teacher when we got to the ICT room were “Don’t worry, there is no coding in this subject”. We selected a different school.
This is all quite sad, but it is fixable. Coding is fun and easy, and teaching it is fun and easy. I know this because I do it. Every Tuesday afternoon this term I am visiting a school a few miles away to run an after-school Code Club. We are doing programming projects using Scratch; here is the project we did this week. It is a fruit machine that cycles through a few images, and you click the images to stop them and try to get them all to line up.
Part of the code required to do this looks like this:
It is programmed by dragging and dropping commands from a palette of options (which is particularly great on an interactive whiteboard), with no typing or spelling errors involved, and the club of year 5 (age 9) programmers now knows about variables, random numbers, if statements, infinite loops, bounded loops, signals, events, and designing a fun game by balancing parameters to make it not too easy and not too hard. They have been trying things out, experimenting, getting things wrong and figuring out what the problem is and what they need to do in order to get the outcome they want. This is computing, and it is the foundation of the skills we want coming into the industry.
I would encourage everyone in the IT industry, or with an interest in IT in the UK (and elsewhere, but some of this is UK specific), to get involved in Code Club. The Code Club website allows schools to say that they would like to have a Code Club, and volunteers to search for schools in their area that want one. This means that you do not have to approach the school and start by explaining what it is all about and why they should want to have a Code Club; they already know that bit, so you have to do nothing to “sell” the concept to the school. The activity plans are great, the coders love them, and you don’t have to decide what you are going to do each week; that is all done for you. There is a bit of admin and checking that is done in advance (you get a security check called a DBS), but that is all arranged and paid for by STEMNET.
I don’t know if the Year of Code organisation will make any particular contribution itself, but the Newsnight appearance and subsequent kerfuffle has certainly brought some attention on the efforts of Code Club, Young Rewired State, the Raspberry Pi foundation and some other organisations which are actively working to bring the fun of coding back into UK schools and this is a good thing.
There has been a lot of sensational writing by a number of media outlets over the last 24 hours in reaction to a post by Darren Herman, who is VP of Content Services. Lots of people have been asking me whether there will be ads in Firefox and pointing out these articles that have sprung up everywhere.
So first I want to look at the Merriam-Webster definition of an advertisement:
ad·ver·tise·ment
noun \ˌad-vər-ˈtīz-mənt; əd-ˈvər-təz-mənt, -tə-smənt\
: something (such as a short film or a written notice) that is shown or presented to the public to help sell a product or to make an announcement
: a person or thing that shows how good or effective something is
: the act or process of advertising
Great, now that we have the definition, it looks like the fact that Mozilla announces users’ rights to them and asks what data choices they make alone meets the criteria of an advertisement. The question is: does the average user consider that an advertisement, or a useful bit of content that Mozilla is trying to share? Next, let’s look at the fact that Firefox uses Google as the default search engine and home page, and boom: if we use the literal sense of the definition, that too is an advertisement, isn’t it?
So now let’s move on to Darren’s post. While I think the post could have had stronger context and there could have been a better response to address some of these concerns, the basic gist here is that Mozilla plans to offer tiles where there was a gap in content, and that some of those tiles may be sponsored but will still be consistent with Mozilla’s values. Furthermore, this new content will only be displayed to new users, and it’s unlikely you will see this anytime soon since it has not landed in Mozilla-Central and gone through the processes necessary to make it to a stable release.
Personally I think this is much ado about nothing, and I think this feature and features like UP (User Personalization) are going to be very helpful to users and bake in some of the content that add-ons have typically provided.
This week we dedicated the short clinic to sizing, and ensuring widgets and items are usable (touchable).
- The Ubuntu grid unit – for more information, see http://developer.ubuntu.com/api/qml/sdk-1.0/UbuntuUserInterfaceToolkit.resolution-independence/
- Minimum touch target size – 4×4 gu
- A sneak preview of the updated widgets coming to Ubuntu Touch
If you missed it, or want to watch it again, here it is:
The next App Design Clinic will be on Wednesday 26th February. Please send your questions and screenshots to firstname.lastname@example.org by 1pm UTC on Tuesdays to be included in the following Wednesday’s clinic.
Today was a distracting day for me. My homeowner’s insurance is requiring that I get my house re-roofed, so I’ve had contractors coming and going all day to give me estimates. Beyond just the cost, we’ve been checking on state licensing, insurance, etc. I’ve been most shocked at the differences in the level of professionalism among them; you can really tell the ones for whom it is a business, and not just a job.
But I still managed to get some work done today. After a call with Francis Ginther about the API website importers, we should soon be getting regular updates to the current API docs as soon as their source branch is updated. I will of course make a big announcement when that happens.
I didn’t have much time to work on my Debian contributions today, though I did join the DPMT (Debian Python Modules Team) so that I could upload my new python-model-mommy package with the DPMT as the Maintainer, rather than trying to maintain this package on my own. Big thanks to Paul Tagliamonte for walking me through all of these steps while I learn.
I’m now into my second week of UbBloPoMo posts, with 8 posts so far. This is the point where the obligation of posting every day starts to overtake the excitement of it, but I’m going to persevere and try to make it to the end of the month. I would love to hear what you readers, especially those coming from Planet Ubuntu, think of this effort.
 Re-roofing, for those who don’t know, involves removing and replacing the shingles and water-proofing paper, but leaving the plywood itself. In my case, they’re also going to have to re-nail all of the plywood to the rafters and some other things to bring it up to date with new building codes. Can’t be too safe in hurricane-prone Florida.
As prep for the upcoming 14.04 LTS release of Ubuntu I spent some quality time with each of the main flavours that I track – Kubuntu, Ubuntu GNOME, Xubuntu, and Ubuntu with the default DE, Unity.
They are all in really great shape! Thanks and congratulations to the teams that are racing to deliver Trusty versions of their favourite DEs. I get the impression that all the major environments are settling down from periods of rapid change and stress, and the timing for an LTS release in 14.04 is perfect. Lucky us.
The experience reminded me of something people say about Ubuntu all the time – that it’s a place where great people bring diverse but equally important interests together, and a place where people create options for others of which they are proud. You want options? This is the place to get them. You want to collaborate with amazing people? This is the place to find them. I’m very grateful to the people who create those options – for all of them it’s as much a labour of love as a professional concern, and their attention to detail is what makes the whole thing sing.
Of course, my testing was relatively lightweight. I saw tons of major improvements in shared apps like LibreOffice and Firefox and Chromium, and each of the desktop environments feels true to its values, diverse as those are. What I bet those teams would appreciate is all of you taking 14.04 for a spin yourselves. It’s stable enough for any of us who use Linux heavily as an engineering environment, and of course you can use a live boot image off USB if you just want to test drive the future. Cloud images are also available for server testing on all the major clouds.
Having the whole team, and broader community, focus on processes that support faster development at higher quality has really paid off. I’ve upgraded all my systems to Trusty, and those I support from afar too, without any issues. While that’s mere anecdata, the team has far more real data to support a rigorous assessment of 14.04's quality than any other open platform on the planet, and it’s that rigour that we can all celebrate as the release date approaches. There’s still time for tweaks and polish; if you are going to be counting on Trusty, give it a spin and let’s make sure it’s perfect.
It was somewhere between 7th and 11th February 2004 when I got the package with my first Linux/ARM device. It was a Sharp Zaurus SL-5500 (also named “collie”), and it all started…
At that time I had a Palm M105 (which I still own) and a Sony CLIE SJ30 (both running PalmOS/m68k), but I wanted a hackable device. I had no idea what this device would do with my life.
It took me about three years to get to the point where I could abandon my daily work as a PHP programmer and move to the somewhat riskier business of embedded Linux consulting. But it was worth it. Not only from a financial perspective (I paid more tax in the first year than I had earned in the previous one) but also for my own development. I met a lot of great hackers, people with knowledge I did not have, and I worked hard to be a part of that group.
I was a developer in multiple distributions: OpenZaurus, Poky Linux, Ångström, Debian, Maemo, Ubuntu. My patches also landed in many other embedded and “normal” ones. I patched an uncountable number of software packages to get them built and working. Sure, not all of those changes were sent upstream, and some were just ugly hacks, but this started to change one day.
I worked as distribution leader for OpenZaurus. My duties (still in my free time only) were user support and maintaining repositories and images. I organized testing of pre-release images with over one hundred users — we had all supported devices covered. There was an “updates” repository where we provided security fixes, kernel updates and other improvements. I also officially ended development of this distribution when we merged into Ångström.
I worked as one of the main developers of Poky Linux, which later became Yocto Linux. I learnt about build automation, QA control, build-after-commit workflow and many other things. During my work with OpenedHand I also spent some time learning the differences between British and American English.
I worked with some companies based in the USA. This allowed me to learn how to organize teamwork with people in quite distant timezones (Vernier was based in Portland, so a 9-hour difference). It was useful then and still is, as most of the Red Hat ARM team is US based.
I remember moments when I had to explain what I do at work to some people (including my mom). For the last 1.5 years I used to say “building software for computers which do not exist”, but this is slowly changing as AArch64 hardware exists, though it is not on the mass market yet.
Now I have got to a point where I am recognized at conferences by random people, whereas at FOSDEM 2007 I knew just a few guys from OpenEmbedded (but I connected many faces with names/nicknames there).
I have played with more hardware than I wanted. I still have some devices which I never booted (the FRI2 for example). There are boards/devices which I would like to get rid of, but most of them are so outdated that they can only go to electronic waste.
But if I had the option to go back those 10 years and think again about buying the Sharp Zaurus SL-5500, I would not change a thing, as it was one of the best things I did.
Hot on the heels of the new driver manager, we have the new touchpad management app that we introduced in Kubuntu 14.04.
The new app replaces the old Synaptiks touchpad management app and has many more buttons and settings that you can twiddle and tweak to get the best experience. The Kubuntu team would like to thank Alexander Mezin for working on this replacement app as part of his GSoC project. The package comes complete with its own plasmoid for easy access to enable and disable touchpads! Quite useful for folks who don’t have a physical hardware button to enable/disable touchpads.
Users of Kubuntu 14.04 can grab the new touchpad management app from the Ubuntu repos by installing the kde-touchpad package.
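From a terminal, that amounts to something like:

    # Install the new touchpad management app on Kubuntu 14.04
    sudo apt-get install kde-touchpad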
At the Ubucon at Southern California Linux Expo on Friday, February 21st I’ll be doing a presentation on 5 ways to get involved with Ubuntu today. This post is part of a series where I’ll be outlining ways that regular users with only minimal user-level experience of Ubuntu can get involved.
One of the most valuable things about using Ubuntu is the vast wealth of help available from fellow community members. Most of the time when I have a problem, I can search for an answer and find one, or ask on one of the several support outlets that exist.
Helping out others was one of the first ways I got involved, and there are many benefits to making it one of yours too.
Gentle learning curve
You may need to learn forum or mailing list etiquette, but otherwise you just jump in and help folks with things you know how to help out with. There are no requirements to have special training and you don’t need to know everything. If you’ve been using Ubuntu a few days longer than someone else, you can probably help them out.
No set time commitment
You can help as much or as little as you want. There is very little investment in getting set up with the help resources made available by the community, and you can spend all day answering questions or just answer one question per week. It’s all up to you.
There are many outlets for helping
Love forums? Prefer Stack Exchange? Or mailing lists? Want to help via IRC? You have many options! The following are the core help areas for the Ubuntu community:
- Web-based forums: UbuntuForums.org
- Mailing list: ubuntu-users
- Stack Exchange: AskUbuntu.com
- Internet Relay Chat: Channel list
If you’re more interested in passive support, documentation contributions and improvements are always welcome and needed on the Community Help Wiki.
You’re making a difference
It may seem like an easy way to contribute, but you’re lending your talents to one of the things that makes our community great. Regardless of how much you contribute, every person you help is someone who is no longer stumped with their issue and that makes a difference.
Nothing new to report.
Release Metrics and Incoming Bugs
Release metrics and incoming bug data can be reviewed at the following link:
Milestone Targeted Work Items
5 work items
2 work items
1 work item
1 work item
3 work items
1 work item
6 work items
Status: Trusty Development Kernel
We have uploaded the 3.13.0-8.27 Trusty kernel to the archive which pulled in the latest v3.13.2 upstream stable updates.
We are also starting to work on opening up our first v3.14 rebase which will be available from our ubuntu-trusty unstable branch.
I want to also point out that the proposal for a 12.04.5 point release appears to have widespread support. This 5th point release for Precise will provide the linux-lts-trusty kernel in 12.04.
Important upcoming dates:
Thurs Feb 20 – Feature Freeze (~1 week away)
Thurs Feb 27 – Beta 1 (~2 weeks away)
Thurs Mar 27 – Final Beta (~6 weeks away)
Thurs Apr 03 – Kernel Freeze (~7 weeks away)
The current CVE status can be reviewed at the following link:
Status: Stable, Security, and Bugfix Kernel Updates – Saucy/Quantal/Precise/Lucid
Status for the main kernels, until today (Nov. 26):
- Lucid – Testing
- Precise – Testing
- Quantal – Testing
- Saucy – Testing
We are in a holding pattern waiting to see if any regressions show up that would cause us to respin before the 12.04.4 release goes out.
Current opened tracking bugs details:
For SRUs, SRU report is a good source of information:
Open Discussion or Questions? Raise your hand to be recognized
No open discussions.
For those not yet familiar with this, Ubuntu Touch systems are set up using a read-only root filesystem on top of which writable paths are mounted using bind-mounts from persistent or ephemeral storage.
The default update mechanism is therefore image based. We build new images on our build infrastructure, generate diffs between images and publish the result on the public server.
Each image is made of a bunch of xz-compressed tarballs; the actual number of tarballs may vary, and so can their names. At the end of the line, the upgrader simply mounts the partitions and unpacks the tarballs in the order it’s given them. It has a list of files to remove, and the rest of the files are simply unpacked on top of the existing system.
Delta images only contain the files that are different from the previous image, full images contain them all. Partition images are stored in binary format in a partitions/ directory which the upgrader checks and flashes automatically.
The current list of tarballs we tend to use for the official images is:
- ubuntu: Ubuntu root filesystem (common to all devices)
- device: Device specific data (partition images and Android image)
- custom: Customization tarball (applied on top of the root filesystem in /custom)
- version: Channel/device/build metadata
For more details on how this all works, I’d recommend reading our wiki pages, which act as the go-to specification for the server, clients and upgrader.
Running a server
There are a lot of reasons why you may want to run your own system-image server but the main ones seem to be:
- Supporting your own Ubuntu Touch port with over-the-air updates
- Publishing your own customized version of an official image
- QA infrastructure for Ubuntu Touch images
- Using it as an internal buffer/mirror for your devices
Up until now, doing this was pretty tricky as there wasn’t an easy way to import files from the public system-image server into a local one, nor was there a simple way to replace the official GPG keys with your own (which would result in your updates being considered invalid).
This was finally resolved on Friday when I landed the code for a few new file generators in the main system-image server branch.
It’s now reasonably easy to set up your own server, have it mirror some bits from the main public server, swap GPG keys and include your local tarballs.
Before I start with step-by-step instructions, please note that due to bug 1278589 you need a valid SSL certificate (https) on your server. This may be a problem for some porters who don’t have a separate IP for their server or can’t afford an SSL certificate. We plan on having this resolved in the system-image client soon.
Installing your server
These instructions have been tried on a clean Ubuntu 13.10 cloud instance; they assume that you are running them as an “ubuntu” user with “/home/ubuntu” as its home directory.
Install some required packages:

    sudo apt-get install -y bzr abootimg android-tools-fsutils \
        python-gnupg fakeroot pxz pep8 pyflakes python-mock apache2
You’ll need a fair amount of available entropy to generate all the keys used by the test suite and production server. If you are doing this for testing only and don’t care much about getting strong keys, you may want to install “haveged” too.
Then set up the web server:

    sudo adduser $USER www-data
    sudo chgrp www-data /var/www/
    sudo chmod g+rwX /var/www/
    sudo rm -f /var/www/index.html
    newgrp www-data
That being done, now let’s grab the server code, generate some keys and run the testsuite:

    bzr branch lp:~ubuntu-system-image/ubuntu-system-image/server system-image
    cd system-image
    tests/generate-keys
    tests/run
    cp -R tests/keys/*/ secret/gpg/keys/
    bin/generate-keyrings
Now all you need is some configuration. We’ll define a single “test” channel which will contain a single device, “mako” (Nexus 4). It’ll mirror both the ubuntu and device tarballs from the main public server (using the trusty-proposed channel over there), repack the device tarball to swap the GPG keys, then download a customization tarball from an http server, stack a keyring tarball (overriding the keys in the ubuntu tarball) and finally generate a version tarball. This channel will contain up to 15 images and will start at image ID “1”.
Doing all this can be done with that bit of configuration (you’ll need to change your server’s FQDN accordingly) in etc/config:

    [global]
    base_path = /home/ubuntu/system-image/
    channels = test
    gpg_key_path = secret/gpg/keys/
    gpg_keyring_path = secret/gpg/keyrings/
    publish_path = /var/www/
    state_path = state/
    public_fqdn = system-image.test.com
    public_http_port = 80
    public_https_port = 443

    [channel_test]
    type = auto
    versionbase = 1
    fullcount = 15
    files = ubuntu, device, custom-savilerow, keyring, version
    file_ubuntu = remote-system-image;https://system-image.ubuntu.com;trusty-proposed;ubuntu
    file_device = remote-system-image;https://system-image.ubuntu.com;trusty-proposed;device;keyring=archive-master
    file_custom-savilerow = http;https://jenkins.qa.ubuntu.com/job/savilerow-trusty/lastSuccessfulBuild/artifact/build/custom.tar.xz;name=custom-savilerow,monitor=https://jenkins.qa.ubuntu.com/job/savilerow-trusty/lastSuccessfulBuild/artifact/build/build_number
    file_keyring = keyring;archive-master
    file_version = version
Lastly we need to actually create the channel and device on the server. This is done by calling “bin/si-shell” and then running:

    pub.create_channel("test")
    pub.create_device("test", "mako")
    for keyring in ("archive-master", "image-master", "image-signing", "blacklist"):
        pub.publish_keyring(keyring)
And that’s it! Your server is now ready to use.
To generate your first image, simply run “bin/import-images”.
This will take a while as it’ll need to download files from those external servers and repack some bits, but once it’s done, you’ll have a new image published.
You’ll probably want to run that command from cron every few minutes so that whenever any of the referenced files change a new image is generated and published (deltas will also be automatically generated).
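A minimal crontab entry for that (using the base_path from the configuration above, and running every 15 minutes here) could look like:

    # m h dom mon dow  command
    */15 * * * * /home/ubuntu/system-image/bin/import-images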
To look at the result of the above, I have set up a server here: https://phablet.stgraber.org
To use that server, you’d flash using:

    phablet-flash ubuntu-system --alternate-server phablet.stgraber.org --channel test
I finally got around to releasing version 0.2.0 of Doxyqml, the Doxygen input filter to document QML code. This new version comes with the following changes:
- Support for readonly properties (thanks to Burkhard Daniel);
- Support for anonymous functions (thanks to Niels Madan);
- Keep all comments. This makes it possible to use features like Doxygen @cond/@endcond blocks;
- Improve handling of unknown arguments;
- Add support for \return and \param in addition to @return and @param;
- More tests.
Some of the changes have been sitting in Git for some months now so you may already have them. Others, like support for @cond / @endcond are more recent (like, last-week-recent).
That's it, you can now go back to writing documentation for your code :)