I spent the last weekend of 2013 doing major cleaning. I straightened up the half of my bedroom that counts as my home office, got my printer set up in its rightful space on top of the end table/bookshelf by my computer desk so I can use the scanner, bought new ink cartridges, moved around inspirational and educational books to the office bookshelf, and mounted my whiteboard again. I also bought a check holder rail to mount under my whiteboard. With a clean desk, easy office supply access, and a big whiteboard with a ton of dry-erase markers, I was ready to plan for the year.
One of the big problems with freelancing is time management. There are a lot of things to do, but there are also a lot of pictures of cats to look at on reddit. Between the two, it’s easy for important goals to slip through the cracks. In 2014, I decided to go back to a paper-based time management system that worked so well for my first private IT job years ago. It was invented by David Seah and is called the Printable CEO. This system is a collection of mix-and-match forms which allow you to track time in a variety of ways. You can use any form on its own or combine them to track various projects. It has its foundation in the Getting Things Done method of time management.
When I first started working in IT after graduating, my boss was quite busy with a lot of things, and asked me to keep track of my time and send him a weekly report of the things I worked on. I had never needed to do this before and was able to find the Printable CEO series through searching for time management forms. The Resource Time Tracker was the perfect tool to track my tasks throughout the week, and I actually used the short weekly form on its own week after week. Not only could I see where my time was going, but after a couple of weeks I could actually use it to plan out new projects. When I started writing business reports in Python, it was very useful to know how much actual work I needed to do and how much time I could spend automating. I’ve started a long-term project with a friend that seems just right for these forms and I’ve put it into practice for the first time this month.
For my own day-to-day planning, what I really need is accountability. Every working day since the last week of December, I’ve used the Emergent Task Planner. It’s a single-page sheet that has three work periods (separated with one-hour breaks) where you can list three (or more) major tasks for the day, estimate the time they will take, and then plan when you’ll work on them. There’s another large section for notes and other things. The nice thing about this form is that it was meant to work with the Pomodoro technique, where you work in set intervals. These forms use a 15-minute interval. This means that for every 15 minutes you work on a task, you get to fill in a bubble marking your time spent. This is a silly but addictive reward for getting things done and I’ve found that it works really well for me. I use this for single-day tasks that I know I can finish as well as planning to work on multi-day tasks.
For tasks that need to be tracked over multiple days, I use a form called the Task Order Up. This is like an order check used in restaurants around the world. I write down a task and break it into discrete steps. Then I work on each step and fill in a bubble every 15 minutes. I printed a page of each available color and keep green for direct freelance work, orange for Ubuntu work, and black for anything else. I actually ordered a check rail holder just for these slips. Having them in front of me beside my monitor is a great reminder of my progress.
For its own part, Ubuntu has been a big help in keeping me productive. I’m a big fan of Unity, and with the Launcher hidden, Unity really makes it easy to focus on my work at all times. When I need quick information, the Dash search in Ubuntu 13.10 lets me quickly find not just applications, but also the files and folders I’ve recently worked on for each application. I can do a search and find the folders and files I’ve been working on and open them quickly. Occasionally I’ll put on background music, and the Dash is up to snuff with music searches as well. Meanwhile the messaging indicator keeps me aware of incoming emails without diverting my attention. And the date indicator keeps track of any appointments I enter into my Google account via my phone. There are a few Pomodoro apps for Ubuntu, but I don’t use them personally. I prefer to keep track of start and stop times myself. Still, Ubuntu is one of the major reasons I’ve been productive this year! Well, if you don’t count the time I’ve spent playing Kerbal Space Program via Steam, anyway.
This year I set out to renovate the way I do business, and I found some wonderful, clean time management forms I can use on paper. Ubuntu continues to be the perfect fit for my desktop and laptop computers. Thanks to this combination of organization and accountability, I’ve been able to really get work done and adjust my schedule for my strengths and weaknesses. A month and a half into 2014, the year’s looking bright. I’m grateful to combine the best of legacy time organization and the best software in computing to create a powerful foundation to build on.
The day will start off at 16:00 UTC with a session by Elizabeth Krumbach Joseph (pleia2) consisting of a quick tour of all the Documentation resources available, including Desktop, Server, Wiki and Manual.
The next five sessions will give potential contributors an overview of contributing to each of the resources (all times are UTC; click on a time for a link to time conversions):
- 17:00: Getting started contributing to Desktop docs by Kevin Godby (godbyk)
- 18:00: Getting started contributing to Server docs by Doug Smythies (dsmythies)
- 19:00: Getting started contributing to the Wiki docs by Svetlana Belkin (belkinsa)
- 20:00: Getting started contributing to Manual by Kevin Godby (godbyk)
- 21:00: Ubuntu Manual versions explained by Thomas Corwin (tacorwin) and Patrick Dickey (patrickdickey)
So come learn how to contribute to documentation with us!
Sessions all take place on Internet Relay Chat (IRC). If you want to participate, you just need to join #ubuntu-classroom and #ubuntu-classroom-chat on irc.freenode.net in your IRC client, or use the browser-based webchat. The instructor will give the class in #ubuntu-classroom and attendees can chat about the class and ask questions in #ubuntu-classroom-chat.
If you’re unable to attend, logs of each session will be made available following the event.
We hope to see you on Sunday, March 2nd!
Among all the icons I make for software throughout the community (or for my own devices) are those for Ubuntu Touch applications, which, given the publicity around the new style, I have been doing more of recently.
So think of this post as an offering of my abilities. As such, I've created a small Google form where you can request an icon (or several) from me for your app: Ubuntu Icon Requests.
There have been quite a few entertaining discussions on the interwebs about Ubuntu and concerns around privacy. This topic comes and goes on a regular basis; today it has come up because Mozilla are planning on putting some fairly harmless adverts on the blank tiles of new tabs, and this is being compared to the Dash search in Ubuntu. Whenever the topic is raised it tends to be a fairly heated discussion, mostly focussing on the Amazon search results in the dash and mostly calling them adverts or spyware. It is a discussion that is largely overblown and underinformed, with so much time spent freaking out about “adverts” that the real problems have been completely missed. Let’s go through a bit of history, and I will try to explain the difference between the real problems and the FUD.
Initially there was the Gnome 2 application launcher, kind of similar to the Windows start button: a way to run applications that you have on your computer. They are nicely categorised, so you can find all the graphics-related applications on your computer, see Inkscape alongside Gimp, and choose what you want to run. This worked well and people were generally satisfied with this mechanism for running local applications. Then along came Unity, which introduced the launcher, a dock bar on the left that shows running applications and has the ability to pin applications so you can start them by clicking on them when they are not running. The launcher is the way to run applications that you have on your computer – but not all of them, and not categorised, just your favourite ones you have pinned to the launcher. Unity also introduced the dash. This has a different scope of functionality; I like to call it the OmniGlobalEverywhere search tool. You type stuff in and it searches in lots of places to find what it is you are looking for. This is not the same scope of functionality as the Gnome 2 application launcher: it could search for local files, videos on YouTube and other streaming services, music, photos and other things. It is an extensible search interface and you can plug in additional search things. I wrote an OpenERP plugin so I could type an invoice number and jump straight to that invoice in a browser, for example. It was a pretty cool concept as a jack-of-all-trades search interface – but it isn’t the master of the specialised job of viewing and running applications you have already got installed.
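As a sketch of that "plug in additional search things" idea: an extensible search interface boils down to a registry of per-source search functions that a single entry point fans out to. Note this is not the real Unity lens API (real lenses are built on libunity); every name below is invented for illustration.

```python
# Hypothetical sketch of an extensible "search everything" interface.
# This is NOT the real Unity lens API; all names here are illustrative.

LENSES = {}

def lens(name):
    """Register a search function under a lens name."""
    def register(fn):
        LENSES[name] = fn
        return fn
    return register

@lens("applications")
def search_apps(query):
    # Stand-in for the local applications lens.
    apps = ["gimp", "inkscape", "gedit"]
    return [a for a in apps if query.lower() in a]

@lens("invoices")
def search_invoices(query):
    # Stand-in for the OpenERP plugin: map an invoice number to a URL.
    return [f"https://erp.example.com/invoice/{query}"] if query.isdigit() else []

def dash_search(query):
    """Fan the query out to every registered lens."""
    return {name: fn(query) for name, fn in LENSES.items()}
```

The point of the design is that adding a new search source is just registering one more function; the dash itself never needs to know what a source does.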
Everyone completely missed the fact that the magic privacy button for a long time did almost nothing – it was just an undocumented flag that some lenses looked at and turned themselves off. Others did not. This was a real big deal and nobody noticed, because they were obsessed with calling Amazon search results adverts. Now we have all kinds of odd lenses and search queries possibly going to yelp, zotero, yahoo finance, songster, songkick, gallica, europeana, etsy, COLORlovers and other places. Have you even heard of every single one of these? Do you know they are not evil? Do you know they are financially stable enough not to close the doors and let the domain renewal lapse for someone evil to buy it? Amazon I know and trust to continue existing; I also trust them not to want searches for partial, mostly irrelevant words for profiling data when they have my product purchase history. The utter junk that the dash sends is of no value to Amazon compared to everything else they have, but this doesn’t stop people banging on about that one specific lens, relatively harmless and pointless in equal measure.
Firstly, the Amazon lens is nothing special, and it is perhaps the internet-connected lens I am least worried about. I trust Amazon to do what I expect them to do; I am a customer, so they know what I bought, and sending them random strings like “calcul” and “gedi” and “eclip” does not give them valuable data. It is junk. I am much more concerned about things like the Europeana, jstor and grooveshark lenses, which do exactly the same thing but where I have no idea who those organisations are or what they do. Even something like openweathermap sounds good, but are they really a trusted organisation?
So, back to how it works. Your query for “socks” goes to products.ubuntu.com. At that point Canonical’s secret-sauce server looks at your query and decides that most people who search for socks want either products to buy or applications to run. They don’t tend to click on the results from the medicines or recipes lenses when those lenses are shown to the user. So, having decided that the shopping lens and the applications lens are reasonable ones to search in, it sends the query to Amazon (the only shop currently supported, though it is designed to support every online sock vendor in the world) and tells your computer that the applications lens is worth looking in. When the results come back from Amazon they go to your computer as a bunch of JSON data that is very similar to the Amazon JSON API. Amazon at this point thinks that Canonical’s server has got cold toes and is in need of some nice warm socks. Amazon does not know you exist at this stage.
That bundle of sock-related data goes to the shopping lens on your computer, which then displays the results. It does this by showing some text (“stripy socks, only £5.30”) and a picture, which it used to retrieve from Amazon’s content distribution network – O.M.G.!!! a data privacy leak. Amazon could log hits to their CDN (which I doubt they do), consolidate them globally, and figure out that it was serving a bunch of sock pictures requested by your IP address shortly after Canonical’s server searched for socks, so they could theoretically tie this together and infer that the reason you are staring at sock pictures is because you searched for socks via the dash. So this huge and seriously concerning data privacy leak was a problem, and they fixed it. Now when you search for socks, Amazon gets CDN requests for images from products.ubuntu.com, and your computer gets the images from products.ubuntu.com (over https rather than http); it is now basically a reverse proxy for Amazon images, so Amazon is now more convinced than ever that Canonical’s server has got cold toes. As it happens, there is nothing wrong with your toes and you actually wanted to configure a SOCKS proxy all along, and the shopping thing was a pointless overhead, because when you want new socks the dash isn’t where you dash to.
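To make the mediated flow concrete, here is a toy model of it (stub functions only, no real network traffic; every endpoint name below is invented): the client talks only to the central server, which queries the shop and rewrites the image URLs so the client never touches the vendor's CDN directly.

```python
# Toy model of the dash shopping flow described above. All endpoints
# and payloads are stand-ins; no real network traffic occurs.

def amazon_api(query):
    # Amazon only ever sees the central server as the client.
    return [{"title": f"stripy socks matching '{query}'",
             "image": "https://cdn.amazon.example/sock.jpg"}]

def central_server(query):
    # products.ubuntu.com: forwards the query, then rewrites image URLs
    # so the user's machine never contacts Amazon's CDN itself.
    results = amazon_api(query)
    for r in results:
        r["image"] = ("https://products.ubuntu.example/proxy?src="
                      + r["image"])
    return results

def dash_client(query):
    # The user's computer only ever talks to the central server.
    return central_server(query)
```

In this shape, the shop sees one anonymous, very cold-toed client (the central server), while the user's machine sees only the central server.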
There is a conversation on the technical board mailing list here https://lists.ubuntu.com/archives/technical-board/2013-October/thread.html and here https://lists.ubuntu.com/archives/technical-board/2013-November/thread.html relating to the closedness of the server-side app. Having written something a bit similar myself, mine was closed for a while because it contained the Amazon API OAuth keys in the source code. There really isn’t much to it on the server side; my server code is here: https://github.com/AlanBell/shopping-search-provider/blob/master/server/index.php
As you may have read, I have been working on a Postfix Juju charm. Over the last year or so, I have been squashing minor bugs in the initial branch, and the charm is now on the Juju Charm Store (and has been for about a week)!
So, you can now just go ahead and have a Postfix server deployed in minutes. It includes SSL configuration in case you want to enable it, of course. Feel free to report any bugs you may find, and I’ll make sure to get them fixed as soon as possible.
Before the start of the holidays last year, the Ubuntu Community Council was approached by a concerned member of the community regarding the news that Linux Mint had been asked to sign a license agreement in order to continue distributing software packages out of the Ubuntu repositories.
Over the past two months, the Community Council has had several discussions, mailing list threads and meetings about this. In addition, we’ve reached out to another derivative for their understanding of the situation and spoken with external legal experts.
We are more than aware that some time has passed since the original approach, and feel that we need to make it known that we’ve not been ignoring the situation. Legal issues are complex and we have to be mindful of the difference between personal and legal opinions. Understanding the weight our words would carry, we felt it was important to take time to gather facts and discuss the issue thoroughly.
At this time, we are in agreement that one of the keys to Ubuntu’s success is in providing a well-designed, reliable and enjoyable experience to all of our users, whether they are using Ubuntu on a desktop, a phone or in the cloud. To that end it is critical that when people see “Ubuntu”, it adequately represents the software that we all build and stand behind. This is as important to our individual reputations as it is to the reputation of the project as a whole. Trademarks and copyrights are the legal tools provided to us for safeguarding those reputations, and it’s part of Canonical’s mandate within the Ubuntu project to use those tools appropriately, balancing the needs of all those involved in making Ubuntu. Canonical already provides a license for the use of these to the Ubuntu project and all of its distributions, including Ubuntu itself as well as those flavors that are developed in collaboration with it.
We believe there is no ill will against Linux Mint from either the Ubuntu community or Canonical, that Canonical does not intend to prevent them from continuing their work, and that this license is meant to help ensure that. What Linux Mint does is appreciated, and we want to see them succeed.
The Community Council feels that Canonical is making an honest and reasonable effort to balance the needs of the community, and that any specific legal concerns should be addressed to the legal counsels of those involved.
Finally, the Community Council would like to take this opportunity to remind people that it is important to work in a respectful, collaborative manner when there are issues that concern the community. While this has been a valuable discussion to have, it’s also important to remember that everybody involved in the Ubuntu project, Canonical included, wants to see it and open source in general succeed and become as widely used as possible. Be mindful not to get caught up in a controversy when a discussion with the parties concerned could clear up any misunderstandings. When you do have concerns about an issue such as this, we strongly encourage you to contact the Community Council directly, and we will always do our best to provide accurate information or, when necessary, appropriate intervention to resolve the issue to the benefit of everybody involved. We are available to everybody, inside or outside the Ubuntu community.
Ubuntu Community Council
Let me just start off by saying that I love iloveubuntu.net. I subscribe to their RSS feed and read it pretty regularly, as regularly as I read any other feed. I have a great deal of respect for razvi and the writing he does for that site, and I would never mean any disrespect to him.
So what happened? When I started my UbBloPoMo project, one of the things I said I would do was write a post in the evening, and schedule it to publish the following day. That is what I’ve done for every post this month, and the last one was no exception. I had written my article, and scheduled it to publish at 4am my local time (9am UTC), and at that point in time I was unaware of razvi’s editorial on the same subject (which may or may not have been published by the time I wrote mine).
My choice in headline was in no way a reference to his. In fact, I chose the wording specifically because it wasn’t similar to any other headline for any other article I had read yesterday, so that nobody would feel like I was singling them out in particular. Of all the different ways a headline could have been worded, it was just a matter of random chance (or dumb luck on my part) that mine and his were as similar as they were.
I still stand by what I said, about Mozilla and about us as a community. But I wanted to apologize to razvi for the confusion and hurt feelings caused by my headline. And to reiterate, I still love iloveubuntu.net, I still have a great deal of respect for razvi, and I will continue making his site part of my regular news reading.
Stuart Langridge, Jeremy Garcia, Bryan Lunduke, and I wend our troublesome ways down the road of:
- We weigh in on the upstart/systemd brouhaha in Debian and discuss what happened, why it happened, and whether it was a good thing or not.
- Bryan reviews the Lenovo Miix 2 tablet and we get into the nitty gritty of what you can do with it.
- We take a trip down memory lane about how we each got started with Linux, which distributions we used, and who helped us get on our journey.
- We recap and look at community feedback, with guns, 3D printing, predictions, Bad Voltage gaming, the Bad Voltage Selfie Competition and more all making an appearance.
Go and listen or download it here
Be sure to go and share your feedback, ideas, and other comments on the community discussion thread for this show!
Also, be sure to join in the Bad Voltage Selfie Competition to win some free O’Reilly books!
Finally, many thanks to Microsoft for helping us get the Bad Voltage Community Forum up and running, and thanks to A2 Hosting for now hosting it. Thanks also to Bytemark for their long-standing support and helping us actually ship shows.
Finally, finally, I found the time (spread over several months) to refurbish my blog. Not that it took so long, but spare time is rare these days.
I decided to stick with Drupal and created a fresh and clean installation of Drupal 7 to replace the old Drupal 6. Drupal is somewhat overkill for a simple blog, but the alternatives did not convince me, for various reasons.
Now I do not want to dive into the whys and why-nots, but to point out some notable things with regard to Drupal. Setting up Drupal is straightforward, of course, and everything works fine and smoothly, but the real work comes with adjusting it to become what you want. Its flexibility allows you to realize almost anything; on the other hand, it is also the reason why some things are laborious to accomplish.

Layout and Modules
I chose Bamboo as the base theme and sub-themed it. The blog now looks less stale, long code parts are presented properly, and it has a mode for mobile devices. It required diving into Drupal theming a bit, but not too much; the documentation provided with Bamboo was already very helpful. The big plus of a sub-theme is that you can easily update the main theme without patching around endlessly. Theoretically it can still break your layout if major changes are applied. In the end, this task was pleasant to complete.
Afterwards it was about selecting modules and doing the configuration. Mostly it was: searching, installing, configuring, done. Just three things I want to point out here:
- There's no easy solution for a guest preview as offered by Wordpress. You can achieve it by doing something complicated (I did not follow this) or by using the view_unpublished module. It does not offer the same convenience, but it is good enough.
- Standard elements like captions and buttons are now finally localized into the most common languages, e.g. English, Spanish, Chinese, Russian, Arabic and German. Drupal does not ship translations by default.
- Avoiding blog spam. On the old version I used reCaptcha. I believe the only type of commentators it kept away were authentic people; instead I had the doubtful pleasure of moderating tons of SEO spam. Now I use a honeypot approach, and so far (in testing) it works incredibly well and does not get in the way of real people. I am very fond of this.
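For the curious, the honeypot idea is simple enough to fit in a few lines. This is a generic sketch, not the code of any particular Drupal module: a form field that humans never see (it is hidden with CSS) must come back empty, and optionally the form must not come back implausibly fast. The field name and time threshold below are arbitrary choices for this sketch.

```python
def is_spam(form_data, rendered_at, submitted_at,
            honeypot_field="website_url", min_seconds=5):
    """Flag a comment submission as likely bot traffic.

    form_data:    dict of submitted form fields
    rendered_at:  Unix timestamp when the form was served
    submitted_at: Unix timestamp when the form came back

    A filled-in honeypot field (invisible to humans, so only bots fill
    it) or an implausibly fast submission marks the post as spam.
    """
    filled = bool(form_data.get(honeypot_field, "").strip())
    too_fast = (submitted_at - rendered_at) < min_seconds
    return filled or too_fast
```

Unlike a CAPTCHA, nothing here ever asks a legitimate visitor to do extra work, which matches the experience described above.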
I wish I could get the latest Drupal from the repositories, either the original Ubuntu ones or a PPA. Web software evolves fast, releases often, and frequently closes security issues. Unfortunately, neither is provided (only older packages in the 12.04 repositories).
So I need to keep Drupal up to date by hand. Anyone who has ever read the update instructions knows that you don't want to do it by hand: a lot of stuff to do. A perfect condition for the lazy CS guy, and a good opportunity to refresh my shell scripting. I could automate a lot of the ugly and boring stuff. What is left for me is to kick off the script and to get in and out of maintenance mode. Even this could be achieved without human interaction, but so far I prefer to keep control. In the end, I need to ensure everything works as expected anyway.

Migration
The fun part. First, why did I not upgrade from Drupal 6 to 7, but instead made everything from scratch? Because I made some decisions with the old configuration that were not so useful. Then, there were some modules that were discontinued or replaced, with a lacking upgrade path. And somewhere in my head was stuck the idea that an upgrade was problematic or not recommended, though this is probably a goof of my own memory. Well, in the end almost everything was ready and was just waiting for the content.
To migrate the content, i.e. blog posts, static pages, comments, tags, from Drupal 6 to 7 was easy in the end, once you found the way and fixed what was missing.
There is a module that provides exactly this transfer from an old Drupal 6 installation to a new Drupal 7 one, with a GUI. I really did not want to write an upgrade script, because I would have needed to get into those details again, while all the content types were standard ones. So, a GUI was a plus. At that time there was no stable release including the GUI, though, so I took the development version. Took it, ran it, was delighted.
Only a little bit later I found out that the tags were not assigned and that node and term IDs (tags) had been shuffled.
Reassigning the tags worked with some SQL select and insert:

INSERT INTO field_data_field_tags
  (entity_type, bundle, deleted, entity_id, revision_id, language, delta, field_tags_tid)
SELECT
  'node' AS 'entity_type',
  'blog' AS 'bundle',
  0 AS 'deleted',
  node.nid AS 'entity_id',
  node.nid AS 'revision_id',
  'und' AS 'language',
  (@jDelta := @jDelta + 1) AS 'delta',
  taxonomy_term_data.tid AS 'field_tags_tid'
FROM taxonomy_term_data, node, oldDatabase.term_data, oldDatabase.node, oldDatabase.term_node,
  (SELECT @jDelta := 0) AS jDelta
WHERE oldDatabase.term_node.nid = oldDatabase.node.nid
  AND oldDatabase.term_node.tid = oldDatabase.term_data.tid
  AND taxonomy_term_data.name = oldDatabase.term_data.name
  AND node.title = oldDatabase.node.title
ORDER BY entity_id;
So, the node IDs and term IDs were left. This is a problem, because they are contained in the URLs. From an SEO point of view, keeping them different will confuse search engines. They will likely get it right after a while, but as a former SEO consultant you want to do it the right way. Changing them back would work, but the IDs are used everywhere and there are a lot of tables. Before I decided on the migrate module I had considered migrating the content by simply copying it from the old to the new database, but things had changed and, without really getting down into it, many new tables and columns remained unclear.
The lazy approach was to redirect the old node IDs to the new ones:

SELECT CONCAT('redirect 301 /node/', oldDatabase.node.nid, ' http://www.arthur-schiwon.de/', alias)
FROM node, url_alias, oldDatabase.node
WHERE node.title = oldDatabase.node.title
  AND source = CONCAT('node/', node.nid);
It redirects the old URLs containing the old node IDs to the clean URLs. For some reason, something happened with the canonical tag in Drupal 6 so that the old clean URLs were not used, but the ugly ones were. I do not want to have those in the search engines; now this is fixed as well. The result somehow contained duplicate lines, but they could easily be dropped or the correct alias chosen. In a few cases I needed to update the alias, as commas led to some problems. I pasted the result at the beginning of the .htaccess file. The same needed to be done for the term IDs.
It is not the best approach, but given the limited time I could and wanted to spend, it is OK. In the end, it's a private blog for fun and fame, not for profit.
It is essential to check whether all important old URLs are still reachable, to avoid broken links. Broken links are bad for visitors as well as search engines. I used linkchecker, available in the Ubuntu repositories, to collect all the URLs from my old site:

linkchecker -Fcsv/urlstate.csv --stdin -t1 -r0
A lot of stuff is gathered. I took all the paths pointing to my domain, replaced the domain with my test domain, saved them in a text file and ran curl against each of them; I wrote a small script for this:

#!/bin/sh
OUTPUTFILE=new-url-stats.csv
for url in `cat urls-new-ws`; do
    status=`curl -I $url | grep "HTTP/1.1"`
    echo "$url,$status" >> $OUTPUTFILE
done
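The spreadsheet filtering of the resulting url,status file can also be scripted; here is a small sketch (in Python, purely for illustration) that pulls out the rows whose status line reports a 4xx error. The sample URLs in the test are made up, and the format assumed is exactly what the shell loop writes.

```python
import csv
import io

def failing_urls(csv_text):
    """Return URLs whose recorded status line reports a 4xx error.

    Expects rows in the url,status format the shell script writes,
    e.g. "http://example.org/foo,HTTP/1.1 404 Not Found".
    """
    bad = []
    for row in csv.reader(io.StringIO(csv_text)):
        if len(row) < 2:
            continue                      # skip malformed or empty rows
        url, status = row[0], row[1]
        parts = status.split()            # ["HTTP/1.1", "404", "Not", "Found"]
        if len(parts) >= 2 and parts[1].startswith("4"):
            bad.append(url)
    return bad
```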
In the resulting CSV file I had the URL and the status, good enough for me. In LibreOffice, I auto-filtered it and sorted out the faulty or suspicious URLs, i.e. those throwing 4xx errors. If things needed to be fixed, I fixed them and reran the script until I was satisfied.

Future
I wondered whether I should switch away from Drupal, but decided to stay with it. The migration should be performed as well as possible while spending as little time as possible; in the end, it took quite some time to investigate and find the right strategy. Maybe it would have been faster with a direct upgrade. Probably it is easier and more straightforward to use software that is dedicated to running blogs. This question will reappear when the next iteration of the blog is done in some years, and I cannot promise to stay with Drupal, since I really only use a little bit of the whole feature set. But I am not a fan of either Wordpress or Ghost, so let us see which options will be out in the wild then.
I am satisfied with the result, though there are a few rough edges that can be taken care of later. It really is a huge relief to deliver "Comment" buttons and the like in common languages instead of only German, and to be able to read the blog properly on mobile devices.
Now I only need to find time to blog more often ;)
Just about all of you reading this know that I am a technical writer. One of the things I do to keep up to date with the latest trends in the field is read. I read books, articles, blogs, whatever I can find that relates. I especially enjoy Mark Baker’s blog, Every Page is Page One. Baker consistently posts articles that make me think, and in good ways. When I heard he has a book out, I contacted the publisher immediately.
As a side note, Baker’s publisher, XML Press, consistently produces books that I find useful. Every one I have read is well-written, authoritative, and filled with real-world experience and practicality.
Every Page is Page One: Topic-based Writing for Technical Communication and the Web shares the first part of its title with the blog, but the content is not directly from the blog. Rather than a collection of posts on assorted topics assembled into book form, this is a well-thought-out and well-organized text. In it, Baker observes that documentation projects tend to think about technical writing from a very book-centered paradigm. This was once ideal, but in the age of communicating technical information electronically, it forces limits on the end product that hinder the true goal of technical writing, the goal of delivering the right information at the right moment to the person who is seeking it. As someone who is not only a technical writer, but who also has a degree in information resources and library science, I have multiple reasons for supporting this goal.
What Baker does is give tangible form to thoughts and ideas that he, other technical writers, and even I have had in the abstract. How do we provide needed information to people who seek it in an age where the web makes almost anything searchable? Do manuals still matter? What about other forms of documentation? Are there changes to our style of communication, to our style of writing and presenting information, that will make the information seeker’s task easier? Baker discusses serious and realistic ways we can improve our field. It is all organized around the idea that we can no longer control the order in which information seekers will consume or even find our information, that every page (in a documentation wiki, for example) should be created in a way that enables a user to immediately understand and acquire what they need when they need it. Since we know we do not have this control like we had in a printed book, we must modify how we write and present information to fit the expectations of the seeker.
I enjoyed reading this book. I have benefited personally from reading this book. I am taking this book into my workplace to share with the other tech writers there, and I believe our workplace, our employer, and our customers will benefit from it. If you work in the field, I’m convinced you will, too. The whole book is good, but my favorite parts are Section I, which lays the foundation in five chapters, and Chapter 22, which gives very practical and useful advice for making your case to others when you begin to try to make the changes the book describes.
At the Ubucon at Southern California Linux Expo on Friday, February 21st, I'll be doing a presentation on five ways to get involved with Ubuntu today. This post is part of a series outlining ways that regular users with only minimal user-level Ubuntu experience can get involved.
Interested in having a polished release but not able to contribute in a very technical way? Testing pre-releases is a great way to get started, even Mark Shuttleworth is getting in on the testing fun!
In this post, I'll walk you through doing an ISO test, but there are also package and hardware/laptop tests that can be done; full details here: https://wiki.ubuntu.com/Testing/QATracker

Testing an ISO

Log on to the testing tracker
You will want to go to http://iso.qa.ubuntu.com/. The page can be a bit overwhelming at first, but there are two sections you’ll want to focus on, the log in button and the list of builds available for testing.
To log in you'll need an account with https://login.ubuntu.com/; clicking on the "Log in" button will take you to a page where you can create one or use your existing one.

Select a build to test
Most days the only build that is currently being tested is the "Daily" image – so in the screenshot above that is "Trusty Daily" and you'll want to click on that link. The "Trusty Alpha2" and "Trusty Alpha1" images have already been released, so ISO testing on those is no longer necessary.

Select something to test
This screen can be a bit overwhelming too since it lists all the possible builds in the ISO tracker, which is a lot! I highly recommend using the Filters on the left hand side of the screen to select only the builds you’re interested in. In this screenshot I selected only Ubuntu and Xubuntu to make the list short.
Then you can look to see what you want to test. Do you have a new computer? You can test the 64-bit ISOs; I circled where you want to click in the screenshot if you want to test the Xubuntu 64-bit ISO (I do!).

Select what test you want to do
At this next screen you will be presented with a series of tests that you can do. The easiest is “Live Session” since it doesn’t require you to install anything, it’s just testing a live session. You then also have various options for Installation-based testing.
But let's say you have a virtual machine (VirtualBox is free and pretty easy to use for this) or a spare computer you want to do tests on; for the purpose of this walkthrough we'll select the "Install (entire disk)" test case.

Download and prepare the ISO
Once you're on the screen for the test case, there will be a link to download the ISO. There are many options for downloading, including just clicking the link to download via http, or downloading via rsync, zsync, or torrent; you can read more about all of these options once you learn more about testing. For now, downloading it through http is fine.
While you’re waiting for it to download you can click on “Testcase” in the grey box below that to read through what you’ll be doing in the test case.
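For those comfortable at a terminal, the download-and-prepare steps can also be sketched on the command line. This is a hedged example: the zsync URL and the `/dev/sdX` device name are illustrative, so double-check both before running anything (dd will destroy whatever is on the target device).

```shell
# Fetch the daily ISO with zsync (only downloads the changed blocks on
# subsequent runs -- handy when testing dailies repeatedly)
zsync http://cdimage.ubuntu.com/xubuntu/daily-live/current/trusty-desktop-amd64.iso.zsync

# Verify the download against the published checksum before testing
md5sum trusty-desktop-amd64.iso
# ...and compare the result to the value listed alongside the image.

# Write it to a USB stick (replace /dev/sdX with your stick's device!)
sudo dd if=trusty-desktop-amd64.iso of=/dev/sdX bs=4M
sync
```

If you are testing in VirtualBox instead, you can skip the dd step and attach the ISO directly to the virtual machine.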
Once downloaded, either use the image directly in something like VirtualBox, or put it on a USB stick or burn a DVD.

Begin the test case
Scrolling down on the same page, you will see this:
This is where you will report the results of your test. The “Bugs to look for” are a list of bugs that others have reported that you may encounter, so you might want to look at some of those and include those bug numbers in your test if you encounter them too.
A quick rundown of the meaning of each field is as follows:
Result: Whether or not you were able to get through to the end of the test case with no fatal errors
Critical bugs: Bugs that prevented you from finishing the test case (would generally go along with “failed” above)
Bugs: Bugs that exist, but you were able to work around them and finish the test case (it can be marked as “passed” and still have bugs)
Note: Don't stress too much over whether you believe a test really passed or failed and whether bugs are critical or not; there is some judgement involved here, and results are reviewed by release managers who decide whether the ISO is ready for releasing. Just do your best!
Hardware profile: This is an optional field that can give the team an idea of what your hardware is. Using a virtual machine? Actual hardware? How much RAM and what type of graphics card? Put as much information as you can online somewhere and paste your link here. For example, here’s the testing profile I use for my Lenovo G575 and another for when I test in a 1.5G RAM virtual box instance. You can also choose to use the “hardinfo” command to generate information about your hardware and put it online somewhere.
Comment: You can add any additional comments you may have about doing the test case.
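If you want to generate a hardware profile to paste online, here is a sketch of two approaches. The `hardinfo` flags below are my reading of its report options, so treat them as an assumption and check `hardinfo --help` on your system; the `/proc` and `lspci` fallback uses only standard Linux tools.

```shell
# Option 1: hardinfo report (install with: sudo apt-get install hardinfo)
hardinfo -r -f text > hardware-profile.txt

# Option 2: a quick manual summary from standard sources
grep 'model name' /proc/cpuinfo | head -1   # CPU model
grep MemTotal /proc/meminfo                 # installed RAM
lspci | grep -i vga                         # graphics card
```

Paste the resulting text somewhere public (a wiki page or pastebin) and link it in the Hardware profile field.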
If you run into any bugs while doing your test, you will need to submit those bugs for them to be recorded. For this you will need an account on launchpad.net; if you don't have one, get one by clicking here. Once you submit a bug you'll want to add that bug number to your list of Bugs (or Critical Bugs). Learn more about reporting bugs here.
Note: Reporting bugs can be hard, particularly determining what package to file them against; even I still struggle with this! My recommendation is to do your best, make sure you add your bug to the tracker so people notice it, and ask for help on the ubuntu-quality mailing list if you're really unsure.

Done
Click "Submit Result" and you'll be finished reporting a test case! The Ubuntu community thanks you :)

Learn more about testing
A more thorough walkthrough with more screenshots can be found here: https://wiki.ubuntu.com/Testing/ISO/Walkthrough
You can sign up for and email the ubuntu-quality mailing list to introduce yourself and ask any questions you may have, they’re a friendly bunch.
Visit the Quality Assurance Team wiki for more about other kinds of testing.

Previous posts in this series
So here I am with a static blog.
I was on Wordpress. I like Wordpress; in particular, I like the vitality of it. There's a large community of people using it and working on it and making plugins and making themes, and it's become apparent to me over the years that one of the things I care about quite a lot, when using software which is not a core part of what I do, is that I do not have to solve every problem I have myself. That is: I would like there to be the problem-solving method of "1. google for desired outcome; 2. find someone else has written a plugin to do it", rather than "1. google; 2. find nothing; 3. write code". What this means is using some project with a largeish community. So I settled on Pelican, because it's one of the more popular static blog engines out there, and hence has a vibrant community.
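For anyone wanting to follow the same route, getting a Pelican blog going is only a few commands. This is a minimal sketch, with illustrative paths, assuming Python and virtualenv are already installed:

```shell
# Keep everything self-contained in a virtualenv inside the project
virtualenv ~/blog/env
. ~/blog/env/bin/activate
pip install pelican Markdown

cd ~/blog
pelican-quickstart    # asks a few questions, scaffolds pelicanconf.py,
                      # a content/ folder, and a Makefile

# Render the Markdown posts in content/ to static HTML in output/
pelican content -o output -s pelicanconf.py
```

The output/ folder is plain HTML you can serve from anywhere; nothing dynamic left to 0wn.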
At this point there will be questions.
If you wanted a vibrant popular community why didn’t you use Jekyll?
I couldn’t work out how to install it.
It says: gem install jekyll. I did that and it says Permission denied - /var/lib/gems/1.9.1. So for some reason a command run as me wants to install things in a system-level folder. No. System level belongs to apt. Let the word go throughout the land.
I’m sure that it’s entirely possible to configure RubyGems so that it installs things in a ~/gems folder or something. But I don’t want that either: I want this stuff to be self-contained, inside the project folder. Node’s npm gets this completely right and I am impressed down to the very tips of my toes with it. Python gets it rightish: you have to use a virtualenv, which I am doing. Is there a virtualenv-equivalent for Ruby and RubyGems? Almost certainly. But I’m not trying to learn about Ruby, I’m trying to set up a blog. Reading up about how to configure Ruby package installation to be in the project folder when you’re trying to set up a blog isn’t just yak-shaving, it’s like an example you’d tell a child to explain what yak-shaving is. So no Jekyll for me, which is a bit annoying, but not too much since Pelican looks good. And I know Python pretty well, and don’t like Ruby very much, so that’s also indicative.
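For completeness: the npm-style, project-local setup I wanted does exist for RubyGems, via the GEM_HOME environment variable (or Bundler). A hedged sketch, with an arbitrary .gems path, in case anyone else hits the same wall:

```shell
# Install gems inside the project folder instead of /var/lib/gems,
# so no sudo is needed and apt's territory stays untouched
cd ~/blog
export GEM_HOME="$PWD/.gems"
export PATH="$GEM_HOME/bin:$PATH"
gem install jekyll

# Or, with Bundler, after listing jekyll in a Gemfile:
#   bundle install --path vendor/bundle
```

But as I said, discovering that is exactly the yak-shaving I was trying to avoid.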
Why are you using a static blog engine at all? What was wrong with Wordpress?
It got owned. I got an email from a friend of mine saying “hey, did you know that if you look at your blog in an old browser, such as Dillo, there’s a bunch of spam at the top of it?”
I did not know. But it was the case. Sigh.
There are plenty of guides around about how to fix this: dump the DB, reinstall Wordpress, restore the DB, then look for fixes, etc, etc, etc. And I thought: wow, that’s a bunch of effort and what do I get for it? I’m still vulnerable to exactly the same problem, which is that an upgrade to WP happens, it notifies me, I notice thirty nanoseconds later, and in that thirty nanoseconds some script bot somewhere 0wns the blog. I could, in theory, fix this by spending much more time setting up something to auto-update WP, but in practice that’s hard: what do I do, svn update every fifteen seconds in a cron job? Nightmare.
So, what am I getting from Wordpress that I’ll lose if I go static?
My list of plugins contains a bunch of stuff which is only relevant because it is Wordpress and thus dynamic: caching, spam, that sort of thing. Static sites don’t need any of that. I like Jetpack a lot; it gives me a nice mobile theme and stats, and I’ll lose that (as well as comment moderation from a mobile app, which I don’t care about if I don’t have comments; see below). I have a bunch of nice little plugins which throw in features that I like, such as adding footnotes, which I’ll lose. Counterbalance that with how it’s basically impossible to put a <script> element in a Wordpress post, which is incredibly annoying. I won’t be able to create posts if I’m away from my computer (without doing a bunch of setup), but in practice I don’t do that, it turns out. And finally, comments.
Hm, comments. On the one hand, I like my commenters; there have been interesting discussions there, and normally informative. On the other hand, there’s a lot less commenting on blogs going on these days; you get much more real-time discussion on Twitter or G+ about a post than you do on the post itself. That’s a bit worrying — you’re losing the in-one-place nature of the conversation, and their bit of the conversation might vanish from the public discourse if G+ shuts down, which is why I don’t like Disqus — but that’s happening anyway and can’t be stopped. So maybe I can live with it.
Also, themes, but Pelican has a bunch, including rather excellently the same theme I was using on Wordpress! So it looks like nothing changed! Rawk.
So, let’s see how it goes. Possibly I’ll find something else critical that I’m missing and migrate back… and I do still have to write a footnotes plugin… but so far we’re feeling good about it.
Seriously? If you find yourself writing a headline like this, you need to take a break from the internet. If you clicked on this one because you wanted more blood-sport FOSS drama, you’re out of luck (and you should also take a break from the internet). Come on guys, what’s happened to us that we keep jumping from one outrage to another? For a group that loves to make allusions to 1984 when privacy is concerned, you’d think we would put up more resistance to these 2-minutes hate sessions when they are pushed on us.
Mozilla was the first open source project I ever took interest in. I downloaded the source code for Netscape 5 (remember that?) and was *amazed* that they would just give it away to anybody like that. I am involved in open source now, all these years later, because of that one experience. Firefox is still one of the most well known and most used pieces of open software in the world. They have been consistently open and doing good for us users. You don’t just disregard all of that because of a few ads.
So let's quit this self-indulgent outrage: Firefox is still a great open source project. Mozilla aren't selling out, they aren't turning evil, and they haven't suddenly stopped caring about users. Firefox has a nice feature that shows thumbnails of previously viewed websites, which is very handy once you have some browsing history. For new users, the tiles are empty, and empty tiles are useless. So Mozilla wants to pre-populate them until you've used the browser enough to fill that space. That's great: it takes something that was doing nothing and makes it do something useful. And they might make some money off it, which, for everybody who's concerned about Google's dominance and infiltration into our privacy, should be a good thing.
Are they ads? Yes. Or No, depending on who you ask. But you’re asking the wrong question. The question should be: do they make Firefox worse for me? or you? or other people? And the answer is we simply don’t know yet, because as of today those tiles are still empty for new users. Mozilla thinks they can make Firefox better, both the web browser and the project behind it, and after 15 years of doing exactly that they’ve more than earned the right to try.
But let’s get back to the bigger problem, this constant stream of flamewars and hate fests. We can’t go on tearing down free and open source projects like this. No matter how much one of these might benefit your favored project at the moment, sooner or later the outrage machine is going to turn on it too. If we care at all about open source as a philosophy, we need to care about the people and projects that live by it.
As Russ Allbery said recently, after the Debian project had just weathered an outrage storm of its own:
But people should also get engaged and interested in understanding other *people* and finding ways to work with other people in difficult situations, since at the end of the day our communities are about people, not software.
Last week I was in Orlando sprinting with my team as well as the platform, SDK, and security teams and some desktop and design folks. As usual after a sprint, I have been slammed catching up with email, but I wanted to provide a summary of some work going on that you can expect to see soon in the Ubuntu app developer platform.

HTML5
In the last few months we have been working to refine our HTML5 support in the Ubuntu SDK.
Today we have full HTML5 support in the SDK but we are working to make HTML5 apps more integrated than ever. This work will land in the next week and will include the following improvements:
- Consolidating everything into a single template and container. This means that when you create a new app in the SDK you have a single template to get started with that runs in a single container.
- Updating our Cordova support to access all the devices and sensors on the device (e.g. camera, accelerometer).
- Adding a series of refinements to the look and feel of the HTML5 Ubuntu components. Previously the components looked a little different from the QML ones, and we are closing that gap.
- Full API documentation for the Cordova and Platform APIs as well as a number of tutorials for getting started with HTML5.
- On a side note, there have been some tremendous speed improvements in Oxide which will benefit all HTML5 apps. Thanks to Chris Coulson for his efforts here.
With these refinements you will be able to use the Ubuntu SDK to create a new HTML5 app from a single template, follow a tutorial to make a truly native look-and-feel HTML5 app utilizing the Cordova and Platform APIs, then click one button to generate a click package, fill in a simple form, and get your app into the store.
I want to offer many thanks to David Barth's team for being so responsive when I asked them to get our HTML5 support refined and ready for MWC. They have worked tirelessly, and thanks also to Daniel Holbach for coordinating the many moving pieces here.

SDK
Our SDK is the jewel in the crown of our app development story. Our goal is that the SDK gets you on your Ubuntu app development adventure and provides all the tools you need to be creative and productive.
Fortunately there are a number of improvements coming here too. This includes:
- We will be including a full emulator. This makes it easy for those of you without a device to test that your app will work well within the context of Ubuntu for smartphones or tablets. This is just a click away in the SDK.
- We are also making a series of user interface refinements to simplify how the SDK works overall. This will include uncluttering some parts of the UI as well as tidying up some of the Ubuntu-specific pieces.
- Device support has been enhanced. This makes it easier than ever to run your app on your Ubuntu phone or tablet with just a click.
- We have looked at some of the common issues people have experienced when publishing their apps to the store and included automatic checks in the SDK to notify the developer before they submit them to the store. This will speed up the submissions process.
- Support for “fat” packages is being added. This means you can ship cross-compiled pieces with your app (e.g. a C++ plugin).
- Last but certainly not least, we are going to be adding preliminary support for Go and QML to the Ubuntu SDK in the next month. We want our app developers to be able to harness Go and with the excellent Go/QML work Gustavo has done, we will be landing this soon.
As ever, you can download the latest Ubuntu SDK by following the instructions on developer.ubuntu.com. Thanks to Zoltan and his team for their efforts.

developer.ubuntu.com
An awesome SDK and a fantastic platform is only as good as the people who know how to use it. With this in mind we are continuing to expand and improve developer.ubuntu.com to be a world-class developer portal.
With this we have many pieces coming:
- A refinement of the navigational structure of the site to make it easier to get around for new users.
- Our refined HTML5 support will also get full Cordova and Platform API documentation on the site. Michael Hall did a tremendous job integrating Ubuntu and upstream API docs in the same site with a single search engine.
- A library of primers that explain how key parts of our platform work (e.g. Online Accounts, Content Hub, App Lifecycle, App Insulation etc). This will help developers understand how to utilize those parts of the platform.
- Refining our overview pages to explain how the platform works, what is in the SDK etc.
- A refreshed set of cookbook questions, all sourced from our standard support resource, Ask Ubuntu.
- We will also be announcing Ubuntu Pioneers soon. I don’t want to spoil the surprise, so more on this later.
Thanks to David, Michael, and Kyle on my team for all of their wonderful efforts here.

Desktop Integration
In the Ubuntu 14.04 cycle we are also making some enhancements to how Ubuntu SDK apps can run on the desktop.
As many of you will know, we are planning on shipping a preview session of Unity 8 running on Mir. This means that you can open Unity 8 from the normal Ubuntu login screen so you can play with it and test it. This will not look like the desktop; the work to converge Unity 8 into the desktop form factor is ongoing and will come later. It will, however, provide a base on which developers can try the new codebase and hack on it to converge it to the desktop more quickly. We are refreshing our Unity 8 developer docs to make this on-ramp easier.
We are also going to make some changes to make running Ubuntu SDK apps on Unity 7 more comfortable. This will include things such as displaying scrollbars, right-click menus etc. More on this will be confirmed as we get closer to release.
All in all, lots of exciting work going on. We are at the beginning of a new revolution in Ubuntu where beautifully designed, integrated, and powerful apps can drive a new generation of Ubuntu, all built on the principles of Open Source, collaboration, and community.
Much has been made of the recent announcement of the Year of Code and the underwhelming Newsnight interview with Lottie Dexter. It contained some selected footage from what appears to be a class on jQuery, possibly by Code First:Girls, in which coding is twice described as gobbledegook, and went on to have Lottie Dexter announce that she was unable to code. This is not ideal for the director of an organisation that is supposed to inspire and promote the teaching of coding. I don't demand a string of coding accomplishments from such a position; it is just that without a basic understanding of coding it is hard to articulate how much fun it is.

Computing in schools fell apart as a subject in the mid 90s, when the emphasis changed from doing programming projects and educational activities to using spreadsheets, word-processors and desktop database applications. In many schools the teaching of the foundational skills of computing was replaced by Microsoft Office training. This is not the same thing, and it is something I have been concerned about for many years; it is one reason I was involved with supporting the Open Source Schools project around the time of the end of BECTA, and one reason why we exhibited at BETT and introduced teachers to the OLPC project and the thinking behind it. A couple of years ago, when taking my eldest to an open day at a local secondary school, the first words out of the mouth of the teacher when we got to the ICT room were "Don't worry, there is no coding in this subject". We selected a different school.
This is all quite sad, but it is fixable. Coding is fun and easy, and teaching it is fun and easy. I know this because I do it. Every Tuesday afternoon this term I am visiting a school a few miles away to run an after-school Code Club. We are doing programming projects using Scratch. Here is the project we did this week: a fruit machine that cycles through a few images; you click the images to stop them and try to get them all to line up.
Part of the code required to do this looks like this:
It is programmed by dragging and dropping the commands from a palette of options (which is particularly great on an interactive whiteboard), no typing or spelling errors involved and the club of year 5 (age 9) programmers now know about variables, random numbers, if statements, infinite loops, bounded loops, signals, events and designing a fun game by balancing parameters to make it not too easy and not too hard. They have been trying things out, experimenting, getting things wrong and figuring out what the problem is and what they need to do in order to get the outcome they want. This is computing and it is the foundation of the skills we want coming into the industry.
I would encourage everyone in the IT industry, or with an interest in IT in the UK (and elsewhere, but some of this is UK specific), to get involved in Code Club. The Code Club website allows schools to say that they would like to have a Code Club, and volunteers to search for schools in their area that want one. This means that you do not have to approach the school and start by explaining what it is all about and why they should want one; they already know that bit, so you have to do nothing to "sell" the concept to the school. The activity plans are great, the coders love them, and you don't have to decide what you are going to do each week; that is all done for you. There is a bit of admin and checking done in advance (you get a security check called a DBS), but that is all arranged and paid for by STEMNET.
I don’t know if the Year of Code organisation will make any particular contribution itself, but the Newsnight appearance and subsequent kerfuffle has certainly brought some attention on the efforts of Code Club, Young Rewired State, the Raspberry Pi foundation and some other organisations which are actively working to bring the fun of coding back into UK schools and this is a good thing.
There has been a lot of sensational writing by a number of media outlets over the last 24 hours in reaction to a post by Darren Herman, who is VP of Content Services. Lots of people have been asking me whether there will be ads in Firefox and pointing out these articles that have sprung up everywhere.
So first I want to look at the Merriam-Webster definition of an advertisement:

ad·ver·tise·ment
noun \ˌad-vər-ˈtīz-mənt; əd-ˈvər-təz-mənt, -tə-smənt\
: something (such as a short film or a written notice) that is shown or presented to the public to help sell a product or to make an announcement
: a person or thing that shows how good or effective something is
: the act or process of advertising
Great, now that we have the definition, it looks like the mere fact that Mozilla announces users' rights to them and asks about their data choices meets the criteria of an advertisement. The question is: does the average user consider that an advertisement, or a useful bit of content that Mozilla is trying to share? Next, let's look at the fact that Firefox uses Google as its default search/home page; boom, if we use the literal sense of the definition, that too is an advertisement, isn't it?
So now let's move on to Darren's post. While I think the post could have had stronger context, and there could have been a better response to address some of these concerns, the basic gist is that Mozilla plans to offer tiles where there was a gap in content, and that some of those tiles may be sponsored but will still be consistent with Mozilla's values. Furthermore, this new content will only be displayed to new users, and it's unlikely you will see it anytime soon, since it has not landed in Mozilla-Central and gone through the processes necessary to make it to a stable release.

Personally, I think this is much ado about nothing. I think this feature, and features like UP (User Personalization), are going to be very helpful to users and will bake in some of the content that add-ons have typically provided.
This week we dedicated the short clinic to sizing, and ensuring widgets and items are usable (touchable).
- The Ubuntu grid unit – for more information, see http://developer.ubuntu.com/api/qml/sdk-1.0/UbuntuUserInterfaceToolkit.resolution-independence/
- Minimum touch target size – 4×4 gu
- A sneak preview of the updated widgets coming to Ubuntu Touch
If you missed it, or want to watch it again, here it is:
The next App Design Clinic will be on Wednesday 26th February. Please send your questions and screenshots to firstname.lastname@example.org by 1pm UTC on Tuesdays to be included in the following Wednesday's clinic.
Today was a distracting day for me. My homeowner’s insurance is requiring that I get my house re-roofed, so I’ve had contractors coming and going all day to give me estimates. Beyond just the cost, we’ve been checking on state licensing, insurance, etc. I’ve been most shocked at the differences in the level of professionalism from them, you can really tell the ones for whom it is a business, and not just a job.
But I still managed to get some work done today. After a call with Francis Ginther about the API website importers, we should soon be getting regular updates to the current API docs as soon as their source branch is updated. I will of course make a big announcement when that happens.
I didn’t have much time to work on my Debian contributions today, though I did join the DPMT (Debian Python Modules Team) so that I could upload my new python-model-mommy package with the DPMT as the Maintainer, rather than trying to maintain this package on my own. Big thanks to Paul Tagliamonte for walking me through all of these steps while I learn.
I’m now into my second week of UbBloPoMo posts, with 8 posts so far. This is the point where the obligation of posting every day starts to overtake the excitement of it, but I’m going to persevere and try to make it to the end of the month. I would love to hear what you readers, especially those coming from Planet Ubuntu, think of this effort.
 Re-roofing, for those who don’t know, involves removing and replacing the shingles and water-proofing paper, but leaving the plywood itself. In my case, they’re also going to have to re-nail all of the plywood to the rafters and some other things to bring it up to date with new building codes. Can’t be too safe in hurricane-prone Florida.