In 2012 I was porting OpenEmbedded to target AArch64, so I can say that I did the first OE builds for that architecture.
But today I did kind of the reverse thing:

Build Configuration:
BB_VERSION        = "1.21.1"
BUILD_SYS         = "aarch64-linux"
NATIVELSBSTRING   = "Fedora-21"
TARGET_SYS        = "arm-oe-linux-gnueabi"
MACHINE           = "genericarmv7a"
DISTRO            = "nodistro"
DISTRO_VERSION    = "nodistro.0"
TUNE_FEATURES     = "armv7a vfp thumb neon callconvention-hard"
TARGET_FPU        = "vfp-neon"
Yes — I did a build on an AArch64 machine targeting an ARMv7a system. I had to edit one patch (pseudo-native was set to use very old glibc symbols which are not available on 64-bit ARM), but after that the build ran just fine.
I have not tested the resulting binaries.
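For anyone wanting to reproduce this, the setup is nothing exotic. Here is a minimal sketch, assuming a standard openembedded-core checkout; the image name is just an illustrative example:

# Run on the AArch64 host; only MACHINE differs from an ordinary build.
. ./oe-init-build-env
echo 'MACHINE = "genericarmv7a"' >> conf/local.conf
bitbake core-image-minimal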
Early last year, the Linux Mint developer told me he had been contacted by Canonical's community manager and told that he needed to licence his use of the packages he takes from Ubuntu. Now, Ubuntu is free software, and as an archive admin I spend a lot of time reviewing everything that goes into Ubuntu to ensure its copyright grants freedom. So I advised him to ignore the issue as FUD.
Later last year, rumours of this nonsense started appearing in the tech press, so instead of writing a grumpy blog post I e-mailed the community council and said they needed to nip it in the bud and state that no licence is needed to make a derivative distribution. Time passed, and at some point Canonical changed their licence policy into an "Intellectual property rights policy" which is much more vague about any licences needed for binary packages. Now the community council has put out a Statement on Canonical Package Licensing which is also extremely vague and generally apologetic for Canonical doing this.
So let me say clearly, no licence is needed to make a derivative distribution of Kubuntu. All you need to do is remove obvious uses of the Kubuntu trademark. Any suggestion that somehow compiling the packages causes Canonical to own extra copyrights is nonsense. Any suggestion that there are unspecified trademarks that need a licence is untrue. Any suggestion that the version number needs a trademark licence is just clutching at straws.
From every school in Brazil, to every computer in Munich City Council, to projects like Netrunner and Linux Mint KDE, we are very pleased to have derivative distributions of Kubuntu, and we encourage them to be made if you can't be part of the Ubuntu community for whatever reason.

In more positive news, Ubuntu plans to move to systemd. This makes me happy. Although systemd slightly scares me with its complexity, and it's a shame Upstart didn't get the uptake it deserved given its lead in the replace-sysv-init competition, it's not as scary as being the only major distro that doesn't use it.
While testing the developer versions in any way possible is a great idea, there isn’t much benefit in messages telling us Xubuntu works on machine X, or there were no problems with upgrading machine Y.
Why? It’s not measurable.
The following sections will explain the kind of figures we would like to measure, why those figures are important, and will hopefully give you some motivation to start running and reporting tests.

Measuring success or failure
Bugs that are being reported. The number and quality of bugs help us measure how smooth the user experience is. In addition, since the bugs are found when running specific testcases, reproducing them is usually trivial, which in turn allows us to get working on them and get them fixed faster.
Of course, doing exploratory testing helps us find bugs that our usual routines do not catch. This is why it’s also important to do tests that go beyond the testcases. If you find such bugs while running a testcase, please report them as well.
The amount of testing that has been done. While quantity doesn’t replace or imply quality, it’s important to know how thoroughly the tests have been run. This is all the more true when people are able to run tests with varying hardware and not just virtualized environments.
The number of people testing. Usually, more eyes find more bugs. Along with the number of tests run, this helps us get a sense of how thorough the testing was.
Furthermore, the last two figures also help us decide whether we need to run more calls for testers as we prepare for the next milestone or cadence testing.

Bring out your results
Simply put, reported results are the only reliable way we have to gauge these figures. In the ideal situation, the number of bugs reported is going down while the number of testers and tests run is going up.
However, if the reported results we are currently looking at reflect reality, then on average Xubuntu gets released after being tested by somewhere in the region of 20 people. After the release, the version in question is used by thousands. We're sure you wouldn't like to think about that!
As you might gather, reporting your tests is almost as important as the testing itself. Starting to report will be an extra step or two for you, but don't be afraid – we will help you get started and support you throughout.

Getting started
If you are one of those unsung heroes who is regularly out there testing for us – let us know, we’ll be looking, as always, for new names on the trackers. Reports are made at each meeting on how testing has gone in the preceding week.
To get started, subscribe to the Xubuntu development mailing list – you’ll see all the calls from QA that way.
If you have any questions about how to get involved, then members of the Xubuntu QA team can usually be found in #xubuntu-devel on Freenode and will be happy to help, as will most that you’ll see in there.
Again, please remember that Xubuntu is a completely community driven project. If you are reading this and are running Xubuntu, consider giving back. Thank you!
The Ubuntu Doc team is proud to present Ubuntu Documentation Day 2014! On this day, March 2 starting at 1600 UTC, we are offering classroom sessions in #ubuntu-classroom, with #ubuntu-classroom-chat as the place to ask questions. These rooms are both on irc.freenode.net.
I will be teaching the session on the Wiki (docs) part of the team which is at 1900 UTC.
More info HERE
We hope to see you there, and we hope you all learn how to help the Ubuntu Doc team grow.
I am just about finishing my computer science degree, so I finally have time.
It has been a while, since I have been a little sleepy in the Ubuntu community. I have been somewhat active on Ask Ubuntu and at some local events in my city.
But I finally decided that my contribution this year is going to be to develop apps for Ubuntu Desktop/Mobile.
I have been reading a lot of posts here on Planet Ubuntu and checking the website http://developer.ubuntu.com/
I am quite excited to begin. I am planning to write a lot about my experience, with examples and any issues I find.
Besides this, I am planning to learn Node.js.
I hope that at the end of the year I can write an article saying these two goals were completed.
With Bdale Garbee’s casting vote this week, the Debian technical committee finally settled the question of init for both Debian and Ubuntu in favour of systemd.
I’d like to thank the committee for their thoughtful debate under pressure in the fishbowl; it set a high bar for analysis and experience-driven decision making, since most members of the committee clearly took time to familiarise themselves with both options. I know the many people who work on Upstart appreciated the high praise for its code quality, rigorous testing and clarity of purpose expressed even by members who voted against it; from my perspective, it has been a pleasure to support the efforts of people who want to create truly great free software, and do it properly. Upstart has served Ubuntu extremely well – it gave us a great competitive advantage at a time when things became very dynamic in the kernel, it’s been very stable (it is, after all, the init used in both Ubuntu and RHEL 6), and it has set a high standard for Canonical-led software quality of which I am proud.
Nevertheless, the decision is for systemd, and given that Ubuntu is quite centrally a member of the Debian family, that’s a decision we support. I will ask members of the Ubuntu community to help to implement this decision efficiently, bringing systemd into both Debian and Ubuntu safely and expeditiously. It will no doubt take time to achieve the stability and coverage that we enjoy today and in 14.04 LTS with Upstart, but I will ask the Ubuntu tech board (many of whom do not work for Canonical) to review the position and map out appropriate transition plans. We’ll certainly complete work to make the new logind work without systemd as pid 1. I expect they will want to bring systemd into Ubuntu as an option for developers as soon as it is reliably available in Debian, and as our default as soon as it offers a credible quality of service to match the existing init.
Technologies of choice evolve, and our platform evolves both to lead (today our focus is on the cloud and on mobile, and we are quite clearly leading GNU/Linux on both fronts) and to embrace change imposed elsewhere. Init is contentious because both developers and system administrators need to understand its quirks and capabilities. No wonder this was a difficult debate; the consequences for hundreds of thousands of people are very high. From my perspective, the fact that good people were clearly split suggests that either option would work perfectly well. I trust the new stewards of pid 1 will take that responsibility as seriously as the Upstart team has done, and be as pleasant to work with. And… onward.
And, as promised in the last liberation campaign, the French translation has been published as a paperback by Eyrolles (see the cover above).
Check out the article on debian-handbook.info for more information.
I would like to take the opportunity of this special day to express how much Free Software has influenced my life.
Back in 2005 I started to use Linux, and because I had no clue I looked for support in the German-language Kubuntu community. This is how I jumped in and got an idea of what Free Software is about. Kubuntu and the people were fun, and after some time I was able to give support and not only take it. I helped out with other things and started to attend conferences (usually also in combination with a Kubuntu community booth). I met amazing people who became really, really good friends.
LinuxTag in Berlin holds an outstanding position, because that is where I made my most important contacts. It was the first time I met my friends from the German-language Kubuntu community in person. It was the place where I first met Frank, the founder of ownCloud. The company later founded behind the ownCloud project now gives me the chance to make a living with Free Software. I cannot deny it was a dream.
Most important, however, is that at the same conference I met my beloved wife. Yes, for me personally, I Love FS day has more in common with Valentine's Day than you might guess.
Needless to say, LinuxTag was also the first conference my son attended – at the age of 9 months ;)
Happy I Love FS day!
I spent the last weekend of 2013 doing major cleaning. I straightened up the half of my bedroom that counts as my home office, got my printer set up in its rightful space on top of the end table/bookshelf by my computer desk so I can use the scanner, bought new ink cartridges, moved around inspirational and educational books to the office bookshelf, and mounted my whiteboard again. I also bought a check holder rail to mount under my whiteboard. With a clean desk, easy office supply access, and a big whiteboard with a ton of dry-erase markers, I was ready to plan for the year.
One of the big problems with freelancing is time management. There are a lot of things to do, but there are also a lot of pictures of cats to look at on reddit. Between the two, it’s easy for important goals to slip between the cracks. In 2014, I decided to go back to a paper-based time management system that worked so well for my first private IT job years ago. It was invented by David Seah and is called the Printable CEO. This system is a collection of mix and match forms which allow you to track time in a variety of ways. You can use any form on its own or combine them to track various projects. It has its foundation in the Getting Things Done method of time management.
When I first started working in IT after graduating, my boss was quite busy with a lot of things, and asked me to keep track of my time and send him a weekly report of the things I worked on. I had never needed to do this before and was able to find the Printable CEO series through searching for time management forms. The Resource Time Tracker was the perfect tool to track my tasks throughout the week, and I actually used the short weekly form on its own week after week. Not only could I see where my time was going, but after a couple of weeks I could actually use it to plan out new projects. When I started writing business reports in Python, it was very useful to know how much actual work I needed to do and how much time I could spend automating. I’ve started a long-term project with a friend that seems just right for these forms and I’ve put it into practice for the first time this month.
For my own day-to-day planning, what I really need is accountability. Every working day since the last week of December, I’ve used the Emergent Task Planner. It’s a single-page sheet that has three work periods (separated with one-hour breaks) where you can list three (or more) major tasks for the day, estimate the time they will take, and then plan when you’ll work on them. There’s another large section for notes and other things. The nice thing about this form is that it was meant to work with the Pomodoro technique, where you work in set intervals. These forms use a 15-minute interval. This means that for every 15 minutes you work on a task, you get to fill in a bubble marking your time spent. This is a silly but addictive reward for getting things done and I’ve found that it works really well for me. I use this for single-day tasks that I know I can finish as well as planning to work on multi-day tasks.
For tasks that need to be tracked over multiple days, I use a form called the Task Order Up. This is like an order check used in restaurants around the world. I write down a task and break it into discrete steps. Then I work on each step and fill in a bubble every 15 minutes. I printed a page of each available color and keep green for direct freelance work, orange for Ubuntu work, and black for anything else. I actually ordered a check rail holder just for these slips. Having them in front of me beside my monitor is a great reminder of my progress.
For its own part, Ubuntu has been a big help in keeping me productive. I’m a big fan of Unity, and with the Launcher hidden, Unity really makes it easy to focus on my work at all times. When I need quick information, the Dash search in Ubuntu 13.10 lets me quickly find not just applications, but also the files and folders I’ve recently worked on for each application. I can do a search and find the folders and files I’ve been working on and open them quickly. Occasionally I’ll put on background music, and the Dash is up to snuff with music searches as well. Meanwhile the messaging indicator keeps me aware of incoming emails without diverting my attention. And the date indicator keeps track of any appointments I enter into my Google account via my phone. There are a few Pomodoro apps for Ubuntu, but I don’t use them personally. I prefer to keep track of start and stop times myself. Still, Ubuntu is one of the major reasons I’ve been productive this year! Well, if you don’t count the time I’ve spent playing Kerbal Space Program via Steam, anyway.
This year I set out to renovate the way I do business, and I found some wonderful, clean time management forms I can use on paper. Ubuntu continues to be the perfect fit for my desktop and laptop computers. Thanks to this combination of organization and accountability, I’ve been able to really get work done and adjust my schedule for my strengths and weaknesses. A month and a half into 2014, the year’s looking bright. I’m grateful to combine the best of legacy time organization and the best software in computing to create a powerful foundation to build on.
The day will start off at 16:00 UTC with a session by Elizabeth Krumbach Joseph (pleia2) consisting of a quick tour of all the Documentation resources available, including Desktop, Server, Wiki and Manual.
The next five sessions will give potential contributors an overview of contributing to each of the resources (all times UTC):
- 17:00: Getting started contributing to Desktop docs by Kevin Godby (godbyk)
- 18:00: Getting started contributing to Server docs by Doug Smythies (dsmythies)
- 19:00: Getting started contributing to the Wiki docs by Svetlana Belkin (belkinsa)
- 20:00: Getting started contributing to Manual by Kevin Godby (godbyk)
- 21:00: Ubuntu Manual versions explained by Thomas Corwin (tacorwin) and Patrick Dickey (patrickdickey)
So come learn how to contribute to documentation with us!
Sessions all take place on Internet Relay Chat (IRC). If you want to participate, you just need to join #ubuntu-classroom and #ubuntu-classroom-chat on irc.freenode.net in your IRC client, or just click here for browser-based webchat. The instructor will give the class in #ubuntu-classroom and attendees can chat about the class and ask questions in #ubuntu-classroom-chat.
If you’re unable to attend, logs of each session will be made available following the event.
We hope to see you on Sunday, March 2nd!
Among all the icons I make for software throughout the community (or for my own devices) are those for Ubuntu Touch applications – which, given the publicity around the new style, I'm doing more of recently.
So think of this post as an offering of my abilities. As such, I've created a small Google form wherein you can request an icon (or several) from me for your app: Ubuntu Icon Requests
There have been quite a few entertaining discussions on the interwebs about Ubuntu and concerns around privacy. This topic comes and goes on a regular basis; today it has come up because Mozilla are planning on putting some fairly harmless adverts on the blank tiles of new tabs, and this is being compared to the Dash search in Ubuntu. Whenever the topic is raised it tends to be a fairly heated discussion, mostly focussing on the Amazon search results in the dash and calling them adverts or spyware. It is a discussion that is mostly overblown and underinformed, with so much time spent freaking out about “adverts” that the real problems have been completely missed. Let's go through a bit of history, and I will try and explain the difference between the real problems and the FUD.
Initially there was the Gnome 2 application launcher, kinda similar to the Windows start button: a way to run the applications that you have on your computer. They are nicely categorised, so you can find all the graphics-related applications on your computer, see Inkscape alongside Gimp, and choose what you want to run. This worked well, and people were generally satisfied with this mechanism for running local applications.

Then along came Unity, which introduced the launcher, a dock bar on the left that shows running applications and has the ability to pin applications so you can start them by clicking on them when they are not running. The launcher is the way to run applications that you have on your computer – but not all of them, and not categorised, just your favourite ones that you have pinned to the launcher.

Unity also introduced the dash. This has a different scope of functionality; I like to call it the OmniGlobalEverywhere search tool. You type stuff in and it searches in lots of places to find what it is you are looking for. This is not the same scope of functionality as the Gnome 2 application launcher: it can search for local files, videos on YouTube and other streaming services, music, photos, other things. It is an extensible search interface and you can plug in additional search things. I wrote an OpenERP plugin so I could type an invoice number and jump straight to that invoice in a browser, for example. It was a pretty cool concept as a jack-of-all-trades search interface – but it isn't the master of the specialised job of viewing and running applications you have already got installed.
Everyone completely missed the fact that for a long time the magic privacy button did almost nothing – it was just an undocumented flag that some lenses looked at and turned themselves off; others did not. This was a real big deal, and nobody noticed because they were obsessed with calling Amazon search results adverts. Now we have all kinds of odd lenses, with search queries possibly going to yelp, zotero, yahoo finance, songster, songkick, gallica, europeana, etsy, COLORlovers and other places. Have you even heard of every single one of these? Do you know they are not evil? Do you know they are financially stable enough not to close the doors and let the domain renewal lapse for someone evil to buy it? Amazon I know and trust to continue existing; I also trust them not to want searches for partial, mostly irrelevant words for profiling data when they have my product purchase history. The utter junk that the dash sends is of no value to Amazon compared to everything else they have, but this doesn't stop people banging on about that one specific lens, relatively harmless and pointless in equal measure.
Firstly, the Amazon lens is nothing special, and it is perhaps the internet-connected lens I am least worried about. I trust Amazon to do what I expect them to do; I am a customer, so they know what I bought, and sending them random strings like “calcul” and “gedi” and “eclip” does not give them valuable data. It is junk. I am much more concerned about things like the Europeana, jstor and grooveshark lenses, which do exactly the same thing, but I have no idea who those organisations are or what they do. Even openweathermap sounds good, but are they really a trusted organisation?
So, back to how it works. Your query for “socks” goes to products.ubuntu.com. At that point Canonical's secret-sauce server looks at your query and decides that most people who search for socks either want to know about products to buy or applications to run; they don't tend to click on the results from the medicines or recipes lenses when we try showing those lenses to the user. So, having decided that the shopping lens and the applications lens are reasonable ones to search in, it sends the query to Amazon (the only shop currently supported, but it is designed to support every online sock vendor in the world) and tells your computer that the applications lens is worth looking in. When the results come back from Amazon, they go to your computer as a bunch of JSON data that is very similar to the Amazon JSON API. Amazon at this point thinks that Canonical's server has got cold toes and is in need of some nice warm socks; Amazon does not know you exist at this stage.
That bundle of sock-related data goes to the shopping lens on your computer, which then displays the results. It does this by showing some text, “stripy socks, only £5.30”, and a picture, which it used to retrieve from Amazon's content distribution network. O.M.G.!!! A data privacy leak. Amazon could log hits to their CDN (which I doubt they do), consolidate them globally, and figure out that it was displaying a bunch of sock pictures requested by your IP address shortly after Canonical's server searched for socks, so they could theoretically tie this together and infer that the reason you are staring at sock pictures is because you searched for socks via the dash search tool. This huge and seriously concerning data privacy breach was a problem, so they fixed it. Now when you search for socks, Amazon gets CDN requests for images from products.ubuntu.com. Your computer gets the images from products.ubuntu.com (over https rather than http); it is now basically a reverse proxy for Amazon images, so Amazon is now more convinced than ever that Canonical's server has got cold toes. As it happens, there is nothing wrong with your toes and you actually wanted to configure a SOCKS proxy all along, and the shopping thing was a pointless overhead, because when you want new socks the dash isn't where you dash to.
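To make that flow concrete, here is a rough shell sketch of what the client side amounts to; the endpoint path and the JSON field names are illustrative assumptions on my part, not the actual smart scopes API:

# Conceptual only: the URL path and JSON shape below are assumed.
# 1. The dash sends the query to Canonical's server, never to Amazon:
curl -s "https://products.ubuntu.com/search?q=socks" > results.json
# 2. Canonical's server has queried Amazon on our behalf, so Amazon
#    only ever sees products.ubuntu.com as the client.
# 3. Image URLs in the results point back at products.ubuntu.com,
#    which reverse-proxies Amazon's CDN over https:
grep -o '"image": *"https://products.ubuntu.com/[^"]*"' results.json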
There is a conversation on the technical board mailing list here https://lists.ubuntu.com/archives/technical-board/2013-October/thread.html and here https://lists.ubuntu.com/archives/technical-board/2013-November/thread.html relating to the closedness of the server side app. Having written something a bit similar myself, mine was closed for a while because it contained the Amazon API oauth keys in the source code. There really isn’t much to it on the server side. My server code is here https://github.com/AlanBell/shopping-search-provider/blob/master/server/index.php
As you may have read, I have been working on a Postfix Juju Charm. During the last year or so, I have been working to squash minor bugs in the initial branch, and the charm is now (and has been for about a week) in the Juju Charm Store!
So, you can now just go ahead and have a Postfix server deployed in minutes. It includes SSL configuration in case you want to enable it, of course. Feel free to report any bugs you may find, and I’ll make sure to get them fixed as soon as possible.
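If you want to try it, deployment should look something like the sketch below; I am assuming the charm is published under the name postfix and that you already have a bootstrapped Juju environment:

# Assumes a bootstrapped environment; the charm name matches the store.
juju deploy postfix
juju status postfix    # wait until the unit reports "started"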
Before the start of the holidays last year, the Ubuntu Community Council was approached by a concerned member of the community regarding the news that Linux Mint had been asked to sign a license agreement in order to continue distributing software packages out of the Ubuntu repositories.
Over the past two months, the Community Council has had several discussions, mailing list threads and meetings about this. In addition, we’ve reached out to another derivative for their understanding of the situation and spoken with external legal experts.
We are more than aware that some time has passed since the original approach, and we feel that we need to make it known that we have not been ignoring the situation. Legal issues are complex, and we have to be mindful of the difference between personal and legal opinions. Understanding the weight our words would carry, we felt it was important to take time to gather facts and discuss the issue thoroughly.
At this time, we are in agreement that one of the keys to Ubuntu’s success is in providing a well-designed, reliable and enjoyable experience to all of our users, whether they are using Ubuntu on a desktop, a phone or in the cloud. To that end it is critical that when people see “Ubuntu”, it adequately represents the software that we all build and stand behind. This is as important to our individual reputations as to the reputation of the project as a whole. Trademarks and copyrights are the legal tools provided to us for safeguarding those reputations, and it’s part of Canonical’s mandate within the Ubuntu project to use those tools appropriately, balancing the needs of all those involved in making Ubuntu. Canonical already provides a license for the use of these to the Ubuntu project and all of its distributions, including Ubuntu itself as well as those flavors that are developed in collaboration with it.
We believe there is no ill will against Linux Mint from either the Ubuntu community or Canonical, that Canonical does not intend to prevent them from continuing their work, and that this license is meant to help ensure that. What Linux Mint does is appreciated, and we want to see them succeed.
The Community Council feels that Canonical is making an honest and reasonable effort to balance the needs of the community, and that any specific legal concerns should be addressed to the legal counsel of those involved.
Finally, the Community Council would like to take this opportunity to remind people that it is important to work in a respectful collaborative manner when there are issues that concern the community. While this has been a valuable discussion to have, it’s also important to remember that everybody involved in the Ubuntu project, Canonical included, wants to see it and open source in general succeed and become as widely used as possible. Be mindful that you do not get caught up in a controversy, where a discussion with the parties concerned could clear up any misunderstandings. But when you do have any concerns about an issue such as this, we strongly encourage you to contact the Community Council directly and we will always do our best to provide accurate information or, when necessary, appropriate intervention to resolve the issue to the benefit of everybody involved. We are available to everybody, inside or outside the Ubuntu community.
Ubuntu Community Council
Let me just start off by saying that I love iloveubuntu.net; I subscribe to their RSS feed and read it pretty regularly, as regularly as I read any other feed. I have a great deal of respect for razvi and the writing he does for that site, and I would never mean any disrespect to him.
So what happened? When I started my UbBloPoMo project, one of the things I said I would do was write a post in the evening, and schedule it to publish the following day. That is what I’ve done for every post this month, and the last one was no exception. I had written my article, and scheduled it to publish at 4am my local time (9am UTC), and at that point in time I was unaware of razvi’s editorial on the same subject (which may or may not have been published by the time I wrote mine).
My choice of headline was in no way a reference to his. In fact, I chose the wording specifically because it wasn’t similar to any other headline for any other article I had read yesterday, so that nobody would feel like I was singling them out in particular. Of all the different ways a headline could have been worded, it was just a matter of random chance (or dumb luck on my part) that mine and his were as similar as they were.
I still stand by what I said, about Mozilla and about us as a community. But I wanted to apologize to razvi for the confusion and hurt feelings caused by my headline. And to reiterate, I still love iloveubuntu.net, I still have a great deal of respect for razvi, and I will continue making his site part of my regular news reading.
Stuart Langridge, Jeremy Garcia, Bryan Lunduke, and I wend our troublesome way down the road of:
- We weigh in on the upstart/systemd brouhaha in Debian and discuss what happened, why it happened, and whether it was a good thing or not.
- Bryan reviews the Lenovo Miix 2 tablet and we get into the nitty gritty of what you can do with it.
- We take a trip down memory lane about how we each got started with Linux, which distributions we used, and who helped us get on our journey.
- We recap and look at community feedback, with guns, 3D printing, predictions, Bad Voltage gaming, the Bad Voltage Selfie Competition and more all making an appearance.
Go and listen or download it here
Be sure to go and share your feedback, ideas, and other comments on the community discussion thread for this show!
Also, be sure to join in the Bad Voltage Selfie Competition to win some free O’Reilly books!
Finally, many thanks to Microsoft for helping us get the Bad Voltage Community Forum up and running, and thanks to A2 Hosting for now hosting it. Thanks also to Bytemark for their long-standing support and helping us actually ship shows.
Finally, finally, I found the time (spread over several months) to refurbish my blog. Not that it took so long, but spare time is rare these days.
I decided to stick with Drupal and created a fresh and clean installation of Drupal 7 to replace the old Drupal 6. Drupal is somewhat overkill for a simple blog, but the alternatives did not convince me, for various reasons.
I do not want to dive into the whys and why-nots of this and that, but to point out some notable things with regard to Drupal. Setting up Drupal is straightforward, of course, and everything works fine and smooth, but the real work comes with adjusting it to become what you want. Its flexibility allows you to realize anything; on the other hand, it is also the reason why some things are laborious to accomplish.

Layout and Modules
I chose Bamboo as the base theme and sub-themed it. The blog now looks less stale, long code parts are presented properly, and it has a mode for mobile devices. It required diving into Drupal theming a bit, but not too much; the documentation provided with Bamboo was already very helpful. The big plus of a sub-theme is that you can easily update the main theme without patching around endlessly. Theoretically it can still break your layout if major changes are applied. In the end, this task was pleasant to complete.
Afterwards it was about selecting modules and doing the configuration. Mostly it was searching, installing, configuring, done. Just three things I want to point out here:
- There's no easy solution for a guest preview as offered by Wordpress. You can achieve it by doing something complicated (I did not follow this) or by using the view_unpublished module. It does not offer the same convenience, but it is good enough.
- Finally, standard elements like captions and buttons are now localized into the most common languages, e.g. English, Spanish, Chinese, Russian, Arabic and German. Drupal does not ship translations by default.
- Avoiding blog spam. On the old version I used reCaptcha. I believe the only type of commentators it kept away were authentic people; instead, I had the doubtful pleasure of moderating tons of SEO spam. Now I use a honeypot approach, and so far (in testing) it works incredibly well and does not get in the way of real people. I am very fond of this.
I wish I could get the latest Drupal from the repositories, either the original Ubuntu ones or a PPA. Web software evolves fast, releases often and regularly closes security issues. Unfortunately, neither is provided (only older packages in the 12.04 repositories).
So I need to keep Drupal up to date by hand. Anyone who has ever read the update instructions knows that you don't want to do it by hand; there is a lot of stuff to do. A perfect condition for the lazy CS guy, and a good opportunity to refresh my shell scripting: I could automate a lot of the ugly and boring stuff. What is left for me is to kick off the script and get in and out of the maintenance mode. Even this could be achieved without human interaction, but so far I prefer to keep control. In the end, I need to ensure everything works as expected anyway.
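For the curious, the script boils down to something like the sketch below; the paths, the database name and the exact steps are assumptions of mine based on the standard update instructions, not the actual script:

#!/bin/sh
# Sketch of a Drupal update helper; paths and DB name are placeholders.
set -e
VERSION=$1                        # e.g. ./update.sh 7.26
SITE=/var/www/drupal
STAMP=$(date +%F)
# Back up code and database first.
tar czf /var/backups/drupal-code-$STAMP.tar.gz "$SITE"
mysqldump drupal > /var/backups/drupal-db-$STAMP.sql
# Fetch and unpack the new release.
wget -qO /tmp/drupal-$VERSION.tar.gz \
  "https://ftp.drupal.org/files/projects/drupal-$VERSION.tar.gz"
tar xzf /tmp/drupal-$VERSION.tar.gz -C /tmp
# Replace the code, but keep sites/ (settings, modules, themes, files).
rsync -a --exclude=sites/ /tmp/drupal-$VERSION/ "$SITE/"
echo "Now enter maintenance mode and run update.php, then leave it again."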
Migration

The fun part. First, why did I not upgrade from Drupal 6 to 7, but made everything from scratch? Because I had made some decisions with the old configuration that were not so useful. Then, there were some modules that were discontinued or replaced, with a lacking upgrade path. And somewhere in my head was stuck the idea that an upgrade was problematic or not recommended, though this is probably a goof of my own memory. Well, in the end almost everything was ready and just waiting for the content.
Migrating the content, i.e. blog posts, static pages, comments and tags, from Drupal 6 to 7 was easy in the end, once I had found the way and fixed what was missing.
There is a module that provides exactly this transfer from an old Drupal 6 installation to a new Drupal 7 one, and it provides a GUI. I really did not want to write an upgrade script, because I would have needed to get into those details again, while all the content types were standard ones. So, a GUI was a plus. At that time there was no stable release including the GUI, though, so I took the development version. Took it, ran it, was delighted.
Only a little bit later I found out that the tags were not assigned and that node and term IDs (tags) were shuffled.
Reassigning the tags worked with some SQL select and insert:

INSERT INTO field_data_field_tags
    (entity_type, bundle, deleted, entity_id, revision_id,
     language, delta, field_tags_tid)
SELECT 'node' AS 'entity_type', 'blog' AS 'bundle', 0 AS 'deleted',
       node.nid AS 'entity_id', node.nid AS 'revision_id',
       'und' AS 'language', (@jDelta := @jDelta + 1) AS 'delta',
       taxonomy_term_data.tid AS 'field_tags_tid'
FROM taxonomy_term_data, node, oldDatabase.term_data, oldDatabase.node,
     oldDatabase.term_node, (SELECT @jDelta := 0) AS jDelta
WHERE oldDatabase.term_node.nid = oldDatabase.node.nid
  AND oldDatabase.term_node.tid = oldDatabase.term_data.tid
  AND taxonomy_term_data.name = oldDatabase.term_data.name
  AND node.title = oldDatabase.node.title
ORDER BY entity_id;
So, the node IDs and term IDs were left. This is a problem, because they are contained in the URLs. From an SEO point of view, keeping them different will confuse search engines. They will likely get it right after a while, but as a former SEO consultant you want to do it the right way. Changing them back would work, but the IDs are used everywhere and there are a lot of tables. Before I decided on the migrate module, I had considered migrating the content just by copying it from the old to the new database, but things had changed and, without really getting down into it, many new tables and columns remained unclear.
The lazy approach was to redirect the old node IDs to the new ones:

SELECT CONCAT('redirect 301 /node/', oldDatabase.node.nid,
              ' http://www.arthur-schiwon.de/', alias)
FROM node, url_alias, oldDatabase.node
WHERE node.title = oldDatabase.node.title
  AND source = CONCAT('node/', node.nid);
It redirects the old URLs containing the old node IDs to the clean URLs. For some reason, something happened to the canonical tag in Drupal 6, so that the old clean URLs were not used, but the ugly ones were; I do not want to have those in the search engines. Now this is fixed as well. The result contained duplicate lines somehow, but they could easily be dropped or the correct alias chosen. In a few cases I needed to update the alias, as commas led to some problems. I pasted the result at the beginning of the .htaccess file. The same needed to be done for the term IDs.
It is not the best approach, but given the limited time I could and wanted to spend, this is OK. In the end, it's a private blog for fun and fame, not for profit.
It is essential to check whether all important old URLs are still reachable, to avoid broken links. Broken links are bad for visitors as well as for search engines. I used linkchecker, available in the Ubuntu repositories, to collect all the URLs from my old site:

linkchecker -Fcsv/urlstate.csv --stdin -t1 -r0
A lot of stuff is gathered. I took all the paths pointing to my domain, replaced the domain with my test domain, saved them in a text file and ran curl against them; I wrote a small script for this:

#!/bin/sh
OUTPUTFILE=new-url-stats.csv
for url in `cat urls-new-ws`; do
    # fetch only the headers and keep the HTTP status line
    status=`curl -I $url | grep "HTTP/1.1"`
    echo "$url,$status" >> $OUTPUTFILE
done
In the resulting CSV file I had the URL and the status, good enough for me. In LibreOffice, I auto-filtered it and sorted out the faulty or suspicious URLs, i.e. those throwing 4xx errors. If things needed to be fixed, I fixed them and reran the script until I was satisfied.
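(If you prefer to stay in the shell, something like grep ',HTTP/1.1 4' new-url-stats.csv would list the 4xx candidates directly; that is just a convenience I am suggesting, the LibreOffice route works fine.)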
Future

I wondered whether I should switch away from Drupal, but decided to stay with it. The migration should be performed as well as possible while spending as little time as possible. In the end, it took quite some time to investigate and find the right strategy; maybe it would have been faster with a direct upgrade. Probably it is easier and more straightforward to use software that is dedicated to running blogs. This question will reappear when the next iteration of the blog is done in some years. And I cannot promise to stay with Drupal, since I really only use a little bit of the whole feature set. But I am a fan of neither Wordpress nor Ghost, so let us see which options will be out in the wild then.
I am satisfied with the result, though there are a few smaller rough edges that can be taken care of later. It really is a huge relief to deliver "Comment" buttons and the like in common languages instead of only German, and to be able to properly read the blog on mobile devices.
Now I only need to find time to blog more often ;)
Just about all of you reading this know that I am a technical writer. One of the things I do to keep up to date with the latest trends in the field is read. I read books, articles, blogs, whatever I can find that relates. I especially enjoy Mark Baker’s blog, Every Page is Page One. Baker consistently posts articles that make me think, and in good ways. When I heard he had a book out, I contacted the publisher immediately.
As a side note, Baker’s publisher, XML Press, consistently produces books that I find useful. Every one I have read is well-written, authoritative, and filled with real-world experience and practicality.
Every Page is Page One: Topic-based Writing for Technical Communication and the Web shares the first part of its title with the blog, but the content is not directly from the blog. Rather than a collection of posts on assorted topics assembled into book form, this is a well-thought-out and well-organized text. In it, Baker observes that documentation projects tend to approach technical writing from a very book-centered paradigm. This was once ideal, but in the age of communicating technical information electronically, it forces limits on the end product that hinder the true goal of technical writing: delivering the right information at the right moment to the person who is seeking it. As someone who is not only a technical writer but also has a degree in information resources and library science, I have multiple reasons for supporting this goal.
What Baker does is give tangible form to thoughts and ideas that he, other technical writers, and even I have had in the abstract. How do we provide needed information to people who seek it in an age where the web makes almost anything searchable? Do manuals still matter? What about other forms of documentation? Are there changes to our style of communication, to our style of writing and presenting information, that will make the information seeker’s task easier? Baker discusses serious and realistic ways we can improve our field. It is all organized around the idea that we can no longer control the order in which information seekers will consume or even find our information, that every page (in a documentation wiki, for example) should be created in a way that enables a user to immediately understand and acquire what they need when they need it. Since we know we do not have this control like we had in a printed book, we must modify how we write and present information to fit the expectations of the seeker.
I enjoyed reading this book. I have benefited personally from reading this book. I am taking this book in to my workplace and sharing it with the other tech writers there and I believe our workplace and our employer and our customers will benefit from this book. If you work in the field, I’m convinced you will, too. The whole book is good, but my favorite parts are Section I, which lays the foundation in five chapters, and Chapter 22, which gives very practical and useful advice for making your case to others when you begin to try to make the changes the book describes.