news aggregator

Joe Liau: Documenting the Death of the Dumb Telephone – Part 1: Unnecessary

Planet Ubuntu - Tue, 2014-10-14 03:49

Source

The jig is up: our telephones aren’t smart, and they can’t save themselves. But maybe you can!

By far, the dumbest feature of today’s “smart” phones is the phone itself. A growing number of people never use their mobile devices to make calls. This raises the question of whether the feature should exist at all (or why we even call them phones). “How silly,” you say; of course, there are justified applications for calling someone who is in the middle of dinner or on a crowded train. However, there is a lack of control (loco) over this function. Your phone doesn’t know how to suitably classify and deal with a call event (i.e. call-typing beyond known and unknown numbers), and this makes it both not smart and not safe (for itself).

How many phones have been physically harmed due to phone-call malpractice?
They fly out your car window. They drop from your ear. They get thrown across the room. All because of the wrong call at the wrong time.

You can prevent this, and you can save the phone feature of your mobile device. You can have a say in how your mobile device is programmed.

Opt in to the Ubuntu project today, and SAVE OUR PHONES

 

Lubuntu Blog: Happy Samhain!

Planet Ubuntu - Mon, 2014-10-13 21:06
As tradition dictates, here's this year's Samhain wallpaper. This pagan festival makes us, the northern sons, one with Nature again: we serve it and receive its fruits, in a year of hope. Enjoy!



Randall Ross: Why Smart Phones Aren't - Reason #2

Planet Ubuntu - Mon, 2014-10-13 19:45

I ride public transit, a lot. This gives me the "privilege" to (too often) overhear important matters that are being discussed over the phone.

Can you guess the most common use case for "smart" phones? Apparently it's to obtain the answer to the world's most important question: "Where are you?"

Really? We can send a rover to Mars, but we can't solve this problem? Is the world engaged in one giant game of "Where's Waldo?" I have yet to meet a phone that is smart.

Phones have GPS, wifi, and of course cellular signalling. They are also programmable. One would think that an off-the-shelf "smart" phone would eliminate the "Where are you?" call once and for all. Or could it be that the mobile carriers love to prey on people by forcing them to consume, and be billed for, lots and lots of extraneous voice minutes? Hmm...

So, I'm sorry phones. You are *not* smart. You are still as dumb as the first feature phones.

I remain optimistic that the Ubuntu Phone will overcome this issue. In my lifetime, I hope to be riding a bus, a subway, or a streetcar never to hear the words "Where are you?" uttered again.

---
"Where's Waldo" image by William Murphy
https://www.flickr.com/photos/infomatique/

Ubuntu App Developer Blog: Write a scope in C++ for JSON data – SoundCloud tutorial

Planet Ubuntu - Mon, 2014-10-13 17:29

A scope is a tailored view for a set of data that can use custom layouts, display and branding options. From RSS news feeds to weather data and search engine results, the flexibility of scopes allows you to provide a simple, recognizable experience consistent with the rest of the OS.

Scopes can also integrate with system-wide user accounts (email, social networks…), split your content into categories, and aggregate one another (for example, a “shopping” scope aggregating results from several store scopes).

In this tutorial, you will learn how to write a scope in C++ for SoundCloud, using the Ubuntu SDK. Read…

Canonical Design Team: Designing machine view

Planet Ubuntu - Mon, 2014-10-13 12:19

A few weeks ago we launched ‘Machine view’ for Juju, a feature designed to allow users to easily visualise and manage the machines running in their cloud environments. In this post I want to share with you some of the challenges we faced and the solutions we designed in the process of creating it.

A little bit about Juju…
For those of you that are unfamiliar with Juju, a brief introduction. Juju is a software tool that allows you to design, build and manage application services running in the cloud. You can use Juju through the command-line or via a GUI and our team is responsible for the user experience of Juju in the GUI.

First came ‘Service View’
In the past we have primarily focused on Juju’s ‘Service view’ – a virtual canvas that enables users to design and connect the components of their given cloud environment.

This view is fantastic for modelling the concepts and relationships that define an application environment. However, each component or service block can have anything from one unit to hundreds or thousands of units, depending on the scale of the environment, and before machine view, units meant machines.

The goal of machine view was to surface these units and enable users to visualise, manage and optimise their use of machines in the cloud.

‘Machine view’: design challenges
There were a number of challenges we needed to address in terms of layout and functionality:

  • The scalability of the solution
  • The glanceability of the data
  • The ability to customise and sort the information
  • The ability to easily place and move units
  • The ability to track changes
  • The ability to deploy easily to the cloud

I’ll briefly go into each one of these topics below.

Scalability: Environments can be made up of a couple of machines or thousands. This means that giving the user a clear, light and accessible layout was incredibly important – we had to make sure the design looked and worked great at both ends of the spectrum.

Glanceability: Users need simple comparative information to help them choose the right machine at a glance. We designed and tested hundreds of different ways of displaying the same data, and eventually ended up with an extremely cut-back listing which was clean and balanced.

The ability to sort and customise: As it was both possible and probable that users would scale environments to thousands of machines, we needed to provide the ability to sort and customise the views. Users can use the menus at the top of each column to hide information from view and customise the data they want visible at a glance. As users become more familiar with their machines, they can turn off extra information for a denser view. Users are also given basic sorting options to help them find and explore their machines in different ways.

The ability to easily place and move units: Machine view is built around the concept of manual placement – the ability to co-locate items (put more than one) on a single machine, or to define specific types of machines for specific tasks (as opposed to automatic placement, where each unit is given a machine of a pre-determined specification). We wanted to enable users to create the most optimised machine configurations for their applications.

Drag and drop was a key interaction that we wanted to exploit for this interface because it would significantly simplify the process of manually placing units. The three-column layout aided the use of drag and drop: users are able to pick up units that need placing on the left-hand side and drag them to a machine in the middle column or a container in the third column. The headers also change to reveal drop zones, allowing users to create new machines and containers in one fluid action, keeping all of the primary interactions in view and accessible at all times.

The ability to track changes: We also wanted to expose the changes being made throughout a user's environment as they went along, and to allow them to commit batches of changes all together. Deciding which changes were exposed, and the design of the uncommitted notification, was difficult: we had to make sure the notifications were not viewed as repetitive, that they were identifiable, and that they could be used throughout the interface.

The ability to deploy easily to the cloud: Before machine view it was impossible for someone to design their entire environment before sending it to the cloud. The deployment bar is a new ever-present canvas element that rationalises all of the changes made into a neat listing; it is also where users can deploy or commit those changes. Look for more information about the deployment bar in another post.

We hope that machine view will really help Juju users by increasing the level of control and flexibility they have over their cloud infrastructure.

This project wouldn’t have been possible without the diligent help of the Juju GUI development team. Please take a look and let us know what you think. Find out more about Juju, Machine View, or take it for a spin.

Valorie Zimmerman: Heroes

Planet Ubuntu - Sun, 2014-10-12 09:25
Heroes come in all shapes and sizes, ages and nationalities.

The Nobel Peace Prize was inspiring to see this week. A young Pakistani girl who was already known locally for supporting the right of girls to attend school was shot by the Taliban to shut her up. Instead, she now has a world stage, and says that she is determined to work even harder for the right of girls to go to school. I really liked that Malala shares the prize. CNN:
Awarding the Peace Prize to a Pakistani Muslim and an Indian Hindu gives a message to people of love between Pakistan and India, and between different religions, Yousafzai said. The decision sends a message that all people, regardless of language and religion, should fight for the rights of women, children and every human being. - http://www.cnn.com/2014/10/10/world/europe/nobel-peace-prize/index.html
Another of my heroes spoke out this week: Kathy Sierra. Her blog is reprinted on Wired: http://www.wired.com/2014/10/trolls-will-always-win/. After the absolute horror she endured, she continues to speak out, continues to calmly state the facts, continues to lead. And yet the majority lauds her attackers, because they are Bad Boyz! I guess. I don't agree with her that trolls always win, because I can't. Kathy Sierra is still speaking out, so SHE wins, and we all win.

I just finished a lovely book by Cheryl Strayed: Wild: From Lost to Found on the Pacific Crest Trail. Cheryl isn't my hero, but during her journey she became her own hero, so that's OK. My husband is going to walk that trail next year, and reading her book makes me so thankful that he is preparing and training for the journey! Her honesty about the pain she endured when her mother died, and her marriage ended, brought to mind many memories about the death of my own mother, and the death of another of my heroes, my cousin Carol.

Carol died 11 years ago, and I still painfully miss her. I know that her son grieves her loss even more deeply. I hope your journey has taken you to a place of rest, my dear Carol.

Randall Ross: Why Smart Phones Aren't - Reason #1

Planet Ubuntu - Sun, 2014-10-12 07:43

I was ranting to some of my colleagues the other day about "smart" phones, and just how really dumb they are. The topic generated a lively discussion so I thought I'd share the fun!

I have yet to meet a phone that is smart.

Phones have GPS, motion sensing, and NFC, yada yada, yet they still alert/ring when someone is driving. Has society not learned that distracted driving kills people? Not cool.

So, I'm sorry phones. You are *not* smart. You are as dumb as the first feature phones.

Having said that, I still have optimism that the Ubuntu Phone will become the world's first truly smart phone, respecting its owner and "doing the right thing".

---
dumb phone image by Tom Hoyle
https://www.flickr.com/photos/tomhoyle1985/

Forums Council: Support for Other Operating Systems

Planet Ubuntu - Sat, 2014-10-11 13:32

Some time early in 2011, support for other operating systems was curtailed due to the server issues we were dealing with at that time.

Since then, we have moved on and the outlook is much better on the server side.

Given that, and with the 10-year anniversary we’ve just celebrated – what better time than now? – we’ve rebuilt support forums for selected operating systems.

You can see what we have done here.


Nekhelesh Ramananthan: Clock App Reboot Backstory

Planet Ubuntu - Sat, 2014-10-11 12:28

This week while reading Michael Hall's blog post about 1.0 being the deadliest milestone I couldn't help but grin when I read,

1.0 isn’t the milestone of success, it’s the crossing of the Rubicon, the point where drastic change becomes inevitable. It’s the milestone where the old code, with all its faults, dies, and out of it is born a new codebase.

This was exactly the thought that crossed my mind when I first heard about the Clock Reboot at the Malta Sprint.

Let me share how Clock Reboot came to be :). At the start of the Malta sprint I was told that the Clock app would receive some new designs that would need to be implemented for the Release-To-Manufacture (RTM) milestone, which at that time was four months away. On the eve of the meeting, we (the Canonical Community team and the Core Apps team) rushed into the meeting room, where Giorgio Venturi, John Lea, Benjamin Keyser, Mark Shuttleworth, and other department heads were looking over the designs.

Giorgio went over the presentation and explained how the designs were supposed to work. At the end of the presentation, I was asked if this was feasible within the given timeframe, and everyone started to look at me. Honestly, at that moment I felt a shiver run down my spine. I was uncertain, given that it had taken the clock app almost a year to buckle down and get to a point where it was usable, and I wondered whether four months was sufficient.

Strangely enough, during the presentation I recalled a conversation I had had with Kyle Nitzsche a few days before that meeting, where he asked if the clock app code changes had started to stabilize (no invasive code changes), considering we were this close to RTM.

Fast-forwarding to today, I think that the Clock Reboot was a blessing in disguise. I have been working on the clock app since the beginning, which was somewhere in the first quarter of 2013, and I can confidently say that I made a ton of mistakes in the so-called 1.0 milestone. The Clock Reboot gave me the opportunity to start from scratch and carefully iterate on every piece of code going into trunk while avoiding the mistakes of the past.

And I like to think that it has resulted in a codebase that is much cleaner, leaner and more manageable, and in a more reliable clock experience for the user.

The Music App is going through the same transition, and I wish them the best; I think it will make them stronger and better.

I'd like to end this blog with a blast from the past, since it is the one-year anniversary of the clock app! ;) It was first released to the Ubuntu Touch store on the 10th of October 2013.

Diego Turcios: Central America is discovering BeagleBoard!

Planet Ubuntu - Sat, 2014-10-11 07:32
The title is clear enough! I was at the VI Encuentro Centroamericano de Software Libre, where I gave a presentation about what BeagleBoard.org is and showed some small demos of the BeagleBone Black.

Important references about Central America and open hardware / single-board computers & microcontrollers

All of this data is according to what people said in Panama.
  • Arduino is used in six Central American countries as part of computer science or electrical engineering studies
  • Raspberry Pi is also used in all seven Central American countries
  • Only two Central American countries had heard of the BeagleBone Black (Costa Rica & Honduras)
  • Only one Central American country has worked with a BeagleBone Black (Honduras)

What was my talk about?

Basically, my talk was about the BeagleBoard.org community and the BeagleBone Black.

  • I gave a presentation in HTML explaining BeagleBoard.org's history and goals. Check the presentation here
  • I showed some slides from Jason Kridner's presentation.
  • I ran a small demo on how to build a traffic light with LEDs, showing how to use the bonescript library. Here

What did people say?
They loved it!

  • I will say it again: people loved it!
  • They were surprised by the bonescript library.
  • It addresses one of the major problems of other platforms: how to combine the web with a hardware platform.
  • The University of Costa Rica will buy BeagleBone Blacks for next year. (They had heard about the project and were interested in it, but didn't know about all the capabilities it had.)
  • Some professors from Panamanian universities were quite happy to see that they could use this tool in their courses. Thanks to the bonescript library, it is now easy to assign projects combining web applications and hardware on the BeagleBone Black.

Some images of the event:

Diego Turcios: In the Encuentro Centroamericano de Software Libre!

Planet Ubuntu - Fri, 2014-10-10 19:05
Currently I'm in the city of Chitré, Panama, for the VI Encuentro Centroamericano de Software Libre.

This has been an excellent experience. I met several friends and community members whom I first met at the Primer Encuentro Centroamericano in the city of Estelí, Nicaragua, and of course new people from Central America.

Something new at this ECSL is the presence of several recognized people from the open source community. We had Ramón Ramón, a famous blogger in Latin America; Guillermo Movia, community manager for Latin America at Mozilla; and other people from the open source community in Central America.

I'm going to write in a future post about my BeagleBoard.org presentation and other talks I had with the Mozilla people. By the way, I got the opportunity to meet Jorge Aguilar, founder of the Mozilla community of Honduras.

Some images.
If you want to read this presentation in Spanish, click here.






Michael Hall: 1.0: The deadliest milestone

Planet Ubuntu - Fri, 2014-10-10 18:49

So it’s finally happened, one of my first Ubuntu SDK apps has reached an official 1.0 release. And I think we all know what that means. Yup, it’s time to scrap the code and start over.

It’s a well-established mantra in software development, codified by Fred Brooks, that you will end up throwing away your first attempt at a new project. The releases between 0.1 and 0.9 are a written history of your education about the problem, the tools, or the language you are learning. And learn I did: I wrote a whole series of posts about my adventures in writing uReadIt. Now it’s time to put all of that learning to good use.

Oftentimes projects still spend an extremely long time in this 0.x stage, getting ever closer but never reaching that 1.0 release. This isn’t because they think 1.0 should wait until the codebase is perfect; I don’t think anybody expects 1.0 to be perfect. 1.0 isn’t the milestone of success, it’s the crossing of the Rubicon, the point where drastic change becomes inevitable. It’s the milestone where the old code, with all its faults, dies, and out of it is born a new codebase.

So now I’m going to start on uReadIt 2.0, starting fresh, with the latest Ubuntu UI Toolkit and platform APIs. It won’t be just a feature-for-feature rewrite either, I plan to make this a great Reddit client for both the phone and desktop user. To that end, I plan to add the following:

  • A full Javascript library for interacting with the Reddit API
  • User account support, which additionally will allow:
    • Posting articles & comments
    • Reading messages in your inbox
    • Upvoting and downvoting articles and comments
  • Convergence from the start, so it’s usable on the desktop as well
  • Re-introduce link sharing via Content-Hub
  • Take advantage of new features in the UITK such as UbuntuListView filtering & pull-to-refresh, and left/right swipe gestures on ListItems

Another change, which I talked about in a previous post, will be to the license of the application. Where uReadIt 1.0 is GPLv3, the next release will be under a BSD license.

Ubuntu Podcast from the UK LoCo: S07E28 – The One with the List

Planet Ubuntu - Fri, 2014-10-10 10:59

We’re back with Season Seven, Episode Twenty-Eight of the Ubuntu Podcast! Alan Pope, Laura Cowen, Mark Johnson, and Tony Whitmore! We ate this carrot cake from the Co-op. It’s very tasty.


In this week’s show:

  • We also discuss:
  • We share some Command Line Lurve which saves you valuable keystrokes:

    tar xvf archive.tar.bz2
    tar xvf foo.tar.gz

    Tar now auto-detects the compression algorithm used!

  • And we read your feedback. Thanks for sending it in!
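
The auto-detection tip above can be tried end to end; this is a minimal, self-contained sketch (file and directory names are illustrative):

```shell
# Create two archives of the same directory with different compression,
# then extract both with the same flag-less `tar xf` invocation --
# tar inspects the file itself, no -z or -j needed.
mkdir -p demo && echo "hello" > demo/file.txt
tar czf archive.tar.gz demo           # gzip-compressed
tar cjf archive.tar.bz2 demo          # bzip2-compressed
rm -rf demo
tar xf archive.tar.gz                 # gzip detected automatically
rm -rf demo
tar xf archive.tar.bz2                # bzip2 detected automatically
cat demo/file.txt                     # prints: hello
```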

We’ll be back next week, so please send your comments and suggestions to: podcast@ubuntu-uk.org
Join us on IRC in #uupc on Freenode
Leave a voicemail via phone: +44 (0) 203 298 1600, sip: podcast@sip.ubuntu-uk.org and skype: ubuntuukpodcast
Follow us on Twitter
Find our Facebook Fan Page
Follow us on Google+

Martin Pitt: Running autopkgtests in the cloud

Planet Ubuntu - Fri, 2014-10-10 07:25

It’s great to see more and more packages in Debian and Ubuntu getting an autopkgtest. We now have some 660, and soon we’ll get another ~ 4000 from Perl and Ruby packages. Both Debian’s and Ubuntu’s autopkgtest runner machines are currently static manually maintained machines which ache under their load. They just don’t scale, and at least Ubuntu’s runners need quite a lot of handholding.

This needs to stop. To quote Tim “The Tool Man” Taylor: We need more power! This is a perfect scenario to be put into a cloud with ephemeral VMs to run tests in. They scale, there is no privacy problem, and maintenance of the hosts then becomes Somebody Else’s Problem.

I recently brushed up autopkgtest’s ssh runner and the Nova setup script. Previous versions didn’t support “revert” yet, tests that leaked processes caused eternal hangs due to the way ssh works, and image building wasn’t yet supported well. autopkgtest 3.5.5 now gets along with all that and has a dozen other fixes. So let me introduce the Binford 6100 variable horsepower DEP-8 engine python-coated cloud test runner!

While you can run adt-run from your home machine, it’s probably better to do it from an “autopkgtest controller” cloud instance as well. Testing frequently requires copying files and built package trees between testbeds and controller, which can be quite slow from home and causes timeouts. The requirements on the “controller” node are quite low — you either need the autopkgtest 3.5.5 package installed (possibly a backport to Debian Wheezy or Ubuntu 12.04 LTS), or run it from git ($checkout_dir/run-from-checkout), and other than that you only need python-novaclient and the usual $OS_* OpenStack environment variables. This controller can also stay running all the time and easily drive dozens of tests in parallel as all the real testing action is happening in the ephemeral testbed VMs.
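
For illustration, the “usual $OS_* OpenStack environment variables” on the controller look something like the following; all values here are placeholders (your cloud provider’s RC file supplies the real ones), and python-novaclient reads them from the environment:

```shell
# Illustrative OpenStack credentials for the controller node.
# Every value below is a placeholder, not a real endpoint or account.
export OS_AUTH_URL="https://keystone.example.com:5000/v2.0"
export OS_TENANT_NAME="myproject"
export OS_USERNAME="myuser"
export OS_PASSWORD="secret"
export OS_REGION_NAME="region1"
```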

The most important preparation step to do for testing in the cloud is quite similar to testing in local VMs with adt-virt-qemu: You need to have suitable VM images. They should be generated every day so that the tests don’t have to spend 15 minutes on dist-upgrading and rebooting, and they should be minimized. They should also be as similar as possible to local VM images that you get with vmdebootstrap or adt-buildvm-ubuntu-cloud, so that test failures can easily be reproduced by developers on their local machines.

To address this, I refactored the entire knowledge of how to turn a pristine “default” vmdebootstrap or cloud image into an autopkgtest environment into a single /usr/share/autopkgtest/adt-setup-vm script. adt-buildvm-ubuntu-cloud now uses this, you should use it with vmdebootstrap --customize (see adt-virt-qemu(1) for details), and it’s also easy to run for building custom cloud images: essentially, you pick a suitable “pristine” image, nova boot an instance from it, run adt-setup-vm through ssh, then turn this into a new adt-specific "daily" image with nova image-create. I wrote a little script create-nova-adt-image.sh to demonstrate and automate this; the only parameter it takes is the name of the pristine image to base it on. This was tested on Canonical's Bootstack cloud, so it might need some adjustments on other clouds.

Thus something like this should be run daily (pick the base images from nova image-list):

$ ./create-nova-adt-image.sh ubuntu-utopic-14.10-beta2-amd64-server-20140923-disk1.img
$ ./create-nova-adt-image.sh ubuntu-utopic-14.10-beta2-i386-server-20140923-disk1.img

This will generate adt-utopic-i386 and adt-utopic-amd64.

Now I picked 34 packages that have the "most demanding" tests, in terms of package size (libreoffice), kernel requirements (udisks2, network manager), reboot requirement (systemd), lots of brittle tests (glib2.0, mysql-5.5), or needing Xvfb (shotwell):

$ cat pkglist
apport apt aptdaemon apache2 autopilot-gtk autopkgtest binutils
chromium-browser cups dbus gem2deb glib-networking glib2.0 gvfs kcalc
keystone libnih libreoffice lintian lxc mysql-5.5 network-manager nut
ofono-phonesim php5 postgresql-9.4 python3.4 sbuild shotwell systemd-shim
ubiquity ubuntu-drivers-common udisks2 upstart

Now I created a shell wrapper around adt-run to work with the parallel tool and to keep the invocation in a single place:

$ cat adt-run-nova
#!/bin/sh -e
adt-run "$1" -U -o "/tmp/adt-$1" --- ssh -s nova -- \
    --flavor m1.small --image adt-utopic-i386 \
    --net-id 415a0839-eb05-4e7a-907c-413c657f4bf5

Please see /usr/share/autopkgtest/ssh-setup/nova for details of the arguments. --image is the image name we built above, --flavor should use a suitable memory/disk size from nova flavor-list and --net-id is an "always need this constant to select a non-default network" option that is specific to Canonical Bootstack.

Finally, let's run the packages from above using ten VMs in parallel:

parallel -j 10 ./adt-run-nova -- $(< pkglist)

After a few iterations of bug fixing, only two failures are left, both due to flaky tests; the infrastructure now seems to hold up fairly well.
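
Spotting those failures across many packages can be scripted, too. This is a rough sketch, not from the original post: the wrapper writes one output directory per package via "-o /tmp/adt-$1", and the sketch assumes the per-test PASS/FAIL lines land in the "summary" file there. The directories and results below are fabricated purely for demonstration:

```shell
# Fabricate two adt-run output directories so the sweep has input.
base=$(mktemp -d)
mkdir -p "$base/adt-glib2.0" "$base/adt-mysql-5.5"
echo "installed-tests PASS" > "$base/adt-glib2.0/summary"
echo "main-suite FAIL non-zero exit status 1" > "$base/adt-mysql-5.5/summary"

# Sweep every per-package directory and count the failing ones.
fails=0
for d in "$base"/adt-*; do
    pkg=${d##*/adt-}
    if grep -q FAIL "$d/summary"; then
        echo "$pkg: FAIL"
        fails=$((fails + 1))
    else
        echo "$pkg: ok"
    fi
done
echo "failures: $fails"
```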

Meanwhile, Vincent Ladeuil is working full steam to integrate this new stuff into the next-gen Ubuntu CI engine, so that we can soon deploy and run all this fully automatically in production.

Happy testing!

Forums Council: Happy Birthday Ubuntu Forums

Planet Ubuntu - Fri, 2014-10-10 05:53

Somewhere roundabout 10 years ago Ryan Troy started what we all know and love – Ubuntu Forums.

The look and feel has gone through several iterations, matching the Ubuntu colour scheme evolutions. Each look & feel has had its own crowd saying the previous one was better, but the one constant has been the members, who in their thousands log on to share knowledge.


As part of the forum-wide celebration there’s a custom avatar you can use if you wish – and if you have fewer than 10 posts, you too can use it, now that we’ve changed the 10-post rule to allow custom avatars. We’re also making some changes to how we deal with other operating systems – soon.

For now we’ll be checking the posts that get made roundabout midnight of the 9th – if we pick you – expect a PM from one of us.

Thanks for your participation in helping the forum become what it is, as we pass 2 million threads and rapidly approach 2 million users.


Scarlett Clark: Kubuntu: KDE Frameworks 5.3.0 Now released to archive.

Planet Ubuntu - Fri, 2014-10-10 01:07

Frameworks 5.3.0 has finished uploading to the archive! An apt-get update followed by an upgrade is all that is required.
We are currently finishing up Plasma 5.1.0! The problems encountered during the beta have been resolved.

Kubuntu Wire: Weta Uses Kubuntu for Hobbit

Planet Ubuntu - Thu, 2014-10-09 10:30

Top open source news website TheMukt has an article headlined KDE’s Plasma used in Hobbit movies.  Long-time fans of Kubuntu will know of previous boasts that they use a 35,000-core Ubuntu farm to render films like Avatar and The Hobbit, supported by a Kubuntu desktop.  Great to see they’re making use of Plasma 4 and its desktop capabilities.

Marcin Juszkiewicz: 2 years of AArch64 work

Planet Ubuntu - Wed, 2014-10-08 17:55

I do not remember exactly when I started working on ARMv8 stuff. I checked old emails from my Linaro times and found that we discussed an AArch64 bootstrap using OpenEmbedded during Linaro Connect Asia (June 2012). But it had to wait a bit…

First we took OpenEmbedded and created all the tasks/images we needed, but built them for 32-bit ARM. Then during September we had all the toolchain parts available: binutils was public, gcc was public, glibc was on its way to release. I remember the moment when I built my first “helloworld” — probably as one of the first people outside ARM and their hardware partners.

In the first week of October we had an ARMv8 sprint in Cambridge, UK (in the Linaro and ARM offices). When I arrived and took a seat, I got the news that glibc had just gone public. I fetched it, rebased my OpenEmbedded tree to drop traces of “private” patches, and started a build. Once it finished, everything went public in the git.linaro.org repository.

But we still lacked hardware… The only thing available was the Versatile Express emulator, which required a license from ARM Ltd. But then a free (though proprietary) emulator was released, so everyone was able to boot our images. OMG, it was so slow…

Then the fun porting work started. I patched this and that, sent patches to OpenEmbedded and to upstream projects, and time went on. And on.

In January 2013 I got X11 running on emulated AArch64. It took a few months before other distributions reached that point.

February 2013 was a nice moment, as the Debian/Ubuntu team presented their AArch64 port. It was their first architecture bootstrapped without using external toolchains. The work was done in Ubuntu due to its different approach to development than Debian's. All the work was merged back, so some time later Debian also had an AArch64 port.

It was March or April when OpenSUSE did a mass build of the whole distribution for AArch64. They had the largest number of packages built for quite a long time. But I did not track their progress too closely.

And then 31st May came: the day when I left Linaro. But I had already had a call with Red Hat, so the future looked quite bright ;D

June was the month when the first silicon was publicly presented. I do not know what Jon Masters was showing, but it was probably some prototype from Applied Micro.

On 1st August I was officially hired by Red Hat and started a month later. My wife joked that the next step would be Retired Software Engineer ;D

So I moved from OpenEmbedded to Fedora with my AArch64 work. A lot of the work here was already done, as Fedora developers had been planning a 64-bit ARM port a few years earlier, when it was still at the design phase. So when Fedora 15 was bootstrapped for “armhf”, it was done as preparation for AArch64. The 64-bit ARM port was started in October 2012 with Fedora 17 packages (and switched to Fedora 19 during the work).

My first task at Red Hat was getting Qt4 working properly. That beast took a few days in the foundation model… Good thing we got our first hardware then, so things went faster. One or two months later I had a remote APM Mustang available for my porting work.

In January 2014 QEMU got AArch64 system emulation. People started migrating from the foundation model.

The next months were full of hardware announcements: AMD, Cavium, Freescale, Marvell, MediaTek, NVIDIA, Qualcomm and others.

In the meantime I decided to try a crazy experiment with OpenEmbedded. I had been the first to use it to build for AArch64, so why not be the first to build OE on 64-bit ARM?

And then June came, with an APM Mustang for use at home. Finally X11 forwarding started to be useful. One of the first things to do was running Firefox on AArch64, just to make fun of the piece of software whose porting/upstreaming took me the most time.

It did not take me long to get the idea of transforming the APM Mustang (which I named “pinkiepie”, as all machines at my home are named after cartoon characters) into an ARMv8 desktop. I am still waiting for a PCI Express riser and USB host support.

Now it is October. Soon it will be 2 years since people got the foundation model. And there are rumors of AArch64 development boards in production with prices below 100 USD. I will do whatever is needed to get one of them on my desk ;)

All rights reserved © Marcin Juszkiewicz
2 years of AArch64 work was originally posted on Marcin Juszkiewicz website

Related posts:

  1. AArch64 for everyone
  2. AArch64 porting update
  3. ARM 64-bit porting for OpenEmbedded

Scott Kitterman: Thanks Canonical (really)

Planet Ubuntu - Wed, 2014-10-08 15:49

Some of you might recall seeing this insights article about Ubuntu and the City of Munich.  What you may not know is that the desktop in question is the Kubuntu flavor of Ubuntu.  This wasn’t clear in the original article and I really appreciate Canonical being willing to change it to make that clear.

Ubuntu Kernel Team: Kernel Team Meeting Minutes – October 07, 2014

Planet Ubuntu - Tue, 2014-10-07 17:15
Meeting Minutes

IRC Log of the meeting.

Meeting minutes.

Agenda

20141007 Meeting Agenda


Release Metrics and Incoming Bugs

Release metrics and incoming bug data can be reviewed at the following link:

  • http://people.canonical.com/~kernel/reports/kt-meeting.txt


Status: Utopic Development Kernel

The Utopic kernel has been rebased to the v3.16.4 upstream stable
kernel. This is available for testing as of the 3.16.0-21.28 upload to
the archive. Please test and let us know your results.
Also, Utopic Kernel Freeze is this Thurs Oct 9. Any patches submitted
after kernel freeze are subject to our Ubuntu kernel SRU policy. I sent
a friendly reminder about this to the Ubuntu kernel-team mailing list
yesterday as well.
—–
Important upcoming dates:
Thurs Oct 9 – Utopic Kernel Freeze (~2 days away)
Thurs Oct 16 – Utopic Final Freeze (~1 week away)
Thurs Oct 23 – Utopic 14.10 Release (~2 weeks away)


Status: CVE’s

The current CVE status can be reviewed at the following link:

http://people.canonical.com/~kernel/cve/pkg/ALL-linux.html


Status: Stable, Security, and Bugfix Kernel Updates – Trusty/Precise/Lucid

Status for the main kernels, until today (Sept. 30):

  • Lucid – Testing
  • Precise – Testing
  • Trusty – Testing

    Current opened tracking bugs details:

  • http://kernel.ubuntu.com/sru/kernel-sru-workflow.html

    For SRUs, SRU report is a good source of information:

  • http://kernel.ubuntu.com/sru/sru-report.html

    Schedule:

    cycle: 19-Sep through 11-Oct
    ====================================================================
    19-Sep Last day for kernel commits for this cycle
    21-Sep – 27-Sep Kernel prep week.
    28-Sep – 04-Oct Bug verification & Regression testing.
    05-Oct – 08-Oct Regression testing & Release to -updates.


Open Discussion or Questions? Raise your hand to be recognized

No open discussion.
