I ride public transit, a lot. This gives me the "privilege" to (too often) overhear important matters that are being discussed over the phone.
Can you guess the most common use case for "smart" phones? Apparently it's to obtain the answer to the world's most important question: "Where are you?"
Really? We can send a rover to Mars but we can't solve this problem. Is the world engaged in one giant game of "Where's Waldo?" I have yet to meet a phone that is smart.
Phones have GPS, wifi, and of course cellular signalling. They also are programmable. One would think that an off-the-shelf "smart" phone would eliminate the "Where are you?" call once and for all. Or, could it be that the mobile carriers love to prey on people by forcing them to consume and be billed for lots and lots of extraneous voice minutes? Hmm...
So, I'm sorry phones. You are *not* smart. You are still as dumb as the first feature phones.
I remain optimistic that the Ubuntu Phone will overcome this issue. In my lifetime, I hope to be riding a bus, a subway, or a streetcar never to hear the words "Where are you?" uttered again.
"Where's Waldo" image by William Murphy
A scope is a tailored view for a set of data, that can use custom layouts, display and branding options. From RSS news feeds to weather data and search engine results, the flexibility of scopes allows you to provide a simple, recognizable and consistent experience with the rest of the OS.
Scopes can also integrate with system-wide user accounts (email, social networks…), split your content into categories and aggregate into each other (for example, a “shopping” scope aggregating results from several store scopes).
In this tutorial, you will learn how to write a scope in C++ for SoundCloud, using the Ubuntu SDK. Read…
A few weeks ago we launched ‘Machine view’ for Juju, a feature designed to allow users to easily visualise and manage the machines running in their cloud environments. In this post I want to share with you some of the challenges we faced and the solutions we designed in the process of creating it.
A little bit about Juju…
For those of you that are unfamiliar with Juju, a brief introduction. Juju is a software tool that allows you to design, build and manage application services running in the cloud. You can use Juju through the command-line or via a GUI and our team is responsible for the user experience of Juju in the GUI.
First came ‘Service View’
In the past we have primarily focused on Juju’s ‘Service view’ – a virtual canvas that enables users to design and connect the components of their given cloud environment.
This view is fantastic for modelling the concepts and relationships that define an application environment. However, for each component or service block, a user can have anything from one unit to hundreds or thousands of units, depending on the scale of the environment, and before machine view, units meant machines.
The goal of machine view was to surface these units and enable users to visualise, manage and optimise their use of machines in the cloud.
‘Machine view’: design challenges
There were a number of challenges we needed to address in terms of layout and functionality:
- The scalability of the solution
- The glanceability of the data
- The ability to customise and sort the information
- The ability to easily place and move units
- The ability to track changes
- The ability to deploy easily to the cloud
I’ll briefly go into each one of these topics below.
Scalability: Environments can be made up of a couple of machines or thousands. This means that giving the user a clear, light and accessible layout was incredibly important – we had to make sure the design looked and worked great at both ends of the spectrum.
Glanceability: Users need simple comparative information to help them choose the right machine at a glance. We designed and tested hundreds of different ways of displaying the same data and eventually ended up with an extremely cut-back listing which was clean and balanced.
The ability to sort and customise: As it was possible and probable that users would scale environments to thousands of machines, we needed to provide the ability to sort and customise the views. Users can use the menus at the top of each column to hide information from view and customise the data they want visible at a glance. As users become more familiar with their machines they could turn off extra information for a denser view of their machines. Users are also given basic sorting options to help them find and explore their machines in different ways.
The ability to easily place and move units: Machine view is built around the concept of manual placement – the ability to co-locate items (put more than one) on a single machine, or to define specific types of machines for specific tasks (as opposed to automatic placement, where each unit is given a machine of a pre-determined specification). We wanted to enable users to create the most optimised machine configurations for their applications.
Drag and drop was a key interaction that we wanted to exploit for this interface, because it would significantly simplify the process of manually placing units. The three-column layout aided the use of drag and drop: users are able to pick up units that need placing on the left-hand side and drag them to a machine in the middle column or a container in the third column. The headers also change to reveal drop zones, allowing users to create new machines and containers in one fluid action, keeping all of the primary interactions in view and accessible at all times.
The ability to track changes: We also wanted to expose the changes being made throughout users’ environments as they went along, and allow them to commit batches of changes all together. Deciding which changes were exposed and the design of the uncommitted notification was difficult: we had to make sure the notifications were not viewed as repetitive, that they were identifiable, and that they could be used throughout the interface.
The ability to deploy easily to the cloud: Before machine view it was impossible for someone to design their entire environment before sending it to the cloud. The deployment bar is a new, ever-present canvas element that rationalises all of the changes made into a neat listing; it is also where users can deploy or commit those changes. Look for more information about the deployment bar in another post.
We hope that machine view will really help Juju users by increasing the level of control and flexibility they have over their cloud infrastructure.
This project wouldn’t have been possible without the diligent help of the Juju GUI development team. Please take a look and let us know what you think. Find out more about Juju, Machine View or take it for a spin.
The Nobel Peace Prize was inspiring to see this week. A young Pakistani girl who was already known locally for supporting the right of girls to attend school was shot by the Taliban to shut her up. Instead, she now has a world stage, and says that she is determined to work even harder for the right of girls to go to school. I really liked that Malala shares the prize. CNN:
Awarding the Peace Prize to a Pakistani Muslim and an Indian Hindu gives a message to people of love between Pakistan and India, and between different religions, Yousafzai said. The decision sends a message that all people, regardless of language and religion, should fight for the rights of women, children and every human being. - http://www.cnn.com/2014/10/10/world/europe/nobel-peace-prize/index.html
Another of my heroes spoke out this week: Kathy Sierra. Her blog is reprinted on Wired: http://www.wired.com/2014/10/trolls-will-always-win/. After the absolute horror she endured, she continues to speak out, continues to calmly state the facts, continues to lead. And yet the majority lauds her attackers, because they are Bad Boyz! I guess. I don't agree with her that trolls always win, because I can't. Kathy Sierra is still speaking out, so SHE wins, and we all win.
I just finished a lovely book by Cheryl Strayed: Wild: From Lost to Found on the Pacific Crest Trail. Cheryl isn't my hero, but during her journey she became her own hero, so that's OK. My husband is going to walk that trail next year, and reading her book makes me so thankful that he is preparing and training for the journey! Her honesty about the pain she endured when her mother died, and her marriage ended, brought to mind many memories about the death of my own mother, and the death of another of my heroes, my cousin Carol.
Carol died 11 years ago, and I still painfully miss her. I know that her son grieves her loss even more deeply. I hope your journey has taken you to a place of rest, my dear Carol.
I was ranting to some of my colleagues the other day about "smart" phones, and just how really dumb they are. The topic generated a lively discussion so I thought I'd share the fun!
I have yet to meet a phone that is smart.
Phones have GPS, motion sensing, and NFC, yada yada, yet they still alert/ring when someone is driving. Has society not learned that distracted driving kills people? Not cool.
So, I'm sorry phones. You are *not* smart. You are as dumb as the first feature phones.
Having said that, I still have optimism that the Ubuntu Phone will become the world's first truly smart phone, respecting its owner and "doing the right thing".
dumb phone image by Tom Hoyle
Some time early in 2011, support for other operating systems was curtailed due to the prevailing server issues we were dealing with at that time.
Since then, we have moved on and the outlook is looking better server space wise.
Given that, and with the recent 10 year anniversary we’ve just seen – what better time than now – we’ve rebuilt support forums for selected operating systems.
You can see what we have done here.
This week while reading Michael Hall's blog post about 1.0 being the deadliest milestone I couldn't help but grin when I read,
1.0 isn’t the milestone of success, it’s the crossing of the Rubicon, the point where drastic change becomes inevitable. It’s the milestone where the old code, with all its faults, dies, and out of it is born a new codebase.
This was exactly the thought that crossed my mind when I first heard about the Clock Reboot at the Malta Sprint.
Let me share how the Clock Reboot came to be :). At the start of the Malta sprint I was told that the Clock app would receive some new designs that would need to be implemented for the Release-To-Manufacture (RTM) milestone, which at that time was 4 months away. On the eve of the meeting, we (the Canonical Community team and the Core Apps team) rushed into the meeting room to find Giorgio Venturi, John Lea, Benjamin Keyser, Mark Shuttleworth, and other department heads looking over the designs.
Giorgio went over the presentation and explained how the designs were supposed to work. At the end of the presentation, I was asked if this was feasible within the given timeframe, and everyone started to look at me. Honestly, in that moment I felt a shiver run down my spine. I was uncertain, given that it had taken the clock app almost a year to buckle down and get to a point where it was usable, and I wondered if 4 months was sufficient.
Strangely enough, during the presentation I recollected a conversation I had had with Kyle Nitzsche a few days before that meeting, where he asked if the clock app code changes had started to stabilize (no invasive code changes), considering we were this close to RTM.
Fast-forwarding to today, I think that the Clock Reboot was a blessing in disguise. I have been working on the clock app since the beginning, which was somewhere in the first quarter of 2013. And I can confidently say that I made a ton of mistakes in the so-called 1.0 milestone. The Clock Reboot gave me the opportunity to start from scratch and carefully iterate on every piece of code going into trunk while avoiding the mistakes of the past.
And I like to think that it has resulted in a codebase that is much cleaner, leaner and more manageable, and in a more reliable clock experience for the user.
The Music App is going through that same transition, and I wish them the best; I think it will make them stronger and better.
I'd like to end this blog with a blast from the past, since it is the one year anniversary of the clock app! ;) It was first released to the Ubuntu Touch store on the 10th of October 2013.
Important references about Central America and open hardware / single-board computers & microcontrollers
All of this data is according to what people said in Panama.
- Arduino is used in 6 countries of Central America as part of their computer science or electrical engineering studies
- Raspberry Pi is also used in all 7 countries of Central America
- Just 2 countries of Central America have heard about the BeagleBone Black (Costa Rica & Honduras)
- Just 1 country of Central America has worked with a BeagleBone Black (Honduras)
What was my talk about?
Basically, my talk was about what the BeagleBoard.org community and the BeagleBone Black are.
- I did a presentation in HTML explaining BeagleBoard.org's history and goals. Check the presentation here
- I showed some slides from Jason Kridner's presentation.
- I ran a small demo on how to build a traffic light with LEDs, showing how to use the bonescript library. Here
They loved it!
- I will say it again, people loved it!
- They were surprised by the bonescript library.
- That's basically one of the major problems of other platforms: how to combine the web with a hardware platform.
- The University of Costa Rica will buy BeagleBone Blacks for next year. (They had heard about the project and were interested in it, but didn't know about all the capabilities it had.)
- Some professors at Panamanian universities were quite happy to see that they could use this tool in their courses thanks to the bonescript library; with the BeagleBone Black it is now easy to assign projects combining web applications and hardware.
Some images of the event:
This has been an excellent experience. I met up with several friends and community members whom I had met at the Primer Encuentro Centroamericano in the city of Estelí, Nicaragua, and of course new people from Central America.
Something new at the ECSL was the presence of several recognized people from the open source community. We had Ramón Ramón, a famous blogger in Latin America; Guillermo Movia, Mozilla's Community Manager for Latin America; and other people from the open source community in Central America.
I'm going to write in a future post about my presentation of BeagleBoard.org and other talks I had with the Mozilla people. By the way, I got the opportunity to meet Jorge Aguilar, founder of the Mozilla community of Honduras.
If you want to read this presentation in Spanish, click here.
So it’s finally happened, one of my first Ubuntu SDK apps has reached an official 1.0 release. And I think we all know what that means. Yup, it’s time to scrap the code and start over.
It’s a well-established mantra in software development, codified by Fred Brooks, that you will end up throwing away the first attempt at a new project. The releases between 0.1 and 0.9 are a written history of your education about the problem, the tools, or the language you are learning. And learn I did, I wrote a whole series of posts about my adventures in writing uReadIt. Now it’s time to put all of that learning to good use.
Oftentimes projects still spend an extremely long time in this 0.x stage, getting ever closer but never reaching that 1.0 release. This isn’t because they think 1.0 should wait until the codebase is perfect; I don’t think anybody expects 1.0 to be perfect. 1.0 isn’t the milestone of success, it’s the crossing of the Rubicon, the point where drastic change becomes inevitable. It’s the milestone where the old code, with all its faults, dies, and out of it is born a new codebase.
So now I’m going to start on uReadIt 2.0, starting fresh, with the latest Ubuntu UI Toolkit and platform APIs. It won’t be just a feature-for-feature rewrite either, I plan to make this a great Reddit client for both the phone and desktop user. To that end, I plan to add the following:
- User account support, which additionally will allow:
- Posting articles & comments
- Reading messages in your inbox
- Upvoting and downvoting articles and comments
- Convergence from the start, so it’s usable on the desktop as well
- Re-introduce link sharing via Content-Hub
- Take advantage of new features in the UITK such as UbuntuListView filtering & pull-to-refresh, and left/right swipe gestures on ListItems
Another change, which I talked about in a previous post, will be to the license of the application. Where uReadIt 1.0 is GPLv3, the next release will be under a BSD license.
In this week’s show:
- We interviewed Andy Stanford-Clark about his HyPi project, which he gave a talk about at OggCamp 14. It’s based on a hydrogen fuel cell from Arcola Energy:
- We also discuss:
- We share some Command Line Lurve which saves you valuable keystrokes:
tar xvf archive.tar.bz2
tar xvf foo.tar.gz
Tar now auto-detects the compression algorithm used!
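A quick demonstration of that auto-detection (a self-contained sketch using gzip; the same plain `xvf` works for bzip2 and xz archives too, since tar sniffs the file's magic bytes rather than trusting the extension):

```shell
#!/bin/sh
# Build a gzip-compressed tarball, then extract it without the 'z' flag:
# tar detects the compression format from the file contents.
set -e
work=$(mktemp -d)
cd "$work"
mkdir payload
echo "hello" > payload/note.txt
tar czf archive.tar.gz payload      # explicitly gzip-compressed on creation
rm -rf payload
tar xvf archive.tar.gz              # no 'z' needed: tar auto-detects gzip
```

After extraction, `payload/note.txt` is back with its original contents.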
And we read your feedback. Thanks for sending it in!
We’ll be back next week, so please send your comments and suggestions to: email@example.com
Join us on IRC in #uupc on Freenode
Leave a voicemail via phone: +44 (0) 203 298 1600, sip: firstname.lastname@example.org and skype: ubuntuukpodcast
Follow us on Twitter
Find our Facebook Fan Page
Follow us on Google+
It’s great to see more and more packages in Debian and Ubuntu getting an autopkgtest. We now have some 660, and soon we’ll get another ~ 4000 from Perl and Ruby packages. Both Debian’s and Ubuntu’s autopkgtest runner machines are currently static manually maintained machines which ache under their load. They just don’t scale, and at least Ubuntu’s runners need quite a lot of handholding.
This needs to stop. To quote Tim “The Tool Man” Taylor: We need more power! This is a perfect scenario to be put into a cloud with ephemeral VMs to run tests in. They scale, there is no privacy problem, and maintenance of the hosts then becomes Somebody Else’s Problem.
I recently brushed up autopkgtest’s ssh runner and the Nova setup script. Previous versions didn’t support “revert” yet, tests that leaked processes caused eternal hangs due to the way ssh works, and image building wasn’t yet supported well. autopkgtest 3.5.5 now gets along with all that and has a dozen other fixes. So let me introduce the Binford 6100 variable horsepower DEP-8 engine python-coated cloud test runner!
While you can run adt-run from your home machine, it’s probably better to do it from an “autopkgtest controller” cloud instance as well. Testing frequently requires copying files and built package trees between testbeds and controller, which can be quite slow from home and causes timeouts. The requirements on the “controller” node are quite low — you either need the autopkgtest 3.5.5 package installed (possibly a backport to Debian Wheezy or Ubuntu 12.04 LTS), or run it from git ($checkout_dir/run-from-checkout), and other than that you only need python-novaclient and the usual $OS_* OpenStack environment variables. This controller can also stay running all the time and easily drive dozens of tests in parallel as all the real testing action is happening in the ephemeral testbed VMs.
The most important preparation step to do for testing in the cloud is quite similar to testing in local VMs with adt-virt-qemu: You need to have suitable VM images. They should be generated every day so that the tests don’t have to spend 15 minutes on dist-upgrading and rebooting, and they should be minimized. They should also be as similar as possible to local VM images that you get with vmdebootstrap or adt-buildvm-ubuntu-cloud, so that test failures can easily be reproduced by developers on their local machines.
To address this, I refactored all of the knowledge of how to turn a pristine “default” vmdebootstrap or cloud image into an autopkgtest environment into a single /usr/share/autopkgtest/adt-setup-vm script. adt-buildvm-ubuntu-cloud now uses this, you should use it with vmdebootstrap --customize (see adt-virt-qemu(1) for details), and it’s also easy to run for building custom cloud images: essentially, you pick a suitable “pristine” image, nova boot an instance from it, run adt-setup-vm through ssh, then turn this into a new adt-specific "daily" image with nova image-create. I wrote a little script create-nova-adt-image.sh to demonstrate and automate this; the only parameter it takes is the name of the pristine image to base it on. This was tested on Canonical's Bootstack cloud, so it might need some adjustments on other clouds.
Thus something like this should be run daily (pick the base images from nova image-list):
$ ./create-nova-adt-image.sh ubuntu-utopic-14.10-beta2-amd64-server-20140923-disk1.img
$ ./create-nova-adt-image.sh ubuntu-utopic-14.10-beta2-i386-server-20140923-disk1.img
This will generate adt-utopic-i386 and adt-utopic-amd64.
Now I picked 34 packages that have the "most demanding" tests, in terms of package size (libreoffice), kernel requirements (udisks2, network manager), reboot requirement (systemd), lots of brittle tests (glib2.0, mysql-5.5), or needing Xvfb (shotwell):
$ cat pkglist
apport apt aptdaemon apache2 autopilot-gtk autopkgtest binutils chromium-browser cups dbus gem2deb glib-networking glib2.0 gvfs kcalc keystone libnih libreoffice lintian lxc mysql-5.5 network-manager nut ofono-phonesim php5 postgresql-9.4 python3.4 sbuild shotwell systemd-shim ubiquity ubuntu-drivers-common udisks2 upstart
Now I created a shell wrapper around adt-run to work with the parallel tool and to keep the invocation in a single place:
$ cat adt-run-nova
#!/bin/sh -e
adt-run "$1" -U -o "/tmp/adt-$1" --- ssh -s nova -- \
    --flavor m1.small --image adt-utopic-i386 \
    --net-id 415a0839-eb05-4e7a-907c-413c657f4bf5
Please see /usr/share/autopkgtest/ssh-setup/nova for details of the arguments. --image is the image name we built above, --flavor should use a suitable memory/disk size from nova flavor-list and --net-id is an "always need this constant to select a non-default network" option that is specific to Canonical Bootstack.
Finally, let's run the packages from above using ten VMs in parallel:
parallel -j 10 ./adt-run-nova -- $(< pkglist)
After a few iterations of bug fixing there are now only two failures left, which are due to flaky tests; the infrastructure now seems to hold up fairly well.
Meanwhile, Vincent Ladeuil is working full steam to integrate this new stuff into the next-gen Ubuntu CI engine, so that we can soon deploy and run all this fully automatically in production.
The look and feel has gone through several iterations, matching the Ubuntu colour scheme evolutions. Each look & feel has had its own crowd saying the previous one was better, but the one constant has been the members who in their thousands log on to share knowledge.
As part of the forum wide celebration – there’s a custom avatar you can use if you wish, and if you’ve less than 10 posts, you too can use it now that we’ve changed the 10 post rule to allow you to use custom avatars. We’re also making some changes to how we deal with other operating systems – soon.
For now we’ll be checking the posts that get made roundabout midnight of the 9th – if we pick you – expect a PM from one of us.
Thanks for your participation in helping the forum become what it is – as we pass through 2 million threads and rapidly approach 2 million users.
Frameworks 5.3.0 has finished uploading to the archive! An apt-get update followed by apt-get dist-upgrade is all that is required to upgrade.
We are currently finishing up Plasma 5.1.0! The problems encountered during the beta have been resolved.
Top open source news website TheMukt has an article headlined KDE’s Plasma used in Hobbit movies. Long time fans of Kubuntu will know of previous boasts that they use a 35,000-Core Ubuntu Farm to render films like Avatar and The Hobbit supported by a Kubuntu desktop. Great to see they’re making use of Plasma 4 and its desktop capabilities.
I do not remember exactly when I started working on ARMv8 stuff. I checked old emails from my Linaro times and found that we discussed an AArch64 bootstrap using OpenEmbedded during Linaro Connect Asia (June 2012). But it had to wait a bit…
First we took OpenEmbedded and created all the tasks/images we needed, but built them for 32-bit ARM. But during September we had all the toolchain parts available: binutils was public, gcc was public, glibc was on its way to being released. I remember the moment when I built my first “helloworld” — probably as one of the first people outside ARM and their hardware partners.
In the first week of October we had an ARMv8 sprint in Cambridge, UK (in the Linaro and ARM offices). When I arrived and took a seat, I got the news that glibc had just gone public. I fetched it, rebased my OpenEmbedded tree to drop traces of “private” patches and started a build. Once it finished, everything went public in the git.linaro.org repository.
But we still lacked hardware… The only thing available was the Versatile Express emulator, which required a license from ARM Ltd. But then a free (though proprietary) emulator was released, so everyone was able to boot our images. OMG, it was so slow…
Then the fun porting work started. I patched this and that, sent patches to OpenEmbedded and to upstream projects, and time went on. And on.
In January 2013 I got X11 running on emulated AArch64. It took a few months before other distributions got to that point.
February 2013 was a nice moment, as the Debian/Ubuntu team presented their AArch64 port. It was their first architecture bootstrapped without using external toolchains. The work was done in Ubuntu due to a different approach to development than Debian has. All the work was merged back, so some time later Debian also had an AArch64 port.
It was March or April when OpenSUSE did a mass build of the whole distribution for AArch64. They had the largest number of packages built for quite a long time, but I did not track their progress too much.
And then 31st May came. The day when I left Linaro. But I had already had a call with Red Hat, so the future looked quite bright ;D
June was the month when the first silicon was publicly presented. I do not know what Jon Masters was showing, but it probably was some prototype from Applied Micro.
On 1st August I was officially hired by Red Hat and started a month later. My wife joked that the next step would be Retired Software Engineer ;D
So I moved from OpenEmbedded to Fedora with my AArch64 work. A lot of work here was already done, as Fedora developers had been planning a 64-bit ARM port a few years before — when it was still at the design phase. So when Fedora 15 was bootstrapped for “armhf”, it was done as preparation for AArch64. The 64-bit ARM port was started in October 2012 with Fedora 17 packages (and switched to Fedora 19 during the work).
My first task at Red Hat was getting Qt4 working properly. That beast took a few days in the foundation model… Good thing we got the first hardware then, so it went faster. 1-2 months later I had a remote APM Mustang available for my porting work.
In January 2014 QEMU got AArch64 system emulation. People started migrating from the foundation model.
The next months were full of hardware announcements: AMD, Cavium, Freescale, Marvell, Mediatek, NVidia, Qualcomm and others.
In the meantime I decided to do a crazy experiment with OpenEmbedded. I was the first to use it to build for AArch64, so why not be the first to build OE on 64-bit ARM?
And then June came. With an APM Mustang for use at home. Finally X11 forwarding started to be useful. One of the first things to do was running Firefox on AArch64, just to make fun of running the software whose porting/upstreaming took me the most time.
It did not take me long to get the idea of transforming the APM Mustang (which I named “pinkiepie”, as all machines at my home are named after cartoon characters) into an ARMv8 desktop. I am still waiting for a PCI Express riser and USB host support.
Now it is October. Soon it will be 2 years since people got the foundation model. And there are rumors about AArch64 development boards in production with prices below 100 USD. I will do what's needed to get one of them on my desk ;)
Some of you might recall seeing this insights article about Ubuntu and the City of Munich. What you may not know is that the desktop in question is the Kubuntu flavor of Ubuntu. This wasn’t clear in the original article and I really appreciate Canonical being willing to change it to make that clear.
Release Metrics and Incoming Bugs
Release metrics and incoming bug data can be reviewed at the following link:
Status: Utopic Development Kernel
The Utopic kernel has been rebased to the v3.16.4 upstream stable
kernel. This is available for testing as of the 3.16.0-21.28 upload to
the archive. Please test and let us know your results.
Also, Utopic Kernel Freeze is this Thurs Oct 9. Any patches submitted
after kernel freeze are subject to our Ubuntu kernel SRU policy. I sent
a friendly reminder about this to the Ubuntu kernel-team mailing list
yesterday as well.
Important upcoming dates:
Thurs Oct 9 – Utopic Kernel Freeze (~2 days away)
Thurs Oct 16 – Utopic Final Freeze (~1 week away)
Thurs Oct 23 – Utopic 14.10 Release (~2 weeks away)
The current CVE status can be reviewed at the following link:
Status: Stable, Security, and Bugfix Kernel Updates – Trusty/Precise/Lucid
Status for the main kernels, until today (Sept. 30):
- Lucid – Testing
- Precise – Testing
- Trusty – Testing
Current opened tracking bugs details:
For SRUs, SRU report is a good source of information:
cycle: 19-Sep through 11-Oct
19-Sep Last day for kernel commits for this cycle
21-Sep – 27-Sep Kernel prep week.
28-Sep – 04-Oct Bug verification & Regression testing.
05-Oct – 08-Oct Regression testing & Release to -updates.
Open Discussion or Questions? Raise your hand to be recognized
No open discussion.
In this week’s show:
- We take a look at what’s been happening in the news:
- We take a look at what’s been happening in the community:
- New Ubuntu cloud images produced to address Shellshock
- The final beta of Ubuntu is released
- As are betas of all the other Ubuntu versions
- There was just time to get your nominations for the LoCo Council in
- Oh and there’s an event called OggCamp:
- OggCamp 2014 – 4th-5th October – Oxford, UK
- Go take a look at the website for links to all the lovely OggCamp sponsors.
We’ll be back next week, when we’ll have some mystery content and your feedback.
Please send your comments and suggestions to: email@example.com
Join us on IRC in #uupc on Freenode
Leave a voicemail via phone: +44 (0) 203 298 1600, sip: firstname.lastname@example.org and skype: ubuntuukpodcast
Follow us on Twitter
Find our Facebook Fan Page
Follow us on Google+
I was asked, not too long ago, what I hated about the community. The truth, then and now, is that I don’t hate anything about it. There is a lot I don’t like about what happens, of course, but nothing that I hate. I make an effort to understand people, to “grok” them if I may borrow the word from Heinlein. When you understand somebody, or in this case a community of somebodies, you understand the whole of them, the good and the bad. Now understanding the bad parts doesn’t make them any less bad, but it does provide opportunities for correcting or removing them that you don’t get otherwise.
You reap what you sow
People will usually respond in kind with the way they are treated. I try to treat everybody I interact with respectfully, kindly, and rationally, and I’ve found that I am treated that way back. But, if somebody is prone to arrogance or cruelty or passion, they will find far more of that treatment given back to them than the positive ones. They are quite often shocked when this happens. But when you are a source of negativity you drive away people who are looking for something positive, and attract people who are looking for something negative. It’s not absolute, nice people will have some unhappy followers, and grumpy people will have some delightful ones, but on average you will be surrounded by people who behave like you.
Don’t get even, get better
An eye for an eye makes the whole world blind, as the old saying goes. When somebody is rude or disrespectful to us, it’s easy to give in to the desire to be rude and disrespectful back. When somebody calls us out on something, especially in public, we want to call them out on their own problems to show everybody that they are just as bad. This might feel good in the short term, but it causes long-term harm to both the person who does it and the community they are a part of. This ties into what I wrote above, because even if you aren’t naturally a negative person, if you respond to negativity with more of the same, you’ll ultimately share the same fate. Instead, use that negativity as fuel to drive you forward in a positive way; respond with coolness, thoughtfulness, and introspection, and not only will you disarm the person who started it, you’ll attract far more of the kind of people and interactions that you want.

Know your audience
Your audience isn’t the person or people you are talking to. Your audience is the people who hear you. A common defense of Linus’ berating of kernel contributors is that he only does it to people he knows can take it. This defense is almost always countered, quite properly, by somebody pointing out that his actions are seen by far more than just their intended recipient. Whenever you interact with any member of your community in a public space, such as a forum or mailing list, treat it as if you were interacting with every member, because you are. Again, if you perpetuate negativity in your community, you will foster negativity in your community, either directly in response to you or indirectly by driving away those who are more positive in nature. Linus’ actions might be seen as a joke, or as necessary “tough love” to get the job done, but the LKML has a reputation of being inhospitable to potential contributors in no small part because of them. You can gather a large number of negative, or negativity-accepting, people into a community and get a lot of work done, but it’s easier and in my opinion better to have a large number of positive people doing it.

Monoculture is dangerous
I think all of us in the open source community know this, and most of us have said it at least once to somebody else. As noted security researcher Bruce Schneier says, “monoculture is bad; embrace diversity or die along with everyone else.” But it’s not just dangerous for software and agriculture, it’s dangerous to communities too. Communities need, desperately need, diversity, and not just for the immediate benefits that various opinions and perspectives bring. Including minorities in your community will point out flaws you didn’t know existed, because they didn’t affect anyone else; but a distro-specific bug in upstream is still a bug, and a minority-specific flaw in your community is still a flaw. Communities that are almost all male, or white, or western, aren’t necessarily bad because of their monoculture, but they should certainly consider themselves vulnerable and deficient because of it. Bringing in diversity will strengthen a community, and adding a minority contributor will ultimately benefit a project more than adding another member of the majority. When somebody from a minority tells you there is a problem in your community that you didn’t see, don’t try to defend it by pointing out that it doesn’t affect you; instead, treat it like you would a normal bug report from somebody on different hardware than you.

Good people are human too
The appendix is a funny organ. Most of the time it’s just there, innocuous or maybe even slightly helpful. But every so often one happens to, for whatever reason, explode and try to kill the rest of the body. People in a community do this too. I’ve seen a number of people who were good or even great contributors who, for whatever reason, had to explode, and they threatened to take down anything they were a part of when it happened. But these people were no more malevolent than your appendix is; they aren’t bad, even if they do need to be removed in order to avoid lasting harm to the rest of the body. Sometimes, once whatever caused their eruption has passed, these people can come back to being a constructive part of your community.

Love the whole, not the parts
When you look at it, all of it, the open source community is a marvel of collaboration, of friendship and family. Yes, family. I know I’m not alone in feeling this way about people I may not have ever met in person. And just like family you love them during the good and the bad. There are some annoying, obnoxious people in our family. There are good people who are sometimes annoying and obnoxious. But neither of those truths changes the fact that we are still a part of an amazing, inspiring, wonderful community of open source contributors and enthusiasts.