news aggregator

Rohan Garg: Plasma5 : Now more awesome as a Kubuntu ISO

Planet Ubuntu - Mon, 2014-07-28 09:39

The Kubuntu team is proud to announce the immediate availability of the Plasma 5 flavor of the Kubuntu ISO, which can be found here. Unlike its Neon 5 counterpart, this ISO contains packages made from the stock Plasma 5.0 release. The ISO is meant to be a technical preview of what is to come when Kubuntu switches to Plasma 5 by default in a future release.

A special note of thanks to the Plasma team for making a rocking release. If you enjoy using KDE as much as we do, please consider donating to Kubuntu and KDE :)

NB: When booting the live ISO, just hit the login button at the login screen and you’ll be logged into a Plasma 5 session.


Benjamin Kerensa: Mozilla at O’Reilly Open Source Convention

Planet Ubuntu - Mon, 2014-07-28 01:48

Mozilla OSCON 2014 Team

This past week marked my fourth year of attending the O’Reilly Open Source Convention (OSCON). It was also my second year speaking at the convention. One new thing that happened this year was that I co-led Mozilla’s presence during the convention, from our booth to the social events and our social media campaign.

Like each previous year, OSCON 2014 didn’t disappoint and it was great to have Mozilla back at the convention after not having a presence for some years. This year our presence was focused on promoting Firefox OS, Firefox Developer Tools and Firefox for Android.

While the metrics are not yet fully tracked, I think our presence was a great success. We heard from a lot of developers who are already using our developer tools and from a lot of developers who are not, many of whom we were able to educate about new features and why they should use our tools.

Alex shows attendee Firefox Dev Tools

Attendees were very excited about Firefox OS with a majority of those stopping by asking about the different layers of the platform, where they can get a device, and how they can make an app for the platform.

In addition to our booth, we also had members of the team such as Emma Irwin, who helped support OSCON’s Children’s Day by hosting a Mozilla Webmaker event, which was very popular with the kids and their parents. It really was great to see the future generation tinkering with Open Web technologies.

Finally, we had a social event on Wednesday evening that was so popular the Mozilla Portland office was packed till last call. During the social event, we had a local airbrush artist doing tattoos, with several attendees opting for a Firefox tattoo.

All in all, I think our presence last week was very positive and even the early numbers look positive. I want to give a big thanks to Stormy Peters, Christian Heilmann, Robyn Chau, Shezmeen Prasad, Dave Camp, Dietrich Ayala, Chris Maglione, William Reynolds, Emma Irwin, Majken Connor, Jim Blandy and Alex Lakatos for helping make this event a success.

Duncan McGreggor: The Future of Programming - Themes at OSCON 2014

Planet Ubuntu - Sun, 2014-07-27 21:33
Series Links

A Qualitative OSCON Debrief

As you might have noticed from the OSCON Twitter-storm this year, the conference was a blast. Even if you weren't physically present, given the 17 tracks, you can imagine that the presentations -- and subsequent conversations -- were deeply varied.

This was the second OSCON I'd attended; the first was in 2008, as a guest of Michael Bernstein, a friend who was speaking there. OSCON 2008 was a zoo - I'm not sure of the actual body count, but I've heard that attendees + vendors + miscellaneous topped 12,000 people over the course of the week (I would love to hear if someone has hard data on that -- googling didn't reveal much). OSCON 2008 was dominated by Big Data, Hadoop, and what seemed like endless posturing by all sorts. The most interesting bits of that conference were the outlines that formed around the conversations people weren't having. In fact, over the following 6 months, that's what I spent my spare time pondering: what people didn't say at OSCON.

This year's conference seemed like a completely different animal. It felt like easily a third to a half of the 2008 attendance. Where that year had all the anonymizing feel of rush hour in a major metropolitan hub, OSCON 2014 had a distinctly small-town vibe to it -- I was completely charmed. Conversations (overheard as well as participated in) were not littered with buzzwords, but rather focused on essence. The interactions were not continually distracted, but rather focused, allowing people to form, express, and dispute complete thoughts with their peers.


Conversations

So what were people talking about? Here are some of the topics I heard covered during lunches, in hallways, and at podiums; at pubs, in restaurants and at parks:
  • What communities are thriving?
  • Which [projects, organisations, companies, etc.] are treating their people right?
  • What successful processes are being followed at [project, organisation, etc.]?
  • Who is hiring and why should someone want to work there?
  • Where can I go to learn X? Who is teaching X? Who shares the most about X?
  • Which [projects, organisations] support X?
  • Why don't more [people, projects, organisations] care about [possible future X]?
  • Why don't more [people, projects, organisations] spend more time investigating the history of X for "lessons learned"?
  • There was so much more X in computing during the 60s and 70s -- what happened? [1]
  • Why are we reinventing X?
  • When is X going to be invented, and who's going to do it?
  • Everything is changing! I can't keep up anymore.
  • I want to keep up, but how?
  • Why can't we stop making so many X?
  • Nobody cares about Y anymore; we're all doing X now.
  • Full stack developers!
  • Haskell!
  • Fault-tolerant systems!

(It goes without saying that any one attendee couldn't possibly be exposed to enough conversations to form a perfectly accurate sense of the total distribution of conversation topics. No claim to the contrary is being made here :-))
After lots of reflection, here's how I classified most of the conversations I heard:
  • Developing communities,
  • Developing careers and/or personal/professional qualities, and
  • Developing software, 

along lines such as:
  • Effective maintenance, maturity, and health,
  • Focusing on the "art",  eventual mastery, and investments of time,
  • Tempering bare pragmatism with something resembling science or academic excellence,
  • Learning the new to bolster the old,
  • Inspiring innovation from a place of contemplation and analysis,
  • Mining the past for great ideas, and
  • Figuring out how to better share and spread the adoption of good ideas.


Themes

Generalized to such a degree, this could have been pretty much any congregation of interested, engaged minds since the dawn of civilization. So what does it look like if we don't normalize quite so much? Weighing these with what may well be my own bias (and the bias of like-minded peers), I submit to your review these themes:

  • A very strong interest in programming (thinking and creating) vs. integration (assessing and consuming).
  • An express desire to become better at abstraction (higher-order functions, composition, and types) to better deal with growing systems complexities.
  • An interest in building even more complicated systems.
  • A fear of reimplementing past mistakes or of letting dust gather on past intellectual achievements.

As you might have guessed, these number very highly among the reasons why the conference was such an unexpected pleasure for me. But it should also not come as a surprise that these themes are present:

  • We have had several years of companies such as Google and Amazon (AWS) building and deploying some of the most sophisticated examples of logic-made-manifest in human history. This has created perceived value in our industry, and many wish to emulate it. Similarly, we have single-purpose distributed systems being purchased for nearly 20 billion USD -- a different kind of complexity, with a different kind of perceived reward.
  • In the 70s and 80s, OOP adoption brought with it the ability to create large software systems in ways that people had either not dared dream of or found impractical to realize. Today's growing adoption of the functional paradigm is showing early signs of allowing us to better integrate complex systems with more predictability and fewer errors.
  • Case studies of improvements in productivity, or of the capacity to handle highly complex or previously intractable problems with better abstractions, have ignited the passions of many. Not wanting to limit their scope of knowledge or sources of inspiration, people are not simply limiting themselves to the exploration of such things as Category Theory -- they are opening the vaults of computer science with projects such as Papers We Love.


There's a brave new world in the making. It's a world for programmers and thinkers, for philosophers and makers. There's a lot to learn, but it's really not so different from older worlds: the same passions drive us, the same idealism burns brightly. And it's nice to see that these themes arise not only in small, highly specialized venues such as university doctoral programs and StrangeLoop (or LambdaJam), but also in larger intersections of the industry like OSCON (or more general-audience ones like Meetups).

Up next: Adopting the Functional Paradigm?
Previously: An Overview


Footnotes

[1] I strongly adhere to the multifaceted hypothesis proposed by Bret Victor here in the section titled "Why did all these ideas happen during this particular time period?"


Duncan McGreggor: The Future of Programming - An Overview

Planet Ubuntu - Sun, 2014-07-27 21:31
Art by Philip Straub

There's a new series of blog posts coming, inspired by ongoing conversations with peers, continuous inspection of the development landscape, habitual navel-gazing, and participation at the catalytic OSCON 2014. As you might have inferred, these will be on the topic of "The Future of Programming."

Not to be confused with Bret Victor's excellent talk last year at DBX, these posts will be less about individual technologies or developer user experience, and more about historic trends and viewing the present (and near future) through such a lens.

In this mini-series, the aim is to present posts on the following topics:

I did a similar set of posts on the future of cloud computing, conceived in late 2008 and published in 2009, entitled After the Cloud. In general it was a very successful series, and the cloud industry seems to be heading towards some of the predictions made in it -- ZeroVM and Docker are an incremental step towards the future of distributed processes/functions outlined in To Atomic Computation and Beyond.
In that post, though, are two quotes from industry greats; these provide an excellent context for this series as well, hinting at an overriding theme:
  • Alan Kay, 1998: A crucial key to growing large systems is effective communications between components.
  • Joe Armstrong, 2004: To effectively model and solve problems in a distributed manner, we need concurrency... this is made easier when we isolate processes and do not share data.

In the decade since these statements were made, we have seen individuals, projects, and companies take that vision to heart -- and succeed as a result. But as an industry, we continue to struggle with the definition of our art; we are still tormented by change -- both from within and without -- and do not seem to adapt to it well.
These posts will peer into such places ... in the hope that such inspection might guide us better through the tangled forest of our present into the unimagined forest of our future.
Up next: Themes at OSCON 2014 ...


Xubuntu: Xubuntu 14.04.1 released

Planet Ubuntu - Fri, 2014-07-25 18:55

Xubuntu 14.04 Trusty Tahr

The Xubuntu team is pleased to announce the immediate release of Xubuntu 14.04.1. Xubuntu 14.04 is an LTS (Long-Term Support) release and will be supported for 3 years; this is the first point release of its cycle.

The final release images are available as Torrents and direct downloads at http://cdimage.ubuntu.com/xubuntu/releases/trusty/release/

As the main server will be very busy in the first days after the release, we recommend using the Torrents wherever possible.

For support with the release, navigate to Help & Support for a complete list of methods to get help.

Bug fixes for the first point release
  • Black screen after wakeup from suspending by closing the laptop lid. (1303736)
  • Light Locker blanks the screen when playing video. (1309744)
  • Include MenuLibre 2.0.4, which contains many fixes. (1323405)
  • The documentation is now attributed to the Translators.
Highlights, changes and known issues

The highlights of this release include:

  • Light Locker replaces xscreensaver for screen locking; a GUI for editing its settings is included
  • The panel layout is updated, and now uses Whiskermenu as the default menu
  • Mugshot is included to allow you to easily edit your personal preferences
  • MenuLibre for menu editing, with full Xfce support, replaces Alacarte
  • A community wallpapers package, which includes work from the five winners of the wallpaper contest
  • GTK Theme Config to customize your desktop theme colors
  • Updated artwork, including various enhancements to themes as well as a new default wallpaper

Some of the known issues include:

  • Window manager shortcut keys don’t work after reboot (1292290)
  • Sorting by date or name not working correctly in Ristretto (1270894)
  • Due to the switch from xscreensaver to light-locker, some users might have issues with the timing of locking; removing xscreensaver from the system should fix these problems (see the one-liner after this list)
  • IBus does not support certain keyboard layouts (1284635). Only affects upgrades with certain keyboard layouts. See release notes for a workaround.
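
For the locking-timing issue above, removing xscreensaver is one standard apt command:

sudo apt-get remove xscreensaver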

To see the complete list of new features, improvements and known and fixed bugs, read the release notes.

Ronnie Tucker: Ladies and gentlemen, Full Circle #87 has arrived.

Planet Ubuntu - Fri, 2014-07-25 16:47


This month:

* Command & Conquer
* How-To : Python, LibreOffice, and GRUB2.
* Graphics : Inkscape.
* Book Review: Puppet
* Security – TrueCrypt Alternatives
* CryptoCurrency: Dualminer and dual-cgminer
* Arduino
plus: Q&A, Linux Labs, Ubuntu Games, and Ubuntu Women.

Get it while it’s hot!
http://fullcirclemagazine.org/issue-87/

Canonical Design Team: Bringing Fluid Motion to Browsing

Planet Ubuntu - Fri, 2014-07-25 11:42

In the previous Blog Post, we looked at how we use the Recency principle to redesign the experience around bookmarks, tabs and history.
In this blog post, we look at how the new Ubuntu Browser makes the UI fade to the background in favour of the content. The design focuses on physical impulse familiarity – “muscle memory” – by marrying simple gestures to the two key browser tasks, making the experience feel as fluid and simple as flipping through a magazine.

 

Creating a new tab

For all new browsers, the URI Top Bar, which enables searching as well as manual address entry, has made the “new tab” function more central to the experience than ever. In addition, evidence suggests that opening a new tab is the third most frequently used action in a browser. To facilitate this, we made opening a new tab effortless and even (we think) a bit fun.
By pulling down anywhere on the current page, you activate a spring-loaded “new tab” feature that appears under the address bar of the page. Keep dragging far enough, and you’ll see a new blank page come into view. If you release at this stage, a new tab will load, ready with the address bar and keyboard open, as well as an easy way to get to your bookmarks. But if you change your mind, just drag the current page back up or release early, and your current page comes back.

http://youtu.be/zaJkNRvZWgw

 

Get to your open tabs and recently visited sites

Pulling the current page downward creates a new blank tab; conversely, dragging the bottom edge upward shows your already-open tabs, ordered by recency, echoing the right-edge “open apps” view.

If you keep on dragging upward without releasing, you can dig even further into the past with your most recently visited pages grouped by site in a “history” list. By grouping under the site domain name, it’s easier to find what you’re looking for without thumbing through hundreds of individual page URLs. However, if you want all the detail, tap an item in the list to see your complete history.


It’s not easy to improve upon such a well-worn application as the browser, it’s true. But we’re hopeful that by adding new fluidity to creating, opening and switching between tabs, our users will find this browsing experience simpler to use, especially with one hand, and more seamless and fluid than ever.

 

 

Kubuntu: Kubuntu 14.04 LTS Update Out

Planet Ubuntu - Fri, 2014-07-25 10:35

The first update to our LTS release 14.04 is out now. This contains all the bugfixes added to 14.04 since its first release in April. Users of 14.04 can run the normal update procedure to get these bugfixes.
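
From a terminal, that normal update procedure is just the standard apt run, roughly:

sudo apt-get update        # refresh the package lists
sudo apt-get dist-upgrade  # install all bugfixes published since April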

See the 14.04.1 release announcement.

Download 14.04.1 images.

Edubuntu: Edubuntu 14.04.1 Release Announcement

Planet Ubuntu - Fri, 2014-07-25 04:46

Edubuntu Long-Term Support

The Edubuntu team is proud to announce Edubuntu 14.04.1 LTS, the first Long-Term Support (LTS) update in the Edubuntu 14.04 5-year support cycle.

This point release includes all the bug fixes and improvements that have been applied via updates to Edubuntu 14.04 LTS since its release. It also includes updated hardware support and installer fixes. If you have an Edubuntu 14.04 LTS system with all available updates applied, your system is already on 14.04.1 LTS and there is no need to re-install. For new installations, installing from the updated media is recommended, since it is installable on more systems than before and requires fewer updates than installing from the original 14.04 LTS media.
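
A quick way to confirm which point release a system is on (a standard lsb_release call; an Edubuntu system reports its Ubuntu base version):

lsb_release -d
# Description:    Ubuntu 14.04.1 LTS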

  • Information on where to download the Edubuntu 14.04.1 LTS media is available from the Downloads page.
  • We do not ship free Edubuntu discs at this time; however, there are 3rd-party distributors who ship discs at reasonable prices, listed on the Edubuntu Marketplace

To ensure that the Edubuntu 14.04 LTS series will continue to work on the latest hardware as well as keeping quality high right out of the box, further point releases will be made available during its lifecycle. More information will be available on the release schedule page on the Ubuntu wiki.

See also

Thanks for your support and interest in Edubuntu!

Sebastian Kügler: Plasma’s Road to Wayland

Planet Ubuntu - Fri, 2014-07-25 02:28

With the Plasma 5.0 release out the door, we can lift our heads a bit and look forward, instead of just looking at what’s directly ahead of us and making that work by fixing bug after bug. One important topic which we have (kind of) excluded from Plasma’s recent 5.0 release is support for Wayland. The reason is that much of the work that has gone into renovating our graphics stack was also needed in preparation for Wayland support in Plasma. In order to support Wayland systems properly, we needed to lift the software stack to Qt 5 and make the X11 dependencies in our underlying libraries, KDE Frameworks 5, optional. This part is pretty much done. We now need to ready support for non-X11 systems in our workspace components: the window manager and compositor, and the workspace shell.

Let’s dig a bit deeper and look at the aspects underlying and resulting from this transition.

Why Wayland?

The short answer to this question, from a Plasma perspective, is:

  • Xorg lacks modern interfaces and protocols, instead it carries a lot of ballast from the past. This makes it complex and hard to work with.
  • Wayland offers much better graphics support than Xorg, especially in terms of rendering correctness. X11’s asynchronous rendering makes it impossible to be sure about the correctness and timeliness of the graphics that end up on screen. Wayland, by contrast, provides the guarantee that every frame is perfect.
  • Security considerations: it is almost impossible to shield applications properly from each other under X11, which allows applications to wiretap each other’s input and output. This makes it a security nightmare.

I could go deeply into the history of Xorg, and add lots of technicalities to that story, but instead of giving you a huge swath of text, hop over to Youtube and watch Daniel Stone’s presentation “The Real Story Behind Wayland and X” from last year’s LinuxConf.au, which gives you all the information you need, in a much more entertaining way than I could present it. H-Online also has an interesting background story “Wayland — Beyond X”.

While Xorg is a huge beast that does everything -- input, printing, graphics in many different flavours -- Wayland is limited by design to the use cases we currently need X for, without the ballast.
With all that in mind, we need to respect our elders and acknowledge Xorg for its important role in the history of graphical Linux, but we also need to look beyond it.

What is Wayland support?

KDE Frameworks 5 apps under Weston

Without communicating our goal, we might think of entirely different things when talking about Wayland support. Will Wayland retire X? I don’t think it will in the near future; the point where we can stop caring for X11-based setups is likely still a number of years away, and I would not be surprised if X11 was still a pretty common thing to find in enterprise setups ten years down the road from now. Can we stop caring about X11? Surely not, but what does this mean for Wayland? The answer is that support for Wayland will be added, and that X11 will not be required anymore to run a Plasma desktop, but that it will be possible to run Plasma (and apps) under both X11 and Wayland systems. This, I believe, is the migration process that serves our users best, as the question “When can I run Plasma on Wayland?” can then be answered on an individual basis, and nobody is going to be thrown in at the deep end (at least not by us; your distro might still decide to not offer support for X11 anymore -- that is not in our hands). To me, while a quick migration to Wayland (once ready) is something desirable, realistically, people will be running Plasma on X11 for years to come. Wayland can be offered as an alternative at first, and then promoted to primary platform once the whole stack matures further.

Where are we now?

With the release of KDE Frameworks 5, most of the issues in our underlying libraries have been ironed out; that means X11-dependent codepaths have become optional. Today, it’s possible to run most applications built on top of Frameworks 5 under a Wayland compositor, independently of X11. This means that applications can run under both X11 and Wayland with the same binary. This is already really cool, as without applications, having a workspace (which in a way is the glue between applications) would be a pointless endeavour. This chicken-and-egg situation plays both ways, though: without a workspace environment, just having apps run under Wayland is not all that useful. This video shows some of our apps under the Weston compositor. (This is not a pure Wayland session “on bare metal”, but one running in an X11 window in my Plasma 5 session for the purpose of the screen-recording.)
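
For the curious, a hypothetical sketch of such a setup from inside an X11 session (QT_QPA_PLATFORM is Qt 5's standard switch for selecting a platform plugin; kwrite stands in for any Frameworks 5 application):

weston &                        # Weston opens as a window inside the X11 session
QT_QPA_PLATFORM=wayland kwrite  # ask Qt to use its wayland backend instead of xcb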

For a full-blown workspace, the porting situation is a bit different, as the workspace interacts much more intimately with the underlying display server than applications do at this point. These interactions are well-hidden behind the Qt platform abstraction. The workspace provides the host for rendering graphics onto the screen (the compositor) and the machinery to start and switch between applications.

We are currently missing a number of important pieces of the full puzzle: interfaces between the workspace shell, the compositor (KWin) and the display server are not yet well-defined or implemented, so some pioneering work is ahead of us. There are also a number of workspace components that need bigger adjustments, global shortcut handling being a good example. Most importantly, KWin needs to take over the role of Wayland compositor. While some support for Wayland has already been added to KWin, the work is not yet complete. Besides KWin, we also need to add support for Wayland to various bits of our workspace. Information about attached screens and their layout has to be made accessible. Global keyboard shortcuts only support X11 right now. The screen locking mechanism needs to be implemented. Information about windows for the task manager has to be shared. Dialog positioning and rendering needs to be ported. There are also a few assumptions in startkde and klauncher that currently prevent them from being able to start a session under Wayland, and more bits and pieces which need additional work to offer a full workspace experience under Wayland.

Porting Strategy

The idea is to be able to run the same binaries under both X11 and Wayland. This means that we need to decide at runtime how to interact with the windowing system. The following strategy is useful (in descending order of preference):

  • Use abstract Qt and Frameworks (KF5) APIs
  • Use XCB when there are no suitable Qt and KF5 APIs
  • Decide at runtime whether to call X11-specific functions

In case we have to resort to functions specific to a display server, X11 should be optional both at build-time and at run-time:

  • Make the build of X11-dependent code optional. This can be done through plugins, which are optionally included by the build system, or (less desirably) by #ifdef’ing blocks of code.
  • Even with X11 support built into the binary, calls into X11-specific libraries should be guarded at runtime (QX11Info::isPlatformX11() can be used to check at runtime).
Get your Hands Dirty!

Computer graphics are an exciting thing, and many of us are longing for the day we can remove X11 from our systems. This day will eventually come, but it won’t come by itself. It’s a very exciting time to get involved and make the migration happen. As you can see, we have a multitude of tasks that need work. An excellent first step is to build the thing on your system, try running it, fix issues, and send us patches. Get in touch with us on Freenode’s #plasma IRC channel, or via our mailing list plasma-devel(at)kde.org.

The Fridge: Ubuntu 14.04.1 LTS released

Planet Ubuntu - Fri, 2014-07-25 02:11

The Ubuntu team is pleased to announce the release of Ubuntu 14.04.1 LTS (Long-Term Support) for its Desktop, Server, Cloud, and Core products, as well as other flavours of Ubuntu with long-term support.

As usual, this point release includes many updates, and updated installation media has been provided so that fewer updates will need to be downloaded after installation. These include security updates and corrections for other high-impact bugs, with a focus on maintaining stability and compatibility with Ubuntu 14.04 LTS.

Kubuntu 14.04.1 LTS, Edubuntu 14.04.1 LTS, Xubuntu 14.04.1 LTS, Mythbuntu 14.04.1 LTS, Ubuntu GNOME 14.04.1 LTS, Lubuntu 14.04.1 LTS, Ubuntu Kylin 14.04.1 LTS, and Ubuntu Studio 14.04.1 LTS are also now available. More details can be found in their individual release notes:

https://wiki.ubuntu.com/TrustyTahr/ReleaseNotes#Official_flavours

Maintenance updates will be provided for 5 years for Ubuntu Desktop, Ubuntu Server, Ubuntu Cloud, Ubuntu Core, Ubuntu Kylin, Edubuntu, and Kubuntu. All the remaining flavours will be supported for 3 years.

To get Ubuntu 14.04.1

In order to download Ubuntu 14.04.1, visit:

http://www.ubuntu.com/download

Users of Ubuntu 12.04 will soon be offered an automatic upgrade to 14.04.1 via Update Manager. For further information about upgrading, see:

https://help.ubuntu.com/community/TrustyUpgrades

As always, upgrades to the latest version of Ubuntu are entirely free of charge.
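
For those upgrading from a terminal rather than Update Manager, the stock do-release-upgrade tool honours the Prompt=lts setting in /etc/update-manager/release-upgrades, so 12.04 LTS systems are offered 14.04.1 once the upgrade path is enabled; roughly:

sudo apt-get update
sudo do-release-upgrade    # offers the 14.04.1 LTS upgrade when available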

We recommend that all users read the 14.04.1 release notes, which document caveats and workarounds for known issues, as well as more in-depth notes on the release itself. They are available at:

https://wiki.ubuntu.com/TrustyTahr/ReleaseNotes

If you have a question, or if you think you may have found a bug but aren’t sure, you can try asking in any of the following places:

Help Shape Ubuntu

If you would like to help shape Ubuntu, take a look at the list of ways you can participate at:

http://www.ubuntu.com/community/get-involved

About Ubuntu

Ubuntu is a full-featured Linux distribution for desktops, laptops, clouds and servers, with a fast and easy installation and regular releases. A tightly-integrated selection of excellent applications is included, and an incredible variety of add-on software is just a few clicks away.

Professional services including support are available from Canonical and hundreds of other companies around the world. For more information about support, visit:

http://www.ubuntu.com/support

More Information

You can learn more about Ubuntu and about this release on our website listed below:

http://www.ubuntu.com/

To sign up for future Ubuntu announcements, please subscribe to Ubuntu’s very low volume announcement list at:

http://lists.ubuntu.com/mailman/listinfo/ubuntu-announce

Originally posted to the ubuntu-announce mailing list on Fri Jul 25 01:35:00 UTC 2014 by Adam Conrad

Ubuntu Podcast from the UK LoCo: S07E17 – The One with the Chicken Pox

Planet Ubuntu - Thu, 2014-07-24 19:30

Tony Whitmore and Laura Cowen are in Studio L, Alan Pope is AWOL, and Mark Johnson Skypes in from his sick bed for Season Seven, Episode Seventeen of the Ubuntu Podcast!


In this week’s show:-

We’ll be back next week, when we’ll be interviewing Graham Binns about the MAAS project, and we’ll go through your feedback.

Please send your comments and suggestions to: podcast@ubuntu-uk.org
Join us on IRC in #uupc on Freenode
Leave a voicemail via phone: +44 (0) 203 298 1600, sip: podcast@sip.ubuntu-uk.org and skype: ubuntuukpodcast
Follow us on Twitter
Find our Facebook Fan Page
Follow us on Google+

Arthur Schiwon: In Kazan? Me too, join my ownCloud talk!

Planet Ubuntu - Thu, 2014-07-24 19:20

Currently I am enjoying my summer vacation. Vacation is when you do non-stop activities that are fun, whether relaxing or challenging, and usually in a different place. So I am going to take the opportunity to visit Kazan, and furthermore I am taking the other opportunity to give an ownCloud talk (btw, did you hear? ownCloud 7 was released!) at the local hackerspace, FOSS Labs. Due to my lack of Russian I will stick to English, however ;)

So, if you are there and interested in ownCloud please save the date:

Monday, July 28th, 18:00
FOSS Labs
Universitetskaya 22, of. 114
420111 Kazan, Russia

Thank you, FOSS Labs and especially Mansur Ziatdinov, for making this possible. I am very much excited not only to share information with you, but also to learn and get to know the local (FOSS) culture!

Picture: Kazan Kremlin, derived from Skyline of Kazan city by TY-214.

Tags: PlanetOwnCloud, PlanetUbuntu, ownCloud

Oli Warner: Converting an existing Ubuntu Desktop into a Chrome kiosk

Planet Ubuntu - Thu, 2014-07-24 16:16

You might already have Ubuntu Desktop installed and you might want to just run one application without stripping it down. This article should give you a decent idea how to convert a stock Desktop/Unity install into a single-application computer.

This follows straight on from today's other article on building a kiosk computer with Ubuntu and Chrome [from scratch]. In my mind that's the perfect setup: low fat and speedy... But we don't always get it right first time. You might have already been battling with a full Ubuntu install and not have the time to strip it down.

This tutorial assumes you're starting with an Ubuntu desktop, all installed with working network and graphics. While we're in graphical-land, you might as well go and install Chrome.

I have tested this in a clean 14.04 install but be careful. Back up any important data before you commit.

sudo apt update
sudo apt install --no-install-recommends openbox

sudo install -b -m 755 /dev/stdin /opt/kiosk.sh <<- EOF
#!/bin/bash
xset -dpms
xset s off
openbox-session &
while true; do
    rm -rf ~/.{config,cache}/google-chrome/
    google-chrome --kiosk --no-first-run 'http://thepcspy.com'
done
EOF

sudo install -b -m 644 /dev/stdin /etc/init/kiosk.conf <<- EOF
start on (filesystem and stopped udevtrigger)
stop on runlevel [06]
emits starting-x
respawn
exec sudo -u $USER startx /etc/X11/Xsession /opt/kiosk.sh --
EOF

sudo dpkg-reconfigure x11-common    # select Anybody
echo manual | sudo tee /etc/init/lightdm.override    # disable desktop
sudo reboot

This should boot you into a browser looking at my home page (use sudoedit /opt/kiosk.sh to change that), but broadly speaking, we're done.

If you ever need to get back into the desktop you should be able to run sudo start lightdm. It'll probably appear on VT8 (Control+Alt+F8 to switch).

Why wouldn't I always do it this way?

I'll freely admit that I've done farts longer than it took to run the above. Starting from an Ubuntu Desktop base does do a lot of the work for us, however it is demonstrably flabbier:

  • The Server result was 1.6GB, using 117MB RAM with 38 processes.
  • The Desktop result is 3.7GB, using 294MB RAM with 80 processes!

Yeah, the Desktop is still loading a number of udisks mount helpers, PulseAudio, GVFS, Deja Dup, Bluetooth daemons, volume controls, Ubuntu One, CUPS (the printing server) and all the various Network Manager and Modem Manager things a traditional desktop needs.

This is the reason you base your production model off Ubuntu Server (or even Ubuntu Minimal).

And remember that you aren't done yet. There's a big list of boring jobs to do before it's Martini O'Clock.

Just remember that everything I said about physical and network security last time applies doubly here. Ubuntu-proper ships a ton of software on its 1GB image and quite a lot more of that will be running, even after we've disabled the desktop. You're going to want to spend time stripping some of that out and putting in place any security you need to stop people getting in.

Just be careful and conscientious about how you deploy software.

Ubuntu Scientists: Who We Are: Svetlana Belkin, Admin/Founder

Planet Ubuntu - Thu, 2014-07-24 15:48

Welcome all to the first of many “Who We Are” posts. These posts will introduce you to many of the members of our team. We will start with Svetlana Belkin, the founder and admin of the team:

I am Svetlana Belkin (a.k.a. belkinsa everywhere in the Ubuntu community, and Mechafish on the Ubuntu Forums), and I am getting my BS in biology, with molecular sciences as my focus, at the University of Cincinnati. I have used Ubuntu since 2009, but the only “scientific” program that I have used is Ugene. Hopefully, I will get to use more in my field.


Filed under: Who We Are

Oli Warner: Building a kiosk computer with Ubuntu 14.04 and Chrome

Planet Ubuntu - Thu, 2014-07-24 10:36

Single-purpose kiosk computing might seem scary and industrial but thanks to cheap hardware and Ubuntu, it's an increasingly popular idea. I'm going to show you how and it's only going to take a few minutes to get to something usable.

Hopefully we'll do better than the image on the right.

We're going to be running a very light stack of X, Openbox and the Google Chrome web browser to load a specified website. The website could be local files on the kiosk or remote. It could be interactive or just an advertising roll. The options are endless.

The whole thing takes less than 2GB of disk space and can run on 512MB of RAM.

Update: Read this companion tutorial if you want to convert an existing Ubuntu Desktop install to a kiosk.

Step 1: Installing Ubuntu Server

I'm picking the Server flavour of Ubuntu for this. It's all the nuts-and-bolts of regular Ubuntu without installing a load of flabby graphical applications that we're never ever going to use.

It's free to download. I would suggest 64-bit if your hardware supports it, and I'm going with the latest LTS (14.04 at the time of writing). Sidebar: if you've never tested your kiosk's hardware in Ubuntu before, it might be worth downloading the Desktop Live USB, burning it and checking everything works.

Just follow the installation instructions. Burn it to a USB stick, boot the kiosk to it and go through. I just accepted the defaults and when asked:

  • Set my username to user and set a hard-to-guess, strong password.
  • Enabled automatic updates
  • At the end when tasksel ran, opted to install the SSH server task so I could SSH in from a client that supported copy and paste!

After you reboot, you should be looking at an Ubuntu 14.04 LTS ubuntu tty1 login prompt. You can either SSH in (assuming you're networked and you installed the SSH server task) or just log in.

The installer auto-configures an ethernet connection (if one exists) so I'm going to assume you already have a network connection. If you don't or want to change to wireless, this is the point where you'd want to use nmcli to add and enable your connection. It'll go something like this:

sudo apt install network-manager
sudo nmcli dev wifi con <SSID> password <password>

Later releases should have nmtui which will make this easier but until then you always have man nmcli :)

Step 2: Install all the things

We obviously need a bit of extra software to get up and running but we can keep this fairly compact. We need to install:

  • X (the display server) and some scripts to launch it
  • A lightweight window manager to enable Chrome to go fullscreen
  • Google Chrome

We'll start by adding the Google-maintained repository for Chrome:

sudo add-apt-repository 'deb http://dl.google.com/linux/chrome/deb/ stable main'
wget -qO- https://dl-ssl.google.com/linux/linux_signing_key.pub | sudo apt-key add -

Then update our packages list and install:

sudo apt update
sudo apt install --no-install-recommends xorg openbox google-chrome-stable

If you omit --no-install-recommends you will pull in hundreds of megabytes of extra packages that would normally make life easier but in a kiosk scenario, only serve as bloat.

Step 3: Loading the browser on boot

I know we've only been going for about five minutes but we're almost done. We just need two little scripts.

Run sudoedit /opt/kiosk.sh first. This is going to be what loads Chrome once X has started. It also needs to wipe the Chrome profile so that between loads you aren't persisting stuff. This is incredibly important for kiosk computing because you never want a user to be able to affect the next user. We want them to start with a clean environment every time. Here's where I've got to:

#!/bin/bash
xset -dpms
xset s off
openbox-session &
while true; do
    rm -rf ~/.{config,cache}/google-chrome/
    google-chrome --kiosk --no-first-run 'http://thepcspy.com'
done

When you're done there, Control+X to exit and run sudo chmod +x /opt/kiosk.sh to make the script executable. Then we can move onto starting X (and loading kiosk.sh).

Run sudoedit /etc/init/kiosk.conf and this time fill it with:

start on (filesystem and stopped udevtrigger)
stop on runlevel [06]
console output
emits starting-x
respawn
exec sudo -u user startx /etc/X11/Xsession /opt/kiosk.sh --

Replace user with your username, then save and exit (Control+X).

X still needs some root privileges to start. These are locked down by default but we can allow anybody to start an X server by running sudo dpkg-reconfigure x11-common and selecting "Anybody".

After that we should be able to test. Run sudo start kiosk (or reboot) and it should all come up.

One last problem to fix is the amount of garbage it prints to screen on boot. Ideally your users will never see it boot, but when it does, it's probably better that it doesn't look like the Matrix. A fairly simple fix: just run sudoedit /etc/default/grub and edit it so the corresponding lines look like this:

GRUB_DEFAULT=0
GRUB_HIDDEN_TIMEOUT=0
GRUB_HIDDEN_TIMEOUT_QUIET=true
GRUB_TIMEOUT=0
GRUB_DISTRIBUTOR=`lsb_release -i -s 2> /dev/null || echo Debian`
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash"
GRUB_CMDLINE_LINUX=""

Save and exit that and run sudo update-grub before rebooting.
Thanks to the xset -dpms and xset s off calls in kiosk.sh, the monitor should remain on indefinitely.

Final step: The boring things...

Technically speaking we're done; we have a kiosk and we're probably sipping on a Martini. I know, I know, it's not even midday; we're just that good... But there are extra things to consider before we let a grubby member of the public play with this machine:

  • Can users break it? Open keyboard access is generally a no-no. If they need a keyboard, physically disable keys so they only have what they need. I would disable all the F* keys along with Control, Alt and Super... If they have a standard mouse, right click will let them open links in new windows and tabs and OMG, this is a nightmare. You need to limit user input.

  • Can it break itself? Does the website you're loading have anything that's going to try and open new windows/tabs/etc? Does it ask for any sort of input that you aren't allowing users? Perhaps a better question to ask is Can it fix itself? Consider a mechanism for rebooting that doesn't involve a phone call to you.

  • Is it physically secure? Hide and secure the computer. Lock the BIOS. Ensure no access to USB ports (fill them if you have to). Disable recovery mode. Password protect Grub and make sure it stays hidden (especially with open keyboard access).

  • Is it network secure? SSH is the major ingress vector here, so follow some basic tips (see the sketch after this list): at the very least move it to another port, only allow key-based authentication, install fail2ban and make sure fail2ban is telling you about failed logins.

  • What if Chrome is hacked directly? What if somebody exploited Chrome and had command-level access as user? Well first of all, you can try to stop that happening with AppArmor (should still apply) but you might also want to change things around so that the user running X and the browser doesn't have sudo access. I'd do that by adding a new user and changing the two scripts accordingly.

  • How are you maintaining it? Automatic updates are great but what if that breaks everything? How will you access it in the field to maintain it if (for example) the network dies or there's a hardware failure? This is aimed more at the digital signage people than simple kiosks but it's something to consider.
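
As a concrete sketch of the SSH tips above (example values only; not a complete hardening guide):

# /etc/ssh/sshd_config fragment -- example values, adjust to taste
Port 2222                     # move SSH off the default port
PasswordAuthentication no     # key-based authentication only
PermitRootLogin no

# and report failed logins:
sudo apt install fail2ban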

You can mitigate a lot of the security issues by having no live network (just displaying local files) but this obviously comes at the cost of maintenance. There's no one good answer for that.

Photo credit: allegr0/Candace

Martin Pitt: vim config for Markdown+LaTeX pandoc editing

Planet Ubuntu - Thu, 2014-07-24 09:38

I have used LaTeX and latex-beamer for pretty much my entire life of document and presentation production, i. e. since about my 9th school grade. I’ve always found the LaTeX syntax a bit clumsy, but with good enough editor shortcuts to insert e. g. \begin{itemize} \item...\end{itemize} with just two keystrokes, it has been good enough for me.

A few months ago a friend of mine pointed out pandoc to me, which is just simply awesome. It can convert between a million document formats, but most importantly take Markdown and spit out LaTeX, or directly PDF (through an intermediate step of building a LaTeX document and calling pdftex). It also has a template for beamer. Documents now look so much more readable and are easier to write! And you can always directly write LaTeX commands without any fuss, so that you can use Markdown for the structure/headings/enumerations/etc., and LaTeX for formulas, XYTex and the other goodies. That’s how it always should have been! ☺
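
For instance, the two conversions described above boil down to single commands (file names here are placeholders):

pandoc -t beamer talk.md -o talk.pdf    # Markdown -> beamer slides as PDF
pandoc -t latex notes.md -o notes.pdf   # Markdown -> PDF via an intermediate LaTeX run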

So last night I finally sat down and created a vim config for it:

"-- pandoc Markdown+LaTeX ------------------------------------------- function s:MDSettings() inoremap <buffer> <Leader>n \note[item]{}<Esc>i noremap <buffer> <Leader>b :! pandoc -t beamer % -o %<.pdf<CR><CR> noremap <buffer> <Leader>l :! pandoc -t latex % -o %<.pdf<CR> noremap <buffer> <Leader>v :! evince %<.pdf 2>&1 >/dev/null &<CR><CR> " adjust syntax highlighting for LaTeX parts " inline formulas: syntax region Statement oneline matchgroup=Delimiter start="\$" end="\$" " environments: syntax region Statement matchgroup=Delimiter start="\\begin{.*}" end="\\end{.*}" contains=Statement " commands: syntax region Statement matchgroup=Delimiter start="{" end="}" contains=Statement endfunction autocmd BufRead,BufNewFile *.md setfiletype markdown autocmd FileType markdown :call <SID>MDSettings()

That gives me “good enough” (with some quirks) highlighting without trying to interpret TeX stuff as Markdown, and shortcuts for calling pandoc and evince. Improvements appreciated!

Dustin Kirkland: Improving Random Seeds in Ubuntu 14.04 LTS Cloud Instances

Planet Ubuntu - Thu, 2014-07-24 02:15
Tomorrow, February 19, 2014, I will be giving a presentation to the Capital of Texas chapter of ISSA, which will be the first public presentation of a new security feature that has just landed in Ubuntu Trusty (14.04 LTS) in the last 2 weeks -- doing a better job of seeding the pseudo random number generator in Ubuntu cloud images.  You can view my slides here (PDF), or you can read on below.  Enjoy!

Q: Why should I care about randomness?
A: Because entropy is important!
  • Choosing hard-to-guess random keys provides the basis for all operating system security and privacy
    • SSL keys
    • SSH keys
    • GPG keys
    • /etc/shadow salts
    • TCP sequence numbers
    • UUIDs
    • dm-crypt keys
    • eCryptfs keys
  • Entropy is how your computer creates hard-to-guess random keys, and that's essential to the security of all of the above
Q: Where does entropy come from?
A: Hardware, typically.
  • Keyboards
  • Mouses
  • Interrupt requests
  • HDD seek timing
  • Network activity
  • Microphones
  • Web cams
  • Touch interfaces
  • WiFi/RF
  • TPM chips
  • RdRand
  • Entropy Keys
  • Pricey IBM crypto cards
  • Expensive RSA cards
  • USB lava lamps
  • Geiger Counters
  • Seismographs
  • Light/temperature sensors
  • And so on
Q: But what about virtual machines, in the cloud, where we have (almost) none of those things?
A: Pseudo random number generators are our only viable alternative.
  • In Linux, /dev/random and /dev/urandom are interfaces to the kernel’s entropy pool
    • Basically, endless streams of pseudo random bytes
  • Some utilities and most programming languages implement their own PRNGs
    • But they usually seed from /dev/random or /dev/urandom
  • Sometimes, virtio-rng is available, for hosts to feed guests entropy
    • But not always
Q: Are Linux PRNGs secure enough?
A: Yes, if they are properly seeded.
  • See random(4)
  • When a Linux system starts up without much operator interaction, the entropy pool may be in a fairly predictable state
  • This reduces the actual amount of noise in the entropy pool below the estimate
  • In order to counteract this effect, it helps to carry a random seed across shutdowns and boots
  • See /etc/init.d/urandom
...
dd if=/dev/urandom of=$SAVEDFILE bs=$POOLBYTES count=1 >/dev/null 2>&1

...

Q: And what exactly is a random seed?
A: Basically, it’s a small catalyst that primes the PRNG pump.
  • Let’s pretend the digits of Pi are our random number generator
  • The random seed would be a starting point, or “initialization vector”
  • e.g. Pick a number between 1 and 20
    • say, 18
  • Now start reading random numbers

  • Not bad...but if you always pick ‘18’...
XKCD on random numbers: RFC 1149.5 specifies 4 as the standard IEEE-vetted random number.

Q: So my OS generates an initial seed at first boot?
A: Yep, but computers are predictable, especially VMs.
  • Computers are inherently deterministic
    • And thus, bad at generating randomness
  • Real hardware can provide quality entropy
  • But virtual machines are basically clones of one another
    • ie, The Cloud
    • No keyboard or mouse
    • IRQ based hardware is emulated
    • Block devices are virtual and cached by hypervisor
    • RTC is shared
    • The initial random seed is sometimes part of the image, or otherwise chosen from a weak entropy pool
Dilbert on random numbers


Q: Surely you're just being paranoid about this, right?
A: I’m afraid not...

Analysis of the LRNG (2006)
  • Little prior documentation on Linux’s random number generator
  • Random bits are a limited resource
  • Very little entropy in embedded environments
  • OpenWRT was the case study
  • OS start up consists of a sequence of routine, predictable processes
  • Very little demonstrable entropy shortly after boot
  • http://j.mp/McV2gT
Black Hat (2009)
  • iSec Partners designed a simple algorithm to attack cloud instance SSH keys
  • Picked up by Forbes
  • http://j.mp/1hcJMPu
Factorable.net (2012)
  • Minding Your P’s and Q’s: Detection of Widespread Weak Keys in Network Devices
  • Comprehensive, Internet wide scan of public SSH host keys and TLS certificates
  • Insecure or poorly seeded RNGs in widespread use
    • 5.57% of TLS hosts and 9.60% of SSH hosts share public keys in a vulnerable manner
    • They were able to remotely obtain the RSA private keys of 0.50% of TLS hosts and 0.03% of SSH hosts because their public keys shared nontrivial common factors due to poor randomness
    • They were able to remotely obtain the DSA private keys for 1.03% of SSH hosts due to repeated signature non-randomness
  • http://j.mp/1iPATZx
Dual_EC_DRBG Backdoor (2013)
  • Dual Elliptic Curve Deterministic Random Bit Generator
  • Ratified NIST, ANSI, and ISO standard
  • Possible backdoor discovered in 2007
  • Bruce Schneier noted that it was “rather obvious”
  • Documents leaked by Snowden and published in the New York Times in September 2013 confirm that the NSA deliberately subverted the standard
  • http://j.mp/1bJEjrB
Q: Ruh roh...so what can we do about it?
A: For starters, do a better job seeding our PRNGs.
  • Securely
  • With high quality, unpredictable data
  • More sources are better
  • As early as possible
  • And certainly before generating
  • SSH host keys
  • SSL certificates
  • Or any other critical system DNA
  • /etc/init.d/urandom “carries” a random seed across reboots, and ensures that the Linux PRNGs are seeded
Q: But how do we ensure that in cloud guests?
A: Run Ubuntu!
Sorry, shameless plug...

Q: And what is Ubuntu's solution?
A: Meet pollinate.
  • pollinate is a new security feature, that seeds the PRNG.
  • Introduced in Ubuntu 14.04 LTS cloud images
  • Upstart job
  • It automatically seeds the Linux PRNG as early as possible, and before SSH keys are generated
  • It’s GPLv3 free software
  • Simple shell script wrapper around curl
  • Fetches random seeds
  • From 1 or more entropy servers in a pool
  • Writes them into /dev/urandom
  • https://launchpad.net/pollinate
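
Conceptually, what the client does boils down to something like the following sketch. This is not the real implementation -- pollinate wraps curl with a bundled certificate and the challenge/response described below -- and it assumes the server returns a seed for a plain request:

# conceptual sketch only -- fetch a seed and mix it into the Linux PRNG
# (/dev/urandom is world-writable, so no special privileges are needed)
curl -s https://entropy.ubuntu.com/ > /dev/urandom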
Q: What about the back end?
A: Introducing pollen.
  • pollen is an entropy-as-a-service implementation
  • Works over HTTP and/or HTTPS
  • Supports a challenge/response mechanism
  • Provides 512 bit (64 byte) random seeds
  • It’s AGPL free software
  • Implemented in golang
  • Less than 50 lines of code
  • Fast, efficient, scalable
  • Returns the (optional) challenge sha512sum
  • And 64 bytes of entropy
  • https://launchpad.net/pollen
Q: Golang, did you say? That sounds cool!
A: Indeed. Around 50 lines of code, cool!

pollen.go
Q: Is there a public entropy service available?
A: Hello, entropy.ubuntu.com.
  • Highly available pollen cluster
  • TLS/SSL encryption
  • Multiple physical servers
  • Behind a reverse proxy
  • Deployed and scaled with Juju
  • Multiple sources of hardware entropy
  • High network traffic is always stirring the pot
  • AGPL, so source code always available
  • Supported by Canonical
  • Ubuntu 14.04 LTS cloud instances run pollinate once, at first boot, before generating SSH keys
Q: But what if I don't necessarily trust Canonical?
A: Then use a different entropy service :-)
  • Deploy your own pollen
    • bzr branch lp:pollen
    • sudo apt-get install pollen
    • juju deploy pollen
  • Add your preferred server(s) to your $POOL
    • In /etc/default/pollinate
    • In your cloud-init user data
      • In progress
  • In fact, any URL works if you disable the challenge/response with pollinate -n|--no-challenge
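
Putting those pieces together, a minimal sketch (the internal server URL is hypothetical; POOL and -n|--no-challenge are as described above):

# server side: stand up your own pollen instance
sudo apt-get install pollen

# client side: add your server to the pool and seed from it
echo 'POOL="https://pollen.internal.example/"' | sudo tee -a /etc/default/pollinate
sudo pollinate -n    # -n / --no-challenge allows arbitrary URLs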
Q: So does this increase the overall entropy on a system?
A: No, no, no, no, no!
  • pollinate seeds your PRNG, securely and properly and as early as possible
  • This improves the quality of all random numbers generated thereafter
  • pollen provides random seeds over HTTP and/or HTTPS connections
  • This information can be fed into your PRNG
  • The Linux kernel maintains a very conservative estimate of the number of bits of entropy available, in /proc/sys/kernel/random/entropy_avail
  • Note that neither pollen nor pollinate directly affect this quantity estimate!!!
Q: Why the challenge/response in the protocol?
A: Think of it like the Heisenberg Uncertainty Principle.
  • The pollinate challenge (via an HTTP POST submission) affects the pollen server's PRNG state machine
  • pollinate can verify the response and ensure that the pollen server at least “did some work”
  • From the perspective of the pollen server administrator, all communications are “stirring the pot”
  • Numerous concurrent connections ensure a computationally complex and impossible to reproduce entropy state
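
A hypothetical rendition of that exchange with plain curl (the POST field name is a guess; the real wire format is whatever pollinate and pollen agree on):

# hypothetical sketch of the challenge/response described above
challenge=$(head -c 64 /dev/urandom | sha512sum | cut -d' ' -f1)
curl -s -X POST -d "challenge=$challenge" https://entropy.ubuntu.com/
# per the pollen notes above, the response carries the challenge's sha512sum
# ("proof of work") followed by 64 bytes of fresh seed material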
Q: What if pollinate gets crappy or compromised or no random seeds?
A: Functionally, it’s no better or worse than it was without pollinate in the mix.
  • In fact, you can `dd if=/dev/zero of=/dev/random` if you like, without harming your entropy quality
    • All writes to the Linux PRNG are whitened with SHA1 and mixed into the entropy pool
    • Of course it doesn’t help, but it doesn’t hurt either
  • Your overall security is back to the same level it was when your cloud or virtual machine booted at an only slightly random initial state
  • Note the permissions on /dev/*random
    • crw-rw-rw- 1 root root 1, 8 Feb 10 15:50 /dev/random
    • crw-rw-rw- 1 root root 1, 9 Feb 10 15:50 /dev/urandom
  • It's a bummer of course, but there's no new compromise
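
Both points are easy to verify on any Linux box (entropy_avail is the kernel estimate mentioned earlier):

cat /proc/sys/kernel/random/entropy_avail     # the kernel's conservative estimate
echo "definitely not entropy" > /dev/urandom  # mixed in (whitened with SHA1), harmless
cat /proc/sys/kernel/random/entropy_avail     # the estimate does not go up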
Q: What about SSL compromises, or CA man-in-the-middle attacks?
A: We are mitigating that by bundling the public certificates in the client.
  • The pollinate package ships the public certificate of entropy.ubuntu.com
    • /etc/pollinate/entropy.ubuntu.com.pem
    • And curl uses this certificate exclusively by default
  • If this really is your concern (and perhaps it should be!)
    • Add more URLs to the $POOL variable in /etc/default/pollinate
    • Put one of those behind your firewall
    • You simply need to ensure that at least one of those is outside of the control of your attackers
Q: What information gets logged by the pollen server?
A: The usual web server debug info.
  • The current timestamp
  • The incoming client IP/port
    • At entropy.ubuntu.com, the client IP/port is actually filtered out by the load balancer
  • The browser user-agent string
  • Basically, the exact same information that Chrome/Firefox/Safari sends
  • You can override if you like in /etc/default/pollinate
  • The challenge/response, and the generated seed are never logged!
Feb 11 20:44:54 x230 2014-02-11T20:44:54-06:00 x230 pollen[28821] Server received challenge from [127.0.0.1:55440, pollinate/4.1-0ubuntu1 curl/7.32.0-1ubuntu1.3 Ubuntu/13.10 GNU/Linux/3.11.0-15-generic/x86_64] at [1392173094634146155]

Feb 11 20:44:54 x230 2014-02-11T20:44:54-06:00 x230 pollen[28821] Server sent response to [127.0.0.1:55440, pollinate/4.1-0ubuntu1 curl/7.32.0-1ubuntu1.3 Ubuntu/13.10 GNU/Linux/3.11.0-15-generic/x86_64] at [1392173094634191843]
Q: Have the code or design been audited?
A: Yes, but more feedback is welcome!
  • All of the source is available
  • Service design and hardware specs are available
  • The Ubuntu Security team has reviewed the design and implementation
  • All feedback has been incorporated
  • At least 3 different Linux security experts outside of Canonical have reviewed the design and/or implementation
    • All feedback has been incorporated
Q: Where can I find more information?
A: Read Up!
Stay safe out there!
:-Dustin

Matthew Helmke: Open Source Resources Sale

Planet Ubuntu - Wed, 2014-07-23 17:40

I don’t usually post sales links, but this sale by InformIT involves my two Ubuntu books along with several others that I know my friends in the open source world would be interested in.

Save 40% on recommended titles in the InformIT OpenSource Resource Center. The sale ends August 8th.

Michael Hall: Why do you contribute to open source?

Planet Ubuntu - Wed, 2014-07-23 12:00

It seems a fairly common, straightforward question. You’ve probably been asked it before. We all have reasons why we hack, why we code, why we write or draw. If you ask somebody this question, you’ll hear things like “scratching an itch” or “making something beautiful” or “learning something new”. These are all excellent reasons for creating or improving something. But contributing isn’t just about creating, it’s about giving that creation away. Usually giving it away for free, with no or very few strings attached. When I ask “Why do you contribute to open source”, I’m asking why you give it away.

This question is harder to answer, and the answers are often far more complex than the ones given for why people simply create something. What makes it worthwhile to spend your time, effort, and often money working on something, and then turn around and give it away? People often have different intentions or goals in mind when the contribute, from benevolent giving to a community they care about to personal pride in knowing that something they did is being used in something important or by somebody important. But when you strip away the details of the situation, these all hinge on one thing: Recognition.

If you read books or articles about community, one consistent theme you will find in almost all of them is the importance of recognizing the contributions that people make. In fact, if you look at a wide variety of successful communities, you will find that one common thing they all offer in exchange for contribution is recognition. It is the fuel that communities run on. It’s what connects the contributor to their goal, both selfish and selfless. In fact, with open source, the only way a contribution can actually be stolen is by not allowing that recognition to happen. Even the most permissive licenses require attribution, something that tells everybody who made it.

Now let’s flip that question around: why do people contribute to your project? If their contribution hinges on recognition, are you prepared to give it? I don’t mean your intent -- I’ll assume that you want to recognize contributions -- I mean do you have the processes and people in place to give it?

We’ve gotten very good about building tools to make contribution easier, faster, and more efficient, often by removing the human bottlenecks from the process.  But human recognition is still what matters most.  Silently merging someone’s patch or branch, even if their name is in the commit log, isn’t the same as thanking them for it yourself or posting about their contribution on social media. Letting them know you appreciate their work is important, letting other people know you appreciate it is even more important.

If you are the owner or a leader of a project with a community, you need to be aware of how recognition is flowing out just as much as how contributions are flowing in. Too often communities are successful almost by accident, because the people in them are good at making sure contributions are recognized and that people know it, simply because that’s their nature. But it’s just as possible for communities to fail because the personalities involved didn’t have this natural tendency -- not because of any lack of appreciation for the contributions, just a quirk of their personality. It doesn’t have to be this way: if we are aware of the importance of recognition in a community, we can be deliberate in our approach to making sure it flows freely in exchange for contributions.
