news aggregator

Ubuntu Podcast from the UK LoCo: S07E17 – The One with the Chicken Pox

Planet Ubuntu - Thu, 2014-07-24 19:30

Tony Whitmore and Laura Cowen are in Studio L, Alan Pope is AWOL, and Mark Johnson Skypes in from his sick bed for Season Seven, Episode Seventeen of the Ubuntu Podcast!

Download OGG | Download MP3

In this week’s show:-

We’ll be back next week, when we’ll be interviewing Graham Binns about the MAAS project, and we’ll go through your feedback.

Please send your comments and suggestions to: podcast@ubuntu-uk.org
Join us on IRC in #uupc on Freenode
Leave a voicemail via phone: +44 (0) 203 298 1600, sip: podcast@sip.ubuntu-uk.org and skype: ubuntuukpodcast
Follow us on Twitter
Find our Facebook Fan Page
Follow us on Google+

Arthur Schiwon: In Kazan? Me too, join my ownCloud talk!

Planet Ubuntu - Thu, 2014-07-24 19:20

Currently I am enjoying my summer vacation. Vacation is when you do non-stop activities that are fun, whether relaxing or challenging, and usually in a different place. So I am taking the opportunity to visit Kazan, and I am also taking the opportunity to give an ownCloud talk (by the way, did you hear that ownCloud 7 was released?) at the local hackerspace, FOSS Labs. Due to my lack of Russian I will stick to English, however ;)

So, if you are there and interested in ownCloud please save the date:

Monday, July 28th, 18:00
FOSS Labs
Universitetskaya 22, of. 114
420111 Kazan, Russia

Thank you, FOSS Labs and especially Mansur Ziatdinov for making this possible. I am very much excited not only to share information with you, but also to learn about and get to know the local (FOSS) culture!

Picture: Kazan Kremlin, derived from Skyline of Kazan city by TY-214.

Tags: PlanetOwnCloud, PlanetUbuntu, ownCloud

Oli Warner: Converting an existing Ubuntu Desktop into a Chrome kiosk

Planet Ubuntu - Thu, 2014-07-24 16:16

You might already have Ubuntu Desktop installed and just want to run one application on it, without rebuilding from scratch. This article should give you a decent idea of how to convert a stock Desktop/Unity install into a single-application computer.

This follows straight on from today's other article on building a kiosk computer with Ubuntu and Chrome [from scratch]. In my mind that's the perfect setup: low fat and speedy... But we don't always get it right first time. You might have already been battling with a full Ubuntu install and not have the time to strip it down.

This tutorial assumes you're starting with an Ubuntu desktop, all installed with working network and graphics. While we're in graphical-land, you might as well go and install Chrome.

I have tested this in a clean 14.04 install but be careful. Back up any important data before you commit.

sudo apt update
sudo apt install --no-install-recommends openbox

sudo install -b -m 755 /dev/stdin /opt/kiosk.sh <<- EOF
#!/bin/bash
xset -dpms
xset s off
openbox-session &
while true; do
  rm -rf ~/.{config,cache}/google-chrome/
  google-chrome --kiosk --no-first-run 'http://thepcspy.com'
done
EOF

sudo install -b -m 644 /dev/stdin /etc/init/kiosk.conf <<- EOF
start on (filesystem and stopped udevtrigger)
stop on runlevel [06]
emits starting-x
respawn
exec sudo -u $USER startx /etc/X11/Xsession /opt/kiosk.sh --
EOF

sudo dpkg-reconfigure x11-common   # select Anybody
echo manual | sudo tee /etc/init/lightdm.override   # disable desktop
sudo reboot

This should boot you into a browser looking at my home page (use sudoedit /opt/kiosk.sh to change that), but broadly speaking, we're done.

If you ever need to get back into the desktop you should be able to run sudo start lightdm. It'll probably appear on VT8 (Control+Alt+F8 to switch).
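To flip back to kiosk mode afterwards, something like this should work (the kiosk job is the one created by the script above):

sudo stop lightdm    # drop the desktop again
sudo start kiosk     # bring the kiosk session back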

Why wouldn't I always do it this way?

I'll freely admit that I've done farts longer than it took to run the above. Starting from an Ubuntu Desktop base does do a lot of the work for us; however, the result is demonstrably flabbier:

  • The Server result was 1.6GB, using 117MB RAM with 38 processes.
  • The Desktop result is 3.7GB, using 294MB RAM with 80 processes!

Yeah, the Desktop is still loading a number of udisks mount helpers, PulseAudio, GVFS, Deja Dup, Bluetooth daemons, volume controls, Ubuntu One, CUPS the printer server, and all the various Network and Modem Manager things a traditional desktop needs.

This is the reason you base your production model off Ubuntu Server (or even Ubuntu Minimal).

And remember that you aren't done yet. There's a big list of boring jobs to do before it's Martini O'Clock.

Just remember that everything I said about physical and network security last time applies doubly here. Ubuntu-proper ships a ton of software on its 1GB image and quite a lot more of that will be running, even after we've disabled the desktop. You're going to want to spend time stripping some of that out and putting in place any security you need to stop people getting in.

Just be careful and conscientious about how you deploy software.

Ubuntu Scientists: Who We Are: Svetlana Belkin, Admin/Founder

Planet Ubuntu - Thu, 2014-07-24 15:48

Welcome all to the first of many “Who We Are” posts.  These posts will introduce you to the members of our team.  We will start with Svetlana Belkin, the founder and admin of the team:

I am Svetlana Belkin (a.k.a. belkinsa everywhere in the Ubuntu community and Mechafish on the Ubuntu Forums), and I am getting my BS in biology with molecular sciences as my focus at the University of Cincinnati. I have used Ubuntu since 2009, but the only “scientific” program that I have used is Ugene. Hopefully, I will get to use more in my field.


Filed under: Who We Are

Oli Warner: Building a kiosk computer with Ubuntu 14.04 and Chrome

Planet Ubuntu - Thu, 2014-07-24 10:36

Single-purpose kiosk computing might seem scary and industrial but thanks to cheap hardware and Ubuntu, it's an increasingly popular idea. I'm going to show you how and it's only going to take a few minutes to get to something usable.

Hopefully we'll do better than the image on the right.

We're going to be running a very light stack of X, Openbox and the Google Chrome web browser to load a specified website. The website could be local files on the kiosk or remote. It could be interactive or just an advertising roll. The options are endless.

The whole thing takes less than 2GB of disk space and can run on 512MB of RAM.

Update: Read this companion tutorial if you want to convert an existing Ubuntu Desktop install to a kiosk.

Step 1: Installing Ubuntu Server

I'm picking the Server flavour of Ubuntu for this. It's all the nuts-and-bolts of regular Ubuntu without installing a load of flabby graphical applications that we're never ever going to use.

It's free to download. I would suggest 64-bit if your hardware supports it, and I'm going with the latest LTS (14.04 at the time of writing). Sidebar: if you've never tested your kiosk's hardware in Ubuntu before, it might be worth downloading the Desktop Live USB, burning it and checking everything works.

Just follow the installation instructions. Burn it to a USB stick, boot the kiosk from it and go through. I just accepted the defaults and, when asked:

  • Set my username to user and set a hard-to-guess, strong password.
  • Enabled automatic updates
  • At the end when tasksel ran, opted to install the SSH server task so I could SSH in from a client that supported copy and paste!

After you reboot, you should be looking at a Ubuntu 14.04 LTS ubuntu tty1 login prompt. You can either SSH in (assuming you're networked and you installed the SSH server task) or just log in.

The installer auto-configures an ethernet connection (if one exists) so I'm going to assume you already have a network connection. If you don't or want to change to wireless, this is the point where you'd want to use nmcli to add and enable your connection. It'll go something like this:

sudo apt install network-manager
sudo nmcli dev wifi con <SSID> password <password>

Later releases should have nmtui which will make this easier but until then you always have man nmcli :)

Step 2: Install all the things

We obviously need a bit of extra software to get up and running but we can keep this fairly compact. We need to install:

  • X (the display server) and some scripts to launch it
  • A lightweight window manager to enable Chrome to go fullscreen
  • Google Chrome

We'll start by adding the Google-maintained repository for Chrome:

sudo add-apt-repository 'deb http://dl.google.com/linux/chrome/deb/ stable main'
wget -qO- https://dl-ssl.google.com/linux/linux_signing_key.pub | sudo apt-key add -

Then update our packages list and install:

sudo apt update
sudo apt install --no-install-recommends xorg openbox google-chrome-stable

If you omit --no-install-recommends you will pull in hundreds of megabytes of extra packages that would normally make life easier but in a kiosk scenario, only serve as bloat.
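If you're curious how much that flag saves, you can dry-run both variants before committing; -s/--simulate is standard apt-get behaviour, and grepping for ^Inst counts the packages each would install:

apt-get -s install xorg openbox google-chrome-stable | grep -c '^Inst'
apt-get -s install --no-install-recommends xorg openbox google-chrome-stable | grep -c '^Inst'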

Step 3: Loading the browser on boot

I know we've only been going for about five minutes but we're almost done. We just need two little scripts.

Run sudoedit /opt/kiosk.sh first. This is going to be what loads Chrome once X has started. It also needs to wipe the Chrome profile so that between loads you aren't persisting stuff. This is incredibly important for kiosk computing because you never want a user to be able to affect the next user. We want them to start with a clean environment every time. Here's where I've got to:

#!/bin/bash
xset -dpms
xset s off
openbox-session &

while true; do
  rm -rf ~/.{config,cache}/google-chrome/
  google-chrome --kiosk --no-first-run 'http://thepcspy.com'
done

When you're done there, Control+X to exit and run sudo chmod +x /opt/kiosk.sh to make the script executable. Then we can move on to starting X (and loading kiosk.sh).

Run sudoedit /etc/init/kiosk.conf and this time fill it with:

start on (filesystem and stopped udevtrigger)
stop on runlevel [06]

console output
emits starting-x
respawn
exec sudo -u user startx /etc/X11/Xsession /opt/kiosk.sh --

Replace user with your username. Exit with Control+X and save.

X still needs some root privileges to start. These are locked down by default but we can allow anybody to start an X server by running sudo dpkg-reconfigure x11-common and selecting "Anybody".
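If you'd rather script that step, the same toggle can be flipped non-interactively; this assumes the Xwrapper.config layout that x11-common ships on 14.04:

sudo sed -i 's/^allowed_users=.*/allowed_users=anybody/' /etc/X11/Xwrapper.config  # same effect as choosing "Anybody"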

After that we should be able to test. Run sudo start kiosk (or reboot) and it should all come up.

One last problem to fix is the amount of garbage it prints to the screen on boot. Ideally your users will never see it boot, but when it does, it's probably better that it doesn't look like the Matrix. A fairly simple fix: just run sudoedit /etc/default/grub and edit it so the corresponding lines look like this:

GRUB_DEFAULT=0
GRUB_HIDDEN_TIMEOUT=0
GRUB_HIDDEN_TIMEOUT_QUIET=true
GRUB_TIMEOUT=0
GRUB_DISTRIBUTOR=`lsb_release -i -s 2> /dev/null || echo Debian`
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash"
GRUB_CMDLINE_LINUX=""

Save and exit that and run sudo update-grub before rebooting.
The xset calls in kiosk.sh disable power management and screen blanking, so the monitor should remain on indefinitely.

Final step: The boring things...

Technically speaking we're done; we have a kiosk and we're probably sipping on a Martini. I know, I know, it's not even midday, we're just that good... But there are extra things to consider before we let a grubby member of the public play with this machine:

  • Can users break it? Open keyboard access is generally a no-no. If they need a keyboard, physically disable keys so they only have what they need. I would disable all the F* keys along with Control, Alt, Super... If they have a standard mouse, right click will let them open links in new windows and tabs and OMG this is a nightmare. You need to limit user input.

  • Can it break itself? Does the website you're loading have anything that's going to try and open new windows/tabs/etc? Does it ask for any sort of input that you aren't allowing users? Perhaps a better question to ask is Can it fix itself? Consider a mechanism for rebooting that doesn't involve a phone call to you (see the scheduled-reboot sketch after this list).

  • Is it physically secure? Hide and secure the computer. Lock the BIOS. Ensure no access to USB ports (fill them if you have to). Disable recovery mode. Password protect Grub and make sure it stays hidden (especially with open keyboard access).

  • Is it network secure? SSH is the major ingress vector here, so follow some basic tips: at the very least move it to another port, only allow key-based authentication, and install fail2ban and make sure it's telling you about failed logins (a minimal sshd_config sketch follows below).

  • What if Chrome is hacked directly? What if somebody exploited Chrome and had command-level access as user? Well first of all, you can try to stop that happening with AppArmor (should still apply) but you might also want to change things around so that the user running X and the browser doesn't have sudo access. I'd do that by adding a new user and changing the two scripts accordingly.

  • How are you maintaining it? Automatic updates are great but what if that breaks everything? How will you access it in the field to maintain it if (for example) the network dies or there's a hardware failure? This is aimed more at the digital signage people than simple kiosks but it's something to consider.
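On the self-fixing point above, one low-tech mechanism is a scheduled reboot, so the kiosk recovers from most wedged states overnight. A minimal sketch; the time, schedule and file name are assumptions, adjust to taste:

# reboot every night at 03:00; /etc/cron.d entries need the user field
echo '0 3 * * * root /sbin/shutdown -r now' | sudo tee /etc/cron.d/kiosk-reboot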

You can mitigate a lot of the security issues by having no live network (just displaying local files) but this obviously comes at the cost of maintenance. There's no one good answer for that.
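To make the SSH advice above concrete, here's a minimal sketch of the relevant /etc/ssh/sshd_config lines (the port number is an arbitrary example):

# /etc/ssh/sshd_config -- minimal hardening sketch
Port 2222                     # arbitrary non-default port
PasswordAuthentication no     # key-based authentication only
PermitRootLogin no

Then sudo apt install fail2ban and keep an eye on its log so failed logins actually reach you.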

Photo credit: allegr0/Candace

Martin Pitt: vim config for Markdown+LaTeX pandoc editing

Planet Ubuntu - Thu, 2014-07-24 09:38

I have used LaTeX and latex-beamer for pretty much my entire life of document and presentation production, i. e. since about my 9th school grade. I’ve always found the LaTeX syntax a bit clumsy, but with editor shortcuts that insert e. g. \begin{itemize} \item...\end{itemize} with just two keystrokes, it has been good enough for me.

A few months ago a friend of mine pointed out pandoc to me, which is simply awesome. It can convert between a million document formats, but most importantly it can take Markdown and spit out LaTeX, or directly PDF (through an intermediate step of building a LaTeX document and calling pdftex). It also has a template for beamer. Documents now look so much more readable and are easier to write! And you can always directly write LaTeX commands without any fuss, so you can use Markdown for the structure/headings/enumerations/etc., and LaTeX for formulas, XYTex and the other goodies. That’s how it always should have been! ☺
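For example, the two conversions this post leans on look like this (file names are placeholders; the same invocations appear in the vim mappings below):

pandoc -t beamer talk.md -o talk.pdf      # slides via the beamer template
pandoc -t latex report.md -o report.pdf   # document via an intermediate LaTeX build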

So last night I finally sat down and created a vim config for it:

"-- pandoc Markdown+LaTeX ------------------------------------------- function s:MDSettings() inoremap <buffer> <Leader>n \note[item]{}<Esc>i noremap <buffer> <Leader>b :! pandoc -t beamer % -o %<.pdf<CR><CR> noremap <buffer> <Leader>l :! pandoc -t latex % -o %<.pdf<CR> noremap <buffer> <Leader>v :! evince %<.pdf 2>&1 >/dev/null &<CR><CR> " adjust syntax highlighting for LaTeX parts " inline formulas: syntax region Statement oneline matchgroup=Delimiter start="\$" end="\$" " environments: syntax region Statement matchgroup=Delimiter start="\\begin{.*}" end="\\end{.*}" contains=Statement " commands: syntax region Statement matchgroup=Delimiter start="{" end="}" contains=Statement endfunction autocmd BufRead,BufNewFile *.md setfiletype markdown autocmd FileType markdown :call <SID>MDSettings()

That gives me “good enough” (with some quirks) highlighting without trying to interpret TeX stuff as Markdown, and shortcuts for calling pandoc and evince. Improvements appreciated!

Dustin Kirkland: Improving Random Seeds in Ubuntu 14.04 LTS Cloud Instances

Planet Ubuntu - Thu, 2014-07-24 02:15
Tomorrow, February 19, 2014, I will be giving a presentation to the Capital of Texas chapter of ISSA, which will be the first public presentation of a new security feature that has just landed in Ubuntu Trusty (14.04 LTS) in the last 2 weeks -- doing a better job of seeding the pseudo random number generator in Ubuntu cloud images.  You can view my slides here (PDF), or you can read on below.  Enjoy!

Q: Why should I care about randomness? A: Because entropy is important!
  • Choosing hard-to-guess random keys provides the basis for all operating system security and privacy
    • SSL keys
    • SSH keys
    • GPG keys
    • /etc/shadow salts
    • TCP sequence numbers
    • UUIDs
    • dm-crypt keys
    • eCryptfs keys
  • Entropy is how your computer creates hard-to-guess random keys, and that's essential to the security of all of the above
Q: Where does entropy come from? A: Hardware, typically.
  • Keyboards
  • Mouses
  • Interrupt requests
  • HDD seek timing
  • Network activity
  • Microphones
  • Web cams
  • Touch interfaces
  • WiFi/RF
  • TPM chips
  • RdRand
  • Entropy Keys
  • Pricey IBM crypto cards
  • Expensive RSA cards
  • USB lava lamps
  • Geiger Counters
  • Seismographs
  • Light/temperature sensors
  • And so on
Q: But what about virtual machines, in the cloud, where we have (almost) none of those things? A: Pseudo random number generators are our only viable alternative.
  • In Linux, /dev/random and /dev/urandom are interfaces to the kernel’s entropy pool
    • Basically, endless streams of pseudo random bytes
  • Some utilities and most programming languages implement their own PRNGs
    • But they usually seed from /dev/random or /dev/urandom
  • Sometimes, virtio-rng is available, for hosts to feed guests entropy
    • But not always
Q: Are Linux PRNGs secure enough? A: Yes, if they are properly seeded.
  • See random(4)
  • When a Linux system starts up without much operator interaction, the entropy pool may be in a fairly predictable state
  • This reduces the actual amount of noise in the entropy pool below the estimate
  • In order to counteract this effect, it helps to carry a random seed across shutdowns and boots
  • See /etc/init.d/urandom
...
dd if=/dev/urandom of=$SAVEDFILE bs=$POOLBYTES count=1 >/dev/null 2>&1

...

Q: And what exactly is a random seed? A: Basically, it's a small catalyst that primes the PRNG pump.
  • Let’s pretend the digits of Pi are our random number generator
  • The random seed would be a starting point, or “initialization vector”
  • e.g. Pick a number between 1 and 20
    • say, 18
  • Now start reading random numbers

  • Not bad...but if you always pick ‘18’...
XKCD on random numbers: RFC 1149.5 specifies 4 as the standard IEEE-vetted random number.

Q: So my OS generates an initial seed at first boot? A: Yep, but computers are predictable, especially VMs.
  • Computers are inherently deterministic
    • And thus, bad at generating randomness
  • Real hardware can provide quality entropy
  • But virtual machines are basically clones of one another
    • ie, The Cloud
    • No keyboard or mouse
    • IRQ based hardware is emulated
    • Block devices are virtual and cached by hypervisor
    • RTC is shared
    • The initial random seed is sometimes part of the image, or otherwise chosen from a weak entropy pool
Dilbert on random numbers


Q: Surely you're just being paranoid about this, right? A: I’m afraid not...

Analysis of the LRNG (2006)
  • Little prior documentation on Linux’s random number generator
  • Random bits are a limited resource
  • Very little entropy in embedded environments
  • OpenWRT was the case study
  • OS start up consists of a sequence of routine, predictable processes
  • Very little demonstrable entropy shortly after boot
  • http://j.mp/McV2gT
Black Hat (2009)
  • iSec Partners designed a simple algorithm to attack cloud instance SSH keys
  • Picked up by Forbes
  • http://j.mp/1hcJMPu
Factorable.net (2012)
  • Mining Your Ps and Qs: Detection of Widespread Weak Keys in Network Devices
  • Comprehensive, Internet wide scan of public SSH host keys and TLS certificates
  • Insecure or poorly seeded RNGs in widespread use
    • 5.57% of TLS hosts and 9.60% of SSH hosts share public keys in a vulnerable manner
    • They were able to remotely obtain the RSA private keys of 0.50% of TLS hosts and 0.03% of SSH hosts because their public keys shared nontrivial common factors due to poor randomness
    • They were able to remotely obtain the DSA private keys for 1.03% of SSH hosts due to repeated signature non-randomness
  • http://j.mp/1iPATZx
Dual_EC_DRBG Backdoor (2013)
  • Dual Elliptic Curve Deterministic Random Bit Generator
  • Ratified NIST, ANSI, and ISO standard
  • Possible backdoor discovered in 2007
  • Bruce Schneier noted that it was “rather obvious”
  • Documents leaked by Snowden and published in the New York Times in September 2013 confirm that the NSA deliberately subverted the standard
  • http://j.mp/1bJEjrB
Q: Ruh roh...so what can we do about it? A: For starters, do a better job seeding our PRNGs.
  • Securely
  • With high quality, unpredictable data
  • More sources are better
  • As early as possible
  • And certainly before generating
  • SSH host keys
  • SSL certificates
  • Or any other critical system DNA
  • /etc/init.d/urandom “carries” a random seed across reboots, and ensures that the Linux PRNGs are seeded
Q: But how do we ensure that in cloud guests? A: Run Ubuntu!
Sorry, shameless plug...

Q: And what is Ubuntu's solution? A: Meet pollinate.
  • pollinate is a new security feature that seeds the PRNG.
  • Introduced in Ubuntu 14.04 LTS cloud images
  • Upstart job
  • It automatically seeds the Linux PRNG as early as possible, and before SSH keys are generated
  • It’s GPLv3 free software
  • Simple shell script wrapper around curl
  • Fetches random seeds
  • From 1 or more entropy servers in a pool
  • Writes them into /dev/urandom
  • https://launchpad.net/pollinate
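As a very rough sketch of what such a client does (not the actual pollinate script; the POST field name and response layout here are assumptions based on the protocol notes later in these slides):

#!/bin/sh
# Sketch only: request a seed from a pollen-style server and feed it to the PRNG.
CHALLENGE=$(head -c 64 /dev/urandom | sha512sum | awk '{print $1}')
RESPONSE=$(curl -s --data "challenge=$CHALLENGE" https://entropy.ubuntu.com/)
# The first line of the response should be sha512(challenge), proof the server did work;
# the remainder is the random seed, which gets written into the kernel pool.
echo "$RESPONSE" | sed 1d > /dev/urandom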
Q: What about the back end? A: Introducing pollen.
  • pollen is an entropy-as-a-service implementation
  • Works over HTTP and/or HTTPS
  • Supports a challenge/response mechanism
  • Provides 512 bit (64 byte) random seeds
  • It’s AGPL free software
  • Implemented in golang
  • Less than 50 lines of code
  • Fast, efficient, scalable
  • Returns the (optional) challenge sha512sum
  • And 64 bytes of entropy
  • https://launchpad.net/pollen
Q: Golang, did you say? That sounds cool! A: Indeed. Around 50 lines of code, cool!

pollen.go
Q: Is there a public entropy service available? A: Hello, entropy.ubuntu.com.
  • Highly available pollen cluster
  • TLS/SSL encryption
  • Multiple physical servers
  • Behind a reverse proxy
  • Deployed and scaled with Juju
  • Multiple sources of hardware entropy
  • High network traffic is always stirring the pot
  • AGPL, so source code always available
  • Supported by Canonical
  • Ubuntu 14.04 LTS cloud instances run pollinate once, at first boot, before generating SSH keys
Q: But what if I don't necessarily trust Canonical? A: Then use a different entropy service :-)
  • Deploy your own pollen
    • bzr branch lp:pollen
    • sudo apt-get install pollen
    • juju deploy pollen
  • Add your preferred server(s) to your $POOL
    • In /etc/default/pollinate
    • In your cloud-init user data
      • In progress
  • In fact, any URL works if you disable the challenge/response with pollinate -n|--no-challenge
Q: So does this increase the overall entropy on a system? A: No, no, no, no, no!
  • pollinate seeds your PRNG, securely and properly and as early as possible
  • This improves the quality of all random numbers generated thereafter
  • pollen provides random seeds over HTTP and/or HTTPS connections
  • This information can be fed into your PRNG
  • The Linux kernel maintains a very conservative estimate of the number of bits of entropy available, in /proc/sys/kernel/random/entropy_avail
  • Note that neither pollen nor pollinate directly affect this quantity estimate!!!
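You can watch that estimate yourself; the path comes straight from the slide above:

cat /proc/sys/kernel/random/entropy_avail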
Q: Why the challenge/response in the protocol? A: Think of it like the Heisenberg Uncertainty Principle.
  • The pollinate challenge (via an HTTP POST submission) affects the pollen server’s PRNG state machine
  • pollinate can verify the response and ensure that the pollen server at least “did some work”
  • From the perspective of the pollen server administrator, all communications are “stirring the pot”
  • Numerous concurrent connections ensure a computationally complex and impossible to reproduce entropy state
Q: What if pollinate gets crappy or compromised or no random seeds? A: Functionally, it’s no better or worse than it was without pollinate in the mix.
  • In fact, you can `dd if=/dev/zero of=/dev/random` if you like, without harming your entropy quality
    • All writes to the Linux PRNG are whitened with SHA1 and mixed into the entropy pool
    • Of course it doesn’t help, but it doesn’t hurt either
  • Your overall security is back to the same level it was when your cloud or virtual machine booted at an only slightly random initial state
  • Note the permissions on /dev/*random
    • crw-rw-rw- 1 root root 1, 8 Feb 10 15:50 /dev/random
    • crw-rw-rw- 1 root root 1, 9 Feb 10 15:50 /dev/urandom
  • It's a bummer of course, but there's no new compromise
Q: What about SSL compromises, or CA Man-in-the-Middle attacks? A: We are mitigating that by bundling the public certificates in the client.
  • The pollinate package ships the public certificate of entropy.ubuntu.com
    • /etc/pollinate/entropy.ubuntu.com.pem
    • And curl uses this certificate exclusively by default
  • If this really is your concern (and perhaps it should be!)
    • Add more URLs to the $POOL variable in /etc/default/pollinate
    • Put one of those behind your firewall
    • You simply need to ensure that at least one of those is outside of the control of your attackers
Q: What information gets logged by the pollen server? A: The usual web server debug info.
  • The current timestamp
  • The incoming client IP/port
    • At entropy.ubuntu.com, the client IP/port is actually filtered out by the load balancer
  • The browser user-agent string
  • Basically, the exact same information that Chrome/Firefox/Safari sends
  • You can override if you like in /etc/default/pollinate
  • The challenge/response, and the generated seed are never logged!
Feb 11 20:44:54 x230 2014-02-11T20:44:54-06:00 x230 pollen[28821] Server received challenge from [127.0.0.1:55440, pollinate/4.1-0ubuntu1 curl/7.32.0-1ubuntu1.3 Ubuntu/13.10 GNU/Linux/3.11.0-15-generic/x86_64] at [1392173094634146155]

Feb 11 20:44:54 x230 2014-02-11T20:44:54-06:00 x230 pollen[28821] Server sent response to [127.0.0.1:55440, pollinate/4.1-0ubuntu1 curl/7.32.0-1ubuntu1.3 Ubuntu/13.10 GNU/Linux/3.11.0-15-generic/x86_64] at [1392173094634191843]
Q: Has the code or design been audited? A: Yes, but more feedback is welcome!
  • All of the source is available
  • Service design and hardware specs are available
  • The Ubuntu Security team has reviewed the design and implementation
  • All feedback has been incorporated
  • At least 3 different Linux security experts outside of Canonical have reviewed the design and/or implementation
    • All feedback has been incorporated
Q: Where can I find more information? A: Read up!
Stay safe out there!
:-Dustin

Matthew Helmke: Open Source Resources Sale

Planet Ubuntu - Wed, 2014-07-23 17:40

I don’t usually post sales links, but this sale by InformIT involves my two Ubuntu books along with several others that I know my friends in the open source world would be interested in.

Save 40% on recommended titles in the InformIT OpenSource Resource Center. The sale ends August 8th.

Michael Hall: Why do you contribute to open source?

Planet Ubuntu - Wed, 2014-07-23 12:00

It seems a fairly common, straightforward question.  You’ve probably been asked it before. We all have reasons why we hack, why we code, why we write or draw. If you ask somebody this question, you’ll hear things like “scratching an itch” or “making something beautiful” or “learning something new”.  These are all excellent reasons for creating or improving something.  But contributing isn’t just about creating, it’s about giving that creation away. Usually giving it away for free, with no or very few strings attached.  When I ask “Why do you contribute to open source”, I’m asking why you give it away.

This question is harder to answer, and the answers are often far more complex than the ones given for why people simply create something. What makes it worthwhile to spend your time, effort, and often money working on something, and then turn around and give it away? People often have different intentions or goals in mind when they contribute, from benevolent giving to a community they care about to personal pride in knowing that something they did is being used in something important or by somebody important. But when you strip away the details of the situation, these all hinge on one thing: recognition.

If you read books or articles about community, one consistent theme you will find in almost all of them is the importance of recognizing the contributions that people make. In fact, if you look at a wide variety of successful communities, you will find that one common thing they all offer in exchange for contribution is recognition. It is the fuel that communities run on.  It’s what connects the contributor to their goal, both selfish and selfless. In fact, with open source, the only way a contribution can actually be stolen is by not allowing that recognition to happen.  Even the most permissive licenses require attribution, something that tells everybody who made it.

Now let’s flip that question around: why do people contribute to your project? If their contribution hinges on recognition, are you prepared to give it?  I don’t mean your intent; I’ll assume that you want to recognize contributions. I mean: do you have the processes and people in place to give it?

We’ve gotten very good at building tools to make contribution easier, faster, and more efficient, often by removing the human bottlenecks from the process.  But human recognition is still what matters most.  Silently merging someone’s patch or branch, even if their name is in the commit log, isn’t the same as thanking them for it yourself or posting about their contribution on social media. Letting them know you appreciate their work is important; letting other people know you appreciate it is even more important.

If you are the owner or a leader of a project with a community, you need to be aware of how recognition is flowing out just as much as how contributions are flowing in. Too often communities are successful almost by accident, because the people in them are good at making sure contributions are recognized and that people know it, simply because that’s their nature. But it’s just as possible for communities to fail because the personalities involved didn’t have this natural tendency, not because of any lack of appreciation for the contributions, just a quirk of their personality. It doesn’t have to be this way: if we are aware of the importance of recognition in a community, we can be deliberate in our approach to making sure it flows freely in exchange for contributions.

Andrew Pollock: [tech] Going solar

Planet Ubuntu - Wed, 2014-07-23 05:36

With electricity prices in Australia seeming to be only going up, and solar being surprisingly cheap, I decided it was a no-brainer to invest in a solar installation to reduce my ongoing electricity bills. It also paves the way for getting an electric car in the future. I'm also a greenie, so having some renewable energy happening gives me the warm and fuzzies.

So today I got solar installed. I've gone for a 2 kW system, consisting of eight 250-watt Seraphim panels (I'm not entirely sure which model) and an Aurora UNO-2.0-I-OUTD inverter.

It was totally a case of decision fatigue when it came to shopping around. Everyone claims the particular panels they want to sell are the best. It's pretty much impossible to make a decent assessment of their claims. In the end, I went with the Seraphim panels because they scored well on the PHOTON tests. That said, I've had other solar companies tell me the PHOTON tests aren't indicative of Australian conditions. It's hard to know who to believe. In the end, I chose Seraphim because of the PHOTON test results, and they're also apparently one of the few panels that pass the Thresher test, which tests for durability.

The harder choice was the inverter. I'm told that yield varies wildly by inverter, and narrowed it down to Aurora or SunnyBoy. Jason's got a SunnyBoy, and the appeal with it was that it supported Bluetooth for data gathering, although I don't much care for the aesthetics of it. Then I learned that there was a WiFi card coming out soon for the Aurora inverter, and that struck me as better than Bluetooth, so I went with the Aurora inverter. I discovered at the eleventh hour that the model of Aurora inverter that was going to be supplied wasn't supported by the WiFi card, but was able to switch models to the one that was. I'm glad I did, because the newer model looks really nice on the wall.

The whole system was up and running just in time to catch the setting sun, so I'm looking forward to seeing it in action tomorrow.

Apparently the next step is that Energex has to come out to replace my analog power meter with a digital one.

I'm grateful that I was able to get Body Corporate approval to use some of the roof. Being on the top floor helped make the installation more feasible too, I think.

Serge Hallyn: rsync.net feature: subuids

Planet Ubuntu - Wed, 2014-07-23 04:02

The problem: Some time ago, I had a server “in the wild” from which I
wanted some data backed up to my rsync.net account. I didn’t want to
put sensitive credentials on this server in case it got compromised.

The awesome admins at rsync.net pointed out their subuid feature. For
no extra charge, they’ll give you another uid, which can have its own
ssh keys, whose home directory is symbolically linked under your main
uid’s home directory. So the server can rsync backups to the subuid,
and if it is compromised, attackers cannot get at any info which didn’t
originate from that server anyway.
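As a sketch, the backup job on the untrusted server then looks something like this (the subuid account name, key path and source directory are invented for illustration):

# run on the server: push backups using the subuid's own ssh key
rsync -az -e 'ssh -i /root/.ssh/subuid_key' /srv/backups/ subuid1234@rsync.net: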

Very nice.


Mattia Migliorini: Going multilingual: welcome Italian!

Planet Ubuntu - Tue, 2014-07-22 17:21

Those of you who have followed this blog for some time know that the preferred language here is English (a small number of posts in the early stages are an exception). Things are changing, though.

It’s not that difficult to understand: if you go to it.deshack.net you can see this website in Italian. I’ve been thinking about making a big change to this little place on the web for a while, as I want it to become more than a simple blog. I am working on a new theme for business websites, but I’ll let you know more when it’s time. In the meantime, don’t be surprised if you see some small changes here.

Note

The main language will remain English. You will find all the Italian content on it.deshack.net, as mentioned before. Old posts will be translated only if someone asks.

Now it’s time for me to ask you something: do you think this is an interesting change? Let me know with a comment!

Ubuntu Kernel Team: Kernel Team Meeting Minutes – July 22, 2014

Planet Ubuntu - Tue, 2014-07-22 17:12
Meeting Minutes

IRC Log of the meeting.

Meeting minutes.

Agenda

20140722 Meeting Agenda


Release Metrics and Incoming Bugs

Release metrics and incoming bug data can be reviewed at the following link:

http://people.canonical.com/~kernel/reports/kt-meeting.txt


Status: Utopic Development Kernel

The Utopic kernel has been rebased to v3.16-rc6 and officially uploaded
to the archive. We (as in apw) have also completed a herculean config
review for Utopic and administered the appropriate changes. Please test
and let us know your results.
—–
Important upcoming dates:
Thurs Jul 24 – 14.04.1 (~2 days away)
Thurs Aug 07 – 12.04.5 (~2 weeks away)
Thurs Aug 21 – Utopic Feature Freeze (~4 weeks away)


Status: CVE’s

The current CVE status can be reviewed at the following link:

http://people.canonical.com/~kernel/cve/pkg/ALL-linux.html


Status: Stable, Security, and Bugfix Kernel Updates – Trusty/Saucy/Precise/Lucid

Status for the main kernels, until today (Jul. 22):

  • Lucid – Released
  • Precise – Released
  • Saucy – Released
  • Trusty – Released

    Current opened tracking bugs details:

  • http://people.canonical.com/~kernel/reports/kernel-sru-workflow.html

    For SRUs, SRU report is a good source of information:

  • http://people.canonical.com/~kernel/reports/sru-report.html

    Schedule:

    14.04.1 cycle: 29-Jun through 07-Aug
    ====================================================================
    27-Jun Last day for kernel commits for this cycle
    29-Jun – 05-Jul Kernel prep week.
    06-Jul – 12-Jul Bug verification & Regression testing.
    13-Jul – 19-Jul Regression testing & Release to -updates.
    20-Jul – 24-Jul Release prep
    24-Jul 14.04.1 Release [1]
    07-Aug 12.04.5 Release [2]

    cycle: 08-Aug through 29-Aug
    ====================================================================
    08-Aug Last day for kernel commits for this cycle
    10-Aug – 16-Aug Kernel prep week.
    17-Aug – 23-Aug Bug verification & Regression testing.
    24-Aug – 29-Aug Regression testing & Release to -updates.

    [1] This will be the very last kernels for lts-backport-quantal, lts-backport-raring,
    and lts-backport-saucy.

    [2] This will be the lts-backport-trusty kernel as the default in the precise point
    release iso.


Open Discussion or Questions? Raise your hand to be recognized

No open discussions.

Lubuntu Blog: Box for Qt

Planet Ubuntu - Tue, 2014-07-22 16:11
Box's evolution continues. Due to the Qt development, the main theme for Lubuntu must grow a bit more to cover more apps, devices and, of course, environments. Now it's Qt, the sub-system for the next Lubuntu desktop, and this will also allow its use in KDE5 and Plasma Next. For now it's just a project, but the Dolphin file manager looks fine! Note: this is under heavy development, no

Rick Spencer: The Community Team

Planet Ubuntu - Tue, 2014-07-22 12:56


So, given Jono’s departure a few weeks back, I bet a lot of folks have been wondering about the Canonical Community Team. For a little background, the community team reports into the Ubuntu Engineering division of Canonical, which means that they all report into me. We have not been idle, and this post is to discuss a bit about the Community Team going forward.

What has Stayed the Same?

First, we have made some changes to the structure of the community team itself. However, one thing did not change. I kept the community team reporting directly into me, VP of Engineering, Ubuntu. I decided to do this so that there is a direct line to me for any community concerns that have been raised to anyone on the community team.
I had a call with the Community Council a couple of weeks ago to discuss the community team and get feedback about how it is functioning and how things could be improved going forward. I laid out the following for the team.

First, there were three key things that I think that I wanted the Community Team to continue to focus on:
  • Continue to create and run innovative programs to facilitate ever more community contributions and growing the community.
  • Continue to provide good advice to me and the rest of Canonical regarding how to be the best community members we can be, given our privileged positions of being paid to work within that community.
  • Continue to assist with outward communication from Canonical to the community regarding plans, project status, and changes to those.
The Community Council was very engaged in discussing how this all works and should work in the future, as well as other goals and responsibilities for the community team.

What Has Changed?

In setting up the team, I had some realizations. First, there was no longer just one “Community Manager”. When the project was young and Canonical was small, we had only one, and the team slowly grew. However, the team is now four people dedicated to just the Community Team, and there are others who spend almost all of their time working on Community Team projects.
Secondly, while individuals on the team had been hired to have specific roles in the community, every one of them had branched out to tackle new challenges as needed.

Thirdly, there is no longer just one “Community Spokesperson”. Everyone in Ubuntu Engineering can and should speak to/for Canonical and to/for the Ubuntu Community in the right contexts.

So, we made some small, but I think important, changes to the Community Team.

First, we created the role of Community Team Manager. Notice the important inclusion of the word “Team”. This person’s job is not to “manage the community”, but rather to organize and lead the rest of the community team members. This includes things like project planning, HR responsibilities, strategic planning and everything else entailed in being a good line manager. After a rather competitive interview process, with some strong candidates, one person clearly rose to the top as the best candidate. So, I would like to formally introduce David Planella (lp, g+) as the Community Team Manager!

Second, I changed the other job titles from their rather specific titles to just “Community Manager” in order to reflect the reality that everyone on the community team is responsible for the whole community. So that means Michael Hall (lp, g+), Daniel Holbach (lp, g+), and Nicholas Skaggs (lp, g+) are all now “Community Managers”.

What's Next?

This is a very strong team, and a really good group of people. I know them each personally, and have a lot of confidence in each of them. Combined as a team, they are amazing. I am excited to see what comes next.

In light of these changes, the most common question I get is, “Who do I talk to if I have a question or concern?” The answer to that is “anyone”. It’s understandable if you feel most comfortable talking to someone on the community team, so please feel free to find David, Michael, Daniel, or Nicholas online and ask your question. There are, of course, other stalwarts like Alan Pope (lp, g+) and Oliver Grawert (lp, g+) who seem to be always online :) By which, I mean to say that while the Community Managers are here to serve the Ubuntu Community, I hope that anyone in Ubuntu Engineering considers their role in the Ubuntu Community to include working with anyone else in the Ubuntu Community :)

Want to talk directly to the community team today? Easy: join their Ubuntu on Air Q&A Session at 15:00 UTC :)

Finally, please note that I love to be "interrupted" by questions from community members :) The best way to get in touch with me is on freenode, where I go by rickspencer3. Otherwise, I am also on g+, and of course there is this blog :)

Martin Pitt: autopkgtest 3.2: CLI cleanup, shell command tests, click improvements

Planet Ubuntu - Tue, 2014-07-22 06:16

Yesterday’s autopkgtest 3.2 release brings several changes and improvements that developers should be aware of.

Cleanup of CLI options, and config files

Previous adt-run versions had rather complex, confusing, and rarely (if ever?) used options for filtering binaries and building sources without testing them. All of those (--instantiate, --sources-tests, --sources-no-tests, --built-binaries-filter, --binaries-forbuilds, and --binaries-fortests) now went away. Now there is only -B/--no-built-binaries left, which disables building/using binaries for the subsequent unbuilt tree or dsc arguments (by default they get built and their binaries used for tests), and I added its opposite --built-binaries for completeness (although you most probably never need this).
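For instance (package names invented): the first source gets its binaries built and used for tests, while -B switches that off for the second:

adt-run apkg.dsc -B bpkg.dsc --- schroot sid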

The --help output now is a lot easier to read, both due to above cleanup, and also because it now shows several paragraphs for each group of related options, and sorts them in descending importance. The manpage got updated accordingly.

Another new feature is that you can now put arbitrary parts of the command line into a file (thanks to porting to Python’s argparse), with one option/argument per line. So you could e. g. create config files for options and runners which you use often:

$ cat adt_sid
--output-dir=/tmp/out
-s
---
schroot
sid

$ adt-run libpng @adt_sid

Shell command tests

If your test only contains a shell command or two, or you want to re-use an existing upstream test executable and just need to wrap it with some command like dbus-launch or env, you can use the new Test-Command: field instead of Tests: to specify the shell command directly:

Test-Command: xvfb-run -a src/tests/run
Depends: @, xvfb, [...]

This avoids having to write lots of tiny wrappers in debian/tests/. This was already possible for click manifests, this release now also brings this for deb packages.

Click improvements

It is now very easy to define an autopilot test with extra package dependencies or restrictions, without having to specify the full command, using the new autopilot_module test definition. See /usr/share/doc/autopkgtest/README.click-tests.html for details.

If your test fails and you just want to run your test with additional dependencies or changed restrictions, you can now avoid having to rebuild the .click by pointing --override-control (which previously only worked for deb packages) to the locally modified manifest. You can also (ab)use this to e. g. add the autopilot -v option to autopilot_module.

Unpacking of test dependencies was made more efficient by not downloading Python 2 module packages (which cannot be handled in “unpack into temp dir” mode anyway).

Finally, I made the adb setup script more robust and also faster.

As usual, every change in control formats, CLI etc. have been documented in the manpages and the various READMEs. Enjoy!

Andrew Pollock: [debian] Day 174: Kindergarten, startup stuff, tennis

Planet Ubuntu - Tue, 2014-07-22 01:23

I picked up Zoe from Sarah this morning and dropped her at Kindergarten. Traffic seemed particularly bad this morning, or I'm just out of practice.

I spent the day powering through the last two parts of the registration block of my real estate licence training. I've got one more piece of assessment to do, and then it should be done. The rest is all dead-tree written stuff that I have to mail off to get marked.

Zoe's doing tennis this term as her extra-curricular activity, and it's on a Tuesday afternoon after Kindergarten at the tennis court next door.

I'm not sure what proportion of the class is continuing on from previous terms, and so how far behind the eight ball Zoe will be, but she seemed to do okay today, and she seemed to enjoy it. Megan's in the class too, and that didn't seem to result in too much cross-distraction.

After that, we came home and just pottered around for a bit and then Zoe watched some TV until Sarah came to pick her up.

The Fridge: Ubuntu Weekly Newsletter Issue 375

Planet Ubuntu - Mon, 2014-07-21 23:42

Andrew Pollock: [debian] Day 173: Investigation for bug #749410 and fixing my VMs

Planet Ubuntu - Mon, 2014-07-21 20:25

I have a couple of virt-manager virtual machines for doing DHCP-related work. I have one for the DHCP server and one for the DHCP client, and I have a private network between the two so I can simulate DHCP requests without messing up anything else. It works nicely.

I got a bit carried away, and I use LVM snapshots for the work I do, so that when I'm done I can throw away the virtual machine's disks and work with a new snapshot next time I want to do something.

I have a cron job that, on a good day, fires up the virtual machines using the master logical volumes and does a dist-upgrade on a weekly basis. It seems to have varying degrees of success, though.

So I fired up my VMs to do some investigation of the problem for #749410 and discovered that they weren't booting, because the initramfs couldn't find the root filesystem.

Upon investigation, the problem seemed to be that the logical volumes weren't getting activated. I didn't get to the bottom of why, but a manual activation of the logical volumes allowed the instances to continue booting successfully, and after doing manual dist-upgrades and kernel upgrades, they booted cleanly again. I'm not sure if I got hit by a passing bug in unstable, or what the problem was. I did burn about 2.5 hours just fixing everything up though.

Then I realised that there'd been more activity on the bug since I'd last read it while I was on vacation, and half the investigation I needed to do wasn't necessary any more. Lesson learned.

I haven't got to the bottom of the bug yet, but I had a fun day anyway.

Jonathan Riddell: Barcelona, such a beautiful horizon

Planet Ubuntu - Mon, 2014-07-21 19:22
KDE Project:

When life gives you a sunny beach to live on, make a mojito and go for a swim. Since KDE has an office in Barcelona that all KDE developers are welcome to use, I decided to move there until I get bored. So far there's an interesting language or two, hot weather to help my fragile head and water polo in the sky. Do drop by next time you're in town.


Plasma 5 Release Party Drinks

Also new poll for Plasma 5. What's your favourite feature?
