news aggregator

Jonathan Riddell: Tier 1 is a Slam Dunk

Planet Ubuntu - Mon, 2014-01-13 17:41
KDE Project:

KDE Frameworks 5 tech preview shipped last week and we've been packaging furiously. This is all new packaging from scratch, without any of the automated scripts we have for KDE SC. Tier 1 is in our experimental PPA for Trusty. Special thanks to Scarlett, our newest Kubuntu Ninja, who is doing a stormer.

Ubuntu LoCo Council: Defining Bug Status for the ubuntu-locoteams Project

Planet Ubuntu - Mon, 2014-01-13 16:50

And as promised, more news from us is coming in. We are starting this year with new plans, new ideas, and an effort to refresh a few things.

The ubuntu-locoteams Project on Launchpad is where issues against LoCo Teams can be filed, but it is mostly used by the LoCo Council to keep track of the (re-)verification processes that are taking place. If you are a team contact or admin you may already know that we use bugs for that purpose. Personally, I like them as they are versatile, and they give the option to have a (re-)verification done if the team contact cannot be present at a meeting. But sometimes team contacts, or even we ourselves, may be confused about what each bug status means on this pseudo-project. This is why we have thought about it and created a definition for each bug status, which will be applied when a bug is set on the ubuntu-locoteams project on Launchpad.

We have created a wiki page which explains the following bug statuses:

  • New: Bug has been just filed, no response from the contact/person in charge yet.
  • Incomplete: Cycle has ended and a (re-)verification application is not on file. OR Team has expired and a re-verification application is not on file.
  • Opinion: Everything that is not a (re-)verification.
  • Invalid: Bug is not valid (can be a duplicate or something really invalid).
  • Won’t fix: Team doesn’t exist / bug has been created in vain / team existed but is now closed.
  • Confirmed: Bug has been just filed and the contact/person in charge has replied.
  • Triaged: (Re-)verification will be discussed on a meeting.
  • In Progress: (Re-)verification voting procedure is currently open and the process is being followed on the bug, not on a meeting.
  • Fix Committed: Team is (re-)verified, but its membership in ~locoteams-verified needs to be (re)set.
  • Fix Released: Everything has been done and team has been added to ~locoteams-verified, process is done.

We hope that these definitions make the (re-)verification process and its stages clearer, and help everyone understand it better. If you or your team have any enquiries about the (re-)verification process, make sure to send us an email at loco-council@lists.ubuntu.com, or drop by the #ubuntu-locoteams channel on irc.freenode.net (click here to access via a web client). Again, expect more from us soon!

Dustin Kirkland: How I REALLY WISH I could use my Intel NUC

Planet Ubuntu - Mon, 2014-01-13 13:27

Ars Technica posed an interesting question back in October: We have an Intel NUC -- what should we do with it?
Here's one idea... Of course I have Ubuntu One storage and a Dropbox account.  And I'm well familiar with Box.com and dozens of other highly successful cloud storage solutions too.

These are unfortunately not the solution I want, to the problem I have.

I've considered many, many alternatives.  But ultimately, the only product on the market which I'm willing to buy is a co-lo service.  I want full root access, inside of a virtual private server, running a pristine, unspoiled, unmodified Ubuntu LTS server.  And attached to that, I want a lot (like, 1TB or more) of highly available, scalable block storage.  Not object storage.  BFS.  Block frickin' storage.  I want to format it with the file system of my choosing, and encrypt the data within with a cryptosystem and key of my choosing.

And finally I want to run rsync over an encrypted ssh connection multiple times per day to push my backups "to the cloud".

That's it.  And that's neither U1 nor Dropbox.  That's a little bit like rsync.net, but not really.

I currently use AWS's EC2 and EBS.  I'm happy with the technology, but unhappy with the cost.

You're welcome to try, but you're not going to convince me to do this some other way.  Sorry.  This method is time-tested, recovery-proven.

A few years ago, I blogged about how I used a Dell Mini9 netbook as an Ubuntu Server.  I tucked that machine away in a nook at my parents' house, and it served me reasonably well as a (free) co-lo for several years.

 But there is now a clear and present opportunity for a new cloud services business to emerge.  And the industry perfectly poised to offer such a cloud service is one of the oldest brick-and-mortar institutions in human history....


Banks.

Yes, banks.  You know, the important looking place your parents used to visit a couple of times per week to deposit and cash checks, but now largely replaced by robots called Automated Teller Machines (ATMs).



There are really only two reasons I've visited a bank in the past 15 years:
  1. To have a document notarized
  2. To access my safe deposit box

And every single time I do the latter, I yearn for a power outlet and an Ethernet jack in that magic, safe little box.
Consider that for a minute...  How nice would it be to have your physical co-lo machine, under lock and key, in a safe, held by an old and trusted financial institution?  A physical location that you could travel to, authenticate using multiple forms of identification, present a key, open a sturdy looking box, and access your micro PC (with current technology, a sleek little Intel NUC).
I think banks are extraordinarily well positioned to offer this as a service, as there are strong, established standards for physical security, and they're well placed in most neighborhoods around the world.  Establishing the service would mean beefing up redundant power supplies, internet connectivity, and air flow in at least one portion of the safe deposit vault (which might mean an altogether new vault).
And the service itself?
  • I currently pay $50 per year for a small, document-sized safe deposit box (which, by the way, the NUC fits within -- I've already checked).
  • The NUC itself, at maximum energy consumption, draws 17W; running around the clock, that's roughly 149 kWh per year, which at $0.125/kWh (the current rate in Austin, Texas) costs approximately $18.60 per year in energy
  • And a bare minimum Internet service plan runs about $20/month in my area, or $240/year
So at retail costs, I think we're talking somewhere between $300 and $500 per year for this service.  Done well, this is easily worth $1200 per year to me, which I would delightfully buy, as this is actually not far off from my yearly AWS bill.
How long have I been thinking about this?  Nearly 10 years!  Regrettably, I filed way too many patents during my 8 years at IBM (which itself deserves a blog post of contrition), including one on this very concept (US Patent 7,484,657; filed July 14, 2005; granted February 3, 2009).  Not that IBM has done anything productive with it to date, much to my chagrin :-(


So there, Ars Technica, that's what I would do with my Intel NUC :-)

:-Dustin

Canonical Design Team: Sheets transition

Planet Ubuntu - Mon, 2014-01-13 09:21

We’ve recently been exploring how the share transitions should work when you’re previewing a photo in gallery mode. Our main goal is that there is a consistent transition for sharing photos across the phone.

This is the latest iteration of the explorations we’ve been doing, and, as such, these transitions are still work in progress, but certainly worth sharing.

Step by step


Video: Sharing a photo in photo gallery mode

The first transition happens when you select “Share” from the toolbar. This takes you to a ‘content picker’ mode where you can select where you’d like to share your photo (Facebook, Twitter, etc.).

The intention is that the ‘content picker’ transition is similar to the ‘page stack’ one — which takes you deeper into the app — but because you’re going into a ‘content picker’ mode the transition needs to be slightly different. That difference is the direction: instead of going from right to left it goes bottom to top.

Once you’ve selected how to share your photo, the screen splits slightly below where you’ve tapped (in the example, below Facebook), and there is a subtle transparency fade so that the transition is less jarring.

In the next step, the transition takes you to an embedded Facebook share page, where you can write a description of the photo you're posting. Once you select the description box, the on-screen keyboard (OSK) slides up from the bottom, something that is consistent across the phone.

When you click “Post”, a transition similar to the share-selection one, but reversed, takes you back to the photo.

Your feedback

As I’ve mentioned before, this is still work in progress, but we’re really interested in hearing your thoughts — let us know what you think in the comments.

Duncan McGreggor: Prefix Operators in Haskell

Planet Ubuntu - Mon, 2014-01-13 02:47
I wanted to name this post something a little more catchy, like "McCarthy's Dream: Haskell as Lisp's M-Expressions" but I couldn't quite bring myself to do it. If s-expressions had greater support in Haskell, I would totally have gone for it, but alas, they don't.

However, there is still reason to celebrate: many Haskell operators do support prefix notation! This was a mind-blower to me, since I hadn't  heard about this until last night...

At the Data Day Texas conference this past Saturday, O'Reilly/StrataConf had a fantastic booth. Among the many cool give-aways they were doing, I obtained a coupon for a free ebook and another for 50% off. Upon returning home and having made my free book decision, I was vacillating between an OCaml book and the famous Haskell one. I've done a little Haskell in the past but have never touched OCaml, so I was pretty tempted.

However, something amazing happened next. I stumbled upon a page that was comparing OCaml and Haskell, which led to another page... where Haskell prefix notation was mentioned. I know many Haskellers who might read this would shrug, or say "yeah, we know", but this was quite a delightful little discovery for me :-)

I don't remember the first page I found, but since then, I've come across a couple more resources:
That's pretty much it, though. (But please let me know if you know of or find any other resources!)

As such, I needed to do a lot more exploration. Initially, I was really excited and thought I'd be able to convert all Haskell forms to s-expressions (imports, lets, etc.), but I quickly found this was not the case. But the stuff that did work is pretty cool, and I saved it in a series of gists for your viewing pleasure :-)

Addition

The first test was pretty simple. Finding success, I thought I'd try something I do when using a Lisp/Scheme interpreter as a calculator. As you can see below, that didn't work (the full traceback is elided). Searching on Hoogλe got me to the answer I was looking for, though. Off to a good start:
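
(For illustration, here is a minimal GHCi sketch of this kind of session; it is my own reconstruction in the same spirit, not the original gist.)

ghci> (+) 1 2
3
ghci> (+) 1 2 3 4 5
-- (type error elided: (+) is strictly binary, so Lisp-style variadic addition fails)
ghci> sum [1, 2, 3, 4, 5]    -- the Hoogle-assisted answer: fold over a list instead
15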


More Operators

I checked some basic operators next, and a function. Everything's good so far:
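
(Again a small illustrative sketch of the sort of thing that works, rather than the exact gist.)

ghci> (*) 3 4
12
ghci> (-) 10 4
6
ghci> (/) 1 2
0.5
ghci> (^) 2 10
1024
ghci> max 5 7    -- an ordinary (prefix) function for comparison
7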


Lists

Here are some basic list operations, including the ever-so-cool list difference operator, (\\). Also, I enjoyed the cons (:) so much that I made a t-shirt of it :-) (See the postscript below for more info.)
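
(A sketch along those lines; note that (\\) lives in Data.List, so it needs an import first.)

ghci> (:) 1 [2, 3]    -- cons, in prefix position
[1,2,3]
ghci> (++) [1, 2] [3, 4]
[1,2,3,4]
ghci> (!!) [10, 20, 30] 1
20
ghci> import Data.List
ghci> (\\) [1, 2, 3, 4] [2, 4]    -- list difference
[1,3]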


List Comprehensions

I'm not sure if one can do any more prefixing than this for list comprehensions:
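
(Something like this, with both the element expression and the guard written prefix-style.)

ghci> [ (*) 2 x | x <- [1 .. 5], (>) x 2 ]
[6,8,10]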


Functions

Same thing for functions; nothing really exciting to see. (Btw, these examples were lifted from LYHGG.)
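
(A sketch in the LYHGG spirit; doubleMe and doubleUs are just the usual illustrative names.)

ghci> let doubleMe x = (*) x 2
ghci> doubleMe 9
18
ghci> let doubleUs x y = (+) (doubleMe x) (doubleMe y)
ghci> doubleUs 4 9
26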


Lambdas

Things get a little more interesting with lambda expressions, especially when a closure is added for spice:
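
(For example, an anonymous function applied directly, then one that closes over a variable; addTo is just an illustrative name of mine.)

ghci> (\x -> (*) x x) 7
49
ghci> let addTo n = \x -> (+) x n    -- the lambda closes over n
ghci> (addTo 10) 32
42
ghci> map (addTo 10) [1, 2, 3]
[11,12,13]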


Function Composition

Using the compose operator in prefix notation is rather... bizarre :-) It looks much more natural as a series of compositions in a lambda. I also added a mostly-standard view of the compose operator for comparison:
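
(A sketch of the three views: (.) used prefix, the same pipeline spelled out as a lambda, and the mostly-standard infix form.)

ghci> ((.) negate (*3)) 7         -- prefix use of the compose operator
-21
ghci> (\x -> negate ((*3) x)) 7   -- the same thing as a lambda
-21
ghci> (negate . (*3)) 7           -- the usual infix view
-21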


Monads

I've saved the best for last, an example of the sort of thing I need when doing matrix operations in game code, graphics, etc. The first one is standard Haskell...
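
(An illustrative sketch using the list monad to build a small grid of coordinates, first in ordinary do-notation and then with (>>=) in prefix position; it stands in for the gists rather than reproducing them.)

ghci> do { x <- [0, 1]; y <- [0, 1]; return (x, y) }    -- standard Haskell
[(0,0),(0,1),(1,0),(1,1)]
ghci> (>>=) [0, 1] (\x -> (>>=) [0, 1] (\y -> return (x, y)))    -- prefix (>>=)
[(0,0),(0,1),(1,0),(1,1)]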


Wow. Such abstract. So brains. 

Note that to make the prefix version anywhere close to legible, I added whitespace. (If you paste this in ghci, you'll see a bunch of prompts, and then the result. If you're running in the Sublime REPL, be sure to scroll to the right.)

And that pretty much wraps up what I did on Sunday afternoon :-)

Other Resources

Post Script

If you're in the San Antonio / Austin area (or SF; I'll be there in March) and want to go in on a t-shirt order, let me know :-) We need at least six for the smallest order (tri-blend Next Level; these are a bit pricey, but they feel GREAT). I'll collect payment before I make an order.


Sam Hewitt: Cucumber Tzatziki Recipe

Planet Ubuntu - Mon, 2014-01-13 00:00

Tzatziki is a cool and creamy cucumber condiment, with a little tanginess. I frequently use it to accompany grilled meats and veg(-etables) –souvlaki– or whenever I'm in the mood for Greek.

Tzatziki Recipe
    Ingredients:
  • 1 large cucumber
  • 2 cloves garlic, crushed & minced
  • 1 cup plain greek yoghurt
  • juice of 1/2 a lemon
  • 1 tablespoon dill, mint or parsley, chopped
  • 2 tablespoons olive oil
  • kosher salt & pepper, to taste
    Directions
  1. Seed and chop the cucumber –you may peel it as well, but I like to leave the skin on for added crunch and colour.
  2. Place the chopped cucumber in a bowl, add the garlic, olive oil, lemon juice and chopped herbs, and liberally season with salt & pepper.
  3. Stir, cover and marinate for 10-15 minutes –during this time the salt will draw some of the water out of the cucumber.
    • Optional step: place the yoghurt in a strainer lined with some cheesecloth and let it strain for as long as you marinate the cucumber. This too will draw out some of the excess water.
    We want to remove as much excess liquid as possible from the ingredients so as not to end up with too runny a result.
  4. Pour off the excess liquid from the cucumber mixture (or strain with fine mesh strainer).
  5. Stir in the yoghurt, re-cover and refrigerate until use.

Sam Hewitt: Creating "Symbolic" Icons for the Web

Planet Ubuntu - Sun, 2014-01-12 00:00

Something I've become a huge user (and fan) of is embedding SVG files on the web, especially simple pictograms that I can colour with CSS–such as the symbolic icon on this button that plugs Moka, for example. :)

Moka Project

In my opinion it yields a much better result than raster graphics, and it is a better fit for this modern age of websites with responsive layouts.

So now I'm going to tell/show you how to do it. More specifically, how to make a "symbolic" icon for your webpage.

Step 1: Design your Icon

I'm going to make the assumption you know how to make a vector graphic. So: open your favourite SVG editor –like Inkscape– and design your icon.

Make a simple pictogram (which is all one path)

If you want a drop shadow (like I've done there) duplicate said pictogram and move it one pixel south and place it on the bottom of the layer stack.

Fill colours are irrelevant, since we'll be using CSS later on.

Step 2: Remove all the unnecessary bits from the SVG

This makes the later steps simpler, but feel free to skip it if you know your way around an SVG's XML.

Personally, I use scour to scrub my SVGs of all the accumulated metadata and the like:

Scour

What you're left with is a trimmed down SVG XML file:

I've replaced the "<" and ">" typically found in XML with "[" and "]" so you can see the code on this site. [?xml version="1.0" encoding="UTF-8" standalone="no"?] [!-- Created with Inkscape (http://www.inkscape.org/) --] [svg id="svg7384" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" style="enable-background:new" xmlns="http://www.w3.org/2000/svg" xmlns:osb="http://www.openswatchbook.org/uri/2009/osb" height="17" width="16" version="1.1" xmlns:cc="http://creativecommons.org/ns#" xmlns:dc="http://purl.org/dc/elements/1.1/"] [defs id="defs7386"] [filter id="filter7554" color-interpolation-filters="sRGB"] [feBlend id="feBlend7556" in2="BackgroundImage" mode="darken"/] [/filter] [/defs] [g id="layer15" transform="translate(-801 196)"] [path id="path3002" opacity=".5" style="color:#bebebe;enable-background:accumulate" d="m15 1.875l-1.125 0.125-8 1-0.875 0.125v0.875 7.188c-0.3165-0.114-0.6445-0.188-1-0.188-1.6569 0-3 1.343-3 3s1.3431 3 3 3 3-1.343 3-3v-8.125l6-0.75v5.063c-0.316-0.114-0.644-0.188-1-0.188-1.657 0-3 1.343-3 3s1.343 3 3 3 3-1.343 3-3v-10-1.125z" transform="translate(801 -196)" /] [path id="path8432" style="color:#bebebe;enable-background:accumulate" d="m15 0.875l-1.125 0.125-8 1-0.875 0.125v0.875 7.188c-0.3165-0.114-0.6445-0.188-1-0.188-1.6569 0-3 1.343-3 3s1.3431 3 3 3 3-1.343 3-3v-8.125l6-0.75v5.0625c-0.316-0.1135-0.644-0.1875-1-0.1875-1.657 0-3 1.343-3 3s1.343 3 3 3 3-1.343 3-3v-10-1.125z" transform="translate(801 -196)" fill="#fff"/] [/g] [/svg]

Now the only parts we're concerned with are the paths that define our icon (bolded above).

Step 3: Creating a Web-Ready version

Using my example/template below, or building one from scratch, in a text editor:

  1. Define an element to house the embedded SVG, in my example it's a div with a custom class.
  2. Copy the value(s) of d of the desired path in the SVG file (bolded above).
  3. Paste the d value in the new paths of our embedded SVG.
  4. Give each path a style class for use with CSS.
  5. Make sure to match the width and height to those of the original SVG file.
Remember, I've replaced the "<" and ">" with "[" and "]" so be sure to switch them back for your use. [div class="pictogram"] [svg height="17" width="16" version="1.1" xmlns="http://www.w3.org/2000/svg" style="overflow:hidden;position:relative;" ] [path class="icon-drop-shadow" d="m15 1.875-1.125 0.125-8 1-0.875 0.125v0.875 7.188c-0.3165-0.114-0.6445-0.188-1-0.188-1.6568 0-3 1.3432-3 3s1.3432 3 3 3 3-1.3432 3-3v-8.125l6-0.75v5.0625c-0.316-0.114-0.644-0.188-1-0.188-1.6568 0-3 1.3432-3 3s1.3432 3 3 3 3-1.3432 3-3v-10-1.125z" /] [path class="icon-fill" d="m15 0.875-1.125 0.125-8 1-0.875 0.125v0.875 7.188c-0.3165-0.114-0.6445-0.188-1-0.188-1.6568 0-3 1.3432-3 3s1.3432 3 3 3 3-1.3432 3-3v-8.125l6-0.75v5.0625c-0.316-0.1135-0.644-0.1875-1-0.1875-1.6568 0-3 1.3432-3 3s1.3432 3 3 3 3-1.3432 3-3v-10-1.125z" /] [/svg] [/div]

Step 4: Styling

Now we can write our CSS, using the styles we created earlier:

/*----- Pictograms -----*/
.pictogram {
    opacity: 1.0;
    background: center no-repeat;
    display: inline;
    height: 17px;
    width: 16px;
    /* Plus any other custom CSS */
}

/*----- Pictograms -----*/
.icon-fill {
    fill: #fff;
    /* Plus any other custom CSS */
}

.icon-drop-shadow {
    fill: #000;
    opacity: 0.2;
    /* Plus any other custom CSS */
}

Now if you were to use the CSS and the embedded SVG on your website you could have something like this:

Music

The button is also all CSS, but I won't get into that here. Happy vectoring! :)

Adnane Belmadiaf: How to use the Built-in Screen Recording in Android 4.4 KitKat

Planet Ubuntu - Sat, 2014-01-11 20:30

Android 4.4 KitKat now supports screen recording; it's only accessible via an ADB command on unrooted devices. This feature is a really great way to create walkthroughs and tutorials for apps, and it's also perfect for reporting bugs.



To start using it you need to install ADB. If you are using Ubuntu, you can use the phablet-team PPA, which has the tools and dependencies for 12.04, 12.10, 13.04 and 13.10, to get everything set up; if not, you can download the Android SDK from the Android developer site.

$ sudo add-apt-repository ppa:phablet-team/tools
$ sudo apt-get update
$ sudo apt-get install android-tools-adb

Enable Developer Mode
  • Go to the settings menu, and scroll down to "About phone." Tap on it.
  • Scroll down to the bottom, where you see "Build number."
  • Tap on it seven (7) times.
Enable USB debugging

Once done, hit the Back button; you'll see a new entry called "Developer Options" just above the "About phone" entry. Tap on it, scroll down to the debugging section, then enable USB debugging. Note that you have to confirm the security prompt on your device:

Using the Screen Recording

Once done, you need to make sure that your device is listed and connected using:

$ adb devices
List of devices attached
xxxxxxxxxxxxxxxx    device

Then all you have to do is:

$ adb shell screenrecord /sdcard/recording.mp4 && adb pull /sdcard/recording.mp4

The default and maximum duration of a screen recording is 3 minutes; you can use the --time-limit argument to set the limit you want (up to that maximum). Here are all the arguments you can set:

  • --help : Displays a usage summary.
  • --size : Sets the video size, for example: 1280x720. The default value is the device's main display resolution (if supported), 1280x720 if not. For best results, use a size supported by your device's Advanced Video Coding (AVC) encoder.
  • --bit-rate : Sets the video bit rate for the video, in megabits per second. The default value is 4Mbps. You can increase the bit rate to improve video quality or lower it for smaller movie files. The following example sets the recording bit rate to 6Mbps: screenrecord --bit-rate 6000000 /sdcard/demo.mp4
  • --time-limit : Sets the maximum recording time, in seconds. The default and maximum value is 180 (3 minutes).
  • --rotate : Rotates the output 90 degrees. This feature is experimental.
  • --verbose : Displays log information on command line screen. If you do not set this option, the utility does not display any information while running.

Ubuntu LoCo Council: New Logos Announced!

Planet Ubuntu - Sat, 2014-01-11 18:25

As many of you know, we opened a contest to find the new logos for the LoCo Teams and LoCo Council. After a while, we have finally decided that the winner is…

Sam Hewitt!

You should be seeing the new logos in place in a couple of minutes on the ~locoteams and ~ubuntu-locouncil teams on Launchpad. You can check the logo set in SVG format here. We should remind you that the set is licensed under a CC-BY-SA 3.0 License. Thanks to everyone who participated with their logos, and make sure to expect more news from us soon!

Ubuntu Classroom: 2nd Call for Instructors: Ubuntu User Days on Jan 25-26th 2014

Planet Ubuntu - Sat, 2014-01-11 06:32

The Classroom Team is proud to announce that we’ll be hosting our next Ubuntu User Days on Saturday January 25th, 14:30 UTC – Sunday the 26th 2014, 3:00 UTC.

“User Days was created to be a set of courses offered during a one day period to teach the beginning or intermediate Ubuntu user the basics to get them started with Ubuntu”

In order for this event to be a success, we need instructors to lead sessions.

To volunteer to lead a session, you can contact a member of the Ubuntu User Days Team by sending an email to me (jose at ubuntu.com) or to the ubuntu-classroom at lists.ubuntu.com mailing list, or by stopping by #ubuntu-classroom-backstage on irc.freenode.net. We still have plenty of slots open for the two days, so make sure to grab yours now!

If you are unsure of a topic for your session, you can visit the Course Suggestions page:

https://wiki.ubuntu.com/UserDaysTeam/CourseSuggestions

If you are unsure about expectations for class instructors, please ask! You may also visit the logs from past Ubuntu User Days:

We are always keen to see new instructors around; if you have any doubts about how all of this is run, you can visit us on our IRC channel (#ubuntu-classroom-backstage on irc.freenode.net). Please be sure to pass this announcement along to any of your friends who might be interested in leading a session.


Lubuntu Blog: Lots of LX updates

Planet Ubuntu - Fri, 2014-01-10 15:43
The coder of the entire LXDE desktop, PCMan, has been busy these days. He has been updating the main core libs and app (pcmanfm / libfm 1.2.0), the image viewer (gpicview 0.2.4) and the appearance config tool (lxappearance 0.5.5), improving them with bug fixes and new features:
  • pcmanfm: dual pane view
  • pcmanfm: create symlinks
  • lxappearance: fix creating themes
  • lxappearance: compression uses the xz format

Colin King: cppcheck - another very useful static code analysis tool

Planet Ubuntu - Fri, 2014-01-10 12:30
Over the past months I have been using static code analysis tools such as smatch and Coverity Scan on various open source projects that I am involved with.  These, combined with gcc's -Wall -Wextra, have proved useful in tracking down and eliminating various bugs.

Recently I stumbled on cppcheck and gave it a spin on several larger projects.  One of the cppcheck project's aims is to find errors that the compiler won't spot, while trying to keep the number of false positives to a minimum.

cppcheck is very easy to use; the default settings just work out of the box. However, for extra checking I enabled the --force option to check all configurations and --enable=all to be totally thorough and pedantic.

The --enable option is especially useful. It allows one to select different types of checking, for example, coding style, execution performance, portability, unused functions and missing include files.

Even though my code has been through smatch and Coverity Scan, cppcheck still managed to find a few issues using --enable=all:

1. unused functions
2. a potential memory leak with realloc(), for example:

buf = realloc(buf, new_size);
if (!buf)
     return NULL;

If realloc() fails, it returns NULL and the original buf is leaked, because the only pointer to it has just been overwritten.  A potential fix is:

tmp = realloc(buf, new_size);
if (!tmp) {
     free(buf);
     return NULL;
} else
     buf = tmp;

3. some potential sscanf buffer overflows
4. some coding style improvements, for example, local auto variables could be moved to a deeper scope

So cppcheck worked well for me.  I recommend referring to the cppcheck project wiki to check out the features and then subjecting your code to it and seeing if it can find any bugs.

Jono Bacon: Community Leadership Summit 2014 Announced!

Planet Ubuntu - Fri, 2014-01-10 06:10

I am delighted to announce the Community Leadership Summit 2014, now in its sixth year! This year it takes place on the 18th and 19th July 2014, the weekend before OSCON, at the Oregon Convention Center. Thanks again to O’Reilly for providing the venue.

For those of you who are unfamiliar with the CLS, it is an entirely free event designed to bring together community leaders and managers and the projects and organizations that are interested in growing and empowering a strong community. The event provides an unconference style schedule in which attendees can discuss, debate and explore topics. This is augmented with a range of scheduled talks, panel discussions, networking opportunities and more.

The heart of CLS is an event driven by the attendees, for the attendees.

The event provides an opportunity to bring together the leading minds in the field with new community builders to discuss topics such as governance, creating collaborative environments, conflict resolution, transparency, open infrastructure, social networking, commercial investment in community, engineering vs. marketing approaches to community leadership and much more.

The previous events have been hugely successful and a great way to connect together different people from different community backgrounds to share best practice and make community management an art and science better understood and shared by us all.

I will be providing more details about the event closer to the time, but in the meantime be sure to register!

Paul Tagliamonte: Docker in Debian

Planet Ubuntu - Fri, 2014-01-10 02:13

Hello, World!

Docker’s in Debian! Isn’t that great! Let’s all get happy! It’s called docker.io, so go ahead and sudo apt-get install docker.io whenever you want :)

The first two uploads had a few errors, which are 100% my fault. The first was a set of FTBFS bugs, which were a stupid error on my part.

Thanks to olasd for catching the remaining bug, related to stripping the binaries.

I’m so sorry this happened, it slipped through as a result of me having a local docker binary in /usr/local, which I tested completely before uploading a totally different binary. I won’t let it happen again.

It should be fixed now.

However, this comes with a warning. It appears as though systemd (which, for the record, I adore) is allowing lxc-start to unmount /dev/pts and friends, which causes a bunch of damage to the host.

I’ve filed the bug as bug #734813.

So, if you’re a systemd user, please hold off on using docker.io until we resolve this issue in Debian.

Jo Shields: Here Ye, Here Ye

Planet Ubuntu - Thu, 2014-01-09 21:54

Valve Software’s Steam is the number one digital game distribution service, with more than 65 million registered accounts. Steam runs on Windows, Mac OS, and Linux x86/amd64 computers, and provides access to several thousand games, at varying price points – an enormous growth from less than a dozen games for Windows only about a decade ago. Valve’s latest endeavour has been to bring their storefront into the living room, with a three-pronged attack: a new game controller, a programme of licensed x86 “consoles” to plug into the family TV, and an OS to run on the “Steam Machines” to tie it all together.

December saw the first public release of their “SteamOS” – which, as it turns out, is basically just a preconfigured desktop Linux. Specifically, it’s a Debian Wheezy derivative, comprising a subset of 502 source packages from Wheezy; 8 of Valve’s own source packages; and 51 source packages which have been either patched compared to Wheezy, backported from post-Wheezy, or both. For example, the compiler used by default is gcc-4.7 (rather than Wheezy’s 4.6) and the libc version postdates Wheezy too.

Valve’s official instructions and installer release concerned quite a few people who had planned to try SteamOS on an older PC, by mandating a large (500GB) hard disk and a PC with UEFI firmware. Very quickly a number of instructions started appearing from people trying to fix what they felt were real issues – specifically provision for BIOS-based computers, and installation from optical media.

After being assured that redistribution of derivatives of SteamOS was entirely authorized (and, in fact, encouraged) I decided to produce my own variant, calling upon my own experience with debian-installer modification from past and present jobs, as well as the skills and experience of the UK Debian community as needed.

The end result is Ye Olde SteamOSe.

This weekend saw the third release of Ye Olde SteamOSe, a derivative designed to greatly widen the pool of computers capable of running Valve’s OS. And unlike the first two releases, the public response this time has been crazy. Like, totally crazy. Combined score across several subreddits totals about 1700. 7 pages of Google search results. Hundreds of tweets. Mentions in dozens of blogs around the world. Coverage on the Linux Action Show. Unilaterally added to Softpedia’s list of distributions. Almost 2000 views of a video installation walkthrough I posted on YouTube. Crazy.

Sadly the idea to try and track visitors to the page didn’t occur to me until long after the initial rush subsided, but 1000 visitors on Tuesday for a news story which landed on Sunday is still pretty hot in my book.

It’s also interesting to observe the demographics of site visitors – the #1 referrer on Tuesday was Dutch PC site Tweakers.net, and StumbleUpon outranks reddit for referrals. About 66% of visitors interested in installing my Debian Wheezy derivative derivative came using Windows (20% using Linux), which suggests there’s a lot of potential Linux users out amongst the Windows gamer masses.

So what’s the purpose of this self-congratulatory blog post? Just dick-waving? Well, there’s an element of that (I’m only human), but I think it might be nice to alert the audience on Planet Debian/Ubuntu, many of whom are not big gamers, to the “next big thing” in “embedded” Linux – except this time a real GNU/X.org/sysVinit distribution, not some NIH thing like Android.

So now you know!

Stuart Langridge: Pretending to type like a Hollywood hacker in Sublime Text 2

Planet Ubuntu - Thu, 2014-01-09 16:34

Christian Heilmann has just drawn my attention to a neat trick for automating typing into a text editor, from William Bamberg at Mozilla. Basically, when you’re doing a screencast, popping up a screen full of code is disorienting and hard for your users to take in, but if you actually type the stuff live on air then everyone gets to see all your typos and your mic makes it sound like a herd of wildebeest sweeping majestically across your keyboard.

Bamberg’s solution is to have an AppleScript which reads the file of your choice and then sends keypresses to your editor to “type” the file in, and it’s a neat idea. However, that’s Mac-specific so I can’t use it, and it doesn’t (as Chris notes) work in Sublime Text 2 (my editor, and his) because ST2 does autoindenting and so on and that sods you up.

Conveniently, I needed a script to do precisely this for some screencasts I’m about to work on, so I thought: I shall write it as an ST2 plugin. And lo, I have done so. It’s only about 30 lines: in ST2, do Tools > New Plugin, then paste the Python from https://gist.github.com/stuartlangridge/8336771 and save it as TypeFileOut.py in the ST2 User folder (which should be the default).

You then need a way of running it: I added a keybinding for it in Preferences > Key Bindings -- User so that file now looks like

[ { "keys": ["ctrl+shift+."], "command": "type_file_out" } ]

so I can press ctrl-shift-fullstop.

What it actually does is: when you run it, it removes all the text in the current editing tab, waits two seconds, and then types it back in, character by character. The two second wait is to give you a cut point for the screencast, so you enter or load the code you want into ST2, then start your screencast showing slides or whatever, switch to ST2, then press ctrl-shift-. and it’ll type the text back in. When you’re editing your screencast, cut the part between switching to ST2 and the 2 second break.

There’s probably a way of packaging this up so other people can download it with a click, but I don’t think I know how to do that.

Canonical Design Team: New year, new website: the new canonical.com

Planet Ubuntu - Thu, 2014-01-09 14:00

We’ve been talking about it for a while and we are now happy to reveal Canonical’s brand new website.

The brief

We thought that it was more than appropriate that, in the year that Canonical commemorates its 10th anniversary, our website got some love, so that’s exactly what we set out to do.

The homepage of the new canonical.com on various devices

The main goal of this redesign was to create a website that clearly communicates what Canonical is and does: to present our services, describe our role in the creation of Ubuntu, and give users an understanding of the principles behind Canonical as a company.

The journey

We set out to distill the Canonical site into its most essential components. This required a huge amount of editing as the site had grown over time. This was not a straightforward task, but there were a few things that we knew would get us very close to that goal:

  • Clearly define canonical.com’s audiences and make sure the new site’s content was created with them in mind
  • Move the content that dates easily (events, news, etc.) from the site to a searchable repository
  • Move all detailed product and service information to www.ubuntu.com to make it easier to find

We started preparing to move a lot of the content that previously lived on the site a few months ago when we started the Ubuntu Resources project — a place for content such as news, events, press releases, white papers and case studies.

Ubuntu Resources (currently in ‘alpha’) is also our first responsive site, and a lot of the lessons we have been learning from it, code- and design-wise, have been applied to the new canonical.com, like the small screen site navigation and the global Ubuntu sites navigation.

Carla has published a very interesting post on how she used stakeholder interviews to define the website’s key journeys and audiences. This research was instrumental in keeping the content of the site focused and the information architecture as simple as possible.

Before moving onto a digital format, we did a lot of collaborative sketching, churning out ideas on how we could illustrate each page’s message.

Generating ideas: some of our sketches

Even though we were working towards a fairly tight deadline, we went through several content, design and code iterations, with copywriters, designers and developers working closely together and improving as much as possible until we were happy with the results.

Our ever-changing analog status board — sometimes only sticky notes will do!

The visual design borrowed most of the underlying patterns from www.ubuntu.com, such as the grid and font sizes. Ubuntu’s website has been evolving into a more ‘open’ design and the new Canonical website takes that idea even further by removing the main content container and increasing spacing between elements.

We also brought in new patterns, influenced by the design work that is being done on the phone and tablet, like the grid used in small screens, the Ubuntu shape (the squircle) and the folded paper background.

Using the squircle and the folded paper background on the new canonical.com

The result

We’re very happy with the result, and we think it achieves the goals we set out to accomplish. Now that the site is launched though, it’s up to everyone who visits it to let us know how we did: do let us know your thoughts in the comments!

Bryan Quigley: Multiprocess Firefox – An afternoon testing on Ubuntu 12.04

Planet Ubuntu - Thu, 2014-01-09 13:45

What is multiprocess Firefox? They give a much better description here: https://billmccloskey.wordpress.com/2013/12/05/multiprocess-firefox/
But basically it’s the start of having each tab in Firefox isolated from the others and from the Firefox-drawn UI (referred to as chrome).  Right now it only isolates the “chrome” from the webpages, with one process each.

This is my experience using it for one afternoon on Ubuntu 12.04.   I disabled all add-ons because I don’t think it’s ready for my add-on collection…

What doesn’t work
  • AppTabs don’t come back on a restart (expected)
  • Password management doesn’t autofill (you can still access passwords though and copy/paste them). (expected)
  • Flash doesn’t work (click to play doesn’t seem to work either…) (expected)
  • Accepting no third-party cookies doesn’t work (seems to just disable all cookies)
  • Scrolling is a bit jumpy at times
  • Embedded content issues
    • Salesforce widget fails with “Content Encoding Error” because it doesn’t define a mime type?
    • My TinyTinyRSS installation doesn’t work only when loading slashdot pages (they embed ads)
  • Opening a new tab from the new tab page using middle click. (It does work if you do it via right click > open new tab; with middle click it loads in the same window)
  • Zoom works on some pages but not others…
  • Trying to print crashes the page process (expected)
  • It’s one process for all tabs right now, so when one crashes, they all do
  • WebGL isn’t detected at all (expected)
  • Downloads seem to freeze (was downloading a Zentyal 700 MB image..)
  • Can’t attach files.. (this is what ended my testing :/)
What does work

(that seems surprising, most sites just worked as usual)

  • Saving a page
  • HTML5 video works (YouTube), but fullscreening is two steps (one in the window and then hitting F11)

One nice touch is that when you give up and disable it, your previous session from before you started is restored.

I’m going to keep trying it every month or so to see how it progresses (I’m already a Nightly user).

Gerfried Fuchs: Clawfinger

Planet Ubuntu - Thu, 2014-01-09 12:39

It's almost a month since I last blogged something, and one of my new year's resolution is to change that, a bit. Let's see how it goes.

I've been listening a lot to this great band from Sweden again recently, and put all their songs onto my mobile phone. It might sound weird because they have a rather aggressive style, both sound-wise and lyrics-wise, but it helps me get things off my chest and stay relaxed in the rest of my life.

The band I want to present to you was, I noticed, already mentioned twice in other articles of my blog, but this is the proper post about them: I'm talking about Clawfinger. They came up in the nineties during the crossover phase and blended in pretty well, but it's mostly the direct and political statements they carry in their lyrics that let them stand out.

One warning though: the direct language they use might be considered blunt and maybe even offensive by some. The message behind it though should rather get you thinking of your own doings if you consider it to be offensive. Here are the songs:

  • What Are You Afraid Of: Most people would have probably gone with their first hit for the topic, but I think this song transports the point extremely well too: It's just a color and I'm color blind, the only color I know is the color of my mind. There's only one race and that's the human race, and every human being's got the right to feel safe.
  • The Faggot In You: Well, it's like Coyote Too twittered or Andy Singer drew it. Get over it and think about what actually stirs these emotions in you. If you're so sure and you feel secure about yourself and your reality, then why do you need to reject and refuse where other people stand sexually?
  • Life Will Kill You: Let's face the fact, it will. ... and given the choice between your own life and death I suggest that you cherish the time you have left 'cause time waits for noone and we're all growing older. Life for today, not for a future that might never come.

Like always, enjoy! And maybe also think about it a bit. :)

/music | permanent link | Comments: 1 |

Elizabeth Krumbach Joseph: Linux Conf AU 2014 Continues!

Planet Ubuntu - Thu, 2014-01-09 10:46

After all of the OpenStack stuff I discussed in my last post, I presented two more times at linux.conf.au, on Tuesday and Wednesday.

Tuesday morning brought the keynote by Kate Chapman on the Humanitarian OpenStreetMap Team (HOT). I haven’t paid a lot of attention to OpenStreetMap over the years because there are only so many hours in a day, but it was very interesting to learn that the work they’re doing to map developing countries is really making a difference for disaster relief, urban development and more in regions in need. I can’t make time to participate in their program right now, but you should! A copy of her talk is already available on Tuesday’s mirror of talks.

My second talk of the conference came in the form of a Haecksen talk on Tuesday. It was really exciting to have my talk accepted, and it was one I was particularly looking forward to: a talk about myths of public speaking. There were several revolutionary points in my own learning to speak in public, so I tried to summarize them into a 20-minute presentation, slides here. My main points were:

  • Myth: The audience won’t like you
  • Myth: You have to know everything
  • Myth: Good speakers don’t need to practice
  • Myth: Good speakers don’t get nervous
  • Myth: Shy people make poor speakers
  • Myth: Your talk must be completely original
  • Myth: I can’t because <insert any excuse here>

I also really enjoyed the Q&A during this session, so thanks to everyone who attended and participated with questions and helpful responses to the comments from others. In Haecksen I also learned about Robogals and it was great to hear a talk about giving talks by Alice Boxhall, who I saw speak at OSCON last year on automated testing for accessibility. I was in such great company!

On Tuesday night I attended the speakers’ dinner and sat between two spouses, whom I got to talk to about their tag-along status at the conference. Dinner was enjoyable, if a bit slow, and the view from the venue was really nice:

Wednesday morning my final (and primary, not miniconf) talk was first up at 10:40, on Systems Administration in the Open. This is a modified version of my Code Review for Systems Administrators talk where I instead focused on the benefits of projects and organizations open sourcing their operations – or at least making it more available to others in their organization to submit changes to. Working on the OpenStack Infrastructure team continues to be a great experience for me, and it’s funny how I’m feeling about how other projects manage their infrastructure now that I’m so used to how we do it. Submit a ticket to fix something and wait? Can’t I pitch in? I do see more open source projects moving to a more open model, but I’d like to see more. My slides are here.


Thanks to Clark Boylan for taking this photo

And then my talks were done! I could relax and enjoy the rest of the conference.

One of the highlights of the day was a talk on The changing Linux kernel development process by Jonathan Corbet. It was particularly interesting to see a general view of how the ecosystem has changed in relation to folks working on core Linux kernel components vs the mobile-specific changes that support so many new devices today. It was also noteworthy to see that the companies working on the core are largely static, whereas many of the newer commits in new kernels are coming from the mobile-specific vendors.

Karen Sandler did a great talk on the Outreach Program for Women. I’ve heard bits and pieces about this program and have met several participants (one of whom now works with me!), but I hadn’t seen the statistics and successes that Karen shared in this talk. Applications from women to Google Summer of Code went from 1 to 7 in some projects following the launch of the GNOME program, and this was not an outlier. What this really drove home for me was that there are women out there who are interested in open source and participating, but there are still perception barriers preventing many from applying themselves to a project (“it’s not for me”, “I am not good enough”). She also outlined other things the program does:

  • Offer internships for non-students and non-coders
  • Connect applicants with pool of mentors
  • Require a contribution as part of the application
  • Provide small, manageable tasks throughout the internship rather than one big project
  • Require participants to blog regularly about their work and join project planets if possible
  • Sponsored travel when possible to collaborate with project members in person (conference, summit, sprint, etc)

I’m excited to see such a program be so successful, and look forward to seeing the number of women in our communities continue to rise as a result.

And a highlights post wouldn’t be complete without mentioning Marc MERLIN’s Live upgrading many thousands of servers from an ancient RedHat 7.1 to a 10 year newer Debian. It was as crazy as it sounds, and was super interesting to listen to. More about the talk, including detailed slides here (a link to the paper for a more thorough read is also in that directory).

Tomorrow is the last day of the conference and I’m looking forward to seeing talks by several of my colleagues, Clark Boylan on Processing Continuous Integration Log Events, James Blair talking about Zuul, Robert Collins doing a deep dive into Diskimage-builder and Devananda van der Veen on Provisioning Bare Metal with OpenStack.
