
Ubuntu Kernel Team: Kernel Team Meeting Minutes – March 04, 2014

Planet Ubuntu - Tue, 2014-03-04 17:15
Meeting Minutes

IRC Log of the meeting.

Meeting minutes.

Agenda

20140304 Meeting Agenda


ARM Status

nothing new to report this week


Release Metrics and Incoming Bugs

Release metrics and incoming bug data can be reviewed at the following link:

http://people.canonical.com/~kernel/reports/kt-meeting.txt


Milestone Targeted Work Items


  • apw: core-1311-kernel (4 work items), core-1311-cross-compilation (2 work items), core-1311-hwe-plans (1 work item)
  • ogasawara: core-1311-kernel (1 work item)
  • smb: servercloud-1311-openstack-virt (4 work items)


Status: Trusty Development Kernel

The 3.13.0-15.35 Trusty kernel is available in the archive. This is
based on the v3.13.5 upstream stable update. Our unstable branch has
also been rebased to track the latest v3.14-rc5 release.
—–
Important upcoming dates:
Thurs Mar 27 – Final Beta (~3 weeks away)
Thurs Apr 03 – Kernel Freeze (~4 weeks away)


Status: CVE’s

The current CVE status can be reviewed at the following link:

http://people.canonical.com/~kernel/cve/pkg/ALL-linux.html


Status: Stable, Security, and Bugfix Kernel Updates – Saucy/Raring/Quantal/Precise/Lucid

Status for the main kernels, until today (Mar. 04):

  • Lucid – Testing
  • Precise – Testing
  • Quantal – Testing
  • Saucy – Testing

    Current opened tracking bugs details:

  • http://people.canonical.com/~kernel/reports/kernel-sru-workflow.html

    For SRUs, the SRU report is a good source of information:

  • http://people.canonical.com/~kernel/reports/sru-report.html


Open Discussion or Questions? Raise your hand to be recognized

No open discussions.

David Planella: Internationalizing your apps at the Ubuntu App Developer Week

Planet Ubuntu - Tue, 2014-03-04 16:26

As part of the Ubuntu App Developer Week, I just ran a live on-air session on how to internationalize your Ubuntu apps. Some of the participants on the live chat asked me if I could share the slides somewhere online.

So here they are for your viewing pleasure :) If you’ve got any questions on i18n or in Ubuntu app development in general, feel free to ask in the comments or ping me (dpm) on IRC.

The video

The slides

Enjoy!

The post Internationalizing your apps at the Ubuntu App Developer Week appeared first on David Planella.

Sebastian Kügler: Are your Qt builds going OOM?

Planet Ubuntu - Tue, 2014-03-04 12:15

If you, like me, are regularly building Qt, you have probably noticed a decent hunger for memory, especially when linking WebKit. This part of the build can take well over 8GB of RAM, and when it fails, you get to do it over again.

The unfortunate data point is that my laptop only has 4GB, which is enough for most cases (but not this one). Short of buying a new laptop, here's a trick for getting through this part of the build: create a swap file. A swap file increases your virtual memory. This won't make the build fast, but it at least gives Linux a chance to not run out of memory and kill the 'ld' process. Creating a swap file is actually really easy under Linux, you just have to know your toolbox. Here's the quick run-down of steps:

First, create an empty file:

fallocate -l 4096M /home/swapfile

Using the fallocate syscall (which works for newer kernels, but only for btrfs, ext4, ocfs2, and xfs filesystems), this can be done fast. In this example, I have enough space on my home partition, so I decided to put the swapfile there. It’s 4GB in size, which should be plenty of virtual memory to finish even the greediest of builds — your mileage may vary. If you’re not able to use fallocate, you’ll need a bit more patience and dd.
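If you do take the dd route, the equivalent of the fallocate call above looks something like this (a sketch, assuming the same 4GB size and location as above; it zero-fills the file, so expect it to take a while):

dd if=/dev/zero of=/home/swapfile bs=1M count=4096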

As your swap should never be readable by anybody other than root, change its permissions:

chmod 600 /home/swapfile

Next, “format” the swapfile:

mkswap /home/swapfile

Then, add it to your virtual memory pool:

swapon /home/swapfile

You can now check with tools like `free` or `top` (tip: use the much nicer `htop` if you’re into fancy) that your virtual memory actually increased. Once your build is done, and you need your disk-space back, that’s easy as pie:

swapoff /home/swapfile
rm /home/swapfile

If you want to make this permanent (so it’ll survive a reboot), add a line like the following to your fstab:

/home/swapfile none swap defaults 0 0
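Afterwards (or after the next reboot) you can confirm the swap file is active with standard tools, for example:

swapon -s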

This is just a really quick tip, more details on this process can be found in the excellent Arch Wiki.

Daniel Pocock: Ganglia-Nagios-Bridge and JMXetric new releases

Planet Ubuntu - Tue, 2014-03-04 09:34
ganglia-nagios-bridge v1.1.0

A new release of ganglia-nagios-bridge is now available. The package has also been uploaded to Debian for those who use it, and it will appear in backports soon. The main improvement in this version is slightly better support for generating an UNKNOWN alert in Nagios when fresh metrics fail to appear in Ganglia.

There is also a new wiki comparing the various Ganglia and Nagios integration solutions.

JMXetric v1.0.6 and gmetric4j v1.0.6

The latest JMXetric and gmetric4j releases improve support for JBoss, specifically, working around some issues that occur when a profiling agent accesses the logger or MBean platform during JVM startup.

The page about JBoss, Wildfly and Tomcat integration details for Ganglia/JMXetric performance monitoring has also been updated to cover these improvements.

The process of having JMXetric and gmetric4j packaged is now under way and it has been verified that they just work with the tomcat7 packages on Debian and Ubuntu (config sample included).

The Fridge: Ubuntu Weekly Newsletter Issue 357

Planet Ubuntu - Tue, 2014-03-04 03:47

Welcome to the Ubuntu Weekly Newsletter. This is issue #357 for the week February 24 – March 2, 2014, and the full version is available here.

In this issue we cover:

This issue of the Ubuntu Weekly Newsletter is brought to you by:

  • Elizabeth Krumbach Joseph
  • Paul White
  • Esther Schindler
  • Nathan Dyer
  • Jim Connett
  • And many others

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!

Except where otherwise noted, content in this issue is licensed under a Creative Commons Attribution ShareAlike 3.0 License.

Nicholas Skaggs: A simple look at testing within ubuntu

Planet Ubuntu - Mon, 2014-03-03 22:35
Since just before the last LTS, quality has been a buzzword within the ubuntu community. We've come a long way since precise and I wanted to provide some help and perspective on what ubuntu's process for quality looks like this cycle. In simple terms. Or as reddit would say, "explain to me like I'm 5".

I'll try and define terms as we go. First let me define CI, which is perhaps the buzzword of this cycle, lest I lose all of you! CI stands for continuous integration, and it means we are testing ubuntu. All the time. Non-stop. Every change we make, we test. The goal behind this idea is to find and fix problems, before well, they become problems on your device!

CI Dashboard
The CI dashboard then is a way to visually see the results of this testing. It acts as a representation of the health of ubuntu as a distribution. At least once a day runs are executed, apps and images are tested and benchmarked, and the results are populated on ci.ubuntu.com. This is perhaps the most visible part of the CI (our continuous testing efforts) that is happening within ubuntu. But let's step back a minute and look at how the overall CI process works within ubuntu.

CI Process
App developers hack on a bit of code, fixing bugs or adding new features to the codebase. Once the code is ready, a merge proposal1 is created by the developer and feedback is sought. If the code passes the peer review and the application's tests, it will then become part of the image after a journey through the CI train.

For the community core apps, the code is merged after peer review, and then undergoes a similar journey to the store where it will become part of the image as well. Provided of course it meets further review criteria by myself and Alan (we'll just call him the gatekeeper).
Though menacing, Alan assures me he doesn't bite
Lest we forget, upstream2 uploads3 are done as well. We can hope some form of testing was done on them before we received them. Nevertheless, tests are run on these as well, and if they pass successfully, the new packages will enter the archive4 and become part of the image.

Generating Images
Now it's time to generate some images. For the desktop, a snapshot of what's in the ubuntu archive is taken each day, built, and then subjected to a series of installation tests. If the tests pass, the image is released for general testing, called Image (or ISO) testing. An image is tested and declared stable as part of a milestone (or testing event) and can become the next version of ubuntu!
Adopted images are healthy images!
On the ubuntu phone side of things, all the new uploads are gathered and evaluated for risk. If something is a large change, it might be prudent to not land it with other large changes so we can tell what broke should the image not work properly. Once everything is ready, a new image is created and is released for testing.  The OTA updates (over-the-air; system updates) on your ubuntu phone come from this process!

How can you help?
Still with me, I hope? As you can see there are many things happening each day in regards to quality, and lots of places where you can create a positive change for the health of the distro! In my next few posts, I'll cover each of the places you can plug in to help ubuntu be healthy every day!

1. A merge proposal is a means of changing an applications code via peer review.
2. By upstream, I mean the communities and people who make things we use inside of ubuntu, but are not directly a part of it. Something like the browser (firefox) and kernel are good examples.
3. This can happen via a general sync at the beginning of the cycle from debian. This sync copies part of the debian archive into the ubuntu archive, which in effect causes applications to be updated. Applications are also updated whenever a core ubuntu developer or a MOTU uploads a new version to the archive. 
4. In case you are wondering, "the archive" is the big repository where all of your updates and new applications come from!

James Page: Which Open vSwitch?

Planet Ubuntu - Mon, 2014-03-03 18:09

Since Ubuntu 12.04, we’ve shipped a number of different Open vSwitch versions supporting various different kernels in various different ways; I thought it was about time that the options were summarized to enable users to make the right choice for their deployment requirements.

Open vSwitch for Ubuntu 14.04 LTS

Ubuntu 14.04 LTS will be the first Ubuntu release to ship with in-tree kernel support for Open vSwitch with GRE and VXLAN overlay networking – all provided by the 3.13 Linux kernel. GRE and VXLAN are two of the tunnelling protocols used by OpenStack Networking (Neutron) to provide logical separation between tenants within an OpenStack Cloud.

This is great news from an end-user perspective: the requirement to use the openvswitch-datapath-dkms package disappears, as everything should just *work* with the default Open vSwitch module. This allows us to have much more integrated testing of Open vSwitch as part of every kernel update that we will release for the 3.13 kernel going forward.

You’ll still need the userspace tooling to operate Open vSwitch; for Ubuntu 14.04 this will be the 2.0.1 release of Open vSwitch.
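As a quick illustration of that tooling (a sketch, not from the original post; it assumes the openvswitch-switch package is installed and 192.0.2.10 is a placeholder for the remote tunnel endpoint), creating a bridge with a VXLAN port needs nothing beyond the stock tools and the in-tree 3.13 module:

sudo apt-get install openvswitch-switch
sudo ovs-vsctl add-br br-tun
sudo ovs-vsctl add-port br-tun vxlan0 -- set interface vxlan0 type=vxlan options:remote_ip=192.0.2.10
sudo ovs-vsctl show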

Open vSwitch for Ubuntu 12.04 LTS

As we did for the Raring 3.8 hardware enablement kernel, an openvswitch-lts-saucy package is working its way through the SRU process to support the Saucy 3.11 hardware enablement kernel; if you are using this kernel, you’ll be able to continue to use the full feature set of Open vSwitch by installing this new package:

sudo apt-get install openvswitch-datapath-lts-saucy-dkms

Note that if you are using Open vSwitch on Ubuntu 12.04 with the Ubuntu Cloud Archive for OpenStack Havana, you will already have access to this newer kernel module through the normal package name (openvswitch-datapath-dkms).
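Whichever route you take, a quick sanity check (generic commands, not specific to these packages) is to confirm that a datapath module is actually loaded:

lsmod | grep -i openvswitch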

DKMS package names

Ubuntu 12.04/Linux 3.2: openvswitch-datapath-dkms (1.4.6)
Ubuntu 12.04/Linux 3.5: openvswitch-datapath-dkms (1.4.6)
Ubuntu 12.04/Linux 3.8: openvswitch-datapath-lts-raring-dkms (1.9.0)
Ubuntu 12.04/Linux 3.11: openvswitch-datapath-lts-saucy-dkms (1.10.2)
Ubuntu 12.04/Linux 3.13: N/A
Ubuntu 14.04/Linux 3.13: N/A

Hope that makes things clearer…


Lubuntu Blog: [Poll] Community wallpaper contest

Planet Ubuntu - Mon, 2014-03-03 14:54
Ladies and gentlemen, cast your votes! The poll that will decide our additional wallpapers for Trusty Tahr is now up and running. To cast your vote, head here and choose your five favourite wallpapers. The poll will continue until March 11th, and the top five contributions will be included in Lubuntu 14.04 and packaged into the Ubuntu repositories. Spread the word, and good luck to all!

Marcin Juszkiewicz: How to get Zoom slider on Microsoft keyboard recognized by X11

Planet Ubuntu - Mon, 2014-03-03 11:16

If you are using a Microsoft Natural Ergonomic Keyboard 4000 as I do, you may have wondered how to make the zoom slider in the middle useful. Thanks to Hans de Goede, there is a solution.

One new file is needed, plus changes to an existing one. First we need to instruct udev to remap some keys for us. Create a /lib/udev/hwdb.d/50-msoffice-keyboard-xorg.hwdb file with this content:

# classic msoffice keyboard
keyboard:usb:v045Ep0048d*dc*dsc*dp*ic*isc*ip*in01*
 KEYBOARD_KEY_0c0184=documents # KEY_WORDPROCESSOR to KEY_DOCUMENTS
 KEYBOARD_KEY_0c0186=finance # KEY_SPREADSHEET to KEY_FINANCE
 KEYBOARD_KEY_0c018e=chat # KEY_CALENDAR to KEY_CHAT
 KEYBOARD_KEY_0c01a3=nextsong # KEY_NEXT to KEY_NEXTSONG
 KEYBOARD_KEY_0c01a4=previoussong # KEY_PREVIOUS to KEY_PREVIOUSSONG
 KEYBOARD_KEY_0c01ab=search # KEY_SPELLCHECK to KEY_SEARCH

# Microsoft Natural Ergonomic Keyboard 4000
keyboard:usb:v045Ep00DB*
 KEYBOARD_KEY_0c01ab=search # KEY_SPELLCHECK to KEY_SEARCH
 KEYBOARD_KEY_0c022d=scrollup # KEY_ZOOMIN to KEY_SCROLLUP
 KEYBOARD_KEY_0c022e=scrolldown # KEY_ZOOMOUT to KEY_SCROLLDOWN

In Fedora rawhide I also needed to edit the 60-keyboard.hwdb file (same directory) to disable some definitions:

# Microsoft Natural Ergonomic Keyboard 4000
#keyboard:usb:v045Ep00DB*
# KEYBOARD_KEY_c022d=zoomin
# KEYBOARD_KEY_c022e=zoomout

Now the hwdb needs to be updated:

sudo udevadm hwdb --update
sudo udevadm control --reload

And the only thing left is replugging the keyboard (or a system reboot). As a bonus you get an XF86Search button instead of the non-working Spell one (F10). Those who use the Microsoft Office Keyboard (the old one with a scroller on the left side) will get all keys working as well, but they also need a 3.14 kernel to get all the recent fixes.
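If you want to verify the remapping, one quick check (assuming an X session) is to watch which keysyms arrive while you press the remapped keys:

xev | grep keysym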

And why is all this needed at all? Simple: Xorg is still sitting in the 80s when it comes to handling keyboards, and ignores all keycodes with values that do not fit in 8 bits. I hope that Wayland does not follow that path and just takes whatever the underlying system tells it about input devices.

All rights reserved © Marcin Juszkiewicz
How to get Zoom slider on Microsoft keyboard recognized by X11 was originally posted on Marcin Juszkiewicz website


Daniel Pocock: Automatically building Java projects

Planet Ubuntu - Mon, 2014-03-03 09:27

Another of the GSoC project areas I have offered to mentor involves automatically and recursively building Java projects.

Why is this important?

Recently, I decided to start using travis-ci to automatically build some of my Java projects.

One of the projects, JMXetric, depends on another, gmetric4j. When travis-ci builds JMXetric, it needs to look in Maven repositories to find gmetric4j (and also remotetea / oncrpc.jar and junit). The alternative to fetching things from repositories is to stop using Maven and ship the binary JARs of dependencies in the JMXetric repository itself, which is not desirable for various reasons.

Therefore, I submitted gmetric4j into the Maven Central repository by using the Sonatype Nexus service. One discovery I made in this process disturbed me: Sonatype allows me to sign and upload my binary JAR; they don't build it from source themselves, so users of the JAR have no way to know that it really is built from the source I publish in Github.

In fact, as anybody who works with Java knows, there is no shortage of Java projects out there that have a mix of binary dependencies in their source tree, sometimes without any easy way to find the dependency sources. HermesJMS is one popular example that is crippled by the inclusion of some JARs that are binary-only.

No silver bullet, but there is hope

Although there are now tens of thousands of JAR libraries out there in repositories and projects that depend on them (and their transitive dependencies and build tools and such), there is some hope:

  • Many JARs provide a -source JAR including source. This doesn't include all of the build artifacts of a true source package or source tarball; it just provides a subset of the source for use with a debugger.
  • Many Maven pom.xml files now include metadata about where the source is located - example

With that in mind, I'm hopeful that a system could be developed to scrape some of these data sources to find some source code and properly build some subset of the thousands of JARs available in the Maven Central Repository.
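As a small taste of what such scraping could look like (a hypothetical sketch, assuming a pom.xml has already been downloaded to the current directory), the source repository location can often be read straight out of the POM's scm element:

xmllint --xpath '//*[local-name()="scm"]/*[local-name()="connection"]/text()' pom.xml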

But why bother if you can't completely succeed?

One recent post on maven-users suggested that because nobody would ever be able to build 100% of JARs from source, the project is already doomed to failure.

Personally, I feel it is quite the opposite: by failing to build 100% of JARs from source, the project will help to pinpoint those hierarchies of JARs that are not really free software and increase pressure on their publishers to adhere to the standards that people reasonably expect for source distribution, or provide a red flag to help dependent projects stop using them.

On top of that, the confirmation of true free-software status for many other JARs will make it safer for people to rely on them, package them and distribute them in various ways.

Dumping a gazillion new Java packages into Debian

Just to clear up one point: being able to automatically build JARs from source (or a chain of dependencies involving hundreds of them) doesn't mean they will be automatically uploaded to the official Debian archive by a Debian Developer (DD).

Having this view of the source will make it easier for a DD to look at a set of JARs and decide if they are suitable for packaging, but there would still be some manual oversight involved. The tool would simply take out some of the tedious manual steps (where possible) and give the DD the ability to spot traps (JARs without real source hidden in the dependency hierarchy) much more quickly.

How would it work?

The project - whether completed under GSoC or by other means - would probably be broken down into a few discrete components. Furthermore, it would utilize some existing tools where possible. All of this makes it easier for a student to focus on and complete some subset of the work even if the whole thing is quite daunting.

Here are some of the ideas for the architecture of the solution and the different components to be used or developed:

  • The data set:
    • A database schema, tracking each binary artifact, the source repository location (e.g. Git or SVN URL), source tarball location, source JAR availability and dependency relationships (including versions)
    • A local Maven repository - only containing JARs that we have built locally from some source
    • A set of Git repositories to mirror the upstream repositories of projects that need to be tweaked.
  • Tool set:
    • A web interface or command line tool would be necessary for a user to kick-start the process by specifying some artifact they want to build
    • There would need to be a script that tries to work out all the possible ways to get the source for an artifact (e.g. by looking for a Git URL in the pom.xml from the Maven Central repository). This script would be able to do other things, like identifying the existence of -source JARs which may or may not be sufficient to build the artifact.
    • A script would need to be created for testing the artifact's source tarball or repository for binary artifacts (e.g. copies of junit.jar). Whenever such things were found, the script would mirror the repository into our local git and create a branch with binaries removed. A record of the binaries would be added to the local database so we can symlink them from a trusted source when building.
    • A script would need to be created for testing whether the project includes a recognised build system (such as build.xml for ant or pom.xml for Maven). For projects without such artifacts, the script would need to generate a template build.xml and store it in a local clone of the repository
    • Jenkins would be used to build the JARs. A script would need to be created to build the Jenkins config file for the project, pointing Jenkins to the upstream Git or the local Git repository depending upon the situation.
    • If the project is a Maven or Ivy project, then there are likely to be attempts to find dependencies during the build process. Running under Jenkins, these tools would be configured in such a way that they only look to the local repository and use dependencies that we have already built. If the build fails during dependency resolution, this is where the recursive process would kick off: the attempt to find each missing dependency would be logged to a queue, and the requests in this queue would each be handled by restarting the whole process again at the beginning. Each of these requests would also be logged to the database.
    • Sometimes, the system would be unable to proceed (e.g. because there are no clues about source locations in a given pom.xml). A user interface would need to be constructed to show a list of artifacts with exceptions and allow the user to manually locate the source and supply the URLs. The system would then continue iterating with this new data.
  • Reporting: we already know that for some JARs, we will simply fail to make any progress and we are not going to lose any sleep over that. The important thing is to provide accurate reports to help people make decisions that may involve working around those JARs in future:
    • For what percentage of projects could we determine the license from the pom.xml? Reports on licensing: can we spot any license mismatch in the dependency hierarchy?
    • Tools: which build tools in the chain of dependencies don't provide any source code? Are they optional tools (such as code quality analysis) that we can skip in the build process (e.g. by producing a mock version of the tool or plugin)?
    • Which non-free/sourceless JARs are most widely depended upon by other projects in the free Java eco-system? Can we make a list of the top 10 or 20?
    • Abandonware: can we detect JARs that haven't been updated for an extended period of time, with no activity in the source repository? For these projects in particular, it is a really good idea to make backups of the source repositories (or mirrors of their web sites and source download directories) in case they disappear altogether.

Michael Hall: Ubuntu App Developer Week starts Today!

Planet Ubuntu - Mon, 2014-03-03 09:00

Starting at 1400 UTC today, and continuing all week long, we will be hosting a series of online classes covering many aspects of Ubuntu application development. We have experts both from Canonical and our always amazing community who will be discussing the Ubuntu SDK, QML and HTML5 development, as well as the new Click packaging and app store.

You can find the full schedule here: http://summit.ubuntu.com/appdevweek-1403/

We’re using a new format for this year’s app developer week.  As you can tell from the link above, we’re using the Summit website.  It will work much like the virtual UDS, where each session will have a page containing an embedded YouTube video that will stream the presenter’s hangout, an embedded IRC chat window that will log you into the correct channel, and an Etherpad document where the presenter can post code examples, notes, or any other text.

Use the chatroom like you would an Ubuntu On Air session: start your questions with “QUESTION:” and wait for the presenter to get to it. After the session is over, the recorded video will be available on that page for you to replay later. If you register yourself as attending on the website (requires a Launchpad profile), you can mark yourself as attending those sessions you are interested in, and Summit can then give you a personalized schedule as well as an ical feed you can subscribe to in your calendar.

If you want to use the embedded Etherpad, make sure you’re a member of https://launchpad.net/~ubuntu-etherpad

That’s it!  Enjoy the session, ask good questions, help others when you can, and happy hacking.

Stuart Langridge: Writing a simple desktop widget for Ubuntu

Planet Ubuntu - Mon, 2014-03-03 00:47

I needed a way to display the contents of an HTML file on my desktop, in such a way that it looks like it’s part of the wallpaper. Fortunately, most of the answer was in How can I make my own custom desktop widgets? on Ask Ubuntu, along with Create a Gtk Window insensitive to Show Desktop and Won’t show in Launcher. Combining that with the excellent Python GI API Reference which contains everything and which I can never find when I go looking for it, I came up with a simple little Python app. I have it monitoring the HTML file which it displays for changes; when that file changes, I refresh the widget.

from gi.repository import WebKit, Gtk, Gdk, Gio
import signal, os

class MainWin(Gtk.Window):
    def __init__(self):
        Gtk.Window.__init__(self, skip_pager_hint=True, skip_taskbar_hint=True)
        self.set_wmclass("sildesktopwidget", "sildesktopwidget")
        self.set_type_hint(Gdk.WindowTypeHint.DOCK)
        self.set_size_request(600, 400)
        self.set_keep_below(True)
        # transparency
        screen = self.get_screen()
        rgba = screen.get_rgba_visual()
        self.set_visual(rgba)
        self.override_background_color(Gtk.StateFlags.NORMAL, Gdk.RGBA(0, 0, 0, 0))
        self.view = WebKit.WebView()
        self.view.set_transparent(True)
        self.view.override_background_color(Gtk.StateFlags.NORMAL, Gdk.RGBA(0, 0, 0, 0))
        self.view.props.settings.props.enable_default_context_menu = False
        self.view.load_uri("file://path/to/html/file")
        box = Gtk.Box()
        self.add(box)
        box.pack_start(self.view, True, True, 0)
        self.set_decorated(False)
        self.connect("destroy", lambda q: Gtk.main_quit())
        self.show_all()
        self.move(100, 100)

def file_changed(monitor, file, unknown, event):
    mainwin.view.reload()

if __name__ == '__main__':
    # the HTML file needs to have background colour rgba(0,0,0,0)
    gio_file = Gio.File.new_for_path("/path/to/html/file")
    monitor = gio_file.monitor_file(Gio.FileMonitorFlags.NONE, None)
    monitor.connect("changed", file_changed)
    mainwin = MainWin()
    signal.signal(signal.SIGINT, signal.SIG_DFL)  # make ^c work
    Gtk.main()

Lots of little tricks in there: the widget acts as a widget (that is: it stays glued to the desktop, and doesn’t vanish when you Show Desktop) because of the Gdk.WindowTypeHint.DOCK, skip_pager_hint=True, skip_taskbar_hint=True, and set_keep_below(True) parts; it’s transparent because the HTML file sets its background colour to rgba(0,0,0,0) with CSS and then we use override_background_color to make that actually be transparent; the window has no decorations because of set_decorated(False). Then I just add it to Startup Applications and we’re done.
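To try the widget out (hypothetical file name; on Ubuntu you also need the GObject Introspection data for the libraries it imports, e.g. the gir1.2-webkit-3.0 package), save it and run:

python desktop-widget.py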

Ubuntu Classroom: Ubuntu Documentation Day wrap-up

Planet Ubuntu - Sun, 2014-03-02 23:20

Interested in getting involved with documentation for Ubuntu, but not sure where to begin?

This weekend the Ubuntu Documentation and Ubuntu Manual teams got together to offer 5.5 hours of documentation-related sessions in Ubuntu Classroom.

In case you missed out, the logs from the sessions are now available:

Thanks also to classroom helper Jose Antonio Rey, who was available to help instructors throughout this event.


Paul Tagliamonte: Crazy experimental idea: Take sundays off

Planet Ubuntu - Sun, 2014-03-02 21:16

I’ve slowly come to the conclusion that I should likely not work as much as I do. Looking back over my GitHub graphs, and feeling my own internal pressure to do more work, I am getting to the point of insanity. I can’t remember the last time I took a whole day off.

As a result, I’m going to make a new rule for myself, and put it in a semi-public place to help with peer-pressure enforcing this.

I’m not going to use my computer (except in very rare and urgent circumstances) on Sundays, from here on out. The exact rules are still up in the air (Is the phone OK? Tablet?)

Let’s see how this goes! I’ll start next week (or just stop doing things today)

Daniel Pocock: Google Summer of Code opportunities in JavaScript, HTML5, jQuery and WebRTC

Planet Ubuntu - Sun, 2014-03-02 12:07

WebRTC is one of the highlights of HTML5. The latest stable releases of Google Chrome (including the free Chromium browser) and Firefox include support for this technology and although the specifications are still being finalized, this is the time when developers can start preparing the first generation of web-apps that will define the future of this technology.

The JSCommunicator project aims to make it easy to integrate WebRTC into existing web applications.

It provides a high-level API on top of the popular JsSIP SIP stack for JavaScript. The JSCommunicator API provides basic dialing and call control features, allowing you to focus on the application level rather than the SIP level. The configuration file suggests many possible ways it can be deployed and customized without any coding at all.

The popular DruCall module for the Drupal CMS provides a practical demonstration of how JSCommunicator can be embedded in other products, multiplying the power of WebRTC for millions of other web developers, bloggers, online shops and other sites. This part of the DruCall source code shows how JSCommunicator integrates with PHP: we extract the parameters from PHP in the JavaScript and convert them to the JSCommunicator configuration object.

Practical deployments of WebRTC today

The student selected for this GSoC project will focus on two practical, real-world deployments of JSCommunicator:

It is particularly important to think about ways to make this technology useful for the Debian Developer community in the pursuit of Debian's work.

Not just for Debian

A communication product is not much use if there is nobody to talk to.

The optimal outcome of this project may involve helping two or three additional free software communities to build portals similar to https://rtc.debian.org so that they can interact with Debian and each other using Federated SIP. As Metcalfe's Law explains, this outcome would be a win-win situation for everybody.

Mentors needed too

The more diverse the mentoring community, the more positive outcomes we can achieve with this project.

If you would like to be part of a mentoring team for this project, please email me and subscribe to the Debian SOC co-ordination email list. There is no strict requirement for all mentors in the team to be full Debian Developers and emerging technology like this clearly needs people with strengths in a range of different areas.

Get started now

For general ideas about getting selected for Google Summer of Code, please see the comments at the bottom of my earlier blog post about Ganglia projects.

For this project in WebRTC in particular, please:

  • Review the project brief on the Debian wiki
  • Discuss your ideas on the JsSIP Google group
  • Send an email to introduce yourself on the Debian SOC co-ordination email list and give a link to your post on the JsSIP list
  • You must complete a coding test. Please see the open bug list for JSCommunicator, complete one of these tasks and submit a pull request on Github. Please send an email to the JsSIP Google group to discuss your pull request.
  • Explain what features you would create during the summer
  • Explain which other tools or libraries you would like to use
  • Give examples of previous work you have done with HTML and JavaScript
  • Explain how your participation will benefit Debian, free software and the wider world. Give some practical examples.
  • If you are interested in other JavaScript and jQuery opportunities for GSoC 2014, please also read more about the web components in the Ganglia project - if you apply for both Debian and Ganglia, you only have to complete one coding test as part of your application.

Aurélien Gâteau: Dependency diagrams on api.kde.org!

Planet Ubuntu - Sat, 2014-03-01 21:40

Today I am happy to announce that after a bit of work we finally got dependency diagrams for KDE Frameworks 5 automatically generated on api.kde.org. Pick a framework, then click on the "Dependencies" link in the sidebar. For example, a tier 2 framework like KAuth, a tier 3 one like KIconThemes, or, if you're looking for something a bit crazier, here is KIO.

Getting this to work was a bit more involved than I originally thought because the first part of the diagram generation is done by running cmake --graphviz on the source code. This means it must be run on a system with all the necessary dependencies installed. We ended up having the build servers run this part and rsync'ing the result to the server responsible for running Doxygen.
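In other words, the first stage amounts to something like this per framework (a sketch with hypothetical paths; the real automation is more involved):

cmake --graphviz=kauth.dot /path/to/kauth/source
dot -Tpng kauth.dot -o kauth-dependencies.png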

If you are interested in how it works, the code is available in the kapidox repository. Feel free to ping me if you need help with it.

I am still debating whether it makes sense to host the "mother of all diagrams" on api.kde.org or not. My gut feeling tells me that while it's impressive, it is of limited value, and this is only going to get worse as we add new frameworks (the linked diagram is already outdated). It would probably be more interesting to produce a readable diagram of all tier 1 and tier 2 frameworks instead. I experimented a bit with this, but I am not done yet: I need to find a way to get more vertical space between cluster graphs. Right now the result is not useful because the arrows are all stuck together, even without the Qt libs.

Ubuntu GNOME: Marketing & Communications Report – Feb 14

Planet Ubuntu - Sat, 2014-03-01 18:11

February 2014 Report

Every month, the Ubuntu GNOME Marketing & Communications Team creates a monthly report and shares it with the community. We decided, starting from 2014, to publish our reports on our website as well.

To view the report of January 2014, please click here.

Our Social Media Channels are still growing and we're gaining more subscribers, as always.

We look forward to providing the best information on our Social Media Channels, as always: team updates, system updates, etc. We're doing our best to keep everyone in the loop.

If you have any feedback, please don’t hesitate to contact us.

If you feel you can help with our Social Media Channels, kindly have a read of our basic requirements to become a moderator.

Thank you for choosing and using Ubuntu GNOME, and thanks for joining our Social Media Channels. We ask you to spread the word about Ubuntu GNOME, and we highly appreciate it if you invite your friends, circles and/or followers to our channels.

Enjoy Ubuntu GNOME!

Stuart Langridge: I bought a new computer

Planet Ubuntu - Sat, 2014-03-01 13:49

In the most recent episode of Bad Voltage I reviewed my new computer, but we diverged mainly into a discussion of why anyone should buy laptops at all, in which I was right and everyone else in the world was wrong. Anyway, I’ve been promising for a while that I’d talk about my lovely new machine once I had it, and I now have it. So, a review.

A few months ago, my laptop, a Lenovo ultrabook running Ubuntu, decided to corrupt its disk when resuming from suspend. Now, admittedly, I’d suspended it and then let it run out of battery, and I’d understand it if it’d lost my session. But, no, it wouldn’t boot at all. A few panicked hours later, and with quite a bit of help from the #ubuntu-uk IRC channel, I got it back and I hadn’t lost anything. However, tragically, this now meant that my formerly innocent SSD1 had learned about the existence of disk errors. If you’ve worked in an office you’ll know that you mustn’t use whiteboard cleaner, because once you’ve taught a whiteboard that special cleaning fluid exists, it sulks and refuses to be cleaned without it for ever more. Well, disks are just the same: once they’ve learned that they’re allowed to error and you’ll just fix it, they feel able — actually, they feel obliged — to throw more errors just to see how much you’ll put up with. So I started shopping around for a new laptop.

Over Christmas, my mum, who is lovely but is about as good with technology as I am with blindfolded rock climbing, said: why are you buying a laptop? Why not buy a desktop computer?

This is a better question than you might initially think.

What benefit is there to a laptop? Well, there are I think two things. The first is that it has a zillion peripherals built in. Speakers, mouse, webcam, keyboard, it’s all part of the one package. And the second is that you can use it without it being plugged in, in coffee shops and conferences and on the sofa and the like.2 Other than that, every single thing about a desktop computer is better. It’s more upgradeable. It’s cheaper. It’s prettier, if you try and buy prettiness rather than a ghastly beige case from the 1980s. It’s got more USB ports. And you can make one which works how you want it to rather than how your laptop manufacturer wants it to. I want loads of RAM and a gorgeous case and I couldn’t give a damn about graphics, as long as it can drive my 27 inch monitor. So that’s what I built.

There are quite a lot of custom PC builders. There’s no way on God’s green earth that I’m going to buy a bunch of components and fit them together myself. For a start, I don’t give a crap about which type of RAM fits in which motherboard — I want someone else to decide that for me. I don’t want to have to touch a radiator and wear an anti-static strip and lose all those tiny screws every nine seconds. So I shopped around a bit and ended up with PC Specialist, a custom PC builder here in the UK. I got an Inwin 904 case, which is stone cold gorgeous — tempered glass, brushed steel. It’s a bloody work of art is what it is. And all the RAM I can stuff in my pockets, and decent Logitech speakers and webcam and wireless mouse and keyboard and an Asus 27 inch monitor and HDMI and just everything I wanted, and it was pretty well priced… and I can stick with it almost forever. Remember Trigger’s broom in Only Fools and Horses? It’d had 16 new heads and 14 new handles. This machine can be poked and have bits swapped out and keep on trucking long after any five laptops have been consigned to the laptop graveyard in your basement or the slow death of being given to your parents. So well done PC Specialist.

My requirements look roughly like this:

  1. machine will run Ubuntu, not Windows
  2. it’s not for gaming (I use my PS3 for that, and don’t use it much), so the integrated graphics is fine; I do not need a separate graphics card.
  3. I want my box to be attractive. This takes precedence over almost every other requirement, and is obviously massively subjective.
  4. I want a huge amount of RAM. Two years ago I bought the ultrabook laptop I’m typing this on, which has 4GB of RAM, and I thought that was loads. It’s now struggling a bit. I do not want to have to buy more memory a year from now, and Ubuntu is pretty heavy on RAM use, especially since I have about a zillion Chrome tabs open. So, 16GB for me. This will be sufficient for whatever Ubuntu requires for a couple of years at least, and will let me spin up VMs to my heart’s content. I did think about 32GB, but it’s just extra memory I don’t need, and because I have a desktop machine I can easily upgrade later if I need that, and not having it saves me a hundred notes or so now.
  5. I’d like a decent (that is: better than 1920 HD) monitor. However, 4K monitors are three grand each, which is way too much. So, 2560x1440 if I can. Note that the integrated graphics I pick has to support this.
  6. Other things needed: speakers (I’d like three-piece, but they don’t have to be great), keyboard and mouse (again, I’m not picky here, but wireless would be nice), wifi (doesn’t have to be good wifi, and I’ll be wired most of the time, but it’s a very handy fallback).
  7. Total budget: ~£1500
  8. adequate cooling. I don’t know anything about cooling.
  9. not overclocked.

The actual spec of the machine looks like this, which is a long boring list but can’t be helped:

  • Case: InWIN 904 (so gorgeous a case. wow.)
  • Processor (CPU): Intel® Core™ i5 Quad Core Processor i5-4670 (3.4GHz) 6MB Cache (didn’t get the K model because I don’t plan to overclock it. Haswell, because that’s the newest. Only an i5, though; the i7 was quite a bit extra and I decided I could do without it)
  • Motherboard: ASUS® Z87-A: ATX, USB3.0, SATA6GB/S, SLi, XFIRE
  • Memory (RAM): 16GB KINGSTON DUAL-DDR3 1600MHz (2 x 8GB) (memory! yes! never going to run out again, ever)
  • Graphics Card: INTEGRATED GRAPHICS ACCELERATOR (GPU)
  • Hard Disk: 180GB INTEL® 530 SERIES SSD, SATA 6 Gb/s (up to 540MB/s read, 490MB/s write) (don’t need a lot of storage (I have a home server for that) but I do want SSD. 180GB is enough; currently have 120GB which is a tiny bit tight)
  • 1st DVD/BLU-RAY Drive: 24x DUAL LAYER DVD WRITER ±R/±RW/RAM
  • Memory Card Reader: NONE (irritatingly, this can’t go in my chosen case; they’re all to fit 3.5” spaces. I may buy a blanking plate and put one in the 5.25” slot.)
  • Power Supply: CORSAIR 350W VS SERIES™ VS-350 POWER SUPPLY
  • Processor Cooling: Super Quiet 22dBA Triple Copper Heatpipe Intel CPU Cooler (as per recommendation from the PC Specialist forums)
  • Extra Case Fans & Fan Controller: NONE (because I am not a lunatic gamer)
  • Sound Card: ONBOARD 6 CHANNEL (5.1) HIGH DEF AUDIO
  • Wireless/Wired Networking: WIRELESS 802.11N 150Mbps PCI-E CARD
  • Monitor: ASUS 27" Professional SERIES PB278Q (twenty seven inches of glory3)
  • Keyboard & Mouse: LOGITECH® MK520 WIRELESS KEYBOARD & MOUSE COMBO (always rated Logitech stuff, personally)
  • Speakers: LOGITECH LS21 2.1 SILVER/BLACK SPEAKER SYSTEM (ditto)
  • Webcam: Logitech® HD Webcam C525 - 720p HD Video, 8 Megapixel Photos (ditto)
  • Warranty: 3 Year Silver Warranty (1 Year Collect & Return, 1 Year Parts, 3 Year Labour)
  • Price: £1,365.00 (have to skip lunch and save the money. Possibly several times a day.)

I’m super-pleased with it. It looks gorgeous, which is precisely what I was hoping for. And the screen’s massive.

I’d like to live in a world where it’s possible to buy off-the-shelf gorgeous Ubuntu computers. At the moment, we’re not quite in that world. It is possible to buy pretty desktop PCs; in the UK, the place to go for that is Utopia Computers. Unfortunately, everyone who sells attractive machines rather than beige boxes is either (a) Apple or (b) primarily catering to an audience of gamers. So the machines you buy have super-high-end graphics cards and nutty cooling systems. I don’t want one of those; I’m not a gamer. I want raw power, but Intel graphics is enough for me, and it’s better supported by Ubuntu. So, annoyingly, that meant having to spec my own machine. I didn’t want to do that, because I have no idea which RAM to buy or which motherboard. PC Specialist did a pretty reasonable job of checking that sort of thing — their online wizard thing pops up saying “that card doesn’t fit with that motherboard”. Part of the issue here, I think, is that most people don’t consider it a good use of money to buy an attractive computer, and those that do are already in the Apple camp. Linux users are even worse at this — spending money on something because it looks nice is actively discouraged, which is thunderously wrong but I can’t stop people from doing it. Ubuntu is attempting to project a different vibe — that form is just as important as function — but because it’s still at least partially most popular among Linux people, it’s not making much headway. I surely can’t be the only person alive who appreciates the Apple aesthetic but wants Ubuntu machines, who thinks that we the Ubuntu community are allowed beautiful machines and should not be inured to the idea that you buy cheap-looking plastic stuff because that’s all you can get that runs Ubuntu. But it feels like I am, some days. I’d like to say that someone4 should set up a business selling pretty Ubuntu computers, but I fear that the subset of users who

  1. want a desktop rather than a laptop
  2. want Ubuntu
  3. value attractiveness over cheapness

is small enough that there’s not enough of a business model there.

Anyway, I shan’t rant. I have a lovely computer and I am happy. Hooray! I hope your machine is beautiful too. If not… maybe think about that, next time.

  1. even if it is some weird Lenovo-specific thing
  2. If you’re interested in further discussion of why having a laptop is important, and the idea that lots of people have one unmoving “main” laptop and one small “conference” laptop, and that the “main” laptop could be a desktop instead, then see the Bad Voltage episode above
  3. an essential part of a good evening
  4. yes, I’m aware that I could be someone

Michael Hall: UbBloPoMo ends

Planet Ubuntu - Sat, 2014-03-01 04:49

I almost forgot to post today, which would have been sad because it’s the last day of my UbBloPoMo plan. But I still have 30 minutes before March 1st (my time), so I’ll type quickly to make this one count.

This will be the 21st post using the UbBloPoMo tag, and I’m quite pleased with the way it turned out. Not only has it gotten me out of a rut of not blogging, it also boosted my site traffic rather significantly and, I hope, given Planet Ubuntu readers a little more regular and irregular content.

I did miss one day, but the following post proved to be the most popular of the month.  It got a lot of people talking and even landed me a guest spot on the Linux Unplugged show, which was quite a bit of fun and I would love to do again (subtle hints there).

I don’t plan on continuing this blog-a-day through into March, but I do plan on keeping this blog more active than it was most of last year.  I hope you’ve all enjoyed reading these posts as much as I’ve enjoyed writing them, and even more I hope I’ve inspired some of you to blog more on your own websites.

Posted with 10 minutes to spare.

Sam Hewitt: Chicken Madras

Planet Ubuntu - Sat, 2014-03-01 00:00

Among my favourite dishes are curries, and to do them justice takes a little more preparation and a well-stocked pantry, but they're always worth it.

    Ingredients
  • 1 tablespoon coriander seeds
  • 1 tablespoon cumin seeds
  • 1+ dried red chili peppers
  • 1/2 tablespoon fennel seeds
  • 3 cloves garlic
  • 1 three-inch piece of ginger, peeled
  • 1-2 kg chicken legs (or thighs), skinned
  • 2-3 tablespoons Ghee (clarified butter) or coconut oil*
  • 1 onion, thinly sliced
  • 350mL (1 can) coconut milk
  • 2 teaspoons salt
  • 2 tablespoons ground turmeric
  • 8 green cardamom pods
  • 2 cinnamon sticks
  • 1 tablespoon garam masala
  • 2 tablespoons honey
  • 3 tablespoons tamarind paste
  • Water

Tamarind is a very sour fruit paste and is essential to making this dish a madras.

*You can substitute regular butter or a vegetable oil if you have neither of these ingredients.

Also, if you don't have whole spices (like the coriander, cumin and fennel) you can use the pre-ground stuff from your favourite store. Having said that, if you're into spice-rich dishes like this one, it's best to stock up on your whole spices.

The chilies are where the heat comes from in this dish. You can use as many as your tolerance allows, or leave them out entirely –like most things in cooking it's up to you really.
  1. In a dry non-stick skillet and in separate batches, roast the coriander, cumin and fennel seeds –this means leaving the spices in the pan on high heat until they start to become fragrant– then remove each and set aside.
  2. Grind the coriander, red chilies and cumin in a spice grinder.
  3. In a small blender or food processor, make a paste of the garlic, ginger, and just-ground cumin & coriander (add a little water if need be to get it going). Set aside.
  4. Dissolve the tamarind in 1/2 a cup or so of water with the garam masala and honey. Set aside.
  5. Preheat a large saucepan over medium-high heat.
  6. Add the butter (or oil) and brown the chicken on all sides. When finished, remove the chicken and set aside.
  7. Add a little water and the sliced onions to the pot. Scrape all the lovely browned bits off the bottom of the pot while sauteing the onion. Continue to saute for ~5 minutes.
  8. Add your ginger-garlic-spice paste and further saute (while constantly stirring to avoid burning) for another few minutes.
  9. Add the coconut milk and another couple equal parts water. Bring to a boil, then reduce heat to low-ish.
  10. Add the cinnamon sticks, fennel, turmeric and cardamom pods. Season with the salt.
  11. Return the chicken and simmer for 60-90 minutes, adding a little water every so often to replace lost liquid.
  12. When the chicken is tender, remove it and strain the remaining sauce through a fine-mesh strainer, pressing out any liquid from the remnants –if you're into the whole rustic thing, feel free to leave in the whole spices, picking them out as you eat.
  13. Return the sauce to the pot and add the tamarind paste mixture. Return the chicken also. Allow the flavours to blend for another 10-15 minutes.
  14. Serve with rice and/or naan bread. Enjoy.

This version of a madras is done with chicken, but of course it can be done with lamb, beef, or even potato, although with the latter, you'd just add the potatoes raw, peeled and whole in the simmering step.
