news aggregator

Edubuntu: Edubuntu 12.04.5 Release Announcement

Planet Ubuntu - Thu, 2014-08-07 22:25
Edubuntu Long-Term Support

Edubuntu 12.04.5 LTS is the fifth Long Term Support (LTS) point release of Edubuntu, part of Edubuntu 12.04's 5-year support cycle.

Edubuntu's Fifth LTS Point Release

The Edubuntu team is proud to announce the release of Edubuntu 12.04.5. This is the last of five LTS point releases for this LTS lifecycle. The point release includes all the bug fixes and improvements that have been applied to Edubuntu 12.04 LTS since its release, along with updated hardware support and installer fixes. If you have an Edubuntu 12.04 LTS system and have applied all the available updates, your system is already on 12.04.5 LTS and there is no need to re-install. For new installations, installing from the updated media is recommended, since it is installable on more systems than before and requires far fewer updates than installing from the original 12.04 LTS media. This release ships with a backported kernel and X stack, which lets users make use of more recently released hardware. Current users of Edubuntu 12.04 won't be automatically switched to this backported stack, but you can install the packages manually.
  • Information on where to download the Edubuntu 12.04.5 LTS media is available from the Downloads page.
  • We do not ship free Edubuntu discs at this time; however, there are third-party distributors listed on the Edubuntu Marketplace who ship discs at reasonable prices.
Although Edubuntu 10.04 systems will offer an upgrade to 12.04.5, it is not an officially supported upgrade path. Testing, however, indicated that this usually works if you're prepared to make some minor adjustments afterwards. This is the final point release of the Edubuntu 12.04 LTS series; the next long-term support release, Edubuntu 14.04 LTS, is already available. More information is available on the release schedule page on the Ubuntu wiki. The release notes are available from the Ubuntu wiki. Thanks for your support and interest in Edubuntu!

Ubuntu Podcast from the UK LoCo: S07E19 – The One Where No One’s Ready

Planet Ubuntu - Thu, 2014-08-07 22:15

Tony Whitmore, Laura Cowen, Alan Pope, and Mark Johnson are all together in Studio L for Season Seven, Episode Nineteen of the Ubuntu Podcast!


In this week’s show:-

We’ll be back next week, when we’ll be discussing whether it is the Year of the Linux Desktop (via the Chromebook), and we’ll go through your feedback.

Please send your comments and suggestions to:
Join us on IRC in #uupc on Freenode
Leave a voicemail via phone: +44 (0) 203 298 1600, sip: and skype: ubuntuukpodcast
Follow us on Twitter
Find our Facebook Fan Page
Follow us on Google+

Julian Andres Klode: Configuring an OpenWRT Router as a repeater for a FRITZ!Box with working Multicast

Planet Ubuntu - Wed, 2014-08-06 23:01

For some time now, those crappy Fritz!Box devices have not supported WDS, offering instead a proprietary solution created by AVM. Now what happens if you have devices in another room that need/want wired access (like TVs or Playstations), or if you want to extend the range of your network? Buying another Fritz!Box is not very cost-efficient, so what I did was buy a cheap TP-Link TL-WR841N (available for about 18 euros) and install OpenWRT on it. Here's how I configured it to act as a WiFi bridge.

Basic overview: You configure OpenWRT in station mode (that is, as a WiFi client) and use relayd to relay between the WiFi network and your local network. You also need igmpproxy to proxy multicast packets between those networks; without it, UPnP and other multicast-based things won't work.
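Neither daemon is part of the default OpenWRT image, so they have to be installed and enabled first. A minimal provisioning sketch (run on the router itself; the guard only lets the script degrade gracefully when opkg is absent):

```shell
# Sketch: install and enable relayd and igmpproxy on an OpenWRT router.
# The command -v guard skips the opkg steps on non-OpenWRT systems.
if command -v opkg >/dev/null 2>&1; then
    opkg update
    opkg install relayd igmpproxy
    /etc/init.d/relayd enable
    /etc/init.d/igmpproxy enable
    status="installed"
else
    status="skipped (not an OpenWRT system)"
fi
echo "provisioning: $status"
```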

I did this on the recent Barrier Breaker RC2. It should work on older versions as well, but I cannot promise it (I did not get igmpproxy to work in Attitude Adjustment, but that was probably my fault).

Note: I don’t know if it works with IPv6, I only use IPv4.

You might want to re-start (or start) services after the steps, or reboot the router afterwards.

Configuring WiFi connection to the FRITZ!Box

Add to: /etc/config/network

config interface 'wwan'
    option proto 'dhcp'

(you can use any other name you want instead of wwan, and a static IP. This will be your uplink to the Fritz!Box)

Replace wifi-iface in: /etc/config/wireless:

config wifi-iface
    option device 'radio0'
    option mode 'sta'
    option ssid 'FRITZ!Box 7270'
    option encryption 'psk2'
    option key 'PASSWORD'
    option network 'wwan'

(adjust values as needed for your network)

Setting up the pseudo-bridge

First, add wwan to the list of networks in the lan zone in the firewall. Then add a forward rule for the lan network (not sure if needed). Afterwards, configure a new stabridge network and disable the built-in DHCP server.

Diff for /etc/config/firewall

@@ -10,2 +10,3 @@ config zone
 	list network 'lan'
+	list network 'wwan'
 	option input 'ACCEPT'
@@ -28,2 +29,7 @@ config forwarding
 
+# Not sure if actually needed
+config forwarding
+	option src 'lan'
+	option dest 'lan'
+
 config rule

Add to /etc/config/network

config interface 'stabridge'
    option proto 'relay'
    option network 'lan wwan'
    option ipaddr ''

(Replace with the IP address your OpenWRT router was given by the Fritz!Box on wlan0)

Also make sure to ignore dhcp on the lan interface, as the DHCP server of the FRITZ!Box will be used:

Diff for /etc/config/dhcp

@@ -24,2 +24,3 @@ config dhcp 'lan'
 	option ra 'server'
+	option ignore '1'

Proxying multicast packets

To proxy multicast packets, we need to install igmpproxy and configure it:

Add to: /etc/config/firewall

# Configuration for igmpproxy
config rule
    option src lan
    option proto igmp
    option target ACCEPT

config rule
    option src lan
    option proto udp
    option dest lan
    option target ACCEPT

(The OpenWRT wiki now gives a different second rule, but this is the one I currently use)

Replace /etc/config/igmpproxy with:

config igmpproxy
    option quickleave 1

config phyint
    option network wwan
    option direction upstream
    list altnet

config phyint
    option network lan
    option direction downstream
    list altnet

(Assuming Fritz!Box uses the network)

Don’t forget to enable the igmpproxy script:

# /etc/init.d/igmpproxy enable

Optional: Repeat the WiFi signal

If you want to repeat your WiFi signal, all you need to do is add a second wifi-iface to your /etc/config/wireless.

config wifi-iface
    option device 'radio0'
    option mode 'ap'
    option network 'lan'
    option encryption 'psk2+tkip+ccmp'
    option key 'PASSWORD'
    option ssid 'YourForwardingSSID'

Known Issues

If I am connected via WiFi to the OpenWRT AP and then switch to the FRITZ!Box AP, I cannot connect to the OpenWRT router for some time.

The igmpproxy tool writes to the log about changing routes.

Future Work

I’ll try to get the FRITZ!Box replaced by something that runs OpenWRT as well, and then use OpenWRT’s WDS support for repeating, because the FRITZ!Box 7270v2 is largely crap: loading a page in its web frontend takes 5 seconds (idle) to 20 seconds (with one download running), and its WiFi speed is limited to about 20 Mbit/s in WiFi-n mode (2.4 GHz or 5 GHz, it does not matter; 40 MHz channel). It seems the 7270 has a really slow CPU.

Filed under: OpenWRT

Stephan Adig: Python and JavaScript?

Planet Ubuntu - Wed, 2014-08-06 20:38

Is it possible to combine the world’s most amazing prototyping language (aka Python) with JavaScript?

Yes, it is. Welcome to PyV8!


So, first we need some libraries and modules:

  1. Boost with Python Support

    • On Ubuntu/Debian you just do apt-get install libboost-python-dev, for Fedora/RHEL use your package manager.
    • On MAC OSX:
      • When you are on Homebrew do this:

        brew install boost --with-python

  2. PyV8 Module

    (You need Subversion installed for this)

    mkdir pyv8
    cd pyv8
    svn co
    cd trunk

    When you are on Mac OS X you need to add this first:

    export CXXFLAGS='-std=c++11 -stdlib=libc++ -mmacosx-version-min=10.8'
    export LDFLAGS=-lc++

    Now just do this:

    python ./setup.py install

    And wait !

    (Some words of advice: when you are installing Boost from your OS, make sure you are using the Python version that Boost was compiled with)

  3. Luck ;)

    Meaning: if this doesn’t work, you’ll have to ask Google.

Now, how does it work?

Easy, easy, my friend.

The question is, why should we use JavaScript inside a Python tool?

Well, while doing some crazy stuff with our ElasticSearch cluster, I wrote a small Python script to do some nifty parsing and correlation. In less than 30 minutes I had a command-line tool that read in a YAML file with ES queries written in YAML format, and an automated way to query more than one ES cluster.

So, let’s say you have a YAML like this:

Example YAML Query File:

title:
  name: "Example YAML Query File"
esq:
  hosts:
    es_cluster_1:
      fqdn: "localhost"
      port: 9200
    es_cluster_2:
      fqdn: "localhost"
      port: 10200
    es_cluster_3:
      fqdn: "localhost"
      port: 11200
  indices:
    - index:
        id: "all"
        name: "_all"
        all: true
    - index:
        id: "events_for_three_days"
        name: "[events-]YYYY-MM-DD"
        type: "failover"
        days_from_today: 3
    - index:
        id: "events_from_to"
        name: "[events-]YYYY-MM-DD"
        type: "failover"
        interval:
          from: "2014-08-01"
          to: "2014-08-04"
  query:
    on_index:
      all:
        filtered:
          filter:
            term:
              code: "INFO"
      events_for_three_days:
        filtered:
          filter:
            term:
              code: "ERROR"
      events_from_to:
        filtered:
          filter:
            term:
              code: "DEBUG"

No, this is not really what we are doing :) But I think you get the idea.

Now, in this example, we have 3 different ElasticSearch clusters to search in; all three have different data, but all share the same event format. So, my idea was to generate reports of the requested data, either for a single ES cluster or correlated over all three. I wanted to have this functionality inside the YAML file, so everybody who writes such a YAML file can also add some processing code. Well, the result set of an ES search query is a JSON blob, and it gets converted to a Python dictionary.

Huh… so, why not use Python code inside the YAML and eval it inside your Python script?

Well, if you have ever written front/backend web apps, you know it’s pretty difficult to write frontend Python scripts that run inside your browser. So, JavaScript to the rescue. And everybody knows how easy it is to deal with JSON object structures inside JavaScript. So, why don’t we use this knowledge and invite users who are not familiar with Python to participate?

Now, think about an idea like this:

Example YAML Query File:

title:
  name: "Example YAML Query File"
esq:
  hosts:
    es_cluster_1:
      fqdn: "localhost"
      port: 9200
    es_cluster_2:
      fqdn: "localhost"
      port: 10200
    es_cluster_3:
      fqdn: "localhost"
      port: 11200
  indices:
    - index:
        id: "all"
        name: "_all"
        all: true
    - index:
        id: "events_for_three_days"
        name: "[events-]YYYY-MM-DD"
        type: "failover"
        days_from_today: 3
    - index:
        id: "events_from_to"
        name: "[events-]YYYY-MM-DD"
        type: "failover"
        interval:
          from: "2014-08-01"
          to: "2014-08-04"
  query:
    on_index:
      all:
        filtered:
          filter:
            term:
              code: "INFO"
      events_for_three_days:
        filtered:
          filter:
            term:
              code: "ERROR"
      events_from_to:
        filtered:
          filter:
            term:
              code: "DEBUG"
  processing:
    for:
      report1: |
        function find_in_collection(collection, search_entry) {
          for (entry in collection) {
            if (search_entry['msg'] == collection[entry]['msg']) {
              return collection[entry];
            }
          }
          return null;
        }

        function correlate_cluster_1_and_cluster_2(collections) {
          collection_cluster_1 = collections["cluster_1"]["hits"]["hits"];
          collection_cluster_2 = collections["cluster_2"]["hits"]["hits"];
          similar_entries = [];
          for (entry in collection_cluster_1) {
            similar_entry = null;
            similar_entry = find_in_collection(collection_cluster_2, collection_cluster_1[entry]);
            if (similar_entry != null) {
              similar_entries.push(similar_entry);
            }
          }
          result = {'similar_entries': similar_entries};
          return result;
        }

        var result = correlate_cluster_1_and_cluster_2(collections);
        // this will return the data to the python method
        result
  output:
    reports:
      report1: |

(This is not my actual code, I just scribbled it down, so don’t lynch me if this fails)

So, actually, I am passing a Python dict with all the query result sets from the ES clusters (defined at the top of the YAML file) to a PyV8 context object, can access those collections inside my JavaScript, and return a JavaScript hash/object. In the end, after the JavaScript processing, there could be a Jinja template inside the YAML file, and we can pass the JavaScript results into this template to print a nice report. There are many things you can do with this.
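Since the embedded JavaScript only walks plain dictionaries and lists, the correlation it performs can be sketched in pure Python as well (the sample hit data below is hypothetical, standing in for real ES result sets):

```python
# Pure-Python sketch of the correlation done in the embedded JavaScript:
# collect entries from cluster_2 whose 'msg' also appears in cluster_1.

def find_in_collection(collection, entry):
    """Return the first hit in `collection` with the same 'msg', else None."""
    for candidate in collection:
        if candidate["msg"] == entry["msg"]:
            return candidate
    return None

def correlate(collections):
    hits_1 = collections["cluster_1"]["hits"]["hits"]
    hits_2 = collections["cluster_2"]["hits"]["hits"]
    similar_entries = []
    for entry in hits_1:
        match = find_in_collection(hits_2, entry)
        if match is not None:
            similar_entries.append(match)
    return {"similar_entries": similar_entries}

# Hypothetical data shaped like (simplified) ES responses.
collections = {
    "cluster_1": {"hits": {"hits": [{"msg": "disk full"}, {"msg": "link down"}]}},
    "cluster_2": {"hits": {"hits": [{"msg": "disk full"}]}},
}

result = correlate(collections)
print(result)  # → {'similar_entries': [{'msg': 'disk full'}]}
```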

So, let’s see it in python code:

Example Python Code (shortened):

# -*- coding: utf-8 -*-

# This will be a short form of this,
# so don't expect that this code will do the reading and validation
# of the YAML file

from elasticsearch import Elasticsearch
import PyV8
from jinja2 import Template


class JSCollections(PyV8.JSClass):

    def __init__(self, *args, **kwargs):
        super(JSCollections, self).__init__()
        self.collections = {}
        if 'collections' in kwargs:
            self.collections = kwargs['collections']

    def write(self, val):
        print(val)


if __name__ == '__main__':
    es_cluster_1 = Elasticsearch([{"host": "localhost", "port": 9200}])
    es_cluster_2 = Elasticsearch([{"host": "localhost", "port": 10200}])

    collections = {}
    collections['cluster_1'] = es_cluster_1.search(
        index="_all",
        body={"query": {"filtered": {"filter": {"term": {"code": "DEBUG"}}}}},
        size=100)
    collections['cluster_2'] = es_cluster_2.search(
        index="_all",
        body={"query": {"filtered": {"filter": {"term": {"code": "DEBUG"}}}}},
        size=100)

    js_ctx = PyV8.JSContext(JSCollections(collections=collections))
    js_ctx.enter()

    # here comes the javascript code
    process_result = js_ctx.eval("""
    function find_in_collection(collection, search_entry) {
      for (entry in collection) {
        if (search_entry['msg'] == collection[entry]['msg']) {
          return collection[entry];
        }
      }
      return null;
    }

    function correlate_cluster_1_and_cluster_2(collections) {
      collection_cluster_1 = collections["cluster_1"]["hits"]["hits"];
      collection_cluster_2 = collections["cluster_2"]["hits"]["hits"];
      similar_entries = [];
      for (entry in collection_cluster_1) {
        similar_entry = null;
        similar_entry = find_in_collection(collection_cluster_2, collection_cluster_1[entry]);
        if (similar_entry != null) {
          similar_entries.push(similar_entry);
        }
      }
      result = {'similar_entries': similar_entries};
      return result;
    }

    var result = correlate_cluster_1_and_cluster_2(collections);
    // this will return the data to the python method
    result
    """)

    # back to python
    print("RAW Process Result: {}".format(process_result))

    # create a jinja2 template and print it with the results from javascript processing
    template = Template("""
    """)
    print(template.render(process_result))

Again, I just wrote this down; it's not the actual code, so I don't know if it really works.

But still, this is pretty simple.

You can even use JavaScript Events, or JS debuggers, or create your own Server Side Browsers. You can find those examples in the demos directory of the PyV8 Source Tree.

So, this was all a 30-minute proof of concept. Last night I refactored the code, and this morning I thought, well, let’s write a real library for this. So there may be some code on GitHub by the weekend. I’ll let you know.

Oh, before I forget: the idea of writing all this in a YAML file came from working with Juniper’s JunOS PyEZ library, which works in a similar way. But they use the YAML file as a description for autogenerated Python classes. Very nifty.

Costales: Playing with MAME: Arcade machines on Ubuntu

Planet Ubuntu - Wed, 2014-08-06 15:30
In my childhood, having a computer at home was a genuine privilege. Only the father of one friend from our street gang had a desktop machine, with a fluorescent green screen and no hard disk, made up for by two 5 1/4 floppy drives, and, if I remember correctly, an 8086 processor.

I made do with my Spectrum, a computer I bought with the greatest excitement in the world, and I'm ashamed to confess it only ever served for playing games that cost some 1,200 pesetas and had little in common with the arcade originals. Also, of course, for trying out the heap of demos that Micromanía magazine included on handy cassettes.

With that explained, you'll understand how the arcade machines in the game halls impressed me.

Tron captured the essence of an arcade hall.

Extraordinary graphics for that era, loads of colors, animations, a joystick and magnificent buttons (I still mean to buy a set like that for the PC, or at least get a mini arcade cabinet), sounds that even synthesized speech extraordinarily well... but... but that wasn't all... There was something more in those video games...

Crammed together in the half-light, like us around the best player when he reached the final boss.

As the years went by, video games stopped appealing to me. Even though the hardware kept improving and games no longer had technical limitations, they became... how to put it? Boring? Ever more works of art, with intros that rival a movie, long, convoluted storylines... but they lack what I found in the arcades: the addiction.
Maybe the rationing of paying 25 pesetas per game caused the craving for another dose the next day, but those games let that tomorrow take you a little (just a little) further. Each day you discovered a new end boss, and after reaching him over several games, you worked out the 'trick' of where to position yourself to kill him...

Now we have the great Steam games for Ubuntu, but the ones that still draw me in are the old arcades; I still remember turning the steering wheel of Super Gran Prix to the rhythm of the accelerator, juggling my way through Metal Slug, crashing in OutRun, kicking gnomes in Golden Axe while greedy for potions, or elbowing a whole M.A. in Double Dragon...

And on top of that, these games are free... yes, you heard right: you download a ROM (which is the game) and away you go.

Want to try? Open a terminal and copy and paste these commands:
sudo apt-get install mame -y
Close MAME.
cd ~/.mame && mame -cc
gedit ~/.mame/mame.ini
Save the file and close the editor.
mkdir -p ~/.mame/{nvram,memcard,roms,inp,comments,sta,snap,diff}
nautilus ~/.mame/roms
Copy the ROMs you download here, in .zip format.
And be careful: the ROM has to be built for the version of MAME you have. This is because MAME, besides being an emulator, aims to be a historical project that documents arcade machines faithfully; on top of the new games and clones added in each version, many other games are replaced with more "correct" versions, even if they already worked fine before. So if a game tells you it is missing files, you are using a ROM from a different MAME version ;)
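If you'd rather not touch the real ~/.mame while experimenting, the directory layout from the steps above can be rehearsed in a throwaway location first (the temporary path is illustrative; the directory names are the ones listed earlier):

```shell
# Rehearse the MAME directory layout in a temporary directory
# instead of the real ~/.mame.
MAMEDIR=$(mktemp -d)
for d in nvram memcard roms inp comments sta snap diff; do
    mkdir -p "$MAMEDIR/$d"
done
ls "$MAMEDIR"
```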

Once the ROMs are copied into the ~/.mame/roms directory, we launch the emulator again from the terminal with:
mame
And choose the game.

And if you can't be bothered to find and install MAME and its ROMs, there are distros like Advance that automate everything and even come with hundreds of games. Boot from CD and play :P
... well... I'm off, time for another face-off with Mr. Bison... after all these years... sniff!

Shall we begin?

Sources: UPUbuntu & Emulatronia.

Mythbuntu: Mythbuntu 14.04.1 Released

Planet Ubuntu - Wed, 2014-08-06 14:30
Mythbuntu 14.04.1 has been released. This is a point release on our 14.04 LTS release. If you are already on 14.04, you can get these same updates via the normal update process.

The 14.04.x series of releases is the Mythbuntu team's second LTS and is supported until shortly after the 16.04 release.

You can get the Mythbuntu ISO from our downloads page.

Highlights
  • MythTV 0.27
  • This is our second LTS release (the first being 12.04). See this page for more info.
  • Bug fixes from 14.04 release
Underlying system
  • Underlying Ubuntu updates are found here
  • Recent snapshot of the MythTV 0.27 release is included (see 0.27 Release Notes)
  • Mythbuntu theme fixes
We appreciate all comments and would love to hear what you think. Please send comments to our mailing list, post on the forums (with a tag indicating that this is from 14.04 or trusty), or join #ubuntu-mythtv on Freenode. As always, if you encounter any issues with anything in this release, please file a bug using the Ubuntu bug tool (ubuntu-bug PACKAGENAME), which automatically collects logs and other important system information, or, if that is not possible, open a ticket directly on Launchpad.
Known issues
  • If you are upgrading and want to use the HTTP Live Streaming you need to create a Streaming storage group

Mythbuntu: Mythbuntu 14.04 Released! (Better late than never edition)

Planet Ubuntu - Wed, 2014-08-06 14:27
After some last-minute critical fixes and ISO respins by the release team (thanks again Infinity, we owe you and the rest of the release team a beer), the Mythbuntu team is proud to announce we have removed our socks (see relevant post) and released Mythbuntu 14.04 LTS. This is the Mythbuntu team's second LTS release and is supported until shortly after the 16.04 release.

With this release, we are providing mirroring on sponsored mirrors and torrents. It is very important to note that this release is only compatible with MythTV 0.27 systems. The MythTV component of previous Mythbuntu releases can be upgraded to a compatible MythTV version by using the Mythbuntu Repos. For a more detailed explanation, see here.

You can get the Mythbuntu 14.04 ISO from our downloads page.

Highlights
  • MythTV 0.27 (2:0.27.0+fixes.20140324.8ee257c-0ubuntu3)
  • This is our second LTS release (the first being 12.04). See this page for more info.
Underlying system
  • Underlying Ubuntu updates are found here
  • Recent snapshot of the MythTV 0.27 release is included (see 0.27 Release Notes)
  • Mythbuntu theme fixes
We appreciate all comments and would love to hear what you think. Please send comments to our mailing list, post on the forums (with a tag indicating that this is from 14.04 or trusty), or join #ubuntu-mythtv on Freenode. As always, if you encounter any issues with anything in this release, please file a bug using the Ubuntu bug tool (ubuntu-bug PACKAGENAME), which automatically collects logs and other important system information, or, if that is not possible, open a ticket directly on Launchpad.
Known issues
  • Upgraders should hold off until our first point release (14.04.1), coming this summer. (See bug #1307546)
  • Don't select VNC during install. It can be activated later. (Bug #1309752)
  • If you are upgrading and want to use the HTTP Live Streaming you need to create a Streaming storage group

Michael Hall: Web apps vs. Native apps in Ubuntu

Planet Ubuntu - Wed, 2014-08-06 09:00

A couple of days ago internet luminaries Stuart Langridge and Bryan Lunduke had a long twitter conversation about webapps and modern desktops, specifically around whether or not web apps were better than, worse than, or equal to native apps.  It was a fun thread to read, but I think there was still a lack of clarity about what exactly makes them different in the first place.

Webapps are basically a collection of markup files (HTML) glued together with a dynamic scripting language (JavaScript), executed in a specialized container (a browser) that you download from a remote source (a web server). Whereas native apps, in Ubuntu, are a collection of modeling files (QML) glued together with a dynamic scripting language (JavaScript), executed in a specialized container (qmlscene) that you download from a remote source (the App Store). The only real difference is from where and how often the code for the app is downloaded.

The biggest obstacle that webapps have faced in reaching parity with so called “native” apps is the fact that they have historically been cut off from the integration options offered by a platform. They are often little more than glorified bookmarks in a chrome-less browser window.  Some efforts, like local storage and desktop notifications, have tried to bridge this gap, but they’ve been limited by a need for consensus and having to support only the lowest common set of functionality.

With Unity, several years ago, we pioneered a deeper integration with our webapps-api, which not only gave webapps their own launcher and icon in the window switcher, but also exposed new APIs to let them interact with the desktop shell in the same way as native apps. You could even control audio players in webapps from the sound indicator controls when that webapp wasn’t focused.

With the new Ubuntu SDK and webbrowser-app we’re expanding this even further. For example, the Facebook webapp on the Ubuntu phone actually comes with a fair amount of QML that lets you share content to it from other apps, just like you can with local HTML5 or QML apps. You will also soon be getting notifications for the Facebook app, even when it’s not running. All of this is only an extension of the remote code that is being loaded from Facebook’s own server.

Given all of this, I have to agree with Stuart (which doesn’t happen often) that webapps should be treated like native apps, and increasingly will be on modern computing platforms, because the distinction between them is either being chipped away or becoming irrelevant. I don’t think they will replace traditional apps, but I think that in the next few years the difference is no longer going to matter, and a platform that can’t handle that is going to be relegated to history.

Hajime MIZUNO: [Event]Open Source Conference 2014 Kansai@Kyoto

Planet Ubuntu - Wed, 2014-08-06 06:51

On August 1-2, 2014, Open Source Conference 2014 Kansai@Kyoto was held in Kyoto.

This event is the largest OSS community gathering in Japan. We participate as exhibitors and speakers every year, and of course this year was no exception.

Hiroshi Chonan was in charge of the seminar. He talked about trusty and Utopic.

The highlights of the event were as follows:

The NetBSD team displayed a Luna workstation from OMRON. Luna (named in opposition to Sun) is a very old Japanese UNIX workstation. Nevertheless, the newest Twitter client "Mikutter" runs on this workstation on NetBSD!

Mikutter is a simple, powerful and moeful Twitter client. It is very popular among young geeks. (We call them "Teokure" in Japan.)

This is Takayoshi Okano. He is one of the greatest FLOSS translators in Japan, and an OpenStreetMap mapper too.

In this way, the Japanese FLOSS community is very active and happy!

Javier L.: UbunConLa 2014, next week! (August 14th-16th)

Planet Ubuntu - Wed, 2014-08-06 00:36

This is just a kind reminder about the UbuConLa happening next week (August 14th-16th) in Cartagena, Colombia. With only two previous editions, the UbuConLa is quickly bringing together most of the Ubuntu Latin American teams. This year there will be attendees from at least the following countries: Colombia, Brazil, Peru, Venezuela, Uruguay, Argentina, Paraguay, Mexico, Spain, India, Panama and Ecuador, pretty awesome =)!

With so many attendees, I’m sure the UbuConLa will not only increase awareness of Ubuntu and libre software in the region, but will also be an ideal forum for exchanging experiences between local teams and discussing their relationships with local governments and other institutions.

I look forward to talking with you!


Benjamin Kerensa: Dogfooding: Flame on Nightly

Planet Ubuntu - Tue, 2014-08-05 20:20

Just about two weeks ago, I got a Flame and have decided to use it as my primary phone and put away my Nexus 5. I’m running Firefox OS Nightly on it and so far have not run into any bugs so critical that I have needed to go back to Android.

I have however found some bugs and have some thoughts on things that need improvement to make the Firefox OS experience even better.



One thing that has really irked me while using Firefox OS 2.1 is that the keyboard always shows capital letters, even if you don’t have caps lock on and are typing lowercase letters. This is not the experience I have grown used to from years of Android, and iOS before that. When typing my password into web apps, it is confusing not to see a visual cue on the keyboard telling me what case I am inputting.

For whatever reason this was an intended feature, and I honestly think this UX decision needs to be rethought, because it just doesn’t feel natural.


Call Volume

On Firefox OS 2.1, at least in my experience, the max volume is totally inadequate: indoors, it sounds like the person on the other end of a call is whispering or talking quietly, and making or receiving a call outdoors or in a noisy environment is impossible. I filed a bug on this, so hopefully it will be sorted.


Mail App

I like that the mail app is fast and nimble but the lack of having threaded e-mail conversations just leaves me wanting more. It would also be nice to be able to have a smart inbox like the GMail app on Android has since that is also my mail provider.


Social Media Apps

The Facebook and Twitter apps both work well, and luckily the Facebook web app has avoided many of the UI and feature changes Facebook has landed in its Android and iOS apps, so the experience is still good. One thing I do miss, though, is notifications: I got them on Android, whereas currently the Facebook and Twitter apps on Firefox OS do not check for new messages and send notifications to your phone.

Additionally, it would be really nice to have a standalone Google+ packaged app in the marketplace, since the current one I can add to the home screen seems a bit hit-or-miss and has a weird grey bar above the app which keeps it from using the full screen resolution.


Overall Impression

I’m still using my Flame, so none of the issues I have pointed out make the phone unusable, and I know that I am running an unreleased nightly version of Firefox OS, so stability issues are expected on Nightly. That being said, I’m really impressed by how polished the UI is compared to previous versions. Each time I get an OTA, I see more and more polish and improvements coming to certified apps.

I am really happy that the Flame has dual SIM, since I am traveling to Europe in a few weeks and can buy a SIM for a few euros if my T-Mobile international roaming for some reason fails me.

I also know others who are not involved in the Mozilla community who are using the Flame as their daily driver in North America, so it’s very encouraging to see people buying the Flame and dogfooding it on their own simply because they want a platform that puts the Open Web first while also being Open Source.

Ubuntu Server blog: Server team meeting minutes: 2014-08-05

Planet Ubuntu - Tue, 2014-08-05 17:05
Meeting Actions
Utopic Development
  • DebianImportFreeze on the 7th and FeatureFreeze on the 21st
  • coreycb agreed to take on bug 1347567
  • Everyone reminded to keep BP up to date
Server & Cloud Bugs (caribou)
  • no updates
Weekly Updates & Questions for the QA Team (psivaa)
  • psivaa reported that utopic installation jobs are broken at the moment due to a parted bug and that the Foundations team is working on a fix.
Weekly Updates & Questions for the Kernel Team (smb, sforshee)
  • smb reported that several bugs were reported by EC2 users.
Ubuntu Server Team Events
  • Linuxcon on the 20th and TL sprint going on during this week.
Open Discussion
  • lutostag reminded everyone that voting for ODS sessions is about to close, so people should vote ASAP. Ubuntu-related sessions:
Agree on next meeting date and time
  • Next meeting will be on Tuesday, Aug 12th at 16:00 UTC in #ubuntu-meeting. Chaired by gaughen

Valorie Zimmerman: Coming up: excitement and work

Planet Ubuntu - Tue, 2014-08-05 05:25
First, many of us will be taking off this week for Randa, Switzerland. Many sprints are taking place simultaneously, and the most important to me is that we're writing another book. Book sprints are fun, and lots of work! As well as the team in Randa, a few people will be helping us write and edit from afar, and I'll be posting a link soon so that you can help out as well.

Here is a recent article on Randa and what goes on here: Most of the attendees are traveling on funds contributed by the community. Thanks so much! I hope our work will be worth your generosity.

Today, the Akademy session schedule was announced: This will be my third Akademy, and they are always fascinating, friendly, educational, and just plain awesome! At this point, we're still open for more sponsorship of Akademy, and registration seems a bit slow. If you are interested, please register and make your way to Brno for Akademy. If you know a company who is not yet a sponsor and should be, please urge them to register as a sponsor now.

The talks will be great, as will the "hall track." Following the formal meeting, we have a few days of informal talks, mini-sprints, and workshops. This is when Kubuntu will be meeting, which is why Ubuntu sponsored me to attend! \o/ Thanks so much for that, generous Ubuntu users. To sum up, I'll quote Myriam from the above article:
While a lot of the work in Free Software is done over the internet, nothing replaces real life meetings, as they provide an extra drive in terms of motivation. Modern software development is mostly agile, something even corporate software development is using more and more. Due to the global distribution of our contributors, Free Software development has always been agile to start with, even if we didn't put a label on it in the early days. And in agile development, sprints are a very important element to push the project forward. While sprints can be done over the web, they are hindered by time-zones, external distractions, availability of contributors, etc. Having real life sprints, even if those are few, is more productive, as all the hindrances of web meetings are eliminated and productivity is greatly enhanced. [emphasis added]

The Fridge: Ubuntu Weekly Newsletter Issue 377

Planet Ubuntu - Mon, 2014-08-04 23:38

Welcome to the Ubuntu Weekly Newsletter. This is issue #377 for the week July 28 – August 3, 2014, and the full version is available here.

In this issue we cover:

This issue of the Ubuntu Weekly Newsletter is brought to you by:

  • Elizabeth K. Joseph
  • Tiago Carrondo
  • David Morfin
  • Jose Antonio Rey
  • And many others

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!

Except where otherwise noted, content in this issue is licensed under a Creative Commons Attribution ShareAlike 3.0 License.

Robie Basak: My git-based Ubuntu package merge workflow

Planet Ubuntu - Mon, 2014-08-04 16:17

Originally posted to the ubuntu-devel mailing list (archive).

I thought it was about time that I shared my own merge workflow, as I think it is quite different from most other Ubuntu developers. I'm an advanced git user (even a fanatic, perhaps), and I make extensive use of git's interactive rebase feature. To me, an Ubuntu package merge is just a rebase in git's terminology, and in this case I use git as nothing more than an advanced patchset manager.

I find my workflow allows me to handle arbitrarily complex package merges - something I've not been able to do any other way. And once I've merged a particular package with this workflow, future merges take me far less time, because checking individual broken-down diffs is quicker still.

This workflow may be useful to others, but probably only if you are already very familiar with git's interactive rebase feature. I don't suggest that you try to use this workflow without first being extremely comfortable with this (for example, working with git while not attached to a branch).

On the other hand, if you are very familiar with rebasing in git, then like me you may find this workflow to be the logically obvious way of doing package merges in Ubuntu. I wonder if anybody else feels like this.

In my mind, this write-up may seem complex, but I think this complexity is just a reflection of the reality of what's really going on when one does an Ubuntu package merge. But by using git, the complexity gets moved to the complexity of doing git rebases, and this is something that only needs to be learnt once.

I'm also interested to know how this fits in with other recent work in using git with Debian packaging. My impression is that it doesn't fit so well, because in Ubuntu we need to deal with all Debian packages, including those not managed in git in Debian. Comments, feedback and criticism are all appreciated.

Considering merges Merge essentials

Let me first consider what an Ubuntu package merge really is. Existing Ubuntu developers probably want to skip this section.

First, some terminology. For a given package that needs merging, Ubuntu has applied some set of changes from the Debian version it is based on. So we have some Debian version from which Ubuntu diverged (the base version), the latest Debian version, and the current Ubuntu version. The old Ubuntu delta is the diff between the base version and the current Ubuntu version. The new Ubuntu delta will be the diff between the latest Debian version and the newest Ubuntu version that we will upload.

To do a package merge, we must re-apply all of the Ubuntu delta that is still required onto the latest Debian version. On the way, we might find that some changes are no longer required, that some changes have to be modified to work against the latest Debian version, and that we perhaps need to introduce new changes.

We expect the result to contain a changelog entry summarising what remains in the Ubuntu delta, what was modified or dropped, and any new changes that were made.
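As a toy illustration of the three trees involved (with made-up file contents standing in for real source packages), the old delta can be captured and re-applied with plain diff and patch:

```shell
# Toy illustration only: base/, latest-debian/ and current-ubuntu/ are
# stand-ins for unpacked source trees, not real packages.
mkdir -p delta-demo && cd delta-demo
mkdir base latest-debian current-ubuntu
printf 'upstream line\n' > base/rules
printf 'upstream line\nnew debian line\n' > latest-debian/rules
printf 'upstream line\nubuntu change\n' > current-ubuntu/rules

# The old Ubuntu delta: base version vs. current Ubuntu version.
diff -ruN base current-ubuntu > old-delta.patch || true

# Re-apply that delta onto the latest Debian tree to start the merge.
cp -r latest-debian merged
patch -d merged -p1 < old-delta.patch
cd ..
```

In a real merge the "re-apply" step is where conflicts appear and where changes already applied in Debian must be dropped; that is exactly the work the git workflow below structures.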

The logical delta

So when doing a package merge, it is essential to understand what exactly logically constituted the previous Ubuntu delta, so that we can identify what changes are no longer required, how we might need to modify some previous changes, and what new changes may be needed.

When the Ubuntu delta is relatively trivial, checking all of this by examining the diffs produced by merge-o-matic is normally fine. Even if the delta consists of a few changes, they are easy to identify and understand in a small diff.

But when the delta is larger, I find it far more difficult to follow it all in my head at once, particularly when multiple logical changes apply changes to similar overlapping areas across multiple files. This is, of course, yet another good reason why we should be sending our changes to Debian and keeping our delta small, but in some cases maintaining a large delta is necessary, at least in the short term.

In following my workflow, I have come across a number of merge errors made by multiple Ubuntu developers, where the claimed delta in the changelog for a merge did not match the delta itself. This suggests to me that developers are not always checking and understanding the delta as they should.

Applying git

git makes it easy to take a large "squashed" diff and split it into multiple constituent logical parts. This is what I've been doing here. Once split like this, I use git rebase to apply the logical parts back on to the latest Debian version. This allows me to examine each logical part of the delta separately, modifying or removing them as required. When I'm done, it is easy to review each part, and even compare against the previous version. And I can save the broken down parts for the next merge.

So broadly, my workflow for packages with complex deltas is:

  1. Import the base, latest Debian and all Ubuntu revisions since the base version into a git repository.

  2. Break down the Ubuntu revisions into constituent logical parts using git rebase. Or, if I followed this workflow last time, I just run git am against what I saved previously. One might consider this step to be the opposite of a "squash" operation. "Unsquash", if you like.

  3. Rebase onto the latest Debian version, dropping any metadata changes (e.g. debian/changelog changes and update-maintainer) and amending the delta on the way as required.

  4. Update debian/changelog, apply update-maintainer, review, test and upload.

  5. Run git format-patch to save my set of logical changes for next time.

To help with these tasks, I have written some tooling that I use. I've pushed these to git://

  • xgit is a wrapper around setting GIT_WORK_DIR and GIT_DIR, so that I can operate with a .git directory that is outside my working tree. This means that dpkg-buildpackage, dpkg-source etc. don't need to know or care that I'm using git, and I can run git commands without necessarily being in my working tree.

  • git-dsc-commit imports a source package by just committing and tagging a new commit (in the current branch, or detached HEAD) that is exactly the unpacked source package.

  • git-merge-changelogs is a wrapper around dpkg-mergechangelogs that takes its input changelogs from debian/changelog files found in specific git revisions.

  • git-reconstruct-changelog extracts commit log messages from a set of git commits and writes them to debian/changelog.

These tools are incomplete. I didn't know where I was going when I wrote them, and there is certainly scope for more automation. I addressed the biggest needs first, and what is remaining costs me little time so I have not spent time to automate any more yet.

Importing revisions into a git repository

I generally start with:

# Import relevant source packages. This could probably be automated
# with the help of grab-merge.
pull-debian-source -d <package>
pull-debian-source -d <package> <base-revision>
pull-lp-source -d <package>
pull-lp-source -d <package> <version-since-base>  # 0 or more times

# Set up git repository for this merge
. /path/to/xgit.bash
xgit
mkdir git gitwd  # git = moved .git directory; gitwd = working directory
                 # without .git
git init

Next I import the sources package into git, modelling the Ubuntu divergence by having the new Debian package have a parent commit of the base Debian package, and the Ubuntu packages on a separate branch also rooted at the base package:

git dsc-commit <base-revision .dsc>
git dsc-commit <latest-debian-version .dsc>
git checkout <base-revision tag>
git dsc-commit <Ubuntu version since base .dsc>  # 0 or more times
git dsc-commit <current Ubuntu version .dsc>

git-dsc-commit automatically tags revisions, but since ~ and : are invalid in git tag names, _ is substituted. So right now, I have to correctly name the tag that git-dsc-commit used in the git checkout call above.
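As a small illustrative sketch (the function name here is mine, not part of git-dsc-commit), the substitution amounts to:

```shell
# Hypothetical helper: git forbids ':' and '~' in ref names, so
# substitute '_' for both, the way git-dsc-commit does for tags.
sanitize_tag() { printf '%s' "$1" | tr ':~' '__'; }

sanitize_tag '1:5.5.35+dfsg-1ubuntu1'   # -> 1_5.5.35+dfsg-1ubuntu1
```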

git-dsc-commit commits "3.0 (quilt)" source packages without patches applied. I prefer to work with quilt patches directly if they need refreshing or other changes made. Otherwise I just get noise in .pc/, and it is difficult to rationalise any changes made back into the separate quilt patches they belong to.

Note that git-dsc-commit commits the entire source package tree exactly as it is. It is not like a normal commit, where logically you're committing a change. Underneath, git commits are really snapshots, not changesets, so git-dsc-commit just commits a snapshot identical to the source package. For example, if you have made a change, then from your point of view git-dsc-commit will effectively commit the reverse of that change if necessary so that the result looks identical to the source package you're importing.

When this step is done, I have a git repository with imported source packages in commits that mirror the Ubuntu divergence.

I find this point very useful in itself, since I can now easily compare things. If I want to know whether two specific files differ between particular Debian and Ubuntu source package versions, or how they differ, or want a list of files in a particular debian/ subdirectory that have changed, I just ask and git will tell me. Querying for changes between arbitrary revisions and files is something that git does very well.
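For instance, assuming tags named base_1.0-1 and ubuntu_1.0-1ubuntu1 (hypothetical names; the tags git-dsc-commit creates for a real package will differ), such queries look like this against a throwaway repository:

```shell
# Build a throwaway repo with two tagged snapshots to query against.
git init -q query-demo && cd query-demo
git config user.email demo@example.com && git config user.name demo
mkdir -p debian
printf 'Maintainer: Debian\n' > debian/control
git add -A && git commit -qm 'import base' && git tag base_1.0-1
printf 'Maintainer: Ubuntu\n' > debian/control
printf '9\n' > debian/compat
git add -A && git commit -qm 'import ubuntu' && git tag ubuntu_1.0-1ubuntu1

# Which files under debian/ differ between the two imports?
git diff --name-only base_1.0-1 ubuntu_1.0-1ubuntu1 -- debian/
# How does a specific file differ?
git diff base_1.0-1 ubuntu_1.0-1ubuntu1 -- debian/control
cd ..
```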

Breaking down the delta into logical parts

If I already performed this step for a previous upload, a simple git am against my saved work allows me to skip it. I can verify the result by diffing against the imported squashed equivalent.

I won't go into how to use git rebase here; I assume you know that. For every commit I edit, I generally git reset HEAD^ back to the previous version, so all changes made in this particular source package version become unstaged. Then I go through the changelog entries one by one, staging only those changes (often using git add -p) and committing them one by one.
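A minimal sketch of this "unsquash" step, using a toy repository and staging whole files rather than the interactive git add -p:

```shell
git init -q unsquash-demo && cd unsquash-demo
git config user.email demo@example.com && git config user.name demo
printf 'base\n' > rules && git add -A && git commit -qm 'base import'

# One squashed commit containing two unrelated logical changes.
printf 'fix A\n' > patch-a
printf 'fix B\n' > patch-b
git add -A && git commit -qm 'squashed ubuntu import'

# Unsquash: step back to the previous version, leaving the changes
# unstaged, then re-commit each logical change separately.
git reset -q HEAD^
git add patch-a && git commit -qm '* Add patch A (logical change 1)'
git add patch-b && git commit -qm '* Add patch B (logical change 2)'

git log --oneline   # now shows two logical commits atop the base
cd ..
```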

The point in this step is to reflect what was logically present in an already-uploaded source package, errors and all. Some notes:

  • I generally aim to end up with commits that follow the same order as the entries in debian/changelog.

  • git log --decorate is useful here, since all the imported source packages are tagged.

  • I make the commit message for each logical change identical to its entry in debian/changelog where possible, including leading whitespace and the *, - or + bullet points.

  • I make the debian/changelog file change for the entire upload a separate commit at the end (most recent) for each source package version.

  • If update-maintainer was run and thus modified debian/control, or VCS-* entries changed to XS-Debian-VCS* entries, I put this in a separate logical commit with an "ubuntu-meta" commit message.

  • Quilt patches that exist only in Ubuntu involve logical commits that add the file in debian/patches/ and add a single line to debian/patches/series. Patches remain unapplied. Similarly, for other types of quilt patch modification, only changes to debian/patches/ end up in the commit.

  • This is the stage at which I often find errors in the previously documented changelog. When this happens, I just figure out what happened logically and try to commit something that matches. If debian/changelog specified a change that was actually not present, it doesn't get a logical commit, but the commit with the full debian/changelog change does include the erroneous text.

When done, it is trivial to run git log -p and check that all commits match their description. I also run git diff <tag> and verify that the result is still identical to the source package import we started from by checking that the reported diff is empty.

Rebasing onto the newest Debian version

Again, this should be straightforward to follow for git rebasers, and I assume you know how to operate the details.

First, I drop any previous commits that changed debian/changelog only, as well as any "ubuntu-meta" commits. Then something like git rebase --onto <new_debian_version> <base_version> does the job. If there are conflicts, they can be handled during the rebase in the normal way.
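A toy sketch of this rebase step, with hypothetical tag and branch names standing in for the imported package versions:

```shell
git init -q rebase-demo && cd rebase-demo
git config user.email demo@example.com && git config user.name demo
printf 'v1\n' > upstream && git add -A && git commit -qm 'debian 1.0-1' && git tag base
printf 'v2\n' > upstream && git commit -qam 'debian 1.0-2' && git tag new-debian

# Model the Ubuntu delta as a branch rooted at the base version.
git checkout -q base
git checkout -q -b ubuntu-delta
printf 'delta\n' > ubuntu-patch && git add -A && git commit -qm '* Ubuntu change'

# Replay the Ubuntu delta onto the newer Debian import.
git rebase -q --onto new-debian base ubuntu-delta
cd ..
```

After the rebase, the ubuntu-delta branch contains the logical Ubuntu commits re-applied on top of the new Debian snapshot; in a real merge, conflicts here correspond to delta entries that need modifying or dropping.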

While I'm doing this, I take notes of the changes I made so that I can write up the changelog later. Where possible, I directly squash these notes into the commit messages in the form of the future changelog entry. If the rebase step drops commits because they have been applied in Debian, then it's important to note these. git doesn't specifically point these out except as they scroll past.

Next, I check that all quilt patches apply and are still correct, and do any further editing required. This includes test builds, running dep8 tests, etc. As I do this, I use git rebase extensively again, squashing the commits down into their original places and updating commit messages (which will be the basis of the future changelog message).

When doing test builds at this stage, I don't want to overwrite the Debian source package in my parent directory, so I do have to insert a temporary changelog entry or something. I haven't worked out a strong pattern for this yet; sometimes I complete the remaining steps first to avoid this issue, and rebase and squash any changes I needed back in. git-buildpackage can probably help me here; I haven't looked into integrating it into my workflow at this step yet.

It is important to note that this stage really uses git as an advanced patchset editor. I am editing the patchset itself. I specifically do not add new commits to the end, except temporarily before I squash them down again.

When this step is complete, my commits start from the imported latest Debian source package version, and show the logical delta (one logical entry per commit) that will form the new Ubuntu upload. Changelog entries exist only as commit messages; debian/changelog is not modified at all yet.

Updating the changelog

Since an Ubuntu merge is expected to include a merged changelog, adding to the Debian changelog will not do; we need to import all previous Ubuntu changelog entries too.

Most of this could probably be automated more.

Merging old changelog entries

My tool git-merge-changelogs does this. Calling it as git merge-changelogs <base version tag> <latest Debian version tag> <current Ubuntu version tag> fetches the changelog entries out of the imported source packages, calls dpkg-mergechangelogs and writes out debian/changelog in the working tree. Then I usually just git commit -mmerge-changelogs debian/changelog to commit this step.

Automatically creating new changelog entries

Next, I need to add the changelog entry for the merge itself. I do this with my tool git-reconstruct-changelog. Calling git reconstruct-changelog <latest Debian version tag> inserts the commit messages into debian/changelog. Then I usually run git commit -mreconstruct-changelogs debian/changelog to commit this step.

Finishing the changelog

Reconstructing the changelog will miss out the merge introduction, and also will fail to mention any dropped changes since there are no commits that correspond to these. Consulting my notes from earlier, I edit up the changelog manually, fix any whitespace/wrapping issues, release it with dch -r '' and commit it with git commit -mchangelog debian/changelog.

Other metadata

Next I run update-maintainer, and then git commit -mubuntu-meta debian/control to commit this step. Any VCS-* to XS-Debian-VCS-* type translation goes into this commit, too.


That's it. Since my working tree has no .git directory, I can just run debuild as usual to create my source package ready for upload.

If there's a problem and I need to go round again, it's quite easy to squash a change in where I need it, re-run git reconstruct-changelog and edit the changelog, and rebuild the source package.

Saving the logical delta for future use

After upload, I make sure to save my logical delta by using git format-patch. This allows me to reconstruct it quickly the next time I merge the same package. There is no need for me to keep the git repository around.
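A minimal sketch of this save-and-replay round trip, using a toy repository in place of real package imports:

```shell
git init -q patchset-demo && cd patchset-demo
git config user.email demo@example.com && git config user.name demo
printf 'debian\n' > f && git add -A && git commit -qm 'debian import' && git tag debian
printf 'ubuntu\n' >> f && git commit -qam '* Ubuntu delta'

# Save the logical delta as mailbox-format patches...
mkdir -p ../saved && git format-patch -q -o ../saved debian..HEAD

# ...and replay it later in a fresh repo rooted at the same import.
cd .. && git init -q patchset-demo2 && cd patchset-demo2
git config user.email demo@example.com && git config user.name demo
printf 'debian\n' > f && git add -A && git commit -qm 'debian import'
git am -q ../saved/*.patch
cd ..
```

Because git am recreates the commits (messages and all), the saved patchset is enough on its own, and the original repository can be thrown away.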

The patchsets I've saved this way don't always follow what I've written here precisely, as I have taken a while to settle on it, and I still deviate on a whim. It doesn't really matter though; by separating out logical changes into separate commits, when I look at it the next time it's easy to mould a patchset into whatever form I will need.

Example of use

This workflow allows me to handle any merge that is thrown at me, however complex it may be. When I merged mysql-5.5 last cycle, it had diverged considerably from Debian, but with much cherry-picking going both ways. The sheer complexity of it, and the time necessary to figure it all out, had put off developers before me from sorting it out. Instead, some changes kept getting cherry-picked and other changes were getting lost.

When I reconstructed the logical set of changes made in Ubuntu since we diverged, I ended up with a branch of around 120 commits (IIRC). With extensive rebasing, I ended up reducing this to 8 logical changes to send to Debian, and just 4 commits remaining in the Ubuntu delta. Importantly, I did this in a way that I could be confident about the results, since I could easily verify my work.

I'm now going to do the same for mysql-5.6, and I'm much happier doing it knowing that I can manage it this way.


I have a local store of these logical delta patchsets. Currently this is for apache2, facter, nginx, php5, subversion and vsftpd. If others want to follow the same workflow, we should work out some way to share them.

And if many people find a git repository that follows Debian and Ubuntu source packages useful, then perhaps we should set one of those up to share, too, to save doing the import step.

I did have some code that auto-imported into git from UDD bzr, and cached, so I could just git clone a UDD branch, but this was limited by UDD's package import reliability, so I stopped using it. My git-dsc-commit tool should always work. I have had to fix a number of edge cases, but I am not aware of any that are outstanding.

The End

What I think I have here are the pieces needed to make merging Ubuntu packages with git work. The workflow itself doesn't matter so much - you can mix and match, and you should be fine.

Adolfo Jayme Barrientos: Have you taken a look at LibreOffice 4.3?

Planet Ubuntu - Mon, 2014-08-04 02:49

Because it’s an awesome release on many fronts, and it’s now in Utopic and the PPAs. It’s been especially significant for me, as it includes some patches of my own :) (thanks to Caolán, who kindly reviewed them).

I loved this comment from an LWN reader:

I also just noticed that legend Michael Meeks has mentioned me in his epic blog post detailing the work that all the different LibreOffice teams have been doing during the last six months (definitely check it out if you haven’t yet). The mention was cool, but I only played a small part in making this release the best yet: everyone, from developers, bug triagers and translators to marketers and designers, has done an excellent job. The LibreOffice community is a delightful place to be, and we need your help.

Dimitri John Ledkov: What is net neutrality?

Planet Ubuntu - Sun, 2014-08-03 04:21
Sorry, the web page you have requested is not available through your internet connection. We have received an order from the Courts requiring us to prevent access to this site in order to help protect against Lex Julia Majestatis infringement. If you are a home broadband customer, for more information on why certain web pages are blocked, please click here. If you are a business customer, or are trying to view this page through your company's internet connection, please click here. ∞

David Tomaschik: Weekly Reading List for 8/2/14

Planet Ubuntu - Sun, 2014-08-03 02:02

This has been missing for a few weeks, but it's back!

Why is CSP Failing?

Why is CSP Failing? Trends and Challenges in CSP Adoption. Despite being an "academic" paper, this actually has a lot to offer about why one of the most effective defenses against XSS isn't yet getting widely implemented, and what the implementation costs and strategies are.

Safari Bites the Dust

Ian Beer of Google Project Zero recently popped Safari and then proceeded to pwn OS X. This post dives into exploiting a WebKit unbounded write bug, and makes it obvious just how many hoops an attacker needs to go through compared to the 'buffer overflow to overwrite EIP' bugs of the 'good old days'. It's a great read, especially if you're new to browser/client exploitation.

Blackhat & DEF CON Tips

It's that time of year again -- the annual Las Vegas pilgrimage for hackers. As usual, Chief Monkey over at has some protips for first time attendees. (Or reminders for seasoned vets!)

Xubuntu: 5 Things to Do After Upgrading from 12.04 to 14.04

Planet Ubuntu - Sat, 2014-08-02 13:43

The first point release of 14.04 just came out a few days ago and many LTS users waited for this to upgrade from 12.04 – in fact do-release-upgrade only offers the LTS to LTS upgrade after the first point release, for stability reasons. So we thought this would be the perfect time to do a quick writeup of a few things to do after upgrading your system. User configuration isn’t updated and installed applications aren’t removed when upgrading, and that’s a good thing: upgraders will not have to restore their customizations, and their system will mostly look as it did before.
However, for those of you who want to get closer to the default setup of Xubuntu 14.04 Trusty Tahr, here are five easy steps you can quickly follow to that end.

  1. Light Locker has replaced XScreenSaver. Light Locker uses LightDM to lock the screen, merging the functionality of the login screen and the lock screen. Having both applications installed at the same time may produce bugs or regressions, so it is recommended to remove XScreenSaver. To remove it just run the following command in a terminal window: sudo apt-get remove xscreensaver
    If you would rather see a screensaver instead of an improved screen locker, you can alternatively remove Light Locker and keep XScreenSaver. 
  2. MenuLibre, an advanced menu editor that provides modern features in a clean, easy-to-use interface, with full Xfce support, replaces Alacarte for menu editing. To remove Alacarte open a terminal window and run the following command: sudo apt-get remove alacarte 
  3. Due to a duplication of functionalities, the Xubuntu Team decided to favor Ristretto for photo viewing, and drop gThumb. To remove gThumb from your system run in a terminal window: sudo apt-get remove gthumb 
  4. As Whiskermenu is now the default menu in Xubuntu, swap out the old application menu with it. Just right click the top panel and navigate to Panel > Add New Items, then select “Whisker Menu” and click “Add”.
    After that, and to remove the old application menu, just right click on its icon and choose the “Remove” option.
  5. All PPAs are automatically disabled when you upgrade, so you’ll have to re-enable release-independent PPAs manually, taking into consideration that you’ll need to check whether the old PPAs work with the new Xubuntu version.


Subscribe to Free Software Magazine aggregator