
Ubuntu Women: 2014 Leadership Poll Results

Planet Ubuntu - Mon, 2014-08-11 17:11

Polling has closed, and we are pleased to announce that the new leadership team for Ubuntu Women has been selected.

A. Mani, Svetlana Belkin, and Emma Marshall will be the leadership committee for the next two years!

Please join me in congratulating them and supporting them as we transition into this new term for the Ubuntu Women team!

Harald Sitter: Volume

Planet Ubuntu - Mon, 2014-08-11 14:27

Volume controls. Based on PulseAudio. For Plasma 5.

Built in Randa.

Ubuntu Scientists: Who We Are: Willem Ligtenberg, A Member

Planet Ubuntu - Mon, 2014-08-11 14:17

I am Willem Ligtenberg.
I studied Biomedical Engineering and later specialized in bioinformatics, which is also
known as computational biology. More specifically, I specialized in biomodeling and bioinformatics.
During my PhD thesis I investigated the use of graph theoretic approaches in biology.
I used graph algorithms in combination with machine learning algorithms to reverse engineer gene regulatory networks. If you are interested you can have a
read here: http://www.biomedcentral.com/1471-2105/13/281

During this I used a lot of Python and a bit of R for the statistics. I also took a course on
biostatistics for PhD students, and although the course was taught using Statgraphics, I did the exercises in R.
The tutors thought that was fine, but it was not their area of expertise; they did, however, give me the
e-mail address of a colleague of theirs who used R as well.

Fast-forward a couple of years, and I am currently working as a consultant for Open Analytics
(http://www.openanalytics.eu), a company that helps with data analysis from start to finish. As the name suggests,
the company believes in openness and therefore focuses on the use of free/libre open source software (FLOSS).
The FLOSS aspect of the company was a big plus for me, since I have been using Ubuntu since Warty (2004).
For the data analysis we mainly use R, but we will use other languages if they are more
appropriate, so I still get to use Python now and then.

I have written and contributed to a few R packages that are on Bioconductor (reactome.db,
the a4 packages and MLP). I am currently working on an object-relational mapper in R, which
I hope to publish soon. Feel free to reach out to me if you want to know more about data analysis
using open source software, specifically bioinformatics and databases.

Cheers,

Willem Ligtenberg

http://www.wligtenberg.nl


Filed under: Who We Are

Valorie Zimmerman: Randa Meetings sprint: KDE Frameworks Cookbook progress

Planet Ubuntu - Mon, 2014-08-11 14:06
We groaned and suffered with the up-and-down network, and had to abandon our plan to write and edit the book on Booki at Flossmanuals. So we began to create text files in Kate or KWrite, but how to share our work?

The best answer seemed to be a git repository, and our success began there. Once created, we consulted again and again with the Frameworks developers in the room across the hall, and brainstormed and wrote, and even created new tools (Mirko). Our repo is here: kde:scratch/garg/book. If you want to see the live code examples, you will need this tool: https://github.com/endocode/snippetextractor .

I'm so happy with what we have so far! The texts are just great, and the code examples will be updated as they are updated in their repositories. So if people planning a booth at a Qt Contributor Conference, for instance, wanted to print up some copies of the book, it would be completely up-to-date. Our goal is to commit every part of the book so that it can be auto-fetched for reading as an epub, PDF or text file, or printed as a book.

It is a tremendous help to be in the same place. Thank you KDE community for sending me here, all the way from Seattle. Thank you for bringing all the other developers here as well. We are eating well, meeting, coding, writing, walking, drinking coffee and even some Free Beer, and sometimes sleeping too. Mario brings around a huge box of chocolate every night. We're all going to arrive home somewhat tired from working so hard, and somewhat fat from eating so well!


Paul Tagliamonte: DebConf 14

Planet Ubuntu - Mon, 2014-08-11 01:06

I’ll be giving a short talk on Debian and Docker!

I’ll prepare some slides to give a brief talk about Debian and Docker, then open it up to have a normal session to talk over what Docker is and isn’t, and how we can use it in Debian better.

Hope to see y’all in Portland!

Stuart Langridge: Reverse SSH tunnels

Planet Ubuntu - Sun, 2014-08-10 22:49

My dad’s got a computer. Infrequently, it goes wrong and I need to fix it. Slightly more frequently, it doesn’t go wrong but it does something which is confusing, and I need to try to fix it until I realise what the confusing thing was and then either fix that or explain it. So, being able to connect to his machine is useful.

His ADSL router, from TalkTalk1, allows one to set up a port forward2 so that I can connect to his external IP and have that routed to port 22 on his machine, thus allowing me SSH access, and with SSH I can do everything else3. However, that router also controls the DHCP addresses for things on the network, and it does not always give the same address out to the same machine. So, every now and again, it’ll give his machine a different IP, and then the port forward stops working.4

So, after mithering about this a bit, Daviey Walker suggested5 that I use a reverse SSH tunnel. That is to say: I have his machine ssh into one of mine, and then port forward a port on my machine back along the SSH tunnel to port 22 on his machine, meaning that I can ssh into it and don’t have to care about IPs or anything.

This was a dead clever idea. It relies on me having a machine which is sshable from the outside world, but I do, so that’s OK.

Obviously, something needs to set the tunnel up. So, first I set things up so that his machine could ssh into mine with key authentication and without a password needed (see ssh-copy-id or a guide for that), and then I wrote this little script:

#!/bin/bash
createTunnel() {
  date
  ssh -N -o BatchMode=yes -R 9102:localhost:22 mylogin@mysshablemachine
  if [[ $? -eq 0 ]]; then
    echo Tunnel created successfully
  else
    echo An error occurred creating a tunnel. RC was $?
  fi
}
/bin/pidof ssh > /dev/null
if [[ $? -ne 0 ]]; then
  echo Creating new tunnel connection
  createTunnel
fi

which creates the ssh tunnel connection. There's a hack there: it assumes that if there's an ssh process, it's our ssh process. If you regularly ssh from the box you're doing this on, you'll want to do something cleverer; in this case I don't, so I keep it easy. There are a couple of little tricks in the script. There's a date command, so the output mentions when this happened, which is useful for the log file in the next bit (and this is also why the script generates no output if the tunnel is already up). Secondly, -o BatchMode=yes in the ssh options means that it'll instantly fail if you haven't got key auth set up right, rather than hanging forever waiting for a password. It'll also send server keepalives every 300 seconds and kill the connection if they go unanswered, which means that if the connection hangs but doesn't terminate, it'll get terminated. This is what we want, because we want some monitoring process to restart the tunnel if it dies. There are all sorts of clever ways to do this: upstart, systemd, whatever6, but I just put this line in the crontab7:

*/1 * * * * /path/to/above/script.sh >> /path/to/tunnel.log 2>&1

which just reruns the above script every minute and sticks any output into a logfile, so if it's not working I can, at a push, ask dad to read the logfile. Inefficient and low-budget, but it works. So once all this is set up, I can, from my sshable machine, do ssh -p 9102 dadlogin@localhost and I then get to log in to his machine. Then I can fix it. And never deal with his horrid router's horrid web UI ever again.
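One note on those ssh options: the keepalive behaviour described above comes from OpenSSH's ServerAliveInterval and ServerAliveCountMax client options, which aren't spelled out in the short script. A hedged sketch of what the ssh line looks like with them added (the values here are illustrative, not taken from the original script):

ssh -N -o BatchMode=yes -o ServerAliveInterval=300 -o ServerAliveCountMax=3 -R 9102:localhost:22 mylogin@mysshablemachine

With those options, ssh sends a keepalive every 300 seconds and gives up after three unanswered probes, so a dead-but-not-closed connection gets torn down and the cron job can re-establish the tunnel.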

  1. a Huawei HG533 with the most annoying web config UI I’ve ever seen ever, which gets even more annoying if you try and control it from curl because it’s all JavaScript-dependent. Why? You are a router, not gmail! Grrrr!
  2. hooray
  3. hooray
  4. Also, I really really don’t like static IPs, so I don’t want to configure it with one.
  5. that is, I was mithering about it. Not Daviey.
  6. not /etc/init.d though. This is a user-level process. It should not be being run by system-level stuff. System level belongs to apt.
  7. a file which defines jobs to be run at specific times; it’s like a super-techie Scheduled Tasks wizard, and it usefully will run things even when you’re not logged in. You can edit yours from the command line with crontab -e.

Diego Turcios: How I miss you Synaptic!

Planet Ubuntu - Sun, 2014-08-10 01:03
Several years have passed since we last saw Synaptic included in Ubuntu by default. You can find the reasons here.
In plain English, the reason was to have a better add/remove program for users, a friendlier application. The explanation sounds good, and I didn't complain about it until now. Ubuntu has changed a lot; it really is a user-friendly OS.

I use the CLI when necessary, but today I couldn't believe it.
I'm a Google Chrome user. I know you will tell me it's not open source, or that I should use Chromium or Firefox. But no, I'm a Google Chrome user, and many people also prefer Chrome over Chromium, so why should removing it be so complicated? If Ubuntu wants to be more user-friendly, why should you need the terminal to remove one of the most popular web browsers? I could understand it for a browser few people use, but not for Chrome, one of the most popular browsers in the world!
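For reference, the terminal route being complained about here boils down to a single apt-get call, assuming Chrome was installed from Google's own .deb, which normally provides the package google-chrome-stable:

sudo apt-get remove google-chrome-stable

That works, but the point stands: a user who installed Chrome with a couple of clicks shouldn't need to know that package name.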

A screenshot of the Ubuntu Software Center, trying to remove Google Chrome:
No results, not even a message saying that nothing was found. This is not a user-friendly application!



So, I decided to check Synaptic to see if it was possible to remove Google Chrome there, and yes, it's that simple.

So please tell me, am I wrong or am I right? What's your opinion?
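(Synaptic is no longer installed by default, so if you want to try the same thing you first need it back on the system; assuming the package is still simply called synaptic, that is one command:)

sudo apt-get install synaptic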



Daniel Pocock: Help needed reviewing Ganglia GSoC changes

Planet Ubuntu - Fri, 2014-08-08 20:14

The Ganglia project has been delighted to have Google's support for 5 students in Google Summer of Code 2014. The program officially finishes in ten more days, on 18 August.

If you are a user of Ganglia, Nagios, RRDtool or R or just an enthusiastic C or Python developer, you may be able to use and provide feedback for the students while benefitting from the cool new features they have been working on.

  • Chandrika Parimoo (Python, Nagios and some Syslog): Chandrika generalized some of my ganglia-nagios-bridge code into the PyNag library. I then used it as the basis for syslog-nagios-bridge. Chandrika has also done some work on improving the ganglia-nagios-bridge configuration file format.
  • Oliver Hamm (C): Oliver has been working on metrics about Ganglia infrastructure. If you have a large and dynamic Ganglia cloud, this is for you.
  • Plamen Dimitrov (R, RRDtool): Plamen has been building an R plugin for inspecting RRD files from Ganglia or any other type of RRD.
  • Rana (NVIDIA, C): Rana has been working on improvements to Ganglia monitoring of NVIDIA GPUs, especially in HPC clusters.
  • Zhi An (Java, JMX): Zhi An has been extending the JMXetric and gmetric4j projects to provide more convenient monitoring of Java server processes.

If you have any feedback or questions, please feel free to discuss on the Ganglia-general mailing list and CC the student and their mentor.

Canonical Design Team: Ubuntu 14.10 wallpapers – we needs ‘em!

Planet Ubuntu - Fri, 2014-08-08 09:12

Verónica Sousa’s Cul de sac

Ubuntu was once described to me by a wise(ish ;) ) man as a train that was leaving whether you're on it or not. That's the beauty of a 6-month release cycle. As many of you will already know, each release we include photos and illustrations produced by community members. We ask that you submit your images using the free photo-sharing site Flickr, and that you limit yourself to 2 images this time. The group won't let you submit more than that, but if you change your mind after you've submitted, fear not: simply remove one and it'll let you add another.

As with previous submission processes we've run, and in conjunction with the designers at Canonical, we've come up with the following tips for creating wallpaper images.

  1. Images shouldn’t be too busy and filled with too many shapes and colours, a similar tone throughout is a good rule of thumb.
  2. A single point of focus, a single area that draws the eye into the image, can also help you avoid something too cluttered.
  3. The left and top edges are home to Ubuntu’s Launcher and Panel so be careful to consider how your images look in place so as not to clash with the user interface. Try them out on your own desktop, see how they feel.
  4. Try your image at different aspect ratios to make sure something important isn't cropped out on smaller/larger screens at different resolutions.
  5. Take a look at the wallpapers guidance on the Ubuntu Wiki regarding the size of images. Our target resolution is 2560 x 1600.
  6. Break all the rules except the resolution one! :D

To shortlist from this collection we’ll be going to the contributors whose images were selected last time around to act as our selection judges. In doing this we’ll hold a series of public IRC meetings on Freenode in #1410wallpaper to discuss the selection. In those sessions we’ll get the selection team to try out the images on their own Ubuntu machines to see what they look like on a range of displays and resolutions.

Anyone is welcome to come to these sessions, but please keep in mind that an outcome is needed from the time people volunteer, and there are usually a lot of images to get through, so we'd appreciate it if there isn't too much additional debate.

Based on the Utopic release schedule, I think our schedule for this cycle should look like this:

  • 08/08/14 – Kick off 14.10 wallpaper submission process.
  • 22/08/14 – First get together on #1410wallpaper at 19:30 GMT.
  • 29/08/14 – Submissions deadline at 18:00 GMT – Flickr group will be locked and the selection process will begin.
  • 09/09/14 – Deliver final selection in zip format to the appropriate bug on Launchpad.
  • 11/09/14 – UI freeze for latest version of Ubuntu with our fantastic images in it!

As always, ping me if you have any questions; I'll be lurking in #1410wallpaper on Freenode. Or leave a question in the Flickr group for wider discussion; that's probably the fastest way to get an answer.

I’ll be posting updates on our schedule here from time to time but the Flickr group will serve as our hub.

Happy snapping and scribbling and on behalf of the community, thanks for contributing to Ubuntu! 


The Fridge: Ubuntu 12.04.5 LTS released

Planet Ubuntu - Fri, 2014-08-08 01:05

The Ubuntu team is pleased to announce the release of Ubuntu 12.04.5 LTS (Long-Term Support) for its Desktop, Server, Cloud, and Core products, as well as other flavours of Ubuntu with long-term support.

As with 12.04.4, 12.04.5 contains an updated kernel and X stack for new installations on x86 architectures.

As usual, this point release includes many updates, and updated installation media has been provided so that fewer updates will need to be downloaded after installation. These include security updates and corrections for other high-impact bugs, with a focus on maintaining stability and compatibility with Ubuntu 12.04 LTS.

Kubuntu 12.04.5 LTS, Edubuntu 12.04.5 LTS, and Ubuntu Studio 12.04.5 LTS are also now available. For some of these, more details can be found in their announcements:

Kubuntu: http://www.kubuntu.org/news/kubuntu-12.04.5

Edubuntu: http://www.edubuntu.org/news/12.04.5-release

Ubuntu Studio: http://ubuntustudio.org/2014/08/ubuntu-studio-12-04-5-point-release

To get Ubuntu 12.04.5

In order to download Ubuntu 12.04.5, visit:

http://www.ubuntu.com/download

It may take a little while before the 12.04.5 images show up at the link above; if they aren't there yet, you can also download them directly from the URL below or from a nearby mirror:

http://releases.ubuntu.com/12.04.5/

Users of Ubuntu 10.04 will be offered an automatic upgrade to 12.04.5 via Update Manager. For further information about upgrading, see:

https://help.ubuntu.com/community/PreciseUpgrades

As always, upgrades to the latest version of Ubuntu are entirely free of charge.

We recommend that all users read the 12.04.5 release notes, which document caveats and workarounds for known issues, as well as more in-depth notes on the release itself. They are available at:

https://wiki.ubuntu.com/PrecisePangolin/ReleaseNotes

If you have a question, or if you think you may have found a bug but aren’t sure, you can try asking in any of the following places:

#ubuntu on irc.freenode.net

http://lists.ubuntu.com/mailman/listinfo/ubuntu-users

http://www.ubuntuforums.org

http://askubuntu.com

Help Shape Ubuntu

If you would like to help shape Ubuntu, take a look at the list of ways you can participate at:

http://www.ubuntu.com/community/get-involved

About Ubuntu

Ubuntu is a full-featured Linux distribution for desktops, laptops, clouds and servers, with a fast and easy installation and regular releases. A tightly-integrated selection of excellent applications is included, and an incredible variety of add-on software is just a few clicks away.

Professional services including support are available from Canonical and hundreds of other companies around the world. For more information about support, visit:

http://www.ubuntu.com/support

More Information

You can learn more about Ubuntu and about this release on our website listed below:

http://www.ubuntu.com/

To sign up for future Ubuntu announcements, please subscribe to Ubuntu’s very low volume announcement list at:

http://lists.ubuntu.com/mailman/listinfo/ubuntu-announce

Originally posted to the ubuntu-announce mailing list on Fri Aug 8 00:06:15 UTC 2014 by Stéphane Graber

Kubuntu: Kubuntu Precise LTS Release 12.04.5

Planet Ubuntu - Fri, 2014-08-08 00:58
Our current LTS release has had an update, 12.04.5. It adds all the current bugfixes and security updates to keep your LTS systems fresh. Download now.

Edubuntu: Edubuntu 12.04.5 Release Announcement

Planet Ubuntu - Thu, 2014-08-07 22:25
Edubuntu Long-Term Support

Edubuntu 12.04.5 LTS is the fifth Long Term Support (LTS) version of Edubuntu, as part of Edubuntu 12.04's 5-year support cycle.

Edubuntu's Fifth LTS Point Release

The Edubuntu team is proud to announce the release of Edubuntu 12.04.5. This is the last of five LTS point releases for this LTS lifecycle. The point release includes all the bug fixes and improvements that have been applied to Edubuntu 12.04 LTS since it was released. It also includes updated hardware support and installer fixes.

If you have an Edubuntu 12.04 LTS system and have applied all the available updates, then your system will already be on 12.04.5 LTS and there is no need to re-install. For new installations, installing from the updated media is recommended, since it will be installable on more systems than before and will require far fewer updates than installing from the original 12.04 LTS media.

This release ships with a backported kernel and X stack, which enables users to make use of more recently released hardware. Current users of Edubuntu 12.04 won't be automatically updated to this backported stack; you can, however, manually install the packages.
  • Information on where to download the Edubuntu 12.04.5 LTS media is available from the Downloads page.
  • We do not ship free Edubuntu discs at this time; however, there are third-party distributors who ship discs at reasonable prices, listed on the Edubuntu Marketplace.
Although Edubuntu 10.04 systems will offer an upgrade to 12.04.5, it is not an officially supported upgrade path. Testing has, however, indicated that this usually works if you're prepared to make some minor adjustments afterwards. To ensure that the Edubuntu 12.04 LTS series will continue to work on the latest hardware, as well as keeping quality high right out of the box, we will release another point release before the next long term support release is made available in 2014. More information is available on the release schedule page on the Ubuntu wiki. The release notes are available from the Ubuntu Wiki. Thanks for your support and interest in Edubuntu!

Ubuntu Podcast from the UK LoCo: S07E19 – The One Where No One’s Ready

Planet Ubuntu - Thu, 2014-08-07 22:15

Tony Whitmore, Laura Cowen, Alan Pope, and Mark Johnson are all together in Studio L for Season Seven, Episode Nineteen of the Ubuntu Podcast!


In this week’s show:-

We’ll be back next week, when we’ll be discussing whether it is the Year of the Linux Desktop (via the Chromebook), and we’ll go through your feedback.

Please send your comments and suggestions to: podcast@ubuntu-uk.org
Join us on IRC in #uupc on Freenode
Leave a voicemail via phone: +44 (0) 203 298 1600, sip: podcast@sip.ubuntu-uk.org and skype: ubuntuukpodcast
Follow us on Twitter
Find our Facebook Fan Page
Follow us on Google+

Julian Andres Klode: Configuring an OpenWRT Router as a repeater for a FRITZ!Box with working Multicast

Planet Ubuntu - Wed, 2014-08-06 23:01

For some time now, those crappy FRITZ!Box devices have not supported WDS anymore, only a proprietary solution created by AVM. Now what happens if you have devices in another room that need or want wired access (like TVs or PlayStations), or if you want to extend the range of your network? Buying another FRITZ!Box is not very cost-efficient, so what I did was buy a cheap TP-Link TL-WR841N (available for about 18 euros) and install OpenWRT on it. Here's how I configured it to act as a WiFi bridge.

Basic overview: You configure OpenWRT into station mode (that is, as a WiFi client) and use relayd to relay between the WiFi network and your local network. You also need igmpproxy to proxy multicast packets between those networks, otherwise UPnP and other multicast-based stuff won't work.
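Both relayd and igmpproxy ship as ordinary OpenWRT packages; a minimal sketch of pulling them in with opkg (OpenWRT's package manager), assuming the router already has internet access on its WAN side:

opkg update
opkg install relayd igmpproxy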

I did this on the recent Barrier Breaker RC2. It should work on older versions as well, but I cannot promise it (I did not get igmpproxy to work in Attitude Adjustment, but that was probably my fault).

Note: I don’t know if it works with IPv6, I only use IPv4.

You might want to re-start (or start) services after the steps, or reboot the router afterwards.
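For the record, restarting the relevant services by hand looks roughly like this (a sketch using the standard OpenWRT init scripts; rebooting achieves the same thing):

/etc/init.d/network restart
/etc/init.d/firewall restart
/etc/init.d/dnsmasq restart
/etc/init.d/relayd restart
/etc/init.d/igmpproxy restart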

Configuring WiFi connection to the FRITZ!Box

Add to: /etc/config/network

config interface 'wwan'
    option proto 'dhcp'

(you can use any other name you want instead of wwan, and a static IP if you prefer. This will be your uplink to the FRITZ!Box.)

Replace wifi-iface in: /etc/config/wireless:

config wifi-iface
    option device 'radio0'
    option mode 'sta'
    option ssid 'FRITZ!Box 7270'
    option encryption 'psk2'
    option key 'PASSWORD'
    option network 'wwan'

(adjust values as needed for your network)

Setting up the pseudo-bridge

First, add wwan to the list of networks in the lan zone in the firewall. Then add a forward rule for the lan network (not sure if needed). Afterwards, configure a new stabridge network and disable the built-in DHCP server.

Diff for /etc/config/firewall

@@ -10,2 +10,3 @@ config zone
     list network 'lan'
+    list network 'wwan'
     option input 'ACCEPT'
@@ -28,2 +29,7 @@ config forwarding
 
+# Not sure if actually needed
+config forwarding
+    option src 'lan'
+    option dest 'lan'
+
 config rule

Add to /etc/config/network

config interface 'stabridge'
    option proto 'relay'
    option network 'lan wwan'
    option ipaddr '192.168.178.26'

(Replace 192.168.178.26 with the IP address your OpenWRT router was given by the Fritz!Box on wlan0)

Also make sure to ignore dhcp on the lan interface, as the DHCP server of the FRITZ!Box will be used:

Diff for /etc/config/dhcp

@@ -24,2 +24,3 @@ config dhcp 'lan'
     option ra 'server'
+    option ignore '1'

Proxying multicast packets

For proxying multicast packets, we need to install igmpproxy and configure it:

Add to: /etc/config/firewall

# Configuration for igmpproxy
config rule
    option src lan
    option proto igmp
    option target ACCEPT

config rule
    option src lan
    option proto udp
    option dest lan
    option target ACCEPT

(OpenWRT wiki gives a different 2nd rule now, but this is the one I currently use)

Replace /etc/config/igmpproxy with:

config igmpproxy
    option quickleave 1

config phyint
    option network wwan
    option direction upstream
    list altnet 192.168.178.0/24

config phyint
    option network lan
    option direction downstream
    list altnet 192.168.178.0/24

(Assuming Fritz!Box uses the 192.168.178.0/24 network)

Don’t forget to enable the igmpproxy script:

# /etc/init.d/igmpproxy enable

Optional: Repeat the WiFi signal

If you want to repeat your WiFi signal, all you need to do is add a second wifi-iface to your /etc/config/wireless.

config wifi-iface
    option device 'radio0'
    option mode 'ap'
    option network 'lan'
    option encryption 'psk2+tkip+ccmp'
    option key 'PASSWORD'
    option ssid 'YourForwardingSSID'

Known Issues

If I was connected via WiFi to the OpenWRT AP and then switch to the FRITZ!Box AP, I cannot connect to the OpenWRT router for some time.

The igmpproxy tool writes to the log about changing routes.

Future Work

I’ll try to get the FRITZ!Box replaced by something that runs OpenWRT as well, and then use OpenWRT’s WDS support for repeating, because the FRITZ!Box 7270v2 is largely crap: loading a page in its web frontend takes 5 seconds (idle) to 20 seconds (with one download running), and its WiFi speed is limited to about 20 Mbit/s in WiFi-n mode (2.4 GHz or 5 GHz, it does not matter, 40 MHz channel). It seems the 7270 has a really slow CPU.


Filed under: OpenWRT

Stephan Adig: Python and JavaScript?

Planet Ubuntu - Wed, 2014-08-06 20:38

Is it possible to combine the world’s most amazing prototyping language (aka Python) with JavaScript?

Yes, it is. Welcome to PyV8!

Prerequisites

So, first we need some libraries and modules:

  1. Boost with Python Support

    • On Ubuntu/Debian you just do apt-get install libboost-python-dev; for Fedora/RHEL use your package manager.
    • On Mac OS X:
      • When you are on Homebrew do this:

        brew install boost --with-python

  2. PyV8 Module

    (You need Subversion installed for this)

    mkdir pyv8
    cd pyv8
    svn co http://pyv8.googlecode.com/svn/trunk/
    cd trunk

    When you are on Mac OS X you need to add this first:

    export CXXFLAGS='-std=c++11 -stdlib=libc++ -mmacosx-version-min=10.8'
    export LDFLAGS=-lc++

    Now just do this:

    python ./setup.py install

    And wait !

    (A word of advice: when you are installing Boost from your OS packages, make sure you are using the Python version that Boost was compiled against.)

  3. Luck ;)

    Means, if this doesn’t work, you have to ask Google.

Now, how does it work?

Easy, easy, my friend.
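To give an idea of how little ceremony is involved, a quick smoke test from the shell might look like this (a sketch assuming PyV8 installed cleanly into your default Python; it creates a JS context, evaluates an expression and prints the result):

python -c "import PyV8; ctx = PyV8.JSContext(); ctx.enter(); print(ctx.eval('6 * 7'))"

If that prints 42, the module works.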

The question is, why should we use JavaScript inside a Python tool?

Well, while doing some crazy stuff with our Elasticsearch cluster, I wrote a small Python script to do some nifty parsing and correlation. After not even 30 minutes I had a command-line tool which read in a YAML file with ES queries written in YAML format, and an automated way to query more than one ES cluster.

So, let’s say you have a YAML like this:

Example YAML Query File:

title:
  name: "Example YAML Query File"

esq:
  hosts:
    es_cluster_1:
      fqdn: "localhost"
      port: 9200
    es_cluster_2:
      fqdn: "localhost"
      port: 10200
    es_cluster_3:
      fqdn: "localhost"
      port: 11200

  indices:
    - index:
        id: "all"
        name: "_all"
        all: true
    - index:
        id: "events_for_three_days"
        name: "[events-]YYYY-MM-DD"
        type: "failover"
        days_from_today: 3
    - index:
        id: "events_from_to"
        name: "[events-]YYYY-MM-DD"
        type: "failover"
        interval:
          from: "2014-08-01"
          to: "2014-08-04"

  query:
    on_index:
      all:
        filtered:
          filter:
            term:
              code: "INFO"
      events_for_three_days:
        filtered:
          filter:
            term:
              code: "ERROR"
      events_from_to:
        filtered:
          filter:
            term:
              code: "DEBUG"

No, this is not really what we are doing :) But I think you get the idea.

Now, in this example we have three different Elasticsearch clusters to search in; all three have different data, but all share the same event format. My idea was to generate reports of the requested data, either for a single ES cluster or correlated over all three. I wanted to have that functionality inside the YAML file, so that everybody who writes such a YAML file can also add some processing code. Well, the result set of an ES search query is a JSON blob, and thanks to elasticsearch.py it is converted to a Python dictionary.

Huh… so why don’t you just put Python code inside the YAML and eval it inside your Python script?

Well, if you have ever written front-end/back-end web apps, you know it’s pretty difficult to write front-end Python scripts that run inside your browser. So, JavaScript to the rescue. Everybody knows how easy it is to deal with JSON object structures inside JavaScript, so why don’t we use this knowledge and invite users who are not familiar with Python to participate?

Now, think about an idea like this:

Example YAML Query File, extended with a processing section:

title:
  name: "Example YAML Query File"

esq:
  # hosts, indices and query sections exactly as in the previous example
  ...

  processing:
    for:
      report1: |
        function find_in_collection(collection, search_entry) {
            for (entry in collection) {
                if (search_entry[entry]['msg'] == collection[entry]['msg']) {
                    return collection[entry];
                }
            }
            return null;
        }

        function correlate_cluster_1_and_cluster_2(collections) {
            collection_cluster_1 = collections["cluster_1"]["hits"]["hits"];
            collection_cluster_2 = collections["cluster_2"]["hits"]["hits"];
            similar_entries = [];
            for (entry in collection_cluster_1) {
                similar_entry = null;
                similar_entry = find_in_collection(collection_cluster_2, collection_cluster_1[entry]);
                if (similar_entry != null) {
                    similar_entries.push(similar_entry);
                }
            }
            result = {'similar_entries': similar_entries};
            return(result);
        }

        var result = correlate_cluster_1_and_cluster_2(collections);
        // this will return the data to the python method result
        result

  output:
    reports:
      report1: |

(This is not my actual code, I just scribbled it down, so don’t lynch me if this fails)

So, I am passing a Python dict with all the query result sets from the ES clusters (defined at the top of the YAML file) to a PyV8 context object; I can then access those collections inside my JavaScript and return a JavaScript hash/object. In the end, after the JavaScript processing, there could be a Jinja template inside the YAML file, and we could pass the JavaScript results into this template to print a nice report. There are many things you can do with this.

So, let’s see it in python code:

Example Python Code (shortened):

# -*- coding: utf-8 -*-
# This will be a short form of this,
# so don't expect that this code will do the reading and validation
# of the YAML file

from elasticsearch import Elasticsearch
import PyV8
from jinja2 import Template


class JSCollections(PyV8.JSClass):

    def __init__(self, *args, **kwargs):
        super(JSCollections, self).__init__()
        self.collections = {}
        if 'collections' in kwargs:
            self.collections = kwargs['collections']

    def write(self, val):
        print(val)


if __name__ == '__main__':
    es_cluster_1 = Elasticsearch({"host": "localhost", "port": 9200})
    es_cluster_2 = Elasticsearch({"host": "localhost", "port": 10200})

    collections = {}
    collections['cluster_1'] = es_cluster_1.search(
        index="_all",
        body={"query": {"filtered": {"filter": {"term": {"code": "DEBUG"}}}}},
        size=100)
    collections['cluster_2'] = es_cluster_2.search(
        index="_all",
        body={"query": {"filtered": {"filter": {"term": {"code": "DEBUG"}}}}},
        size=100)

    js_ctx = PyV8.JSContext(JSCollections(collections=collections))
    js_ctx.enter()

    #
    # here comes the javascript code
    #
    process_result = js_ctx.eval("""
    function find_in_collection(collection, search_entry) {
        for (entry in collection) {
            if (search_entry[entry]['msg'] == collection[entry]['msg']) {
                return collection[entry];
            }
        }
        return null;
    }

    function correlate_cluster_1_and_cluster_2(collections) {
        collection_cluster_1 = collections["cluster_1"]["hits"]["hits"];
        collection_cluster_2 = collections["cluster_2"]["hits"]["hits"];
        similar_entries = [];
        for (entry in collection_cluster_1) {
            similar_entry = null;
            similar_entry = find_in_collection(collection_cluster_2, collection_cluster_1[entry]);
            if (similar_entry != null) {
                similar_entries.push(similar_entry);
            }
        }
        result = {'similar_entries': similar_entries};
        return(result);
    }

    var result = correlate_cluster_1_and_cluster_2(collections);
    // this will return the data to the python method result
    result
    """)

    # back to python
    print("RAW Process Result: {}".format(process_result))

    # create a jinja2 template and print it with the results from javascript processing
    template = Template("""
    """)
    print(template.render(process_result))

Again, just wrote it down, not the actual code, so dunno if it really works.

But still, this is pretty simple.

You can even use JavaScript Events, or JS debuggers, or create your own Server Side Browsers. You can find those examples in the demos directory of the PyV8 Source Tree.

So, this was all a 30-minute proof of concept. Last night I refactored the code, and this morning I thought, well, let’s write a real library for this. So there may be some code on GitHub over the weekend. I’ll let you know.

Oh, before I forget: the idea of writing all this in a YAML file came from working with Juniper’s JunOS PyEZ library, which does something similar. But they use the YAML file as a description for autogenerated Python classes. Very nifty.

Costales: Playing with MAME: Arcade Machines on Ubuntu

Planet Ubuntu - Wed, 2014-08-06 15:30
In my childhood, having a computer at home was a real privilege. Only the father of one friend in our street gang had a desktop machine with a fluorescent green screen and no hard disk, made up for by two 5 1/4" floppy drives, and, if I remember correctly, an 8086 processor.

I got by with my Spectrum, a computer I bought with the greatest excitement in the world, and I'm embarrassed to admit it only ever served me to play games that cost about 1,200 pesetas and had little to do with the arcade originals. And also, of course, to try out the pile of demos that Micromanía magazine included on handy cassettes.

With that said, you can understand how much the arcade machines in the game halls impressed me.

Tron captured the essence of an arcade hall. Extraordinary graphics for the time, loads of colours, animations, a magnificent joystick and buttons (I still mean to buy one like that for the PC, or at least a mini arcade cabinet), sound that even synthesised speech remarkably well... but... it wasn't just that... There was something more in those video games...

Crammed together in the half-light, like us crowding around the best player when he reached the final boss. Over the years, video games stopped appealing to me. Even though the hardware kept getting better and games no longer had technical limitations, they became... how to put it? Boring? More and more they are works of art, with intros that rival a film and long, convoluted stories... but they lack what I found in the arcades: the addiction.
Maybe the rationing of paying 25 pesetas per game caused the craving for another dose the next day, but those games let you get a little (just a little) further the following day. Each day you discovered a new end boss, and after reaching it over several games you worked out the 'trick' of where to stand to kill it...

Now we have the great Steam games for Ubuntu, but the ones that still draw me in are the old arcade games; I still remember spinning the wheel of Super Gran Prix to the rhythm of the accelerator, juggling my way through Metal Slug, crashing in OutRun, kicking gnomes in Golden Axe hungry for potions, or elbowing a whole M.A. in Double Dragon...

And on top of that, these games are free... yes, you heard right: you download a ROM (which is the game) and away you go.

Hadoken!!
Want to try? Open a terminal and copy and paste these commands:
sudo apt-get install mame -y
mame
Close MAME.
cd ~/.mame && mame -cc
gedit ~/.mame/mame.ini
Replace:
$HOME/mame/roms;
with
$HOME/.mame/roms;
Save the file and close the editor.
mkdir ~/.mame/{nvram,memcard,roms,inp,comments,sta,snap,diff}
nautilus ~/.mame/roms
Copy the ROMs you download here, in .zip format.
And be careful: each ROM has to be built for the MAME version you have. This is because MAME, besides being an emulator, aims to be a historical project that documents arcade machines faithfully; in addition to the new games and clones added in each version, many other games are swapped for more "correct" versions, even if they already worked fine before. So if a game tells you files are missing, you are using a ROM made for a different MAME version ;)

Once the ROMs are copied into the ~/.mame/roms directory, we run the emulator again from the terminal with:
mame
And choose the game.

And if hunting down and installing MAME and its ROMs sounds like too much work, there are distros like Advance that automate everything and even come with hundreds of games included. Boot from CD and play :P
... well... I'll leave you now, time for another face-off with Mr. Bison... after all these years... sniff!

Shall we begin? Sources: UPUbuntu & Emulatronia.

Mythbuntu: Mythbuntu 14.04.1 Released

Planet Ubuntu - Wed, 2014-08-06 14:30
Mythbuntu 14.04.1 has been released. This is a point release on our 14.04 LTS release. If you are already on 14.04, you can get these same updates via the normal update process. The 14.04.x series of releases is the Mythbuntu team's second LTS and is supported until shortly after the 16.04 release. You can get the Mythbuntu ISO from our downloads page.

Highlights
  • MythTV 0.27
  • This is our second LTS release (the first being 12.04). See this page for more info.
  • Bug fixes from 14.04 release
Underlying system
  • Underlying Ubuntu updates are found here
MythTV
  • Recent snapshot of the MythTV 0.27 release is included (see 0.27 Release Notes)
  • Mythbuntu theme fixes
We appreciate all comments and would love to hear what you think. Please send comments to our mailing list, post on the forums (with a tag indicating that this is from 14.04 or trusty), or join us in #ubuntu-mythtv on Freenode. As before, if you encounter any issues with anything in this release, please file a bug using the Ubuntu bug tool (ubuntu-bug PACKAGENAME), which automatically collects logs and other important system information, or, if that is not possible, open a ticket directly on Launchpad (http://bugs.launchpad.net/mythbuntu/14.04/).
Known issues
  • If you are upgrading and want to use HTTP Live Streaming, you need to create a Streaming storage group.

Mythbuntu: Mythbuntu 14.04 Released! (Better late than never edition)

Planet Ubuntu - Wed, 2014-08-06 14:27
After some last-minute critical fixes and ISO respins by the release team (thanks again Infinity, we owe you and the rest of the release team a beer), the Mythbuntu team is proud to announce we have removed our socks (see relevant post) and released Mythbuntu 14.04 LTS. This is the Mythbuntu team's second LTS release and is supported until shortly after the 16.04 release. With this release, we are providing mirroring on sponsored mirrors and torrents. It is very important to note that this release is only compatible with MythTV 0.27 systems. The MythTV component of previous Mythbuntu releases can be upgraded to a compatible MythTV version by using the Mythbuntu Repos. For a more detailed explanation, see here. You can get the Mythbuntu 14.04 ISO from our downloads page.

Highlights
  • MythTV 0.27 (2:0.27.0+fixes.20140324.8ee257c-0ubuntu3)
  • This is our second LTS release (the first being 12.04). See this page for more info.
Underlying system
  • Underlying Ubuntu updates are found here
MythTV
  • Recent snapshot of the MythTV 0.27 release is included (see 0.27 Release Notes)
  • Mythbuntu theme fixes
We appreciate all comments and would love to hear what you think. Please send comments to our mailing list, post on the forums (with a tag indicating that this is from 14.04 or trusty), or join us in #ubuntu-mythtv on Freenode. As before, if you encounter any issues with anything in this release, please file a bug using the Ubuntu bug tool (ubuntu-bug PACKAGENAME), which automatically collects logs and other important system information, or, if that is not possible, open a ticket directly on Launchpad (http://bugs.launchpad.net/mythbuntu/14.04/).
Known issues
  • Upgraders should hold off until our first point release (14.04.1), coming this summer. (See bug #1307546.)
  • Don't select VNC during install. It can be activated later. (Bug #1309752)
  • If you are upgrading and want to use HTTP Live Streaming, you need to create a Streaming storage group.

Michael Hall: Web apps vs. Native apps in Ubuntu

Planet Ubuntu - Wed, 2014-08-06 09:00

A couple of days ago internet luminaries Stuart Langridge and Bryan Lunduke had a long Twitter conversation about webapps and modern desktops, specifically around whether or not web apps are better than, worse than, or equal to native apps. It was a fun thread to read, but I think there was still a lack of clarity about what exactly makes them different in the first place.

Webapps are basically a collection of markup files (HTML) glued together with a dynamic scripting language (JavaScript), executed in a specialized container (a browser) that you download from a remote source (a web server). Whereas native apps, in Ubuntu, are a collection of modeling files (QML) glued together with a dynamic scripting language (JavaScript), executed in a specialized container (qmlscene) that you download from a remote source (the App Store). The only real difference is from where, and how often, the code for the app is downloaded.

The biggest obstacle that webapps have faced in reaching parity with so called “native” apps is the fact that they have historically been cut off from the integration options offered by a platform. They are often little more than glorified bookmarks in a chrome-less browser window.  Some efforts, like local storage and desktop notifications, have tried to bridge this gap, but they’ve been limited by a need for consensus and having to support only the lowest common set of functionality.

With Unity, several years ago, we pioneered a deeper integration with our webapps-api, which not only gave webapps their own launcher and icon in the window switcher, but also exposed new APIs to let them interact with the desktop shell in the same way as native apps. You could even control audio players in webapps from the sound indicator controls when that webapp wasn’t focused.

With the new Ubuntu SDK and webbrowser-app we’re expanding this even further. For example, the Facebook webapp on the Ubuntu phone actually comes with a fair amount of QML that lets you share content to it from other apps, just like you can with local HTML5 or QML apps. You will also soon be getting notifications for the Facebook app, even when it’s not running. All of this is only an extension of the remote code that is being loaded from Facebook’s own server.

Given all of this, I have to agree with Stuart (which doesn’t happen often) that webapps should be treated like native apps, and increasingly will be on modern computing platforms, because the distinction between them is either being chipped away or becoming irrelevant. I don’t think they will replace traditional apps, but I think that in the next few years the difference is no longer going to matter, and a platform that can’t handle that is going to be relegated to history.

Hajime MIZUNO: [Event]Open Source Conference 2014 Kansai@Kyoto

Planet Ubuntu - Wed, 2014-08-06 06:51

On August 1-2, 2014, Open Source Conference 2014 Kansai@Kyoto was held in Kyoto.
(http://www.ospn.jp/osc2014-kyoto/)






This event is the largest gathering of the OSS community in Japan. We participate as exhibitors and speakers every year, and of course this year was no exception.




Hiroshi Chonan was in charge of the seminar. He talked about Trusty and Utopic.

The highlights of the event were as follows:


The NetBSD team displayed OMRON's Luna workstation. Luna (as opposed to Sun) is a very old UNIX workstation from Japan. However, the newest Twitter client, Mikutter, runs on this workstation on NetBSD!


Mikutter is a simple, powerful and moe-ful Twitter client. It is very popular among young geeks. (We call them "Teokure" in Japan.)


This is Takayoshi Okano. He is one of the greatest FLOSS translators in Japan, and an OpenStreetMap mapper too.

In this way, the Japanese FLOSS community is very active and happy!
