news aggregator

Lubuntu Blog: Box support for MATE

Planet Ubuntu - Mon, 2014-07-14 23:47
The Box theme support continues growing, covering more and more environments. Now we're celebrating that the MATE desktop environment, a GTK3 fork of the traditional GNOME 2, will have its own Ubuntu flavour, named Ubuntu MATE Remix. Once I tested it, I noticed something familiar was missing: our beloved Lubuntu spirit. So here begins the (experimental) theme support. It'll be available to download

Nicholas Skaggs: Utopic Test Writing Hackfest

Planet Ubuntu - Mon, 2014-07-14 18:09
We're having our first hackfest of the utopic cycle this week on Tuesday, July 15th. You can catch us live in a hangout on ubuntuonair.com starting at 1900 UTC. Everything you need to know can be found on the wiki page for the event.

During the hangout, we'll be demonstrating writing a new manual testcase, as well as reviewing how to write automated testcases. We'll also be answering any questions you have about contributing a testcase.

We need your help to write some new testcases! We're targeting both manual and automated testcases, so everyone is welcome to pitch in.

We are looking at writing and finishing some testcases for Ubuntu Studio and some other flavors. All you need is some basic tester knowledge and the ability to write in English.

If you know python, we are also going to be hacking on the toolkit helper for autopilot for the ubuntu sdk. That's a mouthful! Specifically it's the helpers that we use for writing autopilot tests against ubuntu-sdk applications. All app developers make use of these helpers, and we need more of them to ensure we have good coverage for all components developers use. 

Don't worry about getting stuck, we'll be around to help, and there are guides to, well, guide you!

Hope to see everyone there!

Ubuntu App Developer Blog: Content Hub to replace Friends API

Planet Ubuntu - Mon, 2014-07-14 16:52

As part of the continued development of the Ubuntu platform, the Content Hub has gained the ability to share links (and soon text) as a content type, just as it has been able to share images and other file-based content in the past. This allows applications to more easily, and more consistently, share things to a user’s social media accounts.

Consolidating APIs


Thanks to the collaborative work going on between the Content Hub and the Ubuntu Webapps developers, it is now possible for remote websites to be packaged with local user scripts that provide deep integration with our platform services. One of the first to take advantage of this is the Facebook webapp, which, while displaying remote content via a web browser wrapper, is also a Content Hub importer. This means that when you go to share an image from the Gallery app, the Facebook webapp is displayed as an optional sharing target for that image. If you select it, it will use the Facebook web interface to upload that image to your timeline, without having to go through the separate Friends API.

This work not only brings the social sharing user experience in line with the rest of the system's content sharing experience, it also provides a much simpler API for application developers to use for accomplishing the same thing. As a result, the Friends API is being deprecated in favor of the new Content Hub functionality.

What it means for App Devs

Because this is an API change, there are things that you as an app developer need to be aware of. First, though the API is being deprecated immediately, it is not being removed from the device images until after the release of 14.10, which will continue to support the ubuntu-sdk-14.04 framework which included the Friends API. The API will not be included in the final ubuntu-sdk-14.10 framework, or any new 14.10-dev frameworks after -dev2.

After the 14.10 release in October, when device images start to build for utopic+1, the ubuntu-sdk-14.04 framework will no longer be on the images. So if you haven’t updated your Click package by then to use the ubuntu-sdk-14.10 framework, it won’t be available to install on devices with the new image. If you are not using the Friends API, this would simply be a matter of changing your package metadata to the new framework version.  For new apps, it will default to the newer version to begin with, so you shouldn’t have to do anything.

David Tomaschik: Passing Android Traffic through Burp

Planet Ubuntu - Sun, 2014-07-13 20:57

I wanted to take a look at all HTTP(S) traffic coming from an Android device, even if applications made direct connections without a proxy, so I set up a transparent Burp proxy. I decided to put the Proxy on my Kali VM on my laptop, but didn't want to run an AP on there, so I needed to get the traffic to there.

Network Setup

The diagram shows that my wireless lab is on a separate subnet from the rest of my network, including my laptop. The lab network is a NAT run by IPTables on the Virtual Router. While I certainly could've ARP poisoned the connection between the Internet Router and the Virtual Router, or even added a static route, I wanted a cleaner solution that would be easier to enable/disable.

Setting up the Redirect

I decided to use IPTables on the virtual router to redirect the traffic to my Kali Laptop. Furthermore, I decided to enable/disable the redirect based on logging in/out via SSH, but I needed to make sure the redirect would get torn down even if there's not a clean logout: i.e., the VM crashes, the SSH connection gets interrupted, etc. Enter pam_exec. By using the pam_exec module, we can have an arbitrary command run on log in/out, which can setup and reset the IPTables REDIRECT via an SSH tunnel to my Burp Proxy.

In order to get the command executed on any login/logout, I added the following line to /etc/pam.d/common-session:

session optional pam_exec.so log=/var/log/burp.log /opt/burp.sh

This launches the following script, which checks if it's being invoked for the right user, for SSH sessions, and then inserts or deletes the relevant IPTables rules.

#!/bin/bash

BURP_PORT=8080
BURP_USER=tap
LAN_IF=eth1

set -o nounset

function ipt_command {
    ACTION=$1
    echo iptables -t nat $ACTION PREROUTING -i $LAN_IF -p tcp -m multiport --dports 80,443 -j REDIRECT --to-ports $BURP_PORT\;
    echo iptables $ACTION INPUT -i $LAN_IF -p tcp --dport $BURP_PORT -j ACCEPT\;
}

if [ $PAM_USER != $BURP_USER ] ; then
    exit 0
fi

if [ $PAM_TTY != "ssh" ] ; then
    exit 0
fi

if [ $PAM_TYPE == "open_session" ] ; then
    CMD=`ipt_command -I`
elif [ $PAM_TYPE == "close_session" ] ; then
    CMD=`ipt_command -D`
fi

date
echo $CMD
eval $CMD

This redirects all traffic incoming from $LAN_IF destined for ports 80 and 443 to local port 8080. This does have the downside of missing traffic on other ports, but this will get nearly all HTTP(S) traffic.
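For reference, with the script's defaults (LAN_IF=eth1, BURP_PORT=8080), the two rules generated by ipt_command on login expand to:

# Redirect incoming HTTP/HTTPS from the lab interface to the local Burp port
iptables -t nat -I PREROUTING -i eth1 -p tcp -m multiport --dports 80,443 -j REDIRECT --to-ports 8080
# Accept the redirected traffic on the Burp port
iptables -I INPUT -i eth1 -p tcp --dport 8080 -j ACCEPT

(On logout the same rules are removed with -D instead of -I.)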

Of course, since the IPTables REDIRECT target still maintains the same interface as the original incoming connection, we need to allow our SSH Port Forward to bind to all interfaces. Add this line to /etc/ssh/sshd_config and restart SSH:

GatewayPorts clientspecified

Setting up Burp and SSH

Burp's setup is pretty straightforward, but since we're not configuring a proxy in our client application, we'll need to use invisible proxying mode. I actually put invisible proxying on a separate port (8081) so I have 8080 setup as a regular proxy. I also use the per-host certificate setting to get the "best" SSL experience.

It turns out that there's an issue with OpenJDK 6 and SSL certificates. Apparently it will advertise algorithms not actually available, and then libnss will throw an exception, causing the connection to fail, and the client will retry with SSLv3 without SNI, preventing Burp from creating proper certificates. It can be worked around by disabling NSS in Java. In /etc/java-6-openjdk/security/java.security, comment out the line with security.provider.9=sun.security.pkcs11.SunPKCS11 ${java.home}/lib/security/nss.cfg.
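In other words, after the edit the relevant line in /etc/java-6-openjdk/security/java.security should simply end up commented out:

# Disable the NSS PKCS#11 provider to work around the SSL/SNI issue described above
#security.provider.9=sun.security.pkcs11.SunPKCS11 ${java.home}/lib/security/nss.cfg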

Forwarding the port over to the wifilab server is pretty straightforward. You can either use the -R command-line option, or better, set things up in ~/.ssh/config.

Host wifitap
    User tap
    Hostname wifilab
    RemoteForward *:8080 localhost:8081

This logs in as user tap on host wifilab, forwarding local port 8081 to port 8080 on the wifilab machine. The * for a hostname is to ensure it binds to all interfaces (0.0.0.0), not just localhost.
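If you'd rather not touch ~/.ssh/config, the equivalent one-off command using the -R option mentioned above should look roughly like this:

ssh -R '*:8080:localhost:8081' tap@wifilab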

Setting up Android

At this point, you should have a good setup for intercepting traffic from any client of the WiFi lab, but since I started off wanting to intercept Android traffic, let's optimize for that by installing our certificate. You can install it as a user certificate, but I'd rather do it as a system cert, and my testing tablet is already rooted, so it's easy enough.

You'll want to start by exporting the certificate from Burp and saving it to a file, say burp.der.

Android's system certificate store is in /system/etc/security/cacerts, and expects OpenSSL-hashed naming, like a0b1c2d3.0 for the certificate names. Another complication is that it's looking for PEM-formatted certificates, and the export from Burp is DER-formatted. We'll fix all that up in one chain of OpenSSL commands:

(openssl x509 -inform DER -outform PEM -in burp.der; openssl x509 -inform DER -in burp.der -text -fingerprint -noout ) > /tmp/`openssl x509 -inform DER -in burp.der -subject_hash -noout`.0

Android before ICS (4.0) uses OpenSSL versions below 1.0.0, so you'll need to use -subject_hash_old if you're using an older version of Android. Installing is a pretty simple task (replace HASH.0 with the filename produced by the command above):

$ adb push HASH.0 /tmp/HASH.0
$ adb shell
android$ su
android# mount -o remount,rw /system
android# cp /tmp/HASH.0 /system/etc/security/cacerts/
android# chmod 644 /system/etc/security/cacerts/HASH.0
android# reboot

Connect your Android device to your WiFi lab, ssh wifitap from your Kali install running Burp, and you should see your HTTP(S) traffic in Burp (excepting apps that use pinned certificates, that's another matter entirely). You can check your installed certificate from the Android Security Settings.

Good luck with your Android auditing!

Colin King: a final few more features in stress-ng

Planet Ubuntu - Sun, 2014-07-13 16:47
While hoping to get a feature complete stress-ng sooner than later, I found a few more ways to fiendishly stress a system.

Stress-ng 0.01.22 will be landing soon in Ubuntu 14.10 with three more stress mechanisms:
  • CPU affinity stressing; this rapidly changes the CPU affinity of the stress processes just to keep the scheduler busy with wasted effort.
  • Timer stressing using the real-time clock; this allows one to generate a large number of timer interrupts, so it is a useful interrupt saturation test.
  • Directory entry thrashing; this creates and deletes a selectable number of zero-length files and hence populates and destroys directory entries.
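As a rough illustration (the option names here are my assumption, following stress-ng's existing per-stressor flag pattern; check stress-ng --help for the real ones), a run exercising the three new stressors might look like:

stress-ng --affinity 4 --timer 2 --dentry 2 --timeout 60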
I have also removed the need to use rand() for random number generation for some of the stress tests and re-used the faster MWC "random" number generator to add in some well known and very simple math operations for CPU stressing.

Stress-ng now has 15 different simple stress mechanisms that exercise CPU, cache, memory, file system, I/O and CPU schedulers.  I could add more tests, but I think this is a large enough set to allow one to thrash a machine and see how well it performs under pressure.

Lubuntu Blog: PCManFM 1.2.1

Planet Ubuntu - Sat, 2014-07-12 15:34
Another update of our file manager PCManFM, with tons of bug fixes and new implementations: fixed dragging and dropping icons behavior; fixed icons positioning; fixed resetting cursor in location bar; corrected folder popup update on loading; reordered ‘View’ menu item; implemented drawing icons of dragged items; etc. Also a huge update and bug fixing in the libfm libraries (1.2.1) too. You can use

Darcy Casselman: New Motherboard: ASUS Z97-A (and Ubuntu)

Planet Ubuntu - Sat, 2014-07-12 05:31

My old desktop was seeing random drive errors on multiple drives, including a drive I only got a few months ago. And since my motherboard was about 5 years old, I decided it was time to replace it.

I asked the KWLUG mailing list if they had any advice on picking motherboards. The consensus seems to be pretty much “it’s still a crapshoot.” But I bit the bullet and reported back:

I bought a motherboard! An ASUS Z97-A

Mostly because I wanted Intel integrated graphics and I’ve got 3 monitors it needs to drive. And I was hoping the mSATA SSD card I got to replace the one in my Dell Mini 9 (that didn’t work) would fit in the m.2 slot. It doesn’t. Oh well.

I wanted to get it all set up while I was off for Canada Day. Except Canada Computers didn’t have any of my preferred CPU options. So I’ll be waiting for that to come in via NewEgg.

I gave myself a budget of about $500 for mobo, CPU and RAM and I’ll end up going over a little bit (mostly tax and shipping), and tried to build the best machine I could for that.

One of the things I did this time that I hadn’t done before was spec out a desktop machine at System76 and used that as a starting point. System76 is more explicit about things like chipsets for desktops than Zareason is. Which would be great, except they’re using the older H87 chipsets.

…Like the latest Ars System Guide Hot Rod. But that's over 6 months old now. And they're balancing their budget against having to buy a graphics card, which I don't want to do.

I still have some unanswered questions about the Z97 chipset. It’s only been out for about a month. So who knows?

My laptop has mostly been my desktop for the last few years. But I want to knock that off because I’ve been developing back and neck problems. My desktop layout is okay ergonomically, at least better than anything I have for the laptop (including and especially my easy chair with a lapdesk, which is comfy, but kind of horrible on the neck). One of the things that’s holding me back is my desktop is 5 years old and was built cheap because I was mostly using it as a server by that point. I really want to make it something I want to use over the laptop (which is a very nice laptop). Which is why I ended up going somewhat upper-mid range.

That’s one of the nice things about building from parts, despite the lack of useful information: This is the 3rd motherboard I’ve put in this case. I replaced the PSU once a couple years ago so it’s quite sufficient to handle the new stuff. I’m keeping my old harddrives. I could keep the graphics card. I’ll need to buy an adapter for the DVD burner (and I’ve yet to decide if I’m going to do that, or buy a new SATA one or just go without). And I can keep my (frankly pretty awesome) monitors. So $500 gets me a kick-ass whole new machine.

Anyway, long story short, I still have a lot of questions about whether this was the best purchase, but I’m hopeful it’s a good one.

Aside: is Canada Computers really the only store in town that keeps desktop CPUs in stock anymore? I couldn’t get into the UW Tech Shop, but since they’re mostly iPads and crap now, I’m not optimistic. Computer XS doesn’t (at least the Waterloo one). Future Shop and Best Buy don’t. I even went into Neutron for the first time in over 15 years. Nope. Nobody.

It… didn’t go as well as I’d hoped:

So, anyway, I got the motherboard, CPU and put it all in my old case.

I booted up and all three monitors came up without any fuss, which has never happened for me. Awesome! This is great!

Then I tried to play a game.

Apparently the current snd_hda_intel ALSA drivers don't like H97 and Z97 chipsets. The sound was staticky, crackly and distorted.

I've spent more than a few hours over the last week hunting around for a fix. I installed Windows on a spare harddrive to make sure it wasn't a hardware problem (for which I needed to spend the $20 to get a new SATA DVD drive so I could run the Windows driver disk to actually get video, networking and sound support :P). And I found this thing on the Arch Wiki which, while not fixing the problem, actually made it worse, leading me to conclude there was some sort of sound driver/pulseaudio problem.

Top tip: when trying to sort out sound driver problems for specific hardware, the best thing to do is search for the hardware product id (in my case "8ca0"). That's how I found this:

https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1321421

Hurray! The workaround works great and now I’m back in business!

So I got burned by going with the bleeding edge, and I should know better. But, even though the information isn't widely disseminated yet, there is a fix. And a workaround. I'm sure Ubuntu 14.10 will have no problem with it. It's not as bad as the bleeding edge was years ago. If the fix was easier to find (and I'm going to work on that), it was easier getting going with Ubuntu than it was with Windows.

Paul Tagliamonte: Saturday's the new Sunday

Planet Ubuntu - Sat, 2014-07-12 00:41

Hello, World!

For those of you who enforce my Sundays on me (keep doing that, thank you!), I'll be swapping my Saturdays with my Sundays.

That’s right! In this new brave world, I’ll be taking Saturdays off, not Sundays. Feel free to pester me all day on Sunday, now!

This means, as a logical result, I will not be around tomorrow, Saturday.

Much love.

Dan Chapman: One week in and Dekko has 41 users

Planet Ubuntu - Fri, 2014-07-11 12:47

For those of you who haven’t seen Dekko in the software store, it’s a native IMAP email client for Ubuntu Touch. Dekko is essentially my development/ideas branch of my work on Trojita, which in the end is intended to replace Dekko in the store.

There are a few reasons behind publishing Dekko. Firstly, Trojita prides itself on being standards compliant; it already has a desktop client that uses QtWidgets, supports both Qt4 & Qt5, and also has a technical-preview Harmattan QML front-end. That was great, as most of the initial work for the IMAP parts was in place, so we didn't need to "re-invent the wheel" (for the most part anyway). But we soon hit a point where we had surpassed what had previously been done, and the job became unwinding the intertwined style that QtWidget UIs naturally encourage, so that we can share the same business logic between all front-ends without losing standards compliance, support both Qt4 & Qt5, and maintain Trojita's robust quality standards.

I am still relatively new to C++ so this is like one of those "in at the deep end" scenarios, resulting in the IETF RFC specifications and Qt's documentation having become the majority of my daily reading.  Dekko was born out of the need to understand the separation (call it a learning project) and to devise a way to create common components that can be shared between all front-ends. This "learning project" resulted in a functional but limited capability email client, so I decided to publish it with the hope of getting as much feedback, bug reports or design ideas as possible, and use this to ensure Trojita becomes a rock solid native email client for Ubuntu.

A quick list of current features in Dekko,

  • Support for viewing plain text messages. We cannot show HTML messages, due to not being able to block network requests with QtWebKit's custom URL scheme functionality (if you are an Oxide dev who happens to be reading this, "wink wink"). But it is great for viewing all your Launchpad mail.
  • Navigating the mailbox hierarchy isn't entirely obvious at first (open to new ideas here): if you see a progression arrow on a mailbox, tapping the arrow displays the nested mailboxes. Otherwise, tapping elsewhere shows the messages within that mailbox.
  • Composing and replying to messages; this utilizes the bottom edge, so pulling up on an opened message will set up a reply to it. One thing to note with replying to messages: at the moment it basically does a "reply all" action, so you need to delete or add recipients manually until support for mailing lists and other reply modes is implemented.
  • Supports defining a single sender identity for mail submission.
  • Mark message as deleted, expunge mailbox and auto-expunge on marked for deletion options.
  • Mark all messages as read.
  • Offline, Online and Bandwidth saving mode, perfect for mobile data connections

There is a known bug with the message list view sometimes not updating properly, but it can usually be resolved by closing and reopening that mailbox.

So if you haven’t already please give it a try, and if you have any design/implementation ideas, issues, bugs or anything else you wish to say, please report them to the dekko project on launchpad https://launchpad.net/dekko.

Note: Please don’t file bugs against upstream Trojita, unless you are using a build of Trojita and not a Dekko build. 

And finally a few snaps to whet the appetite

 

Ronnie Tucker: PHP Fixes OpenSSL Flaws in New Releases

Planet Ubuntu - Fri, 2014-07-11 07:00

The PHP Group has released new versions of the popular scripting language that fix a number of bugs, including two in OpenSSL. The flaws fixed in OpenSSL don’t rise to the level of the major bugs such as Heartbleed that have popped up in the last few months. But PHP 5.5.14 and 5.4.30 both contain fixes for the two vulnerabilities, one of which is related to the way that OpenSSL handles timestamps on some certificates, and the other of which also involves timestamps, but in a different way.

Source:

http://threatpost.com/php-fixes-openssl-flaws-in-new-releases/106908

Submitted by: Dennis Fisher

 

Ubuntu Podcast from the UK LoCo: S07E15 – The One with the Thumb

Planet Ubuntu - Thu, 2014-07-10 20:02

Alan Pope, Mark Johnson, Tony Whitmore, and Laura Cowen are in Studio L for Season Seven, Episode Fifteen of the Ubuntu Podcast!

Download OGG | Download MP3

In this week’s show:-

We’ll be back next week, when we’ll be interviewing David Hermann about his MiracleCast project, and we’ll go through your feedback.

Please send your comments and suggestions to: podcast@ubuntu-uk.org
Join us on IRC in #uupc on Freenode
Leave a voicemail via phone: +44 (0) 203 298 1600, sip: podcast@sip.ubuntu-uk.org and skype: ubuntuukpodcast
Follow us on Twitter
Find our Facebook Fan Page
Follow us on Google+

Ronnie Tucker: Why did Microsoft join the Linux Foundation’s AllSeen Alliance?

Planet Ubuntu - Thu, 2014-07-10 06:59

When people think of open source they don’t usually associate Microsoft with it. But the company recently surprised many when it joined the Linux Foundation’s open source AllSeen Alliance. The AllSeen Alliance’s mission is to create a standard for device communications.

Has Microsoft changed its attitude toward open source in general or is there another reason for its uncharacteristic behavior? Computerworld speculates on what might have motivated Microsoft to join the AllSeen Alliance.

Source:

http://www.itworld.com/open-source/425651/why-did-microsoft-join-linux-foundations-allseen-alliance

Submitted by: Jim Lynch

Dustin Kirkland: Scalable, Parallel Video Transcoding on Linux

Planet Ubuntu - Thu, 2014-07-10 06:02
Transcoding video is a very resource intensive process.  It can take many minutes to process a small clip, or even hours to process a full movie.
And that's on the home video scale.  When it comes to commercial video production, it can take thousands of machines and hundreds of compute hours to render a full movie.  I had the distinct privilege some time ago to visit WETA Digital in Wellington, New Zealand and tour the render farm that processed The Lord of the Rings trilogy, Avatar, and The Hobbit, etc.  And just a few weeks ago, I visited another quite visionary, cloud savvy digital film processing firm in Hollywood, called Digital Film Tree.
While Windows and Mac OS may be the first platforms that come to mind when you think about front end video production, Linux is far more widely used for batch video processing, with Ubuntu, in particular, being used extensively at both WETA Digital and Digital Film Tree, among others.
There are numerous, excellent, open source video transcoding and processing tools freely available in Ubuntu, including libav-tools, ffmpeg, mencoder, and handbrake.
Surprisingly, however, none of those support parallel computing easily or out of the box.  And disappointingly, I couldn't find any MPI support readily available either.
I happened to have an Orange Box for a few days recently, so I decided to tackle the problem and develop a scalable, parallel video transcoding solution myself.  I'm delighted to share the result with you today!
While I could have worked with any of a number of tools, I settled on avconv (the successor(?) of ffmpeg), as it was the first one that I got working well on my laptop, before scaling it out to the cluster.
I designed an approach on my whiteboard, in fact quite similar to some work I did parallelizing and scaling the john-the-ripper password quality checker.

At a high level, the algorithm looks like this:
  1. Create a shared network filesystem, simultaneously readable and writable by all nodes
  2. Have the master node split the work into even sized chunks for each worker
  3. Have each worker process their segment of the video, and raise a flag when done
  4. Have the master node wait for each of the all-done flags, and then concatenate the result
And that's exactly what I implemented in a new transcode charm and transcode-cluster bundle.  It provides linear scalability and performance improvements, as you add additional units to the cluster.  A transcode job that takes 24 minutes on a single node is down to 3 minutes on 8 worker nodes in the Orange Box, using Juju and MAAS against physical hardware nodes.
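For a rough idea of how step 2 above could be done, the even-sized chunks can be derived from the clip duration; the following is only a sketch under my own naming ($filename and NODES are placeholders), not the charm's actual code:

# Sketch: carve the input into one even time slice per worker
DURATION=$(avprobe -show_format "$filename" 2>/dev/null | awk -F= '/^duration/ {print int($2)}')
NODES=8                                        # number of worker units in the cluster
LENGTH=$(( (DURATION + NODES - 1) / NODES ))   # round up so the tail of the clip isn't dropped
for i in $(seq 0 $((NODES - 1))); do
    echo "worker $i: start=$((i * LENGTH))s length=${LENGTH}s"
done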


For the curious, the real magic is in the config-changed hook, which has decent inline documentation.



The trick, for anyone who might make their way into this by way of various StackExchange questions and (incorrect) answers, is in the command that splits up the original video (around line 54):

avconv -ss $start_time -i $filename -t $length -s $size -vcodec libx264 -acodec aac -bsf:v h264_mp4toannexb -f mpegts -strict experimental -y ${filename}.part${current_node}.ts

And the one that puts it back together (around line 72):
avconv -i concat:"$concat" -c copy -bsf:a aac_adtstoasc -y ${filename}_${size}_x264_aac.${format}
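In case the concat: syntax isn't obvious, $concat is just the ordered, '|'-separated list of the per-node .ts segments; assuming the part-file naming from the split step, building it could look something like:

# Sketch: assemble the '|'-separated segment list for avconv's concat protocol
concat=$(ls -v ${filename}.part*.ts | paste -sd'|')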

I found this post and this documentation particularly helpful in understanding and solving the problem.

In any case, once deployed, my cluster bundle looks like this.  8 units of transcoders, all connected to a shared filesystem, and performance monitoring too.


I was able to leverage the shared-fs relation provided by the nfs charm, as well as the ganglia charm to monitor the utilization of the cluster.  You can see the spikes in the cpu, disk, and network in the graphs below, during the course of a transcode job.




For my testing, I downloaded the movie Code Rush, freely available under the CC-BY-NC-SA 3.0 license.  If you haven't seen it, it's an excellent documentary about the open source software around Netscape/Mozilla/Firefox and the dotcom bubble of the late 1990s.

Oddly enough, the stock, 746MB high quality MP4 video doesn't play in Firefox, since it's an mpeg4 stream, rather than H264.  Fail.  (Yes, of course I could have used mplayer, vlc, etc., that's not the point ;-)


Perhaps one of the most useful, intriguing features of HTML5 is its support for embedding multimedia, video, and sound into webpages.  HTML5 even supports multiple video formats.  Sounds nice, right?  If only it were that simple...  As it turns out, different browsers have, and lack, support for the different formats.  While there is no one format to rule them all, MP4 is supported by the majority of browsers, including the two that I use (Chromium and Firefox).  This matrix from w3schools.com illustrates the mess.
http://www.w3schools.com/html/html5_video.asp
The file format, however, is only half of the story.  The audio and video contents within the file also have to be encoded and compressed with very specific codecs, in order to work properly within the browsers.  For MP4, the video has to be encoded with H264, and the audio with AAC.
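If you want to check which container and codecs a file actually uses before transcoding, avprobe (also in libav-tools) will print the stream details, e.g.:

avprobe Code_Rush.mp4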
Among the various brands of phones, webcams, digital cameras, etc., the output format and codecs are seriously all over the map.  If you've ever wondered what's happening, when you upload a video to YouTube or Facebook, and it's a while before it's ready to be viewed, it's being transcoded and scaled in the background. 
In any case, I find it quite useful to transcode my videos to MP4/H264/AAC format.  And for that, a scalable, parallel computing approach to video processing would be quite helpful.

During the course of the 3 minute run, I liked watching the avconv log files of all of the nodes, using Byobu and Tmux in a tiled split screen format, like this:


Also, the transcode charm installs an Apache2 webserver on each node, so you can expose the service and point a browser to any of the nodes, where you can find the input, output, and intermediary data files, as well as the logs and DONE flags.



Once the job completes, I can simply click on the output file, Code_Rush.mp4_1280x720_x264_aac.mp4, and see that it's now perfectly viewable in the browser!


In case you're curious, I have verified the same charm with a couple of other OGG, AVI, MPEG, and MOV input files, too.


Beyond transcoding the format and codecs, I have also added configuration support within the charm itself to scale the video frame size, too.  This is useful to take a larger video, and scale it down to a more appropriate size, perhaps for a phone or tablet.  Again, this resource intensive procedure perfectly benefits from additional compute units.


File format, audio/video codec, and frame size changes are hardly the extent of video transcoding workloads.  There are hundreds of options and thousands of combinations, as the manpages of avconv and mencoder attest.  All of my scripts and configurations are free software, open source.  Your contributions and extensions are certainly welcome!

In the mean time, I hope you'll take a look at this charm and consider using it, if you have the need to scale up your own video transcoding ;-)

Cheers,
Dustin

Joe Liau: Utopic Unicron[sic]

Planet Ubuntu - Thu, 2014-07-10 04:14

I was playing around with possible designs for a Utopic Unicorn t-shirt when an inevitable (and awesome) typo led me to…

SVG version here.

 

 

 

Paul Tagliamonte: Dell XPS 13

Planet Ubuntu - Thu, 2014-07-10 02:38

More hardware adventures.

I got my Dell XPS13. Amazing.

The good news: this machine is clearly a MacBook Air competitor, and slightly better in nearly every regard except for the battery.


The bad news is that the Intel wireless card needs non-free firmware (I'll be replacing that shortly), and the touchpad's driver isn't fully supported until kernel 3.16. I'm currently building a 3.14 kernel with the patch to send to the kind Debian kernel people. We'll see if that works. Ubuntu Trusty already has the patch, but it didn't get upstreamed. That kinda sucks.

It also shipped with UEFI disabled, defaulting to boot in ‘legacy’ mode. It shipped with Ubuntu, but I was a bit disappointed not to see Ubuntu keys on the machine.

Touchscreen works; in short, stunning. I think I found my new travel buddy. Debian unstable runs great; stable had some issues.

Ronnie Tucker: XFCE App Launcher `WHISKER MENU` sees new release

Planet Ubuntu - Wed, 2014-07-09 06:58

Whisker Menu is an application menu / launcher for Xfce that features a search function so you can easily find the application you want to launch. The menu supports browsing apps by category, you can add applications to favorites and more. The tool is used as the default Xubuntu application menu starting with the latest 14.04 release and in Linux Mint Xfce starting with version 15 (Olivia).

The Whisker Menu PPA was updated to the latest 1.4.0 version recently, and you can use it both to upgrade to the latest version and to install the tool in (X)Ubuntu versions for which Whisker Menu isn’t available in the official repositories (supported versions: Ubuntu 14.04, 13.10 and 12.04 and the corresponding Linux Mint versions). To see what’s different from the previous release, see the changelog on its main website.

Source:

http://www.webupd8.org/2014/06/xfce-app-launcher-whisker-menu-sees-new.html

Submitted by: Andrew

Nicholas Skaggs: Utopic Bug Hug and Testing Day

Planet Ubuntu - Tue, 2014-07-08 19:38
The first testing day of the utopic cycle is coming this week on Thursday, July 10th. You can catch us live in a hangout on ubuntuonair.com starting at 1900 UTC. We'll be demonstrating running and testing the development release of ubuntu, reporting test results, reporting bugs, and doing triage work. We'll also be available to answer your questions and help you get started with testing.

Please join us in testing utopic and helping the next release of ubuntu become the best it can be. Hope to see everyone there!

P.S. We have a team calendar that can help you keep track of the release schedule along with this and other events. Check it out!

The Fridge: Ubuntu Online Summit dates: 4-6 Nov 2014

Planet Ubuntu - Tue, 2014-07-08 18:54

In discussions at the last Online Summit and afterwards, it became clear that we need to bring the summit dates closer to our release dates again. With the Unicorn being released on Oct 23, we decided to pick the following dates for the next Online Summit:

4th – 6th November 2014

This unfortunately won’t leave too much room for a mid-cycle UOS, as it’d get too close to Feature Freeze and other release/freeze dates. Michael Hall will start a discussion on ubuntu-devel-discuss@ about the subject of Ubuntu Online Summit soon, so we can discuss changes and start general planning. Your feedback and help are much appreciated.

If you should want to have any ad-hoc, public planning sessions before the next UOS, we’d like to remind you of Ubuntu On Air, which is a good way to get your discussion recorded and where you can very easily get people involved for the subject. Find out more info on https://wiki.ubuntu.com/OnAir

Originally posted to the community-announce mailing list on Tue Jul 8 10:42:20 UTC 2014 by Daniel Holbach

Jorge Castro: Juju 1.20 is out the door!

Planet Ubuntu - Tue, 2014-07-08 18:48

The following is a guest post from Curtis Hovey, the Juju release manager. You can find the original announcement on the Juju mailing list.

Juju 1.20.0 is released

A new stable release of Juju, juju-core 1.20.0, is now available.

Getting Juju

juju-core 1.20.0 is available for utopic and backported to earlier series in the following PPA:

  • https://launchpad.net/~juju/+archive/stable
New and Notable
  • High Availability
  • Availability Zone Placement
  • Azure Availability Sets
  • Juju debug-log Command Supports Filtering and Works with LXC
  • Constraints Support instance-type
  • The lxc-use-clone Option Makes LXC Faster for Non-Local Providers
  • Support for Multiple NICs with the Same MAC
  • MAAS Network Constraints and Deploy Argument
  • MAAS Provider Supports Placement and add-machine
  • Server-Side API Versioning
High Availability

The juju state-server (bootstrap node) can be placed into high availability mode. Juju will automatically recover when one or more of the state-servers fail. You can use the ‘ensure-availability’ command to create the additional state-servers:

juju ensure-availability

The ‘ensure-availability’ command creates 3 state servers by default, but you may use the ‘-n’ option to specify a larger number. The number of state servers must be odd. The command supports the ‘series’ and ‘constraints’ options like the ‘bootstrap’ command. You can learn more details by running ‘juju ensure-availability --help’
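For example (illustrative values only), a five-node state-server cluster with a memory constraint could be requested like this:

juju ensure-availability -n 5 --constraints mem=4G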

Availability Zone Placement

Juju supports explicit placement of machines to availability zones (AZs), and implicitly spreads units across the available zones.

When bootstrapping or adding a machine, you can specify the availability zone explicitly as a placement directive. e.g.

juju bootstrap --to zone=us-east-1b
juju add-machine zone=us-east-1c

If you don’t specify a zone explicitly, Juju will automatically and uniformly distribute units across the available zones within the region. Assuming the charm and the charm’s service are well written, you can rest assured that IaaS downtime will not affect your application. Commands you already use will ensure your services are always available. e.g.

juju deploy -n 10 <service>

When adding machines without an AZ explicitly specified, or when adding units to a service, the ec2 and openstack providers will now automatically spread instances across all available AZs in the region. The spread is based on density of instance “distribution groups”.

State servers compose a distribution group: when running ‘juju ensure-availability’, state servers will be spread across AZs. Each deployed service (e.g. mysql, redis, whatever) composes a separate distribution group; the AZ spread of one service does not affect the AZ spread of another service.

Amazon’s EC2 and OpenStack Havana-based clouds and newer are supported. This includes HP Cloud. Older versions of OpenStack are not supported.

Azure availability sets

Azure environments can be configured to use availability sets. This feature ensures services are distributed for high availability; as long as at least two units are deployed, Azure guarantees 99.95% availability of the service overall. Exposed ports will be automatically load balanced across all units within the service.

New Azure environments will have support for availability sets by default. To revert to the previous behaviour, the ‘availability-sets-enabled’ option must be set in environments.yaml like so:

availability-sets-enabled: false

Placement is disabled when ‘availability-sets-enabled’ is true. The option cannot be disabled after the environment is bootstrapped.

Juju debug-log Command Supports Filtering and Works with LXC

The ‘debug-log’ command shows the consolidated logs of all juju agents running on all machines in the environment. The command operates like ‘tail -f’ to stream the logs to your terminal. The feature now supports local-provider LXC environments. Several options are available to select which log lines to display.

The ‘lines’ and ‘limit’ options allow you to select the starting log line and how many additional lines to display. The default behaviour is to show the last 10 lines of the log. The ‘lines’ option selects the starting line from the end of the log. The ‘limit’ option restricts the number of lines to show. For example, you can see just 20 lines from last 100 lines of the log like this:

juju debug-log --lines 100 --limit 20

There are many ways to filter the juju log to see just the pertinent information. A juju log line is written in this format:

<entity> <timestamp> <log-level> <module>:<line-no> <message>

The ‘include’ and ‘exclude’ options select the entity that logged the message. An entity is a juju machine or unit agent. The entity names are similar to the names shown by ‘juju status’. You can exclude all the log messages from the bootstrap machine that hosts the state-server like this:

juju debug-log --exclude machine-0

The options can be used multiple times to select the log messages. This example selects all the message from a unit and its machine as reported by status:

juju debug-log --include unit-mysql-0 --include machine-1

The ‘level’ option restricts messages to the specified log-level or greater. The levels from lowest to highest are TRACE, DEBUG, INFO, WARNING, and ERROR. The WARNING and ERROR messages from the log can be seen thusly:

juju debug-log --level WARNING

The ‘include-module’ and ‘exclude-module’ options are used to select the kind of message displayed. The module name is dotted. You can specify all or some of a module name to include or exclude messages from the log. This example progressively excludes more content from the logs:

juju debug-log --exclude-module juju.state.apiserver
juju debug-log --exclude-module juju.state
juju debug-log --exclude-module juju

The ‘include-module’ and ‘exclude-module’ options can be used multiple times to select the modules you are interested in. For example, you can see the juju.cmd and juju.worker messages like this:

juju debug-log --include-module juju.cmd --include-module juju.worker

The ‘debug-log’ command output can be piped to grep to filter the message like this:

juju debug-log --lines 500 | grep amd64

You can learn more by running ‘juju debug-log --help’ and ‘juju help logging’

Constraints Support instance-type

You can specify ‘instance-type’ with the ‘constraints’ option to select a specific image defined by the cloud provider. The ‘instance-type’ constraint can be used with Azure, EC2, HP Cloud, and all OpenStack-based clouds. For example, when creating an EC2 environment, you can specify ‘m1.small’:

juju bootstrap --constraints instance-type=m1.small

Constraints are validated by all providers to ensure value conflicts and unsupported options are rejected. Previously, juju would reconcile such problems and select an image, possibly one that didn’t meet the needs of the service.

The lxc-use-clone Option Makes LXC Faster for Non-Local Providers

When ‘lxc-use-clone’ is set to true, the LXC provisioner will be configured to use cloning regardless of provider type. This option cannot be changed once it is set. You can set the option to true in environments.yaml like this:

lxc-use-clone: true

This speeds up LXC provisioning when using placement with any provider. For example, deploying mysql to a new LXC container on machine 0 will start faster:

juju deploy --to lxc:0 mysql

Support for Multiple NICs with the Same MAC

Juju now supports multiple physical and virtual network interfaces with the same MAC address on the same machine. Juju takes care of this automatically, there is nothing you need to do.

Caution, network settings are not upgraded from 1.18.x to 1.20.x. If you used juju 1.18.x to deploy an environment with specified networks, you must redeploy your environment instead of upgrading to 1.20.0.

The output of ‘juju status’ will include information about networks when there is more than one. The networks will be presented in this manner:

machines:
  ...
services:
  ...
networks:
  net1:
    provider-id: foo
    cidr: 0.1.2.0/24
    vlan-tag: 42

MAAS Network Constraints and Deploy Argument

You can specify which networks to include or exclude as a constraint to the deploy command. The constraint is used to select a machine to deploy the service’s units to. The value of ‘networks’ is a comma-delimited list of juju network names (provided by MaaS). Excluded networks are prefixed with a “^”. For example, this command specifies that the service requires the “logging” and “storage” networks and conflicts with the “db” and “dmz” networks.

juju deploy mongodb --constraints networks=logging,storage,^db,^dmz

The network constraint does not enable the network for the service. It only defines what machine to pick.

Use the ‘deploy’ command’s ‘networks’ option to specify service-specific network requirements. The ‘networks’ option takes a comma-delimited list of juju-specific network names. Juju will enable the networks on the machines that host service units.

Juju networking support is still experimental and under development, currently only supported with the MaaS provider.

juju deploy mongodb --networks=logging,storage

The ‘exclude-network’ option was removed from the deploy command as it is superseded by the constraint option.

There are plans to add support for network constraint and argument with Amazon EC2, Azure, and OpenStack Havana-based clouds like HP Cloud in the future.

MAAS Provider Supports Placement and add-machine

You can specify which MAAS host to place the juju state-server on with the ‘to’ option. To bootstrap on a host named ‘fnord’, run this:

juju bootstrap --to fnord

The MAAS provider supports the add-machine command now. You can provision an existing host in the MAAS-based Juju environment. For example, you can add a running machine named fnord like this:

juju add-machine fnord

Server-Side API Versioning

The Juju API server now has support for a Version field in requests that are made. For this release, there are no RPC calls that require anything other than ‘version=0’ which is the default when no Version is supplied. This should have limited impact on existing CLI or API users, since it allows us to maintain exact compatibility with existing requests. New features and APIs should be exposed under versioned requests.

For details on the internals (for people writing API clients), see this document.

Finally

We encourage everyone to subscribe to the mailing list at juju-dev at lists.canonical.com, or join us on #juju-dev on freenode.

PS. Juju just got 20% more amazing.

Colin King: more stress with stress-ng

Planet Ubuntu - Tue, 2014-07-08 09:27
Since my last article about stress-ng I have been adding a few more stress mechanisms to stress-ng:
  • file locking - exercise file locking with one or more processes (the more processes the better).
  • fallocate - this allocates a 4MB file, syncs, truncates to zero size and syncs repeatedly
  • yield - this loops on sched_yield() to repeatedly relinquish the CPU forcing a high context switch rate when run with multiple yielding processes.
Also, I have added some new features to tweak scheduling, I/O characteristics and memory allocations of the running stress processes:
  • --sched and --sched-prio options to specify the scheduler type and priority
  • --ionice-class and --ionice-level options to tweak I/O niceness
  • --vm-populate option to populate (pre-fault) page tables for a mapping for the --vm stress test.
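As a rough illustration (made-up values; the scheduler and I/O class names are my assumption, so check the stress-ng help for the exact ones), the new tweaks can be combined with existing stressors like so:

# 4 CPU stressors and 2 pre-faulted VM stressors, FIFO scheduling at priority 10, idle I/O class
stress-ng --cpu 4 --vm 2 --vm-populate --sched fifo --sched-prio 10 --ionice-class idle --timeout 60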
If I think of other mechanisms to stress the kernel I will add them, but for now, stress-ng is becoming almost feature complete.
