Planet Ubuntu
Planet Ubuntu - http://planet.ubuntu.com/

Nicholas Skaggs: Utopic Test Writing Hackfest

Mon, 2014-07-14 18:09
We're having our first hackfest of the utopic cycle this week on Tuesday, July 15th. You can catch us live in a hangout on ubuntuonair.com starting at 1900 UTC. Everything you need to know can be found on the wiki page for the event.

During the hangout, we'll be demonstrating writing a new manual testcase, as well as reviewing writing automated testcases. We'll be answering any questions you have as well about contributing a testcase.

We need your help to write some new testcases! We're targeting both manual and automated testcases, so everyone is welcome to pitch in.

We are looking at writing and finishing some testcases for Ubuntu Studio and some other flavors. All you need is some basic tester knowledge and the ability to write in English.

If you know Python, we are also going to be hacking on the toolkit helper for autopilot for the Ubuntu SDK. That's a mouthful! Specifically, it's the helpers that we use for writing autopilot tests against ubuntu-sdk applications. All app developers make use of these helpers, and we need more of them to ensure we have good coverage for all the components developers use.

Don't worry about getting stuck; we'll be around to help, and there are guides to, well, guide you!

Hope to see everyone there!

Ubuntu App Developer Blog: Content Hub to replace Friends API

Mon, 2014-07-14 16:52

As part of the continued development of the Ubuntu platform, the Content Hub has gained the ability to share links (and soon text) as a content type, just as it has been able to share images and other file-based content in the past. This allows applications to more easily, and more consistently, share things to a user’s social media accounts.

Consolidating APIs


Thanks to the collaborative work going on between the Content Hub and the Ubuntu Webapps developers, it is now possible for remote websites to be packaged with local user scripts that provide deep integration with our platform services. One of the first to take advantage of this is the Facebook webapp, which, while displaying remote content via a web browser wrapper, is also a Content Hub importer. This means that when you go to share an image from the Gallery app, the Facebook webapp is displayed as an optional sharing target for that image. If you select it, it will use the Facebook web interface to upload that image to your timeline, without having to go through the separate Friends API.

This work not only brings the social sharing user experience in line with the rest of the system’s content sharing experience, it also provides a much simpler API for application developers to use to accomplish the same thing. As a result, the Friends API is being deprecated in favor of the new Content Hub functionality.

What it means for App Devs

Because this is an API change, there are things that you as an app developer need to be aware of. First, though the API is being deprecated immediately, it is not being removed from the device images until after the release of 14.10, which will continue to support the ubuntu-sdk-14.04 framework that included the Friends API. The API will not be included in the final ubuntu-sdk-14.10 framework, or in any new 14.10-dev frameworks after -dev2.

After the 14.10 release in October, when device images start to build for utopic+1, the ubuntu-sdk-14.04 framework will no longer be on the images. So if you haven’t updated your Click package by then to use the ubuntu-sdk-14.10 framework, it won’t be available to install on devices with the new image. If you are not using the Friends API, updating is simply a matter of changing your package metadata to the new framework version. New apps will default to the newer version to begin with, so you shouldn’t have to do anything.
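For a typical click package, that metadata change is a single field in your app's manifest.json, along these lines (a minimal sketch; everything else in your manifest stays as it is):

"framework": "ubuntu-sdk-14.10"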

David Tomaschik: Passing Android Traffic through Burp

Sun, 2014-07-13 20:57

I wanted to take a look at all HTTP(S) traffic coming from an Android device, even if applications made direct connections without a proxy, so I set up a transparent Burp proxy. I decided to put the proxy on my Kali VM on my laptop, but I didn't want to run an AP on there, so I needed to get the traffic over to it.

Network Setup

My wireless lab is on a separate subnet from the rest of my network, including my laptop. The lab network is a NAT run by IPTables on the Virtual Router. While I certainly could've ARP poisoned the connection between the Internet Router and the Virtual Router, or even added a static route, I wanted a cleaner solution that would be easier to enable/disable.

Setting up the Redirect

I decided to use IPTables on the virtual router to redirect the traffic to my Kali laptop. Furthermore, I decided to enable/disable the redirect based on logging in/out via SSH, but I needed to make sure the redirect would get torn down even without a clean logout: i.e., the VM crashes, the SSH connection gets interrupted, etc. Enter pam_exec. By using the pam_exec module, we can have an arbitrary command run on login/logout, which can set up and tear down the IPTables REDIRECT via an SSH tunnel to my Burp proxy.

In order to get the command executed on any login/logout, I added the following line to /etc/pam.d/common-session:

session optional pam_exec.so log=/var/log/burp.log /opt/burp.sh

This launches the following script, which checks that it's being invoked for the right user and for SSH sessions, and then inserts or deletes the relevant IPTables rules.

#!/bin/bash

BURP_PORT=8080
BURP_USER=tap
LAN_IF=eth1

set -o nounset

# Emit the iptables commands to insert (-I) or delete (-D) the redirect rules.
function ipt_command {
    ACTION=$1
    echo iptables -t nat $ACTION PREROUTING -i $LAN_IF -p tcp -m multiport --dports 80,443 -j REDIRECT --to-ports $BURP_PORT\;
    echo iptables $ACTION INPUT -i $LAN_IF -p tcp --dport $BURP_PORT -j ACCEPT\;
}

# Only act for the tunnel user, and only for SSH sessions.
if [ $PAM_USER != $BURP_USER ] ; then
    exit 0
fi

if [ $PAM_TTY != "ssh" ] ; then
    exit 0
fi

# Insert the rules on login, delete them on logout.
if [ $PAM_TYPE == "open_session" ] ; then
    CMD=`ipt_command -I`
elif [ $PAM_TYPE == "close_session" ] ; then
    CMD=`ipt_command -D`
fi

date
echo $CMD
eval $CMD

This redirects all traffic incoming from $LAN_IF destined for ports 80 and 443 to local port 8080. This does have the downside of missing traffic on other ports, but this will get nearly all HTTP(S) traffic.
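For reference, when a session opens, the two generated commands expand (with the values from the top of the script substituted in) to:

iptables -t nat -I PREROUTING -i eth1 -p tcp -m multiport --dports 80,443 -j REDIRECT --to-ports 8080
iptables -I INPUT -i eth1 -p tcp --dport 8080 -j ACCEPT

On logout, the same rules are removed with -D in place of -I.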

Of course, since the IPTables REDIRECT target still maintains the same interface as the original incoming connection, we need to allow our SSH Port Forward to bind to all interfaces. Add this line to /etc/ssh/sshd_config and restart SSH:

GatewayPorts clientspecified

Setting up Burp and SSH

Burp's setup is pretty straightforward, but since we're not configuring a proxy in our client application, we'll need to use invisible proxying mode. I actually put invisible proxying on a separate port (8081) so I have 8080 set up as a regular proxy. I also use the per-host certificate setting to get the "best" SSL experience.

It turns out that there's an issue with OpenJDK 6 and SSL certificates. Apparently it will advertise algorithms not actually available, and then libnss will throw an exception, causing the connection to fail, and the client will retry with SSLv3 without SNI, preventing Burp from creating proper certificates. It can be worked around by disabling NSS in Java. In /etc/java-6-openjdk/security/java.security, comment out the line with security.provider.9=sun.security.pkcs11.SunPKCS11 ${java.home}/lib/security/nss.cfg.
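That is, after the edit, the provider line should simply end up commented out with a leading #:

#security.provider.9=sun.security.pkcs11.SunPKCS11 ${java.home}/lib/security/nss.cfg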

Forwarding the port over to the wifilab server is pretty straightforward. You can either use the -R command-line option, or better, set things up in ~/.ssh/config.

Host wifitap
    User tap
    Hostname wifilab
    RemoteForward *:8080 localhost:8081

This logs in as user tap on host wifilab, forwarding local port 8081 to port 8080 on the wifilab machine. The * for a hostname is to ensure it binds to all interfaces (0.0.0.0), not just localhost.
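If you'd rather use the one-off -R form mentioned above instead of a config entry, the equivalent invocation would be something like:

ssh -R '*:8080:localhost:8081' tap@wifilab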

Setting up Android

At this point, you should have a good setup for intercepting traffic from any client of the WiFi lab, but since I started off wanting to intercept Android traffic, let's optimize for that by installing our certificate. You can install it as a user certificate, but I'd rather do it as a system cert, and my testing tablet is already rooted, so it's easy enough.

You'll want to start by exporting the certificate from Burp and saving it to a file, say burp.der.

Android's system certificate store is in /system/etc/security/cacerts, and expects OpenSSL-hashed naming, like a0b1c2d3.0 for the certificate names. Another complication is that it's looking for PEM-formatted certificates, and the export from Burp is DER-formatted. We'll fix all that up in one chain of OpenSSL commands:

(openssl x509 -inform DER -outform PEM -in burp.der; \
 openssl x509 -inform DER -in burp.der -text -fingerprint -noout) \
 > /tmp/`openssl x509 -inform DER -in burp.der -subject_hash -noout`.0

Android before ICS (4.0) uses OpenSSL versions below 1.0.0, so you'll need to use -subject_hash_old instead if you're targeting an older version of Android.
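For a pre-ICS device, the same chain with the old-style hash flag would look like this:

(openssl x509 -inform DER -outform PEM -in burp.der; \
 openssl x509 -inform DER -in burp.der -text -fingerprint -noout) \
 > /tmp/`openssl x509 -inform DER -in burp.der -subject_hash_old -noout`.0

Installing is a pretty simple task (replace HASH.0 with the filename produced by the command above):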

$ adb push HASH.0 /tmp/HASH.0
$ adb shell
android$ su
android# mount -o remount,rw /system
android# cp /tmp/HASH.0 /system/etc/security/cacerts/
android# chmod 644 /system/etc/security/cacerts/HASH.0
android# reboot

Connect your Android device to your WiFi lab, ssh wifitap from your Kali install running Burp, and you should see your HTTP(S) traffic in Burp (except for apps that use pinned certificates; that's another matter entirely). You can check your installed certificate from the Android Security Settings.

Good luck with your Android auditing!

Colin King: a final few more features in stress-ng

Sun, 2014-07-13 16:47
While hoping to get a feature-complete stress-ng sooner rather than later, I found a few more ways to fiendishly stress a system.

Stress-ng 0.01.22 will be landing soon in Ubuntu 14.10 with three more stress mechanisms:
  • CPU affinity stressing; this rapidly changes the CPU affinity of the stress processes just to keep the scheduler busy with wasted effort.
  • Timer stressing using the real-time clock; this allows one to generate a large number of timer interrupts, so it is a useful interrupt saturation test.
  • Directory entry thrashing; this creates and deletes a selectable number of zero-length files and hence populates and destroys directory entries.
I have also removed the need to use rand() for random number generation in some of the stress tests, re-using the faster MWC "random" number generator, and added some well-known and very simple math operations for CPU stressing.
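For the curious, MWC here refers to Marsaglia's multiply-with-carry generator. As a rough shell sketch of the algorithm only (the constants below are Marsaglia's classic ones and may not match stress-ng's exact C implementation):

z=362436069  # arbitrary nonzero seeds for the two 16-bit MWC streams
w=521288629

mwc() {
    # each stream: multiply the low 16 bits, add the carry from the high bits
    z=$(( (36969 * (z & 0xffff) + (z >> 16)) & 0xffffffff ))
    w=$(( (18000 * (w & 0xffff) + (w >> 16)) & 0xffffffff ))
    # combine both streams into one 32-bit result
    echo $(( ((z << 16) + w) & 0xffffffff ))
}

# print four pseudo-random 32-bit values
for i in 1 2 3 4; do mwc; done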

Stress-ng now has 15 different simple stress mechanisms that exercise CPU, cache, memory, file system, I/O and CPU schedulers.  I could add more tests, but I think this is a large enough set to allow one to thrash a machine and see how well it performs under pressure.
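A quick way to try the three new stressors together from the command line (option names here are assumed to follow stress-ng's usual --<stressor> N pattern; check stress-ng --help on your build):

stress-ng --affinity 4 --timer 2 --dir 2 -t 60

This would run 4 affinity stressors, 2 timer stressors and 2 directory stressors for 60 seconds.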

Lubuntu Blog: PCManFM 1.2.1

Sat, 2014-07-12 15:34
Another update of our file manager PCManFM, with tons of bug fixes and new implementations:
  • fixed dragging and dropping icons behavior
  • fixed icons positioning
  • fixed resetting cursor in location bar
  • corrected folder popup update on loading
  • reordered ‘View’ menu item
  • implemented drawing icons of dragged items
  • etc.
Also a huge update and bug fixing in the libfm libraries (1.2.1). You can use

Darcy Casselman: New Motherboard: ASUS Z97-A (and Ubuntu)

Sat, 2014-07-12 05:31

My old desktop was seeing random drive errors on multiple drives, including a drive I only got a few months ago. And since my motherboard was about 5 years old, I decided it was time to replace it.

I asked the KWLUG mailing list if they had any advice on picking motherboards. The consensus seems to be pretty much “it’s still a crapshoot.” But I bit the bullet and reported back:

I bought a motherboard! An ASUS Z97-A

Mostly because I wanted Intel integrated graphics and I’ve got 3 monitors it needs to drive. And I was hoping the mSATA SSD card I got to replace the one in my Dell Mini 9 (that didn’t work) would fit in the M.2 slot. It doesn’t. Oh well.

I wanted to get it all set up while I was off for Canada Day. Except Canada Computers didn’t have any of my preferred CPU options. So I’ll be waiting for that to come in via NewEgg.

I gave myself a budget of about $500 for mobo, CPU and RAM (I’ll end up going over a little bit, mostly tax and shipping), and tried to build the best machine I could for that.

One of the things I did this time that I hadn’t done before was spec out a desktop machine at System76 and used that as a starting point. System76 is more explicit about things like chipsets for desktops than Zareason is. Which would be great, except they’re using the older H87 chipsets.

…Like the latest Ars System Guide Hot Rod. But that’s over 6 months old now. And they’re balancing their budget against having to buy a graphics card, which I don’t want to do.

I still have some unanswered questions about the Z97 chipset. It’s only been out for about a month. So who knows?

My laptop has mostly been my desktop for the last few years. But I want to knock that off because I’ve been developing back and neck problems. My desktop layout is okay ergonomically, at least better than anything I have for the laptop (including and especially my easy chair with a lapdesk, which is comfy, but kind of horrible on the neck). One of the things that’s holding me back is my desktop is 5 years old and was built cheap because I was mostly using it as a server by that point. I really want to make it something I want to use over the laptop (which is a very nice laptop). Which is why I ended up going somewhat upper-mid range.

That’s one of the nice things about building from parts, despite the lack of useful information: This is the 3rd motherboard I’ve put in this case. I replaced the PSU once a couple years ago so it’s quite sufficient to handle the new stuff. I’m keeping my old harddrives. I could keep the graphics card. I’ll need to buy an adapter for the DVD burner (and I’ve yet to decide if I’m going to do that, or buy a new SATA one or just go without). And I can keep my (frankly pretty awesome) monitors. So $500 gets me a kick-ass whole new machine.

Anyway, long story short, I still have a lot of questions about whether this was the best purchase, but I’m hopeful it’s a good one.

Aside: is Canada Computers really the only store in town that keeps desktop CPUs in stock anymore? I couldn’t get into the UW Tech Shop, but since they’re mostly iPads and crap now, I’m not optimistic. Computer XS doesn’t (at least the Waterloo one). Future Shop and Best Buy don’t. I even went into Neutron for the first time in over 15 years. Nope. Nobody.

It… didn’t go as well as I’d hoped:

So, anyway, I got the motherboard and CPU, and put it all in my old case.

I booted up and all three monitors came up without any fuss, which has never happened for me. Awesome! This is great!

Then I tried to play a game.

Apparently the current snd_hda_intel ALSA drivers don’t like H97 and Z97 chipsets. The sound was staticky, crackly and distorted.

I’ve spent more than a few hours over the last week hunting around for a fix. I installed Windows on a spare harddrive to make sure it wasn’t a hardware problem (for which I needed to spend $20 on a new SATA DVD drive so I could run the Windows driver disc and actually get video, networking and sound support :P). And I found this thing on the Arch Wiki which, while not fixing the problem, actually made it worse, leading me to conclude there was some sort of sound driver/pulseaudio problem.

Top tip: when trying to sort out sound driver problems for specific hardware, the best thing to do is search for the hardware product id (in my case "8ca0"). That’s how I found this:

https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1321421

Hurray! The workaround works great and now I’m back in business!

So I got burned by going with the bleeding edge, and I should know better. But even though the information isn’t widely disseminated yet, there is a fix. And a workaround. I’m sure Ubuntu 14.10 will have no problem with it. It’s not as bad as the bleeding edge was years ago. And even if the fix wasn’t easy to find (I’m going to work on that), it was still easier getting going with Ubuntu than it was with Windows.

Paul Tagliamonte: Saturday's the new Sunday

Sat, 2014-07-12 00:41

Hello, World!

For those of you who enforce my Sundays on me (keep doing that, thank you!), I’ll be swapping my Saturdays with my Sundays.

That’s right! In this brave new world, I’ll be taking Saturdays off, not Sundays. Feel free to pester me all day on Sunday, now!

This means, as a logical result, I will not be around tomorrow, Saturday.

Much love.

Dan Chapman: One week in and Dekko has 41 users

Fri, 2014-07-11 12:47

For those of you who haven’t seen Dekko in the software store, it’s a native IMAP email client for Ubuntu Touch. Dekko is essentially my development/ideas branch of my work on Trojita, which is ultimately intended to replace Dekko in the store.

There are a few reasons behind publishing Dekko. Firstly, Trojita prides itself on being standards compliant; it already has a desktop client that uses QtWidgets, supports both Qt4 & Qt5, and also has a technical-preview Harmattan QML front-end. This was great, as most of the initial work for the IMAP parts was in place, so we didn’t need to “re-invent the wheel” (for the most part, anyway). But we soon hit a point where we had surpassed what had previously been done, and the job became unwinding the intertwined style that QtWidget UIs naturally lead to, so that we can share the same business logic between all front-ends without losing standards compliance, support both Qt4 & Qt5, and maintain Trojita’s robust quality standards.

I am still relatively new to C++, so this is one of those “in at the deep end” scenarios, with the IETF RFC specifications and Qt’s documentation having become the majority of my daily reading. Dekko was born out of the need to understand the separation (call it a learning project) and to devise a way to create common components that can be shared between all front-ends. This “learning project” resulted in a functional but limited email client, so I decided to publish it in the hope of getting as much feedback, as many bug reports and as many design ideas as possible, and to use those to ensure Trojita becomes a rock-solid native email client for Ubuntu.

A quick list of current features in Dekko:

  • Support for viewing plain text messages. We cannot show HTML messages, due to not being able to block network requests with QtWebKit’s custom URL scheme functionality (if you are an Oxide dev who happens to be reading this, “wink wink”). But it’s great for viewing all your Launchpad mail.
  • Navigating the mailbox hierarchy. It’s not entirely obvious at first (open to new ideas here): if you see a progression arrow on a mailbox, tapping the arrow displays the nested mailboxes; otherwise, tapping elsewhere shows the messages within that mailbox.
  • Composing and replying to messages. This utilizes the bottom edge, so pulling up on an opened message will set up a reply to it. One thing to note: at the moment replying basically does a “reply all” action, so you need to delete or add recipients manually until support for mailing lists and other reply modes is implemented.
  • Supports defining a single sender identity for mail submission.
  • Mark messages as deleted, expunge the mailbox, and an option to auto-expunge messages marked for deletion.
  • Mark all messages as read.
  • Offline, online and bandwidth-saving modes, perfect for mobile data connections.

There is a known bug where the message list view sometimes doesn’t update properly, but it can usually be resolved by closing and reopening that mailbox.

So if you haven’t already, please give it a try, and if you have any design/implementation ideas, issues, bugs or anything else you wish to say, please report them to the Dekko project on Launchpad: https://launchpad.net/dekko.

Note: Please don’t file bugs against upstream Trojita unless you are using a build of Trojita and not a Dekko build.

And finally, a few snaps to whet the appetite.

 

Ronnie Tucker: PHP Fixes OpenSSL Flaws in New Releases

Fri, 2014-07-11 07:00

The PHP Group has released new versions of the popular scripting language that fix a number of bugs, including two in OpenSSL. The flaws fixed in OpenSSL don’t rise to the level of major bugs such as Heartbleed that have popped up in the last few months. But PHP 5.5.14 and 5.4.30 both contain fixes for the two vulnerabilities, one of which is related to the way that OpenSSL handles timestamps on some certificates, and the other of which also involves timestamps, but in a different way.

Source:

http://threatpost.com/php-fixes-openssl-flaws-in-new-releases/106908

Submitted by: Dennis Fisher

 
