news aggregator

Riccardo Padovani: Create your first QML game with Bacon2D

Planet Ubuntu - Mon, 2014-09-15 07:00

Hi all,
after a long time I'm back to writing, to show you how to create a simple game for Ubuntu for Phones (and for Android too) with Bacon2D.

Bacon2D is a framework to ease 2D game development, providing ready-to-use QML elements representing the basic game entities needed by most games.

As a tutorial, I'll explain how I created my first QML game, 100balls, which you can find in the Ubuntu Store for phones. The source is available on GitHub.

Installation

So, first of all we need to install Bacon2D on our system. I assume you already have Qt installed, so we only need to grab the source and compile it:

git clone git@github.com:Bacon2D/Bacon2D.git
cd Bacon2D
mkdir build && cd build
qmake ..
make
sudo make install

Now you have Bacon2D on your system, and you can import it in every project you want.

A first look at Bacon2D

Bacon2D provides a good number of custom components for your app. Of course, I can't describe them all in one article, so please read the documentation. We'll use only a few of them, and I think the best way to introduce them is by writing the app.
So, let’s start!

First of all, we create our base file, called 100balls.qml:

import QtQuick 2.0
import Bacon2D 1.0

The first element we add is the Game element. Game is the top-level container, where the whole game will live. We set some basic properties and the name of the game with the gameName property:

import QtQuick 2.0
import Bacon2D 1.0

Game {
    id: game
    anchors.centerIn: parent

    height: 680
    width: 440

    gameName: "com.ubuntu.developer.rpadovani.100balls" // Ubuntu Touch name format, you can use whatever you want
}

But the Game itself is useless; we need to add one or more Scene elements to it. A scene is the place where all the game's Entity elements will be placed.
Scene has a lot of properties; for now it is important to set two of them: running indicates whether things in the scene move and the game engine works; the second property is physics, which indicates whether Box2D should be used to simulate physics in the game. We want a game where some balls fall, so we need to set it to true.

import QtQuick 2.0
import Bacon2D 1.0

Game {
    id: game
    anchors.centerIn: parent

    height: 680
    width: 440

    gameName: "com.ubuntu.developer.rpadovani.100balls" // Ubuntu Touch name format, you can use whatever you want

    Scene {
        id: gameScene
        physics: true
        running: true
    }
}
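To give a taste of where this goes next, here is a minimal sketch of an Entity dropped into the Scene. This is my own illustration, not code from 100balls, and the exact fixture API (bodyType, Circle) may vary between Bacon2D versions:

Scene {
    id: gameScene
    physics: true
    running: true

    // A single falling ball; with physics enabled, Box2D pulls it down.
    Entity {
        id: ball
        x: 220; y: 0
        width: 20; height: 20
        bodyType: Entity.Dynamic   // assumed Box2D-style dynamic body

        fixtures: Circle {
            radius: 10
            density: 1.0
            friction: 0.3
            restitution: 0.6   // how bouncy the ball is
        }

        Rectangle {   // simple visual for the ball
            anchors.fill: parent
            radius: width / 2
            color: "orange"
        }
    }
}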

Stuart Langridge: Brum Tech Scene interviews

Planet Ubuntu - Sun, 2014-09-14 23:56

Today I released the first of the Brum Tech Scene interviews, with me talking to Simon Jenner of Silicon Canal and Oxygen Startups. There’s a video on the site from me explaining why I’m doing this, but I figure that the more discerning audience for as days pass by might appreciate a more in-depth discussion.

I love this city. I love that we’re prepared to spend a hundred and ninety million quid on building the best library in the whole world. I love that there’s so much going on, tech-wise. But nobody talks to anybody else. If you look at, say, Brighton, the whole tech scene there all hang out together. They can put on a Digital Brighton week and have dConstruct be part of it and Seb do mad things with visualisations and that’s marvellous. We ought to have that. I want us to have that.

We don’t have a tech scene. We’ve got twenty separate tech scenes. What I want to do is knock down the walls a bit. So the designers talk to the SEO people and the Linux geeks talk to the designers. Because there is no way that this can be a bad thing.

I also want to learn a bit about videos. Now, let’s be clear here. I know from a decade of podcasting that with a mild expenditure of money on gear, and a great sound engineer (Jono Bacon, step forward) you can produce something as good as the professionals. Bad Voltage sounds as good, production-wise, as the BBC’s Today programme does. Video is not like that. There is a substantial difference between amateur and professional efforts; one bloke using mobile phones to record cannot make something that looks like Sherlock or Game of Thrones. I’m not trying to look professional here; I’m aiming for “competent amateur”. I’ve learned loads about how to record a video interview, how to mix it, how to do the editing. Sit far enough apart that your voice doesn’t sound on their mic. Apply video effects to the clip before you cut it up. Don’t speak over the interviewee. KDEnLive’s “set audio reference” is witchcraft brilliance. I knew none of this two months ago. And I’ve really enjoyed learning. I am in no wise good at this stuff, but I’m better than I was.

This has been a fun project to set up, and it will continue being fun as I record more interviews. My plan is to have a new one every Monday morning, indefinitely, as long as people like them and I’m still interested in doing them. I should give big love to Mike, my designer, who I fought with tooth and nail about the site design and the desaturated blue look to the videos, and to Dan Newns who sat and was interviewed as a test when I first came up with this idea, and has provided invaluable feedback throughout.

If you know something about video editing, I’d love to hear how I can do better. Ping me on twitter or by mail. Tell me as well who you want to hear interviewed; which cool projects are going on that I don’t know about. I’d also love to hear about cool venues in the city in which I can do interviews; one of my subsidiary goals here is to show off the city’s tech places. Annoyingly, I spoke to the Library and to the Birmingham Museums Trust and they were all “fill out our fifteen page form” because they’re oriented around the BBC coming in with a crew of twenty camera people, not one ginger guy with a mobile phone and a dream. Maybe I’ll do things with @HubBirmingham once they actually exist.

I should talk about the tech, here. I record the interviews on an iPhone 5, a Nexus 4, and a little HD camera I bought years ago. The audio is done with two Røde Smartlav lapel mics plugged into the two phones. None of this is expensive, which has a cost in terms of video and audio quality but critically doesn’t have much of a cost in terms of actual pounds sterling. And editing is done with KDEnLive (kdenlive?) which is a really powerful non-linear video editor for Ubuntu, and the team who make it should be quite proud. The big thing I’m missing (apart from a cameraman) is a tripod, which I can probably buy for about ten quid, and I will do once I find one that’s tall and yet still fits in my laptop bag.

Anyway, that’s the story of the Brum Tech Scene interviews. There’ll be one every Monday. I hope you like them. I hope they help, even in a small way, to make the Brum tech scene gel together even more than it has thus far. Let me know what you think. brumtechscene.co.uk.

Joel Leclerc: I’m quitting relinux

Planet Ubuntu - Sun, 2014-09-14 23:24

I will start this off by saying: I’m very (and honestly) sorry for, well, everything.

To give a bit of history, I started relinux as a side-project for my CosmOS project (a cloud-based distribution … which failed), in order to build the ISOs. The only reasonable alternative at the time was remastersys, and I realized I would have to patch it anyways, so I thought that I might as well make a reusable tool for other distributions to use too.

Then came a rather large amount of friction between me and the author of remastersys, the details of which I will not go into. I acted very immaturely then, and wronged him several times. I defamed him, made quite a few people very angry at him, and even managed to turn some of his supporters against him. True, age and maturity had something to do with it (I was 12 at the time), but that still doesn't excuse my actions at all.

So my first apology is to Tony Brijeski, the author of remastersys, for all the trouble and possible pain I had put him through. I’m truly sorry for all of this.

However, though the dynamics with Tony and remastersys are definitely a large part of why I’m quitting relinux, that is not all. The main reason, actually, is lack of interest. I have rewritten relinux a total of 7 times (including the original fork of remastersys), and I really hate the debugging process (takes 15-20 minutes to create an ISO, so that I can debug it). I have also lost interest in creating linux distributions, so not only am I very tired of working on it, I also don’t really care about what it does.

On this note, my second apologies (and thanks) have to go to those who have helped me so much through the process, especially those who have tried to encourage me to finish relinux. Those listed are in no particular order, and if I forgot you, then let me know (and I apologize for that!):

  • Ko Ko Ye
  • Raja Genupula
  • Navdeep Sidhu
  • Members of the TSS Web Dev Club
  • Ali Hallahi
  • Gert van Spijker
  • Aritra Das
  • Diptarka Das
  • Alejandro Fernandez
  • Kendall Weaver

Thank you very much for everything you’ve done!

Lastly, I would like to explain my plans for it, in case anyone wants to continue it (by no means do I want to enforce these, these are just ideas).

My plan for the next release of relinux was to actually make a very generic and scriptable CLI ISO creation tool, and then make relinux a specific set of “profiles” for that tool (plus an interface). The tool would basically contain a few libraries for the chosen scripting language, for things like storing the filesystem (SquashFS or other), ISO creation, and general utilities for editing files while keeping permissions, multi-threading/processing, etc… The “profiles” would then copy, edit, and delete files as needed, set up the tool wanted for running the live system (in Ubuntu's case, this'd be casper), set up the installer/bootloader, and such.

I would like to apologize to you all, the people who have used relinux and have waited for a stable version for 3 years, for not doing this. Thank you very much for your support, and I’m very sorry for having constantly pushed releases back and having never made a stable or well working version of relinux. Though I do have some excuses as to why the releases didn’t work, or why I didn’t test them well enough, none of them can cover why I didn’t fix them or work on it more. And for that, I am very sorry.

I know that this is a very large post for something so simple, but I feel that it would not be right if I didn’t apologize to those I have done wrong to, and thanked those who have helped me along the way.

So to summarize, thank you, sorry, and relinux is now dead.

- Joel Leclerc (MiJyn)


David Tomaschik: Getting Started in CTFs

Planet Ubuntu - Sun, 2014-09-14 20:07

My last post was about getting started in a career in information security. This post is about the sport end of information security: Capture the Flag (CTFs).

I'd played around with some wargames (Smash the Stack, Over the Wire, and Hack this Site) before, but my first real CTF (timed, competitive, etc.) was the CTF run by Mad Security at BSides SF 2013. By some bizarre twist of fate, I ended up winning the CTF, and I was hooked. I've probably played in about 30 CTFs since, most of them online with the team Shadow Cats. It's been a bumpy ride, but I've learned a lot about a variety of topics by doing this.

If you're in the security industry and you've never tried a CTF, you really should. Personally, I love CTFs because they get me to exercise skills that I never get to use at work. They also inspire some of my research and learning. The only problem is making the time. :)

Here are some resources I thought were interesting:

Luke Faraone: "Your release sucks."

Planet Ubuntu - Sat, 2014-09-13 20:43
I look forward to Ubuntu's semiannual release day, because it's the completion of 6ish months of work by Ubuntu (and by extension Debian) developers.

I also loathe it, because every single time we get people saying "This Ubuntu release is the worst release ever!".

Ubuntu releases are always rocky around release time, because the first time Ubuntu gets widespread testing is on or after release day.

We ship software to 12 million Ubuntu users with only 150 MOTUs who work directly on the platform. That's a little less than 1 developer with upload rights to the archive for every 60,000 users. ((This number, like all other usage data, is dated, and probably wasn't even accurate when it was first calculated.)) Compare that to Debian, which (at last estimate in 2010) had 1.5 million uniques on security.debian.org, yet has around 1000 Debian Developers.

Debian has a strong testing culture; someone once estimated that around ¾ of Debian users are running unstable or testing. In Ubuntu, we don't have good metrics (that I'm aware of) on how many people are using the development release (pointers welcome), but I'd guess that it's a very, very small percentage. A common thread in bug reports, if we get a response at all, goes as follows:
Triager: ((Developer, bugcontrol member, etc. Somebody who is not experiencing the problem but wants to help.)) "Is this a problem in $devel?"
User: "I'll let you know when it hits final."
Triager: "It's too late then. Then we'll want you to test in the next release. We have to fix it BEFORE it's final."
User: "Ok, I'll test at beta."
Triager: "That's 2 weeks before release, which will be too late. Please test ASAP if you want us to have time to fix it."
Of course, there are really important bugs with hardware support which keep on cropping up. But if they're just getting reported on or around release day, there are limits to what can be done about them this cycle.

We need to make it easier for people to run early development versions, and encourage more people to use them (as long as they're willing to deal with breakage). I'm not sure whether unstable/testing is appropriate for Ubuntu, and I'm fairly confident that we don't want to move to a rolling release (currently being discussed in Debian, summary). But we badly need more developers, and equally importantly, more testers to try it out earlier in the release process.

To users: please, please try out the development versions. Download a LiveCD and run a smoketest, or check if bugs you reported are in fact fixed in the later versions. And do it early and often.

David Tomaschik: Getting Started in Information Security

Planet Ubuntu - Sat, 2014-09-13 19:30

I've only been an information security practitioner for about a year now, but I've been doing things on my own for years before that. However, many people are just getting into security, and I've recently stumbled on a number of resources for newcomers, so I thought I'd put together a short list.

Stuart Langridge: Developers are users too

Planet Ubuntu - Sat, 2014-09-13 15:51

When you talk about the “user experience” of the thing you’re building, remember that developers who use your APIs are users too. And you need to think about their experience.

We seem to have created a world centred on GitHub where everyone has to manage dependencies by hand, like we had to in 1997. This problem was completely solved by apt twenty years ago, but the new cool GitHub world is, it seems, too cool to care about that. Go off to get some new project by git cloning it and it's quite likely to say “oh, and it depends on $SOME_OTHER_PROJECT (here's a link to that project's GitHub repo)”. And then you have to go fetch both and set them up yourself. Which is really annoying.

Now, there are good reasons not to care about existing dependency package management systems such as apt. Getting stuff into Ubuntu is hard, laborious work and most projects don't want to do it. PPAs make it easier, but not much easier; if you're building a thing and not specifically targeting Ubuntu with it, you don't want to have to learn about Launchpad and PPAs and build recipes and whatnot. This sort of problem is also solved neatly for packages in a specific language by that language's own packaging system; Python stuff is installable with pip install whatever and a virtualenv; Node stuff is installable with npm install whatever; all these take care of fetching any dependent stuff. But this rush for each language to have its own “app store” for its apps and libraries means that combining things from different languages is still the same 20th-century nightmare. Take, for example, Mozilla's new Firefox Tools Adaptor. I'm not picking on Mozilla here; the FTA is new, and it's pretty cool, and it's not finished yet. This is just the latest in a long line of things which exhibit the problem. The FTA allows you to use the Firefox devtools to debug web things running in other browsers. Including, excitingly, debugging things running in iOS Safari on the iPhone. Now, doing that's a pain in the ringpiece at the moment; you have to install Google's ios-webkit-debug-proxy, which needs to be compiled, and Apple break compatibility with it all the time and so you have to fetch and build new versions of libimobiledevice or something. I was eager to see that the new Firefox Tools Adaptor promises to allow debugging on iOS Safari just by installing a Firefox extension.

And then I read about it, and it says, “The Adapter’s iOS support uses Google’s ios-webkit-debug-proxy. Until that support is built directly into the add-on, you’ll need to install and run the ios-webkit-debug-proxy binary yourself”. Sigh. That’s the hard part. And it’s not any easier here.

Again, I’m not blaming Mozilla here — they plan to fix this, but they’ll have to fix it by essentially bundling ios-webkit-debug-proxy with the FTA. That’ll work, and that’s an important thing for them to do in order to provide a slick user experience for developers using this tool (because “download and compile this other thing first” is not ever ever a nice user experience). This is sorta kinda solved by brew for Mac users, but there’s a lot of stuff not in brew either. Still, there is willingness to solve it that way by having a packaging system. But it’s annoying that Ubuntu already has one and people are loath to use it. Using it makes for a better developer user experience. That’s important.

John Baer: Get a free Chromebook from the Google Lending Library

Planet Ubuntu - Fri, 2014-09-12 22:52

Are you enrolled in college, in need of a laptop computer, and willing to accept a new Chromebook? If so, Google has a deal for you, and it's called the Google Lending Library.

The Chromebook Lending Library is traveling to 12 college campuses across the U.S. loaded with the latest Chromebooks. The Lending Library is a bit like your traditional library, but instead of books, we’re letting students borrow Chromebooks (no library card needed). Students can use a Chromebook during the week for life on campus— whether it’s in class, during an all-nighter, or browsing the internet in their dorm.

Lindsay Rumer, Chrome Marketing


Assuming you attend one of the partner universities, here is how it works.

1. Request a Chromebook from the Library
2. Agree to the Terms of Use Agreement
3. Use the Chromebook as you like while you attend school
4. Return it when you want or when you leave

What happens if you don’t return it? Expect to receive a bill for the fair market value not to exceed $220.

Here’s the fine print.

“Evaluation Period” means the period of time specified to you at the time of checkout of a Device.

“Checkout Location” means the location specified by Google where Devices will be issued to you and collected from you.

1.1 Device Use. You may use the Device issued to you for your personal evaluation purposes. Upon your use of the Device, Google transfers title to the Device equipment to you, but retains all ownership rights, title and interest to any Google Devices and services and anything else that Google makes available to you, including without limitation any software on the Device.

1.2 Evaluation Period. You may use the Device during the Evaluation Period. Upon (i) expiration of the Evaluation Period, or (ii) termination of this Agreement, if this Agreement is terminated early in accordance Section 4, you agree to return the Device to the Checkout Location. If you fail to return the Device at the end of the Evaluation Period or upon termination of this Agreement, you agree Google may, to the extent allowed by applicable law, charge you up to the fair market value of the Device less normal wear and tear and any applicable taxes for an amount not to exceed Two Hundred Twenty ($220.00) Dollars USD.

1.3 Feedback. Google may ask you to provide feedback about the Device and related Google products optimized for Google Services. You are not required to provide feedback, but, if you do, it must only be from you, truthful, and accurate and you grant Google permission to use your name, logo and feedback in presentations and marketing materials regarding the Device. Your participation in providing feedback may be suspended at any time.

1.4 No Compensation. You will not be compensated for your use of the Devices or for your feedback.

2. Intellectual Property Rights. Nothing in this Agreement grants you intellectual property rights in the Devices or any other materials provided by Google. Except as provided in Section 1.1, Google will own all rights to anything you choose to submit under this Agreement. If that isn’t possible, then you agree to do whichever of the following that Google asks you to do: transfer all of your rights regarding your submissions to Google; give Google an exclusive, irrevocable, worldwide, royalty-free license to your submissions to Google; or grant Google any other reasonable rights. You will transfer your submissions to Google, and sign documents and provide support as requested by Google, and you appoint Google to act on your behalf to secure these rights from you. You waive any moral rights you have and agree not to exercise them, unless you notify Google and follow Google’s instructions.

3. Confidentiality. Your feedback and other submissions, is confidential subject to Google’s use of your feedback pursuant to Section 1.3.

4. Term. This Agreement becomes effective when you click the “I Agree” button and remains in force through the end of the Evaluation Period or earlier if either party gives written termination notice, which will be effective immediately. Upon expiration or termination, you will return the Device as set forth below. Additionally, Google will remove you from any related mailing lists within thirty (30) days of expiration or termination. Sections 1.3, 1.4, and Sections 2 through 5 survive any expiration or termination of this Agreement.

5. Device Returns. You will return the Device(s) to Google or its agents to the Checkout Location at the time specified to you at the time of checkout of the Device or if unavailable, to Google Chromebook Lending Library, 1600 Amphitheatre Parkway, Mountain View, CA 94043. Google may notify you during or after the term of this Agreement regarding return details or fees chargeable to you if you fail to return the Device.


Ayrton Araujo: CloudFlare as a ddclient provider under Debian/Ubuntu

Planet Ubuntu - Fri, 2014-09-12 18:47

Dyn's free dynamic DNS service closed on Wednesday, May 7th, 2014.

CloudFlare, however, has a little known feature that will allow you to update
your DNS records via API or a command line script called ddclient. This will
give you the same result, and it's also free.

Unfortunately, ddclient does not work with CloudFlare out of the box. There is a patch available, and here is how to hack[1] it up on Debian or Ubuntu; it also works on Raspbian with a Raspberry Pi.

Requirements

Basic command line skills, and a domain name that you own.

CloudFlare

Sign up to CloudFlare and add your domain name.
Follow the instructions; the default values it gives should be fine.

You'll be letting CloudFlare host your domain so you need to adjust the
settings at your registrar.

If you'd like to use a subdomain, add an 'A' record for it. Any IP address
will do for now.

Let's get to business...

Installation

$ sudo apt-get install ddclient

Patch

$ sudo apt-get install curl sendmail libjson-any-perl libio-socket-ssl-perl
$ curl -O http://blog.peter-r.co.uk/uploads/ddclient-3.8.0-cloudflare-22-6-2014.patch
$ sudo patch /usr/sbin/ddclient < ddclient-3.8.0-cloudflare-22-6-2014.patch

Config

$ sudo vi /etc/ddclient.conf

Add:

##
### CloudFlare (cloudflare.com)
###
ssl=yes
use=web, web=dyndns
protocol=cloudflare, \
server=www.cloudflare.com, \
zone=domain.com, \
login=you@email.com, \
password=api-key \
host.domain.com

Comment out:

#daemon=300

Your api-key comes from your CloudFlare account page.

ssl=yes might already be in that file

use=web, web=dyndns will use dyndns to check IP (useful for NAT)

You're done. Log in to https://www.cloudflare.com and check that the IP listed for
your domain matches http://checkip.dyndns.com

To verify your settings:

sudo ddclient -daemon=0 -debug -verbose -noquiet
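With daemon mode commented out, nothing will run ddclient periodically for you. One option (my suggestion, not part of the original patch instructions) is a cron entry, for example:

# /etc/cron.d/ddclient-cloudflare -- hypothetical file; updates every 15 minutes
*/15 * * * * root /usr/sbin/ddclient -force >/dev/null 2>&1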

Fork this:
https://gist.github.com/ayr-ton/f6db56f15ab083ab6b55

Ayrton Araujo: New blog with Ghost

Planet Ubuntu - Fri, 2014-09-12 18:42

Here I am again, moving once more, this time from Octopress to Ghost.
I'm moving because I want an easier way to update my blog. Keeping it in a static content generator makes it a little bit harder to update and fix posts. But I will miss some things from Octopress, like the code snippets and the responsive videos plugin. I'm considering making some pull requests with the features I'm missing. I will continue using Haroopad for drafting markdown posts offline.

Well, for this migration I used an account at Wable (because it is really cheap) with 4 VPSes.

As Wable doesn't have an API and there's no roadmap for one, I used juju manual provisioning. Here is my environments.yaml:

environments:
    wable:
        type: manual
        default-series: precise
        bootstrap-host: example.com
        bootstrap-user: root

Before adding new units, I cleaned up each machine with:

apt-get update && apt-get install curl && curl https://dl.dropboxusercontent.com/u/X/juju-agent.sh | sh

Because of the X, it will not work for you, but here's the script (remember to change the X):

#!/bin/bash
# curl https://dl.dropboxusercontent.com/u/x/juju-agent.sh | sh
locale-gen en_US.UTF-8
dpkg-reconfigure locales
apt-get purge apache2.2-common -y
apt-get dist-upgrade -y
apt-get autoremove -y
apt-get install dbus -y
mkdir $HOME/.ssh
echo 'ssh-rsa yourpubkey' > $HOME/.ssh/authorized_keys

And then I added new units with juju add-machine ssh:root@example.com with no errors.

Then I deployed 2 mysql units and 4 ghost units, with haproxy as follows (as we're using manual provisioning, we need to specify the machines, otherwise it will not work):

juju deploy mysql --to 0
juju add-unit mysql --to 1
juju deploy haproxy --to 2
juju deploy ghost --to 0
juju add-unit ghost --to 1
juju add-unit ghost --to 2
juju add-unit ghost --to 3
juju add-relation mysql ghost
juju add-relation haproxy ghost
juju expose haproxy

Wait for the units to deploy before adding the relations.
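Manual provisioning gives little feedback, so a simple way to watch progress is to poll juju status until every unit reports as started, for example:

watch -n 30 juju status    # re-run status every 30s; wait for agent-state: started on all units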

And voilà:

All this power is just for testing; I will change this setup soon, as my blog will never have the engagement to justify that kind of scale. Ahaha

Here's my juju-gui canvas:

I would like to say thanks to hatch, the creator of the Ghost charm, who helped me a lot with some broken deploys in #juju at irc.freenode.org, and to quote a related post of his: http://fromanegg.com/post/97035773367/juju-explain-it-to-me-like-im-5

What I would like to have next:

Look for me in #juju at irc.freenode.org if you run into any problems.

Harald Sitter: My Family…

Planet Ubuntu - Fri, 2014-09-12 15:33

… is the best in the whole wide world!

Ubuntu Podcast from the UK LoCo: S07E24 – The One with the Holiday Armadillo

Planet Ubuntu - Fri, 2014-09-12 13:05

We’re back with Season Seven, Episode Twenty-Four of the Ubuntu Podcast! Alan Pope, Mark Johnson, and Laura Cowen are drinking tea and eating Battenburg cake in Studio L.

Download OGG | Download MP3

In this week’s show:

  • We discuss whether communities suck…

  • We also discuss:

    • Aurasma augmented reality
    • Upgrading to 14.10
    • Converting a family member to Ubuntu
  • We share some Command Line Lurve which does this (from Patrick Archibald on G+): curl -X POST -H "Content-Type: application/json" -d '{"jsonrpc":"2.0","method":"GUI.ShowNotification","params":{"title":"This is the title of the message","message":"This is the body of the message"},"id":1}' http://wopr.local:8080/jsonrpc
  • And we read your feedback. Thanks for sending it in!

We’ll be back next week, so please send your comments and suggestions to: podcast@ubuntu-uk.org
Join us on IRC in #uupc on Freenode
Leave a voicemail via phone: +44 (0) 203 298 1600, sip: podcast@sip.ubuntu-uk.org and skype: ubuntuukpodcast
Follow us on Twitter
Find our Facebook Fan Page
Follow us on Google+

Benjamin Kerensa: Off to Berlin

Planet Ubuntu - Thu, 2014-09-11 20:45
Right now, as this post is published, I’m probably settling into my seat for the next ten hours headed to Berlin, Germany as part of a group of leaders at Mozilla who will be meeting for ReMo Camp. This is my first transatlantic trip ever and perhaps my longest flight so far, so I’m both […]

Jonathan Riddell: Akademy Wednesday and Thursday Photo Blog

Planet Ubuntu - Thu, 2014-09-11 20:34
KDE Project:


Hacking hard in the hacking room


Blue Systems Beer


You will keep GStreamer support in Phonon


Boat trip on the loch


Off the ferry


Bushan leads the way


A fairy castle appears in the distance


The talent show judges


Sinny models our stylish Kubuntu polo shirts


Kubuntu Day discussions with developers from the Munich Kubuntu rollout


Kubuntu Day group photo with people from Kubuntu, KDE, Debian, Limux and Net-runner


Jonathan gets a messiah complex

Benjamin Kerensa: On Wearable Technology

Planet Ubuntu - Thu, 2014-09-11 12:00
The Web has been filled with buzz of the news of new Android watches and the new Apple Watch but I’m still skeptical as to whether these first iterations of Smartwatches will have the kind of sales Apple and Google are hoping for. I do think wearable tech is the future. In fact, I owned […]

Valorie Zimmerman: Accessible KDE, Kubuntu

Planet Ubuntu - Thu, 2014-09-11 10:31
KDE is community. We welcome everyone, and make our software work for everyone. So, accessibility is central to all our work: in the community, in testing, coding, and documentation. Frederik has been working to make this true in Qt and in KDE for many years; Peter has done valuable work with Simon, and Jose is doing testing and writing patches to fix things.

Now that KF5 is rolling out, however, we're finding a few problems with our KDE software, such as widgets, KDE configuration modules (KCMs), and even websites. The a11y team is too small to handle all this! Obviously, we need to grow the team.

So we've decided to make heavier use of the forums, where we might find new testers and folks to fix the problems, and perhaps even people to fix up the https://accessibility.kde.org/ website to be as
awesome as the KDE-Edu site. The Visual Design Group are the leaders here, and they are awesome!

Please drop by #kde-accessibility on Freenode or the Forum https://forum.kde.org/viewforum.php?f=216 to read up on what needs doing, and learn how to test. People stepping up to learn forum
moderation are also welcome. Frederik has recently posted about the BoF: https://forum.kde.org/viewtopic.php?f=216&t=122808

A11y was a topic in the Kubuntu BoF today, and we're going to make a new push to make sure our accessibility options work well out of the box, i.e. from first boot. This will involve working with the Ubuntu a11y team, yeah!

More information is available at
https://community.kde.org/Accessibility and
https://userbase.kde.org/Applications/Accessibility

Canonical Design Team: Canonical and Ubuntu at dConstruct

Planet Ubuntu - Thu, 2014-09-11 09:57

Brighton is not just a lovely seaside town, mostly known for being overcrowded in summer by Londoners in search of a bit of escapism, but also the home of a thriving community of designers, makers and entrepreneurs. Some of these people run dConstruct, a gathering where creative minds of all sorts converge every year to discuss important themes around digital innovation and culture.

When I found out that we were sponsoring the conference this year, I promptly jumped in to help my colleagues in the Phone, Web and Juju design teams. Our stand was situated in the foyer of the Brighton Dome, flashing the orange banner of Ubuntu and a number of origami unicorns.

We had an incredibly positive response from the attendees, as our stand was literally teeming with Ubuntu enthusiasts who were really keen to check our progress with the phone. We had a few BQ phones on display where we showed the new features and designs.

For us, it was a great occasion to gather fresh impressions of the user experience on the phone and across a variety of apps. After a few moments, people started to understand the edge interactions and began to swipe left and right, giving positive feedback on the responsiveness of the UI. Our pre-release models of BQ phones don't have the final shell and they still display softkeys; as a result, some people found this confusing. We took the opportunity to quickly design our own custom BQ phone by using a bunch of Ubuntu stickers… and voilà, problem solved! ;)

Our ‘Make your Unicorn’ competition had a fantastic response. To celebrate the coming release of Utopic Unicorn and of the BQ phone, the maker of the best origami unicorn was awarded a new phone. The crowd did not hesitate to tackle the complex paper-bending challenge and came up with a bunch of creative outcomes. We were very impressed to see how many people managed to complete the instructions, as I didn't manage to get beyond step 15…

Jono Bacon: Ubuntu for Smartwatches?

Planet Ubuntu - Thu, 2014-09-11 05:11

I read an interesting article on OMG! Ubuntu! about whether Canonical will enter the wearables business, now the smartwatch industry is hotting up.

On one hand (pun intended), it makes sense. Ubuntu is all about convergence; a core platform from top to bottom that adjusts and expands across different form factors, with a core developer platform, and a focus on content.

On the other hand (pun still intended), the wearables market is another complex economy, heavily tethered, both technically and strategically, to existing markets and devices. If we think success in the phone market is complex, success in the burgeoning wearables market is going to be just as complex too.

Now, to be clear, I have no idea whether Canonical is planning on entering the wearables market or not. It wouldn’t surprise me if this is a market of interest though as the investment in Ubuntu over the last few years has been in building a platform that could ultimately scale. It is logical to think this could map to a smartwatch as “another form factor”.

So, if technically it is doable, Canonical should do it, right?

No.

I want to see my friends and former colleagues at Canonical succeed, and this needs focus.

Great companies focus on doing a small set of things and doing them well. Spiraling off in a hundred different directions means dividing teams, dividing focus, and limiting opportunities. To use a tired saying…being a “jack of all trades and master of none”.

While all companies can be tempted in this direction, I am happy that on the client side of Canonical, the focus has been firmly placed on phone. TV has taken a back seat, tablet has taken a back seat. The focus has been on building a featureful, high-quality platform that is focused on phone, and bringing that product to market.

I would hate to think that Canonical would get distracted internally by chasing the smartwatch market while it is young. I believe it would do little but direct resources away from the major push now, which is phone.

If there is something we can learn from Apple here, it is that it isn't important to be first. It is important to be the best. Apple rarely ships the first innovation, but they consistently knock it out of the park by building brilliant products that become best in class.

So, I have no doubt that the exciting new convergent future of Ubuntu could run on a watch, but let's keep our heads down and get the phone out there and rocking; the wearables and other form factors can come later.

David Tomaschik: [CVE-2014-5204] Wordpress nonce Issues

Planet Ubuntu - Wed, 2014-09-10 22:54

Wordpress 3.9.2, released August 6th, contained fixes for two closely related vulnerabilities (CVE-2014-5204) in the way it handles Wordpress nonces (CSRF Tokens, essentially) that I reported to the Wordpress Security Team. I'd like to say the delay in my publishing this write-up was to allow people time to patch, but the reality is I've just been busy and haven't gotten around to this.

TL;DR: Wordpress < 3.9.2 generated nonces in a manner that would allow an attacker to generate valid nonces for other users for a small subset of possible actions. Additionally, nonces were compared with ==, leading to a timing attack against nonce comparison. (Although this is very difficult to execute.)

Review of CSRF Protection

A common technique for avoiding Cross Site Request Forgery (CSRF) is to have the server generate a token specific to the current user, include that in the page, and then have the client echo that token back with the request. This way the server can tell that the request was in response to a page from the server, rather than a request triggered on the user's behalf by an attacker. OWASP calls this the Synchronizer Token Pattern and one of the requirements is that an attacker is not able to predict or determine tokens for another user.
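As a refresher, a minimal sketch of the synchronizer token pattern in plain PHP might look like this (illustrative only; the field and session names are made up, and hash_equals needs PHP 5.6+ or a polyfill):

<?php
session_start();

// Generate a per-session token once and embed it in every form.
if (empty($_SESSION['csrf_token'])) {
    $_SESSION['csrf_token'] = bin2hex(openssl_random_pseudo_bytes(32));
}
echo '<input type="hidden" name="csrf_token" value="'
    . htmlspecialchars($_SESSION['csrf_token']) . '">';

// On submission, reject any request whose echoed token doesn't match.
if (!isset($_POST['csrf_token'])
    || !hash_equals($_SESSION['csrf_token'], $_POST['csrf_token'])) {
    http_response_code(403);
    exit('CSRF token mismatch');
}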

Wordpress Nonces

Wordpress uses what they call "nonces" (but they're not, in fact, guaranteed to be used only once) for CSRF protection. These nonces include a timestamp, a user identifier, and an action, all of which are part of best practices for CSRF tokens. These values are HMAC'd with a secret key to generate the final token. All of this is in accordance with best practices, and at first blush, the nonce generation code looks good. Here's how nonces were generated prior to the 3.9.2 fix:

function wp_create_nonce($action = -1) {
    $user = wp_get_current_user();
    $uid = (int) $user->ID;

    # snipped

    $i = wp_nonce_tick();

    return substr(wp_hash($i . $action . $uid, 'nonce'), -12, 10);
}

wp_nonce_tick returns a monotonically increasing value that increments every 12 hours to provide a timeout on the resulting nonce. $user->ID is the auto-increment id column from the database. wp_hash performs an HMAC-MD5 using a key selected by the 2nd argument, the nonce key in this case. So, we're essentially getting an HMAC of a string concatenation of the current time, the action value passed in, and the current user's UID. Assuming HMAC is strong, we've got a user-, action- and time-specific token, right?

Wrong. What if we can figure out a way to collide inputs to the HMAC? Turns out this is pretty easy, actually. Let's look at some instances where wp_create_nonce is used:

wp_create_nonce( "approve-comment_$comment->comment_ID" )
wp_create_nonce( 'set_post_thumbnail-' . $post->ID );
wp_create_nonce( 'update-post_' . $attachment->ID );

In more than one case, we see places where nonces are created that end in an ID value (an integer from the database). Note that these action values are immediately before the UID, also an integer. This means that once the concatenation is done, there is no separation between the integer values of the action and the UID, leading to collisions in the hash input, and consequently the same nonce value being generated. Take, for example, an installation where users are privileged to update their own post but not those of other users. Let's take user 1 and post 32, and user 21 and post 3. What are the respective inputs to wp_hash? (I'm substituting 0 for the timestamp value as it's the same for all users at the same time.)

$i . 'update-post_32' . 1  => '0update-post_321'
$i . 'update-post_3' . 21  => '0update-post_321'

Despite being two separate users and two separate actions, their nonce values will be the same. While this is fairly limited in what an attacker can do (you can't pick arbitrary users and values, only "related" users and values), it's also very easy to fix and completely eliminate the hole: simply add a non-integer separator between the segments of the hash input. Wordpress 3.9.2 now inserts a | between each segment, so now the hash inputs look like this:

$i . '|' . 'update-post_32' . '|' . 1  => '0|update-post_32|1'
$i . '|' . 'update-post_3' . '|' . 21  => '0|update-post_3|21'
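A couple of lines of standalone PHP make the before/after concrete; wp_hash itself is omitted because only its inputs matter here:

<?php
$i = 0;  // stand-in for wp_nonce_tick(); identical for all users at a given moment

// Pre-3.9.2: the integer action suffix and the integer UID run together.
var_dump(($i . 'update-post_32' . 1) === ($i . 'update-post_3' . 21));      // bool(true)

// 3.9.2: the '|' separators keep the segments distinct.
var_dump(($i . '|update-post_32|' . 1) === ($i . '|update-post_3|' . 21));  // bool(false)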

No longer will the HMACs collide, so now two distinct nonces are generated, closing the CSRF hole. The implementation also now includes your session token, making it even harder for an attacker to generate a collision, though I can't think of a specific hole that this fixes (it does generate new nonces after a logout/login):

function wp_create_nonce($action = -1) {
    $user = wp_get_current_user();
    $uid = (int) $user->ID;

    # snipped

    $token = wp_get_session_token();
    $i = wp_nonce_tick();

    return substr( wp_hash( $i . '|' . $action . '|' . $uid . '|' . $token, 'nonce' ), -12, 10 );
}

Timing Attack

Though probably very difficult to exploit on modern systems, using PHP's == to compare hashes results in a timing attack (not to mention the possibility of running afoul of PHP's bizarre comparison behavior).

Formerly:

if ( substr(wp_hash($i . $action . $uid, 'nonce'), -12, 10) === $nonce ) {
    ...

Now:

$expected = substr( wp_hash( $i . '|' . $action . '|' . $uid . '|' . $token, 'nonce'), -12, 10 );
if ( hash_equals( $expected, $nonce ) ) {
    ...

hash_equals was added in PHP 5.6, but Wordpress provides their own, using a fairly common constant-time comparison pattern, if you don't have it.
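For reference, that pattern usually looks something like the sketch below (my paraphrase, not WordPress's exact polyfill): XOR each byte pair and OR the results together, so the running time doesn't depend on where the first mismatch occurs.

<?php
// Sketch of a constant-time string comparison; hypothetical helper name.
function hash_equals_sketch($known, $user) {
    if (!is_string($known) || !is_string($user)) {
        return false;
    }
    if (strlen($known) !== strlen($user)) {
        return false;  // the length can leak, but the contents don't
    }
    $result = 0;
    for ($i = 0; $i < strlen($known); $i++) {
        $result |= ord($known[$i]) ^ ord($user[$i]);
    }
    return $result === 0;
}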

Summary

Even when you include all the right things in your CSRF implementation, it's still possible to run into trouble if you combine them the wrong way. Much like a hash length extension attack, cryptography won't save you if you're putting things together without thinking about how an attacker can alter or vary it.

I'd like to thank the Wordpress security team for their responsiveness when I reported the issues here. I have nothing but positive things to say about the team and my interactions with them.
