Planet Ubuntu
Planet Ubuntu - http://planet.ubuntu.com/

David Tomaschik: Getting Started in Information Security

Sat, 2014-09-13 19:30

I've only been an information security practitioner for about a year now, but I'd been doing things on my own for years before that. However, many people are just getting into security, and I've recently stumbled on a number of resources for newcomers, so I thought I'd put together a short list.

Stuart Langridge: Developers are users too

Sat, 2014-09-13 15:51

When you talk about the “user experience” of the thing you’re building, remember that developers who use your APIs are users too. And you need to think about their experience.

We seem to have created a world centred on github where everyone has to manage dependencies by hand, like we had to in 1997. This problem was completely solved by apt twenty years ago, but the new cool github world is, it seems, too cool to care about that. Go off to get some new project by git cloning it and it’s quite likely to say “oh, and it depends on $SOME_OTHER_PROJECT (here’s a link to that project’s github repo)”. And then you have to go fetch both and set them up yourself. Which is really annoying.

Now, there are good reasons not to care about existing dependency package management systems such as apt. Getting stuff into Ubuntu is hard, laborious work and most projects don’t want to do it. PPAs make it easier, but not much easier; if you’re building a thing and not specifically targeting Ubuntu with it, you don’t want to have to learn about Launchpad and PPAs and build recipes and whatnot. This sort of problem is also solved neatly for packages in a specific language by that language’s own packaging system; Python stuff is installable with pip install whatever and a virtualenv; Node stuff is installable with npm install whatever; all these take care of fetching any dependent stuff. But this rush for each language to have its own “app store” for its apps and libraries means that combining things from different languages is still the same 20th century nightmare.

Take, for example, Mozilla’s new Firefox Tools Adaptor. I’m not picking on Mozilla here; the FTA is new, and it’s pretty cool, and it’s not finished yet. This is just the latest in a long line of things which exhibit the problem. The FTA allows you to use the Firefox devtools to debug web things running in other browsers. Including, excitingly, debugging things running in iOS Safari on the iPhone. Now, doing that’s a pain in the ringpiece at the moment; you have to install Google’s ios-webkit-debug-proxy, which needs to be compiled, and Apple break compatibility with it all the time and so you have to fetch and build new versions of libimobiledevice or something. So I was excited to see that the new Firefox Tools Adaptor promises to allow debugging on iOS Safari just by installing a Firefox extension.

And then I read about it, and it says, “The Adapter’s iOS support uses Google’s ios-webkit-debug-proxy. Until that support is built directly into the add-on, you’ll need to install and run the ios-webkit-debug-proxy binary yourself”. Sigh. That’s the hard part. And it’s not any easier here.

Again, I’m not blaming Mozilla here — they plan to fix this, but they’ll have to fix it by essentially bundling ios-webkit-debug-proxy with the FTA. That’ll work, and that’s an important thing for them to do in order to provide a slick user experience for developers using this tool (because “download and compile this other thing first” is not ever ever a nice user experience). This is sorta kinda solved by brew for Mac users, but there’s a lot of stuff not in brew either. Still, there is willingness to solve it that way by having a packaging system. But it’s annoying that Ubuntu already has one and people are loath to use it. Using it makes for a better developer user experience. That’s important.

John Baer: Get a free Chromebook from the Google Lending Library

Fri, 2014-09-12 22:52

Are you enrolled in college, in need of a laptop computer, and willing to accept a new Chromebook? If so, Google has a deal for you and it’s called the Google Lending Library.

The Chromebook Lending Library is traveling to 12 college campuses across the U.S. loaded with the latest Chromebooks. The Lending Library is a bit like your traditional library, but instead of books, we’re letting students borrow Chromebooks (no library card needed). Students can use a Chromebook during the week for life on campus— whether it’s in class, during an all-nighter, or browsing the internet in their dorm.

Lindsay Rumer, Chrome Marketing


Assuming you attend one of the partnered universities, here is how it works.

1. Request a Chromebook from the Library
2. Agree to the Terms of Use Agreement
3. Use the Chromebook as you like while you attend school
4. Return it when you want or when you leave

What happens if you don’t return it? Expect to receive a bill for the fair market value not to exceed $220.

Here’s the fine print.

“Evaluation Period” means the period of time specified to you at the time of checkout of a Device.

“Checkout Location” means the location specified by Google where Devices will be issued to you and collected from you.

1.1 Device Use. You may use the Device issued to you for your personal evaluation purposes. Upon your use of the Device, Google transfers title to the Device equipment to you, but retains all ownership rights, title and interest to any Google Devices and services and anything else that Google makes available to you, including without limitation any software on the Device.

1.2 Evaluation Period. You may use the Device during the Evaluation Period. Upon (i) expiration of the Evaluation Period, or (ii) termination of this Agreement, if this Agreement is terminated early in accordance Section 4, you agree to return the Device to the Checkout Location. If you fail to return the Device at the end of the Evaluation Period or upon termination of this Agreement, you agree Google may, to the extent allowed by applicable law, charge you up to the fair market value of the Device less normal wear and tear and any applicable taxes for an amount not to exceed Two Hundred Twenty ($220.00) Dollars USD.

1.3 Feedback. Google may ask you to provide feedback about the Device and related Google products optimized for Google Services. You are not required to provide feedback, but, if you do, it must only be from you, truthful, and accurate and you grant Google permission to use your name, logo and feedback in presentations and marketing materials regarding the Device. Your participation in providing feedback may be suspended at any time.

1.4 No Compensation. You will not be compensated for your use of the Devices or for your feedback.

2. Intellectual Property Rights. Nothing in this Agreement grants you intellectual property rights in the Devices or any other materials provided by Google. Except as provided in Section 1.1, Google will own all rights to anything you choose to submit under this Agreement. If that isn’t possible, then you agree to do whichever of the following that Google asks you to do: transfer all of your rights regarding your submissions to Google; give Google an exclusive, irrevocable, worldwide, royalty-free license to your submissions to Google; or grant Google any other reasonable rights. You will transfer your submissions to Google, and sign documents and provide support as requested by Google, and you appoint Google to act on your behalf to secure these rights from you. You waive any moral rights you have and agree not to exercise them, unless you notify Google and follow Google’s instructions.

3. Confidentiality. Your feedback and other submissions, is confidential subject to Google’s use of your feedback pursuant to Section 1.3.

4. Term. This Agreement becomes effective when you click the “I Agree” button and remains in force through the end of the Evaluation Period or earlier if either party gives written termination notice, which will be effective immediately. Upon expiration or termination, you will return the Device as set forth below. Additionally, Google will remove you from any related mailing lists within thirty (30) days of expiration or termination. Sections 1.3, 1.4, and Sections 2 through 5 survive any expiration or termination of this Agreement.

5. Device Returns. You will return the Device(s) to Google or its agents to the Checkout Location at the time specified to you at the time of checkout of the Device or if unavailable, to Google Chromebook Lending Library, 1600 Amphitheatre Parkway, Mountain View, CA 94043. Google may notify you during or after the term of this Agreement regarding return details or fees chargeable to you if you fail to return the Device.


Ayrton Araujo: CloudFlare as a ddclient provider under Debian/Ubuntu

Fri, 2014-09-12 18:47

Dyn's free dynamic DNS service closed on Wednesday, May 7th, 2014.

CloudFlare, however, has a little-known feature that will allow you to update
your DNS records via its API or with a command-line script called ddclient. This will
give you the same result, and it's also free.

Unfortunately, ddclient does not work with CloudFlare out of the box. There is
a patch available, and here is how to hack[1] it up on Debian or Ubuntu; it also works on Raspbian on the Raspberry Pi.

Requirements

Basic command-line skills and a domain name that you own.

CloudFlare

Sign up to CloudFlare and add your domain name.
Follow the instructions; the default values it gives should be fine.

You'll be letting CloudFlare host your domain so you need to adjust the
settings at your registrar.

If you'd like to use a subdomain, add an 'A' record for it. Any IP address
will do for now.

Let's get to business...

Installation

$ sudo apt-get install ddclient

Patch

$ sudo apt-get install curl sendmail libjson-any-perl libio-socket-ssl-perl
$ curl -O http://blog.peter-r.co.uk/uploads/ddclient-3.8.0-cloudflare-22-6-2014.patch
$ sudo patch /usr/sbin/ddclient < ddclient-3.8.0-cloudflare-22-6-2014.patch

Config

$ sudo vi /etc/ddclient.conf

Add:

##
### CloudFlare (cloudflare.com)
###
ssl=yes
use=web, web=dyndns
protocol=cloudflare, \
server=www.cloudflare.com, \
zone=domain.com, \
login=you@email.com, \
password=api-key \
host.domain.com

Comment out:

#daemon=300

Your api-key comes from the account page

ssl=yes might already be in that file

use=web, web=dyndns will use dyndns to check IP (useful for NAT)

You're done. Log in to https://www.cloudflare.com and check that the IP listed for
your domain matches the one reported by http://checkip.dyndns.com
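
If you'd rather script that comparison than eyeball it, here's a small Python sketch (my own helper, not part of ddclient) that fetches your public IP from checkip.dyndns.com and compares it with what your hostname currently resolves to. Note this is only meaningful if the CloudFlare record is DNS-only (not proxied); otherwise the name resolves to CloudFlare's addresses rather than yours.

# check_dyndns.py - compare public IP with the current DNS record (sketch)
import re
import socket
import urllib.request

HOSTNAME = "host.domain.com"  # replace with your record

# checkip.dyndns.com returns a tiny HTML page containing your public IP
html = urllib.request.urlopen("http://checkip.dyndns.com").read().decode()
public_ip = re.search(r"(\d{1,3}(?:\.\d{1,3}){3})", html).group(1)

# what the hostname currently resolves to (CloudFlare's IPs if proxied)
resolved_ip = socket.gethostbyname(HOSTNAME)

print("public IP :", public_ip)
print("DNS record:", resolved_ip)
print("match" if public_ip == resolved_ip else "mismatch - check ddclient or wait for propagation")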

To verify your settings:

sudo ddclient -daemon=0 -debug -verbose -noquiet

Fork this:
https://gist.github.com/ayr-ton/f6db56f15ab083ab6b55

Ayrton Araujo: New blog with Ghost

Fri, 2014-09-12 18:42

Here I am again, moving once more, this time from Octopress to Ghost.
This time I'm moving because I want an easier way to update my blog. Keeping it in a static content generator makes it a little bit harder to update and fix posts. But I will miss some things from Octopress, like the code snippets and the responsive videos plugin. I'm considering making some pull requests with the features I am missing. I will continue using Haroopad for drafting Markdown posts offline.

Well, for this migration I used an account at Wable (because it is really cheap) with 4 VPSes.

As Wable doesn't provide an API and there's no roadmap for one, I used Juju manual provisioning. Here is my environments.yaml:

environments:
  wable:
    type: manual
    default-series: precise
    bootstrap-host: example.com
    bootstrap-user: root

Before adding new units, I cleaned up each machine with:

apt-get update && apt-get install curl && curl https://dl.dropboxusercontent.com/u/X/juju-agent.sh | sh

Because of the X, it will not work for you, but here's the script (remember to change the X):

#!/bin/bash
# curl https://dl.dropboxusercontent.com/u/x/juju-agent.sh | sh
locale-gen en_US.UTF-8
dpkg-reconfigure locales
apt-get purge apache2.2-common -y
apt-get dist-upgrade -y
apt-get autoremove -y
apt-get install dbus -y
mkdir $HOME/.ssh
echo 'ssh-rsa yourpubkey' > $HOME/.ssh/authorized_keys

And then I added new units with juju add-machine ssh:root@example.com with no error.

Then I deployed 2 mysql units and 4 ghost units, with haproxy as follows (as we're using manual provisioning, we need to specify the machines, otherwise it will not work):

juju deploy mysql --to 0
juju add-unit mysql --to 1
juju deploy haproxy --to 2
juju deploy ghost --to 0
juju add-unit ghost --to 1
juju add-unit ghost --to 2
juju add-unit ghost --to 3
juju add-relation mysql ghost
juju add-relation haproxy ghost
juju expose haproxy

Wait for the units to deploy before adding the relations.
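
To know when it's safe to add the relations, the wait can be scripted instead of watching juju status by hand. Here's a rough Python sketch; it assumes the juju 1.x JSON status layout (services → units → agent-state), so adjust the field names if your version differs.

# wait_for_units.py - poll `juju status` until all units report "started" (sketch)
import json
import subprocess
import time

def pending_units():
    out = subprocess.check_output(["juju", "status", "--format=json"])
    status = json.loads(out.decode())
    pending = []
    for name, svc in status.get("services", {}).items():
        for unit, info in (svc.get("units") or {}).items():
            if info.get("agent-state") != "started":
                pending.append(unit)
    return pending

while True:
    waiting = pending_units()
    if not waiting:
        print("all units started - safe to add the relations")
        break
    print("still waiting for:", ", ".join(waiting))
    time.sleep(30)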

And voilà:

All this power is just for testing; I will change this setup soon, as my blog will never have enough engagement to justify that kind of scale. Ahaha

Here's my juju-gui canvas:

I would like to say thanks to hatch, the creator of the Ghost charm, who helped me a lot with some broken deploys in #juju at irc.freenode.org, and to quote a related post of his: http://fromanegg.com/post/97035773367/juju-explain-it-to-me-like-im-5

What I would like to have next:

Look for me in #juju at irc.freenode.org if you run into any problems.

Harald Sitter: My Family…

Fri, 2014-09-12 15:33

… is the best in the whole wide world!

Ubuntu Podcast from the UK LoCo: S07E24 – The One with the Holiday Armadillo

Fri, 2014-09-12 13:05

We’re back with Season Seven, Episode Twenty-Four of the Ubuntu Podcast! Alan Pope, Mark Johnson, and Laura Cowen are drinking tea and eating Battenburg cake in Studio L.


In this week’s show:

  • We discuss whether communities suck…

  • We also discuss:

    • Aurasma augmented reality
    • Upgrading to 14.10
    • Converting a family member to Ubuntu
  • We share some Command Line Lurve which does this (from Patrick Archibald on G+; a Python equivalent follows this list): curl -X POST -H "Content-Type: application/json" -d '{"jsonrpc":"2.0","method":"GUI.ShowNotification","params":{"title":"This is the title of the message","message":"This is the body of the message"},"id":1}' http://wopr.local:8080/jsonrpc
  • And we read your feedback. Thanks for sending it in!
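
For anyone who prefers to script that Command Line Lurve, here's a rough Python equivalent of the curl one-liner above; the endpoint URL, method name and parameters are copied straight from the command, so treat it as a sketch rather than a documented API.

# notify.py - same JSON-RPC POST as the curl command above (sketch)
import json
import urllib.request

payload = {
    "jsonrpc": "2.0",
    "method": "GUI.ShowNotification",
    "params": {
        "title": "This is the title of the message",
        "message": "This is the body of the message",
    },
    "id": 1,
}

req = urllib.request.Request(
    "http://wopr.local:8080/jsonrpc",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(resp.read().decode("utf-8"))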

We’ll be back next week, so please send your comments and suggestions to: podcast@ubuntu-uk.org
Join us on IRC in #uupc on Freenode
Leave a voicemail via phone: +44 (0) 203 298 1600, sip: podcast@sip.ubuntu-uk.org and skype: ubuntuukpodcast
Follow us on Twitter
Find our Facebook Fan Page
Follow us on Google+

Benjamin Kerensa: Off to Berlin

Thu, 2014-09-11 20:45
Right now, as this post is published, I’m probably settling into my seat for the next ten hours headed to Berlin, Germany as part of a group of leaders at Mozilla who will be meeting for ReMo Camp. This is my first transatlantic trip ever and perhaps my longest flight so far, so I’m both […]

Jonathan Riddell: Akademy Wednesday and Thursday Photo Blog

Thu, 2014-09-11 20:34
KDE Project:


Hacking hard in the hacking room


Blue Systems Beer


You will keep GStreamer support in Phonon


Boat trip on the loch


Off the ferry


Bushan leads the way


A fairy castle appears in the distance


The talent show judges


Sinny models our stylish Kubuntu polo shirts


Kubuntu Day discussions with developers from the Munich Kubuntu rollout


Kubuntu Day group photo with people from Kubuntu, KDE, Debian, Limux and Net-runner


Jonathan gets a messiah complex

Benjamin Kerensa: On Wearable Technology

Thu, 2014-09-11 12:00
The Web has been filled with buzz of the news of new Android watches and the new Apple Watch but I’m still skeptical as to whether these first iterations of Smartwatches will have the kind of sales Apple and Google are hoping for. I do think wearable tech is the future. In fact, I owned […]

Valorie Zimmerman: Accessible KDE, Kubuntu

Thu, 2014-09-11 10:31
KDE is community. We welcome everyone, and make our software work for everyone. So, accessibility is central to all our work, in the community, in testing, coding, documentation. Frederik has been working to make this true in Qt and in KDE for many years, Peter has done valuable work with Simon and Jose is doing testing and some patches to fix stuff.

However, now that KF5 is rolling out, we're finding a few problems with our KDE software such as widgets, KDE configuration modules (kcm) and even websites. But the a11y team is too small to handle all this! Obviously, we need to grow the team.

So we've decided to make heavier use of the forums, where we might find new testers and folks to fix the problems, and perhaps even people to fix up the https://accessibility.kde.org/ website to be as
awesome as the KDE-Edu site. The Visual Design Group are the leaders here, and they are awesome!

Please drop by #kde-accessibility on Freenode or the Forum https://forum.kde.org/viewforum.php?f=216 to read up on what needs doing, and learn how to test. People stepping up to learn forum
moderation are also welcome. Frederik has recently posted about the BoF: https://forum.kde.org/viewtopic.php?f=216&t=122808

A11y was a topic in the Kubuntu BoF today, and we're going to make a new push to make sure our accessibility options work well out of the box, i.e. from first boot. This will involve working with the Ubuntu a11y team, yeah!

More information is available at
https://community.kde.org/Accessibility and
https://userbase.kde.org/Applications/Accessibility

Canonical Design Team: Canonical and Ubuntu at dConstruct

Thu, 2014-09-11 09:57

Brighton is not just a lovely seaside town, mostly known for being overcrowded in summer by Londoners in search of a bit of escapism, but also the home of a thriving community of designers, makers and entrepreneurs. Some of these people run dConstruct, a gathering where creative minds of all sorts converge every year to discuss important themes around digital innovation and culture.

When I found out that we were sponsoring the conference this year, I promptly jumped in to help my colleagues in the Phone, Web and Juju design teams. Our stand was situated in the foyer of the Brighton Dome, flashing the orange banner of Ubuntu and a number of origami unicorns.

We had an incredibly positive response from the attendees, as our stand was literally teeming with Ubuntu enthusiasts who were really keen to check our progress with the phone. We had a few BQ phones on display where we showed the new features and designs.

For us, it was a great occasion to gather fresh impressions of the user experience on the phone and across a variety of apps. After a few moments, people started to understand the edge interactions and began to swipe left and right, giving positive feedback on the responsiveness of the UI. Our pre-release models of BQ phones don’t have the final shell and still display softkeys; as a result, some people found this confusing. We took the opportunity to quickly design our own custom BQ phone by using a bunch of Ubuntu stickers… and voilà, problem solved! ;)

Our ‘Make your Unicorn’ competition had a fantastic response. To celebrate the coming release of Utopic Unicorn and of the BQ phone, the maker of the best origami unicorn is being awarded a new phone. The crowd did not hesitate to tackle the complex paper-bending challenge and came up with a bunch of creative outcomes. We were very impressed to see how many people managed to complete the instructions, as I didn’t manage to go beyond step 15…

Jono Bacon: Ubuntu for Smartwatches?

Thu, 2014-09-11 05:11

I read an interesting article on OMG! Ubuntu! about whether Canonical will enter the wearables business, now the smartwatch industry is hotting up.

On one hand (pun intended), it makes sense. Ubuntu is all about convergence; a core platform from top to bottom that adjusts and expands across different form factors, with a core developer platform, and a focus on content.

On the other hand (pun still intended), the wearables market is another complex economy, that is heavily tethered, both technically and strategically, to existing markets and devices. If we think success in the phone market is complex, success in the burgeoning wearables market is going to be just as complex too.

Now, to be clear, I have no idea whether Canonical is planning on entering the wearables market or not. It wouldn’t surprise me if this is a market of interest though as the investment in Ubuntu over the last few years has been in building a platform that could ultimately scale. It is logical to think this could map to a smartwatch as “another form factor”.

So, if technically it is doable, Canonical should do it, right?

No.

I want to see my friends and former colleagues at Canonical succeed, and this needs focus.

Great companies focus on doing a small set of things and doing them well. Spiraling off in a hundred different directions means dividing teams, dividing focus, and limiting opportunities. To use a tired saying…being a “jack of all trades and master of none”.

While all companies can be tempted in this direction, I am happy that on the client side of Canonical, the focus has been firmly placed on phone. TV has taken a back seat, tablet has taken a back seat. The focus has been on building a featureful, high-quality platform that is focused on phone, and bringing that product to market.

I would hate to think that Canonical would get distracted internally by chasing the smartwatch market while it is young. I believe it would do little but direct resources away from the major push now, which is phone.

If there is something we can learn from Apple here, it is that it isn’t important to be first. It is important to be the best. Apple rarely ships the first innovation, but they consistently knock it out of the park by building brilliant products that become best in class.

So, I have no doubt that the exciting new convergent future of Ubuntu could run on a watch, but let’s keep our heads down and get the phone out there and rocking, and the wearables and other form factors can come later.

David Tomaschik: [CVE-2014-5204] Wordpress nonce Issues

Wed, 2014-09-10 22:54

Wordpress 3.9.2, released August 6th, contained fixes for two closely related vulnerabilities (CVE-2014-5204) in the way it handles Wordpress nonces (CSRF Tokens, essentially) that I reported to the Wordpress Security Team. I'd like to say the delay in my publishing this write-up was to allow people time to patch, but the reality is I've just been busy and haven't gotten around to this.

TL;DR: Wordpress < 3.9.2 generated nonces in a manner that would allow an attacker to generate valid nonces for other users for a small subset of possible actions. Additionally, nonces were compared with ==, leading to a timing attack against nonce comparison. (Although this is very difficult to execute.)

Review of CSRF Protection

A common technique for avoiding Cross Site Request Forgery (CSRF) is to have the server generate a token specific to the current user, include that in the page, and then have the client echo that token back with the request. This way the server can tell that the request was in response to a page from the server, rather than a request triggered on the user's behalf by an attacker. OWASP calls this the Synchronizer Token Pattern and one of the requirements is that an attacker is not able to predict or determine tokens for another user.
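
To make the pattern concrete, here is a minimal, framework-agnostic sketch in Python; the function names and the choice of HMAC-SHA256 are mine for illustration, not taken from Wordpress or any specific framework.

# csrf_token.py - minimal synchronizer token pattern (illustrative sketch)
import hashlib
import hmac

SECRET_KEY = b"server-side-secret"  # stays on the server, never sent to clients

def make_csrf_token(session_id: str, action: str) -> str:
    # Bind the token to the user's session and the specific action.
    msg = (session_id + "|" + action).encode()
    return hmac.new(SECRET_KEY, msg, hashlib.sha256).hexdigest()

def verify_csrf_token(session_id: str, action: str, token: str) -> bool:
    expected = make_csrf_token(session_id, action)
    return hmac.compare_digest(expected, token)

# The server embeds make_csrf_token(...) in each rendered form; on POST it
# calls verify_csrf_token(...) with the echoed value before doing anything.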

Wordpress Nonces

Wordpress uses what they call "nonces" (but they're not, in fact, guaranteed to be used only once) for CSRF protection. These nonces include a timestamp, a user identifier, and an action, all of which are part of best practices for CSRF tokens. These values are HMAC'd with a secret key to generate the final token. All of this is in accordance with best practices, and at first blush, the nonce generation code looks good. Here's how nonces were generated prior to the 3.9.2 fix:

function wp_create_nonce($action = -1) {
    $user = wp_get_current_user();
    $uid = (int) $user->ID;
    # snipped
    $i = wp_nonce_tick();
    return substr(wp_hash($i . $action . $uid, 'nonce'), -12, 10);
}

wp_nonce_tick returns a monotonically increasing value that increments every 12 hours to provide a timeout on the resulting nonce. $user->ID is the auto-increment id column from the database. wp_hash performs an HMAC-MD5 using a key selected by the 2nd argument, the nonce key in this case. So, we're essentially getting an HMAC of a string concatenation of the current time, the action value passed in, and the current user's UID. Assuming HMAC is strong, we've got a user-, action- and time-specific token, right?

Wrong. What if we can figure out a way to collide inputs to the HMAC? Turns out this is pretty easy, actually. Let's look at some instances where wp_create_nonce is used:

wp_create_nonce( "approve-comment_$comment->comment_ID" )
wp_create_nonce( 'set_post_thumbnail-' . $post->ID );
wp_create_nonce( 'update-post_' . $attachment->ID );

In more than one case, we see places where nonces are created that end in an ID value (an integer from the database). Note that these action values are immediately before the UID, also an integer. This means that once the concatenation is done, there is no separation between the integer values of the action and the UID, leading to collisions in the hash input, and consequently the same nonce value being generated. Take, for example, an installation where users are privileged to update their own post but not those of other users. Let's take user 1 and post 32, and user 21 and post 3. What are the respective inputs to wp_hash? (I'm substituting 0 for the timestamp value as it's the same for all users at the same time.)

$i . 'update-post_32' . 1  => '0update-post_321'
$i . 'update-post_3'  . 21 => '0update-post_321'

Despite being two separate users and two separate actions, their nonce values will be the same. While this is fairly limited in what an attacker can do (you can't pick arbitrary users and values, only "related" users and values), it's also very easy to fix and completely eliminate the hole: simply add a non-integer separator between the segments of the hash input. Wordpress 3.9.2 now inserts a | between each segment, so now the hash inputs look like this:

$i . '|' . 'update-post_32' . '|' . 1  => '0|update-post_32|1'
$i . '|' . 'update-post_3'  . '|' . 21 => '0|update-post_3|21'
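
To see the collision end to end, here's a quick stand-alone sketch (in Python rather than PHP, with a made-up key and tick value, and without the session token that 3.9.2 also adds): the pre-3.9.2 concatenation feeds the exact same string to the HMAC for both users, while the pipe-separated version does not.

# nonce_collision.py - demonstrates the input collision described above (sketch)
import hashlib
import hmac

KEY = b"not-the-real-nonce-key"  # stand-in for Wordpress's nonce key
tick = "0"                       # stand-in for wp_nonce_tick()

def old_nonce(action, uid):
    msg = (tick + action + str(uid)).encode()              # no separators
    return hmac.new(KEY, msg, hashlib.md5).hexdigest()[-12:-2]

def new_nonce(action, uid):
    msg = (tick + "|" + action + "|" + str(uid)).encode()  # 3.9.2 separators
    return hmac.new(KEY, msg, hashlib.md5).hexdigest()[-12:-2]

print(old_nonce("update-post_32", 1) == old_nonce("update-post_3", 21))  # True
print(new_nonce("update-post_32", 1) == new_nonce("update-post_3", 21))  # False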

No longer will the HMACs collide, so now two distinct nonces are generated, closing the CSRF hole. The implementation also now includes your session token, making it even harder for an attacker to generate a collision, though I can't think of a specific hole that this fixes (it does generate new nonces after a logout/login):

function wp_create_nonce($action = -1) {
    $user = wp_get_current_user();
    $uid = (int) $user->ID;
    # snipped
    $token = wp_get_session_token();
    $i = wp_nonce_tick();
    return substr( wp_hash( $i . '|' . $action . '|' . $uid . '|' . $token, 'nonce' ), -12, 10 );
}

Timing Attack

Though probably very difficult to exploit on modern systems, using PHP's == to compare hashes results in a timing attack (not to mention the possibility of running afoul of PHP's bizarre comparison behavior).

Formerly:

if ( substr(wp_hash($i . $action . $uid, 'nonce'), -12, 10) === $nonce ) {
...

Now:

$expected = substr( wp_hash( $i . '|' . $action . '|' . $uid . '|' . $token, 'nonce'), -12, 10 );
if ( hash_equals( $expected, $nonce ) ) {
...

hash_equals was added in PHP 5.6, but Wordpress provides its own implementation, using a fairly common constant-time comparison pattern, if you don't have it.
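
For readers unfamiliar with the pattern, a constant-time comparison XORs the strings byte by byte and only looks at the accumulated result at the end, so the running time doesn't depend on where the first difference occurs. Here's a minimal Python sketch of the idea (not Wordpress's actual code):

# constant_time_compare.py - minimal constant-time comparison (sketch)
def constant_time_equals(known: str, user_supplied: str) -> bool:
    # The secret-dependent part is the byte-by-byte comparison, which always
    # runs to the end instead of returning at the first mismatching byte.
    if len(known) != len(user_supplied):
        return False
    result = 0
    for a, b in zip(known, user_supplied):
        result |= ord(a) ^ ord(b)
    return result == 0

# In Python you'd normally just use hmac.compare_digest(a, b) instead.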

Summary

Even when you include all the right things in your CSRF implementation, it's still possible to run into trouble if you combine them the wrong way. Much like a hash length extension attack, cryptography won't save you if you're putting things together without thinking about how an attacker can alter or vary it.

I'd like to thank the Wordpress security team for their responsiveness when I reported the issues here. I have nothing but positive things to say about the team and my interactions with them.

Svetlana Belkin: Open Science: Improving Collaboration Between Researchers

Wed, 2014-09-10 22:39

The Open Source movement has evolved into other areas of computing. Open Data, Open Hardware, and, the topic that I want to talk about, Open Science are three examples of this. Since I’m a biologist, I’m deeply connected to the science community, but I also want to tie my hobby of FOSS/Linux into my work. There are many non-coding (and coding) based tools and groups that one can use for research, and I want to talk about a few of them.

Mozilla Science Lab

Mozilla, the creators of Firefox and Thunderbird, started a group last year that aims to help scientists, “to use the power of the open web to change the way science is done. [They] build educational resources, tools and prototypes for the research community to make science more open, collaborative and efficient.” (main page of Mozilla Science Lab).

Right now, they are focusing on teaching scientists basic research skills via the Software Carpentry project. But I know that they are planning some projects on the community-building side for non-coders. I don’t know what those projects are yet, but I know that they will be listed soon on the group's mailing list. For myself, I can’t wait to get my hands on those projects to help them grow.

Open Science Framework

Another fairly new project, started within the last two years by the Center for Open Science, focuses on creating a framework that allows scientists to manage the “entire research lifecycle: planning, execution, reporting, archiving, and discovery” (main page of OSF) and to share it with other people on their teams, even when they are not in the same place as the head researcher.

I think this is one of the best tools out there because it allows you to upload things to the site directly and also pull them in from Dropbox and other services. I have played around with it a bit but have not used it fully; when I do, I will write a post about it.

Open Notebook Science

This is maybe the oldest project that there is for Open Science: Open Notebook Science. It’s the idea of having the lab notebook publicly available online. There is a small network of these notebooks.

I think, along with the OSF project, it is one of the best tools out there, mainly because the data and everything else are publicly available online for everyone to learn from your mistakes or to work with the data.

Hopefully, as time goes by, these projects will grow and researchers will be able to collaborate better.

 


Dustin Kirkland: Deploy OpenStack IceHouse like a Boss!

Wed, 2014-09-10 21:54

This little snippet of ~200 lines of YAML is the exact OpenStack that I'm deploying tonight, at the OpenStack Austin Meetup.

Anyone with a working Juju and MAAS setup and 7 registered servers should be able to deploy this same OpenStack setup in about 12 minutes, with a single command.


$ wget http://people.canonical.com/~kirkland/icehouseOB.yaml
$ juju-deployer -c icehouseOB.yaml
$ cat icehouseOB.yaml

icehouse:
  overrides:
    openstack-origin: "cloud:trusty-icehouse"
    source: "distro"
  services:
    ceph:
      charm: "cs:trusty/ceph-27"
      num_units: 3
      constraints: tags=physical
      options:
        fsid: "9e7aac42-4bf4-11e3-b4b7-5254006a039c"
        "monitor-secret": AQAAvoJSOAv/NRAAgvXP8d7iXN7lWYbvDZzm2Q==
        "osd-devices": "/srv"
        "osd-reformat": "yes"
      annotations:
        "gui-x": "2648.6688842773438"
        "gui-y": "708.3873901367188"
    keystone:
      charm: "cs:trusty/keystone-5"
      num_units: 1
      constraints: tags=physical
      options:
        "admin-password": "admin"
        "admin-token": "admin"
      annotations:
        "gui-x": "2013.905517578125"
        "gui-y": "75.58013916015625"
    "nova-compute":
      charm: "cs:trusty/nova-compute-3"
      num_units: 3
      constraints: tags=physical
      to: [ceph=0, ceph=1, ceph=2]
      options:
        "flat-interface": eth0
      annotations:
        "gui-x": "776.1040649414062"
        "gui-y": "-81.22811031341553"
    "neutron-gateway":
      charm: "cs:trusty/quantum-gateway-3"
      num_units: 1
      constraints: tags=virtual
      options:
        ext-port: eth1
        instance-mtu: 1400
      annotations:
        "gui-x": "329.0572509765625"
        "gui-y": "46.4658203125"
    "nova-cloud-controller":
      charm: "cs:trusty/nova-cloud-controller-41"
      num_units: 1
      constraints: tags=physical
      options:
        "network-manager": Neutron
      annotations:
        "gui-x": "1388.40185546875"
        "gui-y": "-118.01156234741211"
    rabbitmq:
      charm: "cs:trusty/rabbitmq-server-4"
      num_units: 1
      to: mysql
      annotations:
        "gui-x": "633.8120727539062"
        "gui-y": "862.6530151367188"
    glance:
      charm: "cs:trusty/glance-3"
      num_units: 1
      to: nova-cloud-controller
      annotations:
        "gui-x": "1147.3269653320312"
        "gui-y": "1389.5643157958984"
    cinder:
      charm: "cs:trusty/cinder-4"
      num_units: 1
      to: nova-cloud-controller
      options:
        "block-device": none
      annotations:
        "gui-x": "1752.32568359375"
        "gui-y": "1365.716194152832"
    "ceph-radosgw":
      charm: "cs:trusty/ceph-radosgw-3"
      num_units: 1
      to: nova-cloud-controller
      annotations:
        "gui-x": "2216.68212890625"
        "gui-y": "697.16796875"
    cinder-ceph:
      charm: "cs:trusty/cinder-ceph-1"
      num_units: 0
      annotations:
        "gui-x": "2257.5515747070312"
        "gui-y": "1231.2130126953125"
    "openstack-dashboard":
      charm: "cs:trusty/openstack-dashboard-4"
      num_units: 1
      to: "keystone"
      options:
        webroot: "/"
      annotations:
        "gui-x": "2353.6898193359375"
        "gui-y": "-94.2642593383789"
    mysql:
      charm: "cs:trusty/mysql-1"
      num_units: 1
      constraints: tags=physical
      options:
        "dataset-size": "20%"
      annotations:
        "gui-x": "364.4567565917969"
        "gui-y": "1067.5167846679688"
    mongodb:
      charm: "cs:trusty/mongodb-0"
      num_units: 1
      constraints: tags=physical
      annotations:
        "gui-x": "-70.0399979352951"
        "gui-y": "1282.8224487304688"
    ceilometer:
      charm: "cs:trusty/ceilometer-0"
      num_units: 1
      to: mongodb
      annotations:
        "gui-x": "-78.13333225250244"
        "gui-y": "919.3128051757812"
    ceilometer-agent:
      charm: "cs:trusty/ceilometer-agent-0"
      num_units: 0
      annotations:
        "gui-x": "-90.9158582687378"
        "gui-y": "562.5347595214844"
    heat:
      charm: "cs:trusty/heat-0"
      num_units: 1
      to: mongodb
      annotations:
        "gui-x": "494.94012451171875"
        "gui-y": "1363.6024169921875"
    ntp:
      charm: "cs:trusty/ntp-4"
      num_units: 0
      annotations:
        "gui-x": "-104.57728099822998"
        "gui-y": "294.6641273498535"
  relations:
    - - "keystone:shared-db"
      - "mysql:shared-db"
    - - "nova-cloud-controller:shared-db"
      - "mysql:shared-db"
    - - "nova-cloud-controller:amqp"
      - "rabbitmq:amqp"
    - - "nova-cloud-controller:image-service"
      - "glance:image-service"
    - - "nova-cloud-controller:identity-service"
      - "keystone:identity-service"
    - - "glance:shared-db"
      - "mysql:shared-db"
    - - "glance:identity-service"
      - "keystone:identity-service"
    - - "cinder:shared-db"
      - "mysql:shared-db"
    - - "cinder:amqp"
      - "rabbitmq:amqp"
    - - "cinder:cinder-volume-service"
      - "nova-cloud-controller:cinder-volume-service"
    - - "cinder:identity-service"
      - "keystone:identity-service"
    - - "neutron-gateway:shared-db"
      - "mysql:shared-db"
    - - "neutron-gateway:amqp"
      - "rabbitmq:amqp"
    - - "neutron-gateway:quantum-network-service"
      - "nova-cloud-controller:quantum-network-service"
    - - "openstack-dashboard:identity-service"
      - "keystone:identity-service"
    - - "nova-compute:shared-db"
      - "mysql:shared-db"
    - - "nova-compute:amqp"
      - "rabbitmq:amqp"
    - - "nova-compute:image-service"
      - "glance:image-service"
    - - "nova-compute:cloud-compute"
      - "nova-cloud-controller:cloud-compute"
    - - "cinder:storage-backend"
      - "cinder-ceph:storage-backend"
    - - "ceph:client"
      - "cinder-ceph:ceph"
    - - "ceph:client"
      - "nova-compute:ceph"
    - - "ceph:client"
      - "glance:ceph"
    - - "ceilometer:identity-service"
      - "keystone:identity-service"
    - - "ceilometer:amqp"
      - "rabbitmq:amqp"
    - - "ceilometer:shared-db"
      - "mongodb:database"
    - - "ceilometer-agent:container"
      - "nova-compute:juju-info"
    - - "ceilometer-agent:ceilometer-service"
      - "ceilometer:ceilometer-service"
    - - "heat:shared-db"
      - "mysql:shared-db"
    - - "heat:identity-service"
      - "keystone:identity-service"
    - - "heat:amqp"
      - "rabbitmq:amqp"
    - - "ceph-radosgw:mon"
      - "ceph:radosgw"
    - - "ceph-radosgw:identity-service"
      - "keystone:identity-service"
    - - "ntp:juju-info"
      - "neutron-gateway:juju-info"
    - - "ntp:juju-info"
      - "ceph:juju-info"
    - - "ntp:juju-info"
      - "keystone:juju-info"
    - - "ntp:juju-info"
      - "nova-compute:juju-info"
    - - "ntp:juju-info"
      - "nova-cloud-controller:juju-info"
    - - "ntp:juju-info"
      - "rabbitmq:juju-info"
    - - "ntp:juju-info"
      - "glance:juju-info"
    - - "ntp:juju-info"
      - "cinder:juju-info"
    - - "ntp:juju-info"
      - "ceph-radosgw:juju-info"
    - - "ntp:juju-info"
      - "openstack-dashboard:juju-info"
    - - "ntp:juju-info"
      - "mysql:juju-info"
    - - "ntp:juju-info"
      - "mongodb:juju-info"
    - - "ntp:juju-info"
      - "ceilometer:juju-info"
    - - "ntp:juju-info"
      - "heat:juju-info"
  series: trusty

:-Dustin

Lucas Nussbaum: Will the packages you rely on be part of Debian Jessie?

Wed, 2014-09-10 19:28

The start of the jessie freeze is quickly approaching, so now is a good time to ensure that packages you rely on will be part of the upcoming release. Thanks to automated removals, the number of release-critical bugs has been kept low, but this was achieved by removing many packages from jessie: 841 packages from unstable are not part of jessie, and won’t be part of the release if things don’t change.

It is actually simple to check if you have packages installed locally that are part of those 841 packages:

  1. apt-get install how-can-i-help (available in backports if you don’t use testing or unstable)
  2. how-can-i-help --old
  3. Look at packages listed under Packages removed from Debian ‘testing’ and Packages going to be removed from Debian ‘testing’

Then, please fix all the bugs :-) Seriously, not all RC bugs are hard to fix. A good starting point to understand why a package is not part of jessie is tracker.d.o.

On my laptop, the two packages that are not part of jessie are the geeqie image viewer (which looks likely to be fixed in time), and josm, the OpenStreetMap editor, due to three RC bugs. It seems much harder to fix… If you fix it in time for jessie, I’ll offer you a $drink!

Didier Roche: How to help on Ubuntu Developer Tools Center

Wed, 2014-09-10 14:16

Last week, we announced our "Ubuntu Loves Developers" effort! We got some great feedback and coverage. Multiple questions arose around how to help and be part of this effort. Here is the post to answer those questions.

Our philosophy

First, let's define the core principles around the Ubuntu Developer Tools Center and what we are trying to achieve with this:

  1. UDTC will always download, test and support the latest available upstream developer stack. No version set in stone for 5 years; we get the latest and the best release that upstream delivers to all of us. We are conscious that being able to develop on a freshly updated environment is one of the core values of the developer audience, and that's why we want to deliver that experience.
  2. We know that developers want stability overall and don't want to have to upgrade or spend time maintaining their machine every 6 months. We agree they shouldn't have to and the platform should "get out of my way, I've got work to do". That's the reason why we focus heavily on the latest LTS release of Ubuntu. All tools will always be backported to and supported on the latest Long Term Support release. Tests are run multiple times a day on this platform. In addition to this, we support, of course, the latest available Ubuntu release for developers who like to live on the edge!
  3. We want to ensure that the supported developer environment is always functional. Indeed, by always downloading the latest version from upstream, the software stack can change its requirements, requiring newer or extra libraries, and thus break. That's why we are running a whole suite of functional tests multiple times a day, on both the version that you can find in the distro and the latest trunk. That way we know if:
  • we broke something ourselves in trunk and need to fix it before releasing.
  • the platform broke one of the developer stacks and we can promptly fix it.
  • a third-party application or a website changed and broke the integration. We can then fix this really early on.

All those tests will ensure the best experience we can deliver, while always fetching the latest released version from upstream, and all this on a very stable platform!

Sounds cool, how can I help?

Report bugs and propose enhancements

The most direct way of reporting a bug or giving any suggestions is through the upstream bug tracker. Of course, you can always reach out to us as well on social networks like g+, through the comments section of this blog, or on IRC: #ubuntu-desktop on freenode. We are also starting to look at the #ubuntulovesdevs hashtag.

The tool is really here to help developers, so do not hesitate to help us steer the Ubuntu Developer Tools Center in the direction that is best for you.

Help translating

We already had some good translation contributions through Launchpad! Thanks to all our translators, we got Basque, Chinese (Hong Kong), Chinese (Simplified), French, Italian and Spanish! There are only a few strings up for translation in udtc, and it should take less than half an hour in total to add a new one. It's a very good and useful way to contribute for people speaking languages other than English! We do look at the translations and merge them into the mainline automatically.

Contribute on the code itself

Some people have started to offer code contributions, and that's very good and motivating news. Do not hesitate to fork us on the upstream github repo. We'll ensure we keep up to date on all code contributions and pull requests. If you have any questions, or for better coordination, open a bug to start the discussion around your awesome idea. We'll try to be around and guide you on how to add any framework support! You will not be alone!

Write some documentation

We have some basic user documentation. If you feel there are any gaps or anything missing, feel free to edit the wiki page! You can also merge in some of the documentation from the README.md file or propose some enhancements to it!

To give an easy start to any developer who wants to hack on udtc itself, we try to keep the README.md file readable and up to date with the current code. However, it can deviate a little bit; if you think any part is missing or requires more explanation, you can propose modifications to it to help future hackers have an easier start.

Spread the word!

Finally, spread the word that Ubuntu Loves Developers and that we mean it! Talk about it on social networks, tagging it with #ubuntulovesdevs, or in blog posts, or just chat with your local community! We deeply care about our developer audience on the Ubuntu Desktop and Server and we want this to be known!

For more information and hopefully goodness, we'll have an Ubuntu On Air session soon! We'll keep you posted on this blog when we have the final date details.

If you feel that I forgot to mention anything, do not hesitate to point it out as well; that is another form of very welcome contribution!

Next week I'll discuss how we maintain and run tests to ensure your developer tools are always working and supported!

Raphaël Hertzog: Freexian’s first report about Debian Long Term Support

Wed, 2014-09-10 11:30

When we set up Freexian’s offer to bring together funding from multiple companies in order to sponsor the work of multiple developers on Debian LTS, one of the rules that I imposed is that all paid contributors must provide a public monthly report of their paid work.

While the LTS project officially started in June, the first month where contributors were actually paid was July. Freexian sponsored Thorsten Alteholz and Holger Levsen for 10.5 hours each in July and for 16.5 hours each in August. Here are their reports:

It’s worth noting that Freexian sponsored Holger’s work to fix the security tracker to support squeeze-lts. It’s my belief that using the money of our sponsors to make it easier for everybody to contribute to Debian LTS is money well spent.

As evidenced by the progress bar on Freexian’s offer page, we have not yet reached our minimal goal of funding the equivalent of a half-time position. And it shows in the results: dla-needed.txt still lists around 30 open issues. This is slightly better than the state two months ago, but we can improve a lot on the average time to push out a security update…

To have an idea of the relative importance of the contributions of the paid developers, I counted the number of uploads made by Thorsten and Holger since July: of 40 updates, they took care of 19 of them, so about half.

I also looked at the other contributors: Raphaël Geissert stands out with 9 updates (I believe that he is contracted by Électricité de France for doing this) and most of the other contributors look like regular Debian maintainers taking care of their own packages (Paul Gevers with cacti, Christoph Berg with postgresql, Peter Palfrader with tor, Didier Raboud with cups, Kurt Roeckx with openssl, Balint Reczey with wireshark), except Matt Palmer and Luciano Bello, who are (likely) volunteer members of the LTS team.

There are multiple things to learn here:

  1. Paid contributors already handle almost 70% of the updates. Counting only on volunteers would not have worked.
  2. Quite a few companies that promised help (and got mentioned in the press release) have not delivered the promised help yet (neither through Freexian nor directly).

Last but not least, this project wouldn’t exist without the support of multiple companies and organizations. Many thanks to them:

Hopefully this list will expand over time! Any help to reach out to new companies and organizations is more than welcome.

