
Jono Bacon: Bringing Literacy to Millions of Kids With Open Source

Planet Ubuntu - Tue, 2014-09-23 05:00

Today we are launching the Global Learning XPRIZE, complete with an Indiegogo crowdfunding campaign.

This is a $15 million competition in which teams are challenged to create Open Source software that will teach a child to read, write, and perform arithmetic in 18 months without the aid of a teacher. This is not designed to replace teachers but to instead provide an educational solution where little or none exists.

There are 57 million children aged 5 – 12 in the world today who have no access to education. There are 250 million children below basic literacy levels, even after several years of school. You may think the solution to this is to build more schools, but we would need an extra 1.6 million teachers by next year to provide universal primary education.

This is a tragedy.

This new XPRIZE is designed to help fix this.

Every child should have a right to the core ingredient that is literacy. It unlocks their potential and opens up opportunity. Just think of all the resources we depend on today for growth and education…the Internet, books, Wikipedia, collaborative tools…without literacy all of these are inaccessible. It is time to change this. Too many suffer from a lack of literacy, and sadly girls bear much of the brunt of this too.

This prize is open to anyone to participate in. Professional developers, hobbyists, students, scientists, teachers…everyone is welcome to join in and compete. While the $15 million purse is attractive in itself, just think of the impact of potentially changing the lives of hundreds of millions of kids.

Coopetition For a Great Cause

What really excites me about this new XPRIZE is that it is the first Open Source XPRIZE. The winning team and the four runner-up teams will be expected to Open Source their entire code-base, assets, and material. This will create a solid foundation of education technology that can live on…long past the conclusion of this XPRIZE.

That isn’t the only reason why this excites me though. The Open Source nature of this prize provides an incredible opportunity for coopetition; where teams can collaborate around common areas of interest and problem-solving, while keeping their secret sauce to themselves. The impact of this could be profound.

I will be working hard to build an environment in which we encourage this kind of collaboration. It makes no sense for 100 teams to all solve the same problems privately in their own silo. Let’s get everyone up and running in GitHub, collaborating around these common challenges, so all the teams benefit from that pooling of resources.

Let’s also open this up so everyone can help us be successful. Let’s invite designers, translators, testers, teachers, scientists, musicians, artists and more…everyone has something they can bring to solve one of our grandest challenges, and help create a more literate and peaceful world.

Everyone Can Play a Part

As part of this new XPRIZE we are also launching a crowdfunding campaign that is designed to raise additional resources so we can do even more as part of the prize. We have already funded the $15 million prize purse and some field-testing, but this crowdfunding campaign will provide the resources for us to do so much more.

This will help us broaden the field-testing in more countries, with more kids, to grow a global community around solving these grand challenges, build a collaborative environment for teams to work together on common problems, and optimize this new XPRIZE to be as successful as possible. Every penny contributed helps us to do more and ultimately squeeze the most value out of this important XPRIZE.

There are ten things you can do to help:

  1. Contribute! - a great place to start is to buy one of our awesome perks from the crowdfunding campaign. Find out more here.
  2. Join the Community - come and meet the new XPRIZE community at http://forum.xprize.org and share ideas, brainstorm, and collaborate around new projects.
  3. Refer Friends and Win Prizes - want to win an expenses-paid trip to our Visioneering event where we create new XPRIZEs while also helping spread the word? To find out more click here.
  4. Download the Street Team Kit - head over to our Get Involved page and download a free kit with avatars, banners, posters, presentations, FAQs and more. The page also includes videos for how to get started!
  5. Create and Share Digital Content - we are encouraging authors, developers, content-creators and more to create content that will spread the word about literacy, the Global Learning XPRIZE, and more!
  6. Share and Tag Your Fave Children’s Book - which children’s books have been the most memorable for you? Share your favorite (and preferably post a picture of the cover), complete with a link to http://igg.me/at/learningxprize and tag 5 friends to share theirs too! When using social media, be sure to use the #learningprize hashtag.

  7. Show Your Pride - go and download the Street Team Kit and use the images and avatars in there to change your profile picture and banner on your favorite social media networks (e.g. Twitter, Facebook, Google+).
  8. Create Your ‘Learning Moment’ Video - record a video about how learning has really impacted your life and share it on a video website (such as YouTube). Give the video the title “Global Learning XPRIZE: My Learning Moment“. Be sure to share your video on social media too with the #learningprize hashtag!
  9. Put Posters up in Your Community - go and download the Street Team Kit, print the posters out and put them up in your local coffee shops, universities, colleges, schools, and elsewhere!
  10. Organize a Local Event - create a local event to share the Global Learning XPRIZE. Fortunately we have a video on our Get Involved page that explains how you can do this, and we have a presentation deck with notes ready for you to use!

I know a lot of people who read my blog are Open Source folks, and I believe this prize offers an incredible opportunity for us to come together to have a very real profound impact on the world. Come and join the community, support the crowdfunding campaign, and help us to all be successful in bringing literacy to millions.

Luis de Bethencourt: Now Samsung @ London

Planet Ubuntu - Mon, 2014-09-22 16:18


I just moved back to Europe, this time to foggy London town, to join the Open Source Group at Samsung, where I will be contributing upstream to GStreamer and WebKit/Blink during the day and ironically mocking the local hipsters at night.

After 4 years with Collabora it is sad to leave behind the talented and enjoyable people I've grown fond of there, but it's time to move on to the next chapter in my life. The Open Source Group is a perfect fit: contribute upstream, participate in innovative projects and be active in the Open Source community. I am very excited for this new job opportunity and to explore new levels of involvement in Open Source.

I am going to miss Montreal and its very particular joie de vivre. I will miss the poutine, not the winter.

For all of those in London: I will be joining the next GNOME Beers event, or let me know if you want to meet up for a coffee/pint.

Charles Butler: Juju + Digital Ocean = Awesome!

Planet Ubuntu - Mon, 2014-09-22 15:35

Syndicators, there is a video above that may not have made it into syndication. Visit the source link to view the video.

Juju on Digital Ocean, WOW! That's all I have to say. Digital Ocean is one of the fastest cloud hosts around with their SSD-backed virtual machines. To top it off, their billing is a no-nonsense, straightforward model: $5/mo for their lowest-end server, with 1TB of included traffic. That's enough to scratch just about any itch you might have with the cloud.

Speaking of scratching itches, if you haven't checked out Juju yet, you now have a prime, low-cost cloud provider with which to test the waters. Spinning up droplets with Juju is very straightforward, and it offers a hands-on approach to service orchestration that's affordable enough for a weekend hacker to whet their appetite. Not to mention, Juju is currently the #1 project on their API Integration listing!

In about 11 minutes, we will go from zero to deployed infrastructure for a scale-out blog (much like the one you're reading right now).

Links in Video:

Juju Docean Github - http://github.com/kapilt/juju-digital...
Juju Documentation - http://juju.ubuntu.com/docs
Juju CharmStore - http://jujucharms.com
Kapil Thangavelu - http://blog.kapilt.com/
The Juju Community Members on DO - http://goo.gl/m6u781

Text Instructions Below:

Prerequisites:

  • A recent Ubuntu installation (12.04+)
  • A credit card (for DO)

Install Juju

sudo add-apt-repository ppa:juju/stable
sudo apt-get update
sudo apt-get install juju

Install Juju-Docean Plugin

sudo apt-get install python-pip
sudo pip install juju-docean
juju generate-config

Generate an SSH Key

ssh-keygen
cat ~/.ssh/id_rsa.pub

Setup DO API Credentials in Environment

vim ~/.bashrc

You'll want the following exports in $HOME/.bashrc

export DO_CLIENT_ID="XXXXXX"
export DO_API_KEY="XXXXXX"

Then source the file so it's in our current, active session.

source ~/.bashrc

Setup Environment and Bootstrap

vim ~/.juju/environments.yaml

Place the following lines in the environments.yaml, under the environments: key (indented 4 spaces) - ENSURE you use 4 spaces per indentation block, NOT a TAB key.

digitalocean:
    type: manual
    bootstrap-host: null
    bootstrap-user: root

Switch to the DigitalOcean environment, and bootstrap:

juju switch digitalocean
juju docean bootstrap

Now you're free to add machines with constraints.

juju docean add-machine -n 3 --constraints="mem=2g region=nyc3" --series=precise

And deploy our infrastructure:

juju deploy ghost
juju deploy mysql
juju deploy haproxy
juju add-relation ghost mysql
juju add-relation ghost haproxy
juju expose haproxy

From here, pull the status of the HAProxy node, copy/paste the public-address into your browser, and revel in your brand new Ghost blog deployed on Digital Ocean's blazing fast SSD servers.
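
One quick way to pull that status (a minimal sketch using the stock juju status command; haproxy here matches the service deployed above):

juju status haproxy

The public-address field under the haproxy unit in the YAML output is the one to paste into your browser.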

Caveats to Juju DigitalOcean as of Sept. 2014:

These are important things to keep in mind as you move forward. This is a beta project. Evaluate the following passages for quick fixes to known issues, and warnings.

Not all charms have been tested on DO, and you may find missing libraries, most notably python-yaml on charms that require it. Most "install failed" states are due to missing python-yaml.

A quick hotseat fix is:

juju ssh service/#
sudo apt-get install python-yaml
exit
juju resolved -r service/#

And then file a bug against the culprit charm that it's missing a dependency for Digital Ocean.

While this setup is amazingly cheap, and works really well, the Docean plugin provider should be considered beta software, as Hazmat is still actively working on it.

All in all, this is a great place to get started if you're willing to invest a bit of time working with a manual environment. Juju's capable orchestration will certainly make most if not all of your deployments painless, and bring you to scaling nirvana.

Happy Orchestrating!

Stephen Kelly: Grantlee 5.0.0 release candidate available for testing

Planet Ubuntu - Mon, 2014-09-22 11:44

I have tagged, pushed and tarball‘d a release candidate for Grantlee 5.0.0, based on Qt 5. Please go ahead and test it, and port your software to it, because some things have changed.

Also, following from the release last week, I’ve made a new 0.5.1 release with some build fixes for examples with Qt 5 and a change to how plugins are handled. Grantlee will no longer attempt to unload plugins at any point.


Stuart Langridge: Reconnecting

Planet Ubuntu - Mon, 2014-09-22 10:38

After I took issue with some thoughts of Aaron Gustafson regarding JavaScript, Aaron has clarified his thoughts elegantly.

His key issue here is summed up by one sentence from his post: “The fact is that you can’t build a robust Web experience that relies solely on client-side JavaScript.” Now, I could nit-pick various details about the argument he provides (I’ve had as many buggy modules from npm or PyPI as I’ve had from the Google jQuery CDN, and if you specify exact version numbers that’s less of a problem; writing software specifically for one client might allow you to run a carbon copy of their server, but server software for wide distribution, especially open source, doesn’t and shouldn’t have that luxury) but those sorts of pettifogging nit-picks are, I hope, beneath us. In short, I’ll say this: I agree with Aaron. He is right. However, this discussion uncovers some wider issues.

Now, I’ve built pure client-side apps. The WebUtils are a suite of small apps which one would expect to find on a particular platform (a calculator, a compass, a currency converter, that sort of thing), but built to be pure web apps in order that someone deciding to use the web as their platform has access to these things. (If you’re interested in this idea, get involved in the project.) I built two of them; a calculator and a currency converter. Both are pure-client-side, JavaScript-requiring, Angular-using apps. I am, in general and in total agreement with Aaron, opposed to the idea that without JavaScript a web app doesn’t work. (And here by “without JavaScript” we must also include being with JavaScript but JavaScript which gets screwed up by your mobile carrier injecting extra scripts, or your ISP blocking CDNs, or your social media buttons throwing errors, or your ads stomping on your variables. All of these are real problems which go unacknowledged and Aaron is entirely right to bring them up.) However, the policy that apps should be robust to JS not working, by being server apps which are progressively enhanced, does ignore an elephant in the room.

It is this. I should not need an internet connection in order to make my calculator add 4 and 5.

A web app which does not require its client-side scripting, which works on the server and then is progressively enhanced, does not work in an offline environment. It doesn’t work when you’re in and out of train tunnels, or at a conference with bad wifi, or on a metered data connection. The offline first concept, which should be informing how we build apps, is right about this.

So, what to do? It is in theory possible to write a web app which does processing on the server and is entirely robust against its client-side scripting being broken or missing, and which does processing on the client so that it works when the server’s unavailable or uncontactable or expensive or slow. But let’s be honest here. That’s not an app. That’s two apps. They might share a bit of code (the server being node.js and using JavaScript might help here, but that’s not what I was talking about last time), but in practice you’re building the same app twice in two different environments and then delivering both through one URL.

This is a problem. I am not sure how to square this circle. Aaron is right that you can’t build a robust Web experience that relies solely on client-side JavaScript, but the word robust is doing an awful lot of work in that sentence. In particular, it means “you can’t build a thing which you can be sure will work for everybody”. Most of the time, for most of the people, your experience can be robust — fine, it’ll break if your ad JavaScript sods you up, or if your ISP blocks your jQuery download, and these are problems. But going the other direction — explicitly not relying on your client-side JS — means that you’ll be robust until you’re somewhere without a good internet connection. Both of these approaches end up breaking in certain situations. And making something which handles both these camps is a serious amount of work, and we don’t really know how to do it. (I’d be interested in hearing examples of a web app which will work if I run it in an environment with a flaky internet connection and having redefined window.Array to be something else on the client. I bet there isn’t one.)

This needs to be a larger conversation. It needs discussion from both JavaScript people and server people. These two camps should not be separate, not be balkanised; there is an attitude still among “real programmers” that JavaScript is not a real language. Now, I will be abundantly clear here: Aaron is not one of the people who thinks this. But I worry that perpetuating the idea that JavaScript is an unstable environment will cause JS people to wall themselves off and stop listening to reasoned debate, which won’t help. As I say, I don’t know how to square this circle. The Web is an environment unlike any other, and as Aaron says, “building the Web requires more of us than traditional software development”. We get huge benefits from doing so — you build for the Web, and do it right, and your thing is available to everybody everywhere — but to “do it right” is a pretty big task, and it gets bigger every day. We need to be talking about that more so we can work out how to do it.

Robert Ancell: Using EGL with GTK+

Planet Ubuntu - Mon, 2014-09-22 09:58
I recently needed to port some GTK+ OpenGL code from GLX to EGL and I couldn't find any examples of how to do this. So to seed the Internet here is what I found out.

This is the simplest example I could make to show how to do this. In real life you probably want to do a lot more error checking. This will only work with X11; for other systems you will need to use equivalent methods in gdk/gdkwayland.h etc. For anything modern you should probably use OpenGL ES instead of OpenGL - to do this you'll need to change the attributes to eglChooseConfig and use EGL_OPENGL_ES_API in eglBindAPI.

Compile with:
gcc -g -Wall egl.c -o egl `pkg-config --cflags --libs gtk+-3.0 gdk-x11-3.0` -lEGL -lGL

#include <gtk/gtk.h>
#include <gdk/gdkx.h>
#include <EGL/egl.h>
#include <GL/gl.h>

static EGLDisplay *egl_display;
static EGLSurface *egl_surface;
static EGLContext *egl_context;

static void realize_cb (GtkWidget *widget)
{
    EGLConfig egl_config;
    EGLint n_config;
    EGLint attributes[] = { EGL_RENDERABLE_TYPE, EGL_OPENGL_BIT,
                            EGL_NONE };

    /* Set up the EGL display, surface and context against the widget's X window */
    egl_display = eglGetDisplay ((EGLNativeDisplayType) gdk_x11_display_get_xdisplay (gtk_widget_get_display (widget)));
    eglInitialize (egl_display, NULL, NULL);
    eglChooseConfig (egl_display, attributes, &egl_config, 1, &n_config);
    eglBindAPI (EGL_OPENGL_API);
    egl_surface = eglCreateWindowSurface (egl_display, egl_config, gdk_x11_window_get_xid (gtk_widget_get_window (widget)), NULL);
    egl_context = eglCreateContext (egl_display, egl_config, EGL_NO_CONTEXT, NULL);
}

static gboolean draw_cb (GtkWidget *widget)
{
    eglMakeCurrent (egl_display, egl_surface, egl_surface, egl_context);

    glViewport (0, 0, gtk_widget_get_allocated_width (widget), gtk_widget_get_allocated_height (widget));

    glClearColor (0, 0, 0, 1);
    glClear (GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    glMatrixMode (GL_PROJECTION);
    glLoadIdentity ();
    glOrtho (0, 100, 0, 100, 0, 1);

    /* Draw a single triangle with per-vertex colours using fixed-function OpenGL */
    glBegin (GL_TRIANGLES);
    glColor3f (1, 0, 0);
    glVertex2f (50, 10);
    glColor3f (0, 1, 0);
    glVertex2f (90, 90);
    glColor3f (0, 0, 1);
    glVertex2f (10, 90);
    glEnd ();

    eglSwapBuffers (egl_display, egl_surface);

    return TRUE;
}

int main (int argc, char **argv)
{
    GtkWidget *w;

    gtk_init (&argc, &argv);

    w = gtk_window_new (GTK_WINDOW_TOPLEVEL);
    gtk_widget_set_double_buffered (GTK_WIDGET (w), FALSE);
    g_signal_connect (G_OBJECT (w), "realize", G_CALLBACK (realize_cb), NULL);
    g_signal_connect (G_OBJECT (w), "draw", G_CALLBACK (draw_cb), NULL);

    gtk_widget_show (w);

    gtk_main ();

    return 0;
}
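
For the OpenGL ES route mentioned above, here is a rough, untested sketch of how realize_cb would change (realize_cb_es is a hypothetical name of mine; the fixed-function drawing in draw_cb would also need porting to ES 2.0 shaders, and you would link with -lGLESv2 instead of -lGL):

static void realize_cb_es (GtkWidget *widget)
{
    EGLConfig egl_config;
    EGLint n_config;
    /* Request a config renderable with OpenGL ES 2.0 instead of desktop GL */
    EGLint attributes[] = { EGL_RENDERABLE_TYPE, EGL_OPENGL_ES2_BIT,
                            EGL_NONE };
    /* Explicitly request an ES 2.0 context */
    EGLint context_attributes[] = { EGL_CONTEXT_CLIENT_VERSION, 2,
                                    EGL_NONE };

    egl_display = eglGetDisplay ((EGLNativeDisplayType) gdk_x11_display_get_xdisplay (gtk_widget_get_display (widget)));
    eglInitialize (egl_display, NULL, NULL);
    eglChooseConfig (egl_display, attributes, &egl_config, 1, &n_config);
    eglBindAPI (EGL_OPENGL_ES_API);
    egl_surface = eglCreateWindowSurface (egl_display, egl_config, gdk_x11_window_get_xid (gtk_widget_get_window (widget)), NULL);
    egl_context = eglCreateContext (egl_display, egl_config, EGL_NO_CONTEXT, context_attributes);
}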

Zygmunt Krynicki: Announcing Morris 1.0

Planet Ubuntu - Sun, 2014-09-21 19:07
Earlier today I released the first standalone version of Morris (source, documentation). Morris is named after Gabriel Morris, the inventor of the Colonne Morris, aka the advertisement column. Morris is a simple and proven Python event/signaling library (not for watching sockets or doing IO, but for generic, in-process broadcast messages).

Morris is the first part of the Plainbox project that I've released as a standalone, small library. We've been using that code for two years now. Morris is simple, well-defined and I'd dare to say, complete. Hence the 1.0 version, unlike the traditional 0.1 that many free software projects start with.
Morris works on Python 2.7+, PyPy, and Python 3.2+. It comes with tests, examples and extensive docstrings. Currently you can install it from PyPI, but a Debian package is in the works and should be ready for review later today.
Here's a simple example of how to use the library in practice:
from __future__ import print_function
from morris import signal
class Processor(object):
    def process(self, thing):
        self.on_processing(thing)

    @signal
    def on_processing(self, thing):
        pass

def on_processing(thing):
    print("Processing {}".format(thing))

proc = Processor()
proc.on_processing.connect(on_processing)
proc.process("foo")
proc.process("bar")
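
Running the example connects a plain function as a handler and then fires the signal twice, so it should print:

Processing foo
Processing bar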

For more information check out morris.readthedocs.org

Ronnie Tucker: Stephen Hawking Talks About the Linux-Based Intel Connected Wheelchair Project

Planet Ubuntu - Sun, 2014-09-21 05:07

Intel has revealed a new, interesting concept called the Connected Wheelchair, which collects data from users, allows them to share that info with the community, and is powered by Linux.

When people say Intel, they usually think about processors, but the company also makes a host of other products, including very cool or useful concepts that might have some very important applications in everyday life.

The latest initiative is called the Connected Wheelchair and the guys from Intel even convinced the famous Stephen Hawking to help them spread the word about this amazing project. It’s still in the testing phases and it’s one of those products that might show a lot of promise but never go anywhere because there is no one to produce and sell it.

Source:

http://news.softpedia.com/news/Stephen-Hawking-Talks-About-the-Linux-Based-Intel-Connected-Wheelchair-Project-458539.shtml

Submitted by: Silviu Stahie

Ubuntu Podcast from the UK LoCo: S07E25 – The One Where the Monkey Gets Away

Planet Ubuntu - Sat, 2014-09-20 13:30

Just Laura Cowen and Alan Pope are in Studio L for Season Seven, Episode Twenty-Five of the Ubuntu Podcast!

Apologies for the terrible audio quality in this episode. It turns out one of the channels on the compressor is broken and we didn’t realise until much later on.


In this week’s show:-

We’ll be back next week, when we’ll have some interviews from JISC, and we’ll go through your feedback.

Please send your comments and suggestions to: podcast@ubuntu-uk.org
Join us on IRC in #uupc on Freenode
Leave a voicemail via phone: +44 (0) 203 298 1600, sip: podcast@sip.ubuntu-uk.org and skype: ubuntuukpodcast
Follow us on Twitter
Find our Facebook Fan Page
Follow us on Google+

Ronnie Tucker: Mozilla Thunderbird 31.1.1 Lands in Ubuntu

Planet Ubuntu - Sat, 2014-09-20 05:06

Canonical has shared some details about a number of Thunderbird vulnerabilities identified in its Ubuntu 14.04 LTS and Ubuntu 12.04 LTS operating systems, and the devs have pushed a new version into the repositories.

The Thunderbird email client was updated a couple of days ago and the new version has landed pretty quickly in the Ubuntu repos. This means that it should be available when users update their systems.

For example, “Abhishek Arya discovered a use-after-free during DOM interactions with SVG. If a user were tricked in to opening a specially crafted message with scripting enabled, an attacker could potentially exploit this to cause a denial of service via application crash or execute arbitrary code with the privileges of the user invoking Thunderbird,” reads the announcement.

Source:

http://news.softpedia.com/news/Mozilla-Thunderbird-13-1-1-Lands-in-the-Ubuntu-458664.shtml

Submitted by: Silviu Stahie

John Baer: The Promise of a Broadwell Chromebook

Planet Ubuntu - Fri, 2014-09-19 22:53

There were many announcements made at the 2014 Intel Developers Conference, but a Broadwell-powered Chromebook was not on the list. That is not to say it’s not coming.

The Intel Broadwell system on a chip (SoC) is a great fit for a Chromebook.

Central Processing Unit

You won’t find significant innovation or design changes in the Broadwell central processing unit (CPU), as the primary focus of this release was to shrink the manufacturing process from 22nm to 14nm to gain power efficiency. The result is a 4.5W power draw, which should translate to 8 or more hours of battery life.

From a getting-stuff-done perspective, it looks like the CPU will perform as well as or better than the Haswell Celeron 2955U found in many Chrome devices. Intel states twice the performance of a four-year-old i5. Some folks are speculating the performance will equal the new i3-4005U, but I anticipate Octane scores will equal or exceed 11,000.

Graphical Processing Unit

This is where it gets interesting. I assume Intel is feeling some pressure from nVidia, Imagination Technologies, and new entrants such as RockChip, which compelled them to enhance the performance of the graphical processing unit (GPU). Indeed, preliminary 3DMark benchmarks are showing a 40% advantage over the 32-bit nVidia Tegra K1.

Will there be a Chromebook?

Francois Beaufort blogged that a Broadwell development board has been added to the Chrome OS repository, which supports the idea that a Chromebook is under consideration. As a platinum member of the Linux Foundation, Intel has the knowledge and experience to optimize Chrome OS to leverage the features of their processors, and I am confident they will do so for Broadwell.

The big question becomes price. Wholesale pricing for this SoC is expected to exceed $250, which is a premium price for an entry-level Chromebook but may work for a professional-grade device (viz. Pixel 2) targeted at college students and the enterprise.

  • High Quality Touch IPS FHD 13 inch screen
  • 4 GB RAM
  • 64 GB SSD
  • Backlit Keyboard
  • Wifi ac / Bluetooth 4.0 / USB 3.0 / HDMI

Retail priced at $599.


Jorge Castro: Juju Ecosystem Team Sprint Report

Planet Ubuntu - Fri, 2014-09-19 16:49

Greetings folks,

The Juju Ecosystem team at Canonical (joined remotely by community members) recently had a developer sprint in beautiful Dillon, Colorado to Get Things Done(™).

Here are the highlights:

Automated Charm Testing

Tim Van Steenburgh and Marco Ceppi made a ton of progress with automated charm testing; here’s the preliminary state of the world:

Jenkins Jobs Fired off: 22

This enabled us to dedicate hours of block time to getting as many of those red charms to green as possible. The priority for our team over the next few weeks will be fixing these charms and, of course, gating new charms via this method, as well as kicking back broken charms to personal namespaces.

Ben Saller helped out by prototyping “Dockerizing” charm testing so that developers can test their charms in a fully containerized way. This will help CI by giving us isolation, density, and reliability.

Charm tests are now launched from the review queue to help gate charms based on tests passing.

Thanks to Aaron Bentley for supporting our efforts here!

Review Queue

The Charmers (Marco Ceppi, Charles Butler, and Matt Bruzek) dedicated time to getting through reviews. The whole team spent time creating fixes for the automated test results mentioned above. We’re in great shape to drive this down and never let it get out of control again, thanks to our new team review guidelines: http://review.juju.solutions/

The goal was to help submitters and reviewers know where they are in a review, and what next steps are needed.

Here are the numbers:

  • Reviews Performed: 189
  • Commits: 228
  • Charms Promulgated: 10
  • Charms Unpromulgated: 7
  • Lines of Code touched: 34109 (artificially high due to SVG icons, heh)
  • Reviews Submitted: 84
  • Energy Drinks: 80

Some new features:

  • Users can now log in with Ubuntu SSO and see what reviews they have submitted, and reviewed
  • Ability to query the review system and search/filter reviews based on several metrics (http://review.juju.solutions/search)
  • Ability for charmers to fire off an automated test of a charm on demand right from the queue. When an MP is done against a charm, we’ll now automatically reply to the MP with a link to the test results. \o/
  • You can now “lock” a review when you’re doing one so that the rest of the community can see when a review is claimed so we don’t duplicate work. (Essential for mass reviews!)
  • Queues divided and separated to highlight priority items and items for different teams

CloudFoundry
  • Improving the downloader/packaging story so it’s more reusable
  • Cory Johns developed a pattern for charm helpers for CloudFoundry; the CF sub-team feels this will be a useful pattern for other charmers. They’re calling it the “charm services framework”; expect to hear more from them in the future.
  • We were able to replicate the Juju/Rails Framework deployment of an application and compare doing the same thing on CF: https://plus.google.com/117270619435440230164/posts/gHgB6k5f7Fv
  • Whit concentrated on tracking changes to Pivotal’s build procedures.

Charm Developer Workflow

This involves two things:

“The first 30 minutes of Juju”

This primarily involved finding and fixing issues with our user and developer workflow. This included doing some initial work on what we’re calling “Landing Pages”, which will be topic-based landing pages for different places where people can use Juju, for example a “Big Data” page with specific solutions for that field. We expect to have these for a bunch of different fields of study.

We have identified the following 5 charms as “flagbearers”: Rails (in progress), elasticsearch, postgresql, meteor, and chamilo. We consider these charms to be excellent examples of quality, integration with other tools, and usage of charm tools. We will be modifying the documentation to highlight these charms as reference points. All these charms have tests now, though some might not have landed yet.

Better tools for Charm Authors:

Ben, Tim, and Whit have a prototype of a fully Dockerized developer environment that contains all of the local developer tools and all of the flagbearer charms. The intention is to also provide a fully bootstrapped local provider. The goal is “test anything in 30 seconds in a container”.

In addition to this, Adam Israel tackled some of our Vagrant development stories that will allow us to provide a better Vagrant developer workflow; thanks to Ben Howard and his team for helping us get these features into our boxes.

We expect both the Docker-based and Vagrant-based approaches to eventually converge. Having both now gives us a nice “spread” to cover developers on multiple operating systems with tools they’re familiar with.

Big Data

Amir/Chuck worked on the following things:

  • Upgrading the ELK stack for Trusty
  • Planning out new Landing Pages focused on the Big Data story
  • Bringing up existing Big Data (Hortonworks) Stack to Charm Store standards for Trusty, and getting those charms merged
  • Pre-planning for next phase of Big Data Workloads (MapR, Apache distributions)

Other
  • General familiarity training with MAAS, OpenStack on OBs and NUCs.
  • Very fast firehose drinking for new team members: Adam Israel, Randall Ross, and Kevin Monroe have joined the team.
  • Special Thanks to Jose Antonio Rey, Sebas, and Josh Strobl, for joining us to help get reviews and fixes in the store and documentation.
  • We have a new team blog at: http://juju-solutions.github.io/ (Beta, thanks Whit.)
  • Most of the topics here had corresponding fixes/updates to the Juju documentation.

Leo Iannacone: Use GTK-3.0 Dark variant theme for your GTK-2 terminal emulator

Planet Ubuntu - Fri, 2014-09-19 14:13

This is a workaround to force your preferred terminal emulator to use the Dark variant of the Adwaita theme in GNOME >= 3.12 (maybe earlier versions too, but untested).

Just add these lines to your ~/.bashrc file:

# set dark theme for xterm emulators
if [ "$TERM" == "xterm" ] ; then
    xprop -f _GTK_THEME_VARIANT 8u -set _GTK_THEME_VARIANT "dark" -id `xprop -root | awk '/^_NET_ACTIVE_WINDOW/ {print $5}'`
fi
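
If you want to test the effect on the terminal you are currently using, a one-off variation (my own, untested) should work in any emulator that exports $WINDOWID, as xterm does:

xprop -f _GTK_THEME_VARIANT 8u -set _GTK_THEME_VARIANT "dark" -id $WINDOWID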

This is how it works with Terminator:

Before

After

Stephen Kelly: Grantlee 0.5.0 (codename Auf 2 Hochzeiten tanzen) now available

Planet Ubuntu - Fri, 2014-09-19 09:29

The Grantlee community is pleased to announce the release of Grantlee version 0.5 (Mirror). Source and binary compatibility are maintained as with all previous releases. Grantlee is an implementation of the Django template system in Qt.

This release builds with both Qt 5 and Qt 4. The Qt 5 build is only for transitional purposes so that a downstream can get their own code built and working with Qt 5 without being first blocked by Grantlee backward incompatible changes. The Qt 5 based version of Grantlee 0.5.0 should not be relied upon as a stable interface. It is only there to assist porting. There won’t be any more Qt 4 based releases, except to fix build issues if needed.

The next release of Grantlee will happen next week and will be exclusively Qt 5 based. It will have a small number of backward incompatible changes, such as adding missing const and dropping some deprecated stuff.

The minimum required CMake version has also been increased to 2.8.11, a release that contains most of the CMake API for usage requirements and so allows cleaning up a lot of older CMake code.

Also in this release are a small number of new bug fixes, memory leak plugs, etc.


Daniel Pocock: reSIProcate migration from SVN to Git completed

Planet Ubuntu - Fri, 2014-09-19 06:47

This week, the reSIProcate project completed the move from SVN to Git.

With many people using the SIP stack in both open source and commercial projects, the migration was carefully planned and tested over an extended period of time. Hopefully some of the experience from this migration can help other projects too.

Previous SVN committers were tracked down using my script for matching emails to Github accounts. This also allowed us to see their recent commits on other projects and see how they want their name and email address represented when their previous commits in SVN were mapped to Git commits.

For about a year, the sync2git script had been run hourly from cron to maintain an official mirror of the project in Github. This allowed people to test it and it also allowed us to start using some Github features like travis-CI.org before officially moving to Git.

At the cut-over, the SVN directories were made read-only, sync2git was run one last time and then people were advised they could commit in Git.

Documentation has also been created to help people get started quickly sharing patches as Github pull requests if they haven't used this facility before.

Ronnie Tucker: Curl Exploits Closed in All Supported Ubuntu OSes

Planet Ubuntu - Fri, 2014-09-19 05:05

Canonical has announced that a couple of curl vulnerabilities have been found and fixed in its Ubuntu 14.04 LTS, Ubuntu 12.04 LTS, and Ubuntu 10.04 LTS operating systems.

The developers have released a new update for the curl package and it looks like a number of security issues have been corrected.

“Tim Ruehsen discovered that curl incorrectly handled partial literal IP addresses. This could lead to the disclosure of cookies to the wrong site, and malicious sites being able to set cookies for others,” reads the security notice.

Source:

http://news.softpedia.com/news/Curl-Exploits-Close-in-All-Supported-Ubuntu-OSes-458899.shtml

Submitted by: Silviu Stahie

Stuart Langridge: Fundamentally connected

Planet Ubuntu - Fri, 2014-09-19 04:11

Aaron Gustafson recently wrote a very interesting monograph bemoaning a recent trend to view JavaScript as “a virtual machine in the browser”. I’ll quote fairly extensively, because Aaron makes some really strong points here, and I have a lot of sympathy with them. But at bottom I think he’s wrong, or at the very least he’s looking at this question from the wrong direction, like trying to divine the purpose of the Taj Mahal by looking at it from underneath.

“The one problem I’ve seen,” says Aaron, “is the fundamental disconnect many of these developers [who began taking JavaScript seriously after Ajax became popular] seem to have with the way deploying code on the Web works. In traditional software development, we have some say in the execution environment. On the Web, we don’t.” He goes on to explain: “If we’re writing server-side software in Python or Rails or even PHP, … we control the server environment [or] we have knowledge of it and can author… accordingly”, and “in the more traditional installed software world, we can similarly control the environment by placing certain restrictions on what operating systems our code can run on”.

I believe that this criticism, while essentially valid, misapprehends the real case here. It underestimates the universality of JavaScript implementations, it overestimates the stability of old-fashioned software development, and most importantly it starts from the presumption that building things for one particular computer is actually a good idea. Which it isn’t.

Now, nobody is arguing that the web environment is occasionally challengingly different across browsers and devices. But a lot of it isn’t. No browser ships with a JavaScript implementation in which 1 and 1 add to make 3, or in which Arrays don’t have a length property, or in which the for keyword doesn’t exist. If we ignore some of the Mozilla-specific stuff which is becoming ES6 (things such as array comprehensions, which nobody is actually using in actual live code out there in the universe), JavaScript is pretty stable and pretty unchanging across all its implementations. Of course, what we’re really talking about here is the DOM model, not JavaScript-the-language, and to claim that “JavaScript can be the virtual machine” and then say “aha I didn’t mean the DOM” is sophistry on a par with a child asking “can I not not not not not have an ice-cream?”. But the DOM model is pretty stable too, let’s be honest. In things I build, certainly I find myself in murky depths occasionally with JavaScript across different browsers and devices, but those depths are as the sparkling waters of Lake Treviso by comparison with CSS across different browsers. In fact, when CSS proves problematic across browsers, JavaScript is the bandage used to fix it and provide a consistent experience — your keyframed CSS animation might be unreliable, but jQuery plugins work everywhere. JavaScript is the glue that binds the other bits together.

Equally, I am not at all sold that “we have knowledge of [the server environment] and can author your program accordingly so it will execute as anticipated” when doing server development. Or, at least, that’s possible, but nobody does. If you doubt this, I invite you to go file a bug on any server-side app you like and say “this thing doesn’t work right for me” and then add at the bottom “oh, and I’m running FreeBSD, not Ubuntu”. The response will occasionally be “oh really? we had better get that fixed then!” but is much more likely to be “we don’t support that. Use Ubuntu and this git repository.” Now, that’s a valid approach — we only support this specific known configuration! — but importantly, on the web Aaron sees requiring a specific browser/OS combination as an impractical impossibility and the wrong thing to do, whereas doing this on the server is positively virtuous. I believe that this is no virtue. Dismissing claims of failure with “well, you should be using the environment I demand” is just as large a sin on the server or the desktop as it is in the browser. You, the web developer, can’t require that I use your choice of browser, but equally you, the server developer, shouldn’t require that I use your particular exact combination of server packages either. Why do client users deserve more respect than server users? If a developer using your server software should be compelled to go and get a different server, how’s that different from asking someone to install a different web browser? Sure, I’m not expecting someone who built a server app running on Linux to necessarily also make it run on Windows (although wise developers will do so), but then I’m not really expecting someone who’s built a 3d game with WebGL to make the experience meaningful for someone browsing with Lynx, either.

Perhaps though you differ there, gentle reader. That the web is the web, and one should have a meaningful experience (although importantly not necessarily the same meaningful experience) whichever class of browser and device and capability one uses to get at the web. That is a very good point, one with which I have a reasonable amount of sympathy, and it leads me on to the final part of the argument.

It is this. Web developers are actually better than non-web developers. And Aaron explains precisely why. It is because to build a great web app is precisely to build a thing which can be meaningfully experienced by people on any class of browser and device and capability. The absolute tip-top very best “native” app can only be enjoyed by those to whom it is native. “Native apps” are poetry: undeniably beautiful when done well, but useless if you don’t speak the language. A great web app, on the other hand, is a painting: beautiful to experience and available to everybody. The Web has trained its developers to attempt to build something that is fundamentally egalitarian, fundamentally available to everyone. That’s why the Web’s good. The old software model, of something which only works in one place, isn’t the baseline against which the Web should be judged; it’s something that’s been surpassed. Software development is easiest if it only has to work on your own machine, but that doesn’t mean that that’s all we should aim for. We’re all still collaboratively working out exactly how to build apps this way. Do we always succeed? No. But by any measure the Web is the largest, most widely deployed, most popular and most ubiquitous computing platform the world has ever known. And its programming language is JavaScript.

Paul Tagliamonte: Docker PostgreSQL Foreign Data Wrapper

Planet Ubuntu - Fri, 2014-09-19 01:49

For the tl;dr: Docker FDW is a thing. Star it, hack it, try it out. File bugs, be happy. If you want to see what it's like to read, there's some example SQL down below.

The first question is: what the heck is a PostgreSQL Foreign Data Wrapper? PostgreSQL Foreign Data Wrappers are plugins that allow C libraries to provide an adaptor for PostgreSQL to talk to an external database.

Some folks have used this to wrap stuff like MongoDB, which I always found to be hilarious (and an epic hack).

Enter Multicorn

During my time at PyGotham, I saw a talk from Wes Chow about something called Multicorn. He was showing off some really neat plugins, such as the git revision history of CPython, and parsed logfiles from some stuff over at Chartbeat. This basically blew my mind.

If you're interested in some of these, there are a bunch in the Multicorn VCS repo, such as the gitfdw example.

All throughout the talk I was coming up with all sorts of things that I wanted to do -- this whole library is basically exactly what I've been dreaming about for years. I've always wanted to provide a SQL-like interface for querying API data, joining data cross-API using common crosswalks, such as using Capitol Words to query for Legislators, and using the bioguide ids to JOIN against the congress api to get their Twitter account names.

My first shot was to Multicorn the new Open Civic Data API I was working on; I chuckled and put it aside as a really awesome hack.

Enter Docker

It wasn't until tianon connected the dots for me and suggested a Docker FDW that I got really excited. Cue a few hours of hacking, and I'm proud to say -- here's Docker FDW.

Currently it only implements reading from the API, but extending this to allow for SQL DELETE operations isn't out of the question, and likely to be implemented soon. This lets us ask all sorts of really interesting questions out of the API, and might even help folks writing webapps avoid adding too much Docker-aware logic.

Setting it up

The only stumbling block you might find (at least on Debian and Ubuntu) is that you'll need a Multicorn `.deb`. It's currently undergoing an official Debianization from the Postgres team, but in the meantime I put the source and binary up on my people.debian.org. Feel free to use that while the Debian PostgreSQL team prepares the upload to unstable.

I'm going to assume you have a working Multicorn, PostgreSQL and Docker setup (including adding the postgres user to the docker group).

So, now let's pop open a psql session. Create a database (I called mine dockerfdw, but it can be anything), and let's create some tables.

Before we create the tables, we need to let PostgreSQL know where our objects are. This takes a name for the server, and the Python importable path to our FDW.

CREATE SERVER docker_containers FOREIGN DATA WRAPPER multicorn options (
    wrapper 'dockerfdw.wrappers.containers.ContainerFdw');

CREATE SERVER docker_image FOREIGN DATA WRAPPER multicorn options (
    wrapper 'dockerfdw.wrappers.images.ImageFdw');

Now that we have the server in place, we can tell PostgreSQL to create a table backed by the FDW by creating a foreign table. I won't go too much into the syntax here, but you might also note that we pass in some options - these are passed to the constructor of the FDW, letting us set stuff like the Docker host.

CREATE foreign table docker_containers (
    "id" TEXT,
    "image" TEXT,
    "name" TEXT,
    "names" TEXT[],
    "privileged" BOOLEAN,
    "ip" TEXT,
    "bridge" TEXT,
    "running" BOOLEAN,
    "pid" INT,
    "exit_code" INT,
    "command" TEXT[]
) server docker_containers options (
    host 'unix:///run/docker.sock'
);

CREATE foreign table docker_images (
    "id" TEXT,
    "architecture" TEXT,
    "author" TEXT,
    "comment" TEXT,
    "parent" TEXT,
    "tags" TEXT[]
) server docker_image options (
    host 'unix:///run/docker.sock'
);

And, now that we have tables in place, we can try to learn something about the Docker containers. Let's start with something fun - a join from containers to images, showing all image tag names, the container names and the ip of the container (if it has one!).

SELECT docker_containers.ip, docker_containers.names, docker_images.tags
FROM docker_containers
RIGHT JOIN docker_images ON docker_containers.image=docker_images.id;

      ip     |            names            |                  tags
-------------+-----------------------------+-----------------------------------------
             |                             | {ruby:latest}
             |                             | {paultag/vcs-mirror:latest}
             | {/de-openstates-to-ocd}     | {sunlightlabs/scrapers-us-state:latest}
             | {/ny-openstates-to-ocd}     | {sunlightlabs/scrapers-us-state:latest}
             | {/ar-openstates-to-ocd}     | {sunlightlabs/scrapers-us-state:latest}
 172.17.0.47 | {/ms-openstates-to-ocd}     | {sunlightlabs/scrapers-us-state:latest}
 172.17.0.46 | {/nc-openstates-to-ocd}     | {sunlightlabs/scrapers-us-state:latest}
             | {/ia-openstates-to-ocd}     | {sunlightlabs/scrapers-us-state:latest}
             | {/az-openstates-to-ocd}     | {sunlightlabs/scrapers-us-state:latest}
             | {/oh-openstates-to-ocd}     | {sunlightlabs/scrapers-us-state:latest}
             | {/va-openstates-to-ocd}     | {sunlightlabs/scrapers-us-state:latest}
 172.17.0.41 | {/wa-openstates-to-ocd}     | {sunlightlabs/scrapers-us-state:latest}
             | {/jovial_poincare}          | {<none>:<none>}
             | {/jolly_goldstine}          | {<none>:<none>}
             | {/cranky_torvalds}          | {<none>:<none>}
             | {/backstabbing_wilson}      | {<none>:<none>}
             | {/desperate_hoover}         | {<none>:<none>}
             | {/backstabbing_ardinghelli} | {<none>:<none>}
             | {/cocky_feynman}            | {<none>:<none>}
             |                             | {paultag/postgres:latest}
             |                             | {debian:testing}
             |                             | {paultag/crank:latest}
             |                             | {<none>:<none>}
             |                             | {<none>:<none>}
             | {/stupefied_fermat}         | {hackerschool/doorbot:latest}
             | {/focused_euclid}           | {debian:unstable}
             | {/focused_babbage}          | {debian:unstable}
             | {/clever_torvalds}          | {debian:unstable}
             | {/stoic_tesla}              | {debian:unstable}
             | {/evil_torvalds}            | {debian:unstable}
             | {/foo}                      | {debian:unstable}
(31 rows)
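
As one more (hypothetical) example against the same schema, counting the containers that are actually running is a one-liner, since running is a plain BOOLEAN column on the foreign table:

-- Uses only the docker_containers foreign table defined above
SELECT count(*) FROM docker_containers WHERE running;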

Success! This is just a taste of what's to come, so please feel free to hack on Docker FDW, tweet me @paultag, file bugs / feature requests. It's currently a bit of a hack, and it's something that I think has long-term potential after some work goes into making sure that this is a rock solid interface to the Docker API.

Ubuntu LoCo Council: Call for nominations to the LoCo Council

Planet Ubuntu - Thu, 2014-09-18 20:16

Hello All,

As you may know, LoCo Council members serve two-year terms, and due to this we are facing the difficult task of replacing Bhavani. A special thanks to Bhavani for all of the great contributions he has made while serving with us on the LoCo Council.

So with that in mind, we are writing this to ask for volunteers to step forward and nominate themselves or another contributor for the three open positions. The LoCo Council is defined on our wiki page.

Wiki: https://wiki.ubuntu.com/LoCoCouncil

Team Agenda: https://wiki.ubuntu.com/LoCoCouncilAgenda

Typically, we meet up once a month in IRC to go through items on the team agenda; we have also started to hold Google Hangouts (the time for hangouts may vary depending on the availability of the members). This involves approving new LoCo Teams, re-approval of Approved LoCo Teams, resolving issues within teams, approving LoCo Team mailing list requests, and anything else that comes along.

We have the following requirements for Nominees:

Be an Ubuntu member

Be available during typical meeting times of the council

Insight into the culture(s) and typical activities within teams is a plus

Here is a description of the current LoCo Council:

They are current Ubuntu Members with a proven track record of activity in the community. They have shown themselves over time to be able to work well with others, and display the positive aspects of the Ubuntu Code of Conduct. They should be people who can judge contribution quality without emotion while engaging in an interview/discussion that communicates interest, a welcoming atmosphere, and which is marked by humanity, gentleness, and kindness.

If this sounds like you, or a person you know, please e-mail the LoCo Council with your nomination(s) using the following e-mail address: loco-council<at>lists.ubuntu.com.

Please include a few lines about yourself, or whom you’re nominating, so we can get a good idea of why you/they’d like to join the council, and why you feel that you/they should be considered. If you plan on nominating another person, please let them know, so they are aware.

We welcome nominations from anywhere in the world, and from any LoCo team. Nominees do not need to be a LoCo Team Contact to be nominated for this post. We are however looking for people who are active in their LoCo Team.

The time frame for this process is as follows:

Nominations will open: September 18th, 2014

Nominations will close: October 2nd, 2014

We will then forward the nominations to the CC, requesting that they take them to their next meeting to make their selections.

Robbie Williamson: Priorities & Perseverance

Planet Ubuntu - Thu, 2014-09-18 05:32

This is not a stock ticker, rather a health ticker…and unlike with a stock price, a downward trend is good. Over the last 3 years or so, I’ve been on a personal mission of improving my health. As you can see it wasn’t perfect, but I managed to lose a good amount of weight.

So why did I do it…what was the motivation…it’s easy, I decided in 2011 that I needed to put me first.   This was me from 2009

At my biggest, I was pushing 270lbs. I was so busy trying to do for others, be it work, family, or friends, that I was constantly putting my needs last, i.e. exercise and healthy eating. You see, I actually like to exercise and healthy eating isn’t a hard thing for me, but when you start putting those things last on your priorities, it becomes easy to justify skipping the exercise or grabbing junk food because you’re short on time or exhausted from being the “hero”.

Now I have battled weight issues most of my life.  Given how I looked as a baby, this shouldn’t come as a surprise. LOL

But I did thin out as a child.

To only get bigger again

And even bigger again

But then I got lucky.  My metabolism kicked into high gear around 20, and I grew about 5 inches and since I was playing a ton of basketball daily, I ate anything I wanted and still stayed skinny

I remained so up until I had my first child; then the pounds began to come on. Many parents will tell you that the first time is always more than you expected, so it’s not surprising that with sleep deprivation and stress, you gain weight. To make it even more fun, I had decided to start a new job and buy a new house a few years later, when my second child came…even more “fun”.

To be clear, I’m not blaming any of my weight gain on these events, however they became easy crutches to justify putting myself last.  And here’s the crazy part, by doing all this, I actually ended up doing less for those I cared about in the long run, because I was physically exhausted, mentally fatigued, and emotionally spent a lot of the time.

So, around October of 2012 I made a decision.  In order for me to be the man I wanted to be for my family, friends, and even colleagues, I had to put myself first.  While it sounds selfish, it’s the complete opposite.  In order to be the best I could be for others, I realized I had to get myself together first.  For those of you who followed me on Facebook then, you already know what it took…a combination of MyFitnessPal calorie tracking and a little known workout program called Insanity:

Me and my boy, Shaun T, worked out religiously…every day…sometimes mornings…sometimes afternoons…sometimes evenings. I carried him with me on all my work travel, on my laptop and phone…doing Insanity videos in hotel rooms around the world. I did the 60-day program about 4 times through (with breaks in between cycles)…adding in some weight workouts towards the end. The results were great, as you can see in the first graphic starting around October 2012. By staying focused and consistent, I dropped from about 255lbs to 226lbs at my lowest in July 2013. I got rid of a lot of XXL shirts and 42in waist pants/shorts, and got to a point where I didn’t always feel the need to swim with a shirt on…if ya know what I mean ;-). So August rolled around, and while I was feeling good about myself…I didn’t feel great, because I knew that while I was lighter, and healthier, I wasn’t necessarily that much stronger. I knew that if I wanted to really be healthy and keep this weight off, I’d need more muscle mass…plus I’d look better too :-P.

So the Crossfit journey began.

Now I’ll be honest, it wasn’t my first thought. I had read all the horror stories about injuries and seen some of the cult-like stuff about it. However, a good friend of mine from college was a coach, and pretty much called me out on it…she was right…I was judging something based on others’ opinions and not my own (which is WAY outta character for me). So…I went to my first Crossfit event…the Women’s Throwdown in Austin, TX (where I live) held by Woodward Crossfit in July of 2013. It was pretty awesome…it wasn’t full of muscle heads yelling at each other or insane paleo-eating nut jobs trying to outshine one another…it was just hardworking athletes pushing themselves as hard as they could…for a great cause (it’s a charity event)…and having a lot of fun. I planned to only stay for a little bit, but ended up staying the whole damn day! Long story short…I joined Woodward Crossfit a few weeks after (the delay was because I was determined to complete my last Insanity round, plus I had to go on a business trip), which was around the week of my birthday (Aug 22).

Fast forward a little over a year, with a recently added 21-day Fitness Challenge by David King (who also goes to the same gym), and as of today I’m down about 43lbs (212), with a huge reduction in body fat percentage.  I don’t have the starting or current percentage, but let’s just say all 43lbs lost was fat, and I’ve gained a good amount of muscle in the last year as well…which is why the line flattened a bit before I kicked it up another notch with the 21-Day last month.

Now I’m not posting any more pictures, because that’s not the point of this post (but trust me…I look goooood :P). My purpose is exactly what the subject says: priorities & perseverance. What are you prioritizing in your life? Are you putting too many people’s needs ahead of your own? Are you happy as a result? If you were like me, I already know the answer…but you don’t have to stay this way. You only get one chance at this life, so make the most out of it. Make the choice to put your happiness first, and I don’t mean selfishly…that’s called pleasure. You’re happier when your loved ones are doing well and happy…you’re happier when you have friends who like you and that you can depend on…you’re happier when you kick ass at work…you’re happier when you kill it on the basketball court (or whatever activity you like). Make the decision to be happy, set your goals, then persevere until you attain them…you will stumble along the way…and there will be those around you who either purposely or unknowingly discourage you, but stay focused…it’s not their life…it’s yours. And when it gets really hard…just remember the wise words of Stuart Smalley:

