news aggregator

Ubuntu Scientists: Introducing the “Ubuntu Scientists Blog”!

Planet Ubuntu - Tue, 2014-06-17 18:50

At the last UOS, the founder of Ubuntu Scientists (belkinsa, Svetlana Belkin) decided to create a blog where the team will post news, interviews, and stories of members, to help other scientists get a feeling for who we are, how to help us, and how to use FOSS in the science fields.  Other teams have also done this in the past.

Other posts are HERE


Filed under: News Tagged: blog, Interviews, News, Stories, Svetlana Belkin, Ubuntu, Ubuntu Scientists, UOS

Ubuntu Kernel Team: Kernel Team Meeting Minutes – June 17, 2014

Planet Ubuntu - Tue, 2014-06-17 17:16
Meeting Minutes

IRC Log of the meeting.

Meeting minutes.

Agenda

20140617 Meeting Agenda


ARM Status

Nothing new to report this week


Release Metrics and Incoming Bugs

Release metrics and incoming bug data can be reviewed at the following link:

  • http://people.canonical.com/~kernel/reports/kt-meeting.txt


Milestone Targeted Work Items

I’d note that this section of the meeting is becoming less and less
useful to me in its current form. I’d like to take a vote to skip this
section until I/we can find a better solution. All in favor (ie +1)?
I’ll take silence as agreement as well
ppisati: +1
jsalisbury: +1
rtg: +1
ogasawara: ok, motion passed.
(actually the same could be said for the ARM status part since support it’s part of generic now FWIW)
Dropping ARM Status from this meeting as well.

  • apw: core-1405-kernel (2 work items)
  • ogasawara: core-1405-kernel (2 work items)


Status: Utopic Development Kernel

We have rebased our Utopic kernel to v3.15 final and uploaded
(3.15.0-6.11). As noted in previous meetings, we are planning on
converging on the v3.16 kernel for Utopic. We have started tracking
v3.16-rc1 in our “unstable” ubuntu-utopic branch. We’ll let this
marinate and bake for a bit before we do an official v3.16 based upload
to the archive.
—–
Important upcoming dates:
Thurs Jun 26 – Alpha 1 (~1 week away)
Fri Jun 27 – Kernel Freeze for 12.04.5 and 14.04.1 (~1 week away)
Thurs Jul 31 – Alpha 2 (~6 weeks away)


Status: CVE’s

The current CVE status can be reviewed at the following link:

http://people.canonical.com/~kernel/cve/pkg/ALL-linux.html


Status: Stable, Security, and Bugfix Kernel Updates – Trusty/Saucy/Precise/Lucid

Status for the main kernels, until today (June 17):

  • Lucid – Verification and Testing
  • Precise – Verification and Testing
  • Saucy – Verification and Testing
  • Trusty – Verification and Testing
    Current opened tracking bugs details:
  • http://people.canonical.com/~kernel/reports/kernel-sru-workflow.html

    For SRUs, SRU report is a good source of information:

  • http://people.canonical.com/~kernel/reports/sru-report.html

    Schedule:
    cycle: 08-Jun through 28-Jun
    ====================================================================
    06-Jun Last day for kernel commits for this cycle
    08-Jun – 14-Jun Kernel prep week.
    15-Jun – 21-Jun Bug verification & Regression testing.
    22-Jun – 28-Jun Regression testing & Release to -updates.
    14.04.1 cycle: 29-Jun through 07-Aug
    ====================================================================
    27-Jun Last day for kernel commits for this cycle
    29-Jun – 05-Jul Kernel prep week.
    06-Jul – 12-Jul Bug verification & Regression testing.
    13-Jul – 19-Jul Regression testing & Release to -updates.
    20-Jul – 24-Jul Release prep
    24-Jul 14.04.1 Release [1]
    07-Aug 12.04.5 Release [2]
    [1] This will be the very last kernels for lts-backport-quantal, lts-backport-raring,
    and lts-backport-saucy.
    [2] The lts-backport-trusty kernel will be the default in the precise point release ISO.


Open Discussion or Questions? Raise your hand to be recognized

No open discussion.

Svetlana Belkin: UOS 14.06 Summary and Lessons Learned (as a Track Lead)

Planet Ubuntu - Tue, 2014-06-17 17:11

UOS 14.06 took place last week, June 12 to June 14, and it was the first one where I was able to be there for the whole thing. I was a track lead for the Community Track, which I feel I ended up running for most of the show along with Daniel Holbach.  To the other track leads of the same track, I mean no offence.  :)  Because this was my first full UOS, I tired myself out quickly after each day (the weather was gloomy all three days too) and had no mood to do anything else afterwards, which is why this blog post is almost a week late.

The first thing I will share with you is the summaries for the Community track:

Introduction to Lubuntu: Phill Whiteside and Harry Webber talked about what Lubuntu is and what is planned.
Ubuntu Women Utopic Goals: To get more women involved in Ubuntu, the team has been looking into adding a “get involved quiz” to the website. The plan is now to get it up on community.ubuntu.com. The women’s team also want to take a look at Harvest and see how it could be improved to show new developers what needs to get done. The team website will also get more stories and updated best practices. More classroom sessions are planned as well.
Community Roundtable: A number of topics were discussed, among them dates for the next events. UOS dates will be picked soon, it was suggested to bring it back in line with the release cycle again. We will work with the LoCo community and Classroom team to organise the Global Jam and other events this cycle.
In the LoCo part of our community we want to look into making it easier to share stories and pictures of LoCo events and publish them on Planet. We also want to look into helping teams to train new coordinators and organisers on their teams.
From fix to image: how your patch makes it into Ubuntu: The CI team has put together an impressive process to get changes automatically built and tested. This makes it a lot easier to land high quality changes in Ubuntu. Łukasz Zemczak gave a great presentation on how this process works.
Ubuntu Documentation Team Roundtable: A number of initiatives were discussed to make it easier for newcomers to get involved with the team: a cleanup of current documentation and referring to it on help.ubuntu.com and elsewhere. Regular meetings are planned again as well.
Kubuntu Documentation Team Roundtable June 2014: They talked about following Ubuntu GNOME’s lead and setting up a Kubuntu Promo team to help promote the project, gather contributors, and then send them to the right team (Docs, Dev, etc.). They also talked about working on docs.kubuntu.org, once the server side is set up, to make it look more in line with the new Kubuntu site.
Introduction to Ubuntu GNOME: Ali Linx talked about Ubuntu GNOME, the web site, and the history of the flavour. He and other team members also talked about plans for the website, mainly about art work.
App development training programme: In the last cycle some of our app developers went out to their LoCo meetings and did some app development workshops. We put together a plan to turn this into a more formal training programme, starting in phase 1 in July.
Ubuntu Scientists June 2014 Roundtable: The team reviewed the team’s wiki page and discussed a few changes to it, to make it more inviting and set clearer tasks for newcomers. Another idea was to interview scientist users about their use of Ubuntu and blog about it.

Thanks Daniel Holbach for the summaries and the links to the sessions are in the Community Track embedded link.  Sorry if it’s hard to read, I can’t fix this issue!

I went to other sessions, but this was my favorite:

And I still have some to watch!

Since I was a track lead, I have a few lessons that I learned:

  • Give enough notice to the team or group of people, though I don’t think this was completely my fault, since the UOS organizers didn’t give us a month’s notice
  • Use Chrome not Firefox for Hangouts and if needed, restart your computer before the next Hangout.  I had issues with my netbook and my mic where no one was able to hear me.
  • Even though it’s suggested to set up the Hangout on Air ten minutes before, do it a bit earlier if you have time and check whether you have any problems
  • You can host a session for someone else without needing to say anything yourself

I enjoyed this one, but I think it could have been better; I know that is being worked on.


Canonical Design Team: Making ubuntu.com responsive: JavaScript considerations (14)

Planet Ubuntu - Tue, 2014-06-17 10:05

This post is part of the series ‘Making ubuntu.com responsive‘.

The JavaScript used on ubuntu.com is very light. We limit its use to small functional elements of the web style guide, which act to enhance the user experience but are never required to deliver the content the user is there to consume.

At Canonical we use YUI as our JavaScript framework of choice. We have many years of experience using it for our websites and web apps, and therefore a large knowledge base to fall back on. We have a single core.js which contains a number of functions called on when required.

Below I will discuss some of the functions and workarounds we have provided in the web style guide.

Providing fallbacks

When considering our transition from PNGs to SVGs across the site, we provided a fallback for background images with Modernizr and reset the background image with the .no-svg class on the body. Our approach to a fallback replacement for markup images was a JavaScript snippet from CSS Tricks – SVG Fallbacks, which I converted to YUI:
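The snippet itself was embedded in the original post and did not survive here; the following is a minimal plain-JavaScript sketch of the behaviour described below (the original was written with YUI, and the function names here are illustrative):

```javascript
// Sketch (not the original YUI code): swap .svg image sources for .png
// when the browser lacks SVG support.

// Pure helper: map an .svg src to its .png counterpart at the same path.
function toPngSrc(src) {
  return src.replace(/\.svg$/i, '.png');
}

// Browser wiring; assumes Modernizr is loaded globally.
function svgFallback() {
  if (typeof Modernizr === 'undefined' || Modernizr.svg) {
    return; // SVG supported (or Modernizr missing): leave images alone
  }
  var images = document.querySelectorAll('img[src$=".svg"]');
  for (var i = 0; i < images.length; i++) {
    images[i].src = toPngSrc(images[i].src);
  }
}
```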

The snippet above checks if Modernizr exists in the namespace. It then interrogates the Modernizr object for SVG support. If the browser does not support SVGs we loop through each image with .svg contained in the src and replace the src with the same path and filename but a .png version. This means all SVGs need to have a PNG version at the same location.

Navigation and fallback

The mobile navigation on ubuntu.com uses JavaScript to toggle the menu open and closed. We decided to use JavaScript because it’s well supported. We explored using :target as a pure CSS solution, but this selector isn’t supported in Internet Explorer 7, which represented a fair chunk of our visitors.

The navigation on ubuntu.com, in small screens.

For browsers that don’t support JavaScript we resort to displaying the “burger” icon, which acts as an in-page anchor to the footer which contains the site navigation.

Equal height

As part of the guidelines project we needed a way of setting a number of elements to the same height. We would love to use the flexbox to do this but the browser support is not there yet. Therefore we developed a small JavaScript solution:
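The snippet itself is missing from this copy of the post; here is a minimal sketch matching the description below, assuming the .equal-height class from the article (the helper names are illustrative, not the original YUI code):

```javascript
// Pure helper: the tallest value among a list of measured heights.
function maxHeight(heights) {
  return heights.reduce(function (a, b) { return Math.max(a, b); }, 0);
}

// Browser wiring: for each .equal-height container, measure its child
// divs/lis and set them all to the tallest child's height.
function equalHeights() {
  var groups = document.querySelectorAll('.equal-height');
  for (var g = 0; g < groups.length; g++) {
    var children = groups[g].querySelectorAll('div, li');
    var heights = [];
    for (var i = 0; i < children.length; i++) {
      heights.push(children[i].offsetHeight);
    }
    var tallest = maxHeight(heights);
    for (var j = 0; j < children.length; j++) {
      children[j].style.height = tallest + 'px';
    }
  }
}
```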

This function finds all elements with an .equal-height class. We then look for child divs or lis and measure the tallest one. Then set all these children to the highest value.

Using combined YUI

One of the obstacles discovered when working on this project was that YUI loads modules from an http (non-secure) domain as the library requires them. This of course causes issues on any site that is hosted on a secure domain. We definitely didn’t want to restrict the use of the web style guide to non-secure sites, therefore we needed to combine all required modules into a single YUI file.

To combine your own YUI visit YUI configurator. Add the modules you require and copy the code from the Output Console panel into your own hosted file.

Final thoughts

Obviously we had a fairly easy time making our JavaScript responsive, as a general principle we only use the minimum required on our site. But by integrating tools like Modernizr into our workflow and keeping on top of CSS browser support, we can keep what we do lean and current.

Reading list

The Fridge: Ubuntu Weekly Newsletter Issue 372

Planet Ubuntu - Mon, 2014-06-16 21:38

Welcome to the Ubuntu Weekly Newsletter. This is issue #372 for the week June 9 – 15, 2014, and the full version is available here.

In this issue we cover:

This issue of the Ubuntu Weekly Newsletter is brought to you by:

  • Elizabeth K. Joseph
  • Emily Gonyer
  • Jose Antonio Rey
  • And many others

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!

Except where otherwise noted, content in this issue is licensed under a Creative Commons Attribution ShareAlike 3.0 License.

Chris J Arges: manually deploying openstack with a virtual maas on ubuntu trusty (part 2)

Planet Ubuntu - Mon, 2014-06-16 17:16
In the previous post, I went over how to setup a virtual MAAS environment using KVM [1]. Here I will explain how to setup Juju for use with this environment.

For this setup, we’ll use the maas-server as the juju client to interact with the cluster.

This guide was very useful:
https://maas.ubuntu.com/docs/juju-quick-start.html

Update to the latest stable tools:
sudo apt-add-repository ppa:juju/stable
sudo apt-get update
Next we want to set up juju on the host machine.
sudo apt-get install juju-core
Create a juju environment file.
juju init
Generate a MAAS API key by using the following link:
http://192.168.100.10/MAAS/account/prefs/

Write the following to ~/.juju/environments.yaml, replacing '<maas api key>' with what was generated above:
default: vmaas
environments:
  vmaas:
    type: maas
    maas-server: 'http://192.168.100.10/MAAS'
    maas-oauth: '<maas api key>'
    admin-secret: ubuntu # or something generated with uuid
    default-series: trusty
Now let’s sync tools and bootstrap a node. Note, if you have multiple juju environments then you may need to specify ‘-e vmaas’ if it isn’t your default environment.
juju sync-tools
juju bootstrap # add --show-log --debug  for more output
See if it works by using the following command:
juju status
You should see something similar to the following:
~$ juju status
environment: vmaas
machines:
  "0":
    agent-state: down
    agent-state-info: (started)
    agent-version: 1.18.4
    dns-name: maas-node-0.maas
    instance-id: /MAAS/api/1.0/nodes/node-e41b0c34-e1cb-11e3-98c6-5254001aae69/
    series: trusty
services: {}
Now we can do a test deployment of the juju-gui to our bootstrap node.
juju deploy juju-gui
While it is deploying you can type the following to get a log:
juju debug-log
I wanted to be able to access the juju-gui from an ‘external’ address, so I edited /etc/network/interfaces on that machine to give it a static address:
juju ssh 0
sudo vim /etc/network/interfaces
Add the following to the file:
auto eth0
iface eth0 inet static
  address 192.168.100.11
  netmask 255.255.255.0
Bring that interface up:
sudo ifup eth0
The password can be found here on the host machine:
grep pass .juju/environments/vmaas.jenv
If you used the above configuration it should be ‘ubuntu’.

Log into the service so you can monitor the status graphically during the deployment.

If you get errors saying that servers couldn’t be reached, you may have DNS configuration or proxy issues. You’ll have to resolve these before using Juju. I’ve also had intermittent network issues in my lab. To work around those physical issues you may have to retry the bootstrap, or increase the timeout values in ~/.juju/environments.yaml with the following:
  bootstrap-retry-delay: 5
  bootstrap-timeout: 1200
Now you’re cooking with juju.
  1. http://dinosaursareforever.blogspot.com/2014/06/manually-deploying-openstack-with.html

Valorie Zimmerman: Primes, and products of primes

Planet Ubuntu - Mon, 2014-06-16 08:25
I was finding it difficult to stop thinking and fall asleep last night, so I decided to count as far as I could, in primes or products of primes. I'm not sure why, but it did stop the thoughts whirling around like a hamster on a wheel.

Note on superscripts: HTML only allows me to show squares and cubes. Do the addition.
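To make the note concrete: exponents above a cube are written as products of squares and cubes whose exponents add, for example (in LaTeX notation):

```latex
16 = 2^4 = 2^2 \times 2^2 \quad (2 + 2 = 4)
96 = 2^5 \times 3 = 2^2 \times 2^3 \times 3 \quad (2 + 3 = 5)
```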

1
2
3
2²
5
2×3
7
2³
3²
2×5
11
2²×3
13
2×7
3×5
2²×2²
17
2×3²
19
2²×5
3×7
2×11
23
2³×3
5²
2×13
3³
2²×7
29
2×3×5
31
2²×2³
3×11
2×17
5×7
2²×3²
37
2×19
3×13
2³×5
41
2×3×7
43
2²×11
3²×5
2×23
47
2²×2²×3
7²
2×5²
3×17
2²×13
53
2×3³
5×11
2³×7
3×19
2×29
59
2²×3×5
61
2×31
3²×7
2³×2³
5×13
2×3×11
67
2²×17
3×23
2×5×7
71
2³×3² 
73
2×37
3×5²
2²×19
7×11
2×3×13
79
2²×2²×5
3²×3²
2×41
83
2²×3×7
5×17
2×43
3×29
2³×11
89
2×3²×5
7×13
2²×23
3×31
2×47
5×19
2²×2³×3
97
2×7²
3²×11
2²×5²
101

Please comment if I got the coding or arithmetic wrong! It's been fun to figure all this out -- in my head if I could, on paper if I had to.

Valorie Zimmerman: Emotional Maturity and Free / Open Source communities

Planet Ubuntu - Mon, 2014-06-16 06:32
12 Signs of emotional maturity has an excellent list of the characteristics we look for in FOSS team members -- and traits I want to strengthen in my Self.

1. Flexibility - So necessary. The only constant is change, so survival dictates flexibility.

 2. Responsibility - Carthage Buckley, the author of 12 Signs of emotional maturity says:
You take responsibility for your own life. You understand that your current circumstances are a result of the decisions you have taken up to now. When something goes wrong, you do not rush to blame others. You identify what you can do differently the next time and develop a plan to implement these changes.
The world is a mirror. Sometimes when things go wrong, I mistake what I see as caused by some malevolent force, or even someone being stupid. The human brain is designed to keep us from recognizing our own errors and mistakes, unfortunately. So I need to remember to take responsibility, and seek out evidence of personal shortcomings, in order to improve.

I want my team members to do the same! When someone has caused a mess, I want them to take responsibility, and clean up. I want to learn to more often do the same.

 3. Vision trumps knowledge - If I have a dream and desire, I can get the knowledge I need. Whereas a body of knowledge, by itself, doesn't make anything happen.

Good marketing sells the sizzle, not the steak. In other words, make people hungry, and they will buy your steak. Tell them how great it is, and they'll go somewhere they can smell steak! When working in my team, I need to remember this.

4. Personal growth - A priority every day. Who wants to be around stagnant people?

5. Seek alternative views - This one is so difficult, and so important. The hugely expanded media choices available to people now leads to many of us never interacting with people who disagree with us, or have a different perspective. This leads to groupthink, and even disaster. One way to prevent this in teams is to value diversity, and recruit with diversity as a goal. 

 6. Non-judgmental - Another hard one. Those who seek out alternative views will more easily recognize how different we all can be, while all being of worth. And when we focus on shared goals rather than positions, we can continue to make shared progress towards those goals.

 7. Resilience - Stuff happens. When it does, we all can learn to pick up, dust off, and get going again. This doesn't mean denying that stuff happens; rather it means accepting that and continuing on anyway.

8. A calm demeanor - I think this results from resilience. Freaking out just wastes time and energy, and gets me further off-balance. Better to breathe a bit, and continue on my way.

 9. Realistic optimism - I love this word pair. Seeing a glass as half-full rather than half-empty is a habit, and habits can be created. Bad habits can be changed. Buckley says that success requires effort and patience. Your goals are worth effort and patience, creativity, and perseverance.

10. Approachable - Again, a choice. If I'm open to others, they will feel free to offer their help, encouragement or even warnings. If seeking alternative views is a value, then being approachable is one way to get those views.

11. Self-belief - I think this can be carried too far, but if we've looked for alternative views and perspectives, and created a plan with those views in mind, then criticism will not stop progress. When our goals are deeply desired, we can be flexible in details, and yet continue progress towards the ultimate destination.

12. Humor - Laughter and joy are signs that you are healthy and on your right path. The teams I want to work with are those full of humor, laughter and joy.

PS: I was unable to work the wonderful new word bafulates into this blog post, to my regret. Please accept my apologies.

Elizabeth K. Joseph: Texas Linuxfest wrap-up

Planet Ubuntu - Mon, 2014-06-16 05:23

Last week I finally had the opportunity to attend Texas Linuxfest. I first heard about this conference back when it started from some Ubuntu colleagues who were getting involved with it, so it was exciting when my talk on Code Review for Systems Administrators was accepted.

I arrived late on Thursday night, much later than expected after some serious flight delays due to weather (including 3 hours on the tarmac at a completely different airport due to running out of fuel over DFW). But I got in early enough to get rest before the expo hall opened on Friday afternoon where I helped staff the HP booth.

At the HP booth, we were showing off the latest developments in the high density Moonshot system, including the ARM-based processors that are coming out later this year (currently it’s sold with server grade Atom processors). It was cool to be able to see one, learn more about it and chat with some of the developers at HP who are focusing on ARM.


HP Moonshot

That evening I joined others at the Speaker dinner at one of the Austin Java locations in town. Got to meet several cool new people, including another fellow from HP who was giving a talk, an editor from Apress who joined us from England and one of the core developers of BusyBox.

On Saturday the talks portion of the conference began!

The keynote was by Karen Sandler, titled “Identity Crisis: Are we who we say we are?”, which was a fascinating look at how we all present ourselves in the community. As a lawyer, she gave some great insight into the multiple loyalties that many contributors to Open Source have, and explored some of them. This was quite topical for me, as I continue to do a considerable amount of volunteer work with Ubuntu while working at HP on the OpenStack project as my paid job. But am I always speaking for HP in my role in OpenStack? I am certainly proud to represent HP’s considerable efforts in the community, but in my day-to-day work I’m largely passionate about the project on a personal level, and my views tend to be my own. During the Q&A there was also interesting discussion about the use of email aliases, which got me thinking about my own. I have an Ubuntu address which I pretty strictly use for Ubuntu mailing lists and private Ubuntu-related correspondence, I have an HP address that I pretty much just use for internal HP work, and then everything else in my life goes to my main personal address, including all correspondence on the OpenStack, local Linux and other mailing lists.


Karen Sandler beginning her talk with a “Thank You” to the conference organizers

The next talk I went to was by Corey Quinn on “Selling Yourself: How to handle a technical interview” (slides here). I had a chat with him a couple weeks back about this talk and was able to give some suggestions, so it was nice to see the full talk laid out. His experience comes from work at Taos where he does a lot of interviewing of candidates and was able to make several observations based on how people present themselves. He began by noting that a resume’s only job is to get you an interview, so more time should be spent on actually practicing interviewing rather than strictly focusing on a resume. As the title indicates, the key take away was generally that an interview is the place where you should be selling yourself, no modesty here. He also stressed that it’s a 2 way interview, and the interviewer is very interested in making sure that the person will like the job and that they are actually interested to some degree in the work and the company.

It was then time for my own talk, “Code Review for Systems Administrators,” where I talked about how we do our work on the OpenStack Infrastructure team (slides here). I left a bit more time for questions than I usually do, since my colleague Khai Do was doing a presentation later that did a deeper dive into our continuous integration system (“Scaling the Openstack Test Environment“). I’m glad I did; there were several questions from the audience about some of our additional systems-administration-focused tooling, how we determine what we use (why Puppet? why Cacti?), and what our review process for those systems looked like.

Unfortunately this was all I could attend of the conference, as I had a flight to catch in order to make it to Croatia in time for DORS/CLUG 2014 this week. I do hope to make it back to Texas Linuxfest at some point, the event had a great venue and was well-organized with speaker helpers in every room to do introductions, keep things on track (so nice!) and make sure the A/V was working properly. Huge thanks to Nathan Willis and the other organizers for doing such a great job.

Paul Tagliamonte: Linode pv-grub chaining

Planet Ubuntu - Sun, 2014-06-15 01:40

I've been using Linode since 2010, and many of my friends have heard me talk about how big a fan I am of Linode. I've used Debian unstable on all my Linodes, since I often use them as remote shells for general-purpose Debian development. I've found my Linodes to be indispensable, and I really love Linode.

The Problem

Recently, because of my work on Docker, I was forced to stop using the Linode kernel in favor of the stock Debian kernel, since the stock Linode kernel has no aufs support, and the default LVM-based devicemapper backend can be quite a pain.

I tried loading in btrfs support, and using that to host the Docker instance backed with btrfs, but it was throwing errors as well. (The btrfs errors are ones I fully expect to be gone soon; I can't wait to switch back to using it.) Stuck with unstable backends, I wanted to use the aufs backend, which, despite problems in aufs internally, is quite stable with Docker (and in general).

I started to run through the Linode Library's guide on PV-Grub, but that resulted in a cryptic error with xen not understanding the compression of the kernel. I checked for recent changes to the compression, and lo, the Debian kernel has been switched to use xz compression in sid. Awesome news, really. XZ compression is awesome, and I've been super impressed with how universally we've adopted it in Debian. Keep it up! However, it appears only a newer pv-grub than the Linode hosts have installed will fix this.

After contacting the (ever friendly) Linode support, they were unable to give me a timeline on adding xz support, which would entail upgrading pv-grub. It was quite disappointing news, to be honest. Workarounds were suggested, but I'm not quite happy with them as proper solutions.

After asking in #debian-kernel, waldi was able to give me a few pointers, and the following is heavily inspired by him; the only thing that changed much was config tweaking, which was easy enough. Thanks, Bastian!

The Constraints

I wanted to maintain a 100% stock configuration from the kernel up. When I upgraded my kernel, I wanted it to just work. I didn't want to unpack and repack the kernel, and I didn't want to install software outside main on my system. It had to be 100% Debian and unmodified.

The Solution

It's pretty fun to attach to the lish console and watch bootup pass through GRUB 0.9, to GRUB 2.x, to Linux. Free Software, Fuck Yeah.

Left unable to run my own kernel directly in the Linode interface, the tack here was to use Linode's old pv-grub to chain-load grub-xen, which loads a modern kernel. Turns out this works great.

Let's start by creating a config for Linode's pv-grub to read and use.

sudo mkdir -p /boot/grub/

Now, since pv-grub is legacy grub, we can write out the following config to chain-load in grub-xen (which is just Grub 2.0, as far as I can tell) to /boot/grub/menu.lst. And to think, I almost forgot all about menu.lst. Almost.

default 1
timeout 3

title grub-xen shim
root (hd0)
kernel /boot/xen-shim
boot

Just like riding a bike! Now, let's install and set up grub-xen to work for us.

sudo apt-get install grub-xen
sudo update-grub

And, let's set the config for the GRUB image we'll create in the next step in the /boot/load.cf file:

configfile (xen/xvda)/boot/grub/grub.cfg

Now, lastly, let's generate the /boot/xen-shim file that we need to boot to:

grub-mkimage --prefix '(xen/xvda)/boot/grub' -c /boot/load.cf -O x86_64-xen /usr/lib/grub/x86_64-xen/*.mod > /boot/xen-shim

Next, change your boot configuration to use pv-grub, and give the machine a kick. Should work great! If you run into issues, use the lish shell to debug it, and let me know what else I should include in this post!

Hack on!

Costales: Review of Ubuntu Touch 1.0 on the Nexus 4 at @xatakamovil by @javipas

Planet Ubuntu - Sat, 2014-06-14 08:06

I really enjoyed this article, because for a long time I have wondered how the mobile version of Ubuntu would actually behave.

With professional objectivity, Javier shows us a version that is too immature for so much development time, and that probably suffers from the absence of commercial phones in an excessively innovative and competitive market.

Without further ado, here is the full article at Xataka.

Chris J Arges: manually deploying openstack with a virtual maas on ubuntu trusty (part 1)

Planet Ubuntu - Fri, 2014-06-13 22:05
The goal of this series of posts is to set up virtual machines to simulate a real-world openstack deployment using maas and juju. This goes through setting up a maas-server in a VM, setting up maas-nodes in VMs, and getting them enlisted/commissioned into the maas-server. Next, juju is configured to use the maas cluster. Finally, openstack is deployed using juju.

Overview

Requirements

Ideally, a large server with 16 cores, 32G memory, and 500G disk. Obviously you can tweak this setup to work with less, but be prepared to lock up lesser machines. In addition, your host machine needs to be able to support nested virtualization.

Topology

Here are the basics of what will be set up for our virtual maas cluster. Each red box is a virtual machine with two interfaces. The eth0 interface in the VM connects to the NATed maas-internet network, while the VM’s eth1 interface connects to the isolated maas-management network. The number of maas-nodes should match what is required for the deployment; however, it is simple enough to enlist more nodes later. I chose to use a public/private network in order to be more flexible later in how openstack networking is set up.


Setup Host Machine

Install Requirements

First install all required programs on the host machine.
sudo apt-get install libvirt-bin qemu-kvm cpu-checker virtinst uvtool
Next, check if KVM is working correctly.
kvm-ok
Ensure nested KVM is enabled (replace intel with amd if necessary).
cat /sys/module/kvm_intel/parameters/nested
This should output Y; if it doesn't, do the following:
sudo modprobe -r kvm_intel
sudo modprobe kvm_intel nested=1

Ensure $USER is added to the libvirtd group.
groups | grep libvirtd
Ensure the host machine has SSH keys generated and set up. (Be careful, don't overwrite your existing keys.)
[ -d ~/.ssh ] || ssh-keygen -t rsa

Virtual Network Setup

This step can be done via virt-manager, but can also be done on the command line using virsh. Set up a virtual network which uses NAT to communicate with the host machine, with the following parameters:
Network Name: maas_internet
Network: 192.168.100.0/24
Do _not_ Enable DHCP.
Forwarding to physical network; Any physical device; NAT
And set up an isolated virtual network with the following parameters:
Network Name: maas_management
Network: 10.10.10.0/24
Do _not_ Enable DHCP.
Isolated network

Install the MAAS Server

Download and Start the Install

Ensure you have virt-manager connected to the hypervisor.
While there are many ways we can create virtual machines, I chose the tool uvtool because it works well in Trusty and quickly creates VMs based on the Ubuntu cloud image.

Sync the latest Trusty cloud image:
uvt-simplestreams-libvirt sync release=trusty arch=amd64
Create a maas-server VM:
uvt-kvm create maas-server release=trusty arch=amd64 --disk 20 --memory 2048 --password ubuntu
After it boots, shut it down and edit the VM’s machine configuration.
Make the two network interfaces connect to maas_internet and maas_management respectively.
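If you would rather stay on the command line for this step, it can be sketched with virsh (a sketch, assuming the uvt-kvm guest came up with a single NIC on libvirt's default network; check with virsh dumpxml maas-server first):

```shell
# Stop the guest before changing its hardware definition.
virsh shutdown maas-server

# Point the existing first interface at maas_internet by editing its
# <source> element in the domain XML:
#   <interface type='network'>
#     <source network='maas_internet'/>
#     ...
#   </interface>
virsh edit maas-server

# Add a second NIC on the isolated management network.
virsh attach-interface maas-server network maas_management \
    --model virtio --config

virsh start maas-server
```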

Now edit /etc/network/interfaces to have the following:
auto eth0
iface eth0 inet static
  address 192.168.100.10
  netmask 255.255.255.0
  gateway 192.168.100.1
  dns-nameservers 10.10.10.10 192.168.100.1

auto eth1
iface eth1 inet static
  address 10.10.10.10
  netmask 255.255.255.0
  dns-nameservers 10.10.10.10 192.168.100.1

And follow the instructions here:
http://maas.ubuntu.com/docs/install.html#pkg-install

Which is essentially:
sudo apt-get install maas maas-dhcp maas-dns

MAAS Server Post-Install Tasks

http://maas.ubuntu.com/docs/install.html#post-install

First let’s check that the webpage is working correctly. Depending on your installation, you may need to proxy into the remote hypervisor host before accessing the webpage. If you’re working locally you should be able to access this address directly (as the libvirt maas_internet network is already connected to your local machine).

If you need to access it indirectly (and 192.168.100.0 is a non-conflicting subnet):
sshuttle -D -r <hypervisor IP> 192.168.100.0/24
Access the following:
http://192.168.100.10/MAAS
It should remind you that post installation tasks need to be completed.

Let’s create the admin user from the hypervisor machine:
ssh ubuntu@192.168.100.10
sudo maas-region-admin createadmin --username=root --email="user@host.com" --password=ubuntu
If you want to limit the types of boot images that will be imported, edit:
sudo vim /etc/maas/bootresources.yaml
Import boot images, using the new root user you created to log in:
http://192.168.100.10/MAAS/clusters/
Now click 'import boot images' and be patient as it will take some time before these images are imported.

Add a key for the host machine here:
http://192.168.100.10/MAAS/account/prefs/sshkey/add/
Configure the MAAS Cluster

Follow the instructions here to set up the cluster:
http://maas.ubuntu.com/docs/cluster-configuration.html

http://192.168.100.10/MAAS/clusters/
Click on ‘Cluster master’
Click on edit interface eth1.
Interface: eth1
Management: DHCP and DNS
IP: 10.10.10.10
Subnet mask: 255.255.255.0
Broadcast IP: 10.10.10.255
Router IP: 10.10.10.10
IP Range Low: 10.10.10.100
IP Range High: 10.10.10.200

Click Save Interface
Ensure Nodes Auto-Enlist

Create a MAAS key and use that to log in:
http://192.168.100.10/MAAS/account/prefs/
Click on ‘+ Generate MAAS key’ and copy that down.

Log into the maas-server, and then log into MAAS with the CLI (it will prompt for the MAAS key):
maas login maas-server http://192.168.100.10/MAAS

Now set all nodes to auto accept:
maas maas-server nodes accept-all
Set up keys on the maas-server so it can access the virtual machine host:
sudo mkdir -p ~maas
sudo chown maas:maas ~maas
sudo -u maas ssh-keygen

Add the pubkey in ~maas/.ssh/id_rsa.pub to the virsh server's authorized_keys and to the MAAS SSH keys (http://192.168.100.10/MAAS/account/prefs/sshkey/add/):
sudo cat /home/maas/.ssh/id_rsa.pub
Now install virsh to test a connection and allow the maas-server to control maas-nodes.
sudo apt-get install libvirt-bin
Test the connection to the hypervisor (replace ubuntu with hypervisor host user)
sudo -u maas virsh -c qemu+ssh://ubuntu@192.168.100.1/system list --all

Confirm MAAS Server Networking

Ensure we can reach important addresses from the maas-server:
host streams.canonical.com
host store.juju.ubuntu.com
host archive.ubuntu.com

And that we can download charms if needed:
wget https://store.juju.ubuntu.com/charm-info

Set Up Traffic Forwarding

Set up the maas-server to forward traffic from eth1 to eth0:

You can type the following out manually, or add it as an upstart script so that forwarding is set up properly on each boot. For the latter, add this file as /etc/init/ovs-routing.conf (thanks to Juan Negron):

description "Setup NAT rules for ovs bridge"

start on runlevel [2345]

env EXTIF="eth0"
env BRIDGE="eth1"

task

script
echo "Configuring modules"
modprobe ip_tables || :
modprobe nf_conntrack || :
modprobe nf_conntrack_ftp || :
modprobe nf_conntrack_irc || :
modprobe iptable_nat || :
modprobe nf_nat_ftp || :

echo "Configuring forwarding and dynaddr"
echo "1" > /proc/sys/net/ipv4/ip_forward
echo "1" > /proc/sys/net/ipv4/ip_dynaddr

echo "Configuring iptables rules"
iptables-restore <<-EOM
*nat
-A POSTROUTING -o ${EXTIF} -j MASQUERADE
COMMIT
*filter
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
-A FORWARD -i ${BRIDGE} -o ${EXTIF} -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
-A FORWARD -i ${EXTIF} -o ${BRIDGE} -j ACCEPT
-A FORWARD -j LOG
COMMIT
EOM

end script
Then start the service:
sudo service ovs-routing start

Set Up the Squid Proxy

Ensure the squid proxy can access cloud images:
echo "cloud-images.ubuntu.com" | sudo tee /etc/squid-deb-proxy/mirror-dstdomain.acl.d/98-cloud-images
sudo service squid-deb-proxy restart

Install MAAS Nodes

Now we can virt-install each maas-node on the hypervisor such that it automatically PXE boots and auto-enlists into MAAS. You can adjust the script below to create as many nodes as required. I’ve simplified things by creating every node with dual NICs and ample memory and disk space, but of course you could use custom machines per service: compute nodes need more compute power, ceph nodes need more storage, and the quantum-gateway needs dual NICs. In addition, you could specify raw disks instead of qcow2, or use storage pools; but in this case I wanted something simple that didn’t allocate all of its disk space up front.

for i in {0..19}; do
virt-install \
--name=maas-node-${i} \
--connect=qemu:///system --ram=4096 --vcpus=1 --hvm --virt-type=kvm \
--pxe --boot network,hd \
--os-variant=ubuntutrusty --graphics vnc --noautoconsole --os-type=linux --accelerate \
--disk=/var/lib/libvirt/images/maas-node-${i}.qcow2,bus=virtio,format=qcow2,cache=none,sparse=true,size=32 \
--network=network=maas_internet,model=virtio \
--network=network=maas_management,model=virtio
done

Now each node needs to be manually enlisted with the proper power configuration.
http://maas.ubuntu.com/docs/nodes.html#virtual-machine-nodes

Host Name: maas-node-${i}.vmaas
Power Type: virsh
Power Address: qemu+ssh://ubuntu@192.168.100.1/system
Power ID: maas-node-${i}
Here we need to match each machine to its MAC address and update the power configuration. You can get the MAC addresses of the nodes by running the following on the hypervisor:

virsh dumpxml maas-node-${i} | grep "mac addr"
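To collect them all in one pass, wrap that command in a loop on the hypervisor (assuming the maas-node-{0..19} names used in the virt-install step above):

```shell
# Print each node's name followed by its MAC address lines.
for i in {0..19}; do
    echo "maas-node-${i}:"
    virsh dumpxml maas-node-${i} | grep "mac addr"
done
```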
Here is a script that helps automate some of this process. It can be run from the maas-server (replace the user ubuntu with the appropriate value); it matches the MAC addresses from virsh to the ones in MAAS and then sets up the power configuration accordingly:

#!/usr/bin/python

import sys, os, libvirt
from xml.dom.minidom import parseString

# Let the script use the MAAS server's Django models directly.
os.environ['DJANGO_SETTINGS_MODULE'] = 'maas.settings'
sys.path.append("/usr/share/maas")
from maasserver.models import Node

hhost = 'qemu+ssh://ubuntu@192.168.100.1/system'

# Build a MAC -> domain-name map from the hypervisor's defined domains.
conn = libvirt.open(hhost)
nodes_dict = {}
domains = conn.listDefinedDomains()
for node_name in domains:
    node = conn.lookupByName(node_name)
    node_xml = parseString(node.XMLDesc(0))
    node_mac1 = node_xml.getElementsByTagName('interface')[0].getElementsByTagName('mac')[0].getAttribute('address')
    nodes_dict[node_mac1] = node_name

# For each MAAS node, look up its domain by primary MAC and set up
# virsh power control.
maas_nodes = Node.objects.all()
for node in maas_nodes:
    try:
        mac = node.get_primary_mac()
        node_name = nodes_dict[str(mac)]
        node.hostname = node_name
        node.power_type = 'virsh'
        node.power_parameters = {'power_address': hhost, 'power_id': node_name}
        node.save()
    except KeyError:
        # Skip MAAS nodes whose MAC has no matching virsh domain.
        pass

Note that you will need python-libvirt installed, and you should run the script with something like the following:
sudo -u maas ./setup-nodes.py

Set Up Fastpath and Commission Nodes

You most likely want to use the fast-path installer on your nodes to speed up installation times. Set all nodes to use the fast-path installer using another bulk action on the nodes.

After you have all this done, apply the 'Commission' bulk action.
If you set things up properly you should see all your machines starting up; give this some time. You should then have all the nodes in the 'Ready' state in MAAS:
http://192.168.100.10/MAAS/nodes/
Confirm DNS Setup

One common point of trouble is making sure DNS is set up correctly. We can test this by starting a MAAS node and, inside of it, trying the following:
dig streams.canonical.com
dig store.juju.ubuntu.com

If we can’t hit those, we’ll need to ensure the MAAS server is set up correctly.
Go to: http://192.168.100.10/MAAS/settings/
Enter the host machine's upstream DNS server here if necessary; MAAS will update the bind configuration file and restart that service. After this, re-test.

In addition, I had to disable dnssec-validation for bind. Edit the following file:
sudo vim /etc/bind/named.conf.options
And change the following value:
dnssec-validation no;
And restart the service:
sudo service bind9 restart
Now you have a working virtual MAAS setup using the latest Ubuntu LTS!

Kees Cook: glibc select weakness fixed

Planet Ubuntu - Fri, 2014-06-13 19:21

In 2009, I reported this bug to glibc, describing the problem that exists when a program is using select, and has its open file descriptor resource limit raised above 1024 (FD_SETSIZE). If a network daemon starts using the FD_SET/FD_CLR glibc macros on fdset variables for descriptors larger than 1024, glibc will happily write beyond the end of the fdset variable, producing a buffer overflow condition. (This problem had existed since the introduction of the macros, so, for decades? I figured it was long over-due to have a report opened about it.)

At the time, I was told this wasn’t going to be fixed and that “every program using [select] must be considered buggy.” Two years later, still more people kept asking for this to be fixed and continued to be told “no”.

But, as it turns out, a few months after the most recent “no”, it got silently fixed anyway, with the bug left open as “Won’t Fix”! I’m glad Florian did some house-cleaning on the glibc bug tracker, since I’d otherwise never have noticed that this protection had been added to the ever-growing list of -D_FORTIFY_SOURCE=2 protections.

I’ll still recommend everyone use poll instead of select, but now I won’t be so worried when I see requests to raise the open descriptor limit above 1024.

© 2014, Kees Cook. This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 License.

Jonathan Riddell: Kubuntu on Twitter and Facebook

Planet Ubuntu - Fri, 2014-06-13 18:42
KDE Project:

Slightly late to the game, Kubuntu now has a Twitter and Facebook account to join the Google+ account. New headlines will go there and we've a fancy account from the nice people at SoDash that makes it easy to interact. Give us a Like or a Tweet.

https://twitter.com/kubuntu

https://www.facebook.com/kubuntu.org

Kubuntu Wire: Neon5: KDE Crack of the Day

Planet Ubuntu - Fri, 2014-06-13 18:31

A sister project of Kubuntu is Project Neon, daily builds of KDE Software you can install alongside your normal software to test.  Neon 5 is the packages for KDE Frameworks 5 and Plasma 5 and there is also a weekly ISO made (Friday update should be arriving shortly) with the latest Plasma 5 desktop.

It’s pleasing to see various reviews of Plasma 5 using Neon as the easiest way to test the next generation in KDE Software and, most importantly, take pretty screenshots. The Screenshots of KDE Plasma Next beta 1 article on LinuxBSDos.com takes a look around the desktop. Datamation’s article KDE’s Risky Gamble on New Interface takes a sceptical look at desktop redesigns: “Still, KDE is showing signs of caution, so it might manage its re-design better than its rivals did”. We think it will!

Ubuntu App Developer Blog: 10,000 Users of Ubuntu Phone

Planet Ubuntu - Fri, 2014-06-13 16:35

As we enter the final months before the first Ubuntu phones ship from our partners Meizu and Bq, the numbers of apps, users and downloads continues to grow at a steady pace. Today I’m excited to announce that we have more than ten thousand unique users of Ubuntu on phones or tablets!

Users

Ubuntu phone (and tablet) users sign into their Ubuntu One account on their device in order to download or update the applications on their phone. This allows us to provide many useful features that users expect coming from Android or iOS, such as being able to re-install their collection of apps on a new phone or after resetting their current one, or browsing the store’s website (coming soon) and having the option to install an app directly to their device from there. As a side effect, it means we know how many unique Ubuntu One accounts have connected to the store in order to download an app, and that number has this week passed the 10,000 mark.

Excitement

Not only is this a milestone, but it’s downright amazing when you consider that there are currently no phones available to purchase with Ubuntu on them. The first phones from OEMs will be shipping later this year, but for now there isn’t a phone or tablet that comes with the new Ubuntu device OS on it. That means that each of these 10,000 people have purchased (or already had) either a supported Nexus device, or are using one of the community ports, and either wiped Android off them in favor of Ubuntu, or are dual booting. If this many people are willing to install the beta release of Ubuntu phone on their device, just imagine how many more will want to purchase a phone with Ubuntu pre-installed and with full support from the manufacturer.

Pioneers

In addition to users of Ubuntu phone, we’ve also seen a steady growth in the number of applications and application developers targeting Ubuntu phone and using the Ubuntu SDK. To celebrate them, we created the Ubuntu App Pioneers page, and the first batch of Pioneers t-shirts are being sent out to those intrepid developers who, again, are so excited about a platform that isn’t even available to consumers yet that they’ve dedicated their time and energy into making it better for everyone.

Canonical Design Team: Making ubuntu.com responsive: ensuring performance (13)

Planet Ubuntu - Fri, 2014-06-13 08:18

This post is part of the series ‘Making ubuntu.com responsive‘.

Performance has always been one of the top priorities when it came to building the responsive ubuntu.com. We started with a list of performance snags and worked to improve each one as much as possible in the time we had. Here is a quick run through of the points we collected and the way we managed to improve them.

Asset caching

We now have a number of websites using our web style guide. Because of this, we needed to deliver assets on both http and secure https domains. We decided to build an asset server to support the guidelines and other sites that require asset hosting.

This gave us the ability to increase the far future expires (FFE) of each file. By doing so, the file is cached and not re-requested, which gives us a much faster round trip. But as we still need to be able to update individual files, we cannot set the FFE too far in the future. We plan to resolve this with a new and improved assets system, which is currently under development.

The new asset system will have an internal frontend to upload a binary file. This will provide a link to the asset with a 6 character hexadecimal string prepended to the file's path.

/ho686yst/favicon.ico

The new system removes the ability to edit or update a file; you can only upload a new one and change the link in the markup. This guarantees that the asset stays the same forever.

Minification and concatenation

We introduced a minification and concatenation step to the build of the web style guide. This saves precious bytes and reduces the number of requests performed by each page.

We use the sass ruby gem to generate minified and concatenated CSS in production. We also run the small amount of JavaScript we have through UglifyJS before delivering to production.

Compressed images

Images were the main issue when it came to performance.

We had a look at the file sizes of some of our key images (like the ones in the tablet section of the site) and were shocked to discover we hadn’t been treating our visitors’ bandwidth kindly.

After analysing a handful of images, we decided to have a look into our assets folder and flag the images that were over 100 KB as a first go.

One of the most time-consuming jobs in this project was converting to SVGs all the images that could be converted. This meant recreating pictograms and illustrations as vectors from the earlier PNGs. Any image that could not be recreated as a vector graphic was heavily compressed, which squeezed an alarming amount out of the original file.

We continued this for every image on the site, for a total reduction of 7.712MB.

Reduce required fonts

We currently load a large selection of the Ubuntu font.

<link href='//fonts.googleapis.com/css?family=Ubuntu:400,300,300italic,400italic,700,700italic%7CUbuntu+Mono' rel='stylesheet' type='text/css' />

The designers are reviewing current and planned usage patterns to discover unneeded weights and styles. Since the move from normal font weight to light a few months ago as our base font style, we rarely use the bold weight (700) anymore, resorting to normal (400) for highlighting text.

Once we determine which weights we can drop, we will be able to make significant savings, as seen below:

Reducing loaded fonts: before and after

Using SVG

Taking the leap to SVGs over PNGs caused a number of issues. We decided to load SVGs as separate files rather than inlining them, to keep our markup clean and easy to read and update. This meant we needed to provide four differently coloured images for each pictogram.

We introduced Modernizr to give us an easy way to detect browsers that do not support SVGs and replace the image with PNGs of the same path and name.

Remove unnecessary enhancements

We explored a parallaxing effect for our site’s background with JavaScript. It worked well on normal-resolution screens but lagged on retina displays, so we decided not to do it and set the background position to static instead: user experience is always paramount and trumps visual enhancements.

Future improvements

One of the things in our roadmap is to remove unused styles remaining in the stylesheets. There are a number of solutions for this such as grunt-uncss.

Conclusion

There is still a lot to do but we have definitely broken the back of the work to steer ubuntu.com in the right direction. The aim is to push the site up to 90+ in the speed page tests in the next wave of updates.

Reading list

Dustin Kirkland: Elon Musk, Tesla Motors, and My Own Patent Apologies

Planet Ubuntu - Fri, 2014-06-13 02:25
It's hard for me to believe that I have sat on this draft blog post for almost 6 years.  But I'm stuck on a plane this evening, inspired by Elon Musk and Tesla's (cleverly titled) announcement, "All Our Patents Are Belong To You."  Musk writes:
Yesterday, there was a wall of Tesla patents in the lobby of our Palo Alto headquarters. That is no longer the case. They have been removed, in the spirit of the open source movement, for the advancement of electric vehicle technology.

When I get home, I'm going to take down a plaque that has proudly hung in my own home office for nearly 10 years now. In 2004, I was named an IBM Master Inventor, recognizing sustained contributions to IBM's patent portfolio.

Musk continues:
When I started out with my first company, Zip2, I thought patents were a good thing and worked hard to obtain them. And maybe they were good long ago, but too often these days they serve merely to stifle progress, entrench the positions of giant corporations and enrich those in the legal profession, rather than the actual inventors. After Zip2, when I realized that receiving a patent really just meant that you bought a lottery ticket to a lawsuit, I avoided them whenever possible.

And I feel the exact same way!  When I was an impressionable newly hired engineer at IBM, I thought patents were wonderful expressions of my own creativity.  IBM rewarded me for the work, and recognized them as important contributions to my young career.  Remember, in 2003, IBM was defending the Linux world against evil SCO.  (Confession: I think I read Groklaw every single day.)

Yeah, I filed somewhere around 75 patents in about 4 years, 47 of which have been granted by the USPTO to date.

I'm actually really, really proud of a couple of them.  I was the lead inventor on a couple of early patents defining the invention you might know today as Swype (Android) or Shapewriter (iPhone) on your mobile devices.  In 2003, I called it QWERsive, as it was basically applying "cursive handwriting" to a "qwerty keyboard."  Along with one of my co-inventors, we actually presented a paper at the 27th UNICODE conference in Berlin in 2005, and IBM sold the patent to Lenovo a year later.  (To my knowledge, thankfully that patent has never been enforced, as I used Swype every single day.)


But that enthusiasm evaporated very quickly between 2005 and 2007, as I reviewed thousands of invention disclosures by my IBM colleagues, and hundreds of software patents by IBM competitors in the industry.

I spent most of 2005 working onsite at Red Hat in Westford, MA, and came to appreciate how much more efficiently innovation happened in a totally open source world, free of invention disclosures, black out periods, gag orders, and software patents.  I met open source activists in the free software community, such as Jon maddog Hall, who explained the wake of destruction behind, and the impending doom ahead, in a world full of software patents.

Finally, in 2008, I joined an amazing little free software company called Canonical, which was far too busy releasing Ubuntu every 6 months on time, and building an amazing open source software ecosystem, to fart around with software patents.  To my delight, our founder, Mark Shuttleworth, continues to share the same enlightened view, as he states in this TechCrunch interview (2012):
“People have become confused,” Shuttleworth lamented, “and think that a patent is incentive to create at all.” No one invents just to get a patent, though — people invent in order to solve problems. According to him, patents should incentivize disclosure. Software is not something you can really keep secret, and as such Shuttleworth’s determination is that “society is not benefited by software patents at all.”

Software patents, he said, are a bad deal for society. The remedy is to shorten the duration of patents, and reduce the areas people are allowed to patent. “We’re entering a third world war of patents,” Shuttleworth said emphatically. “You can’t do anything without tripping over a patent!” One cannot possibly check all possible patents for your invention, and the patent arms race is not about creation at all.

And while I'm still really proud of some of my ideas today, I'm ever so ashamed that they're patented.

If I could do what Elon Musk did with Tesla's patent portfolio, you have my word, I absolutely would.  However, while my name is listed as the "inventor" on four dozen patents, all of them are "assigned" to IBM (or Lenovo).  That is to say, they're not mine to give, or open up.

What I can do is speak up, and formally apologize.  I'm sorry I filed software patents.  A lot of them.  I have no intention of ever doing so again.  The system desperately needs a complete overhaul.  Both the technology and business worlds are healthier, better, more innovative environments without software patents.

I do take some consolation that IBM seems to be "one of the good guys", in so much as our modern day IBM has not been as litigious as others, and hasn't, to my knowledge, used any of the patents for which I'm responsible in an offensive manner.

But there are certainly those that do.  Patent trolls.

Another former employer of mine, Gazzang was acquired earlier this month (June 3rd) by Cloudera -- a super sharp, up-and-coming big data open source company with very deep pockets and tremendous market potential.  Want to guess what happened 3 days later?  A super shady patent infringement lawsuit is filed, of course!
Protegrity Corp v. Gazzang, Inc.
Complaint for Patent Infringement
Civil Action No. 3:14-cv-00825; no judge yet assigned. Filed on June 6, 2014 in the U.S. District Court for the District of Connecticut.
Patent in case: 7,305,707, “Method for intrusion detection in a database system” by Mattsson. Prosecuted by Neuner; George W. Cohen; Steven M. Edwards Angell Palmer & Dodge LLP. Includes 22 claims (2 indep.). Was application 11/510,185. Granted 12/4/2007.
Yuck.  And the reality is that happens every single day, and in places where the stakes are much, much higher.  See: Apple v. Google, for instance.

Musk concludes his post:
Technology leadership is not defined by patents, which history has repeatedly shown to be small protection indeed against a determined competitor, but rather by the ability of a company to attract and motivate the world’s most talented engineers. We believe that applying the open source philosophy to our patents will strengthen rather than diminish Tesla’s position in this regard.

What a brave, bold, ballsy, responsible assertion!

I've never been more excited to see someone back up their own rhetoric against software patents, with such a substantial, palpable, tangible assertion.  Kudos, Elon.

Moreover, I've also never been more interested in buying a Tesla.  Coincidence?

Maybe it'll run an open source operating system and apps, too.  Do that, and I'm sold.

:-Dustin

Ken VanDine: Game Development on Ubuntu with Bacon2D

Planet Ubuntu - Fri, 2014-06-13 01:45

During Ubuntu Online Summit today, I did a presentation on game development with Bacon2D.

Bacon2D is a game engine for QML that I've been working on.  For anyone that missed the session, you can go back and watch it any time here.

I've shared the slides as well, if anyone has any questions or suggestions, please join us in #bacon2d on Freenode.  You can also file bugs at https://github.com/Bacon2D/Bacon2D.


Jono Bacon: FirefoxOS and Developing Markets

Planet Ubuntu - Thu, 2014-06-12 23:40

It seems Mozilla is targeting emerging markets and developing nations with $25 cell phones. This is tremendous news, and an admirable focus for Mozilla, but it is not without risk.

Bringing simple, accessible technology to these markets can have a profound impact. As an example, in 2001, 134 million Nigerians shared 500,000 land-lines (as covered by Jack Ewing in Businessweek back in 2007). That year the government started encouraging wireless market competition and by 2007 Nigeria had 30 million cellular subscribers.

This generated market competition and better products, but more importantly, we have seen time and time again that access to technology such as cell phones improves education, provides opportunities for people to start small businesses, and in many cases is a contributing factor for bringing people out of poverty.

So, cell phones are having a profound impact in these nations, but the question is, will it work with FirefoxOS?

I am not sure.

In Mozilla’s defence, they have done an admirable job with FirefoxOS. They have built a powerful platform, based on open web technology, and they lined up a raft of carriers to launch with. They have a strong brand, an active and passionate community, and like so many other success stories, they already have a popular existing product (their browser) to get them into meetings and headlines.

Success though is judged by many different factors, and having a raft of carriers and products on the market is not enough. If they ship in volume but get high return rates, it could kill them, as is common for many new product launches.

What I don’t know is whether this volume/return-rate balance plays such a critical role in developing markets. I would imagine that return rates could be higher (such as someone who has never used a cell phone before taking it back because it is just too alien to them). On the other hand, I wonder if consumers there are willing to put up with more quirks just to get access to the cell network and potentially the Internet.

What seems clear to me is that success here has little to do with the elegance or design of FirefoxOS (or any other product for that matter). It is instead about delivering incredibly dependable hardware. In developing nations people have less access to energy (for charging devices) and have to work harder to obtain it, and have lower access to support resources for how to use new technology. As such, it really needs to just work. This factor, I imagine, is going to be more outside of Mozilla’s hands.

So, in a nutshell, if the $25 phones fail to meet expectations, it may not be Mozilla’s fault. Likewise, if they are successful, it may not be to their credit.
