
The Fridge: Ubuntu Weekly Newsletter Issue 372

Planet Ubuntu - Mon, 2014-06-16 21:38

Welcome to the Ubuntu Weekly Newsletter. This is issue #372 for the week June 9 – 15, 2014, and the full version is available here.

In this issue we cover:

This issue of the Ubuntu Weekly Newsletter is brought to you by:

  • Elizabeth K. Joseph
  • Emily Gonyer
  • Jose Antonio Rey
  • And many others

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!

Except where otherwise noted, content in this issue is licensed under a Creative Commons Attribution ShareAlike 3.0 License.

Chris J Arges: manually deploying openstack with a virtual maas on ubuntu trusty (part 2)

Planet Ubuntu - Mon, 2014-06-16 17:16
In the previous post, I went over how to setup a virtual MAAS environment using KVM [1]. Here I will explain how to setup Juju for use with this environment.

For this setup, we’ll use the maas-server as the juju client to interact with the cluster.

This guide was very useful:
https://maas.ubuntu.com/docs/juju-quick-start.html

Update to the latest stable tools:
sudo apt-add-repository ppa:juju/stable
sudo apt-get update
Next we want to set up juju on the host machine.
sudo apt-get install juju-core
Create a juju environment file.
juju init
Generate a MAAS API key by using the following link:
http://192.168.100.10/MAAS/account/prefs/

Write the following to ~/.juju/environments.yaml, replacing ‘<maas api key>’ with the key generated above:
default: vmaas
environments:
  vmaas:
    type: maas
    maas-server: 'http://192.168.100.10/MAAS'
    maas-oauth: '<maas api key>'
    admin-secret: ubuntu # or something generated with uuid
    default-series: trusty
Now let’s sync tools and bootstrap a node. Note, if you have multiple juju environments then you may need to specify ‘-e vmaas’ if it isn’t your default environment.
juju sync-tools
juju bootstrap # add --show-log --debug  for more output
See if it works by using the following command:
juju status
You should see something similar to the following:
~$ juju status
environment: vmaas
machines:
  "0":
    agent-state: down
    agent-state-info: (started)
    agent-version: 1.18.4
    dns-name: maas-node-0.maas
    instance-id: /MAAS/api/1.0/nodes/node-e41b0c34-e1cb-11e3-98c6-5254001aae69/
    series: trusty
services: {}
Now we can do a test deployment with the juju-gui to our bootstrap node.
juju deploy juju-gui
While it is deploying you can type the following to get a log:
juju debug-log
I wanted to be able to access the juju-gui from an ‘external’ address, so I edited /etc/network/interfaces on that machine to have a static address:
juju ssh 0
sudo vim /etc/network/interfaces
Add the following to the file:
auto eth0
iface eth0 inet static
  address 192.168.100.11
  netmask 255.255.255.0
Bring that interface up.
sudo ifup eth0
The password can be found with the following on the host machine:
grep pass .juju/environments/vmaas.jenv
If you used the above configuration it should be ‘ubuntu’.

Log into the service so you can monitor the status graphically during the deployment.

If you get errors saying that servers couldn’t be reached, you may have DNS configuration or proxy issues; you’ll have to resolve these before using Juju. I’ve also had intermittent network issues in my lab. To work around those physical issues you may have to retry the bootstrap, or increase the timeout values in ~/.juju/environments.yaml with the following:
  bootstrap-retry-delay: 5
  bootstrap-timeout: 1200
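These keys sit alongside the others in the vmaas stanza; a minimal sketch based on the configuration above:

environments:
  vmaas:
    type: maas
    maas-server: 'http://192.168.100.10/MAAS'
    bootstrap-retry-delay: 5
    bootstrap-timeout: 1200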
Now you’re cooking with juju.
  1. http://dinosaursareforever.blogspot.com/2014/06/manually-deploying-openstack-with.html

Valorie Zimmerman: Primes, and products of primes

Planet Ubuntu - Mon, 2014-06-16 08:25
I was finding it difficult to stop thinking and fall asleep last night, so I decided to count as far as I could, in primes or products of primes. I'm not sure why, but it did stop the thoughts whirling around like a hamster on a wheel.

Note on superscripts: HTML only allows me to show squares and cubes. Do the addition.

1
2
3
2²
5
2×3
7
2³
3²
2×5
11
2²×3
13
2×7
3×5
2²×2²
17
2×3²
19
2²×5
3×7
2×11
23
2³×3
5²
2×13
3³
2²×7
29
2×3×5
31
2²×2³
3×11
2×17
5×7
2²×3²
37
2×19
3×13
2³×5
41
2×3×7
43
2²×11
3²×5
2×23
47
2²×2²×3
7²
2×5²
3×17
2²×13
53
2×3³
5×11
2³×7
3×19
2×29
59
2²×3×5
61
2×31
3²×7
2³×2³
5×13
2×3×11
67
2²×17
3×23
2×5×7
71
2³×3² 
73
2×37
3×5²
2²×19
7×11
2×3×13
79
2²×2²×5
3²×3²
2×41
83
2²×3×7
5×17
2×43
3×29
2³×11
89
2×3²×5
7×13
2²×23
3×31
2×47
5×19
2²×2³×3
97
2×7²
3²×11
2²×5²
101

Please comment if I got the coding or arithmetic wrong! It's been fun to figure all this out -- in my head if I could, on paper if I had to.
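Since the post invites checking, here is a quick Python sketch (mine, not from the original post) that recomputes each factorisation by trial division; it prints plain products like 2x2x3, so the squared and cubed forms above still need reading by eye:

#!/usr/bin/python3
# Recompute the factorisations of 1..101 by trial division.
def factorize(n):
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)
    return factors

for n in range(1, 102):
    f = factorize(n)
    print(n, 'x'.join(map(str, f)) if f else '1')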

Valorie Zimmerman: Emotional Maturity and Free / Open Source communities

Planet Ubuntu - Mon, 2014-06-16 06:32
12 Signs of emotional maturity has an excellent list of the characteristics we look for in FOSS team members -- and traits I want to strengthen in my Self.

1. Flexibility - So necessary. The only constant is change, so survival dictates flexibility.

 2. Responsibility - Carthage Buckley, the author of 12 Signs of emotional maturity says:
You take responsibility for your own life. You understand that your current circumstances are a result of the decisions you have taken up to now. When something goes wrong, you do not rush to blame others. You identify what you can do differently the next time and develop a plan to implement these changes.

The world is a mirror. Sometimes when things go wrong, I mistake what I see as caused by some malevolent force, or even someone being stupid. The human brain is designed to keep us from recognizing our own errors and mistakes, unfortunately. So I need to remember to take responsibility, and seek out evidence of personal shortcomings, in order to improve.

I want my team members to do the same! When someone has caused a mess, I want them to take responsibility, and clean up. I want to learn to more often do the same.

 3. Vision trumps knowledge - If I have a dream and desire, I can get the knowledge I need. Whereas a body of knowledge, by itself, doesn't make anything happen.

Good marketing sells the sizzle, not the steak. In other words, make people hungry, and they will buy your steak. Tell them how great it is, and they'll go somewhere they can smell steak! When working in my team, I need to remember this.

4. Personal growth - A priority every day. Who wants to be around stagnant people?

5. Seek alternative views - This one is so difficult, and so important. The hugely expanded media choices available to people now leads to many of us never interacting with people who disagree with us, or have a different perspective. This leads to groupthink, and even disaster. One way to prevent this in teams is to value diversity, and recruit with diversity as a goal. 

 6. Non-judgmental - Another hard one. Those who seek out alternative views will more easily recognize how different we all can be, while all being of worth. And when we focus on shared goals rather than positions, we can continue to make shared progress towards those goals.

 7. Resilience - Stuff happens. When it does, we all can learn to pick up, dust off, and get going again. This doesn't mean denying that stuff happens; rather it means accepting that and continuing on anyway.

8. A calm demeanor - I think this results from resilience. Freaking out just wastes time and energy, and gets me further off-balance. Better to breathe a bit, and continue on my way.

 9. Realistic optimism - I love this word pair. Seeing that a glass is half-full, rather than half-empty is a habit, and habits can be created. Bad habits can be changed. Buckley says that success requires effort and patience. Your goals are worth effort and patience, creativity, and perseverance.

10. Approachable - Again, a choice. If I'm open to others, they will feel free to offer their help, encouragement or even warnings. If seeking alternative views is a value, then being approachable is one way to get those views.

11. Self-belief - I think this can be carried too far, but if we've looked for alternative views and perspectives, and created a plan with those views in mind, then criticism will not stop progress. When our goals are deeply desired, we can be flexible in details, and yet continue progress towards the ultimate destination.

12. Humor - Laughter and joy are signs that you are healthy and on your right path. The teams I want to work with are those full of humor, laughter and joy.

PS: I was unable to work the wonderful new word bafulates into this blog post, to my regret. Please accept my apologies.

Elizabeth K. Joseph: Texas Linuxfest wrap-up

Planet Ubuntu - Mon, 2014-06-16 05:23

Last week I finally had the opportunity to attend Texas Linuxfest. I first heard about this conference back when it started from some Ubuntu colleagues who were getting involved with it, so it was exciting when my talk on Code Review for Systems Administrators was accepted.

I arrived late on Thursday night, much later than expected after some serious flight delays due to weather (including 3 hours on the tarmac at a completely different airport due to running out of fuel over DFW). But I got in early enough to get rest before the expo hall opened on Friday afternoon where I helped staff the HP booth.

At the HP booth, we were showing off the latest developments in the high density Moonshot system, including the ARM-based processors that are coming out later this year (currently it’s sold with server grade Atom processors). It was cool to be able to see one, learn more about it and chat with some of the developers at HP who are focusing on ARM.


HP Moonshot

That evening I joined others at the Speaker dinner at one of the Austin Java locations in town. Got to meet several cool new people, including another fellow from HP who was giving a talk, an editor from Apress who joined us from England and one of the core developers of BusyBox.

On Saturday the talks portion of the conference began!

The keynote was by Karen Sandler, titled “Identity Crisis: Are we who we say we are?”, which was a fascinating look at how we all present ourselves in the community. As a lawyer, she gave some great insight into the multiple loyalties that many contributors to Open Source have, and explored some of them. This was quite topical for me as I continue to do a considerable amount of volunteer work with Ubuntu and work at HP on the OpenStack project as my paid job. But am I always speaking for HP in my role in OpenStack? I am certainly proud to represent HP’s considerable efforts in the community, but in my day-to-day work I’m largely passionate about the project and my work on a personal level, and my views tend to be my own. During the Q&A there was also interesting discussion about the use of email aliases, which got me thinking about my own. I have an Ubuntu address which I pretty strictly use for Ubuntu mailing lists and private Ubuntu-related correspondence, I have an HP address that I pretty much just use for internal HP work, and then everything else in my life pretty much goes to my main personal address, including all correspondence on the OpenStack, local Linux and other mailing lists.


Karen Sandler beginning her talk with a “Thank You” to the conference organizers

The next talk I went to was by Corey Quinn on “Selling Yourself: How to handle a technical interview” (slides here). I had a chat with him a couple weeks back about this talk and was able to give some suggestions, so it was nice to see the full talk laid out. His experience comes from work at Taos where he does a lot of interviewing of candidates and was able to make several observations based on how people present themselves. He began by noting that a resume’s only job is to get you an interview, so more time should be spent on actually practicing interviewing rather than strictly focusing on a resume. As the title indicates, the key take away was generally that an interview is the place where you should be selling yourself, no modesty here. He also stressed that it’s a 2 way interview, and the interviewer is very interested in making sure that the person will like the job and that they are actually interested to some degree in the work and the company.

It was then time for my own talk, “Code Review for Systems Administrators,” where I talked about how we do our work on the OpenStack Infrastructure team (slides here). I left a bit more time for questions than I usually do, since my colleague Khai Do was doing a presentation later that did a deeper dive into our continuous integration system (“Scaling the OpenStack Test Environment“). I’m glad I did: there were several questions from the audience about some of our additional systems-administration-focused tooling, how we determine what we use (why Puppet? why Cacti?), and what our review process for those systems looks like.

Unfortunately this was all I could attend of the conference, as I had a flight to catch in order to make it to Croatia in time for DORS/CLUG 2014 this week. I do hope to make it back to Texas Linuxfest at some point, the event had a great venue and was well-organized with speaker helpers in every room to do introductions, keep things on track (so nice!) and make sure the A/V was working properly. Huge thanks to Nathan Willis and the other organizers for doing such a great job.

Paul Tagliamonte: Linode pv-grub chaining

Planet Ubuntu - Sun, 2014-06-15 01:40

I've been using Linode since 2010, and many of my friends have heard me talk about how big a fan I am of linode. I've used Debian unstable on all my Linodes, since I often use them as a remote shell for general purpose Debian development. I've found my linodes to be indispensable, and I really love Linode.

The Problem

Recently, because of my work on Docker, I was forced to stop using the Linode kernel in favor of the stock Debian kernel, since the stock Linode kernel has no aufs support, and the default LVM-based devicemapper backend can be quite a pain.

I tried loading in btrfs support and using that to host the Docker instance backed with btrfs, but it was throwing errors as well. (The btrfs errors are ones I fully expect to be gone soon; I can't wait to switch back to using it.) Stuck with unstable backends, I wanted to use the aufs backend, which, despite problems in aufs internally, is quite stable with Docker (and in general).

I started to run through the Linode Library's guide on PV-Grub, but that resulted in a cryptic error with xen not understanding the compression of the kernel. I checked for recent changes to the compression, and lo, the Debian kernel has been switched to use xz compression in sid. Awesome news, really. XZ compression is awesome, and I've been super impressed with how universally we've adopted it in Debian. Keep it up! However, it appears only a newer pv-grub than the one the Linode hosts have installed will fix this.

After contacting the (ever friendly) Linode support, they were unable to give me a timeline on adding xz support, which would entail upgrading pv-grub. It was quite disappointing news, to be honest. Workarounds were suggested, but I'm not quite happy with them as proper solutions.

After asking in #debian-kernel, waldi was able to give me a few pointers, and the following is very inspired by him, the only thing that changed much was config tweaking, which was easy enough. Thanks, Bastian!

The Constraints

I wanted to maintain a 100% stock configuration from the kernel up. When I upgraded my kernel, I wanted to just work. I didn't want to unpack and repack the kernel, and I didn't want to install software outside main on my system. It had to be 100% Debian and unmodified.

The Solution

It's pretty fun to attach to the lish console and watch bootup pass through GRUB 0.9, to GRUB 2.x, to Linux. Free Software, Fuck Yeah.

Left unable to run my own kernel directly in the Linode interface, the tack here was to use Linode's old pv-grub to chain-load grub-xen, which loads a modern kernel. Turns out this works great.

Let's start by creating a config for Linode's pv-grub to read and use.

sudo mkdir -p /boot/grub/

Now, since pv-grub is legacy grub, we can write out the following config to chain-load in grub-xen (which is just Grub 2.0, as far as I can tell) to /boot/grub/menu.lst. And to think, I almost forgot all about menu.lst. Almost.

default 1
timeout 3

title grub-xen shim
root (hd0)
kernel /boot/xen-shim
boot

Just like riding a bike! Now, let's install and set up grub-xen to work for us.

sudo apt-get install grub-xen
sudo update-grub

And, let's set the config for the GRUB image we'll create in the next step in the /boot/load.cf file:

configfile (xen/xvda)/boot/grub/grub.cfg

Now, lastly, let's generate the /boot/xen-shim file that we need to boot to:

grub-mkimage --prefix '(xen/xvda)/boot/grub' -c /boot/load.cf -O x86_64-xen /usr/lib/grub/x86_64-xen/*.mod > /boot/xen-shim
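As a quick sanity check (my own suggestion, not part of the original walkthrough), the generated shim should be an ELF image that Xen's loader understands:

file /boot/xen-shim
# expected: something along the lines of "ELF 64-bit LSB executable, x86-64"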

Next, change your boot configuration to use pv-grub, and give the machine a kick. Should work great! If you run into issues, use the lish shell to debug it, and let me know what else I should include in this post!

Hack on!

Costales: Review of Ubuntu Touch 1.0 on the Nexus 4 at @xatakamovil by @javipas

Planet Ubuntu - Sat, 2014-06-14 08:06

I really enjoyed this article because I've long been curious about how the mobile version of Ubuntu actually performs.

With professional objectivity, Javier shows us a version that is still too immature for so much development time, one which possibly suffers from the absence of commercial handsets in an extremely innovative and competitive market.

Without further ado, I leave you with the full article at Xataka.

Chris J Arges: manually deploying openstack with a virtual maas on ubuntu trusty (part 1)

Planet Ubuntu - Fri, 2014-06-13 22:05
The goal of this new series of posts is to set up virtual machines to simulate a real-world openstack deployment using maas and juju. This post goes through setting up a maas-server in a VM, setting up maas-nodes in VMs, and getting them enlisted/commissioned into the maas-server. Next, juju is configured to use the maas cluster. Finally, openstack is deployed using juju.
Overview

Requirements

Ideally, a large server with 16 cores, 32GB of memory, and a 500GB disk. Obviously you can tweak this setup to work with less, but be prepared to lock up lesser machines. In addition, your host machine needs to be able to support nested virtualization.
Topology

Here are the basics of what will be set up for our virtual maas cluster. Each red box is a virtual machine with two interfaces. The eth0 interface in the VM connects to the NATed maas_internet network, while the VM’s eth1 interface connects to the isolated maas_management network. The number of maas-nodes should match what is required for the deployment; however, it is simple enough to enlist more nodes later. I chose to use a public/private network pair in order to be more flexible later in how openstack networking is set up.


Setup Host Machine

Install Requirements

First install all required programs on the host machine.
sudo apt-get install libvirt-bin qemu-kvm cpu-checker virtinst uvtool
Next, check if kvm is working correctly.
kvm-ok
Ensure nested KVM is enabled (replace intel with amd if necessary).
cat /sys/module/kvm_intel/parameters/nested
This should output Y. If it doesn't, do the following:
sudo modprobe -r kvm_intel
sudo modprobe kvm_intel nested=1

Ensure $USER is added to the libvirtd group.
groups | grep libvirtd
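If it isn't, something like this should sort it out (on Trusty the group is named libvirtd); log out and back in afterwards so the membership takes effect:
sudo adduser $USER libvirtd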
Ensure the host machine has SSH keys generated and set up. (Be careful: don't overwrite your existing keys.)
[ -d ~/.ssh ] || ssh-keygen -t rsa

Virtual Network Setup

This step can be done via virt-manager, but can also be done on the command line using virsh (a sketch follows the parameters below). Set up a virtual network which uses NAT to communicate with the host machine, with the following parameters:
Network Name: maas_internet
Network: 192.168.100.0/24
Do _not_ enable DHCP.
Forwarding to physical network; Any physical device; NAT

And set up an isolated virtual network with the following parameters:
Network Name: maas_management
Network: 10.10.10.0/24
Do _not_ enable DHCP.
Isolated network
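If you go the virsh route, here is a sketch of the NATed network definition (the bridge name is my own arbitrary choice; the isolated maas_management network is the same minus the <forward> element, with 10.10.10.0/24 addressing):

<network>
  <name>maas_internet</name>
  <forward mode='nat'/>
  <bridge name='virbr1' stp='on' delay='0'/>
  <ip address='192.168.100.1' netmask='255.255.255.0'/>
</network>

Save that as maas_internet.xml and load it:
virsh net-define maas_internet.xml
virsh net-start maas_internet
virsh net-autostart maas_internet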
Install the MAAS Server

Download and Start the Install

Ensure you have virt-manager connected to the hypervisor.
While there are many ways to create virtual machines, I chose the tool uvtool because it works well in Trusty and quickly creates VMs based on the Ubuntu cloud image.

Sync the latest trusty cloud image:
uvt-simplestreams-libvirt sync release=trusty arch=amd64
Create a maas-server VM:
uvt-kvm create maas-server release=trusty arch=amd64 --disk 20 --memory 2048 --password ubuntu
After it boots, shut it down and edit the VM's machine configuration.
Make the two network interfaces connect to maas_internet and maas_management respectively; one way to do this is sketched below.
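A sketch using virsh edit (uvt-kvm gives the VM a single NIC by default, so you may need to add a second <interface> stanza; libvirt will generate MAC addresses for you). The first interface becomes eth0 and the second eth1:

virsh edit maas-server

<interface type='network'>
  <source network='maas_internet'/>
</interface>
<interface type='network'>
  <source network='maas_management'/>
</interface>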

Now edit /etc/network/interfaces to have the following:
auto eth0
iface eth0 inet static
  address 192.168.100.10
  netmask 255.255.255.0
  gateway 192.168.100.1
  dns-nameservers 10.10.10.10 192.168.100.1

auto eth1
iface eth1 inet static
  address 10.10.10.10
  netmask 255.255.255.0
  dns-nameservers 10.10.10.10 192.168.100.1

And follow the instructions here:
http://maas.ubuntu.com/docs/install.html#pkg-install

Which is essentially:
sudo apt-get install maas maas-dhcp maas-dns

MAAS Server Post Install Tasks

http://maas.ubuntu.com/docs/install.html#post-install

First let’s check if the webpage is working correctly. Depending on your installation, you may need to proxy into a remote host hypervisor before accessing the webpage. If you’re working locally you should be able to access this address directly (as the libvirt maas_internet network is already connected to your local machine).

If you need to access it indirectly (and 192.168.100.0 is a non-conflicting subnet):
sshuttle -D -r <hypervisor IP> 192.168.100.0/24
Access the following:
http://192.168.100.10/MAAS
It should remind you that post installation tasks need to be completed.

Let’s create the admin user from the hypervisor machine:
ssh ubuntu@192.168.100.10
sudo maas-region-admin createadmin --username=root --email="user@host.com" --password=ubuntu
If you want to limit the types of boot images that will be imported, you need to edit:
sudo vim /etc/maas/bootresources.yaml
Import boot images, using the new root user you created to log in:
http://192.168.100.10/MAAS/clusters/
Now click 'import boot images' and be patient as it will take some time before these images are imported.

Add a key for the host machine here:
http://192.168.100.10/MAAS/account/prefs/sshkey/add/
Configure the MAAS Cluster

Follow the instructions here to set up the cluster:
http://maas.ubuntu.com/docs/cluster-configuration.html

http://192.168.100.10/MAAS/clusters/
Click on ‘Cluster master’
Click on edit interface eth1.
Interface: eth1
Management: DHCP and DNS
IP: 10.10.10.10
Subnet mask: 255.255.255.0
Broadcast IP: 10.10.10.255
Router IP: 10.10.10.10
IP Range Low: 10.10.10.100
IP Range High: 10.10.10.200

Click Save Interface
Ensure Nodes Auto-Enlist

Create a MAAS key and use that to log in:
http://192.168.100.10/MAAS/account/prefs/
Click on ‘+ Generate MAAS key’ and copy that down.

Log into the maas-server, and then log into maas using the MAAS key:
maas login maas-server http://192.168.100.10/MAAS

Now set all nodes to auto accept:
maas maas-server nodes accept-all
Set up keys on the maas-server so it can access the virtual machine host:
sudo mkdir -p ~maas
sudo chown maas:maas ~maas
sudo -u maas ssh-keygen

Add the pubkey in ~maas/.ssh/id_rsa.pub to the virsh server's authorized_keys and to the maas SSH keys (http://192.168.100.10/MAAS/account/prefs/sshkey/add/):
sudo cat /home/maas/.ssh/id_rsa.pub
Now install virsh to test a connection and allow the maas-server to control maas-nodes.
sudo apt-get install libvirt-bin
Test the connection to the hypervisor (replace ubuntu with the hypervisor host user):
sudo -u maas virsh -c qemu+ssh://ubuntu@192.168.100.1/system list --all

Confirm MAAS Server Networking

Ensure we can reach important addresses via the maas-server:
host streams.canonical.com
host store.juju.ubuntu.com
host archive.ubuntu.com

And that we can download charms if needed:
wget https://store.juju.ubuntu.com/charm-info

Setup Traffic Forwarding

Set up the maas-server to forward traffic from eth1 to eth0:

You can type the following out manually, or add it as an upstart script so forwarding is set up properly on each boot. Add this file as /etc/init/ovs-routing.conf (thanks to Juan Negron):

description "Setup NAT rules for ovs bridge"

start on runlevel [2345]

env EXTIF="eth0"
env BRIDGE="eth1"

task

script
echo "Configuring modules"
modprobe ip_tables || :
modprobe nf_conntrack || :
modprobe nf_conntrack_ftp || :
modprobe nf_conntrack_irc || :
modprobe iptable_nat || :
modprobe nf_nat_ftp || :

echo "Configuring forwarding and dynaddr"
echo "1" > /proc/sys/net/ipv4/ip_forward
echo "1" > /proc/sys/net/ipv4/ip_dynaddr

echo "Configuring iptables rules"
iptables-restore <<-EOM
*nat
-A POSTROUTING -o ${EXTIF} -j MASQUERADE
COMMIT
*filter
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
-A FORWARD -i ${BRIDGE} -o ${EXTIF} -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
-A FORWARD -i ${EXTIF} -o ${BRIDGE} -j ACCEPT
-A FORWARD -j LOG
COMMIT
EOM

end script
Then start the service:
sudo service ovs-routing start

Setup Squid Proxy

Ensure the squid proxy can access cloud images:
echo "cloud-images.ubuntu.com" | sudo tee /etc/squid-deb-proxy/mirror-dstdomain.acl.d/98-cloud-imagessudo service squid-deb-proxy restartInstall MAAS NodesNow we can virt-install each maas-node on the hypervisor such that it automatically pxe boots and auto-enlists into MAAS. You can adjust the script below to create as many nodes as required. I’ve also simplified things by creating everything with dual nics and ample memory and hard drive space, but of course you could use custom machines per service. Compute-nodes need more compute power, ceph nodes will need more storage, and quantum-gateway will need dual nics. In addition you could specify raw disks instead of qcow2, or use storage pools; but in this case I wanted something simple that didn’t automatically use all the space it needed.

for i in {0..19}; do
virt-install \
--name=maas-node-${i} \
--connect=qemu:///system --ram=4096 --vcpus=1 --hvm --virt-type=kvm \
--pxe --boot network,hd \
--os-variant=ubuntutrusty --graphics vnc --noautoconsole --os-type=linux --accelerate \
--disk=/var/lib/libvirt/images/maas-node-${i}.qcow2,bus=virtio,format=qcow2,cache=none,sparse=true,size=32 \
--network=network=maas_internet,model=virtio \
--network=network=maas_management,model=virtio
done

Now each node needs to be manually enlisted with the proper power configuration.
http://maas.ubuntu.com/docs/nodes.html#virtual-machine-nodes

Host Name: maas-node-${i}.vmaas
Power Type: virsh
Power Address: qemu+ssh://ubuntu@192.168.100.1/system
Power ID: maas-node-${i}
Here we need to match the machines to their MAC addresses and update the power configuration. You can get the MAC addresses of each node by using the following on the hypervisor:

virsh dumpxml maas-node-${i} | grep "mac addr"
Here is a script that helps automate some of this process. It can be run from the maas-server (replace the ubuntu user with the appropriate value); it matches the MAC addresses from virsh to the ones in MAAS and then sets up the power configuration accordingly:

#!/usr/bin/python

import sys, os, libvirt
from xml.dom.minidom import parseString

# Set up Django so we can use the MAAS server's models directly.
os.environ['DJANGO_SETTINGS_MODULE'] = 'maas.settings'
sys.path.append("/usr/share/maas")
from maasserver.models import Node, Tag

hhost = 'qemu+ssh://ubuntu@192.168.100.1/system'

# Build a map of MAC address -> domain name from the hypervisor.
conn = libvirt.open(hhost)
nodes_dict = {}
domains = conn.listDefinedDomains()
for node_name in domains:
    node = conn.lookupByName(node_name)
    node_xml = parseString(node.XMLDesc(0))
    node_mac1 = node_xml.getElementsByTagName('interface')[0].getElementsByTagName('mac')[0].getAttribute('address')
    nodes_dict[node_mac1] = node_name

# Match each MAAS node by its primary MAC and set its power configuration.
maas_nodes = Node.objects.all()
for node in maas_nodes:
    try:
        system_id = node.system_id
        mac = node.get_primary_mac()
        node_name = nodes_dict[str(mac)]
        node.hostname = node_name
        node.power_type = 'virsh'
        node.power_parameters = {'power_address': hhost, 'power_id': node_name}
        node.save()
    except:
        # Skip any node we cannot match up.
        pass

Note you will need python-libvirt installed, and to run the above script with something like the following:
sudo -u maas ./setup-nodes.py

Setup Fastpath and Commission Nodes

You most likely want to use the fast-path installer on nodes to speed up installation times. Set all nodes to use the fastpath installer using another bulk action on the nodes.

After you have all this done, click the 'commission' bulk action.
If you set things up properly, you should see all your machines starting up; give this some time. You should then have all the nodes in the 'Ready' state in MAAS!
http://192.168.100.10/MAAS/nodes/
Confirm DNS Setup

One point of trouble can be ensuring DNS is set up correctly. We can test this by starting a maas-node and, inside of it, trying the following:
dig streams.canonical.com
dig store.juju.ubuntu.com

If we can't hit those, we'll need to ensure the maas-server is set up correctly.
Go to: http://192.168.100.10/MAAS/settings/
Enter the host machine's upstream DNS here if necessary; this should update the bind configuration file and restart that service. After this, re-test.

In addition I had to disable dnssec-validation for bind. Edit the following file:
sudo vim /etc/bind/named.conf.options
And change the following value:
dnssec-validation no;
And restart the service:
sudo service bind9 restart
Now you have a working virtual maas setup using the latest Ubuntu LTS!

Kees Cook: glibc select weakness fixed

Planet Ubuntu - Fri, 2014-06-13 19:21

In 2009, I reported this bug to glibc, describing the problem that exists when a program is using select, and has its open file descriptor resource limit raised above 1024 (FD_SETSIZE). If a network daemon starts using the FD_SET/FD_CLR glibc macros on fdset variables for descriptors larger than 1024, glibc will happily write beyond the end of the fdset variable, producing a buffer overflow condition. (This problem had existed since the introduction of the macros, so, for decades? I figured it was long over-due to have a report opened about it.)
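To make the failure mode concrete, here is a minimal C sketch of the hazard (my illustration, not code from the report; it assumes the default hard limit already permits 4096 descriptors):

#include <fcntl.h>
#include <stdio.h>
#include <sys/resource.h>
#include <sys/select.h>
#include <unistd.h>

int main(void) {
    /* Raise the soft fd limit above FD_SETSIZE, as a busy daemon might. */
    struct rlimit rl = { 4096, 4096 };
    setrlimit(RLIMIT_NOFILE, &rl);

    fd_set fds;        /* only has room for FD_SETSIZE (1024) descriptors */
    FD_ZERO(&fds);

    /* Manufacture a descriptor numbered well above FD_SETSIZE. */
    int fd = dup2(open("/dev/null", O_RDONLY), 2000);

    FD_SET(fd, &fds);  /* writes past the end of fds: a buffer overflow */
    printf("FD_SETSIZE=%d, fd=%d\n", FD_SETSIZE, fd);
    return 0;
}

Built against a glibc that carries the check, with -D_FORTIFY_SOURCE=2, the FD_SET line now aborts at runtime instead of silently corrupting memory.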

At the time, I was told this wasn’t going to be fixed and “every program using [select] must be considered buggy.” Two years later, still more people were asking for this to be fixed, and they continued to be told “no”.

But, as it turns out, a few months later after the most recent “no”, it got silently fixed anyway, with the bug left open as “Won’t Fix”! I’m glad Florian did some house-cleaning on the glibc bug tracker, since I’d otherwise never have noticed that this protection had been added to the ever-growing list of -D_FORTIFY_SOURCE=2 protections.

I’ll still recommend everyone use poll instead of select, but now I won’t be so worried when I see requests to raise the open descriptor limit above 1024.

© 2014, Kees Cook. This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 License.

Jonathan Riddell: Kubuntu on Twitter and Facebook

Planet Ubuntu - Fri, 2014-06-13 18:42
KDE Project:

Slightly late to the game, Kubuntu now has a Twitter and Facebook account to join the Google+ account. New headlines will go there and we've a fancy account from the nice people at SoDash that makes it easy to interact. Give us a Like or a Tweet.

https://twitter.com/kubuntu

https://www.facebook.com/kubuntu.org

Kubuntu Wire: Neon5: KDE Crack of the Day

Planet Ubuntu - Fri, 2014-06-13 18:31

A sister project of Kubuntu is Project Neon, daily builds of KDE Software that you can install alongside your normal software for testing. Neon 5 provides the packages for KDE Frameworks 5 and Plasma 5, and there is also a weekly ISO (the Friday update should be arriving shortly) with the latest Plasma 5 desktop.

It’s pleasing to see various reviews of Plasma 5 using Neon as the easiest way to test the next generation of KDE Software and, most importantly, take pretty screenshots. The "Screenshots of KDE Plasma Next beta 1" article on LinuxBSDos.com takes a look around the desktop. Datamation’s article "KDE’s Risky Gamble on New Interface" takes a sceptical look at desktop redesigns: “Still, KDE is showing signs of caution, so it might manage its re-design better than its rivals did”. We think it will!

Ubuntu App Developer Blog: 10,000 Users of Ubuntu Phone

Planet Ubuntu - Fri, 2014-06-13 16:35

As we enter the final months before the first Ubuntu phones ship from our partners Meizu and Bq, the number of apps, users and downloads continues to grow at a steady pace. Today I’m excited to announce that we have more than ten thousand unique users of Ubuntu on phones or tablets!

Users

Ubuntu phone (and tablet) users sign into their Ubuntu One account on their device in order to download or update the applications on their phone. This allows us to provide many useful features that users expect coming from Android or iOS, such as being able to re-install their collection of apps on a new phone or after resetting their current one, or browsing the store’s website (coming soon) and having the option to install an app directly to their device from there. As a side effect, it means we know how many unique Ubuntu One accounts have connected to the store in order to download an app, and that number has this week passed the 10,000 mark.

Excitement

Not only is this a milestone, but it’s downright amazing when you consider that there are currently no phones available to purchase with Ubuntu on them. The first phones from OEMs will be shipping later this year, but for now there isn’t a phone or tablet that comes with the new Ubuntu device OS on it. That means that each of these 10,000 people has either purchased (or already had) a supported Nexus device or is using one of the community ports, and has either wiped Android off it in favor of Ubuntu or is dual booting. If this many people are willing to install the beta release of Ubuntu phone on their device, just imagine how many more will want to purchase a phone with Ubuntu pre-installed and with full support from the manufacturer.

Pioneers

In addition to users of Ubuntu phone, we’ve also seen steady growth in the number of applications and application developers targeting Ubuntu phone and using the Ubuntu SDK. To celebrate them, we created the Ubuntu App Pioneers page, and the first batch of Pioneers t-shirts is being sent out to those intrepid developers who, again, are so excited about a platform that isn’t even available to consumers yet that they’ve dedicated their time and energy to making it better for everyone.

Canonical Design Team: Making ubuntu.com responsive: ensuring performance (13)

Planet Ubuntu - Fri, 2014-06-13 08:18

This post is part of the series ‘Making ubuntu.com responsive‘.

Performance has always been one of the top priorities when it came to building the responsive ubuntu.com. We started with a list of performance snags and worked to improve each one as much as possible in the time we had. Here is a quick run through of the points we collected and the way we managed to improve them.

Asset caching

We now have a number of websites using our web style guide. Because of this, we needed to deliver assets on both http and secure https domains. We decided to build an asset server to support the guidelines and other sites that require asset hosting.

This gave us the ability to increase the far future expires (FFE) of each file. The file is then cached by the browser and not re-requested from the server, giving a much faster round trip. But because we still need the ability to update an individual file, we cannot set the FFE too far in the future. We plan to resolve this with a new and improved assets system, which is currently under development.

The new asset system will have an internal frontend for uploading a binary file. This will provide a link to the asset with a 6-character hexadecimal string attached to the file name.

/ho686yst/favicon.ico

The new system removes the ability to edit or update a file; you can only upload a new one and change the link in the markup. This guarantees that an asset stays the same forever.

Minification and concatenation

We introduced a minification and concatenation step to the build of the web style guide. This saves precious bytes and reduces the number of requests performed by each page.

We use the sass ruby gem to generate minified and concatenated CSS in production. We also run the small amount of JavaScript we have through UglifyJS before delivering to production.
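In practice those two steps boil down to something like this (the file names here are placeholders, not our real build targets):

sass --style compressed global.scss global.min.css
uglifyjs global.js -c -m -o global.min.js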

Compressed images

Images were the main issue when it came to performance.

We had a look at the file sizes of some of our key images (like the ones in the tablet section of the site) and were shocked to discover we hadn’t been treating our visitors’ bandwidth kindly.

After analysing a handful of images, we decided to have a look into our assets folder and flag the images that were over 100 KB as a first go.

One of the most time-consuming jobs in this project was converting every image we could to SVG. This meant recreating pictograms and illustrations as vectors from the earlier PNGs. Any image that could not be recreated as a vector graphic was heavily compressed, which squeezed an alarming amount out of the original file.

We continued this for every image on the site. By doing so the total reduction across the site was 7.712MB.

Reduce required fonts

We currently load a large selection of the Ubuntu font.

<link href='//fonts.googleapis.com/css?family=Ubuntu:400,300,300italic,400italic,700,700italic%7CUbuntu+Mono' rel='stylesheet' type='text/css' />

The designers are exploring present and anticipated usage patterns to discover unneeded weights. Since the move from normal to light as our base font weight a few months ago, we rarely use the bold weight (700) anymore, resorting to normal (400) for highlighting text.

Once we determine which weights we can drop, we will be able to make significant savings, as seen below:

Reducing loaded fonts: before and after
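For example, if the bold weights can indeed be dropped (an assumption on my part; the audit is still in progress), the request would shrink to something like:

<link href='//fonts.googleapis.com/css?family=Ubuntu:300,300italic,400,400italic%7CUbuntu+Mono' rel='stylesheet' type='text/css' />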

Using SVG

Taking the leap to SVG over PNG caused a number of issues. We decided to load SVGs as separate image files, as opposed to inlining them, to keep our markup clean and easy to read and update. This meant we needed to provide four differently coloured images for each pictogram.

We introduced Modernizr to give us an easy way to detect browsers that do not support SVGs and replace the image with PNGs of the same path and name.
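The fallback can be as small as this sketch (hypothetical code; the production version may differ):

if (!Modernizr.svg) {
  // Swap every SVG image for the PNG sitting at the same path.
  var imgs = document.querySelectorAll('img[src$=".svg"]');
  for (var i = 0; i < imgs.length; i++) {
    imgs[i].src = imgs[i].src.replace(/\.svg$/, '.png');
  }
}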

Remove unnecessary enhancements

We explored a parallaxing effect for our site’s background with JavaScript. It worked well on normal-resolution screens but lagged on retina displays, so we decided not to do it and set the background position to static instead; user experience is always paramount and trumps visual enhancements.

Future improvements

One of the things in our roadmap is to remove unused styles remaining in the stylesheets. There are a number of solutions for this such as grunt-uncss.

Conclusion

There is still a lot to do, but we have definitely broken the back of the work needed to steer ubuntu.com in the right direction. The aim is to push the site to 90+ in page speed tests in the next wave of updates.

Reading list

Dustin Kirkland: Elon Musk, Tesla Motors, and My Own Patent Apologies

Planet Ubuntu - Fri, 2014-06-13 02:25
It's hard for me to believe that I have sat on this draft blog post for almost 6 years.  But I'm stuck on a plane this evening, inspired by Elon Musk and Tesla's (cleverly titled) announcement, "All Our Patents Are Belong To You."  Musk writes:
Yesterday, there was a wall of Tesla patents in the lobby of our Palo Alto headquarters. That is no longer the case. They have been removed, in the spirit of the open source movement, for the advancement of electric vehicle technology.

When I get home, I'm going to take down a plaque that has proudly hung in my own home office for nearly 10 years now.  In 2004, I was named an IBM Master Inventor, recognizing sustained contributions to IBM's patent portfolio.

Musk continues:
When I started out with my first company, Zip2, I thought patents were a good thing and worked hard to obtain them. And maybe they were good long ago, but too often these days they serve merely to stifle progress, entrench the positions of giant corporations and enrich those in the legal profession, rather than the actual inventors. After Zip2, when I realized that receiving a patent really just meant that you bought a lottery ticket to a lawsuit, I avoided them whenever possible.

And I feel the exact same way!  When I was an impressionable newly hired engineer at IBM, I thought patents were wonderful expressions of my own creativity.  IBM rewarded me for the work, and recognized them as important contributions to my young career.  Remember, in 2003, IBM was defending the Linux world against evil SCO.  (Confession: I think I read Groklaw every single day.)

Yeah, I filed somewhere around 75 patents in about 4 years, 47 of which have been granted by the USPTO to date.

I'm actually really, really proud of a couple of them.  I was the lead inventor on a couple of early patents defining the invention you might know today as Swype (Android) or Shapewriter (iPhone) on your mobile devices.  In 2003, I called it QWERsive, as it was basically applying "cursive handwriting" to a "qwerty keyboard."  Along with one of my co-inventors, we actually presented a paper at the 27th UNICODE conference in Berlin in 2005, and IBM sold the patent to Lenovo a year later.  (To my knowledge, thankfully, that patent has never been enforced, as I use Swype every single day.)


But that enthusiasm evaporated very quickly between 2005 and 2007, as I reviewed thousands of invention disclosures by my IBM colleagues, and hundreds of software patents by IBM competitors in the industry.

I spent most of 2005 working onsite at Red Hat in Westford, MA, and came to appreciate how much more efficiently innovation happened in a totally open source world, free of invention disclosures, black out periods, gag orders, and software patents.  I met open source activists in the free software community, such as Jon maddog Hall, who explained the wake of destruction behind, and the impending doom ahead, in a world full of software patents.

Finally, in 2008, I joined an amazing little free software company called Canonical, which was far too busy releasing Ubuntu every 6 months on time, and building an amazing open source software ecosystem, to fart around with software patents.  To my delight, our founder, Mark Shuttleworth, continues to share the same enlightened view, as he states in this TechCrunch interview (2012):
“People have become confused,” Shuttleworth lamented, “and think that a patent is incentive to create at all.” No one invents just to get a patent, though — people invent in order to solve problems. According to him, patents should incentivize disclosure. Software is not something you can really keep secret, and as such Shuttleworth’s determination is that “society is not benefited by software patents at all.”

Software patents, he said, are a bad deal for society. The remedy is to shorten the duration of patents, and reduce the areas people are allowed to patent. “We’re entering a third world war of patents,” Shuttleworth said emphatically. “You can’t do anything without tripping over a patent!” One cannot possibly check all possible patents for your invention, and the patent arms race is not about creation at all.

And while I'm still really proud of some of my ideas today, I'm ever so ashamed that they're patented.

If I could do what Elon Musk did with Tesla's patent portfolio, you have my word, I absolutely would.  However, while my name is listed as the "inventor" on four dozen patents, all of them are "assigned" to IBM (or Lenovo).  That is to say, they're not mine to give, or open up.

What I can do, is speak up, and formally apologize.  I'm sorry I filed software patents.  A lot of them.  I have no intention on ever doing so again.  The system desperately needs a complete overhaul.  Both the technology and business worlds are healthier, better, more innovative environment without software patents.

I do take some consolation that IBM seems to be "one of the good guys", in so much as our modern day IBM has not been as litigious as others, and hasn't, to my knowledge, used any of the patents for which I'm responsible in an offensive manner.

But there are certainly those that do.  Patent trolls.

Another former employer of mine, Gazzang was acquired earlier this month (June 3rd) by Cloudera -- a super sharp, up-and-coming big data open source company with very deep pockets and tremendous market potential.  Want to guess what happened 3 days later?  A super shady patent infringement lawsuit is filed, of course!
Protegrity Corp v. Gazzang, Inc.
Complaint for Patent Infringement
Civil Action No. 3:14-cv-00825; no judge yet assigned. Filed on June 6, 2014 in the U.S. District Court for the District of Connecticut.
Patent in case: 7,305,707, “Method for intrusion detection in a database system” by Mattsson. Prosecuted by Neuner; George W. Cohen; Steven M. Edwards Angell Palmer & Dodge LLP. Includes 22 claims (2 indep.). Was application 11/510,185. Granted 12/4/2007.
Yuck.  And the reality is that happens every single day, and in places where the stakes are much, much higher.  See: Apple v. Google, for instance.

Musk concludes his post:
Technology leadership is not defined by patents, which history has repeatedly shown to be small protection indeed against a determined competitor, but rather by the ability of a company to attract and motivate the world’s most talented engineers. We believe that applying the open source philosophy to our patents will strengthen rather than diminish Tesla’s position in this regard.

What a brave, bold, ballsy, responsible assertion!

I've never been more excited to see someone back up their own rhetoric against software patents, with such a substantial, palpable, tangible assertion.  Kudos, Elon.

Moreover, I've also never been more interested in buying a Tesla.   Coincidence?

Maybe it'll run an open source operating system and apps, too.  Do that, and I'm sold.

:-Dustin

Ken VanDine: Game Development on Ubuntu with Bacon2D

Planet Ubuntu - Fri, 2014-06-13 01:45

During Ubuntu Online Summit today, I did a presentation on game development with Bacon2D.

Bacon2D is a game engine for QML that I've been working on.  For anyone that missed the session, you can go back and watch it any time here.

I've shared the slides as well, if anyone has any questions or suggestions, please join us in #bacon2d on Freenode.  You can also file bugs at https://github.com/Bacon2D/Bacon2D.


Jono Bacon: FirefoxOS and Developing Markets

Planet Ubuntu - Thu, 2014-06-12 23:40

It seems Mozilla is targeting emerging markets and developing nations with $25 cell phones. This is tremendous news, and an admirable focus for Mozilla, but it is not without risk.

Bringing simple, accessible technology to these markets can have a profound impact. As an example, in 2001, 134 million Nigerians shared 500,000 land-lines (as covered by Jack Ewing in Businessweek back in 2007). That year the government started encouraging wireless market competition and by 2007 Nigeria had 30 million cellular subscribers.

This generated market competition and better products, but more importantly, we have seen time and time again that access to technology such as cell phones improves education, provides opportunities for people to start small businesses, and in many cases is a contributing factor for bringing people out of poverty.

So, cell phones are having a profound impact in these nations, but the question is, will it work with FirefoxOS?

I am not sure.

In Mozilla’s defence, they have done an admirable job with FirefoxOS. They have built a powerful platform, based on open web technology, and they lined up a raft of carriers to launch with. They have a strong brand, an active and passionate community, and like so many other success stories, they already have a popular existing product (their browser) to get them into meetings and headlines.

Success though is judged by many different factors, and having a raft of carriers and products on the market is not enough. If they ship in volume but get high return rates, it could kill them, as is common for many new product launches.

What I don’t know is whether this volume/return-rate balance plays such a critical role in developing markets. I would imagine that return rates could be higher (such as someone who has never used a cell phone before taking it back because it is just too alien to them). On the other hand, I wonder if those consumers there are willing to put up with more quirks just to get access to the cell network and potentially the Internet.

What seems clear to me is that success here has little to do with the elegance or design of FirefoxOS (or any other product for that matter). It is instead about delivering incredibly dependable hardware. In developing nations people have less access to energy (for charging devices) and have to work harder to obtain it, and have lower access to support resources for how to use new technology. As such, it really needs to just work. This factor, I imagine, is going to be more outside of Mozilla’s hands.

So, in a nutshell, if the $25 phones fail to meet expectations, it may not be Mozilla’s fault. Likewise, if they are successful, it may not be to their credit.

Ubuntu Podcast from the UK LoCo: S07E11 – The One with the East German Laundry Detergent

Planet Ubuntu - Thu, 2014-06-12 18:39

Alan Pope, Mark Johnson, Tony Whitmore, and Laura Cowen are in Studio L for Season Seven, Episode Eleven of the Ubuntu Podcast!

Download OGG | Download MP3

In this week’s show:-

We’ll be back next week, when we’ll be discussing alternatives to Ubuntu One, which recently shut down, and we’ll go through your feedback.

Please send your comments and suggestions to: podcast@ubuntu-uk.org
Join us on IRC in #uupc on Freenode
Leave a voicemail via phone: +44 (0) 203 298 1600, sip: podcast@sip.ubuntu-uk.org and skype: ubuntuukpodcast
Follow us on Twitter
Find our Facebook Fan Page
Follow us on Google+

Jonathan Riddell: Plasma 5 Is Green

Planet Ubuntu - Thu, 2014-06-12 15:17
KDE Project:

On our Plasma 5 build status page most of the packages are now a pleasing green colour. For the first time today I installed them all and logged in and... it worked! It took a bit of removing old caches and obsolete installs that I'd been making in the months previously; and that nice temporary Next wallpaper everyone uses doesn't really get shipped, so I had to add that; and the icons sometimes work and sometimes don't; and there's no plasma-nm release yet, so I had to grab a copy and build that before I could use the network. But with some fiddle and a wee bit ay faff, it works!

Today is a beautiful day.

If you just want to try it out the Neon 5 ISO is the easiest way still. But the packages I'm pleased about today are the Next PPA packages which is the packaging Kubuntu (and hopefully Debian) will be using going forward. It doesn't co-install with Plasma 1 so only use if you want breakage and if you want to help fix breakage, join #kubuntu-devel and help out.

Canonical Design Team: Making ubuntu.com responsive: our Sass architecture (12)

Planet Ubuntu - Thu, 2014-06-12 13:01

This post is part of the series ‘Making ubuntu.com responsive‘.

When working to make the current web style guide responsive, we made some large updates to the core Sass, and decided to update the file and folder structure of our styles. I love reading about other people's and organisations' Sass architectures, so I thought it would be only right to share the structure that has evolved over time here at Canonical.

Let’s get right to it.

  • core.scss
  • core-constants.scss
  • core-grid.scss
  • core-mixins.scss
  • core-print.scss
  • core-templates.scss
  • patterns
    • patterns.scss
    • _arrows.scss
    • _blockquotes.scss
    • _boxes.scss
    • _buttons.scss
    • _contextual-footer.scss
    • _footer.scss
    • _forms.scss
    • _header.scss
    • _helpers.scss
    • _image-centered.scss
    • _inline-logos.scss
    • _lists.scss
    • _notifications.scss
    • _resource.scss
    • _rows.scss
    • _slider.scss
    • _structure.scss
    • _tabbed-content.scss
    • _tooltips.scss
    • _typography.scss
    • _vertical-divider.scss

I won’t describe each file as some are self-explanatory but let’s just go through the core files to understand the structure.

core.scss contains the core HTML element styling. Such as img, p, ul, etc. You could say this acts as a reset file customised to match our style.

core-constants.scss is home to all variables used throughout. This file contains all the set colours used on the site. Base font size and some extra grid variables used to extend the layout.

core-grid.scss holds the entire responsive grid styles. This file mainly consists of generated code from Gridinator which we extended with breakpoints to modify the layout as the viewport gets smaller. You can read more about how we did this in “Making ubuntu.com responsive: making our grid responsive”.

core-mixins.scss holds all the mixins used in our Sass.

core-templates.scss holds full-page styling classes. Without a template class applied to the <body> of a page you get the standard page style; if you add a template class, you get the styles appropriate to that template.

Web team front end working on the web style guide.

Divide and conquer

Patterns were originally all in one huge scss file, which became difficult to maintain. So we decided to split the patterns file apart into a patterns folder, which lets us find things and work in a much more modular way. This involved manually working through the file, moving each component's styles into a new file and importing that back in the same position.

Naming conventions

Our mission when setting up the naming convention for our CSS was to make the markup as human readable as possible.

We decided early on to use an almost object-oriented inheritance system for large structural elements. For example, the class .row can be extended by adding the .row-enterprise class, which applies a dark aubergine background and modifies the elements inside to display correctly on a dark background.

We switch to a single class approach for small modular components, such as lists. If you apply the class .list the list items are styled with our simple Ubuntu list style. This can be modified by changing the class to .list-ubuntu or .list-canonical, which apply their corresponding branding themed bullets to the items.

List styles.

The decision to use different systems arose from the desire to keep the markup clean and easy to skim-read by limiting the classes applied to each element. We could have continued with the inheritance system for smaller elements, but that would have led to two or more classes (.list and .list-canonical) for each element. We felt this was overkill for every small component. For large structural elements such as rows, it's easier to start with a .row class and add functionality and styling by adding classes. The sketch below illustrates both conventions.
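Illustrative markup only, not lifted from ubuntu.com:

<!-- inheritance: .row extended with .row-enterprise for the dark variant -->
<div class="row row-enterprise">
  <h2>Enterprise services</h2>
</div>

<!-- single class: the branded list style is baked into one name -->
<ul class="list-ubuntu">
  <li>First item</li>
  <li>Second item</li>
</ul>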

Mixins

We mainly use mixins to handle browser prefixes as we haven’t yet added a “prefixer” step to our build system.

A lot of our styles are quite specific and therefore would not benefit from being included as a mixin.
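For instance, a browser-prefix mixin of the kind we mean might look like this (a sketch, not our actual code):

@mixin rounded-corners($radius: 4px) {
  -webkit-border-radius: $radius;
  -moz-border-radius: $radius;
  border-radius: $radius;
}

.box { @include rounded-corners(8px); }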

A note on Block Element Module syntax

We would like to have used the Block Element Modifier (BEM) syntax, as we think it is a good convention that is easy for people external to the project to understand and use. But since we started this project back in 2013 with the syntax above, which is now used on a number of sites across the Canonical/Ubuntu web real estate, the effort of converting every class name to follow the BEM naming convention would far outweigh the benefits it would return.

Conclusion

By splitting our bloated patterns file into multiple small modular files, we have made it much easier to maintain and to diagnose bugs within components. I would recommend that anyone in a similar situation find the time to split their components into separate files sooner rather than later; the effort grows exponentially the longer it's left.

Introducing linting to the production of the guidelines will keep our coding style consistent throughout the team and help readability for new team members.

Reading list

Jono Bacon: Community Management Training at OSCON, LinuxCon North America, and LinuxCon Europe

Planet Ubuntu - Wed, 2014-06-11 17:55

I am a firm believer in building strong and empowered communities. We are in an age of a community management renaissance in which we are defining repeatable best practice that can be applied to many different types of communities, whether internal to companies, external with volunteers, or a mix of both.

I have been working to further this growth in community management via my books, The Art of Community and Dealing With Disrespect, the Community Leadership Summit, the Community Leadership Forum, and delivering training to our next generation of community managers and leaders.

Last year I ran my first community management training course, and it was very positively received. I am delighted to announce that I will be running an updated training course at three events over the coming months.

OSCON

On Sunday 20th July 2014 I will be presenting the course at the OSCON conference in Portland, Oregon. This is a tutorial, so you will need to purchase a tutorial ticket to attend. Attendance is limited, so be sure to get to the class early on the day to reserve a seat!

Find Out More

LinuxCon North America and Europe

I am delighted to bring my training to the excellent LinuxCon events in both North America and Europe.

Firstly, on Fri 22nd August 2014 I will be presenting the course at LinuxCon North America in Chicago, Illinois and then on Thurs Oct 16th 2014 I will deliver the training at LinuxCon Europe in Düsseldorf, Germany.

Tickets are $300 for the day’s training. This is a steal; I usually charge $2500+/day when delivering the training as part of a consultancy arrangement. Thanks to the Linux Foundation for making this available at an affordable rate.

Space is limited, so go and register ASAP:

What Is Covered

So what is in the training course?

My goal with each training day is to discuss how to build and grow a community, including building collaborative workflows, defining a governance structure, planning, marketing, and evaluating effectiveness. The day is packed with Q&A and discussion, and I encourage my students to raise questions, challenge me, and explore ways of optimizing their communities. This is not a sit-down-and-listen-to-a-teacher-drone-on kind of session; it is interactive and designed to spark discussion.

The day is mapped out like this:

  • 9.00am – Welcome and introductions
  • 9.30am – The core mechanics of community
  • 10.00am – Planning your community
  • 10.30am – Building a strategic plan
  • 11.00am – Building collaborative workflow
  • 12.00pm – Governance: Part I
  • 12.30pm – Lunch
  • 1.30pm – Governance: Part II
  • 2.00pm – Marketing, advocacy, promotion, and social
  • 3.00pm – Measuring your community
  • 3.30pm – Tracking, measuring community management
  • 4.30pm – Burnout and conflict resolution
  • 5.00pm – Finish

I will warn you; it is an exhausting day, but ultimately rewarding. It covers a lot of ground in a short period of time, and then you can follow with further discussion of these and other topics on our Community Leadership discussion forum.

I hope to see you there!
