news aggregator

Chris J Arges: using CRIU to checkpoint a kernel build

Planet Ubuntu - Wed, 2014-06-25 20:46
CRIU stands for Checkpoint/Restart in Userspace [1]. The criu package should be landing in utopic soon, so I wanted to test drive it and see how it handles.

I thought of an interesting example: being in the middle of a Linux kernel build when a security update needs to be installed and the machine rebooted. While most of us would probably just reboot and rebuild, why not checkpoint the build and save the progress, then restore it after the system update? I admit it's not the most useful example, but it is pretty cool nonetheless.
sudo apt-get install criu
# start build; save the PID for later
cd linux; make clean; make defconfig
make & echo $! > ~/make.pid
# enter a clean directory that isn't tmp and dump state
mkdir ~/cr-build && cd $_
sudo criu dump --shell-job -t $(cat ~/make.pid)
# install updates and reboot the machine
# restore your build
cd ~/cr-build
sudo criu restore --shell-job -t $(cat ~/make.pid)

And you're building again!
  1. http://criu.org/Main_Page
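For repeated use, the same dump/restore pair is easy to wrap in a script. Here is a hedged dry-run sketch that only assembles the commands (criu's -D/--images-dir flag names the images directory explicitly; PIDFILE, IMGDIR, and the PID 12345 are placeholders invented for this sketch, and actually running the commands still needs sudo as above):

```shell
# Dry run: build the criu command lines without needing root or a live build.
PIDFILE=./make.pid
IMGDIR=./cr-build
echo 12345 > "$PIDFILE"   # stand-in for the real make PID
mkdir -p "$IMGDIR"
DUMP_CMD="criu dump --shell-job -D $IMGDIR -t $(cat $PIDFILE)"
RESTORE_CMD="criu restore --shell-job -D $IMGDIR -t $(cat $PIDFILE)"
echo "$DUMP_CMD"
```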

Jono Bacon: Community Leadership Forum

Planet Ubuntu - Wed, 2014-06-25 16:29

A little while ago I set up the Community Leadership Forum. The forum is designed to be a place where community leaders and managers can learn and share experience about how to grow fun, productive, and empowered communities.

The forum is open and accessible to all communities – technology, social, environmental, entertainment, or anything else. It is intended to be diverse and pull together a great set of people.

It is also designed to be another tool (in addition to the Community Leadership Summit) to further the profession, art, and science of building great communities.

We are seeing some wonderful growth on the forum, and because the forum is powered by Discourse it is a simple pleasure to use.

I am also encouraging organizations who are looking for community managers to share their job descriptions on the forum. This forum will be a strong place to find the best talent in community management and for the talent to find great job opportunities.

I hope to see you there!

Join the Community Leadership Forum

Ubuntu App Developer Blog: Bring Your Apps to Hack Days

Planet Ubuntu - Wed, 2014-06-25 15:17
Ready for RTM*: Ubuntu Touch Core App Hack Days!

* Release to Manufacturing

We’re running another set of Core Apps Hack Days next week. Starting Monday 30th June through to Friday 4th July we’ll be hacking on Core Apps, getting them polished for our upcoming RTM (Release To Manufacture) images. The goal of our hack days is as always to implement missing features, fix bugs, get new developers involved in coding on Ubuntu using the SDK and to have some fun hacking on Free Software.

For those who’ve not seen the hack days before, it’s really simple. We get together from 09:00 UTC till 21:00 UTC on #ubuntu-app-devel on freenode IRC and hack on the Core Apps. We will be testing the apps to destruction, filing and triaging bugs, creating patches, discussing and testing proposals, and generally doing whatever we can to get these apps ready for RTM. It’s good fun, relaxed, and a great way to get started in Ubuntu app development with the SDK.

We’ll have developers hanging around to answer questions, and can call on platform and SDK experts for assistance when required. We focus on specific apps each day, but as always we welcome contributions to all the core apps both during the designated days, and beyond.

Not just Core Apps

This time around we’re also doing things a little differently. Typically we only focus attention on the main community maintained Core Apps we ship on our device images. For this set of Hack Days we’d like to invite 3rd party community app developers to bring their apps along as well and hack with us. We’re looking for developers who have already developed their Ubuntu app using the SDK but maybe need help with the “last mile”. Perhaps you have design questions, bugs or feature enhancements which you’d like to get people involved in.

 

We won’t be writing your code for you, but we can certainly help to find experienced people to answer your questions and advise on platform and SDK details. We’d expect you to make your code available somewhere, to allow contributions, and perhaps to enable some kind of bug tracker or task manager. It’s up to you to manage your own community app; we’re here to help though!

Get involved

If you’re interested in bringing your app to hack days, then get in touch with popey (Alan Pope) on IRC or via email [popey@ubuntu.com] and we’ll schedule it in for next week and get the word out.

You can find out more about the Core Apps Hack Days on the wiki, and can discuss this with us on IRC in #ubuntu-app-devel.

Stuart Langridge: Throttling or slowing down network interfaces on Ubuntu

Planet Ubuntu - Wed, 2014-06-25 08:58

Michael Mahemoff, on Google Plus, points out an idea from the Rails people to slow down your network connection to your local machine in order to simulate the experience of using the web, where slow connections are common (especially on mobile).

The commands they mention are for OS X, though. To do this in Ubuntu you want the following:

Slow down your network connection to localhost by adding a 500ms delay:

sudo tc qdisc add dev lo root handle 1:0 netem delay 500msec

If you do this, then ping localhost, you’ll see that packets now take a second to return (because the 500ms delay applies on the way out and the way back).

To remove this delay:

sudo tc qdisc del dev lo root
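As an aside, netem can model more than latency: the same qdisc also accepts packet loss and similar impairments (option names from the tc-netem documentation; the values below are illustrative). This sketch only assembles the command, since actually applying it needs root:

```shell
# Combine a 300 ms delay with 1% packet loss on the loopback device.
DEV=lo
OPTS="delay 300msec loss 1%"
CMD="tc qdisc add dev $DEV root handle 1:0 netem $OPTS"
echo "$CMD"   # run under sudo to apply; 'tc qdisc del dev lo root' undoes it
```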

Since those commands are pretty alarmingly impenetrable, I put together a tiny little app to do it for you instead. Grab the Python code below and run it, and then you can enable throttling by just ticking a box, and drag the sliders to set the amount you want to slow down your connection. Try it next time you’re building an app which talks to the network — you may find it enlightening (although depressing) how badly your app (or the framework you’re using) deals with slow connections… and half your users will have those slow connections. So we need to get better at dealing with them.

from gi.repository import Gtk, GLib
import socket, fcntl, struct, array, sys, os

def which(program):
    def is_exe(fpath):
        return os.path.isfile(fpath) and os.access(fpath, os.X_OK)
    fpath, fname = os.path.split(program)
    if fpath:
        if is_exe(program):
            return program
    else:
        for path in os.environ["PATH"].split(os.pathsep):
            path = path.strip('"')
            exe_file = os.path.join(path, program)
            if is_exe(exe_file):
                return exe_file
    return None

def all_interfaces():
    max_possible = 128  # arbitrary. raise if needed.
    bytes = max_possible * 32
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    names = array.array('B', '\0' * bytes)
    outbytes = struct.unpack('iL', fcntl.ioctl(
        s.fileno(),
        0x8912,  # SIOCGIFCONF
        struct.pack('iL', bytes, names.buffer_info()[0])
    ))[0]
    namestr = names.tostring()
    lst = {}
    for i in range(0, outbytes, 40):
        name = namestr[i:i+16].split('\0', 1)[0]
        ip = namestr[i+20:i+24]
        friendly = ""
        if name == "lo":
            friendly = "localhost"
        if name.startswith("eth"):
            friendly = "Wired connection %s" % (name.replace("eth", ""))
        if name.startswith("wlan"):
            friendly = "Wifi connection %s" % (name.replace("wlan", ""))
        lst[name] = {"friendly": friendly, "action_timer": None,
                     "current_real_value": 0, "toggled_by_code": False}
    return lst

class App(object):
    def __init__(self):
        win = Gtk.Window()
        win.set_size_request(300, 200)
        win.connect("destroy", Gtk.main_quit)
        win.set_title("Network Throttle")
        self.ifs = all_interfaces()
        tbl = Gtk.Table(rows=len(self.ifs.keys())+1, columns=4)
        tbl.set_row_spacings(3)
        tbl.set_col_spacings(10)
        tbl.attach(Gtk.Label("Throttled"), 2, 3, 0, 1)
        delay_label = Gtk.Label("Delay")
        delay_label.set_size_request(150, 40)
        tbl.attach(delay_label, 3, 4, 0, 1)
        row = 1
        for k, v in self.ifs.items():
            tbl.attach(Gtk.Label(k), 0, 1, row, row+1)
            tbl.attach(Gtk.Label(v["friendly"]), 1, 2, row, row+1)
            tb = Gtk.CheckButton()
            tbl.attach(tb, 2, 3, row, row+1)
            tb.connect("toggled", self.toggle_button, k)
            self.ifs[k]["checkbox"] = tb
            sl = Gtk.HScale()
            sl.set_draw_value(True)
            sl.set_increments(20, 100)
            sl.set_range(20, 980)
            sl.connect("value_changed", self.value_changed, k)
            sl.set_sensitive(False)
            tbl.attach(sl, 3, 4, row, row+1)
            self.ifs[k]["slider"] = sl
            row += 1
        box = Gtk.Box(spacing=6)
        box.pack_start(tbl, True, True, 6)
        win.add(box)
        win.show_all()
        self.get_tc()

    def toggle_button(self, button, interface):
        self.ifs[interface]["slider"].set_sensitive(button.get_active())
        if self.ifs[interface]["toggled_by_code"]:
            print "ignoring toggle button because it was toggled by code, not user"
            self.ifs[interface]["toggled_by_code"] = False
            return
        print "toggled to", button.get_active()
        if button.get_active():
            self.turn_on_throttling(interface)
        else:
            self.turn_off_throttling(interface)

    def value_changed(self, slider, interface):
        print "value_changed", slider.get_value()
        if slider.get_value() == self.ifs[interface]["current_real_value"]:
            print "Not setting if because it already is that value"
            return
        self.turn_on_throttling(interface)

    def get_tc(self):
        print "getting tc"
        self.throttled_ifs = {}
        def get_tc_output(io, condition):
            print "got tc output", condition
            line = io.readline()
            print "got tc line", line
            parts = line.split()
            if len(parts) > 2 and parts[0] == "qdisc" and parts[1] == "netem":
                if len(parts) == 12:
                    self.throttled_ifs[parts[4]] = {"delay": parts[11].replace("ms", "")}
            if condition == GLib.IO_IN:
                return True
            elif condition == GLib.IO_HUP|GLib.IO_IN:
                GLib.source_remove(self.source_id)
                print "throttled IFs are", self.throttled_ifs
                self.update_throttled_list(self.throttled_ifs)
                return False
        pid, stdin, stdout, stderr = GLib.spawn_async(
            ["tc", "qdisc"], flags=GLib.SpawnFlags.SEARCH_PATH,
            standard_output=True)
        io = GLib.IOChannel(stdout)
        self.source_id = io.add_watch(GLib.IO_IN|GLib.IO_HUP, get_tc_output,
                                      priority=GLib.PRIORITY_HIGH)
        pid.close()

    def actually_turn_on_throttling(self, interface, value):
        print "actually throttling", interface, "to", value
        self.ifs[interface]["action_timer"] = None
        cmd = "pkexec tc qdisc replace dev %s root handle 1:0 netem delay %smsec" % (
            interface, int(value),)
        print cmd
        os.system(cmd)

    def turn_on_throttling(self, interface):
        val = self.ifs[interface]["slider"].get_value()
        if self.ifs[interface]["action_timer"] is not None:
            print "aborting previous throttle request for", interface
            GLib.source_remove(self.ifs[interface]["action_timer"])
        print "throttling", interface, "to", val
        source_id = GLib.timeout_add_seconds(
            1, self.actually_turn_on_throttling, interface, val)
        self.ifs[interface]["action_timer"] = source_id

    def actually_turn_off_throttling(self, interface):
        print "actually unthrottling", interface
        self.ifs[interface]["action_timer"] = None
        cmd = "pkexec tc qdisc del dev %s root" % (interface,)
        print cmd
        os.system(cmd)

    def turn_off_throttling(self, interface):
        if self.ifs[interface]["action_timer"] is not None:
            print "aborting previous throttle request for", interface
            GLib.source_remove(self.ifs[interface]["action_timer"])
        print "unthrottling", interface
        source_id = GLib.timeout_add_seconds(
            1, self.actually_turn_off_throttling, interface)
        self.ifs[interface]["action_timer"] = source_id

    def update_throttled_list(self, throttled_ifs):
        for k, v in self.ifs.items():
            if k in throttled_ifs:
                current = v["checkbox"].get_active()
                if not current:
                    self.ifs[k]["toggled_by_code"] = True
                    v["checkbox"].set_active(True)
                delay = float(throttled_ifs[k]["delay"])
                self.ifs[k]["current_real_value"] = delay
                v["slider"].set_value(delay)
            else:
                current = v["checkbox"].get_active()
                if current:
                    v["checkbox"].set_active(False)

if __name__ == "__main__":
    if not which("pkexec"):
        print "You need pkexec installed."
        sys.exit(1)
    app = App()
    Gtk.main()

Marcin Juszkiewicz: From a diary of AArch porter – testsuites

Planet Ubuntu - Wed, 2014-06-25 07:56

More and more software comes with testsuites. But not every distribution runs them for each package (no matter whether it is Debian, Fedora or Ubuntu). Why does it matter? Let me give an example from yesterday: HDF 4.2.10.

There is a bug reported against libhdf with information that it built fine for Ubuntu. As I had issues with hdf in Fedora, I decided to take a look and found an even simpler patch than the one I wrote. I tried it and got the package built. But that’s all…

Running the testsuite is easy: “make check”. But the result was “awesome”:

!!! 31294 Error(s) were detected !!!

It does not look good, right? So I spent some time yesterday searching for architecture-related checks and found the main reason for such a big number of errors: unknown systems are treated as big endian… A simple switch there and the count dropped from 31294 to just 278.

It took me a while to find all 27 places where miscellaneous variations of “#if defined(__aarch64__)” were needed, but I finally got to the point where “make check” simply worked as it should.

So if you port software, do not assume it is fine once it builds. Run the testsuite to be sure that it also runs properly.

All rights reserved © Marcin Juszkiewicz
From a diary of AArch porter – testsuites was originally posted on Marcin Juszkiewicz website

Related posts:

  1. From a diary of AArch porter – part II
  2. I miss Debian tools
  3. AArch64 is in the house

Ubuntu Kernel Team: Kernel Team Meeting Minutes – June 24, 2014

Planet Ubuntu - Tue, 2014-06-24 17:11
Meeting Minutes

IRC Log of the meeting.

Meeting minutes.

Agenda

20140624 Meeting Agenda


Release Metrics and Incoming Bugs

Release metrics and incoming bug data can be reviewed at the following link:

  • http://people.canonical.com/~kernel/reports/kt-meeting.txt


Status: Utopic Development Kernel

We have rebased our Utopic kernel “unstable” branch to v3.16-rc2. We
are preparing initial packages and performing some test builds and DKMS
package testing. I do not anticipate a v3.16 based upload until it has
undergone some additional widespread baking and testing.
-----
Important upcoming dates:
Thurs Jun 26 – Alpha 1 (2 days away)
Fri Jun 27 – Kernel Freeze for 12.04.5 and 14.04.1 (3 days away)
Thurs Jul 31 – Alpha 2 (~5 weeks away)


Status: CVE’s

The current CVE status can be reviewed at the following link:

http://people.canonical.com/~kernel/cve/pkg/ALL-linux.html


Status: Stable, Security, and Bugfix Kernel Updates – Trusty/Saucy/Precise/Lucid

Status for the main kernels, until today (Jun. 24):

  • Lucid – Testing
  • Precise – Testing
  • Saucy – Testing
  • Trusty – Testing

    Current opened tracking bugs details:

  • http://people.canonical.com/~kernel/reports/kernel-sru-workflow.html

    For SRUs, SRU report is a good source of information:

  • http://people.canonical.com/~kernel/reports/sru-report.html

    Schedule:

    cycle: 08-Jun through 28-Jun
    ====================================================================
    06-Jun Last day for kernel commits for this cycle
    08-Jun – 14-Jun Kernel prep week.
    15-Jun – 21-Jun Bug verification & Regression testing.
    22-Jun – 28-Jun Regression testing & Release to -updates.

    14.04.1 cycle: 29-Jun through 07-Aug
    ====================================================================
    27-Jun Last day for kernel commits for this cycle
    29-Jun – 05-Jul Kernel prep week.
    06-Jul – 12-Jul Bug verification & Regression testing.
    13-Jul – 19-Jul Regression testing & Release to -updates.
    20-Jul – 24-Jul Release prep
    24-Jul 14.04.1 Release [1]
    07-Aug 12.04.5 Release [2]

    [1] This will be the very last kernels for lts-backport-quantal, lts-backport-raring,
    and lts-backport-saucy.

    [2] This will be the lts-backport-trusty kernel as the default in the precise point
    release iso.


Open Discussion or Questions? Raise your hand to be recognized

No open discussions.

Jono Bacon: The Return of my Weekly Q&A

Planet Ubuntu - Tue, 2014-06-24 05:09

As many of you will know, I used to do a weekly Q&A on Ubuntu On Air for the Ubuntu community where anyone could come and ask any question about anything.

I am pleased to announce my weekly Q&A is coming back but in a new time and place. Now it will be every Thursday at 6pm UTC (6pm UK, 7pm Europe, 11am Pacific, 2pm Eastern), starting this week.

You can join each weekly session at http://www.jonobacon.org/live/

You are welcome to ask questions about:

  • Community management, leadership, and how to build fun and productive communities.
  • XPRIZE, our work there, and how we solve the world’s grand challenges.
  • My take on Ubuntu from the perspective of an independent community member.
  • My views on technology, Open Source, news, politics, or anything else.

As ever, all questions are welcome! I hope to see you there!

The Fridge: Ubuntu Weekly Newsletter Issue 373

Planet Ubuntu - Tue, 2014-06-24 02:35

Sam Hewitt: Chicken Laksa

Planet Ubuntu - Mon, 2014-06-23 19:05

Laksa is a quite popular spicy noodle soup from parts of China, Singapore, Malaysia & Indonesia; as such it has many variations and regional differences. You can get more info from Wikipedia.

The recipe I'm sharing is for a chicken laksa, but it can be varied or added to, such as by using fish or shrimp instead. The addition of vegetables such as bok choy or shredded carrot wouldn't go astray either.

Chicken Laksa Recipe
    Ingredients


  • 2 tablespoons peanut oil (or sunflower oil)
  • 1 kg chicken thigh fillets, sliced
  • 200g laksa paste (if not purchased, a recipe follows)
  • 3 cups (700ml) chicken stock
  • 1 can (~400ml) coconut milk/cream
  • 1 lime, juice of
  • 2 kaffir lime leaves, very finely shredded (if not available add more lime juice)
  • 2 tablespoons crushed rock sugar (or light brown sugar)
  • 2 tablespoons fish sauce
  • 800g (a box) dried vermicelli rice noodles (or Udon noodles)
  • salt
  • Garnishes
  • Fried Asian shallots (see note) and sliced red chile, to garnish
  • 2 cups (160g) bean sprouts, trimmed
  • 1/2 cup Thai basil leaves (you can substitute regular basil)
  • 1/2 cup cilantro leaves
  • 1/2 cup green onion, sliced on a bias
  • 1-2 chilies, sliced thinly
  • 4 eggs, hardboiled
  • lime slices
    Directions
  1. In a wok, stir fry the chicken slices, in batches, until golden brown (3-4 minutes). Remove from wok and set aside.
  2. Add the laksa paste and stir fry until fragrant. Transfer to a large pot.
  3. Return the chicken, add the coconut milk, chicken stock, kaffir lime leaves (if using) & the lime juice.
  4. Bring the broth to a boil, reduce heat & simmer for 10 minutes to cook the chicken.
  5. Stir in the sugar, fish sauce and season to taste.
  6. Cook noodles according to directions.
  7. Serve broth and cooked chicken over cooked noodles and garnish with slices of boiled egg, bean sprouts, Thai basil & cilantro leaves, sliced green onion and sliced chiles.
Laksa Paste Recipe
    Ingredients

    Makes about 300ml

  • 6 dried long red chiles (or a couple tablespoons dried chile flakes)
  • 1 teaspoon ground coriander
  • 1/2 teaspoon ground cumin
  • 1/2 teaspoon ground turmeric
  • 1/2 teaspoon sweet paprika
  • 1 3-cm piece galangal (or ginger), peeled & chopped
  • 1 onion, chopped
  • 2 garlic cloves, chopped
  • 2 stalks lemongrass, white part only, chopped
  • several cashew nuts
  • 2 teaspoons shrimp paste (or dried shrimp)
  • 1 tablespoon peanut oil
    Directions
  1. Place the dried chiles in a small heatproof bowl (and the dried shrimp, if using, in another) and cover with boiled water. Let stand for ~10 minutes.
  2. In a small non-stick skillet toast the coriander, cumin, turmeric & paprika over med-high heat until fragrant, 1-2 minutes.
  3. In a food processor, pulse to coarsely chop the ginger/galangal, onion, lemongrass, garlic & cashews.
  4. Drain and add the reconstituted chiles (and dried shrimp, if using) and the shrimp paste (if using).
  5. Turn the food processor on and pour in the peanut oil, blending until a smooth paste forms.
  6. This laksa paste can be kept refrigerated for a couple of weeks.

Sam Hewitt: Making Udon Noodles

Planet Ubuntu - Mon, 2014-06-23 19:00

Asian cuisines are among my favourites, particularly broth-y noodle dishes like Laksa or Pho. So I took it upon myself to learn how to make udon noodles.

The dough, I was surprised to discover, is a very basic wheat flour dough; however, the footwork (you read that correctly) required may deter some. Given the dough's density, it requires quite a bit of force to knead, but it's well worth it, in my opinion.

Udon Dough Recipe
    Ingredients
  • 2 cups white all-purpose flour
  • 1/2 cup warm water
  • 2 teaspoons kosher salt
    Directions
  1. Dissolve the salt in the warm water.
  2. In a large bowl, combine the salt water solution and the flour.
  3. Knead with your hands until it is a lumpy mass & transfer it to a clean surface.
  4. Continue to knead and shape it into a ball; this'll take about 10 minutes.
  5. Let the dough-ball rest for a few minutes.
  6. The traditional method is to knead the dough with your feet, using your body weight to help flatten it.
  7. Shape it into more-or-less a rectangle & place it in a large zip-seal bag or between two sheets of plastic wrap and wrap in a towel.
  8. Using your feet, flatten it out until it's ~1 cm thick, then fold in half and knead it out again.
  9. Repeat this another 3 or 4 times, folding the dough over in the same direction each time; this smooths out the dough.
  10. Hey, how often do you get to walk on your food?
  11. After the final folding/foot massage, set the dough aside (still wrapped up) for 3+ hours.
  12. When ready, unwrap the dough onto a clean, lightly-floured surface.
  13. Using a pasta machine (for ease), roll to desired thinness and cut into the noodle shape you like.
  14. Cook in boiling water or directly in a soup broth until tender.

Tony Whitmore: Tom Baker at 80

Planet Ubuntu - Mon, 2014-06-23 17:31

Back in March I photographed the legendary Tom Baker at the Big Finish studios in Kent. The occasion was the recording of a special extended interview with Tom, to mark his 80th birthday. The interview was conducted by Nicholas Briggs, and the recording is being released on CD and download by Big Finish.

I got to listen in on the end of the recording session, and it was full of Tom’s own unique form of inventive story-telling, as well as moments of reflection. I got to photograph Tom on his own using a portable studio setup, as well as with Nick and some other special guests. All in about 7 minutes! The cover has been released now, and I think it looks pretty good.

The CD is available for pre-order from the Big Finish website now. Pre-orders will be signed by Tom, so buy now!


Dustin Kirkland: The Yo Charm. It's that simple.

Planet Ubuntu - Mon, 2014-06-23 14:44
It's that simple.

It was about 4pm on Friday afternoon, when I had just about wrapped up everything I absolutely needed to do for the day, when I decided to kick back and have a little fun with the remainder of my work day.

It's now 4:37pm on Friday, and I'm done.

Done with what?  The Yo charm, of course!

The Internet has been abuzz this week about how the Yo app received a whopping $1 million in venture funding.  (Forbes notes that this is a pretty surefire indication that there's another internet bubble about to burst...)

It's little more than the first program any kid writes -- hello world!

Subsequently I realized that we don't really have a "hello world" charm.  And so here it is, yo.

$ juju deploy yo

Deploying a webpage that says "Yo" is hardly the point, of course.  Rather, this is a fantastic way to see the absolute simplest form of a Juju charm.  Grab the source, and explore it yourself.

$ charm-get yo
$ tree yo
├── config.yaml
├── copyright
├── hooks
│   ├── config-changed
│   ├── install
│   ├── start
│   ├── stop
│   ├── upgrade-charm
│   └── website-relation-joined
├── icon.svg
├── metadata.yaml
└── README.md
1 directory, 11 files


  • The config.yaml lets you set and dynamically change the service itself (the color and size of the font that renders "Yo").
  • The copyright is simply boilerplate GPLv3.
  • The icon.svg is just a vector graphics "Yo."
  • The metadata.yaml explains what this charm is and how it can relate to other charms.
  • The README.md is a simple getting-started document.
  • And the hooks...
    • config-changed is the script that runs when you change the configuration -- basically, it uses sed to inline edit the index.html Yo webpage.
    • install simply installs apache2 and overwrites /var/www/index.html.
    • start and stop simply start and stop the apache2 service.
    • upgrade-charm is currently a no-op.
    • website-relation-joined sets and exports the hostname and port of this system.
The website relation is very important here...  Declaring and defining this relation instantly lets me relate this charm with dozens of other services.  As you can see in the screenshot at the top of this post, I was able to easily relate the varnish website accelerator in front of the Yo charm.
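As a rough illustration of the sed-based approach the config-changed hook takes, here is a hedged sketch (the placeholder tokens, values, and file name are invented for this example, not taken from the actual charm, which reads its values via config-get):

```shell
# Write a tiny page containing placeholder tokens, then substitute the
# "configuration" values into it in place, as a config-changed hook might.
echo '<p style="color: @COLOR@; font-size: @SIZE@px;">Yo</p>' > index.html
sed -i -e 's/@COLOR@/purple/' -e 's/@SIZE@/144/' index.html
cat index.html
```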
Hopefully this simple little example might help you examine the anatomy of a charm for the first time, and perhaps write your own first charm!
Cheers,
Dustin

Benjamin Kerensa: Mozilla at Open Source Bridge

Planet Ubuntu - Sun, 2014-06-22 23:49

Ben Kero, Firefox OS Talk at OSBridge 2013

This week Open Source Bridge will kick off in Portland and I’m extremely excited that Mozilla will once again be sponsoring this wonderful event. This will also mark my second year attending.

To me, Open Source Bridge is the kind of conference that has a lot of great content while also having a small feel to it, one where you feel like you can dive in, do some networking, and attend many of the talks.

This year, like previous years, Mozilla will have a number of speakers and attendees at Open Source Bridge and we will be giving out some swag in the Hacker Lounge throughout the week and chatting with people about Firefox OS and other Mozilla Projects.

If you are a Mozillian in town for AdaCamp or Open Source Bridge, be sure to stop by the Portland MozSpace and say hello.

Be sure to catch one of these awesome talks being given by Mozillians:

Explicit Invitations: Passion is Not Enough for True Diversity – Lukas Blakk

Making language selection smarter in Wikipedia – Sucheta Ghosal

The Outreach Program for Women: what works & what’s next – Liz Henry

The joy of volunteering with open technology and culture – Netha Hussain

Making your mobile web app accessible – Eitan Isaacson

Modern Home Automation – Ben Kero

Nest + Pellet Stove + Yurt – Lars John

When Firefox Faceplants – what the fox says and who is listening – Lars John

From navel gazing to ass kicking: Building leadership in the journalism code community – Erika Owens

Badging and Beyond: Rubrics and Building a Culture of Recognition as Community Building Strategies – Larissa Shapiro

 

Paul Tagliamonte: Adventures in AsyncIO: Moxie

Planet Ubuntu - Sun, 2014-06-22 17:49

This week, I started work on something I’m calling moxie. Due to wanting to use my aiodocker bindings on the backend, I decided to implement it in 100% AsyncIO Python 3.4.

What pushed me over the edge was finding the aiopg driver (postgres asyncio bindings), with very (let me stress - very) immature SQLAlchemy support.

Unfortunately, no web frameworks support asyncio as a first-class member of the framework, so I was forced into writing a microframework. The resulting “app” looks pretty not bad, and likely easy to switch if Flask ever gets support for asyncio.

One neat side-effect was that the framework can support stuff like websockets as a first-class element of the framework, just like GET requests.

Moxie will be a tool to run periodic long-running jobs in a sane way using docker.io.

More soon!

Valorie Zimmerman: Metalinks, an excellent fast way to download KDE files

Planet Ubuntu - Sun, 2014-06-22 06:00
My G+ repost of Harald's announcement of the Plasma Next ISOs got some complaints about slow downloads. When I checked with the KDE sysadmins, I got some great information.

First, torrents aren't available, since 1) they require dedicated tracker software, which isn't needed because 2) KDE doesn't distribute many large files.

However, files available at http://files.kde.org/snapshots/ and http://download.kde.org have a Details tab, where metalinks and mirrors are listed. I knew nothing about metalinks, but we could all benefit from them when downloading large files.

PovAddict (Nicolas Alvarez) told me that he uses the command line for them: `aria2c http://files.kde.org/snapshots/neon5-201406200837.iso.metalink`, for instance. I had to install the aria2 package for this to work, and the file took less than 15 minutes to download.

I read man wget and it seems not to support metalinks; at least I didn't find a reference.

Bshah (Bhushan Shah) tried with kget and says it works very well. He said: New Download > Paste metalink > it will ask which files to download.

He also found the nice Wikipedia page for me: http://en.wikipedia.org/wiki/Metalink

Thanks to bcooksley, PovAddict and bshah for their help.

PS: Bcooksley adds that the .mirrorlist URL is generally what we recommend people use anyway. So even if you don't point to the metalink, please use the .mirrorlist URL when posting a file hosted at download.kde.org or files.kde.org. If people forget to do that, click that Details link to get to the hashes, lists of mirrors, and metalink files.

Eric Hammond: EBS-SSD Boot AMIs For Ubuntu On Amazon EC2

Planet Ubuntu - Sat, 2014-06-21 19:24

With Amazon’s announcement that SSD is now available for EBS volumes, they have also declared this the recommended EBS volume type.

The good folks at Canonical are now building Ubuntu AMIs with EBS-SSD boot volumes. In my preliminary tests, running EBS-SSD boot AMIs instead of EBS magnetic boot AMIs speeds up the instance boot time by approximately… a lot.

Canonical now publishes a wide variety of Ubuntu AMIs including:

  • 64-bit, 32-bit
  • EBS-SSD, EBS-SSD pIOPS, EBS-magnetic, instance-store
  • PV, HVM
  • in every EC2 region
  • for every active Ubuntu release

Matrix that out for reasonable combinations and you get 561 AMIs actively supported today.

On the Alestic.com blog, I provide a handy reference to the much smaller set of Ubuntu AMIs that match my generally recommended configurations for most popular uses, specifically:

I list AMIs for both PV and HVM, because different virtualization technologies are required for different EC2 instance types.

Where SSD is not available, I list the magnetic EBS boot AMI (e.g., Ubuntu 10.04 Lucid).

To access this list of recommended AMIs, select an EC2 region in the pulldown menu towards the top right of any page on Alestic.com.

If you like using the AWS console to launch instances, click on the orange launch button to the right of the AMI id.

The AMI ids are automatically updated using an API provided by Canonical, so you always get the freshest released images.

Original article: http://alestic.com/2014/06/ec2-ebs-ssd-ami

Colin King: stress-ng: an updated system stress test tool

Planet Ubuntu - Sat, 2014-06-21 14:07
Recently added to Ubuntu 14.10 is stress-ng, a simple tool designed to stress various components of a Linux system. stress-ng is a re-implementation of the original stress tool written by Amos Waterland and adds various new ways to exercise a computer, as well as a very simple "bogo-operation" set of metrics for each stress method.

stress-ng currently contains the following methods to exercise the machine:
  • CPU compute - just lots of sqrt() operations on pseudo-random values. One can also specify the % loading of the CPUs
  • Cache thrashing, a naive cache read/write exerciser
  • Drive stress by writing and removing many temporary files
  • Process creation and termination, just lots of fork() + exit() calls
  • I/O syncs, just forcing lots of sync() calls
  • VM stress via mmap(), memory write and munmap()
  • Pipe I/O, large pipe writes and reads that exercise pipe, copying and context switching
  • Socket stressing, much like the pipe I/O test but using sockets
  • Context switching between a pair of producer and consumer processes
Many of the above stress methods have additional configuration options.  Each stress method can be run by one or more child processes.

The --metrics option dumps the number of operations performed by each stress method, aka "bogo ops" — bogos because they are a rough and unscientific metric.  One can specify how long to run a test either by test duration in seconds or by a bogo-op count.
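As a concrete illustration, here is a hedged example run using the stress-compatible flags described above; check the stress-ng manual page for the exact options your installed version supports:

```shell
#!/bin/sh
# Sketch: spin up 4 CPU workers and 2 VM workers for 60 seconds,
# then print the per-method "bogo ops" counts at exit.
# Flag names are assumed from the stress-compatible interface.
if command -v stress-ng >/dev/null 2>&1; then
    stress-ng --cpu 4 --vm 2 --timeout 60s --metrics
else
    echo "stress-ng not installed; try: sudo apt-get install stress-ng"
fi
```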

I've tried to make stress-ng compatible with the older stress tool, but note that it is not guaranteed to produce identical results as the common test methods between the two tools have been implemented differently.

stress-ng has been useful for helping me measure different power-consuming loads.  It has also been handy for testing various thermald optimisation tweaks on one of my older machines.

For more information, consult the stress-ng manual page.  Be warned, this tool can make your system get seriously busy and warm!

José Antonio Rey: Juju: Multiple environments with just one account

Planet Ubuntu - Sat, 2014-06-21 01:41

Today, I was checking some charms as usual, and found myself with a problem. I wanted separate environments for automated testing and manual code testing, but I only had one AWS account. I thought I needed an account in another cloud, or a second AWS account, but after thinking for a while I decided it wasn’t worth it, leaving those thoughts in the past. But suddenly I asked myself if it was possible to just clone the information in my environments.yaml file and set up another environment with the same credentials. Indeed, it was.

The only thing I did here was:

  • Open my environments.yaml file.
  • Copy the exact same information I had for my old EC2 environment.
  • Give a new name to the environment I was creating.
  • Change the name of the storage bucket (as it has to be unique).
  • Save the changes, close the file, and bootstrap the new environment.
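To make the steps above concrete, here is a hedged sketch of what the resulting environments.yaml might look like. The environment names, credential values, and bucket names are illustrative placeholders; keep whatever your original EC2 entry already uses:

```yaml
environments:
  amazon:                                # original environment
    type: ec2
    access-key: YOUR-ACCESS-KEY          # placeholder
    secret-key: YOUR-SECRET-KEY          # placeholder
    control-bucket: juju-amazon-a1b2c3   # placeholder
  amazon-testing:                        # cloned environment, same credentials
    type: ec2
    access-key: YOUR-ACCESS-KEY
    secret-key: YOUR-SECRET-KEY
    control-bucket: juju-testing-d4e5f6  # must differ: bucket names are unique
```

With Juju's -e flag you can then pick which environment a command targets, e.g. juju bootstrap -e amazon-testing.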

Easy enough, right? That way you can just have multiple environments and execute different things on each one with just one account. I am not sure how this will work for other providers, but at least for AWS it works this way. This just adds more awesome-ness to Juju than it already has. Now, let’s play with these environments!


The Fridge: Ubuntu 13.10 (Saucy Salamander) reaches End of Life on July 17 2014

Planet Ubuntu - Fri, 2014-06-20 20:17

Ubuntu announced its 13.10 (Saucy Salamander) release almost 9 months ago, on October 17, 2013. This was the second release with our new 9 month support cycle and, as such, the support period is now nearing its end and Ubuntu 13.10 will reach end of life on Thursday, July 17th. At that time, Ubuntu Security Notices will no longer include information or updated packages for Ubuntu 13.10.

The supported upgrade path from Ubuntu 13.10 is via Ubuntu 14.04 LTS. Instructions and caveats for the upgrade may be found at:

https://help.ubuntu.com/community/TrustyUpgrades

Ubuntu 14.04 LTS continues to be actively supported with security updates and select high-impact bug fixes. Announcements of security updates for Ubuntu releases are sent to the ubuntu-security-announce mailing list, information about which may be found at:

https://lists.ubuntu.com/mailman/listinfo/ubuntu-security-announce

Since its launch in October 2004 Ubuntu has become one of the most highly regarded Linux distributions with millions of users in homes, schools, businesses and governments around the world. Ubuntu is Open Source software, costs nothing to download, and users are free to customise or alter their software in order to meet their needs.

Originally posted to the ubuntu-announce mailing list on Fri Jun 20 05:00:13 UTC 2014 by Adam Conrad on behalf of the Ubuntu Release Team

Marco Ceppi: Deploying OpenStack with just two machines. The MaaS and Juju way.

Planet Ubuntu - Fri, 2014-06-20 18:46

A lot of people have been asking lately about the minimum number of nodes required to set up OpenStack, and there seems to be a lot of buzz around setting up OpenStack with Juju and MAAS. Some would speculate it has something to do with the amazing keynote presentation by Mark Shuttleworth; others would concede it’s just because charms are so damn cool. Whatever the reason, my answer is as follows:

You really want 12 nodes to do OpenStack right, even more for high availability, but at a bare minimum you only need two nodes.

So, naturally, as more people dive in to OpenStack and evaluate how they can use it in their organizations, they jump at the thought “Oh, I have two servers lying around!” and immediately want to know how to achieve such a feat with Juju and MAAS. So, I took an evening to do such a thing with my small cluster and share the process.

This post makes a few assumptions. First, that you have already set up MAAS, installed Juju, and configured Juju to speak to your MAAS environment. Second, that the two-machine allotment means two nodes available after setting up MAAS, and that these two nodes are already enlisted in MAAS.

My setup

Before I dive much deeper, let me briefly show my setup.

I realize the photo is terrible; the Nexus 4 just doesn’t have a super stellar camera compared to other phones on the market. For the purposes of this demo I’m using my home MAAS cluster, which consists of three Intel NUCs, a gigabit switch, a switched PDU, and an old Dell Optiplex with an extra NIC which acts as the MAAS region controller. All the NUCs have been enlisted in MAAS and commissioned already.

Diving in

Once MAAS and Juju are configured you can go ahead and run juju bootstrap. This will provision one of the MAAS nodes and use it as the orchestration node for your Juju environment. This can take some time, especially if you don’t have the fastpath installer selected; if you get a timeout during your first bootstrap, don’t fret! You can increase the bootstrap timeout in the environments.yaml file with the following directive in your maas definition: bootstrap-timeout: 900. During the video I increase this timeout to 900 seconds in the hopes of eliminating this issue.
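For reference, a hedged sketch of a maas definition carrying that directive; the server address and OAuth key are placeholders for your own MAAS region controller's values:

```yaml
environments:
  maas:
    type: maas
    maas-server: 'http://192.168.1.10/MAAS/'  # placeholder: your region controller
    maas-oauth: 'MAAS-API-KEY'                # placeholder: from the MAAS UI
    bootstrap-timeout: 900                    # seconds to wait for bootstrap
```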

After you’ve bootstrapped it’s time to get deploying! If you care to use the Juju GUI now would be the time to deploy it. You can do so with by running the following command:

juju deploy --to 0 juju-gui

To avoid having juju spin us up another machine we can tell Juju to simply place it on machine 0.

NOTE: the --to flag is crazy dangerous. Not all services can be safely co-located with each other. This is tantamount to “hulk smashing” services and will likely break things. The Juju GUI is designed to co-exist with the bootstrap node, so this is safe. Running this elsewhere will likely result in bad things. You have been warned.

Now it’s time to get OpenStack going! Run the following commands:

juju deploy --to lxc:0 mysql
juju deploy --to lxc:0 keystone
juju deploy --to lxc:0 nova-cloud-controller
juju deploy --to lxc:0 glance
juju deploy --to lxc:0 rabbitmq-server
juju deploy --to lxc:0 openstack-dashboard
juju deploy --to lxc:0 cinder

To break this down, what you’re doing is deploying the minimum number of components required to support OpenStack, only you’re deploying them to machine 0 (the bootstrap node) in LXC containers. If you don’t know what LXC containers are, they are very lightweight Linux containers (virtual machines) that don’t produce a lot of overhead but allow you to safely compartmentalize these services. So, after a few minutes these machines will begin to pop online, but in the meantime we can press on, because Juju waits for nothing!

The next step is to deploy the nova-compute node. This is the powerhouse behind OpenStack and is the hypervisor for launching instances. As such, we don’t really want to virtualize it, as KVM (or Xen, etc.) doesn’t work well inside of LXC containers.

juju deploy nova-compute

That’s it. MAAS will allocate the second, and final, node (if you only have two) to nova-compute. Now, while all these machines are popping up and becoming ready, let’s create relations. The magic of Juju and what it can do is in creating relations between services; it’s what turns a bunch of scripts into LEGOs for the cloud. You’ll need to run the following commands to create all the relations necessary for the OpenStack components to talk to each other:

juju add-relation mysql keystone
juju add-relation nova-cloud-controller mysql
juju add-relation nova-cloud-controller rabbitmq-server
juju add-relation nova-cloud-controller glance
juju add-relation nova-cloud-controller keystone
juju add-relation nova-compute nova-cloud-controller
juju add-relation nova-compute mysql
juju add-relation nova-compute rabbitmq-server:amqp
juju add-relation nova-compute glance
juju add-relation glance mysql
juju add-relation glance keystone
juju add-relation glance cinder
juju add-relation mysql cinder
juju add-relation cinder rabbitmq-server
juju add-relation cinder nova-cloud-controller
juju add-relation cinder keystone
juju add-relation openstack-dashboard keystone

Whew, I know that’s a lot to go through, but OpenStack isn’t a walk in the park. It’s a pretty intricate system with lots of dependencies. The good news is we’re nearly done! No doubt most of the nodes have turned green in the GUI or are marked as “started” in the output of juju status.

One of the last things is configuration for the cloud. Since this is all working against Trusty, we have the latest OpenStack being installed. All that’s left is to configure our admin password in keystone so we can log in to the dashboard.

juju set keystone admin-password="helloworld"

Set the password to whatever you’d like. Once complete, run juju status openstack-dashboard, find the public-address for that unit, load its address in your browser, and navigate to /horizon. (For example, if the public-address was 10.0.1.2 you would go to http://10.0.1.2/horizon). Log in with the username admin and the password you set on the command line. You should now be in the Horizon dashboard for OpenStack. Click on Admin -> System Panel -> Hypervisors and confirm you have a hypervisor listed.
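If you want to script that last lookup, one approach is to pull the public-address out of the juju status output. The sketch below uses canned sample output (mimicking the YAML layout juju status prints) so the extraction logic is clear; in a live environment you would capture juju status openstack-dashboard instead:

```shell
#!/bin/sh
# Extract the dashboard's public-address and build the Horizon URL.
# Live usage would be: status=$(juju status openstack-dashboard)
status=$(cat <<'EOF'
services:
  openstack-dashboard:
    units:
      openstack-dashboard/0:
        agent-state: started
        public-address: 10.0.1.2
EOF
)

addr=$(printf '%s\n' "$status" | awk '/public-address:/ { print $2; exit }')
echo "http://$addr/horizon"
```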

Congratulations! You’ve created a condensed OpenStack installation.
