news aggregator

Lubuntu Blog: PCManFM 1.2.1

Planet Ubuntu - Sat, 2014-07-12 15:34
Another update of our file manager PCManFM, with tons of bug fixes and new features: fixed drag-and-drop icon behaviour, fixed icon positioning, fixed resetting the cursor in the location bar, corrected the folder popup update on loading, reordered the ‘View’ menu items, implemented drawing icons of dragged items, etc. There is also a huge update and plenty of bug fixes in the libfm libraries (1.2.1). You can use

Darcy Casselman: New Motherboard: ASUS Z97-A (and Ubuntu)

Planet Ubuntu - Sat, 2014-07-12 05:31

My old desktop was seeing random drive errors on multiple drives, including a drive I only got a few months ago. And since my motherboard was about 5 years old, I decided it was time to replace it.

I asked the KWLUG mailing list if they had any advice on picking motherboards. The consensus seems to be pretty much “it’s still a crapshoot.” But I bit the bullet and reported back:

I bought a motherboard! An ASUS Z97-A

Mostly because I wanted Intel integrated graphics and I’ve got 3 monitors it needs to drive. And I was hoping the mSATA SSD card I got to replace the one in my Dell Mini 9 (that didn’t work) would fit in the m.2 slot. It doesn’t. Oh well.

I wanted to get it all set up while I was off for Canada Day. Except Canada Computers didn’t have any of my preferred CPU options. So I’ll be waiting for that to come in via NewEgg.

I gave myself a budget of about $500 for mobo, CPU and RAM and I’ll end up going over a little bit (mostly tax and shipping), and tried to build the best machine I could for that.

One of the things I did this time that I hadn’t done before was to spec out a desktop machine at System76 and use that as a starting point. System76 is more explicit about things like chipsets for desktops than Zareason is. Which would be great, except they’re using the older H87 chipsets.

…Like the latest Ars System Guide Hot Rod. But that’s over 6 months old now. And they’re balancing their budget against having to buy a graphics card, which I don’t want to do.

I still have some unanswered questions about the Z97 chipset. It’s only been out for about a month. So who knows?

My laptop has mostly been my desktop for the last few years. But I want to knock that off because I’ve been developing back and neck problems. My desktop layout is okay ergonomically, at least better than anything I have for the laptop (including and especially my easy chair with a lapdesk, which is comfy, but kind of horrible on the neck). One of the things that’s holding me back is my desktop is 5 years old and was built cheap because I was mostly using it as a server by that point. I really want to make it something I want to use over the laptop (which is a very nice laptop). Which is why I ended up going somewhat upper-mid range.

That’s one of the nice things about building from parts, despite the lack of useful information: This is the 3rd motherboard I’ve put in this case. I replaced the PSU once a couple years ago so it’s quite sufficient to handle the new stuff. I’m keeping my old harddrives. I could keep the graphics card. I’ll need to buy an adapter for the DVD burner (and I’ve yet to decide if I’m going to do that, or buy a new SATA one or just go without). And I can keep my (frankly pretty awesome) monitors. So $500 gets me a kick-ass whole new machine.

Anyway, long story short, I still have a lot of questions about whether this was the best purchase, but I’m hopeful it’s a good one.

Aside: is Canada Computers really the only store in town that keeps desktop CPUs in stock anymore? I couldn’t get into the UW Tech Shop, but since they’re mostly iPads and crap now, I’m not optimistic. Computer XS doesn’t (at least the Waterloo one). Future Shop and Best Buy don’t. I even went into Neutron for the first time in over 15 years. Nope. Nobody.

It… didn’t go as well as I’d hoped:

So, anyway, I got the motherboard, CPU and put it all in my old case.

I booted up and all three monitors came up without any fuss, which has never happened for me. Awesome! This is great!

Then I tried to play a game.

Apparently the current snd_hda_intel ALSA driver doesn’t like H97 and Z97 chipsets. The sound was staticky, crackly and distorted.

I’ve spent more than a few hours over the last week hunting around for a fix. I installed Windows on a spare harddrive to make sure it wasn’t a hardware problem (for which I needed to spend the $20 to get a new SATA DVD drive so I could run the Windows driver disk to actually get video, networking and sound support :P). And I found this thing on the Arch Wiki which, while not fixing the problem, actually made it worse, leading me to conclude there was some sort of sound driver/PulseAudio problem.

Top tip: when trying to sort out sound driver problems for specific hardware, the best thing to do is search for the hardware product id (in my case “8ca0”). That’s how I found this:

https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1321421
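As an aside, if you don’t know the product ID off-hand, lspci will show it when run with numeric IDs enabled; the output line below is only an illustration, and the vendor:device pair in brackets is what you search for:

lspci -nn | grep -i audio
# e.g. 00:1b.0 Audio device [0403]: Intel Corporation ... [8086:8ca0]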

Hurray! The workaround works great and now I’m back in business!

So I got burned by going with the bleeding edge, and I should know better. But even though the information isn’t widely disseminated yet, there is a fix. And a workaround. I’m sure Ubuntu 14.10 will have no problem with it. It’s not as bad as the bleeding edge was years ago. If the fix had been easier to find (and I’m going to work on that), getting going with Ubuntu would have been even easier than it was with Windows.

Paul Tagliamonte: Saturday's the new Sunday

Planet Ubuntu - Sat, 2014-07-12 00:41

Hello, World!

For those of you who enforce my Sundays on me (keep doing that, thank you!), I’ll be changing my Saturdays with my Sundays.

That’s right! In this new brave world, I’ll be taking Saturdays off, not Sundays. Feel free to pester me all day on Sunday, now!

This means, as a logical result, I will not be around tomorrow, Saturday.

Much love.

Dan Chapman: One week in and Dekko has 41 users

Planet Ubuntu - Fri, 2014-07-11 12:47

For those of you who haven’t seen Dekko in the software store, it’s a native IMAP email client for Ubuntu Touch. Dekko is essentially my development/ideas branch of my work on Trojita, which in the end is intended to replace Dekko in the store.

There were a few reasons for publishing Dekko. Trojita prides itself on being standards compliant; it already has a desktop client that uses QtWidgets, supports both Qt4 & Qt5, and also has a technical-preview Harmattan QML front-end. That was great, as most of the initial work for the IMAP parts was in place, so we didn’t need to “re-invent the wheel” (for the most part, anyway). But we soon hit a point where we had surpassed what had previously been done, and the job became unwinding the intertwined style that QtWidgets UIs naturally encourage, so that we can share the same business logic between all front-ends without losing standards compliance, keep supporting both Qt4 & Qt5, and maintain Trojita’s robust quality standards.

I am still relatively new to C++, so this is one of those “in at the deep end” scenarios, and the IETF RFC specifications and Qt’s documentation have become the majority of my daily reading. Dekko was born out of the need to understand the separation (call it a learning project) and to devise a way to create common components that can be shared between all front-ends. This “learning project” resulted in a functional but limited email client, so I decided to publish it in the hope of getting as much feedback, as many bug reports and design ideas as possible, and to use that to ensure Trojita becomes a rock-solid native email client for Ubuntu.

A quick list of current features in Dekko,

  • Support for viewing plain text messages. We cannot show HTML messages, due to not being able to block network requests with QtWebKit’s custom URL scheme functionality (if you are an Oxide dev who happens to be reading this, “wink wink”). But it is great for viewing all your Launchpad mail.
  • Navigating the mailbox hierarchy. It’s not entirely obvious at first (open to new ideas here): if you see a progression arrow on a mailbox, tapping the arrow displays the nested mailboxes; tapping elsewhere shows the messages within that mailbox.
  • Composing and replying to messages. This utilizes the bottom edge, so pulling up on an opened message will set up a reply to it. One thing to note: at the moment replying basically does a “reply all” action, so you need to delete or add recipients manually until support for mailing lists and other reply modes is implemented.
  • Supports defining a single sender identity for mail submission.
  • Mark message as deleted, expunge mailbox and auto-expunge on marked for deletion options.
  • Mark all messages as read.
  • Offline, Online and Bandwidth saving mode, perfect for mobile data connections

There is a known bug with the message list view sometimes not updating properly, but it can usually be resolved by closing and reopening that mailbox.

So if you haven’t already, please give it a try, and if you have any design/implementation ideas, issues, bugs or anything else you wish to say, please report them to the Dekko project on Launchpad: https://launchpad.net/dekko.

Note: Please don’t file bugs against upstream Trojita, unless you are using a build of Trojita and not a Dekko build. 

And finally a few snaps to whet the appetite

 

Ronnie Tucker: PHP Fixes OpenSSL Flaws in New Releases

Planet Ubuntu - Fri, 2014-07-11 07:00

The PHP Group has released new versions of the popular scripting language that fix a number of bugs, including two in OpenSSL. The flaws fixed in OpenSSL don’t rise to the level of the major bugs such as Heartbleed that have popped up in the last few months. But PHP 5.5.14 and 5.4.30 both contain fixes for the two vulnerabilities, one of which is related to the way that OpenSSL handles timestamps on some certificates, and the other of which also involves timestamps, but in a different way.

Source:

http://threatpost.com/php-fixes-openssl-flaws-in-new-releases/106908

Submitted by: Dennis Fisher

 

Ubuntu Podcast from the UK LoCo: S07E15 – The One with the Thumb

Planet Ubuntu - Thu, 2014-07-10 20:02

Alan Pope, Mark Johnson, Tony Whitmore, and Laura Cowen are in Studio L for Season Seven, Episode Fifteen of the Ubuntu Podcast!

Download OGG | Download MP3

In this week’s show:-

We’ll be back next week, when we’ll be interviewing David Hermann about his MiracleCast project, and we’ll go through your feedback.

Please send your comments and suggestions to: podcast@ubuntu-uk.org
Join us on IRC in #uupc on Freenode
Leave a voicemail via phone: +44 (0) 203 298 1600, sip: podcast@sip.ubuntu-uk.org and skype: ubuntuukpodcast
Follow us on Twitter
Find our Facebook Fan Page
Follow us on Google+

Ronnie Tucker: Why did Microsoft join the Linux Foundation’s AllSeen Alliance?

Planet Ubuntu - Thu, 2014-07-10 06:59

When people think of open source they don’t usually associate Microsoft with it. But the company recently surprised many when it joined the Linux Foundation’s open source AllSeen Alliance. The AllSeen Alliance’s mission is to create a standard for device communications.

Has Microsoft changed its attitude toward open source in general or is there another reason for its uncharacteristic behavior? Computerworld speculates on what might have motivated Microsoft to join the AllSeen Alliance.

Source:

http://www.itworld.com/open-source/425651/why-did-microsoft-join-linux-foundations-allseen-alliance

Submitted by: Jim Lynch

Dustin Kirkland: Scalable, Parallel Video Transcoding on Linux

Planet Ubuntu - Thu, 2014-07-10 06:02
Transcoding video is a very resource intensive process.  It can take many minutes to process a small clip, or even hours to process a full movie.
And that's on the home video scale.  When it comes to commercial video production, it can take thousands of machines and hundreds of compute hours to render a full movie.  I had the distinct privilege some time ago to visit WETA Digital in Wellington, New Zealand, and tour the render farm that processed The Lord of the Rings trilogy, Avatar, The Hobbit, etc.  And just a few weeks ago, I visited another quite visionary, cloud-savvy digital film processing firm in Hollywood, called Digital Film Tree.
While Windows and Mac OS may be the first platforms that come to mind when you think about front-end video production, Linux is far more widely used for batch video processing, with Ubuntu, in particular, being used extensively at both WETA Digital and Digital Film Tree, among others.
There are numerous, excellent, open source video transcoding and processing tools freely available in Ubuntu, including libav-tools, ffmpeg, mencoder, and handbrake.
Surprisingly, however, none of those support parallel computing easily or out of the box.  And disappointingly, I couldn't find any MPI support readily available either.
I happened to have an Orange Box for a few days recently, so I decided to tackle the problem and develop a scalable, parallel video transcoding solution myself.  I'm delighted to share the result with you today!
While I could have worked with any of a number of tools, I settled on avconv (the successor(?) of ffmpeg), as it was the first one that I got working well on my laptop, before scaling it out to the cluster.
I designed an approach on my whiteboard, in fact quite similar to some work I did parallelizing and scaling the john-the-ripper password quality checker.

At a high level, the algorithm looks like this:
  1. Create a shared network filesystem, simultaneously readable and writable by all nodes
  2. Have the master node split the work into even sized chunks for each worker
  3. Have each worker process their segment of the video, and raise a flag when done
  4. Have the master node wait for each of the all-done flags, and then concatenate the result
And that's exactly what I implemented in a new transcode charm and transcode-cluster bundle.  It provides linear scalability and performance improvements as you add additional units to the cluster.  A transcode job that takes 24 minutes on a single node is down to 3 minutes on 8 worker nodes in the Orange Box, using Juju and MAAS against physical hardware nodes.
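As a rough sketch of step 2 above (the variable names are illustrative, not necessarily what the charm's hooks use), the master only needs the clip duration and the worker count to hand each node an even slice:

# assumes $duration_seconds and $num_workers are already known
length=$(( (duration_seconds + num_workers - 1) / num_workers ))
for node in $(seq 0 $((num_workers - 1))); do
    echo "node $node: start=$(( node * length )) length=$length"
done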


For the curious, the real magic is in the config-changed hook, which has decent inline documentation.



The trick, for anyone who might make their way into this by way of various StackExchange questions and (incorrect) answers, is in the command that splits up the original video (around line 54):

avconv -ss $start_time -i $filename -t $length -s $size -vcodec libx264 -acodec aac -bsf:v h264_mp4toannexb -f mpegts -strict experimental -y ${filename}.part${current_node}.ts
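A worker-side wrapper around that command might look like the sketch below (the flag file name and helper variables are assumptions, not necessarily what the charm's hook uses): it computes the node's own offset, transcodes its slice, and raises its all-done flag:

start_time=$(( current_node * length ))
avconv -ss $start_time -i $filename -t $length -s $size -vcodec libx264 -acodec aac -bsf:v h264_mp4toannexb -f mpegts -strict experimental -y ${filename}.part${current_node}.ts
touch ${filename}.part${current_node}.DONE   # the all-done flag the master waits for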

And the one that puts it back together (around line 72):
avconv -i concat:"$concat" -c copy -bsf:a aac_adtstoasc -y ${filename}_${size}_x264_aac.${format}
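And a master-side sketch (again with assumed names): wait for every worker's flag, then build the pipe-separated list that avconv's concat protocol expects:

concat=""
for node in $(seq 0 $((num_workers - 1))); do
    while [ ! -e ${filename}.part${node}.DONE ]; do sleep 5; done   # wait for each worker's flag
    concat="${concat}${concat:+|}${filename}.part${node}.ts"        # segments joined with '|'
done
avconv -i concat:"$concat" -c copy -bsf:a aac_adtstoasc -y ${filename}_${size}_x264_aac.${format}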

I found this post and this documentation particularly helpful in understanding and solving the problem.

In any case, once deployed, my cluster bundle looks like this: 8 units of transcoders, all connected to a shared filesystem, and performance monitoring too.


I was able to leverage the shared-fs relation provided by the nfs charm, as well as the ganglia charm to monitor the utilization of the cluster.  You can see the spikes in the cpu, disk, and network in the graphs below, during the course of a transcode job.




For my testing, I downloaded the movie Code Rush, freely available under the CC-BY-NC-SA 3.0 license.  If you haven't seen it, it's an excellent documentary about the open source software around Netscape/Mozilla/Firefox and the dotcom bubble of the late 1990s.

Oddly enough, the stock, 746MB high quality MP4 video doesn't play in Firefox, since it's an mpeg4 stream, rather than H264.  Fail.  (Yes, of course I could have used mplayer, vlc, etc., that's not the point ;-)


Perhaps one of the most useful, intriguing features of HTML5 is its support for embedding multimedia, video, and sound into webpages.  HTML5 even supports multiple video formats.  Sounds nice, right?  If only it were that simple...  As it turns out, different browsers have, and lack, support for different formats.  While there is no one format to rule them all, MP4 is supported by the majority of browsers, including the two that I use (Chromium and Firefox).  This matrix from w3schools.com illustrates the mess.
http://www.w3schools.com/html/html5_video.asp
The file format, however, is only half of the story.  The audio and video contents within the file also have to be encoded and compressed with very specific codecs, in order to work properly within the browsers.  For MP4, the video has to be encoded with H264, and the audio with AAC.
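A quick way to check what a given file actually contains is to ask avconv to print its stream info; it will complain that no output file was specified, but it still lists the container and the video and audio codecs:

avconv -i input.mp4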
Among the various brands of phones, webcams, digital cameras, etc., the output format and codecs are seriously all over the map.  If you've ever wondered what's happening when you upload a video to YouTube or Facebook, and why it takes a while before it's ready to be viewed, it's being transcoded and scaled in the background.
In any case, I find it quite useful to transcode my videos to MP4/H264/AAC format.  And for that, a scalable, parallel computing approach to video processing would be quite helpful.

During the course of the 3 minute run, I liked watching the avconv log files of all of the nodes, using Byobu and Tmux in a tiled split screen format, like this:


Also, the transcode charm installs an Apache2 webserver on each node, so you can expose the service and point a browser to any of the nodes, where you can find the input, output, and intermediary data files, as well as the logs and DONE flags.



Once the job completes, I can simply click on the output file, Code_Rush.mp4_1280x720_x264_aac.mp4, and see that it's now perfectly viewable in the browser!


In case you're curious, I have verified the same charm with a couple of other OGG, AVI, MPEG, and MOV input files, too.


Beyond transcoding the format and codecs, I have also added configuration support within the charm itself to scale the video frame size, too.  This is useful to take a larger video, and scale it down to a more appropriate size, perhaps for a phone or tablet.  Again, this resource intensive procedure perfectly benefits from additional compute units.


File format, audio/video codec, and frame size changes are hardly the extent of video transcoding workloads.  There are hundreds of options and thousands of combinations, as the manpages of avconv and mencoder attest.  All of my scripts and configurations are free software, open source.  Your contributions and extensions are certainly welcome!

In the meantime, I hope you'll take a look at this charm and consider using it, if you have the need to scale up your own video transcoding ;-)

Cheers,
Dustin

Joe Liau: Utopic Unicron[sic]

Planet Ubuntu - Thu, 2014-07-10 04:14

I was playing around with possible designs for a Utopic Unicorn t-shirt when an inevitable (and awesome) typo led me to…

SVG version here.

 

 

 

Paul Tagliamonte: Dell XPS 13

Planet Ubuntu - Thu, 2014-07-10 02:38

More hardware adventures.

I got my Dell XPS13. Amazing.

The good news: this MacBook Air lookalike is clearly an Air competitor, and slightly better in nearly every regard except for the battery.


The bad news is that the Intel wireless card needs non-free firmware (I’ll be replacing that shortly), and the touchpad’s driver isn’t fully implemented until kernel 3.16. I’m currently building a 3.14 kernel with the patch to send to the kind Debian kernel people. We’ll see if that works. Ubuntu Trusty already has the patch, but it didn’t get upstreamed. That kinda sucks.

It also shipped with UEFI disabled, defaulting to booting in ‘legacy’ mode. It shipped with Ubuntu, but I was a bit disappointed not to see Ubuntu keys on the machine.

Touchscreen works; in short: stunning. I think I found my new travel buddy. Debian unstable runs great; stable had some issues.

Ronnie Tucker: XFCE App Launcher `WHISKER MENU` sees new release

Planet Ubuntu - Wed, 2014-07-09 06:58

Whisker Menu is an application menu / launcher for Xfce that features a search function so you can easily find the application you want to launch. The menu supports browsing apps by category, adding applications to favorites, and more. The tool is used as the default Xubuntu application menu starting with the latest 14.04 release, and in Linux Mint Xfce starting with version 15 (Olivia).

The Whisker Menu PPA was recently updated to the latest 1.4.0 version, and you can use it both to upgrade to the latest version and to install the tool in (X)Ubuntu versions for which Whisker Menu isn’t available in the official repositories (supported versions: Ubuntu 14.04, 13.10 and 12.04, and the corresponding Linux Mint versions). To see what has changed since the previous release, see the changelog on its main website.
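The article doesn’t name the PPA itself, so the lines below are only a sketch of the usual PPA flow, with <whiskermenu-ppa> as a placeholder; the package name is the one Ubuntu 14.04 uses, so adjust it if yours differs:

sudo add-apt-repository ppa:<whiskermenu-ppa>
sudo apt-get update
sudo apt-get install xfce4-whiskermenu-plugin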

Source:

http://www.webupd8.org/2014/06/xfce-app-launcher-whisker-menu-sees-new.html

Submitted by: Andrew

Nicholas Skaggs: Utopic Bug Hug and Testing Day

Planet Ubuntu - Tue, 2014-07-08 19:38
The first testing day of the utopic cycle is coming this week on Thursday, July 10th. You can catch us live in a hangout on ubuntuonair.com starting at 1900 UTC. We'll be demonstrating running and testing the development release of ubuntu, reporting test results, reporting bugs, and doing triage work. We'll also be available to answer your questions and help you get started with testing.

Please join us in testing utopic and helping the next release of ubuntu become the best it can be. Hope to see everyone there!

P.S. We have a team calendar that can help you keep track of the release schedule along with this and other events. Check it out!

The Fridge: Ubuntu Online Summit dates: 4-6 Nov 2014

Planet Ubuntu - Tue, 2014-07-08 18:54

In discussions at the last Online Summit and afterwards, it became clear that we need to bring the summit dates closer to our release dates again. With the Unicorn being released on Oct 23, we decided to pick the following dates for the next Online Summit:

4th – 6th November 2014

This unfortunately won’t leave too much room for a mid-cycle UOS, as it’d get too close to Feature Freeze and other release/freeze dates. Michael Hall will start a discussion on ubuntu-devel-discuss@ about the subject of Ubuntu Online Summit soon, so we can discuss changes and start general planning. Your feedback and help are much appreciated.

If you should want to have any ad-hoc, public planning sessions before the next UOS, we’d like to remind you of Ubuntu On Air, which is a good way to get your discussion recorded and where you can very easily get people involved for the subject. Find out more info on https://wiki.ubuntu.com/OnAir

Originally posted to the community-announce mailing list on Tue Jul 8 10:42:20 UTC 2014 by Daniel Holbach

Jorge Castro: Juju 1.20 is out the door!

Planet Ubuntu - Tue, 2014-07-08 18:48

The following is a guest post from Curtis Hovey, the Juju release manager. You can find the original announcement on the Juju mailing list.

Juju 1.20.0 is released

A new stable release of Juju, juju-core 1.20.0, is now available.

Getting Juju

juju-core 1.20.0 is available for utopic and backported to earlier series in the following PPA:

  • https://launchpad.net/~juju/+archive/stable
New and Notable
  • High Availability
  • Availability Zone Placement
  • Azure Availability Sets
  • Juju debug-log Command Supports Filtering and Works with LXC
  • Constraints Support instance-type
  • The lxc-use-clone Option Makes LXC Faster for Non-Local Providers
  • Support for Multiple NICs with the Same MAC
  • MAAS Network Constraints and Deploy Argument
  • MAAS Provider Supports Placement and add-machine
  • Server-Side API Versioning
High Availability

The juju state-server (bootstrap node) can be placed into high availability mode. Juju will automatically recover when one or more of the state-servers fail. You can use the ‘ensure-availability’ command to create the additional state-servers:

juju ensure-availability

The ‘ensure-availability’ command creates 3 state servers by default, but you may use the ‘-n’ option to specify a larger number. The number of state servers must be odd. The command supports the ‘series’ and ‘constraints’ options like the ‘bootstrap’ command. You can learn more details by running ‘juju ensure-availability --help’
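For example, to run five state servers instead of the default three (the count must stay odd):

juju ensure-availability -n 5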

Availability Zone Placement

Juju supports explicit placement of machines to availability zones (AZs), and implicitly spreads units across the available zones.

When bootstrapping or adding a machine, you can specify the availability zone explicitly as a placement directive. e.g.

juju bootstrap --to zone=us-east-1b
juju add-machine zone=us-east-1c

If you don’t specify a zone explicitly, Juju will automatically and uniformly distribute units across the available zones within the region. Assuming the charm and the charm’s service are well written, you can rest assured that IaaS downtime will not affect your application. Commands you already use will ensure your services are always available. e.g.

juju deploy -n 10 <service>

When adding machines without an AZ explicitly specified, or when adding units to a service, the ec2 and openstack providers will now automatically spread instances across all available AZs in the region. The spread is based on density of instance “distribution groups”.

State servers compose a distribution group: when running ‘juju ensure-availability’, state servers will be spread across AZs. Each deployed service (e.g. mysql, redis, whatever) composes a separate distribution group; the AZ spread of one service does not affect the AZ spread of another service.

Amazon’s EC2 and OpenStack Havana-based clouds and newer are supported. This includes HP Cloud. Older versions of OpenStack are not supported.

Azure availability sets

Azure environments can be configured to use availability sets. This feature ensures services are distributed for high availability; as long as at least two units are deployed, Azure guarantees 99.95% availability of the service overall. Exposed ports will be automatically load balanced across all units within the service.

New Azure environments will have support for availability sets by default. To revert to the previous behaviour, the ‘availability-sets-enabled’ option must be set in environments.yaml like so:

availability-sets-enabled: false

Placement is disabled when ‘availability-sets-enabled’ is true. The option cannot be disabled after the environment is bootstrapped.

Juju debug-log Command Supports Filtering and Works with LXC

The ‘debug-log’ command shows the consolidated logs of all juju agents running on all machines in the environment. The command operates like ‘tail -f’ to stream the logs to your terminal. The feature now supports local-provider LXC environments. Several options are available to select which log lines to display.

The ‘lines’ and ‘limit’ options allow you to select the starting log line and how many additional lines to display. The default behaviour is to show the last 10 lines of the log. The ‘lines’ option selects the starting line from the end of the log. The ‘limit’ option restricts the number of lines to show. For example, you can see just 20 lines from last 100 lines of the log like this:

juju debug-log --lines 100 --limit 20

There are many ways to filter the juju log to see just the pertinent information. A juju log line is written in this format:

<entity> <timestamp> <log-level> <module>:<line-no> <message>
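A made-up line in that format, purely for illustration, would look like:

unit-mysql-0 2014-07-08 16:00:00 INFO juju.worker.uniter:412 example message text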

The ‘include’ and ‘exclude’ options select the entity that logged the message. An entity is a juju machine or unit agent. The entity names are similar to the names shown by ‘juju status’. You can exclude all the log messages from the bootstrap machine that hosts the state-server like this:

juju debug-log --exclude machine-0

The options can be used multiple times to select the log messages. This example selects all the messages from a unit and its machine as reported by status:

juju debug-log --include unit-mysql-0 --include machine-1

The ‘level’ option restricts messages to the specified log-level or greater. The levels from lowest to highest are TRACE, DEBUG, INFO, WARNING, and ERROR. The WARNING and ERROR messages from the log can be seen like this:

juju debug-log --level WARNING

The ‘include-module’ and ‘exclude-module’ are used to select the kind of message displayed. The module name is dotted. You can specify all or some of a module name to include or exclude messages from the log. This example progressively excludes more content from the logs

juju debug-log --exclude-module juju.state.apiserver
juju debug-log --exclude-module juju.state
juju debug-log --exclude-module juju

The ‘include-module’ and ‘exclude-module’ options can be used multiple times to select the modules you are interested in. For example, you can see the juju.cmd and juju.worker messages like this:

juju debug-log --include-module juju.cmd --include-module juju.worker

The ‘debug-log’ command output can be piped to grep to filter the message like this:

juju debug-log --lines 500 | grep amd64

You can learn more by running ‘juju debug-log --help’ and ‘juju help logging’

Constraints Support instance-type

You can specify ‘instance-type’ with the ‘constraints’ option to select a specific image defined by the cloud provider. The ‘instance-type’ constraint can be used with Azure, EC2, HP Cloud, and all OpenStack-based clouds. For example, when creating an EC2 environment, you can specify ‘m1.small’:

juju bootstrap --constraints instance-type=m1.small

Constraints are validated by all providers to ensure that conflicting values and unsupported options are rejected. Previously, juju would reconcile such problems and select an image, possibly one that didn’t meet the needs of the service.

The lxc-use-clone Option Makes LXC Faster for Non-Local Providers

When ‘lxc-use-clone’ is set to true, the LXC provisioner will be configured to use cloning regardless of provider type. This option cannot be changed once it is set. You can set the option to true in environments.yaml like this:

lxc-use-clone: true

This speeds up LXC provisioning when using placement with any provider. For example, deploying mysql to a new LXC container on machine 0 will start faster:

juju deploy --to lxc:0 mysql

Support for Multiple NICs with the Same MAC

Juju now supports multiple physical and virtual network interfaces with the same MAC address on the same machine. Juju takes care of this automatically, there is nothing you need to do.

Caution, network settings are not upgraded from 1.18.x to 1.20.x. If you used juju 1.18.x to deploy an environment with specified networks, you must redeploy your environment instead of upgrading to 1.20.0.

The output of ‘juju status’ will include information about networks when there is more than one. The networks will be presented in this manner:

machines:
  ...
services:
  ...
networks:
  net1:
    provider-id: foo
    cidr: 0.1.2.0/24
    vlan-tag: 42

MaaS Network Constraints and Deploy Argument

You can specify which networks to include or exclude as a constraint to the deploy command. The constraint is used to select a machine to deploy the service’s units to. The value of ‘networks’ is a comma-delimited list of juju network names (provided by MaaS). Excluded networks are prefixed with a “^”. For example, this command specifies that the service requires the “logging” and “storage” networks and conflicts with the “db” and “dmz” networks.

juju deploy mongodb --constraints networks=logging,storage,^db,^dmz

The network constraint does not enable the network for the service. It only defines what machine to pick.

Use the ‘deploy’ command’s ‘networks’ option to specify service-specific network requirements. The ‘networks’ option takes a comma-delimited list of juju-specific network names. Juju will enable the networks on the machines that host service units.

Juju networking support is still experimental and under development, currently only supported with the MaaS provider.

juju deploy mongodb --networks=logging,storage

The ‘exclude-network’ option was removed from the deploy command as it is superseded by the constraint option.

There are plans to add support for network constraint and argument with Amazon EC2, Azure, and OpenStack Havana-based clouds like HP Cloud in the future.

MAAS Provider Supports Placement and add-machine

You can specify which MAAS host to place the juju state-server on with the ‘to’ option. To bootstrap on a host named ‘fnord’, run this:

juju bootstrap --to fnord

The MAAS provider supports the add-machine command now. You can provision an existing host in the MAAS-based Juju environment. For example, you can add a running machine named fnord like this:

juju add-machine fnord

Server-Side API Versioning

The Juju API server now has support for a Version field in requests that are made. For this release, there are no RPC calls that require anything other than ‘version=0’ which is the default when no Version is supplied. This should have limited impact on existing CLI or API users, since it allows us to maintain exact compatibility with existing requests. New features and APIs should be exposed under versioned requests.

For details on the internals (for people writing API clients), see this document.

Finally

We encourage everyone to subscribe to the mailing list at juju-dev at lists.canonical.com, or join us on #juju-dev on freenode.

PS. Juju just got 20% more amazing.

Colin King: more stress with stress-ng

Planet Ubuntu - Tue, 2014-07-08 09:27
Since my last article about stress-ng I have been adding a few more stress mechanisms to stress-ng:
  • file locking - exercise file locking with one or more processes (the more processes the better).
  • fallocate - this allocates a 4MB file, syncs, truncates it to zero size and syncs again, repeatedly
  • yield - this loops on sched_yield() to repeatedly relinquish the CPU forcing a high context switch rate when run with multiple yielding processes.
Also, I have added some new features to tweak scheduling, I/O characteristics and memory allocations of the running stress processes:
  • --sched and --sched-prio options to specify the scheduler type and priority
  • --ionice-class and --ionice-level options to tweak I/O niceness
  • --vm-populate option to populate (pre-fault) page tables for a mapping for the --vm stress test.
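A combined run exercising several of these might look like the sketch below (worker counts and the 60 second timeout are arbitrary, and the real-time scheduling class typically needs root):

sudo stress-ng --yield 8 --fallocate 2 --vm 2 --vm-populate --sched fifo --sched-prio 10 --ionice-class idle --timeout 60s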
If I think of other mechanisms to stress the kernel I will add them, but for now, stress-ng is becoming almost feature complete.

Ronnie Tucker: Linux Kernel 3.15.3 Is Now Available for Download

Planet Ubuntu - Tue, 2014-07-08 06:58

Greg Kroah-Hartman had the pleasure of announcing earlier today, July 1, that the third maintenance release for the current stable 3.15 branch of the Linux kernel is available for download, urging users to upgrade as soon as their Linux distributions update the respective packages on the official software repositories.

The Linux kernel 3.15.3 is a pretty standard release that introduces various updated drivers, some filesystem improvements, especially for Btrfs and EXT4, random mm and Bluetooth fixes, and the usual architecture enhancements (ARM, ARM64, IA64, SPARC, PowerPC, s390, and x86).

Be aware, though, that upgrading to a new Linux kernel package might break some things on your system, so it is preferable to wait a few days and see if anyone complains about it on the official channels of your distribution.

Source:

http://news.softpedia.com/news/Linux-Kernel-3-15-3-Is-Now-Available-for-Download-448998.shtml

Submitted by: Marius Nestor

Ubuntu Server blog: 2014-07-01 Meeting Minutes

Planet Ubuntu - Tue, 2014-07-08 03:35
Agenda
  • Review ACTION points from previous meeting
  • U Development
  • Server & Cloud Bugs (caribou)
  • Weekly Updates & Questions for the QA Team (psivaa)
  • Weekly Updates & Questions for the Kernel Team (smb, sforshee)
  • Ubuntu Server Team Events
  • Open Discussion
  • Announce next meeting date, time and chair
Minutes
  • bug 1317587 is in progress
Next Meeting

Next meeting will be on Tuesday, July 8th at 16:00 UTC in #ubuntu-meeting.

Additional logs @ https://wiki.ubuntu.com/MeetingLogs/Server/20140701

The Fridge: Ubuntu Weekly Newsletter Issue 374

Planet Ubuntu - Tue, 2014-07-08 00:52

The Fridge: New Ubuntu Membership Board Members

Planet Ubuntu - Mon, 2014-07-07 16:44

Back in April and June the Community Council put out a call to restaff the Ubuntu Membership Board for several open spots on the board.

Today I’m happy to announce that the Community Council has appointed (or renewed membership of) the following individuals:

For the 1200 UTC time slot:

For the 2200 UTC time slot:

Thanks to all nominees for putting their names forward for consideration and thanks to the outgoing members who have served on the board these past couple of years!

Elizabeth K. Joseph, on behalf of the Community Council

Jonathan Riddell: Frameworks 5 and Plasma 5 almost done!

Planet Ubuntu - Mon, 2014-07-07 14:42
KDE Project:

KDE Frameworks 5 is due out today, the most exciting clean-up of libraries KDE has seen in years. Use KDE classes without bringing in the rest of kdelibs. Packaging for Kubuntu is almost all green and Rohan should be uploading it to Utopic this week.

Plasma 5 packages are being made now. We're always looking for people to help out with packaging; if you want to be part of making your distro, do join us in #kubuntu-devel.
