news aggregator

Kubuntu: Calligra 2.8 is Out

Planet Ubuntu - Wed, 2014-03-05 22:37

Packages for the release of KDE's document suite Calligra 2.8 are available for Kubuntu 12.04 LTS and 13.10. You can get them from the Kubuntu Backports PPA (alongside KDE SC 4.12). They are also in our development release.

Harald Sitter: Kubuntu Testing and You

Planet Ubuntu - Wed, 2014-03-05 15:05

With the latest Kubuntu 14.04 Beta 1 out the door, the Kubuntu team is hard at work to deliver the highest possible quality for the upcoming LTS release.

As part of this we are introducing basic test cases that every user can run to ensure that core functionality such as instant messaging and playing MP3 files is working as expected. All tests are meant to take no more than 10 minutes and should be doable by just about everyone. They are the perfect way to get some basic testing done without all the hassle testing usually involves.

If you are already testing Beta 1, head on over to our Quality Assurance Headquarters to get the latest test cases.

Feel free to run any test case, at any time.

If you have any questions, drop me a mail at apachelogger@kubuntu.org, or stop by in #kubuntu-devel on irc.freenode.net.


Jonathan Riddell: New Blue Systems Office Edinburgh

Planet Ubuntu - Wed, 2014-03-05 11:21
KDE Project:

The Blue Systems office in Edinburgh has moved across The Meadows to the Grassmarket to a larger office which is also surrounded on two sides by curious artist collectives and the occasional hipster café. Hosted in Edinburgh's new technology incubator Codebase we are in another building which is nicer on the inside looking out, this time with a view of our local volcano Arthur's Seat.

I'd like to thank Cloudsoft for squatting in the office with me, making software to manage your applications in the cloud. They are hiring now if you fancy a job in the most beautiful city in the world.

In terms of skill set, the main thing we are looking for is good software
engineers rather than specific skills. However, here are a few potential
areas:
1. Experienced Java programmer (and/or other languages a big plus; Java
is not a pre-requisite).
2. Devops experience.
3. Good sys admin skills.
4. Distributed computing (i.e. understands the architectural
considerations etc).
5. Cloud/virtualization.
6. Javascript / web-dev (for one of them, perhaps)

Michael Zanetti: Ubuntu App Developer week

Planet Ubuntu - Wed, 2014-03-05 10:14

Yesterday I did a tutorial for the Ubuntu App Developer Week. The topic was “Extending QML Applications with Qt/C++ plugins”. If you’re interested in that topic but you’ve missed the session, you can watch it here:

You can find the example code used in the tutorial on Launchpad.

Today I’m going to do a session about using qmltestrunner to test your QML applications. You can watch it live at 15:00 UTC at http://summit.ubuntu.com/appdevweek-1403/meeting/22137/testing-with-qmltestrunner/. If you can’t make it, the site will keep the video for you to watch later.
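If you want a head start before the session, here is a minimal sketch of how qmltestrunner is typically invoked (the file name tst_calculator.qml is hypothetical; QML test files conventionally start with tst_ and contain a TestCase element):

# Run a single QML test file (hypothetical file name)
qmltestrunner -input tst_calculator.qml
# Or point it at a directory containing several tst_*.qml files
qmltestrunner -input tests/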

Ubuntu GNOME: [Poll] Community Wallpaper Contest for Trusty Tahr

Planet Ubuntu - Wed, 2014-03-05 09:15

Hi,

The Ubuntu GNOME team is pleased to announce the pre-final step of our very first Community Wallpaper Contest for the Trusty Tahr cycle.

As communicated before, now that wallpaper submissions have closed, it is time to ask the whole community to vote for their favourite wallpapers, which will make it into the final release of Ubuntu GNOME 14.04 (Trusty Tahr).

Important Note:
Please note that you’re required to select ten (10) wallpapers and vote for your selections; the top ten (10) wallpapers will be included in Ubuntu GNOME 14.04.

Deadline:
The poll will remain open until 9 March 2014.

To view the poll, please click here:
[Poll] Ubuntu GNOME 14.04 Community Wallpaper Contest

What will happen next?
Once the poll closes, we shall announce the winning wallpapers that will make it into Ubuntu GNOME 14.04, and that will be the final step of the Community Wallpaper Contest.

Thank you for your time and your vote!

On Behalf of Ubuntu GNOME Artwork Team

Mark Shuttleworth: #11 – Ubuntu is the #1 platform for production OpenStack deployments

Planet Ubuntu - Wed, 2014-03-05 07:36

OpenStack has emerged as the consensus forum for open source private cloud software. That of course makes it a big and complex community, with complex governance and arguably even more complex politics, but it has survived several rounds of competition and is now settling down as THE place to get diverse vendors to work together on an IAAS that anybody can deploy for themselves. It is a big enough forum with sufficient independent leadership that no one vendor will ever control it (despite some fantastically impressive efforts to do so!). In short, OpenStack is what you want if you are trying to figure out how to build yourself a cloud.

And by quite a large majority, most of the people who have actually chosen to deploy OpenStack in production have done so on Ubuntu.

At the latest OpenStack summit, an official survey of production OpenStack deployments found 55% of them on Ubuntu, a stark contrast with the 10% of OpenStack deployments on RHEL.

Canonical and Ubuntu play an interesting role in OpenStack. We do not seek to control any particular part of the project, although some of our competitors clearly think that would be useful for them to achieve. We think OpenStack would be greatly diminished in importance if it were perceived to be controlled by a single vendor, and we think there are enough contributors and experts around the table to ensure that the end result cannot actually be controlled by a single party. To a certain extent, the battle for notional control of key aspects of OpenStack just holds the project back; it’s a distraction from the real task at hand, which is to deliver a high quality, high performance open cloud story. So our focus is on supporting the development of OpenStack, supporting the broadest range of vendors who want to offer OpenStack solutions, components and services, and enabling a large ecosystem to accelerate the adoption of OpenStack in their markets.

It’s a point of pride for us that you can get an OpenStack cloud built on Ubuntu from just about every participant in the OpenStack ecosystem – Dell, HP, Mirantis, and many more – we think the healthiest approach is for us to ensure that people have great choices when it comes to their cloud solution.

We were founding members and are platinum sponsors of the OpenStack Foundation. But what’s more important to us is that most OpenStack development happens on Ubuntu. We take the needs of OpenStack developers very seriously – for 14.04 LTS, our upcoming biennial enterprise release, a significant part of our product requirements were driven by the goal of supporting large-scale enterprise deployments of OpenStack with high availability as a baseline. Our partners like HP, who run one of the largest OpenStack public cloud offerings, invest heavily in OpenStack’s CI and test capabilities, ensuring that OpenStack on Ubuntu is of high quality for anybody who chooses the same base platform.

We publish stable, maintained archives of each OpenStack release for the LTS releases of Ubuntu. That means you can ALWAYS deploy the latest version of OpenStack on the current LTS of Ubuntu, and there is a clear upgrade path as new versions of both OpenStack and Ubuntu are released. And the fact that the OpenStack release cadence and the Ubuntu release cadence are perfectly aligned is no accident – it ensures that the OpenStack developers can always deliver their latest code straight to a very large audience of developers and operators. That’s important because of the extraordinary pace of innovation inside OpenStack; there are significant and valuable improvements in each six-month release, so customers, even enterprise customers, find themselves wanting a more aggressive upgrade schedule for OpenStack than is normal for them in platform environments. We support that and have committed to continue doing so, though we do expect the urgency of those upgrades to diminish as OpenStack matures over the next three years.
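As an illustration of that upgrade path (my example, not from the post): these archives are delivered through the Ubuntu Cloud Archive. Here is a sketch of enabling the Havana archive on a 12.04 LTS system, assuming the cloud-archive keyword is available in software-properties there:

# Hedged sketch: enable the Ubuntu Cloud Archive for OpenStack Havana on 12.04 LTS
sudo apt-get install python-software-properties
sudo add-apt-repository cloud-archive:havana
sudo apt-get update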

For commercial support of OpenStack, we are happy for industry to engage either with our partners who can provide local talent combined with an escalation path to Canonical for L3 support of the whole solution, or directly with Canonical if the circumstances warrant it. That means building on Ubuntu opens up a wide range of solution providers who can make the same high commitment to SLAs and upgrades.

For Canonical itself, our focus is on scale and quality. Our direct customers run the very largest production deployments of OpenStack, both private and public, and we enjoy collaborating with their architects to push the limits of the stack as it stands today. That gives us a lot of insight into the approaches being taken by a wide range of architects in telco, finance and media. We ourselves invest very heavily in testing, continuous integration, and interoperability, with the largest OpenStack interop program (OIL) that gives us the ability to speak with confidence about what combinations of vendor offerings will actually work, and in many cases, how they will perform together for different applications.

The fact that the traditional enterprise Linux vendors have now joined OpenStack is a tremendous validation of the role that OpenStack has assumed in industry: THE open cloud forum. But for all the reasons outlined above, most of the actual production deployments of OpenStack are not on traditional, legacy enterprise Linux. This mirrors the public cloud, where even the largest and most mission-critical deployments tend not to be on proprietary Linux offerings; the economics of HA single-node solutions just don’t apply in a scale-out environment. So just as Ubuntu is by far the most widely used platform for public cloud guests, it is also on track to be the enterprise choice for scale-out infrastructure like IAAS, storage, and big data. Even if you have always done Linux a particular way, the transition to scale-out thinking is an opportunity to reset expectations about your base OS; and for the fastest-moving players in telco, media and finance, Ubuntu turns out to be a great way to get more done, more efficiently.

Nigel Babu: Goodbye Ubuntu

Planet Ubuntu - Wed, 2014-03-05 03:33

On March 8th, my Ubuntu membership will expire. I’ve been getting email notifications for a few days and I’ve decided not to renew my membership. Ubuntu introduced me to open source. Thank you for the great operating system and the sense of community that I’ve had for a few years. I’ve made a lot of friends and I’ve had a lot of mentors who’ve helped me become a better person.

I won’t disappear entirely – I will still be in a few IRC channels and help in whatever little way I can.

Thank you everyone for the spectacular few years.

Luis de Bethencourt: Why Coding Is Fun

Planet Ubuntu - Tue, 2014-03-04 17:33


"First is the sheer joy of making things. As the child delights in his mud pie, so the adult enjoys building things, especially things of his own design. [...]

Second is the pleasure of making things that are useful to other people. Deep within, we want others to use our work and to find it helpful. [...]

Third is the fascination of fashioning complex puzzle-like objects of interlocking moving parts and watching them work in subtle cycles, playing out the consequences of principles built in from the beginning. [...]

Fourth is the joy of always learning, which springs from the non-repeating nature of the task. In one way or another the problem is ever new, and its solver learns something; sometimes practical, sometimes theoretical, and sometimes both.

Finally, there is the delight of working in such a tractable medium. The programmer [...] works only slightly removed from pure thought-stuff. He builds his castles in the air, from air, creating by exertion of the imagination. Few media of creation are so flexible, so easy to polish and rework, so readily capable of realizing grand conceptual structures.

Yet the program construct [...] is real in the sense that it moves and works, producing visible outputs separate from the construct itself.
"



from "The Joys of the Craft" in Fred Brooks' The Mythical Man-Month, via sixhats.com.ar
Image from Hello Ruby

Ubuntu Kernel Team: Kernel Team Meeting Minutes – March 04, 2014

Planet Ubuntu - Tue, 2014-03-04 17:15
Meeting Minutes

IRC Log of the meeting.

Meeting minutes.

Agenda

20140304 Meeting Agenda


ARM Status

nothing new to report this week


Release Metrics and Incoming Bugs

Release metrics and incoming bug data can be reviewed at the following link:

http://people.canonical.com/~kernel/reports/kt-meeting.txt


Milestone Targeted Work Items


  • apw: core-1311-kernel (4 work items), core-1311-cross-compilation (2 work items), core-1311-hwe-plans (1 work item)
  • ogasawara: core-1311-kernel (1 work item)
  • smb: servercloud-1311-openstack-virt (4 work items)


Status: Trusty Development Kernel

The 3.13.0-15.35 Trusty kernel is available in the archive. This is based on the v3.13.5 upstream stable update. Our unstable branch has also been rebased to track the latest v3.14-rc5 release.
—–
Important upcoming dates:
Thurs Mar 27 – Final Beta (~3 weeks away)
Thurs Apr 03 – Kernel Freeze (~4 weeks away)


Status: CVE’s

The current CVE status can be reviewed at the following link:

http://people.canonical.com/~kernel/cve/pkg/ALL-linux.html


Status: Stable, Security, and Bugfix Kernel Updates – Saucy/Raring/Quantal/Precise/Lucid

Status for the main kernels, until today (Mar. 4):

  • Lucid – Testing
  • Precise – Testing
  • Quantal – Testing
  • Saucy – Testing

    Current opened tracking bugs details:

  • http://people.canonical.com/~kernel/reports/kernel-sru-workflow.html

    For SRUs, SRU report is a good source of information:

  • http://people.canonical.com/~kernel/reports/sru-report.html


Open Discussion or Questions? Raise your hand to be recognized

No open discussions.

David Planella: Internationalizing your apps at the Ubuntu App Developer Week

Planet Ubuntu - Tue, 2014-03-04 16:26

As part of the Ubuntu App Developer Week, I just ran a live on-air session on how to internationalize your Ubuntu apps. Some of the participants on the live chat asked me if I could share the slides somewhere online.

So here they are for your viewing pleasure :) If you’ve got any questions on i18n or in Ubuntu app development in general, feel free to ask in the comments or ping me (dpm) on IRC.

The video

The slides

Enjoy!

The post Internationalizing your apps at the Ubuntu App Developer Week appeared first on David Planella.

Sebastian Kügler: Are your Qt builds going OOM?

Planet Ubuntu - Tue, 2014-03-04 12:15

If you’re, like me, regularly building Qt, you probably have noticed a decent hunger for memory, especially when linking Webkit. This part of the build can take well over 8GB of RAM, and when it fails, you get to do it over again.

The unfortunate data point is that my laptop only has 4GB, which is enough for most cases (but not this one). Short of buying a new laptop, here’s a trick for getting through this part of the build: create a swap file. Creating a swapfile increases your virtual memory. This won’t make it fast, but it at least gives Linux a chance to not run out of memory and kill the ‘ld’ process. Creating a swapfile is actually really easy under Linux; you just have to know your toolbox. Here’s the quick run-down of steps:

First, create an empty file:

fallocate -l 4096M /home/swapfile

Using the fallocate syscall (which works for newer kernels, but only for btrfs, ext4, ocfs2, and xfs filesystems), this can be done fast. In this example, I have enough space on my home partition, so I decided to put the swapfile there. It’s 4GB in size, which should be plenty of virtual memory to finish even the greediest of builds — your mileage may vary. If you’re not able to use fallocate, you’ll need a bit more patience and dd.
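For reference, the dd route might look like this (a sketch with the same size and path as above; it writes 4GB of zeros, so it is much slower than fallocate):

# Fallback for kernels/filesystems where fallocate is not supported
dd if=/dev/zero of=/home/swapfile bs=1M count=4096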

As your swap should never be readable by anybody else than root, change its permissions:

chmod 600 /home/swapfile

Next, “format” the swapfile:

mkswap /home/swapfile

Then, add it to your virtual memory pool:

swapon /home/swapfile

You can now check with tools like `free` or `top` (tip: use the much nicer `htop` if you’re into fancy) that your virtual memory actually increased. Once your build is done, and you need your disk-space back, that’s easy as pie:

swapoff /home/swapfile
rm /home/swapfile

If you want to make this permanent (so it’ll survive a reboot), add a line like the following to your fstab:

/home/swapfile none swap defaults 0 0

This is just a really quick tip, more details on this process can be found in the excellent Arch Wiki.

Daniel Pocock: Ganglia-Nagios-Bridge and JMXetric new releases

Planet Ubuntu - Tue, 2014-03-04 09:34
ganglia-nagios-bridge v1.1.0

A new release of ganglia-nagios-bridge is now available. The package has also been uploaded to Debian for those who use that and it will appear in backports soon. The main improvement in this version is slightly better support for generating an UNKNOWN alert in Nagios when fresh metrics fail to appear in Ganglia.

There is also a new wiki comparing the various Ganglia and Nagios integration solutions.

JMXetric v1.0.6 and gmetric4j v1.0.6

The latest JMXetric and gmetric4j releases improve support for JBoss, specifically, working around some issues that occur when a profiling agent accesses the logger or MBean platform during JVM startup.

The page about JBoss, Wildfly and Tomcat integration details for Ganglia/JMXetric performance monitoring has also been updated to cover these improvements.

The process of having JMXetric and gmetric4j packaged is now under way and it has been verified that they just work with the tomcat7 packages on Debian and Ubuntu (config sample included).

The Fridge: Ubuntu Weekly Newsletter Issue 357

Planet Ubuntu - Tue, 2014-03-04 03:47

Welcome to the Ubuntu Weekly Newsletter. This is issue #357 for the week February 24 – March 2, 2014, and the full version is available here.

In this issue we cover:

This issue of the Ubuntu Weekly Newsletter is brought to you by:

  • Elizabeth Krumbach Joseph
  • Paul White
  • Esther Schindler
  • Nathan Dyer
  • Jim Connett
  • And many others

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!

Except where otherwise noted, content in this issue is licensed under a Creative Commons Attribution ShareAlike 3.0 License.

Nicholas Skaggs: A simple look at testing within ubuntu

Planet Ubuntu - Mon, 2014-03-03 22:35
Since just before the last LTS, quality has been a buzzword within the ubuntu community. We've come a long way since precise and I wanted to provide some help and perspective on what ubuntu's process for quality looks like this cycle. In simple terms. Or as reddit would say, "explain to me like I'm 5".

I'll try and define terms as we go. First let me define CI, which is perhaps the buzzword of this cycle, lest I lose all of you! CI stands for continuous integration, and it means we are testing ubuntu. All the time. Non-stop. Every change we make, we test. The goal behind this idea is to find and fix problems, before well, they become problems on your device!

CI Dashboard
The CI dashboard then is a way to visually see the results of this testing. It acts as a representation of the health of ubuntu as a distribution. At least once a day, runs are executed: apps and images are tested and benchmarked, and the results are published on ci.ubuntu.com. This is perhaps the most visible part of the CI (our continuous testing efforts) that is happening within ubuntu. But let's step back a minute and look at how the overall CI process works within ubuntu.

CI Process
App developers hack on a bit of code, fixing bugs or adding new features to the codebase. Once the code is ready, a merge proposal[1] is created by the developer and feedback is sought. If the code passes the peer review and the application's tests, it will then become part of the image after a journey through the CI train.

For the community core apps, the code is merged after peer review, and then undergoes a similar journey to the store where it will become part of the image as well. Provided of course it meets further review criteria by myself and Alan (we'll just call him the gatekeeper).
Though menacing, Alan assures me he doesn't bite
Lest we forget, upstream[2] uploads[3] are done as well. We can hope some form of testing was done on them before we received them. Nevertheless, tests are run on these as well, and if they pass successfully, the new packages will enter the archive[4] and become part of the image.

Generating Images
Now it's time to generate some images. For the desktop a snapshot of what's in the ubuntu archive is taken each day, built, and then subjected to a series of installation tests. If the tests pass, it is released for general testing, known as image (or ISO) testing. An image is tested and declared stable as part of a milestone (or testing event) and can become the next version of ubuntu!
Adopted images are healthy images!
On the ubuntu phone side of things, all the new uploads are gathered and evaluated for risk. If something is a large change, it might be prudent to not land it with other large changes so we can tell what broke should the image not work properly. Once everything is ready, a new image is created and is released for testing.  The OTA updates (over-the-air; system updates) on your ubuntu phone come from this process!

How you can help?
Still with me I hope? As you can see there are many things happening each day in regards to quality, and lots of places where you can create a positive change for the health of the distro! In my next few posts, I'll cover each of the places you can plug in to help ubuntu be healthy every day!

1. A merge proposal is a means of changing an application's code via peer review.
2. By upstream, I mean the communities and people who make things we use inside of ubuntu, but are not directly a part of it. Something like the browser (firefox) and kernel are good examples.
3. This can happen via a general sync from debian at the beginning of the cycle. This sync copies part of the debian archive into the ubuntu archive, which in effect causes applications to be updated. Applications are also updated whenever a core ubuntu developer or a MOTU uploads a new version to the archive.
4. In case you are wondering, "the archive" is the big repository where all of your updates and new applications come from!

James Page: Which Open vSwitch?

Planet Ubuntu - Mon, 2014-03-03 18:09

Since Ubuntu 12.04, we’ve shipped a number of different Open vSwitch versions supporting various different kernels in various different ways; I thought it was about time that the options were summarized to enable users to make the right choice for their deployment requirements.

Open vSwitch for Ubuntu 14.04 LTS

Ubuntu 14.04 LTS will be the first Ubuntu release to ship with in-tree kernel support for Open vSwitch with GRE and VXLAN overlay networking – all provided by the 3.13 Linux kernel. GRE and VXLAN are two of the tunnelling protocols used by OpenStack Networking (Neutron) to provide logical separation between tenants within an OpenStack Cloud.

This is great news from an end-user perspective as the requirement to use the openvswitch-datapath-dkms package disappears as everything should just *work* with the default Open vSwitch module. This allows us to have much more integrated testing of Open vSwitch as part of every kernel update that we will release for the 3.13 kernel going forward.

You’ll still need the userspace tooling to operate Open vSwitch; for Ubuntu 14.04 this will be the 2.0.1 release of Open vSwitch.
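As a quick sanity check (my suggestion, not from the original post; it assumes the openvswitch-switch package is installed, which provides ovs-vsctl):

# Userspace tools version (should report 2.0.1 on Ubuntu 14.04)
ovs-vsctl --version
# Confirm the in-tree kernel module is available
modinfo openvswitch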

Open vSwitch for Ubuntu 12.04 LTS

As we did for the Raring 3.8 hardware enablement kernel, an openvswitch-lts-saucy package is working its way through the SRU process to support the Saucy 3.11 hardware enablement kernel; if you are using this kernel, you’ll be able to continue to use the full feature set of Open vSwitch by installing this new package:

sudo apt-get install openvswitch-datapath-lts-saucy-dkms

Note that if you are using Open vSwitch on Ubuntu 12.04 with the Ubuntu Cloud Archive for OpenStack Havana, you will already have access to this newer kernel module through the normal package name (openvswitch-datapath-dkms).

DKMS package names

Ubuntu 12.04/Linux 3.2: openvswitch-datapath-dkms (1.4.6)
Ubuntu 12.04/Linux 3.5: openvswitch-datapath-dkms (1.4.6)
Ubuntu 12.04/Linux 3.8: openvswitch-datapath-lts-raring-dkms (1.9.0)
Ubuntu 12.04/Linux 3.11: openvswitch-datapath-lts-saucy-dkms (1.10.2)
Ubuntu 12.04/Linux 3.13: N/A
Ubuntu 14.04/Linux 3.13: N/A

Hope that makes things clearer…


Lubuntu Blog: [Poll] Community wallpaper contest

Planet Ubuntu - Mon, 2014-03-03 14:54
Ladies and gentlemen, cast your votes! The poll that will decide our additional wallpapers for Trusty Tahr is now up and running. To cast your vote, head here and choose your five favorite wallpapers. The poll will continue until March 11th, and the top five contributions will be included in Lubuntu 14.04 and packaged in the Ubuntu repositories. Spread the word, and good luck to all!

Marcin Juszkiewicz: How to get Zoom slider on Microsoft keyboard recognized by X11

Planet Ubuntu - Mon, 2014-03-03 11:16

If you are using a Microsoft Natural Ergonomic Keyboard 4000 as I do, you may have wondered how to get that zoom slider in the middle to be useful. Thanks to Hans de Goede there is a solution.

One new file and changes to another are needed. First we need to instruct udev to remap some keys for us. Create the /lib/udev/hwdb.d/50-msoffice-keyboard-xorg.hwdb file with this content:

# classic msoffice keyboard
keyboard:usb:v045Ep0048d*dc*dsc*dp*ic*isc*ip*in01*
 KEYBOARD_KEY_0c0184=documents    # KEY_WORDPROCESSOR to KEY_DOCUMENTS
 KEYBOARD_KEY_0c0186=finance      # KEY_SPREADSHEET to KEY_FINANCE
 KEYBOARD_KEY_0c018e=chat         # KEY_CALENDAR to KEY_CHAT
 KEYBOARD_KEY_0c01a3=nextsong     # KEY_NEXT to KEY_NEXTSONG
 KEYBOARD_KEY_0c01a4=previoussong # KEY_PREVIOUS to KEY_PREVIOUSSONG
 KEYBOARD_KEY_0c01ab=search       # KEY_SPELLCHECK to KEY_SEARCH

# Microsoft Natural Ergonomic Keyboard 4000
keyboard:usb:v045Ep00DB*
 KEYBOARD_KEY_0c01ab=search       # KEY_SPELLCHECK to KEY_SEARCH
 KEYBOARD_KEY_0c022d=scrollup     # KEY_ZOOMIN to KEY_SCROLLUP
 KEYBOARD_KEY_0c022e=scrolldown   # KEY_ZOOMOUT to KEY_SCROLLDOWN

In Fedora rawhide I also needed to edit the 60-keyboard.hwdb file (same directory) to disable some definitions:

# Microsoft Natural Ergonomic Keyboard 4000
#keyboard:usb:v045Ep00DB*
# KEYBOARD_KEY_c022d=zoomin
# KEYBOARD_KEY_c022e=zoomout

Now an update of the hwdb is needed:

sudo udevadm hwdb --update
sudo udevadm control --reload

And the only thing left is replugging the keyboard (or a system reboot). As a bonus you get an XF86Search button instead of the non-working Spell key (F10). Those who use the Microsoft Office Keyboard (the old one with the scroller on the left side) will get all keys working as well, but they also need a 3.14 kernel to get all the recent fixes.
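One way to verify the remap took effect (my suggestion, not from the original post) is to watch key events in xev while pressing the slider:

# Press the zoom slider while the xev window has focus; with the remap above
# the events should report keysyms like XF86ScrollUp / XF86ScrollDown
xev | grep -A2 --line-buffered KeyPress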

And why is all that needed at all? Simple: Xorg is still sitting in the 80s when it comes to handling keyboards, and it ignores all keycodes whose values do not fit in 8 bits. I hope that Wayland does not follow that path and just takes whatever the underlying system reports about input devices.

All rights reserved © Marcin Juszkiewicz
How to get Zoom slider on Microsoft keyboard recognized by X11 was originally posted on Marcin Juszkiewicz website


Daniel Pocock: Automatically building Java projects

Planet Ubuntu - Mon, 2014-03-03 09:27

Another of the GSoC project areas I have offered to mentor involves automatically and recursively building Java projects.

Why is this important?

Recently, I decided to start using travis-ci to automatically build some of my Java projects.

One of the projects, JMXetric, depends on another, gmetric4j. When travis-ci builds JMXetric, it needs to look in Maven repositories to find gmetric4j (and also remotetea / oncrpc.jar and junit). The alternative to fetching things from repositories is to stop using Maven and ship the binary JARs of dependencies in the JMXetric repository itself, which is not desirable for various reasons.

Therefore, I submitted gmetric4j into the Maven Central repository by using the Sonatype Nexus service. One discovery I made in this process disturbed me: Sonatype allows me to sign and upload my binary JAR; they don't build it from source themselves, so users of the JAR have no way to know that it really is built from the source I publish on GitHub.

In fact, as anybody who works with Java knows, there is no shortage of Java projects out there that have a mix of binary dependencies in their source tree, sometimes without any easy way to find the dependency sources. HermesJMS is one popular example that is crippled by the inclusion of some JARs that are binary-only.

No silver bullet, but there is hope

Although there are now tens of thousands of JAR libraries out there in repositories and projects that depend on them (and their transitive dependencies and build tools and such), there is some hope:

  • Many JARs provide a -source JAR including source. This doesn't include all of the build artifacts of a true source package or source tarball, it just provides a subset of the source for use with a debugger.
  • Many Maven pom.xml files now include metadata about where the source is located - example

With that in mind, I'm hopeful that a system could be developed to scrape some of these data sources to find some source code and properly build some subset of the thousands of JARs available in the Maven Central Repository.

But why bother if you can't completely succeed?

One recent post on maven-users suggested that because nobody would ever be able to build 100% of JARs from source, the project is already doomed to failure.

Personally, I feel it is quite the opposite: by failing to build 100% of JARs from source, the project will help to pinpoint those hierarchies of JARs that are not really free software and increase pressure on their publishers to adhere to the standards that people reasonably expect for source distribution, or provide a red flag to help dependent projects stop using them.

On top of that, the confirmation of true free-software status for many other JARs will make it safer for people to rely on them, package them and distribute them in various ways.

Dumping a gazillion new Java packages into Debian

Just to clear up one point: being able to automatically build JARs from source (or a chain of dependencies involving hundreds of them) doesn't mean they will be automatically uploaded to the official Debian archive by a Debian Developer (DD).

Having this view of the source will make it easier for a DD to look at a set of JARs and decide if they are suitable for packaging, but there would still be some manual oversight involved. The tool would simply take out some of the tedious manual steps (where possible) and give the DD the ability to spot traps (JARs without real source hidden in the dependency hierarchy) much more quickly.

How would it work?

The project - whether completed under GSoC or by other means - would probably be broken down into a few discrete components. Furthermore, it would utilize some existing tools where possible. All of this makes it easier for a student to focus on and complete some subset of the work even if the whole thing is quite daunting.

Here are some of the ideas for the architecture of the solution and the different components to be used or developed:

  • The data set:
    • A database schema, tracking each binary artifact, the source repository location (e.g. Git or SVN URL), source tarball location, source JAR availability and dependency relationships (including versions)
    • A local Maven repository - only containing JARs that we have built locally from some source
    • A set of Git repositories to mirror the upstream repositories of projects that need to be tweaked.
  • Tool set:
    • A web interface or command line tool would be necessary for a user to kick-start the process by specifying some artifact they want to build
    • There would need to be a script that tries to work out all the possible ways to get the source for an artifact (e.g. by looking for a Git URL in the pom.xml from the Maven Central repository; see the sketch after this list). This script would be able to do other things, like identifying the existence of -source JARs which may or may not be sufficient to build the artifact.
    • A script would need to be created for testing the artifact's source tarball or repository for binary artifacts (e.g. copies of junit.jar). Whenever such things were found, the script would mirror the repository into our local git and create a branch with binaries removed. A record of the binaries would be added to the local database so we can symlink them from a trusted source when building.
    • A script would need to be created for testing whether the project includes a recognised build system (such as build.xml for ant or pom.xml for Maven). For projects without such artifacts, the script would need to generate a template build.xml and store it in a local clone of the repository
    • Jenkins would be used to build the JARs. A script would need to be created to build the Jenkins config file for the project, pointing Jenkins to the upstream Git or the local Git repository depending upon the situation.
    • If the project is a Maven or Ivy project, then there are likely to be attempts to find dependencies during the build process. Running under Jenkins, these tools would be configured in such a way that they only look to the local repository and use dependencies that we have already built. If the build fails during dependency resolution, this is where the recursive process would kick off: the attempt to find each missing dependency would be logged to a queue, and the requests in this queue would each be handled by restarting the whole process again at the beginning. Each of these requests would also be logged to the database.
    • Sometimes, the system would be unable to proceed (e.g. because there are no clues about source locations in a given pom.xml). A user interface would need to be constructed to show a list of artifacts with exceptions and allow the user to manually locate the source and supply the URLs. The system would then continue iterating with this new data.
  • Reporting: we already know that for some JARs, we will simply fail to make any progress and we are not going to lose any sleep over that. The important thing is to provide accurate reports to help people make decisions that may involve working around those JARs in future:
    • For what percentage of projects could we determine the license from the pom.xml? Reports on licensing: can we spot any license mismatch in the dependency hierarchy?
    • Tools: which build tools in the chain of dependencies don't provide any source code? Are they optional tools (such as code quality analysis) that we can skip in the build process (e.g. by producing a mock version of the tool or plugin)?
    • Which non-free/sourceless JARs are most widely depended upon by other projects in the free Java eco-system? Can we make a list of the top 10 or 20?
    • Abandonware: can we detect JARs that haven't been updated for an extended period of time, with no activity in the source repository? For these projects in particular, it is a really good idea to make backups of the source repositories (or mirrors of their web sites and source download directories) in case they disappear altogether.
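As a trivial illustration of the source-location step described above (a sketch only, not part of the proposal itself; it assumes xmllint from libxml2 and a pom.xml already downloaded to the current directory), pulling the <scm> URL out of a POM could start as simply as:

# Extract the source-repository URL from a pom.xml, ignoring the POM XML namespace
xmllint --xpath "//*[local-name()='scm']/*[local-name()='url']/text()" pom.xml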

Michael Hall: Ubuntu App Developer Week starts Today!

Planet Ubuntu - Mon, 2014-03-03 09:00

Starting at 1400 UTC today, and continuing all week long, we will be hosting a series of online classes covering many aspects of Ubuntu application development. We have experts both from Canonical and our always amazing community who will be discussing the Ubuntu SDK, QML and HTML5 development, as well as the new Click packaging and app store.

You can find the full schedule here: http://summit.ubuntu.com/appdevweek-1403/

We’re using a new format for this year’s app developer week.  As you can tell from the link above, we’re using the Summit website.  It will work much like the virtual UDS, where each session will have a page containing an embedded YouTube video that will stream the presenter’s hangout, an embedded IRC chat window that will log you into the correct channel, and an Etherpad document where the presenter can post code examples, notes, or any other text.

Use the chatroom like you would in an Ubuntu On Air session: start your questions with “QUESTION:” and wait for the presenter to get to it. After the session is over, the recorded video will be available on that page for you to replay later. If you register yourself as attending on the website (requires a Launchpad profile), you can mark yourself as attending the sessions you are interested in, and Summit can then give you a personalized schedule as well as an iCal feed you can subscribe to in your calendar.

If you want to use the embedded Etherpad, make sure you’re a member of https://launchpad.net/~ubuntu-etherpad

That’s it!  Enjoy the session, ask good questions, help others when you can, and happy hacking.

Stuart Langridge: Writing a simple desktop widget for Ubuntu

Planet Ubuntu - Mon, 2014-03-03 00:47

I needed a way to display the contents of an HTML file on my desktop, in such a way that it looks like it’s part of the wallpaper. Fortunately, most of the answer was in How can I make my own custom desktop widgets? on Ask Ubuntu, along with Create a Gtk Window insensitive to Show Desktop and Won’t show in Launcher. Combining that with the excellent Python GI API Reference which contains everything and which I can never find when I go looking for it, I came up with a simple little Python app. I have it monitoring the HTML file which it displays for changes; when that file changes, I refresh the widget.

from gi.repository import WebKit, Gtk, Gdk, Gio
import signal, os

class MainWin(Gtk.Window):
    def __init__(self):
        # skip the pager and taskbar so the window behaves like part of the desktop
        Gtk.Window.__init__(self, skip_pager_hint=True, skip_taskbar_hint=True)
        self.set_wmclass("sildesktopwidget", "sildesktopwidget")
        self.set_type_hint(Gdk.WindowTypeHint.DOCK)
        self.set_size_request(600, 400)
        self.set_keep_below(True)

        # transparency
        screen = self.get_screen()
        rgba = screen.get_rgba_visual()
        self.set_visual(rgba)
        self.override_background_color(Gtk.StateFlags.NORMAL, Gdk.RGBA(0, 0, 0, 0))

        self.view = WebKit.WebView()
        self.view.set_transparent(True)
        self.view.override_background_color(Gtk.StateFlags.NORMAL, Gdk.RGBA(0, 0, 0, 0))
        self.view.props.settings.props.enable_default_context_menu = False
        self.view.load_uri("file://path/to/html/file")

        box = Gtk.Box()
        self.add(box)
        box.pack_start(self.view, True, True, 0)

        self.set_decorated(False)
        self.connect("destroy", lambda q: Gtk.main_quit())
        self.show_all()
        self.move(100, 100)

def file_changed(monitor, file, unknown, event):
    # reload the WebView whenever the monitored HTML file changes
    mainwin.view.reload()

if __name__ == '__main__':
    # the HTML file needs to have background colour rgba(0,0,0,0)
    gio_file = Gio.File.new_for_path("/path/to/html/file")
    monitor = gio_file.monitor_file(Gio.FileMonitorFlags.NONE, None)
    monitor.connect("changed", file_changed)
    mainwin = MainWin()
    signal.signal(signal.SIGINT, signal.SIG_DFL)  # make ^c work
    Gtk.main()

Lots of little tricks in there: the widget acts as a widget (that is: it stays glued to the desktop, and doesn’t vanish when you Show Desktop) because of the Gdk.WindowTypeHint.DOCK, skip_pager_hint=True, skip_taskbar_hint=True, and set_keep_below(True) parts; it’s transparent because the HTML file sets its background colour to rgba(0,0,0,0) with CSS and then we use override_background_color to make that actually be transparent; the window has no decorations because of set_decorated(False). Then I just add it to Startup Applications and we’re done.
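For completeness, the "add it to Startup Applications" step can also be done by hand with a freedesktop autostart entry (a sketch; the script path and the file name desktop-widget.py are hypothetical):

# Create an autostart entry so the widget launches on login
mkdir -p ~/.config/autostart
cat > ~/.config/autostart/sildesktopwidget.desktop <<EOF
[Desktop Entry]
Type=Application
Name=sildesktopwidget
Exec=python /path/to/desktop-widget.py
EOF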
