Planet Ubuntu
Subscribe to Planet Ubuntu feed
Planet Ubuntu - http://planet.ubuntu.com/
Updated: 3 hours 26 min ago

Sam Hewitt: How-to Shape Tortellini

Fri, 2014-04-11 17:00

Tortellini is simply one of many filled pasta shapes, but it involves a bit more work to shape than others (such as ravioli). So if you're into the meticulous, then you'll enjoy these. :)

Instead of placing a filling between two squares of pasta & then sealing (ravioli), tortellini makes use of one square folded onto itself into a triangle which is then pinched into a "navel" shape.

    Things you'll need:
  • fresh pasta dough, rolled into sheets
  • flour, for dusting
  • small pizza wheel or knife – using a pizza wheel makes cutting a lot easier
  • water
    Directions
  1. Flour a clean, dry surface, such as a countertop, table or large cutting board.
  2. Place a small bowl of water within reach.
  3. Dust the pasta sheets with flour & cut into ~3-inch squares.
  4. Scoop approximately 1 teaspoon of your filling into the center of a pasta square.
  5. To seal, dip a finger in the water and then dampen two edges of the pasta square – the water makes the dough slightly gummy so it will stick to itself.
  6. Fold one edge over and, starting at the smaller (~45°) corner, gently pinch it closed.
  7. Next, gently pinch the other side closed, starting at the larger (~90°) corner.
  8. If you've used too much filling, some will squirt out, but that's alright.
  9. Flip the half-formed tortellini over so the flatter side is facing upwards.
  10. Wet the two opposing corners and fold each towards the center (with overlap). Pinch them together.
  11. Transfer each completed tortellini to a well-floured tray where they can remain like this until you're ready to cook them.
  12. The tortellini can be refrigerated in this state for a few days, or frozen for months.
  13. To cook, drop them into salted, boiling water; they're done when they start to float. (They can go in still frozen, if you froze them.)

Alan Bell: OpenERP and Heartbleed

Fri, 2014-04-11 14:39

No doubt by now you will have seen loads of coverage of the Heartbleed bug in the media. This is a pretty bad bug; there have been other huge bugs in the past too, but this one has a very media-friendly name and a cute logo, so it gets the coverage it deserves. In short, it affects HTTPS connections to web servers, and other types of server that use SSL in less obvious ways. We have been updating and fixing the servers we host, but we know that rather a lot of people have been using our guides to installing OpenERP. If you have, and you set up HTTPS connections to the server (part 2 of the guides), then you are probably vulnerable to the Heartbleed bug. OpenERP itself does not do the HTTPS bit; we used either Apache or Nginx as a reverse proxy to add the SSL layer.

Firstly, use this testing tool – http://filippo.io/Heartbleed – to see if your system is vulnerable. You may need to check the box to ignore certificates if you are using a self-signed certificate. The fix to OpenSSL is already in the Ubuntu repositories, so you just need to pull the upgrade (this will update all packages, which is fine):

sudo apt-get update
sudo apt-get dist-upgrade

and then restart your web server service, which could be Apache or Nginx. If you can’t remember which, just try both; one will fail with an unrecognised-service error.

sudo service nginx restart
sudo service apache2 restart

This might get you up and running in seconds, but I found on one machine the openerp process had got a bit upset. If you can’t log in after restarting the web process, you could restart the openerp server process, or just restart everything with:

sudo reboot

Now use http://filippo.io/Heartbleed again to confirm that you are fixed.
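You can also sanity-check the installed package version locally. A minimal sketch (the fixed version string for 12.04 LTS, 1.0.1-4ubuntu5.12, comes from USN-2165-1; other releases have different fixed versions, and `sort -V` only approximates dpkg's version ordering):

```shell
# is_vulnerable VERSION: succeeds if VERSION predates the fixed 12.04 build.
fixed="1.0.1-4ubuntu5.12"
is_vulnerable() {
    # strictly older than $fixed according to GNU version sort
    [ "$1" != "$fixed" ] && \
    [ "$(printf '%s\n%s\n' "$1" "$fixed" | sort -V | head -n1)" = "$1" ]
}

is_vulnerable "1.0.1-4ubuntu5.11" && echo "1.0.1-4ubuntu5.11: still vulnerable"
is_vulnerable "1.0.1-4ubuntu5.12" || echo "1.0.1-4ubuntu5.12: patched"
# On a real machine, feed in the installed version:
#   is_vulnerable "$(dpkg-query -W -f '${Version}' openssl)"
```

On releases other than 12.04 LTS, substitute the fixed version from the corresponding security notice.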

If you are not using HTTPS you might be fine: you have an inherently less secure connection to your server, but the server won’t serve up its memory to anyone who asks for it. Even if you are not using HTTPS right now, do update anyway; it is a good thing to do.

Zygmunt Krynicki: Checkbox challenges for 2015

Fri, 2014-04-11 12:29
Having a less packed day for the first time in a few weeks I was thinking about the next steps for the Checkbox project. There are a few separate big tasks that I think should happen over the next 6-18 months.

First of all, our large collection of tests needs maintenance. We need to keep adapting it to changing requirements and new hardware. We need to fix bugs and make it more robust. We also need to add some polish to the user interface, making sure all our test programs behave in a uniform way, use correct wording, can be localized, and so on. Those are all important to keep the project healthy. We also have a big challenge ahead of us, with the whole touch world entering the Ubuntu ecosystem. We will have to revisit some decisions, decide which libraries, tools and layers to use to test certain features, and make sure we don't leave anything behind. This is very challenging, as we really do have a lot of existing tests. We also need to make them work the same way regardless of how they are started (classic Ubuntu, touch Ubuntu, remote Ubuntu server).

The core tools got an amazing boost over the past 12 months, going from pretty old technology that was very flexible but hard to understand and modify to something that is probably just as flexible but far easier to understand and work with. Still, it's not all roses. The Ubuntu SDK UI needs a lot of work to get right: it has usability issues and architecture design issues. We also have a big disconnect between the core technology (Python 3) and the Qt+QML C++ codebase, which talks over D-Bus with the rest of the stack. That brings friction and is 10x harder to modify than an all-Python solution. Ideally we'd like to switch to PyQt, but how that fares in the future touch world is hard to say. I suspect that our remote testing story will help us make a smooth transition that won't compromise our existing effort and equally won't collide with the direction set by the first Ubuntu touch release.

Perhaps not in the spotlight, but we definitely need to work on "whitelists" (aka test plans). We need to learn how our users take our stack and remix it to solve their problems. Our test plan technology is ancient and shows its weaknesses. We need 2.0 test plans that allow us to express the problems we need to solve clearly, unambiguously and efficiently. We need to improve our per-device-instance test support. We need to provide rich metadata for user interfaces. We need better vocabulary to create true test plans that can react to results in a way unconstrained by the design of the legacy Checkbox first written over seven years ago. We also need to execute those changes in a way that has no flag days or burnt bridges. Nobody likes to build on moving sand, and we're here to provide a solid foundation for other teams at Canonical and everyone in the free software ecosystem.

Lastly, we have the elephant in the room called deployment. Checkbox doesn't by itself handle deploying system images and configuration onto bare metal (we have a very old, unsupported project for doing that), and the metal is changing very rapidly. Servers are quite unlike desktops, laptops (Ethernet-less ultrabooks?) and, most importantly, tablets and the whole touch-device ecosystem behind them. In the next 12 months we need a very good story and a solid plan on how to execute the transition from what we have now onto something that keeps us going for the next few years, at least. Luckily, Canonical has such a project already: MAAS. MAAS was envisioned for big-iron hardware, but from our point of view we really want a uniform API for all hardware, from that big-ass server in a data centre somewhere across the globe to the development board on your desk that will become the next tablet or phone product. We want to do the same set of operations on all of the devices in this spectrum: manage, control, track, re-image. The means and technology to do that differ widely, and from experience I can tell you this is a zoo with all the strange animals you can think of, but I'm confident we can make it work.

So there you have it. Checkbox over the next 12+ months, as seen through my eyes.

Ubuntu GNOME: Ubuntu GNOME Trusty Tahr Release Candidate

Fri, 2014-04-11 10:56

Hi,

This is the final week of the Trusty Tahr cycle and we are now at its very last phase: the Final Freeze and Release Candidate.

The Final Freeze vs The Final Release
You need to understand the difference between The Final Release and The Final Freeze.

Final Freeze – April 10th

Final Release – April 17th

Adam Conrad from the Ubuntu Release Team has explained this in detail in his email announcing the Final Freeze of the Trusty Tahr cycle.

What does all this mean?
It means that Ubuntu GNOME Trusty Tahr Daily Builds are considered to be RC.

What does RC (Release Candidate) mean?

Release Candidate

“During the week leading up to the final release, the images produced are all considered release candidates.”

The Final Round of Testing Ubuntu GNOME Trusty Tahr
This is the final round and the last week to test Ubuntu GNOME Trusty Tahr.

The Ubuntu GNOME QA Team is now testing the Ubuntu GNOME Trusty Tahr Release Candidate.

As always, your help, support and testing are highly needed and greatly appreciated.

All about Testing Ubuntu GNOME.

Download and Test Ubuntu GNOME Trusty Tahr Release Candidate.

Feel free to Contact Us.

Thank you for choosing, testing and supporting Ubuntu GNOME. Without your great and amazing support, we would never have reached this point.

Stephan Adig: Network Engineers, we are looking for you!

Fri, 2014-04-11 08:24

So, we have a Datacenter Engineer Position open, and also a Network Engineer Position.

As a prerequisite, you should be able to travel throughout Europe without any issues, and you should read/write/speak English in addition to your native language.

When you

  • are comfortable with travel
  • are familiar with routers and switches of different vendors
  • know that bonding slaves don’t need a safe word
  • know that BGP is no medical condition
  • know how to crimp CAT 5/6/7
  • know the differences between the different types of fibre-optic (LWL) cable connections
  • have fun working with the smartest guys in this business
  • want to even learn something new
  • love games
  • love streaming
  • love PlayStation (well, this is not a must)

Still with me?

You will work out of our Berlin Office, which is in the Heart of Berlin.

You will work directly with our Southern California Based Network Engineering Team, with our Datacenter Team and with our SRE Team.

The Berlin team is a team of several nationalities, which combines the awesomeness of Spanish, Italian, French and German Minds. We all love good food and drinks, good jokes, awesome movies, and we all love to work in the hottest datacenter environments ever.

Is this something for you?

If so, you should apply now.

And applying for this job is as easy as provisioning a Cisco Nexus router today.

Two ways:

  1. You point your browser to our LinkedIn Page and press ‘Apply Now’. (Please refer to me, and mention where you read this post.)
  2. Or you send your CV directly through the usual channels to me (PDF or ASCII, with a profile picture attached) and I’ll put you on top of the stack.

Hope to see you soon, and to welcome you as part of our Sony/Gaikai family in Berlin.

I know some people are afraid of LinkedIn so here is the official job description from our HR Department.

Job Description:

As a Network Engineer with deployment focus you will be responsible for rollout logistics, network deployment process and execution. You will work closely with remote Network Engineers and Datacenter Operations to turn up, configure, test and deliver Network platforms across POPs and Datacenters.

Principal Duties / Responsibilities:
  • Responsible for rollout logistics and coordination
  • Responsible for network deployment processes
  • Responsible for network deployment execution
  • Deployment and provisioning of Transport, Routing and Switching platforms
Required Knowledge / Skills:
  • Comfortable with travel
  • Comfortable with optical transport, DWDM
  • Comfortable with various network operating systems
  • Comfortable with some network testing equipment
  • Comfortable with structured cabling
  • Comfortable with interface and chassis diagnostics
  • Comfortable with basic power estimation and calculation
Desired Skills and Experience Requirements:
  • BA degree or equivalent experience
  • 1-3 years working in a production datacenter environment
  • Experience with asset management and reporting
  • Knowledge of various vendor RMA processes to deal with repairs and returns

  • Keen understanding of data center operations, maintenance and technical requirements including replacement of components such as hard drives, RAM, CPUs, motherboards and power supplies.

  • Understanding of the importance of Change Management in an online production environment
  • High energy and an intense desire to positively impact the business
  • Ability to rack equipment up to 50 lbs unassisted

  • High aptitude for technology

  • Highly refined organizational skills
  • Strong interpersonal and communication skills and the ability to work collaboratively
  • Ability to manage multiple tasks at one time

Up to 50% travel required with this position.

Javier L.: UGJ-MX, 5-April-2014

Thu, 2014-04-10 19:33

Last weekend the Ubuntu-mx team hosted their fourth UGJ in Mexico City! Isn’t it wonderful when you meet like-minded people and everything just flows? We discussed Free Software, Ubuntu, the Ubuntu MX team and our favorite quesadilla recipes in detail (I love the ones with chorizo and cheese). We took a bunch of photos and video for those who couldn’t attend =(

Anyway, thanks for attending and we’ll see you in the next one! Have fun =D


Ubuntu Podcast from the UK LoCo: S07E02 – The One Where Everybody Finds Out

Thu, 2014-04-10 13:27

We’re back with the second episode of Season Seven of the Ubuntu Podcast! Alan Pope, Mark Johnson, Tony Whitmore, and Laura Cowen are drinking tea and eating early Easter cakes in Studio L.


In this week’s show:

We’ll be back next week, so please send your comments and suggestions to: podcast@ubuntu-uk.org
Join us on IRC in #uupc on Freenode
Leave a voicemail via phone: +44 (0) 203 298 1600, sip: podcast@sip.ubuntu-uk.org and skype: ubuntuukpodcast
Follow our twitter feed http://twitter.com/uupc
Find our Facebook Fan Page
Follow us on Google Plus

Kubuntu Wire: Install Kubuntu on Windows XP Systems

Thu, 2014-04-10 13:27

The KDE-friendly web magazine Muktware has posted an article on installing Kubuntu on Windows XP systems, for the millions of Windows XP machines which are now out of support. With the Heartbleed SSL bug making the national news, you really can’t afford to be out of support.


Ubuntu App Developer Blog: Ubuntu App Showdown – some more time to tidy things up

Thu, 2014-04-10 10:05

Shortly before the submission deadline last night we had some small technical hiccups in the Ubuntu Software Store. This was resolved very quickly (thanks a lot to everyone who worked on this!), but we decided to give everyone another day to make up for it.

The new deadline is today, 10th April 2014, 23:59 UTC.

Please verify that your app still works, everything is tidy, you have submitted it to the store, and you have filled out the submission form correctly. Here’s how.

Submit your app

This is obviously the most important bit and needs to happen first. Don’t leave this to the last minute. Your app might have to go through a couple of reviews before it’s accepted in the store. So plan in some time for that. Once it’s accepted and published in the store, you can always, much more quickly, publish an update.

Submit your app.

Register your participation

Once your app is in the store, you need to register your participation in the App Showdown. To make sure your application is registered for the contest and the judges review it, you’ll need to fill in the participation form. You can start filling it in now, any time until the submission deadline; it should only take you two minutes to complete.

Fill out the submission form.

Questions?

If you have questions or need help, reach out (sooner rather than later) to our great community of Ubuntu App Developers.

Ubuntu Server blog: OpenStack Continuous Integration on Ubuntu 101

Thu, 2014-04-10 00:39

We (the Canonical OIL dev team) are about to finish the production roll out of our OpenStack Interoperability Lab (OIL). It’s been an awesome time getting here so I thought I would take the opportunity to get everyone familiar, at a high level, with what OIL is and some of the cool technology behind it.

So what is OIL?

For starters, OIL is essentially continuous integration of the entire stack, from hardware preparation, to Operating System deployment, to orchestration of OpenStack and third party software, all while running specific tests at each point in the process. All test results and CI artifacts are centrally stored for analysis and monthly report generation.

Typically, setting up a cloud (particularly OpenStack) for the first time can be frustrating and time consuming. The potential combinations and permutations of hardware/software components and configurations can quickly become mind-numbing. To help ease the process and provide stability across options we sought to develop an interoperability test lab to vet as much of the ecosystem as possible.

To accomplish this we developed a CI process for building and tearing down entire OpenStack deployments in order to validate every step in the process and to make sure it is repeatable. The OIL lab is comprised of a pool of machines (including routers/switches, storage systems, and compute servers) from a large number of partners. We continually pull available nodes from the pool, set up the entire stack, go to town testing, and then tear it all back down again. We do this so many times that we are already deploying around 50 clouds a day and expect to scale this by a factor of 3-4 with our production roll-out. Generally, each cloud is composed of about 5-7 machines, but we have the ability to scale each test as well.

But that’s not all: in addition to testing, we also do bug triage and defect analysis, and we work both internally and with our partners on fixing as many things as we can – all to ensure that deploying OpenStack on Ubuntu is as seamless a process as possible for users and vendors alike.

Underlying Technology

We didn’t want to reinvent the wheel, so we are leveraging the latest Ubuntu technologies as well as some standard tools. In fact, the majority of the OIL infrastructure is public code you can get and start playing with right away!

Here is a small list of what we are using for all this CI goodness:

  • MAAS — to do the base OS install
  • Juju — for all the complicated OpenStack setup steps — and linking them together
  • Tempest — the standard test suite that pokes and prods OpenStack to ensure everything is working
  • Machine selection & random config generation code — to make sure we get a good hardware/software cross-section
  • Jenkins — gluing everything together

Using all of this we are able to manage our hardware effectively, and with a similar setup you easily can, too. This is just a high-level overview, so we will have to leave the in-depth technical discussions for another time.
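As a flavour of what the machine-selection and config-generation step looks like, here is a toy sketch (the node names, pool size and option lists are made up for illustration; the real OIL code is considerably more involved):

```shell
# Pick five free nodes from the pool and one random config permutation,
# the way each OIL run pairs a set of machines with an OpenStack configuration.
pool="node01 node02 node03 node04 node05 node06 node07 node08"
hypervisors="kvm lxc"
networking="nova-network neutron"

nodes=$(printf '%s\n' $pool | shuf -n 5 | sort | tr '\n' ' ')
hv=$(printf '%s\n' $hypervisors | shuf -n 1)
net=$(printf '%s\n' $networking | shuf -n 1)

echo "cloud config: hypervisor=$hv networking=$net"
echo "deploying on: $nodes"
```

Each run would then hand the chosen nodes to Juju for deployment and run Tempest against the result before tearing everything down.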

More to come

We plan on having a few more blog posts covering some of the more interesting aspects (both results we are getting from OIL and some of the underlying technology).

We are getting very close to OIL’s official debut and are excited to start publishing some really insightful data.

Ben Howard: Updated 12.04.4 LTS Cloud Images in response to Heartbleed OpenSSL bug

Wed, 2014-04-09 20:04
Many of our Cloud Image users have inquired about the availability of updated Ubuntu Cloud Images in response to the Heartbleed OpenSSL vulnerability [1]. Ubuntu released updated OpenSSL packages yesterday, 08 April 2014 [2]. Due to the exceptional circumstances and severity of the Heartbleed OpenSSL bug, Canonical has released new 12.04.4 LTS images at [3]. In the coming days, new Cloud Images for Ubuntu 12.10 and Ubuntu 13.10 will be released.

Canonical is working with Amazon to get the Quickstart and the AWS Marketplace links updated. In the meantime, you can find new AMI IDs for 12.04.4 LTS at [3] and [4]. Also, the snapshots for Amazon have the volume-create permission granted on the latest images.

Windows Azure [5], Joyent [6] and HP [7, 8, 9] all have updated 12.04.4 LTS images in their respective galleries.

If you are running an affected version of OpenSSL on 12.04 LTS, 12.10 or 13.10, you are strongly encouraged to update. For new instances, it is recommended to either use an image with a serial newer than 20140408, or update your OpenSSL package immediately upon launch. Finally, if you need documentation on enabling unattended upgrades, please see [10].
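Because image serials are date stamps, checking whether a given image predates the fix boils down to a numeric comparison. A small sketch (20140408 is the first fixed serial, per the paragraph above):

```shell
# check_serial SERIAL: report whether a cloud-image serial includes the fix.
first_fixed=20140408
check_serial() {
    if [ "$1" -ge "$first_fixed" ]; then
        echo "serial $1: includes the OpenSSL fix"
    else
        echo "serial $1: update openssl immediately after launch"
    fi
}

check_serial 20140407
check_serial 20140408
```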


[1] https://www.openssl.org/news/secadv_20140407.txt
[2] http://www.ubuntu.com/usn/usn-2165-1/
[3] http://cloud-images.ubuntu.com/releases/precise/release-20140408/
[4] http://cloud-images.ubuntu.com/locator/ec2/
[5] Azure: Ubuntu-12_04_4-LTS-amd64-server-20140408-en-us-30GB
[6] Joyent Image "ubuntu-certified-12.04", fe5aa6c0-0f09-4b1f-9bad-83e453bb74f3
[7] HP US-West-1: 27be722e-d2d0-44f0-bebe-471c4af76039
[8] HP US-East-1: 8672f4c6-e33d-46f5-b6d8-ebbeba12fa02
[9] Waiting on HP for replication to legacy regions az-{1,2,3}
[10] https://help.ubuntu.com/community/AutomaticSecurityUpdates

Dustin Kirkland: Ubuntu 14.04 LTS -- Security for Human Beings

Wed, 2014-04-09 15:08


In about an hour, I have the distinct honor to address a room full of federal sector security researchers and scientists at the US Department of Energy's Oak Ridge National Labs, within the Cyber and Information Security Research Conference.

I'm delighted to share with you the slide deck I have prepared for this presentation.  You can download a PDF here.

To a great extent, I have simply reformatted the excellent Ubuntu Security Features wiki page our esteemed Ubuntu Security Team maintains, into a format by which I can deliver as a presentation.

Hopefully you'll learn something!  I certainly did, as I researched and built this presentation ;-)
On a related security note, it's probably worth mentioning that Canonical's IS team have updated all SSL services with patched OpenSSL from the Ubuntu security archive and restarted all relevant services (using Landscape, for the win) in response to the Heartbleed vulnerability. I will release an updated pollinate package in a few minutes, to ship the new public key for entropy.ubuntu.com.


Stay safe,
Dustin

Julian Andres Klode: ThinkPad X230 UEFI broken by setting a setting

Wed, 2014-04-09 12:57

Today, I decided to set my X230 back to UEFI-only boot, after having changed that for a BIOS upgrade recently (to fix a resume bug). I then chose to save the settings and received several error messages telling me that the system had run out of resources (probably storage space for UEFI variables).

I rebooted my machine and saw no logo appear – just something like an underscore on a text console. The system appears to boot normally otherwise, and once the i915 module is loaded (and we switch away from UEFI’s Graphical Output Protocol [GOP]) the screen works correctly.

So it seems the GOP broke.

What should I do next?


Filed under: General

Stephan Adig: We are hiring

Wed, 2014-04-09 08:03

Normally I don’t write this type of post, but I know what’s coming up here, and we need people.

As long as you have a European Passport and/or a Visa which entitles you to travel across Europe without issues, you are already interesting.

You are even more interesting when

  • you like working in a fast paced environment
  • you like working with Hardware
  • you are not afraid of moving several hundred racks (yes, racks, not servers) of bare metal
  • you like working in an environment where OpenSource is one of the main drivers
  • you like working with the smartest people in our business
  • you like automation
  • you like being in a Datacenter
  • you like gaming
  • you like streaming
  • you like traveling
  • you read/write/speak English (technically and socially)
  • you like Sony PlayStation (oh well, that’s a plus but not a must ;))
  • you are not afraid

If most of this applies to you, we want to hear from you.

You’ll work from Berlin, Germany’s capital. Our office is in the heart of Berlin, one of the nicest places in the city.

We are a team of French, Italian, Spanish and German People.

You’ll work closely with the US Southern California Based team and as well with the EU SRE Team.

If you think you are the right person, what are you waiting for?

Applying for this job is as easy as installing Ubuntu.

Two ways to apply:

  1. You apply for the job on our LinkedIn Page and refer to Me (Stephan Adig) (you can also mention where you read this post)
  2. Or you send me an email with all your details and your CV (PDF or ASCII, with a picture attached) and I’ll put you on top of the stack.

Anyways, I know some people are scared of LinkedIn so here is the official job description from our HR Department:

Data Center Operations Engineer Job description

Gaikai (外海, lit. “open sea”, i.e. an expansive outdoor space) is a company which provides technology for the streaming of high-end video games. Founded in 2008, it was acquired by Sony Computer Entertainment in 2012. Its technology has multiple applications, including in-home streaming over a local wired or wireless network (as in Remote Play between the PlayStation 4 and PlayStation Vita), as well as cloud-based gaming where video games are rendered on remote servers and delivered to end users via internet streaming (such as the PlayStation Now game streaming service). As a startup, before its acquisition by Sony, the company announced many partners using the technology from 2010 through 2012, including game publishers, web portals, retailers and consumer electronics manufacturers.

Gaikai is looking for a talented Data Center Operations Engineer to be based in our Berlin office. This position is for an experienced candidate who will work within the Data Center Operations team and have hands-on responsibility for ensuring our production datacenter environments are operating efficiently. This position will work closely with the System Engineering and Network Operations teams and provide hands-on support for them. The primary responsibility of this role is to rack and cable new hardware, upgrade existing servers and network equipment, and keep accurate inventory information for all systems. You will also be responsible for assisting in the development of processes and procedures related to hardware deployment, upgrades and break/fix issues.

Key Responsibilities:

  • Support existing hardware in multiple datacenter locations
  • Plan and execute installations in multiple datacenter locations in a timely manner
  • Ensure accurate inventory information for multiple datacenter locations
  • Work closely with Data Center Operations team to track orders and deliveries to multiple datacenter locations
  • Work with the Director of Data Center Operations on datacenter status reports for Senior Management for each datacenter location
  • Refine and document support process for each location including the handling of RMA requests
Desired Skills and Experience

Requirements:

  • BA degree or equivalent experience
  • 1-3 years working in a production datacenter environment
  • Experience with asset management and reporting
  • Knowledge of various vendor RMA processes to deal with repairs and returns
  • Keen understanding of data center operations, maintenance and technical requirements including replacement of components such as hard drives, RAM, CPUs, motherboards and power supplies.
  • Understanding of the importance of Change Management in an online production environment
  • High energy and an intense desire to positively impact the business
  • Ability to rack equipment up to 50 lbs unassisted
  • High aptitude for technology
  • Highly refined organizational skills
  • Strong interpersonal and communication skills and the ability to work collaboratively
  • Ability to manage multiple tasks at one time

Up to 50% travel required with this position.

Ubuntu GNOME: Upgrade Testing

Wed, 2014-04-09 07:44

Hi,

Ubuntu GNOME, as an official flavour of Ubuntu, has the same Release Schedule as Ubuntu, and the same goes for all the other official flavours.

When it comes to Testing Ubuntu GNOME, we need to make sure everything is working as expected without any problem.

That said, we would like to invite you to help Ubuntu GNOME with Upgrade Testing.

How to help Ubuntu GNOME with Upgrade Testing?
The idea is very simple. We need to upgrade Ubuntu GNOME 13.10 to Ubuntu GNOME Trusty Tahr and test the upgrade process.

If you have Ubuntu GNOME 13.10 installed already, we would really appreciate your help in this regard.

If Ubuntu GNOME 13.10 is not installed, then kindly install it and do the upgrade. Installing Ubuntu GNOME from LiveUSB should not take more than 10 minutes.

How to do an upgrade from 13.10 to Trusty Tahr?
Before we get into this, kindly have a read of the Upgrades Documentation.

Whether you’re helping Ubuntu GNOME Team with Testing or you’re a fan of running unstable releases on your machine, kindly make sure to backup your important files before anything else.
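As a minimal illustration of that backup step (the paths here are throwaway ones created for demonstration; point tar at whatever you actually need to keep):

```shell
# Archive a directory of important files before upgrading, then list the
# archive contents to verify the backup.
demo=$(mktemp -d)
mkdir -p "$demo/home/docs"
echo "important notes" > "$demo/home/docs/notes.txt"

tar -czf "$demo/pre-upgrade-backup.tar.gz" -C "$demo" home
tar -tzf "$demo/pre-upgrade-backup.tar.gz"
```

Copy the resulting archive somewhere off the machine (external drive, another host) before starting the upgrade.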

To upgrade Ubuntu GNOME 13.10 Stable to the Ubuntu GNOME Trusty Tahr Development Release, kindly have a read of Upgrading to Development Releases.

Share your Testing Results
Please make sure to share your Testing Results with Ubuntu GNOME QA Team. The more feedback in this regard, the better.

Let’s make sure that our very first LTS release of Ubuntu GNOME is as solid as a rock.

Thank you for helping, supporting and testing Ubuntu GNOME!

As always, for more information about testing, please see Ubuntu GNOME Testing Wiki Page.

Should you have any question, please don’t hesitate to Contact Us.

Happy Testing

Daniel Pocock: Double whammy for CACert.org users

Wed, 2014-04-09 05:47

If you are using OpenSSL (or ever did use it with any of your current keypairs in the last 3-4 years), you are probably in a rush to upgrade all your systems and replace all your private keys right now.

If your certificate authority is CACert.org, then there is an extra surprise in store for you. CACert.org recently changed their signing hash to SHA-512, and some client/server combinations silently fail to authenticate with this hash. Any replacement certificates you obtain from CACert.org today are likely to be signed using the new hash. Amongst other things, if you use CACert.org as the CA for a distributed LDAP authentication system, you will find users unable to log in until you upgrade all SSL client code or change all clients to trust an alternative root.
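To check which hash a certificate is signed with (for instance, to confirm a replacement certificate uses the new SHA-512 hash before your clients trip over it), `openssl x509` will show you. A sketch against a throwaway self-signed certificate (assumes the `openssl` command-line tool is installed):

```shell
# Generate a throwaway SHA-512-signed certificate, then inspect its
# signature algorithm. Run the same x509 command against a real
# certificate to see what your CA signed it with.
dir=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -sha512 -nodes -days 1 \
    -subj "/CN=hash-test" \
    -keyout "$dir/key.pem" -out "$dir/cert.pem" 2>/dev/null
openssl x509 -in "$dir/cert.pem" -noout -text | grep 'Signature Algorithm' | head -n1
```

For a SHA-512-signed RSA certificate you should see sha512WithRSAEncryption; older SHA-1 certificates report sha1WithRSAEncryption.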

Costales: #startubuntu

Tue, 2014-04-08 17:46
Discover a new space for your computer
Today is the day! Choose usability, beauty, speed, freedom, community!
Artwork by Rafael Laguna.

Ubuntu Server blog: 2014-04-08 Meeting Minutes

Tue, 2014-04-08 17:12

A few last pieces are being worked on in the final couple of days before final freeze.

  • James Page is struggling to find a release team member to review the docker.io feature freeze exception request (bug 1295093).
  • The juju-quickstart MIR is deferred; Robie will upload some final bugfixes soon.
  • Louis is working on some last minute fixes to sosreport.
  • Parameswaran reports that all smoke tests are passing.
  • Stefan is polishing some last pieces in Xen and libvirt.

Full minutes: https://wiki.ubuntu.com/MeetingLogs/Server/20140408
Log: http://ubottu.com/meetingology/logs/ubuntu-meeting/2014/ubuntu-meeting.2014-04-08-16.01.log.html

Ubuntu Kernel Team: Kernel Team Meeting Minutes – April 08, 2014

Tue, 2014-04-08 17:11
Meeting Minutes

IRC Log of the meeting.

Meeting minutes.

Agenda

20140408 Meeting Agenda


ARM Status

T/master-next: LP#1303657 (“Cannot boot trusty kernel on qemu-system-arm”): we
were missing the correct dtb (it wasn’t necessary in Saucy) and qemu was waiting
for a console over JTAG (HVC_DCC) that would never show up. We are waiting for
confirmation from the reporter before sending the patches.


Release Metrics and Incoming Bugs

Release metrics and incoming bug data can be reviewed at the following link:

http://people.canonical.com/~kernel/reports/kt-meeting.txt


Milestone Targeted Work Items

  • apw: core-1311-kernel (4 work items), core-1311-cross-compilation (2 work items), core-1311-hwe-plans (1 work item)
  • ogasawara: core-1403-hwe-stack-eol-notifications (2 work items)
  • smb: servercloud-1311-openstack-virt (3 work items)


Status: Trusty Development Kernel

We entered into Kernel Freeze for Trusty last Thurs and have uploaded
what we intend to be the final kernel for Trusty, 3.13.0-23.45. All
patches from here on out are subject to our Ubuntu SRU policy and only
critical bug fixes will warrant an upload before release next week.
—–
Important upcoming dates:
Thurs Apr 17 – Ubuntu 14.04 Final Release (~1 week away)


Status: CVE’s

The current CVE status can be reviewed at the following link:

http://people.canonical.com/~kernel/cve/pkg/ALL-linux.html


Status: Stable, Security, and Bugfix Kernel Updates – Saucy/Raring/Quantal/Precise/Lucid

Status for the main kernels, until today (Mar. 25):

  • Lucid – Verification and Testing
  • Precise – Verification and Testing
  • Quantal – Verification and Testing
  • Saucy – Verification and Testing

    Details of the currently open tracking bugs:

  • http://people.canonical.com/~kernel/reports/kernel-sru-workflow.html

    For SRUs, the SRU report is a good source of information:

  • http://people.canonical.com/~kernel/reports/sru-report.html

    Schedule:

    cycle: 30-Mar through 26-Apr
    ====================================================================
    28-Mar Last day for kernel commits for this cycle
    30-Mar – 05-Apr Kernel prep week.
    06-Apr – 12-Apr Bug verification & Regression testing.
    17-Apr 14.04 Released
    13-Apr – 26-Apr Regression testing & Release to -updates.


Open Discussion or Questions? Raise your hand to be recognized

No open discussion.

Daniel Pocock: reConServer for easy SIP conferencing

Tue, 2014-04-08 16:24

In the lead-up to the release of Debian wheezy, there was considerable debate about the Mumble conferencing software, which uses the deprecated and unsupported CELT codec. Although Mumble has very recently added Opus support, it is still limited by being a standalone solution without any support for distributed protocols like SIP or XMPP.

Making SIP conferencing easy

Of course, people could always set up SIP conferences by installing Asterisk, but for many use cases that may be overkill and may simply introduce additional security and administration overhead.

Enter reConServer

The popular reSIProcate SIP stack includes a high-level programming API, the Conversation Manager, dubbed librecon. It was developed and contributed to the open source project by Scott Godin of SIP Spectrum. In very simple terms, a Conversation object with two Participants is a phone call. A Conversation object with more than two Participants is a conference.

The original librecon includes a fully functional demo app, testUA, that allows you to control conferences from the command line.

As part of Google Summer of Code 2013, Catalin Constantin Usurelu took the testUA.cxx code and converted it into a proper daemon process. It is now available as a ready-to-run SIP conferencing server package in Debian and Ubuntu.

The quick and easy way to try it

Normally, a SIP conferencing server will be linked to a SIP proxy and other infrastructure.

For trying it out quickly, however, no SIP proxy is necessary.

Simply install the package with the default settings and then configure a client to talk to the reConServer directly by dialing the IP address of the server.

For example, set the following options in /etc/reConServer/reConServer.config:

UDPPort = 5062
EnableAutoAnswer = true

and it will happily accept all SIP calls sent to the IP address where it is running.
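To confirm the daemon is actually listening on the configured UDP port (5062 in the example config above), something like the following works on most modern Linux systems; the port number is just the one chosen earlier:

```shell
# List listening UDP sockets and look for the reConServer port.
ss -lun | grep 5062
```

If nothing shows up, check the reConServer logs and that the service has been started.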

Now configure Jitsi to talk to it directly in serverless SIP configuration:

Notice here that we simply put a username without any domain part; this tells Jitsi to create an account that can operate without a SIP proxy or registration server:

Calling in to the conference

Notice in the screenshot below that we simply dial the IP address and port number of the reConServer process, sip:192.168.1.100:5062. When the first call comes in, reConServer will answer and place the caller on hold. When the next caller arrives, the hold should automatically end and audio will be heard.

Next steps

To make it run as part of a proper SIP deployment, set the necessary configuration options (username, password, proxy) to make reConServer register to the SIP proxy. Users can then call the conference through the proxy.

To discuss any problems or questions, please come and join the recon-users mailing list or the Jitsi users list.

Consider using Wireshark to observe the SIP packets and learn more about the protocol.
