Planet Ubuntu
Planet Ubuntu - http://planet.ubuntu.com/

Valorie Zimmerman: Overcoming fear

Fri, 2014-09-26 09:21
In the last few posts, I've been exploring ideas expressed by Ed Catmull in Creativity, Inc. Everyone likes good ideas! But putting them into practice can be both difficult, and frightening. Change is work, and creating something which has never existed before, is creating the future. The unknown is daunting.

In meetings with the Braintrust, where new film ideas are viewed and judged, Catmull says,
It is natural for people to fear that such an inherently critical environment will feel threatening and unpleasant, like a trip to the dentist. The key is to look at the viewpoints being offered, in any successful feedback group, as additive, not competitive. A competitive approach measures other ideas against your own, turning the discussion into a debate to be won or lost. An additive approach, on the other hand, starts with the understanding that each participant contributes something (even if it's only an idea that fuels the discussion--and ultimately doesn't work). The Braintrust is valuable because it broadens your perspective, allowing you to peer--at least briefly--through other eyes. [101]

Catmull presents an example where the Braintrust found a problem in The Incredibles. In that case, they knew something was wrong but failed to diagnose it correctly. Even so, the director was able, with the help of his peers, to ultimately fix the scene. The problem turned out to be not the voices, but the physical scale of the characters on the screen!

This could happen because the director and the team let go of fear and defensiveness, and trust that everyone is working for the greater good. I often see us doing this in KDE, but in the Community Working Group cases which come before us, I sometimes see this breaking down. It is human nature to be defensive. It takes a healthy community to build the trust that lets us overcome that fear.

Ubuntu GNOME: Utopic Unicorn Beta 2

Fri, 2014-09-26 04:29

Hi,

The Ubuntu GNOME team is pleased to announce the release of Ubuntu GNOME Utopic Unicorn Beta 2 (Final Beta).

Please do read the release notes.

NOTE:

This is the Beta 2 release. Ubuntu GNOME Beta Releases are NOT recommended for:

  • Regular users who are not aware of pre-release issues
  • Anyone who needs a stable system
  • Anyone uncomfortable running a possibly frequently broken system
  • Anyone in a production environment with data or workflows that need to be reliable

Ubuntu GNOME Beta Releases are recommended for:

  • Regular users who want to help us test by finding, reporting, and/or fixing bugs
  • Ubuntu GNOME developers

For those who wish to use the latest release, please remember to do an upgrade test from Trusty Tahr (Ubuntu GNOME 14.04 LTS) to Utopic Unicorn Beta 2. Needless to say, Ubuntu GNOME 14.04 is an LTS release that is supported for 3 years, so this test is for those who want the latest system/packages and don't mind moving off the LTS (Long Term Support) release.

To help with testing Ubuntu GNOME:
Please see Testing Ubuntu GNOME Wiki Page.

To contact Ubuntu GNOME:
Please see our full list of contact channels.

Thank you for choosing and testing Ubuntu GNOME!

See Also:
Ubuntu 14.10 (Utopic Unicorn) Final Beta Released – Official Announcement

Xubuntu: Xubuntu 14.10 Beta 2 is released!

Fri, 2014-09-26 03:44

The Xubuntu team is pleased to announce the immediate release of Xubuntu 14.10 Beta 2. This is the final beta before the release in October. Before this beta we have landed various enhancements and some new features. Now it's time to polish the last rough edges and improve stability.
The Beta 2 release is available as torrents and direct downloads from
http://cdimage.ubuntu.com/xubuntu/releases/utopic/beta-2/

Highlights and known issues

Known issues
  • com32r error on boot with usb (1325801)
  • Installation into some virtual machines fails to boot (1371651)
  • Failure to configure wifi in live-session (1351590)
  • Black background to Try/Install dialogue (1365815)

To celebrate the 14.10 codename “Utopic Unicorn” and to demonstrate the easy customisability of Xubuntu, highlight colors have been turned pink for this release. You can easily revert this change by using the theme configuration application (gtk-theme-config) under the Settings Manager; simply turn Custom Highlight Colors “Off” and click “Apply”. Of course, if you wish, you can change the highlight color to something you like better than the default blue!

Workarounds for issues in virtual machines
  • Move to TTY1 (with VirtualBox, Right-Ctrl+F1), login and then start lightdm with “sudo service lightdm start”
  • Some people have been able to boot successfully after editing grub and removing the “quiet” and “splash” options
  • Install appears to start OK when systemd is enabled; append “init=/lib/systemd/systemd” to the “linux” line in grub (see the sketch after this list)
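For both grub workarounds, press “e” on the boot entry to edit it, then change the line that starts with “linux”. A sketch of that edit; the kernel path and options here are illustrative and vary by image:

  # original line (illustrative):
  linux /casper/vmlinuz boot=casper quiet splash --
  # with "quiet" and "splash" removed, or with systemd as init:
  linux /casper/vmlinuz boot=casper init=/lib/systemd/systemd --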

Eric Hammond: Throw Away The Password To Your AWS Account

Thu, 2014-09-25 22:04

reduce the risk of losing control of your AWS account by not knowing the root account password

As Amazon states, one of the best practices for using AWS is

Don’t use your AWS root account credentials to access AWS […] Create an IAM user for yourself […], give that IAM user administrative privileges, and use that IAM user for all your work.

The root account credentials are the email address and password that you used when you first registered for AWS. These credentials have the ultimate authority to create and delete IAM users, change billing, close the account, and perform all other actions on your AWS account.

You can create a separate IAM user with near-full permissions for use when you need to perform admin tasks, instead of using the AWS root account. If the credentials for the admin IAM user are compromised, you can use the AWS root account to disable those credentials to prevent further harm, and create new credentials for ongoing use.

However, if the credentials for your AWS root account are compromised, the person who stole them can take over complete control of your account, change the associated email address, and lock you out.

I have consulted for companies that lost control of the AWS root account holding their assets. You want to avoid this.

Proposal

Given:

  • The AWS root account is not required for regular use as long as you have created an IAM user with admin privileges

  • Amazon recommends not using your AWS root account

  • You can’t accidentally expose your AWS root account password if you don’t know it and haven’t saved it anywhere

  • You can always reset your AWS root account password as long as you have access to the email address associated with the account

Consider this approach to improving security:

  1. Create an IAM user with full admin privileges. Use this when you need to do administrative tasks. Activate IAM user access to account billing information for the IAM user to have access to read and modify billing, payment, and account information (see the CLI sketch after this list).

  2. Change the AWS root account password to a long, randomly generated string. Do not save the password. Do not try to remember the password. On Ubuntu, you can use a command like the following to generate a random password for copy/paste into the change password form:

    pwgen -s 24 1
  3. If you need access to the AWS root account at some point in the future, use the “Forgot Password” function on the signin form.
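A sketch of step 1 with the AWS CLI; this assumes a CLI already configured with credentials allowed to manage IAM, and the user name "admin" and policy name "admin-all" are only examples:

    # create the admin IAM user
    aws iam create-user --user-name admin
    # attach an inline policy granting full access
    aws iam put-user-policy --user-name admin --policy-name admin-all \
      --policy-document '{"Version":"2012-10-17","Statement":[{"Effect":"Allow","Action":"*","Resource":"*"}]}'
    # give the user a console password (generated the same way as in step 2)
    aws iam create-login-profile --user-name admin --password "$(pwgen -s 24 1)"

The billing-access switch in step 1 still has to be activated from the account settings page while signed in as the root user.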

It should be clear from this that protecting access to your email account is critical to your overall AWS security, as that is all that is needed to reset your password. But that has been true for many online services for many years.

Caveats

You currently need to use the AWS root account in the following situations:

  • to change the email address and password associated with the AWS root account

  • to deactivate IAM user access to account billing information

  • to cancel AWS services (e.g., support)

  • to close the AWS account

  • to buy stuff on Amazon.com, Audible.com, etc. if you are using the same account (not recommended)

  • anything else? Let folks know in the comments.

MFA

For completeness, I should also reiterate Amazon’s constant and strong recommendation to use MFA (multi-factor authentication) on your root AWS account. Consider buying the hardware MFA device, associating it with your root account, then storing it in a lock box with your other important things.

You should also add MFA to your IAM accounts that have AWS console access. For this, I like to use Google Authenticator software running on a locked down mobile phone.

MFA adds a second layer of protection beyond just knowing the password or having access to your email account.

Original article: http://alestic.com/2014/09/aws-root-password

Eric Hammond: AWS Community Heroes Program

Thu, 2014-09-25 22:03

Amazon Web Services recently announced an AWS Community Heroes Program where they are starting to recognize publicly some of the many individuals around the world who contribute in so many ways to the community that has grown up around the services and products provided by AWS.

It is fun to be part of this community and to share the excitement that so many have experienced as they discover and promote new ways of working and more efficient ways of building projects and companies.

Here are some technologies I have gotten the most excited about over the decades. Each of these changed my life in a significant way as I invested serious time and effort learning and using the technology. The year represents when I started sharing the “good news” of the technology with people around me, who at the time usually couldn’t have cared less.

  • 1980: Computers and Programming - “You can write instructions and the computer does what you tell it to! This is going to be huge!”

  • 1987: The Internet - “You can talk to people around the world, access information that others make available, and publish information for others to access! This is going to be huge!”

  • 1993: The World Wide Web - “You can view remote documents by clicking on hyperlinks, making it super-easy to access information, and publishing is simple! This is going to be huge!”

  • 2007: Amazon Web Services - “You can provision on-demand disposable compute infrastructure from the command line and only pay for what you use! This is going to be huge!”

I feel privileged to have witnessed amazing growth in each of these and look forward to more productive use on all fronts.

There are a ton of local AWS meetups and AWS user groups where you can make contact with other AWS users. AWS often sends employees to speak and share with these groups.

A great way to meet thousands of people in the AWS community (and to spend a few days in intense learning about AWS no matter your current expertise level) is to attend the AWS re:Invent conference in Las Vegas this November. Perhaps I’ll see you there!

Original article: http://alestic.com/2014/09/aws-community-heroes

Costales: Lubuntu 14.04 for Cubieboard 2

Thu, 2014-09-25 17:31
Cubieboard2 (CPU A20)

I bought a Cubieboard2 and I made a Lubuntu 14.04 image! It's really fast and easy to deploy that image on a Cubieboard2 with a 4GB NAND.

Download the Lubuntu 14.04 image for CubieBoard2 here.



LUBUNTU 14.04 INSTALL STEPS:
Boot with a live distro, for example Cubian, from a microSD (>8GB) prepared with these steps.

Copy the downloaded Lubuntu image to the root of the microSD.

Boot the Cubieboard2 with Cubian from the microSD.

Open a Terminal (Menu / Accesories / LXTerminal) and run:
sudo su -
[password is "cubie"]
cd /
gunzip lubuntu-14.04-cubieboard2-nand.img.gz
# write the image to the internal NAND (this overwrites its current contents!)
dd if=/lubuntu-14.04-cubieboard2-nand.img conv=sync,noerror bs=64K of=/dev/nand

It's done! Reboot :) You should now have Lubuntu 14.04.1 running with a 4GB NAND partition. User: linaro, password: linaro.



RECOMMENDED STEPS AFTER INSTALLATION:
    Become root for the next steps:
    sudo su -
    • Add your new user (replace 'username' with your new user name):
    useradd -m username -G adm,dialout,cdrom,audio,dip,video,plugdev,admin,inet -s /bin/bash ; passwd username

    • Set your keyboard layout on login (for example, Spanish):
    echo 'setxkbmap -layout "es"' >> /etc/xdg/lxsession/Lubuntu/autostart

    • Set the local time (for example, for Spain: Europe/Madrid); otherwise the browser will have problems with HTTPS web pages:
    rm /etc/localtime ; ln -s /usr/share/zoneinfo/Europe/Madrid /etc/localtime ; ntpdate ntp.ubuntu.com

    • Change the password of the linaro user, or remove that user (logout required); it has sudo rights and everyone knows the default password, so do it ;) :
    userdel -r linaro

    • Install an SSH client to connect via SSH, or pulseaudio and pavucontrol for audio (a sketch follows below).
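
    A sketch of that last step, run as root; the package names (openssh-client, pulseaudio, pavucontrol) are the ones in the Ubuntu archives:
    apt-get update
    apt-get install openssh-client            # SSH out to other machines
    apt-get install pulseaudio pavucontrol    # sound server and volume control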



    HOW WAS THIS IMAGE DONE?
    For this image I installed an official Lubuntu 13.04 image from here, and made these changes:
    - Resized NAND to 4GB (Ubuntu will use 1.5GB; 2GB free). You can use a microSD or SATA HD as external storage.
    - Updated to 13.10 and then to 14.04 LTS (updated lxde* packages to the latest versions).
    - Installed ntp, firefox, audacious, sylpheed, pidgin, gpicview, lxappearance and ufw (not enabled)
    - Fixed write permissions and group ownership of /, /etc and /lib to avoid ufw warnings.
    - Removed chromium-browser, gnome-network-manager and gnome-disk-utility
    - Removed no password for admin users (edited /etc/sudoers)
    - Created this dd image



    (OPTIONAL) BACK UP YOUR CURRENT CUBIEBOARD2 FIRST:
    Insert a microSD card and, from your current OS, run:
    sudo su -
    # dump the NAND to a compressed image on the microSD root
    dd if=/dev/nand conv=sync,noerror bs=64K | gzip -c -9 > /nand.img.gz
    (OPTIONAL) RESTORE THAT BACKUP:
    cd /
    gunzip nand.img.gz
    dd if=/nand.img conv=sync,noerror bs=64K of=/dev/nand

    Mike Rushton: Here’s how to patch Ubuntu 8.04 or anything where you have to build bash from source

    Thu, 2014-09-25 17:03

    Just a quick post to help those who might be running older/unsupported distributions of Linux, mainly Ubuntu 8.04, who need to patch their version of bash due to the recent exploit described here:

    http://thehackernews.com/2014/09/bash-shell-vulnerability-shellshock.html

    I found this post and can confirm it works:

    https://news.ycombinator.com/item?id=8364385

    Here are the steps (make a backup of /bin/bash just in case):

    #assume that your sources are in /src
    cd /src
    wget http://ftp.gnu.org/gnu/bash/bash-4.3.tar.gz
    #download all patches (bash43-001 through bash43-025)
    for i in $(seq -f "%03g" 1 25); do wget http://ftp.gnu.org/gnu/bash/bash-4.3-patches/bash43-$i; done
    tar zxvf bash-4.3.tar.gz
    cd bash-4.3
    #apply all patches
    for i in $(seq -f "%03g" 1 25); do patch -p0 < ../bash43-$i; done
    #build and install (note: the default install prefix is /usr/local)
    ./configure && make && make install
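
    Afterwards you can re-run the widely circulated test for CVE-2014-6271; an unpatched shell prints "vulnerable", while a patched one only prints the echo line. Make sure the bash you invoke is the newly installed one ("which bash"), given the /usr/local prefix noted above:

    env x='() { :;}; echo vulnerable' bash -c "echo this is a test"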

    Scarlett Clark: Kubuntu: Frameworks 5.2.0 Released and Plasma 5.0.2 is ready for testing.

    Thu, 2014-09-25 17:00

    Kubuntu 14.10 with KDE Plasma 5.0.2

    KDE Frameworks 5.2.0 has been released to the Utopic archive!
    (Actually a few days ago; we are playing catch-up since Akademy.)

    Also, I have finished packaging Plasma 5.0.2, and it looks and runs great!
    We desperately need more testers! If you would like to help us test,
    please join us in #kubuntu-devel on IRC. Thanks!

    Scarlett Clark: My first Akademy: My confrontation with my Shyness and my Overall Experience.

    Thu, 2014-09-25 16:37

    KDE Akademy 2014 in Brno, Czech Republic

    A few weeks ago I was blessed with the opportunity to attend KDE’s Akademy Conference for the first time. (Thank you Ubuntu Donors for sponsoring me!).
    Akademy is a week-long conference that begins with a weekend of keynote speakers, informative lectures, and many hacking groups scattered about.
    This Akademy also had a great pre-release party held by Red Hat.

    I had not traveled such a distance since I was a child, so I was not prepared for the adventures to come. Hint: pack lightly! I still have nightmares of the giant suitcase I thought I would need! I was lucky to have a travel buddy / roommate (thank you Valorie Zimmerman!) to assist me in my travels and, most importantly, to introduce me to my peers at KDE/Kubuntu whom I had never met in person. It was wonderful to finally put faces to the names.

    My first few days were rather difficult. I was fighting my urge to stand in a corner and be shy. Luckily, some friendly folks dragged me out of the corner and introduced me to more and more people. With each introduction and conversation it became easier. I also volunteered at the registration desk, which gave me an opportunity to meet new people. As the days went on and many great conversations later, I forgot I was shy! In the end I made many friends during Akademy, turning this event into one of the most memorable moments of my life.

    The weekend brought Keynote speakers and many informative lectures. Unfortunately, I could not be in several places at once, so I missed a few that I wanted to see.
    Thankfully, you can see them here: https://conf.kde.org/en/Akademy2014/public/schedule/2014-09-06

    Due to circumstances out of their control, the audio is not great. The rest of the week was filled with BoF sessions, workshops, hacking, collaboration, and anything else we could think of that needed to get done. In the BoF sessions we covered a lot of ground and hashed out ways to resolve problems we were facing. All that I attended were extremely productive. Yet another case where I wish I could split into multiple people so I could attend everything I wanted to!

    Kubuntu Day @ Akademy 2014

    On Thursday we got an entire Kubuntu Day! We accomplished many things including working with Debian’s Sune and Pino to move some of our packaging to Debian git to reduce duplicate packaging work. We discussed the details of going to continuous packaging which includes Jenkins CI. We also had the pleasure of München’s Limux project joining us to update us with the progress of Kubuntu in Munich, Germany!

    While there was a lot of work accomplished during Akademy, there was plenty of play as well! In the evenings many of us would go out on the town for dinner and drinks.
    On Wednesday, on the day trip, we visited an old castle (what a hike!) via a nice ferry ride. Unfortunately, I forgot my camera in the hostel. The hackroom in the hostel was always bustling with activity. We even had the pleasure of very tasty home-cooked meals by Jos Poortvliet in the tiny hostel kitchen a couple of nights; that took some creative thinking! In the end, there was never a moment of boredom and always moments of learning, discussion, hacking and laughing.

    If you ever have the opportunity to attend Akademy, do not pass it up!

    Julian Andres Klode: hardlink 0.3.0 released; xattr support

    Thu, 2014-09-25 12:42

    Today I not only submitted my bachelor thesis to the printing company, I also released a new version of hardlink, my file deduplication tool.

    hardlink 0.3 now features support for extended attributes (xattrs), contributed by Tom Keel at Intel. If this does not work correctly, please blame him.

    I also added support for a --minimum-size option.

    Most of the other code has been tested since the upload of RC1 to experimental in September 2012.

    The next major version will split up the code into multiple files and clean it up a bit. It’s getting a bit long now in a single file.



    Mark Shuttleworth: What Western media and politicians fail to mention about Iraq and Ukraine

    Thu, 2014-09-25 08:01

    Be careful of headlines, they appeal to our sense of the obvious and the familiar, they entrench rather than challenge established stereotypes and memes. What one doesn’t read about every day is usually more interesting than what’s in the headlines. And in the current round of global unease, what’s not being said – what we’ve failed to admit about our Western selves and our local allies – is central to the problems at hand.

    Both Iraq and Ukraine, under Western tutelage, failed to create states which welcome diversity. Both Iraq and the Ukraine aggressively marginalised significant communities, with the full knowledge and in some cases support of their Western benefactors. And in both cases, those disenfranchised communities have rallied their cause into wars of aggression.

    Reading the Western media one would think it's clear who the aggressors are in both cases: Islamic State and Russia are “obvious bad actors” whose behaviour needs to be met with stern action. Russia clearly has no business arming rebels with guns they use irresponsibly to tragic effect, and the Islamic State are clearly “a barbaric, evil force”. If those gross simplifications, reinforced in the Western media, define our debate and discussion on the subject then we are destined to pursue some painful paths with little but frustration to show for the effort, and nasty thorns that fester indefinitely. If that sounds familiar it's because yes, this is the same thing happening all over again. In a prior generation, only a decade ago, anger and frustration at 9/11 crowded out calm deliberation and a focus on the crimes in favour of shock and awe. Today, out of a lack of insight into the root cause of Ukrainian separatism and Islamic State's attractiveness to a growing number across the Middle East and North Africa, we are about to compound our problems by slugging our way into a fight we should understand before we join.

    This is in no way to say that the behaviour of Islamic State or Russia is acceptable in modern society. It is not. But we must take responsibility for our own behaviour first and foremost; time and history are the best judges of the behaviour of others.

    In the case of the Ukraine, it's important to know how miserable it has become for native Russian speakers born and raised in the Ukraine. People who have spent their entire lives as citizens of the Ukraine who happen to speak Russian at home, at work, in church and at social events have found themselves discriminated against by official decree from Kiev. Friends of mine with family in Odessa tell me that there have been systematic attempts to undermine and disenfranchise Russian speakers in the Ukraine. “You may not speak in your home language in this school”. “This market can only be conducted in Ukrainian, not Russian”. It's important to appreciate that being a Russian speaker in Ukraine doesn't necessarily mean one is not perfectly happy to be a Ukrainian. It just means that the Ukraine is a diverse cultural nation and has been throughout our lifetimes. This is a classic story of discrimination. Friends of mine who grew up in parts of Greece tell a similar story about the Macedonian culture being suppressed – schools being forced to punish Macedonian language spoken on the playground.

    What we need to recognise is that countries – nations – political structures – which adopt ethnic and cultural purity as a central idea, are dangerous breeding grounds for dissent, revolt and violence. It matters not if the government in question is an ally or a foe. Those lines get drawn and redrawn all the time (witness the dance currently under way to recruit Kurdish and Iranian assistance in dealing with IS, who would have thought!) based on marriages of convenience and hot button issues of the day. Turning a blind eye to thuggery and stupidity on the part of your allies is just as bad as making sure you’re hanging with the cool kids on the playground even if it happens that they are thugs and bullies –  stupid and shameful short-sightedness.

    In Iraq, the government installed and propped up with US money and materials (and the occasional slap on the back from Britain) took a pointedly sectarian approach to governance. People of particular religious communities were removed from positions of authority, disqualified from leadership, hunted and imprisoned and tortured. The US knew that leading figures in their Iraqi government were behaving in this way, but chose to continue supporting the government which protected these thugs because they were “our people”. That was a terrible mistake, because it is those very communities which have morphed into Islamic State.

    The modern nation states we call Iraq and the Ukraine – both with borders drawn in our modern lifetimes – are intrinsically diverse, intrinsically complex, intrinsically multi-cultural parts of the world. We should know that a failure to create governments of that diversity, for that diversity, will result in murderous resentment. And yet, now that the lines for that resentment are drawn, we are quick to choose sides, precisely the wrong position to take.

    What makes this so sad is that we know better and demand better for ourselves. The UK and the US are both countries that have diversity as a central tenet of their existence. Freedom of religion, freedom of expression, the right to a career and to leadership on the basis of competence rather than race or creed are major parts of our own identity. And yet we prop up states that take precisely the opposite approach, and wonder why they fail, again and again. We came to these values through blood and pain, we hold on to these values because we know first hand how miserable and how wasteful life becomes if we let human tribalism tear our communities apart. There are doors to universities in the UK on which have hung the bodies of religious dissidents, and we will never allow that to happen again at home, yet we prop up governments for whom that is the norm.

    The Irish Troubles were a war nobody could win. They were resolved through dialogue. South African terrorism in the '80s was a war nobody could win. It was resolved through dialogue and the establishment of a state for everybody. Time and time again, “terrorism” and “barbarism” are words used by secure, distant seats of power to describe fractious movements, and in most of those cases, allowing that language to dominate our thinking leads to wars that nobody can win.

    Russia made a very grave error in arming Russian-speaking Ukrainian separatists. But unless the West holds Kiev to account for its governance, unless it demands an open society free of discrimination, the misery there will continue. IS will gain nothing but contempt from its demonstrations of murder – there is no glory in violence on the defenceless and the innocent – but unless the West bends its might to the establishment of societies in Syria and Iraq in which these religious groups are welcome and free to pursue their ambitions, murder will be the only outlet for their frustration. Politicians think they have a new “clean” way to exert force – drones and airstrikes without “boots on the ground”. Believe me, that's false. Remote control warfare will come home to fester on our streets.

     

    Julian Andres Klode: APT 1.1~exp3 released to experimental: First step to sandboxed fetcher methods

    Wed, 2014-09-24 21:06

    Today, we worked, with the help of ioerror on IRC, on reducing the attack surface in our fetcher methods.

    There are three things that we looked at:

    1. Reducing privileges by setting a new user and group
    2. chroot()
    3. seccomp-bpf sandbox

    Today, we implemented the first of them. Starting with 1.1~exp3, the APT directories /var/cache/apt/archives and /var/lib/apt/lists are owned by the “_apt” user (username suggested by pabs). The methods switch to that user shortly after the start. The only methods doing this right now are: copy, ftp, gpgv, gzip, http, https.

    If privileges cannot be dropped, the methods will fail to start. No fetching will be possible at all.
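
    On a system running 1.1~exp3 you can check the new ownership directly; the paths are the ones named above, and the output shown is what one would expect rather than a captured run:

    $ stat -c '%U %n' /var/lib/apt/lists /var/cache/apt/archives
    _apt /var/lib/apt/lists
    _apt /var/cache/apt/archives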

    Known issues:

    • We drop all groups except the primary gid of the user
    • copy breaks if that group has no read access to the files

    We plan to also add chroot() and seccomp sandboxing later on, to reduce the attack surface on untrusted files and protocol parsing.



    Stephen Kelly: Grantlee 5.0.0 (codename Umstraßen) now available

    Wed, 2014-09-24 16:23

    The Grantlee community is pleased to announce the release of Grantlee version 5.0 (Mirror). Grantlee contains an implementation of the Django template system in Qt.

    I invented the word 'umstraßen' about 5 years ago while walking to Mauerpark with a friend. We needed to cross the road, so I said 'wollen wir umstraßen?' ('shall we cross over?'), because, well, 'umsteigen' can be a word. Of course it means 'die Straßenseite wechseln' ('to change the side of the street') in common German, but one word is better than three, right? This one is generally popular with German native speakers, so let's see if we can get it into the Duden :).

    This is a source and binary compatibility break since the 0.x.y series of Grantlee releases. The major version number has been bumped to 5 in order to match the Qt major version requirement, and to reflect the maturity of the Grantlee libraries. The compatibility breaks are all minor, with the biggest impact being in the buildsystem, which now follows the patterns of modern CMake.

    The biggest change to the C++ code was the removal of a lot of code which became obsolete in Qt 5 thanks to QSequentialIterable, part of the type-erased iteration features.


    Didier Roche: Ubuntu Developer Tools Center: how do we run tests?

    Wed, 2014-09-24 11:30

    We are starting to see multiple awesome code contributions and suggestions on our Ubuntu Loves Developers effort and we are eagerly waiting on yours! As a consequence, the spectrum of supported tools is going to expand quickly, and we need to ensure that all those different targeted developers are well supported, on multiple releases, always delivering the latest version of those environments, at any time.

    A huge task that we can only support thanks to a large suite of tests! Here are some details on what we currently have in place to achieve and ensure this level of quality.

    Different kinds of tests

    pep8 test

    The pep8 test is there to ensure code quality and consistency. Test results are trivial to interpret.

    This test is running on every commit to master, on each release during package build as well as every couple of hours on jenkins.
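
    Running it locally is a single command; a minimal sketch, assuming the pep8 tool is installed (the directory names are illustrative):

    $ pep8 udtc tests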

    small tests

    Those are basically unit tests. They enable us to quickly see if we've broken anything with a change, or if the distribution itself broke us. We try in particular to cover multiple corner cases that are easy to test that way.

    They are running on every commit to master, on each release during package build, every time a dependency is changed in Ubuntu thanks to autopkgtests and every couple of hours on jenkins.

    large tests

    Large tests are real user-based testing. We execute udtc and feed various scenarios to stdin (like installing, reinstalling, removing, installing with a different path, aborting, ensuring the IDE can start…) and check that the resulting behavior is the one we are expecting.

    Those tests enable us to know if something in the distribution broke us, if a website changed its layout or modified its download links, or if a newer version of a framework can't be launched on a particular Ubuntu version or configuration. That way, we are aware, ideally most of the time even before the user, that something is broken and can act on it.

    Those tests are running every couple of hours on jenkins, using real virtual machines running an Ubuntu Desktop install.

    medium tests

    Finally, the medium tests inherit from the large tests. Thus, they run exactly the same suite of tests, but in a Docker containerized environment, with mocks and small assets, not relying on the network or any archives. This means that we ship and emulate a webserver delivering web pages to the container, pretending we are, for instance, https://developer.android.com. We then deliver fake requirements packages and mock tarballs to udtc, and run against those.

    Implementing a medium test is generally really easy, for instance:

    class BasicCLIInContainer(ContainerTests, test_basics_cli.BasicCLI):

    """This will test the basic cli command class inside a container"""

    is enough. That means "take all the BasicCLI large tests, and run them inside a container". All the hard work (wrapping, sshing and testing) is done for you. Simply implement your large tests and, with this inheritance, they will run inside the container as well!

    We added as well more complex use cases, like emulating a corrupted download with an md5 checksum mismatch. We generate this controlled environment and share it using trusted containers from the Docker Hub that we generate from the Ubuntu Developer Tools Center DockerFile.

    Those tests are running as well every couple of hours on jenkins.

    By comparing medium and large tests (the first being in a completely controlled environment), we can decipher whether we or the distribution broke us, or whether a third party changing their website or requesting newer version requirements impacted us (as the failure will then only occur in the large tests and not in the medium ones).

    Running all tests, continuously!

    As some of the tests can show the impact of external parts, be it the distribution or even websites (as we parse some download links), we need to run all those tests regularly[1]. Note as well that we can experience different results on various configurations. That's why we are running all those tests every couple of hours, once using the system-installed tests, and then with the tip of master. Those are running on various virtual machines (like here, 14.04 LTS on i386 and amd64).

    By comparing all this data, we know if a new commit introduced regressions, or if a third party broke us and we need to fix or adapt to it. Each test suite has a bunch of artifacts attached, to be able to inspect the dependencies installed and the exact version of UDTC tested, and to ensure we don't corner ourselves with subtleties like "it works in trunk, but is broken once installed".

    You can see on that graph that trunk has more tests (and features… just wait for some days before we tell more about them ;)) than latest released version.

    As metrics are key, we collect code coverage and line metrics on each configuration to ensure we are not regressing from our target of keeping high coverage. This also tracks various stats, like the number of lines of code.

    Conclusion

    Thanks to all this, we'll probably know even before any of you if anything is suddenly broken and put actions in place to quickly deliver a fix. With each new kind of breakage we plan to back it up with a new suite of tests to ensure we never see the same regression again.

    As you can see, we are pretty hardcore about tests and believe it's the only way to keep quality and a sustainable system. With all that in place, as a developer, you should just have to enjoy your productive environment and not have to bother with the operating system itself. We have you covered!

    As always, you can reach me on G+, #ubuntu-desktop (didrocks) on IRC (freenode), or sending any issue or even pull requests against the Ubuntu Developer Tools Center project!

    Note

    [1] if tests are not running regularly, you can consider them broken anyway

    Robert Collins: what-poles-for-the-tent

    Wed, 2014-09-24 05:11

    So Monty and Sean have recently blogged about the structures (1, 2) they think may work better for OpenStack. I like the thrust of their thinking but had some mumblings of my own to add.

    Firstly, I very much like the focus on social structure and needs – what our users and deployers need from us. That seems entirely right.

    And I very much like the getting away from TC picking winners and losers. That was never an enjoyable thing when I was on the TC, and I don’t think it has made OpenStack better.

    However, the thing that picking winners and losers did was that it allowed users to pick an API and depend on it. Because it was the ‘X API for OpenStack’. If we don’t pick winners, then there is no way to say that something is the ‘X API for OpenStack’, and that means that there is no forcing function for consistency between different deployer clouds. And so this appears to be why Ring 0 is needed: we think our users want consistency in being able to deploy their application to Rackspace or HP Helion. They want vendor neutrality, and by giving up winners-and-losers we give up vendor neutrality for our users.

    That's the only explanation I can come up with for needing a Ring 0 – because it's still winners and losers: picking an arbitrary project (e.g. keystone) and grandfathering it in, if you will. If we really want to get out of the role of selecting projects, I think we need to avoid this. And we need to avoid it without losing vendor neutrality (or we need to give up the idea of vendor neutrality).

    One might say that we must pick winners for the very core just by its nature, but I don't think that's true. If the core is small, many people will still want vendor neutrality higher up the stack. If the core is large, then we'll have a larger % of APIs covered and stable, granting vendor neutrality. So a core with fixed APIs will be under constant pressure to expand: not just from developers of projects, but from users that want API X to be fixed and guaranteed available and working a particular way at [most] OpenStack clouds.

    Ring 0 also fulfils a quality aspect – we can check that it all works together well in a realistic timeframe with our existing tooling. We are essentially proposing to pick functionality that we guarantee to users; and an API for that which they have everywhere, and the matching implementation we’ve tested.

    To pull from Monty’s post:

    “What does a basic end user need to get a compute resource that works and seems like a computer? (end user facet)

    What does Nova need to count on existing so that it can provide that?”

    He then goes on to list a bunch of things, but most of them are not needed for that:

    We need Nova (it's the only compute API in the project today). We don't need keystone (Nova can run in noauth mode and deployers could just have e.g. Apache auth on top). We don't need Neutron (Nova can do that itself). We don't need cinder (use local volumes). We need Glance. We don't need Designate. We don't need a tonne of stuff that Nova has in it (e.g. quotas) – end users kicking off a simple machine have -very- basic needs.

    Consider the things that used to be in Nova: Deploying containers. Neutron. Cinder. Glance. Ironic. We’ve been slowly decomposing Nova (yay!!!) and if we keep doing so we can imagine getting to a point where there truly is a tightly focused code base that just does one thing well. I worry that we won’t get there unless we can ensure there is no pressure to be inside Nova to ‘win’.

    So there’s a choice between a relatively large set of APIs that make the guaranteed available APIs be comprehensive, or a small set that that will give users what they need just at the beginning but might not be broadly available and we’ll be depending on some unspecified process for the deployers to agree and consolidate around what ones they make available consistently.

    In short, one of the big reasons we were picking winners and losers in the TC was to consolidate effort around a single API – not implementation (keystone is already on its second implementation). All the angst about defcore and compatibility testing is going to be multiplied when there is lots of ecosystem choice around APIs above Ring 0, and the only reason that won't be a problem for Ring 0 is that we'll still be picking winners.

    How might we do this?

    One way would be to keep picking winners at the API definition level but not the implementation level, and make the competition able to replace something entirely if they implement the existing API [and win hearts and minds of deployers]. That would open the door to everything being flexible – and it's happened before with Keystone.

    Another way would be to not even have a Ring 0. Instead have a project/program that is aimed at delivering the reference API feature-set built out of a single, flat Big Tent – and allow that project/program to make localised decisions about what components to use (or not). Testing that all those things work together is not much different than the current approach, but we’d have separated out as a single cohesive entity the building of a product (Ring 0 is clearly a product) from the projects that might go into it. Projects that have unstable APIs would clearly be rejected by this team; projects with stable APIs would be considered etc. This team wouldn’t be the TC : they too would be subject to the TC’s rulings.

    We could even run multiple such teams – as hinted at by Dean Troyer in one of the email thread posts. Running with that, I'd then be suggesting:

    • IaaS product: selects components from the tent to make OpenStack/IaaS
    • PaaS product: selects components from the tent to make OpenStack/PaaS
    • CaaS product (containers)
    • SaaS product (storage)
    • NaaS product (networking – but things like NFV, not the basic Neutron we love today). Things where the thing you get is useful in its own right, not just as plumbing for a VM.

    So OpenStack/NaaS would have an API or set of APIs, and they’d be responsible for considering maturity, feature set, and so on, but wouldn’t ‘own’ Neutron, or ‘Neutron incubator’ or any other component – they would be a *cross project* team, focused at the product layer, rather than the component layer, which nearly all of our folk end up locked into today.

    Lastly, Sean has also pointed out that we have large N^2 communication issues – I think I'm proposing to drive the scope of any one project down to a minimum, which gives us more N, but shrinks the size within any project, so folk don't burn out as easily, *and* so that it is easier to predict the impact of changes – clear contracts and APIs help a huge amount there.


    Valorie Zimmerman: Candor and trust

    Wed, 2014-09-24 02:13
    Catmull uses the term candor in his book Creativity, Inc., because honesty is overloaded with moral overtones. It means forthrightness, frankness, and also indicates a lack of reserve. Of course reserve is sometimes needed, but we want to create a space where complete candor is invited, even if it means scrapping difficult work and starting over. [p. 86] Catmull discusses measures he put into place to institutionalize candor, by explicitly asking for it in some processes. He goes on to discuss the Braintrust which Pixar relies on to push us towards excellence and to root out mediocrity....[It] is our primary delivery system for straight talk.... Put smart, passionate people in a room together, charge them with identifying and solving problems, and encourage them to be candid with one another. [86-7] Does this sound at all familiar?

    Naturally the focus is on constructive feedback. The members of such a group must not only trust one another, but see each other as peers. Catmull observes that it is difficult to be candid if you are thinking about not looking like an idiot! [89] He also says that this is crucial in Pixar because, in the beginning, all of our movies suck. [90] I'm not sure this is true with KDE software, but maybe it is. Not until the code is exposed to others, to testing, to accessibility teams, HIG, designers--can it begin to not suck.

    I think that we do some of this process in our sprints, on the lists, maybe in IRC and on Reviewboard, but perhaps we can be even more explicit in our calls for review and testing. The key of course is to criticize the product or the process, not the person writing the code or documentation. And on the other side, it can be very difficult to accept criticism of your work even when you trust and admire those giving you that criticism. It is something we must continually learn, in my experience.

    Catmull says,
    People who take on complicated creative projects become lost at some point in the process....How do you get a director to address a problem he or she cannot see? ...The process of coming to clarity takes patience and candor. [91] We try to create an environment where people want to hear one another's notes [feedback] even where those notes are challenging, and where everyone has a vested interest in one another's success. [92]

    Let me repeat that, because to me, that is the key of a working, creative community: "where everyone has a vested interest in one another's success." I think we in KDE feel that but perhaps do not always live it. So let us ask one another for feedback and criticism, strive to pay attention to it, and evaluate criticism dispassionately. I think we have missed this bit at times in the past in KDE, and it has come back to bite us. We need to get better.

    Catmull makes the point that the Braintrust has no authority, and says this is crucial:
    the director does not have to follow any of the specific suggestions given. .... It is up to him or her to figure out how to address the feedback....While problems in a movie are fairly easy to identify, the sources of these problems are often extraordinarily difficult to assess. [93] He continues,
    We do not want the Braintrust to solve a director's problem because we believe that...our solution won't be as good....We believe that ideas....only become great when they are challenged and tested. [93] More than once, he discusses instances where big problems led to Pixar's greatest successes, because grappling with these problems brought out their greatest creativity. While problems ... are fairly easy to identify, the sources of these problems are often extraordinarily difficult to assess. [93] How familiar does this sound to us working in software!? So, at Pixar,
    the Braintrust's notes ...are intended to bring the true causes of the problems to the surface--not to demand a specific remedy. Moreover, we don't want the Braintrust to solve a director's problem because we believe that, in all likelihood, our solution won't be as good as the one the director and his or her creative team comes up with. We believe that ideas--and thus films--only become great when they are challenged and tested. [93]

    I've seen that often this last bit is a sticking point. People are willing to criticize a piece of code, or even the design, but want their own solution instead. Naturally, this way of working encounters pushback.

    Frank talk, spirited debate, laughter, and love [99] is how Catmull sums up Braintrust meetings. Sound familiar? I've just come from Akademy, which I can sum up the same way. Let's keep doing this in all our meetings, whether they take place in IRC, on the lists, or face to face. Let's remember to not hold back; when we see a problem, have the courage to raise the issue. We can handle problems, and facing them is the only way to solve them, and get better.

    Lubuntu Blog: Lubuntu 14.10 Utopic Unicorn Final Beta

    Tue, 2014-09-23 19:41
    Testing has begun for the Final Beta (Beta 2) of Lubuntu 14.10 Utopic Unicorn. Head on over to the ISO tracker to download images, view testcases, and report results. If you're new to testing, don't worry: anyone can join, and you don't have to be a Linux Jedi or anything. You can find all the information you need to get started here.

    Please note that we especially need testers for PPC chips and Intel Macs. We have a special section discussing it here. In particular, if you have an Intel Mac, I have a few questions for you that might help us trim down the workload of the testing team.

    Also, if you have a PPC chip, we're about the only distro actively supporting this architecture. However, we are community supported, so without formal testing, the arch will lose more support. So please, join in testing!

    Nicholas Skaggs: Final Beta testing for Utopic

    Tue, 2014-09-23 17:24
    Can you believe final beta is here for Utopic already? Where has the summer gone? The milestone and images are already prepared for the final beta testing. This is the first round of image testing for Ubuntu this cycle. A final image will also be tested next month, but now is the time to try out the image on your system. Be sure to report any bugs you may find, to help ensure there is time to fix them before the release images.

    To help make sure the final utopic image is in good shape, we need your help and test results! Please, head over to the milestone on the isotracker, select your favorite flavor and perform the needed tests against the images.

    If you've never submitted test results for the iso tracker, check out the handy links on top of the isotracker page detailing how to perform an image test, as well as a little about how the qatracker itself works. If you still aren't sure or get stuck, feel free to contact the QA community or me for help. Happy Testing!

    Ubuntu Kernel Team: Kernel Team Meeting Minutes – September 23, 2014

    Tue, 2014-09-23 17:13
    Meeting Minutes

    IRC Log of the meeting.

    Meeting minutes.

    Agenda

    20140923 Meeting Agenda


    Release Metrics and Incoming Bugs

    Release metrics and incoming bug data can be reviewed at the following link:

    • http://people.canonical.com/~kernel/reports/kt-meeting.txt


    Status: Utopic Development Kernel

    The Utopic kernel has been rebased to the v3.16.3 upstream stable kernel
    and uploaded to the archive, i.e. 3.16.0-17.23. Please test and let us
    know your results.
    Also, we’re approximately 2 weeks away from Utopic Kernel Freeze on
    Thurs Oct 9. Any patches submitted after kernel freeze are subject to
    our Ubuntu kernel SRU policy.
    —–
    Important upcoming dates:
    Thurs Sep 25 – Utopic Final Beta (~2 days away)
    Thurs Oct 9 – Utopic Kernel Freeze (~2 weeks away)
    Thurs Oct 16 – Utopic Final Freeze (~3 weeks away)
    Thurs Oct 23 – Utopic 14.10 Release (~4 weeks away)


    Status: CVE’s

    The current CVE status can be reviewed at the following link:

    http://people.canonical.com/~kernel/cve/pkg/ALL-linux.html


    Status: Stable, Security, and Bugfix Kernel Updates – Trusty/Precise/Lucid

    Status for the main kernels, until today (Sept. 23):

    • Lucid – Kernel prep
    • Precise – Kernel prep
    • Trusty – Kernel prep

      Current opened tracking bugs details:

    • http://kernel.ubuntu.com/sru/kernel-sru-workflow.html

      For SRUs, SRU report is a good source of information:

    • http://kernel.ubuntu.com/sru/sru-report.html

      Schedule:

      cycle: 19-Sep through 11-Oct
      ====================================================================
      19-Sep Last day for kernel commits for this cycle
      21-Sep – 27-Sep Kernel prep week.
      28-Sep – 04-Oct Bug verification & Regression testing.
      05-Oct – 11-Oct Regression testing & Release to -updates.


    Open Discussion or Questions? Raise your hand to be recognized

    No open discussions.

    Dustin Kirkland: An Elegant Weapon, for a More Civilized Age...

    Tue, 2014-09-23 14:20

    Before Greedo shot first...
    Before a troubled young Darth Vader braided his hair...
    Before midi-chlorians offered to explain the inexplicably perfect and perfectly inexplicable...
    And before Jar Jar Binks burped and farted away the last remnants of dear Obi-Wan's "more civilized age"...

    ...I created something, of which I was very, very proud at the time.  Remarkably, I came across that creation, somewhat randomly, as I was recently throwing away some old floppy disks.

    Twenty years ago, it was 1994.  I was 15 years old, just learning to program (mostly on my own), and I created a "trivia game" based around Star Wars.  1,700 lines of Turbo Pascal.  And I made every mistake in the book.
    Of course I'm embarrassed by all of that!  But then, I take a look at what the program did do, and wow -- it's still at least a little bit fun today :-)

    Welcome to swline.pas.  Almost unbelievably, I was able to compile it tonight on an Ubuntu 14.04 LTS 64-bit Linux desktop, using fpc, after three very minor changes:
    1. Running fromdos to remove the trailing ^M endemic of many DOS-era text files
    2. Replacing the (80MHz) CPU clock based sleep function with Delay()
    3. Running iconv to convert the embedded 437 code page ASCII art to UTF-8
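
    Two of those fixes are one-liners; a sketch, assuming GNU iconv and the fromdos tool from the tofrodos package (the Delay() substitution was a manual source edit):

    fromdos swline.pas                                    # strip the DOS ^M line endings
    iconv -f CP437 -t UTF-8 swline.pas > swline-utf8.pas  # convert the code page 437 ANSI art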
    Here's a short screen cast of the game in action :-)


    Would you look at that!
    • 8-bit color!
    • Hand drawn ANSI art!
    • Scrolling text of the iconic Star Wars, Empire Strikes Back, and Return of the Jedi logos! 
    • Random stars and galaxies drawn on the splash screen!
    • No graphic interface framework (a la Newt or Ncurses) -- just a whole bunch of GotoXY().
    • An option for sound (which, unfortunately, doesn't seem to work -- I'm sure it was just 8-bits of bleeps and bloops).
    • 300 hand typed quotes (and answers) spanning all 3 movies!
    • An Easter Egg, and a Cheat Code!
    • Timers!
    • User input!
    • And an option at the very end to start all over again!
    You can't make this stuff up :-)

    But watching a video is boring...  Why don't you try it for yourself!?!

    I thought this would be a perfect use case for Docker.  Just a little Docker image, based on Ubuntu, which includes a statically built swline binary, and set to run that one binary (and only that one binary) when launched.  As simple as it gets, Makefile and Dockerfile.

    $ cat Makefile
    all:
    fpc -k--static swline.pas

    $ cat Dockerfile
    FROM ubuntu
    MAINTAINER Dustin Kirkland
    ADD swline /swline
    ENTRYPOINT /swline
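
    Building and running the image locally from those two files is just as short; a sketch, assuming fpc and docker.io are installed so the compiled swline binary sits beside the Dockerfile:

    $ make                            # compile the static swline binary with fpc
    $ sudo docker build -t swline .
    $ sudo docker run -t -i swline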

    I've pushed a Docker image containing the game to the Docker Hub registry.
    Quick note...  You're going to want a terminal that's 25 lines high and 160 characters wide (sounds weird, yes, I know -- the ANSI art characters are double byte wide and do some weird things to smaller terminals, and my interest in debugging this is pretty much non-existent -- send a patch!).  I launched gnome-terminal and pressed ctrl+minus to shrink the font size on my laptop.

    On an Ubuntu 14.04 LTS machine:

    $ sudo apt-get install docker.io
    $ sudo docker pull kirkland/swline
    $ sudo docker run -t -i kirkland/swline
    Of course you can find, build, run, and modify the original (horrible!) source code on Launchpad and GitHub.




    Now how about that for a throwback Tuesday ;-)

    May the Source be with you!  Always!
    Dustin

    p.s.  Is this the only gem I found on those 17 floppy disks?  Nope :-)  Not by a long shot.
