
Valorie Zimmerman: Learning to git

Planet Ubuntu - Fri, 2014-08-22 09:55
A few years ago, I learned from Myriam's fine blog how to build Amarok from source, which is kept in git. It sounds mysterious, but once all the dependencies are installed, PATH is defined and the environment is properly set up, it is extremely easy to refresh the source (git pull) and rebuild. In fact, I usually use the up-arrow in the konsole, which finds the previous commands, so I rarely have to even type anything! Just hit return when the proper command is in place.

Now we're using git for the KDE Frameworks book, so I learned not only how to pull new or changed source files, but also how to commit my own new or edited files locally, then push those commits to git, so others can see and use them.

To be able to write to the repository, an SSH key must be uploaded, in this case done in the KDE Identity account. If the Identity account is not a developer account, that must first be granted.

Just as in building Amarok, first the folders need to be created, and the repository cloned. Once cloned, I can see either in konsole or Dolphin the various files. It's interesting to me to poke around in most of them, but the ones I work in are markdown files, which is a type of text file. I can open them in kate (or your editor of choice) either from Dolphin or directly from the cli (for instance kate ki18n/ki18n.in.md).

Once edited, save the file; then it's time to commit. If there are a number of files to work on, they can all be committed at once: git commit -a is the command you need. Once you hit return, you will immediately be put into nano, a minimal text editor. Up at the top, you will see it is waiting for your commit message, which is a short description of the changes you have made. Most of my commits have said something like "Edited for spelling and grammar." Once your message is complete, hit Control-X, then y and return to save your changes.
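
Put together, a typical editing session looks something like this (a sketch only; the repository address and path are the ones shown in the push output below, and the file name is just the example from earlier):

git clone git@git.kde.org:kf5book   # one-time: clone the book repository
cd kf5book
kate ki18n/ki18n.in.md              # edit and save
git commit -a                       # nano opens for the commit message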

It's a good idea to do another git pull, just to be sure no one else has pushed a conflicting file while the commit message was being crafted, then git push. At this point the passphrase for the SSH key is asked for; once that is typed and you hit return, you'll get something like the following:

Counting objects: 7, done.
Delta compression using up to 8 threads.
Compressing objects: 100% (4/4), done.
Writing objects: 100% (4/4), 462 bytes | 0 bytes/s, done.
Total 4 (delta 2), reused 1 (delta 0)
remote: This commit is available for viewing at:
remote: http://commits.kde.org/kf5book/90c863e4ee2f82e4d8945ca74ae144b70b9e9b7b
To git@git.kde.org:kf5book
   1d078fe..90c863e  master -> master
valorie@valorie-HP-Pavilion-dv7-Notebook-PC:~/kde/book/kf5book$

In this case, the new file is now part of the KDE Frameworks 5 book repository. Git is a really nifty way to keep files of any sort organized and backed up. I'm really happy that we decided to develop the book using this powerful tool.

Jorge Castro: The Ubuntu Steam box, one year later

Planet Ubuntu - Thu, 2014-08-21 23:02

It’s been about a year since I started building my own Steam console for my living room. A ton has changed since then. SteamOS has been released, In Home Streaming is out of beta and generally speaking the living room experience has gotten a ton better.

This blog post will be a summary of what’s changed in the past year, in the hopes that it will help someone who might be interested in building their own “next-gen console” for about the same price, and take advantage of nicer hardware and all the things that PC gaming has to offer.

Step 1: Choosing the hardware
  • I consider the NVIDIA GTX 750Ti to be the best thing to happen in hardware for this sort of project. It’s based on their newest Maxwell technology so it runs cool, it does not need a special power supply plug, and it’s pretty small. It’s also priced between $120 and $150 – which means nearly any computer is now capable of becoming a game console. And a competent one at that.

  • I have settled on the Cooler Master 110 case, which is one of the least obnoxious PC cases you can find that won’t look too bad in the living room. Unfortunately Valve’s slick-looking case did not kick the case makers into making awesome-looking living room style cases. The closest you can find is the Silverstone RVZ01, which has the right insides, but they ruined the outside with crazy plastic ribs. The Digital Storm Bolt II looks great, but you can’t buy the case separately. Both cases have CD drives for some reason, boo!

  • Nvidia has a great guide on building a PC within the console-price range if you want to look around. I also recommend checking out r/buildapc, which has tons of Mini-ITX/750Ti builds.

  • Other alternatives are the excellent Intel NUC and Gigabyte Brix. These make for great portable machines, but for the upcoming AAA titles for Linux like Metro Redux, Star Citizen, and so on, I decided to go with a dedicated graphics card. Gigabyte makes a very interesting model that is the size of a NUC, but with a GTX 760(!). This looks to be ideal, but unfortunately when Linus reviewed it he found heat/throttling issues. When they make a Maxwell-based one of these it will likely be awesome.

  • Don’t forget the controller. The Xbox wireless ones will work out of the box. I recommend avoiding the off-brand dongles you see on Amazon, they can be hit or miss.

Step 2: Choosing the software

I’ve been using SteamOS since it came out. The genius of SteamOS is that fundamentally it does only two things: it boots, and then runs Steam Big Picture mode (BPM). This means that for a dedicated console, the OS is really not important. I have two drives in the box, one with SteamOS, and one with Ubuntu running BPM. After running both I prefer Ubuntu/Steam to SteamOS:

  • Faster boot (Upstart v. SysV)
  • PPAs enable fresh access to new Nvidia drivers and Plex Home Theater
  • Newer kernels and access to HWE kernels over the next 5 years

I tend to alternate between the two, but since I am more familiar with Ubuntu it is easier for me to use, so the rest of this post will cover how to build a dedicated Steam Box using Ubuntu.

This isn’t to say SteamOS is bad, in fact, setting it up is actually easier than doing the next few steps; remember that the entire point is to not care about the OS underneath, and get you into Steam. So build whatever is most comfortable for you!

Step 3: Installation

These are the steps I am currently doing. It’s not for beginners; you should be comfortable administering an Ubuntu system.

  • Install Ubuntu 14.04.
  • (Optional) - Install openssh-server. I don’t know about you but lugging a keyboard/mouse back and forth to my living room is not my idea of a good time. I prefer to sit on the couch, and ssh into the box from my laptop.
  • Add the xorg-edgers PPA. You don’t need this per se, but let’s go all in!
  • Install the latest Nvidia drivers: as of this writing, nvidia-graphics-drivers-343 (see the command sketch below).
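
In terminal form, those steps look roughly like this (a sketch; the exact PPA location and driver package name are assumptions based on the list above, so double-check them before running):

sudo apt-get install openssh-server            # optional: admin the box from the couch
sudo add-apt-repository ppa:xorg-edgers/ppa    # assumed location of the xorg-edgers PPA
sudo apt-get update
sudo apt-get install nvidia-343                # assumed package name for the 343 drivers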

After you’ve installed the drivers and all the security updates you should reboot to get to your nice new clean desktop system. Now it’s time to make it a console:

  • Log in, and install Steam. Log into Steam, and make sure it works.
  • Add Marc Deslauriers’ SteamOS packages PPA. These are rebuilt for Ubuntu and he does a great job keeping them up to date.
  • sudo apt-get install steamos-compositor steamos-modeswitch-inhibitor steamos-xpad-dkms
  • Log out, and at the login screen, click on the Ubuntu symbol by the username and select the Steam session. This will get you the dedicated Steam session. Make sure that works. Exit out of that, and now let’s make it so we can boot into that new session by default.
  • Enable autologin in LightDM after the fact so that when your machine boots it goes right into Steam’s Big Picture mode (one way to do this is sketched below).
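
One minimal way to wire up that autologin (a sketch; the user name is a placeholder and the “steam” session name is an assumption, so check /usr/share/xsessions/ for what steamos-compositor actually installs):

sudo tee /etc/lightdm/lightdm.conf <<'EOF'
[SeatDefaults]
autologin-user=steamuser    # placeholder: use your own login
autologin-session=steam     # assumed session name from steamos-compositor
EOF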

We’re using Valve’s xpad module instead of xboxdrv because that’s what they use in SteamOS and I don’t want to deviate too much. But if you prefer xboxdrv, then follow this guide.

  • Steam updates itself at the client level, so there’s no need to worry about that. The final step for a console-like experience is to enable automatic updates (one option is sketched below). Remember you’re using PPAs, so if you’re not confident that you can fix things, just leave it and do maintenance by hand every once in a while.
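
If you do decide to enable automatic updates, one common approach on Ubuntu (a sketch, not from the original post) is the unattended-upgrades package:

sudo apt-get install unattended-upgrades
sudo dpkg-reconfigure -plow unattended-upgrades   # turns on the periodic upgrade job
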
Step 4: Home Theater Bling

If you’re going to have a nice living room box, then let’s use it for other things. I have a dedicated server with media that I share out with Plex Media Server, so in this step I’ll install the client side.

Plex Home Theater:

sudo add-apt-repository ppa:plexapp/plexht
sudo add-apt-repository ppa:pulse-eight/libcec
sudo apt-get update && sudo apt-get install plexhometheater

In Steam you can then click on the + symbol, Add a non-steam game, and then add Plex. Use the gamepad (not the stick) to navigate the UI once you launch it. If you prefer XBMC/Kodi you can install that instead. I found that the controller also works out of the box there, so it’s a nice experience no matter which one you choose.

Step 5: In Home Streaming

This is a killer Steam feature that allows you to stream your Windows games to your new console. It’s very straightforward: just have both machines on and logged into Steam on the same network, they will autodiscover each other, and your Windows games will show up in your Ubuntu/Steam UI, where you can stream them. Though it works surprisingly well over wireless, you’ll definitely want to ensure you’ve got gigabit ethernet if you want to stream games at 1080p and 60 frames per second.

Conclusion

And that’s basically it! There’s tons of stuff I’ve glossed over, but these are the basic steps. There are lots of little things you can do, like removing a bunch of desktop packages you won’t need (so you don’t have to download and update them), and other tips and tricks. I’ll try to keep everyone up to date on how it’s going.

Enjoy your new next-gen gaming console!

TODO:

  • You can change out the plymouth theme to use the one for SteamOS - but I have an SSD in the box and combined with the fast boot it never comes up for me anyway.
  • It’d be cool to make a prototype of Ubuntu Core and then provide Steam in an LXC container on top of that so we don’t have to use a full blown desktop ISO.

Ubuntu Podcast from the UK LoCo: S07E21 – The One with the Rumour

Planet Ubuntu - Thu, 2014-08-21 20:23

Laura Cowen, Alan Pope, and Mark Johnson are in Studio L for Season Seven, Episode Twenty-One of the Ubuntu Podcast!

In this week’s show:-

We’ll be back next week, when we’ll be interviewing Daniel Holbach, and we’ll go through your feedback.

Please send your comments and suggestions to: podcast@ubuntu-uk.org
Join us on IRC in #uupc on Freenode
Leave a voicemail via phone: +44 (0) 203 298 1600, sip: podcast@sip.ubuntu-uk.org and skype: ubuntuukpodcast
Follow us on Twitter
Find our Facebook Fan Page
Follow us on Google+

Michael Hall: Communicating Recognition

Planet Ubuntu - Thu, 2014-08-21 13:00

Recognition is like money: it only really has value when it’s being passed between one person and another. Otherwise it’s just potential value, sitting idle. Communication gives life to recognition, turning its potential value into real value.

As I covered in my previous post, Who do you contribute to?, recognition doesn’t have a constant value.  In that article I illustrated how the value of recognition differs depending on who it’s coming from, but that’s not the whole story.  The value of recognition also differs depending on the medium of communication.

Over at the Community Leadership Knowledge Base I started documenting different forms of communication that a community might choose, and how each medium has a balance of three basic properties: Speed, Thoughtfulness and Discoverability. Let’s call this the communication triangle. Each of these also plays a part in the value of recognition.

Speed

Again, much like money, recognition is something that is circulated. Its usefulness is not simply created by the sender and consumed by the receiver, but rather passed from one person to another, and then another. The faster you can communicate recognition around your community, the more utility you can get out of even a small amount of it. Fast communications, like IRC, phone calls or in-person meetups, let you give and receive a higher volume of recognition than slower forms, like email or blog posts. But speed is only one part, and faster isn’t necessarily better.

Thoughtfulness

Where speed emphasizes quantity, thoughtfulness is a measure of the quality of communication, and that directly affects the value of recognition given. Thoughtful communications require consideration upon both receiving and replying. Messages are typically longer, more detailed, and better presented than those that emphasize speed. As a result, they are also usually a good bit slower too, both in the time it takes for a reply to be made, and also the speed at which a full conversation happens. An IRC meeting can be done in an hour, where an email exchange can last for weeks, even if both end up with the same word-count at the end.

Discoverability

The third point on our communication triangle, discoverability, is a measure of how likely it is that somebody not immediately involved in a conversation can find out about it. Because recognition is a social good, most of its value comes from other people knowing who has given it to whom. Discoverability acts as a multiplier (or divisor, if done poorly) to the original value of recognition.

There are two factors to the discoverability of communication. The first, accessibility, is about how hard it is to find the conversation. Blog posts, or social media posts, are usually very easy to discover, while IRC chats and email exchanges are not. The second factor, longevity, is about how far into the future that conversation can still be discovered. A social media post disappears (or at least becomes far less accessible) after a while, but an IRC log or mailing list archive can stick around for years. Unlike the three properties of communication, however, these factors to discoverability do not require a trade off, you can have something that is both very accessible and has high longevity.

Finding Balance

Most communities will have more than one method of communication, and a healthy one will have a combination of them that complement each other. This is important because sometimes one will offer a more productive use of your recognition than another. Some contributors will respond better to lots of immediate recognition rather than a single eloquent one. Others will respond better to formal recognition than informal. In both cases, be mindful of the multiplier effect that discoverability gives you, and take full advantage of opportunities where that plays a larger than usual role, such as during an official meeting or when writing an article that will have higher than normal readership.

Jorge Castro: Free Official Ubuntu Books for Local Teams

Planet Ubuntu - Wed, 2014-08-20 17:16

Prentice Hall has just released the 8th Ed. of “The Official Ubuntu Book”, authored by Matthew Helmke and Elizabeth K. Joseph with José Antonio Rey, Philip Ballew and Benjamin Mako Hill.

This is the book’s first update in 2 years and as the authors state in their Preface, “…a large part of this book has been rewritten—not because the earlier editions were bad, but because so much has happened since the previous edition was published. This book chronicles the major changes that affect typical users and will help anyone learn the foundations, the history, and how to harness the potential of the free software in Ubuntu.”

As with prior editions, publisher Prentice Hall has kindly offered to ship each approved LoCo team one (1) free copy of this new edition. To keep this as simple as possible, you can request your book by following these steps. The team contact shown on our LoCo Team List (and only the team contact) should send an email to Heather Fox at heather.fox@pearson.com and include the following details:

  • Your full name
  • Which team you are from
  • If your team resides within North America, please provide: Your complete street address (the book will ship by UPS)
  • If your team resides outside North America, you will first be emailed a voucher code to download the complete eBook bundle from the publisher site, InformIT, which includes the ePub/mobi/pdf files.

If your team does reside outside North America and you wish to be considered for a print copy, please provide:

Your complete street address, region, country AND IMPORTANT: Your phone number, including country and area code. (Pearson will make its best effort to arrange shipment through its nearest corporate office.)

A few notes:

  • Only approved teams are eligible for a free copy of the book.
  • Only the team contact for each team can make the request for the book.
  • There is a limit of (1) copy of each book per approved team.
  • Prentice Hall will cover postage, but not any import tax or other shipping fees.
  • When you have the books, it is up to you what you do with them. We recommend you share them between members of the team. LoCo Leaders: please don’t hog them for yourselves!

If you have any questions or concerns, please directly contact Pearson/Prentice Hall’s Heather Fox at heather.fox@pearson.com. Also, for those teams who are not approved or yet to be approved, you can still score a rather nice 35% discount on the books by registering your LoCo with the Pearson User Group Program.

Svetlana Belkin: The Certificate of Ubuntu Membership

Planet Ubuntu - Wed, 2014-08-20 17:10

Today, in the mail, came my Certificate of Ubuntu Membership that I requested back in February.  The photo was taken with Ubuntu Touch on my Nexus 7 2013.


Kubuntu: KDE Applications and Development Platform 4.14

Planet Ubuntu - Wed, 2014-08-20 15:11

Packages for the release of KDE SC 4.14 are available for Kubuntu 14.04 LTS and our development release. You can get them from the Kubuntu Backports PPA. It includes an update of Plasma Desktop to 4.11.11.
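
For example, on 14.04 (a sketch; the PPA location is assumed to be ppa:kubuntu-ppa/backports, so verify it on Launchpad first):

sudo add-apt-repository ppa:kubuntu-ppa/backports
sudo apt-get update && sudo apt-get dist-upgrade   # brings in KDE SC 4.14 and Plasma 4.11.11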

Bugs in the packaging should be reported to kubuntu-ppa on Launchpad. Bugs in the software to KDE.

Ubuntu Women: Calling For Testers for Orientation Quiz

Planet Ubuntu - Wed, 2014-08-20 11:56

The Ubuntu Women Project is pleased to present an orientation quiz that aims to help newcomers to the Ubuntu Community find their niche and get involved.  The base quiz was taken from the Ubuntu Italian LoCo. Our plan is to put this quiz on community.ubuntu.com, but we are seeking testers for it first!

How to test it:

Go to the page where the quiz is and play around with answering the questions.  If you find an issue, please e-mail the Ubuntu Women Mailing-List at ubuntu-women@lists.ubuntu.com.  If you want to see the code, you may ask Lyz at lyz@ubuntu.com or me at belkinsa@ubuntu.com.

Jonathan Riddell: Qt Licence Update

Planet Ubuntu - Wed, 2014-08-20 09:27
KDE Project:

Today Qt announced some changes to their licence. The KDE Free Qt team have been working behind the scenes to make these happen and we should be very thankful for the work they put in. Qt code was LGPLv2.1 or GPLv3 (this also allows GPLv2). Existing modules will add LGPLv3 to that. This means I can get rid of the part of the KDE Licensing Policy which says "Note: code may not be copied from Qt into KDE Platform as Qt is LGPLv2.1 only which would prevent it being used under LGPL 3".

New modules, starting with the new web module QtWebEngine (which uses Blink), will be LGPLv3 or GPLv2. Getting rid of LGPLv2.1 means better preserving our freedoms (patents can't be used to restrict, reverse engineering must be allowed, replacing Qt must be allowed, etc). It's not a problem for the new Qt modules to link to LGPLv2 or LGPLv2+ libraries, or applications of any licence (as long as they allow the freedoms needed, such as those listed above). One problem with LGPLv3 is that you can't link a GPLv2-only application to it (not because LGPLv3 prevents it but because GPLv2 prevents it); this is not a problem here because it will be dual licensed as GPLv2 alongside.

The main action this prevents is directly copying code from the new Qt modules into Frameworks, but as noted above we forbid doing that anyway.

With the news that Qt moved to Digia and a new company is being spun out, I had been slightly worried that the new modules would be restricted further to encourage more commercial licences of Qt. This is indeed the case, but it's being done in the best possible way, thanks Digia.

Benjamin Kerensa: Mozilla and Open Diversity Data

Planet Ubuntu - Wed, 2014-08-20 05:28

I have been aware of the Open Diversity Data project for a while. It is the work of the wonderful members of Double Union and their community of awesome contributors. Recently, a Mozillian tweeted that Mozilla should release its diversity data. It is my understanding also that a discussion happened internally, and for whatever reason a full release of Mozilla’s diversity data did not result, although some numbers are available here.

Anyways, I’m now going to bring this suggestion up again and encourage both Mozilla Corporation and Mozilla Foundation to release individual diversity data reports in the form of some numbers, graphs, and a blog post, and perhaps a combined one for both orgs.

I would encourage other Mozillians to support the push for opening this data by sharing this blog post on social media as an indicator of support for Open Diversity Data publishing by Mozilla, or by retweeting this.

I really think our Manifesto, specifically principle number two, encourages us to support initiatives like this. If other companies (kudos!) that are less transparent than Mozilla can do it, then I think we have to do this.

Finally, I would like to encourage Mozilla to consider creating a position of VP of Diversity and Inclusion to oversee our various diversity and inclusion efforts and to help plan and create a vision for future efforts at Mozilla. Sure, we already have people who kind of do this, but it is not their full-time role.

Anyways that’s all I have on this…

Svetlana Belkin: Ohio Team Wiki Update/Clean Up

Planet Ubuntu - Tue, 2014-08-19 19:23

As my most recent Ubuntu-related project, and a very overdue task on my To Do list, I worked on updating the Ohio Team Wiki pages (I took some ideas from how the Doc Team Wiki pages look):

  • I removed the unneeded pages after I had approval from Stephen Michael Kellat
  • I lumped similar pages such as the agendas to the meetings and the various events into main pages (/Meetings and /Events) and linked those two main pages on the home page
  • I only have two things left to do, which are to fix the banner and to find all of the missing records from the mailing list

Hopefully it is cleaner and easier to get the information that one needs.


Bodhi.Zazen: Music practice

Planet Ubuntu - Tue, 2014-08-19 18:26

With the advent of YouTube, there is a plethora of music “lessons” available on the internet. When learning new riffs, however, it is helpful to be able to alter the speed of playback and play selective sections of the lesson.

For some time I have been using Audacity, which has the advantage of cross-platform availability. However, Audacity is a bit of overkill and I find it a bit slow at times.

In addition, when selecting a particular segment within the lesson, skipping dialog or parts already mastered, Audacity is a bit “clunky” and somewhat time consuming. Alternately, one can splice the lessons with ffmpeg, again somewhat time consuming.

Recently I came across a simple, no-frills, lightweight solution: “Play it slowly”.

Home page

Download (github)

Play it slowly is a lightweight application with a simple, clean interface. It is easy to use and offers basic features such as:

  1. Slow the playback speed without altering the pitch.
  2. Easily mark, move, and reset sections of a track for playback.
  3. Easy to start/stop/restart playback.

Play it slowly is in the Debian and Ubuntu repositories:

sudo apt-get install playitslowly

For Fedora, first install the dependencies:

yum install gstreamer-python gstreamer-plugins-bad-extras

Download the source code from the above link (version 1.4.0 at the time of this writing).

Extract the tarball and install:

tar xvzf playitslowly-1.4.0.tar.gz
cd playitslowly-1.4.0
sudo python setup.py install

For additional options see the README or run:

python setup.py --help

Ubuntu Kernel Team: Kernel Team Meeting Minutes – August 19, 2014

Planet Ubuntu - Tue, 2014-08-19 17:17
Meeting Minutes

IRC Log of the meeting.

Meeting minutes.

Agenda

20140819 Meeting Agenda


Release Metrics and Incoming Bugs

Release metrics and incoming bug data can be reviewed at the following link:

http://people.canonical.com/~kernel/reports/kt-meeting.txt


Status: Utopic Development Kernel

The Utopic kernel has been rebased to the first v3.16.1 upstream stable
kernel and uploaded to the archive, i.e. linux-3.16.0-9.14. Please test
and let us know your results.
—–
Important upcoming dates:
Thurs Aug 21 – Utopic Feature Freeze (~2 days away)
Mon Sep 22 – Utopic Final Beta Freeze (~5 weeks away)
Thurs Sep 25 – Utopic Final Beta (~5 weeks away)
Thurs Oct 9 – Utopic Kernel Freeze (~7 weeks away)
Thurs Oct 16 – Utopic Final Freeze (~8 weeks away)
Thurs Oct 23 – Utopic 14.10 Release (~9 weeks away)


Status: CVE’s

The current CVE status can be reviewed at the following link:

http://people.canonical.com/~kernel/cve/pkg/ALL-linux.html


Status: Stable, Security, and Bugfix Kernel Updates – Trusty/Saucy/Precise/Lucid

Status for the main kernels, until today (Aug. 19):

  • Lucid – verification & testing
  • Precise – verification & testing
  • Trusty – verification & testing

    Current opened tracking bugs details:

  • http://kernel.ubuntu.com/sru/kernel-sru-workflow.html

    For SRUs, SRU report is a good source of information:

  • http://kernel.ubuntu.com/sru/sru-report.html

    Schedule:

    cycle: 08-Aug through 29-Aug
    ====================================================================
    08-Aug Last day for kernel commits for this cycle
    10-Aug – 16-Aug Kernel prep week.
    17-Aug – 23-Aug Bug verification & Regression testing.
    24-Aug – 29-Aug Regression testing & Release to -updates.

    cycle: 29-Aug through 20-Sep
    ====================================================================
    29-Aug Last day for kernel commits for this cycle
    31-Aug – 06-Sep Kernel prep week.
    07-Sep – 13-Sep Bug verification & Regression testing.
    14-Sep – 20-Sep Regression testing & Release to -updates.


Open Discussion or Questions? Raise your hand to be recognized

No open discussion.

The Fridge: Interview with Svetlana Belkin

Planet Ubuntu - Tue, 2014-08-19 07:46

Elizabeth K. Joseph: Can you tell us a little about yourself?

Svetlana Belkin: I am Svetlana Belkin, active in the Ubuntu Community since July 2013, and I gained my Ubuntu Membership on February 6, 2014. This month will mark my first year of working in the Ubuntu Community.

I am not a developer, I cannot code to save my life!

I am a biology major with a focus on Cellular and Molecular Biology who uses Ubuntu because it and the FOSS world match how I think.

EKJ: What inspired you to get involved with the Ubuntu Community?

SB: An idea for a multi-player online game that is based on Mario Party but instead of mini-games, players use cards that are either attack, defense, or traps to get coins. The one with the most coins wins but everyone can keep the coins that they gained to shop for more cards and avatar items.

This was about one year ago, and I wanted to find someone who could help develop it. Since I am a woman, I joined Ubuntu Women to seek one out. But I quickly found out that it was a bad choice and I started to work on improving the Ubuntu Women Wiki to have it up-to-date. That’s what led me into doing other things within the Ubuntu Community.

EKJ: What are your roles within the Ubuntu community and what plans do you have for the future?

SB: My main role within the Ubuntu Community is to help newcomers to find their place in the Community and to network with women (Ubuntu Women) and scientists (Ubuntu Scientists) alike to improve the FOSS world.

I also help the Ubuntu Documentation team to keep the Ubuntu Community Help Wiki up-to-date.

My future plans are to train new leaders within the Community so they know how to lead.

EKJ: Have you hit any barriers with getting involved and what can you recommend to newcomers?

SB: Newcomers need to remember that they do not need to be a developer to get involved – that’s the barrier that I hit.

I would recommend to newcomers that they should not think that they need to be developers, and they should take these steps: they should start out small, join the team/project and its mailing-list, make sure to read all of the documentation for that project/team, and introduce themselves to the team via the mailing-lists. The best route – if they do not know what skills they have or what teams/projects to join – is to go to their Local Community and ask on the mailing list or their IRC channel.

EKJ: Is there anything you feel the Ubuntu project could improve on when it comes to new folks coming to the project?

SB: The main thing is the lack of Ubuntu Recruitment/Promo/Comms teams where the new folks can join and ask what teams/projects they can put their skills into. The other flavors have these teams but Ubuntu does not.

EKJ: What other things are you interested in outside of open source and Ubuntu?

SB: I make art from time to time, and play my favorite (and only) Multi-User Dungeon, Armageddon MUD.

Originally posted by Elizabeth K. Joseph in Full Circle Magazine Issue #87 on July 25, 2014

The Fridge: Ubuntu Weekly Newsletter Issue 379

Planet Ubuntu - Tue, 2014-08-19 04:22

Stephen Michael Kellat: Restart

Planet Ubuntu - Tue, 2014-08-19 00:00

Eventually even a blog can be brought back to life. A simple list may be best for now, at the very least:

  • I now have my General-class amateur radio license in the United States
    • I have to find a good way to make use of it
    • The hamexam package in the archive was a great study aid
    • The use of fldigi on high frequency shortwave bands is now possibly in order
  • I am blessed to have a great team of deputies to assist in leading Ubuntu Ohio
  • Podcasting is still offline for the time being due to work-related circumstances
  • The work of the LoCo Council has gotten interesting, though due to OpenID issues I cannot write blog posts on that site to talk about things
  • I have been increasingly assisting in backporting software related to the pump.io network, such as dianara and pumpa

Daniel Pocock: Is WebRTC private?

Planet Ubuntu - Mon, 2014-08-18 19:55

With the exciting developments at rtc.debian.org, many people are starting to look more closely at browser-based real-time communications.

Some have dared to ask: does it solve the privacy problems of existing solutions?

Privacy is a relative term

Perfect privacy and its technical manifestations are hard to define. I had a go at it in a blog post on the Gold Standard for free communications technology on 5 June 2013. By pure coincidence, a few hours later, the first Snowden leaks appeared and this particular human right was suddenly thrust into the spotlight.

WebRTC and ICE privacy risk

WebRTC does not give you perfect privacy.

At least one astute observer at my session at Paris mini-DebConf 2014 questioned the privacy of Interactive Connectivity Establishment (ICE, RFC 5245).

In its most basic form, ICE scans all the local IP addresses on your machine and NAT gateway and sends them to the person calling you so that their phone can find the optimal path to contact you. This clearly has privacy implications as a caller can work out which ISP you are connected to and some rough details of your network topology at any given moment in time.

What WebRTC does bring to the table

Some of this can be mitigated, though: an ICE implementation can be tuned so that it only advertises the IP address of a dedicated relay host. If you can afford a little latency, your privacy is safe again. This privacy-protecting step could be taken by a browser vendor such as Mozilla, or it can be done in JavaScript by a softphone such as JSCommunicator.

Many individuals are now using a proprietary softphone to talk to family and friends around the world. The softphone in question has properties like a virus, siphoning away your private information. This proprietary softphone is also an insidious threat to open source and free operating systems on the desktop. WebRTC is a positive step back from the brink. It gives people a choice.

WebRTC is a particularly relevant choice for business. Can you imagine going to a business and asking them to make all their email communication through hotmail? When a business starts using a particular proprietary softphone, how is it any different? WebRTC offers a solution that is actually easier for the user and can be secured back to the business network using TLS.

WebRTC is based on open standards, particularly HTML5. Leading implementations, such as the SIP over WebSocket support in reSIProcate, JSCommunicator and the DruCall module for Drupal are fully open source. Not only is it great to be free, it is possible to extend and customize any of these components.

What is missing

There are some things that are not quite there yet and require a serious effort from the browser vendors. At the top of the list for privacy:

  • ZRTP support - browsers currently support DTLS-SRTP, which is based on X.509. ZRTP is more like PGP, a democratic and distributed peer-to-peer privacy solution without needing to trust some central certificate authority.
  • TLS with PGP - the TLS protocol used to secure the WebSocket signalling channel is also based on X.509 with the risk of a central certificate authority. There is increasing chatter about the need for TLS to use PGP instead of X.509 and WebRTC would be a big winner if this were to eventuate and be combined with ZRTP.

You may think "I'll believe it when I see it". Each of these features, including WebRTC itself, is a piece of the puzzle and even solving one piece at a time brings people further out of danger from the proprietary mess the world lives with today.

Jono Bacon: New Facebook Page

Planet Ubuntu - Mon, 2014-08-18 16:35

As many of you will know, I am really passionate about growing strong and inspirational communities. I want all communities to benefit from well organized, structured, and empowering community leadership. This is why I wrote The Art of Community and Dealing With Disrespect, and founded the Community Leadership Summit and Community Leadership Forum to further the art and science of community leadership.

In my work I am sharing lots of content, blog posts, videos, and other guidance via my new Facebook page. I would be really grateful if you could hop over and Like it to help build some momentum.

Many thanks!

Stuart Langridge: Do you do it anyway?

Planet Ubuntu - Mon, 2014-08-18 13:17

Let us imagine that you are a designer, designing a thing. Doesn’t matter here what the thing is; it might be a piece of software or a phone or a car or a coffee machine or a tea cup. Further imagine that there are already lots of people using this thing, and a whole bunch of those people legitimately want it to do something that it currently does not. You’d like the thing to have this feature, but importantly you can’t work out how to add it such that it’s beautifully integrated and doesn’t compromise any of the other stuff. (That might be because there genuinely is no way, or just because you haven’t thought of one yet, and obviously the second of those looks like the first one to you.)

The fundamental question dividing everyone into two camps is: do you do it anyway?

If you add the feature, then you’ll either do so relatively visibly or relatively invisibly. If it’s relatively visible then it will compromise the overall feel of the thing and maybe make it more difficult to use its existing features (because you can’t think of a seamless brilliant way to add it, so it will be unseamless and unbrilliant and maybe get in the way). If you add it relatively invisibly, then most people will not even discover that it exists and only a subset of those will actually learn how to use it at all.

However, if you don’t add it, then lots of people who could benefit from it, want it, and could have it aren’t allowed, even if they’re prepared to learn a complex way to make it happen.

These two camps, these two approaches, are contradictory, in opposition, irreconcilable, and equally valid.

It’s the “equally valid” bit that people have trouble with.

This war is played out, day after day, hour after hour, in every field of endeavour. And not once have I ever seen anyone become convinced by an argument to switch to the other side. I have seen people switch, and lots of them, but it’s a gradual process; nobody reads a frantic and shrill denunciation of their current opinion and then immediately crosses the floor.

There are also many people who would protest this division into two opposing camps; who would say that one should strike a balance. Everyone who says this is lying. It is not possible to strike a balance between these two things. You may well believe yourself to straddle this divide like an Adonis, but what that actually means is that sometimes you’re in one camp and sometimes you’re in the other, not that you’re simultaneously in both. Saying “well, you should add the feature, but as sensitively as possible” means “if it can’t be done sensitively, I’ll do it anyway”. Saying “it’s important to be user-focused, not feature-focused” means “people don’t get to have this thing even if they want it”. Doubtless you, gentle reader, would disagree with one of those two characterisations, which proves the point. Both views are equally valid, and they’re in opposition. If you’re of the opinion that it should be possible to straddle this divide, then help those of your comrades who can’t yet do so; help the do-it-anyways to understand that it’s sometimes better to leave a thing out, and help the deny-it-anyways to understand that some of their users are happy to learn about a thing to get best use from it. But if you’re already in one camp, stop telling the other camp that they’re wrong. We have enough heat and not enough light already.

John Baer: Considering Acer nVidia K1 Chromebook

Planet Ubuntu - Sun, 2014-08-17 20:17

Talked about for nearly a year, the nVidia Tegra K1 Chromebook arrives via the Acer Chromebook 13 (CB5-311).

How will this Chromebook stack up against the competition? The newly announced nVidia Shield Tablet gives us a glimpse of what the K1 is capable of.

Despite the largely similar clock speeds compared to the Snapdragon 800 we see that the Tegra K1 is generally a step above in performance. Outside of Apple’s A7 SoC and x86 SoCs, NVIDIA is generally solidly ahead of the competition.

When it comes to GPU performance, there’s really no question: the Tegra K1 is easily the fastest in all of our GPU benchmarks. It handily beats every other ARM SoC, including the newest generation of SoCs such as the recently introduced Snapdragon 805 and its Adreno 420 GPU. It’s worth noting that the Snapdragon 805 is likely aimed more at smartphones than tablets, although we are looking at its performance in Qualcomm’s tablet development platform here. Until we get a look at Snapdragon 805 power consumption we can’t really draw any perf/watt conclusions here. Ultimately, the only thing that can top the Shield Tablet is Surface Pro line, which uses more powerful laptop-class hardware.

Source: AnandTech

For $279 you get the following.

  • NVIDIA Tegra K1 Quad Core 2.1 GHz Processor
  • 2 GB DDR3L SDRAM
  • 16 GB Internal Storage
  • 13.3-Inch Screen, NVIDIA Kepler GPU with 192 NVIDIA CUDA cores
  • 1366 x 768 pixels Screen Resolution
  • Chrome OS
  • 802.11ac WiFi
  • 13-hour battery life

For $379 you get all of the above and the following.

  • 4 GB DDR3L SDRAM
  • 32 GB Internal Storage
  • 1920 x 1080 pixels Screen Resolution
  • 11-hour battery life

As good as the above looks, I do not expect the early reviews to be kind.

CPU Performance

The first complaint will be the anticipated K1 Google Octane score of 7628 (some are suggesting it may achieve 8000). The new Acer C720 (Core i3-4005U, 4GB RAM) boasts an Octane score of 14530. The Acer C720 (Celeron 2955U, 2GB RAM) boasts an Octane score of 11502. Admittedly this added power comes at the expense of battery life but the rated 7.5 hours from these Intel powered Chromebooks is adequate and many feel an Octane score of 11000 is the minimum required to support a quality Chrome OS user experience.

TN Screen Panel

Manufacturers are reluctant to offer IPS screen panel options. The reasons are the education market and cost. Education is a big driver in today’s Chromebook market, and TN-quality displays are considered adequate.

Build Quality

For a laptop priced at less than $400, unibody polycarbonate plastic is acceptable as long as it doesn’t flex or bend. Historically, Chromebooks from Acer have been criticized for their build quality.

Saving Grace

GPU Performance

As noted in the Shield Tablet review, the Kepler GPU is best in class. Assuming the keyboard and trackpad perform well and the TN panel does a reasonably good job of displaying content, the GPU could move this Chromebook to the head of the class.

Gaming

You don’t really think of a Chromebook as a gaming console but if you are targeting a teen to young adult audience, gaming may be the item which closes the deal.

We game on all our devices, so why should a Chromebook be any different? With the upcoming launch of addicting games like Miss Take and Oort, along with the promise of great titles thanks to the adoption of WebGL in Unreal Engine 4 and Unity 5, the future of Chromebooks is looking really fun.

Source: TegraZone

Are you considering the Acer nVidia K1 Chromebook? The standard and HD models are up for pre-order on Amazon.

