Packages for the release of KDE's document suite Calligra 2.8.1 are available for Kubuntu 12.04 LTS and 13.10. You can get them from the Kubuntu Backports PPA (alongside KDE SC 4.13). They are also in our development release.
In this week’s show:
- We take a look at what’s been happening in the news:
- Google+…The End?
- The Linux Foundation announces the Core Infrastructure Initiative, with industry support…
- A US magistrate judge ruled that companies issued with warrants for customer data must produce it even if it’s stored on servers outside the USA.
- Google aren’t so evil…
- Awesome project lets you copy text from images…and more…
- Gaming with Tony: Unity 3D stuff
- We also take a look at what’s been happening in the community:
Please send your comments and suggestions to: email@example.com
Join us on IRC in #uupc on Freenode
Leave a voicemail via phone: +44 (0) 203 298 1600, sip: firstname.lastname@example.org and skype: ubuntuukpodcast
Follow us on Twitter
Find our Facebook Fan Page
Follow us on Google+
This means Fedora 20 users can now try WebRTC more easily.
The same version is already available in Debian wheezy-backports and Ubuntu trusty.

Get started today
Install the resiprocate-repro proxy server package using yum.
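On Fedora 20 that is a one-liner (run as root):

```shell
yum install resiprocate-repro
```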
Set up a DNS entry. Here is what we will use in the example:

Domain: sip-ws1.example.org
WebSocket URL (use this in JSCommunicator or JsSIP): ws://sip-ws1.example.org
IP address of the server: 198.51.100.50

so the DNS entry will be:

sip-ws1.example.org. IN A 198.51.100.50
Notice that in the ws:// URL, we do not specify a port. This means port 80 is used by default. You can also use a non-standard port if you prefer or if you don't have root permissions.
Now edit the file /etc/repro/repro.config, adding a transport definition for WebSockets (WS) and changing a few other parameters from their defaults:

# Bind to 198.51.100.50 on port 80
Transport1Interface = 198.51.100.50:80
# Use WS (can also use WSS for TLS, see repro.config for full details)
Transport1Type = WS
# If using WSS, must also change the transport param here
Transport1RecordRouteUri = sip:sip-ws1.example.org;transport=WS
EnableFlowTokens = true
DisableOutbound = false
# Disable all authentication - just for testing
DisableAuth = true
# Allow http admin access on all interfaces,
# default is 127.0.0.1
# HttpBindAddress = 0.0.0.0
Now set up a password for the web admin tool using htdigest:

# htdigest /etc/repro/users.txt repro admin
After all that, restart the repro process:

# service repro restart
Go to the web interface on port 5080 (only listening on localhost by default), go to the "ADD DOMAIN" page and add sip-ws1.example.org
Now restart repro again so it recognises the new domain:

# service repro restart
- Set up a TURN server for NAT traversal
- Use WebSockets over TLS (WSS) instead of regular WS mode
- Set up authentication (see the various options, including client certificate support, in repro.config)
- Connect to regular SIP infrastructure such as Asterisk
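For the WSS step, a minimal sketch of the transport changes in /etc/repro/repro.config, building on the quickstart above (port 443 is an assumption here; the TLS certificate settings are described in the comments in repro.config itself):

```
# Switch the WebSocket transport to TLS (WSS)
Transport1Interface = 198.51.100.50:443
Transport1Type = WSS
# The transport param must match the new transport type
Transport1RecordRouteUri = sip:sip-ws1.example.org;transport=WSS
```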
Please come and ask on the repro-users mailing list.
Upon learning about the Heartbleed vulnerability in OpenSSL, my first thoughts were pretty desperate. I basically lost all faith in humanity's ability to write secure software. It's really that bad.
I spent the next couple of hours drowning in the sea of passwords and certificates I would personally need to change...ugh :-/
As the hangover of that sobering reality arrived, I started thinking about the various systems over the years that I've designed, implemented, or been otherwise responsible for, and how Heartbleed affected those services. Another throbbing headache set in.
I patched DivItUp.com within minutes of Ubuntu releasing an updated OpenSSL package, and re-keyed the SSL certificate as soon as GoDaddy declared that it was safe for re-keying.
Likewise, the Ubuntu entropy service was patched and re-keyed, along with all Ubuntu-related https services, by Canonical IT. I pushed a new package of the pollinate client with updated certificates to Ubuntu 14.04 LTS (trusty) the same day.
That said, I did enjoy a bit of measured satisfaction, in one controversial design decision that I made in January 2012, when creating Gazzang's zTrustee remote key management system.
All default network communications, between zTrustee clients and servers, are encrypted twice. The outer transport layer network traffic, like any https service, is encrypted using OpenSSL. But the inner payloads are also signed and encrypted using GnuPG.
Hundreds of times, zTrustee and I were questioned or criticized about that design -- by customers, prospects, partners, and probably competitors.
In fact, at one time there was pressure from a particular customer/partner/prospect to disable the inner GPG encryption entirely and have zTrustee rely solely on transport-layer OpenSSL, for performance reasons. Try as I might, I eventually lost that fight, and we added the "feature" (as a non-default option). That someone might have some re-keying to do...
But even in the face of the Internet-melting Heartbleed vulnerability, I'm absolutely delighted that the inner payloads of zTrustee communications are still protected by GnuPG asymmetric encryption and are NOT vulnerable to Heartbleed style snooping.
In fact, these payloads are some of the very encryption keys that guard YOUR health care and financial data stored in public and private clouds around the world by Global 2000 companies.
Truth be told, the insurance against crypto library vulnerabilities that zTrustee bought by using GnuPG and OpenSSL in combination was really a secondary objective.
The primary objective was actually to leverage asymmetric encryption, to both sign AND encrypt all payloads, in order to cryptographically authenticate zTrustee clients, ensure payload integrity, and enforce key revocations. We technically could have used OpenSSL for both layers and even realized a few performance benefits -- OpenSSL is faster than GnuPG in our experience, and can leverage crypto accelerator hardware more easily. But I insisted that the combination of GPG over SSL would buy us protection against vulnerabilities in either protocol, and that was worth any performance cost in a key management product like zTrustee.
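The layered design can be illustrated with a deliberately toy sketch: an inner payload is authenticated and encrypted, then wrapped again for transport, so an attacker who breaks only the outer layer recovers nothing readable. The hash-based XOR cipher below stands in for OpenSSL and GnuPG and is NOT real cryptography; it exists only to make the two-layer structure concrete.

```python
# Toy illustration of double-layer ("GPG over SSL" style) protection.
# NOT real crypto: the XOR stream cipher here is for demonstration only.
import hashlib
import hmac
import os

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Derive a pseudo-random keystream from key+nonce (toy construction)."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(key: bytes, plaintext: bytes) -> bytes:
    nonce = os.urandom(16)
    ct = bytes(a ^ b for a, b in zip(plaintext, keystream(key, nonce, len(plaintext))))
    return nonce + ct

def decrypt(key: bytes, blob: bytes) -> bytes:
    nonce, ct = blob[:16], blob[16:]
    return bytes(a ^ b for a, b in zip(ct, keystream(key, nonce, len(ct))))

inner_key, outer_key, mac_key = os.urandom(32), os.urandom(32), os.urandom(32)
payload = b"customer encryption key material"

# Inner layer: authenticate (the "sign" analog) then encrypt.
tag = hmac.new(mac_key, payload, hashlib.sha256).digest()
inner = encrypt(inner_key, tag + payload)

# Outer layer: transport encryption, playing the role of TLS.
wire = encrypt(outer_key, inner)

# An attacker who compromises ONLY the outer key (Heartbleed-style)
# recovers just the inner ciphertext, not the payload.
leaked = decrypt(outer_key, wire)
assert payload not in leaked

# The legitimate receiver peels both layers and verifies integrity.
blob = decrypt(inner_key, decrypt(outer_key, wire))
tag2, recovered = blob[:32], blob[32:]
assert hmac.compare_digest(tag2, hmac.new(mac_key, recovered, hashlib.sha256).digest())
assert recovered == payload
```

The point of the sketch is the asymmetry of failure: losing `outer_key` alone exposes only another layer of ciphertext.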
In retrospect, this makes me wonder why diverse, backup, redundant encryption, isn't more prevalent in the design of security systems...
Every elevator you have ever used has redundant safety mechanisms. Your car has both seat belts and air bags. Your friendly cashier will double bag your groceries if you ask. And I bet you've tied your shoes with a double knot before.
Your servers have redundant power supplies. Your storage arrays have redundant hard drives. You might even have two monitors. You might be carrying a laptop, a tablet, and a smart phone.
Moreover, important services on the Internet are often highly available, redundant, fault tolerant or distributed by design.
But the underpinnings of the privacy and integrity of the very Internet itself are usually protected only once, with transport layer encryption of the traffic in motion.
At this point, can we afford the performance impact of additional layers of security? Or, rather, at this point, can we afford not to use all available protection?
p.s. I use both dm-crypt and eCryptFS on my Ubuntu laptop ;-)
- What is Ubuntu
- Open Hardware
- C++ with Cocos2dx
There were approximately 300 people at this event, many of them high school students. The event helped students understand the importance of Free Software in our daily lives.
What is FLISOL?

FLISOL, an acronym for Festival Latinoamericano de Instalación de Software Libre (Latin American Free Software Install Fest), is the biggest event for spreading Software Libre since 2005, held simultaneously in different countries across Latin America.
FLISOL San Pedro Sula (FLISOL SPS)

This is a newspaper article talking about FLISOL
The FLISOL took place in San Pedro Sula, Honduras, at the UNAH-VS university.
For the past cycle we have been working with IBM to bring Ubuntu 14.04 to the POWER8 architecture. For my part I’ve been working on helping get our collection of over 170 services to Just Work for this new set of Ubuntu users.
This week Mark Shuttleworth and Doug Balog (IBM) demoed how people would use a POWER8 machine. Since Ubuntu is supported, it comes with the same great tools for deploying and managing your software in the cloud. Using Juju we were able to fire up Hadoop with Ganglia monitoring, a SugarCRM site with MariaDB and memcached, and finally a Websphere Liberty application server serving a simple Pet Store:
No part of this demo is staged; I literally went from nothing to fully deployed in 178 seconds on a POWER8. Dynamic languages just work, and most things in the archive (that are not architecture specific) also just work; we recompiled the archive to ppc64le over the course of this cycle and if you’re using 14.04 you’re good to go.
For reference here’s the entire Juju demo as we did it from behind the curtain:
This is the power of Juju!
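As an illustrative sketch only (the charm names below are assumptions based on the services named above, not the actual demo script), a deployment along those lines with the Juju CLI of the time would look roughly like:

```shell
# Hadoop with Ganglia monitoring
juju deploy hadoop
juju deploy ganglia
juju add-relation hadoop ganglia

# SugarCRM backed by MariaDB and memcached
juju deploy sugarcrm
juju deploy mariadb
juju deploy memcached
juju add-relation sugarcrm mariadb
juju add-relation sugarcrm memcached
```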
Ubuntu 14.04 LTS ships with nginx 1.4, which is now in main for the first time. Packages in main are covered by the Ubuntu Security Team and generally receive particular focus and attention in Ubuntu. This brings nginx up to par with Apache as a first class citizen in Ubuntu.
This move also led us to closer collaboration with nginx upstream. This is great to see happening in Ubuntu, and can only help to improve quality in our ecosystem.
Note that it is only nginx, nginx-core and the other support packages nginx-doc and nginx-common that are in main. The other packages (extras, full, light, naxsi etc.) contain third party plugins and thus remain in universe. See below for details.

Background
Thomas Ward had been looking after the nginx packages in Ubuntu for quite a while, so when I received requests to get nginx into main, I made sure to contact him. One requirement for main inclusion is a team commitment to look after the package. We concluded that Thomas would carry on looking after nginx in general in Ubuntu, but that the rest of the Ubuntu Server Team would be able to back him as necessary.
Following Jorge's blog post about nginx plans for main, Sarah Novotny from nginx upstream contacted us to see how we might be able to collaborate. We are now all in touch so that we can work together to make the nginx experience better for Ubuntu users. I made sure that we all connected with the Debian nginx team also.
Thomas also blogged about nginx in main as soon as it landed.

Packaging notes
There are a couple of notable differences in Ubuntu's nginx packaging (inherited from Debian):
The default path served is not /var/www/ like it is with Apache. Instead, it is /usr/share/nginx/html/. This directory contains the index.html file that is served by default. However, /usr/share/ is not a suitable location to place your own files to serve, since this area is maintained by packaging. Instead, you should configure nginx to serve from a different path, and then use that. According to the Filesystem Hierarchy Standard, /srv is a suitable path to use for this.
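For example, a minimal server block serving from /srv instead might look like this (the path /srv/www/example and the server name are placeholders for this sketch):

```nginx
server {
    listen 80;
    server_name example.com;

    # Serve your own content from under /srv,
    # not the package-managed /usr/share/nginx/html/
    root /srv/www/example;
    index index.html;
}
```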
Placing your own files in /usr/share/nginx/html/ is dangerous, as they can be arbitrarily overwritten by package upgrades. This unfortunate behaviour has been reported in bug 1194074, and there has been some discussion in Debian bug 730382. But as the default document root was a deliberate choice by the Debian nginx maintainers, there isn't yet any solution to stop users falling into this trap, except to know about it. So please heed this warning and make sure that you change your document root appropriately.
The nginx daemon does not start by default as soon as the package is installed. You must do this by hand using service nginx start. This makes sense since you will usually need to reconfigure nginx to use a different document root first (see the previous point).
A requirement for main inclusion in Ubuntu is quality and maintainability from a security perspective. The security team reviewed nginx and passed this requirement for nginx itself, noting that "Nginx is high-quality legible code, excellent explanatory comments and platform notes, very useful utility functions, and defensive error checking and logging".
However, some third party modules shipped with nginx in Debian varied in quality, so did not pass this requirement for main inclusion.
Since nginx does not currently support dynamically loadable modules, it is not possible for binary distributions such as Debian and Ubuntu to independently build plugin modules using separate source packages. Since module selection is done at build time, this makes it impossible for users to select the precise set of modules they want to have enabled in their nginx binaries, or to add modules written by third parties afterwards, as the distribution has already built the nginx binaries as part of the distribution.
So instead, Debian supplies a selection of third party modules as part of the nginx packaging. This results in binary packages such as nginx-light, nginx-full and nginx-extras, so that users can at least pick from a list of predefined sets of modules, which include common third party modules.
Since third party modules could not be included in main in Ubuntu, a new binary package nginx-core was created which contains only modules supplied in the nginx source itself. It is nginx-core, generated from the nginx source only, and the related support packages nginx, nginx-common, and nginx-doc, that were promoted to main.

nginx 1.6
nginx 1.6 was released on 24 April, which was a week after the release of 14.04 LTS. This means that it will not be available as part of Ubuntu except in future releases. If you need nginx 1.6 on 12.04 or 14.04, you can use a PPA or the upstream-provided package repository. Read on for details.

Multiple package sources
When deciding which source to use, I suggest that you consider the differences in release management, how security updates are handled and by whom, and your deployment's external repository dependencies.
This sort of choice in package repository source seems to be becoming increasingly common for key packages as the Free Software ecosystem continues to develop. Ubuntu Server LTS releases remain the stable, solid ground that production server deployments are built on, but the faster development pace of upstreams means that there is constant demand for newer upstream releases to be made available in older LTS releases.
Here are my own personal opinions on the pros and cons of the different approaches, from an nginx perspective.

nginx from Ubuntu itself
nginx as shipped within Ubuntu follows the Ubuntu release cycle and release management. You get the version available at the time the Ubuntu release you're using entered feature freeze, with only high-impact bugfixes and security updates issued as updates, as curated by the Ubuntu Server Team and the Ubuntu Security Team.
This provides a stable platform, where by stable I mean that the package does not functionally change in the lifetime of the Ubuntu release. From a production perspective, this means that if you successfully deployed last week, you can have maximum confidence in performing an identical redeployment next week. If your workflow is to have a validated, consistently reproducible deployment, then this approach minimises the chance of your deployment regressing, by not changing it. More information on Ubuntu's stable release policy can be found on the Stable Release Updates page.
The trade-off to this release stability is that the latest and greatest is not available, except through six-monthly non-LTS releases, whose use is less common on production servers.
You can see which nginx version ships with which Ubuntu release on the Ubuntu nginx package page. If the version of nginx shipped with Ubuntu is suitable for your needs, then I recommend using this option.

nginx from the Launchpad nginx team PPAs
The Launchpad nginx team PPAs are mainly maintained by Thomas Ward nowadays, who is the same person who generally looks after the nginx packaging in Ubuntu itself. You have a choice of two PPAs "stable" and "mainline", which follow the two lines of upstream development.
The version of nginx in these PPAs moves along with upstream releases. For example, if you installed nginx 1.4 from this repository on 12.04, it would automatically have upgraded to 1.6 when you performed your regular system updates after the PPA was updated to 1.6. This effectively gives you a "rolling release" of the latest nginx, but based on the stable release of Ubuntu 12.04 LTS.
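Adding the stable PPA and installing from it is the usual three-step dance (assuming the team's PPA is published as ppa:nginx/stable; check the Launchpad page for the exact name):

```shell
sudo add-apt-repository ppa:nginx/stable
sudo apt-get update
sudo apt-get install nginx
```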
The advantage of this approach is that you get the latest version of nginx, assuming that this matters to you. For example, if you need a more recent feature that is not present in the version of nginx shipped with the latest LTS release of Ubuntu, then this is useful.
If you want to continue to have the latest version, then this option will work well for you.
However, if you want the latest version but then for it not to change, this option is dangerous: if you stop updating your system from the PPA after using it, you will no longer receive security updates, and bugfixes will generally not be available to you unless you also bump to the latest release version.
Assuming that you do stay up-to-date with the PPA, then instead of managing regression risk by not changing things, regression risk must now necessarily be managed by the nginx upstream team's QA process. In this case, I know that they do have a comprehensive test suite that they run before release, but clearly the regression risk is higher than the approach of not changing anything at all.
Also, note that as a PPA this does not receive the attention of the Ubuntu Security Team. Thomas is very good at keeping this PPA up to date, but clearly PPA maintenance primarily by one person does have a very low bus factor in the context of timely updates.

nginx packages from upstream
nginx upstream also publish package repositories for nginx. The trade-offs in using these are quite similar to using the Launchpad nginx team PPA from a release management perspective.
The key difference is in packaging. nginx upstream packaging is designed to appear largely the same regardless of which distribution you are using, for more consistency across distribution families. This is different from the nginx distribution packaging, which is generally designed to follow the patterns commonly used across the Debian and Ubuntu distributions. So if you move from distribution packaging to upstream packaging or vice versa, you will probably need to adapt your deployment configuration.
I assume that the bus factor for timely updates here is much higher than for the PPA, since this repository is managed by the larger upstream nginx team. Security updates generally originate from upstreams anyway, so in general all nginx repositories are to some extent reliant on the upstream nginx team for updates, of course, regardless of their direct source.
Note that if you choose this option, your deployment will additionally rely on the availability of the upstream nginx package archive. If you use many upstream repositories for many different components in your deployment, then this magnifies to many points of failure. You can mitigate this risk entirely by mirroring the packages you are using. Of course, this type of deployment dependency also applies to anything based on the Ubuntu archive, in that your deployment already has a dependency on the Ubuntu archive if you are not mirroring the Ubuntu packages you use. I think that how you consider this trade-off, or whether it is even a trade-off at all, is a matter of opinion.

Backports
I will note that the Ubuntu Backports repository exists, but it is not currently used for nginx, so I will not discuss this option further here.

Getting help
A big shout out is due to Thomas Ward, who has been looking after both nginx in Ubuntu and the Launchpad nginx team PPA for quite a while now. Thomas was pivotal in getting nginx into main, and blogged about it when it landed.
Thanks also to the Debian nginx packaging team. Ubuntu's nginx packaging is based on their hard work.
And to Sarah Novotny of nginx upstream, for reaching out and collaborating with us to help make the nginx experience of Ubuntu users better.
Now PYLang is available, an upgrade of that application. You can practice English and Spanish in its first release.
How to play
Easy, just complete the right sentence with random words!
PYLang is a handy utility created with a different attitude in mind: it forces the user to write the correct sentence. The app provides colour-coded word items which the user must combine into a correct sentence.
Testing your English
Testing your Spanish
How to install
You can install it on Ubuntu 14.04 or earlier versions.
Just install it from the Ubuntu Software Center in Trusty.
PYLang web page.
The new DVD designs feature:
- 14.04 wallpaper
- Modified design of the folded paper numerals
- An integrated, 14 module graphic
Trusty Tahr – hidden reveal within the DVD pocket
Design exploration – folded paper numerals
Design exploration – graphic numerals
Alternative Desktop Edition concepts
Alternative Server Edition concepts
Ubuntu announced its 12.10 (Quantal Quetzal) release more than 18 months ago, on October 18, 2012. Since changes to the Ubuntu support cycle mean that Ubuntu 13.04 reached end of life before Ubuntu 12.10, the support cycle for Ubuntu 12.10 has been extended slightly to overlap with the release of Ubuntu 14.04 LTS. This will allow users to move from Ubuntu 12.10 to Ubuntu 14.04 LTS (via Ubuntu 13.10).
This period of overlap is now coming to a close, and we will be retiring Ubuntu 12.10 on Friday, May 16, 2014. At that time, Ubuntu Security Notices will no longer include information or updated packages for Ubuntu 12.10.
The supported upgrade path from Ubuntu 12.10 is via Ubuntu 13.10, though we highly recommend that once you’ve upgraded to 13.10, you continue to upgrade through to 14.04, as 13.10’s support will end in July.
Instructions and caveats for the upgrade may be found at:
Ubuntu 13.10 and 14.04 continue to be actively supported with security updates and select high-impact bug fixes. Announcements of security updates for Ubuntu releases are sent to the ubuntu-security-announce mailing list, information about which may be found at:
Since its launch in October 2004 Ubuntu has become one of the most highly regarded Linux distributions with millions of users in homes, schools, businesses and governments around the world. Ubuntu is Open Source software, costs nothing to download, and users are free to customize or alter their software in order to meet their needs.
Originally posted to the ubuntu-announce mailing list on Wed Apr 30 23:55:52 UTC 2014 by Adam Conrad
The judging is finished and the scores are in: we now have the winners of our third Ubuntu App Showdown! Over the course of six weeks, and using the Ubuntu SDK, our community of app developers were able to put together a number of stunningly beautiful, useful, and often highly entertaining apps.
We had everything from games to productivity tools submitted to the competition, written in QML, C++ and HTML5. Some were ports of apps that already existed on other platforms, but the vast majority were original apps created specifically for the Ubuntu platform.
Best of all, these apps are all available to download and install from the Apps Scope on Ubuntu phones and tablets, so if you have a Nexus device or one with a community image of Ubuntu, you can try these and many more for yourself. Now, on to the winners!

QML Apps: Project Dashboard
Judges were astounded by the beautifully crafted Project Dashboard app, the winner of the QML category. Not only were the idea and execution brilliant, but the fact that it’s a convergent QML application that runs on phones, tablets and the desktop earned it those coveted extra points from the jury.
With Project Dashboard you can keep track of the different projects you’re managing or participating in right from your device, in a very intuitive and easy way. For the geeks in us who contribute to several Open Source projects, the excellent integration with GitHub makes it a pleasure to participate in or manage the day-to-day of projects hosted there.
Well done Michael Spencer!

HTML5 Apps: BE Mobile
Say you’re in Belgium and want to get quickly from A to B by public transport? Then you’ll definitely want to use the winner of the HTML5 apps category: BE Mobile.
BE Mobile helps travellers find the best routes and times to travel within Belgium by selecting a journey and searching through a list of public transport services that can be enabled or disabled at will. In addition, a set of Twitter feeds for the services is provided, so that commuters and occasional travellers are informed in real time of disruptions and news for the lines they want to use.
What’s beautiful about it is the way in which, using the SDK’s HTML5 components, the app blends into the system exactly like a QML app. Convergence is also well catered for, with a responsive HTML design.
Congrats to Jelmer Prins!

Ported Apps: 2048
Whoever has been online lately has surely heard about or played 2048. This addictive game created by 19-year-old Gabriele Cirulli has quickly reached Internet popularity status and gained quite a following. And now Ubuntu has its own ported version, thanks to developer Victor Thompson, who takes home the prize for the best ported app in this Showdown!
A simple yet beautiful UI, combined with an engaging game experience, will certainly grant hours of fun trying to reach that craved-for 2048 tile!

Chinese Apps: QmlTextReader and Simple Dict
As a new category, we added “Chinese apps” for this third round of the App Showdown. Boren Zhang, who is also a Core Apps developer, contributed QmlTextReader, which had a simple design as its focus. It allows you to read novels and other texts and works very well for Chinese text. Font size and encoding can be changed, and you can jump back to where you left off. Perfect for long rides on the train or bus! Shengjing Zhu submitted a simple English/Chinese dictionary which is easy to use and very straightforward. Both apps are very useful for readers and will come in handy quite often.

Go and get them all!
With retail Ubuntu phones getting closer and closer, the third Ubuntu App Showdown was a good opportunity to put the Ubuntu SDK, our documentation and our general approach to apps in Ubuntu to the test. In particular, our HTML5 story has evolved to be on par with QML, so thanks a lot to all the community developers and Webapps team engineers who have made this possible. During the course of these six weeks we’ve received great feedback from our developer community, worked out a large number of bugs in the SDK, and added or plan to add many new features to our platform.
It was also great to see how quickly all the apps were published in the app store and how little time had to be spent in reviews. The great thing is: if you have a device to run Ubuntu on or use the emulator, you can very easily install all the apps and take them for a spin. Six weeks is not a long time to write an app and get it to completion, but everybody worked hard, got their app in and we are very likely going to see more updates to the apps in the coming weeks.
Once again congratulations to Boren Zhang, Jelmer Prins, Michael Spencer, Shengjing Zhu, Victor Thompson and a big thank you to everybody who participated or helped those who participated, and everyone who has worked on building the Ubuntu SDK, Click tools and the App Store. And if you’re an app developer, or want to become an app developer, now is your time to get started with the Ubuntu SDK!
Thank you (谢谢) for all the submissions, everyone!
In short, you add one .so file (105K on amd64) (or install it system-wide), run qmlscene on your qml stuff, and you can import and call anything from the Python world and get an asynchronous response. You can also use QML signals. This works pretty much like magic: it's speedy, responsive, and the code is tiny. If you ever looked at pyside or pyqt then this is everything but.
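As a rough sketch of what this looks like in practice (the import version and the use of the stdlib `os` module as a stand-in for your own Python code are assumptions on my part; pyotherside's `importModule` and `call` both run asynchronously and deliver results to callbacks):

```qml
import QtQuick 2.0
import io.thp.pyotherside 1.0

Rectangle {
    width: 300; height: 100
    Text { id: label; anchors.centerIn: parent; text: "loading..." }

    Python {
        id: py
        Component.onCompleted: {
            // Import a Python module, then call one of its functions;
            // results arrive asynchronously in the callbacks below.
            py.importModule('os', function () {
                py.call('os.getcwd', [], function (result) {
                    label.text = result;
                });
            });
        }
    }
}
```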
I strongly recommend watching the Qt Developer Days 2013 presentation by the upstream author. You should also keep an eye on the documentation. I've filed an ITP (to get it packaged in Debian) and we should see it in Debian and Ubuntu very soon.
The upstream git repository has a number of examples. I love the matplotlib example that shows how you can render arbitrary bitmaps and push them to QML trivially.
If you want to give it a try, I've prepared a PPA with the same packages that I uploaded to Debian. Give them a spin and let me know if you find any problems.
Last week I attended the KDE Frameworks Sprint, held in Blue Systems' Barcelona office. Kevin put together the now-traditional sticky note board and we started cranking through the tasks. I think we were quite productive, as this picture of the board at the end of the sprint can attest:
I spent most of my time working on translation support, ironing out details to get translations to install properly, and working with David on the release tarball scripts. I also worked a bit on KApidox, the code that generates the API documentation for KF5 on api.kde.org. I updated the script to match the latest framework changes and switched to the Jinja2 template engine. Using Jinja will make it possible to generate an up-to-date list of frameworks on the landing page, based on the information from the frameworks' metainfo.yaml files. I already have a branch which creates this list, but before I deploy it I want to fix the dependency diagrams on the server. Hopefully I'll figure it out this week.
nothing new to report this week
Release Metrics and Incoming Bugs
Release metrics and incoming bug data can be reviewed at the following link:
Milestone Targeted Work Items
The links above have 0 content for Utopic. I’ll try to get those fixed
up to better reflect our work moving forward.
Status: Utopic Development Kernel
Our Trusty kernel has been pocket copied to seed Utopic. We have opened
the ubuntu-utopic kernel tree. The master-next branch is currently
tracking the v3.15-rc3 kernel. We likely won’t upload a v3.15 based
kernel until a few more -rc releases come out.
Important upcoming dates:
The schedule is currently under review.
The current CVE status can be reviewed at the following link:
Status: Stable, Security, and Bugfix Kernel Updates – Trusty/Saucy/Quantal/Precise/Lucid
Status for the main kernels, until today (Mar. 25):
- Lucid – Prep week
- Precise – Prep week
- Quantal – Prep week
- Saucy – Prep week
Trusty – Prep week
Current opened tracking bugs details:
For SRUs, SRU report is a good source of information:
cycle: 27-Apr through 17-May
25-Apr Last day for kernel commits for this cycle
27-Apr – 03-May Kernel prep week.
04-May – 10-May Bug verification & Regression testing.
11-May – 17-May Regression testing & Release to -updates.
Open Discussion or Questions? Raise your hand to be recognized
No open discussion.
Welcome to the Ubuntu Weekly Newsletter. This is issue #365 for the week April 21 – 27, 2014, and the full version is available here.
In this issue we cover:
- U talking to me?
- Ubuntu Stats
- San Francisco Ubuntu 14.04 Release Party
- Ubuntu Florida 14.04 Release Party photos
- Charles Profitt: Ubuntu 14.04: Subtle shades of success
- Robie Basak: New in Ubuntu 14.04: Apache 2.4
- Ubuntu Classroom: Ubuntu Open Week wrap-up
- Utopic Unicorn (14.10) is Open for Business
- Latest from the web team — April 2014
- In The Press
- In Other News
- Other Articles of Interest
- Featured Audio and Video
- Monthly Team Reports: March 2014
- Upcoming Meetings and Events
- Updates and Security for 10.04, 12.04, 12.10, 13.10 and 14.04
- And much more!
This issue of the Ubuntu Weekly Newsletter is brought to you by:
- Elizabeth Krumbach Joseph
- Paul White
- Emily Gonyer
- Jim Connett
- And many others
Except where otherwise noted, content in this issue is licensed under a Creative Commons Attribution ShareAlike 3.0 License.
In preparation for the Cherry Hill Library’s XP to Linux Installfest I made a website, presentation and worksheet.
Basic site designed to tell users if they are still getting security updates (right now just for Windows) and what their next actions should be.
Introduces Ubuntu and Lubuntu for non-technical users and also how today’s installfest will work.
A place for the user to write down what they use their computer for. And a place for the installfest helpers to write down what works/doesn’t in the LiveCD test.
Items mentioned in the worksheet: The liability release is specific to this Installfest. As for the survey mentioned, I’m very curious just how many non-PAE or 32-bit x86 machines are still in the wild.
Let me know what you think and feel free to modify/use/whatever. (If you need a license consider the docs under Creative Commons Attribution 4.0 International License).
The IT security world is still reeling from the impact of the OpenSSL Heartbleed bug. Thanks to the bug, many experts have been reviewing other technologies to try and find similar risks.
While Heartbleed was hidden away in the depths of the OpenSSL code base, another major security risk has been hiding in plain sight: SMS authentication for web site logins.
Remarkably, a number of firms have started giving customers the ability to receive single-use passwords over SMS for logging into their secure web sites. Some have even insisted that customers can no longer log in without it, denying customers the right to make an important choice about their own security preferences.
To deliver single-use SMS passwords, the SMS must travel through various networks: from the firm's headquarters to a wholesale SMS gateway, through international SMS networks, and finally down the line of the local phone company.
In comparison, properly certified token devices generate a code inside the device in the palm of your hand. The code only travels from the screen to your eyes.
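To make concrete why such token codes never need to cross a network, here is a minimal sketch of the HOTP/TOTP algorithm (RFC 4226/6238) that many of these devices implement: the code is derived locally from a shared secret and a counter or clock, so nothing travels except the digits you type. The secret below is the RFC test key, not a real credential.

```python
import hashlib
import hmac
import struct

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    # HMAC-SHA1 over the 8-byte big-endian counter (RFC 4226)
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                                  # dynamic truncation
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret: bytes, unix_time: int, step: int = 30) -> str:
    # TOTP is HOTP keyed on the current 30-second time window (RFC 6238)
    return hotp(secret, unix_time // step)

print(hotp(b"12345678901234567890", 0))  # RFC 4226 test vector: 755224
```

The same maths runs inside the sealed token; the bank's server performs the identical computation and compares, so no SMS network ever sees the code.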
In a litany of frauds coming in all shapes and sizes, telephone networks have been exploited over and over again because they are almost always the weakest link in the chain. Using the mobile SMS network for authentication is not building on solid ground - some experts even feel it is downright stupidity.
One of the most serious examples was the theft of $150,000,000 from a pension fund deposited with JP Morgan: it was described as a real-life case of Ocean's 11. The authentication was meant to be a phone call rather than an SMS: a phone company employee who was in on the scam duly ensured the call never reached the correct place.
The insecurity of traditional telephone networks has been on display for all the world to see in the ongoing trial of News Corporation executives for phone hacking. If journalists from a tabloid newspaper can allegedly hack a dozen phones before their first cigarette of the day, is it really wise to use an insecure technology like SMS as the cornerstone of a security system for authorizing transactions?
A fraud recently played out on many credit card holders in the UK exploited a low-tech feature of the phone system to trick people into believing they were safe by "calling back" their bank.
A plethora of new attack vectors
The staggering reality of the situation is that attackers don't even have to directly hack their victim's phones to access SMS messages.
As the Android API documentation demonstrates, incoming SMS messages are broadcast to apps in real time. Apps can process the messages even while the phone is sleeping and before the message is read by the user.
Just consider all the apps on a phone that have requested permission to read incoming messages. There was an uproar recently when a new version of the Facebook app started demanding permissions to read incoming SMS. The app can't be installed if the user doesn't agree to these new permissions. WhatsApp, another popular app that has SMS access rights, was recently exposed in a major security scandal which revealed they use a phone's IMEI number as the password. When people install an app like Tinder (which does not yet request SMS access) is the security of their bank account likely to be at the front of their mind?
Even if Facebook intends no harm, they have opened the floodgates by further de-sensitizing users to the risks of giving apps un-necessary access to their data.
These companies are looking for every piece of data that could give them an edge in their customer profiling and marketing programs. Having real-time access to your SMS is a powerful way for them to understand your activities and feelings at every moment in the day. To facilitate these data analysis techniques, replicating and archiving your messages into their cloud databases (whether you can see them there or not) is par for the course.
The cloud, of course, has become a virtual smorgasbord for cyber-criminals, including both hackers and occasionally insiders wanting to peek at private data or harvest it en masse. Social networking and communication sites are built on a philosophy of sharing data to create interaction and excitement. Unfortunately, this is orthogonal to the needs of security.
In this context, the telephone network itself may no longer be the weakest link in the chain. The diligent attacker only needs to look for the cloud operator with an unplugged security hole and use their system as a stepping stone to read any SMS they want, when they want.
Would you notice a stray SMS?
Maybe you feel that you would notice a stray SMS carrying a login code for your bank account. But would you always be able to react faster than the criminal?
Thanks to social networks, or location data inadvertently leaked by other apps, an attacker can easily work out whether you are on holiday, at the gym, at a party, asleep, or in some other situation where you are not likely to check messages immediately.
If you receive a flood of SMS spam messages (deliberately sent by an attacker) in the middle of the night and you put your phone into silent mode and ignore it, you may well miss one message that was a login to your bank account. SMS technology was never designed for secure activities.
The inconvenience of SMS
While security is a headline issue these days, it is also worth reflecting on the inconvenience of SMS in some situations.
Travel is at the top of the list: SMS doesn't work universally when abroad. These are usually the times when the only way to access the bank is through the web site. After dealing with the irritations of the hotel or airport wifi registration, do you really need more stress from your bank's systems too? For some networks, SMS can be delayed by hours or days, sometimes never arriving at all.
Many people swap their SIM cards when travelling to avoid the excessive roaming charges and there is extra inconvenience in swapping SIM cards back again just to log in to a bank account. Worst of all, if you are tethering with a SIM card from the country you are visiting, then it is impossible for you to receive the SMS message from the bank on your regular SIM card while simultaneously maintaining the SSL connection to their web site over your new SIM card.
Other problems like a flat battery, water damage or a PIN permanently blocked by children playing with the phone can also leave you without access to your bank account for varying lengths of time.
Is there any up-side to SMS authentication?
The only potential benefit to SMS authentication is that it weeds out some of the most amateur attempts to compromise your bank account, but this is a false sense of security and it opens up new attack vectors through the cloud as we have just demonstrated. For all other purposes, it smells like a new form of security theater.
A more likely reason why it has become popular amongst some firms is that many lenders want to ensure they have mobile phone numbers to contact customers when loan or credit card payments are missed. Making the mobile phone number mandatory for login ensures they have a current phone number for almost all customers. It is not clear that this benefit justifies the failure to provide proper security and the inconvenience when travelling, though.
Opting out
Next time you log in to a web site, if the firm does try to enrol you in an SMS authentication scheme, it may be a good idea to click the "No thanks" option.
If you have already been registered into an SMS authentication scheme, fill out the online complaint form and inform the firm that you will only accept a proper authentication token or cryptographic smart card. These solutions are tried and tested and they are the correct tool for the job.
Hot on the heels of my previous announcement of my systemd PPA for trusty, I’m now happy to announce that the latest systemd 204-10ubuntu1 just landed in Utopic, after sorting out enough of the current uninstallability in -proposed. The other fixes (bluez, resolvconf, lightdm, etc.) already landed a few days ago. Compared to the PPA, these have a lot of other fixes and cleanups, thanks to the excellent hackfest that we held last weekend.
So, upgrade today and let us know about problems in bugs tagged “systemd-boot”.
I think systemd in current utopic works well enough not to break a developer’s day-to-day workflow, so we can now start parallelizing the work of identifying packages which only have upstart jobs and providing corresponding systemd units (or SysV scripts). Also, this hasn’t yet been tested on the phone at all; I’m sure that will require quite some work (e. g. lxc-android-config has a lot of upstart jobs). To clarify, there is no fixed date/plan/deadline for when this will be done; in particular, it might well take more than one release cycle. So we’ll “release” (i. e. switch to it as the default) when it’s ready.
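For anyone wanting to help with that porting work, a unit file is usually much shorter than the upstart job it replaces. As an illustration only (the daemon name and paths here are made up, not from the archive), a minimal service unit looks like this:

```ini
# /lib/systemd/system/mydaemon.service (hypothetical example)
[Unit]
Description=Example daemon ported from an upstart job
After=network.target

[Service]
# Run the daemon in the foreground; systemd handles daemonization
ExecStart=/usr/sbin/mydaemon --foreground
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

The `[Install]` section replaces upstart's "start on runlevel" stanza, and `Restart=` replaces "respawn".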
My friends over at Big Finish are celebrating fifteen years of producing Doctor Who audio drama this year. To mark the occasion, they will be releasing a special box set called “The Worlds of Doctor Who.” The set comprises four stories with a linking story arc, with each story based around one of the Doctor Who spin-off series that Big Finish have been so successful at producing: Jago and Litefoot, Countermeasures, The Vault and Gallifrey.
I have been doing some of the photography for the box set and it has been a pleasure attending the recording sessions over the past couple of months. I’m sure I will be writing more about it in the future, but the cover for the box set was released yesterday and the central image is one I took of actor Jamie Glover who plays the evil Mr Rees throughout the four stories. His costume was added by a very clever graphic designer though!
“The Worlds of Doctor Who” is available for pre-order now from the Big Finish website. You might also have seen some of my photographs from Big Finish Day 4 in Vortex #61, with a very nice double spread of my images showing what good fun the day was for guests and attendees.
Thanks to Benoit Allard, there is now a way to sync the newer fitbit devices from within Ubuntu! I’ve packaged it up and put it in my PPA, so now it’s easier than ever to sync your Fitbit. To install (make sure the USB dongle is plugged in):
sudo add-apt-repository ppa:cwayne18/fitbit
sudo apt-get update
sudo apt-get install galileo
sudo start galileo
The sync should now happen in the background, and will sync automatically every 15 minutes.