Live in the US? Did you know that we put Lead (Pb), a known neurotoxin, in:
- Garden hoses (which have been shown to leach Lead into the water)
- Power cords (including laptop cords)
- Carseats (mostly in the base; some other toxins have been found in the seat itself)
- Likely more; it’s apparently not uncommon for Lead to be added to plastic…
In the EU you aren’t allowed to put Lead in the above. I think it’s time we joined them!
- Sign the petition on the White House We the People site.
- Donate to this Indiegogo campaign to test carseats for toxic chemicals. (They are only asking for $10,000 – mostly to buy the carseats!)
- Share this post / the above with friends, family, and any celebrities you happen to know on Twitter, etc. #NoSafeAmountOfLead.
- Bonus: Watch episode 7 of the new Cosmos which ends with Neil deGrasse Tyson saying there is no safe amount of lead.
Please let me know if you have trouble doing any of the above.
It’s all about purpose. This is the most important thing to keep in mind when you’re attempting to compare things or judge how useful something is.
What do you expect to accomplish with a coupe sports car? Surely you don’t buy one and then claim it sucks because you can’t fit your family of four inside. That isn’t its intended purpose.
There are a lot of people trying to replace their day-to-day machine with Chromebooks and expecting a cheap 1-to-1 replacement. Whelp, good luck, you may end up frustrated. Let’s take a second and scope your expectations properly, so you know what you’re getting into.
Here’s an example of something I’ve witnessed:
ChromeOS sucks, it doesn’t have $application.
This is one of the most common types of comments and articles on the internet. The argument is invalid, however, under a proper scoping of the device’s intended purpose.
Frankly, if you’re trying to get one of these very cheaply to replace every experience you have on a Windows, Mac, or Linux computer, you’re going to have a bad time. You’re just not going to get that out of a Chromebook, but that’s okay. Or it may be okay for you, depending on what you’re looking to do.
Are you looking for a cheap Facebook or Pinterest machine? Awesome. Some editing using Google Drive of documents and spreadsheets? Yep. Email? Check. Netflix? Aye, it can do that too. Google Hangouts? Like a champ. Remote access to a VDI environment with VMware Horizon View? Yes! More on that later…
There are a ton of add-ons and ‘webapps’ you can install from the Chrome Web Store, including alternatives to many desktop applications.
It is built to be a web browser; it is a web browser. That’s it. And if that’s what you want, it is perfect. If you require something else, then it isn’t. And that is okay.
written and posted from a Chromebook.
Today, Svetlana Belkin (belkinsa) did some work on the team wiki pages, mainly the home page. The home page now has a cleaner look that covers the basics, from an introduction to the team through to how to contact it and how to join. Svetlana also removed some of the excess “tabs” on the menu bar and added a “Site Map” tab, where users can see what other pages exist.
There is still work to be done on the home page, mainly on the menu, and a lot of work remains on the team wiki pages, as stated here and in the UOS session. Hopefully the team’s wiki pages will be finished by the end of July 2014 to give newcomers a clearer picture.
Filed under: News Tagged: News, Svetlana Belkin, Team Wiki Pages, Ubuntu, Ubuntu Scientists, wiki
We’re back with Season Seven, Episode Twelve of the Ubuntu Podcast! Alan Pope, Mark Johnson, Tony Whitmore, and Laura Cowen are drinking tea and eating very rich chocolate cake (like this one, only more chocolatey) in Studio L.
In this week’s show:
- We discuss alternatives to Ubuntu One, which has recently shut down. Alan makes up the CRESCCO scale…
- We also discuss:
- We share some Command Line Lurve from @climagic:
Use it to order file-systems by percent usage and keep the header in place. Or order by file-system size with -k2n
- And we read your feedback – thanks for sending it in!
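The @climagic command itself didn’t make it into the notes above; as a sketch, a one-liner along these lines orders file-systems by percent usage while keeping the header in place (the exact tweet may have differed):

```shell
# Print df's header immediately, then pipe the remaining rows
# through sort, ordering by the Use% column (field 5), highest first.
df -P | awk 'NR==1; NR>1 {print | "sort -rn -k5"}'

# Variant from the show notes: order by file-system size with -k2n.
df -P | awk 'NR==1; NR>1 {print | "sort -k2n"}'
```

The trick is that awk prints the header straight to stdout, while the remaining lines go through a single `sort` pipe that flushes after the header.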
We’ll be back next week, so please send your comments and suggestions to: email@example.com
Join us on IRC in #uupc on Freenode
Leave a voicemail via phone: +44 (0) 203 298 1600, sip: firstname.lastname@example.org and skype: ubuntuukpodcast
Follow us on Twitter
Find our Facebook Fan Page
Follow us on Google+
The press picked up the recent press release about Debian LTS but mainly to mention the fact that it’s up and running. The call for help is almost never mentioned.
As usual what we lack is contributors doing the required work, but in this specific case, there’s a simple solution: pay people to do the required work. This extended support is mainly for the benefit of corporate users and if they see value in Debian LTS, it should not be too difficult to convince companies to support the project.
With some other Debian developers, we have gone out of our way to make it super easy for companies to support the Debian LTS project. We have created a service offer for Debian-using companies.
Freexian (my company) collects money from all contributing companies (by way of invoices) and then uses the money collected to pay Debian contributors who will prepare security updates. On top of this we added some concrete benefits for contributing companies such as the possibility to indicate which packages should have priority, or even the possibility to provide functional tests to ensure that a security update doesn’t introduce a regression in their production setup.
To do a good job of maintaining Debian Squeeze, our goal is to fund the equivalent of a full-time position. We’re currently far from that, with only 13 hours per month funded by 4 companies. That makes a current average of 3.25 hours/month funded by each contributing company, at a price of 276 EUR/month or 3315 EUR/year.
This is not much if you compare it with the price those companies would have to pay to upgrade all their Debian 6 machines now instead of keeping them for two supplementary years.
Assuming the average contribution level will stay the same, we only need the support of 50 other companies in the world. That’s really not much compared to the thousands of companies using Debian. Can you convince your own company? Grab the subscription form and have a chat with your company management.
Help us reach that goal, share this article and the link to Freexian’s Debian LTS offer. Long Term Support is important if we want Debian to be a good choice for servers and big deployments. We need to make Squeeze LTS a success!
We’re now almost half way through the year and only a few days until summer officially starts here in the UK!
In the last few weeks we’ve worked on:
- Responsive ubuntu.com: we’ve finished publishing the series on making ubuntu.com responsive on the design blog
- Ubuntu.com: we’ve released a hub for our legal documents and information, and we’ve created homepage takeovers for Mobile Asia Expo
- Juju GUI: we’ve planned work for the next cycle, sketched scenarios based on the new personas, and launched the new inspector on the left
- Fenchurch: we’ve finished version 1 of our new asset server, and we’ve started work on the new Ubuntu partners site
- Ubuntu Insights: we’ve published the latest iteration of Ubuntu Insights, now with a dedicated press area
- Chinese website: we’ve released the Chinese version of ubuntu.com
And we’re currently working on:
- Responsive Day Out: I’m speaking at the Responsive Day Out conference in Brighton on the 27th on how we made ubuntu.com responsive
- Responsive ubuntu.com: we’re working on the final tweaks and improvements to our code and documentation so that we can release to the public in the next few weeks
- Juju GUI: we’re now starting to design based on the scenarios we’ve created
- Fenchurch: we’re now working on Juju charms for the Chinese site asset server and Ubuntu partners website
- Partners: we’re finishing the build of the new Ubuntu partners site
- Chinese website: we’ll be adding cloud and server sections to the site
- Cloud Installer: we’re working on the content for the upcoming Cloud Installer beta pages
If you’d like to join the web team, we’re currently looking for a web designer and a front-end developer!
Working on Juju personas and scenarios.
Have you got any questions or suggestions for us? Would you like to hear about any of these projects and tasks in more detail? Let us know your thoughts in the comments.
This shows me running the Unity 8 preview session. Simple Scan shows up as an option and can be launched and perform a scan.
This is only a first start, and there's still lots of work to be done. In particular:
- Applications need to set X-Ubuntu-Touch=true in their .desktop files to show in Unity 8.
- Application icons from the gnome theme do not show (bug).
- GTK+ applications don't go fullscreen (bug).
- No cursors changes (bug).
- We only support single window applications because we can't place/focus the subwindows yet (bug). We're currently faking menus and tooltips by drawing them onto the same surface.
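As an illustration of the first item above, a .desktop file opting in to the Unity 8 preview session carries the extra key like this (a hypothetical entry for illustration, not Simple Scan’s actual file):

```ini
[Desktop Entry]
Type=Application
Name=Example Scanner App
Exec=example-scan
Icon=scanner
# Opt in to being listed in Unity 8
X-Ubuntu-Touch=true
```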
If you are using Ubuntu 14.10 you can install the packages for this from a PPA:
$ sudo apt-add-repository ppa:ubuntu-desktop/gtk-mir
$ sudo apt-get update
$ sudo apt-get upgrade
The PPA contains a version of GTK+ with Mir support, fixes for libraries that assume you are running in X and a few select applications patched so they show in Unity 8.
The Mir backend currently lives on the wip/mir branch in the GTK+ git repository. We will keep developing it there until it is complete enough to propose for GTK+ master. We have updated jhbuild to support Mir so we can easily build and test this backend going forward.
This post is part of the series ‘Making ubuntu.com responsive‘.
There are several resources out there on how to create responsive websites, but they tend to go through the process in an ideal scenario, where the project starts with a blank slate, from scratch.
That’s why we thought it would be nice to share the steps we took in converting our main website and framework, ubuntu.com, into a fully responsive site, with all the constraints that come from working on legacy code, with very little time available, while making sure that the site was kept up-to-date and responding to the needs of the business.
Before we started this process, the idea of converting ubuntu.com seemed like a herculean task. It was only because we divided the work into several stages and smaller projects, tightened the scope, and kept things simple that it was possible to do it.
We learned a lot throughout this past year or so, and there is a lot more we want to do. We’d love to hear about your experience of similar projects, suggestions on how we can improve, tools we should look into, books we should read, articles we should bookmark, and things we should try, so please do leave us your thoughts in the comments section.
Here is the list of all the posts in the series:
- Setting the rules
- Making the rules a reality
- Pilot projects
- Lessons learned
- Scoping the work
- Approach to content
- Making our grid responsive
- Adapting our navigation to small screens
- Dealing with responsive images
- Updating font sizes and increasing readability
- Our Sass architecture
- Ensuring performance
- Testing on multiple devices
Note: I will be speaking about making ubuntu.com responsive at the Responsive Day Out conference in Brighton, on the 27th June. See you there!
AMD made available 10 SeaMicro 15000 chassis in one of their test labs. Each chassis holds 64 servers, each with 4 cores (2 threads per core, 8 logical cores), 32GB of RAM and 500G of storage attached via a storage fabric controller – creating the potential to scale an OpenStack deployment to a large number of compute nodes in a small rack footprint.
As you would expect, we chose the best tools for deploying OpenStack:
- MAAS – Metal-as-a-Service, providing commissioning and provisioning of servers.
- Juju – The service orchestration for Ubuntu, which we use to deploy OpenStack on Ubuntu using the OpenStack charms.
- OpenStack Icehouse on Ubuntu 14.04 LTS.
- CirrOS – a small-footprint Linux-based cloud OS
MAAS has native support for enlisting a full SeaMicro 15k chassis in a single command – all you have to do is provide it with the MAC address of the chassis controller and a username and password. A few minutes later, all servers in the chassis will be enlisted into MAAS ready for commissioning and deployment:

$ maas local node-group probe-and-enlist-hardware nodegroup \
    model=seamicro15k mac=00:21:53:13:0e:80 \
    username=admin password=password power_control=restapi2
Juju has been the Ubuntu Server team’s preferred method for deploying OpenStack on Ubuntu for as long as I can remember; Juju uses Charms to encapsulate the knowledge of how to deploy each part of OpenStack (a service) and how each service relates to the others – an example would include how Glance relates to MySQL for database storage, Keystone for authentication and authorization, and (optionally) Ceph for actual image storage.
Using the charms and Juju, it’s possible to deploy complex OpenStack topologies using bundles, a yaml format for describing how to deploy a set of charms in a given configuration – take a look at the OpenStack bundle we used for this test to get a feel for how this works.
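To give a feel for the bundle format, here’s a heavily abbreviated, hypothetical sketch in the juju-deployer style (charm URLs, services and options are illustrative – see the linked bundle for the real thing):

```yaml
openstack:
  series: trusty
  services:
    mysql:
      charm: cs:trusty/mysql
      num_units: 1
    keystone:
      charm: cs:trusty/keystone
      num_units: 1
    nova-cloud-controller:
      charm: cs:trusty/nova-cloud-controller
      num_units: 1
  relations:
    - [ keystone, mysql ]
    - [ nova-cloud-controller, mysql ]
    - [ nova-cloud-controller, keystone ]
```

Each service maps to a charm and a unit count, and the relations list wires the services together exactly as described above.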
Starting out small(ish)
Not all ten chassis were available from the outset of testing, so we started off with two chassis of servers to test and validate that everything was working as designed. With 128 physical servers, we were able to put together a Neutron-based OpenStack deployment with the following services:
- 1 Juju bootstrap node (used by Juju to control the environment), Ganglia Master server
- 1 Cloud Controller server
- 1 MySQL database server
- 1 RabbitMQ messaging server
- 1 Keystone server
- 1 Glance server
- 3 Ceph storage servers
- 1 Neutron Gateway network forwarding server
- 118 Compute servers
We described this deployment using a Juju bundle, and used the juju-deployer tool to bootstrap and deploy the bundle to the MAAS environment controlling the two chassis. Total deployment time for the two chassis, to the point of a usable OpenStack cloud, was around 35 minutes.
At this point we created 500 tenants in the cloud, each with its own private network (using Neutron), connected to the outside world via a shared public network. The immediate impact of doing this is that Neutron creates dnsmasq instances, Open vSwitch ports and associated network namespaces on the Neutron Gateway data forwarding server – seeing this many instances of dnsmasq on a single server is impressive – and the server dealt with the load just fine!
Next we started creating instances; we looked at using Rally for this test, but it does not currently support using Neutron for instance creation testing, so we went with a simple shell script that created batches of instances (we used a batch size of 100) and then waited for them to reach the ACTIVE state. We used the CirrOS cloud image (developed and maintained by the Ubuntu Server team’s very own Scott Moser) with a custom Nova flavor with only 64 MB of RAM.
We immediately hit our first bottleneck – by default, the Nova daemons on the Cloud Controller server will spawn sub-processes equivalent to the number of cores that the server has. Neutron does not do this and we started seeing timeouts on the Nova Compute nodes waiting for VIF creation to complete. Fortunately Neutron in Icehouse has the ability to configure worker threads, so we updated the nova-cloud-controller charm to set this configuration to a sensible default, and provide users of the charm with a configuration option to tweak this setting. By default, Neutron is configured to match what Nova does, 1 process per core – using the charm configuration this can be scaled up using a simple multiplier – we went for 10 on the Cloud Controller node (80 neutron-server processes, 80 nova-api processes, 80 nova-conductor processes). This allowed us to resolve the VIF creation timeout issue we hit in Nova.
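The charm option ultimately lands in neutron.conf; a sketch of the resulting configuration, assuming the 8-logical-core controller and the 10x multiplier described above (option names as in Icehouse-era Neutron – treat the exact keys as an assumption):

```ini
# /etc/neutron/neutron.conf (managed by the charm)
[DEFAULT]
# 8 logical cores x 10 multiplier = 80 worker processes
api_workers = 80
rpc_workers = 80
```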
At around 170 instances per compute server, we hit our next bottleneck; the Neutron agent status on compute nodes started to flap, with agents being marked down as instances were being created. After some investigation, it turned out that the time required to parse and then update the iptables firewall rules at this instance density took longer than the default agent timeout – hence why agents kept dropping out from Neutron’s perspective. This resulted in virtual interface (VIF) creation timing out, and we started to see instance activation failures when trying to create more than a few instances in parallel. Without an immediate fix for this issue (see bug 1314189), we took the decision to turn Neutron security groups off in the deployment and run without any VIF-level iptables security. This was applied via the nova-compute charm we were using, but is obviously not something that will make it back into the official charm in the Juju charm store.
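For reference, disabling VIF-level filtering in an Icehouse-era deployment typically boils down to configuration like the following (a sketch of the standard approach, not the exact change our modified charm made):

```ini
# /etc/nova/nova.conf on the compute nodes
[DEFAULT]
firewall_driver = nova.virt.firewall.NoopFirewallDriver
```

```ini
# Neutron OVS agent configuration on the compute nodes
[securitygroup]
firewall_driver = neutron.agent.firewall.NoopFirewallDriver
```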
With the workaround applied on the Compute servers, we were able to create 27,000 instances on the 118 compute nodes. The API call times to create instances from the testing endpoint remained pretty stable during this test; however, as the Nova Compute servers got heavily loaded, the amount of time taken for all instances to reach the ACTIVE state did increase:
At this point AMD had another two chassis racked and ready for use so we tore down the existing two chassis, updated the bundle to target compute services at the two new chassis and re-deployed the environment. With a total of 256 servers being provisioned in parallel, the servers were up and running within about 60 minutes, however we hit our first bottleneck in Juju.
The OpenStack charm bundle we use has a) quite a few services and b) a lot of relations between services – Juju was able to deploy the initial services just fine, however when the relations were added, the load on the Juju bootstrap node went very high and the Juju state service on this node started to throw a large number of errors and became unresponsive – this has been reported back to the Juju core development team (see bug 1318366).
We worked around this bottleneck by bringing up the original two chassis in full, and then adding each new chassis in series to avoid overloading the Juju state server in the same way. This obviously takes longer (about 35 minutes per chassis) but did allow us to deploy a larger cloud with an extra 128 compute nodes, bringing the total number of compute nodes to 246 (118+128).
And then we hit our next bottleneck…
By default, the RabbitMQ packaging in Ubuntu does not explicitly set a file descriptor ulimit so it picks up the Ubuntu defaults – which are 1024 (soft) and 4096 (hard). With 256 servers in the deployment, RabbitMQ hits this limit on concurrent connections and stops accepting new ones. Fortunately it’s possible to raise this limit in /etc/default/rabbitmq-server – and as we were deployed using the rabbitmq-server charm, we were able to update the charm to raise this limit to something sensible (64k) and push that change into the running environment. RabbitMQ restarted, problem solved.
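/etc/default/rabbitmq-server is sourced by the init script before the broker starts, so the fix is a one-liner (64k being the value the updated charm sets):

```shell
# /etc/default/rabbitmq-server
# Raise the file descriptor limit before rabbitmq-server starts
ulimit -n 65536
```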
With the 4 chassis in place, we were able to scale up to 55,000 instances.
Ganglia was letting us know that load on the Nova Cloud Controller during instance setup was extremely high (15-20), so we decided at this point to add another unit to this service:

$ juju add-unit nova-cloud-controller
and within 15 minutes we had another Cloud Controller server up and running, automatically configured for load balancing of API requests with the existing server and sharing the load for RPC calls via RabbitMQ. Load dropped, instance setup time decreased, instance creation throughput increased, problem solved.
Whilst we were working through these issues and performing the instance creation, AMD had another two chassis (6 & 7) racked, so we brought them into the deployment adding another 128 compute nodes to the cloud bringing the total to 374.
And then things exploded…
The number of instances that can be created in parallel is driven by two factors – 1) the number of compute nodes and 2) the number of workers across the Nova Cloud Controller servers. However, with six chassis in place, we were not able to increase the parallel instance creation rate as much as we wanted to without getting connection resets between Neutron (on the Cloud Controllers) and the RabbitMQ broker.
The learning from this is that Neutron+Nova makes for an extremely noisy OpenStack deployment from a messaging perspective, and a single RabbitMQ server appeared to not be able to deal with this load. This resulted in a large number of instance creation failures so we stopped testing and had a re-think.
A change in direction
After the failure we saw in the existing deployment design, and with more chassis still being racked by our friends at AMD, we still wanted to see how far we could push things; however with Neutron in the design, we could not realistically get past 5-6 chassis of servers, so we took the decision to remove Neutron from the cloud design and run with just Nova networking.
Fortunately this is a simple change to make when deploying OpenStack using charms, as the nova-cloud-controller charm has a single configuration option to select between Neutron and Nova networking. After tearing down and re-provisioning the 6 chassis:

$ juju destroy-environment maas
$ juju-deployer --bootstrap -c seamicro.yaml -d trusty-icehouse
with the revised configuration, we were able to create instances in batches of 100 at a respectable throughput of initially 4.5/sec – although this did degrade as load on compute servers went higher. This allowed us to hit 75,000 running instances (with no failures) in 6hrs 33 mins, pushing through to 100,000 instances in 10hrs 49mins – again with no failures.
As we saw in the smaller test, the API invocation time was fairly constant throughout the test, with the total provisioning time through to ACTIVE state increasing due to loading on the compute nodes:
OK – so we are now running an OpenStack Cloud on Ubuntu 14.04 across 6 SeaMicro chassis (1,2,3,5,6,7 – 4 comes later) – a total of 384 servers (give or take one or two which would not provision). The cumulative load across the cloud at this point was pretty impressive – Ganglia does a pretty good job at charting this:
AMD had two more chassis (8 & 9) in the racks which we had enlisted and commissioned, so we pulled them into the deployment as well; this did take some time – Juju was grinding pretty badly at this point and just running ‘juju add-unit -n 63 nova-compute-b6’ was taking 30 minutes to complete (reported upstream – see bug 1317909).
After a couple of hours we had another ~128 servers in the deployment, so we pushed on and created some more instances, through to the 150,000 mark. As the instances were landing on the servers in the 2 new chassis, the load on the individual servers increased more rapidly, so instance creation throughput slowed down faster, but the cloud managed the load.
Prior to starting testing at any scale, we had some issues with one of the chassis (4) which AMD had resolved during testing, so we shoved that back into the cloud as well; after ensuring that the 64 extra servers were reporting correctly to Nova, we started creating instances again.
However, the instances kept scheduling onto the servers in the previous two chassis we added (8 & 9), with the new nodes not getting any instances. It turned out that the servers in chassis 8 & 9 were AMD-based servers with twice the memory capacity; by default, Nova does not look at vCPU usage when making scheduling decisions, so as these 128 servers had more remaining memory capacity than the 64 new servers in chassis 4, they were still being targeted for instances.
Unfortunately I’d hopped onto the plane from Austin to Atlanta for a few hours so I did not notice this – and we hit our first 9 instance failures. The 128 servers in Chassis 8 and 9 ended up with nearly 400 instances each – severely over-committing on CPU resources.
A few tweaks to the scheduler configuration – specifically turning on the CoreFilter and setting the CPU overcommit ratio to 32 – applied to the Cloud Controller nodes using the Juju charm, and instances started to land on the servers in chassis 4. This seems like a sane thing to do by default, so we will add this to the nova-cloud-controller charm with a configuration knob to allow the overcommit to be altered.
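Via the charm this boils down to something like the following in nova.conf on the Cloud Controller nodes (filter list abbreviated; the overcommit value is the one we chose for this test, not a general recommendation):

```ini
[DEFAULT]
# Make the scheduler consider vCPU usage, not just free RAM
scheduler_default_filters = RetryFilter,AvailabilityZoneFilter,RamFilter,ComputeFilter,CoreFilter
# Allow up to 32 vCPUs per physical core
cpu_allocation_ratio = 32.0
```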
At the end of the day we had 168,000 instances running on the cloud – this may have got some coverage during the OpenStack summit….
The last word
Having access to this many real servers allowed us to exercise OpenStack, Juju, MAAS and our reference Charm configurations in a way that we have not been able to undertake before. Exercising infrastructure management tools and configurations at this scale really helps shake out the scale pinch points – in this test we specifically addressed:
- Worker thread configuration in the nova-cloud-controller charm
- Bumping open file descriptor ulimits in the rabbitmq-server charm enabled greater concurrent connections
- Tweaking the maximum number of mysql connections via charm configuration
- Ensuring that the CoreFilter is enabled to avoid potential extreme overcommit on nova-compute nodes.
There were a few things we could not address during the testing, for which we had to find workarounds:
- Scaling a Neutron-based cloud past more than 256 physical servers
- High instance density on nova-compute nodes with Neutron security groups enabled.
- High relation creation concurrency in the Juju state server causing failures and poor performance from the juju command line tool.
We have some changes in the pipeline to the nova-cloud-controller and nova-compute charms to make it easier to split Neutron services onto different underlying messaging and database services. This will allow the messaging load to be spread across different message brokers, which should allow us to scale a Neutron-based OpenStack cloud to a much higher level than we achieved during this testing. We did find a number of other smaller niggles related to scalability – check out the full list of reported bugs.
And finally some thanks:
- Blake Rouse for doing the enablement work for the SeaMicro chassis and getting us up and running at the start of the test.
- Ryan Harper for kicking off the initial bundle configuration development and testing approach (whilst I was taking a break – thanks!) and shaking out the initial kinks.
- Scott Moser for his enviable scripting skills which made managing so many servers a whole lot easier – MAAS has a great CLI – and for writing CirrOS.
- Michael Partridge and his team at AMD for getting so many servers racked and stacked in such a short period of time.
- All of the developers who contribute to OpenStack, MAAS and Juju!
.. you are all completely awesome!
It’s always nice to come across Kubuntu being used in the wild. Recently I was pointed to a refurbished laptop shop in Ireland who are selling HP laptops running Kubuntu 14.04 LTS. €140 for a laptop? You’d pay as much for just the Windows licence in most other shops.
Two years ago, my wife and I made the decision to home-school our two children. It was the best decision we could have made, our kids are getting a better education, and with me working from home since joining Canonical I’ve been able to spend more time with them than ever before. We also get to try and do some really fun things, which is what sets the stage for this story.
Both my kids love science, absolutely love it, and it’s one of our favorite subjects to teach. A couple of weeks ago my wife found an inexpensive USB microscope, which lets you plug it into a computer and take pictures using desktop software. It’s not a scientific microscope, nor is it particularly powerful or clear, but for the price it was just right to add a new aspect to our elementary science lessons. All we had to do was plug it in and start exploring.
My wife has a relatively new (less than a year old) laptop running Windows 8. It’s not high-end, but it’s all new hardware, new software, etc. So when we plugged in our simple USB microscope… it failed. As in, it didn’t do anything. Windows seemed to be trying to figure out what to do with it, over and over and over again, but to no avail.
My laptop, however, is running Ubuntu 14.04, the latest stable and LTS release. My laptop is a couple of years old, but a classic: a Lenovo x220. It’s great hardware to go with Ubuntu and I’ve had nothing but good experiences with it. So of course, when I decided to give our new USB microscope a try… it failed. The connection was fine, the log files clearly showed that it was being identified, but nothing was able to see it as a video input device or make use of it.
Now, if that’s where our story ended, it would fall right in line with a Shakespearean tragedy. But while both Windows and Ubuntu failed to “just work” with this microscope, both failures were not equal. Because the Windows drivers were all closed source, my options ended with that failure.
But on Ubuntu, the drivers were open; all I needed to do was find a fix. It took a while, but I eventually found a 2.5-year-old bug report for an identical chipset to my microscope’s, and somebody had proposed a code fix in the comments. Now, the original reporter never responded to say whether or not the fix worked, and it was clearly never included in the driver code, but it was an opportunity. Now I’m no kernel hacker, nor driver developer; in fact I probably shouldn’t be trusted to write any amount of C code at all. But because I had Ubuntu, getting the source code of my current driver, as well as all the tools and dependencies needed to build it, took only a couple of terminal commands. The patch was too old to apply cleanly to the current code, but it was easy enough to figure out where the changes should go, and after a couple of tries to properly build just the driver (and not the full kernel or every driver in it), I had a new binary kernel module that would load without error. Then, when I plugged my USB microscope in again, it worked!
People use open source for many reasons. Some people use it because it’s free as in beer, for them it’s on the same level as freeware or shareware, only the cost matters. For others it’s about ethics, they would choose open source even if it cost them money or didn’t work as well, because they feel it’s morally right, and that proprietary software is morally wrong. I use open source because of USB microscopes. Because when they don’t work, open source gives me a chance to change that.
This post is part of the series ‘Making ubuntu.com responsive‘.
When working on a responsive project you’ll have to test on multiple operating systems, browsers and devices, whether you’re using emulators or the real deal.
Testing on the actual devices is preferable — it’s hard to emulate the feel of a device in your hand and the interactions and gestures you can do with it — and more enjoyable, but budget and practicality will never allow you to get hold of and test on all the devices people might use to access your site.
We followed very simple steps that anyone can emulate to decide which devices we tested ubuntu.com on.

Numbers
You can quickly get a grasp of which operating systems, browsers and devices your visitors are using to get to your site just by looking at your analytics.
By doing this you can establish whether some of the more troublesome ones are worth investing time in. For example, if only 10 people accessed your site through Internet Explorer 6, perhaps you don’t need to provide a PNG fallback solution. But you might also get a few less pleasant surprises and find that a hard-to-cater-for browser is one of the preferred ones by your customers.
When we did our initial analysis we didn’t find any real surprises; however, due to the high volume of traffic that ubuntu.com sees every month, even a very small percentage represented a large number of people that we just couldn’t ignore. It was important to keep this in mind as we defined which browsers, operating systems and devices to test on, and what issues we’d fix where.

Browsers (between 11 February and 13 March 2014)

- Chrome: 46.88%
- Firefox: 36.96%
- Internet Explorer (total): 7.54%
  - 11: 41.15%
  - 8: 22.96%
  - 10: 17.05%
  - 9: 14.24%
  - 7: 2.96%
  - 6: 1.56%
- Safari: 4.30%
- Opera: 1.68%
- Android Browser: 1.04%
- Opera Mini: 0.45%

Operating systems (between 11 February and 13 March 2014)

- Windows (total): 52.45%
  - 7: 60.81%
  - 8.1: 14.31%
  - XP: 13.3%
  - 8: 8.84%
  - Vista: 2.38%
- Linux: 35.4%
- Macintosh: 6.14%
- Android (total): 3.32%
  - 4.4.2: 19.62%
  - 4.3: 15.51%
  - 4.1.2: 15.39%
- iOS: 1.76%

Mobile devices (between 12 May and 11 June 2014)

5.41% of total visits:

- Apple iPad: 17.33%
- Apple iPhone: 12.82%
- Google Nexus 7: 3.12%
- LG Nexus 5: 2.97%
- Samsung Galaxy S III: 2.01%
- Google Nexus 4: 2.01%
- Samsung Galaxy S IV: 1.17%
- HTC M7 One: 0.92%
- Samsung Galaxy Note 2: 0.88%
- Not set: 16.66%

(Version splits are shares of their parent total.)
After analysing your numbers, you can also define which combinations (operating system plus browser) to test in.

Go shopping
Based on the most popular devices people were using to access our site, we made a short list of the ones we wanted to buy first. We weren’t married to the analytics numbers, though: the idea was to cover a range of screen sizes and operating systems and expand as we went along.
- Nexus 7
- Samsung Galaxy S III
- Samsung Galaxy Note II
We opted not to splash out on an iPad or iPhone, as there are quite a few around the office (retina and non-retina), and the money we saved meant we could buy other, less common devices.
Part of our current device suite.
When we started to get a few bug reports from Android Gingerbread and Windows Phone users, we decided we needed phones with those operating systems installed. This was our second batch of purchases:
- Samsung Galaxy Y
- Kindle Fire HD (Amazon was having a sale at the time we made the list!)
- Nokia Lumia 520
- LG G2
And, last but not least, we use a Nexus 4 to test Ubuntu on phones.
We didn’t spend much in any of our shopping sprees, but have managed to slowly build an ever-growing device suite that we can test our sites on, which is invaluable when working on responsive projects.

Alternatives
Some operating systems and browsers are trickier to test on in native devices. We have a BrowserStack account that we tend to use mostly to test on older Windows and Internet Explorer versions, although we also test Windows on virtual machines.
We have to confess we’re not using any special software that synchronises testing and interactions across devices. We haven’t really felt the need for that yet, but at some point we should experiment with a few tools, so we’d like to hear suggestions!

Browser support
We prefer to think of different levels (or ways) of access to the content rather than browser support. The overall rule is that everyone should be able to get to the content, and bugs that obstruct access are prioritised and fixed.
As much as possible, and depending on resources available at the time, we try to fix rendering issues in browsers and devices used by a higher percentage of visitors: degree of usage defines the degree of effort fixing rendering bugs.
And, obviously: dowebsitesneedtolookexactlythesameineverybrowser.com.
In many cases, high-quality code counts for more than bells and whistles. Fast, reliable and well-maintained libraries provide a solid base for excellent applications built on top of them. Investing time into improving existing code increases the value of that code, and of the software built on top of it. For shared components, such as libraries, this value is often multiplied by the number of users. With this in mind, let’s have a closer look at how the Frameworks 5 transition affects the quality of the code so many developers and users rely on.
KDE Frameworks 5 will be released two weeks from now. This fifth revision of what is currently known as the “KDE Development Platform” (or, technically, “kdelibs”) is the result of three years of effort to modularize the individual libraries (and “bits and pieces”) we shipped as the kdelibs and kde-runtime modules as part of KDE SC 4.x. KDE Frameworks contains about 60 individual modules: libraries, plugins, toolchain additions and scripting extensions (QtQuick, for example).
One of the important aspects that has seen little exposure when talking about the Frameworks 5 project, but which is really at the heart of it, is the set of processes behind it. The Frameworks project, as happens with such transitions, has created a new surge of energy around our libraries. The immediate result, KF5’s first stable release, is a set of software frameworks that induce minimal overhead, are source- and binary-stable for the foreseeable future, are well maintained, get regular updates, and consist of proven, high-quality, modern and performant code. There is a well-defined contribution process and no mandatory copyright assignment. In other words, it’s a reliable base to build software on, in many different respects.

Maturity
Extension and improvement of existing software are two ways of increasing its value. KF5 does not contain revolutionary new code; instead of extending it, in this major cycle we’re concentrating on widening the use cases and improving quality. The initial KDE 4 release contained a lot of rewritten code and changed APIs, and meant a major cleanup of hard-to-scale and sometimes outright horrible code. Even over the course of 4.x, we had a couple of quite fundamental changes to core functionality, for example the introduction of the semantic desktop features, Akonadi, and in Plasma the move to QML 1.x.
All these new things have now seen a few years of work on them (and, in the case of Nepomuk, the replacement of its guts through the migration to the much more performant Baloo framework). They are mature, stable and proven to work by now. The transition to Qt 5 and KF5 doesn’t actually change a lot about that; we’ve worked out most of the kinks of this transition by now. For much application-level code using KDE Frameworks, the porting will be rather easy to do, though not zero effort. The APIs themselves haven’t changed a lot, and the changes needed to make something work usually involve updating the build system. From that point on, the application is often already functional, and can be gradually moved away from deprecated APIs. Frameworks 5 provides the necessary compatibility libraries to ease porting as much as possible.
Surely, with the inevitable and purposeful explosion of the user base following a first stable release, we will get a lot of feedback on how to further improve the code in Frameworks 5. The processes, requirements and tooling for this are in place. Also, being an open system, we’re ready to receive your patches.
Frameworks 5, in many ways, encodes more than 15 years of experience into a clearly structured, stable base on which to build applications for all kinds of purposes, on all kinds of platforms.
With the modularization of the libraries, we’ve looked for suitable maintainers for them, and we’ve been quite successful in finding responsible caretakers for most of them. This is quite important, as it reduces bottlenecks and single points of failure. It also scales up the throughput of our development process, as the work can be shared across more shoulders more easily. This achieves quicker feedback for development questions, code review requests, or help with bug fixes. We don’t actually require module maintainers to fix every single bug right away; they act much more as orchestrators and go-to guys for a specific framework.

More Reviews
More peer review of code is generally a good thing. It provides safety nets for code problems, catches potential bugs, and makes sure code doesn’t do dumb things, or smart things in the wrong way. It also allows transfer of knowledge by talking about each other’s code. We had already been using Review Board for some time, but the work on Frameworks 5 and Plasma 2 has really boosted our use of Review Board, and of review processes in general. It has become a more natural part of our collaboration process, and that’s a very good thing, both socially and code-quality-wise.
More code review also keeps us developers in check. It makes it harder to slip in a bit of questionable code, which is a psychological thing: if I know my patches will be looked at critically, line by line, I take more care when submitting them. The reasons for this vary, and range from saving other developers the time of pointing out issues I could have found myself had I gone over the code once more, to looking better when I submit a patch that is clean and nice and can be accepted as-is.
Surely, code reviews can be tedious and slow down development, but with the right dose, in the end they lead to better code that can be trusted down the line. The effects might not be immediately obvious, but they are usually positive.
Splitting up the libraries and getting the build system up to the task introduced major breakage at the build level. In order to make sure our changes would work, and actually result in buildable and working frameworks, we needed better tooling. One huge improvement in our process was the arrival of a continuous integration system. Pushing code into one of the Frameworks nowadays means that it is built in a clean environment and automated tests are run. The system is also used to build a framework’s dependencies, so problems in the code that might have slipped the developer’s attention are more often caught automatically. Usually, the results of the continuous integration system’s automated builds are available within a few minutes, and if something breaks, developers get notifications via IRC or email. Having these short turnaround cycles makes it easier to fix things, as the memory of the change leading to the problem is still fresh. It also saves others time: it’s less likely that I find a broken build when I update to the latest code.
The build also triggers running autotests, which have been extended already, but are still quite far away from complete coverage. Having automated tests available makes it easier to spot problems, and increases the confidence that a particular change doesn’t wreak havoc elsewhere.
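KDE’s autotests are written with Qt’s QTest framework; purely as an illustration of the idea (a hypothetical function and tests, sketched in Python’s unittest rather than QTest), an automated test pins down expected behaviour so the CI system can catch a regression the moment it is pushed:

```python
import unittest

def human_size(nbytes):
    """Hypothetical framework function: format a byte count for display."""
    for unit in ("B", "KiB", "MiB", "GiB"):
        if nbytes < 1024:
            return f"{nbytes:.1f} {unit}"
        nbytes /= 1024
    return f"{nbytes:.1f} TiB"

class HumanSizeTest(unittest.TestCase):
    # Each test documents one expected behaviour; a change that breaks it
    # fails the automated build within minutes, while the change is fresh.
    def test_bytes(self):
        self.assertEqual(human_size(512), "512.0 B")

    def test_kibibytes(self):
        self.assertEqual(human_size(2048), "2.0 KiB")

# Run the suite the way a CI job would, collecting a pass/fail result.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(HumanSizeTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

The value is less in any single test and more in running all of them automatically on every push.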
Neither continuous builds, nor autotests can make 100% sure that nothing ever breaks, but it makes it less likely, and it saves development resources. If a script can find a problem, that’s probably vastly more efficient than manual testing. (Which is still necessary, of course.)
A social aspect here is that no single person is responsible if something breaks in autobuilds or autotests; rather, it should be considered a “stop-the-line” event that needs immediate attention from anyone.
This harnessing allows us to concentrate more on further improvements. Software in general is subject to continuous evolution, and Frameworks 5.0 is “just another” milestone in that ongoing process. Better scalability of the development processes (including QA) is not just about getting to a stable release; it supports the further improvement. As much as we’ve updated code with more modern and better solutions, we’re also “upgrading” the way we work together, and the way we improve our software further. It’s the human build system behind the software.
The circle goes all the way round: the continuous improvement process and its backing tools and processes evolve over time. They do not just pop out of thin air, and they are not dictated from the top down; rather, they are the result of the same level of experience that went into the software itself. The software as a product and its creation process are interlinked. Much of the important DNA of a piece of software is encoded in its creation and maintenance process, and they evolve together.
jCryption is an open-source jQuery plugin that performs encryption on the client side so that data can be decrypted server side. It works by retrieving an RSA public key from the server, generating an AES key on the client, encrypting that AES key under the RSA key, and sending the RSA-encrypted AES key along with the AES-encrypted data to the server. This is not dissimilar to how OpenPGP encrypts data for transmission. (Though, of course, the implementation details are vastly different.)
jCryption comes with PHP and Perl code demonstrating the decryption server side, and while this code is not packaged as a ready-to-use library, it is likely that most users used the sample code for their server-side implementation. While the code used proc_open, which doesn't allow command injection (the command is not being run through a shell, so shell metacharacters aren't relevant), it still allows an attacker to modify the arguments being passed to the command.
Originally, the code used constructs like:

```php
$cmd = sprintf("openssl enc -aes-256-cbc -pass pass:'$key' -a -e");
```
Because $key can be attacker-controlled, an attacker can close the pass string early, and add additional openssl parameters there. This includes, for example, the ability to read the jCryption RSA private key, allowing an attacker to read any traffic sent with jCryption that they have captured (or capture in the future).
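To make the argument injection concrete, here is a sketch in Python (rather than the project's PHP), using shlex.quote as the analogue of PHP's escapeshellarg; the key value and file paths below are made up for illustration:

```python
import shlex

# Attacker-controlled "key": closes the quoted pass argument early and
# injects extra openssl options (paths here are purely illustrative).
key = "x' -in /path/to/rsa_private.pem -out /tmp/stolen '"

# Vulnerable construction: naive interpolation inside single quotes.
vulnerable = f"openssl enc -aes-256-cbc -pass pass:'{key}' -a -e"

# Safe construction: quote the whole value as a single shell argument
# (shlex.quote plays the role of PHP's escapeshellarg).
safe = "openssl enc -aes-256-cbc -pass pass:" + shlex.quote(key) + " -a -e"

# In the vulnerable string, -in/-out become separate openssl arguments;
# in the safe one, the entire key stays inside the single pass: argument.
print("-in" in shlex.split(vulnerable))  # injected option present: True
print("-in" in shlex.split(safe))        # no separate -in argument: False
```

Viewed through shell splitting like this, the attacker's quotes in the quoted construction are neutralised, and openssl only ever sees one pass: argument.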
I reported this issue late last night, and Daniel Griesser, the author of jCryption, replied shortly thereafter, confirming he was looking into the matter. By this morning, he had created a fix and pushed a new release out. It speaks very highly of a developer when they're able to respond so quickly to a security concern.
For the curious, it was fixed by escaping the shell argument using escapeshellarg:

```php
$cmd = sprintf("openssl enc -aes-256-cbc -pass pass:" . escapeshellarg($key) . " -a -e");
```
I'm not releasing a PoC that performs the actual crypto steps at this point; I want to make sure sites have had a chance to upgrade.
Additionally, we have made a new type of image available, called "io1". These new images have 500 provisioned IOPS by default.
Even better, both the SSD and io1 images are available for HVM, 32-bit and 64-bit paravirtual instances.
As of today, all dailies and future releases will be published with all supported volume types. The latest releases are at [2,3,4] or:

- http://cloud-images.ubuntu.com/releases/precise/release-20140606/
- http://cloud-images.ubuntu.com/locator/ec2/
As the comments are quick to point out (at the expense of the rest of the piece), the hardware isn’t the compelling story here. While you can buy your own, you can almost certainly hand-build an equivalent or better setup for less money, but Ars recognises this:
Of course, that’s exactly the point: the Orange Box is that taste of heroin that the dealer gives away for free to get you on board. And man, is it attractive. However, as Canonical told me about a dozen times, the company is not making them to sell—it’s making them to use as revenue driving opportunities and to quickly and effectively demo Canonical’s vision of the cloud.
To see what Ars think of those, you should read the article.
I definitely echo Lee’s closing statement:
I wish my closet had an Orange Box in it. That thing is hella cool.
In the last UOS, the founder of Ubuntu Scientists (Svetlana Belkin, belkinsa) decided to create a blog where the team will post news, interviews with members, and stories from members, to help other scientists get a feeling for who we are, how to help us, and how to use FOSS in the science fields. Another team has also done this in the past.
Other posts are HERE
Nothing new to report this week
Release Metrics and Incoming Bugs
Release metrics and incoming bug data can be reviewed at the following link:
Milestone Targeted Work Items
I’d note that this section of the meeting is becoming less and less
useful to me in its current form. I’d like to take a vote to skip this
section until I/we can find a better solution. All in favor (ie +1)?
I’ll take silence as agreement as well
ogasawara: ok, motion passed.
(Actually, the same could be said for the ARM status part, since ARM support is part of the generic kernel now, FWIW.)
Dropping ARM Status from this meeting as well.
Status: Utopic Development Kernel
We have rebased our Utopic kernel to v3.15 final and uploaded
(3.15.0-6.11). As noted in previous meetings, we are planning on
converging on the v3.16 kernel for Utopic. We have started tracking
v3.16-rc1 in our “unstable” ubuntu-utopic branch. We’ll let this
marinate and bake for a bit before we do an official v3.16 based upload
to the archive.
Important upcoming dates:
Thurs Jun 26 – Alpha 1 (~1 week away)
Fri Jun 27 – Kernel Freeze for 12.04.5 and 14.04.1 (~1 week away)
Thurs Jul 31 – Alpha 2 (~6 weeks away)
The current CVE status can be reviewed at the following link:
Status: Stable, Security, and Bugfix Kernel Updates – Trusty/Saucy/Precise/Lucid
Status for the main kernels, until today (May. 6):
- Lucid – Verification and Testing
- Precise – Verification and Testing
- Saucy – Verification and Testing
- Trusty – Verification and Testing
Current opened tracking bugs details:
For SRUs, SRU report is a good source of information:
cycle: 08-Jun through 28-Jun
06-Jun Last day for kernel commits for this cycle
08-Jun – 14-Jun Kernel prep week.
15-Jun – 21-Jun Bug verification & Regression testing.
22-Jun – 28-Jun Regression testing & Release to -updates.
14.04.1 cycle: 29-Jun through 07-Aug
27-Jun Last day for kernel commits for this cycle
29-Jun – 05-Jul Kernel prep week.
06-Jul – 12-Jul Bug verification & Regression testing.
13-Jul – 19-Jul Regression testing & Release to -updates.
20-Jul – 24-Jul Release prep
24-Jul 14.04.1 Release 
07-Aug 12.04.5 Release 
These will be the very last kernels for lts-backport-quantal, lts-backport-raring,
 This will be the lts-backport-trusty kernel as the default in the precise point
Open Discussion or Questions? Raise your hand to be recognized
No open discussion.
UOS 14.06 was last week, June 12 to June 14, and it was the first one I was able to attend in full. I was a track lead for the Community Track, and I feel I ended up running most of the show along with Daniel Holbach. To the other track leads of the same track, I mean no offence. :) Because this was my first full UOS, I tired myself out quickly after each day (the weather was gloomy all three days too) and had no mood to do anything else afterwards, which is why this blog post is almost a week late.
The first things I will share with you are the summaries for the Community track:
Introduction to Lubuntu: Phill Whiteside and Harry Webber talked about what Lubuntu is and what is planned.
Ubuntu Women Utopic Goals: To get more women involved in Ubuntu, the team has been looking into adding a “get involved quiz” to the website. The plan is now to get it up on community.ubuntu.com. The women’s team also want to take a look at Harvest and see how it could be improved to show new developers what needs to get done. The team website will also get more stories and updated best practices. More classroom sessions are planned as well.
Community Roundtable: A number of topics were discussed, among them dates for the next events. UOS dates will be picked soon, it was suggested to bring it back in line with the release cycle again. We will work with the LoCo community and Classroom team to organise the Global Jam and other events this cycle.
In the LoCo part of our community we want to look into making it easier to share stories and pictures of LoCo events and publish them on Planet. We also want to look into helping teams to train new coordinators and organisers on their teams.
From fix to image: how your patch makes it into Ubuntu: The CI team has put together an impressive process to get changes automatically built and tested. This makes it a lot easier to land high quality changes in Ubuntu. Łukasz Zemczak gave a great presentation on how this process works.
Ubuntu Documentation Team Roundtable: A number of initiatives were discussed to make it easier for newcomers to get involved with the team: a cleanup of current documentation and referring to it on help.ubuntu.com and elsewhere. Regular meetings are planned again as well.
Kubuntu Documentation Team Roundtable June 2014: They talked about following Ubuntu GNOME’s example and setting up a Kubuntu Promo team to help promote Kubuntu, gather contributors, and send them to the right team (Docs, Dev, etc.). They also talked about working on docs.kubuntu.org, once the server things get set up, to make it look more in line with the new Kubuntu site.
Introduction to Ubuntu GNOME: Ali Linx talked about Ubuntu GNOME, the website, and the history of the flavour. He and other team members also talked about plans for the website, mainly about artwork.
App development training programme: In the last cycle some of our app developers went out to their LoCo meetings and did some app development workshops. We put together a plan to turn this into a more formal training programme, starting in phase 1 in July.
Ubuntu Scientists June 2014 Roundtable: The team reviewed the team’s wiki page and discussed a few changes to it, to make it more inviting and set clearer tasks for newcomers. Another idea was to interview scientist users about their use of Ubuntu and blog about it.
Thanks to Daniel Holbach for the summaries; the links to the sessions are in the Community Track link embedded above. Sorry if it’s hard to read; I can’t fix this issue!
I went to other sessions, but this was my favorite:
And I still have some to watch!
Since I was a track lead, here are a few lessons I learned:
- Give enough notice to the team or group of people, though I don’t think this was completely my fault, since the UOS organizers didn’t give us a month’s notice
- Use Chrome, not Firefox, for Hangouts, and if needed, restart your computer before the next Hangout. I had issues with my netbook and my mic where no one was able to hear me.
- Even though it’s suggested to set up the Hangout on Air ten minutes before, if you have time, do it a bit earlier and check whether you have any problems
- You can host a session for someone else but you don’t need to say anything
I enjoyed this one, but I think it could have been better; I know that is being worked on.