More and more software comes with a testsuite. But not every distribution runs it for every package (no matter whether it is Debian, Fedora, or Ubuntu). Why does it matter? Let me give an example from yesterday: HDF 4.2.10.
There is a bug reported against libhdf with information that it built fine for Ubuntu. As I had issues with hdf in Fedora, I decided to take a look and found an even simpler patch than the one I wrote. I tried it and got the package built. But that was all…
Running the testsuite is easy: “make check”. But the result was awesome:
!!! 31294 Error(s) were detected !!!
It does not look good, right? So I spent some time yesterday searching for architecture-related checks and found the main reason for such a huge number of errors: unknown systems are treated as big-endian… One simple switch there and the count dropped from 31294 to just 278.
It took me a while to find all 27 places where miscellaneous variations of “#if defined(__aarch64__)” were needed, and I finally got to the point where “make check” simply worked as it should.
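For the curious, there is a quick way to see which byte order a machine actually uses from the shell. This is a generic trick, not part of the HDF patches themselves:

```shell
# od groups the two bytes into a host-order 16-bit word:
# little-endian machines (like aarch64 Linux) print 0201,
# big-endian machines print 0102.
printf '\001\002' | od -An -tx2
```

If a build system guesses wrongly here, on-disk data formats like HDF read every multi-byte value back byte-swapped, which is exactly the kind of failure a testsuite catches and a successful build does not.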
So if you port software, do not assume it is fine once it builds. Run the testsuite to be sure that it runs properly.
Release Metrics and Incoming Bugs
Release metrics and incoming bug data can be reviewed at the following link:
Status: Utopic Development Kernel
We have rebased our Utopic kernel “unstable” branch to v3.16-rc2. We
are preparing initial packages and performing some test builds and DKMS
package testing. I do not anticipate a v3.16 based upload until it has
undergone some additional widespread baking and testing.
Important upcoming dates:
Thurs Jun 26 – Alpha 1 (2 days away)
Fri Jun 27 – Kernel Freeze for 12.04.5 and 14.04.1 (3 days away)
Thurs Jul 31 – Alpha 2 (~5 weeks away)
The current CVE status can be reviewed at the following link:
Status: Stable, Security, and Bugfix Kernel Updates – Trusty/Saucy/Precise/Lucid
Status for the main kernels, until today (May 6):
- Lucid – Testing
- Precise – Testing
- Saucy – Testing
- Trusty – Testing
Current opened tracking bugs details:
For SRUs, SRU report is a good source of information:
cycle: 08-Jun through 28-Jun
06-Jun Last day for kernel commits for this cycle
08-Jun – 14-Jun Kernel prep week.
15-Jun – 21-Jun Bug verification & Regression testing.
22-Jun – 28-Jun Regression testing & Release to -updates.
14.04.1 cycle: 29-Jun through 07-Aug
27-Jun Last day for kernel commits for this cycle
29-Jun – 05-Jul Kernel prep week.
06-Jul – 12-Jul Bug verification & Regression testing.
13-Jul – 19-Jul Regression testing & Release to -updates.
20-Jul – 24-Jul Release prep
24-Jul 14.04.1 Release 
07-Aug 12.04.5 Release 
These will be the very last kernels for lts-backport-quantal, lts-backport-raring,
The lts-backport-trusty kernel will be the default in the precise point release.
Open Discussion or Questions? Raise your hand to be recognized
No open discussions.
As many of you will know, I used to do a weekly Q&A on Ubuntu On Air for the Ubuntu community where anyone could come and ask any question about anything.
I am pleased to announce my weekly Q&A is coming back but in a new time and place. Now it will be every Thursday at 6pm UTC (6pm UK, 7pm Europe, 11am Pacific, 2pm Eastern), starting this week.
You can join each weekly session at http://www.jonobacon.org/live/
You are welcome to ask questions about:
- Community management, leadership, and how to build fun and productive communities.
- XPRIZE, our work there, and how we solve the world’s grand challenges.
- My take on Ubuntu from the perspective of an independent community member.
- My views on technology, Open Source, news, politics, or anything else.
As ever, all questions are welcome! I hope to see you there!
Welcome to the Ubuntu Weekly Newsletter. This is issue #373 for the week June 16 – 22, 2014, and the full version is available here.
In this issue we cover:
- Ubuntu 13.10 (Saucy Salamander) reaches End of Life on July 17 2014
- Welcome New Members and Developers
- Ubuntu Stats
- Colorado Ubuntu Team: Operation ‘Spread Ubuntu’ is Underway!
- Ubuntu Cloud News
- Svetlana Belkin: UOS 14.06 Summary and Lessons Learned (as a Track Lead)
- Ubuntu Scientists: Introducing the “Ubuntu Scientists Blog”!
- Ben Howard: SSD and PIOPS AMI’s for AWS
- Kubuntu Wire: Refurbished HP Laptops with Kubuntu
- Robert Ancell: GTK+ applications in Unity 8 (Mir)
- Ubuntu Scientists: Team Wiki Pages Update: June 19, 2014
- Ubuntu Forum Council Changes
- Canonical News
- Hands-on with Canonical’s Orange Box and a peek into cloud nirvana
- In The Blogosphere
- Other Articles of Interest
- Featured Audio and Video
- Weekly Ubuntu Development Team Meetings
- Monthly Team Reports: May 2014
- Upcoming Meetings and Events
- Updates and Security for 10.04, 12.04, 13.10 and 14.04
- And much more!
This issue of the Ubuntu Weekly Newsletter is brought to you by:
- Elizabeth K. Joseph
- Tiago Carrondo
- Jose Antonio Rey
- And many others
Except where otherwise noted, content in this issue is licensed under a Creative Commons Attribution ShareAlike 3.0 License.
Laksa is a quite popular spicy noodle soup from China, Singapore, Malaysia & Indonesia; as such, it has many variations and regional differences. You can get more info from Wikipedia.
This recipe I'm sharing is for a chicken laksa, but it can be varied or added to, such as using fish or shrimp instead. The addition of vegetables such as bok choy or shredded carrot wouldn't go astray either.
Chicken Laksa Recipe
- 2 tablespoons peanut oil (or sunflower oil)
- 1 kg chicken thigh fillets, sliced
- 200g laksa paste (if not purchased, a recipe follows)
- 3 cups (700ml) chicken stock
- 1 can (~400ml) coconut milk/cream
- 1 lime, juice of
- 2 kaffir lime leaves, very finely shredded (if not available add more lime juice)
- 2 tablespoons crushed rock sugar (or light brown sugar)
- 2 tablespoons fish sauce
- 800g (a box) dried vermicelli rice noodles (or Udon noodles)
- salt
Garnishes
- Fried Asian shallots (see note) and sliced red chile, to garnish
- 2 cups (160g) bean sprouts, trimmed
- 1/2 cup Thai basil leaves (you can substitute regular basil)
- 1/2 cup cilantro leaves
- 1/2 cup green onion, sliced on a bias
- 1-2 chilies, sliced thinly
- 4 eggs, hardboiled
- lime slices
- In a wok, stir fry the chicken slices, in batches, until golden brown (3-4 minutes). Remove from wok and set aside.
- Add the laksa paste and stir fry until fragrant. Transfer to a large pot.
- Return the chicken, add the coconut milk, chicken stock, kaffir lime leaves (if using) & the lime juice.
- Bring the broth to a boil, reduce heat & simmer for 10 minutes to cook the chicken.
- Stir in the sugar, fish sauce and season to taste.
- Cook noodles according to directions.
- Serve broth and cooked chicken over cooked noodles and garnish with slices of boiled egg, bean sprouts, Thai basil & cilantro leaves, sliced green onion and sliced chiles.
Laksa Paste Recipe
- 6 dried long red chiles (or a couple of tablespoons dried chile flakes)
- 1 teaspoon ground coriander
- 1/2 teaspoon ground cumin
- 1/2 teaspoon ground turmeric
- 1/2 teaspoon sweet paprika
- 1 3-cm piece galangal (or ginger), peeled & chopped
- 1 onion, chopped
- 2 garlic cloves, chopped
- 2 stalks lemongrass, white part only, chopped
- several cashew nuts
- 2 teaspoons shrimp paste (or dried shrimp)
- 1 tablespoon peanut oil
Makes about 300ml
- Place the dried chile in a small heatproof bowl (and the dried shrimp if using in another) and cover with boiled water. Let stand for ~10 minutes.
- In a small non-stick skillet, toast the coriander, cumin, turmeric & paprika over med-high heat until fragrant (1-2 minutes).
- In a food processor, pulse to coarsely chop the ginger/galangal, onion, lemongrass, garlic & cashews.
- Drain and add the reconstituted chiles (and dried shrimp, if using) and the shrimp paste (if using).
- Turn the food processor on and pour in the peanut oil, blending until a smooth paste forms.
- This laksa paste can be kept refrigerated for a couple of weeks.
The dough, I was surprised to discover, is a very basic wheat flour dough; however, the footwork (you read correctly) required may deter some. Given the dough's density, it takes quite a bit of force to knead, but it's well worth it, in my opinion.
Udon Dough Recipe
- 2 cups white all-purpose flour
- 1/2 cup warm water
- 2 teaspoons kosher salt
- Dissolve the salt in the warm water.
- In a large bowl, combine the salt water solution and the flour.
- Knead with your hands until it is a lumpy mass & transfer it to a clean surface.
- Continue to knead and shape it into a ball – this'll take about 10 minutes.
- Let the dough-ball rest for a few minutes. The traditional method is to knead the dough with your feet, using your body weight to help flatten it.
- Shape it into more-or-less a rectangle & place it in a large zip-seal bag or between two sheets of plastic wrap and wrap in a towel.
- Using your feet, flatten it out until it's ~1 cm thick, then fold in half and knead it out again.
- Repeat this another 3 or 4 times, folding the dough over in the same direction each time – this smooths out the dough. Hey, how often do you get to walk on your food?
- After the final folding/foot massage, set the dough aside (still wrapped up) for 3+ hours.
- When ready, unwrap the dough onto a clean, lightly-floured surface.
- Using a pasta machine (for ease), roll to desired thinness and cut into the noodle shape you like.
- Cook in boiling water or directly in a soup broth until tender.
Back in March I photographed the legendary Tom Baker at the Big Finish studios in Kent. The occasion was the recording of a special extended interview with Tom, to mark his 80th birthday. The interview was conducted by Nicholas Briggs, and the recording is being released on CD and download by Big Finish.
I got to listen in to the end of the recording session and it was full of Tom’s own unique form of inventive story-telling, as well as moments of reflection. I got to photograph Tom on his own using a portable studio setup, as well as with Nick and some other special guests. All in about 7 minutes! The cover has been released now, and it looks pretty good, I think.
The CD is available for pre-order from the Big Finish website now. Pre-orders will be signed by Tom, so buy now!
It was about 4pm on Friday afternoon; I had just about wrapped up everything I absolutely needed to do for the day, and I decided to kick back and have a little fun with the remainder of my work day.
It's now 4:37pm on Friday, and I'm now done.
Done with what? The Yo charm, of course!
The Internet has been abuzz this week about how the Yo app received a whopping $1 million in venture funding. (Forbes notes that this is a pretty surefire indication that there's another internet bubble about to burst...)
It's little more than the first program any kid writes -- hello world!
Subsequently I realized that we don't really have a "hello world" charm. And so here it is, yo.
$ juju deploy yo
Deploying a webpage that says "Yo" is hardly the point, of course. Rather, this is a fantastic way to see the absolute simplest form of a Juju charm. Grab the source and explore it yourself.
$ charm-get yo
$ tree yo
yo
├── config.yaml
├── copyright
├── hooks
│   ├── config-changed
│   ├── install
│   ├── start
│   ├── stop
│   ├── upgrade-charm
│   └── website-relation-joined
├── icon.svg
├── metadata.yaml
└── README.md

1 directory, 11 files
- The config.yaml lets you set and dynamically change the configuration of the service itself (the color and size of the font that renders "Yo").
- The copyright is simply boilerplate GPLv3.
- The icon.svg is just a vector graphics "Yo."
- The metadata.yaml explains what this charm is and how it can relate to other charms.
- The README.md is a simple getting-started document.
- And the hooks...
- config-changed is the script that runs when you change the configuration -- basically, it uses sed to edit the index.html Yo webpage in place
- install simply installs apache2 and overwrites /var/www/index.html
- start and stop simply start and stop the apache2 service
- upgrade-charm is currently a no-op
- website-relation-joined sets and exports the hostname and port of this system
Hopefully this simple little example might help you examine the anatomy of a charm for the first time, and perhaps write your own first charm!
This week Open Source Bridge will kick off in Portland and I’m extremely excited that Mozilla will once again be sponsoring this wonderful event. This will also mark my second year attending.
To me, Open Source Bridge is the kind of conference that has a lot of great content while also having a small feel to it; where you feel like you can dive in and do some networking and attend many of the talks.
This year, like previous years, Mozilla will have a number of speakers and attendees at Open Source Bridge and we will be giving out some swag in the Hacker Lounge throughout the week and chatting with people about Firefox OS and other Mozilla Projects.
Be sure to catch one of these awesome talks being given by Mozillians:
Making language selection smarter in Wikipedia – Sucheta Ghosal
The joy of volunteering with open technology and culture – Netha Hussain
Making your mobile web app accessible – Eitan Isaacson
Modern Home Automation – Ben Kero
Nest + Pellet Stove + Yurt – Lars John
What pushed me over the edge was finding the aiopg driver (postgres asyncio bindings), with very (let me stress - very) immature SQLAlchemy support.
Unfortunately, no web framework supports asyncio as a first-class member of the framework, so I was forced into writing a microframework. The resulting “app” looks not bad at all, and would likely be easy to switch over if Flask ever gets asyncio support.
One neat side-effect was that the framework can support stuff like websockets as a first-class element of the framework, just like GET requests.
Moxie will be a tool to run periodic long-running jobs in a sane way using docker.io.
First, torrents aren't available: 1) they require dedicated tracker software, which isn't needed because 2) KDE doesn't distribute many large files.
However, files available at http://files.kde.org/snapshots/ and http://download.kde.org have a Details tab, where metalinks and mirrors are listed. I knew nothing about metalinks, but we could all benefit from them when downloading large files.
PovAddict (Nicolas Alvarez) told me that he uses the command line for them: `aria2c http://files.kde.org/snapshots/neon5-201406200837.iso.metalink`, for instance. I had to install aria2 for this to work, and the file took less than 15 minutes to download.
I read man wget, and it does not seem to support metalinks; at least I didn't find a reference.
Bshah (Bushan Shah) tried with kget and says it works very well. He said, New Download > Paste metalink > it will ask which files to download.
He also found the nice Wikipedia page for me: http://en.wikipedia.org/wiki/Metalink
Thanks to bcooksley, PovAddict and bshah for their help.
PS: Bcooksley adds that the .mirrorlist URL is generally what we recommend people use anyway. So even if you don't point to the metalink, please use the .mirrorlist URL when posting a file hosted at download.kde.org or files.kde.org. If people forget to do that, click the Details link to get there for the hashes, lists of mirrors, and metalink files.
With Amazon’s announcement that SSD is now available for EBS volumes, they have also declared this the recommended EBS volume type.
The good folks at Canonical are now building Ubuntu AMIs with EBS-SSD boot volumes. In my preliminary tests, running EBS-SSD boot AMIs instead of EBS magnetic boot AMIs speeds up the instance boot time by approximately… a lot.
Canonical now publishes a wide variety of Ubuntu AMIs including:
- 64-bit, 32-bit
- EBS-SSD, EBS-SSD pIOPS, EBS-magnetic, instance-store
- PV, HVM
- in every EC2 region
- for every active Ubuntu release
Matrix that out for reasonable combinations and you get 561 AMIs actively supported today.
On the Alestic.com blog, I provide a handy reference to the much smaller set of Ubuntu AMIs that match my generally recommended configurations for most popular uses, specifically:
I list AMIs for both PV and HVM, because different virtualization technologies are required for different EC2 instance types.
Where SSD is not available, I list the magnetic EBS boot AMI (e.g., Ubuntu 10.04 Lucid).
To access this list of recommended AMIs, select an EC2 region in the pulldown menu towards the top right of any page on Alestic.com.
If you like using the AWS console to launch instances, click on the orange launch button to the right of the AMI id.
The AMI ids are automatically updated using an API provided by Canonical, so you always get the freshest released images.
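The same data is browsable by hand: Canonical publishes plain-text image manifests on cloud-images.ubuntu.com. The exact URL pattern below is my recollection of that interface and may change:

```shell
# List the current released Trusty server images; each line includes the
# region, root store, virtualization type, and AMI id.
curl -s http://cloud-images.ubuntu.com/query/trusty/server/released.current.txt | head -n 5
```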
Original article: http://alestic.com/2014/06/ec2-ebs-ssd-ami
stress-ng currently contains the following methods to exercise the machine:
- CPU compute - just lots of sqrt() operations on pseudo-random values. One can also specify the % loading of the CPUs
- Cache thrashing, a naive cache read/write exerciser
- Drive stress by writing and removing many temporary files
- Process creation and termination, just lots of fork() + exit() calls
- I/O syncs, just forcing lots of sync() calls
- VM stress via mmap(), memory write and munmap()
- Pipe I/O, large pipe writes and reads that exercise pipe, copying and context switching
- Socket stressing, much like the pipe I/O test but using sockets
- Context switching between a pair of producer and consumer processes
The --metrics option dumps the number of operations performed by each stress method, aka "bogo ops" -- bogo because they are a rough and unscientific metric. One can specify how long to run a test either by test duration in seconds or by bogo ops.
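A typical invocation combining a few of the stressors listed above might look like this (option names taken from the stress-ng manual; adjust the numbers to taste):

```shell
# Run 4 CPU stressors at 50% load plus 2 sync()-based I/O stressors for
# one minute, and print the bogo-ops summary at the end.
stress-ng --cpu 4 --cpu-load 50 --io 2 --timeout 60s --metrics-brief
```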
I've tried to make stress-ng compatible with the older stress tool, but note that it is not guaranteed to produce identical results, as the test methods the two tools have in common have been implemented differently.
Stress-ng has been useful for helping me measure different power-consuming loads. It has also been useful for testing various thermald optimisation tweaks on one of my older machines.
For more information, consult the stress-ng manual page. Be warned, this tool can make your system get seriously busy and warm!
Today, I was checking some charming as usual, and found myself with a problem. I wanted separate environments for automated testing and manual code testing, but I only had one AWS account. I thought I needed an account in another cloud, or another AWS account, but after thinking for a while I decided it wasn’t worth it. But then I asked myself: was it possible to just clone the information in my environments.yaml file and set up another environment with the same credentials? Indeed, it was.
The only thing I did here was:
- Open my environments.yaml file.
- Copy the exact same information I had for my old EC2 environment.
- Give a new name to the environment I was creating.
- Change the name of the storage bucket (as it has to be unique).
- Save the changes, close the file, and bootstrap the new environment.
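The result in environments.yaml looks roughly like this (the environment names and placeholder credentials here are made up; only the bucket name must differ between the two stanzas):

```yaml
environments:
  amazon:
    type: ec2
    access-key: YOUR-ACCESS-KEY
    secret-key: YOUR-SECRET-KEY
    control-bucket: juju-bucket-main
  amazon-testing:        # same credentials, new name
    type: ec2
    access-key: YOUR-ACCESS-KEY
    secret-key: YOUR-SECRET-KEY
    control-bucket: juju-bucket-testing   # must be unique
```

Then `juju bootstrap -e amazon-testing` brings up the second environment alongside the first.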
Easy enough, right? That way you can have multiple environments and execute different things on each one with just one account. I am not sure how this will work for other providers, but at least for AWS it works this way. This just adds more awesomeness to Juju than it already has. Now, let’s play with these environments!
Ubuntu announced its 13.10 (Saucy Salamander) release almost 9 months ago, on October 17, 2013. This was the second release with our new 9 month support cycle and, as such, the support period is now nearing its end and Ubuntu 13.10 will reach end of life on Thursday, July 17th. At that time, Ubuntu Security Notices will no longer include information or updated packages for Ubuntu 13.10.
The supported upgrade path from Ubuntu 13.10 is via Ubuntu 14.04 LTS. Instructions and caveats for the upgrade may be found at:
Ubuntu 14.04 LTS continues to be actively supported with security updates and select high-impact bug fixes. Announcements of security updates for Ubuntu releases are sent to the ubuntu-security-announce mailing list, information about which may be found at:
Since its launch in October 2004 Ubuntu has become one of the most highly regarded Linux distributions with millions of users in homes, schools, businesses and governments around the world. Ubuntu is Open Source software, costs nothing to download, and users are free to customise or alter their software in order to meet their needs.
Originally posted to the ubuntu-announce mailing list on Fri Jun 20 05:00:13 UTC 2014 by Adam Conrad on behalf of the Ubuntu Release Team
A lot of people have been asking lately about the minimum number of nodes required to set up OpenStack, and there seems to be a lot of buzz around setting up OpenStack with Juju and MAAS. Some would speculate it has something to do with the amazing keynote presentation by Mark Shuttleworth; others would concede it’s just because charms are so damn cool. Whatever the reason, my answer is as follows:
You really want 12 nodes to do OpenStack right, even more for high availability, but at a bare minimum you only need two nodes.
So, naturally, as more people dive in to OpenStack and evaluate how they can use it in their organizations, they jump at the thought “Oh, I have two servers lying around!” and immediately want to know how to achieve such a feat with Juju and MAAS. So, I took an evening to do such a thing with my small cluster and share the process.
This post makes a few assumptions. First, that you have already set up MAAS, installed Juju, and configured Juju to speak to your MAAS environment. Secondly, that the two-machine allotment means nodes left over after setting up MAAS, and that these two nodes are already enlisted in MAAS.
My setup
Before I dive much deeper, let me briefly show my setup.
I realize the photo is terrible; the Nexus 4 just doesn’t have a super stellar camera compared to other phones on the market. For the purposes of this demo I’m using my home MAAS cluster, which consists of three Intel NUCs, a gigabit switch, a switched PDU, and an old Dell Optiplex with an extra NIC which acts as the MAAS region controller. All the NUCs have been enlisted in MAAS and commissioned already.
Diving in
Once MAAS and Juju are configured, you can go ahead and run juju bootstrap. This will provision one of the MAAS nodes and use it as the orchestration node for your Juju environment. This can take some time, especially if you don’t have the fastpath installer selected; if you get a timeout during your first bootstrap, don’t fret! You can increase the bootstrap timeout in the environments.yaml file with the following directive in your maas definition: bootstrap-timeout: 900. During the video I increase this timeout to 900 seconds in the hope of eliminating this issue.
After you’ve bootstrapped, it’s time to get deploying! If you care to use the Juju GUI, now would be the time to deploy it. You can do so by running the following command:
juju deploy --to 0 juju-gui
To avoid having juju spin us up another machine we can tell Juju to simply place it on machine 0.
NOTE: the --to flag is crazy dangerous. Not all services can be safely co-located with each other. This is tantamount to “hulk smashing” services and will likely break things. Juju GUI is designed to coincide with the bootstrap node, so this has been safe. Running this elsewhere will likely result in bad things. You have been warned.
Now it’s time to get OpenStack going! Run the following commands:
juju deploy --to lxc:0 mysql
juju deploy --to lxc:0 keystone
juju deploy --to lxc:0 nova-cloud-controller
juju deploy --to lxc:0 glance
juju deploy --to lxc:0 rabbitmq-server
juju deploy --to lxc:0 openstack-dashboard
juju deploy --to lxc:0 cinder
To break this down, what you’re doing is deploying the minimum number of components required to support OpenStack; only, you’re deploying them to machine 0 (the bootstrap node) in LXC containers. If you don’t know what LXC containers are, they are very lightweight Linux containers (virtual machines) that don’t produce a lot of overhead but allow you to safely compartmentalize these services. So, after a few minutes these machines will begin to pop online, but in the meantime we can press on, because Juju waits for nothing!
The next step is to deploy the nova-compute node. This is the powerhouse behind OpenStack and is the hypervisor for launching instances. As such, we don’t really want to virtualize it, as KVM (or Xen, etc.) doesn’t work well inside of LXC machines.
juju deploy nova-compute
That’s it. MAAS will allocate the second, and final node if you only have two, to nova-compute. Now, while all these machines are popping up and becoming ready, let’s create relations. The magic of Juju and what it can do is in creating relations between services. It’s what turns a bunch of scripts into LEGOs for the cloud. You’ll need to run the following commands to create all the relations necessary for the OpenStack components to talk to each other:
juju add-relation mysql keystone
juju add-relation nova-cloud-controller mysql
juju add-relation nova-cloud-controller rabbitmq-server
juju add-relation nova-cloud-controller glance
juju add-relation nova-cloud-controller keystone
juju add-relation nova-compute nova-cloud-controller
juju add-relation nova-compute mysql
juju add-relation nova-compute rabbitmq-server:amqp
juju add-relation nova-compute glance
juju add-relation glance mysql
juju add-relation glance keystone
juju add-relation glance cinder
juju add-relation mysql cinder
juju add-relation cinder rabbitmq-server
juju add-relation cinder nova-cloud-controller
juju add-relation cinder keystone
juju add-relation openstack-dashboard keystone
Whew, I know that’s a lot to go through, but OpenStack isn’t a walk in the park. It’s a pretty intricate system with lots of dependencies. The good news is we’re nearly done! No doubt most of the nodes have turned green in the GUI or are marked as “started” in the output of juju status.
One of the last things is configuration for the cloud. Since this is all working against Trusty, we have the latest OpenStack being installed. All that’s left is to configure our admin password in keystone so we can log in to the dashboard:
juju set keystone admin-password="helloworld"
Set the password to whatever you’d like. Once complete, run juju status openstack-dashboard, find the public-address for that unit, load its address in your browser, and navigate to /horizon. (For example, if the public-address was 10.0.1.2 you would go to http://10.0.1.2/horizon). Log in with the username admin and the password you set on the command line. You should now be in the Horizon dashboard for OpenStack. Click on Admin -> System Panel -> Hypervisors and confirm you have a hypervisor listed.
Congratulations! You’ve created a condensed OpenStack installation.
On top of the incredible response from the team to complete the handout, I received a handful of volunteers for CD distribution throughout Colorado. The volunteers below will be available with install CDs in the following Colorado cities:
- Neal McBurnett: Boulder
- Chris Yoder: Longmont
- Ryan Nicholson: Fort Collins
- Emma Marshall: Denver & Aurora
Here's a close look at our 2-sided handout:
Thank you to the Colorado Ubuntu Team for helping spread Ubuntu! We are on an excellent path to a successful summer!
The Randa meetings provide an excellent opportunity for KDE developers to come together for a week-long hack session to fix bugs in various KDE components while collaborating on new features.
This year we have some amazing things planned, with contributors working across the board on delivering an amazing KDE Frameworks 5 experience, a KDE frameworks SDK, a KDE frameworks book, the usual bug fixing and writing new features for the KDE Multimedia stack and much much more.
So please, go ahead and donate to our Randa fundraiser here, because when these contributors come together, amazing things happen :)
Packages for the release of KDE SC 4.13.2 are available for Kubuntu 12.04LTS, 13.10 and our development release. You can get them from the Kubuntu Backports PPA.
To update, use the Software Repository Guide to add the following repository to your software sources list: