We’ll be hosting our next Ubuntu User Days from Saturday, January 25th 2014 at 14:30 UTC to Sunday, January 26th at 3:00 UTC.
“User Days was created to be a set of courses offered during a one day period to teach the beginning or intermediate Ubuntu user the basics to get them started with Ubuntu”
In order for this event to be a success, we need instructors to lead sessions.
To volunteer to lead a session, you can contact a member of the Ubuntu User Days Team by sending an email to me (lyz at ubuntu.com) or to the ubuntu-classroom at lists.ubuntu.com mailing list, or by stopping by #ubuntu-classroom-backstage on irc.freenode.net.
If you are unsure of a topic for your session, you can visit the Course Suggestions page:
If you are unsure about expectations for class instructors, please ask! You may also visit the logs from past Ubuntu User Days:
Please be sure to pass this announcement along to any of your friends who might be interested in leading a session.
Do you ever want to try out the latest bleeding edge stuff, and find that waiting 6 months for another distro update is too long? Try Project Neon, a nightly build of the latest KDE trunk. It gets you the latest features, changes and enhancements (as well as bugs). Make sure you know what you’re getting into, though.
A similar service is available for KDE’s world class painting application, Krita. Krita Lime is a PPA which lets you test daily builds of improvements to Krita.
The Krita team has been working hard on improving the performance of their overhauled OpenGL engine, with visual improvements too. For example, there is now a high quality filtering mode, which significantly increases the quality of zoomed-in images using advanced filtering algorithms. Read the full post here!
For example, SMM can be used to shut the system down if the CPU temperature is too high, perform transparent fan control, handle special system events (e.g. chipset errors), emulate non-existent or buggy hardware, and a lot more besides.
In theory SMM cannot be disabled by the operating system, yet it has been known to interfere with the operating system even though it is meant to be transparent. SMIs steal CPU cycles from the system: CPU state has to be stored and restored, and there are side effects from flushing out the write-back cache. This CPU cycle stealing can impact real-time behaviour, and in the past it has been hard to determine how frequently SMIs occur and hence how much potential disruption they bring to a system.
When the CPU enters SMM the output pin SMIACT# is asserted (and all further memory cycles are redirected to a protected memory for SMM). Hence one could use a logic analyser on SMIACT# to count SMIs. An alternative is to have a non-interruptible thread on a CPU checking time skips by constantly monitoring the Time Stamp Counter (TSC) but this is a CPU expensive operation.
Fortunately, modern Intel CPUs (such as Ivy Bridge and Haswell) have a special SMI counter in a Model Specific Register. MSR_SMI_COUNT (0x00000034) is incremented whenever an SMI occurs, allowing easy detection of SMIs.
As a quick test, I hacked up smistat, which polls MSR_SMI_COUNT every second. For example, pressing the backlight brightness keys on my Lenovo laptop bumps the counter, and this is easy to see with smistat. So this MSR provides some indication of the frequency of SMIs; however, it of course cannot tell us how many CPU cycles are stolen in SMM. Now that would be a very useful MSR to add to the next revision of the Intel silicon...
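A minimal sketch of an smistat-style poller, assuming msr-tools (`rdmsr`) is installed and the `msr` kernel module is loaded; the function names here are illustrative, not smistat's actual code:

```shell
# Requires: sudo modprobe msr; and the rdmsr tool from msr-tools.

smi_delta() {
  # Pure arithmetic: SMIs between two successive counter samples
  echo $(( $2 - $1 ))
}

poll_smis() {
  # Read MSR_SMI_COUNT (0x34) once per second and report the delta
  local prev cur
  prev=$(sudo rdmsr -d 0x34)   # -d prints the register in decimal
  while sleep 1; do
    cur=$(sudo rdmsr -d 0x34)
    echo "SMIs in last second: $(smi_delta "$prev" "$cur")"
    prev=$cur
  done
}
```

Run `poll_smis` and press a brightness key to watch the counter jump.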
This year's final episode is released a few days later than expected. For the avoidance of doubt a transcript is presented below. Until new episodes start being released in 2014, we encourage you to enjoy the various posted episodes of the 2013 reboot of The Tomorrow People. (N.B. No sponsorship has been provided by The CW, we just like the show enough to recommend it especially with our UK counterpart going on about Doctor Who...)
Download here (MP3) (Ogg Vorbis) (Free Lossless Audio Codec) (Speex), or subscribe to the podcast (MP3) to have episodes delivered to your media player. We suggest subscribing by way of a service like gpodder.net.
Discussion of this episode may take place in the relevant thread on discourse.ubuntu.com.
This work is licensed under the Creative Commons Attribution-ShareAlike 3.0 United States License. To view a copy of this license, visit http://creativecommons.org/licenses/by-sa/3.0/us/.
Welcome to the Burning Circle. This is episode 144 and is the final episode for 2013.
I will be likely promulgating a Delegation of Authority statement by way of the mailing list before the end of the calendar year. I will also have to forward such to the LoCo Council for their attention and ensure that my lateral counterparts are aware. There are some changes in circumstances on my part that right now will require such a Delegation to take place for at least 30 days.
The first alpha release of this cycle is coming out next week. Will you be ready to beat on the images for the various flavors? Please remember that nothing will be released for mainline Ubuntu during the alpha period.
Thank you all for sticking around for the wild ride that 2013 has been. Yeah, we have had changes and issues to cope with. So has the overall Ubuntu realm. Heading into 2014 we get to worry about communications and other issues.
This is the last podcast for the year. No new episodes will be released until an announcement is made in 2014. Keep an eye on planet dot ubuntu dot com for updates.
Thank you for joining us. Until next time, we'll be seeing you...
Ezgo is a pre-configured set of software designed to bring FOSS to Taiwan’s schools. The latest release, Ezgo 11 uses KDE as its desktop and Kubuntu as its base distribution, and has had great success with the New Taipei City government installing Ezgo on more than 10,000 PCs in elementary schools.
The aim isn’t just to get FOSS software into computer classes, but also into other subjects. From that aim the first software compilation was born, and 10 versions later a free and open source operating system is now the pre-installed OS instead of Microsoft Windows – thanks to the hard work of the Ezgo team and all the contributors to open source software.
Mir and Wayland are two display servers that aim to replace X. In April 2013, Canonical Ltd, the company behind Ubuntu, announced that Mir would be used as a replacement for the current X.Org Server in Ubuntu. What does this mean, and what does it have to do with Kubuntu?
The controversy centers around the licensing of Mir and its development process. Mir is licensed under GPLv3; however, contributors must sign an agreement which allows Canonical to relicense their contributions under any other license, including proprietary ones. The development process has also been akin to Android’s, rather than being truly open.
Kubuntu has announced that it will not be switching to Mir and instead will keep X until eventually replacing it with Wayland (like KDE and other distros). Some users, like Igor, have taken sides and decided to support Kubuntu in the controversy. Here’s a snippet from his review of Kubuntu 13.10:
Perhaps differences in philosophy are at the root of the MIR/Wayland controversy. I like technology, but believe philosophy is more important. In reality, however much one might like to cast the issue in philosophical terms, the problem has a very practical impact. We observe the classical Linux case of reinventing the wheel. Instead of one display server to suit all distros, now a new wheel must be invented twice, by different teams of developers working presumably in isolation from one another. [...]
We’re back with the forty-second episode of Season Six of the Ubuntu Podcast from the UK LoCo Team! Alan Pope, Mark Johnson, Tony Whitmore, and Laura Cowen are drinking tea and eating chocolate cake in Studio A.
In this week’s show:
- We discuss Alan’s new toy, a 3Doodler:
- We also discuss poisoning someone, going to ThingMonk, the porting of Doom 3 BFG Edition to Linux, and fixing a laptop under warranty.
- We share some Command Line Lurve: explainshell.com.
- And we read your feedback.
We’ll be back next week for the last episode of Season 6, so please send your comments and suggestions to: email@example.com
Join us on IRC in #ubuntu-uk-podcast on Freenode
Leave a voicemail via phone: +44 (0) 203 298 1600, sip: firstname.lastname@example.org and skype: ubuntuukpodcast
Follow our twitter feed http://twitter.com/uupc
Find our Facebook Fan Page
Follow us on Google Plus
We are pleased to announce that Trusty is now open for translation:
* Translation caveats. Remember that according to the release schedule, translatable messages might be subject to change until the User Interface Freeze.
* Language packs. During the Trusty development cycle, language packs containing translations will be released twice per week, except for the freeze periods. This will allow users and translators to quickly see and test the results of translations.
* Test and report bugs. If you notice any issues (e.g. untranslated strings or applications), do check with the translation team for your language first. If you think it is a genuine bug, please report it.
Happy translating! :-)
Recently a small group of dedicated people have been working to update the community help wiki. Today alone they completed 378 edits. While many of these updates have been simply to tag pages, some of them have been significant. In particular, knome is responsible for cleaning up the home page (above) and several navigation pages such as the applications menu (below).
All of this activity is to help prepare for the release of Ubuntu 14.04. The Ubuntu documentation team could always use more people to assist with documentation. The team is broken up into three different areas:
Working on the wiki can be as simple as identifying pages that need to be updated for the current version of Ubuntu or converting a blog post to a wiki page. I recently converted one of my blog posts to update the Joomla page on the community help wiki and it was very easy to do. A good way to find a match for your particular area of expertise is to look at the categories page. You can also look at the list of pages that are tagged with ‘Needs Updating’.
Keeping documentation up to date is a great example of ‘many hands make light work’. No matter what your technical expertise is, the documentation team can use your help.
Ubuntu-related events have been chugging along here in California.
On Wednesday evening here in San Francisco I had the pleasure of hosting an Ubuntu Hour and Bay Area Debian Dinner. Both events attracted new attendees, which was great to see during December, a month that’s historically pretty quiet for us.
On Thursday night I joined Professor Sameer Verma over at San Francisco State University, where I did a presentation on Open Source for his “Managing Open Source” business school class. Given the audience, I gave the talk the somewhat tongue-in-cheek title of “Open Source for love, money and fame”. I fear it was still a bit too technical for some of the students, but there were a number of great questions at the tail end of my talk.
Thanks to Sameer for the photo
My somewhat sparse slides from the talk are here: SFSU-2013-open-source-love-money-fame.pdf
On Tuesday I’ll be meeting some of my HP colleagues for a couple of days at the office in Sunnyvale. It seemed like a great opportunity to host a Mountain View Ubuntu Hour, so I am! 7PM at Red Rock Coffee in Mountain View, details here:
Looking onward to next year, we have a leadership election coming down the pipeline in a few weeks. Richard Gaskin is putting together an Ubucon at SCaLE12x in February and we have Philip Ballew heading up efforts for our booth at the conference, details coming together here: CaliforniaTeam/Projects/Scale12x
Huge thanks to everyone on the team who has been pitching in lately, it’s a pleasure working with all of you!
Oracle, a sponsor of OLPC Australia, has posted some video interviews of a child and a teacher involved in the One Education programme.
Tuesday was a really nice evening. A few weeks ago I found a poster about the concert of [dunkelbunt], and only got my ticket on Monday. I was told by the ticket sellers that they still had plenty left. When I turned up at the event on Tuesday, though, the concert hall was fully packed with people and I was told that it actually was sold out. There wasn't much room left inside the hall, so I mostly stood in the doorway to the bar area and enjoyed the music from there.
If you listen to their songs you might get an idea why the music caught me and I started to let the music move my body, literally. It's a great feeling after a tough day, and there were some other nice people around letting the same happen to them, so it felt less awkward for me.
Anyway, if you want to find out if their music can do the same to you, here are some songs to listen to:
- The Chocolate Butterfly: This was actually the first song that got me interested in them which was playing on a local radio station.
- Cinnamon Girl: One of the reasons why [dunkelbunt] is put into the electro swing genre. :)
- Schlawiener: The title is a pun, a mix between "Schlawiner" (smooth operator) and "Wiener" (Viennese).
urfkill is meant to be a daemon that centralizes killswitch handling. Rather than having all kinds of different applications and daemons handle Wi-Fi, Bluetooth, WWAN and whatnot separately, and potentially fighting over them, you can have just one system that tracks the states and makes it possible to switch one type of killswitch all at once, or turn everything off should you so desire...
One reason I've taken an interest in urfkill in Ubuntu is that as we build a phone, we have to keep thinking about how users of the devices will be mobile. That's even more the case when you think about a phone or tablet than a laptop: on a laptop, you may have to think of WiFi and Bluetooth, but you're just about as likely to have your laptop off or not have it at all; whereas phones and tablets have become ubiquitous in our way of life.
Like anyone, thinking mobile I'd first think of walking around, driving, or other methods of travel. Granted, nobody needs to turn off Wi-Fi when getting in their car, but what about on planes?
This is the first thing everyone brings up when talking about killswitches: planes. Alright, you really do need to turn the device off at take-off and landing, but some airlines now allow WiFi to be on and offer in-flight service. They still require you to keep cellular and Bluetooth off. Also, while I sometimes do take my laptop out of my bag on long flights, it's just cramped. Space is at a premium on a flight (hey, I fly economy...): you'll likely want to have a drink, people beside you may need to get up, spillage could occur if there is turbulence...
I don't really enjoy using my laptop on a flight, even though it's quite small. It's just so much trouble and not very comfortable.
However, I do love to watch saved movies, listen to music, and play games on a tablet. That tablet will most likely need to have its radios turned off. My phone will typically just stay off and stowed away, since I don't change SIM cards until I can do so safely without risking losing the thing.
But then, one can also think of how you should avoid using transmitting equipment in a hospital. They have similar rules about radios as planes to avoid interfering with cardiac stimulators, MRI equipment, etc.
Having all kinds of different applications handle each type of killswitch separately is quite risky and complicated. How are you certain that things have been turned off? How do you check in the UI whether that is the case? Can you see it quickly by scanning the screen?
What about the actual process of switching things off? Do you need to go through three different interfaces to toggle everything? What do you need to do if you don't have a physical switch to use?
What about persistence after a reboot?
urfkill is meant to, in time, address all such questions. At the moment, it still needs a lot of work though.
I've spent the last day fixing bugs I could find while testing urfkill on my laptop, as well as porting it to logind (still in progress). In time, we should be able to use it efficiently in Ubuntu to handle all the killswitches. With some more work, we will also be able to use it to manage the modem and other systems on Touch.
For the time being, urfkill is available in the archive, for those who want to start experimenting with it.
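For the curious, the kernel already exposes raw killswitch state through the standard `rfkill` tool; urfkill's job is to coordinate on top of that. As an illustrative sketch (the helper name is made up), here is a small filter that answers the "is it actually off?" question from `rfkill list` output:

```shell
# blocked_types: given `rfkill list` output on stdin, print the device
# type of every killswitch that is currently soft-blocked.
blocked_types() {
  awk -F': ' '/^[0-9]+:/ { type = $3 }      # header line: "0: phy0: Wireless LAN"
              /Soft blocked: yes/ { print type }'
}
```

Usage: `rfkill list | blocked_types` prints, say, `Wireless LAN` if WiFi is soft-blocked, making it easy to verify everything really is off before a flight.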
I am delighted to say that the Raspberry Pi cluster project is now fully funded to the first target of £2,500. This means that the Indiegogo fees will be 4% of the total rather than the 9% which applies to partly funded flexible campaigns. The money received via PayPal has already partially cleared, so we have been out spending some of it. Here is a collection of Raspberry Pi units doing some load testing.
There are many ways to build a cluster and many decisions to take along the way: how to power them, what SD cards to use, whether to overclock them, how to do networking, how to fix them together and so on. I will try to explain some of the reasons behind what we are doing and what we tried and didn’t like so much.

Powering the Pis
The first two criteria for powering the cluster were that it must be safe, and it must look safe. These are not the same thing at all: it is quite easy to have something with bare wires all over the place that looks a bit scary but is entirely safe. It is also possible to have it looking great while overloading some components and generating too much heat in the wrong place, building a good-looking fire risk.

A single large transformer was one approach. The difficulty would be handling the connection from a 20A cable or rail (basically like mains flex; the current decides the wire gauge, not the voltage) down to MicroUSB. Most electronics components like a USB socket or stripboard are rated for 2.5A max, so we would end up with chunky mains-grade connectors all over the place, which looks scary even if it is entirely safe.

After a bit of experimentation we found a D-Link 7 port USB hub with a 3A power supply and decided to see how many Raspberry Pi devices we could power from it. It turns out that it can do all 7, which was a bit of a surprise. The Pi is specified to need a 700mA supply to be reliable, but that is when it has two 100mA USB peripherals plugged into it and is running the CPU and GPU flat out. As we are not using the USB ports and we won’t be using the GPU at all, our little Pi units only draw about 400mA each. This simplifies the power setup a lot: we just need several of these high powered hubs, giving us a neat, safe and safe-looking setup. The power supply for the hub does get a little warm, but I have tested the current draw at the plug and we are not exceeding the rated supply.
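As a rough sanity check on that power budget, using the figures from the post (seven Pis at roughly 400mA each against a 3A hub supply):

```shell
# Back-of-envelope power budget for one shelf of the cluster.
pis=7
draw_ma=400       # approximate per-Pi draw with USB and GPU unused
supply_ma=3000    # hub power supply rating

total_ma=$((pis * draw_ma))
echo "total draw: ${total_ma} mA of ${supply_ma} mA available"
```

That leaves about 200mA of headroom, which is why the single 3A hub can (just) carry all seven boards.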
Networking

Initially I wanted to find out if we could do it all with WiFi. This would cut out the wires, give us a decent theoretical peak speed, and could in theory be supported by a single wifi router. After testing Pi-compatible Wireless N dongles the performance just wasn’t there: the most we could get was 20Mbit/sec, whilst with wired networking 74Mbit/sec was achievable. I am not sure if this was a limitation of the USB system or the drivers, but it became clear that wired networking would be significantly quicker.

Having decided that wires are the way forward, it came to choosing switches. One big switch or lots of little ones? Well, the price/performance ratio of the small home switches is just unbeatable. We settled on some TP-Link 8 port gigabit switches. Obviously the Pi only connects at 100Mbit (link speed), but the uplink to the backbone switch is at gigabit speeds. Choosing the 8 port switch meant that we were going to have groups of 7 Raspberry Pi units and one port for the uplink. This approach of multiple switches has the excellent side effect that the cluster is modular: every shelf can run as a self-contained cluster of 7 devices networked together, and we then join them together using a backbone switch to make a bigger cluster.

Physical setup
Here is the first layout attempt. It uses a 30cm x 50cm shelf, with the pi units screwed to wooden dowels pushed into holes drilled in the shelf. There are holes drilled through for the network cables, which were snipped and re-crimped on the other side.
The router and power setup were screwed to the underside of the shelf. This setup was a bit fiddly to build, crimping network cables is a bit time consuming and the dowel arrangement wasn’t as neat as I wanted.
The Raspberry Pi doesn’t really have a flat side available. I was thinking of removing the composite video and audio out connectors to produce a flat side for fixing things to, but then I noticed that if I drill some holes of just the right size, the composite connector makes quite a reasonable push-fit fixing for a sideways-mounted unit. Here is the shelving unit they are going to be fixed to: an IKEA Ivar set with 8 30×50 shelves. One design goal is to use easily available parts so that other people can replicate the design without sourcing obscure or expensive components. Wood is a great material for this kind of project: it is easy to cut, drill and fix things to, and it is a good thermal and electrical insulator. I wouldn’t want to accidentally put a Raspberry Pi down on a metal surface!
More updates will follow as the build progresses. If you have any suggestions on different approaches to any of the decisions on power/networking/fixing then do leave a comment; the design isn’t set in stone and we could end up changing it if a better idea comes along. Any further contributions to the campaign would also be gratefully received, and will go towards filling up more shelves!
A self-proclaimed “non-techie”, Arindam Sen has written a comprehensive review of Kubuntu 13.10 ‘Saucy Salamander’! It’s always interesting to hear the perspective of everyday end users. Digging through the archives, the original motivation for KDE back in 1996 was “A GUI for endusers”: someone who wants to surf the web and use their computer. Read the original KDE announcement if you’re interested. Back to the review: Arindam reviewed Kubuntu 13.10 on an Asus K54C laptop with modest specs.
Except for that dual monitor piece, Kubuntu recognized my laptop’s hardware correctly. Sound played well, screen resolution was excellent and Wifi and LAN worked properly. Touch pad was recognized appropriately with single and double tap and vertical scroll working right from the live boot.
It’s great to hear about compatibility working nearly perfectly. Over the past few years, countless contributors have worked to make Kubuntu and other distros work with as wide a range of hardware as possible, creating an “it just works” experience for end users.
Arindam also mentioned the customization provided by the KDE desktop environment, which is powerful, intuitive and flexible to adjust to your needs.
Day One (Wednesday) of the Mozilla Community Building Team Meetup here in San Francisco started with a bit of an icebreaker game so all 40+ of us could learn a little bit about each other. The exercise consisted of throwing a purple plushie around the room and answering a question.
After the icebreaker we had some introductions by the track leads and organizers who described the format of the rest of the week. We then broke into workgroups (Education, Events, Recognition, Systems & Data and Pathways). The initial work that each workgroup did was to put together some ideas in their focus area for 2014.
Later we came back to the main area we would be working from this week and did some more exercises to produce ideas for increasing participation surrounding various areas of the Mozilla project. My evening wrapped up heading with a group of Mozillians and my cousin down to a local restaurant to unwind and have dinner.
Recently the Ubuntu newswires have been buzzing with the news that we have won our first smartphone partner.
Now, let’s get the elephant in the room out of the way – I am not telling you who it is. It is not my place here to share confidential details about business-to-business relationships such as this. Rest assured though, I know the folks working on these relationships and there is a tremendous amount of opportunity for Ubuntu in these discussions; OEMs, carriers, ISVs and more are very interested in exploring Ubuntu for future products.
This is…spoiler alert…fantastic news.
But what does this news really mean for Ubuntu, and to what extent does our community play a part? Let’s dig into this a little bit.
I joined Ubuntu because I want to help an effort to bring technological elegance and freedom to people. Both of these are essential; elegant proprietary software and complex Free Software are both limited in the opportunities they bring to people and who can harness them. A good balance of both is what we strive to achieve in Ubuntu.
For many years Ubuntu has been available to download and install on your computer. Today you can download Ubuntu for your desktop computer, phone, tablet, and you can deploy it to your public or private cloud.
While this provides a reliable distribution point for those in the know, it remains an unknown service for those not in the know. Put simply: most normal people don’t do this. People like you and me, who read nerdy blogs like mine, often do this.
Now, we often talk about how we have around 20 million Ubuntu users. To be fair, this will always be something of an informed estimation (made up from sales, downloads, etc.). As an example, if one person downloads Ubuntu they may install it on one computer. Alternatively, they could do the kind of work that Project Community Computers and Partimus do and use that download to install Ubuntu on hundreds of computers that potentially thousands of people will use. Again, put simply, it is difficult to get a firm idea of current numbers of users.
Irrespective though, whatever figure we have…such as 20 million…this number is fundamentally defined by our available distribution mechanisms. The formula here is simple: if we increase the opportunity for Ubuntu to be distributed, we get more users…
…and this is where the chain reaction begins.
Wrong chain reaction.
If we have more users, we get more ISVs such as Adobe, Autodesk, Zynga, Rovio and others who want to use Ubuntu as a channel. If we get more apps from ISVs we get more interest from OEMs, carriers, and others. If we get more OEMs and carriers, we get more enterprise, creative-industry, and educational deployments. If we get more deployments we see more businesses selling support, services, training, people writing books, seminars, and other areas of focus. This effectively creates an eco-system around Ubuntu which in turn lowers the bar enough that any consumer can use and try it…thus putting Free Software in the hands of the masses.
Put simply once more: if we make Ubuntu commercially successful, it will put Free Software in the hands of more people.
Now, on the desktop side of things we have Ubuntu pre-installed by four of the largest OEMs on the planet, and while industry-wide annual PC shipments are dropping each year, we have fortunately positioned ourselves in a sweet spot. We can continue to fulfill our position as the third most popular Operating System for desktop/laptop computers, while providing a simple on-ramp to bring Ubuntu to these other devices as part of our wider convergence story.
As such, our first commercial smartphone partner is where we light the touch-paper that starts that chain reaction. This is good for Ubuntu, consumers, app developers, small businesses selling services, and for other OEMs/carriers who are exploring Ubuntu. All of this is good for Free Software.
So where does the community fit into this? Surely all of this work is going to be the domain of paid Canonical engineers delivering whatever the secret smartphone partner wants?
Recent Canonical sprint at the Marriott City Center, Oakland
Not at all.
Delivering a shippable device has many different technology components: hardware enablement, display server (Mir), shell (Unity 8), developer platform and SDK, core applications that ship with the device, quality assurance, language packs, third-party scopes and services, and more.
This is just what sits on the device. Outside of it we also need effective governance, event planning, local user group advocacy and campaigns, app developer growth and support, general documentation and support, web and communications services, accessibility, and more.
Every one of these areas (with the probable exception of specifically working with customers around enabling their specific device) welcomes and needs our community to help. Some of these areas are better set up collaboratively with our community than others…but not working collaboratively with our community is a bug, not a feature.
Believe me when I say there is no shortage of things for us to do. We have a long but exciting road ahead of us, and I am looking at my team to help support our community in finding something fun, rewarding, and productive to work on. There are few things in life more satisfying than putting your brick in the wall as part of a global effort to bring technological change to people. I hope you are joining us for the ride.
If you want to help and get stuck, email me at email@example.com. I am happy to help get you started.
It has been at least two years, maybe longer, since I updated the look and feel of this site. With today’s lovely WordPress update, the administrative interface received a beautiful update and it inspired me to go with a cleaner, more minimalist look for the public side. Hope you like it.
We have long been able to test Ubuntu isos very easily by using ‘testdrive’. It syncs releases/architectures you are interested in and starts them in kvm. Very nice. But nowadays, in addition to the isos, we also distribute cloud images. They are the basis for cloud instances and ubuntu-cloud containers, but starting a local vm based on them took some manual steps. Now you can use ‘uvtool’ to easily sync and launch vms with cloud images.
uvtool is in the archive in trusty and saucy, but if you’re on precise
you’ll need the ppa:
sudo add-apt-repository ppa:uvtool-dev/trunk
sudo apt-get update
sudo apt-get install uvtool
Now you can sync the release you like, using a command like:
uvt-simplestreams-libvirt sync --source http://cloud-images.ubuntu.com/daily release=trusty arch=amd64
See what you’ve got synced:
uvt-simplestreams-libvirt query
then launch a vm
uvt-kvm create xxx release=saucy arch=amd64
and connect to it
uvt-kvm ssh --insecure -l ubuntu xxx
While it exists you can manage it using libvirt,
virsh destroy xxx
virsh start xxx
virsh dumpxml xxx
Doing so, you can find out that the backing store is a qcow snapshot
of the ‘canonical’ image. If you decided you wanted to publish a
resulting vm, you could of course convert the backing store to a
raw file or whatever:
sudo qemu-img convert -f qcow2 -O raw /var/lib/uvtool/libvirt/images/xxx.qcow xxx.img
When you’re done, destroy it using
uvt-kvm destroy xxx
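Putting the pieces together, here is a hypothetical wrapper around the commands above (`image_filter` and `fresh_vm` are names I made up for illustration; only the filter-building helper is pure shell, the rest calls uvtool as shown earlier):

```shell
# Build the release/arch filter arguments used by both uvtool commands.
image_filter() {
  echo "release=$1 arch=$2"
}

# Sync the image, create the named vm, and drop into an ssh session.
fresh_vm() {
  local release=$1 arch=$2 name=$3
  uvt-simplestreams-libvirt sync \
    --source http://cloud-images.ubuntu.com/daily $(image_filter "$release" "$arch")
  uvt-kvm create "$name" $(image_filter "$release" "$arch")
  uvt-kvm ssh --insecure -l ubuntu "$name"
}
```

For example, `fresh_vm trusty amd64 mytest` would sync the trusty amd64 daily image, launch a vm called mytest, and ssh into it.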