Rafał Cieślak: Multi-OS gaming w/o dual-booting: Excellent graphics performance in a VM with VGA passthrough
Note: This article is a technology/technique outline, not a detailed guide and not a how-to. It explains what VGA passthrough is, why you might be interested in it, and where to start.
Even with the current abundance of Linux-native games (both indie and AAA), and with WINE reliably running almost any not-so-new software, many gamers who use Linux on a daily basis still switch to Windows to play games. Regardless of one’s attitude towards non-free software, it has to be admitted that if you wish to try out some of the newest titles, you have no choice but to run them on a Windows installation. This is why so many gamers dual-boot: with two operating systems installed on the same machine, they use Windows for playing games and Linux for virtually everything else, limiting their use of Microsoft’s OS to gaming only. This popular technique seems handy – you get the luxury of using Linux, and the gaming performance of Windows.
But dual-booting is annoying because switching contexts requires a reboot. Need to IM your friend while playing? Save your game, shut down Windows, reboot to Linux, launch the IM client, reboot to Windows, load your game. Switching takes a long time and is inconvenient, so the player may feel discouraged from doing it at all.
What if you could run both operating systems at once? That’s nothing new: run a virtual machine on your Linux system, install Windows within it, and voilà! But a plain virtual machine is no good for gaming – the performance will be terrible. Playing chess might work, but any 3D graphics won’t, because of the lack of hardware acceleration. The VM emulates a simple graphics adapter to display its output in a window of the host OS.
And that is where VGA passthrough comes in and solves this issue.

1. The idea
The key to getting decent graphics in a VM is to grant the virtual machine full access to your graphics card. This means that your host OS will not touch this piece of hardware at all, and the guest OS will be able to use it like any other (emulated) hardware. The guest OS (presumably Windows) will load its own drivers for the graphics adapter, and will communicate with it natively! It will therefore have full access to hardware acceleration and any other goodies that gear might provide (e.g. HDMI audio). The idea of passing a VGA adapter to a virtual machine is usually called VGA passthrough.
Sounds crazy? Let me tease you: my setup is capable of smoothly running Watch_Dogs and Tomb Raider (2013) on Ultra settings at 60+ FPS within that virtual machine, using NVIDIA’s GTX 770. And I get the luxury of running both OSes at once – so I can switch between them in the blink of an eye, without shutting down either one! This is astonishingly convenient.
Because the dedicated graphics hardware will be reserved for the guest system, the host will need another graphics adapter to display anything. So here comes the first hardware requirement: you need at least two graphics adapters. However, this is not uncommon – many new Intel processors have a built-in GPU – and if you are a gamer, chances are you have invested in a dedicated graphics card – so that makes two graphics adapters already. Let the host system use the integrated graphics, and the guest will get the powerful dedicated graphics for games. Because both graphics adapters work independently and there is no way to compose their video output¹, you will need two separate displays, one for each system. This means either a set of two monitors, or a monitor with two video inputs (so that you can switch between them). You might also experiment with a KVM switch.
Also keep in mind that this is not an easy thing to set up. While some claim they have succeeded on their first try, many others have struggled a lot. Personally, I spent about two weeks tuning things to get my VGA passthrough running – and if we count hardware research and preparation, it took me two months. But it was completely worth it! My current setup consists of:
- Intel i7-4790K (4 cores / 8 threads @ 4.0 GHz)
- ASRock Z97 Extreme6
- NVIDIA GTX 770 4GB
- and some 16 gigs of RAM
- also, a monitor with multiple video inputs (I switch video source using buttons on the monitor)
- Ubuntu 14.04
As I have mentioned, this setup is capable of running very demanding games at maxed-out settings with amazing results. How does it work in practice? It feels as if I were running both systems at once. For example, while I play a game under Windows, my Linux has an IM client running. Because I mix the sound from both systems, I can hear the notification when I get a message. So I pause the game, switch the monitor’s video source with a hotkey, respond to the message, and switch the video back. If I had two monitors, I would play on one of them while the host system used the other – so I wouldn’t even need to touch the monitor to switch OSes, I would just rotate my head a little bit ;-)
Getting here was a lot of work, but a lot of fun too! The first step is to meet the…

2. Hardware requirements
Yeah. Not every machine will be able to do this trick. As already mentioned, you need two graphics adapters. However, it is not possible to pass through the graphics integrated in your CPU! This is because passthrough works by separating a PCI device from the host system and attaching it to the guest OS, so you can only pass through dedicated graphics hardware. Probably not much of a problem, but it’s an important note.
You also need to ensure that your CPU and mainboard support IOMMU – extensions for I/O virtualization, which are necessary for passing through a PCI device. Intel calls their IOMMU technology “VT-d”, while AMD refers to it as “AMD-Vi”. This is an absolute must, so if you are buying new hardware, make sure both your processor and the chipset support IOMMU²!
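Before buying anything (or as a first step on existing hardware), it is worth checking what the host already reports. A few common Linux-side checks (these commands are my suggestion, not part of the original article):

```shell
# CPU virtualization extensions: 'vmx' means Intel VT-x, 'svm' means AMD-V.
egrep -o 'vmx|svm' /proc/cpuinfo | sort -u

# After booting with intel_iommu=on (or amd_iommu=on on AMD), check that
# the IOMMU actually came up, and inspect the groups devices landed in:
dmesg | grep -i -e DMAR -e IOMMU
find /sys/kernel/iommu_groups/ -type l
```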
Also, if you plan to use a CPU integrated graphics adapter for the host system, make sure that the mainboard supports it, and that it has a video output!
You will get the best results with a multi-core CPU. Demanding games require not only powerful graphics hardware, but a decent CPU as well! It is possible to reserve some of the CPU’s cores for the VM – this way you can ensure that the guest OS will be granted enough computational power. For example, in my setup, the host OS uses 2 cores, while the other 6 are at Windows’ disposal.
Also, as explained, you need a monitor with several inputs, or a set of two. I am not aware of any way to get this working on a laptop, as most laptops have just one display, and you cannot manually switch between its video sources¹.
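One way to reserve cores – a sketch under the assumption of a Linux host with KVM, not necessarily how the author configured it – is to isolate some CPUs from the host scheduler at boot and pin the VM onto them:

```shell
# Kernel command line (e.g. in /etc/default/grub): keep the host
# scheduler off CPUs 2-7, leaving them free for the guest:
#   isolcpus=2-7

# Then pin the whole qemu process onto the isolated CPUs at launch
# (the CPU numbers and vCPU count here are examples):
taskset -c 2-7 qemu-system-x86_64 -enable-kvm -smp 6 ...
```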
So the full list of requirements is:
- IOMMU compatible CPU and mainboard
- A dedicated PCI graphics adapter (for passing through)
- Graphics hardware for the host OS (can be integrated in CPU)
- Monitor with multiple video inputs (or, better, two monitors)
- (Recommended: multi-core CPU).
Warning: Note that you DO NOT NEED a multi-OS graphics card! Contrary to popular belief, non-Quadro NVIDIA cards will work well, with no hardware modifications of any kind!

3. Methods
There are two popular passthrough techniques – one involves Xen virtualization, and the other uses Qemu and VFIO. Having played around with both, I am personally a fan of the Qemu way – it seems much easier to set up, I get more control over my VM, customizations are easier, and, most importantly, it works with virtually any PCI graphics adapter!
There is a lot of confusion on the Internet concerning what results each method may yield. Some say that the Qemu method can never deliver decent performance; they claim that only Xen can perform primary VGA passthrough, while Qemu’s secondary VGA passthrough will be very inefficient. However, numerous people (including me) confirm awesome performance with Qemu. On the other hand, it is clear that passthrough with Xen will only work with multi-OS graphics cards. This is not a problem for Radeon users, as probably all new Radeons will do just fine with Xen. However, if your NVIDIA card is not a Quadro, you have no chance with Xen! – unless you rework several resistors on the board, which can mod your card so that it thinks it is a Quadro… I do not recommend such hardware modifications to anyone; even if you trust the Internet that much, the risk of rendering your precious hardware useless is far too high to make it worth the effort. Qemu, on the other hand, should work well with absolutely any PCI card.
Given these reasons, as well as customization options, I have decided to stick with Qemu. For the rest of this article, I will be describing this particular method.
There is one particularly comprehensive guide on how to set everything up using the Qemu method here – at the time of writing, that forum thread has more than 2500 replies, so learning details from it may be hard; on the other hand, every possible scenario is covered somewhere in there :) I can highly recommend that guide, but if you want to learn about the general idea first, stay with me before you jump there!

4. The software
Obviously things won’t work out of the box. There are also necessary preparations on the software side.
First, you will need to patch your kernel a bit, and compile it with several options enabled. At the time of writing, the ACS override patches and VGA arbiter fixes need to be applied manually, as they are not (yet?) included in the kernel. You can find details in the guide I linked.
You will also need to configure your kernel a bit. The key is not only to ensure that it activates the appropriate IOMMU modules, but also to prevent it from loading any driver for the card you want to pass through.
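As an illustration (a sketch only – the PCI IDs below are examples; find your own with `lspci -nn`), the usual approach is to enable the IOMMU on the kernel command line and claim the card with a stub driver before any real driver can touch it:

```shell
# /etc/default/grub (Intel host shown; use amd_iommu=on on AMD):
#   GRUB_CMDLINE_LINUX_DEFAULT="intel_iommu=on pci-stub.ids=10de:1184,10de:0e0a"

# Alternatively (or additionally), blacklist the host driver outright in
# /etc/modprobe.d/blacklist-passthrough.conf:
#   blacklist nouveau

# After a reboot, verify that no driver has claimed the card:
lspci -nnk -s 01:00.0
```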
Most likely it will also be necessary to use the git development version of Qemu – some necessary features are not yet available in stable releases. Also, when playing with Qemu, it is worth enabling KVM – hardware virtualization can significantly improve the virtual machine’s CPU performance.
You may also want to write a few scripts that set up other details (such as binding the PCI card to the vfio module) before starting Qemu to run the virtual machine.
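Such a script typically rebinds the card to vfio-pci through sysfs. A minimal sketch (the device address and IDs are examples, not the author's actual script):

```shell
#!/bin/sh
# Rebind a PCI device to vfio-pci before starting the VM.
# Adjust DEV and the vendor/device pair to your own card (see lspci -nn).
DEV=0000:01:00.0
IDS="10de 1184"

modprobe vfio-pci
# Detach the device from whatever driver currently owns it, if any:
if [ -e /sys/bus/pci/devices/$DEV/driver ]; then
    echo "$DEV" > /sys/bus/pci/devices/$DEV/driver/unbind
fi
# Teach vfio-pci the vendor:device pair; it then claims the freed device:
echo "$IDS" > /sys/bus/pci/drivers/vfio-pci/new_id
```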
Also, it may be tricky to get the order of installing drivers in the guest OS right. It took me a while to realize that I needed to disable Qemu’s emulated VGA – otherwise the NVIDIA drivers won’t detect the dedicated hardware :-)
The greatest issue I have encountered is that Windows is very sensitive to hardware changes. Even the slightest change in my virtual machine (different Qemu options) would cause my Windows to never boot again, and none of the web guides on dealing with these particular boot-time BSoDs ever helped… After about 10 such incidents I was completely fed up, and eventually had to re-install the whole guest OS. However, as long as I do not experiment with Qemu settings, there are no such problems at all.

5. Peripherals
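Putting the pieces together, a Qemu invocation might look roughly like the sketch below. Everything here is illustrative – the PCI addresses, memory size, and disk path are placeholders, and the exact flags needed vary with hardware and Qemu version; this is not the author's actual command.

```shell
# 01:00.0 / 01:00.1 are the GPU and its HDMI audio function.
# -vga none disables Qemu's emulated VGA adapter, so the guest's
# NVIDIA drivers can detect the passed-through card.
qemu-system-x86_64 \
    -enable-kvm -cpu host -smp 6 -m 8192 \
    -vga none \
    -device vfio-pci,host=01:00.0 \
    -device vfio-pci,host=01:00.1 \
    -drive file=windows.img,format=raw
```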
How about keyboard/mouse – should you pass them through too? You might, but it is not necessary; I use Synergy to share my mouse/keyboard between the systems, just as if they were two displays of one computer. Very convenient. The script that starts Qemu for me also launches the Synergy server on my Linux; the client running in Windows starts automatically on boot.
If you want, you can also set up networking for the guest system – Qemu has very good support for interface bridging, so it is not difficult to grant the guest OS internet access.
One could also pass through audio devices, but I believe this is not necessary – especially if you do not care about hardware audio acceleration; in that case you can have Qemu emulate a sound device and play it like any other app on the host OS would. As a result you can hear both systems on the same speakers/headphones!
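For illustration, a common bridged setup looks roughly like this (a sketch; the interface names are examples, and distributions differ in how they persist bridge configuration):

```shell
# Create a bridge on the host and enslave the physical NIC to it:
brctl addbr br0
brctl addif br0 eth0
ip link set br0 up

# Then hand Qemu a tap on that bridge, e.g.:
#   -netdev bridge,id=net0,br=br0 -device virtio-net-pci,netdev=net0
```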
Personally, I have even gone so far as to prepare a simple app that talks to my monitor via I²C and tells it when to switch video input – this way I can use a hotkey instead of navigating its OSD menus. The same hotkey also switches my keyboard/mouse between systems, thanks to Synergy’s customizability.

6. Conclusions
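The author's app is custom, but the same trick can be approximated with generic DDC/CI tooling – for example with ddcutil (my substitution, not the author's tool; input-source values are monitor-specific):

```shell
# VCP feature 0x60 is "Input Source" in the MCCS standard.
# Read the current input and see which values your monitor accepts:
ddcutil getvcp 60

# Switch to another input – the value 0x0f is only an example;
# use one that your own monitor actually reports:
ddcutil setvcp 60 0x0f
```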
I have used this configuration for a few weeks now, and I have yet to find a game that does not perform outstandingly in this environment. Graphics performance is just as if I dual-booted; CPU performance is only a tiny bit worse (but still awesome). The ability to keep all my apps running under Linux while I play games – be it a web browser, IM client, TeamSpeak or whatever else might be useful – is incredibly convenient!
Switching between systems in less than a second is really a game-changer for me (pun intended…)!
If you are excited about this technique, go ahead and read the guide. Be ready for a challenge, and do not give up if things don’t work at first – you won’t regret it! Good luck!
Want to know more? I will be happy to answer your general questions, but if you need help or want to learn about technical details, the best place to find answers is here.
¹) Unless your motherboard has a video multiplexer, like NVIDIA Optimus… but using it would be difficult, as you would need to control the mux manually. I believe this might be achievable, but it would most certainly require specialized drivers that do not exist right now.
²) It’s not as simple as “all new hardware supports it”, both in the case of CPUs and mainboards. You may find some lists of IOMMU-compatible hardware on the Internet, but it is probably best to ask the manufacturer directly – if they do not list it on their website, try dropping them an email – in my experience, manufacturers are very keen to respond to enquiries about such sophisticated features! ;-)
Filed under: PlanetUbuntu, Ubuntu
I am a firm believer in building strong and empowered communities. We are in an age of a community management renaissance, in which we are defining repeatable best practice that can be applied to many different types of communities, whether internal to companies, external to volunteers, or a mix of both. The opportunity here is to grow large, well-managed, passionate communities, no matter what industry or area you work in.
I have been working to further this growth in community management via my books, The Art of Community and Dealing With Disrespect, the Community Leadership Summit, the Community Leadership Forum, and delivering training to our next generation of community managers and leaders.

LinuxCon North America and Europe
Firstly, on Fri 22nd August 2014 (next week) I will be presenting the course at LinuxCon North America in Chicago, Illinois and then on Thurs Oct 16th 2014 I will deliver the training at LinuxCon Europe in Düsseldorf, Germany.
Tickets are $300 for the day’s training. This is a steal; I usually charge $2500+/day when delivering the training as part of a consultancy arrangement. Thanks to the Linux Foundation for making this available at an affordable rate.
Space is limited, so go and register ASAP:
So what is in the training course?
If you like videos, go and watch this:
If you prefer to read, read on!
My goal with each training day is to discuss how to build and grow a community, including building collaborative workflows, defining a governance structure, planning, marketing, and evaluating effectiveness. The day is packed with Q&A and discussion, and I encourage my students to raise questions, challenge me, and explore ways of optimizing their communities. This is not a sit-down-and-listen-to-a-teacher-drone-on kind of session; it is interactive and designed to spark discussion.
The day is mapped out like this:
- 9.00am – Welcome and introductions
- 9.30am – The core mechanics of community
- 10.00am – Planning your community
- 10.30am – Building a strategic plan
- 11.00am – Building collaborative workflow
- 12.00pm – Governance: Part I
- 12.30pm – Lunch
- 1.30pm – Governance: Part II
- 2.00pm – Marketing, advocacy, promotion, and social
- 3.00pm – Measuring your community
- 3.30pm – Tracking, measuring community management
- 4.30pm – Burnout and conflict resolution
- 5.00pm – Finish
I will warn you; it is an exhausting day, but ultimately rewarding. It covers a lot of ground in a short period of time, and then you can follow with further discussion of these and other topics on our Community Leadership discussion forum.
I hope to see you there!
I’ll be there this year!
Talks look amazing, I can’t wait to hit up all the talks. Looks really well organized! Talk schedule has a bunch that I want to hit, I hope they’re recorded to watch later!
If anyone’s heading to PyGotham, let me know, I’ll be there both days, likely floating around the talks.
The Ubuntu Developer Summit has been scheduled for November 12 – 14. UDS is a hotbed of ideas. It is where the Ubuntu community works to find creative solutions to problems, with the intent to produce a better Ubuntu for everyone. Since moving to an online format, UDS has enabled a diverse range of participants from across the globe to take part in the process.
As the planning for the summit continues, your thoughts and ideas could help shape the next UDS. The discussion is taking place here now.
elementary OS Freya Beta has been announced by its developers and it comes with an Ubuntu 14.04 base and lots of new features. As you can imagine, there are quite a few changes and improvements over elementary OS Luna, including the Linux kernel from Ubuntu 14.04, the 3.13 stack. This is just the tip of the iceberg.
elementary OS developers are supporting Facebook, Fastmail, Google+, Microsoft, and Yahoo account integration by default. This is done with the help of Pantheon Online Accounts, a new tool that combines features from Ubuntu Online Accounts and GNOME Online Accounts and brings its own improvements.
This is still a Beta release, which means that users will probably notice bugs with the operating system. The release date remains unknown, but that is not something new. The developers never provide a release date and they usually take their time until they are satisfied with the result.
Submitted by: Silviu Stahie
Shutter, a feature-rich screenshot program that allows users to capture nearly anything on their screen without losing control, is now at version 0.92.
The previous Shutter update was released in June, and the current build is almost identical to it. Nothing really important has been implemented, apart from a few maintenance changes.
Submitted by: Silviu Stahie
Lean. Agile. Svelte. Lithe. Free.
That's how we roll our operating systems in this modern, bountiful era of broadly deployed virtual machines, densely packed with system containers.
Linux, and more generally free software, is a natural fit in this model where massive scale is the norm. And particularly Ubuntu (with its solid Debian base), is perfectly suited to this brave new world.
Introduced in Ubuntu 8.04 LTS (Hardy) -- November 19, 2007, in fact -- JeOS (pronounced, "juice") was the first of its kind. An absolutely bare minimal variant of the Ubuntu Server, tailored to perfection for virtual machines and appliances. Just enough OS.
This week I overheard a technical executive at a Fortune 50 company say something that took me aback:
"What ever happened to that Ubuntu JeOS thing? We keep looking at CoreOS and Atomic, but what we really want is just a bare minimal Ubuntu server."

Somehow, somewhere along the line, an important message got lost. I hope we can correct that now...
JeOS has been here all along, in fact. You've been able to deploy a daily, minimal Ubuntu image, all day, every single day, for most of the last decade. Sure, it changed names to Ubuntu Core along the way, but it's still the same sleek little beloved ubuntu-minimal distribution.
"How minimal?", you ask...
63 MB compressed, to be precise.
Did you get that?
That's 63 MB, including a package management system, with one-line, apt-get access to over 30,000 freely available packages across the Ubuntu universe.
That's pretty darn small. Much smaller than say, 147 MB or 268 MB, to pick two numbers not at all at random.
"How useful could such a small image actually be, in practice?", you might ask...
Ask any Docker user, for starters. Docker's base Ubuntu image has been downloaded over 775,260 times to date. And this image is built directly from the Ubuntu Core amd64 tarball.
Oh, and guess what else? Ubuntu Core is available for more than just the amd64 architecture! It's also available for i386, armhf, arm64, powerpc, and ppc64el. Which is pretty cool, particularly for embedded systems.
So next time you're looking for just enough operating system, just look to the core. Ubuntu Core. There is truly no better starting point ;-)
Bart was swamped and couldn't pick us up at the dock, but it didn't matter, because we were very close to the centre, so we went to change money and stroll at leisure through the beautiful centre of Cartagena.
A tropical downpour helped calm the heat, but at the cost of soaking us... What better moment to enjoy a delicious mango juice under cover? :P
And after a short walk, another demonstration of how small the world can be: we ran into Fernando and Marta in the street!
Together we enjoyed some ice-cold beers and danced a bit of rumba (to be honest, I mostly just tried to dance) in a bar on the Plaza de los Coches, killing time until our 7 o'clock appointment with Bart.
Enjoying the atmosphere of Cartagena
Waiting in the plaza at the agreed time, Bart never arrived, so we walked to Fernando's hotel, 5 minutes away, to get in touch with him. Bart was flat out, busy picking up more speakers, and couldn't come over for dinner. We also wrote to Sergio to see what to do about the hotel; he told us the hostel had only one place left and that we would have to hurry to check our person in.
We couldn't face going so far so late, so we found a hotel near Fernando's and treated ourselves at a nearby restaurant to some fried fish and juices, all accompanied by live music :))
After dinner, we strolled for a while, already looking forward to the first big day of Ubuconla...
Continue reading more about this trip.
The last day of KDE’s Randa Sprint 2014 is almost over and boy am I exhausted.
The awesome multimedia crew processed some 220 bugs in Phonon, KMix and Amarok. We did a Phonon 4.8 beta release, allowing Linux distributions to smoothly transition to a newer version of GStreamer. We started writing a new PulseAudio-based volume control Plasma widget, as well as a configuration module, to allow more feature-rich and more reliable volume control on systems with PulseAudio available.
In the non-multimedia area, I discussed my continuous packaging integration plans with people to work out a suitable workflow. Certain planned improvements to KDE’s CI process make me very confident that in the not-too-distant future, distributions will be able to piggyback onto KDE’s CI and create daily integration builds in their regular build environments.
Many great things await!

‘A Spaceship’ by Rohan Garg
The Juju Charm Store has been in a bit of a spotlight lately, as it's both a wonderful tool and a source of some frustration for new charmers getting involved in the Juju ecosystem. We wanted to take this opportunity to cover some of the finer aspects of the Juju Charm Store for new users and explain the difference between a recommended charm and a charm that lives in a personal namespace.

Why is there a distinction?
Quality. We want all the charms in the Charm Store to be of the highest quality so that users can depend on the charms deploying properly and do what they say they are going to do.
When the Charm Store first came into existence, it was the wild west. Everyone wanted their charm in the Charm Store and things were being promoted into the store very rapidly. There were minimal requirements, and everything was new and exciting. Now that Juju has grown into its toddler phase and is starting to walk around on its own, we've evolved more regulations on charms. We have defined what makes a high-quality charm, and what expectations a user should have of one. You can read more about this in the Charm Store Policy doc and the Feature Rating doc.
The bar for some of the features and quality descriptors may seem like an extremely high hurdle for your service to clear on the way to becoming a ~charmer recommended service. This is why Personal Namespaces exist. While the charmer team continues to expand the Charm Store with charms that meet and/or exceed these quality guidelines, we encourage everyone to submit their Juju charm for worldwide consumption. You may disagree with FOSS licensing, or perhaps data handling just isn't something you're willing to do with the service that you orchestrate. That's OK! We still want your service to be orchestrate-able with Juju. Just push your charm into a Personal Namespace, and you don't even have to undergo a charm review from the Charmers team unless you really want someone proofing your code and service behavior.

What differences will this have?

Deployment
We've all seen the typical CLI commands for deploying charmer recommended charms.
juju deploy cs:trusty/mysql
For a charm in your personal namespace, the descriptor changes:
juju deploy cs:~lazypower/trusty/logstash

Charm Store Display
Personal namespace charms display the charm category icon instead of a provided service icon. This is a leftover decision in the Charm Store that is subject to change, but at the time of writing it is the current state of the visual representation.

Submission Process
To have your charm listed as a charmer team recommended charm, you have to undergo a rigorous review process in which we evaluate the charm, evaluate its tests, and deploy & run tests against the provided service with different configuration patterns – and even introduce some chaos-monkey breakage to see how well the charm stands on its own two feet under less-than-ideal conditions.
This involves pushing to a Launchpad branch, opening a bug ticket assigned to ~charmers, and following the cycle – which at present can take a week or longer to complete from first contact, depending on Charmer resources, time, etc.

I don't want to wait – my service is awesome and does what I want it to do. Why am I waiting?
You don't have to! The pattern for pushing a charm into your personal namespace requires zero review, and is ready for you to use today. The longest you will wait is ~30 minutes for the Charm Store to ingest your charm's metadata.
bzr push lp:~lazypower/charms/trusty/awesome-o/trunk
That's all that's required to publish a charm under your namespace in the Charm Store. To break that down further:
lp:~lazypower : This is your launchpad username
/charms/ : in this case, charms is the project descriptor
/trusty/ : We target all charms against a series
/awesome-o/ : This is the name of your service
/trunk/ : Only the /trunk branch will be ingested. So if you want to do development work in /fixing_lp1234, you can certainly do that. When the work is completed, simply merge back into /trunk! It will be available immediately in your charm listing in the Juju Charm Store.

Charm Store: Personal Namespace (other)
In the Juju Charm Store as it exists today, there is a dividing bar below the recommended charms for 'other' – this section warehouses bundles and personal charms, and is a placeholder for future data types as they emerge.
As you can see by the image above, there is quite a bit of information packed into the accordion. Let's take a look at the bundle description first:
As illustrated, no review process was required to submit this bundle, and it has had 0 deployments in the wild of its 5 services/units.
Looking at a charm, we have the same basic level of information, and we see that the charm itself is in my personal namespace: trusty|lazypower designates the series/namespace of the charm listing.

Charm Store: Recommended Charms
Recommended charms have undergone a rigorous testing phase by the Juju Charmer team, and include tested hooks and tested deployment strategies using the Amulet testing framework. You can read more about this in the Charm Store Policy doc and the Feature Rating doc.
They have full service descriptor icons provided by the charm itself, and are deployable via juju deploy cs:series/service
Notice the orange earmark in the upper right corner. This denotes that the charm is a ~charmer recommended service, as it has undergone the review process and been accepted into the charmers' namespace of the Juju Charm Store.

Which is right for me?
When deciding how to get started working with Juju and at what level you should start with your charm, I can't stress this enough: get started with your personal namespace. When you feel your charm is ready (and this can take a while during R&D), then submit your charm for official ~charmer review.
The process of getting started with personal namespaces is cheap, easy, and open to everyone. It's still very much the wild west. Your charm will be in the hands of users 10x faster using personal namespaces, you still have the opportunity to have it reviewed by submitting a bug to the Review Queue, and you become the orchestrating master of your charmed service.
If you're an Independent Software Vendor and would like your charm to start out in the ~charmers recommended list, feel free to submit a review proposal. Be aware, however, that you are then agreeing to be subject to the Charm Store review policy: your charm must meet all the criteria of a good charm, and the review process can take some time depending on the complexity of your service.

What is the future of charm publishing?
The Juju Ecosystem team has spent many hours discussing the current state of charm publishing and how to make this easier for our users. On the horizon (but with no foreseeable dates to be published) there are some new tools emerging to assist in this process.
juju publish is a command that will get you started right away by creating your personal namespace, and pushing your charm (and/or revisions) to your branch with the appropriate bugs/MP's assigned.
A new Review Queue is being implemented by Marco Ceppi that will aid us in first contact, getting 'hot' review items out the door quickly, and triaging long-running reviews appropriately.

Where do I go for help?
There is a tool called 'mount-image-callback' in cloud-utils that takes care of mounting and unmounting a disk image. It allows you to focus on exactly what you need to do. It supports mounting partitioned or unpartitioned images in any format that qemu can read (thanks to qemu-nbd).
Here's how you can use it interactively:
$ mount-image-callback disk1.img -- chroot _MOUNTPOINT_
% echo "I'm chrooted inside the image here"
% echo "192.168.1.1 www.example.com" >> /etc/hosts
% exit 0
Or non-interactively:
mount-image-callback disk1.img -- \
sh -c 'rm -Rf $MOUNTPOINT/var/cache/apt'
or one of my typical use cases, to add a package to an image.
mount-image-callback --system-mounts --resolv-conf disk1.img -- \
chroot _MOUNTPOINT_ apt-get install --assume-yes pastebinit
Above, mount-image-callback handled setting up the loopback or qemu-nbd devices required to mount the image and then mounted it to a temporary directory. It then runs the command you provide, unmounts the image, and exits with the return code of the provided command.
If the command you provide has the literal argument '_MOUNTPOINT_' then it will substitute the path to the mount. It also makes that path available in the environment variable MOUNTPOINT. Adding '--system-mounts' and '--resolv-conf' address the common need to mount proc, dev or sys, and to modify and replace /etc/resolv.conf in the filesystem so that networking will work in a chroot.
mount-image-callback supports mounting either an unpartitioned image (i.e., dd if=/dev/sda1 of=my.img) or the first partition of a partitioned image (dd if=/dev/sda of=my.img). Two improvements I'd like to make are to let the user choose which partition to mount (rather than assuming the first), and to do so automatically by finding an /etc/fstab and mounting the other relevant mounts as well.
Why not libguestfs?
libguestfs is a great tool for doing this. It operates essentially by launching a qemu (or kvm) guest, attaching the disk images to that guest, and letting the guest's Linux kernel and qemu do the heavy lifting. Doing this provides security benefits, as mounting an untrusted filesystem on the host could cause a kernel crash. However, it also has performance costs and limitations, and doesn't provide the "direct" access you get by just mounting a filesystem.
Much of my work is done inside a cloud instance, and done by automation. As a result, the security benefits of using a layer of virtualization to access disk images are less important. Also, I'm likely operating on an official Ubuntu cloud image or other vendor provided image where trust is assumed.
In short, mounting an image and changing files or chrooting into it is acceptable in many cases, and offers a more "direct" path to doing so.
The hour-long crossing went by quickly, and after a welcome snack, the hotel assigned us a bungalow.
My first time in a bungalow :))
The Islas del Rosario archipelago is of volcanic origin and only one island has a beach, and even that one is artificial. The rest of the islands are coral coves with very calm waters. We chose Isla del Pirata, a small island in the east of the archipelago.
Half the island belongs to the hotel, with its small central buildings and several bungalows scattered about; the rest of the island is private houses.
And what can I say about the day? After soaking in the water for more hours than I can remember (there's no better way to fight the heat), we rested from all that swimming on some sun loungers before lunch.
After a short rest, we went back at it with a kayak. Even with our poor fitness, it barely took any effort to do a couple of laps around the island, because it's tiny. We enjoyed ourselves like old sea dogs, paddling however we could and, well... synchronization was the least of our worries hahaha
This is paradise
At sunset came the best part. We went swimming again with diving goggles and wow, there were literally hundreds of thousands of tiny fish by the shore. The feeling of swimming inside that school of fish is indescribable. If we avoided sudden movements they barely moved aside, and you almost felt like one of them :P
A romantic dinner in the hotel building, together with the rest of the guests (3 other couples), of soup and chicken with rice, rounded off an unforgettable day.
During the night a tropical storm battered the island with rain and very strong wind.
Dawn... Storm? What storm? :P
At daybreak, with just 1 day to go until the UbuConLA, we woke to an overcast sky without a breath of breeze (welcome, given the Caribbean mugginess), and every light object scattered on the ground by the night storm's wind.
What delicious breakfasts
And what to do with the morning? Exactly! Back into the water :P We swam for a good while, until we decided to take up an offer to go snorkeling at an island 5 minutes away. And wow... what a treat! We swam over rugged corals among thousands of fish in a thousand colors.
After this extraordinary mini adventure, a bit of 'toasting' in the sun, then back into the water to shake off the heat, and so on until lunchtime :P
from time import gmtime, strftime
while strftime("%H:%M", gmtime()) != '13:00':
    print('Ñam Ñam Ñam')
Some day this not-quite-infinite loop had to end :)
After lunch, it was time to check out and wait an hour for the boat back to Cartagena.
Keep reading more about this trip.
Ronnie Tucker: Russian Ministry of Health to Replace Microsoft and Oracle Products with Linux and PostgreSQL
The Russian government is considering the replacement of Microsoft and Oracle products with Linux and open source counterparts, at least for the Ministry of Health.
Russia has been slapped with a large number of sanctions by the European Union and the United States, and it intends to respond. One way it can do that is by stopping the authorities from buying new Microsoft licenses or renewing existing ones.
According to a report published on gov.cnews.ru, the official website of the Russian government, the Ministry of Health intends to abandon all the proprietary software provided by Oracle and Microsoft and replace it with open source software.
Submitted by: Silviu Stahie
One of the issues that comes up from time to time in many organizations and projects (both community and commercial ventures) is the question of how to manage bug reports, feature requests and support requests.
There are a number of open source solutions and proprietary solutions too. I've never seen a proprietary solution that offers any significant benefit over the free and open solutions, so this blog only looks at those that are free and open.
Support request or bug?
One common point of contention is the distinction between support requests and bugs. Users do not always know the difference.
Some systems, like the Github issue tracker, gather all the requests together in a single list. Calling them "Issues" invites people to submit just about anything, such as "I forgot my password".
At the other extreme, some organisations are so keen to keep support requests away from their developers that they operate two systems, and a designated support team copies genuine bugs from the customer-facing trouble-ticket/CRM system to the bug tracker. This reduces the amount of spam that hits the development team, but there is overhead in running multiple systems and having staff doing cut and paste.
Will people use it?
Another common problem is that a full bug report template is overkill for some issues. If a user is asking for help with some trivial task and if the tool asks them to answer twenty questions about their system, application version, submit log files and other requirements then they won't use it at all and may just revert to sending emails or making phone calls.
Ideally, it should be possible to demand such details only when necessary. For example, if a support engineer routes a request to a queue for developers, then the system may guide the support engineer to make sure the ticket includes the attributes that a ticket in the developers' queue should have.
Beyond Perl
Many of the established free software trackers are written in Perl. My personal perspective is that this hinders the ability of those projects to attract new blood or leverage the benefits of new Python modules that don't exist in Perl at all.
I recently started having a look at the range of options in the Wikipedia list of bug tracking systems.
Some of the trends that appear:
- Many appear to be bug tracking systems rather than issue tracking / general-purpose support systems. How well do they accept non-development issues and keep them from spamming the developers, while still providing useful features for the subset of users who are doing development?
- A number of them try to bundle other technologies, like wiki or FAQ systems: but how well do they work with existing wikis? This trend towards monolithic products is slightly dangerous. In my own view, a wiki embedded in some other product may not be as well supported as one of the leading purpose-built wikis.
- Some of them also appear to offer various levels of project management. For development tasks, it is just about essential for dependencies and a roadmap to be tightly integrated with the bug/feature tracker but does it make the system more cumbersome for people dealing with support requests? Many support requests, like "I've lost my password", don't really have any relationship with project management or a project roadmap.
- Not all appear to handle incoming requests by email. Bug tracking systems can be purely web/form-based, but email is useful for helpdesk systems.
This leaves me with some of the following questions:
- Which of these systems can be used as a general purpose help-desk / CRM / trouble-ticket system while also being a full bug and project management tool for developers?
- For those systems that don't work well for both use cases, which combinations of trouble-ticket system + bug manager are most effective, preferably with some automated integration?
- Which are more extendable with modern programming practices, such as Python scripting and using Git?
- Which are more future proof, with choice of database backend, easy upgrades, packages in official distributions like Debian, Ubuntu and Fedora, scalability, IPv6 support?
- Which of them are suitable for the public internet and which are only considered suitable for private access?
On Monday we released Issue 378 of the Ubuntu Weekly Newsletter. The newsletter has thousands of readers across various formats, from wiki to email to forums and Discourse.
As we creep toward the 400th issue, we’ve been running a bit low on contributors. Thanks to Tiago Carrondo and David Morfin for pitching in these past few weeks while they could, but the bulk of the work has fallen to José Antonio Rey and myself and we can’t keep this up forever.
So we need more volunteers like you to help us out!
We specifically need folks to let us know about news throughout the week (email them to email@example.com) and to help write summaries over the weekend. All links and summaries are stored in a Google Doc, so you don’t need to learn any special documentation formatting or revision control software to participate. Plus, everyone who participates is welcome to add their name to the credits!
Summary writers. Summary writers receive an email every Friday evening (or early Saturday) with a link to the collaborative news-links document for the past week, which lists all the articles that need 2-3 sentence summaries. These people are vitally important to the newsletter. The time commitment is limited, and it's easy to get started from the first weekend you volunteer. No need to be shy about your writing skills: we have style guidelines to help you on your way, and all summaries are reviewed before publishing, so it's easy to improve as you go.
Interested? Email firstname.lastname@example.org and we’ll get you added to the list of folks who are emailed each week and you can help as you have time.
Outside the airport, José Luís Ahumada was waiting for us (from now on Bart, as he wants us to call him), very eager and a good host.
After dropping our gear at the hostel, we went over to his 'office': Vivelab, an incubator that provides technology facilities to entrepreneurs, where Sergio Meneses was working; I finally met him in person after so many years collaborating in Ubuntu ;)
Breathing in the spirit of tech enterprise at Vivelab
Leaving Vivelab we ran into Rodny by chance, a super-enthusiastic colleague who will be working to make the best UbuConLA possible.
We headed into the old town, and I have to say that Cartagena has to be lived: wandering its nooks and crannies, dreaming of the bygone days of its old stone buildings crowned with spectacular wooden balconies, immersed in the hubbub of its daily life, with its markets, chaotic traffic, street vendors with the most ingenious tricks... as I was saying, you have to live Cartagena! :D
After changing money, we went for lunch at a restaurant I loved, with grilled fish and some delicious guanábana and lulo juices.
Sergio, Bart and Costales
Over after-lunch conversation Sergio had to leave for work, and Bart told us about the extraordinary work of CaribeMesh, a Cartagena organization bringing the Internet to the most disadvantaged neighborhoods, where the network of networks neither reaches nor is anyone interested in making it reach. Hats off, CaribeMesh! ;D
After lunch and the interesting chat with Bart, we went to an agency to book the following night in Las Islas del Rosario. Just as on our trips to Peru, haggling is the rule here too. With the next day organized, we asked Bart to take us back to the hostel, because jet lag was still taking its toll on us.
After a nap, with our strength recovered, Sergio and Bart came by around 20:30 to go out for dinner.
After the huge lunch we weren't very hungry, and a single pizza filled us up, in a shopping center that was already almost empty. In Spain we tend to have lunch around 2 and dinner around 9:30; here people usually have lunch around 12 and dinner around 7 :O
Back at the hostel I chatted for almost an hour with Sergio in his own room (he's staying in the room next to ours). It was striking to compare opinions on many topics, mainly the LoCo Council: by email it would mean exchanging dozens of messages and getting nowhere, whereas in person, face to face is still best :)
Keep reading more about this trip.
In October this year I'll be visiting the US and Canada for some conferences and a wedding. The first event will be xTupleCon 2014 in Norfolk, Virginia. xTuple make the popular open source accounting and CRM suite PostBooks. The event kicks off with a keynote from Apple co-founder Steve Wozniak on the evening of October 14. On October 16 I'll be making a presentation about how JSCommunicator makes it easy to add click-to-call real-time communications (RTC) to any other web-based product without requiring any browser plugins or third party softphones.
Juliana Louback has been busy extending JSCommunicator as part of her Google Summer of Code project. When finished, we hope to quickly roll out the latest version of JSCommunicator to other sites, including rtc.debian.org, the WebRTC portal for the Debian Developer community. Juliana has also started working on wrapping JSCommunicator into a module for the new xTuple / PostBooks web-based CRM. Versatility is one of the main goals of the JSCommunicator project and it will be exciting to demonstrate this in action at xTupleCon.
xTupleCon discounts for developers
For those who don't or can't attend xTupleCon there has been some informal discussion about a small WebRTC-hacking event at some time on 15 or 16 October. Please email me privately if you may be interested.