Bart was swamped and couldn't pick us up at the pier, but it didn't matter: we were very close to the center, so we went to exchange money and stroll at leisure through Cartagena's beautiful old town.
A tropical downpour helped tame the heat, at the cost of soaking us... What better moment to enjoy a delicious mango juice indoors? :P
And after a short walk, another demonstration of how small the world can be: we ran into Fernando and Marta on the street!
Together we enjoyed some ice-cold beers and danced a little rumba (to be honest, I mostly tried to dance) in a bar on the Plaza de los Coches, killing time until our 7 o'clock appointment with Bart.
Enjoying the atmosphere of Cartagena
We waited in the plaza at the agreed hour, but Bart didn't show, so we went to Fernando's hotel, a 5-minute walk away, to contact him. Bart was flat out, busy picking up more speakers, and couldn't join us for dinner. We also wrote to Sergio to see what to do about the hotel; he told us the hostel had only one bed left and that we had to hurry to check in the one person it could take.
We didn't feel like going somewhere so far away so late, so we found a hotel near Fernando's and treated ourselves at a nearby restaurant to grilled fish and juices, all accompanied by live music :))
After dinner we strolled for a while, already looking ahead to the first big day of the Ubuconla...
Continue reading more about this trip.
The last day of KDE’s Randa Sprint 2014 is almost over and boy am I exhausted.
The awesome multimedia crew processed some 220 bugs in Phonon, KMix and Amarok. We did a Phonon 4.8 beta release, allowing Linux distributions to smoothly transition to a newer version of GStreamer. We also started writing a new PulseAudio-based volume control Plasma widget, as well as a configuration module, to allow feature-richer and more reliable volume control on systems where PulseAudio is available.
In the non-multimedia area I discussed my continuous packaging integration plans with people to work out a suitable workflow. Certain planned improvements to KDE's CI process make me very confident that in the not too distant future distributions will be able to piggyback onto KDE's CI and create daily integration builds in their regular build environments.
Many great things await!
'A Spaceship' by Rohan Garg
The Juju Charm Store has been in a bit of a spotlight lately, as it's both a wonderful tool and a source of some frustration for new charmers getting involved in the Juju ecosystem. We wanted to take this opportunity to cover some of the finer aspects of the Juju Charm Store for new users and explain the difference between a recommended charm and a charm that lives in a personal namespace.
Why is there a distinction?
Quality. We want all the charms in the Charm Store to be of the highest quality, so that users can depend on charms deploying properly and doing what they say they are going to do.
When the Charm Store first came into existence, it was the wild west. Everyone wanted their charm in the Charm Store, and things were promoted into the store very rapidly. There were minimal requirements, and everything was new and exciting. Now that Juju has grown into its toddler phase and is starting to walk around on its own, we've evolved more regulations on charms. We have defined what makes a high-quality charm, and what expectations a user should have of one. You can read more about this in the Charm Store Policy doc and the Feature Rating doc.
The bar for some of the features and quality descriptors may seem like an extremely high hurdle for your service to clear on its way to becoming a ~charmer recommended service. This is why personal namespaces exist. As the charmer team continues to expand the Charm Store with charms that meet or exceed these quality guidelines, we encourage everyone to submit their Juju charm for worldwide consumption. You may disagree with FOSS licensing, or perhaps data handling just isn't something you're willing to do with the service you orchestrate. That's OK! We still want your service to be orchestrate-able with Juju. Just push your charm into a personal namespace, and you don't even have to undergo a charm review from the Charmers team unless you really want someone proofing your code and service behavior.
What differences will this have?
Deployment
We've all seen the typical CLI commands for deploying charmer recommended charms.
juju deploy cs:trusty/mysql
For a charm in your personal namespace, the descriptor changes:
juju deploy cs:~lazypower/trusty/logstash
Charm Store Display
Personal namespace charms display the charm category icon instead of a provided service icon. This is a leftover decision in the Charm Store that is subject to change, but as of this writing it is the current state of the visual representation.
Submission Process
To have your charm listed as a charmer team recommended charm, you have to undergo a rigorous review process in which we evaluate the charm, evaluate its tests, deploy and run tests against the provided service with different configuration patterns, and even introduce some chaos-monkey breakage to see how well the charm stands on its own two feet under less than ideal conditions.
This involves pushing to a Launchpad branch, opening a bug ticket assigned to ~charmers, and following the cycle, which at present can take a week or longer to complete from first contact, depending on Charmer resources, time, etc.
I don't want to wait; my service is awesome and does what I want it to do. Why am I waiting?
You don't have to! The pattern for pushing a charm into your personal namespace requires zero review and is ready for you to use today. The longest you will wait is ~30 minutes for the Charm Store to ingest your charm's metadata.
bzr push lp:~lazypower/charms/trusty/awesome-o/trunk
That's all that's required to publish a charm under your namespace in the Charm Store. To break that down further:
lp:~lazypower : This is your Launchpad username
/charms/ : in this case, charms is the project descriptor
/trusty/ : We target all charms against a series
/awesome-o/ : This is the name of your service
/trunk/ : Only the /trunk branch will be ingested. So if you want to do development work in /fixing_lp1234, you can certainly do that. When the work is complete, simply merge back into /trunk! The result will be available immediately in your charm's listing in the Juju Charm Store.
Charm Store: Personal Namespace (other)
In the Juju Charm Store as it exists today, there is a dividing bar below the recommended charms for 'other'. This warehouses bundles and personal charms, and is a placeholder for future data types as they emerge.
As you can see by the image above, there is quite a bit of information packed into the accordion. Let's take a look at the bundle description first:
As illustrated, no review process was required to submit this bundle, and it has 0 deployments in the wild of its 5 services/units.
Looking at a charm, we have the same basic level of information, and we see that the charm itself is in my personal namespace. trusty|lazypower designates the series/namespace of the charm listing.
Charm Store: Recommended Charms
Recommended charms have undergone a rigorous testing phase by the Juju Charmer team, and include tested hooks and tested deployment strategies using the Amulet testing framework. You can read more about this in the Charm Store Policy doc and the Feature Rating doc.
They have full service descriptor icons provided by the charm itself, and are deployable via juju deploy cs:series/service
Notice the orange earmark in the upper right corner. It denotes that the charm is a ~charmer recommended service: it has undergone the review process and been accepted into the charmers' namespace of the Juju Charm Store.
Which is right for me?
When deciding how to get started with Juju and at what level to start with your charm, I can't stress this enough: get started in your personal namespace. When you feel your charm is ready (and this can take a while during R&D), then submit it for official ~charmer review.
The process of getting started with personal namespaces is cheap, easy, and open to everyone. It's still very much the wild west. Your charm will be in the hands of users 10x faster using personal namespaces, you still have the opportunity to have it reviewed by submitting a bug to the Review Queue, and you become the orchestrating master of your charmed service.
If you're an independent software vendor and would like your charm to start out in the ~charmers recommended list, feel free to submit a review proposal. However, you are then agreeing to be subject to the Charm Store review policy: your charm must meet all the criteria of a good charm, and the review process can take some time depending on the complexity of your service.
What is the future of charm publishing?
The Juju Ecosystem team has spent many hours discussing the current state of charm publishing and how to make it easier for our users. On the horizon (but with no dates we can publish yet), some new tools are emerging to assist in this process.
juju publish is a command that will get you started right away by creating your personal namespace and pushing your charm (and/or revisions) to your branch with the appropriate bugs/MPs assigned.
A new Review Queue is being implemented by Marco Ceppi that will aid us in first contact, getting 'hot' review items out the door quickly, and triaging long-running reviews appropriately.
Where do I go for help?
There is a tool called 'mount-image-callback' in cloud-utils that takes care of mounting and unmounting a disk image. It allows you to focus on exactly what you need to do. It supports mounting partitioned or unpartitioned images in any format that qemu can read (thanks to qemu-nbd).
Here's how you can use it interactively:
$ mount-image-callback disk1.img -- chroot _MOUNTPOINT_
% echo "I'm chrooted inside the image here"
% echo "192.168.1.1 www.example.com" >> /etc/hosts
% exit 0
Or non-interactively:
mount-image-callback disk1.img -- \
sh -c 'rm -Rf $MOUNTPOINT/var/cache/apt'
Or, for one of my typical use cases, to add a package to an image:
mount-image-callback --system-mounts --resolv-conf --
chroot _MOUNTPOINT_ apt-get install --assume-yes pastebinit
Above, mount-image-callback handles setting up the loopback or qemu-nbd devices required to mount the image, mounts it to a temporary directory, runs the command you provide, unmounts the image, and exits with the return code of that command.
If the command you provide has the literal argument '_MOUNTPOINT_' then it will substitute the path to the mount. It also makes that path available in the environment variable MOUNTPOINT. Adding '--system-mounts' and '--resolv-conf' address the common need to mount proc, dev or sys, and to modify and replace /etc/resolv.conf in the filesystem so that networking will work in a chroot.
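As a rough illustration of that substitution rule (this is a sketch, not the tool's actual implementation; run_with_mountpoint is a hypothetical helper name), a wrapper could rewrite its arguments and export the variable like this:

```shell
# Hypothetical sketch of the argument handling described above, NOT the
# real mount-image-callback code: replace the literal _MOUNTPOINT_ token
# in each argument and export MOUNTPOINT before running the command.
run_with_mountpoint() {
    mp="$1"; shift
    export MOUNTPOINT="$mp"
    # rebuild the argument list with the token substituted
    n=$#; i=0
    while [ "$i" -lt "$n" ]; do
        a="$1"; shift
        a=$(printf '%s' "$a" | sed "s|_MOUNTPOINT_|$mp|g")
        set -- "$@" "$a"
        i=$((i+1))
    done
    "$@"
}

run_with_mountpoint /tmp/mnt echo "mount is at _MOUNTPOINT_"
# prints: mount is at /tmp/mnt
```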
mount-image-callback supports mounting either an unpartitioned image (i.e. dd if=/dev/sda1 of=my.img) or the first partition of a partitioned image (dd if=/dev/sda of=my.img). Two improvements I'd like to make are to allow the user to tell it which partition to mount (rather than assuming the first), and to do so automatically by finding an /etc/fstab in the image and mounting the other relevant filesystems as well.
Why not libguestfs?
libguestfs is a great tool for doing this. It operates essentially by launching a qemu (or kvm) guest, attaching disk images to the guest, and letting the guest's Linux kernel and qemu do the heavy lifting. This provides security benefits, as mounting untrusted filesystems could cause a kernel crash. However, it also has performance costs and limitations, and doesn't provide the "direct" access you'd get by just mounting a filesystem.
Much of my work is done inside a cloud instance, and done by automation. As a result, the security benefits of using a layer of virtualization to access disk images are less important. Also, I'm likely operating on an official Ubuntu cloud image or other vendor provided image where trust is assumed.
In short, mounting an image and changing files or chrooting is acceptable in many cases and offers a more "direct" path to doing so.
The hour-long crossing went by quickly, and after a welcome snack the hotel assigned us a bungalow.
My first time staying in a bungalow :))
The Islas del Rosario archipelago is of volcanic origin, and only one island has a beach, and even that beach is artificial. The rest of the islands are coral coves with very calm waters. We chose Isla del Pirata, a small island in the east of the archipelago.
Half of the island belongs to the hotel, with its small central buildings and several bungalows scattered around; the rest of the island is private houses.
And what can I say about the day? After soaking for more hours than I can remember (there is no better way to fight the heat), we rested from all that swimming on some loungers before lunch.
After resting a while, we went back at it in a kayak. Even in our poor physical shape, it hardly cost us any effort to circle the island a couple of times, because it is tiny. We enjoyed ourselves like old sea dogs, paddling however we could and, well... synchronization was the least of our worries, hahaha.
This is paradise
The best came at sunset. We went swimming again with diving goggles and wow, there were literally hundreds of thousands of tiny fish by the shore. The feeling of swimming inside that school of fish is indescribable. If we avoided sudden movements they barely moved away, and you almost felt like one of them :P
A romantic dinner in the hotel building, together with the rest of the guests (three other couples), of soup and chicken with rice, capped off an unforgettable day.
During the night a tropical storm shook the island with rain and very strong wind.
Dawn... Storm? What storm? :P
At dawn, with only one day to go before the Ubuconla, we woke to an overcast sky without a hint of a breeze (welcome, given the Caribbean mugginess) and all the light objects strewn across the ground by the night storm's wind.
What delicious breakfasts!
And what to do in the morning? Exactly! Back into the water :P We swam for a good while, until we decided to accept an offer to go snorkeling at an island 5 minutes away. And wow... what a treat! We swam over rugged corals, among thousands of fish in a thousand colors.
After this extraordinary mini adventure, we 'toasted' ourselves a bit in the sun, dipping back in now and then to shake off the heat, and so on until lunchtime :P
from time import gmtime, strftime
while strftime("%H:%M", gmtime()) != '13:00':
    print('Ñam Ñam Ñam')
Some day this not-quite-infinite loop had to end :)
And after lunch, it was time to check out and wait an hour for the boat back to Cartagena.
Continue reading more about this trip.
Ronnie Tucker: Russian Ministry of Health to Replace Microsoft and Oracle Products with Linux and PostgreSQL
The Russian government is considering the replacement of Microsoft and Oracle products with Linux and open source counterparts, at least for the Ministry of Health.
Russia has been slapped with a large number of sanctions by the European Union and the United States, which means that they are going to respond. One of the ways they can do that is by stopping the authorities from buying Microsoft licenses or prolonging existing ones.
According to a report published on gov.cnews.ru, the official website of the Russian government, the Ministry of Health intends to abandon all the proprietary software provided by Oracle and Microsoft and replace it with open source software.
Submitted by: Silviu Stahie
One of the issues that comes up from time to time in many organizations and projects (both community and commercial ventures) is the question of how to manage bug reports, feature requests and support requests.
There are a number of open source solutions and proprietary solutions too. I've never seen a proprietary solution that offers any significant benefit over the free and open solutions, so this blog only looks at those that are free and open.
Support request or bug?
One common point of contention is the distinction between support requests and bugs. Users do not always know the difference.
Some systems, like the Github issue tracker, gather all the requests together in a single list. Calling them "Issues" invites people to submit just about anything, such as "I forgot my password".
At the other extreme, some organisations are so keen to keep support requests away from their developers that they operate two systems, and a designated support team copies genuine bugs from the customer-facing trouble-ticket/CRM system to the bug tracker. This reduces the amount of spam that hits the development team, but there is overhead in running multiple systems and in having staff do cut-and-paste.
Will people use it?
Another common problem is that a full bug report template is overkill for some issues. If a user is asking for help with some trivial task, and the tool asks them to answer twenty questions about their system and application version and to submit log files and other details, then they won't use it at all and may just revert to sending emails or making phone calls.
Ideally, it should be possible to demand such details only when necessary. For example, if a support engineer routes a request to a queue for developers, the system may guide the support engineer to make sure the ticket includes the attributes that a ticket in the developers' queue should have.
Beyond Perl
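A tracker could avoid front-loading those twenty questions by attaching required fields to queues rather than to the submission form, checking them only when a ticket is routed. A minimal sketch in Python (queue names and field names are invented for illustration):

```python
# Hypothetical sketch: require detailed fields only when a ticket enters
# a queue that needs them, instead of asking for everything up front.
REQUIRED_FIELDS = {
    "support": ["summary"],
    "developers": ["summary", "app_version", "os", "log_excerpt"],
}

def missing_fields(ticket, queue):
    """Return the fields a ticket still lacks before it may enter `queue`."""
    return [f for f in REQUIRED_FIELDS.get(queue, []) if not ticket.get(f)]

# A trivial support request needs almost nothing...
ticket = {"summary": "I forgot my password"}
print(missing_fields(ticket, "support"))      # []
# ...but routing it to developers would prompt for the full details.
print(missing_fields(ticket, "developers"))   # ['app_version', 'os', 'log_excerpt']
```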
My personal perspective is that this hinders the ability of Perl projects to attract new blood or leverage the benefits of new Python modules that don't exist in Perl at all.
I recently started having a look at the range of options in the Wikipedia list of bug tracking systems.
Some of the trends that appear:
- Many appear to be bug tracking systems rather than issue tracking / general-purpose support systems. How well do they accept non-development issues and keep them from spamming the developers, while still providing useful features for the subset of users who are doing development?
- A number of them try to bundle other technologies, like wiki or FAQ systems: but how well do they work with existing wikis? This trend towards monolithic products is slightly dangerous. In my own view, a wiki embedded in some other product may not be as well supported as one of the leading purpose-built wikis.
- Some of them also appear to offer various levels of project management. For development tasks, it is just about essential for dependencies and a roadmap to be tightly integrated with the bug/feature tracker but does it make the system more cumbersome for people dealing with support requests? Many support requests, like "I've lost my password", don't really have any relationship with project management or a project roadmap.
- Not all appear to handle incoming requests by email. Bug tracking systems can be purely web/form-based, but email is useful for helpdesk systems.
This leaves me with some of the following questions:
- Which of these systems can be used as a general purpose help-desk / CRM / trouble-ticket system while also being a full bug and project management tool for developers?
- For those systems that don't work well for both use cases, which combinations of trouble-ticket system + bug manager are most effective, preferably with some automated integration?
- Which are more extendable with modern programming practices, such as Python scripting and using Git?
- Which are more future proof, with choice of database backend, easy upgrades, packages in official distributions like Debian, Ubuntu and Fedora, scalability, IPv6 support?
- Which of them are suitable for the public internet and which are only considered suitable for private access?
On Monday we released Issue 378 of the Ubuntu Weekly Newsletter. The newsletter has thousands of readers across various formats from wiki to email to forums and discourse.
As we creep toward the 400th issue, we’ve been running a bit low on contributors. Thanks to Tiago Carrondo and David Morfin for pitching in these past few weeks while they could, but the bulk of the work has fallen to José Antonio Rey and myself and we can’t keep this up forever.
So we need more volunteers like you to help us out!
We specifically need folks to let us know about news throughout the week (email them to firstname.lastname@example.org) and to help write summaries over the weekend. All links and summaries are stored in a Google Doc, so you don’t need to learn any special documentation formatting or revision control software to participate. Plus, everyone who participates is welcome to add their name to the credits!
Summary writers. Summary writers receive an email every Friday evening (or early Saturday) with a link to the collaborative news links document for the past week, which lists all the articles that need 2-3 sentence summaries. These people are vitally important to the newsletter. The time commitment is limited, and it is easy to get started the first weekend you volunteer. No need to be shy about your writing skills: we have style guidelines to help you on your way, and all summaries are reviewed before publishing, so it's easy to improve as you go.
Interested? Email email@example.com and we’ll get you added to the list of folks who are emailed each week and you can help as you have time.
Waiting for us at the airport exit was José Luís Ahumada (from now on Bart, as he wants us to call him), very willing and a good host.
After dropping our gear at the hostel we went over to his 'office': Vivelab, an incubator that provides technology facilities to entrepreneurs, where Sergio Meneses was working; I finally got to meet him in person after so many years collaborating in Ubuntu ;)
Breathing in the technology initiative at Vivelab
Leaving Vivelab we bumped by chance into Rodny, a super-enthusiastic colleague who will be working to make the best Ubuconla possible.
We headed to the center, and I have to say that Cartagena has to be lived: wandering its nooks and crannies, dreaming of times past in its old stone buildings crowned by spectacular wooden balconies, immersing yourself in the hubbub of its daily life, with its markets, chaotic traffic, and vendors with the most ingenious tricks... as I said, you have to live Cartagena! :D
After exchanging money we went to eat at a restaurant I loved, with grilled fish and some delicious guanábana and lulo juices.
Sergio, Bart and Costales
Over the after-lunch conversation Sergio had to leave for work, and Bart told us about the extraordinary work of CaribeMesh, a Cartagena organization bringing the Internet to the most disadvantaged neighborhoods, where the network of networks neither reaches economically nor interests anyone enough to bring it. Hats off, CaribeMesh! ;D
After lunch and an interesting chat with Bart, we went to an agency to book the following night in the Islas del Rosario. As on our trips to Peru, haggling is the rule here too. With the next day organized, we asked Bart to take us back to the hostel, because jet lag was still taking its toll on us.
After a nap, with our strength recovered, Sergio and Bart came by around 8:30 PM to go out for dinner.
Because of the copious lunch we weren't very hungry, and a single pizza filled us up, at a shopping center that was already almost empty. While in Spain we usually have lunch around 2 and dinner around 9:30, here lunch is usually around 12 and dinner around 7 :O
Back at the hostel I spent almost an hour chatting with Sergio in his room (he is staying in the room next to ours). It was striking to compare opinions on many topics, mainly the LoCo Council; by email it would have meant exchanging dozens of messages and getting nowhere, whereas in person, face to face is still best :)
Continue reading more about this trip.
In October this year I'll be visiting the US and Canada for some conferences and a wedding. The first event will be xTupleCon 2014 in Norfolk, Virginia. xTuple make the popular open source accounting and CRM suite PostBooks. The event kicks off with a keynote from Apple co-founder Steve Wozniak on the evening of October 14. On October 16 I'll be making a presentation about how JSCommunicator makes it easy to add click-to-call real-time communications (RTC) to any other web-based product without requiring any browser plugins or third party softphones.
Juliana Louback has been busy extending JSCommunicator as part of her Google Summer of Code project. When finished, we hope to quickly roll out the latest version of JSCommunicator to other sites including rtc.debian.org, the WebRTC portal for the Debian Developer community. Juliana has also started working on wrapping JSCommunicator into a module for the new xTuple / PostBooks web-based CRM. Versatility is one of the main goals of the JSCommunicator project and it will be exciting to demonstrate this in action at xTupleCon.
xTupleCon discounts for developers
For those who don't or can't attend xTupleCon there has been some informal discussion about a small WebRTC-hacking event at some time on 15 or 16 October. Please email me privately if you may be interested.
themukt.com is a website which covers open source news and has long been KDE friendly; just now, for example, it's leading with the story of yesterday's Plasma 5 update. Recently I got an e-mail from the editor:
From: Swapnil Bhartiya
To: Jonathan Riddell
Subject: Kubuntu Feedback
Just wanted to tell you that I have been using Kubuntu as my primary OS
for a while now and I am really impressed how stable and bug-free it has
become. Earlier random crashes was a normal thing for Kubuntu so I never
used it as main distros. Though I still triple boot with openSUSE and
Arch all running Plasma, I really started to love what you are doing
Keep up the great work and let us know what else I should do to further
Here in the KDE office in Barcelona, some people spend their time on purely upstream KDE projects, and some of us are primarily interested in making distros work, which means our users can get all the stuff we make. I've been asked why we don't just automate the packaging and go and do more productive things. One view of working on a distro like Kubuntu is that it's just a way to package up the hard work done by others and take all the credit. I don't deny that, but there's quite a lot to the packaging of all that hard work; for a start, there's a lot of it these days.
"KDE" used to be released once every nine months or less frequently. But yesterday I released the first bugfix update to Plasma; to make that happen I spent some time on Thursday with David making the first update to Frameworks 5. And Plasma 5 is still a work in progress for us distros. Let's not forget KDE SC 4.13.3, which Philip has done his usual spectacular job of updating in the 14.04 LTS archive, or the KDE SC 4.14 betas, which Scarlett has been packaging for utopic and backporting to 14.04 LTS. KDE SC used to be 20 tars; now it's 169, plus over 50 language packs.
If we were packaging it without any automation, as used to be done, it would take an age, but of course we automate the repetitive tasks; the KDE SC 4.13.97 status page shows all the packages and highlights obvious problems. But with 169 tars even running the automated script takes a while, and then you have to fix any patches that no longer apply. We have policies to dissuade carrying patches: any patches should be upstream in KDE or on their way upstream, but sometimes it's unavoidable that we have some to maintain, and these often need small changes for each upstream release.
Much of what we package is libraries, and if one small bit changes in a library, any application which uses that library can crash. This is ABI, and the rules for binary compatibility in C++ are nuts. Not infrequently someone in KDE will alter a library's ABI without realising it. So we maintain symbols files listing all the symbols. These can often feel like more trouble than they're worth: they need updating when a new version of GCC produces different symbols, or when symbols disappear and on investigation turn out to be marked private, so nobody should have been using them anyway. But if you miss a change and apps start crashing, as nearly happened in KDE PIM last week, then people get grumpy.
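For illustration, a symbols file is just a list of mangled names with the version each first appeared in; a hypothetical excerpt (library name and symbols invented, not from a real package) might look like:

```
# debian/libfoo1.symbols (hypothetical example)
libfoo.so.1 libfoo1 #MINVER#
 _ZN3Foo7connectEv@Base 1.0.0
 _ZN3Foo10disconnectEv@Base 1.0.1
```

dpkg-gensymbols compares a list like this against each build and reports a diff when symbols appear or disappear, which is the prompt to investigate whether the ABI really changed.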
Debian, and so Ubuntu, documents the copyright licence of every file in every package. This is a very slow and tedious job, but it's important that it's done both upstream and downstream, because if you don't, people won't want to use your software in a commercial setting, and at worst you could end up in court. So I maintain the licensing policy and not infrequently have to fix bits which are incorrectly or unclearly licenced, and answer questions such as, today, reviewing for Eike whether a kcm in frameworks had to be LGPL licenced. We write a copyright file for every package, and again this can feel like more trouble than it's worth; there's no easy way to automate it, but by some readings of the licence texts it's necessary to comply with them, and it's just good practice. It also means that if someone starts making claims like requiring licensing for already-distributed binary packages, I'm in an informed position to correct such nonsense.
When we were packaging KDE Frameworks from scratch we had to find a description of each Framework. Despite policies for metadata, some were quite under-described, so we had to go and search for a sensible description for them. In fact, not infrequently we'll need to use a new library which doesn't even have a sensible paragraph describing what it does. We need to be able to give a package something of a human face.
A recent addition to the world of .deb packaging is MultiArch, which allows i386 packages to be installed on amd64 computers, as well as some even more obscure combinations (powerpc on ppc64el, anyone?). This lets you run Skype on your amd64 computer without messy kludges like the ia32-libs package. However, it needs quite a lot of attention from packagers of libraries: marking which packages are multiarch, and which depend on other multiarch or architecture-independent packages. Even after packaging KDE Frameworks I'm not entirely comfortable doing it.
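Most of that marking happens in debian/control; a hypothetical stanza for a co-installable library package (package name invented) might be:

```
Package: libfoo1
Architecture: any
Multi-Arch: same
Pre-Depends: ${misc:Pre-Depends}
Depends: ${misc:Depends}, ${shlibs:Depends}
```

Multi-Arch: same says the i386 and amd64 builds of libfoo1 can be installed side by side, which in turn requires the library files to live in per-architecture paths such as /usr/lib/x86_64-linux-gnu.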
Splitting up Packages
We spend lots of time splitting up packages. When, say, Calligra gets released, it's all in one big tar, but you don't want all of it on your system: you may just want to write a letter in Calligra Words, while Krita has lots of image and other data files which take up space you don't care for. So for each new release we have to work out which of the installed files go into which .deb package. It takes time, and occasionally we get it wrong, but if you don't want heaps of stuff on your computer you don't need, then it needs to be done. It's also needed for library upgrades: if there's a new version of libfoo and not all the programs have been ported to it, then you can install libfoo1 and libfoo2 on the same system without problems. That's not possible with distros which don't split up packages.
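The mechanics of the split are debhelper .install files: each binary package lists the files it takes from the common build tree. A hypothetical excerpt (file paths invented for illustration):

```
# debian/calligra-words.install (hypothetical)
usr/bin/calligrawords
usr/share/applications/calligrawords.desktop
```

Each new upstream release means checking these lists against what the build actually installs, which is where the time goes.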
One messy side effect of this is that when a file moves from one .deb to another .deb made by the same sources, maybe Debian chose to split it another way and we want to follow them, then it needs a Breaks/Replaces/Conflicts added. This is a pretty messy part of .deb packaging, you need to specify which version it Breaks/Replaces/Conflicts and depending on the type of move you need to specify some combination of these three fields but even experienced packages seem to be unclear on which. And then if a backport (with files in original places) is released which has a newer version than the version you specify in the Breaks/Replaces/Conflicts it just refuses to install and stops half way through installing until a new upload is made which updates the Breaks/Replaces/Conflicts version in the packaging. I'd be interested in how this is solved in the RPM world.
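For illustration, if a data file moved out of a library package into a new data package at some version, the usual pattern in debian/control for the package gaining the file would be (package names and version invented):

```
Package: libfoo-data
Breaks: libfoo1 (<< 4.13.3)
Replaces: libfoo1 (<< 4.13.3)
```

Replaces lets dpkg overwrite the file that older libfoo1 versions shipped, and Breaks forces those older versions to be upgraded; it's exactly the "<< version" bound shown here that a newer backport with files in the original places can sail past.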
Ubuntu is forked from Debian, and to piggyback on their work (and add our own bugs while taking the credit) we merge in Debian's packaging at the start of each cycle. This is fiddly work involving going through the diff (and for patches that's often a diff of a diff) and the changelog to work out why each alteration was made, then merging them together. It takes time and it's error prone, but it's what allows Ubuntu to be one of the most up to date distros around, even while much of the work that goes into maintaining universe packages not part of some flavour has slowed down.
Stable Release Updates
You have Kubuntu 14.04 LTS but you want more? You want bugfixes too? Oh, but you want them without the possibility of regressions? Ubuntu has a quite strict definition of what's allowed in after an Ubuntu release is made; this is because once upon a time someone uploaded a fix for X which had the side effect of breaking X on half the installs out there. So for any updates to get into the archive they can only be for certain packages with a track record of making bugfix releases without sneaking in new features or breaking bits. They need to be tested, have some time pass to allow for wider testing, be tested again using the versions compiled in Launchpad, and then be released. KDE makes bugfix releases of KDE SC every month and we update them in the latest stable and LTS releases, as 4.13.3 was this week. But it's not a process you can rush, and it will usually take a couple of weeks. That 4.13.3 update was even later than usual because we were busy with Plasma 5 and whatnot. And it's not perfect: a bug in Baloo did get through with 4.13.2. But it would be even worse if we rushed it.
Ah, but you want new features too? We don't allow new features into the normal updates because they have more chance of introducing regressions. That's why we make backports, either in the kubuntu-ppa/backports archive or in the Ubuntu Backports archive. This involves running the package through another automation script to change whatever needs changing for the backport, then compiling it all, testing it and releasing it. Maintaining and running that backport script is quite faffy, so sending your thanks is always appreciated.
We have an allowance to upload new bugfix (micro) releases of KDE SC to the Ubuntu archive because KDE SC has a good track record of fixing things and not breaking things. When we come to wanting to update Plasma we'll need to argue for another allowance. One controversial decision in KDE Frameworks is that there are no bugfix releases, only monthly releases with new features. These are unlikely to get into the Ubuntu archive; we can try to argue the case that with automated tests and other processes the quality is high enough, but it'll be a hard sell.
Crack of the Day
Project Neon provides packages of daily builds of parts of KDE from Git, and there are weekly ISOs made from this too. These guys rock. The packages are monolithic and install into /opt so they can live alongside your normal KDE software.
You should be able to run KDELibs 4 software on a Plasma 5 desktop. I spent quite a bit of time ensuring this is possible by having no overlapping files in kdelibs/kde-runtime, KDE Frameworks and some parts of Plasma. This wasn't done primarily for Kubuntu (many of the files could have been split out into .deb packages shared between KDELibs 4 and Plasma 5), but other distros which just install packages in a monolithic style benefited. Some projects like Baloo didn't ensure they were co-installable; that's fine for Kubuntu, as we can separate the libraries that need to be co-installed from the binaries, but other distros won't be so happy.
Increasingly, KDE software comes with its own test suite. Test suites are something that came late to free software (and maybe to software in general), but now that they're here we can have higher confidence that the software is bug free. We run these test suites as part of the package compilation process, and not infrequently find that the test suite doesn't run; I've been told that in the past packagers weren't expected to use it. And of course, tests fail.
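With debhelper the suite runs through a hook in debian/rules; a minimal sketch (the `|| true` fallback is my illustration of tolerating a still-flaky suite, not any particular package's actual policy):

```make
#!/usr/bin/make -f
%:
	dh $@

# dh_auto_test runs the upstream suite (e.g. "make test" or ctest)
# after the build; here failures are logged but don't abort the
# whole package build.
override_dh_auto_test:
	dh_auto_test || true
```

Tests that need a running display or D-Bus session are a common reason the suite "doesn't run" inside a clean build chroot.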
In Ubuntu we have some obscure architectures. 64-bit ARM is likely to be a useful platform in the years to come. I'm not sure why we care about 64-bit PowerPC; I can only assume someone has paid Canonical to care about it. Not infrequently we find software compiles fine on normal PCs but breaks on these obscure platforms, and we need to debug why that is. This can be a slow process on ARM, which takes an age to do anything, or slower still on architectures where I don't even have access to a machine to test on, but it's all part of being part of a distro with many use-cases.
At Kubuntu we've never shared infrastructure with Debian, despite having 99% the same packaging. This is because Ubuntu, to an extent, defines itself as the technical awesomeness of Debian with smoother processes. But for some time Debian has used Git while we've used the slower Bzr (it was an early plan to make Ubuntu take over the world of distributed revision control with Bzr, but then Git came along and turned out to be much faster, even if harder to get your head around), and they've also moved to team maintainership, so at last we're planning shared repositories. That'll mean many changes in our scripts, but it should remove much of the headache of merges each cycle.
There's also a proposal to move our packaging to daily builds, so we won't have to spend a lot of time updating packaging at every release. I'm skeptical that the hassle of the infrastructure for this, plus fixing packaging problems as they occur each day, will be less work than doing it for each release, but it's worth a try.
Every 6 months we make an Ubuntu release (which includes all the flavours, of which Ubuntu [Unity] is the flagship and Kubuntu is the most handsome), and there are alphas and betas before that which all need to be tested to ensure they actually install and run. Some of the pain of this has reduced since we did away with the alternate (text debian-installer) images, but we're nowhere near where Ubuntu [Unity] or OpenSUSE are with OpenQA, where automated installs run all the time in various setups and some magic detects problems. I'd love to have this set up.
I'd welcome comments on how any workflow here can be improved or how it compares to other distributions. It takes time but in Kubuntu we have a good track record of contributing fixes upstream and we all are part of KDE as well as Kubuntu. As well as the tasks I list above about checking copyright or co-installability I do Plasma releases currently, I just saw Harald do a Phonon release and Scott's just applied for a KDE account for fixes to PyKDE. And as ever we welcome more people to join us, we're in #kubuntu-devel where free hugs can be found, and we're having a whole day of Kubuntu love at Akademy.
Since first running into TrackingPoint at CES 2013, we’ve kept tabs on the Austin-based company and its Linux-powered rifles, which it collectively calls “Precision Guided Firearms,” or PGFs. We got to spend a few hours on the range with TrackingPoint’s first round of near-production bolt-action weapons last March, when my photojournalist buddy Steven Michael nailed a target at 1,008 yards—about 0.92 kilometers—on his first try, in spite of never having fired a rifle before.
A lot of things have changed in the past year for TrackingPoint. The company relocated its headquarters from within Austin to the suburb community of Pflugerville, constructed an enormous manufacturing and testing lab to scale up PGF production, shed some 30 employees (including CEO Jason Schauble and VP Brett Boyd, the latter of whom oversaw our range visit in 2013), and underwent a $29 million Series D round of financing. It also sold as many PGFs as it could make, according to Oren Schauble, TrackingPoint’s director of marketing and brother of former CEO Jason Schauble.
Submitted by: Lee Hutchinson
I'm back and recovering with typical post-con fatigue. This year, I made several mistakes, not the least of which was trying to do BSides, Black Hat, and DEF CON. Given the overlapping schedules and the events occurring outside the conferences, this left me really drained, not to mention spending more time transiting between the events than I'd like.
BSides Las Vegas
B-Sides was a blast, but I spent most of my time there playing in the Pros vs Joes CTF run by Dichotomy. This is a particularly nice Capture the Flag competition, since it's based on defending (and attacking) "real world" networks, rather than the typical Jeopardy-style "crack this binary" competitions. Most of the problems seen in the real world aren't 0-days produced by talented hackers, but configuration weaknesses, outdated software, and insecure practices exploited by script kiddies. PvJ forces you to consider how to harden a "corporate" environment while still providing the same services. You get a Cisco ASA as your firewall, and can reconfigure services as needed to establish your perimeter and secure your systems. On Day 2, you also get to see just how good you are at breaking in, and just how good (or bad) your opponents are at securing their network.
Black Hat
There were a couple of interesting talks to see at Black Hat, but some of the ones that I hoped would be more ground breaking seemed to just scratch the surface and didn't provide enough depth. (Or working demos! I'm looking at you, USB firmware!) The Black Hat business hall was an incredible letdown, as basically none of the booths had anyone with technical depth for discussion, but just had sales people who wanted to sell things that probably don't work anyway. [Cynical mode off.]
In all honesty, Black Hat continues to be a venue for government & corporate security managers, and consultants and contractors that work for those entities. There's absolutely nothing community about it, but so long as you go in with that expectation, you won't be disappointed by that.
DEF CON 22
So much to do, so little time! Every year, I'm plagued by the same problem: which of the 7 amazing things going on right now do I want to do? This year, the problem got even more complicated for me due to an event run by my employer.
The badge was, as usual, pretty awesome, thanks to 1o57's work. Apparently he even worked on it during his honeymoon, so a big thanks to @NelleBot for not yelling at him too much, so we all got to play with some awesome hardware. Once again, the badge features a Parallax Propeller chip, which is sort of unfortunate, as the toolkit for it is closed-source and Linux is not a first-class citizen. Between that & time constraints, I didn't spend any time working on the badge challenge, but maybe I'll play around with it some now that I'm home. I believe I've spotted (and heard of) an IR transmitter/receiver pair, similar to the DC20 badge. I also have some IR LEDs and receivers at home, so I wonder if they're in a similar range. Maybe I'll break out a Digispark as an IR transceiver to play around with.
Thursday night was theSummit, an annual fundraiser run by Vegas 2.0 to raise money for the Electronic Frontier Foundation. It's an incredible event, with lots of great people in attendance, and a good opportunity to meet many of the BSides and DEF CON speakers. The fact that there's a raffle, auction, and open bar is just the icing on the cake. (Donating to the EFF makes it such a good cause that I wouldn't miss it for anything!) As you can see at the top, the VIP badge for theSummit was pretty awesome. I love the LED shining through the acrylic to make the text glow.
I was really happy to see the Crypto & Privacy village, and even though I only got a little time there, it was great to see that playing more of a role at DEF CON. I attended the OpenPGP keysigning on Friday, but didn't make it back for Saturday's. They also seemed to have some good introductory crypto talks, and it'll be interesting to see how that evolves over the next year.
Despite losing a lot of time to a work event and teaching at the R00tz Asylum, I managed to play in Capture the Packet with another member of DC404 (my DEF CON group from when I lived in Atlanta) and we won the round, qualifying for the finals. Unfortunately, he wasn't able to make it to the finals due to his flight arrangements, so another DC404 member (and current coworker) stepped in, and we managed a 2nd place overall finish, which I was extremely happy with. (Not that a black badge wouldn't have been cool... There's always next year.)
Of course, work events aren't so bad when they come with this view. We took some interesting people on a little trip around the High Roller, the tallest Ferris Wheel in the world, right off the strip! It was incredible to get to talk with some of them, and the view didn't hurt things either.
If you haven't heard, this was the final year at the Rio. It's time to pack our bags and head across the freeway to Paris. And Bally's. That's right, it's going to take 2 hotels to contain all the hackers. Apparently we'll have room blocks at several more of the area hotels. Makes sense given this year's reported 16,000 attendance.
Today in Randa: Phonon 4.8 Beta got released, making the GStreamer backend use the GStreamer1 API and improving robustness in all parts of Phonon.
For more information on this new beta release head on over to our releases page.
New Phonon GStreamer maintainer Daniel Vrátil is a close friend of Konqi! Picture kindly provided by Martin Klapetek. Also, no dragons were harmed in the making of this picture (we think).
With the timing getting a bit tight and no serious objections against the suggested dates, we’d like to plan the next Ubuntu Global Jam for
UGJ 14.10: 12-14 September 2014
To get the planning going, we’d like to invite all available LoCo enthusiasts, LoCo contacts and LoCo Council to join us for a
Thursday 14th Aug, 14 UTC
You are all invited, we’ll get everyone on the hangout who wants to participate.
If you’re new to the party, have a look at https://wiki.ubuntu.com/UbuntuGlobalJam for some reading.
Originally posted to the loco-contacts mailing list on Tue Aug 12 14:01:17 UTC 2014 by Daniel Holbach
Release Metrics and Incoming Bugs
Release metrics and incoming bug data can be reviewed at the following link:
Status: Utopic Development Kernel
The Utopic kernel has been rebased to v3.16 final and uploaded to the
archive, i.e. linux-3.16.0-7.12. Please test and let us know your results.
Important upcoming dates:
Thurs Aug 21 – Utopic Feature Freeze (~1 week away)
Thurs Sep 25 – Utopic Final Beta (~6 weeks away)
The current CVE status can be reviewed at the following link:
Status: Stable, Security, and Bugfix Kernel Updates – Trusty/Saucy/Precise/Lucid
cycle: 08-Aug through 29-Aug
08-Aug Last day for kernel commits for this cycle
10-Aug – 16-Aug Kernel prep week.
17-Aug – 23-Aug Bug verification & Regression testing.
24-Aug – 29-Aug Regression testing & Release to -updates.
cycle: 29-Aug through 20-Sep
29-Aug Last day for kernel commits for this cycle
31-Aug – 06-Sep Kernel prep week.
07-Sep – 13-Sep Bug verification & Regression testing.
14-Sep – 20-Sep Regression testing & Release to -updates.
Status for the main kernels, until today (Aug. 12):
- Lucid – Kernels being prep’d
- Precise – Kernels being prep’d
- Trusty – Kernels being prep’d
Current opened tracking bugs details:
For SRUs, SRU report is a good source of information:
Open Discussion or Questions? Raise your hand to be recognized
No open discussion.