Ronnie Tucker: Russian Ministry of Health to Replace Microsoft and Oracle Products with Linux and PostgreSQL
The Russian government is considering the replacement of Microsoft and Oracle products with Linux and open source counterparts, at least for the Ministry of Health.
Russia has been slapped with a large number of sanctions by the European Union and the United States, and it is expected to respond. One of the ways it can do that is by stopping its authorities from buying Microsoft licenses or renewing existing ones.
According to a report published on gov.cnews.ru, the Ministry of Health intends to abandon all the proprietary software provided by Oracle and Microsoft and replace it with open source software.
Submitted by: Silviu Stahie
One of the issues that comes up from time to time in many organizations and projects (both community and commercial ventures) is the question of how to manage bug reports, feature requests and support requests.
There are a number of open source solutions and proprietary solutions too. I've never seen a proprietary solution that offers any significant benefit over the free and open solutions, so this blog only looks at those that are free and open.
Support request or bug?
One common point of contention is the distinction between support requests and bugs. Users do not always know the difference.
Some systems, like the Github issue tracker, gather all the requests together in a single list. Calling them "Issues" invites people to submit just about anything, such as "I forgot my password".
At the other extreme, some organisations are so keen to keep support requests away from their developers that they operate two systems and a designated support team copies genuine bugs from the customer-facing trouble-ticket/CRM system to the bug tracker. This reduces the amount of spam that hits the development team, but there is overhead in running multiple systems and in having staff doing cut and paste.
Will people use it?
Another common problem is that a full bug report template is overkill for some issues. If a user is asking for help with some trivial task, and the tool asks them to answer twenty questions about their system and application version, submit log files and meet other requirements, then they won't use it at all and may just revert to sending emails or making phone calls.
Ideally, it should be possible to demand such details only when necessary. For example, if a support engineer routes a request to a queue for developers, then the system may guide the support engineer to make sure the ticket includes the attributes that a ticket in the developers' queue should have.
Beyond Perl
Many of the established trackers are written in Perl, and my personal perspective is that this hinders the ability of those projects to attract new blood or to leverage the benefits of new Python modules that don't exist in Perl at all.
I recently started having a look at the range of options in the Wikipedia list of bug tracking systems.
Some of the trends that appear:
- Many appear to be bug tracking systems rather than issue tracking / general-purpose support systems. How well do they accept non-development issues and keep them from spamming the developers, while still providing useful features for the subset of users who are doing development?
- A number of them try to bundle other technologies, like wiki or FAQ systems: but how well do they work with existing wikis? This trend towards monolithic products is slightly dangerous. In my own view, a wiki embedded in some other product may not be as well supported as one of the leading purpose-built wikis.
- Some of them also appear to offer various levels of project management. For development tasks, it is just about essential for dependencies and a roadmap to be tightly integrated with the bug/feature tracker but does it make the system more cumbersome for people dealing with support requests? Many support requests, like "I've lost my password", don't really have any relationship with project management or a project roadmap.
- Not all appear to handle incoming requests by email. Bug tracking systems can be purely web/form-based, but email is useful for helpdesk systems.
This leaves me with some of the following questions:
- Which of these systems can be used as a general purpose help-desk / CRM / trouble-ticket system while also being a full bug and project management tool for developers?
- For those systems that don't work well for both use cases, which combinations of trouble-ticket system + bug manager are most effective, preferably with some automated integration?
- Which are more extendable with modern programming practices, such as Python scripting and using Git?
- Which are more future proof, with choice of database backend, easy upgrades, packages in official distributions like Debian, Ubuntu and Fedora, scalability, IPv6 support?
- Which of them are suitable for the public internet and which are only considered suitable for private access?
On Monday we released Issue 378 of the Ubuntu Weekly Newsletter. The newsletter has thousands of readers across various formats from wiki to email to forums and discourse.
As we creep toward the 400th issue, we’ve been running a bit low on contributors. Thanks to Tiago Carrondo and David Morfin for pitching in these past few weeks while they could, but the bulk of the work has fallen to José Antonio Rey and myself and we can’t keep this up forever.
So we need more volunteers like you to help us out!
We specifically need folks to let us know about news throughout the week (email them to email@example.com) and to help write summaries over the weekend. All links and summaries are stored in a Google Doc, so you don’t need to learn any special documentation formatting or revision control software to participate. Plus, everyone who participates is welcome to add their name to the credits!
Summary writers. Summary writers receive an email every Friday evening (or early Saturday) with a link to the collaborative news links document for the past week, which lists all the articles that need 2-3 sentence summaries. These people are vitally important to the newsletter. The time commitment is limited and it is easy to get started from the first weekend you volunteer. No need to be shy about your writing skills: we have style guidelines to help you on your way, and all summaries are reviewed before publishing, so it's easy to improve as you go on.
Interested? Email firstname.lastname@example.org and we’ll get you added to the list of folks who are emailed each week and you can help as you have time.
At the airport exit, José Luís Ahumada was waiting for us (from now on Bart, as he wants us to call him), very eager and a good host.
After dropping our things at the hostel we went over to his 'office': Vivelab, an incubator that provides technology facilities to entrepreneurs, where Sergio Meneses was working, whom I finally met in person after so many years collaborating on Ubuntu ;)
Breathing in the technological initiative at Vivelab
Leaving Vivelab we ran into Rodny by chance, a super-enthusiastic colleague who will be working to make the best Ubuconla possible.
We headed into the centre, and I have to say that Cartagena has to be lived: wandering through its nooks and crannies, dreaming of the past eras of its old stone buildings crowned by spectacular wooden balconies, immersing yourself in the hubbub of its daily life, with its markets, chaotic traffic, vendors with the most ingenious tricks... as I was saying, you have to live Cartagena! :D
After changing money we went to eat at a restaurant I loved, with grilled fish and some delicious guanábana and lulo juices.
Sergio, Bart and Costales
Over the after-lunch conversation Sergio had to leave for work, and Bart introduced us to the extraordinary work of CaribeMesh, a Cartagena organisation that is bringing the Internet to the most disadvantaged neighbourhoods, where economically the network of networks neither reaches nor is of interest to those who could bring it. Hats off to CaribeMesh! ;D
After the meal and an interesting chat with Bart, we went to an agency to book the following night at Las Islas del Rosario. Just as on our trips to Peru, haggling is the rule here too. And with the next day organised, we asked Bart to take us back to the hostel, because jet lag was still taking its toll on us.
After a nap, with our strength recovered, Sergio and Bart came by around 20:30 to go to dinner.
Because of the copious lunch we weren't very hungry, and a single pizza filled us up, in a shopping centre that was already almost empty. The thing is, if in Spain we usually have lunch around 2 and dinner around 9:30, here lunch is usually around 12 and dinner around 7 :O
Back at the hostel I chatted for almost an hour with Sergio in his own room (he's staying in the room next to ours). It was striking to compare opinions on many topics, mainly the LoCo Council; by email it would have meant exchanging dozens of messages and getting nowhere, whereas in person, face to face is still best :)
Keep reading more about this trip.
In October this year I'll be visiting the US and Canada for some conferences and a wedding. The first event will be xTupleCon 2014 in Norfolk, Virginia. xTuple make the popular open source accounting and CRM suite PostBooks. The event kicks off with a keynote from Apple co-founder Steve Wozniak on the evening of October 14. On October 16 I'll be making a presentation about how JSCommunicator makes it easy to add click-to-call real-time communications (RTC) to any other web-based product without requiring any browser plugins or third party softphones.
Juliana Louback has been busy extending JSCommunicator as part of her Google Summer of Code project. When finished, we hope to quickly roll out the latest version of JSCommunicator to other sites including rtc.debian.org, the WebRTC portal for the Debian Developer community. Juliana has also started working on wrapping JSCommunicator into a module for the new xTuple / PostBooks web-based CRM. Versatility is one of the main goals of the JSCommunicator project and it will be exciting to demonstrate this in action at xTupleCon.
xTupleCon discounts for developers
For those who don't or can't attend xTupleCon there has been some informal discussion about a small WebRTC-hacking event at some time on 15 or 16 October. Please email me privately if you may be interested.
themukt.com is a website which covers open source news and has long been KDE friendly; for example, yesterday it led with the story of Plasma 5 being updated. Recently I got an e-mail from the editor:
From: Swapnil Bhartiya
To: Jonathan Riddell
Subject: Kubuntu Feedback
Just wanted to tell you that I have been using Kubuntu as my primary OS
for a while now and I am really impressed how stable and bug-free it has
become. Earlier random crashes was a normal thing for Kubuntu so I never
used it as main distros. Though I still triple boot with openSUSE and
Arch all running Plasma, I really started to love what you are doing
Keep up the great work and let us know what else I should do to further
Here in the KDE office in Barcelona some people spend their time on purely upstream KDE projects, and some of us are primarily interested in making distros work, which means our users can get all the stuff we make. I've been asked why we don't just automate the packaging and go and do more productive things. One view of working on a distro like Kubuntu is that it's just a way to package up the hard work done by others and take all the credit. I don't deny that, but there's quite a lot to the packaging of all that hard work; for a start, there's a lot of it these days.
"KDE" used to be released once every nine months or less frequently. But yesterday I released the first bugfix update to Plasma; to make that happen I spent some time on Thursday with David making the first update to Frameworks 5. And Plasma 5 is still a work in progress for us distros; let's not forget about KDE SC 4.13.3, which Philip has done his usual spectacular job of updating in the 14.04 LTS archive, or the KDE SC 4.14 betas, which Scarlett has been packaging for utopic and backporting to 14.04 LTS. KDE SC used to be 20 tars; now it's 169, plus over 50 language packs.
If we were packaging it without any automation, as used to be done, it would take an age, but of course we do automate the repetitive tasks; the KDE SC 4.13.97 status page shows all the packages and highlights obvious problems. But with 169 tars even running the automated script takes a while, and then you have to fix any patches that no longer apply. We have policies to dissuade having patches (any patches should be upstream in KDE or on their way upstream), but sometimes it's unavoidable that we have some to maintain, which often need small changes for each upstream release.
Much of what we package are libraries, and if one small bit changes in a library, any applications which use that library will crash. This is ABI, and the rules for binary compatibility in C++ are nuts. Not infrequently someone in KDE will alter a library's ABI without realising. So we maintain symbol files to list all the symbols. These can often feel like more trouble than they're worth, because they need to be updated when a new version of GCC produces different symbols, or when symbols disappear and on investigation turn out to be marked private, so nobody would have been using them anyway. But if you miss a change and apps start crashing, as nearly happened in KDE PIM last week, then people get grumpy.
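As an illustration of what such tracking looks like (a hypothetical fragment with invented names, not from any real Kubuntu package), a Debian symbols file records each exported mangled C++ symbol together with the upstream version that introduced it; dpkg-gensymbols then diffs this list against the freshly built library and flags anything that appeared or disappeared:

```
# debian/libfoo1.symbols -- hypothetical example, names invented
libfoo.so.1 libfoo1 #MINVER#
 _ZN3Foo4initEv@Base 1.0.0
 _ZN3Foo7connectERK7QStringi@Base 1.2.0
 (optional)_ZN3FooD1Ev@Base 1.0.0
```

A listed symbol that vanishes from a new build fails the package build, which is exactly the early warning that catches an accidental ABI break; the (optional) tag marks symbols whose disappearance should not be treated as fatal.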
Debian, and so Ubuntu, documents the copyright licence of every file in every package. This is a very slow and tedious job, but it's important that it's done both upstream and downstream, because if you don't, people won't want to use your software in a commercial setting and at worst you could end up in court. So I maintain the licensing policy and not infrequently have to fix bits which are incorrectly or unclearly licenced and answer questions; today, for example, I was reviewing for Eike whether a kcm in Frameworks had to be LGPL licenced. We write a copyright file for every package, and again this can feel like more trouble than it's worth; there's no easy way to automate it, but by some readings of the licence texts it's necessary to comply with them, and it's just good practice. It also means that if someone starts making claims like requiring licencing for already distributed binary packages, I'm in an informed position to correct such nonsense.
When we were packaging KDE Frameworks from scratch we had to find a description of each Framework. Despite policies for metadata, some were quite under-described, so we had to go and search for a sensible description for them. In fact, not infrequently we'll need to use a new library which doesn't even have a sensible paragraph describing what it does. We need to be able to make a package show something of a human face.
A recent addition to the world of .deb packaging is MultiArch, which allows i386 packages to be installed on amd64 computers, as well as some even more obscure combinations (powerpc on ppc64el, anyone?). This lets you run Skype on your amd64 computer without messy kludges like the ia32-libs package. However, it needs quite a lot of attention from packagers of libraries: marking which packages are multiarch, and which depend on other multiarch or architecture-independent packages. Even after packaging KDE Frameworks I'm not entirely comfortable with doing it.
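As a hedged sketch of the marking involved (package names invented), a library's stanza in debian/control gains a Multi-Arch field so that the i386 and amd64 builds can be co-installed:

```
Package: libfoo1
Architecture: any
Multi-Arch: same
Depends: ${shlibs:Depends}, ${misc:Depends}

Package: libfoo-data
Architecture: all
Multi-Arch: foreign
Depends: ${misc:Depends}
```

`Multi-Arch: same` says each architecture installs its own copy of the package (into an architecture-qualified library path), while `foreign` says a single architecture-independent copy satisfies dependencies from any architecture. Choosing the right annotation for every library package is the attention from packagers referred to above.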
Splitting up Packages
We spend lots of time splitting up packages. When, say, Calligra gets released it's all in one big tar, but you don't want all of it on your system, because you just want to write a letter in Calligra Words, and Krita has lots of image and other data files which take up lots of space you don't care for. So for each new release we have to work out which of the installed files go into which .deb package. It takes time, and even worse, occasionally we can get it wrong, but if you don't want heaps of stuff on your computer you don't need, then it needs to be done. It's also needed for library upgrades: if there's a new version of libfoo and not all the programs have been ported to it, then you can install libfoo1 and libfoo2 on the same system without problems. That's not possible with distros which don't split up packages.
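In .deb packaging the split is typically expressed in per-package .install files that pattern-match files out of the single upstream build; a minimal sketch, with paths invented for illustration:

```
# debian/calligra-words.install -- just the word processor
usr/bin/calligrawords
usr/lib/*/calligra/*words*

# debian/krita-data.install -- Krita's large brush/pattern data
usr/share/krita/brushes/*
usr/share/krita/patterns/*
```

Each installed file has to land in exactly one of these lists, which is why every release where upstream adds, removes or moves files costs packager time.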
One messy side effect of this is that when a file moves from one .deb to another .deb built from the same sources (maybe Debian chose to split it another way and we want to follow them), a Breaks/Replaces/Conflicts entry needs to be added. This is a pretty messy part of .deb packaging: you need to specify which version it Breaks/Replaces/Conflicts with, and depending on the type of move you need some combination of these three fields, but even experienced packagers seem to be unclear on which. And then if a backport (with files in their original places) is released with a newer version than the one you specify in the Breaks/Replaces/Conflicts, it just refuses to install and stops half way through until a new upload is made which updates the Breaks/Replaces/Conflicts version in the packaging. I'd be interested in how this is solved in the RPM world.
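For the common file-move case, the combination Debian policy recommends is Breaks plus Replaces, versioned against the last release that still shipped the file in its old place; a hypothetical sketch (names and versions invented):

```
Package: foo-core
Breaks: foo-utils (<< 2.1-1)
Replaces: foo-utils (<< 2.1-1)
```

Replaces lets foo-core overwrite the moved file, Breaks forces any older foo-utils to be upgraded in the same run, and a full Conflicts is only needed when the two packages genuinely cannot coexist at all. The backport problem described above arises precisely because the `<< 2.1-1` bound stops matching once a backport carries a higher version number with the file still in its original location.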
Ubuntu is forked from Debian, and to piggyback on their work (and add our own bugs while taking the credit) we merge in Debian's packaging at the start of each cycle. This is fiddly work involving going through the diff (and for patches that's often a diff of a diff) and the changelog to work out why each alteration was made. Then we merge them together; it takes time and it's error prone, but it's what allows Ubuntu to be one of the most up to date distros around, even while much of the work that went into maintaining universe packages not part of some flavour has slowed down.
Stable Release Updates
You have Kubuntu 14.04 LTS but you want more? You want bugfixes too? Oh, but you want them without the possibility of regressions? Ubuntu has a quite strict definition of what's allowed in after an Ubuntu release is made; this is because once upon a time someone uploaded a fix for X which had the side effect of breaking X on half the installs out there. So for any updates to get into the archive they can only be for certain packages with a track record of making bugfix releases without sneaking in new features or breaking bits. They need to be tested, have some time pass to allow for wider testing, be tested again using the versions compiled in Launchpad, and then be released. KDE makes bugfix releases of KDE SC every month and we update them in the latest stable and LTS releases, as 4.13.3 was this week. But it's not a process you can rush and it will usually take a couple of weeks. That 4.13.3 update was even later than usual because we were busy with Plasma 5 and whatnot. And it's not perfect; a bug in Baloo did get through with 4.13.2. But it would be even worse if we did rush it.
Ah, but you want new features too? We don't allow new features into the normal updates because they have more chance of causing regressions. That's why we make backports, either in the kubuntu-ppa/backports archive or in the Ubuntu backports archive. This involves running the package through another automation script to change whatever needs to be changed for the backport, then compiling it all, testing it and releasing it. Maintaining and running that backport script is quite faffy, so sending your thanks is always appreciated.
We have an allowance to upload new bugfix (micro) releases of KDE SC to the Ubuntu archive because KDE SC has a good track record of fixing things and not breaking things. When we come to wanting to update Plasma we'll need to argue for another allowance. One controversial decision in KDE Frameworks is that there are no bugfix releases, only monthly releases with new features. These are unlikely to get into the Ubuntu archive; we can try to argue the case that with automated tests and other processes the quality is high enough, but it'll be a hard sell.
Crack of the Day
Project Neon provides packages of daily builds of parts of KDE from Git. And there's weekly ISOs that are made from this too. These guys rock. The packages are monolithic and install in /opt to be able to live alongside your normal KDE software.
You should be able to run KDELibs 4 software on a Plasma 5 desktop. I spent quite a bit of time ensuring this is possible by having no overlapping files between kdelibs/kde-runtime and KDE Frameworks and some parts of Plasma. This wasn't done primarily for Kubuntu; many of the files could have been split out into .deb packages shared between KDELibs 4 and Plasma 5, but other distros which just install packages in a monolithic style benefited. Some projects like Baloo didn't ensure they were co-installable, which is fine for Kubuntu, as we can separate the libraries that need to be co-installed from the binaries, but other distros won't be so happy.
Increasingly, KDE software comes with its own test suite. Test suites are something that has been late coming to free software (and maybe software in general), but now they're here we can have higher confidence that the software is bug free. We run these test suites as part of the package compilation process, and not infrequently find that the test suite doesn't run; I've been told that in the past packagers weren't expected to use it. And of course tests fail.
In Ubuntu we have some obscure architectures. 64-bit ARM is likely to be a useful platform in the years to come. I'm not sure why we care about 64-bit powerpc; I can only assume someone has paid Canonical to care about it. Not infrequently we find software compiles fine on normal PCs but breaks on these obscure platforms, and we need to debug why that is. This can be a slow process on ARM, which takes an age to do anything, or very slow where I don't even have access to a machine to test on, but it's all part of being part of a distro with many use-cases.
At Kubuntu we've never shared infrastructure with Debian, despite having 99% the same packaging. This is because Ubuntu to an extent defines itself as the technical awesomeness of Debian with smoother processes. But for some time Debian has used Git while we've used the slower Bzr (it was an early plan to make Ubuntu take over the world of distributed revision control with Bzr, but then Git came along and turned out to be much faster, even if harder to get your head around), and they've also moved to team maintainership, so at last we're planning shared repositories. That'll mean many changes in our scripts, but should remove much of the headache of merges each cycle.
There's also a proposal to move our packaging to daily builds so we won't have to spend a lot of time updating packaging at every release. I'm skeptical if the hassle of the infrastructure for this plus fixing packaging problems as they occur each day will be less work than doing it for each release but it's worth a try.
Every 6 months we make an Ubuntu release (which includes all the flavours of which Ubuntu [Unity] is the flagship and Kubuntu is the most handsome) and there's alphas and betas before that which all need to be tested to ensure they actually install and run. Some of the pain of this has reduced since we've done away with the alternative (text debian-installer) images but we're nowhere near where Ubuntu [Unity] or OpenSUSE is with OpenQA where there are automated installs running all the time in various setups and some magic detects problems. I'd love to have this set up.
I'd welcome comments on how any workflow here can be improved or how it compares to other distributions. It takes time but in Kubuntu we have a good track record of contributing fixes upstream and we all are part of KDE as well as Kubuntu. As well as the tasks I list above about checking copyright or co-installability I do Plasma releases currently, I just saw Harald do a Phonon release and Scott's just applied for a KDE account for fixes to PyKDE. And as ever we welcome more people to join us, we're in #kubuntu-devel where free hugs can be found, and we're having a whole day of Kubuntu love at Akademy.
Since first running into TrackingPoint at CES 2013, we’ve kept tabs on the Austin-based company and its Linux-powered rifles, which it collectively calls “Precision Guided Firearms,” or PGFs. We got to spend a few hours on the range with TrackingPoint’s first round of near-production bolt-action weapons last March, when my photojournalist buddy Steven Michael nailed a target at 1,008 yards—about 0.91 kilometers—on his first try, in spite of never having fired a rifle before.
A lot of things have changed in the past year for TrackingPoint. The company relocated its headquarters from within Austin to the suburb community of Pflugerville, constructed an enormous manufacturing and testing lab to scale up PGF production, shed some 30 employees (including CEO Jason Schauble and VP Brett Boyd, the latter of whom oversaw our range visit in 2013), and underwent a $29 million Series D round of financing. It also sold as many PGFs as it could make, according to Oren Schauble, TrackingPoint’s director of marketing and brother of former CEO Jason Schauble.
Submitted by: Lee Hutchinson
I'm back and recovering with typical post-con fatigue. This year, I made several mistakes, not the least of which was trying to do BSides, Black Hat, and DEF CON. Given the overlapping schedules and the events occurring outside the conferences, this left me really drained, not to mention spending more time transiting between the events than I'd like.
BSides Las Vegas
B-Sides was a blast, but I spent most of the time I was there playing in the Pros vs Joes CTF run by Dichotomy. This is a particularly nice Capture the Flag competition, since it's based on defending (and attacking) "real world" networks, rather than the typical Jeopardy-style "crack this binary" competitions. Most of the problems seen in the real world aren't 0-days produced by talented hackers, but configuration weaknesses, outdated software, and insecure practices exploited by script kiddies. PvJ forces you to consider how to harden a "corporate" environment while still providing the same services. You get a Cisco ASA as your firewall, and can reconfigure services as needed to establish your perimeter and secure your systems. On Day 2, you also get to see just how good you are at breaking in, and just how good (or bad) your opponents are at securing their network.
Black Hat
There were a couple of interesting talks to see at Black Hat, but some of the ones that I hoped would be more ground breaking seemed to just scratch the surface and didn't provide enough depth. (Or working demos! I'm looking at you, USB firmware!) The Black Hat business hall was an incredible letdown, as basically none of the booths had anyone with technical depth for discussion, but just had sales people who wanted to sell things that probably don't work anyway. [Cynical mode off.]
In all honesty, Black Hat continues to be a venue for government & corporate security managers, and consultants and contractors that work for those entities. There's absolutely nothing community about it, but so long as you go in with that expectation, you won't be disappointed by that.
DEF CON 22
So much to do, so little time! Every year, I'm plagued by the same problem: which of the 7 amazing things going on right now do I want to do? This year, the problem got even more complicated for me due to an event run by my employer.
The badge was, as usual, pretty awesome, thanks to 1o57's work. Apparently he even worked on it during his honeymoon, so a big thanks to @NelleBot for not yelling at him too much, so we all got to play with some awesome hardware. Once again, the badge features a Parallax Propeller chip, which is sort of unfortunate, as the toolkit for it is closed-source and Linux is not a first-class citizen. Between that and time constraints, I didn't spend any time working on the badge challenge, but maybe I'll play around with it now that I'm home. I believe I've spotted (and heard of) an IR transmitter/receiver pair, similar to the DC20 badge. I also have some IR LEDs and receivers at home, so I wonder if they're in a similar range. Maybe I'll break out a Digispark as an IR transceiver to play around with.
Thursday night was theSummit, an annual fundraiser run by Vegas 2.0 to raise money for the Electronic Frontier Foundation. It's an incredible event, with lots of great people in attendance, and a good opportunity to meet many of the BSides and DEF CON speakers. The fact that there's a raffle, auction, and open bar is just the icing on the cake. (Donating to the EFF makes it such a good cause that I wouldn't miss it for anything!) As you can see at the top, the VIP badge for theSummit was pretty awesome. I love the LED shining through the acrylic to make the text glow.
I was really happy to see the Crypto & Privacy village, and even though I only got a little time there, it was great to see that playing more of a role at DEF CON. I attended the OpenPGP keysigning on Friday, but didn't make it back for Saturday's. They also seemed to have some good introductory crypto talks, and it'll be interesting to see how that evolves over the next year.
Despite losing a lot of time to a work event and teaching at the R00tz Asylum, I managed to play in Capture the Packet with another member of DC404 (my DEF CON group from when I lived in Atlanta) and we won the round, qualifying for the finals. Unfortunately, he wasn't able to make it to the finals due to his flight arrangements, so another DC404 member (and current coworker) stepped in, and we managed a 2nd place overall finish, which I was extremely happy with. (Not that a black badge wouldn't have been cool... There's always next year.)
Of course, work events aren't so bad when they come with this view. We took some interesting people on a little trip around the High Roller, the tallest Ferris Wheel in the world, right off the strip! It was incredible to get to talk with some of them, and the view didn't hurt things either.
If you haven't heard, this was the final year at the Rio. It's time to pack our bags and head across the freeway to Paris. And Bally's. That's right, it's going to take 2 hotels to contain all the hackers. Apparently we'll have room blocks at several more of the area hotels. Makes sense given this year's reported 16,000 attendance.
Today in Randa: Phonon 4.8 Beta got released, making the GStreamer backend use the GStreamer1 API and improving robustness in all parts of Phonon.
For more information on this new beta release head on over to our releases page.
New Phonon GStreamer maintainer Daniel Vrátil is a close friend of Konqi! Picture kindly provided by Martin Klapetek. Also, no dragons were harmed in the making of this picture (we think).
With the timing getting a bit tight and no serious objections against the suggested dates, we’d like to plan the next Ubuntu Global Jam for
UGJ 14.10: 12-14 September 2014
To get the planning going, we’d like to invite all available LoCo enthusiasts, LoCo contacts and the LoCo Council to join us for a planning hangout on
Thursday 14th Aug, 14 UTC
You are all invited, we’ll get everyone on the hangout who wants to participate.
If you’re new to the party, have a look at https://wiki.ubuntu.com/UbuntuGlobalJam for some reading.
Originally posted to the loco-contacts mailing list on Tue Aug 12 14:01:17 UTC 2014 by Daniel Holbach
Release Metrics and Incoming Bugs
Release metrics and incoming bug data can be reviewed at the following link:
Status: Utopic Development Kernel
The Utopic kernel has been rebased to v3.16 final and uploaded to the
archive, i.e. linux-3.16.0-7.12. Please test and let us know your results.
Important upcoming dates:
Thurs Aug 21 – Utopic Feature Freeze (~1 week away)
Thurs Sep 25 – Utopic Final Beta (~6 weeks away)
The current CVE status can be reviewed at the following link:
Status: Stable, Security, and Bugfix Kernel Updates – Trusty/Saucy/Precise/Lucid
cycle: 08-Aug through 29-Aug
08-Aug Last day for kernel commits for this cycle
10-Aug – 16-Aug Kernel prep week.
17-Aug – 23-Aug Bug verification & Regression testing.
24-Aug – 29-Aug Regression testing & Release to -updates.
cycle: 29-Aug through 20-Sep
29-Aug Last day for kernel commits for this cycle
31-Aug – 06-Sep Kernel prep week.
07-Sep – 13-Sep Bug verification & Regression testing.
14-Sep – 20-Sep Regression testing & Release to -updates.
Status for the main kernels, until today (Aug. 12):
- Lucid – Kernels being prep’d
- Precise – Kernels being prep’d
- Trusty – Kernels being prep’d
Current opened tracking bugs details:
For SRUs, the SRU report is a good source of information:
Open Discussion or Questions? Raise your hand to be recognized
No open discussion.
Flying off to a conference on the other side of the country 2 weeks after having my gallbladder removed may not have been one of the wisest decisions of my life, but I am very glad I went. Thankfully MJ had planned on coming along to this event anyway, so I had companionship… and someone to carry the luggage :)
This was Fosscon‘s 5th year, 4th in Philadelphia and the 3rd one I’ve been able to attend. I was delighted this year to have my employer, HP, sponsor the conference at a level that gave us a booth and track room. Throughout the day I was attending talks, giving my own and chatting with people at the HP booth about the work we’re doing in OpenStack and opportunities for people who are looking to work with open source technologies.
The day started off with a keynote by Corey Quinn titled “We are not special snowflakes” which stressed the importance of friendliness and good collaboration skills in technical candidates.
I, for one, am delighted to see us as an industry moving away from BOFHs and from giving kudos for antisocial behavior. I may not be a social butterfly, but I value the work of my peers and strive to be someone people enjoy working with.
After the keynote I did a talk about having a career in FOSS. I was able to tell stories about my own work and experiences and those of some of my colleagues. I talked about my current role at HP and spent a fair amount of time giving participation examples related to my work on Xubuntu. I must really enjoy this topic, because I didn’t manage to leave time for questions! Fortunately I think I made up for it in some great chats with other attendees throughout the day.
My slides from the talk are available here: FOSSCON-2014-FOSS_career.pdf
Some other resources related to my talk:
- OpenSource.com ebook: How to get started with open source
- 7 skills to land your open source dream job
- Careers in Open Source Week features professionals’ tips and lessons learned in the field
- StackOverflow open source jobs board
- HP Helion OpenStack jobs
Throughout the conference I was able to visit with my friends at the Ubuntu booth. They had brought along a couple copies of The Official Ubuntu Book, 8th Edition for me to sign (hooray!) and then sell to conference attendees. I brought along my Ubuntu tablet which they were able to have at the booth, and which MJ grabbed from me during a session when someone asked to see a demo.
After lunch I went to see Charlie Reisinger’s “Lessons From Open Source Schoolhouse” where he talked about the Ubuntu deployments in his school district. I’ve been in contact with Charlie for quite some time now since the work we do with Partimus also puts us in schools, but he’s been able to achieve some pretty exceptional success in his district. It was a great pleasure to finally meet him in person and his talk was very inspiring.
I’ve been worried for quite some time that children growing up today will only have access to tablets and smartphones, which I classify as “read-only devices.” I think back to when I first started playing with computers; my passion for them grew out of the ability to tinker and discover. If my only exposure had been a tablet, I don’t think I’d be where I am today. Charlie’s talk went in a similar direction, particularly as he revealed that he controversially allows students to have administrative (sudo) access on the Ubuntu laptops! The students feel trusted and empowered, and in the time the program has been running he’s been able to put together a team of student apprentices who are great at working with the software and can help train other students, and teachers too.
Fosscon talks aren’t recorded, but check out Charlie’s TEDx Lancaster talk to get a taste of the key points about student freedom and the apprentice program he covered: Enabling students in a digital age: Charlie Reisinger at TEDxLancaster
GitHub for Penn Manor School District here: https://github.com/pennmanor
The last talk I went to of the day was by Robinson Tryon on “LibreOffice Tools and Tricks For Making Your Work Easier” where I was delighted to see how far they’ve come with the Android/iOS Impress remote, and the work being done in the space of editing PDFs, including the development of hybrid PDFs, which can be opened either by LibreOffice for editing or by a PDF viewer and contain full versions of both formats. I also hadn’t realized that LibreOffice retained any of its command-line tools, so it was pretty cool to learn about soffice --headless --convert-to for CLI-based conversions of files.
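As a quick sketch of that CLI conversion trick: assuming LibreOffice is installed and using a placeholder file name (report.odt is not from the talk), a batch conversion to PDF looks like this:

```shell
# Convert a document to PDF without opening the LibreOffice GUI.
# --headless suppresses the UI, --convert-to names the target format,
# and --outdir chooses where the converted file lands.
if command -v soffice >/dev/null 2>&1; then
    soffice --headless --convert-to pdf report.odt --outdir ./converted
else
    echo "soffice not found; install LibreOffice to try this"
fi
```

The same invocation accepts wildcards (e.g. `*.odt`), which is what makes it handy for scripted bulk conversions.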
Huge thanks to the volunteers who make Fosscon happen. The Franklin Institute was a great venue and aside from the one room downstairs, I think the layout worked out well for us. Booths were in common spaces that attendees congregated in, and I was even able to meet some tech folks who were just at the museum and happened upon us, which was a lot of fun.
More photos from the event here: https://www.flickr.com/photos/pleia2/sets/72157646362111741/
I'm often asked for a quick-start guide to using Byobu effectively. This wiki page is a decent start, as is the manpage, and the various links on the upstream website. But it seems that some of the past screencast videos have had the longest lasting impressions to Byobu users over the years.
I was on a long, international flight from Munich to Newark this past Saturday with a bit of time on my hands, and I cobbled together this instructional video. That recent international trip to Nuremberg inspired me to rediscover Mozart, and I particularly like this piece, which Mozart wrote in 1788, but sadly never heard performed. You can hear it now, and learn how to be more efficient in command line environments along the way :-)
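For the impatient, the very basics can be sketched in a few commands (this assumes the byobu package is installed; the key bindings listed are Byobu's defaults):

```shell
# Byobu quick start: the commands most people need first.
if command -v byobu >/dev/null 2>&1; then
    byobu --version       # confirm the install and which backend is in use
    # byobu               - start a new session, or reattach to an existing one
    # byobu-enable        - launch Byobu automatically at every login
    # Inside a session: F2 opens a new window, F3/F4 move between windows,
    # F6 detaches (leaving everything running), and Shift-F1 shows full help.
else
    echo "byobu not installed"
fi
```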
If you are attending UbuConLA I would strongly encourage you to check out the talks on Firefox OS and Webmaker. In addition to the talks, there will also be a Firefox OS workshop where attendees can go more hands on.
When the organizers of UbuConLA reached out to me several months ago, I knew we really had to have a Mozilla presence at this event so that Ubuntu Users who are already using Firefox as their browser of choice could learn about other initiatives like Firefox OS and Webmaker.
People in Latin America have always had a very strong ethos in terms of their support and use of Free Software, and we have an amazingly vibrant community in Colombia.
So if you will be anywhere near Universidad Tecnológica De Bolívar in Cartagena, Colombia, please go see the talks and learn why Firefox OS is the mobile platform that makes the open web a first-class citizen.
Learn how you can build apps and test them in Firefox on Ubuntu! A big thanks to Guillermo Movia for helping us get some speakers lined up here! I really look forward to seeing some awesome Firefox OS apps getting published as a result of our presence at UbuConLA as I am sure the developers will love what Firefox OS has to offer.
The Peppermint OS is built around a concept that may be unique among desktop environments. It is a hybrid of traditional Linux desktop applications and cloud-based apps.
Using the Ice technology in the Peppermint OS is much like launching an app on an Android phone or tablet. For example, I can launch Google Docs, Gmail, Twitter, Yahoo Mail, YouTube, Pandora or Facebook as if they were self-contained apps on a mobile device — but these pseudo apps never need updating. Ice easily creates a menu entry to launch any website or application as if it were installed.
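Under the hood, a site-specific launcher of the kind Ice creates is just a standard desktop entry. The sketch below is a hypothetical example (the file name, site, and browser command are assumptions, and Ice's actual generated files may differ), assuming Chromium as the installed browser:

```ini
# Hypothetical Ice-style launcher, e.g. saved as
# ~/.local/share/applications/gmail.desktop
[Desktop Entry]
Type=Application
Name=Gmail
# Chromium's --app mode opens the site in its own chromeless window,
# which is what makes it feel like a self-contained app
Exec=chromium-browser --app=https://mail.google.com
Icon=applications-internet
Categories=Network;WebBrowser;
```

Because the launcher just points at the live site, there is nothing local to update, which is exactly the "pseudo apps never need updating" behavior described above.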
This innovative approach puts the latest release of Peppermint OS 5, which appeared in late June, well ahead of the computing curve. It brings cloud apps to the Linux desktop with the ease and flexibility of a Chromebook. It marries that concept to the traditional idea of having installed software that runs without cloud interaction.
Submitted by: Jack M. Germain