Most of the community manager jobs in the Open Source (and Open *) world require the person in the position to know how to develop, as in to write code, rather than to develop a new non-coding project. But my question is: are there any Open * communities that are based not on development but on other things? If so, may I have some examples? I'm looking mainly for Open Science ones, but any will do.
I would also like to gather insights here from non-members of that forum. You can post your answer in the comments section instead of joining the forum and answering there.
I was glad to hear that once she found the KDE-doc-english mailing list, she was encouraged to stick around, get onto IRC, and was guided every step of the way. I was also happy to hear that Yuri, Sune and Jonathan Riddell all made her feel welcome, and showed her where to find the information she needed to make her contributions high quality. When Scarlett showed up in #kubuntu-devel offering to learn to package, I was over the moon with happiness. I really love to see more women involved in free and open source software, and especially in KDE and Kubuntu, my Linux home.
I was a bit sad that the Debian community was not welcoming to her, with Sune the one bright spot. Yay, Sune! (By the way, hire him!) I think she will find a nice home there as well, however, if our plans to do some common packaging between Kubuntu and Debian work out in the future. It was interesting to see the blog post by the systemd developers discussing the same issue we've been considering: the waste of time packaging the same applications and other stuff over and over again. So much wasted work, when we could really be using our time more productively. Rather than working harder, let's work smarter! Check out their blog for their take on the issue: http://0pointer.net/blog/revisiting-how-we-put-together-linux-systems.html
Welcome to Scarlett, who is planning to get her blog up and running again, and on the planets. She'll be saying more about these subjects in the future. Scarlett, and all you other first-time Akademy attendees, a hearty hug of greeting. Have a wonderful time! See me in person for a real hug!
PS: I couldn't post this until now, Sunday morning. The Debian folks here, especially Pinotree have been great! I look forward to our meeting with them on Thursday morning.
Some of the Kubuntu Devs
Talking and hacking in the corridor
Sebas celebrates the release of Plasma 5
David Explains Frameworks 5
Morning exercises led by President Lydia
Konqi and family
Cast your vote by choosing 5 wallpapers that you'd like to see in Lubuntu 14.10.
As we are a bit short on time this time around, we will have to close the poll on the 10th of September.
Please feel free to spread the word, and good luck to all contestants!
I really enjoy couscous, and while it's often best in warm dishes, it makes a great pasta salad too; a pasta salad needn't always be some mayo dressing plus macaroni. Couscous mixed with a bunch of chopped vegetables & herbs in a vinaigrette and served cold makes a great meal (or side dish).
You can make this salad with almost any crisp vegetable you like (or combination of vegetables); you don't have to use all the ones I list in this recipe.
- 2 cups water
- 1 teaspoon salt
- 1 cup couscous
- 2 tablespoons olive oil
- 1 red pepper, chopped
- 1 tomato, chopped
- 1 celery stick, chopped
- 1/2 cucumber, chopped
- 1/2 red onion, chopped
- 1/3 cup black (or kalamata) olives, sliced
- 4 tablespoons feta cheese chopped/crumbled
- handful of parsley, chopped
- 2 tablespoons red wine vinegar
- salt & pepper, to taste
Makes about 4 cups
- Bring the water to a boil, add the salt.
- Remove from heat and add the couscous; stir and let sit until the couscous has absorbed the water.
- Transfer cooked couscous to a large bowl and toss thoroughly with olive oil to keep from sticking.
- Add the chopped red pepper, tomato, celery, cucumber, red onion, black olives, parsley & feta and toss to combine.
- Season with the vinegar, salt & pepper.
- Cover and chill in the refrigerator for at least 2 hours before serving.
Although there is an official manual (fairly well hidden on their website), they do things in a somewhat odd way. The easy option is the normal connection method:
- Tap on the Telecable WIFI network:
- Leave these fields as shown in the screenshot (the email@example.com will be your NIF in lowercase, in the form firstname.lastname@example.org; the password is the one assigned to you, which you can change from here):
- Ignore the certificate warning and tick the option not to be warned again:
My book to read for this trip finally arrived from the library last week, and I could hardly wait to dip into it. I see a profound parallel between the work we do in KDE and the experiences Catmull recounts in his book. He structures it as "lessons learned" as he led one of the most creative teams in both entertainment and technology. His dream was always to marry the two fields, which he has done brilliantly at Pixar. He tried to make a place where you don't have to ask permission to take responsibility. [p. 51]
Always take a chance on better, even if it seems threatening, Catmull says on p. 23. When he hired a person he deemed more qualified for his job than he was, the risk paid off both creatively and personally. Playing it safe is what humans tend to do far too often, especially after they have become successful. Our stone age brains hate to lose, more than they like to win big. Knowing this about ourselves sometimes gives us the courage to 'go big' rather than 'go home.' I have seen us follow this advice in the past year or two, and I hope we have the courage to continue on our brave course.
However, experience showed Catmull that being confident about the value of innovation was not enough. We needed buy-in from the community we were trying to serve. [p. 31] My observation is that the leaders in the KDE community have learned this lesson very well. The collaborative way we develop new ideas, new products, new processes helps get that buy-in. However, we're not perfect. We often lack knowledge of our "end users" -- not our fellow community members, but some of the millions of students, tech workers and just plain computer users. How often do teams schedule testing sessions where they watch users as they try to accomplish tasks using our software? I know we do it, and we need to do it more often.
Some sources rate us as the largest FOSS community. This can be seen as success. This achievement can have hidden dangers, however. When Catmull ran into trouble, in spite of his 'open door' management style, he found that the good stuff was hiding the bad stuff.... When downsides coexist with upsides, as they often do, people are reluctant to explore what's bugging them for fear of being labeled complainers.[p. 63] This is really dangerous. Those downsides are poison, and they must be exposed to the light, dealt with, fixed, or they will destroy a community or a part of a community. On the upside, the KDE community created the Community Working Group (CWG), and empowered us to do our job properly. On the downside, often people hide their misgivings, their irritations, their fears, until they explode. Not only does such an explosion shock the people surrounding the damage, but it shocks the person exploding as well. And afterwards, the most we can do is often damage control, rather than helping the team grow healthier, and find more creative ways to deal with those downsides.
Another danger is that even the smartest people can form an ineffective team if they are mismatched. Focus on how a team is performing, not on the talents of the individuals within it.... Getting the right people and the right chemistry is more important than getting the right idea. [p. 74] One of the important strengths of FOSS teams, and KDE teams in particular, is that people feel free to come and go. If anyone feels walled out, or trapped in, we need to remove those barriers. When people work with those who feed their energy, they in turn can pass that energy along. When the current stops flowing, it's time to do something different. Of course this prevents burnout, but more important, it keeps teams feeling alive, energetic, and fun. "Find, develop, and support good people, and they will find, develop, and own good ideas." [p. 76] I think we instinctively know in KDE that good ideas are common. What is unusual is someone stepping up to make the "good ideas" we are so often given actually happen. Instead, the great stuff happens when someone has an itch, decides to scratch it, and draws others in to help her make that vision become reality.
The final idea I want to present in this post is directed to all the leaders in KDE. This doesn't mean just the board of the e.V., by the way. The leaders in KDE are those who have volunteered to maintain packages, mentor students, moderate the mailing lists and forums, become channel ops in IRC, write the promo articles, release notes and announcements, do the artwork, write the documentation, keep the wikis accurate, helpful and free of spam, organize sprints and other meetings such as Akademy, translate our docs and internationalize our software, design and build in accessibility, staff booths, and take on many other responsibilities such as serving on working groups and other committees. This is a shared responsibility we carry for one another, and it is what keeps our community healthy.
It is management's job to take the long view, to intervene and protect our people from their willingness to pursue excellence at all costs. Not to do so would be irresponsible.... If we are in this for the long haul, we have to take care of ourselves, support healthy habits and encourage our employees to have fulfilling lives outside of work. [p. 77] This is the major task of the e.V. and especially the Board, in my opinion, and of course the task of the CWG as well.
Isn't this stuff great!? I'll be writing more blog posts inspired by this book as I get further into it.
As of today I'm actively looking for a job; if you need a packager or a Linux system administrator, consider the following entry:
Name: Javier López
Location: South America and constantly moving
Willing to relocate: No
Technologies: Elastix, Nagios, SNMP, Smokeping, Proxmox, Vagrant, shell and Python scripting, Logstash, software packaging (deb, rpm).
Contact: echo m+javier-io | tr '+-' '@.'
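For anyone unfamiliar with the tr idiom in the contact line above: tr maps each character of its first set to the corresponding character of the second, so '+' becomes '@' and '-' becomes '.'. A quick sketch on a made-up address (not the author's):

```shell
# tr '+-' '@.' translates '+' -> '@' and '-' -> '.'
# The address below is invented purely for illustration.
echo 'user+example-com' | tr '+-' '@.'
```

Run the author's own line to recover his real address.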
Culture and people matter the most to me. I prefer unix geeks, vim users, open source fanatics, logical thinkers and tool tinkerers. I think I can help most in a DevOps/Sysadmin position.
In this week’s show:-
- We take a look at what’s been happening in the news:
- Google researchers have published a paper on their “Knowledge Vault” project…
- A Kickstarter campaign has been launched for “Operating System U”…
- Twitter has made its analytics tools available to everyone, not just those paying for adverts…
- Snapdeal.com is selling the Intex Cloud Fx phone featuring FirefoxOS for 2000 Rupees…
- Mozilla has rolled out its “Sponsored Tiles” feature in the Firefox Nightly channel…
- Munich councillors have refuted claims that they are to switch back to Microsoft products…
- A group of celebrities have reportedly had nude photos of themselves leaked online after they were stolen, probably, from The Cloud…
- BBC has unveiled a series of resources for teaching computing and programming to children…
- Although Tony is still away, gaming news has returned!
- We take a look at what’s been happening in the community:
- And there’s an event called OggCamp:
- OggCamp 2014 – 4th-5th October – Oxford, UK
- Go take a look at the website for links to all the lovely OggCamp sponsors.
- If you’d like to sponsor Oggcamp 2014, contact details are on the website.
- As is the list of the speakers so far confirmed for the scheduled track.
- There are a few discounted hotel rooms still available. See OggCamp.org for details.
- If you want to help out in any way, contact @oggcamp
We’ll be back next week, when we’ll be discussing whether communities suck, and we’ll go through your feedback.
Please send your comments and suggestions to: email@example.com
Join us on IRC in #uupc on Freenode
Leave a voicemail via phone: +44 (0) 203 298 1600, sip: firstname.lastname@example.org and skype: ubuntuukpodcast
Follow us on Twitter
Find our Facebook Fan Page
Follow us on Google+
So I did this:
$ pex -r 'plainbox' -r 'xlsxwriter' -r 'lxml' \
-e plainbox.public:main -o plainbox
And it worked :-) It's super simple and quite convenient for many things I can think of. If you want to play around with the python3 version, you may want to apply this patch (python3 is still a stranger to many developers :P).
You can also download the resulting PlainBox executable.
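As an aside not found in the original post: for the very simplest case (pure-Python code, no third-party dependencies), newer Python 3 releases (3.5+) ship a stdlib module, zipapp, that also produces a single runnable file. A minimal sketch; pex does far more (dependency resolution, entry points from installed distributions, etc.):

```shell
# Illustrative only: build a tiny single-file app with the standard
# library's zipapp module, then run it.
mkdir -p demoapp
printf 'print("hello from demoapp")\n' > demoapp/__main__.py
python3 -m zipapp demoapp -o demoapp.pyz
python3 demoapp.pyz
```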
In the meantime I would be remiss if I didn't also talk about the different test runners commonly used with autopilot tests. In addition to the autopilot binary which can be executed to run the tests, different tools have cropped up to make running tests easier.
This tool ships with autopilot itself and was developed as a way to run autopilot test suites on your desktop in a sane manner. Run the autopilot3-sandbox-run command with --help to see all the options available. By default, the tests will run in an Xvfb server, all completely behind the scenes with the results being reported to you upon completion. This is a great way to run tests with no interference on your desktop. If you are a visual person like me, you may instead wish to pass -X to enable the test runs to occur in a Xephyr window allowing you to see what's happening, but still retaining control of your mouse and keyboard.
I need this tool!
sudo apt-get install python3-autopilot
I want to run tests on my desktop without losing control of my mouse!
autopilot3-sandbox-run my_testsuite_name
I want to run tests on my desktop without losing control of my mouse, but I still want to see what's happening!
autopilot3-sandbox-run -X my_testsuite_name
Autopkgtest was developed as a means to automatically test Debian packages, "as-installed". Recently, support was added to also test click packages and to run on phablet devices. Autopkgtest will take care of dependencies, setting up autopilot, and unlocking the device. You can literally plug in a device and wait for the results. You should really check out the README pages, including those on running tests. That said, here's a quick primer on running tests using autopkgtest.
I need this tool!
sudo apt-get install autopkgtest
If you are on trusty, grab and install the utopic deb from here.
I want to run tests for a click package installed on my device!
Awesome. This one is simple. Connect the device and then run:
adt-run --click my.click.name --- ssh -s adb
adt-run --click com.ubuntu.music --- ssh -s adb
will run the tests for the installed version of the music app on your device. You don't need to do anything else. For the curious, this works by reading the manifest file all click packages have. Read more here.
I want to run the tests I wrote/modified against an installed click package!
For this you need to also pass your local folder containing the tests. You will also want to make sure you installed the new version of the click package if needed.
adt-run my-folder/ --click my.click.name --- ssh -s adb
Autopkgtest can also run in a lxc container, QEMU, a chroot, and other fun targets. In the examples above, I passed --- ssh -s adb as the target, instructing autopkgtest to use ssh and adb and thus run the tests on a connected phablet device. If you want to run autopilot tests on a phablet device, I recommend using autopkgtest as it handles everything for you.
This tool is part of the greater phablet-tools package. It was originally developed as an easy way to execute tests on your phablet device. Note, however, that copying the tests and any dependencies to the phablet device is left to you. The phablet-tools package provides some other useful utilities to help you with this (check out phablet-click-test-setup, for example).
I need this tool!
sudo apt-get install phablet-tools
I want to run the tests I wrote/modified against an installed click package!
First copy the tests to the device. You can use the Ubuntu SDK or click-buddy for this, or even do it manually via adb. Then run phablet-test-run; it takes the same arguments as autopilot itself.
phablet-test-run -v my_testsuite
Note the tool looks for the test suite and any dependencies of the test suite inside the /home/phablet/autopilot folder. It's up to you to make sure everything needed to run your tests is located there, or else the run will fail.
There are of course other test runners that wrap around autopilot to make executing tests easier. Perhaps you've written a script yourself. Just remember that at the end of the day the autopilot binary will be running the tests. It simply needs to be able to find the test suite and all of its dependencies in order to run. For this reason, don't be afraid to execute autopilot3 and run the tests yourself. Happy test runs!
A few days ago I watched a Q&A session with Linus Torvalds at DebConf 14. One of Linus's main complaints about Linux distributions was the way distributions end up using different versions of libraries than the ones used during application development, and the fact that it's next to impossible to properly support all Linux distributions at the same time due to these kinds of differences.
And now I just discovered a new proposal of the systemd team that basically tries to address this: Revisiting how we put together Linux Systems.
They suggest making extensive use of btrfs subvolumes to host multiple variants of the /usr tree (which is supposed to contain all the invariant system code/data) that you could combine with multiple runtime/framework subvolumes, thanks to filesystem namespaces, and make available to individual applications.
This way of grouping libraries in "runtime subvolumes" reminds me a bit of the concepts of Baserock (they use git instead of btrfs), and while I was a bit dubious about all this (because it goes against quite a few of the principles of distribution integration), I'm beginning to believe there's room for both models to work together.
It would be nice if Debian could become the reference distribution that upstream developers use to develop against Linux. This would in turn mean that when upstreams distribute their applications in this new form, they would provide (or reference) Debian-based subvolumes ready for use by users (even those who are not using Debian as their main OS). And those subvolumes would be managed by the Debian project (probably built automatically from our collection of .debs).
We're still quite far from this goal, but it will be interesting to see this idea mature and become reality. There are plenty of challenges facing us.
The following message was sent as an ALL-CALL to all members of Ubuntu Ohio regardless of their subscription status to the team's Launchpad-based mailing list.
This is an all-call to Ubuntu Ohio.
For the purposes of Ubuntu Global Jam, I would like to schedule regional keysignings. Setting up a single all-state gathering is not looking doable at this time. With our not participating in Ohio Linux Fest this year, setting things up regionally seems appropriate.
For those living in any of the following counties, I want to set up an event in perhaps Kirtland, Kent, or somewhere in Geauga County: Lorain, Cuyahoga, Medina, Summit, Portage, Trumbull, Geauga, Lake, Ashtabula, Wayne, Stark, and Mahoning. It would be ideal if anybody in the community with the proper connections could get us space to meet on Kent State University's Kent Campus. With Kirtland as a meeting point, I can either contact Kirtland Public Library or talk to Lakeland Community College to get space. As for a meeting point in Geauga County, I am open to suggestions.
No, we will not attempt to go for an inconvenient location such as The Lodge at Geneva State Park in Ashtabula County for the group of counties listed above.
For other portions of the state, I am open to ideas for organization as to where you would like to get together and how wide a net you would like to cast. Please contribute those on the community's mailing list at email@example.com
If you are interested in participating in this face-to-face event, you need to express interest on the mailing list and indicate both what county you live in and your preferred date to get together. We need expressions of interest NO LATER THAN 10 PM local time on Friday night. Ubuntu Global Jam runs September 12th-14th, and Monday will be a day to scramble to get meeting places set up. Instructions on what to bring for the keysigning will be provided by Tuesday night, to allow preparation if enough interest is expressed.
For the purposes of this all-call, "enough interest" is defined as a minimum of 3 people, other than the person heading up the keysigning session, committing on-list to attend.
If there are any questions, please contact the Ohio leadership team by way of this contact form: https://launchpad.net/~ubuntu-us-ohio-council/+contactuser
Thank you for your time and patience.
Stephen Michael Kellat
Leader/Point of Contact, Ubuntu Ohio
In case you missed the recent Cloud Austin MeetUp, you have another chance to see the Ubuntu Orange Box live and in action here in Austin!
This time, we're at the OpenStack Austin MeetUp, next Wednesday, September 10, 2014, at 6:30pm at Tech Ranch Austin, 9111 Jollyville Rd #100, Austin, TX!
If you join us, you'll witness all of OpenStack Icehouse, deployed in minutes to real hardware. Not an all-in-one DevStack; not a minimum viable set of components. Real, rich, production-quality OpenStack! Ceilometer, Ceph, Cinder, Glance, Heat, Horizon, Keystone, MongoDB, MySQL, Nova, NTP, Quantum, and RabbitMQ -- intelligently orchestrated and rapidly scaled across 10 physical servers sitting right up front on the podium. Of course, we'll go under the hood and look at how all of this comes together on the fabulous Ubuntu Orange Box.
And like any good open source software developer, I generally like to make things myself, and share them with others. In that spirit, I'll also bring a couple of growlers of my own home brewed beer, Ubrewtu ;-) Free as in beer, of course!

Cheers,
Dustin
We just created a new Ubuntu mailing list called ubuntu-community-team.
As we didn’t have a place like this before, we created it so we can
- have discussions around planning community events
- start all kinds of initiatives around Ubuntu
- allow enthusiasts of the Ubuntu community to kick around new ideas
- bring people from all parts of our community together so we can learn from each other
- hang out and have fun
We are looking forward to seeing you on the list as well, sign up on this page.
Now that the OpenPower sources are available, it's possible to build custom firmware images for OpenPower machines. Here's a little guide to show how that's done.

The build process
OpenPower firmware has a number of different components, and some infrastructure to pull it all together. We use buildroot to do most of the heavy lifting, plus a little wrapper, called op-build.
There's a README file containing build instructions in the op-build git repository, but here's a quick overview:
To build an OpenPower PNOR image from scratch, we'll need a few prerequisites (assuming a recent Ubuntu):

sudo apt-get install cscope ctags libz-dev libexpat-dev libc6-dev-i386 \
    gcc g++ git bison flex gcc-multilib g++-multilib libxml-simple-perl \
    libxml-sax-perl
Then we can grab the op-build repository, along with its git submodules:

git clone --recursive git://github.com/open-power/op-build.git
Next, set up the environment and configure using the "palmetto" machine configuration:

. op-build-env
op-build palmetto_defconfig
After a while (there is quite a bit of downloading to do on the first build), the build should complete successfully, and you'll have a PNOR image built in output/images/palmetto.pnor.
If you have an existing op-build tree around (from colleagues working on OpenPower, perhaps?), you can share or copy the dl/ directory to save on download time.
The op-build command is just a shortcut for a make in the buildroot tree, so the general buildroot documentation applies here too. Just replace "make" with "op-build". For example, we can enable a verbose build with:

op-build V=1

Changing the build configuration
Above, we used palmetto_defconfig as the base buildroot configuration. This defines overall options for the build; things like:
- Toolchain details used to build the image
- Which firmware packages are used
- Which packages are used in the petitboot bootloader environment
- Which kernel configuration is used for the petitboot bootloader environment
This configuration can be changed through buildroot's menuconfig UI. To adjust the configuration:

op-build menuconfig
And buildroot's configuration interface will be shown:
As an example, let's say we want to add the "file" utility to the petitboot environment. To do this, we can navigate to that option in the Target Packages section (Target Packages → Shell and Utilities → file), and enable the option:
Then exit (saving changes) and rebuild:

op-build

The resulting image will have the file command present in the petitboot shell environment.

Kernel configuration
There are a few other configuration targets to influence the build process; the most interesting for our case will be the kernel configuration. Since we use petitboot as our bootloader, it requires a Linux kernel for the initial bootloader environment. The set of drivers in this kernel will dictate which devices you'll be able to boot from.
So, if we want to enable booting from a new device, we'll need to include an appropriate driver in the kernel. To adjust the kernel configuration, use the linux-menuconfig target:

op-build linux-menuconfig
- which will show the standard Linux "menuconfig" interface:
From here, you can alter the kernel configuration. Once you're done, save changes and exit. Then, to build the new PNOR image:

op-build

Customised packages
If you have a customised version of one of the packages used in the OpenPower build, you can easily tell op-build to use your local package. There are a number of package-specific make variables documented in the buildroot generic package reference, the most interesting ones being the _VERSION and _SITE variables.
For example, let's say we have a custom petitboot tree that we want to use for the build. We've committed our changes in the petitboot tree, and want to build a new PNOR image. For the sake of this example, the petitboot commit we'd like to build is git SHA 2468ace0, and our custom petitboot tree is at /home/jk/devel/petitboot.
To build a new PNOR image with this particular petitboot source, we need to specify a few buildroot make variables:

op-build PETITBOOT_SITE=/home/jk/devel/petitboot \
    PETITBOOT_SITE_METHOD=git \
    PETITBOOT_VERSION=2468ace0
This is what these variables are doing:
- PETITBOOT_SITE=/home/jk/devel/petitboot - tells op-build where our custom source tree is. This could be a git URL or a local path.
- PETITBOOT_SITE_METHOD=git - tells op-build that PETITBOOT_SITE is a git tree. If we were using a git:// URL for PETITBOOT_SITE, this variable would be set automatically.
- PETITBOOT_VERSION=2468ace0 - tells op-build which version of petitboot to checkout. This can be any commit reference that git understands.
The same method can be used for any of the other packages used during build. For OpenPower builds, you may also want to use the SKIBOOT_* and LINUX_* variables to include custom skiboot firmware and kernel in the build.
If you'd prefer to test new sources without committing to git, you can use _SITE_METHOD=local. This will copy the source tree (defined by _SITE) to the buildroot tree and use it directly. For example:

op-build SKIBOOT_SITE=/home/jk/devel/skiboot \
    SKIBOOT_SITE_METHOD=local
- will build the current (and not-necessarily-committed) sources in /home/jk/devel/skiboot. Note that buildroot has no way to tell whether your code has changed with _SITE_METHOD=local. If you re-build with this, it's safer to clean the relevant source tree first:

op-build skiboot-dirclean
This is my monthly summary of my free software related activities. If you’re among the people who made a donation to support my work (65.55 €, thanks everybody!), then you can learn how I spent your money. Otherwise it’s just an interesting status update on my various projects.

Distro Tracker
Even though I was officially on vacation during 3 of the 4 weeks of August, I spent many nights working on Distro Tracker. I’m pleased to have managed to bring back Python 3 compatibility across the whole (tested) code base. The full test suite now passes with Python 3.4 and Django 1.6 (or 1.7).
From now on, I’ll run “tox” on all submitted code to make sure that we won’t regress on this point. tox also runs flake8 for me so that I can easily detect when submitted code doesn’t respect the PEP 8 coding style. It also catches other interesting mistakes (like unused variables or overly complex functions).
Getting the code to pass flake8 was also a major effort; it resulted in a huge commit (89 files changed, 1763 insertions, 1176 deletions).
Thanks to the extensive test suite, all this refactoring resulted in only two regressions, which I fixed rather quickly.
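For readers who haven't used tox: a minimal tox.ini along these lines drives that kind of matrix. This is purely illustrative (the environment names and the manage.py test entry point are my assumptions); the actual Distro Tracker configuration may differ:

```ini
; Illustrative tox.ini: run the test suite against several
; Python/Django combinations, plus a separate flake8 environment.
[tox]
envlist = py34-django16, py34-django17, flake8

[testenv]
deps =
    django16: Django>=1.6,<1.7
    django17: Django>=1.7,<1.8
commands = {envpython} manage.py test

[testenv:flake8]
deps = flake8
commands = flake8 .
```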
Some statistics: 51 commits over the last month, 41 by me, 3 by Andrew Starr-Bochicchio, 3 by Christophe Siraut, 3 by Joseph Herlant and 1 by Simon Kainz. Thanks to all of them! Their contributions ported some features that were already available in the old PTS. The new PTS now warns of upcoming auto-removals, displays problems with upstream URLs, includes a short package description in the page title, and provides a link to screenshots (if they exist on screenshots.debian.net).
We still have plenty of bugs to handle, so you can help too: check out https://tracker.debian.org/docs/contributing.html. I always leave easy bugs for others to handle, so grab one and get started! I’ll review your patch with pleasure.

Tryton
I wasn’t able to attend this year, but thanks to the awesome work of the video team, I watched some videos (and I still have a bunch that I want to see). Some of them were put online the day after they had been recorded. Really amazing work!

Django 1.7
After the initial bug reports, I got some feedback from maintainers who feared that it would be difficult to get their packages working with Django 1.7. I helped them as best I could by providing patches (for horizon, for django-restricted-resource, for django-testscenarios).
Since I expected many maintainers to be not very proactive, I rebuilt all the packages with Django 1.7 to detect at least those that would fail to build. I tagged all the corresponding bug reports as confirmed.
Looking at https://firstname.lastname@example.org;tag=django17, one can see that some progress has been made, with 25 packages fixed. Still, at least 25 others remain problematic in sid, and 35 have not been investigated at all (except for the automatic rebuild that passed). Again, your help is more than welcome!
It’s easy to install python-django 1.7 from experimental and then try to use/rebuild the packages from the above list.

Dpkg translation
With the freeze approaching, I wanted to ensure that dpkg was fully translated into French. I thus pinged email@example.com and merged some translations that had been done by volunteers. Unfortunately, it looks like nobody really stepped up to maintain it in the long run… so I did the required update myself when dpkg 1.17.12 was uploaded.
Is there anyone willing to manage dpkg’s French translation? With the latest changes in 1.17.13, we have again a few untranslated strings:
$ for i in $(find . -name fr.po); do echo $i; msgfmt -c -o /dev/null --statistics $i; done
1083 translated messages, 4 fuzzy translations, 1 untranslated message.
268 translated messages, 3 fuzzy translations.
545 translated messages.
2277 translated messages, 8 fuzzy translations, 3 untranslated messages.
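A small awk sketch, not from the original post, that totals the fuzzy and untranslated counts from msgfmt --statistics lines like the ones above:

```shell
# Sum the fuzzy and untranslated counts from msgfmt --statistics output.
stats='1083 translated messages, 4 fuzzy translations, 1 untranslated message.
268 translated messages, 3 fuzzy translations.
545 translated messages.
2277 translated messages, 8 fuzzy translations, 3 untranslated messages.'
summary=$(printf '%s\n' "$stats" | awk '
    { for (i = 1; i < NF; i++) {
        if ($(i+1) ~ /^fuzzy/)        fuzzy += $i
        if ($(i+1) ~ /^untranslated/) untrans += $i
      } }
    END { print fuzzy " fuzzy, " untrans " untranslated" }')
echo "$summary"
```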
I made an xsane QA upload (the package is currently orphaned) to drop the (build-)dependency on liblcms1 and avoid getting it removed from Debian testing (see #745524). For the record, how-can-i-help warned me of this after a dist-upgrade.
With the Django 1.7 work and the need to open up an experimental branch, I decided to switch python-django’s packaging to git even though the current team policy is to use subversion. This triggered (once more) the discussion about a possible switch to git, and I was pleased to see more enthusiasm this time around. Barry Warsaw tested a few workflows, shared his feelings and pushed for a live discussion of the switch during Debconf. It looks like it might happen for good this time. I contributed my share in the discussions on the mailing list.

Thanks
See you next month for a new summary of my activities.