news aggregator

Arthur Schiwon: LDAP in ownCloud 6.0.3: improved performance and more

Planet Ubuntu - Fri, 2014-04-25 11:00

In about a week we release version 6.0.3 of ownCloud Community Edition! It is a maintenance release that includes nearly two months of improvements (including performance improvements) and some fixes. I want to use this opportunity to shed some light on the fixes the LDAP back-end has seen.

There are no big changes, but the performance improvements in sharing-related methods and in the configuration wizard will significantly enhance the experience for end users and admins. The following list is not complete, but covers the most notable changes.

Faster user retrieval in sharing dialogue

By optimizing how the display name is fetched and cached, additional per-user queries to the LDAP server are no longer necessary. Fortunately, this was low-hanging fruit, because we already requested the attribute in the original search query. The missing piece was to push the value to the LDAP cache. The result, obviously, is that users appear faster in the share dialogue and the number of LDAP queries is reduced.

Regular updates of email (and quota)

Users really appreciate the feature that sends email notifications when files are shared. But they found out that LDAP users were not notified, even though the email attribute was configured properly. Well, yes: the email address was only fetched upon login, so if a user had never logged in, the address was not known to ownCloud. Previously this was acceptable, as there was no big use for the email address anyway, but things are different today.

Now, user details like the quota and the email address are fetched when the user is first mapped (a one-time event) and refreshed on the regular user-exists check (utilizing the LDAP cache). So the email address will be accurate whenever a notification is sent.

More reliable Configuration Wizard

The LDAP Wizard has seen two major improvements. First, when determining the object classes in the User and Group Filter tabs, it no longer looks at every available object; that was a nasty mistake caused by a missing limit. Now only three LDAP objects are examined, which reduces the detection time massively, especially with bigger LDAP setups.

Another issue has been a race condition that could lead to a reset (respectively automatic compilation) of the LDAP filters. No undesired surprises any more.

More accurate reporting

Do you know the ownCloud command line client? It gives the administrator tools for managing ownCloud that are handy to have outside of (not only in) the web interface. There is also a method to get the total number of users: user:report. In LDAP we need to count the whole result set for this. If available (depending on the PHP version and the LDAP server configuration), we work with paged results. Well, we should, and as of this release we do in this case as well.

This allows us to get an exact total from Active Directory. For OpenLDAP, however, the size limit configured on the LDAP server is the maximum number of results we can get. This is because OpenLDAP follows a suggestion of the awkward RFC 2696 (section 6) and AD does not (guess who wrote the RFC).
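
To illustrate the paged results technique itself, here is a minimal Python sketch using the ldap3 library; this is an illustration of the control, not ownCloud's actual PHP code, and the server, credentials and filter are placeholders:

# Illustration only: count a whole result set with the RFC 2696
# paged results control. Connection details are placeholders.
from ldap3 import Server, Connection, SUBTREE

conn = Connection(Server("ldap.example.com"),
                  "cn=admin,dc=example,dc=com", "secret", auto_bind=True)
total = 0
for _entry in conn.extend.standard.paged_search(
        search_base="dc=example,dc=com",
        search_filter="(objectClass=inetOrgPerson)",
        search_scope=SUBTREE,
        attributes=["cn"],
        paged_size=500,
        generator=True):
    total += 1
# On AD this yields the exact total; on OpenLDAP the server-side
# size limit still caps what we can see.
print(total)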

FreeIPA compatibility

Good news for FreeIPA users: Robin McCorkell (thank you!) added support for the UUID attribute used in FreeIPA so the configuration will work right out of the box without any changes in the expert settings.

ownCloud 6.0.3 RC

ownCloud 6.0.3 is currently in the Release Candidate stage. With so many different setups out there in the wild we always appreciate testers. So, if you have some time left, please get it and poke around! Also, the temporary changelog is available.

Tags: ownCloud, Planet OwnCloud, Planet Ubuntu

Rhonda D'Vine: Que[e]rbau

Planet Ubuntu - Fri, 2014-04-25 10:41

I'm moving. Well, not right here, right now; rather, in less than two years. But I already know what my flat will look like and was able to influence that decision. And there will be more to influence, like what to do with the common rooms in the building, or what to put in the garden (voting for climbing facilities for my son, of course!). It's the kind of co-housing project where you already know your neighbours beforehand and can find common ground for decisions like that.

The co-housing project I'm moving to is called Que[e]rbau. And it will be living up to its name. It is specifically aimed at people who live tolerance and acceptance, and who potentially live a somewhat alternative lifestyle, defining their own identity; but it's not limited to those. There will also be conventional families living there who specifically don't want to raise their kids in a conservative environment.

When I told someone about this plan, they asked me if I really wanted to do that. Their concerns were about my son and whether the house wouldn't become a target. I was puzzled at first, given that we have had the Rainbow Parade, the Life Ball and most of all the Rosa Lila Villa in Vienna for several years, and I'm not aware of any bigger disturbances they cause; rather the opposite.

After thinking about it for a while, it sounded a bit like the wish for a Don't Ask Don't Tell environment. Recently there was a great documentary by Vice on YouTube about Young and Gay in Putin's Russia (watch all five parts of it, it's worth it). In light of that, I don't think hiding improves the situation; rather the opposite. Not speaking about it doesn't improve acceptance. And actually, I was approached by at least one person during the Debian Women MiniDebConf about how brave I am considered. I'm not sure it really is brave; I just don't want to lie to myself anymore, and I have very rarely had troubles because of that. The more open and natural you behave, the less of a confrontation area you leave, and people notice that.

Not totally unrelated to that, I created myself a new GPG key. It doesn't carry my official name anymore but just the name I prefer to be addressed with: Rhonda. It also carries a last name you might not have heard yet (it was adopted on the Discworld MUD several years ago, even before I wrote Mermaids; actually in connection with the person who partly triggered the poem), which is the reason I added a plain Rhonda UID to it for those who aren't aware of the last name. I will submit that key to keysigning parties from now on, and it is of course up to you whether you feel comfortable signing it.

Svetlana Belkin: My Dream Job

Planet Ubuntu - Fri, 2014-04-25 00:26

I tweeted this earlier today:

@openscience: Is there any #OpenSource #Science org that need some help in community management? I’m willing to learn how to help.

— Svetlana Belkin (@Barsookain) April 24, 2014

and I thought I should explain in more depth what my dream job is. Hopefully, what I write down is not that far-fetched for a job that exists.

What I want to do is tie together my hobbies (computers, Ubuntu/FOSS, and the sense of community in these communities and what I do for them) and the degree that I'm getting, a BS in biology with a focus on molecular and cellular biology. The closest thing that I have in mind is a Community Manager type of job, just like the one Jono Bacon is looking for. I want to use the other skills that I gained from being involved with the Ubuntu Community, mainly running a WordPress blog, editing MoinMoin wiki pages, and driving projects. The only skills I lack are coding/scripting and the command line, but I'm willing to learn those.

Even though I manage a team in the Ubuntu Community called Ubuntu Scientists, the team's aim is different from my dream job, since its aim is to build a network of scientists who use Ubuntu/Linux and provide resources to help them. Also, I hate to say this, but I want to be paid for my work so I can make a living.

While money is an issue, it's not the only one. The other major issue is that I don't think I would be happy as a lab tech or even (if I go for my Masters or PhD) as a researcher alone. I want to do both: community management and working as a biologist.

If you have a position, please contact me at belkinsa@ubuntu.com or comment below. You may also make a connection with me on LinkedIn.


Ubuntu Podcast from the UK LoCo: S07E04 – The One with All the Haste

Planet Ubuntu - Thu, 2014-04-24 19:30

We’re back with Season Seven, Episode Four of the Ubuntu Podcast! Alan Pope, Mark Johnson, Tony Whitmore, and Laura Cowen are drinking tea and eating Simnel cake in Studio L.

In this week’s show:

We’ll be back next week, so please send your comments and suggestions to: podcast@ubuntu-uk.org
Join us on IRC in #uupc on Freenode
Leave a voicemail via phone: +44 (0) 203 298 1600, sip: podcast@sip.ubuntu-uk.org and skype: ubuntuukpodcast
Follow our twitter feed http://twitter.com/uupc
Find our Facebook Fan Page
Follow us on Google Plus

Daniel Pocock: Android betrays tethering data

Planet Ubuntu - Thu, 2014-04-24 17:06

When I upgraded an Android device the other day, I found that tethering completely stopped working. The updated CyanogenMod had inherited a new bug from Android, informing the carrier that I was tethering. The carrier, Vodafone Italy, had decided to make my life miserable by blocking that traffic. I had a closer look and managed to find a workaround.

There is absolutely no difference, from a technical perspective, between data transmitted by an application on the mobile device itself and data transmitted through tethering. Revealing the use of tethering to the carrier is a massive breach of privacy; yet comments in the Google bug tracker suggest it is a feature rather than a bug. This little table helps put that logic in perspective:

Product: the person who carries the handset
User: the mobile network, which wants to discriminate against some types of network traffic to squeeze more money out of the Product
Feature: revealing private information about the way the Product uses his/her Internet, so that the real User can profit

It is also bad news for the environment: many people are being tricked into buying unneeded USB dongle modems that will end up in the rubbish in one or two years' time, when their contract expires and the company pushes them to upgrade to the next best thing.

Behind the scenes

What does it really mean in practice? How does Android tell your carrier which data comes from tethering?

As my device is rooted and as it is my device and I will do what I want with it, I decided to have a look inside.

The ip command revealed that there are now two network devices, rmnet_usb0 and rmnet_usb1. The basic ip route command reveals that traffic from different source addresses is handled differently and split over the two network devices:

shell@android:/ # ip route
0.0.0.0/1 dev tun0  scope link
default via 100.66.150.89 dev rmnet_usb0
83.224.66.138 via 100.87.31.214 dev rmnet_usb1
83.224.70.94 via 100.87.31.214 dev rmnet_usb1
100.66.150.88/30 dev rmnet_usb0  proto kernel  scope link  src 100.66.150.90
100.66.150.89 dev rmnet_usb0  scope link
100.87.31.212/30 dev rmnet_usb1  proto kernel  scope link  src 100.87.31.213
100.87.31.214 dev rmnet_usb1  scope link
128.0.0.0/1 dev tun0  scope link
192.168.42.0/24 dev rndis0  proto kernel  scope link  src 192.168.42.129

I then looked more closely and found that there is also an extra routing table; it can be found with ip rule:

shell@android:/ # ip rule show
0:      from all lookup local
32765:  from 192.168.42.0/24 lookup 60
32766:  from all lookup main
32767:  from all lookup default
shell@android:/ # ip route show table 60
default via 100.87.51.57 dev rmnet_usb1
100.87.51.57 dev rmnet_usb1
192.168.42.0/24 dev rndis0  scope link

In this routing table, it is obvious that data from the tethering subnet (192.168.42.0/24) is sent over the extra device rmnet_usb1.

Manually cleaning it up

If the phone is rooted, it is possible to very quickly change the routing table to get all the tethering traffic going through the normal rmnet_usb0 interface again.

It is necessary to get rid of that alternative routing table first:

ip rule del pref 32765

and then update the iptables entries that refer to interface names:

iptables -t nat -I natctrl_nat_POSTROUTING -s 192.168.0.0/16 -o rmnet_usb0 -j MASQUERADE
iptables -I natctrl_FORWARD -i rmnet_usb0 -j RETURN

This immediately resolved the problem for me on the Vodafone network in Italy.

Conclusion

If Google can be bullied into accepting this kind of discriminatory routing in the stock Android builds and it can even propagate into CyanogenMod, then I'm just glad I'm not running one of those Android builds that has been explicitly "enhanced" by a carrier.

It raises big questions about who really is the owner and user of the device and who is receiving the value when a person pays money to "buy" a device.

Canonical Design Team: Latest from the web team — April 2014

Planet Ubuntu - Thu, 2014-04-24 13:57

Ubuntu 14.04 LTS is out and it’s great! The period after release tends to be slightly less hectic than the lead up to it, but that doesn’t mean that the web team is not as busy as ever.

In the last few weeks we’ve worked on:

  • Ubuntu 14.04 LTS release: we’ve published the latest updates to www.ubuntu.com that go alongside the latest release of Ubuntu
  • Ubuntu.com: ubuntu.com is now responsive! Stay tuned for a more in-depth post on this, and keep following our series on how we made ubuntu.com responsive; we’ve also launched a new and improved cloud section
  • Juju GUI: we’ve moved the inspector to the left of the screen, which should be live in the coming weeks, and we’re finalising user research
  • Fenchurch: we moved downloads, contributions and search to Fenchurch, so we’re now effectively off our old Drupal site, with a better geolocation solution for download mirrors
  • Ubuntu Resources: we’ve released the beta version for large screen sizes of Ubuntu Resources
  • Future of Web Design: I attended and spoke at the Future of Web Design conference, in London, where I talked about letting mechanisation into our work as web designers, and how we can move further in our profession

And we’re currently working on:

  • Responsive ubuntu.com: we’re currently working on tweaks and improvements following the release on 17 April
  • Web style guide: we’re updating the Ubuntu web style guide (still in alpha) to reflect the changes from making www.ubuntu.com responsive
  • Ubuntu Resources: we’re currently working on making the transition from Ubuntu Resources to Ubuntu Insights, after that we’ll be working on creating a press centre on the new Ubuntu Insights
  • Fenchurch: we’re working on a new front-end for our asset server and upgrading the ubuntu.com CMS to the version running www.canonical.com
  • Las Vegas sprint: a few of us are travelling to the USA next week for some intense Juju planning and work
  • Legal pages: we’re in the process of defining the information architecture and wireframing for a new hub that will hold all our legal information
  • Partners: we’re finalising wireframes and content for a new Ubuntu partners site

And, if you’d like to join the web team, we are currently looking for an experienced user experience designer to join us! Send us an email if you’d like to apply.

Delicious treats on release day

Do you have any questions or suggestions for us? Would you like to hear about any of these projects and tasks in more detail? Let us know your thoughts in the comments.

Robie Basak: New in Ubuntu 14.04: Apache 2.4

Planet Ubuntu - Thu, 2014-04-24 13:13

Ubuntu 14.04 ships with Apache 2.4, which is a significant upgrade over Apache 2.2 as found in 12.04.

Apache 2.4 actually first appeared in 13.10, though of course if you intend to do an LTS to LTS upgrade, you won't notice this until now.

If you have a default configuration, then everything should upgrade automatically.

Of course, server deployments typically do not run on defaults. In this case, there are significant changes of which you should be aware. Expect the apache2 postinst script to fail to restart Apache after the upgrade. You'll need to fix up your own customisations to meet the requirements in Apache 2.4 and then run sudo dpkg --configure -a and sudo apt-get -f install to recover. Be sure to back up your system before you begin.

Instead of upgrading, you may want to consider this as an opportunity to enter the new world of automated deployments. Codify your deployment, and then test and deploy a fresh instance of Apache on 14.04 instead, using virtual machines as needed. This is far less stressful than trying to upgrade an existing production system!

Upstream changes

You will need to update any custom configuration according to the latest upstream configuration syntax.

See upstream's document "Upgrading to 2.4 from 2.2" for details of required configuration changes. Authorization and access control directives have changed, and will likely need adjustment. Various defaults have also changed.

Significant packaging changes

The default path to served files has changed from /var/www to /var/www/html, mainly for security reasons. See the debian-devel thread "Changing the default document root for HTTP server" for details.

The packaging has been overhauled quite significantly. /etc/apache2/conf.d/ is now /etc/apache2/conf-available/ and /etc/apache2/conf-enabled/, to match the existing sites-enabled/ and mods-enabled/ mechanisms.

Before you upgrade, I suggest that you first make sure that everything in /etc/apache2/*-enabled is correctly a symlink to the corresponding entry in /etc/apache2/*-available. Note that all configurations in sites-enabled and conf-enabled need a .conf suffix now.
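
If you would rather script that check than eyeball it, a minimal sketch along these lines (my own illustration, using the paths described above) will flag stray entries:

#!/usr/bin/env python
# Sketch: flag entries in /etc/apache2/*-enabled that are not symlinks,
# and enabled sites/configs that lack the now-required .conf suffix.
import os

for kind in ("conf", "sites", "mods"):
    enabled = "/etc/apache2/%s-enabled" % kind
    if not os.path.isdir(enabled):
        continue
    for entry in sorted(os.listdir(enabled)):
        path = os.path.join(enabled, entry)
        if not os.path.islink(path):
            print("not a symlink: %s" % path)
        elif kind != "mods" and not entry.endswith(".conf"):
            print("missing .conf suffix: %s" % path)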

Make use of the a2enmod, a2ensite and a2enconf series of tools! These help you easily manage the symlinks in *-enabled pointing at *-available.

See Debian's apache2 packaging NEWS file for full details.

Other Notes

Debian changed the default "It works!" page into a comprehensive page explaining where to go after an initial installation. Initially, I imported this into Ubuntu without noticing the change. Thank you to Andreas Hasenack for pointing out, in bug 1288690, that the page referred to Debian and the Debian bug tracker in a misleading way. I fixed this in Ubuntu by essentially doing a s/Debian/Ubuntu/g and crediting Debian appropriately instead.

Thanks

I think the Apache 2.4 packaging is a shining example of complex packaging done well. All credit is due to Stefan Fritsch and Arno Töll, the Debian maintainers of the Apache packaging. They have done the bulk of the work involved in this update.

Getting help

As always, see Ubuntu's main page on community support options. askubuntu.com, #ubuntu-server on IRC (Freenode) and the Ubuntu Server mailing list are appropriate venues.

Costales: "Folder Color" app: Change the color of your folders in Ubuntu

Planet Ubuntu - Wed, 2014-04-23 17:29
A simple, easy, fast and useful app! Change the color of your folders in Nautilus in a really easy way, so that you can get a better visual layout!

Folder Color in Ubuntu
How to install? Just enter this command into a Terminal, logout and enjoy it!
sudo add-apt-repository ppa:costales/folder-color ; sudo apt-get update ; sudo apt-get install folder-color -y

More info.

Mark Shuttleworth: U talking to me?

Planet Ubuntu - Wed, 2014-04-23 17:16

This upstirring undertaking Ubuntu is, as my colleague MPT explains, performance art. Not only must it be art, it must also perform, and that on a deadline. So many thanks and much credit to the teams and individuals who made our most recent release, the Trusty Tahr, into the gem of 14.04 LTS. And after the uproarious ululation and post-release respite, it’s time to open the floodgates to umpteen pent-up changes and begin shaping our next show.

The discipline of an LTS constrains our creativity – our users appreciate the results of a focused effort on performance and stability and maintainability, and we appreciate the spring cleaning that comes with a focus on technical debt. But the point of spring cleaning is to make room for fresh ideas and new art, and our next release has to raise the roof in that regard. And what a spectacular time to be unleashing creativity in Ubuntu. We have the foundations of convergence so beautifully demonstrated by our core apps teams – with examples that shine on phone and tablet and PC. And we have equally interesting innovation landed in the foundational LXC 1.0, the fastest, lightest virtual machines on the planet, born and raised on Ubuntu. With an LTS hot off the press, now is the time to refresh the foundations of the next generation of Linux: faster, smaller, better scaled and better maintained. We’re in a unique position to bring useful change to the ubiquitary Ubuntu developer, that hardy and precise pioneer of frontiers new and potent.

That future Ubuntu developer wants to deliver app updates instantly to users everywhere; we can make that possible. They want to deploy distributed brilliance instantly on all the clouds and all the hardware. We’ll make that possible. They want PAAS and SAAS and an Internet of Things that Don’t Bite, let’s make that possible. If free software is to fulfil its true promise it needs to be useful for people putting precious parts into production, and we’ll stand by our commitment that Ubuntu be the most useful platform for free software developers who carry the responsibilities of Dev and Ops.

It’s a good time to shine a light on umbrageous if understandably imminent undulations in the landscape we love – time to bring systemd to the centre of Ubuntu, time to untwist ourselves from Python 2.x and time to walk a little uphill and, thereby, upstream. Time to purge the ugsome and prune the unusable. We’ve all got our ucky code, and now’s a good time to stand united in favour of the useful over the uncolike and the utile over the uncous. It’s not a time to become unhinged or ultrafidian, just a time for careful review and consideration of business as usual.

So bring your upstanding best to the table – or the forum – or the mailing list – and let’s make something amazing. Something unified and upright, something about which we can be universally proud. And since we’re getting that once-every-two-years chance to make fresh starts and dream unconstrained dreams about what the future should look like, we may as well go all out and give it a dreamlike name. Let’s get going on the utopic unicorn. Give it stick. See you at vUDS.

Svetlana Belkin: vBlog Teaser

Planet Ubuntu - Wed, 2014-04-23 13:54

I’m thinking of doing a vBlog about Ubuntu and other things:


Adam Stokes: new juju plugin: juju-sos

Planet Ubuntu - Wed, 2014-04-23 05:52

Juju sos is my entryway into Go code and the juju internals. This plugin will execute and pull sosreports from all machines known to juju, or from a specific machine of your choice, and copy them locally onto your machine.

An example of what this plugin does: first, some output of juju status to give you an idea of the machines I have:

┌[poe@cloudymeatballs] [/dev/pts/1]
└[~]> juju status
environment: local
machines:
  "0":
    agent-state: started
    agent-version: 1.18.1.1
    dns-name: localhost
    instance-id: localhost
    series: trusty
  "1":
    agent-state: started
    agent-version: 1.18.1.1
    dns-name: 10.0.3.27
    instance-id: poe-local-machine-1
    series: trusty
    hardware: arch=amd64 cpu-cores=1 mem=2048M root-disk=8192M
  "2":
    agent-state: started
    agent-version: 1.18.1.1
    dns-name: 10.0.3.19
    instance-id: poe-local-machine-2
    series: trusty
    hardware: arch=amd64 cpu-cores=1 mem=2048M root-disk=8192M
services:
  keystone:
    charm: cs:trusty/keystone-2
    exposed: false
    relations:
      cluster:
      - keystone
      identity-service:
      - openstack-dashboard
    units:
      keystone/0:
        agent-state: started
        agent-version: 1.18.1.1
        machine: "2"
        public-address: 10.0.3.19
  openstack-dashboard:
    charm: cs:trusty/openstack-dashboard-0
    exposed: false
    relations:
      cluster:
      - openstack-dashboard
      identity-service:
      - keystone
    units:
      openstack-dashboard/0:
        agent-state: started
        agent-version: 1.18.1.1
        machine: "1"
        open-ports:
        - 80/tcp
        - 443/tcp
        public-address: 10.0.3.27

Basically, what we are looking at is 2 machines that are running various services, in my case OpenStack Horizon and Keystone. Now suppose I have some issues with my juju machines and OpenStack, and I need a quick way to gather a bunch of data on those machines and send it to someone who can help. With my juju-sos plugin, I can quickly gather sosreports from each of the machines I care about with as little typing as possible.

Here is the output from juju sos querying all machines known to juju:

┌[poe@cloudymeatballs] [/dev/pts/1]
└[~]> juju sos -d ~/scratch
2014-04-23 05:30:47 INFO juju.provider.local environprovider.go:40 opening environment "local"
2014-04-23 05:30:47 INFO juju.state open.go:81 opening state, mongo addresses: ["10.0.3.1:37017"]; entity ""
2014-04-23 05:30:47 INFO juju.state open.go:133 dialled mongo successfully
2014-04-23 05:30:47 INFO juju.sos.cmd cmd.go:53 Querying all machines
2014-04-23 05:30:47 INFO juju.sos.cmd cmd.go:59 Adding machine(1)
2014-04-23 05:30:47 INFO juju.sos.cmd cmd.go:59 Adding machine(2)
2014-04-23 05:30:47 INFO juju.sos.cmd cmd.go:88 Capturing sosreport for machine 1
2014-04-23 05:30:55 INFO juju.sos main.go:119 Copying archive to "/home/poe/scratch"
2014-04-23 05:30:56 INFO juju.sos.cmd cmd.go:88 Capturing sosreport for machine 2
2014-04-23 05:31:08 INFO juju.sos main.go:119 Copying archive to "/home/poe/scratch"
┌[poe@cloudymeatballs] [/dev/pts/1]
└[~]> ls $HOME/scratch
sosreport-ubuntu-20140423040507.tar.xz  sosreport-ubuntu-20140423052125.tar.xz  sosreport-ubuntu-20140423052545.tar.xz
sosreport-ubuntu-20140423050401.tar.xz  sosreport-ubuntu-20140423052223.tar.xz  sosreport-ubuntu-20140423052600.tar.xz
sosreport-ubuntu-20140423050727.tar.xz  sosreport-ubuntu-20140423052330.tar.xz  sosreport-ubuntu-20140423052610.tar.xz
sosreport-ubuntu-20140423051436.tar.xz  sosreport-ubuntu-20140423052348.tar.xz  sosreport-ubuntu-20140423053052.tar.xz
sosreport-ubuntu-20140423051635.tar.xz  sosreport-ubuntu-20140423052450.tar.xz  sosreport-ubuntu-20140423053101.tar.xz
sosreport-ubuntu-20140423052006.tar.xz  sosreport-ubuntu-20140423052532.tar.xz

Another example of juju sos just capturing a sosreport from one machine:

┌[poe@cloudymeatballs] [/dev/pts/1]
└[~]> juju sos -d ~/scratch -m 2
2014-04-23 05:41:59 INFO juju.provider.local environprovider.go:40 opening environment "local"
2014-04-23 05:42:00 INFO juju.state open.go:81 opening state, mongo addresses: ["10.0.3.1:37017"]; entity ""
2014-04-23 05:42:00 INFO juju.state open.go:133 dialled mongo successfully
2014-04-23 05:42:00 INFO juju.sos.cmd cmd.go:70 Querying one machine(2)
2014-04-23 05:42:00 INFO juju.sos.cmd cmd.go:88 Capturing sosreport for machine 2
2014-04-23 05:42:08 INFO juju.sos main.go:99 Copying archive to "/home/poe/scratch"

Fancy, fancy

Of course this is a work in progress and I have a few ideas of what else to add here, some of those being:

  • Rename the sosreports to match the dns-name of the juju machine
  • Filter sosreport captures based on services
  • Optionally pass arguments to the sosreport command in order to filter out specific plugins I want to run, e.g.:

    $ juju sos -d ~/sosreport -- -b -o juju,maas,nova-compute

As usual, contributions are welcome, and installation instructions are located in the README.

Mario Limonciello: IR Receiver extension for Ambilight raspberry pi clone

Planet Ubuntu - Wed, 2014-04-23 05:48
After working with my ambilight clone for a few days, I discovered the biggest annoyance was that it wouldn't turn off after I turned off the TV.  I had some ideas on how I could remotely trigger it from the phone or from an external HTPC, but I really wanted a self-contained solution in case I decided to swap the HTPC for a FireTV or a Chromecast.

This brought me to trying to do it directly via my remote.  My HTPC uses a mceusb, so I was tempted to just get another mceusb for the pi.  That would have been overkill though; the pi has tons of unused GPIOs, so it can be done far more simply (and cheaply).

I looked into it and discovered that someone actually already wrote a kernel module that directly controls an IR sensor on a GPIO.  The kernel module is based on the existing lirc_serial module, but adapted specifically for the raspberry pi.  (See http://aron.ws/projects/lirc_rpi/ for more information)

Hardware
All that's necessary is a 38 kHz IR sensor.  You'll spend under $5 on one from Amazon (plus some shipping), or you can get one from Radio Shack if you want something quick and local.  I spent $4.87 on one at my local Radio Shack.

The sensor is really simple: 3 pins, all of which are available on the pi's header.  One goes to the 3.3V rail, one to ground, and one to a spare GPIO.  There are a few places on the header that you can use for each; just make sure you match up the pinout to the sensor you get.  I chose to use GPIO 22 as it's most convenient for my Lego case.  The lirc_rpi module defaults to GPIO 18.

Some notes to keep in mind:

  1. While soldering it, be cognizant of which way you want the sensor to face so that it can be accessed from the remote.  
  2. Remember that you are connecting to 3.3V and Ground from the Pi header.  The ground connection won't be the same as your rail that was used to power the pi if you are powering via USB.  
  3. The GPIO pins are not rated for 5V, so be sure to connect to the 3.3V.



Software
LIRC is available directly in the raspbian repositories.  Install it like this:

# sudo apt-get install lirc

Manually load the module so that you can test it.
# sudo modprobe lirc_rpi gpio_in_pin=22
Now use mode2 to test that it's working.  Once you run the command, press some buttons on your remote; you should see output about space, pulse and other stuff.  Once you're satisfied, press ctrl-c to exit.
# mode2 -d /dev/lirc0
Now, add the modules that need to be loaded to /etc/modules.  If you are using a different GPIO than 18, specify it here again.  This will make sure that lirc_rpi loads on boot.

/etc/modules

lirc_dev
lirc_rpi gpio_in_pin=22


Now modify /etc/lirc/hardware.conf to match this configuration to make it work for the rpi:
/etc/lirc/hardware.conf

# /etc/lirc/hardware.conf
#
# Arguments which will be used when launching lircd
LIRCD_ARGS="--uinput"

# Don't start lircmd even if there seems to be a good config file
#START_LIRCMD=false

# Don't start irexec, even if a good config file seems to exist.
#START_IREXEC=false

# Try to load appropriate kernel modules
LOAD_MODULES=true

# Run "lircd --driver=help" for a list of supported drivers.
DRIVER="default"
# usually /dev/lirc0 is the correct setting for systems using udev
DEVICE="/dev/lirc0"
MODULES="lirc_rpi"

# Default configuration files for your hardware if any
LIRCD_CONF=""
LIRCMD_CONF=""
Next, we'll record the buttons that you want the pi to trigger the backlight toggle on.  I chose to do it on the event of turning the TV on or off.  For me I actually have a harmony remote that has separate events for "Power On" and "Power Off" available.  So I chose to program KEY_POWER and KEY_POWER2.  If you don't have the codes available for both "Power On" and "Power Off" then you can just program "Power Toggle" to KEY_POWER.
# irrecord -d /dev/lirc0 ~/lircd.conf
Once you have the lircd.conf recorded, move it into /etc/lirc to overwrite /etc/lirc/lircd.conf and start lirc
# sudo mv /home/pi/lircd.conf /etc/lirc/lircd.conf
# sudo /etc/init.d/lirc start
With lirc running, you can verify that it's properly recognizing your key events using the irw command.  Once irw is running, press the button on the remote and make sure your pi recognizes it.  When you're done, press ctrl-c to exit.
# irw
Now that you've validated the pi can recognize the command, it's time to tie it to an actual script.  Create /home/pi/.lircrc with contents like this:
/home/pi/.lircrc

begin
    button = KEY_POWER
    prog = irexec
    repeat = 0
    config = /home/pi/toggle_backlight.sh off
end

begin
    button = KEY_POWER2
    prog = irexec
    repeat = 0
    config = /home/pi/toggle_backlight.sh on
end
My toggle_backlight.sh looks like this:
/home/pi/toggle_backlight.sh

#!/bin/sh
ARG=toggle
if [ -n "$1" ]; then
    ARG=$1
fi
RUNNING=$(pgrep hyperion-v4l2)
if [ -n "$RUNNING" ]; then
    if [ "$ARG" = "on" ]; then
        exit 0
    fi
    pkill hyperion-v4l2
    hyperion-remote --color black
    exit 0
fi
if [ "$ARG" = "off" ]; then
    hyperion-remote --color black
    exit 0
fi
# spawn hyperion remote before actually clearing channels to prevent extra flicker
hyperion-v4l2 --crop-height 30 --crop-width 10 --size-decimator 8 --frame-decimator 2 --skip-reply --signal-threshold 0.08 &
hyperion-remote --clearall

To test, run irexec and then press your remote button.  With any luck irexec will launch the toggle script and change your LED status.
# irexec
Lastly, you need to add irexec to your /etc/rc.local to make it boot with the pi.  Make sure you put the execution before the exit 0
/etc/rc.local

su pi -c "irexec -d"
su pi -c "/home/pi/toggle_backlight.sh off"
Reboot your pi, and make sure everything works together.  
# sudo reboot

Charles Profitt: Ubuntu 14.04: Subtle shades of success

Planet Ubuntu - Wed, 2014-04-23 03:10

I just finished upgrading four computers to Ubuntu 14.04 tonight. My testing machine has been running 14.04 since the early alpha phase, but in the last two days I upgraded my work Lenovo W520, my personal Lenovo T530, and the self-assembled desktop with a Core 2 Duo and Nvidia 8800 GTS that I handed down to my son.

Confidence In Ubuntu
On Friday of this week I will be involved in delivering training to a group of Boy Scout leaders at a Wood Badge course. I will use my primary laptop, the T530, to give a presentation and produce the Gilwell Gazette. I completed a great deal of prep work on Ubuntu 13.10, and if I did not have complete confidence in Ubuntu 14.04 I would have waited until after the weekend to upgrade. I needed to be confident that the multi-monitor functionality would work, and that documents produced in an earlier version of LibreOffice would not suddenly change their page layouts. In short, I was depending on Ubuntu being dependable and solid more than I usually do.

Subtle Changes Add Flexibility and Polish
Ubuntu added some very small tweaks that truly add to the overall user experience. The borderless windows, new lock screen, and smaller minimum size of launcher icons all add up to slight but pleasant changes.

Here is a screen shot of the 14.04 desktop on the Lenovo T530.

14.04 desktop


Daniel Pocock: Automatically creating repackaged upstream tarballs for Debian

Planet Ubuntu - Tue, 2014-04-22 20:34

One of the less exciting points in the day of a Debian Developer is the moment they realize they have to create a repackaged upstream source tarball.

This is often a process that they have to repeat on each new upstream release too.

Wouldn't it be useful to:

  • Scan all the existing repackaged upstream source tarballs and diff them against the real tarballs to catalog the things that have to be removed and spot patterns (see the sketch after this list)?
  • Operate a system that automatically produces repackaged upstream source tarballs for all tags in the upstream source repository or all new tarballs in the upstream download directory? Then the DD can take any of them and package them when he wants to with less manual effort.
  • Apply any insights from this process to detect non-free content in the rest of the Debian archive and when somebody is early in the process of evaluating a new upstream project?
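
To make the first idea concrete, here is a minimal sketch of such a diff (the file names are hypothetical); it lists the members that a repackaged tarball dropped relative to the pristine one:

# Sketch: catalog what a repackaged tarball dropped relative to the
# pristine upstream tarball. File names here are hypothetical.
import tarfile

def members(path):
    with tarfile.open(path) as tar:
        # strip the top-level directory so the two name sets line up
        return set(name.partition('/')[2] for name in tar.getnames())

pristine = members("foo-1.0.orig.tar.gz")
repackaged = members("foo-1.0+dfsg.orig.tar.gz")
for name in sorted(pristine - repackaged):
    print(name)  # a candidate for the catalog of removed files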
Google Summer of Code is back

One of the Google Summer of Code projects this year involves recursively building Java projects from their source. Some parts of the project, such as repackaged upstream tarballs, can be generalized for things other than Java. Web projects including minified JavaScript are a common example.

Andrew Schurman, based near Vancouver, is the student selected for this project. Over the next couple of weeks, I'll be starting to discuss the ideas in more depth with him. I keep on stumbling on situations where repackaged upstream tarballs are necessary and so I'm hoping that this is one area the community will be keen to collaborate on.

Martin Pitt: Booting Ubuntu with systemd: Test packages available

Planet Ubuntu - Tue, 2014-04-22 16:54

On the last UDS we talked about migrating from upstart to systemd to boot Ubuntu, after Mark announced that Ubuntu will follow Debian in that regard. There’s a lot of work to do, but it parallelizes well once developers can run systemd on their workstations or in VMs easily and the system boots up enough to still be able to work with it.

So today I merged our systemd package with Debian again, dropped the systemd-services split (which wasn’t accepted by Debian and will be unnecessary now), and put it into my systemd PPA. Quite surprisingly, this booted a fresh 14.04 VM pretty much right away (of course there’s no Plymouth prettiness). The main two things which were missing were NetworkManager and lightdm, as these don’t have an init.d script at all (NM) or it isn’t enabled (lightdm). Thus the PPA also contains updated packages for these two which provide a proper systemd unit. With that, the desktop is pretty much fully working, except for some details like cron not running. I haven’t gone through /etc/init/*.conf with a fine-toothed comb yet to check which upstart jobs need to be ported; that’s now part of the TODO list.
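
For a flavour of what porting such a job involves, a display manager unit might look roughly like this; a hedged sketch, not the actual unit shipped in the PPA:

[Unit]
Description=Light Display Manager
After=systemd-user-sessions.service

[Service]
ExecStart=/usr/sbin/lightdm
Restart=always

[Install]
Alias=display-manager.service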

So, if you want to help with that, or just test and tell us what’s wrong, take the plunge. In a 14.04 VM (or real machine if you feel adventurous), do

sudo add-apt-repository ppa:pitti/systemd
sudo apt-get update
sudo apt-get dist-upgrade

This will replace systemd-services with systemd, update network-manager and lightdm, and a few libraries. Up to now, when you reboot you’ll still get good old upstart. To actually boot with systemd, press Shift during boot to get the grub menu, edit the Ubuntu stanza, and append this to the linux line: init=/lib/systemd/systemd.

For the record, if pressing shift doesn’t work for you (too fast, VM, or similar), enable the grub menu with

sudo sed -i '/GRUB_HIDDEN_TIMEOUT/ s/^/#/' /etc/default/grub
sudo update-grub

Once you are satisfied that your system boots well enough, you can make this permanent by adding the init= option to /etc/default/grub (and possibly removing the comment sign from the GRUB_HIDDEN_TIMEOUT lines) and running sudo update-grub again. To go back to upstart, just edit the file again, remove the init= option, and run sudo update-grub again.

I’ll be on the Debian systemd/GNOME sprint next weekend, so I feel reasonably well prepared now.

Jonathan Riddell: Favourite Twitter Post

Planet Ubuntu - Tue, 2014-04-22 12:11
KDE Project:

There's only 1 tool to deal with an unsupported Windows XP...

Michael Rooney: Easily sending postcards to your Kickstarter backers with Lob

Planet Ubuntu - Tue, 2014-04-22 12:00

Recently my friend Joël Franusic was stressing out about sending postcards to his Kickstarter backers and asked me to help him out. He pointed me to the excellent service Lob.com, which is a very developer-friendly API around printing and mailing. We quickly had some code up and running that could take a CSV export of Kickstarter campaign backers, verify addresses, and trigger the sending of customizable, actual physical postcards to the backers.

We wanted to share the project such that it could help out other Kickstarter campaigns, so we put it on Github: https://github.com/mrooney/kickstarter-lob.

Below I explain how to install and use this script to send postcards to your Kickstarter backers via Lob. The section after that explains how the script works in detail.

Using the script to mail postcards to your backers

First, you’ll need to sign up for a free account at Lob.com, then grab your “Test API Key” from the “Account Details” section of your account page. At this point you can use your sandbox API key to test away free of charge and view images of any resulting postcards. Once you are happy with everything, you can plug in credit card details and start using your “Live” API key. Second, you’ll need an export from Kickstarter for the backers you wish to send postcards to.

Now you’ll want to grab the kickstarter-lob code and get it set.

These instructions assume that you’re using a POSIX compatible operating system like Mac OS X or Linux. If you’re using Mac OS X, open the “Terminal” program and type the commands below into it to get started:

git clone https://github.com/mrooney/kickstarter-lob.git
cd kickstarter-lob
sudo easy_install pip  # (if you don’t have pip installed already)
pip install -r requirements.txt
cp config_example.json config.json
open config.json

At this point, you should have a text editor open with the configuration information. Plug in the correct details, making sure to maintain quotes around the values. You’ll need to provide a few things besides an API key (a sample config.json is sketched after this list):

  • A URL of an image or PDF to be used for the front of the postcard.
    This means that you need to have your PDF available online somewhere. I suggest using Amazon’s S3 service to host your PDF.

  • A message to be printed on the back of the postcard (the address of the receiver will automatically show up here as well).

  • Your return address.
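
For reference, a config.json shaped like the keys the script reads might look like this. Every value below is a placeholder, and the from-address fields mirror the address format the script uses elsewhere:

{
    "api_key": "test_YOUR_LOB_TEST_KEY",
    "postcard_name": "Campaign Postcard",
    "postcard_front": "https://example.com/postcard-front.pdf",
    "postcard_message": "Thank you for backing our campaign!",
    "postcard_from_address": {
        "name": "Jane Creator",
        "address_line1": "123 Main St",
        "address_line2": "",
        "address_city": "San Francisco",
        "address_state": "CA",
        "address_zip": "94101",
        "address_country": "US"
    }
}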

Now you are ready to give it a whirl. Run it like so, making sure you include the filename of your Kickstarter export:

$ python kslob.py ~/Downloads/your-kickstarter-backer-report.csv
Fetching list of any postcards already sent...
Verifying addresses of backers...
warning: address verification failed for jsmith@example.com, cannot send to this backer.
Already sent postcards to 0 of 161 backers
Send to 160 unsent backers now? [y/N]: y
Postcard sent to Jeff Bezos! (psc_45df20c2ade155a9)
Postcard sent to Tim Cook! (psc_dcbf89cd1e46c488)
...
Successfully sent to 160 backers with 0 failures

The script will verify all addresses and, importantly, only send to addresses not already sent to. The script queries Lob to keep track of who you’ve already sent a postcard to; this important feature allows you to download new Kickstarter exports as people fill in or update their addresses. After downloading a new export from Kickstarter, just run the script against it, and the script will only send postcards to the new addresses.

Before anything actually happens, you’ll notice that you’re informed of how many addresses have not yet received postcards and prompted to send them or not, so you can feel assured it is sending only as many postcards as you expect.

If you were to run it again immediately, you’d see something like this:

$ python kslob.py ~/Downloads/your-kickstarter-backer-report.csv
Fetching list of any postcards already sent...
Verifying addresses of backers...
warning: address verification failed for jsmith@example.com, cannot send to this backer.
Already sent postcards to 160 of 161 backers
SUCCESS: All backers with verified addresses have been processed, you're done!

After previewing your sandbox postcards on Lob’s website, you can plug in your live API key in the config.json file and send real postcards at reasonable rates.

How the script works

This section explains how the script actually works. If all you wanted to do is send postcards to your Kickstarter backers, then you can stop reading now. Otherwise, read on!

Before you get started, take a quick look at the “kslob.py” file on GitHub: https://github.com/mrooney/kickstarter-lob/blob/master/kslob.py

We start by importing four Python libraries: “csv”, “json”, “lob”, and “sys”. Of those four libraries, “lob” is the only one that isn’t part of Python’s standard library. The “lob” library is installed by using the “pip install -r requirements.txt” command I suggest using above. You can also install “lob-python” using pip or easy_install.

#!/usr/bin/env python
import csv
import json
import lob
import sys

Next we define one class named “ParseKickstarterAddresses” and two functions, “addr_identifier” and “kickstarter_dict_to_lob_dict”.

“ParseKickstarterAddresses” is the code that reads in the backer report from Kickstarter and turns it into an array of Python dictionaries.

class ParseKickstarterAddresses:
    def __init__(self, filename):
        self.items = []
        with open(filename, 'r') as csvfile:
            reader = csv.DictReader(csvfile)
            for row in reader:
                self.items.append(row)

The “addr_identifier” function takes an address and turns it into a unique identifier, allowing us to avoid sending duplicate postcards to backers.

def addr_identifier(addr):
    return u"{name}|{address_line1}|{address_line2}|{address_city}|{address_state}|{address_zip}|{address_country}".format(**addr).upper()

The “kickstarter_dict_to_lob_dict” function takes a Python dictionary and turns it into a dictionary we can give to Lob as an argument.

def kickstarter_dict_to_lob_dict(dictionary):
    ks_to_lob = {'Shipping Name': 'name',
                 'Shipping Address 1': 'address_line1',
                 'Shipping Address 2': 'address_line2',
                 'Shipping City': 'address_city',
                 'Shipping State': 'address_state',
                 'Shipping Postal Code': 'address_zip',
                 'Shipping Country': 'address_country'}
    address_dict = {}
    for key in ks_to_lob.keys():
        address_dict[ks_to_lob[key]] = dictionary[key]
    return address_dict

The “main” function is where the majority of the logic for our script resides. Let’s cover that in more detail.

We start by reading in the name of the Kickstarter backer export file, loading our configuration file (“config.json”), and then configuring Lob with the API key from the configuration file:

def main():
    filename = sys.argv[1]
    config = json.load(open("config.json"))
    lob.api_key = config['api_key']

Next we query Lob for the list of postcards that have already been sent. You’ll notice that the “processed_addrs” variable is a Python “set”; if you haven’t used a set in Python before, it is sort of like an array that doesn’t allow duplicates. We only fetch 100 results from Lob at a time, and use a “while” loop to make sure that we get all of the results.

print("Fetching list of any postcards already sent...") processed_addrs = set() postcards = [] postcards_result = lob.Postcard.list(count=100) while len(postcards_result):    postcards.extend(postcards_result)    postcards_result = lob.Postcard.list(count=100, offset=len(postcards))

Once we fetch all of the postcards, we print out how many were found:

print("...found {} previously sent postcards.".format(len(postcards)))

Then we iterate through all of our results and add them to the “processed_addrs” set. Note the use of the “addr_identifier” function, which turns each address dictionary into a string that uniquely identifies that address.

for processed in postcards:
    identifier = addr_identifier(processed.to.to_dict())
    processed_addrs.add(identifier)

Next we set up a bunch of variables that will be used later on: variables with configuration information for the postcards that Lob will send, the addresses from the Kickstarter backers export file, and variables to keep track of who we’ve sent postcards to and who we still need to send postcards to.

postcard_from_address = config['postcard_from_address']
postcard_message = config['postcard_message']
postcard_front = config['postcard_front']
postcard_name = config['postcard_name']
addresses = ParseKickstarterAddresses(filename)
to_send = []
already_sent = []

At this point, we’re ready to start validating addresses, the code below loops over every line in the Kickstarter backers export file and uses Lob to see if the address is valid.

print("Verifying addresses of backers...") for line in addresses.items:     to_person = line['Shipping Name']     to_address = kickstarter_dict_to_lob_dict(line)     try:         to_name = to_address['name']         to_address = lob.AddressVerify.verify(**to_address).to_dict()['address']         to_address['name'] = to_name     except lob.exceptions.LobError:         msg = 'warning: address verification failed for {}, cannot send to this backer.'         print(msg.format(line['Email']))         continue

If the address is indeed valid, we check to see if we’ve already sent a postcard to that address. If so, the address is added to the list of addresses we’ve “already_sent” postcards to. Otherwise, it’s added to the list of address we still need “to_send” postcards to.

if addr_identifier(to_address) in processed_addrs:
    already_sent.append(to_address)
else:
    to_send.append(to_address)

Next we print out the number of backers we’ve already sent postcards to and check to see if we need to send postcards to anybody, exiting if we don’t need to send postcards to anybody.

nbackers = len(addresses.items)
print("Already sent postcards to {} of {} backers".format(len(already_sent), nbackers))
if not to_send:
    print("SUCCESS: All backers with verified addresses have been processed, you're done!")
    return

Finally, if we do need to send one or more postcards, we tell the user how many postcards will be mailed and then ask them to confirm that those postcards should be mailed:

query = "Send to {} unsent backers now? [y/N]: ".format(len(to_send), nbackers) if raw_input(query).lower() == "y":     successes = failures = 0

If the user enters “Y” or “y”, then we start sending postcards. The call to Lob is wrapped in a “try/except” block. We handle calls to the Lob library that raise a “LobError” exception, counting those calls as failures. Other exceptions are not handled and will result in the script exiting with that exception.

for to_address in to_send:
    try:
        rv = lob.Postcard.create(to=to_address, name=postcard_name,
                                 from_address=postcard_from_address,
                                 front=postcard_front, message=postcard_message)
        print("Postcard sent to {}! ({})".format(to_address['name'], rv.id))
        successes += 1
    except lob.exceptions.LobError:
        msg = 'Error: Failed to send postcard to Lob.com'
        print("{} for {}".format(msg, to_address['name']))
        failures += 1

Lastly, we print a message indicating how many messages were sent and how many failures we had.

    print("Successfully sent to {} backers with {} failures".format(successes, failures)) else:

(If the user pressed a key other than “Y” or “y”, this is the message that they’ll see)

print("Okay, not sending to unsent backers.")

And there you have it, a short script that uses Lob to send postcards to your Kickstarter backers, with code to only send one postcard per address, that gracefully handles errors from Lob.

I hope that you’ve found this useful! Please let us know of any issues you encounter on Github, or send pull requests adding exciting new features. Most importantly, enjoy easily bringing smiles to your backers!

The Fridge: Ubuntu Weekly Newsletter Issue 364

Planet Ubuntu - Mon, 2014-04-21 21:30

Welcome to the Ubuntu Weekly Newsletter. This is issue #364 for the week April 14 – 20, 2014, and the full version is available here.

In this issue we cover:

This issue of the Ubuntu Weekly Newsletter is brought to you by:

  • Elizabeth Krumbach Joseph
  • Paul White
  • Emily Gonyer
  • Tiago Carrondo
  • Jose Antonio Rey
  • Jim Connett
  • And many others

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!

Except where otherwise noted, content in this issue is licensed under a Creative Commons Attribution ShareAlike 3.0 License (CC BY-SA).

Michael Hall: Make Android apps Human with NDR

Planet Ubuntu - Mon, 2014-04-21 18:54

Ever since we started building the Ubuntu SDK, we’ve been trying to find ways of bringing the vast number of Android apps that exist over to Ubuntu. As with any new platform, there’s a chasm between Android apps and native apps that can only be crossed through the effort of porting.

There are simple solutions, of course, like providing an Android runtime on Ubuntu. On other platforms, those have been shown to present Android apps as second-class citizens that can’t benefit from a new platform’s unique features. Worse, they don’t provide a way for apps to gradually become first-class citizens, so the chasm between Android and native still exists, which means the vast majority of apps supported this way will never improve.

There are also complicated solutions, like code conversion, that try to translate Android/Java code into the native platform’s language and toolkit, preserving logic and structure along the way. But doing this right becomes such a monumental task that making a tool to do it is virtually impossible, and the amount of cleanup and checking to be done by an actual developer quickly rises to the same level of effort as a manual port. This approach also fails to take advantage of differences between the platforms, and will re-create the old way of doing things even when it doesn’t make sense on the new platform.

NDR takes a different approach. It doesn’t let you run your Android code on Ubuntu, nor does it try to convert your Android code to native code. Instead, NDR re-creates the general framework of your Android app as a native Ubuntu app, converting Activities to Pages, for example, to give you a skeleton project on which you can build your port. It won’t get you over the chasm, but it’ll show you the path to take and give you a head start on it. You will just need to fill it in with the logic code to make it behave like your Android app. NDR won’t provide any of the logic for you, and chances are you’ll want to do it slightly differently than you did on Android anyway, due to the differences between the two platforms.
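
To make the idea concrete, here is a toy sketch (emphatically not NDR’s actual code) of the kind of mapping involved, turning a few Android layout widgets into an Ubuntu SDK Page skeleton:

# Toy sketch (not NDR itself): map a few Android layout elements to
# Ubuntu SDK QML equivalents to bootstrap a port skeleton.
import xml.etree.ElementTree as ET

WIDGET_MAP = {"TextView": "Label", "Button": "Button", "EditText": "TextField"}

def to_qml(layout_xml):
    root = ET.fromstring(layout_xml)
    lines = ["import Ubuntu.Components 0.1", "", "Page {", "    Column {"]
    for child in root:
        qml_type = WIDGET_MAP.get(child.tag.split('}')[-1])
        if qml_type:
            lines.append("        %s { }" % qml_type)
    lines += ["    }", "}"]
    return "\n".join(lines)

sample = ('<LinearLayout xmlns:android='
          '"http://schemas.android.com/apk/res/android">'
          '<TextView /><Button /></LinearLayout>')
print(to_qml(sample))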

To test NDR during development, I chose the Telegram app because it is open source, popular, and largely uses Android’s layout definitions and components. NDR will be less useful for apps such as games, which use their own UI components and draw directly to a canvas, but it’s pretty good at converting apps that use Android’s components and UI builder.

After only a couple days of hacking I was able to get NDR to generate enough of an Ubuntu SDK application that, with a little bit of manual cleanup, it was recognizably similar to the Android app’s.

This proves, in my opinion, that bootstrapping an Ubuntu port based on Android source code is not only possible, but is a viable way of supporting Android app developers who want to cross that chasm and target their apps for Ubuntu as well. I hope it will open the door for high-quality, native Ubuntu app ports from the Android ecosystem.  There is still much more NDR can do to make this easier, and having people with more Android experience than me (that would be none) would certainly make it a more powerful tool, so I’m making it a public, open source project on Launchpad and am inviting anybody who has an interest in this to help me improve it.

Rick Spencer: Adding Interactivity to a Map with Popovers

Planet Ubuntu - Mon, 2014-04-21 15:30
On Friday I started my app "GetThereDC". I started by adding the locations of all of the Bikeshare stations in DC to a map. Knowing where the stations are is great, but it's a bummer when you go to a station and there are no bikes, or there are no empty parking spots. Fortunately, that exact information is in the XML feed, so I just need a way to display it.  
The way I decided to do it is to make the POI (the little icons for each station on the map) clickable, and when the user clicks the POI to use the Popover feature in the Ubuntu Components toolkit to display the data.
Make the POI Clickable
When you want to make anything "clickable" in QML, you just use a MouseArea component. Remember that each POI is constructed as a delegate in the MapItemView as an Image component. So all I have to do is add a MouseArea inside the Image and respond to the Click event. Now my image looks like this:
sourceItem: Image
{
    id: poiImage
    width: units.gu(2)
    height: units.gu(2)
    source: "images/bike_poi.png"
    MouseArea
    {
        anchors.fill: parent
        onClicked:
        {
            print("The POI was clicked! ")
        }
    }
}
This can be used anywhere in QML to make an image respond to a click. MouseArea, of course, has other useful events as well, for things like onPressed, onPressAndHold, etc...
Add the Roles to the XmlListModel
I already know that I'll want something to use for a title for each station, the address, as well as the number of bikes and the number of parking slots. Looking at the XML I can see that the "name" property is the address, so that's a bonus. Additionally, I can see the other properties I want are called "nbBikes" and "nbEmptyDocks". So, all I do is add those three new roles to the XmlListModel that I constructed before:
XmlListModel
{
    id: bikeStationModel
    source: "https://www.capitalbikeshare.com/data/stations/bikeStations.xml"
    query: "/stations/station"
    XmlRole { name: "lat"; query: "lat/string()"; isKey: true }
    XmlRole { name: "lng"; query: "long/string()"; isKey: true }
    XmlRole { name: "name"; query: "name/string()"; isKey: true }
    XmlRole { name: "available"; query: "nbBikes/string()"; isKey: true }
    XmlRole { name: "freeSlots"; query: "nbEmptyDocks/string()"; isKey: true }
}
Make a Popover Component
The Ubuntu SDK offers some options for displaying additional information. In old school applications these might be dialog boxes, or message boxes. For the purposes of this app, Popover looks like the best bet. I suspect that over time the popover code might get a little complex, so I don't want it too deeply nested inside the MapItemView, as the code would become unwieldy. So, instead I added a file called BikeShareStationPopover.qml to the components sub-directory. Then I copied and pasted the sample code in the documentation to get started.

To make a popover, you start with a Component tag, and then add a Popover tag inside that. Then you can put pretty much whatever you want into that Popover. I am going to go with a Column and use ListItem components because I think it will look nice, and it's the easiest way to get started. Since I already added the XmlRoles, I'll just use those roles in the construction of each popover.

Since I know that I will be adding other kinds of POI, I decided to add a Capital Bikeshare logo to the top of the list so users will know what kind of POI they clicked. I also added a close button just to be certain that users don't get confused about how to go back to the map. So, at the end of the day, I just have a column with ListItems:
import QtQuick 2.0
import Ubuntu.Components 0.1
import Ubuntu.Components.ListItems 0.1 as ListItem
import Ubuntu.Components.Popups 0.1

Component
{
    id: popoverComponent
    Popover
    {
        id: popover
        Column
        {
            id: containerLayout
            anchors
            {
                left: parent.left
                top: parent.top
                right: parent.right
            }
            ListItem.SingleControl
            {
                control: Image
                {
                    source: "../images/CapitalBikeshare_Logo.jpg"
                    height: units.gu(5)
                    width: units.gu(5)
                }
            }
            ListItem.Header { text: name }
            ListItem.Standard { text: available + " bikes available" }
            ListItem.Standard { text: freeSlots + " parking spots available" }
            ListItem.SingleControl
            {
                highlightWhenPressed: false
                control: Button
                {
                    text: "Close"
                    onClicked: PopupUtils.close(popover)
                }
            }
        }
    }
}
Make the Popover Component Appear on Click
So, now that I made the component code, I just need to add it to the MapItemView and make it appear on click. So, I add the tag with an id to the MapQuickItem delegate, and change the onClicked handler for the MouseArea to open the popover:
delegate: MapQuickItem
{
    id: poiItem
    coordinate: QtPositioning.coordinate(lat,lng)
    anchorPoint.x: poiImage.width * 0.5
    anchorPoint.y: poiImage.height
    z: 9
    sourceItem: Image
    {
        id: poiImage
        width: units.gu(2)
        height: units.gu(2)
        source: "images/bike_poi.png"
        MouseArea
        {
            anchors.fill: parent
            onClicked:
            {
                PopupUtils.open(bikeSharePopover)
            }
        }
    }
    BikeShareStationPopover
    {
        id: bikeSharePopover
    }
}
And when I run the app, I can click on any POI and see the info I want! Easy!

Code is here
