A couple of days ago I did a post about going to school, and in between the lines it had the words “I’m detached from my ZNC, it has got push notifications on” hidden. One person did notice, asked how this feature worked and mentioned some tedious points in the process. But let’s get to it!
If you use ZNC, you should already know that ZNC supports the use of modules. Some of them are bundled with the packaged system, but others have to be compiled manually. If you host your own ZNC, this may be of interest to you.
The module for this is called ‘push’ (a bit obvious, huh?) and is hosted on GitHub, right here. To be able to grab and compile the module, first execute:
sudo apt-get install git znc-dev
Then, pull the git code, make the module and install it:
git clone https://github.com/jreese/znc-push.git
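The clone alone doesn’t build anything – assuming the standard targets in the znc-push README (check the project’s README if they have changed), the build and install steps look roughly like this:

```
cd znc-push
make
make install    # should copy push.so into ~/.znc/modules for the current user
```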
And, finally, load the module by executing the following on your ZNC:
/msg *status loadmod push
In general, there are two services I have verified to work well: Pushbullet (for Android) and Airgram (for iOS). Each service has some specific configuration options. In the case of Pushbullet, which I use, you need to execute the following on your ZNC:
/msg *push set service pushbullet
/msg *push set secret [secretgoeshere]
/msg *push set target [targetgoeshere]
To find these values, register on Pushbullet, log in to your account and add your device. Once the device is added, click on your email address and then on ‘Account Settings’. It should explicitly give you the secret. Then, go back to your inbox and click on the device you want to send the notifications to, even if it’s already selected. Now, from the address bar, copy the ‘device_iden’ value – that should be the target. And you’re good to go!
There are many other configuration options, which can be found here. I hope this is useful for many of you who want to stick with ZNC 24/7 :)
OpenStack Icehouse RC1 packages for Cinder, Glance, Keystone, Neutron, Heat, Ceilometer, Horizon and Nova are now available in the current Ubuntu development release and the Ubuntu Cloud Archive for Ubuntu 12.04 LTS.
To enable the Ubuntu Cloud Archive for Icehouse on Ubuntu 12.04:
sudo add-apt-repository cloud-archive:icehouse
sudo apt-get update
Users of the Ubuntu development release (trusty) can install OpenStack Icehouse without any further steps required.
Other packages which have been updated for this Ubuntu release and are pertinent for OpenStack users include:
- Open vSwitch 2.0.1 (+ selected patches)
- QEMU 1.7 (upgrade to 2.0 planned prior to final release)
- libvirt 1.2.2
- Ceph 0.78 (firefly stable release planned as a stable release update)
Note that the 3.13 kernel that will be released with Ubuntu 14.04 supports GRE and VXLAN tunnelling via the in-tree Open vSwitch module – so no need to use dkms packages any longer! You can read more about using Open vSwitch with Ubuntu in my previous post.
Ubuntu 12.04 users should also note that Icehouse is the last OpenStack release that will be backported to 12.04 – however it will receive support for the remainder of the 12.04 LTS support lifecycle (3 years).
Remember that you can always report bugs on packages in the Ubuntu Cloud Archive and Ubuntu 14.04 using the ubuntu-bug tool – for example:
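The concrete example was lost from the original post; a typical invocation (the package name here is just illustrative) would be:

```
ubuntu-bug nova-compute
```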
I just wanted to let you folks know that I am recruiting for a community manager to join my team at Canonical.
I am looking for someone with strong technical knowledge of building Ubuntu (knowledge of how we release, how we build packages, bug management, governance etc), great community management skills, and someone who is willing to be challenged and grow in their skills and capabilities.
My goal with everyone who joins my team is not just to help them be successful in their work, but to help them be the very best at what they do in our industry. As such I am looking for someone with a passion to be successful and grow.
I think it is a great opportunity to be part of a great team. Details of the job are available here – please apply if you are interested!
We’ve recently rolled out some changes to the submission process for Click Applications that should make it easier for you to submit new applications, and allow them to be approved more quickly.
Previously when submitting an application you would have to enter all the information about that application on the website, even when some of that information was already included in the package itself. This was an irritation in itself, and sometimes developers would make a mistake when re-entering the information, meaning that the app was rejected from review and they had to go back and correct the mistake.
With the new changes, when you submit an application you will wait a few seconds while the package is examined by the system, and you will then be redirected to the same process as before. However this time some of the fields will be pre-filled with information from the package. You won’t have to type in the application name, as it will already be there. This will speed up the process, and should reduce the number of mistakes that happen at that stage.
We’ve also been working on a command-line interface for submitting applications. It’s not polished yet, but if you are intrepid you can try out click-toolbelt.
LogAnalyzer is a powerful but simple log file analysis tool. The upstream web site gives an online demo.
It is developed in PHP, runs under Apache and has no hard dependencies such as databases – it can read directly from the log files.
For efficiency, however, it can also use a database backend, and on Debian it is now trivial to make it work with MongoDB.
Using a database (including MongoDB and SQL backends) also means that severity codes (debug/info/notice/warn/error/...) are retained. These are not available from many log files. The UI can only colour-code and filter the messages by severity if it has a database backend.

Package status
The packages entered Debian only recently and have now migrated to wheezy-backports, so anybody on wheezy can use them.

Quick start with MongoDB
The version of rsyslog in Debian wheezy does not support MongoDB output. It is necessary to grab 7.4.8 from backports.
Some versions, up to 7.4.4 in backports, had bugs with MongoDB support - if you tried those, please try again now.
The backported rsyslog is a drop-in replacement for the standard rsyslog package, and users with a default configuration are unlikely to notice any difference. Users who have customized their configuration should, as always, make a backup before trying the new version.
- Install all the necessary packages: apt-get install rsyslog-mongodb php5-mongo mongodb-server
- Add the following to /etc/rsyslog.conf:
*.* action(type="ommongodb" server="127.0.0.1")
- Look for the MongoDB settings in /etc/loganalyzer/config.php and uncomment them. Comment out the stuff for disk log access.
- Restart rsyslog and then browse your logs at http://localhost/loganalyzer
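One caveat worth noting: depending on the rsyslog version, the ommongodb output module may need to be loaded explicitly before the action line, so the /etc/rsyslog.conf fragment would look roughly like:

```
module(load="ommongodb")
*.* action(type="ommongodb" server="127.0.0.1")
```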
The app showdown is still in full swing and we have seen lots and lots of activity already. The competition is going to end on Wednesday, April 9th 2014 (23:59 UTC). So what do you need to do to enter and submit the app?
It’s actually quite easy. It takes three steps.
Submit your app
This is obviously the most important bit and needs to happen first. Don’t leave this to the last minute. Your app might have to go through a couple of reviews before it’s accepted in the store. So plan in some time for that. Once it’s accepted and published in the store, you can always, much more quickly, publish an update.
Register your participation
Once your app is in the store, you need to register your participation in the App Showdown. To make sure your application is registered for the contest and the judges review it, you’ll need to fill in the participation form. You can start filling it in any time before the submission deadline – it should only take you two minutes to complete.
Fill out the submission form.
If you have questions or need help, reach out (again, sooner rather than later) to our great community of Ubuntu App Developers.
We have just received news from Canonical that all verified LoCo Team contacts who have pre-ordered a 14.04 DVD pack will receive it from the first shipment. This only applies to orders placed by April 8th, 2014. So, if you are the contact for a verified team and have not pre-ordered your DVDs for 14.04, make sure you do it as soon as possible!
If your team is not verified, please apply for verification in order to get a pack for the cycle.
Remember, only team contacts from verified teams can request them!
Make sure to get your orders in before the 8th!
Since its inception, the LibreOffice project has been pursuing the objective of freeing office computing from vendor lock-in. Now, some fellow Document Foundation members and LibreOffice developers have announced an umbrella project for all the file parsing libraries that are being developed to achieve this objective.
The new project is called Document Liberation, and will house the wide range of libraries that already allow LibreOffice users to have control over their own files. We want everyone to be able to, for example, take their old files written in proprietary formats, recover the information, convert it to a standards-compliant, modern format, and ensure the long-term preservation of the information they own – because you should own your data, not a specific version of a program.
Are you interested in this? Let’s make it happen! Head over to the new Document Liberation website and read all about this effort.
A good friend sent me a link just yesterday to an hour-and-a-half live concert by 2CELLOS. And wow, I was deeply impressed. Terrific! Even Sir Elton John approves. I have to share them with you, too. :)
- Highway To Hell featuring Steve Vai: Bloody hell yeah!
- Smooth Criminal: Wickedly good.
- With Or Without You: What can I say, I'm a sucker when it comes to ballads.
P.S.: I sooo love them also for their pun in their second album title, In2ition. :D
“No, unfortunately it’s not an April Fools joke.”
Said Jane Silber from Canonical.
Sad but true. Canonical is shutting down Ubuntu One file services.
“Today we are announcing plans to shut down the Ubuntu One file services. This is a tough decision, particularly when our users rely so heavily on the functionality that Ubuntu One provides. However, like any company, we want to focus our efforts on our most important strategic initiatives and ensure we are not spread too thin.”
“As of today, it will no longer be possible to purchase storage or music from the Ubuntu One store. The Ubuntu One file services will not be included in the upcoming Ubuntu 14.04 LTS release, and the Ubuntu One apps in older versions of Ubuntu and in the Ubuntu, Google, and Apple stores will be updated appropriately. The current services will be unavailable from 1 June 2014; user content will remain available for download until 31 July, at which time it will be deleted.”
This decision, as per Canonical, will not affect:
“The shutdown will not affect the Ubuntu One single sign on service, the Ubuntu One payment service, or the backend U1DB database service.”
For full details, please refer to this post.
At the last Ubuntu Developer Summit we discussed the idea of making our regular online summit serve more than just developers. We are interested in showcasing not just the developer-orientated discussion sessions that we currently have, but also including content such as presentations, demos, tutorials, and other topics.
I just wanted to give everyone a heads-up that the first Ubuntu Online Summit will happen from 10th – 12th June 2014. The website is not yet updated (we are going to keep everything on summit.ubuntu.com, and uds.ubuntu.com will point there; Michael is making the changes to bring over the static content).
We are really keen to get ideas for how the event can run so I am scheduling a hangout on Thurs 10th April at 5pm UTC on Ubuntu On Air where I would welcome ideas and input. I hope to see you there!
Nothing new to report this week
Release Metrics and Incoming Bugs
Release metrics and incoming bug data can be reviewed at the following link:
Milestone Targeted Work Items
4 work items
2 work items
1 work item
1 work item
2 work items
3 work items
Status: Trusty Development Kernel
The 3.13.0-21.43 Trusty kernel has been uploaded to the archive. With kernel freeze about to go into effect this Thurs Apr 3, I do not anticipate another upload between now and then. After kernel freeze, all patches are subject to our Ubuntu SRU policy and only critical bug fixes will warrant an upload before release.
Important upcoming dates:
Thurs Apr 03 – Kernel Freeze (~2 days away)
Thurs Apr 17 – Ubuntu 14.04 Final Release (~2 weeks away)
The current CVE status can be reviewed at the following link:
Status: Stable, Security, and Bugfix Kernel Updates
Status for the main kernels, until today (Mar. 25):
- Lucid – Prep week
- Precise – Prep week
- Quantal – Prep week
- Saucy – Prep week
Current opened tracking bugs details:
For SRUs, SRU report is a good source of information:
cycle: 30-Mar through 26-Apr
28-Mar Last day for kernel commits for this cycle
30-Mar – 05-Apr Kernel prep week.
06-Apr – 12-Apr Bug verification & Regression testing.
17-Apr 14.04 Released
13-Apr – 26-Apr Regression testing & Release to -updates.
Open Discussion or Questions? Raise your hand to be recognized
No open discussions.
That's still an open question. There's a good chance that if we find an elegant solution, we'll get some new syntax.
In an effort to (re)start this conversation and get us thinking about the possibilities, I've drawn together some examples from various Lisps. At the end of the post, we'll review some related data structures in LFE... as a point of contrast and possible guidance.
Note that I've tried to keep the code grouped in larger gists, not split up with prose wedged between them. This should make it easier to compare and contrast whole examples at a glance.
Before we dive into the Lisps, let's take a look at maps in Erlang:
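The original gist is missing here; as a stand-in, a representative sketch of R17 map syntax (not the exact code from the post) looks like:

```
%% construct, update, access and match Erlang (R17) maps
M0 = #{},                            % empty map
M1 = #{name => "alice", age => 30},  % => creates keys
M2 = M1#{age := 31},                 % := updates an existing key
#{name := Name} = M2,                % pattern match binds Name to "alice"
Age = maps:get(age, M2).             % access via the maps module API
```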
Common Lisp Hash Tables
Racket Hash Tables
Clojure Hash Maps
Shen Property Lists
OpenLisp Hash Tables
LFE Property Lists
I summarized some very basic usability and aesthetic thoughts on the LFE mail list, but I'll restate them here:
- Erlang syntax really is quite powerful; I continue to be impressed.
- Clojure was by far the most enjoyable to work with... however, doing something similar in LFE would require quite a few additions to the language or macro infrastructure. My concern here is that we'd end up with a Clojure clone rather than something distinctly Erlang-Lispy.
- Racket had the fullest and most useful set of hash functions (and best docs).
- Chicken Scheme was probably second.
- Common Lisp was probably (I hate to say it) the most awkward of the bunch. I'm hoping we can avoid pretty much everything the way it was done there :-/
That being said, I don't think today is the day to propose unifying features for LFE/Erlang data types ;-) (To be honest, though, it's certainly in the back of my mind... this is probably also true for many folks on the mail list.)
Given my positive experience with maps (hash tables) in Racket, and Robert's initial proposed functions like map-new, map-set, I'd encourage us to look to Racket for some inspiration:
- "map" has a specific meaning in FPs (: lists map), and there's a little bit of cognitive dissonance for me when I look at map-*
- In my experience, applications generally don't have too many records; however, I've known apps with 100s and 1000s of instances of hash maps; as such, the idea of creating macros for each hash-map (e.g., my-map-get, my-map-set, ...) terrifies me a little. I don't believe this has been proposed, and I don't know enough about LFE's internals (much less, Erlang's) to be able to discuss this with any certainty.
- The thought did occur that we could put all the map functions in a module e.g., (: maps new ... ), etc. I haven't actually looked at the Erlang source and don't know how maps are implemented in R17 yet (nor how that functionality is presented to the developer). Obviously, once I have, this point will be more clear for me.
Looking at this Erlang syntax:
My fingers want to do something like this in LFE:
That feels pretty natural, from the LFE perspective. However, it looks like it might require hacking on the tuple-parsing logic (or splitting that into two code paths: one for regular tuple-parsing, and the other for maps...?).
The above syntax also lends itself nicely to these:
The question that arises for me is "how would we do this when calling functions?" Perhaps one of these:
Then, for Joe's other example:
We'd have this for LFE:
Before we pattern match on this, let's look at Erlang pattern matching for tuples:
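The embedded example didn't survive; plain Erlang tuple matching (a stand-in sketch, not the original gist) is simply:

```
T = {point, 1, 2},
{point, X, Y} = T.   %% binds X to 1 and Y to 2
```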
Compare this with pattern matching elements of a tuple in LFE:
With that in our minds, we turn to Joe's matching example against a specific map element:
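Again the gist is missing; in Erlang, matching a single key out of a map (a sketch along the lines of Joe's example) looks like:

```
M = #{a => 1, b => 2},
#{a := A} = M.   %% binds A to 1; other keys are ignored
```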
And we could do the same in LFE like this:
I'm really uncertain about add-pair and update-pair, both the need for them and the names. Interested to hear from others who know how map is implemented in Erlang and the best way to work with that in LFE...
This weekend (4-6 April) the Ubuntu community is celebrating another Ubuntu Global Jam! The goal, as always, is to get together as a team and make Ubuntu better, get people involved and have fun. In the past we all focused on packaging, fixing bugs, translations, documentation and testing. The most recent addition to the mix are App Dev School events.
The goal of App Dev Schools is to have a look at developing apps for Ubuntu together. We made this a lot easier by providing presentation material, VirtualBox images and instructions for how to run an event. If you have a bit of programming experience, it should be easy for you to run the sessions with just a bit of preparation time.
Why is this exciting and probably a good idea to discuss in the team? Simple: it has never been easier to write apps for Ubuntu and publish them. You can choose between Qt/QML apps and HTML5 apps – both are easy to put together, and packaging/publishing an app is a matter of a couple of clicks. Awesome!
Check out the Ubuntu Global Jam page and find out how to have your own local event. If it’s just you and a couple of friends meeting up – don’t worry – it’s still a jam!
Have a great weekend everyone!
This post is part of the series ‘Making ubuntu.com responsive‘.
At this point in time, once the pilot projects were either completed or underway, we had already:
- Created an initial responsive prototype of our main site, based on some common-sense rules
- Started our first mobile-first and responsive project from scratch
- Created and launched a fully responsive site
We had a better understanding of what was involved in working on this type of project, with different constraints and work flows. With lots of ideas and questions floating in our minds, we decided that the best next step was for designers and front-end developers to spend two or three days right after the release of the new canonical.com website to discuss and capture the findings.
It’s important to take time to take in the pros and cons of certain approaches we try as a team, so that we can try to avoid repeating past mistakes and keep doing more of the things that make projects run smoothly and produce great results.
Developers sprinting and a wall of sticky notes

Things we learned

Make sure you have a solid grid
Our new responsive grid seemed to adapt well from large to small screens (I will be publishing a post on this later in the series, so stay tuned!) and this was mostly because when we initially created the CSS and HTML we opted for using percentage and relative units rather than absolute units (like pixels).

Use Modernizr for feature detection
The introduction of Modernizr to our developer tools proved essential for easily detecting features across browsers, such as SVG support, and providing adequate fallbacks, and it is something we’ll keep using in the future.

SVG icons and pictograms
We started the move from bitmap-based images to SVG for things like pictograms and UI elements. This was easy from a design perspective, as all of our icons and pictograms are already created as SVGs (as well as other formats). There were some hiccups when we tested the PNG fallback solution in some operating systems and browsers, like Opera Mini. But more on this in an upcoming post dedicated to images!

Things we had to work on

Defining visual layout across screen sizes
We were used to creating large, desktop-focused visuals, and we had the tools to do so quickly – our style guide. Because the deadlines were looming, we decided not to create lots of different mockups for each page on canonical.com; instead we created flat mockups for large screens and worked alongside the developers on how they would scale and flow on small and medium-sized screens.
The wireframes were kept as linear as possible – they were more of a content and hierarchy overview to guide the visual designers – and the content was produced so that it wasn’t too long for small screens.
A wireframe created for canonical.com
The problem with this approach was that, even though we all agreed on the general ways in which the content and visual elements would reflow from small to large screens, by creating comps for the large screen first, problems invariably arose, and reflows that sounded great in our own minds didn’t really work as easily or smoothly as we thought.
It’s important that you define how you’re going to tackle this issue: in this case, canonical.com was designed from scratch, so it was more difficult to visualise how a large design could adapt to a small screen across the team. In the case of ubuntu.com, though, the tight scope means we’re adapting existing designs, so it makes sense to work almost exclusively in the browser and test it at the same time.
Initial small screen canonical.com prototypes: ‘needs work’
In the future, when we need to produce mockups we will make sure they are created for smaller screens first and then for larger screens. When mockups aren’t necessary – for example, if we’re creating pages based on existing patterns – we are already building directly in code, for small screens first, with enhancements added as the available screen space gets bigger.

Animations
Even though the addition of CSS animations to our repertoire made for more interesting pages, making sure that they are designed to work well and look good across different screen sizes proved harder than expected.
In the future, we’ll need to think carefully about how having (or not having) an animation impacts small screens, how the animation should work from small to large screens, and what the fallback(s) should be, instead of assuming that the developers can simply rescale them.

The process going forward
As a final note, it’s important to mention that in a fast-paced project, where decisions need to be made quickly and several people are involved, you should keep a register of those decisions in a central location where everyone can access them. This could be anything from the solution for a bug to the decision not to fix an issue, along with the reasoning behind it.
The plainbox-0.6 milestone is full of content but one thing I want to point out is the CEP-4 blueprint. In short, you will be able to run PlainBox on a desktop or laptop computer but execute tests on a server or tablet device you can connect to over ssh or adb.
I'd like to solicit comments and feedback on the proposed design. Development has started but so far just in R&D mode, to check the limitations of adb and see how the proposed design really fits into the current architecture.
So, if you are interested in device or server testing, have a look at the specification (linked from the blueprint) and discuss this in email@example.com. Please help us help you better.
Tomorrow evening we’ll be bringing a brand new season of the Ubuntu Podcast to your ears. After an extended winter break, we’re ready to dust off our microphones and mixers, fire up our laptops and dive head first into the new season. We’ll be streaming the show live at 2030 BST so you can listen and even participate through the IRC channel. Visit the live page on the website to find out more.
As we did last year, we will be releasing new episodes for download every week. If you can’t wait for that, listen live on alternate Wednesday evenings for about an hour. You can check the recording dates on our website or add them to your Google calendar.
The show will be much as you know and hopefully love it: a mix of discussion, interviews, news, silliness and cake. It would be great if you could join us at 2030 BST tomorrow (Wednesday 2nd April) for the first live show of the season!
Every now and then I make ice cream from scratch, and I find it's always better than the kind bought in the store, because you know what goes into it – seriously, what the hell is carrageenan and why is it in my ice cream?
- 1 ostrich egg
- 1 quart of milk of magnesia
- 1 quart of cream of wheat
- 1 cup honey badger honey
- 1 vanilla root, peeled and minced
- Since the ostrich egg is known to be tough to crack, use a hack saw to cut a small section of the top off.
- Drain the egg and whisk in the cream and milk.
- Heat that mixture in a saucepan, to near boiling.
- Add the minced vanilla, reduce heat and simmer for 8-10 minutes.
- Remove from heat and refrigerate until very cold ~3 days.
- Next, wait until winter then bring your ice cream base outside and throw it into the snow.
- Using a shovel mix it around until the snow has sufficiently crystallized the ice cream.
- Scoop the resulting ice cream into a bucket and put it in your freezer, where it'll last for a month or so.
- Now take note of the date of publication above and simply go buy a fucking tub of ice cream. :)