If you follow the Ubuntu channels, and unless you’ve been living under a rock, you’ll have noticed that this coming weekend we’re organizing the Ubuntu Global Jam, a worldwide event where Ubuntu local community teams (LoCos) join in a get-together fest to have some fun while improving Ubuntu.
As we’re ramping up to a Long Term Support release, this is a particularly important UGJ and we need all hands on deck to ensure that it not only meets, but exceeds, the high quality standards of previous Ubuntu LTS releases. This is another article in the series of blog posts showcasing the events our community is organizing, brought to you by Rafael Carreras, from the Ubuntu Catalan LoCo team.
Tell us a bit about your LoCo team
Our LoCo is language-oriented, and by language I mean Catalan (a Romance language), not Perl or Python. In fact, the Catalan LoCo Team was the first language-oriented LoCo to be approved, back in 2007. We manage our day-to-day work on three mailing lists (technical questions, team work, and translations) and hold IRC meetings twice a month. We organise Ubuntu Global Jam events every 6 months (with some minor absences) and of course great release parties every 6 months, along with some other little ones in between.
What kind of event are you organizing for this Ubuntu Global Jam?
As always, we will translate some new packages, discuss translation issues, hold a bug triage session, do some release installation work, and even evangelise to passers-by, as we’re organising the UGJ in a civic centre this time.
Is this the first UGJ event you’re organizing?
No, it’s not; we have been running UGJs since the first one, and I think we only missed the last one.
How do you think UGJ events help the Ubuntu community and Ubuntu?
It’s a great opportunity for meeting people you only know by email or chat. Also, as we sit down together, there is little room for procrastination. Well, more or less, anyway.
Why do you think Jono Bacon always features pictures of the Catalan team when announcing the UGJ? Are we the most good-looking LoCo?
Yeah, definitely. It must be that.
Join the party by registering your event at the Ubuntu LoCo Portal!
Recently I started to work with this again for a job, and then I realized it could be a good idea to go through the content from scratch as a mental refresher, also taking into consideration that Ruby is as fun a language to learn as Python is.
Speaking of Ruby, learning the language first is the right path, rather than playing with Rails up front. Learn the 1.9 syntax and no one will die.
Here's my Ruby learning path:
- The full Ruby track on Codecademy. This is awesome and really fun. You can finish it in one or two days: http://www.codecademy.com/tracks/ruby;
- Then, http://tryruby.org.
And this will be enough to speak Ruby fluently for a while.
Now that you are familiar with Ruby, the Rails learning path:
But first, if you don't know the difference between Ruby and Rails, always remember: Ruby is a language and Rails is a framework for building web apps. They are maintained independently, and at the time of writing, the most stable version of Ruby is 1.9.3 and of Rails is 3.2.13. So...
- Try http://railsforzombies.org. Like Codecademy, it offers a short and fun path to get started;
- After that you can try http://guides.rubyonrails.org/getting_started.html. It is nice too;
- http://guides.rubyonrails.org and http://api.rubyonrails.org/ are the best quick references out there. Try to find things there before searching on Google;
- Read the http://ruby.railstutorial.org/ruby-on-rails-tutorial-book; it is strongly recommended to beginners by the Rails creator;
- Pay for http://railscasts.com; they also put out good free content every week, but the revised content will be worth it for you.
- Use rvm; it can save you some time. I will write a post tomorrow about this and how to set it up on Ubuntu.
Tell me in the comments if this track was useful for you and if you would add anything to the list. :-)
I’m thrilled to announce the availability of the Ubuntu 12.04 Online Tour for local community teams to localize and use on their websites. The tour has been the result of the stunning work done by Ant Dillon from the Canonical Web Design Team and should provide a web-based first impression of Ubuntu to new users, now in their language.
It’s a great opportunity to showcase Ubuntu to your local community to celebrate release day tomorrow.
Where is it?
First of all, you’ll need to get set up with the right tools before you start.
Getting set up:
- Bazaar. Install the bzr revision control system
- Polib. Install the polib library
- Terminal. You’ll need to run the commands below on a terminal. Simply press Ctrl+Alt+T to fire up a new terminal console.
If you’ve already translated the tour in Launchpad, you can build a localized version in 3 easy steps:
1. Get the code:
bzr branch lp:ubuntu-online-tour/12.04
2. Build the localized tour:
cd 12.04
cd translate-html/bin
./translate-html -t
3. Deploy the tour:
- This will vary depending on your setup, so simply make sure you copy the chromeless, css, img, js, pie and videos folders along with the videoplayer.swf file to your site. In addition, you will need the en folder and the folder for your language created in the previous step.
If you haven’t finished the translation for your language in Launchpad, you will need to complete the corresponding PO file before you run step 2. Just ask on the Ubuntu translators mailing list or on Launchpad in case you need help or are not familiar with PO files.
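If you prefer to work on the PO file locally, here is a rough sketch of my own (not part of the official instructions) of how the polib library installed above can be used to see what still needs translating; the file path is just an example:
import polib

# Hypothetical path: the PO file for your language inside the branch.
po = polib.pofile('po/ca.po')

# List the entries that still need a translation.
for entry in po.untranslated_entries():
    print('Needs translation:', entry.msgid)
    # entry.msgstr = '...'  # fill in the translation here

po.save('po/ca.po')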
For any issues, suggestions or enhancements, use the Online Tour’s Launchpad project to report bugs or submit improvements.
- Install ZFS on Debian GNU/Linux
- The ZFS Intent Log (ZIL)
- The Adjustable Replacement Cache (ARC)
- Exporting and Importing Storage Pools
- Scrub and Resilver
- Getting and Setting Properties
- Best Practices and Caveats
- Creating Filesystems
- Compression and Deduplication
- Snapshots and Clones
- Sending and Receiving Filesystems
- iSCSI, NFS and Samba
- Getting and Setting Properties
- Best Practices and Caveats
While taking a walk around the city with the rest of the system administration team at work today (we have our daily “admin walk”), a discussion came up about asynchronous writes and the contents of the ZFS Intent Log. Previously, as shown in the Table of Contents, I blogged about the ZIL at great length. However, I didn’t really discuss what the contents of the ZIL were, and to be honest, I didn’t fully understand it myself. Thanks to Andrew Kuhnhausen, this was clarified. So, based on the discussion we had during our walk, as well as some pretty graphs on the whiteboard, I’ll give you the breakdown here.
Let’s start at the beginning. ZFS behaves more like an ACID compliant RDBMS than a traditional filesystem. Its writes are transactions, meaning there are no partial writes, and they are fully atomic, meaning you get all or nothing. This is true whether the write is synchronous or asynchronous. So, best case is you have all of your data. Worst case is you missed the last transactional write, and your data is 5 seconds old (by default). So, let’s look at those two cases: the synchronous write and the asynchronous write. With synchronous, we’ll consider the write both with and without a separate logging device (SLOG).
The ZIL Function
The primary, and only, function of the ZIL is to replay lost transactions in the event of a failure. When a power outage, crash, or other catastrophic failure occurs, pending transactions in RAM may not have been committed to slow platter disk. So, when the system recovers, ZFS will notice the missing transactions. At this point, the ZIL is read to replay those transactions, and commit the data to stable storage. While the system is up and running, the ZIL is never read. It is only written to. You can verify this by doing the following (assuming you have a SLOG in your system). Pull up two terminals. In one terminal, run an IOZone benchmark. Do something like the following:
$ iozone -ao
This will run a whole series of tests to see how your disks perform. While this benchmark is running, in the other terminal, as root, run the following command:
# zpool iostat -v 1
This will clearly show you that when the ZIL resides on a SLOG, the SLOG devices are only written to. You never see any numbers in the read columns. This is because the ZIL is never read, unless transactions need to be replayed after a crash. Here is one of those seconds illustrating the write:
                                                           capacity     operations    bandwidth
pool                                                    alloc   free   read  write   read  write
------------------------------------------------------  -----  -----  -----  -----  -----  -----
pool                                                    87.7G   126G      0    155      0   601K
  mirror                                                87.7G   126G      0    138      0   397K
    scsi-SATA_WDC_WD2500AAKX-_WD-WCAYU9421741-part5         -      -      0     69      0   727K
    scsi-SATA_WDC_WD2500AAKX-_WD-WCAYU9755779-part5         -      -      0     68      0   727K
logs                                                        -      -      -      -      -      -
  mirror                                                2.43M   478M      0      8      0   108K
    scsi-SATA_OCZ-REVODRIVE_XOCZ-6G9S9B5XDR534931-part1     -      -      0      8      0   108K
    scsi-SATA_OCZ-REVODRIVE_XOCZ-THM0SU3H89T5XGR1-part1     -      -      0      8      0   108K
  mirror                                                2.57M   477M      0      7      0  95.9K
    scsi-SATA_OCZ-REVODRIVE_XOCZ-V402GS0LRN721LK5-part1     -      -      0      7      0  95.9K
    scsi-SATA_OCZ-REVODRIVE_XOCZ-WI4ZOY2555CH3239-part1     -      -      0      7      0  95.9K
cache                                                       -      -      -      -      -      -
  scsi-SATA_OCZ-REVODRIVE_XOCZ-6G9S9B5XDR534931-part5   26.6G  56.7G      0      0      0      0
  scsi-SATA_OCZ-REVODRIVE_XOCZ-THM0SU3H89T5XGR1-part5   26.5G  56.8G      0      0      0      0
  scsi-SATA_OCZ-REVODRIVE_XOCZ-V402GS0LRN721LK5-part5   26.7G  56.7G      0      0      0      0
  scsi-SATA_OCZ-REVODRIVE_XOCZ-WI4ZOY2555CH3239-part5   26.7G  56.7G      0      0      0      0
------------------------------------------------------  -----  -----  -----  -----  -----  -----
The ZIL should always be on non-volatile stable storage! You want your data to remain consistent across power outages. Putting your ZIL on a SLOG that is built from TMPFS, RAMFS, or RAM drives that are not battery backed means you will lose any pending transactions. This doesn’t mean you’ll have corrupted data. It only means you’ll have old data. With the ZIL on volatile storage, you’ll never be able to get the new data that was pending a write to stable storage. Depending on how busy your servers are, this could be a Big Deal. SSDs, such as those from Intel or OCZ, are good cheap ways to have a fast, low latency SLOG that is reliable when power is cut.
Synchronous Writes without a SLOG
When you do not have a SLOG, the application only interfaces with RAM and slow platter disk. As previously discussed, the ZFS Intent Log (ZIL) can be thought of as a file that resides on the slow platter disk. When the application needs to make a synchronous write, the contents of that write are sent to RAM, where the application is currently living, as well as sent to the ZIL. So, the data blocks of your synchronous write at this exact moment in time have two homes: RAM and the ZIL. Once the data has been written to the ZIL, the platter disk sends an acknowledgement back to the application letting it know that it has the data, at which point the data is flushed from RAM to slow platter disk.
In the image below, I tried to capture a simplified view of the process. The pink arrows, labeled as number one, show the application committing its data to both RAM and the ZIL. Technically, the application is running in RAM already, but I took it out to make the image a bit cleaner. After the blocks have been committed to RAM, the platter ACKs the write to the ZIL, noted by the green arrow labeled as number two. Finally, ZFS flushes the data blocks out of RAM to disk as noted by the gray arrow labeled as number three.
Image showing a synchronous write with ZFS without a SLOG
Synchronous Writes with a SLOG
The advantage of a SLOG, as previously outlined, is the ability to use low latency, fast disk to send the ACK back to the application. Notice that the ZIL now resides on the SLOG, and no longer resides on platter. The SLOG will catch all synchronous writes (well those called with O_SYNC and fsync(2) at least). Just as with platter disk, the ZIL will contain the data blocks the application is trying to commit to stable storage. However, the SLOG, being a fast SSD or NVRAM drive, ACKs the write to the ZIL, at which point ZFS flushes the data out of RAM to slow platter.
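As a side note, here is a small Python sketch of my own (not from the original discussion) showing the two usual ways an application issues a synchronous write; both paths below are what end up going through the ZIL, and the file paths are just placeholders:
import os

# Option 1: open with O_SYNC so every write() blocks until the data
# has reached stable storage (with ZFS, the ZIL, possibly on the SLOG).
fd = os.open("/tank/example/file1", os.O_WRONLY | os.O_CREAT | os.O_SYNC, 0o644)
os.write(fd, b"important record\n")
os.close(fd)

# Option 2: write normally, then call fsync(2) at the point where the
# data must be durable; the call blocks until the write is acknowledged.
fd = os.open("/tank/example/file2", os.O_WRONLY | os.O_CREAT, 0o644)
os.write(fd, b"important record\n")
os.fsync(fd)
os.close(fd)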
Notice that ZFS is not flushing the data out of the ZIL to platter. This is what confused me at first. The data is flushed from RAM to platter. Just like an ACID compliant RDBMS, the ZIL is only there to replay the transaction, should a failure occur, and the data is lost. Otherwise, the data is never read from the ZIL. So really, the write operation doesn’t change at all. Only the location of the ZIL changes. Otherwise, the operation is exactly the same.
As shown in the image, again the pink arrows labeled number one show the application committing its data to both the RAM and the ZIL on the SLOG. The SLOG ACKs the write, as identified by the green arrow labeled number two, then ZFS flushes the data out of RAM to platter as identified by the gray arrow labeled number three.
Image showing a synchronous write with ZFS with a SLOG
Asynchronous Writes
Asynchronous writes have a history of being “unstable”. You have been taught that you should avoid asynchronous writes, and if you decide to go down that path, you should prepare for corrupted data in the event of a failure. For most filesystems, there is good counsel there. However, with ZFS, it’s nothing to be afraid of. Because of the architectural design of ZFS, all data is committed to disk in transaction groups. Further, the transactions are atomic, meaning you get it all, or you get none. You never get partial writes. This is true even with asynchronous writes. So, your data is ALWAYS consistent on disk, even with asynchronous writes.
So, if that’s the case, then what exactly is going on? Well, there actually resides a ZIL in RAM when you enable “sync=disabled” on your dataset. As is standard with the previous synchronous architectures, the data blocks of the application are sent to a ZIL located in RAM. As soon as the data is in the ZIL, RAM acknowledges the write, and then flushes the data to disk, as would be standard with synchronous data.
I know what you’re thinking: “Now wait a minute! There are no acknowledgements with asynchronous writes!” Not always true. With ZFS, there is most certainly an acknowledgement, it’s just one coming from very, very fast and extremely low-latency volatile storage. The ACK is near instantaneous. Should there be a crash or some other failure that causes RAM to lose power, and the write was not saved to non-volatile storage, then the write is lost. However, all this means is you lost new data, and you’re stuck with old but consistent data. Remember, with ZFS, data is committed in atomic transactions.
The image below illustrates an asynchronous write. Again, the pink number one arrow shows the application data blocks being initially written to the ZIL in RAM. RAM ACKs back with the green number two arrow. ZFS then flushes the data to disk, as per every previous implementation, as noted by the gray number 3 arrow. Notice in this image, even if you have a SLOG, with asynchronous writes, it’s bypassed, and never used.
Image showing an asynchronous write with ZFS.
Disclaimer
This is how I and my coworkers understand the ZIL. This is after reading loads of documentation, understanding a bit of computer science theory, and understanding how an ACID compliant RDBMS works, which is architected in a similar manner. If you think this is not correct, please let me know in the comments, and we can have a discussion about the architecture.
There are certainly some details I am glossing over, such as how much data the ZIL will hold before it’s no longer utilized, timing of the transaction group writes, and other things. However, it should also be noted that aside from some obscure documentation, there don’t seem to be any solid examples of exactly how the ZIL functions. So, I thought it would be best to illustrate that here, so others aren’t left confused like I was. For me, images always make things clearer to understand.
We’re back with the eighth episode of Season Six of the Ubuntu Podcast from the UK LoCo Team! Alan Pope, Mark Johnson, Tony Whitmore, Laura Cowen and The Podcats are all set in Studio A with cake and an interview.
In this week’s show:-
- We interview Michael Hall from the Community team about CoreApps.
- We share some GUI Luv: GNOME-Do.
- We chat about moonlighting on The Doctor Who Podcast and looking at apps for Ubuntu Touch.
- And, of course, we go over your marvellous feedback.
Please send your comments and suggestions to: email@example.com
Join us on IRC in #ubuntu-uk-podcast on Freenode
Leave a voicemail via phone: +44 (0) 203 298 1600, sip: firstname.lastname@example.org and skype: ubuntuukpodcast
Follow our twitter feed http://twitter.com/uupc
Find our Facebook Fan Page
Follow us on Google Plus
Leave us some segment ideas on the Etherpad
Today I got in touch with Nathan Heafner, a community member who is actively participating and who wanted me to leave you with this message:
“With Ubuntu 13.04 (Raring Ringtail) being released this month we are knee deep in beta testing, bug tracking, and testing out new features. With all this testing going on, it’s important for us to stand back, take a few minutes, and examine ourselves and our community. Thus, the Ubuntu Community Survey.
Generally speaking, this survey encompasses questions around contributions to Ubuntu, the drivers for those contributions, and general user data (age, gender). With this data we hope to better understand our growing community and refocus resources where they are most needed.
The survey should take less than 3 minutes to complete, and the results will be released to the public for review. Your survey submission is greatly appreciated. There will be a follow up blog post pointing out the highlights.
Remember that, if you want to take the survey, you can click here.”
I learned about GTD 5 or 8 years ago, and pretty much immediately started trying to use it. Ever since then I have kept all of my information in one gtd folder, with Projects and Reference folders, a nextactions file, etc. I’ve blogged before about my tickler file, which frankly rocks and never lets me down.
However, a few months ago I decided I wasn’t happy with my nextactions file. Sitting down for a bit to think about it, it was clear that the following happens: some new project comes in. I only have time to jot a quick note, so I do so in nextactions. Later, another piece of information comes in, so I add it there. Over time, my nextactions file grows and is no longer a nextactions file.
I briefly tried simply not using the Projects/ directory, and keeping an indented/formatted structure in the nextactions file. But that does not work out: I spend most of my time either gazing at too much information, and/or ignoring parts which I hadn’t been working on recently. (I also briefly tried ETM and bug, which both are *very* neat, but they similarly didn’t work for me for GTD.)
I have a Projects directory, so why am I not using it? Doing so takes several steps (think of a name, make the directory, open a file in it, make the notes, exit) and after that I don’t have a good system for managing the project files. Looking at a project again involves several steps – cd into gtd/Projects, look around, cd , look again. Clearly, project files needed better tools.
So I wrote up a simple ‘project’ script, with a corresponding bash_completion file. If info comes in for a trip I have to take in a few months, I can simply
project start trip-sandiego-201303
or
p s trip-sandiego-201303
This creates the project directory and opens vim with three buffers, one for each of the three files – a summary, actions, and log. (‘project new’ will create without pulling up vim with those files.) Later, I can
project list
or (for short)
p l
to list all open projects,
p e tr<tab>
to edit the project – which again opens the same files, or
p cat tr<tab>
to cat the files to stdout. I’ve added a ‘Postponed’ directory for projects which are on hold, so I can
project postpone trip-sandiego-201303
or just
p po tr<tab>
to temporarily move the project folder into Postponed, or
p complete tr<tab>
to move the project folder into the Completed/ directory.
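For the curious, here is a rough Python sketch of the same workflow; the real script is a small bash script with a matching bash_completion file (linked below), and the directory layout and file names here are my own assumptions:
#!/usr/bin/env python3
import os
import subprocess
import sys

GTD = os.path.expanduser('~/gtd')
DIRS = {'open': 'Projects', 'postponed': 'Postponed', 'done': 'Completed'}
FILES = ['summary', 'actions', 'log']

def start(name):
    # Create the project directory and open the three files in vim.
    path = os.path.join(GTD, DIRS['open'], name)
    os.makedirs(path)
    subprocess.call(['vim'] + [os.path.join(path, f) for f in FILES])

def list_open():
    for name in sorted(os.listdir(os.path.join(GTD, DIRS['open']))):
        print(name)

def move(name, dest):
    # Used for both 'postpone' and 'complete'.
    dest_dir = os.path.join(GTD, DIRS[dest])
    os.makedirs(dest_dir, exist_ok=True)
    os.rename(os.path.join(GTD, DIRS['open'], name), os.path.join(dest_dir, name))

if __name__ == '__main__':
    cmd = sys.argv[1]
    if cmd in ('start', 's'):
        start(sys.argv[2])
    elif cmd in ('list', 'l'):
        list_open()
    elif cmd in ('postpone', 'po'):
        move(sys.argv[2], 'postponed')
    elif cmd == 'complete':
        move(sys.argv[2], 'done')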
I’ve been using this for a few months now, and am very happy with the result. The script and completion file are in lp:~serge-hallyn/+junk/gtdproject. It’s really not much, but so useful!
They are calling it OpsWorks; it supports deploying and scaling web apps and setting up load balancer layers with a few clicks. Initially the list of stack scripts is not too big, supporting only the following:
- Load balancer
- App Server
- Static Web Server
- Rails App Server
- PHP App Server
- Custom (Not tested. I don't know what it is)
Apart from missing Python apps and other DBs, I think this has a lot of potential.
The cool part is the possibility of choosing between Apache 2 or Nginx, and Ubuntu 12.04 LTS instead of Amazon Linux.
The service is free, but use it carefully, because it automatically sets up EC2 machines, load balancers and other AWS-related features to make your stack run. It is also interesting because you can access your machines remotely via SSH and manage them via your AWS panel or API, like normal EC2 machines.
If you choose to use Ubuntu Server, you can set up juju to make your stack more powerful, but be careful to avoid conflicts with OpsWorks.
See it in action:
And, of course, to test it:
What do you think about it?
I use the PassHash firefox extension to generate site-specific strong passwords. The idea behind the extension is that a master password and a siteTag (e.g. the domain name) are used to generate a sha1 hash. This hash is used as the password for the website. In Python it's essentially this code:
import hashlib
import hmac
from base64 import b64encode

h = hmac.new(master_pass, site_tag, hashlib.sha1)
print(b64encode(h.digest())[:hash_len])
I want a command-line utility that can output PassHash-compatible hashes when I use w3m (or if the extension stops working for some reason).
To my delight I discovered that the upstream git repo of PassHash already has a Python helper to generate PassHash-compatible passwords. I added some tweaks to use Python's argparse, and now I'm really happy with it:
$ ./tools/passhash.py --hash-size 14 slashdot.org
Please enter the master key:
KPXveo7bq7j1%X
Nice and hard to brute-force, and it matches what the extension generates.
There are 4 product listings representing each of the officially supported devices; grouper (nexus 7), maguro (galaxy nexus), mako (nexus 4), and manta (nexus 10). You can help by installing the new images following the installation instructions, and then reporting your results on the isotracker.
There are handy links for download and bug information at the top of the testcases to help you out. If you do find a bug, please use the instructions to report it and add it to your result. Never used the tracker before? Take a look at this handy guide or watch the youtube version.
Once all the kinks and potential issues are worked out (your feedback requested!) the quantal images will cease, and moving forward, the team will continue to provide daily images and participate in testing milestones as part of the 's' cycle.
As always please contact me if you run into issues, or have a question.
Thank you in advance for your help, and happy testing everyone!
And Python continues to hold little surprises. Just today, I solved a bug in an Ubuntu package that's been perplexing us for weeks. I'd looked at the code dozens of times and saw nothing wrong. I even knew about the underlying corner of the language, but didn't put them together until just now. Here's a boiled down example, see if you can spot the bug!
def bar(i):
    if i == 1:
        raise KeyError(1)
    if i == 2:
        raise ValueError(2)

def bad(i):
    e = None
    try:
        bar(i)
    except KeyError as e:
        pass
    except ValueError as e:
        pass
    print(e)
Here's a hint: this works under Python 2, but gives you an UnboundLocalError on the `e` variable under Python 3.
The reason is that in Python 3, the targets of except clauses are `del`d from the current namespace after the try...except clause executes. This is to prevent circular references that occur when the exception is bound to the target. What is surprising and non-obvious is that the name is deleted from the namespace even if it was bound to a variable before the exception handler! So really, setting `e = None` did nothing useful!
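To make that concrete, here is my own small illustration (not code from the affected package) of what Python 3 effectively does with an except clause, per the language reference:
e = None                # pre-binding, like in the buggy example
try:
    {}['missing']       # raises KeyError
except KeyError as e:
    try:
        pass            # the handler body runs here with e bound
    finally:
        del e           # Python 3 inserts this implicitly
print(e)                # NameError: even the pre-existing binding is gone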
Python 2 doesn't have this behavior, so in some sense it's less surprising, but at the expense of creating circular references.
The solution is simple. Just use a different name to capture and use the exception outside of the try...except clause. Here's a fixed example:
def good(i):
    exception = None
    try:
        bar(i)
    except KeyError as e:
        exception = e
    except ValueError as e:
        exception = e
    print(exception)
So even after almost 20 years of hacking Python, you can still experience the thrill of discovering something new.
- Install procenv inside the container:
$ sudo apt-get install -y procenv
- Shutdown the container:
$ sudo shutdown -h now
- Boot the container (mine is called "raring" in this example) like this:
$ sudo lxc-start -n raring --console-log /tmp/lxc-console.log \
      -- /usr/bin/procenv --file=/dev/console --exec \
      -- /sbin/init --debug
- View the /tmp/lxc-console.log logfile in the host environment.
Because Lisp Flavored Erlang is 100% compatible with Erlang Core, it has access to all the Erlang libraries, OTP, and many third-party modules, etc. Naturally, this includes the Erlang HTTP client, httpc. Today we're going to be taking a look at how to use httpc from LFE. Do note, however, that this post is only going to provide a taste, just enough to give you a sense of the flavor, as it were.
If you would like more details, be sure to not only give the official docs a thorough reading, but to take a look at the HTTP Client section of the inets Reference Manual.
Note that for the returned values below, I elide large data structures. If you run them in the LFE REPL yourself, you can view them in all their line-consuming glory.
Let's get started with a simple example. The first thing we need to do is start the inets application. With that done, we'll then be able to make client requests:
Now we can perform an HTTP GET: This just makes a straightforward HTTP request (defaults to GET) and returns a bunch of associated data:
- HTTP version
- status code
- reason phrase
For those not familiar with Erlang patterns, we've just told LFE the following:
- the return value of the function we're going to call is going to be a tuple composed of an atom ('ok) and another tuple
- the nested tuple is going to be composed of a tuple, some headers, and a body
- the next nested tuple is going to be composed of the HTTP version, status code, and status code phrase
Once the request returns, we can check out the variables we set in the pattern:
That's great if everything goes as expected and we get a response from the server. What happens if we don't?
Well, errors don't have the same nested data structure that the non-error results have, so we're going to have to make some changes to our pattern if we want to extract parts of the error reason. Pattern matching for just the 'error atom and the error reason, we can get a sense of what that data structure looks like:
Looking at just the data stored in the reason variable, we see:
If you check out the docs for httpc request and look under "Types", you will see that the error returned can be one of three things:
- a tuple of connect_failed and additional data
- a tuple of send_failed and additional data
- or just unspecified additional data
Now that we've taken a quick look at the synchronous example, let's make a foray into async. We'll still be using httpc's request function, but we'll need to use one of the longer forms where extra options need to be passed, since that's how you tell the request function to perform the request asynchronously and not synchronously.
For clarity of introducing the additional options, we're going to define some variables first: You can read more about the options in the httpc docs.
With the variables defined, let's make our async call: The sender receives the results, and since we sent from the LFE REPL, that's the process that will receive the data. Let's keep our pattern simple at first -- just the request id and the result data:
Needless to say, parsing the returned data is a waste of Erlang's pattern matching, so let's go back and do that again, this time with a nice pattern to capture the results. We'll need to do another request, though, so that something gets sent to the shell:
Now we can set up a pattern that will allow us to extract and print just the bits that we're looking for. The thing to keep in mind here is that the scope for the variables is within the receive call, so we'll need to display the values within that scope:
This should demonstrate the slight differences in usage and result patterns between the sync and async modes.
Well, that about sums it up for an intro to the HTTP client in LFE! But one last thing, for the sake of completeness. Once we're done, we can shut down inets:
These are very exciting times for Ubuntu. In so many parts of our community so many awesome things are happening every day and it’s great that many talk about it so you can get a sense of what’s happening.
We’ve been doing Ubuntu Development hangouts for a while now, but in the last few weeks the pace has increased even more. If you have missed some of the hangouts, have a look at the Ubuntu On Air youtube channel (better yet, subscribe to it) to get an idea of what happened recently, what’s planned and where you can get involved. Here are some recent examples:
- A discussion about image-based updates for Ubuntu Touch
- Didier Roche talking about autolanding stuff in Ubuntu
- Thomas Voß and Kevin Gunn explaining what to expect from Mir and the next Unity
- Ricardo Salveti and Sergio Schvezov talking about what’s happening with Ubuntu Touch
Of course there are many, many more.
Today (2013-04-18) we are going to have some more special people talking to us, so make sure you’re going to be there, at ubuntuonair.com!
- At 13:30 UTC we are going to have Loïc Minier, Seth Forshee, Thomas Voß, Michael Frey, Ricardo Salveti, Alex Chiang, Martin Pitt, Tony Espy and Matthew Fischer on the channel, who will discuss some of the main choices around how and where power management will happen (kernel driver model; supporting Android and mainline kernels, indicators and service daemons vs. power manager daemon)
- At 16:00 UTC Robert Park and Ken vanDine will talk us through the friends-app and its API.
I’m very much looking forward to both!
You can help!
I’m looking for a co-presenter who knows a bit about Ubuntu Development and who can help host some of the sessions. Bonus points if you live in a different timezone (I’m in CET right now), so we can more easily cover different times.
Thanks a lot to José Antonio Rey who helps a lot with keeping Ubuntu On Air in shape!
If you have something you’d like to talk about (roughly in the area of Ubuntu Development), please let me know as well!
In this small post, I’m going to explain some necessary steps for fixing bugs in Ubuntu or any project in Launchpad (https://launchpad.net/)
1- First we look for a bug to work on; for this example we will work on bug #1162057, though we could also work on a bug we reported ourselves. Once we are on the bug's page we can review a lot of information that will help us find a solution.
On the first line we can see which package the bug belongs to (Affects), the current status of the bug (Status), the importance of the bug for the project (Importance), the person assigned to fix the bug (Assigned to), and the Milestone.
Next we find the bug description (Bug Description); this information is added by the person who reported the bug and is our main source of technical data for solving it.
2- At this point we have decided to work on this bug, so the next step is to confirm that it really is a bug; to do that we try to reproduce it. Here we manage to reproduce the bug simply by running the application; as you can see, it is quite obvious.
Right after that we can change the status to Confirmed, but we will not stop there: if after some time nobody adopts the bug (another way of saying works on it), we take it ourselves by changing the status to In Progress. At this point we can change the assignment; if we are skilled and know the platform well we can assign it to ourselves, otherwise we assign someone who can help us with the review. That person will be in charge of reviewing our solution and will later mark the bug as fixed.
3- Now we set out to find a solution. For this bug we reviewed the repositories the ISO images are obtained from; there we can see that there are two entries per release, something quite new and possibly the problem.
With the likely problem identified, we get the application's source code to start testing. We do this with Bazaar (bzr) from the terminal (there are graphical tools, such as Bazaar Explorer, that can be used for the same purpose).
Using the bzr branch command we get the project's source code in our current location, in my case /home. In this step you do need to apply a lot of judgment and ask the project's maintainers if you have any doubts.
(We take a few days to make the necessary edits to the code and review how it works.)
Once you have finished editing you can review your changes with the bzr diff command; note that you must be inside the project's directory.
4- With the fix applied to the source code, we upload our changes to the project. First we add a message about our improvement; this is done with the command bzr commit -m “our improvement message goes here“. Then we push our code, and here is something interesting: we are not going to push the code directly to the project, since only the project's maintainers do that. Instead, our changes go to our Launchpad profile and are linked to the project with a reference, using the following form: bzr push lp:~usuario_launchpad/proyecto/tu_mejora, where tu_mejora is a name describing what is new in your code.
5- In the previous step we pushed the improvement's code to Launchpad; now we go to its website, where we can link it to the bug we are fixing and propose it for merging. When it is proposed for merging, one of the project's maintainers will have to verify it.
6- Once the maintainer of the package or application approves your improvement, it is just a matter of time before you see it in action. For some applications releasing improvements can take longer than for others, but in the end rest assured they will be published. And so we can see TestDrive working normally and downloading ISOs as always.
I hope this little post is useful to you and that more people are encouraged to contribute to a free software project, or to Ubuntu itself! Keep in touch.
"I like KDevelop, KDE and Debian packages combined with the fast update service of Kubuntu."
"I have to hand it to the Kubuntu package team, they really get the updates out fast"
"Everything is highly customizable."
Add your reason today :)
If you aren’t aware, Canonical is planning on writing a new display server, competing with both X11 and Wayland, named Mir.
I’ll state my opinion right now: I really do NOT like this move. The rest of the post is the “why” of my opinion. I have not written all of the “why”, because some of the reasons I thought of lacked enough proof to back what I said (such as Canonical becoming another M$ or Apple, trying to take over the linux world ).
First, the segregation. Let’s assume that Mir is, as planned, “A … that is extremely well-defined, well-tested and portable.” (this is very hard to do; in fact, only a very small number of software projects and libraries are this good). Now this would cause a horrible issue of segregation, because application developers that write applications for Mir will exclude people who do not have Mir (kind of an obvious issue though). Most application developers will use a toolkit, such as Qt or GTK+, which provides an abstraction layer that will allow the applications to run on any display server (though, of course, this requires patching them to support the display servers, but Canonical, IIRC, has promised to do this, at least for Qt), so this is less of an issue. The bigger issue would be with the 3rd party graphics drivers. Both major GPU manufacturers (Nvidia and ATI) are already having issues with giving X11 support to their drivers (though Nvidia has considerably fewer issues than ATI). Now comes along Wayland, an alternative to X11. Back in 2010, Nvidia clearly stated that they do not have any plans to support Wayland (they seem to have changed their mind though, or at least considered it), and ATI does not plan on supporting Wayland anytime soon. This is reasonable for the companies and for Linux. As long as X11 is still supported until both companies officially support Wayland, everything should be somewhat okay. But then comes along Mir, a company-led alternative to both Wayland and X11, for their operating system. Three different display servers for Linux, and both major GPU manufacturers are already having issues with one. This is just ridiculous. And anyway, just think of the users and distros, trying to figure out which one they should use.
Lastly, let’s say that they were able to accomplish their goal. Why do they need a new display server in the first place? Why can’t they just use Wayland? There is a section on the wiki about this, which I tried to read, but it was quite vague. All that I could understand is that they wanted support for 3D input devices. So why don’t they just talk to the Wayland developers about this and maybe help them implement it if progress is not going fast enough? Or if they don’t want to have it in wayland, just fork it, don’t start writing your own. “In summary, we have not chosen Wayland/Weston as our basis for delivering a next-generation user experience as it does not fulfill our requirements completely”. Oh, come on. Wayland is open-source, you can change it if you need to, you know. Or if they don’t want the changes, you can just fork it. I know I’m repeating my last sentence, but this is just ridiculous.
So to summarize, I’m not that crazy about Mir.
I know I said things rather bluntly, and I’m expecting that most of the reactions to this will be rather harsh, but I feel that it was important to write this. It’s not because I hate Ubuntu that I write this; I really like the initiative, just not the execution (which is why I write these kinds of posts… maybe if enough people show their disapproval towards their methods, they might change their minds ^_^). Also, if you think that any of the claims I made were false, let me know, I’m not that closed-minded about it.
One thing I love about Linux is the ability to easily try new applications. After all, we are open source and they are being created all the time. The command line makes it easy to add and remove Personal Package Archives. However, there is a tool which will let you add, remove, search and manage Personal Package Archives (PPAs) and more, just a little more easily; I will talk about it below. First, the basics. You can easily add a PPA from the command line with the following commands.
For example, in three easy commands I can install the application CLI Companion:
sudo add-apt-repository ppa:clicompanion-devs/clicompanion-nightlies
sudo apt-get update
sudo apt-get install clicompanion
Because of my endless tinkering and checking out the latest software our FOSS developers have created, I ended up with quite a large collection of Personal Package Archives (PPAs), some of which I no longer used.
At first I was unsure how to get rid of these. Today I did a little research and wanted to share with you what I found. This includes a cool new application called Y PPA Manager. Whether it is the command line or a fancy GUI app, we will have you cleaning up your collection of Personal Package Archives in no time.
From the terminal you can use a very similar command that you used to add the PPA.
sudo apt-add-repository --remove ppa:<ppa_name>
Then run the following to download the package lists from the repositories and "update" them to get information on the newest versions of packages and their dependencies. It will do this for all repositories and PPAs.
sudo apt-get update
Saved The Best For Last
I came across a project which made a top application list on a popular website. The features definitely piqued my interest. The application is Y PPA Manager. Y PPA Manager is a tool which simplifies this process of managing Personal Package Archives (PPAs). It allows adding, deleting and purging PPAs easily. You can also search and install PPAs from Launchpad repositories by entering the name of an application. I gave it a spin today and I have to say so far I like it. You can install Y PPA Manager with the following commands:
sudo add-apt-repository ppa:webupd8team/y-ppa-manager
sudo apt-get update
sudo apt-get install y-ppa-manager
The main interface is quite self-explanatory. You can add a new PPA with the Add button and delete added sources with the Remove button. You can get a list of your packages from “List Packages”. The Advanced option allows backing up, restoring and purging PPAs.
Main interface
What makes this application really useful is its PPA search ability. To easily find a Launchpad PPA, click on Search all Launchpad PPAs and enter an application name. You can also enable the Deep Search option for a more advanced search.
Whichever option you choose, your PPAs will now be much more manageable.
But more than all of that, several team members have stepped out of their comfort zones and gone to work on one of the testing tools we as a team utilize. The tool is called "Testdrive" and is written in python. Now, one of the great things I love to espouse about with QA is the opportunity to work on many different things. There are needs to fit all interests, and if you are willing, the capability to learn.
In this instance, there is an opportunity to learn a little python and to work with a new team to help keep a testing tool alive. I'm happy to see that the same tool that was rendered broken in January by updates is now alive and well, with brand new contributors, fresh patches and even a release! Many thanks to smartboyhw, noskcaj, SergioMeneses, phillw, and the others who have reached out to ensure the tool that ships in raring still works. Thanks as well to the testdrive development team for engaging with us, reviewing merge proposals, and helping to ensure testdrive still works.
I look forward to a bright future of new and improved testing tools. Specifically to those who contributed patches: with your new coding abilities, I can't wait to see what will happen next cycle! *wink, wink*