The Ubuntu SDK preview is just over two months old, but we’ve already seen a lot of development starting with it. Read below for a high-level look at some of the apps that are currently being written.

Core Apps
Shortly after making Ubuntu Touch and the SDK preview announcements, we kicked off an effort to develop the core applications for Ubuntu devices in the open with full community involvement. We identified a number of desired applications, recruited interested community contributors, and dedicated design and project management resources from Canonical staff.
The actual development phase for these apps has only recently started, but some of them have already shown a huge amount of progress.

Calculator

The Calculator app has made huge progress, and its developers have been working closely with Canonical designers to work out the user interface and user experience.

Calendar

Likewise, the Calendar app developers have been iterating over their UI/UX with the design team, and are making fast progress on the front-end.

Clock

The Clock app is also sporting a functional, stylish analog dial that shows your current time, with screens staged for more features to come.

Weather

Even the Weather app has seen some UI work recently.

But wait! There’s more!
The Core Apps developers aren’t the only ones working with the Ubuntu SDK preview; we’ve seen a number of second- and third-party app developers writing new apps or porting existing ones. Here’s a short list of the ones that I’ve seen in development:

More or Less

A simple number-guessing game.

Alternative Weather App

Community developer Joseph Mills independently created another weather app.

Gwibber/Friends

Ken Vandine, developer of the Gwibber microblogging client, has started porting it to a QML front-end using the Ubuntu SDK.

Sudoku Touch

Everybody’s favorite number puzzle: you can now play Sudoku on Ubuntu Touch using the SDK components.

Akari

Or try the lesser-known number puzzle, Akari.

SameGame

If number puzzles aren’t your thing, you can always group and pop some bubbles with SameGame.

Chess Engine

Vibhav Pant gives us hope that a mobile chess app will be available soon, by porting his chess engine to the Ubuntu SDK.

Novacut

Speaking of things to come, Novacut developer David Jordan has also been playing with the Ubuntu SDK and gives us this teaser.

OMG!Ubuntu! Reader

But if you just want to kick back and catch up on some Ubuntu news, you can try the OMG!Ubuntu! Reader.

Google Reader

And for the rest of your news reading, you can use this Google Reader client built with the Ubuntu SDK.

WhosThere

Keep up with your friends using this WhatsApp client for Ubuntu Touch.

Jono Head
And then of course there’s this. I don’t even…
It’s been two weeks since Rick Spencer made the case for a rolling release approach in Ubuntu. Having a rolling release is one of the very top suggestions from the hardcore Ubuntu user community, and after years of it being mooted by all and sundry I thought it deserved the deep consideration that Rick and his team, who represent most of Canonical’s direct contributions to Ubuntu, brought to the analysis.
It’s obviously not helpful to have mass hysteria break out when ideas like this get floated, so I would like to thank everyone who calmly provided feedback on the proposal, and blow a fat raspberry at those of you who felt obliged to mount soapboxes and opine on The End Of the World As We Know It. Sensible people the world over will appreciate the dilemma of being asked to take user feedback seriously, yet being accused of unilateralism when exploring options.
Change is warranted. If we want to deliver on our mission, we have to be willing to stare controversy in the face and do the right thing anyway, recognising that we won’t know if it’s the right thing until much later, and for most of the intervening time, friends and enemies alike will go various degrees of apoplectic. Our best defense against getting it wrong is to have a strong meritocracy, which I think we do. That means letting people like Rick, who have earned their leadership roles, explore controversial territory.
So, where do we stand? And where do I stand? What’s the next step?
What makes this conversation hard is the sheer scale of the Ubuntu ecosystem, all of which is profoundly affected by any change. Here are the things I think we need to optimise for, and the observations that I think we should structure our thinking around:
Releases are good discipline, cadence is valuable.
Releases, even interim releases, create value for parts of the Ubuntu ecosystem that are important. They allow us to get more widespread feedback on decisions made in that cycle – what’s working, what’s not working. Interestingly, in the analysis that played into Rick’s proposal, we found that very few institutional users depend on extended support of the interim releases. Those who care about support tend to use the LTS releases and LTS point releases.
Release management detracts from development time, and should be balanced against the amount of use that release gets.
While reaffirming our interest in releases, I think we established that the amount of time spent developing in a cycle versus doing release management is currently out of whack with the extent to which people actually DEPEND on that release management, for interim releases, on the desktop. On the server, we found that the interim releases are quite heavily used in the cloud, less so on physical metal.
Daily quality has raised the game dramatically for tip / trunk / devel users, and addresses the Rolling Release need.
There’s widespread support for the statement that ‘developers can and should use the daily development release’. The processes that have been put in place make it much more reliable for folks who want to track development, either as a contributor to Ubuntu or as someone who ships software for Ubuntu and wants to know what’s happening on the latest release, to use Ubuntu throughout the development cycle. For those of you not aware, uploads to the edge get published in a special ‘pocket’, and only moved into the edge if they don’t generate any alarms from people who are on the VERY BLEEDING EDGE. So you can use Raring (without that bleeding edge pocket) and get daily updates that are almost certain not to bork you. There is a real community that WANTS a rolling release, and the daily development release of Ubuntu satisfies this need already.
LTS point releases are a great new enhancement to the LTS concept.
On a regular basis, the LTS release gets a point update which includes access to a new, current kernel (supporting new hardware without regressing the old hardware on the previous kernel, which remains supported), new OpenStack (via the Cloud Archive), and various other elements. I think we could build on this to enhance the LTS with newer and better versions of the core UX (Unity) as long as we don’t push those users through a major transition in the process (Unity/Qt, anybody?).
Separating platform from apps would enhance agility.
Currently, we make one giant release of the platform and ALL APPS. That means an enormous amount of interdependence, and an enormous bottleneck that depends largely on a single community to line everything up at once. If we narrowed the scope of the platform, we would raise the quality of the platform. Quite possibly, we could place the responsibility for apps on the developers that love them, giving users access to newer versions of those apps if (and only if) the development communities behind them want to do that and believe it is supportable.
That’s what I observed from all the discussion that ensued from Rick’s proposal.
Here’s a new straw man proposal. Note – this is still just a proposal. I will ask the TB to respond to this one, since it incorporates both elements of Rick’s team’s analysis and feedback from wider circles.

Updated Ubuntu Release Management proposal
In order to go even faster as the leading free software platform, meet the needs of both our external users and internal communities (Unity, Canonical, Kubuntu, Xubuntu and many many others) and prepare for a wider role in personal computing, Ubuntu is considering:
1. Strengthening the LTS point releases.
Our end-user community will be better served by higher-quality LTS releases that get additional, contained updates during the first two years of their existence (i.e. as long as they are the latest LTS). Updates to the LTS in each point release might include:
- addition of newer kernels as options (not invalidating prior kernels). The original LTS kernel would be supported for the full duration of the LTS, interim kernels would be supported until the subsequent LTS, and the next LTS kernel would be supported on the prior LTS for the length of that LTS too. The kernel team should provide a more detailed updated straw man proposal to the TB along these lines.
- optional newer versions of major, fast-moving and important platform components. For example, during the life of 12.04 LTS we are providing as optional updates newer versions of OpenStack, so it is always possible to deploy 12.04 LTS with the latest OpenStack in a supported configuration, and upgrade to newer versions of OpenStack in existing clouds without upgrading from 12.04 LTS itself.
- required upgrades to newer versions of platform components, as long as those do not break key APIs. For example, we know that the 13.04 Unity is much faster than the 12.04 Unity, and it might be possible and valuable to backport it as an update.
2. Reducing the amount of release management, and duration of support, for interim releases.
Very few end users depend on 18 months of support for interim releases. The proposal is to reduce the support for interim releases to 7 months, thereby providing constant support for those who stay on the latest interim release, or any supported LTS releases. Our working assumption is that the latest interim release is used by folks who will be involved, even if tangentially, in the making of Ubuntu, and LTS releases will be used by those who purely consume it.
3. Designating the tip of development as a Rolling Release.
Building on current Daily Quality practices, to make the tip of the development release generally useful as a ‘daily driver’ for developers who want to track Ubuntu progress without taking significant risk with their primary laptop. We would ask the TB to evaluate whether it’s worth changing our archive naming and management conventions so that one release, say ‘raring’, stays the tip release so that there is no need to ‘upgrade’ when releases are actually published. We would encourage PPA developers to target the edge release, so that we don’t fragment the ‘extras’ collection across interim releases.
That is all.
It’s time again to call for more testers – Xubuntu Raring Beta 1 is out.

Why help with testing?
Testing is an excellent and easy way to get involved with Xubuntu. It’s a vital part of our release cycle and anyone with a virtual machine (or even better, a spare computer!) can help out with it. When you do testing you will work with most of the people involved in Xubuntu. Don’t be afraid, we won’t bite you!

What do I need to do?
There are different ways of testing the ISOs for all of our releases. Here’s a quick overview of the different tests we need to run:
- Installation testing to make sure our ISOs are installable
- Live CD testing to make sure the live CD environment works as expected (and to make sure persistency works on live USB)
- Post-installation testing to make sure our applications work as expected
- Upgrade testing to make sure upgrades from old releases work as expected
All the results from the different tests are reported and recorded on the Ubuntu ISO Tracker. You will need an Ubuntu Single Sign-On account to log in and send results.
Finally, to run a test and report it so it helps the Xubuntu team, you need to do the following:
- Go to the Ubuntu ISO Tracker and Log in
- Click Raring Beta 1 and select your infrastructure from the Xubuntu product at the bottom of the page
- Download the appropriate ISO (see Link to download information at the top of the page for download links as well as zsync commands)
- Select a test you want to run – it’s best to start with an installation test and then advance to post-installation tests to get the most out of your time
- On any test page, you should see the testcase instructions (if you don’t, click on Testcase) and follow them step by step
- If you find bugs while running the test, add them to the report and if they don’t exist, file them
- In the Bugs to look for section you will see bugs that people have been experiencing with the same test before – specifically look out for these
- Once you’ve finished with the testcase, report your results; select the overall result for the test and list any bugs you experienced during testing. Remember to click Submit result when you’re done – if you don’t do this, the Xubuntu team doesn’t get any benefit from the test!
All of the different testing areas for Xubuntu follow more or less the same pattern. To get detailed instructions with pictures on how to report test results refer to the ISO testing Walkthrough on the Ubuntu wiki. You can also ask for help in our developer IRC channel #xubuntu-devel on Freenode.
Once you get more into testing, there are ways to make downloading the latest images and testing them easier, including zsyncing new images and using TestDrive to manage your tests. To read more about the Ubuntu testing framework and the Ubuntu Quality Assurance team, read the QA Team wiki.
- We are building a private cloud. Ubuntu server instances running on that are cloned from a master template
- While most clouds provide what I call "Instance Identity" information through some pre-known web service URI, in our case we utilize VMware's "Invoke-VMScript" API to run scripts inside the cloned template, thus customizing it and giving it its identity, then puppet takes it from there. Note that we're using a simple vCenter+ESXi (no vCloud stuff here) no shared storage even!
- Editing the template, and publishing it across the cloud (a topic for another post) takes considerable time! I wanted a way to be able to quickly update my identity scripts without having to re-build and re-publish images
- A script with a trivially simple (thus mostly fixed) "bootstrap" section, which auto-updates itself from GitHub and relaunches its new self!
Note: to run this successfully, the user it runs as should have passwordless sudo rights to run "ip" and to re-run the script itself :) Got cool ideas, thoughts or comments? Leave me a message
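As a hedged sketch of such a self-updating "bootstrap" section (the GitHub URL and file names below are hypothetical, not the author's actual script), the idea could look something like this:

```shell
#!/bin/sh
# Hypothetical self-update bootstrap; the repository URL is a placeholder.
REPO_RAW="https://raw.githubusercontent.com/example/identity-scripts/master/identity.sh"

# Succeeds (exit 0) when the fetched copy is non-empty and differs
# from the currently running script.
self_update_needed() {
    [ -s "$1" ] && ! cmp -s "$1" "$2"
}

bootstrap() {
    tmp="$(mktemp)"
    # Fetch the latest copy; on any network failure, keep running as-is.
    if wget -q -O "$tmp" "$REPO_RAW" && self_update_needed "$tmp" "$0"; then
        cp "$tmp" "$0" && chmod +x "$0"
        rm -f "$tmp"
        exec "$0" "$@"   # relaunch the freshly updated copy
    fi
    rm -f "$tmp"
}

bootstrap "$@"
# ...the actual identity customization (hostname, IPs, puppet run) follows...
```

The `exec` at the end of the update branch is the key trick: the old process is replaced entirely, so the rest of the script always runs from the freshest copy.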
Welcome to the Ubuntu Weekly Newsletter. This is issue #307 for the week March 4 – 10, 2013, and the full version is available here.
In this issue we cover:
- Mir + Unity QML + Unity APIs = Unity
- Not convinced by rolling releases
- Follow Up from “Let’s Discuss Interim Releases”
- Virtual Ubuntu Developer Summit
- Welcome New Members and Developers
- Ubuntu Stats
- Ubuntu Global Jam, Novosibirsk 2013
- Measuring Jam
- Ubuntu Global Jam Brazil 2013
- Recent announcements and the Ubuntu Community
- James Hunt: Upstart 1.7 released & A basic Upstart Events GUI (and cli!:-) tool
- Kaj Ailomaa: Future Ubuntu Studio changes
- Tiago Hillebrandt: Downloading photos from Ubuntu Touch to your desktop
- Valorie Zimmerman: Thoughts and worries about the proposed new Ubuntu processes
- Nicholas Skaggs: Staring down the scarecrow; should ubuntu roll?
- Ubuntu 13.04 beta touts search privacy – before it hooks in eBay, IMDb etc
- Canonical’s Windowing Shift: More than a Mir Techie Footnote
- In The Blogosphere
- Other Articles of Interest
- Featured Audio and Video
- Weekly Ubuntu Development Team Meetings
- Upcoming Meetings and Events
- Updates and Security for 8.04, 10.04, 11.10, 12.04 and 12.10
- And much more!
This issue of the Ubuntu Weekly Newsletter is brought to you by:
- Elizabeth Krumbach
- Stephen Michael Kellat
- Matt Rudge
- And many others
Except where otherwise noted, content in this issue is licensed under a Creative Commons Attribution-ShareAlike 3.0 License.
Well worth the read.
Might the pixel be on its way out, dead within five years? The team developing this vector-based video codec predicts so. It consists of researchers from the university, Root6 Technology, Smoke & Mirrors and Ovation Data Services.

The pixel isn't perfect. It is a grid simplification of the original image, and at any scale bigger than intended the image looks blocky. Add to that the aliasing problems around edges and lines that don't match the grid nicely, and things can look chunky even at the original size.

The transition from pixel-based bitmaps to vector-based images has been happening for a long time in the static image world. This team of researchers is saying this is also a better way to record moving images, and that it will replace the pixel within five years.
The team developed something called a vector-based video codec that attempts to overcome the challenges of a typical vector display. A typical vector display features drawn lines and contoured colors on a screen (rather than the simple, geometrical map of pixels we're all accustomed to). But it has problems--notably, areas between colors can't be filled in well enough for a high-quality image to be displayed, the researchers say.
Professor Phil Willis, from the University's Department of Computer Science, said: "This is a significant breakthrough which will revolutionize the way visual media is produced."
Read more here.
A couple of weeks ago I saw Katy Manning in a tour of Agatha Christie’s “A Murder is Announced.” It’s a very traditional production: A single set and a cast of eleven, with scene changes covered by blackouts and music. Some of the cast appeared to be playing well out of their age range: One character supposedly in her dotage was clearly played by a much younger actress, and Dean Gaffney playing a student was stretching credulity somewhat! Katy portrayed the central character of the piece and delivered a strong confident performance. It was great to see her in action.
This weekend was the third of the BFI’s monthly Doctor Who screenings. “The Mind of Evil” was shown in colour for the first time in forty-two years in the UK, following a painstaking colour recovery process. I am even more convinced now that Doctor Who is at its best when watched with five hundred other fans in a cinema! The humour (intentional and otherwise) is emphasised, the action enhanced and the performances sparkle.
The panel afterwards comprised Timothy Combe (director), Terrance Dicks (script editor), Richard Franklin (Mike Yates), John Levene (Sgt. Benton) and, that’s right, Katy Manning (Jo Grant). The panel had a great energy, with several very vocal contributors.
Once again, James from The Doctor Who Podcast recorded our thoughts after the screening, which will be available from their Facebook page soon. (You can still download February’s “Tomb of the Cybermen” special episode.)
Finally, a massive thank you to everyone who responded to last week’s blog post. It’s been touching seeing some familiar names on the Sam Shaw Appeal page. The appeal has nearly reached 3% of the target. It’s a big target and that 3% represents an amazing contribution from a lot of people in a short time. Please give something if you can.
The raring feature freeze took effect last week. What’s been happening with qemu in the meantime?
A lot! I’ll touch on the following main changes in this post: package reorg, spice support, hugepages, uefi, and rbd support.
1. Package reorg
Perhaps best to begin with a bit of Ubuntu qemu packaging history. In hardy (before my time) Ubuntu shipped with separate qemu and kvm packages. This reflected the separate upstream qemu and kvm source trees. In August of 2009, upstream was already talking about merging the two trees, and Dustin Kirkland started a new qemu-kvm Ubuntu package which provided both qemu and kvm.
In 2010, a new ‘qemu-linaro’ source package was created in universe, to provide qemu with more bleeding-edge ARM support from Linaro. Eventually the qemu-kvm package provided the i386 and amd64 qemu-system binaries, qemu-common, and qemu-utils, while all other target-architecture system binaries, all qemu-user binaries, and qemu-kvm-spice came from qemu-linaro. This is clearly non-ideal from many viewpoints, especially QA testing and bug duplication. But any reorganization would have to make sure that upgrades work seamlessly for raring-to-raring, quantal-to-raring, and future LTS-to-LTS upgrades, for the many commonly used packages (qemu-kvm, the various qemu packages, and qemu-user).
In the traditional six-month-plus-LTS Ubuntu cycle, raring was a good time (not too close to the next LTS) to try to straighten that out. It was also a good time because upstream qemu and kvm were now very close together, and especially because the wonderfully helpful Debian qemu team was also starting to merge Debian’s qemu and qemu-kvm sources into a new qemu source tree in Debian experimental.
And so, it’s done! The qemu-linaro and qemu-kvm source packages have been merged into qemu. Most arm patches from linaro are in our package, but you can still run linaro’s qemu from ppa at https://launchpad.net/~linaro-maintainers/+archive/tools/. The Ubuntu and Debian teams are working together, which should mean more stable packages in both, and combined resources in addressing bugs. Thanks especially to Michael Tokarev for helping to review the Ubuntu delta, and to infinity for more than once helping to figure out packaging issues I couldn’t have figured out on my own.
2. Spice support. Spice has finally made it into main! The qemu package in main therefore finally supports Spice, without having to install a separate qemu-kvm-spice package. As a simple example, if you used to do:
kvm -vga vmware -vnc :1
then you can use spice by doing:
kvm -vga qxl -spice port=5900,disable-ticketing
then connect with spicec or spicy:
spicec -h hostname -p 5900
3. Transparent hugepages. The 1.4.0 qemu release includes support for transparent hugepages. This means that when hugepages are available, some of a qemu instance’s memory pages are automatically migrated from regular to huge pages. Hugepages offer performance improvements by (1) requiring fewer TLB entries for the same amount of memory, (2) requiring fewer lookups per page, and (3) causing fewer page faults for nearby memory references (since each memory page is much larger).
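To see whether transparent hugepages are active on a host, you can inspect the standard Linux sysfs/procfs locations (availability depends on the kernel configuration):

```shell
# Current THP policy: the active choice of [always], [madvise] or
# [never] is shown in brackets.
cat /sys/kernel/mm/transparent_hugepage/enabled 2>/dev/null \
    || echo "transparent hugepages not available"
# Amount of anonymous memory currently backed by huge pages:
grep AnonHugePages /proc/meminfo 2>/dev/null || true
```

Watching AnonHugePages grow after starting a large guest is a quick way to confirm the feature is doing its job.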
4. Hugetlbfs mount. While transparent hugepages are convenient, if you want a particular vm to run with hugepages backing the whole VM, you will want to use dedicated hugepages. To do this, simply set KVM_HUGEPAGES to 1 in /etc/init/qemu-kvm.conf, then add an entry to /etc/sysctl.conf like:
vm.nr_hugepages = 512
(for 1G of hugepages – 512 2M pages). Make sure to leave at least around 1G of memory not dedicated to hugepages. Then add the arguments
to your kvm command. Dedicated hugepages are not new, but the automatic mounting of the /sys/hugepages/kvm is.
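Putting the pieces together, a hedged sketch (assumptions: `-mem-path` is qemu's long-standing flag for file-backed guest RAM, and /run/hugepages/kvm is taken to be where the qemu-kvm job mounts hugetlbfs when KVM_HUGEPAGES=1 — adjust the path for your system):

```shell
# 1G of 2M pages, matching the sysctl.conf entry above (needs root,
# so failure is tolerated when run unprivileged).
sudo -n sysctl vm.nr_hugepages=512 2>/dev/null || true

# The guest invocation; printed rather than executed here so the
# sketch is safe to inspect first. The hugetlbfs mount point is an
# assumption, as noted above.
HUGEPAGE_KVM="kvm -m 512 -mem-path /run/hugepages/kvm -drive file=raring.img"
echo "$HUGEPAGE_KVM"
```

With `-mem-path` pointing at a hugetlbfs mount, all of the guest's RAM is allocated from the dedicated hugepage pool rather than relying on the kernel to promote pages transparently.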
5. UEFI. If you install the ovmf package, you can run qemu with a UEFI BIOS (to test Secure Boot, for instance) by adding the ‘-bios OVMF.fd’ argument to kvm. As was pointed out during vUDS, there are some bugs to work out to make this seamless.
6. rbd. OK, this has been enabled since precise, but it’s still cool. You can use a Ceph cluster to back your kvm instances (as an alternative to, say, NFS) to easily enable live migration. Just:
qemu-img create -f rbd rbd:pool/vm1 10G
kvm -m 512 -drive format=rbd,file=rbd:pool/vm1 -cdrom raring.iso -boot d
See http://ceph.com/docs/master/rbd/qemu-rbd/ for more information.
So that’s what I can think of that is new in qemu this cycle. I hope you all enjoy it, and if you find upgrade issues please do raise a bug.
As I previously mentioned I’ve been working some on the Ubuntu Error Tracker. A bit ago, I added the ability to search for crashes about a source package using a url like http://errors.ubuntu.com/?package=usb-creator. A source package can also be selected by choosing the package in the package selection drop down box and then entering the specific package name.
In addition to this there is now a selection for ‘packages subscribed to by’. This lets you input a Launchpad user name, for example brian-murray, and see errors about the packages that user is subscribed to in Launchpad. You can subscribe to bugs about a package from its bug listing page; if you don’t want more bug mail but still desire this functionality of errors.ubuntu.com, consider setting up a subscription for bug reports that only have the tag ‘bugs-will-never-ever-ever-have-this-tag’.
Just as with the package parameter, you can append ‘user=brian-murray’ to the errors.ubuntu.com URL, which makes it easier to share queries. Naturally, this also works for teams in Launchpad, like the Foundations Bugs team. However, because this particular team is subscribed to hundreds of packages, we’ve created a table in the database for caching the packages for some teams, instead of querying Launchpad for the list of packages every time.
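The two query parameters compose straightforwardly. A small illustrative helper (the function names are mine; only the base URL and the parameter names come from the post):

```shell
# Build errors.ubuntu.com query URLs for a source package or a
# Launchpad user/team, as described above.
ERRORS_BASE="http://errors.ubuntu.com/"

errors_package_url() { echo "${ERRORS_BASE}?package=$1"; }
errors_user_url()    { echo "${ERRORS_BASE}?user=$1"; }

errors_package_url usb-creator   # http://errors.ubuntu.com/?package=usb-creator
errors_user_url brian-murray     # http://errors.ubuntu.com/?user=brian-murray
```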
During vUDS I gave a lightning talk where you can see this new feature in action and some of my comic book collection!
For the second day in a row I managed to get 8 hours of sleep, something I was not able to do at Linaro Connect Asia 2013 — there was no time for sleeping, as so many things were happening.
This time I decided to go to Hong Kong on Friday, to have the whole Sunday for shopping, sightseeing and so on. To make things different I went through Helsinki (it was Istanbul in 2012). It was an interesting experience to hear English spoken with a Finnish accent. There were moments during the in-flight announcements when I could not tell where the Finnish part ended and the English one started ;D
HEL was cold, but only outside, so once I got to the terminal it was fine. I rushed through, passed the biometric passport gate and got a seat with electricity to charge my Chromebook and phone. The flight was “fine” as usual, but as it was overnight I tried to catch some sleep.
Finnair’s crew had some problems getting the in-flight entertainment system working, so we could watch Linux booting on those NSC Geode GX2 based devices. Judging by the copyright note in the bootloader (RedBoot), I assumed it is not older than 9 years. A very slow boot anyway, with a lot of text printed; they should show a splash screen and maybe a progress bar instead. But finally it started working. The provided in-ear headphones are much better than the ones on Lufthansa flights.
Landed, got a prepaid SIM from the “3” network, met Andrea Gallo and we went to the hotel. I had plans to go to the city centre, but was too tired for it. I also lacked HKD due to the different keypad layout of the ATM :D
On Sunday we grouped up and went to Sham Shui Po to do some electronics-related shopping. Prices in Hong Kong are similar to or worse than in Europe, so I bought only a few things which I had trouble finding at a low price at home: a mini-ITX case (16€), a Nexus 4 back cover (6.5€), a case for the Samsung Chromebook (7.5€) and some cables. There is still no wide selection of USB 3.0 cables ;( I also bought a crappy dual-SIM phone for 10€, as I needed one to get my Polish SIM on the network.
I also did some shopping on Tuesday, this time at the Ladies’ Market. It is one long street with a lot of sellers offering clothes, wallets, toys, phone covers, headphones and other gift-like things of unknown quality. I left all the money I had there, but got gifts for everyone I wanted. Haggling there is a must, as 40% of the starting price is easy to get. And you do not even need to say anything to get the price lowered…
We also went to Shenzhen, China for one afternoon, but that’s a story for a separate post.
But I went there to connect with people, and to discuss and present our work done in the last cycle and planned for the next ones.
Each day started with a keynote (Friday’s one included the Linaro awards). And we got speakers from outside of Linaro:
- Jon Corbet (LWN)
- Lars Kurth (Citrix)
- Jason Taylor (Facebook)
- Greg Kroah-Hartman
Each talk was interesting. Jon showed Linaro developers that the big.LITTLE switcher should have been taken to community review earlier, Lars presented Xen on ARM (v7, v8), and Jason told us how Facebook handles servers and where there is space for ARM ones. Greg’s talk was the best: he told us why he does not want our code, what kinds of mistakes people make in submitted patches, and gave us a story of how one code submission broke a whole set of devices due to lack of testing. I wonder how the Linaro Kernel WG will handle Greg’s new requirement of having all Linaro patches signed off by a senior kernel developer.
This was also the first conference where I was fully ARMed. I left my x86 laptop at home and took the Samsung Chromebook instead. Ubuntu runs fine on it; the speed is comparable, but the size (13.3″ versus 11.6″) and weight differ. This also gave me a few more occasions to talk with other developers.
I spoke with the Citrix guys about Chromebook kernel changes, and their Xen backport will probably be merged into the “linux-chromebook” 3.4 package. I also had some discussions with ARM Mali developers, which resulted in the removal of the OpenGL ES packages from the Chromebook support PPA due to licence issues (I do not have redistribution permission).
We also had a meeting about hacking the Samsung Chromebook, where ChromeOS, Debian, Linaro, openSUSE and Ubuntu developers discussed what we can expect, where we are, how to get some things fixed and so on. Afterwards Nicolas ‘Charbax’ Charbonnier from armdevices.net shot a video about it:
I remember that Charbax tried to interview me at one of the earlier Linaro Connects, but I always rejected the idea. This time he got help… and I could not say no to Zack Pfeffer :) How did it go? You tell me:
Hong Kong was great. The weather was perfect: +25°C, sun and no rain. Someone told me March is the last moment for being there :)
But then I had to leave. The problem with return flights is that they are usually around midnight. Add the lack of sleep during the previous nights, and the result is not a nice mix. So we spent some time in the airport lounge to charge batteries (ours and our devices’) and then squeezed into economy class for 11 hours. I took a nap, watched a movie in English with Finnish subtitles (and even learnt a new word) and read the “Amiga, the future was here” book.
Imagine the weather change when we landed in Helsinki… -13°C and snow. As I had left my spring jacket in my checked-in baggage (but I had a sweater), those few minutes from airport to bus to plane were cold ones. Similarly a few hours later in Berlin, but there I had some time for shopping. I skipped the salmiakki, because it is hard to know which ones will be just hardcore enough, but got some other things.
Szczecin was nice on Saturday. Cold, but spring was visible. Winter came during night:
Next Linaro Connect will be in Dublin, Ireland. See you there!
Really I do; dunno why some people think I unfairly criticise them. They want to fix bug number 1, "Microsoft has a majority market share". The obvious way to do that was with the existing community-made desktop software, but after many years that still had minimal traction. Actually there were lots of nice rollouts around the world in educational institutions, every region/nation in Spain made its own Ubuntu derivative, and of course Kubuntu has had the world's biggest Linux desktop rollout, in Brazil. But nothing to get it directly into the hands of consumers. Dell made a half-hearted attempt to pre-load Ubuntu on laptops, but somehow it never went very far.
In the meantime Apple did fix Bug No 1, by using shiny design and new form factors. There's some Free Software in Apple's catalogue, including a web browser engine from KDE, but it's otherwise more closed than Microsoft. Netbooks came and went, and Linux for the consumer didn't have much success there.
Then Google started fixing Bug No 1 with Android, and maybe even the Chromebook will go somewhere. They're a lot more Free Software, but also not openly community-developed.
So it's entirely sensible for Canonical to reason that community developed consumer software isn't going to fix Bug No 1 any time soon and want to start looking at models which do. Hiring a design team to create their own software is more Free Software than Apple and more Open Source then Google. I wish Ubuntu (Unity) luck in taking over the world and fixing Bug No 1.
But people who like to work on a community driven project will be a bit put off by that which is why we've seen some high profile departures recently from Ubuntu. But as I say there are many parts of Ubuntu which are very community driven so I'm hanging around to be part of that community. Come and join us.
Here are the steps for installing the AWS command line tools that are currently available as Ubuntu packages. These include:
- EC2 API tools
- EC2 AMI tools
- IAM - Identity and Access Management
- RDS - Relational Database Service
- Auto Scaling
Starting with Ubuntu 12.04 LTS Precise, these are also available:
- ELB - Elastic Load Balancer
Enable the multiverse repository. This can be done through the Ubuntu Update Manager or by editing /etc/apt/sources.list. Here are some commands that will enable multiverse on a new installation:

# 12.04 LTS Precise, 11.10 Oneiric
sudo perl -pi.orig -e \
  'next if /-backports/; s/^# (deb .* multiverse)$/$1/' \
  /etc/apt/sources.list

# 10.04 LTS Lucid
sudo perl -pi.orig -e \
  's/^(deb .* universe)$/$1 multiverse/' \
  /etc/apt/sources.list
Enable the awstools PPA and update the apt package index:

sudo apt-add-repository ppa:awstools-dev/awstools
sudo apt-get update
Install available AWS command line tool packages:

sudo apt-get install ec2-api-tools ec2-ami-tools iamcli rdscli moncli ascli elasticache

# Also available on Ubuntu 12.04 LTS Precise
sudo apt-get install aws-cloudformation-cli elbcli
Some of these tools support passing in credentials on the command line, but for regular use, you will want to store the AWS credentials in files.

Set up AWS Credentials
Create a place to store the AWS credentials:

mkdir -m 0700 $HOME/.aws/
Copy your AWS X.509 certificate and private key to this subdirectory. These files will have names that look something like this:

$HOME/.aws/cert-7KX4CVWWQ52YM2SUCIGGHTPDNDZQMVEF.pem
$HOME/.aws/pk-7KX4CVWWQ52YM2SUCIGGHTPDNDZQMVEF.pem
Create the file $HOME/.aws/aws-credential-file.txt with your AWS access key id and secret access key in the following format:

AWSAccessKeyId=YOURACCESSKEYIDHERE
AWSSecretKey=YOURPRIVATEACCESSKEYHERE
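As a sketch (my own addition, not from the original instructions), the directory and credential file can be created in one go with owner-only permissions; the key values below are the same placeholders as above and must be replaced with your real keys:

```shell
# Sketch: create the credentials directory and file with strict
# permissions. The key values are placeholders, not real credentials.
AWS_DIR="$HOME/.aws"
mkdir -p -m 0700 "$AWS_DIR"

cat > "$AWS_DIR/aws-credential-file.txt" <<'EOF'
AWSAccessKeyId=YOURACCESSKEYIDHERE
AWSSecretKey=YOURPRIVATEACCESSKEYHERE
EOF

# The file contains a secret, so make it readable by the owner only.
chmod 0600 "$AWS_DIR/aws-credential-file.txt"
```

The quoted heredoc delimiter ('EOF') prevents the shell from expanding anything inside the file body.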
Add the following lines to your $HOME/.bashrc file so that the AWS command line tools know where to find the above files:

# AWS credentials
export EC2_PRIVATE_KEY=$(echo $HOME/.aws/pk-*.pem)
export EC2_CERT=$(echo $HOME/.aws/cert-*.pem)
export AWS_CREDENTIAL_FILE=$HOME/.aws/aws-credential-file.txt
Make sure these are set in your current shell(s):

source $HOME/.bashrc

Test
Make sure that the command line tools are installed and have credentials set up correctly. These commands should not return errors:

ec2-describe-regions
ec2-ami-tools-version
iam-accountgetsummary
rds-describe-db-engine-versions
mon-version
as-version

# Ubuntu 12.04 LTS Precise and higher
cfn-list-stacks
elb-describe-lb-policies

AWS Command Line Tools
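The same sanity checks can be wrapped in a small shell loop that skips any tool which is not installed and reports which checks fail; this helper is my own sketch, not part of the original instructions:

```shell
# Sketch: run each credential sanity check, skipping tools that are
# not on PATH, and print one OK/FAIL/SKIP line per command.
check_aws_tools() {
  for c in \
    ec2-describe-regions \
    ec2-ami-tools-version \
    iam-accountgetsummary \
    rds-describe-db-engine-versions \
    mon-version \
    as-version
  do
    if ! command -v "$c" >/dev/null 2>&1; then
      echo "SKIP $c (not installed)"
    elif "$c" >/dev/null 2>&1; then
      echo "OK   $c"
    else
      echo "FAIL $c (installed, but the check returned an error)"
    fi
  done
}

check_aws_tools
```

A FAIL line for an installed tool usually points at missing or misconfigured credentials rather than a packaging problem.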
The table below shows some of the various AWS products, whether Amazon publishes command line tools, and whether these are available in key Ubuntu releases. Some of the packages are available in the standard apt repositories, some require adding multiverse, and some are published in the awstools PPA. The awstools PPA also has newer versions of some of the packages released by Amazon after the official Ubuntu release.

AWS Service                          | Amazon Command Line Tools | Ubuntu 12.04 LTS Precise | Ubuntu 11.10 Oneiric     | Ubuntu 10.04 LTS Lucid
EC2 API Tools                        | AWS CLI                   | multiverse               | multiverse, PPA updates  | multiverse, PPA updates
EC2 AMI Tools                        | AWS CLI                   | multiverse               | multiverse, PPA updates  | multiverse, PPA updates
IAM - Identity and Access Management | AWS CLI                   | main                     | main                     | PPA
RDS - Relational Database Service    | AWS CLI                   | main                     | main                     | PPA
CloudWatch                           | AWS CLI                   | PPA                      | PPA                      | PPA
Auto Scaling                         | AWS CLI                   | PPA                      | PPA                      | PPA
ElastiCache                          | AWS CLI                   | PPA                      | PPA                      | PPA
ELB - Elastic Load Balancing         | AWS CLI                   | PPA                      | -                        | -
AWS CloudFormation                   | AWS CLI                   | PPA                      | -                        | -
AWS Import/Export                    | AWS CLI                   | -                        | -                        | -
CloudFront                           | AWS CLI                   | -                        | -                        | -
CloudSearch                          | AWS CLI                   | -                        | -                        | -
Elastic Beanstalk                    | AWS CLI                   | -                        | -                        | -
SNS - Simple Notification Service    | AWS CLI                   | -                        | -                        | -
EMR - Elastic MapReduce              | AWS CLI                   | -                        | -                        | -
Route 53                             | AWS CLI                   | -                        | -                        | -
S3 - Simple Storage Service          | AWS CLI                   | -                        | -                        | -
SES - Simple Email Service           | -                         | -                        | -                        | -
Direct Connect                       | -                         | -                        | -                        | -
DynamoDB                             | -                         | -                        | -                        | -
SimpleDB                             | -                         | -                        | -                        | -
SQS - Simple Queue Service           | -                         | -                        | -                        | -
Storage Gateway                      | -                         | -                        | -                        | -
SWF (Simple Workflow Service)        | -                         | -                        | -                        | -
VPC (Virtual Private Cloud)          | -                         | -                        | -                        | -
As you can see, there are a number of command line tools that are not (yet) packaged in Ubuntu. These can be downloaded directly from Amazon and installed manually.
There are also a number of AWS services that do not have command line tools available from Amazon, though some third parties have provided helpful alternatives.
[Update 2012-09-03: Added links to command line tools for S3, SNS]
[Update 2013-03-10: Added CloudWatch, Auto Scaling, ElastiCache]
Original article: http://alestic.com/2012/05/aws-command-line-packages
This week's episode provides further discussion of timelines as we come into Ubuntu's version of "March Madness".
Download here (MP3) (ogg) (FLAC), or subscribe to the podcast (MP3) to have episodes delivered to your media player. We suggest subscribing by way of a service like gpodder.net. Materials to support the work of the Air Staff of Erie Looking Productions can be purchased via their Amazon wish list.
This work is licensed under the Creative Commons Attribution-ShareAlike 3.0 United States License. To view a copy of this license, visit http://creativecommons.org/licenses/by-sa/3.0/us/.
I am proud to announce the first alpha release for Muon Suite 2.0. The Muon Suite is a set of package management utilities for Debian-based Linux distributions built on KDE technologies. Packages for Kubuntu 12.10 “Quantal Quetzal” are available in the QApt Experimental PPA. Additionally, packages are available in the development release of Kubuntu 13.04, “Raring Ringtail”.
Most of the big stuff for 2.0 was announced in my previous post. This release mostly features bugfixes over the previous 1.9.80 (2.0 beta) release.

Changelogs
In episode six of Adventures in Haskell we add variable assignment and simple function definition to our calculator.
There has been a humongous amount of fluff flying through the air regarding Ubuntu and Canonical as of late, and it seems to stem from a lack of communication (as other blogs have also pointed out) between the community and Canonical. There is obviously a huge rift in the community over this, and a form of exodus is happening. To be honest, I do not know whether I really consider myself part of the Ubuntu community anymore. Perhaps I am, because I am still working with the Ubuntu Beginners Team and have plans to continue doing so. I do not, however, run Ubuntu, nor do I know anything about Juju and its "charms", or a whole lot about Unity! Despite this, it is still one of the distributions I recommend to total newcomers to the Linux playing field, but only alongside other suggestions (Mint or Debian, depending on the person).
Disillusioned, this one is, about Ubuntu. The shine has worn off for me but I am going to do my best to make sure that my perspective of these situations doesn’t impact others’ negatively. Ubuntu is still a great idea and Canonical is doing some truly wonderful things for getting Linux more widely known as a great operating system alternative to Windows and Mac.
At any rate, I shall still be around #ubuntu-beginners and #ubuntu-beginners-team to see how I can help rebuild from our end of the spectrum.
Perhaps I do feel part of the community, but only because it has made a rather large impact on my life. If you don’t frequent or visit the aforementioned channels, you can also find me in #ubuntu-expats on oftc.
Tonight, I was working with the developer of the wonderful little CLI application Rippit trying to get it to work on my system again. He gave me a patch to the tarball which cleared up one error, but then there was another, and GAH. Then I noticed that he used git, DUH, which I know how to clone from and then build. For instance, in this case, it was
mkdir rippit && cd rippit
git clone git://git.fedorahosted.org/rippit.git
cd rippit && mkdir build && cd build && cmake -DCMAKE_INSTALL_PREFIX=/usr -DCMAKE_BUILD_TYPE=debugfull $HOME/rippit/rippit
make && sudo make install
From the error message, Trever could tell that I wasn't using the proper branch. So:
cd src

Then, rather than using ls, list all the branches and check out the one you need:

git branch -a
git checkout remotes/origin/0.1

(I found this method on Stackoverflow. Thanks superlogical for your answer.)
From there, delete the build folder and do the rest of the process as before. I love stringing commands together with &&, which makes it so much easier for me to copy/paste or use the up-arrow in Yakuake. I want to make it easy to rebuild so that testing can be quick.
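For instance, the whole rebuild cycle could be wrapped in a small shell function (my own sketch; the source and build paths are assumptions based on the layout used above, so adjust them to your checkout):

```shell
# Sketch of a rebuild helper: wipe the build directory, configure,
# build, and install in one && chain. $HOME/rippit/rippit is assumed
# to be the git checkout, matching the commands earlier in the post.
rebuild_rippit() {
  src="${1:-$HOME/rippit/rippit}"
  build="${2:-$HOME/rippit/build}"
  rm -rf "$build" && mkdir -p "$build" && cd "$build" &&
    cmake -DCMAKE_INSTALL_PREFIX=/usr -DCMAKE_BUILD_TYPE=debugfull "$src" &&
    make && sudo make install
}
```

Because each step is joined with &&, the chain stops at the first failure instead of installing a broken build.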
Finally, a working Rippit again! Thanks again, Trever. You rock! KDE devels are the best.