The Server team just finished up the second day of the Ubuntu Developer Summit (UDS) March 2014 (see http://summit.ubuntu.com/uds-1403/track/servercloud/ for the Server track). I may be biased (well, actually, I know I am), but I think it's been very interesting – lots of good, thoughtful discussions around where we are and where we’re heading. Check out the videos and let us know what you think.
One video from today’s UDS sessions that I wanted to specifically highlight is the demo Robie Basak gave of uvtool. Uvtool is, as Robie explains in the video, a very simple tool for setting up KVM guests – he calls it the glue that brings together several existing tools. Just go watch it – http://www.youtube.com/embed/Ue0C2ssp450 – and then go try it.
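For the impatient, here is a rough sketch of what trying uvtool looks like. This is based on the 14.04-era documentation, not the video itself, so command names and options may differ slightly from what Robie demonstrates:

```shell
# Install uvtool and sync the trusty cloud image locally (one-time setup)
sudo apt-get install uvtool
uvt-simplestreams-libvirt sync release=trusty arch=amd64

# Create a KVM guest from the cached image, then ssh into it
uvt-kvm create myvm release=trusty
uvt-kvm ssh myvm --insecure
```

The nice part is that the image sync, libvirt plumbing, and SSH key injection are all handled for you – that's the "glue" Robie is talking about.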
In December, Serge did a writeup on uvtool that's also worth a read – http://s3hh.wordpress.com/2013/12/12/quickly-run-ubuntu-cloud-images-locally-using-uvtool/
Anyhow, time to prepare for the last day of UDS. Enjoy!
First of all, Erlang Factory this year was just phenomenal: great talks, great energy, and none of the feared/anticipated "acquisition feeding frenzy" -- everyone was genuinely happy for WhatsApp and Cloudant. And with that happiness, they were ready to get on with the conference and dive into the tech :-)
And gosh, there was a bunch of good stuff. Check out the schedule. Also on that page are the speaker pics. For those that have video or slides of their talk, the speaker pic is annotated; clicking on them will take you to the speaker's page with links to slides and/or video.
There's so much good stuff there -- I've definitely got my watching queue set up for the next few weeks ...
I gave a presentation on LFE which covered everything from the motivational basics of using a Lisp in the 21st century, to a taste of LFE in small chunks, to a tour of creating projects in LFE. There was also some dessert: fun side/research projects that are currently in progress. The slides for the presentation are here; the slide source code is also available (related demo project), and a shout out to the Hoplon crew for their help in making sure I could create this presentation in a Lisp (Clojure), and not HTML ;-) (It uses a Hoplon-based Reveal.js library.)
The Good Stuff
After the presentation, several of us chatted about Lisp and Erlang for a while. Robert and I later continued along these lines after heading over to the quiet of the ever-cool Marines Memorial 11th-floor library (complete with fireplace). Here we sketched out some of the interesting areas for future development in LFE. I'm not sure if I'm remembering everything (and I've added Sean Chalmers' recent experiments with types; blog and discussion):
- getting the REPL to the point where full dev can happen (defining functions, macros, and records in the LFE shell)
- adding macros (maybe just one) for easier use of Mnesia in LFE
- discussing the possibility of an LFE stdlib
- gathering some of the best funcs and macros in the wild for inclusion in an LFE stdlib
- possibly getting spec and type support in LFE
- producing an LFE 1.0 release
- building out an LFE rebar plugin
- examining erlang.mk as another possible option
- starting work on an LFE Cookbook
- creating demos of LFE on Erjang
- creating demos of LFE-Clojure interop via JInterface
- creating more involved YAWS/REST examples with LFE
- exploring the possibility of an SOA tutorial with LFE + YAWS
- releasing a planner demo
- finishing the genetic programming examples
- writing an LFE style guide
- continuing work on the LFE user guide
If this stuff is exciting to you, feel free to jump into the low-volume discussions we have on the mailing list.
Updated Debian packages should be made available tomorrow. The final release is expected early next week after which I will try to get a Feature Freeze Exception and sync it to Ubuntu 14.04.
About PlainBox: PlainBox is a toolkit consisting of a Python 3 library, development tools, documentation, and examples. It is targeted at developers working on testing or certification applications, and at authors creating tests for such applications.
venture capitalists, musicians, recording studios, actors, agents, celebrities, and vendors of every imaginable kind. With a keen eye, I also spotted one or two hipsters. And throngs of Glassholes.
The largest keynote venues (plural) hold several thousand people each and fill to capacity, with both closed-circuit and Internet-streamed broadcasts on display in multiple overflow ballrooms. Technical sessions, presentations, and panels are spread across 30 different venues around downtown Austin (the Austin Convention Center, the Hilton, the Marriott, the Driskill, City Hall, the Chamber of Commerce, the Palmer Event Center, the Omni, the Intercontinental, etc.). Tracks are roughly contained in a given venue. While shuttles are available for moving between venues, the weather in Austin in March is gorgeous and everything is roughly walkable.
While massive corporate "super sponsors" drive the overall event (Miller, Chevrolet, AT&T, Deloitte, American Express), a huge portion of the interactive side of the house is focused on startups and smaller businesses. This was a very familiar crowd, savvy and familiar with free software and open standards. These are thousands of the hackers who are building the next 40 new apps you're going to install on your phone, or for which you'll soon have to generate a new web login password.
SxSW has been used to launch or spread countless social media platforms, including Wordpress, Twitter, and Foursquare. Early adopters now flock to SxSW in droves, to learn about new hardware and software gadgets before their Silicon Valley friends do. Or, depending on your means, perhaps to invest in said opportunities.
Expo Floor
The tradeshow does require an expo badge, but in my experience it's pretty easy to come by one for free. The expo floor includes 300+ booths, wide and varied, covering technology, gadgets, startups, film, music, and more. Nearly 75,000 unique badges entered the tradeshow floor.
I saw at least 4 different public cloud vendors (Rackspace, SoftLayer, DigitalOcean, and Codero) with sizable displays. I spent a good bit of time with Codero. They're a new(ish) public cloud offering, built on Ubuntu and CloudStack, based in Austin and Kansas City. I also spoke with a couple of data analytics startups, and talked a bit about Ubuntu and Juju.
I was surprised to see Ghostery on exhibit (I'm a big fan, actually; I use it everywhere!). NASA had a spectacular booth. I saw a few booths displaying their wares on Unity desktops (woot).
There were several Raspberry Pi demos too. The most amusing startup was from Japan, called LogLog: "When it comes to #2, we're #1". Seriously.
I wore an Ubuntu t-shirt each day, and several people stopped to ask me where the Ubuntu booth was. It's probably worth considering a booth next year. I can see where both a Juju GUI and a few Ubuntu Touch devices would generate some great traffic and press at SxSW. This is definitely the crowd of next generation app developers and back end social media developers building the new web. It would behoove us to help ensure they're doing all of that on Ubuntu!
Session Highlights
I missed Friday and Saturday, but I did attend sessions Sunday, Monday, and Tuesday.
There was a very strong, pervasive theme throughout much of the conference, across many, many tracks: security, privacy of individual data, openness of critical systems and infrastructure, and, generally speaking, freedom. I don't suppose I was expecting this. There were numerous mentions of open source, Linux, and even Ubuntu in various capacities as being better options than the status quo for many of the social and technical issues under discussion. Perhaps I gravitated toward those sessions (okay, yeah, I did). Still, it was quite reassuring that there were so many people, unknown to many of us, touting our beloved free and open standards and software as "the answer".
The other theme I picked up on is how "connected" our media and entertainment devices and mechanisms are becoming. Netflix is designing TV series (House of Cards) based on empirical data it collects about what people like to watch. Smart TVs will soon deliver richer experiences around the sports and programming we watch, with real-time, selectable feeds and layers of additional content. Your handheld devices are becoming part of the entertainment experience.
Here are a few highlights, mostly from names that you might recognize.
Edward Snowden
[Note that I am not passing judgement here, just reporting what was said during that session.]
Perhaps the most anticipated (and reported upon) keynote was the remotely delivered panel session with infamous NSA leaker Edward Snowden, via Google Hangout. The largest part of the conference center was packed to capacity, and local feeds broadcast the session to much of the rest of the conference. I suppose some of you saw the coverage on Slashdot. Snowden's choppy Google+ Hangout picture featured the US Constitution displayed behind him.
He said that the NSA collected so much information that they didn't even know what to do with it or how to process it. Collecting it proved to be the easy part; processing it was orders of magnitude more difficult. He suggested that developers need to think security and encryption first, and protect user data from the start (and the SxSW tech-savvy crowd are the ones to do it). He said that encryption is not fundamentally broken, and that it generally works very well; the NSA spent far less time trying to break systems than simply monitoring all of the easy targets. He said that he felt he did his job by blowing the whistle, in that "he took an oath to defend and uphold the constitution, and what he observed was abuse and violation of it on a massive scale."
Adam Savage (co-host of Mythbusters) delivered the best canned presentation of the entire event (for me). He discussed Art and Science, how they're fundamentally the same thing, but we as a society, lately, haven't been treating them as such, and they're tending to drift apart. He talked about code as art, as well.
Shaquille O'Neal
Believe it or not, Shaq delivered a hilarious panel session, talking about wearable technology. He described himself as the "world's biggest geek" -- literally. He said that he used to be afraid of technology (in high school), until he was tutored by one of the geekiest kids in school. He then fell in love with technology (at 17), and has been an early adopter ever since. He says he has both Android and iPhone devices, and talked extensively about the Fitbit (the co-host was from Qualcomm) and other wearable technologies, particularly as they relate to sports, health, and fitness.
George Takei
George Takei is 76 years old, but has the technical aptitude of a 24-year-old computer whiz. He bridges at least 3 generations, and is on a quest to bring technology, and especially social media, to older people. I've been a subscriber to his feeds on Facebook/Twitter/G+, and he's really sharp-witted, funny, and topical. He discussed his tough life growing up (in an American concentration camp for Japanese Americans during WWII), coming to terms with his sexuality, entering showbiz, Star Trek, his (brief) political career, and now his icon status in social media. Brilliant, brilliant man. An entertaining and enlightening session.
Daniel Suarez
Daniel Suarez is the author of (now) four cyberpunk technical thrillers. I reviewed his first book (Daemon) back in 2008 on my blog (and a few more since). His publicist reached out to me, put us in touch, and we've been in communication ever since. He sat on a panel with Bruce Sterling and Warren Ellis, hosted by Joi Ito (MIT Media Lab, early investor in Twitter, Flickr, and Kickstarter). Daniel invited me out for dinner and drinks afterward with him and his wife, and we had a great time. He's a huge fan of Ubuntu. He says that he wrote all of his latest book (Influx) on an Ubuntu laptop (woot). In his previous book (Kill Decision), Ubuntu made a brief cameo on the main character's computer (albeit compromised by a zero-day attack).
I did attend a few sessions by lesser-known individuals. Not much was remarkable, but there was one "interesting" presentation introducing people to "the dark net". The presenter covered a bunch of technologies that you and I (probably) use every day, but framed them as "the dark net", and explained how anyone from malicious people to Wikileaks uses IRC, PGP, Tor, proxies, stunnels, Bitcoin, wikis, sftp, ssh, and so forth to conduct shady business. He only had a very small time slot, and had to tear through a lot of material quickly, but I found it sad that so many of these fundamental technologies were conflated and, I'm sure, made synonymous in some people's minds with human trafficking, drugs, corporate espionage, and stolen credit card numbers :-(
Aaron Swartz documentary
I did manage to catch one documentary while at SxSW: The Internet's Own Boy, the Aaron Swartz documentary. Aaron's story clearly resonates with the aforementioned themes of freedom and openness on the Internet. While I didn't know Aaron personally, I was of course very much aware of his work on RSS, Reddit, SOPA/PIPA, etc. I feel like I've known many, many people like him -- brilliant programmers, freedom fighters -- especially around free software. His suicide (and this documentary) hits pretty hard. There are hundreds of clips of him, from 3 years old until his death at 26, showing his aptitude for technology, sheer brilliance, and limitless potential. He did set up a laptop in a closet at MIT and download hundreds of gigabytes of copyrighted JSTOR documents, and was about to stand trial on over a dozen felony counts. The documentary argues that he was to be "made an example of". Heartfelt interviews with Lawrence Lessig, Cory Doctorow, Sir Tim Berners-Lee, as well as Aaron's friends and family, paint an extremely powerful portrait of a brilliant, conflicted genius. The film was extremely well done. I had a pit in my stomach the rest of the day.
Today, I went to two of the vUDS tracks; they were the only ones I was interested in, since they covered aspects of the Ubuntu Community.
The “Re-imagining our Online Summit” track focused on how to redesign the summit to focus more on the non-developer side of the Community while still, of course, serving the developer side. My thought on this is that it’s a good idea to move away from having the summits be almost 100% just for developers. There are other parts of the Community that need the spotlight, such as LoCos. One suggestion that was made is to use a panel of, say, five people, all from different teams, who talk about what their team is doing at the moment. This is the only suggestion that I remember off the top of my head; I was fighting to get my mic working so I could be in the Hangout. No luck, so I’m looking to buy a mic that will work.
The second track that I went to was “Growing a new generation of Ubuntu leaders”, which focused on the problem of getting leaders motivated to lead, and on telling a leader apart from a manager. I took a lot of notes for this track, and those notes can be found on the page that I linked in the Pad. In this track, we focused on how to get non-natural leaders started. Some of the ideas that were suggested were videos by Jono Bacon on how to start something, docs in the Toolkit to help leaders, and a mentoring system that identifies new leaders and helps them to succeed.
Even though this was a mid-cycle vUDS, there was still a reason for me to come and get my ideas across. And hopefully, I will have a working mic for the next one.
In the Plasma team, we’re working frantically towards the next release of the Plasma workspaces, code-named “Plasma Next”. With the architectural work well in place, we’ve been filling in missing bits and pieces in the past months, and are now really close to the intended feature set for the first stable release. A good time to give you an impression of what it’s looking like right now. Keep in mind that we’re talking Alpha software here, and that we still have almost three months to iron out problems. I’m sure you’ll be able to observe something broken, but also something new and shiny.
For the first stable release of Plasma Next, we decided to focus on core functionality. It’s impossible to get every single feature that’s available in our long-term support release KDE Plasma workspaces 4.11 into Plasma Next at once. We therefore decided to not spread ourselves too thin, and set aside some not-quite-core functionality for now. So we’re not aiming at complete feature parity yet, but at a stable core desktop that gets the work done, even if it doesn’t have all the bells and whistles that some of our users might be used to, yet.
Apart from the “quite boring” underlying “system stuff”, we’ve also worked on the visuals. In the video, you can see improved contrast in Plasma popups, and effects in KWin have been polished up to make the desktop feel snappier. We’ve started work on a new Plasma theme that will sport a flatter look with more pronounced typography than the venerable Air, and animations can now be globally disabled, so the whole thing runs more efficiently on systems with slow painting performance, for example across a network. These are only some of the changes; there are many more, visible and invisible.
We’re not quite done yet, but we have moved our focus from feature development to bugfixing, and the results of that are very visible if you follow the development closely. Annoying problems are being fixed every day, and at this rate of development, I think we’re looking at a very shiny first stable release. Between today and that unicorn-dances-on-rainbows release lie almost three months of hard work, though, and that’s what we’ll do. While the whole thing already runs very smoothly on my computers, we still have a lot of work to do in the integration department, and to translate this stability to the general case. Systems out there are diverse and different, and only wide-spread testing can help us make the experience a good one for everybody.
Over the last few hours, I tried out Tomnod, a (fairly new) crowdsourcing site that gives users live satellite images to search for clues about what happened. This time it is Malaysia Airlines Flight MH370, which went missing on March 8, 2014. I've gone through about 260 maps, plus two maps that others have shared (6060 and another one that I don’t remember), and I found nothing apart from the plane-like object in the 6060 map that is posted on iCNN.
This crowdsourcing idea is a neat one, but there are suggestions that I want to give to its developers, some of which I have seen from others:
- Have a way to view the tags that you have placed and what other users have tagged
- Zooming functions are a must
- Have a way to allow users to switch from large map to large map
- Have a homepage that has the link to the map
- Use that homepage to explain the project
- Also have a “News” page
I hope that this project will help them to find the plane and everyone on board. Please pray for them.
I hope you have followed Myriam's advice and done your homework. If you have worked on some junior jobs, have your KDE developer credentials, joined the necessary lists *including KDE-soc*, you have a good foundation built.
Pro-tip: always check out the links in the /topic of your IRC channels. The #kde-soc channel topic is particularly rich.
Many prospective mentors hang out in that channel, but not all. We admins are there as often as possible as well. I'm always willing to help edit a proposal for grammar, spelling, organization, formatting, etc. And I can be brutally honest, so if you ask my opinion, be aware that I won't waste your time with anything but the truth.
Now is the time to log into melange, and submit your proposals. If you have not yet had a team member vet your plan, give them the link to your melange proposal and ask. Don't waste their time with mere ideas; you need a clear plan of action, and a realistic timeline.
Go, go, go!
In 1991 I had some photos published in Doctor Who Magazine. Fast forward 23 years and 293 editions and it’s happened again.
I was asked to photograph Nicholas Briggs (voice of the Daleks, Cybermen, Judoon and more, executive producer at Big Finish and all round good egg) interviewing David Graham. David was one of the voice artistes who created the Daleks’ grating staccato delivery back in the 1960s. He has had an amazing career, appearing on screen in Doctor Who, The Avengers, The Saint and plenty more. He was also the voice of Parker and Brains in Thunderbirds, and is Grandpa in Peppa Pig.
So on a Monday morning back in January I found myself stood outside David’s flat in a rather nice area of London. Nick and David were already mid-interview (I was bang on time, of course! They must have started early), so after David had made me a cup of tea I sat quietly and took some candid photos of them chatting. Then we carefully re-arranged David’s furniture to create an impromptu space for photographs, and I broke out the speedlights and softbox. Figuring out how to make the available space work is something that I have learnt from my wedding photography. I think the photos capture the gentle good humour that was bouncing between Nick and David. Nick’s write-up of the interview is a funny take on the morning. It’s fascinating to read his take on the conversation.
You can read the interview and admire the photos in the latest issue of Doctor Who Magazine, which is a Dalek special. It’s at newsagents now, or available online: Doctor Who Official Magazine issue 471 (April 2014) – Dalek Special
Thanks to Tom Spilsbury and Nick Briggs for asking me to take the photos, and the DWM team who made them look fantastic. It was a very enjoyable way to spend a morning!
Our current autopkgtest machinery uses Jenkins (a private and a public one) and lots of “rsync state files between hosts”, both of which have reached a state where they fall over far too often. It’s flakey, hard to maintain, and hard to extend with new test execution slaves (e. g. for new architectures, or using different test runners). So I’m looking into what it would take to replace this with something robust, modern, and more lightweight.
In our new Continuous Integration world the preferred technologies are RabbitMQ for doing the job distribution (which is delightfully simple to install and use from Python), and OpenStack’s swift for distributed data storage. We have a properly configured swift in our data center, but for local development and experimentation I really just want a dead simple throw-away VM or container which gives me the swift API. swift is quite a bit more complex, and it took me several hours of reading and exercising various tutorials, debugging connection problems, and reading stackexchange to set it up. But now it’s working, and I condensed the whole setup into a single setup-swift.sh shell script.
You can run this in a standard Ubuntu container or VM as root:

  sudo apt-get install lxc
  sudo lxc-create -n swift -t ubuntu -- -r trusty
  sudo lxc-start -n swift
  # log in as ubuntu/ubuntu, and wget or scp setup-swift.sh
  sudo ./setup-swift.sh
Then get swift’s IP from sudo lxc-ls --fancy, install the swift client locally, and talk to it:

  $ sudo apt-get install python-swiftclient
  $ swift -A http://10.0.3.134:8080/auth/v1.0 -U testproj:testuser -K testpwd stat
Caveat: Don’t use this for any production machine! It’s configured to maximum insecurity, with static passwords and everything.
I realize this is just poor man’s juju, but juju-local is currently not working for me (I only just analyzed that). There is a charm for swift as well, but I haven’t tried that yet. In any case, it’s dead simple now, and maybe useful for someone else.
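As for the RabbitMQ side mentioned above, the job-distribution pattern is simple: a master publishes one message per test job onto a queue, and any number of execution slaves consume from it. Here is a minimal sketch of that pattern using Python's stdlib queue as a stand-in for a real broker (with pika and a running RabbitMQ, the put/get calls would become basic_publish/basic_consume); all package and architecture names are made up for illustration:

```python
import json
import queue
import threading

# Stand-in for a RabbitMQ queue; with pika you would declare a named
# queue on the broker and publish/consume JSON-encoded job messages.
job_queue = queue.Queue()
results = []

def worker():
    """Test-execution slave: take jobs off the queue until poisoned."""
    while True:
        msg = job_queue.get()
        if msg is None:          # shutdown sentinel
            break
        job = json.loads(msg)
        # A real slave would run the autopkgtest here and upload the
        # logs to swift; we just record what we would have run.
        results.append((job["package"], job["arch"]))
        job_queue.task_done()

threads = [threading.Thread(target=worker) for _ in range(2)]
for t in threads:
    t.start()

# The "master" publishes one job message per package/architecture pair.
for pkg in ["hello", "coreutils"]:
    for arch in ["amd64", "armhf"]:
        job_queue.put(json.dumps({"package": pkg, "arch": arch}))

job_queue.join()                 # wait until all jobs are processed
for _ in threads:
    job_queue.put(None)          # one sentinel per worker
for t in threads:
    t.join()

print(sorted(results))
```

The appeal of this shape is that adding a new architecture or test runner is just a matter of starting more consumers; no rsync'd state files required.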
Scarlett has been working hard on packaging KDE Frameworks 5 Alpha 2, and the build status page shows a sea of green (the only yellow is where a framework asks for a package which doesn't exist yet). Just in time for Plasma Next to get its Alpha release this coming week :) Grab the KF5 packages from the experimental PPA for Kubuntu Trusty.
A couple of weeks ago, Gunnar Wolf mentioned on IRC that his CuBox-i4 had arrived. This resulted in various jealous noises from me; having heard about this device making the rounds at the Kernel Summit, I ordered one for myself back in December, as part of the long-delayed HDification of our home entertainment system and coinciding with the purchase of a new Samsung SmartTV. We've been running an Intel Coppermine Celeron for a decade as a MythTV frontend and encoder (hardware-assisted with a PVR-250), which is fine for SD video, but really doesn't cut it for anything HD. So after finally getting a TV that would showcase HD in all its glory, I figured it was time to upgrade from an S-Video-out, barely-limping-along tower machine to something more modern with HDMI out, eSATA, hardware video decoding, and whose biggest problem is it's so small that it threatens to get lost in the wiring!
Since placing the order, I've been bemused to find that the SmartTV is so smart that it has had a dramatic impact on how we consume media; between that and our decision to not be a boiled frog in the face of DISH Network's annual price increase, the MythTV frontend has become a much less important part of our entertainment center, well before I ever got a chance to lay hands on the intended replacement hardware. But that's a topic for another day.
Anyway, the CuBox-i4 finally arrived in the mail on Friday, so of course I immediately needed to start hacking on it! Like Gunnar, who wrote last week about his own experience getting a "proper" Debian install on the box, I'm not content with running a binary distribution image prepared by some third party; I expect my hardware toys to run official distro packages assembled using official distro tools and, if at all possible, distributed on official distro images for a minimum of hassle.
Whereas Gunnar was willing to settle for using third-party binaries for the bootloader and kernel, however, I'm not inclined to do any such thing. And between my stint at Linaro a few years ago and the recent work on Ubuntu for phones, I do have a little knowledge of Linux on ARM (probably just enough to be dangerous), so I set to work trying to get the CuBox-i4 bootable with stock Debian unstable.
Being such a cutting-edge piece of hardware, that does pose some challenges. Support for the i.MX6 chip is in the process of being upstreamed to U-Boot, but the support for the CuBox-i devices isn't there yet, nor is the support for SPL on i.MX6 (which allows booting the variants of the CuBox-i with a single U-Boot build, instead of requiring a different bootloader build for each flavor). The CuBox-i U-Boot that SolidRun makes available (with source at github) is based on U-Boot 2013.10-rc4, so more than a full release behind Debian unstable, and the patches there don't apply to U-Boot 2014.01 without a bit of effort.
But if it's worth doing, it's worth doing right, so I've taken the time to rebase the CuBox-i patches on top of 2014.01, publishing the results of the rebase to my own github repository and submitting a bug to the Debian U-Boot maintainers requesting its inclusion.
The next step is to get a Debian kernel that not only works, but fully supports the hardware out of the box (a 3.13 generic arm kernel will boot on the machine, but little things like ethernet and hdmi don't work yet). I've created a page in the Debian wiki for tracking the status of this work.
Welcome to the Ubuntu Weekly Newsletter. This is issue #358 for the week of March 3 – 9, 2014, and the full version is available here.
In this issue we cover:
- Virtual Ubuntu Developer Summit Mar 11-13th
- Ubuntu Stats
- Lubuntu Blog: [Poll] Community wallpaper contest
- Nicholas Skaggs: A simple look at testing within ubuntu
- Mark Shuttleworth: #11 – Ubuntu is the #1 platform for production OpenStack deployments
- Jonathan Riddell: New Blue Systems Office Edinburgh
- Harald Sitter: Kubuntu Testing and You
- Ubuntu GNOME: [Results] Community Wallpaper Contest for Trusty Tahr
- Lubuntu Blog: Ubuntu community survey 2014
- Nicholas Skaggs: Keeping ubuntu healthy: Manual Image Testing
- Ubuntu GNOME: LTS Proposal
- Mark Shuttleworth: The very best edge of all
- Canonical News
- Ubuntu 14.04 beta 1 offers a sneak peek at ‘Trusty Tahr’
- In The Blogosphere
- Other Articles of Interest
- Featured Audio and Video
- Weekly Ubuntu Development Team Meetings
- Upcoming Meetings and Events
- Updates and Security for 10.04, 12.04, 12.10 and 13.10
- And much more!
This issue of the Ubuntu Weekly Newsletter is brought to you by:
- Elizabeth Krumbach Joseph
- Paul White
- Emily Gonyer
- Jim Connett
- And many others
Except where otherwise noted, content in this issue is licensed under a Creative Commons Attribution ShareAlike 3.0 License.
I’m often asked what being the Vice President of Cloud Development and Operations means, when introduced for a talk or meeting, or when someone happens to run by my LinkedIn profile or business card.
The office of the CDO has been around in Canonical for so long, I forget that the approach we’ve taken to IT and development is either foreign or relatively new to a lot of IT organizations, especially in the commonly thought of “enterprise” space. I was reminded of this when I gave a presentation at an OpenStack Developer Summit entitled “OpenStack in Production: The Good, the Bad, & the Ugly” a year ago in Portland, Oregon. Many in the audience were surprised by the fact that Canonical not only uses OpenStack in production, but uses our own tools, Juju and MAAS, created to manage these cloud deployments. Furthermore, some attendees were floored by how our IT and engineering teams actually worked well together to leverage these deployments in our production deployment of globally accessible and extensively used services.
Before going into what the CDO is today, I want to briefly cover how it came to be. The story of the CDO goes back to 2009, when our CEO, Jane Silber, and Founder, Mark Shuttleworth, were trying to figure out how our IT operations team and web services teams could work better…smarter together. At the same time our engineering teams had been experimenting with cloud technologies for about a year, going so far as to provide the ability to deploy a private cloud in our 9.04 release of Ubuntu Server.
It was clear to us then, that cloud computing would revolutionize the way in which IT departments and developers interact and deploy solutions, and if we were going to be serious players in this new ecosystem, we’d need to understand it at the core. The first step to streamlining our development and operations activities was to merge our IT team, who provided all global IT services to both Canonical and the Ubuntu community, with our Launchpad team, who developed, maintained, and serviced Launchpad.net, the core infrastructure for hosting and building Ubuntu. We then added our Online Services team, who drove our Ubuntu One related services, and this new organization was called Core DevOps…thus the CDO was born.
Soon after the formation of the CDO, I was transitioning between roles within Canonical, going from acting CTO to Release Manager (10.10 on 10.10.10… perfection!), then landing as the new manager for the Ubuntu Server and Security teams. Our server engineering efforts continued to become more and more focused on the cloud, and we had also begun working on a small, yet potentially revolutionary, internal project called Ensemble, which was focused on solving the operational challenges system administrators, solution architects, and developers would face in the cloud, when one went from managing hundreds of machines and associated services to thousands.
All of this led to a pivotal engineering meeting in Cape Town, South Africa early 2011, where management and technical leaders representing all parts of the CDO and Ubuntu Server engineering met with Mark Shuttleworth, along with the small team working on Project Ensemble, to determine the direction Canonical would take with our server product.
Until this moment in time, while we had been dabbling in cloud computing technologies with projects like our own cloud-init and the Amazon EC2 AMI Locator, Ubuntu Server was still playing second fiddle to Ubuntu for the desktop. Being derived from Debian (the world’s most widely deployed and dependable Linux web hosting server OS) certainly gave us credibility as a server OS, but the truth was that most people thought of desktops when you mentioned Ubuntu. Canonical’s engineering investments were still primarily client focused, and Ubuntu Server was little more than new Debian releases at a predictable cadence, with a bit of cloud technology thrown in to test the waters. But this weeklong engineering sprint was where it all changed. After hours and hours of technical debates, presentations, demonstrations, and meetings, two major decisions were made that week that would catapult Canonical and Ubuntu Server to the forefront of cloud computing.
The first decision made was that OpenStack was the way forward. The project was still in its early days, but it had already piqued many of our engineers’ interest, not only because it was being led by friends of Ubuntu and former colleagues of Canonical – Rick Clark, Thierry Carrez, and Soren Hansen – but because its development methods, project organization, and community were derived from Ubuntu’s, and thus it was something we knew had the potential to grow and sustain itself as an open source project. While we still had to do our due diligence on the code and discuss the decision at UDS, it was clear to many of us even then that we’d inevitably go in that direction.
The second decision made was that Project Ensemble would be our main technical contribution to cloud computing, and more importantly, the key differentiator we needed to break through as the operating system for the cloud. While many in our industry were still focused on scale-up, legacy enterprise computing and the associated tools and technologies for things like configuration and virtual machine management, we knew orchestrating services and managing the cloud were the challenges cloud adopters would need help with going forward. Project Ensemble was going to be our answer.
Fast forward a year to early 2012. Project Ensemble had been publicly unveiled as Juju, the Ubuntu Server team had fully adopted OpenStack, plans for the hugely popular Ubuntu Cloud Archive were in the works, and my role had expanded to Director of Ubuntu Server, covering the engineering activities of multiple teams working on Ubuntu Server, OpenStack, and Juju. The CDO was still covering IT operations, Launchpad, and Online Services, but now we had started discussing plans to transition our own internal IT infrastructure over to an internal cloud computing model, essentially using the very same technologies we expected our users, and Canonical customers, to depend on. As part of the conversation on deploying cloud internally, our Ubuntu Server engineering teams started looking at tools that would give our internal IT teams and the wider Ubuntu community the ability to deploy and manage large numbers of machines running Ubuntu Server. Originally, we landed on creating a tool based on Fedora’s Cobbler project, combined with Puppet scripts, and called it Ubuntu Orchestra. It was perfect for doing large-scale, coordinated installations of the OS and software, such as OpenStack; however, it quickly became clear that the install was just the beginning…and, unfortunately, the easy part. Managing and scaling the deployment was the hard part. While we had called it Orchestra, it wasn’t able to orchestrate much beyond machine and application install. Intelligently and automatically controlling the interconnected services of OpenStack or Hadoop in a way that allowed for growth and adaptability was the real challenge. Furthermore, the ways in which you could describe deployments were restricted to Puppet and its scripting language and approach to configuration management…what about users wanting Chef? Or CFEngine? Or the next foobar configuration management tool to come along?
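To make the service-orchestration idea concrete, here’s a rough sketch of the kind of workflow Juju ended up enabling, using classic Juju CLI commands of that era (the mysql/wordpress charms are just illustrative examples, and this assumes an environment has already been configured):

```shell
juju bootstrap                      # stand up the environment (a cloud, or bare metal via MAAS)
juju deploy mysql                   # deploy a database service from its charm
juju deploy wordpress               # deploy an application service
juju add-relation wordpress mysql   # wire the two services together
juju add-unit wordpress -n 2        # scale the application out to more units
juju expose wordpress               # open the service to outside traffic
```

The point of the design is that the relations and scaling steps are the same whether the units land on cloud instances or on physical machines, which is the gap Orchestra couldn’t fill.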
If we only had a tool for orchestrating services that ran on bare metal, we’d be golden…and thus Metal as a Service (MAAS) was born.
MAAS was created for the sole purpose of giving Juju a way to orchestrate physical machines the same way Juju managed instances in the cloud. The easiest way to do this was to give cloud deployment architects the tools needed to manage pools of servers like a cloud. Once we began the project, we quickly realized it was good enough to stand on its own as a management tool for hardware, and so we expanded it into a full-fledged product. MAAS grew a rich API and a user-tested GUI, letting Juju, Ubuntu Server deployment, and Canonical’s Landscape product all leverage the same tool for managing hardware…allowing all three to benefit from the learnings and experiences of a shared codebase.

The CDO Evolves
In the middle of 2012, the then VP of CDO decided to seek new opportunities elsewhere. Senior management took this opportunity to look at the organizational structure of Core DevOps and adjust it according to both what we had learned over the previous 3 1/2 years and where we saw the evolution of IT and server/cloud development heading. The decision was made to focus the CDO more on cloud and scale-out server technologies, so the Online Services team was moved over to a more client-focused engineering unit. This left Launchpad and internal IT in the CDO; however, the decision was also made to move all server and cloud related project engineering teams and activities into the organization. The reasoning was pretty straightforward: put all of server dev and ops into the same team to eliminate “us vs them” siloed conversations, and streamline the feedback loop between engineering and internal users to accelerate both code quality and internal adoption. Seeing a chance for career growth, I applied to lead the CDO, was fortunate enough to get it, and thus became the new Vice President of Core DevOps.
My first decision as new lead of the CDO was to change the name. It might seem trivial, but while I felt it was key to keep to our roots in DevOps, the name Core DevOps no longer applied to our organization because of the addition of so much more server and cloud/scale-out engineering. We had also decided to scale back internal feature development on Launchpad, focusing more on maintenance and reviewing/accepting outside contributions. Out of a pure desire to reduce the overhead that department name changes usually cause in a company, I decided to keep the acronym and at first went with Cloud and DevOps. However, that name (and quite honestly the job title itself) seemed a little too vague…I mean, what does VP of Cloud or VP of DevOps really mean? It felt analogous to being the VP of Internet and Agile Development…heavy on buzzwords and light on actual meaning. So I made a minor tweak to “Cloud Development and Operations”, and while arguably still abstract, it at least covered everything we did within the organization at a high level.
At the end of 2012, we internally gathered representation from every team in the “new and improved” CDO for a week-long strategy session on how we’d take advantage of the reorganization. We reviewed team layouts, workflows, interactions, tooling, processes, development models, and even which teams individuals were on. Our goal was to ensure we didn’t duplicate effort unnecessarily, share best practices, eliminate unnecessary processes, break down communication silos, and generally come together as one true team. The outcome: some teams were broken apart, others newly formed, processes adapted, missions changed, and we lost some people who no longer felt they fit.
Entering into 2013, the goal was to simply get work done:
- Work to deploy, expand, and transition developers and production-level services to our internal OpenStack clouds: CanoniStack and ProdStack.
- Work to make MAAS and Juju more functional, reliable, and scalable.
- Work to make Ubuntu Server better suited for OpenStack, more easily consumable in the public cloud, and faster to bring up in all scale-out focused hardware deployments.
- Work to make Canonical’s Landscape product more relevant in the cloud space, while continuing to be true to its roots of server management.
All this work was in preparation for the 14.04 LTS release, i.e. the Trusty Tahr. Our feeling was (and still is) that this had to be the release when it all came together into a single integrated solution for use in *any* scale-out computing scenario…cloud…hyperscale…big data…high performance computing…etc. If a computing solution involved large numbers of computational machines (physical or virtual) and massively scalable workloads, we wanted Ubuntu Server to be the de facto OS of choice. By the end of last year, we had achieved a lot of the IT and engineering goals we set, and felt pretty good about ourselves. However, as a company we quickly discovered there was one thing we had left out of our grand plan to better align and streamline our efforts around scale-out technologies…professional delivery and support of these technologies.
To be clear, Canonical had not forgotten about growing or developing our teams of engineers and architects responsible for delivering solutions and support to customers. We had just left them out of our “how can we do this better” thinking when aligning the CDO. We were initially focused on improving how we developed and deployed, and we were benefiting from the changes made. However, as we began growing our scale-out computing customer base in hyperscale and cloud (both below and above), we began to see that the same optimizations made between dev and ops needed to be made in delivery. So in December of last year, we moved all hardware enablement and certification efforts for servers, along with the technical support and cloud consultancy teams, into the CDO. The goal was to strengthen the product feedback loop, remove more “us vs them” silos, and improve response times to customer issues found in the field. We were basically becoming a global team of scale-out technology superheroes.
It’s been only three months since our server and cloud enablement and delivery/support teams joined the CDO, and there are already signs of improvement in responsiveness to support issues and collaboration on technical design. I won’t lie and say it’s all been butterflies and roses, nor will I say we’re done and running like a smooth, well-oiled machine, because you simply can’t do that in three months, but I know we’ll get there with time and focus.
So there you have it.
The Cloud Development and Operations organization in Canonical is now 5 years strong. We deliver global, 24×7 IT services to Canonical, our customers, and the Ubuntu community. We have engineering teams creating server, cloud, hyperscale, and scale-out software technologies and solutions to problems some have yet to even consider. We deliver these technologies and provide customer support for Canonical across a wide range of products including Ubuntu Server and Cloud. This end-to-end integration of development, operations, and delivery is why Ubuntu Server 14.04 LTS, aka the Trusty Tahr, will be the most robust, technically innovative release of Ubuntu for the server and cloud to date.
For the last couple of weeks we’ve been working on the new Ubuntu wallpaper. It is never easy, trust me. The most difficult part was working out the connection between the old wallpapers and the new look and feel – Suru. The wallpaper has become an integral part of the Ubuntu brand; the strong colours and gradated flow are powerful, important elements. We realised this when looking at someone’s laptop from a distance – it really does shout UBUNTU.
We spent some time looking at our brand guidelines as well as previous wallpapers, thinking about how to connect the old with the new and how to make the transition smooth. I started with simple shapes, treating them as separate sheets of paper. After a while we moved away from that idea, simply because Suru is about simplicity and minimalism.
When we got the composition right we started to play with colours. We tried all our Ubuntu complementary colours, but we were not entirely happy. Don’t get me wrong ;) they did look nice, but it didn’t feel like the next step from our last wallpaper…
And here are some examples of the things I was exploring…
Last Sunday, I taught the “Getting started contributing to the Wiki docs” classroom session of the Doc Team’s Ubuntu Documentation Day. This was the first time I taught one and I learned some lessons:
- Have an outline of your lesson ready
- If possible, have the outline reviewed by someone else on the team to check for errors
- Have an example to go over, and let everyone who wants to try it work through it themselves
- Have enough time for pauses and use them for questions
- It’s okay to still have five to ten minutes left in your session; they can be used for questions
- You have to PM the Classbot with the commands in order for it to pick up questions from the chat channel and post them into the session
I hope these lessons can help others who want to teach a session.
The Community Wallpaper Contest for Trusty Tahr Cycle has come to an end.
Special thanks to everyone who took the time to vote and helped Ubuntu GNOME team with the very first Community Wallpaper Contest.
We have 10 winners.
Another Blueprint for Trusty Tahr has been implemented successfully.
Thank you to the Ubuntu GNOME Artwork Team. They have done a great job.
Thank you everyone for your endless help and continuous support!
On behalf of Ubuntu GNOME Artwork Team
Check out “loving the bottom edge” for the most important bit of design guidance for your Ubuntu mobile app.
This work has been a LOT of fun. It started when we were trying to find the zen of each edge of the screen, a long time back. We quickly figured out that the bottom edge is by far the most fun and by far the most accessible. You can always get to it easily; it feels great. I suspect that’s why Apple has used the bottom edge for their quick control access on iOS.
We started in the same place as Apple, thinking that the bottom edge was so nice we wanted it for ourselves, in the system. But as we discussed it, we started to think that the app developer was the one who deserved to do something really distinctive in their app with it instead. It’s always tempting to grab the tastiest bit for oneself, but the mark of civility is restraint in the use of power and this felt like an appropriate time to exercise that restraint.
Importantly, you can use it equally well if we split the screen into left and right stages. That made it a really important edge for us, because it meant it could be used equally well on the Ubuntu phone, with a single app visible on the screen, and on the Ubuntu tablet, where we have the side stage as a uniquely cool way to put phone apps on tablet screens alongside a bigger tablet app.
The net result is that you, the developer, and you, the user, have complete creative freedom with that bottom edge. There are of course ways to judge how well you’ve exercised that freedom, and the design guidance tries to leave you all the freedom in the world while still providing a framework for evaluating how good the result will feel to your users. If you want, there are some archetypes and patterns to choose from, but what I’d really like to see is NEW patterns and archetypes coming from diverse designs in the app developer community.
Here’s the key thing – that bottom edge is the one thing you are guaranteed to want to do more innovatively on Ubuntu than on any other mobile platform. So if you are creating a portable app, targeting a few different environments, that’s the thing to take extra time over for your Ubuntu version. That’s the place to brainstorm, try out ideas on your friends, make a few mockups. It’s the place you really express the single most important aspects of your application, because it’s the fastest, grooviest gesture in the book, and it’s all yours on Ubuntu.