Amazon Web Services recently announced the AWS Community Heroes Program, which publicly recognizes some of the many individuals around the world who contribute in so many ways to the community that has grown up around AWS services and products.
It is fun to be part of this community and to share the excitement that so many have experienced as they discover and promote new ways of working and more efficient ways of building projects and companies.
Here are some technologies I have gotten the most excited about over the decades. Each of these changed my life in a significant way as I invested serious time and effort learning and using the technology. The year represents when I started sharing the “good news” of the technology with people around me, who at the time usually couldn’t have cared less.
1980: Computers and Programming - “You can write instructions and the computer does what you tell it to! This is going to be huge!”
1987: The Internet - “You can talk to people around the world, access information that others make available, and publish information for others to access! This is going to be huge!”
1993: The World Wide Web - “You can view remote documents by clicking on hyperlinks, making it super-easy to access information, and publishing is simple! This is going to be huge!”
2007: Amazon Web Services - “You can provision on-demand disposable compute infrastructure from the command line and only pay for what you use! This is going to be huge!”
I feel privileged to have witnessed amazing growth in each of these and look forward to more productive use on all fronts.
A great way to meet thousands of people in the AWS community (and to spend a few days in intense learning about AWS no matter your current expertise level) is to attend the AWS re:Invent conference in Las Vegas this November. Perhaps I’ll see you there!
Original article: http://alestic.com/2014/09/aws-community-heroes
I bought a Cubieboard2 and made a Lubuntu 14.04 image! It is now really fast and easy to deploy that image on a Cubieboard2 with 4GB of NAND.
Download the Lubuntu 14.04 image for CubieBoard2 here.
LUBUNTU 14.04 INSTALL STEPS:
Boot a live distro from a microSD card (>8GB), for example Cubian, written following these steps.
Copy the downloaded Lubuntu image to the root of the microSD.
Boot the Cubieboard2 with Cubian from the microSD.
Open a terminal (Menu / Accessories / LXTerminal) and run:
sudo su -
[password is "cubie"]
dd if=/lubuntu-14.04-cubieboard2-nand.img conv=sync,noerror bs=64K of=/dev/nand
That's it! Reboot :) You should now have Lubuntu 14.04.1 running on the 4GB NAND partition. User: linaro, password: linaro.
RECOMMENDED STEPS AFTER INSTALLATION:
sudo su -
- Add your new user (replace 'username' with your new user's name):
- Set the keyboard layout persistently (for example, "es" for Spanish):
- Set the local time zone (for example, Europe/Madrid for Spain); otherwise the browser will have problems with HTTPS web pages:
- Change the password of the linaro user, or remove that user (logout required; it has sudo rights and everyone knows the default password, so do it ;):
- Install an SSH client to connect out over ssh, or pulseaudio and pavucontrol for audio.
A rough sketch of the commands for these steps follows below.
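The original post lists the exact commands for each step; as a minimal sketch (run as root, with 'username', the "es" layout and Europe/Madrid as placeholder values), they look roughly like this:
# add the new user and give it sudo rights
adduser username
adduser username sudo
# set the keyboard layout persistently (interactive dialog)
dpkg-reconfigure keyboard-configuration
# set the local time zone
echo "Europe/Madrid" > /etc/timezone
dpkg-reconfigure -f noninteractive tzdata
# change the linaro password (or remove the user once logged in as the new one)
passwd linaro
# deluser --remove-home linaro
# install an ssh client, plus pulseaudio and pavucontrol for audio
apt-get install openssh-client pulseaudio pavucontrol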
HOW WAS THIS IMAGE DONE?
For this image I installed the official Lubuntu 13.04 image from here, and made these changes:
- Resized NAND to 4GB (Ubuntu will use 1.5GB; 2GB free). You can use a microSD or SATA HD as external storage.
- Updated to 13.10 and then to 14.04 LTS (updated the lxde* packages to the latest versions).
- Installed ntp, firefox, audacious, sylpheed, pidgin, gpicview, lxappearance and ufw (not enabled)
- Fixed the write permissions and group owner of /, /etc and /lib to avoid ufw warnings.
- Removed chromium-browser, gnome-network-manager and gnome-disk-utility
- Removed passwordless sudo for admin users (edited /etc/sudoers)
- Created this dd image
(OPTIONAL) BACK UP YOUR CURRENT CUBIEBOARD2 FIRST:
Insert a microSD card and, from your current OS, run:
sudo su -
dd if=/dev/nand conv=sync,noerror bs=64K | gzip -c -9 > /nand.img.gz
(OPTIONAL) RESTORE THAT BACKUP:
gunzip /nand.img.gz
dd if=/nand.img conv=sync,noerror bs=64K of=/dev/nand
Just a quick post to help those who might be running older/unsupported distributions of Linux, mainly Ubuntu 8.04, who need to patch their version of bash due to the recent Shellshock exploit here:
I found this post and can confirm it works:
Here are the steps (make a backup of /bin/bash just in case):
#assume that your sources are in /src
cd /src
#download bash 4.3 and all patches
wget http://ftp.gnu.org/gnu/bash/bash-4.3.tar.gz
for i in $(seq -f "%03g" 0 25); do wget http://ftp.gnu.org/gnu/bash/bash-4.3-patches/bash43-$i; done
tar zxvf bash-4.3.tar.gz
cd bash-4.3
#apply all patches
for i in $(seq -f "%03g" 0 25); do patch -p0 < ../bash43-$i; done
#build and install
./configure && make && make install
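Once installed, the widely circulated check for the original CVE-2014-6271 issue is a quick way to confirm the rebuilt shell is no longer vulnerable; a patched bash prints only "this is a test":
# a vulnerable bash additionally prints "vulnerable"
env x='() { :;}; echo vulnerable' bash -c "echo this is a test"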
KDE Frameworks 5.2.0 has been released to the Utopic archive!
(Actually a few days ago, we are playing catch up since Akademy)
Also, I have finished packaging Plasma 5.0.2, it looks and runs great!
We desperately need more testers! If you would like to help us test,
please join us in #kubuntu-devel on IRC. Thanks!
A few weeks ago I was blessed with the opportunity to attend KDE’s Akademy Conference for the first time. (Thank you Ubuntu Donors for sponsoring me!).
Akademy is a week long conference that begins with a weekend of keynote speakers, informative lectures, and many hacking groups scattered about.
This Akademy also had a great pre-release party held by Red Hat.
I have not traveled such a distance since I was a child, so I was not prepared for the adventures to come. Hint: Pack lightly! I still have nightmares of the giant suitcase I thought I would need! I was lucky to have a travel buddy / roommate (Thank you Valorie Zimmerman!) to assist me in my travels and, most importantly, to introduce me to my peers at KDE/Kubuntu whom I had never met in person. It was wonderful to finally put a face to the names.
My first few days were rather difficult. I was fighting my urge to stand in a corner and be shy. Luckily, some friendly folks dragged me out of the corner and introduced me to more and more people. With each introduction and conversation it became easier. I also volunteered at the registration desk, which gave me an opportunity to meet new people. As the days went on and many great conversations later, I forgot I was shy! In the end I made many friends during Akademy, turning this event into one of the most memorable moments of my life.
The weekend brought Keynote speakers and many informative lectures. Unfortunately, I could not be in several places at once, so I missed a few that I wanted to see.
Thankfully, you can see them here: https://conf.kde.org/en/Akademy2014/public/schedule/2014-09-06
Due to circumstances out of their control, the audio is not great. The rest of the week was filled with BoF sessions / workshops / hacking / collaboration / anything we could think of that needed to get done. In the BoF sessions we covered a lot of ground and hashed out ways to resolve problems we were facing. All that I attended were extremely productive. Yet another case where I wish I could split into multiple people so I could attend everything I wanted to!
On Thursday we got an entire Kubuntu Day! We accomplished many things including working with Debian’s Sune and Pino to move some of our packaging to Debian git to reduce duplicate packaging work. We discussed the details of going to continuous packaging which includes Jenkins CI. We also had the pleasure of München’s Limux project joining us to update us with the progress of Kubuntu in Munich, Germany!
While there was a lot of work accomplished during Akademy, there was plenty of play as well! In the evenings many of us would go out on the town for dinner and drinks.
On Wednesday, on the day trip, we visited an old castle (what a hike!) via a nice ferry ride. Unfortunately I forgot my camera at the hostel. The hackroom in the hostel was always bustling with activity. We even had the pleasure of very tasty home-cooked meals by Jos Poortvliet in the tiny hostel kitchen a couple of nights; that took some creative thinking! In the end, there was never a moment of boredom and always moments of learning, discussion, hacking and laughing.
If you ever have the opportunity to attend Akademy, do not pass it up!
Today I not only submitted my bachelor thesis to the printing company, I also released a new version of hardlink, my file deduplication tool.
hardlink 0.3 now features support for extended attributes (xattrs), contributed by Tom Keel at Intel. If this does not work correctly, please blame him.
I also added support for a --minimum-size option.
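As a usage sketch (the option name comes from this post; the size argument and its unit are just assumed examples), the new option lets you skip small files when deduplicating:
# hard-link duplicate files under /srv/mirror, ignoring files smaller than 1 MiB
hardlink --minimum-size 1048576 /srv/mirror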
Most of the other code has been tested since the upload of RC1 to experimental in September 2012.
The next major version will split up the code into multiple files and clean it up a bit. It’s getting a bit long now in a single file.
Be careful of headlines, they appeal to our sense of the obvious and the familiar, they entrench rather than challenge established stereotypes and memes. What one doesn’t read about every day is usually more interesting than what’s in the headlines. And in the current round of global unease, what’s not being said – what we’ve failed to admit about our Western selves and our local allies – is central to the problems at hand.
Both Iraq and Ukraine, under Western tutelage, failed to create states which welcome diversity. Both Iraq and the Ukraine aggressively marginalised significant communities, with the full knowledge and in some cases support of their Western benefactors. And in both cases, those disenfranchised communities have rallied their cause into wars of aggression.
Reading the Western media one would think it’s clear who the aggressors are in both cases: Islamic State and Russia are “obvious bad actors” whose behaviour needs to be met with stern action. Russia clearly has no business arming rebels with guns they use irresponsibly to tragic effect, and the Islamic State are clearly “a barbaric, evil force”. If those gross simplifications, reinforced in the Western media, define our debate and discussion on the subject then we are destined to pursue some painful paths with little but frustration to show for the effort, and nasty thorns that fester indefinitely. If that sounds familiar it’s because yes, this is the same thing happening all over again. In a prior generation, only a decade ago, anger and frustration at 9/11 crowded out calm deliberation and a focus on the crimes in favour of shock and awe. Today, out of a lack of insight into the root cause of Ukrainian separatism and Islamic State’s attractiveness to a growing number across the Middle East and North Africa, we are about to compound our problems by slugging our way into a fight we should understand before we join.
This is in no way to say that the behaviour of Islamic State or Russia are acceptable in modern society. They are not. But we must take responsibility for our own behaviour first and foremost; time and history are the best judges of the behaviour of others.
In the case of the Ukraine, it’s important to know how miserable it has become for native Russian speakers born and raised in the Ukraine. People who have spent their entire lives as citizens of the Ukraine, who happen to speak Russian at home, at work, in church and at social events, have found themselves discriminated against by official decree from Kiev. Friends of mine with family in Odessa tell me that there have been systematic attempts to undermine and disenfranchise Russian speakers in the Ukraine. “You may not speak your home language in this school”. “This market can only be conducted in Ukrainian, not Russian”. It’s important to appreciate that being a Russian speaker in Ukraine doesn’t necessarily mean one is not perfectly happy to be a Ukrainian. It just means that the Ukraine is a culturally diverse nation, and has been throughout our lifetimes. This is a classic story of discrimination. Friends of mine who grew up in parts of Greece tell a similar story about the Macedonian culture being suppressed – schools being forced to punish the Macedonian language spoken on the playground.
What we need to recognise is that countries – nations – political structures – which adopt ethnic and cultural purity as a central idea, are dangerous breeding grounds for dissent, revolt and violence. It matters not if the government in question is an ally or a foe. Those lines get drawn and redrawn all the time (witness the dance currently under way to recruit Kurdish and Iranian assistance in dealing with IS, who would have thought!) based on marriages of convenience and hot button issues of the day. Turning a blind eye to thuggery and stupidity on the part of your allies is just as bad as making sure you’re hanging with the cool kids on the playground even if it happens that they are thugs and bullies – stupid and shameful short-sightedness.
In Iraq, the government installed and propped up with US money and materials (and the occasional slap on the back from Britain) took a pointedly sectarian approach to governance. People of particular religious communities were removed from positions of authority, disqualified from leadership, hunted and imprisoned and tortured. The US knew that leading figures in their Iraqi government were behaving in this way, but chose to continue supporting the government which protected these thugs because they were “our people”. That was a terrible mistake, because it is those very communities which have morphed into Islamic State.
The modern nation states we call Iraq and the Ukraine – both with borders drawn in our modern lifetimes – are intrinsically diverse, intrinsically complex, intrinsically multi-cultural parts of the world. We should know that a failure to create governments of that diversity, for that diversity, will result in murderous resentment. And yet, now that the lines for that resentment are drawn, we are quick to choose sides, precisely the wrong position to take.
What makes this so sad is that we know better and demand better for ourselves. The UK and the US are both countries who have diversity as a central tenet of their existence. Freedom of religion, freedom of expression, the right to a career and to leadership on the basis of competence rather than race or creed are major parts of our own identity. And yet we prop up states who take precisely the opposite approach, and wonder why they fail, again and again. We came to these values through blood and pain, we hold on to these values because we know first hand how miserable and how wasteful life becomes if we let human tribalism tear our communities apart. There are doors to universities in the UK on which have hung the bodies of religious dissidents, and we will never allow that to happen again at home, yet we prop up governments for whom that is the norm.
The Irish Troubles were a war nobody could win. They were resolved through dialogue. South African terrorism in the ’80s was a war nobody could win. It was resolved through dialogue and the establishment of a state for everybody. Time and time again, “terrorism” and “barbarism” are words used to describe fractious movements by secure, distant seats of power, and in most of those cases, allowing that language to dominate our thinking leads to wars that nobody can win.
Russia made a very grave error in arming Russian-speaking Ukrainian separatists. But unless the West holds Kiev to account for its governance, unless it demands an open society free of discrimination, the misery there will continue. IS will gain nothing but contempt from its demonstrations of murder – there is no glory in violence against the defenceless and the innocent – but unless the West bends its might to the establishment of societies in Syria and Iraq in which these religious groups are welcome and free to pursue their ambitions, murder will be the only outlet for their frustration. Politicians think they have a new “clean” way to exert force – drones and airstrikes without “boots on the ground”. Believe me, that’s false. Remote control warfare will come home to fester on our streets.
Today, we worked, with the help of ioerror on IRC, on reducing the attack surface in our fetcher methods.
There are three things that we looked at:
- Reducing privileges by setting a new user and group
- chroot()
- A seccomp-bpf sandbox
Today, we implemented the first of them. Starting with 1.1~exp3, the APT directories /var/cache/apt/archives and /var/lib/apt/lists are owned by the “_apt” user (username suggested by pabs). The methods switch to that user shortly after the start. The only methods doing this right now are: copy, ftp, gpgv, gzip, http, https.
If privileges cannot be dropped, the methods will fail to start. No fetching will be possible at all.
- We drop all groups except the primary gid of the user
- copy breaks if that group has no read access to the files
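A quick way to see the effect of the change on a system running the new apt (paths and user name as described above; exact output will of course vary):
# the download directories should now be owned by the new user
ls -ld /var/cache/apt/archives /var/lib/apt/lists
# and the dedicated account the methods switch to should exist
getent passwd _apt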
We plan to also add chroot() and seccomp sandboxing later on, to further reduce the attack surface for untrusted files and protocol parsing.
I invented the word ‘umstraßen’ about 5 years ago while walking to Mauerpark with a friend. We needed to cross the road, so I said ‘wollen wir umstraßen?’ (‘shall we umstraßen?’), because, well, ‘umsteigen’ (to change trains) can be a word. Of course it means ‘die Straßenseite wechseln’ (to switch to the other side of the street) in common German, but one word is better than three, right? This one is generally popular with German native speakers, so let’s see if we can get it into the Duden :).
This is a source and binary compatibility break from the 0.x.y series of Grantlee releases. The major version number has been bumped to 5 in order to match the Qt major version requirement, and to reflect the maturity of the Grantlee libraries. The compatibility breaks are all minor, with the biggest impact being in the build system, which now follows the patterns of modern CMake.
We are starting to see multiple awesome code contributions and suggestions on our Ubuntu Loves Developers effort and we are eagerly waiting on yours! As a consequence, the spectrum of supported tools is going to expand quickly and we need to ensure that all those different targeted developers are well supported, on multiple releases, always delivering the latest version of those environments, at anytime.
A huge task that we can only support thanks to a large suite of tests! Here are some details on what we currently have in place to achieve and ensure this level of quality.
Different kinds of tests
pep8 test
The pep8 test is there to ensure code quality and consistency checking. Tests results are trivial to interpret.
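As a rough local illustration (this is a generic invocation of the pep8 tool, not necessarily the project's exact CI command), the same kind of check can be run over a source tree with:
# report PEP 8 style violations for all Python files under the current directory
pep8 .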
small tests
Those are basically unit tests. They enable us to quickly see if we've broken anything with a change, or if the distribution itself broke us. We try in particular to cover multiple corner cases that are easy to test that way.
Large tests are real user-based testing. We execute udtc, feed various scenarios to stdin (like installing, reinstalling, removing, installing to a different path, aborting, ensuring the IDE can start…) and check that the resulting behavior is the one we expect.
Those tests enable us to know if something in the distribution broke us, if a website changed its layout or its download links, or if a newer version of a framework can't be launched on a particular Ubuntu version or configuration. That way we are aware, ideally even before the user, that something is broken, and can act on it.
Those tests are running every couple of hours on jenkins, using real virtual machines running an Ubuntu Desktop install.
medium tests
Finally, the medium tests inherit from the large tests. Thus, they run exactly the same suite of tests, but in a Docker containerized environment, with mocks and small assets, not relying on the network or any archives. This means that we ship and emulate a webserver delivering web pages to the container, pretending we are, for instance, https://developer.android.com. We then deliver fake requirement packages and mock tarballs to udtc, and run those.
Implementing a medium test is generally really easy; for instance:
class BasicCLIInContainer(ContainerTests, test_basics_cli.BasicCLI):
    """This will test the basic cli command class inside a container"""
is enough. That means "take all the BasicCLI large tests, and run them inside a container". All the hard work, wrapping, sshing and testing is done for you. Simply implement your large tests and they will be able to run inside the container through this inheritance!
We added as well more complex use cases, like emulating a corrupted download with an md5 checksum mismatch. We generate this controlled environment and share it using trusted containers from Docker Hub that we generate from the Ubuntu Developer Tools Center DockerFile.
Those tests are running as well every couple of hours on jenkins.
By comparing medium and large tests, as the first runs in a completely controlled environment, we can decipher whether we or the distribution broke us, or whether a change from a third party (changing their website or requesting newer version requirements) impacted us, as the failure will then only occur in the large tests and not in the medium ones.
Running all tests, continuously!
As some of the tests can show the impact of external parts, be it the distribution or even websites (as we parse some download links), we need to run all those tests regularly. Note as well that we can experience different results on various configurations. That's why we are running all those tests every couple of hours, once using the system-installed tests, and then with the tip of master. Those are running on various virtual machines (like here, 14.04 LTS on i386 and amd64).
By comparing all this data, we know if a new commit introduced regressions, or if a third party broke something and we need to fix or adapt to it. Each test suite has a bunch of artifacts attached, so we can inspect the dependencies installed and the exact version of UDTC tested, and ensure we don't corner ourselves with subtleties like "it works in trunk, but is broken once installed".
You can see on that graph that trunk has more tests (and features… just wait for some days before we tell more about them ;)) than latest released version.
As metrics are key, we collect code coverage and line metrics on each configuration to ensure we are not regressing on our target of keeping high coverage. That tracks as well various stats like the number of lines of code.
Conclusion
Thanks to all this, we'll probably know even before any of you if anything is suddenly broken and put actions in place to quickly deliver a fix. With each new kind of breakage we plan to back it up with a new suite of tests to ensure we never see the same regression again.
As you can see, we are pretty hardcore about tests and believe it's the only way to keep quality and a sustainable system. With all that in place, as a developer, you should just enjoy your productive environment and not have to bother about the operating system itself. We have you covered!
 if tests are not running regularly, you can consider them broken anyway
Firstly, I very much like the focus on social structure and needs – what our users and deployers need from us. That seems entirely right.
And I very much like the getting away from TC picking winners and losers. That was never an enjoyable thing when I was on the TC, and I don’t think it has made OpenStack better.
However, the thing that picking winners and losers did was that it allowed users to pick an API and depend on it. Because it was the ‘X API for OpenStack’. If we don’t pick winners, then there is no way to say that something is the ‘X API for OpenStack’, and that means that there is no forcing function for consistency between different deployer clouds. And so this appears to be why Ring 0 is needed: we think our users want consistency in being able to deploy their application to Rackspace or HP Helion. They want vendor neutrality, and by giving up winners-and-losers we give up vendor neutrality for our users.
That’s the only explanation I can come up with for needing a Ring 0 – because it’s still winners and losers: picking an arbitrary project (e.g. keystone) and grandfathering it in, if you will. If we really want to get out of the role of selecting projects, I think we need to avoid this. And we need to avoid it without losing vendor neutrality (or we need to give up the idea of vendor neutrality).
One might say that we must pick winners for the very core just by its nature, but I don’t think that’s true. If the core is small, many people will still want vendor neutrality higher up the stack. If the core is large, then we’ll have a larger % of APIs covered and stable, granting vendor neutrality. So a core with fixed APIs will be under constant pressure to expand: not just from developers of projects, but from users that want API X to be fixed and guaranteed available and working a particular way at [most] OpenStack clouds.
Ring 0 also fulfils a quality aspect – we can check that it all works together well in a realistic timeframe with our existing tooling. We are essentially proposing to pick functionality that we guarantee to users; and an API for that which they have everywhere, and the matching implementation we’ve tested.
To pull from Monty’s post:
“What does a basic end user need to get a compute resource that works and seems like a computer? (end user facet)
What does Nova need to count on existing so that it can provide that. “
He then goes on to list a bunch of things, but most of them are not needed for that:
We need Nova (it’s the only compute API in the project today). We don’t need keystone (Nova can run in noauth mode and deployers could just have e.g. Apache auth on top). We don’t need Neutron (Nova can do that itself). We don’t need cinder (use local volumes). We need Glance. We don’t need Designate. We don’t need a tonne of stuff that Nova has in it (e.g. quotas) – end users kicking off a simple machine have -very- basic needs.
Consider the things that used to be in Nova: Deploying containers. Neutron. Cinder. Glance. Ironic. We’ve been slowly decomposing Nova (yay!!!) and if we keep doing so we can imagine getting to a point where there truly is a tightly focused code base that just does one thing well. I worry that we won’t get there unless we can ensure there is no pressure to be inside Nova to ‘win’.
So there’s a choice between a relatively large set of APIs that make the guaranteed-available APIs comprehensive, or a small set that will give users what they need just at the beginning but might not be broadly available, leaving us dependent on some unspecified process for the deployers to agree on and consolidate around which ones they make available consistently.
In short, one of the big reasons we were picking winners and losers in the TC was to consolidate effort around a single API – not implementation (keystone is already on its second implementation). All the angst about defcore and compatibility testing is going to be multiplied when there is lots of ecosystem choice around APIs above Ring 0, and the only reason that won’t be a problem for Ring 0 is that we’ll still be picking winners.
How might we do this?
One way would be to keep picking winners at the API definition level but not the implementation level, and make it possible for the competition to replace something entirely if they implement the existing API [and win the hearts and minds of deployers]. That would open the door to everything being flexible – and it’s happened before with Keystone.
Another way would be to not even have a Ring 0. Instead have a project/program that is aimed at delivering the reference API feature-set built out of a single, flat Big Tent – and allow that project/program to make localised decisions about what components to use (or not). Testing that all those things work together is not much different than the current approach, but we’d have separated out as a single cohesive entity the building of a product (Ring 0 is clearly a product) from the projects that might go into it. Projects that have unstable APIs would clearly be rejected by this team; projects with stable APIs would be considered etc. This team wouldn’t be the TC : they too would be subject to the TC’s rulings.
We could even run multiple such teams – as hinted at by Dean Troyer in one of the email thread posts. Running with that, I’d then be suggesting:
- IaaS product: selects components from the tent to make OpenStack/IaaS
- PaaS product: selects components from the tent to make OpenStack/PaaS
- CaaS product (containers)
- SaaS product (storage)
- NaaS product (networking – but things like NFV, not the basic Neutron we love today). Things where the thing you get is useful in its own right, not just as plumbing for a VM.
So OpenStack/NaaS would have an API or set of APIs, and they’d be responsible for considering maturity, feature set, and so on, but wouldn’t ‘own’ Neutron, or ‘Neutron incubator’ or any other component – they would be a *cross project* team, focused at the product layer, rather than the component layer, which nearly all of our folk end up locked into today.
Lastly, Sean has also pointed out that we have large-N, N^2 communication issues – I think I’m proposing to drive the scope of any one project down to a minimum, which gives us more N, but shrinks the size within any project, so folk don’t burn out as easily, *and* so that it is easier to predict the impact of changes – clear contracts and APIs help a huge amount there.
Naturally the focus is on constructive feedback. The members of such a group must not only trust one another, but see each other as peers. Catmull observes that it is difficult to be candid if you are thinking about not looking like an idiot! He also says that this is crucial at Pixar because, in the beginning, all of our movies suck. I'm not sure this is true of KDE software, but maybe it is. Not until the code is exposed to others, to testing, to accessibility teams, HIG, designers--can it begin to not suck.
I think that we do some of this process in our sprints, on the lists, maybe in IRC and on Reviewboard, but perhaps we can be even more explicit in our calls for review and testing. The key of course is to criticize the product or the process, not the person writing the code or documentation. And on the other side, it can be very difficult to accept criticism of your work even when you trust and admire those giving you that criticism. It is something we must continually learn, in my experience.
People who take on complicated creative projects become lost at some point in the process....How do you get a director to address a problem he or she cannot see? ...The process of coming to clarity takes patience and candor. We try to create an environment where people want to hear one another's notes [feedback] even where those notes are challenging, and where everyone has a vested interest in one another's success.
Let me repeat that, because to me, that is the key of a working, creative community: "where everyone has a vested interest in one another's success." I think we in KDE feel that but perhaps do not always live it. So let us ask one another for feedback and criticism, strive to pay attention to it, and evaluate criticism dispassionately. I think we have missed this bit sometimes in the past in KDE, and it has come back to bite us. We need to get better.
Catmull makes the point that the Braintrust has no authority, and says this is crucial:
the director does not have to follow any of the specific suggestions given. .... It is up to him or her to figure out how to address the feedback....While problems in a movie are fairly easy to identify, the sources of these problems are often extraordinarily difficult to assess.
He continues,
We do not want the Braintrust to solve a director's problem because we believe that...our solution won't be as good....We believe that ideas....only become great when they are challenged and tested.
More than once, he discusses instances where big problems led to Pixar's greatest successes, because grappling with these problems brought out their greatest creativity. "While problems ... are fairly easy to identify, the sources of these problems are often extraordinarily difficult to assess." How familiar does this sound to us working in software!? So, at Pixar,
the Braintrust's notes ...are intended to bring the true causes of the problems to the surface--not to demand a specific remedy. Moreover, we don't want the Braintrust to solve a director's problem because we believe that, in all likelihood, our solution won't be as good as the one the director and his or her creative team comes up with. We believe that ideas--and thus films--only become great when they are challenged and tested.
I've seen that often this last bit is a sticking point. People are willing to criticize a piece of code, or even the design, but want their own solution instead. Naturally, this way of working encounters pushback.
"Frank talk, spirited debate, laughter, and love" is how Catmull sums up Braintrust meetings. Sound familiar? I've just come from Akademy, which I can sum up the same way. Let's keep doing this in all our meetings, whether they take place in IRC, on the lists, or face to face. Let's remember not to hold back; when we see a problem, have the courage to raise the issue. We can handle problems, and facing them is the only way to solve them and get better.
Please note that we especially need testers for PPC chips and Intel Macs. We have a special section discussing it here. In particular, if you have an Intel Mac, I have a few questions for you that might help us trim down the workload of the testing team.
Also, if you have a PPC chip, we're about the only distro actively supporting this architecture. However, we are community supported, so without formal testing, the arch will lose more support. So please, join in testing!
To help make sure the final utopic image is in good shape, we need your help and test results! Please, head over to the milestone on the isotracker, select your favorite flavor and perform the needed tests against the images.
If you've never submitted test results for the iso tracker, check out the handy links on top of the isotracker page detailing how to perform an image test, as well as a little about how the qatracker itself works. If you still aren't sure or get stuck, feel free to contact the qa community or myself for help. Happy Testing!
Release Metrics and Incoming Bugs
Release metrics and incoming bug data can be reviewed at the following link:
Status: Utopic Development Kernel
The Utopic kernel has been rebased to the v3.16.3 upstream stable kernel
and uploaded to the archive, ie. 3.16.0-17.23. Please test and let us
know your results.
Also, we’re approximately 2 weeks away from Utopic Kernel Freeze on
Thurs Oct 9. Any patches submitted after kernel freeze are subject to
our Ubuntu kernel SRU policy.
Important upcoming dates:
Thurs Sep 25 – Utopic Final Beta (~2 days away)
Thurs Oct 9 – Utopic Kernel Freeze (~2 weeks away)
Thurs Oct 16 – Utopic Final Freeze (~3 weeks away)
Thurs Oct 23 – Utopic 14.10 Release (~4 weeks away)
The current CVE status can be reviewed at the following link:
Status: Stable, Security, and Bugfix Kernel Updates – Trusty/Precise/Lucid
Status for the main kernels, until today (Sept. 23):
- Lucid – Kernel prep
- Precise – Kernel prep
- Trusty – Kernel prep
Current opened tracking bugs details:
For SRUs, SRU report is a good source of information:
cycle: 19-Sep through 11-Oct
19-Sep Last day for kernel commits for this cycle
21-Sep – 27-Sep Kernel prep week.
28-Sep – 04-Oct Bug verification & Regression testing.
05-Oct – 11-Oct Regression testing & Release to -updates.
Open Discussion or Questions? Raise your hand to be recognized
No open discussions.
Before Greedo shot first...
Before a troubled young Darth Vader braided his hair...
Before midichlorians offered to explain the inexplicably perfect and perfectly inexplicable...
And before Jar Jar Binks burped and farted away the last remnants of dear Obi-Wan's "more civilized age"...
...I created something, of which I was very, very proud at the time. Remarkably, I came across that creation, somewhat randomly, as I was recently throwing away some old floppy disks.
Twenty years ago, it was 1994. I was 15 years old, just learning to program (mostly on my own), and I created a "trivia game" based around Star Wars. 1,700 lines of Turbo Pascal. And I made every mistake in the book:
- One monolithic file
- Global variables
- No database or external data files
- No object orientation
- Minimal exception handling
- Giant, enormous case statements
- Ridiculous amounts of code duplication
- Poor indentation
- Zero inline comments
- Assumptions that terminals would always and forevermore be 80x24
- Self invented UI guidelines
- Possibly even egregious copyright violations?
Welcome to swline.pas. Almost unbelievably, I was able to compile it tonight on an Ubuntu 14.04 LTS 64-bit Linux desktop, using fpc, after three very minor changes (rough shell equivalents are sketched after the list):
- Running fromdos to remove the trailing ^M characters endemic to many DOS-era text files
- Replacing the (80MHz) CPU clock based sleep function with Delay()
- Running iconv to convert the embedded 437 code page ASCII art to UTF-8
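Roughly, the first and third of those changes, plus the compile itself, look like this from the shell (treat it as a sketch; the exact flags used may have differed):
# strip the trailing DOS ^M line endings (fromdos is in the tofrodos package)
fromdos swline.pas
# convert the embedded code page 437 ANSI art to UTF-8
iconv -f CP437 -t UTF-8 swline.pas > swline.utf8 && mv swline.utf8 swline.pas
# compile with Free Pascal
fpc swline.pas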
Would you look at that!
- 8-bit color!
- Hand drawn ANSI art!
- Scrolling text of the iconic Star Wars, Empire Strikes Back, and Return of the Jedi logos!
- Random stars and galaxies drawn on the splash screen!
- No graphic interface framework (a la Newt or Ncurses) -- just a whole bunch of GotoXY() calls.
- An option for sound (which, unfortunately, doesn't seem to work -- I'm sure it was just 8-bits of bleeps and bloops).
- 300 hand typed quotes (and answers) spanning all 3 movies!
- An Easter Egg, and a Cheat Code!
- User input!
- And an option at the very end to start all over again!
But watching a video is boring... Why don't you try it for yourself!?!
I thought this would be a perfect use case for Docker: just a little Docker image, based on Ubuntu, which includes a statically built swline, and is set to run that one binary (and only that one binary) when launched. As simple as it gets: Makefile and Dockerfile.
$ cat Makefile
all:
	fpc -k--static swline.pas
$ cat Dockerfile
# (the FROM and CMD lines are assumed from the description above; the original
# Dockerfile lives in the Launchpad/Github repositories linked at the end)
FROM ubuntu:14.04
MAINTAINER Dustin Kirkland
ADD swline /swline
CMD ["/swline"]
I've pushed a Docker image containing the game to the Docker Hub registry.
Quick note... You're going to want a terminal that's 25 characters high and 160 characters wide (sounds weird, yes, I know -- the ANSI art characters are double-byte wide and do some weird things to smaller terminals, and my interest in debugging this is pretty much non-existent -- send a patch!). I launched gnome-terminal and pressed Ctrl+minus to shrink the font size on my laptop.
On an Ubuntu 14.04 LTS machine:
$ sudo apt-get install docker.io
$ sudo docker pull kirkland/swline
$ sudo docker run -t -i kirkland/swline
Of course you can find, build, run, and modify the original (horrible!) source code in Launchpad and Github.
Now how about that for a throwback Tuesday ;-)
May the Source be with you! Always!
p.s. Is this the only gem I found on those 17 floppy disks? Nope :-) Not by a long shot.
Welcome to the Ubuntu Weekly Newsletter. This is issue #384 for the week September 15 – 21, 2014, and the full version is available here.
In this issue we cover:
- Call for nominations to the LoCo Council
- Ubuntu Stats
- New SubLoCo Policy
- Thomas Ward: NGINX in Ubuntu, PPAs, and Debian: naxsi packages to be dropped by the end of the month.
- Jose Antonio Rey: 3 years and counting…
- Elizabeth K. Joseph: Ubuntu at Fossetcon 2014
- Jorge Castro: Juju Ecosystem Team Sprint Report
- Final Beta 9/22 – 9/25
- In The Blogosphere
- Ubuntu Podcast from the UK LoCo: S07E25 – The One Where the Monkey Gets Away
- Weekly Ubuntu Development Team Meetings
- Upcoming Meetings and Events
- Updates and Security for 10.04, 12.04 and 14.04
- And much more!
This issue of the Ubuntu Weekly Newsletter is brought to you by:
- Elizabeth K. Joseph
- Diego Turcios
- John Mahoney
- And many others
Except where otherwise noted, content in this issue is licensed under a Creative Commons Attribution ShareAlike 3.0 License.
This is a $15 million competition in which teams are challenged to create Open Source software that will teach a child to read, write, and perform arithmetic in 18 months without the aid of a teacher. This is not designed to replace teachers but to instead provide an educational solution where little or none exists.
There are 57 million children aged 5 – 12 in the world today who have no access to education. There are 250 million children below basic literacy levels, even after several years of school. You may think the solution to this is to build more schools, but we would need an extra 1.6 million teachers by next year to provide universal primary education.
This is a tragedy.
This new XPRIZE is designed to help fix this.
Every child should have a right to the core ingredient that is literacy. It unlocks their potential and opens up opportunity. Just think of all the resources we depend on today for growth and education…the Internet, books, wikipedia, collaborative tools…without literacy all of these are inaccessible. It is time to change this. Too many suffer from a lack of literacy, and sadly girls bear much of the brunt of this too.
This prize is open to anyone to participate in. Professional developers, hobbyists, students, scientists, teachers…everyone is welcome to join in and compete. While the $15 million purse is attractive in itself, just think of the impact of potentially changing the lives of hundreds of millions of kids.
Coopetition For a Great Cause
What really excites me about this new XPRIZE is that it is the first Open Source XPRIZE. The winning team and the four runner-up teams will be expected to Open Source their entire code-base, assets, and material. This will create a solid foundation of education technology that can live on…long past the conclusion of this XPRIZE.
That isn’t the only reason why this excites me though. The Open Source nature of this prize provides an incredible opportunity for coopetition; where teams can collaborate around common areas of interest and problem-solving, while keeping their secret sauce to themselves. The impact of this could be profound.
I will be working hard to build an environment in which we encourage this kind of collaboration. It makes no sense for 100 teams to all solve the same problems privately in their own silo. Let’s get everyone up and running in GitHub, collaborating around these common challenges, so all the teams benefit from that pooling of resources.
Let’s also open this up so everyone can help us be successful. Let’s invite designers, translators, testers, teachers, scientists, musicians, artists and more…everyone has something they can bring to solve one of our grandest challenges, and help create a more literate and peaceful world.
Everyone Can Play a Part
As part of this new XPRIZE we are also launching a crowdfunding campaign that is designed to raise additional resources so we can do even more as part of the prize. We have already funded the $15 million prize purse and some field-testing, but this crowdfunding campaign will provide the resources for us to do so much more.
This will help us broaden the field-testing in more countries, with more kids, to grow a global community around solving these grand challenges, build a collaborative environment for teams to work together on common problems, and optimize this new XPRIZE to be as successful as possible. Every penny contributed helps us to do more and ultimately squeeze the most value out of this important XPRIZE.
There are ten things you can do to help:
- Contribute! - a great place to start is to buy one of our awesome perks from the crowdfunding campaign. Find out more here.
- Join the Community - come and meet the new XPRIZE community at http://forum.xprize.org and share ideas, brainstorm, and collaborate around new projects.
- Refer Friends and Win Prizes - want to win an expenses-paid trip to our Visioneering event where we create new XPRIZEs while also helping spread the word? To find out more click here.
- Download the Street Team Kit - head over to our Get Involved page and download a free kit with avatars, banners, posters, presentations, FAQs and more. The page also includes videos for how to get started!
- Create and Share Digital Content - we are encouraging authors, developers, content-creators and more to create content that will spread the word about literacy, the Global Learning XPRIZE, and more!
- Share and Tag Your Fave Children’s Book - which children’s books have been the most memorable for you? Share your favorite (and preferably post a picture of the cover), complete with a link to
http://igg.me/at/learningxprize and tag 5 friends to share theirs too! When using social media, be sure to use the #learningprize hashtag.
- Show Your Pride - go and download the Street Team Kit and use the images and avatars in there to change your profile picture and banner on your favorite social media networks (e.g. Twitter, Facebook, Google+).
- Create Your ‘Learning Moment’ Video - record a video and share it on a video website (such as YouTube) about how learning has really impacted your life. Give the video the title “Global Learning XPRIZE: My Learning Moment“. Be sure to share your video on social media too with the #learningprize hashtag!
- Put Posters up in Your Community - go and download the Street Team Kit, print the posters out and put them up in your local coffee shops, universities, colleges, schools, and elsewhere!
- Organize a Local Event - create a local event to share the Global Learning XPRIZE. Fortunately we have a video on our Get Involved page that explains how you can do this, and we have a presentation deck with notes ready for you to use!
I know a lot of people who read my blog are Open Source folks, and I believe this prize offers an incredible opportunity for us to come together to have a very real profound impact on the world. Come and join the community, support the crowdfunding campaign, and help us to all be successful in bringing literacy to millions.
I just moved back to Europe, this time to foggy London town, to join the Open Source Group at Samsung. Where I will be contributing upstream to GStreamer and WebKit/Blink during the day and ironically mocking the local hipsters at night.
After 4 years with Collabora it is sad to leave behind the talented and enjoyable people I've grown fond of there, but it's time to move on to the next chapter in my life. The Open Source Group is a perfect fit: contribute upstream, participate in innovative projects and be active in the Open Source community. I am very excited for this new job opportunity and to explore new levels of involvement in Open Source.
I am going to miss Montreal. Its very particular joie de vivre. Will miss the poutine, not the winter.
For all of those in London, I will be joining the next GNOME Beers event or let me know if you want to meet up for a coffee/pint.
Syndicators, there is a video above that may not have made it into syndication. Visit the source link to view the video.
Juju on Digital Ocean, WOW! That's all I have to say. Digital Ocean is one of the fastest cloud hosts around with their SSD backed virtual machines. To top it off their billing is a no-nonsense straight forward model. $5/mo for their lowest end server, with 1TB of included traffic. That's enough to scratch just about any itch you might have with the cloud.
Speaking of scratching itches, if you haven't checked out Juju yet, now you have a prime, low-cost cloud provider with which to test the waters. Spinning up droplets with Juju is very straightforward, and it offers you a hands-on approach to service orchestration that's affordable enough for a weekend hacker to whet their appetite. Not to mention, Juju is currently the #1 project on their API Integration listing!
In about 11 minutes, we will go from zero to deployed infrastructure for a scale-out blog (much like the one you're reading right now).
Links in Video:
Juju Docean Github - http://github.com/kapilt/juju-digital...
Juju Documentation -http://juju.ubuntu.com/docs
Juju CharmStore - http://jujucharms.com
Kapil Thangavelu - http://blog.kapilt.com/
The Juju Community Members on DO - http://goo.gl/m6u781
- A recent Ubuntu installation (12.04+)
- A credit card (for DO)
You'll want the following exports in $HOME/.bashrc:
export DO_CLIENT_ID="XXXXXX"
export DO_API_KEY="XXXXXX"
Then source the file so it's in our current, active session:
source ~/.bashrc
Setup Environment and Bootstrap
Place the following lines in environments.yaml, under the environments: key (indented 4 spaces) - ENSURE you use 4 spaces per indentation block, NOT a TAB key.
digitalocean:
    type: manual
    bootstrap-host: null
    bootstrap-user: root
Switch to the DigitalOcean environment, and bootstrap:
juju switch digitalocean
juju docean bootstrap
Now you're free to add machines with constraints:
juju docean add-machine -n 3 --constraints="mem=2g region=nyc3" --series=precise
And deploy our infrastructure:
juju deploy ghost
juju deploy mysql
juju deploy haproxy
juju add-relation ghost mysql
juju add-relation ghost haproxy
juju expose haproxy
From here, pull the status off the HAProxy node, copy/paste the public-address into your browser, and revel in your brand new Ghost blog deployed on Digital Ocean's blazing fast SSD servers.
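One quick way to pull that address from the command line (this assumes the default juju status output format):
juju status haproxy | grep public-address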
Caveats to Juju DigitalOcean as of Sept. 2014:
These are important things to keep in mind as you move forward. This is a beta project. Evaluate the following passages for quick fixes to known issues, and warnings.
Not all charms have been tested on DO, and you may find missing libraries, most notably python-yaml on charms that require it. Most "install failed" charms are due to missing python-yaml.
A quick hotseat fix is:
juju ssh service/#
sudo apt-get install python-yaml
exit
juju resolved -r service/#
And then file a bug against the culprit charm that it's missing a dependency for Digital Ocean.
While this setup is amazingly cheap, and works really well, the Docean plugin provider should be considered beta software, as Hazmat is still actively working on it.
All in all, this is a great place to get started if you're willing to invest a bit of time working with a manual environment. Juju's capable orchestration will certainly make most if not all of your deployments painless, and bring you to scaling nirvana.