A few days ago, while working on an app for Ubuntu for Phones, I needed to see all the properties and values of an object while the app was running.
It's an easy thing to do, but a bit tedious to write the debug code every time, so I wrote a little function to do it, with formatting of the output.
Hope it will be useful to someone :-)
So, to call the function we only need to write debug(objectId, 0) whenever we need to debug an object.
The 0 is there because it's a recursive function: the second argument indicates the indentation level of the formatted output.
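A minimal sketch of such a function, in plain JavaScript (the language QML embeds for logic), might look like the following; the name debug, the four-space indentation, and the exact output format are illustrative assumptions, not the original code:

```javascript
// Recursively collect every property of an object, indented by depth.
// "level" is the indentation depth, which is why the first call is debug(obj, 0).
function debug(obj, level) {
    var indent = new Array(level + 1).join("    "); // 4 spaces per level
    var lines = [];
    for (var name in obj) {
        var value = obj[name];
        if (typeof value === "object" && value !== null) {
            // Nested object: print its name, then recurse one level deeper.
            lines.push(indent + name + ":");
            lines = lines.concat(debug(value, level + 1));
        } else {
            lines.push(indent + name + ": " + value);
        }
    }
    return lines;
}

// A plain object standing in for a QML item:
var item = { width: 100, height: 50, anchors: { margins: 4 } };
console.log(debug(item, 0).join("\n"));
```

In a real QML app the for..in loop should enumerate an item's properties the same way it does for a plain object, so debug(someItemId, 0) dumps the whole property tree to the console.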
This work is licensed under Creative Commons Attribution 3.0 Unported
Recently, Facebook published a paper in the Proceedings of the National Academy of Sciences (warning: PDF) about how they intentionally manipulated over half a million FB users’ news feeds to exclude either “happy” or “sad” posts and then (using the same algorithm which detected happiness or sadness) see whether the users’ own posts became happier or sadder as a result. Turns out: they did. Slate then wrote an article excoriating this as unethical, and today it seems to have blown up on Twitter a bit.
First, let us address and dismiss the issue of ethics. Was it unethical for Facebook to publish a paper on this? Yes, yes it was. The key issue here is informed consent: basically, you’re not allowed to do experiments on people without them knowing. It’s wrong of Facebook in my opinion to have, in liedra’s memorable wording, “gussied it up as ‘science’”, and also in my opinion PNAS ought to be asking a lot more questions about what they publish, surely? This study fails the most homeopathically weak example imaginable of “informed consent”, unless we’re counting the Facebook EULA here as giving informed consent to this sort of thing. @liedra did her PhD specifically on this subject, so I trust her views. But most of the upset I’ve seen about this has not been about academic standards and rules for research, or that a paper was published. It’s because Facebook did this thing at all.
I shall now pause to tell a little story. Supermarket loyalty cards are not there just to give you money off for being a regular customer. They’re there to let the shops build up a truly terrifying data warehouse and then mine it in extremely advanced ways to determine both what the average person wants to buy and what you specifically want to buy. Store design is a deeply complex, well-understood science, and that it goes on is almost unknown to the public. A store planner can tell you what every square foot of your store is for and how to maximise the amount of time customers spend in the shop, where to put the highest profit goods to improve their sales over others, and at bottom how to make more money by how you lay everything out. Loyalty schemes do the same thing with your purchases. At a very obvious level, sending you vouchers for stuff you buy a lot can work, but the data mining drills way, way deeper than that. In one memorable event, the US store Target identified that a woman was pregnant from seemingly innocent purchases such as unscented lotion, and then sent her coupons for baby products… before she’d told her father about the pregnancy. These people are mathematically well-equipped, they’re able to deduce a startling amount of things about you that you might wish they didn’t know, they’re doing it with data you’ve given them voluntarily, and they’re doing it to make their own service more compelling and so make more money at your expense. Is this any different from Facebook?
There is an undercurrent of fatalism in some of the responses to publication of this study. “Man, if you expect Facebook to do anything other than shove a live rattlesnake up your arse in pursuit of profit, you’re a naive child.” I don’t agree with that. We should expect more, demand more, hope for more from those who act as custodians of our data. Whether the law requires it or not. (European data protection laws are considerably more constraining than those in the US, in my opinion correctly, but acting only just as decently as the law requires is the minimum requirement, and we should ask for better.) But I honestly don’t see the difference between what Facebook did and what Target did. Yes, someone with depression could be adversely affected (perhaps very severely) by Facebook making their main channel of friendly communication be markedly less friendly. But consider if the pregnant woman who hadn’t told her father had had a miscarriage, and then received a book of baby milk vouchers in the mail.
This is not to minimise the impact of what Facebook did. What concerns me is that Facebook are not the only culprit here. They may not even be the most egregious culprit. The world of modern targeted advertising is considerably more sophisticated than most people suspect, and excoriating one firm for doing something that basically everybody else is doing too won’t stop it happening. It’ll just drive it further underground. Firms are going to mine my data. Indeed, I largely want them to; we’ve decided collectively that we want to fund things through advertising, so I might as well get adverts for things I actually want to buy. Facebook ran a study to discover whether they have the power to make people happier or sadder, and it turns out that they do. But they already had that power. In order for them to use it responsibly they should study it scientifically and learn about it. Then they can use it for good things.
Imagine if Facebook could have a button which says “make the billion people who use Facebook each a little bit happier”. It’s quite hard to imagine a more effective, more powerful, cheaper way to make the world a little bit better than for that button to exist. I want them to be able to build the button of happiness. And then I want them to press it.
This is part 6 in a series on organizational design and growth.
“The change from a business that the owner-entrepreneur can run with “helpers” to a business that requires management is a sweeping change. [...] One can compare the two kinds of business to two different kinds of organism: the insect, which is held together by a tough, hard skin, and the vertebrate animal, which has a skeleton. Land animals that are supported by a hard skin cannot grow beyond a few inches in size. To be larger, animals must have a skeleton. Yet the skeleton has not evolved out of the hard skin of the insect; for it is a different organ with different antecedents. Similarly, management becomes necessary when an organization reaches a certain size and complexity. But management, while it replaces the “hard-skin” structure of the owner-entrepreneur, is not its successor. It is, rather, its replacement.”
Peter Drucker

What it means
Management is the art of enabling people to cooperate in achieving shared goals. I’ve written elsewhere about what management is not. Management is a multifaceted discipline which is centered on people and the environment in which they work.

Why it’s important
In very small organizations, management can be comparatively easy, and happen somewhat automatically, especially between people who have worked together before. But as organizations grow, management becomes a first-class concern, requiring dedicated practice and a higher degree of skill. Without due attention to management, coordination becomes excessively difficult, working systems are outgrown and become strained, and much of the important work described in this series just won’t happen. Management is part of the infrastructure of the organization, and specifically the part which enables it to adapt and change as it grows.

Old Status Quo
People generally “just do stuff”, meaning there is little conscious understanding of the system in which people are working. If explicit managers exist, their jobs are poorly understood. Managers themselves may be confused or uncertain about what their purpose is, particularly if they are in such a role for the first time. The organization itself has probably developed more through accretion than deliberate design.

New Status Quo
People work within systems which help coordinate their work. These systems are consciously designed, explicitly communicated, and changed as often as necessary. Managers guide and coordinate the development and continuous improvement of these systems. The role of managers in the organization is broadly understood, and managers receive the training, support and coaching they need to be successful.

Behaviors that help
- It can be helpful to bring more experienced managers into the organization at this stage, especially if there isn’t much management experience in house.
- Show everyone in the organization (including managers themselves) what managers do and why it matters.
- Consider very carefully whether someone should become a manager.
- If someone does take on a management role, treat this as a completely new job, which requires handing off their existing responsibilities and learning a new discipline. Don’t treat it as just an extension of their work. Write a new job description and discuss it up front.
Management misbeliefs
- Granting “promotions” to management roles as rewards for performance
- Many people, when they experience what management work is like, don’t enjoy it and aren’t motivated by it. It can be hard to predict when this will be the case, and people can feel “trapped” in a management role that they don’t want. Make sure there are mechanisms to gracefully transition out of roles that don’t fit for the people holding them.
This is part 5 in a series on organizational design and growth.

What it means
Each of us has a job to do, probably more than one, and our teammates should know what they are.

Why it’s important
Roles are a kind of standing commitment we make to each other. They’re a way of dividing up work which is easy to understand and simple to apply. Utilizing this tool will make it easier for us to coordinate our day to day work, and manage the changes and growth we’re going through.

Old Status Quo
Roles are vague or nonexistent. Management roles in particular are probably not well understood. Many people juggle multiple roles, all of which are implicit. Moving to a new team means learning from scratch what other people do. People take responsibility for tasks and decisions largely on a case-by-case basis, or based on implicit knowledge of what someone does (or doesn’t do). In many cases, there is only one person in the company who knows how to perform a certain function. When someone leaves or goes on vacation, gaps are left behind.

New Status Quo
Each individual has a clear understanding of the scope of their job. We have a handful of well defined roles, which are used by multiple teams and have written definitions. People who are new or transfer between teams have a relatively easy time understanding what the people around them are doing. Many day to day responsibilities are defined by roles, and more than one person can fill each role. When someone leaves a team, another person can cover their critical roles.

Behaviors that help
Define project roles: when starting something new, make it explicit who will be working on it. Often this will be more than one person, often from different teams. For example, customer-facing product changes should have at least a product owner and an engineering owner. This makes it easy to tell if too many concurrent projects are dependent on a single person, which is a recipe for blockage.
Define team roles: Most recurring tasks should fall within a defined role. An owner of a technical service is an example of a role. An on-call engineer is an example of a time-limited role. There are many others which will depend on the team and its scope.
Define job roles: Have a conversation with your teammates and manager about what the scope of your job is, which responsibilities are shared with other members of the team and which are yours alone.
Getting hung up on titles as ego gratification. Roles are tools, not masters.
Fear that a role limits your options, locks you into doing one thing forever. Roles can be as flexible as we want them to be.
Using idlestat is easy. To capture 20 seconds of activity into a log file called example.log, run:
sudo idlestat --trace -f example.log -t 20
...and this will display the per-CPU C-state, P-state, and IRQ statistics for that run.
One can also take the saved log file and parse it later to recalculate the statistics:
idlestat --import -f example.log
One can get the source from here and I've packaged version 0.3 (plus a bunch of minor fixes that will land in 0.4) for Ubuntu 14.10 Utopic Unicorn.
If you've already given money, please help by spreading the word. Small contributions not only count up quickly, they demonstrate that the free community stands behind our work. Mario gives a nice wrap-up here: blogs.fsfe.org/mario/?p=234. Show us you care.
Personally, I'm scared and excited by the prospect of writing another book, again with people who are over-the-top smart, creative and knowledgeable. I will personally appreciate widespread support of the work we'll be doing.
If you already know about the Randa Meetings, and what our confederated sprints can produce, just proceed directly to http://www.kde.org/fundraisers/randameetings2014/index.php and kick in some shekels!
And then please pass along the word.
One which ends in tears, I’m afraid.
A week or so ago, I got a MacBook Air 13” (MacBookAir6,2), to take with me when I head out to local hack sessions, and when I travel out of state for short lengths of time. My current Thinkpad T520i is an amazing machine, and will remain my daily driver for a while to come.
After getting it, I unboxed the machine, and put in a Debian install USB key. I wiped OSX (without even booting into it) and put Debian on the drive.
To my dismay, it didn’t boot up after install. Using the recovery mode of the disk, I chrooted in and attempted an upgrade (to ensure I had an up-to-date GRUB). I couldn’t dist-upgrade; the terminal went white. After C-c’ing the terminal, I saw something about systemd’s sysvinit shim, so I manually purged sysvinit and installed systemd.
I hear this has been resolved now. Thanks :)
My machine still wasn’t booting, so I checked around. Turns out I needed to copy over the GRUB EFI blob to a different location to allow it to boot. After doing that, I could boot Debian whilst holding Alt.
After booting, things went great! Until I shut my lid. After that point, my screen was either 0% bright (absolutely dark) or 100% (facemeltingly bright).
I saw mba6x_bl, which claims to fix it, and has many happy users. If you’re in a similar situation, I highly suggest looking at it.
Unfortunately, this caused my machine to black out on boot and I couldn’t see the screen at all anymore. A rescue disk later, and I was back.
Annoyingly, the bootup noise’s volume is stored in NVRAM, which gets written out by OSX when it shuts down. After consulting the inimitable Matthew Garrett, I was told the NVRAM can be hacked about with by mounting efivarfs (type efivarfs) on /sys/firmware/efi/efivars. Win!
After hitting enter, I got a kernel panic and some notes about hardware assertions failing.
That’s when I returned my MacBook and got a Dell XPS 13.
This is a problem, folks.
If I can’t figure out how to make this work, how can we expect our users to?
Such a shame that the most popular hardware on the planet gets no support, locking its users into even less freedom.
For the very first time, not only in the Ubuntu GNOME community but within all the Ubuntu communities (Ubuntu + official flavours), the Ubuntu GNOME team has taken the initiative of establishing the very first Human Resources (Recruiting) Sub-Team.
For more details, please see this blueprint.
Now, Ubuntu GNOME’s Sub-Teams have a new arrival, and we hope the rest of the Ubuntu communities will follow in our footsteps.
Ubuntu GNOME Blueprints for Utopic Unicorn are ready to be approved. We had 7 Roadmaps for 7 Sub-Teams but now, we have 7 + 1 = 8 Roadmaps.
We’re so excited about this new addition and we hope this experience will be very exciting and helpful for our team.
Ubuntu GNOME Leaders Board
As we promised you on the earlier post we have published today, there will be great new changes within Ubuntu GNOME Community and here is yet another one.
We’re pleased to announce that starting from this cycle (Utopic Unicorn), each Sub-Team within Ubuntu GNOME will have:
- Team Leader
- Acting Team Leader
The idea was proposed at the first weekly meeting of the Ubuntu GNOME team for the Utopic Unicorn cycle, and the team voted ‘YES’ to it; hence it has been approved.
For more details, please see this blueprint on Launchpad.
On Sunday, 29-06-2014, Ubuntu GNOME Team will have its second weekly meeting for this cycle (Utopic Unicorn) and we shall discuss and decide who will be the Acting Team Leaders for Ubuntu GNOME Sub-Teams.
This is our second official team announcement during this cycle (Utopic Unicorn). Stay tuned for more
Ubuntu GNOME Leaders Board
Fortunately, this might be fixed easily, giving your laptop a longer life. (You can skip to the "Disable ATI" section if you only care about the workaround.)
Linus vs Nvidia

Many have seen or heard of the famous Linus Torvalds reaction to Nvidia, when commenting on a question concerning hardware manufacturers and (proprietary) drivers on GNU/Linux.
OK, so Linus gives the finger to Nvidia for lack of cooperation with open source driver developers and for the undocumented closed hardware architecture.
The video and pictures went viral, and many just assumed that ATI drivers are much better than Nvidia's on Linux.
Does it make ATI much better?
The truth is that many Linux users complain about the quality of ATI's drivers and about crashes on Linux, even though ATI's hardware is somewhat better documented and the company collaborates somewhat more with FLOSS driver developers.
On the other hand, Nvidia has been offering great-quality proprietary drivers. And technically speaking, the nouveau team has been offering great open source drivers for Nvidia through reverse engineering and minor help from Nvidia employees.
In addition to all of this, ATI's open source drivers do not provide an easy and practical solution for on-demand (hybrid) switching between the integrated GPU and ATI's GPU. Only the proprietary drivers offer this feature, and they cause a lot of problems for many Linux users.
Nvidia owners, by contrast, have had an open source workaround project called Bumblebee, offering on-demand use of Nvidia drivers for specific applications (switching to either nouveau or the proprietary drivers). This workaround has been available to Nvidia users since 2011, while PRIME, the proper FLOSS solution, was designed by Dave Airlie (Red Hat) and only delivered in the last few months.
My end user experience with Nvidia has been great (thanks to the community's efforts) since I first started using Linux as a main OS back in 2005. And I still hope Nvidia and ATI cut the bullshit and become a little more cooperative with Linux driver developers.
So that was my overall end user experience with Nvidia for almost 10 years now.
New ATI experience
Then, I had the chance to work with ATI on the great new laptop I got at Angry Cactus (work). But to be honest, this is only my second night with ATI at home (and I already have ATI nightmares), so this is a first impression / first quick tricks article only.
Personally, my need for high-end GPU performance is rare. I might need great graphics performance once every few months for a LAN/online gaming round, for 3D design learning, or for game/3D-app development. Since I do not need powerful graphics that often, and I don't mind disabling the dedicated GPU completely until I need it again, I have decided to go with this workaround. This is because I actually had a real nightmare getting ATI's proprietary drivers to work and provide good hybrid GPU support on Ubuntu 14.04.
So, to get to the point: I googled for workarounds, but the common solution was only compatible with versions of Ubuntu before 14.04. This might be due to a system bug or a kernel update.
The actual compatible workaround was to add the kernel parameter radeon.runpm=0 during boot.
For this to work properly, open the file /etc/default/grub with your favourite text editor, or use the following command:

sudo gedit /etc/default/grub

Then find the line containing GRUB_CMDLINE_LINUX_DEFAULT; it looks something like the following:

GRUB_CMDLINE_LINUX_DEFAULT="quiet splash"

Add the kernel parameter radeon.runpm=0 so the line looks like the following:

GRUB_CMDLINE_LINUX_DEFAULT="quiet splash radeon.runpm=0"

Finally, save the file and execute the following command:

sudo update-grub

Congratulations: you are now able to use the integrated GPU without overheating the laptop, in addition to saving battery life.
Please leave your comments about your experience with GPU drivers.
Note that this might not be the best workaround, but I am satisfied with the result I have got for now. I will see if there is a better workaround or solution in the near future and maybe do some benchmarking and comparison.
Wings: if you don't fry 'em, you bake 'em, and baking is the way I do them most often. As such, the coating for them is something I've played around with to find the most flavour-boosting techniques.
Two things I do to "amp up" my wings are to add crunch and to increase browning. Crunch is straightforward; I add either semolina or cornmeal to the wing dredge mixture.
The browning of meat is (basically) a reaction between amino acids (usually from proteins) and sugars (this is known as the Maillard reaction), and more browning equals more flavour.
So to boost the browning potential of the wings I add milk powder to the coating; the milk powder adds additional protein to the outside of the wing, thus increasing the amount it'll brown.

Chicken Wing Coating Recipe
- 3 lbs (1.5 kg) chicken wings
- 1/2 cup all-purpose flour
- 1/4 cup cornmeal or semolina
- 1/8 cup milk powder
- 1 tablespoon salt
- Preheat an oven to 375 degrees Fahrenheit.
- Butcher the chicken wings, removing the tips (keep them*) and separating the wing at the joint.
- In a large shallow dish, combine the flour, cornmeal, milk powder & salt.
- Line a baking sheet with aluminium foil & lightly brush with olive oil.
- Dredge the wings in the dry mixture.
- Place on the baking sheet and bake/roast in the oven for 1-1.5 hours.
- Serve, either plain or with desired sauce(s).
*People often discard the wing tips, but they are one of my favourite parts; throw the wing tips in a 450 degree oven with some olive oil and seasoned with a little salt and pepper and roast for 30-40 minutes until they're quite crunchy. The result is a sort-of chicken crackling.
As some of you will know, recently I moved from Canonical to XPRIZE to work as Sr. Dir. Community. My charter here at XPRIZE is to infuse the organization and the incentive prizes it runs with community engagement.
For those unfamiliar with XPRIZE, it was created by Peter H. Diamandis to solve the grand challenges of our time by creating incentive prize competitions. The first XPRIZE was the $10 million Ansari XPRIZE, to create a spacecraft that could go to space and back twice in two weeks while carrying three crew. It was won by Scaled Composites with their SpaceShipOne craft, and the technology ultimately led to the birth of the commercial space-flight industry. Other prizes have focused on ocean health, more efficient vehicles, portable health diagnosis, and more.
The incentivized prize model is powerful. It is accessible to anyone with the drive to compete, it results in hundreds of teams engaging in extensive R&D, only the winner gets paid, and the competitive nature generally results in market competition which then drives even more affordable and accessible technology to be built.
The XPRIZE model is part of Peter Diamandis’s vision of exponential technology. In a nutshell, Peter has identified that technology is doubling every year, across a diverse range of areas (not just computing), and that technology can ultimately solve our grand challenges such as scarcity, clean water, illiteracy, space exploration, clean energy, and more. If you are interested in finding out more, read Abundance; it really is an excellent and inspirational read.
When I was first introduced to XPRIZE the piece that inspired me about the model is that collaboratively we can solve grand challenges that we couldn’t solve alone. Regular readers of my work will know that this is precisely the same attribute in communities that I find so powerful.
As such, connecting the dots between incentivized prizes that solve grand challenges with effective and empowering community management has the potential for a profound impact on the world.
The XPRIZE Lobby.
My introduction to XPRIZE helped me to realize that the exponential growth that Peter sees in technology is a key ingredient in how communities work. While not as crisply predictable (a doubling of community does not necessarily mean a doubling of output), we have seen time and time again that when communities build momentum and grow, their overall output (irrespective of their specific size) can often grow exponentially.
An example of this is Wikipedia. From the inception of the site, the tremendous growth of the community resulted in huge growth in not just the site, but the value the site brought to users (as value is often defined by completeness). Another example is Linux. When the Linux kernel was only authored by Linus Torvalds, it had limited growth. The huge community that formed there has resulted in a technology that has literally changed how the technology infrastructure in the world runs. We also have political examples such as the Arab Spring in which social media helped empower large swathes of citizens to challenge their leaders. Again, as the community grew, so did the potency of their goals.
XPRIZE plays a valuable role because exponential growth in technology does not necessarily mean that the technology will be built. Traditionally, only governments were solving the grand challenges of our time because companies found it difficult to understand or define a market. XPRIZE competitions put a solid stake in the ground that highlights the problem, legitimizes the development of the technology with a clear goal and prize purse, and empowers fair participation.
The raw ingredients (smart people with drive and passion) for solving these challenges are already out there; XPRIZE mobilizes them. In a similar fashion, the raw ingredients for creating globally impactful communities are there too; we just need to activate them.
So what will I be doing at XPRIZE to build community engagement?
Well, I have only been here a few weeks, so my priorities right now are some near-term goals and getting to know the team and culture, and I don’t have anything concrete I can share yet. I assure you, though, I will be talking more about my work in the coming months.
At the last Ubuntu Online Summit, we had a session in which we discussed (among other things) the idea to make it easier for LoCo teams to share news, stories, pictures, events and everything else. A lot of great work is happening around the world already, but a lot of us don’t get to hear about it.
I took an action item to schedule a meeting to discuss the idea some more. The rough plan being that we add functionality to the LoCo Team Portal which allows teams to share their stories, which then end up on Planet Ubuntu.
We held the meeting last week; you can find the IRC logs here.

The mini-spec

Find the spec here, and please comment either on the loco-contacts mailing list or below in the comments. If you are a designer, a web developer, know Django, or just generally want to get involved, please let us know!
We will discuss the spec some more, turn it into bug reports and schedule a series of hack days to work on this together.
* Command & Conquer
* How-To : Python, LibreOffice, and GRUB2.
* Graphics : Blender and Inkscape.
* Review: Toshiba SSD
* Security and Q&A
* CryptoCurrency: Compiling an Alt-Coin Wallet
* NEW! - Arduino
plus: Q&A, Linux Labs, Ubuntu Games, and another competition to win Humble Bundles!
Get it while it’s hot!
As the title suggests this little gem is an Openstack installer tailored specifically to get you from zero to hero in just a short amount of time.
There are a few options available today for deploying an Openstack cloud. For instance, juju-deployer with an Openstack-specific bundle, or that other thing called devstack. While these technologies work, we wanted to take our existing technologies and go a step further. A lot of people may not have 10 systems lying around to utilize juju-deployer, or you may want to demonstrate to the powers that be that implementing Ubuntu, Juju, MAAS, and Openstack within your company is a great idea. Of course you could bring one of those shiny orange boxes or a handful of Intel NUCs into the conference room or ..
.. install the Ubuntu Openstack Installer and get a cloud to play with on a single machine. Getting started is ez-pz.

Requirements
- Decent machine, tested on a machine with 8 cores, 12G ram, and 100G HDD.
- Ubuntu Trusty 14.04
- Juju 1.18.3+ (includes support for lxc fast cloning for multiple providers)
- About 30 minutes of your time.
Add the PPA and install the software.

$ sudo apt-add-repository ppa:cloud-installer/ppa
$ sudo apt-get update
$ sudo apt-get install cloud-installer

Second

Run it.

$ sudo cloud-install

Third

You’re presented with 3 options: a Single Install, Multi Install, and Landscape (coming soon). Select Single Install.

Post

The installer will go through its little routine of installing necessary packages and setting up configuration. Once this is complete you’ll be dropped into a status screen which will then begin the magical journey of getting you set up with a fully functioning Openstack cloud.

Is that all?
Yep, to elaborate a bit I’ll explain what’s happening:
The entire stack is running off a single machine. Juju is heavily utilized for its ability to deploy services, set up relations, and configure those services, similar to what juju-deployer does. What juju-deployer doesn’t do is automatically sync boot images via simplestreams, or automatically configure neutron so that all instances deployed within nova-compute are available on the same network as the host machine, all while using a single network card. We even throw in juju-gui for good measure.
The experience we are trying to achieve is that any one person can sit down at a machine and have a complete end-to-end working Openstack environment, all while keeping your gray hair at a minimum and your budget intact. Here’s a screenshot of our nifty console UI:

Verify

Verifying your cloud is easy: just go through the process of deploying an instance via Horizon (the Openstack Dashboard), associating a floating IP (already created for you; you just need to select one), and sshing into the newly created instance to deploy your software stack. Depending on bandwidth, some images may not be immediately available and may require you to wait a little longer.

What about those other install options?
Well, as I stated before, we have a lot of cool technologies out there, like MAAS. That is what the Multi Install is for. The cool thing about this is you install it the same way you would a Single Install. Fast-forward past the package installation to the status screen, and you’ll be presented with a dialog asking you to PXE boot a machine to act as the controller. Our installer tries to do everything for you, but some things are left up to you. In this case you’d commission a machine in the MAAS environment and get it into a ready state. From there the installer will pick up that machine and continue on its merry way as it did during the single installation.
One thing to note is you’ll want to have a few machines, whether bare metal or virtual, registered in MAAS to make use of all the installer has to offer. I was able to get a full cloud deployed on 3 machines: 1 bare metal (the host machine running MAAS) and 2 virtual machines registered in MAAS. Keep in mind there were no additional network devices added, as the installer can configure neutron on a single NIC.

Where to go from here?

If you need swift storage for your glance images, hit F6 in the status screen and select Swift storage. This will deploy the necessary bits for swift-storage to be integrated into your Openstack cloud. Swift storage requires at least 3 nodes (in the single install this would be 3 VMs), so make sure you’ve got the hardware for this. Otherwise, for developing/toying around with Openstack, leaving the defaults works just as well.
Want to deploy additional instances on your compute nodes? Add additional machines to your MAAS environment or if running on a single machine and you have the hardware add a few more nova-compute nodes (via F6 in the status screen) to allow for more instances to be deployed within Openstack.
This is just an intro to the installer; more documentation can be found @ ReadTheDocs. The project is hosted @ GitHub and we definitely encourage you to star it, fork it, file issues, and contribute back to make this a truly enjoyable experience.
Also keep an eye out for when the Landscape implementation is completed for an even more feature-rich end to end solution.
In this week’s show:-
- We take a look at what’s been happening in the news:
- We have some gaming news from Tony:
- We also take a look at what’s been happening in the community:
- And there’s an event:
We’ll be back next week, when we’ll be discussing remote screencasting technologies, and we’ll go through your feedback.
Please send your comments and suggestions to: firstname.lastname@example.org
Join us on IRC in #uupc on Freenode
Leave a voicemail via phone: +44 (0) 203 298 1600, sip: email@example.com and skype: ubuntuukpodcast
Follow us on Twitter
Find our Facebook Fan Page
Follow us on Google+
I thought of an interesting example: being in the middle of a Linux kernel build when a security update needs to be installed and the machine rebooted. While most of us would probably just reboot and rebuild, why not checkpoint the build and save the progress, then restore it after the system update? I admit it's not the most useful example, but it is pretty cool nonetheless.
sudo apt-get install criu
# start build; save the PID for later
cd linux; make clean; make defconfig
make & echo $! > ~/make.pid
# enter a clean directory that isn't tmp and dump state
mkdir ~/cr-build && cd $_
sudo criu dump --shell-job -t $(cat ~/make.pid)

# install and reboot machine
# restore your build
sudo criu restore --shell-job -t $(cat ~/make.pid)
And you're building again!