Earlier this year, I helped plan and run the Community Data Science Workshops: a series of three (and a half) day-long workshops designed to help people learn basic programming and data science tools in order to ask and answer questions about online communities like Wikipedia and Twitter. You can read our initial announcement for more about the vision.
The workshops were organized by myself, Jonathan Morgan from the Wikimedia Foundation, long-time Software Carpentry teacher Tommy Guy, and a group of 15 volunteer “mentors” who taught project-based afternoon sessions and worked one-on-one with more than 50 participants. Interest was overwhelming, and we were ultimately constrained by the number of mentors who volunteered; unfortunately, this meant we had to turn away most of the people who applied. Although it was not emphasized in recruiting or used as a selection criterion, a majority of the participants were women.
- Friday April 4th: Setup and Programming Practice
- Saturday April 5th: Introduction to Python
- Saturday May 3rd: Building data sets using web APIs
- Saturday May 31st: Data analysis and visualization
The workshops were designed for people with no previous programming experience. Although most of our participants were from the University of Washington, we had non-UW participants from as far away as Vancouver, BC.
Feedback we collected suggests that the sessions were a huge success, that participants learned an enormous amount, and that the workshops filled a real need in the Seattle community. Between workshops, participants organized meet-ups to practice their programming skills.
Most excitingly, just as we based our curriculum for the first session on the Boston Python Workshop’s, others have been building off our curriculum. Elana Hashman, who was a mentor at the CDSW, is coordinating a set of Python Workshops for Beginners with a group at the University of Waterloo and with sponsorship from the Python Software Foundation using curriculum based on ours. I also know of two university classes that are tentatively being planned around the curriculum.
Because a growing number of groups have been contacting us about running their own events based on the CDSW — and because we are currently making plans to run another round of workshops in Seattle late this fall — I coordinated with a number of other mentors to go over participant feedback and to put together a long write-up of our reflections in the form of a post-mortem. Although our emphasis is on things we might do differently, we provide a broad range of information that might be useful to people running a CDSW (e.g., our budget). Please let me know if you are planning to run an event so we can coordinate going forward.
This reminded me of how much progress I used to make when I did genealogy research by looking over the documents I had gotten long ago in light of facts I more recently uncovered. All of a sudden, I made new discoveries in those old docs. So that has become part of my regular research routine.
And perhaps all of these thoughts were triggered by the Bash bug (Shellshock), which I keep hearing about on the news in very vague terms, and in quite specific discussions on IRC and mailing lists. Old, stable code can yield new, interesting bugs! Maybe even dangerous vulnerabilities. So it's always worth raking over old ground, to see what comes to the surface.
I know the program ended almost a month ago, but I haven't had the opportunity to share my thoughts on GSoC 2014. This summer, I coded for the BeagleBoard.org organization. It was a great experience. It was my third time applying to GSoC, and I was finally accepted.
The main idea of the project is a platform for viewing and creating tutorials. You can see it here. Right now I'm working on migrating it to Jekyll, which is the next step the BeagleBoard community is taking.
After the program finished, I convinced Jason Kridner, cofounder of BeagleBoard.org, to give a small hangout about what BeagleBoard.org is, the BeagleBone Black, and his view of the organization.
Why did I ask Jason to give a talk? To motivate more Honduran students to get involved in the open source movement. I was the first Honduran student to take part in Google Summer of Code.
Hope this motivates more Honduran students.
In my previous post, I explained my personal need for a Utopic environment for the purpose of running the test suites, since they require the latest ubuntu-ui-toolkit, which is not available for Trusty 14.04, which my main laptop runs. For quite some time I used Virtualbox VMs to get around this issue. But anyone who has used Virtualbox VMs will agree when I say they are too resource-intensive and slow, making them a bit frustrating to use.
I am thankful to Sergio Schvezov for introducing me to this cool concept of Linux Containers (LXC). It took me some time to get acquainted with the concept and use it on a daily basis. I can now share my experiences and also show how I set it up to provide an Ubuntu Touch app development environment.

Brief Introduction to LXC
Linux Containers (LXC) is a project maintained by Stéphane Graber and Serge Hallyn. One could describe Linux containers as,
LXC is a lightweight virtualization technology. It is more akin to an enhanced chroot than to full virtualization like VMware or QEMU. It does not emulate hardware, and containers share the same operating system as the host.
I think that to fully appreciate LXC, it would be best to compare it with Virtualbox VMs as shown below.
An LXC container uses the host machine's kernel and sits somewhere between a schroot and a full-fledged virtual machine. Each, of course, has its advantages and disadvantages. For instance, since LXC containers use the host machine's kernel, they are limited to Linux and cannot be used to create Windows or other OS containers. However, they have very little overhead since they only run the most essential services needed for your use case.
They perfectly fit my use case of providing a Utopic environment inside which I can run my test suites. In fact, in this post I will show you some tricks I learnt that provide seamless integration between LXC and your host machine, to the point where you would be unable to tell the difference between a native app and a container app.

Getting Started with LXC
Stéphane Graber's original post provides an excellent tutorial on getting started with LXC. If you are stuck at any step, I highly recommend talking to him on IRC in #ubuntu-devel; his nick is stgraber. The instructions below are a quick way of getting started with LXC containers for Ubuntu Touch app development, and as such I have avoided detailed explanations of why we run each command.
Without further ado, let's get started!

Installing LXC
LXC is available to install directly from the Ubuntu archives. You can install it by:

sudo apt-get install lxc systemd-services uidmap

Prerequisite Configuration (One-Time)
Linux containers are run as root by default. However, this can be a little inconvenient for our use case, since our containers will essentially be used to launch common applications like Qt Creator, a terminal, etc. So we will first perform some prerequisite steps for creating unprivileged containers (run by a normal user).
Note: The steps below are required only if you want to create unprivileged containers (required for our use case).

sudo usermod --add-subuids 100000-165536 $USER
sudo usermod --add-subgids 100000-165536 $USER
sudo chmod +x $HOME
Create ~/.config/lxc/default.conf with the following contents:

lxc.network.type = veth
lxc.network.link = lxcbr0
lxc.network.flags = up
lxc.network.hwaddr = 00:16:3e:xx:xx:xx
lxc.id_map = u 0 100000 65536
lxc.id_map = g 0 100000 65536
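To see why these id_map lines make unprivileged containers safe, it helps to work through the arithmetic: a line like lxc.id_map = u 0 100000 65536 means container uid N is backed by host uid 100000 + N. Here is a small sketch of that mapping; the map_uid helper is purely illustrative and not part of LXC:

```shell
# Illustrative helper (not part of LXC): computes the host uid that backs
# a given container uid under a mapping "u <c_start> <h_start> <range>".
map_uid() {
    c_start=$1; h_start=$2; range=$3; uid=$4
    if [ "$uid" -ge "$c_start" ] && [ "$uid" -lt $((c_start + range)) ]; then
        echo $((h_start + uid - c_start))
    else
        echo unmapped
    fi
}

map_uid 0 100000 65536 0       # container root -> host uid 100000
map_uid 0 100000 65536 1000    # container user "ubuntu" -> host uid 101000
```

So a file owned by root inside the container is owned by the unprivileged host uid 100000 outside it, which is why a compromised container gains an attacker no real privileges on the host.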
And then run:

echo "$USER veth lxcbr0 10" | sudo tee -a /etc/lxc/lxc-usernet

Unprivileged Containers
Now, with the prerequisite steps complete, we can proceed to create the Linux container itself. We are going to create a generic Utopic container by:

lxc-create --template download --name qmldevel -- --dist ubuntu --release utopic --arch amd64
This should create an LXC container with an Ubuntu Utopic environment of architecture amd64. On the other hand, if you want to see a list of the various distros, releases and architectures supported and choose one interactively, you should run:

lxc-create -t download -n qmldevel
Once the container has finished downloading, you should be provided with a default user "ubuntu" with password "ubuntu". You will be able to find the container files at ~/.local/share/lxc/qmldevel. Then adjust the ownership of the container user's home directory to match your host uid:

sudo chown -R 1000:1000 ~/.local/share/lxc/qmldevel/rootfs/home/ubuntu
Add the following to your container config file found at ~/.local/share/lxc/qmldevel/config:

# Container specific configuration
lxc.id_map = u 0 100000 1000
lxc.id_map = g 0 100000 1000
lxc.id_map = u 1000 1000 1
lxc.id_map = g 1000 1000 1
lxc.id_map = u 1001 101001 64535
lxc.id_map = g 1001 101001 64535

# Custom Mounts
lxc.mount.entry = /dev/dri dev/dri none bind,optional,create=dir
lxc.mount.entry = /dev/snd dev/snd none bind,optional,create=dir
lxc.mount.entry = /tmp/.X11-unix tmp/.X11-unix none bind,optional,create=dir
lxc.mount.entry = /dev/video0 dev/video0 none bind,optional,create=file
lxc.mount.entry = /home/krnekhelesh/Documents/Ubuntu-Projects home/ubuntu none bind,create=dir
Notice the line lxc.mount.entry = /home/krnekhelesh/Documents/Ubuntu-Projects home/ubuntu none bind,create=dir, which basically maps (mounts) a folder on your host machine to a location in the container. So if you go to /home/ubuntu inside the container, you will see the contents of /home/krnekhelesh/.../Ubuntu-Projects. Isn't that nifty? We are seamlessly sharing data between the host and the container.

Shelling into our container
So yay, we created this awesome container. How about accessing it and installing some of the applications we want to have? That's quite easy:

lxc-start -n qmldevel -d
lxc-attach -n qmldevel
At this point, your command line prompt should show that you are in the container. Here you can run any command you wish. We are going to install the ubuntu-sdk and also terminator, which is now my favourite terminal:

sudo apt-get install ubuntu-sdk terminator
Type exit to exit out of the container.
At this point our container configuration is complete. This was the hardest and longest part. If you are past this, then we have one final step left, which is to create shortcuts for the applications we would like to launch from within our container. Onward to the next section!

Application shortcuts
So basically, here we create a few scripts and .desktop files to launch the applications we just installed in the previous section. First, let's create those scripts. I will explain in a moment why we need them.
Create a script called start-qtcreator with the following contents:

#!/bin/sh
CONTAINER=qmldevel
CMD_LINE="qtcreator $*"
STARTED=false

if ! lxc-wait -n $CONTAINER -s RUNNING -t 0; then
    lxc-start -n $CONTAINER -d
    lxc-wait -n $CONTAINER -s RUNNING
    STARTED=true
fi

lxc-attach --clear-env -n $CONTAINER -- sudo -u ubuntu -i \
    env DISPLAY=$DISPLAY $CMD_LINE

if [ "$STARTED" = "true" ]; then
    lxc-stop -n $CONTAINER -t 10
fi
Make the script executable by chmod +x start-qtcreator. What the script essentially does is that it starts the container (if not running already) and then launches qtcreator while ensuring the proper environment variables are set.
We are going to create a similar script for launching terminator as well, called start-terminator, and make it executable too:

#!/bin/sh
CONTAINER=qmldevel
CMD_LINE="terminator $*"
STARTED=false

if ! lxc-wait -n $CONTAINER -s RUNNING -t 0; then
    lxc-start -n $CONTAINER -d
    lxc-wait -n $CONTAINER -s RUNNING
    STARTED=true
fi

lxc-attach --clear-env -n $CONTAINER -- sudo -u ubuntu -i \
    env DISPLAY=$DISPLAY $CMD_LINE

if [ "$STARTED" = "true" ]; then
    lxc-stop -n $CONTAINER -t 10
fi
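Since the two launcher scripts differ only in the command they run, you could factor them into one generic helper. This is my own refactoring suggestion rather than part of the original setup; the snippet below writes a hypothetical start-in-container script that takes the program to launch as its arguments:

```shell
# Write a generic launcher that wraps any command in the qmldevel container.
# The lxc-* invocations mirror the two scripts above; only $CMD_LINE differs.
cat > start-in-container <<'EOF'
#!/bin/sh
CONTAINER=qmldevel
CMD_LINE="$*"
STARTED=false

# Start the container only if it is not already running.
if ! lxc-wait -n $CONTAINER -s RUNNING -t 0; then
    lxc-start -n $CONTAINER -d
    lxc-wait -n $CONTAINER -s RUNNING
    STARTED=true
fi

# Run the requested command as the container's "ubuntu" user,
# passing through the host's X display.
lxc-attach --clear-env -n $CONTAINER -- sudo -u ubuntu -i \
    env DISPLAY=$DISPLAY $CMD_LINE

# Stop the container again, but only if this script started it.
if [ "$STARTED" = "true" ]; then
    lxc-stop -n $CONTAINER -t 10
fi
EOF
chmod +x start-in-container
sh -n start-in-container && echo "syntax OK"
```

You would then call ./start-in-container qtcreator or ./start-in-container terminator, and the .desktop files would only differ in their Exec arguments.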
Now for the very last bit, the .desktop files. For Qt Creator, I created the following .desktop file:

[Desktop Entry]
Exec=/home/krnekhelesh/.local/share/lxc/qmldevel/start-qtcreator %F
Icon=ubuntu-qtcreator
Type=Application
Terminal=false
Name=Ubuntu SDK (LXC)
GenericName=Integrated Development Environment
MimeType=text/x-c++src;text/x-c++hdr;text/x-xsrc;application/x-designer;application/vnd.nokia.qt.qmakeprofile;application/vnd.nokia.xml.qt.resource;application/x-qmlproject;
Categories=Qt;Development;IDE;
InitialPreference=9
Keywords=Ubuntu SDK;SDK;Ubuntu Touch;Qt Creator;Qt
Make sure to replace the Exec path with your own. Save the .desktop file as ubuntusdklxc.desktop in ~/.local/share/applications. Do the same for the terminal desktop file:

[Desktop Entry]
Name=Terminator (LXC)
Comment=Multiple terminals in one window
Exec=/home/krnekhelesh/.local/share/lxc/qmldevel/start-terminator
Icon=terminator
Type=Application
Categories=GNOME;GTK;Utility;TerminalEmulator;System;
StartupNotify=true
X-Ubuntu-Gettext-Domain=terminator
X-Ayatana-Desktop-Shortcuts=NewWindow;
Keywords=terminal;shell;prompt;command;commandline;

[NewWindow Shortcut Group]
Name=Open a New Window
Exec=terminator
TargetEnvironment=Unity
That's it! When you go to the Unity Dash and search for "Terminator", you should see the entry "Terminator (LXC)" appear. When you launch it, it will seamlessly start the Linux container and then launch terminator from within it. The best part is that you won't even notice the difference between a native app and the container app.
Check out the screenshot below as proof!
What I usually do is keep my clock app files in /home/krnekhelesh/Documents/Ubuntu-Projects. I do my coding and testing on the host machine. Then, to run the test suite, I quickly open Terminator (LXC) and run the tests, since it already points at the correct folder.
I hope you found this useful.
I often have to deal with VPNs, either to connect to the company network, my own network when I’m abroad or to various other places where I’ve got servers I manage.
All of those VPNs use OpenVPN, all with a similar configuration and unfortunately quite a lot of them with overlapping networks. That means that when I connect to them, parts of my own network are no longer reachable or it means that I can’t connect to more than one of them at once.
Those I suspect are all pretty common issues with VPN users, especially those working with or for companies who over the years ended up using most of the rfc1918 subnets.
So I thought: I'm working with containers every day, and nowadays we have those cool namespaces in the kernel which let you run crazy things as a regular user, including getting your own, empty network stack. So why not use that?
Well, that's what I ended up doing, and so far it's all done in less than 100 lines of good old POSIX shell script.
That gives me, fully unprivileged non-overlapping VPNs! OpenVPN and everything else run as my own user and nobody other than the user spawning the container can possibly get access to the resources behind the VPN.
The code is available at: git clone git://github.com/stgraber/vpn-container
Then it’s as simple as: ./start-vpn VPN-NAME CONFIG
What happens next is that the script calls socat to proxy the VPN TCP socket to a UNIX socket; then a user namespace, network namespace, mount namespace and UTS namespace are all created for the container. Your user is root in that namespace and so can start openvpn and create network interfaces and routes. With careful use of some bind-mounts, resolvconf and byobu are also made to work, so DNS resolution is functional and byobu can provide as many shells as you want in there.
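The namespace trick at the heart of this is available to any unprivileged user via unshare(1). As a minimal illustration (not the actual script), the following creates a new user namespace with your uid mapped to root, plus a fresh, empty network namespace, all without sudo:

```shell
# Inside the new namespaces we appear as root (uid 0) even though no
# privileges were needed on the host; this is what lets the script run
# openvpn and configure interfaces as a regular user.
unshare --user --map-root-user --net sh -c 'id -u'
```

The single line printed is 0: within the namespace you hold root's capabilities over the namespace's own resources, while remaining your ordinary user on the host.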
In the end it looks like this:

stgraber@dakara:~/vpn$ ./start-vpn stgraber.net ../stgraber-vpn/stgraber.conf
WARN: could not reopen tty: No such file or directory
lxc: call to cgmanager_move_pid_abs_sync(name=systemd) failed: invalid request
Fri Sep 26 17:48:07 2014 OpenVPN 2.3.2 x86_64-pc-linux-gnu [SSL (OpenSSL)] [LZO] [EPOLL] [PKCS11] [eurephia] [MH] [IPv6] built on Feb 4 2014
Fri Sep 26 17:48:07 2014 WARNING: No server certificate verification method has been enabled. See http://openvpn.net/howto.html#mitm for more info.
Fri Sep 26 17:48:07 2014 NOTE: the current --script-security setting may allow this configuration to call user-defined scripts
Fri Sep 26 17:48:07 2014 Attempting to establish TCP connection with [AF_INET]127.0.0.1:1194 [nonblock]
Fri Sep 26 17:48:07 2014 TCP connection established with [AF_INET]127.0.0.1:1194
Fri Sep 26 17:48:07 2014 TCPv4_CLIENT link local: [undef]
Fri Sep 26 17:48:07 2014 TCPv4_CLIENT link remote: [AF_INET]127.0.0.1:1194
Fri Sep 26 17:48:09 2014 [vorash.stgraber.org] Peer Connection Initiated with [AF_INET]127.0.0.1:1194
Fri Sep 26 17:48:12 2014 TUN/TAP device tun0 opened
Fri Sep 26 17:48:12 2014 Note: Cannot set tx queue length on tun0: Operation not permitted (errno=1)
Fri Sep 26 17:48:12 2014 do_ifconfig, tt->ipv6=1, tt->did_ifconfig_ipv6_setup=1
Fri Sep 26 17:48:12 2014 /sbin/ip link set dev tun0 up mtu 1500
Fri Sep 26 17:48:12 2014 /sbin/ip addr add dev tun0 172.16.35.50/24 broadcast 172.16.35.255
Fri Sep 26 17:48:12 2014 /sbin/ip -6 addr add 2001:470:b368:1035::50/64 dev tun0
Fri Sep 26 17:48:12 2014 /etc/openvpn/update-resolv-conf tun0 1500 1544 172.16.35.50 255.255.255.0 init
dhcp-option DNS 172.16.20.30
dhcp-option DNS 172.16.20.31
dhcp-option DNS 2001:470:b368:1020:216:3eff:fe24:5827
dhcp-option DNS nameserver
dhcp-option DOMAIN stgraber.net
Fri Sep 26 17:48:12 2014 add_route_ipv6(2607:f2c0:f00f:2700::/56 -> 2001:470:b368:1035::1 metric -1) dev tun0
Fri Sep 26 17:48:12 2014 add_route_ipv6(2001:470:714b::/48 -> 2001:470:b368:1035::1 metric -1) dev tun0
Fri Sep 26 17:48:12 2014 add_route_ipv6(2001:470:b368::/48 -> 2001:470:b368:1035::1 metric -1) dev tun0
Fri Sep 26 17:48:12 2014 add_route_ipv6(2001:470:b511::/48 -> 2001:470:b368:1035::1 metric -1) dev tun0
Fri Sep 26 17:48:12 2014 add_route_ipv6(2001:470:b512::/48 -> 2001:470:b368:1035::1 metric -1) dev tun0
Fri Sep 26 17:48:12 2014 Initialization Sequence Completed

To attach to this VPN, use: byobu -S /home/stgraber/vpn/stgraber.net.byobu
To kill this VPN, do: byobu -S /home/stgraber/vpn/stgraber.net.byobu kill-server
or from inside byobu: byobu kill-server
After that, just copy/paste the byobu command and you’ll get a shell inside the container. Don’t be alarmed by the fact that you’re root in there. root is mapped to your user’s uid and gid outside the container so it’s actually just your usual user but with a different name and with privileges against the resources owned by the container.
You can now use the VPN as you want without any possible overlap or conflict with any route or VPN you may be running on that system and with absolutely no possibility that a user sharing your machine may access your running VPN.
This has so far been tested with 5 different VPNs, on a regular Ubuntu 14.04 LTS system with all VPNs being TCP based. UDP based VPNs would probably just need a couple of tweaks to the socat unix-socket proxy.
The Ubuntu team is pleased to announce the final beta release of Ubuntu 14.10 Desktop, Server, Cloud, and Core products.
Codenamed "Utopic Unicorn", 14.10 continues Ubuntu’s proud tradition of integrating the latest and greatest open source technologies into a high-quality, easy-to-use Linux distribution. The team has been hard at work through this cycle, introducing new features and fixing bugs.
This beta release includes images from not only the Ubuntu Desktop, Server, Cloud, and Core products, but also the Kubuntu, Lubuntu, Ubuntu GNOME, Ubuntu Kylin, Ubuntu Studio and Xubuntu flavours.
The beta images are known to be reasonably free of showstopper CD build or installer bugs, while representing a very recent snapshot of 14.10 that should be representative of the features intended to ship with the final release, expected on October 23rd, 2014.

Ubuntu, Ubuntu Server, Ubuntu Core, Cloud Images
Utopic Final Beta includes updated versions of most of our core set of packages, including a current 3.16.2 kernel, apparmor improvements, and many more.
To upgrade to Ubuntu 14.10 Final Beta from Ubuntu 14.04, follow these instructions:
The Ubuntu 14.10 Final Beta images can be downloaded at:
http://releases.ubuntu.com/14.10/ (Ubuntu and Ubuntu Server)
Additional images can be found at the following links:
http://cloud-images.ubuntu.com/releases/14.10/beta-2/ (Cloud Images)
http://cdimage.ubuntu.com/releases/14.10/beta-2/ (Community Supported)
The full release notes for Ubuntu 14.10 Final Beta can be found at:
Kubuntu is the KDE based flavour of Ubuntu. It uses the Plasma desktop and includes a wide selection of tools from the KDE project.
The Final Beta images can be downloaded at: http://cdimage.ubuntu.com/kubuntu/releases/14.10/beta-2/
More information on Kubuntu Final Beta can be found here: https://wiki.ubuntu.com/UtopicUnicorn/Beta2/Kubuntu

Lubuntu
Lubuntu is a flavor of Ubuntu that aims to be lighter, less resource-hungry and more energy-efficient by using lightweight applications and LXDE, The Lightweight X11 Desktop Environment, as its default GUI.
The Final Beta images can be downloaded at: http://cdimage.ubuntu.com/lubuntu/releases/14.10/beta-2/

Ubuntu GNOME
Ubuntu GNOME is a flavor of Ubuntu featuring the GNOME desktop environment.
The Final Beta images can be downloaded at: http://cdimage.ubuntu.com/ubuntu-gnome/releases/14.10/beta-2/
More information on Ubuntu GNOME Final Beta can be found here: https://wiki.ubuntu.com/UtopicUnicorn/Beta2/UbuntuGNOME

UbuntuKylin
UbuntuKylin is a flavor of Ubuntu that is more suitable for Chinese users.
The Final Beta images can be downloaded at: http://cdimage.ubuntu.com/ubuntukylin/releases/14.10/beta-2/

Ubuntu Studio
Ubuntu Studio is a flavor of Ubuntu that provides a full range of multimedia content creation applications for key workflows: audio, graphics, video, photography and publishing.
The Final Beta images can be downloaded at: http://cdimage.ubuntu.com/ubuntustudio/releases/14.10/beta-2/

Xubuntu
Xubuntu is a flavor of Ubuntu that comes with Xfce, which is a stable, light and configurable desktop environment.
The Final Beta images can be downloaded at: http://cdimage.ubuntu.com/xubuntu/releases/14.10/beta-2/
Regular daily images for Ubuntu can be found at: http://cdimage.ubuntu.com
Ubuntu is a full-featured Linux distribution for clients, servers and clouds, with a fast and easy installation and regular releases. A tightly-integrated selection of excellent applications is included, and an incredible variety of add-on software is just a few clicks away.
Professional technical support is available from Canonical Limited and hundreds of other companies around the world. For more information about support, visit http://www.ubuntu.com/support
If you would like to help shape Ubuntu, take a look at the list of ways you can participate at: http://www.ubuntu.com/community/participate
Your comments, bug reports, patches and suggestions really help us to improve this and future releases of Ubuntu. Instructions can be found at: https://help.ubuntu.com/community/ReportingBugs
You can find out more about Ubuntu and about this beta release on our website, IRC channel and wiki.
To sign up for future Ubuntu announcements, please subscribe to Ubuntu’s very low volume announcement list at:
Originally posted to the ubuntu-announce mailing list on Fri Sep 26 02:30:26 UTC 2014 by Adam Conrad
* Command & Conquer
* How-To: Install Oracle, LibreOffice, and dcm4che.
* Graphics : GIMP Perspective Clone Tool and Inkscape.
* Linux Labs: Kodi/XBMC, and Compiling a Kernel Pt.2
plus: News, Q&A, Ubuntu Games, and soooo much more.
Grab it while it’s hot
LEGO. There, now I have your attention.
The LEGO Neighborhood Book is another addition to the series of cool LEGO books published by No Starch Press. In it, you find a set of instructions for building anything from small features like furniture or traffic lights to large things like buildings to populate an entire neighborhood. Unlike the creations of my youth, these buildings are detailed structures. Gone are the standard, boxy things I used to make. Replacing them are fancy window frames, building mouldings, and seriously beautiful architectural touches. In fact, many of those features are discussed and described, giving a context for the builder to understand a little bit about them. Also included are instructions for creating different types of features to put in those buildings. Everything from art work to plants to kitchen appliances is in there.
I’ve said so much about the books in this series, and it all holds true here, too. Part of me feels bad for the short review here, but the other part of me hates to repeat myself. In this instance, the praise of the past still applies. If you are a LEGO enthusiast, this is worthy of your consideration. Pick it up and take a look.
Many a time I get asked which version of Ubuntu I use to develop and test Ubuntu Touch apps, or even which device I run my stuff on. So I figured it would be interesting to share how I go about doing what I do, and at the same time share some tips that might help you set up your workflow.
I am going to start off with my needs, which are:
- Develop core apps like Clock, Calendar and be able to test them on a phone form factor (amongst others) to ensure they work as expected.
- Develop test suites (Autopilot, QML, Manual Tests) which need to be run on the device before every merge proposal to prevent regressions.
My primary machine runs Trusty 14.04, period. It is my main machine that I use for development and also for other important purposes like university, personal use cases, etc., and I am not a big fan of updating it every 6 months. And to be honest, it has served me quite well so far and I don't want to give that up.
When I heard that the Ubuntu SDK wouldn't be updated in Trusty, I was shocked! I was so fixated on keeping Trusty that I decided to look for alternative ways of developing core apps while still keeping Trusty. So I naturally created a Utopic Virtualbox VM and used that for a while.
Disclaimer: You have to understand, though, that it is a legitimate challenge to backport newer versions of the SDK to Trusty, since that would require all of Qt 5.3, which is a massive undertaking to backport.
That's when I talked to Zoltán Balogh and he explained things to me. There is a distinction between the development environment and the testing environment. While it is necessary for application developers to test their applications in an environment that best simulates the real device, be that a phone, tablet or anything else, the development environment can very well be any ordinary system (without the latest ubuntu-ui-toolkit and other packages).
This is done by integrating the test environment (Ubuntu Emulator) closely with the Ubuntu SDK IDE. In recent times, it has been a breeze getting the core apps like Clock and Calendar running on the phone and the emulator. The i386 emulator starts up rather quickly (around 20-40 seconds) and running your app on the emulator takes about 4-5 seconds. The SDK devs also ensure that the test environment tools like the ubuntu emulator runtime package, qtcreator-ubuntu-plugin are up to date on Trusty.
And as such I use Trusty 14.04 to develop and run all core apps.

2. Test suites for Core Apps
This one is a bit tricky and is part of the reason why I cannot have one universal golden device to work with. Test suites are an important part of the core apps development process. If your merge proposal doesn't pass the tests, then it certainly will not be accepted. As a result, it is important that your testing environment is able to run the test suite to verify that you aren't introducing any regressions.
With Autopilot tests this isn't so much of an issue, since with the help of autopkgtests, running tests on the device is quite simple. However, as of now, I haven't found a way to run QML tests on the device or emulator despite my best attempts. If you do find a way, please do answer it here and you will be my hero :D. As a result, the next best environment is the development environment. However, since Trusty isn't getting the latest SDK, which is required for running the tests, I was stuck with a Virtualbox VM (which I hate, since they are awfully slow and heavy).
As usual, I did what I do best which is to go and complain about that on IRC :P. That's when Sergio Schvezov introduced me to LXC Containers. I had absolutely no idea about them at the time. If I were to describe LXC Containers in a few words it would be,
"LXC Containers are schroots on steroids. They allow you to have any distro's environment without the unnecessary overhead of the desktop shell, linux kernel etc.."
So they are somewhat like the smarter cousins of Virtualbox VMs, which require a hefty amount of resources to run. If you are interested in reading more about LXC, then I highly recommend that you take a look at this. If you want a shorter version of how to apply that to Ubuntu Touch development, you will have to wait for my next post :-), which will be about setting up LXC containers and installing the Ubuntu SDK in them.

Summary
As a 3rd party app dev, you should be able to do pretty much everything related to developing Ubuntu Touch apps on Trusty 14.04 LTS. Don't let anyone convince you that you need the latest release of Ubuntu to do that. If you are having issues getting your emulator up and running after reading through the tutorials here, please bring it up on the mailing list or on IRC in #ubuntu-app-devel or #ubuntu-touch.
If you are interested in the eBook, take a look. Valid only on 26 September 2014.
In meetings with the Braintrust, where new film ideas are viewed and judged, Catmull says,
It is natural for people to fear that such an inherently critical environment will feel threatening and unpleasant, like a trip to the dentist. The key is to look at the viewpoints being offered, in any successful feedback group, as additive, not competitive. A competitive approach measures other ideas against your own, turning the discussion into a debate to be won or lost. An additive approach, on the other hand, starts with the understanding that each participant contributes something (even if it's only an idea that fuels the discussion--and ultimately doesn't work). The Braintrust is valuable because it broadens your perspective, allowing you to peer--at least briefly--through other eyes.

Catmull presents an example where the Braintrust found a problem in The Incredibles. In this case, they knew something was wrong but failed to correctly diagnose it. Even so, the director was able, with the help of his peers, to ultimately fix the scene. The problem turned out not to be the voices, but the physical scale of the characters on the screen!
This could happen because the director and the team let go of fear and defensiveness, and trust that everyone is working for the greater good. I often see us doing this in KDE, but in the Community Working Group cases which come before us, I see this breaking down sometimes. It is human nature to be defensive. It takes a healthy community to build trust so we can overcome that fear.
Ubuntu GNOME Team is pleased to announce the release of Ubuntu GNOME Utopic Unicorn Beta 2 (Final Beta).
Please do read the release notes.
This is the Beta 2 release. Ubuntu GNOME Beta releases are NOT recommended for:
- Regular users who are not aware of pre-release issues
- Anyone who needs a stable system
- Anyone uncomfortable running a possibly frequently broken system
- Anyone in a production environment with data or workflows that need to be reliable
Ubuntu GNOME Beta Releases are recommended for:
- Regular users who want to help us test by finding, reporting, and/or fixing bugs
- Ubuntu GNOME developers
For those who wish to use the latest releases, please remember to do an upgrade test from Trusty Tahr (Ubuntu GNOME 14.04 LTS) to Utopic Unicorn Beta 2. Needless to say, Ubuntu GNOME 14.04 is an LTS release supported for 3 years, so this test is for those who seek the latest system and packages rather than staying on the LTS (Long Term Support) release.
To help with testing Ubuntu GNOME:
Please see Testing Ubuntu GNOME Wiki Page.
To contact Ubuntu GNOME:
Please see our full list of contact channels.
Thank you for choosing and testing Ubuntu GNOME!
Ubuntu 14.10 (Utopic Unicorn) Final Beta Released – Official Announcement
The Xubuntu team is pleased to announce the immediate release of Xubuntu 14.10 Beta 2. This is the final beta before the release in October. Leading up to this beta we landed various enhancements and some new features. Now it’s time to polish the rough edges and improve stability.
The Beta 2 release is available via torrents and direct downloads. Known issues in this release include:
- com32r error on boot with usb (1325801)
- Installation into some virtual machines fails to boot (1371651)
- Failure to configure wifi in live-session (1351590)
- Black background to Try/Install dialogue (1365815)
To celebrate the 14.10 codename “Utopic Unicorn” and to demonstrate the easy customisability of Xubuntu, highlight colors have been turned pink for this release. You can easily revert this change by using the theme configuration application (gtk-theme-config) under the Settings Manager; simply turn Custom Highlight Colors “Off” and click “Apply”. Of course, if you wish, you can change the highlight color to something you like better than the default blue!

Workarounds for issues in virtual machines:
- Move to TTY1 (with VirtualBox, Right-Ctrl+F1), login and then start lightdm with “sudo service lightdm start”
- Some people have been able to boot successfully after editing grub and removing the “quiet” and “splash” options
- Install appears to start OK when systemd is enabled; append “init=/lib/systemd/systemd” to the “linux” line in grub
reduce the risk of losing control of your AWS account by not knowing the root account password
As Amazon states, one of the best practices for using AWS is
Don’t use your AWS root account credentials to access AWS […] Create an IAM user for yourself […], give that IAM user administrative privileges, and use that IAM user for all your work.
The root account credentials are the email address and password that you used when you first registered for AWS. These credentials have the ultimate authority to create and delete IAM users, change billing, close the account, and perform all other actions on your AWS account.
You can create a separate IAM user with near-full permissions for use when you need to perform admin tasks, instead of using the AWS root account. If the credentials for the admin IAM user are compromised, you can use the AWS root account to disable those credentials to prevent further harm, and create new credentials for ongoing use.
However, if the credentials for your AWS root account are compromised, the person who stole them can take over complete control of your account, change the associated email address, and lock you out.
I have consulted for companies that lost control of the root AWS account containing their assets. You want to avoid this.

Proposal
- The AWS root account is not required for regular use as long as you have created an IAM user with admin privileges
- Amazon recommends not using your AWS root account
- You can’t accidentally expose your AWS root account password if you don’t know it and haven’t saved it anywhere
- You can always reset your AWS root account password as long as you have access to the email address associated with the account
Consider this approach to improving security:
Create an IAM user with full admin privileges and use it when you need to perform administrative tasks. Activate IAM user access to account billing information so that this user can read and modify billing, payment, and account information.
Change the AWS root account password to a long, randomly generated string. Do not save the password, and do not try to remember it. On Ubuntu, you can use a command like the following to generate a random password for copy/paste into the change-password form:

pwgen -s 24 1
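If pwgen is not installed, openssl (present on most systems) can generate a similar one-off password; a minimal sketch matching the 24-character length above:

```shell
# Generate a 24-character random password from openssl's CSPRNG,
# dropping base64 punctuation and the newline for easy copy/paste
openssl rand -base64 36 | tr -d '/+=\n' | cut -c1-24
```

As with pwgen, paste the output directly into the change-password form without saving it anywhere.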
If you need access to the AWS root account at some point in the future, use the “Forgot Password” function on the signin form.
It should be clear from this that protecting access to your email account is critical to your overall AWS security, as that is all that is needed to reset your password; but that has been true for many online services for many years.

Caveats
You currently need to use the AWS root account in the following situations:
- to change the email address and password associated with the AWS root account
- to deactivate IAM user access to account billing information
- to cancel AWS services (e.g., support)
- to close the AWS account
- to buy stuff on Amazon.com, Audible.com, etc. if you are using the same account (not recommended)
- anything else? Let folks know in the comments.
For completeness, I should also reiterate Amazon’s constant and strong recommendation to use MFA (multi-factor authentication) on your root AWS account. Consider buying the hardware MFA device, associating it with your root account, then storing it in a lock box with your other important things.
You should also add MFA to your IAM accounts that have AWS console access. For this, I like to use Google Authenticator software running on a locked down mobile phone.
MFA adds a second layer of protection beyond just knowing the password or having access to your email account.
Original article: http://alestic.com/2014/09/aws-root-password
Amazon Web Services recently announced an AWS Community Heroes Program where they are starting to recognize publicly some of the many individuals around the world who contribute in so many ways to the community that has grown up around the services and products provided by AWS.
It is fun to be part of this community and to share the excitement that so many have experienced as they discover and promote new ways of working and more efficient ways of building projects and companies.
Here are some technologies I have gotten the most excited about over the decades. Each of these changed my life in a significant way as I invested serious time and effort learning and using the technology. The year represents when I started sharing the “good news” of the technology with people around me, who at the time usually couldn’t have cared less.
1980: Computers and Programming - “You can write instructions and the computer does what you tell it to! This is going to be huge!”
1987: The Internet - “You can talk to people around the world, access information that others make available, and publish information for others to access! This is going to be huge!”
1993: The World Wide Web - “You can view remote documents by clicking on hyperlinks, making it super-easy to access information, and publishing is simple! This is going to be huge!”
2007: Amazon Web Services - “You can provision on-demand disposable compute infrastructure from the command line and only pay for what you use! This is going to be huge!”
I feel privileged to have witnessed amazing growth in each of these and look forward to more productive use on all fronts.
A great way to meet thousands of people in the AWS community (and to spend a few days in intense learning about AWS no matter your current expertise level) is to attend the AWS re:Invent conference in Las Vegas this November. Perhaps I’ll see you there!
Original article: http://alestic.com/2014/09/aws-community-heroes
I bought a Cubieboard2 and made a Lubuntu 14.04 image! It's now really fast and easy to deploy that image on a Cubieboard2 with a 4GB NAND.
Download the Lubuntu 14.04 image for CubieBoard2 here.
LUBUNTU 14.04 INSTALL STEPS:
Boot into a live distro, for example Cubian on a microSD card (>8GB), following these steps.
Copy the downloaded Lubuntu image to the root of the microSD.
Boot the Cubieboard2 with Cubian from the microSD.
Open a terminal (Menu / Accessories / LXTerminal) and run:
sudo su -
[password is "cubie"]
dd if=/lubuntu-14.04-cubieboard2-nand.img conv=sync,noerror bs=64K of=/dev/nand
That's it! Reboot :) You should now have Lubuntu 14.04.1 running on a 4GB NAND partition. User: linaro, password: linaro.
RECOMMENDED STEPS AFTER INSTALLATION:
sudo su -
- Add your new user (change 'username' to your new user's name):
- Set the keyboard layout persistently (for example, Spanish is "es"):
- Set the local time (for example, Spain's is Europe/Madrid); otherwise the browser will have problems with https web pages:
- Change the password of the linaro user, or remove that user after logging out (it has sudo and everyone knows the default password ;):
- Install ssh-client to connect via ssh, or pulseaudio and pavucontrol for audio.
HOW WAS THIS IMAGE DONE?
For this image I installed the official Lubuntu 13.04 image from here, and made these changes:
- Resized the NAND to 4GB (Ubuntu uses 1.5GB, leaving 2GB free). You can use a microSD or SATA HD as external storage.
- Updated to 13.10 and then to 14.04 LTS (updated the lxde* packages to their latest versions).
- Installed ntp, firefox, audacious, sylpheed, pidgin, gpicview, lxappearance and ufw (not enabled)
- Fixed write permissions and group ownership on /etc, /, and /lib to avoid ufw warnings
- Removed chromium-browser, gnome-network-manager and gnome-disk-utility
- Removed passwordless sudo for admin users (edited /etc/sudoers)
- Created this dd image
(OPTIONAL) BACK UP YOUR CURRENT CUBIEBOARD2 FIRST:
With a microSD card inserted, from your current OS run:
sudo su -
dd if=/dev/nand conv=sync,noerror bs=64K | gzip -c -9 > /nand.img.gz
(OPTIONAL) RESTORE THAT BACKUP:
gunzip -c /nand.img.gz | dd of=/dev/nand bs=64K
Just a quick post to help those who might be running older/unsupported distributions of Linux, mainly Ubuntu 8.04, and need to patch their version of bash due to the recent exploit here:
I found this post and can confirm it works:
Here are the steps (make a backup of /bin/bash just in case):
#assume that your sources are in /src
cd /src
#download the bash source and all patches (the patches are numbered 001-025)
wget http://ftp.gnu.org/gnu/bash/bash-4.3.tar.gz
for i in $(seq -f "%03g" 1 25); do wget http://ftp.gnu.org/gnu/bash/bash-4.3-patches/bash43-$i; done
tar zxvf bash-4.3.tar.gz
cd bash-4.3
#apply all patches
for i in $(seq -f "%03g" 1 25); do patch -p0 < ../bash43-$i; done
#build and install
./configure && make && make install
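After installing, you can verify the fix with the widely published Shellshock test (a quick sanity check): a patched bash prints only "ok", while a vulnerable one also prints "vulnerable".

```shell
# Shellshock (CVE-2014-6271) check: an exported "function" definition
# with trailing code; patched bash does not execute the trailing code
env x='() { :;}; echo vulnerable' bash -c 'echo ok'
```

Patched versions of bash may also emit a warning on stderr about ignoring the function definition attempt; that is expected and harmless.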
KDE Frameworks 5.2.0 Has been released to Utopic archive!
(Actually a few days ago; we are playing catch-up since Akademy.)
Also, I have finished packaging Plasma 5.0.2, it looks and runs great!
We desperately need more testers! If you would like to help us test,
please join us in IRC in #kubuntu-devel thanks!
A few weeks ago I was blessed with the opportunity to attend KDE’s Akademy Conference for the first time. (Thank you Ubuntu Donors for sponsoring me!).
Akademy is a week-long conference that begins with a weekend of keynote speakers, informative lectures, and many hacking groups scattered about.
This Akademy also had a great pre-release party held by Red Hat.
I have not traveled such a distance since I was a child, so I was not prepared for the adventures to come. Hint: Pack lightly! I still have nightmares of the giant suitcase I thought I would need! I was lucky to have a travel buddy / roommate (Thank you Valorie Zimmerman!) to assist me in my travels, and most importantly, introducing me to my peers at KDE/Kubuntu that I had never met in person. It was wonderful to finally put a face to the names.
My first few days were rather difficult. I was fighting my urge to stand in a corner and be shy. Luckily, some friendly folks dragged me out of the corner and introduced me to more and more people. With each introduction and conversation it became easier. I also volunteered at the registration desk, which gave me an opportunity to meet new people. As the days went on and many great conversations later, I forgot I was shy! In the end I made many friends during Akademy, turning this event into one of the most memorable moments of my life.
The weekend brought Keynote speakers and many informative lectures. Unfortunately, I could not be in several places at once, so I missed a few that I wanted to see.
Thankfully, you can see them here: https://conf.kde.org/en/Akademy2014/public/schedule/2014-09-06
Due to circumstances out of their control, the audio is not great. The rest of the week was filled with BoF sessions, workshops, hacking, collaboration, and anything else we could think of that needed to get done. In the BoF sessions we covered a lot of ground and hashed out ways to resolve problems we were facing. All that I attended were extremely productive. Yet another case where I wish I could split into multiple people so I could attend everything I wanted to!
On Thursday we got an entire Kubuntu Day! We accomplished many things including working with Debian’s Sune and Pino to move some of our packaging to Debian git to reduce duplicate packaging work. We discussed the details of going to continuous packaging which includes Jenkins CI. We also had the pleasure of München’s Limux project joining us to update us with the progress of Kubuntu in Munich, Germany!
While there was a lot of work accomplished during Akademy, there was also plenty of play as well! In the evenings many of us would go out on the town for dinner and drinks.
On Wednesday, on the day trip, we visited an old castle (what a hike!) via a nice ferry ride. Unfortunately I forgot my camera at the hostel. The hackroom in the hostel was always bustling with activity. We even had the pleasure of very tasty home-cooked meals by Jos Poortvliet in the tiny hostel kitchen a couple of nights; that took some creative thinking! In the end, there was never a moment of boredom and always moments of learning, discussion, hacking and laughing.
If you ever have the opportunity to attend Akademy, do not pass it up!