news aggregator

Ronnie Tucker: Mozilla Thunderbird 31.1.1 Lands in Ubuntu

Planet Ubuntu - Sat, 2014-09-20 05:06

Canonical has shared some details about a number of Thunderbird vulnerabilities identified in its Ubuntu 14.04 LTS and Ubuntu 12.04 LTS operating systems, and the devs have pushed a new version into the repositories.

The Thunderbird email client was updated a couple of days ago and the new version has landed pretty quickly in the Ubuntu repos. This means that it should be available when users update their systems.

For example, “Abhishek Arya discovered a use-after-free during DOM interactions with SVG. If a user were tricked in to opening a specially crafted message with scripting enabled, an attacker could potentially exploit this to cause a denial of service via application crash or execute arbitrary code with the privileges of the user invoking Thunderbird,” reads the announcement.

Source:

http://news.softpedia.com/news/Mozilla-Thunderbird-13-1-1-Lands-in-the-Ubuntu-458664.shtml

Submitted by: Silviu Stahie

John Baer: The Promise of a Broadwell Chromebook

Planet Ubuntu - Fri, 2014-09-19 22:53

There were many announcements made at the 2014 Intel Developer Forum, but a Broadwell-powered Chromebook was not on the list. That is not to say it’s not coming.

The Intel Broadwell system on a chip “SoC” is a great fit for a Chromebook.

Central Processing Unit

You won’t find significant innovation or design changes in the Broadwell central processing unit (CPU), as the primary focus of this release was to shrink the manufacturing die from 22nm to 14nm to gain power efficiency. The result is a 4.5W power draw, which should translate into 8 or more hours of battery life.

From a getting-stuff-done perspective, it looks like the CPU will perform as well as or better than the Haswell Celeron 2955U found in many Chrome devices. Intel states twice the performance of a four-year-old i5. Some folks speculate the performance will equal the new i3-4005U, but I anticipate Octane scores will equal or exceed 11000.

Graphical Processing Unit

This is where it gets interesting. I assume Intel is feeling some pressure from nVidia, Imagination Technologies, and new entrants such as RockChip, which compelled them to enhance the performance of the graphical processing unit (GPU). Indeed, preliminary 3DMark benchmarks show a 40% advantage over the 32-bit nVidia Tegra K1.

Will there be a Chromebook?

Francois Beaufort blogged that a Broadwell development board has been added to the Chrome OS repository, which supports the fact that a Chromebook is under consideration. As a platinum member of the Linux Foundation, Intel has the knowledge and experience to optimize Chrome OS to leverage the features of its processors, and I am confident they will do so for Broadwell.

The big question becomes price. Wholesale pricing for this SoC is expected to exceed $250, which is a premium price for an entry-level Chromebook but may work for a professional-grade device (viz. Pixel 2) targeted at college students and the enterprise.

  • High Quality Touch IPS FHD 13 inch screen
  • 4 GB RAM
  • 64 GB SSD
  • Backlit Keyboard
  • Wifi ac / Bluetooth 4.0 / USB 3.0 / HDMI

Retail priced at $599.

The post The Promise of a Broadwell Chromebook appeared first on john's journal.

Jorge Castro: Juju Ecosystem Team Sprint Report

Planet Ubuntu - Fri, 2014-09-19 16:49

Greetings folks,

The Juju Ecosystem team at Canonical (joined remotely by community members) recently had a developer sprint in beautiful Dillon, Colorado to Get Things Done(™).

Here are the highlights:

Automated Charm Testing

Tim Van Steenburgh and Marco Ceppi made a ton of progress with automated charm testing, here’s the prelim state-of-the-world:

Jenkins Jobs Fired off: 22

This enabled us to dedicate hours of block time to getting as many of those red charms to green as possible. The priority for our team over the next few weeks will be fixing these charms and, of course, gating new charms via this method, as well as kicking broken charms back to personal namespaces.

Ben Saller helped out by prototyping “Dockerizing” charm testing so that developers can test their charms in a fully containerized way. This will help CI by giving us isolation, density, and reliability.

Charm tests are now launched from the review queue to help with gating based on tests passing.

Thanks to Aaron Bentley for supporting our efforts here!

Review Queue

The Charmers (Marco Ceppi, Charles Butler, and Matt Bruzek) dedicated time to getting through reviews. The whole team spent time creating fixes for the automated test results mentioned above. We’re in great shape to drive this down and never let it get out of control again, thanks to our new team review guidelines: http://review.juju.solutions/

The goal was to help submitters and reviewers know where they are in a review, and the next steps needed.

Here are the numbers:

  • Reviews Performed: 189
  • Commits: 228
  • Charms Promulgated: 10
  • Charms Unpromulgated: 7
  • Lines of Code touched: 34109 (artificially high due to SVG icons, heh)
  • Reviews Submitted: 84
  • Energy Drinks: 80

Some new features:

  • Users can now log in with Ubuntu SSO and see what reviews they have submitted, and reviewed
  • Ability to query the review system and search/filter reviews based on several metrics (http://review.juju.solutions/search)
  • Ability for charmers to fire off an automated test of a charm on demand right from the queue. When an MP is done against a charm, we’ll now automatically reply to the MP with a link to the test results. \o/
  • You can now “lock” a review when you’re doing one so that the rest of the community can see when a review is claimed so we don’t duplicate work. (Essential for mass reviews!)
  • Queues divided and separated to highlight priority items and items for different teams

CloudFoundry

  • Improving the downloader/packaging story so it’s more reusable
  • Cory Johns developed a pattern for charm helpers for CloudFoundry; the CF sub-team feels this will be a useful pattern for other charmers. They’re calling it the “charm services framework”, expect to hear more from them in the future.
  • We were able to replicate the Juju/Rails Framework deployment of an application and compare doing the same thing on CF: https://plus.google.com/117270619435440230164/posts/gHgB6k5f7Fv
  • Whit concentrated on tracking changes to Pivotal’s build procedures.

Charm Developer Workflow

This involves two things:

“The first 30 minutes of Juju”

This primarily involved finding and fixing issues with our user and developer workflow. This included doing some initial work on what we’re calling “Landing Pages”, which will be topic based landing pages for different places where people can use Juju, so for example, a “Big Data” page with specific solutions for that field. We expect to have these for a bunch of different fields of study.

We have identified the following 5 charms as “flagbearers”: Rails (in progress), elasticsearch, postgresql, meteor, and chamilo. We consider these charms to be excellent examples of quality, integration with other tools, and usage of charm tools. We will be modifying the documentation to highlight these charms as reference points. All these charms have tests now, though some might not have landed yet.

Better tools for Charm Authors:

Ben, Tim, and Whit have a prototype of a fully Dockerized developer environment that contains all of the local developer tools and all of the flagbearer charms. The intention is to also provide a fully bootstrapped local provider. The goal is “test anything in 30 seconds in a container”.

In addition to this, Adam Israel tackled some of our Vagrant development stories, which will allow us to provide a better Vagrant developer workflow; thanks to Ben Howard and his team for helping us get these features into our boxes.

We expect both the Docker-based and Vagrant-based approaches to eventually converge. Having both now gives us a nice “spread” to cover developers on multiple operating systems with tools they’re familiar with.

Big Data

Amir/Chuck worked on the following things:

  • Upgrading the ELK stack for Trusty
  • Planning out new Landing Pages focused on the Big Data story
  • Bringing up existing Big Data (Hortonworks) Stack to Charm Store standards for Trusty, and getting those charms merged
  • Pre-planning for next phase of Big Data Workloads (MapR, Apache distributions)

Other

  • General familiarity training with MAAS, OpenStack on OBs and NUCs.
  • Very fast firehose drinking for new team members: Adam Israel, Randall Ross, and Kevin Monroe have joined the team.
  • Special Thanks to Jose Antonio Rey, Sebas, and Josh Strobl, for joining us to help get reviews and fixes in the store and documentation.
  • We have a new team blog at: http://juju-solutions.github.io/ (Beta, thanks Whit.)
  • Most of the topics here had corresponding fixes/updates to the Juju documentation.

Leo Iannacone: Use GTK-3.0 Dark variant theme for your GTK-2 terminal emulator

Planet Ubuntu - Fri, 2014-09-19 14:13

This is a workaround to force your preferred terminal emulator to use the Dark variant of the Adwaita theme in GNOME >= 3.12 (maybe earlier, but untested).

Just add these lines to your ~/.bashrc file:

# set dark theme for xterm emulators
if [ "$TERM" == "xterm" ] ; then
    xprop -f _GTK_THEME_VARIANT 8u -set _GTK_THEME_VARIANT "dark" \
        -id `xprop -root | awk '/^_NET_ACTIVE_WINDOW/ {print $5}'`
fi

This is how it works with Terminator:

Before

After

Stephen Kelly: Grantlee 0.5.0 (codename Auf 2 Hochzeiten tanzen) now available

Planet Ubuntu - Fri, 2014-09-19 09:29

The Grantlee community is pleased to announce the release of Grantlee version 0.5 (Mirror). Source and binary compatibility are maintained, as with all previous releases. Grantlee is an implementation of the Django template system in Qt.

This release builds with both Qt 5 and Qt 4. The Qt 5 build is only for transitional purposes, so that a downstream can get their own code built and working with Qt 5 without first being blocked by Grantlee backward-incompatible changes. The Qt 5 based version of Grantlee 0.5.0 should not be relied upon as a stable interface. It is only there to assist porting. There won’t be any more Qt 4 based releases, except to fix build issues if needed.

The next release of Grantlee will happen next week and will be exclusively Qt 5 based. It will have a small number of backward incompatible changes, such as adding missing const and dropping some deprecated stuff.

The minimum required CMake version has also been increased to 2.8.11. That CMake release contains most of the API for usage requirements, and so allows cleaning up a lot of older CMake code.
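For downstream users, the usage-requirements style means that linking against an imported target is usually enough to pick up include paths and compile definitions as well; a minimal sketch of a consuming project (the imported target name here is an assumption, so check the installed Grantlee CMake config files for the exact one):

```cmake
cmake_minimum_required(VERSION 2.8.11)
project(grantlee_demo)

find_package(Grantlee REQUIRED)

add_executable(demo main.cpp)
# With usage requirements, linking the imported target also propagates
# its include directories and compile definitions to "demo".
target_link_libraries(demo Grantlee::Templates)
```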

Also in this release are a small number of bug fixes, memory leak plugs, and the like.


Daniel Pocock: reSIProcate migration from SVN to Git completed

Planet Ubuntu - Fri, 2014-09-19 06:47

This week, the reSIProcate project completed the move from SVN to Git.

With many people using the SIP stack in both open source and commercial projects, the migration was carefully planned and tested over an extended period of time. Hopefully some of the experience from this migration can help other projects too.

Previous SVN committers were tracked down using my script for matching emails to Github accounts. This also allowed us to see their recent commits on other projects and see how they want their name and email address represented when their previous commits in SVN were mapped to Git commits.

For about a year, the sync2git script had been run hourly from cron to maintain an official mirror of the project in Github. This allowed people to test it and it also allowed us to start using some Github features like travis-CI.org before officially moving to Git.
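An hourly mirror job like that can be pictured as a crontab entry along these lines; the path, flags, and log location are illustrative, not the project's actual configuration:

```shell
# m h  dom mon dow  command
0 *    *   *   *    /srv/resiprocate/sync2git >> /var/log/sync2git.log 2>&1
```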

At the cut-over, the SVN directories were made read-only, sync2git was run one last time and then people were advised they could commit in Git.

Documentation has also been created to help people get started quickly sharing patches as Github pull requests if they haven't used this facility before.

Ronnie Tucker: Curl Exploits Closed in All Supported Ubuntu OSes

Planet Ubuntu - Fri, 2014-09-19 05:05

Canonical has announced that a couple of curl vulnerabilities have been found and fixed in its Ubuntu 14.04 LTS, Ubuntu 12.04 LTS, and Ubuntu 10.04 LTS operating systems.

The developers have released a new update for the curl package and it looks like a number of security issues have been corrected.

“Tim Ruehsen discovered that curl incorrectly handled partial literal IP addresses. This could lead to the disclosure of cookies to the wrong site, and malicious sites being able to set cookies for others,” reads the security notice.

Source:

http://news.softpedia.com/news/Curl-Exploits-Close-in-All-Supported-Ubuntu-OSes-458899.shtml

Submitted by: Silviu Stahie

Stuart Langridge: Fundamentally connected

Planet Ubuntu - Fri, 2014-09-19 04:11

Aaron Gustafson recently wrote a very interesting monograph bemoaning a recent trend to view JavaScript as “a virtual machine in the browser”. I’ll quote fairly extensively, because Aaron makes some really strong points here, and I have a lot of sympathy with them. But at bottom I think he’s wrong, or at the very least he’s looking at this question from the wrong direction, like trying to divine the purpose of the Taj Mahal by looking at it from underneath.

“The one problem I’ve seen,” says Aaron, “is the fundamental disconnect many of these developers [who began taking JavaScript seriously after Ajax became popular] seem to have with the way deploying code on the Web works. In traditional software development, we have some say in the execution environment. On the Web, we don’t.” He goes on to explain: “If we’re writing server-side software in Python or Rails or even PHP, … we control the server environment [or] we have knowledge of it and can author… accordingly”, and “in the more traditional installed software world, we can similarly control the environment by placing certain restrictions on what operating systems our code can run on”.

I believe that this criticism, while essentially valid, misapprehends the real case here. It underestimates the universality of JavaScript implementations, it overestimates the stability of old-fashioned software development, and most importantly it starts from the presumption that building things for one particular computer is actually a good idea. Which it isn’t.

Now, nobody is arguing that the web environment is occasionally challengingly different across browsers and devices. But a lot of it isn’t. No browser ships with a JavaScript implementation in which 1 and 1 add to make 3, or in which Arrays don’t have a length property, or in which the for keyword doesn’t exist. If we ignore some of the Mozilla-specific stuff which is becoming ES6 (things such as array comprehensions, which nobody is actually using in actual live code out there in the universe), JavaScript is pretty stable and pretty unchanging across all its implementations. Of course, what we’re really talking about here is the DOM model, not JavaScript-the-language, and to claim that “JavaScript can be the virtual machine” and then say “aha I didn’t mean the DOM” is sophistry on a par with a child asking “can I not not not not not have an ice-cream?”. But the DOM model is pretty stable too, let’s be honest. In things I build, certainly I find myself in murky depths occasionally with JavaScript across different browsers and devices, but those depths are as the sparkling waters of Lake Treviso by comparison with CSS across different browsers. In fact, when CSS proves problematic across browsers, JavaScript is the bandage used to fix it and provide a consistent experience — your keyframed CSS animation might be unreliable, but jQuery plugins work everywhere. JavaScript is the glue that binds the other bits together.

Equally, I am not at all sold that “we have knowledge of [the server environment] and can author your program accordingly so it will execute as anticipated” when doing server development. Or, at least, that’s possible, but nobody does. If you doubt this, I invite you to go file a bug on any server-side app you like and say “this thing doesn’t work right for me” and then add at the bottom “oh, and I’m running FreeBSD, not Ubuntu”. The response will occasionally be “oh really? we had better get that fixed then!” but is much more likely to be “we don’t support that. Use Ubuntu and this git repository.” Now, that’s a valid approach — we only support this specific known configuration! — but importantly, on the web Aaron sees requiring a specific browser/OS combination as an impractical impossibility and the wrong thing to do, whereas doing this on the server is positively virtuous. I believe that this is no virtue. Dismissing claims of failure with “well, you should be using the environment I demand” is just as large a sin on the server or the desktop as it is in the browser. You, the web developer, can’t require that I use your choice of browser, but equally you, the server developer, shouldn’t require that I use your particular exact combination of server packages either. Why do client users deserve more respect than server users? If a developer using your server software should be compelled to go and get a different server, how’s that different from asking someone to install a different web browser? Sure, I’m not expecting someone who built a server app running on Linux to necessarily also make it run on Windows (although wise developers will do so), but then I’m not really expecting someone who’s built a 3d game with WebGL to make the experience meaningful for someone browsing with Lynx, either.

Perhaps though you differ there, gentle reader. That the web is the web, and one should have a meaningful experience (although importantly not necessarily the same meaningful experience) which ever class of browser and device and capability one uses to get at the web. That is a very good point, one with which I have a reasonable amount of sympathy, and it leads me on to the final part of the argument.

It is this. Web developers are actually better than non-web developers. And Aaron explains precisely why. It is because to build a great web app is precisely to build a thing which can be meaningfully experienced by people on any class of browser and device and capability. The absolute tip-top very best “native” app can only be enjoyed by those to whom it is native. “Native apps” are poetry: undeniably beautiful when done well, but useless if you don’t speak the language. A great web app, on the other hand, is a painting: beautiful to experience and available to everybody. The Web has trained its developers to attempt to build something that is fundamentally egalitarian, fundamentally available to everyone. That’s why the Web’s good. The old software model, of something which only works in one place, isn’t the baseline against which the Web should be judged; it’s something that’s been surpassed. Software development is easiest if it only has to work on your own machine, but that doesn’t mean that that’s all we should aim for. We’re all still collaboratively working out exactly how to build apps this way. Do we always succeed? No. But by any measure the Web is the largest, most widely deployed, most popular and most ubiquitous computing platform the world has ever known. And its programming language is JavaScript.

Paul Tagliamonte: Docker PostgreSQL Foreign Data Wrapper

Planet Ubuntu - Fri, 2014-09-19 01:49

For the tl;dr: Docker FDW is a thing. Star it, hack it, try it out. File bugs, be happy. If you want to see what it looks like, there's some example SQL down below.

The first question is: what the heck is a PostgreSQL Foreign Data Wrapper? Foreign Data Wrappers are plugins that allow C libraries to provide an adaptor for PostgreSQL to talk to an external data source.

Some folks have used this to wrap stuff like MongoDB, which I always found to be hilarious (and an epic hack).

Enter Multicorn

During my time at PyGotham, I saw a talk from Wes Chow about something called Multicorn. He was showing off some really neat plugins, such as the git revision history of CPython, and parsed logfiles from some stuff over at Chartbeat. This basically blew my mind.

If you're interested in some of these, there are a bunch in the Multicorn VCS repo, such as the gitfdw example.

All throughout the talk I was coming up with all sorts of things that I wanted to do -- this whole library is basically exactly what I've been dreaming about for years. I've always wanted to provide a SQL-like interface into querying API data, joining data cross-API using common crosswalks, such as using Capitol Words to query for Legislators, and use the bioguide ids to JOIN against the congress api to get their Twitter account names.

My first shot was to Multicorn the new Open Civic Data API I was working on; I chuckled and put it aside as a really awesome hack.

Enter Docker

It wasn't until tianon connected the dots for me and suggested a Docker FDW did I get really excited. Cue a few hours of hacking, and I'm proud to say -- here's Docker FDW.

Currently it only implements reading from the API, but extending this to allow for SQL DELETE operations isn't out of the question, and likely to be implemented soon. This lets us ask all sorts of really interesting questions out of the API, and might even help folks writing webapps avoid adding too much Docker-aware logic.
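As a sketch of how a read-only Multicorn wrapper like this is structured (the class name, option names, and data below are illustrative, not the actual Docker FDW source): the wrapper subclasses Multicorn's ForeignDataWrapper, receives the options from the CREATE SERVER / CREATE FOREIGN TABLE statements in its constructor, and yields one dict per row from execute().

```python
# Illustrative sketch of a read-only Multicorn foreign data wrapper.
# A stub base class is provided so the sketch runs without multicorn
# installed; inside PostgreSQL the real base class is used instead.
try:
    from multicorn import ForeignDataWrapper
except ImportError:
    class ForeignDataWrapper(object):
        def __init__(self, options, columns):
            self.options = options
            self.columns = columns

class ContainerSketchFdw(ForeignDataWrapper):
    def __init__(self, options, columns):
        super(ContainerSketchFdw, self).__init__(options, columns)
        # 'host' comes from the OPTIONS clause of the foreign table.
        self.host = options.get("host", "unix:///run/docker.sock")

    def _list_containers(self):
        # A real wrapper would query the Docker API over self.host here;
        # static data stands in for that call in this sketch.
        return [
            {"id": "abc123", "name": "/foo", "running": True},
            {"id": "def456", "name": "/bar", "running": False},
        ]

    def execute(self, quals, columns):
        # PostgreSQL calls execute() for every scan of the foreign table;
        # each yielded dict becomes one row, keyed by column name.
        for container in self._list_containers():
            yield {col: container.get(col) for col in columns}
```

Write support (for example mapping SQL DELETE to stopping a container) would add delete/insert methods on the same class.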

Setting it up

The only stumbling block you might find (at least on Debian and Ubuntu) is that you'll need a Multicorn `.deb`. It's currently undergoing an official Debianization from the Postgres team, but in the meantime I put the source and binary up on my people.debian.org. Feel free to use that while the Debian PostgreSQL team prepares the upload to unstable.

I'm going to assume you have a working Multicorn, PostgreSQL and Docker setup (including adding the postgres user to the docker group)

So, now let's pop open a psql session. Create a database (I called mine dockerfdw, but it can be anything), and let's create some tables.

Before we create the tables, we need to let PostgreSQL know where our objects are. This takes a name for the server, and the Python importable path to our FDW.

CREATE SERVER docker_containers FOREIGN DATA WRAPPER multicorn
    options (wrapper 'dockerfdw.wrappers.containers.ContainerFdw');

CREATE SERVER docker_image FOREIGN DATA WRAPPER multicorn
    options (wrapper 'dockerfdw.wrappers.images.ImageFdw');

Now that we have the server in place, we can tell PostgreSQL to create a table backed by the FDW by creating a foreign table. I won't go too much into the syntax here, but you might also note that we pass in some options - these are passed to the constructor of the FDW, letting us set stuff like the Docker host.

CREATE foreign table docker_containers (
    "id"         TEXT,
    "image"      TEXT,
    "name"       TEXT,
    "names"      TEXT[],
    "privileged" BOOLEAN,
    "ip"         TEXT,
    "bridge"     TEXT,
    "running"    BOOLEAN,
    "pid"        INT,
    "exit_code"  INT,
    "command"    TEXT[]
) server docker_containers options (
    host 'unix:///run/docker.sock'
);

CREATE foreign table docker_images (
    "id"           TEXT,
    "architecture" TEXT,
    "author"       TEXT,
    "comment"      TEXT,
    "parent"       TEXT,
    "tags"         TEXT[]
) server docker_image options (
    host 'unix:///run/docker.sock'
);

And, now that we have tables in place, we can try to learn something about the Docker containers. Let's start with something fun - a join from containers to images, showing all image tag names, the container names and the ip of the container (if it has one!).

SELECT docker_containers.ip, docker_containers.names, docker_images.tags
  FROM docker_containers
  RIGHT JOIN docker_images ON docker_containers.image=docker_images.id;

     ip      |            names            |                  tags
-------------+-----------------------------+-----------------------------------------
             |                             | {ruby:latest}
             |                             | {paultag/vcs-mirror:latest}
             | {/de-openstates-to-ocd}     | {sunlightlabs/scrapers-us-state:latest}
             | {/ny-openstates-to-ocd}     | {sunlightlabs/scrapers-us-state:latest}
             | {/ar-openstates-to-ocd}     | {sunlightlabs/scrapers-us-state:latest}
 172.17.0.47 | {/ms-openstates-to-ocd}     | {sunlightlabs/scrapers-us-state:latest}
 172.17.0.46 | {/nc-openstates-to-ocd}     | {sunlightlabs/scrapers-us-state:latest}
             | {/ia-openstates-to-ocd}     | {sunlightlabs/scrapers-us-state:latest}
             | {/az-openstates-to-ocd}     | {sunlightlabs/scrapers-us-state:latest}
             | {/oh-openstates-to-ocd}     | {sunlightlabs/scrapers-us-state:latest}
             | {/va-openstates-to-ocd}     | {sunlightlabs/scrapers-us-state:latest}
 172.17.0.41 | {/wa-openstates-to-ocd}     | {sunlightlabs/scrapers-us-state:latest}
             | {/jovial_poincare}          | {<none>:<none>}
             | {/jolly_goldstine}          | {<none>:<none>}
             | {/cranky_torvalds}          | {<none>:<none>}
             | {/backstabbing_wilson}      | {<none>:<none>}
             | {/desperate_hoover}         | {<none>:<none>}
             | {/backstabbing_ardinghelli} | {<none>:<none>}
             | {/cocky_feynman}            | {<none>:<none>}
             |                             | {paultag/postgres:latest}
             |                             | {debian:testing}
             |                             | {paultag/crank:latest}
             |                             | {<none>:<none>}
             |                             | {<none>:<none>}
             | {/stupefied_fermat}         | {hackerschool/doorbot:latest}
             | {/focused_euclid}           | {debian:unstable}
             | {/focused_babbage}          | {debian:unstable}
             | {/clever_torvalds}          | {debian:unstable}
             | {/stoic_tesla}              | {debian:unstable}
             | {/evil_torvalds}            | {debian:unstable}
             | {/foo}                      | {debian:unstable}
(31 rows)

Success! This is just a taste of what's to come, so please feel free to hack on Docker FDW, tweet me @paultag, file bugs / feature requests. It's currently a bit of a hack, and it's something that I think has long-term potential after some work goes into making sure that this is a rock solid interface to the Docker API.
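Ordinary SQL works against these tables too; as another small example against the schema defined above (a hypothetical query, not from the original post), listing the name and IP of every running container:

```sql
SELECT name, ip FROM docker_containers WHERE running;
```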

Ubuntu LoCo Council: Call for nominations to the LoCo Council

Planet Ubuntu - Thu, 2014-09-18 20:16

Hello All,

As you may know, LoCo Council members serve two-year terms, and due to this we are facing the difficult task of replacing Bhavani. A special thanks to Bhavani for all of the great contributions he made while serving with us on the LoCo Council.

So with that in mind, we are writing this to ask for volunteers to step forward and nominate themselves or another contributor for the three open positions. The LoCo Council is defined on our wiki page.

Wiki: https://wiki.ubuntu.com/LoCoCouncil

Team Agenda: https://wiki.ubuntu.com/LoCoCouncilAgenda

Typically, we meet once a month in IRC to go through items on the team agenda; we have also started to have Google Hangouts (the time for hangouts may vary depending on the availability of the members). This involves approving new LoCo Teams, re-approving approved LoCo Teams, resolving issues within teams, approving LoCo Team mailing list requests, and anything else that comes along.

We have the following requirements for Nominees:

Be an Ubuntu member

Be available during typical meeting times of the council

Insight into the culture(s) and typical activities within teams is a plus

Here is a description of the current LoCo Council:

They are current Ubuntu Members with a proven track record of activity in the community. They have shown themselves over time to be able to work well with others, and display the positive aspects of the Ubuntu Code of Conduct. They should be people who can judge contribution quality without emotion while engaging in an interview/discussion that communicates interest, a welcoming atmosphere, and which is marked by humanity, gentleness, and kindness.

If this sounds like you, or a person you know, please e-mail the LoCo Council with your nomination(s) using the following e-mail address: loco-council<at>lists.ubuntu.com.

Please include a few lines about yourself, or whom you’re nominating, so we can get a good idea of why you/they’d like to join the council, and why you feel that you/they should be considered. If you plan on nominating another person, please let them know, so they are aware.

We welcome nominations from anywhere in the world, and from any LoCo team. Nominees do not need to be a LoCo Team Contact to be nominated for this post. We are however looking for people who are active in their LoCo Team.

The time frame for this process is as follows:

Nominations will open: September 18th, 2014

Nominations will close: October 2nd, 2014

We will then forward the nominations to the CC, requesting that they make their selections at their next meeting.

Robbie Williamson: Priorities & Perseverance

Planet Ubuntu - Thu, 2014-09-18 05:32

This is not a stock ticker, rather a health ticker…and unlike a stock price, a downward trend is good.  Over the last 3 years or so, I’ve been on a personal mission of improving my health.  As you can see it wasn’t perfect, but I managed to lose a good amount of weight.

So why did I do it…what was the motivation…it’s easy, I decided in 2011 that I needed to put me first.   This was me from 2009

At my biggest, I was pushing 270lbs.  I was so busy trying to do for others, be it work, family, or friends, that I was constantly putting my needs last, i.e. exercise and healthy eating.  You see, I actually like to exercise, and healthy eating isn’t a hard thing for me, but when you start putting those things last on your priorities, it becomes easy to justify skipping the exercise or grabbing junk food because you’re short on time or exhausted from being the “hero”.

Now I have battled weight issues most of my life.  Given how I looked as a baby, this shouldn’t come as a surprise. LOL

But I did thin out as a child.

To only get bigger again

And even bigger again

But then I got lucky.  My metabolism kicked into high gear around 20, and I grew about 5 inches and since I was playing a ton of basketball daily, I ate anything I wanted and still stayed skinny

I remained so up until I had my first child, then the pounds began to come on.  Many parents will tell you that the first time is always more than you expected, so it’s not surprising that with sleep deprivation and stress, you gain weight.  To make it even more fun, I had decided to start a new job and buy a new house a few years later, when my second child came…even more “fun”.

To be clear, I’m not blaming any of my weight gain on these events, however they became easy crutches to justify putting myself last.  And here’s the crazy part, by doing all this, I actually ended up doing less for those I cared about in the long run, because I was physically exhausted, mentally fatigued, and emotionally spent a lot of the time.

So, around October of 2012 I made a decision.  In order for me to be the man I wanted to be for my family, friends, and even colleagues, I had to put myself first.  While it sounds selfish, it’s the complete opposite.  In order to be the best I could be for others, I realized I had to get myself together first.  For those of you who followed me on Facebook then, you already know what it took…a combination of MyFitnessPal calorie tracking and a little known workout program called Insanity:

Me and my boy, Shaun T, worked out religiously…everyday…sometimes mornings…sometimes afternoons…sometimes evenings.  I carried him with me on all my work travel, on my laptop and phone…doing Insanity videos in hotel rooms around the world.  I did the 60-day program about 4 times through (with breaks in between cycles)…adding in some weight workouts towards the end.  The results were great, as you can see in the first graphic starting around October 2012.  By staying focused and consistent, I dropped from about 255lbs to 226lbs at my lowest in July 2013.  I got rid of a lot of XXL shirts and 42in waist pants/shorts, and got to a point where I didn’t always feel the need to swim with a shirt on….if ya know what I mean ;-).  So August rolled around, and while I was feeling good about myself…I didn’t feel great, because I knew that while I was lighter, and healthier, I wasn’t necessarily that much stronger.  I knew that if I wanted to really be healthy and keep this weight off, I’d need more muscle mass…plus I’d look better too :-P.

So the Crossfit journey began.

Now I’ll be honest, it wasn’t my first thought.  I had read all the horror stories about injuries and seen some of the cult-like stuff about it.  However, a good friend of mine from college was a coach, and pretty much called me out on it…she was right…I was judging something based on others’ opinions and not my own (which is WAY outta character for me).  So…I went to my first Crossfit event…the Women’s Throwdown in Austin, TX (where I live) held by Woodward Crossfit in July of 2013.  It was pretty awesome…it wasn’t full of muscle heads yelling at each other or insane paleo-eating nut jobs trying to outshine one another…it was just hardworking athletes pushing themselves as hard as they could…for a great cause (it’s a charity event)…and having a lot of fun.  I planned to only stay for a little bit, but ended up staying the whole damn day! Long story short…I joined Woodward Crossfit a few weeks after (the delay was because I was determined to complete my last Insanity round, plus I had to go on a business trip), which was around the week of my birthday (Aug 22).

Fast forward a little over a year, with a recently added 21-day Fitness Challenge by David King (who also goes to the same gym), and as of today I’m down about 43lbs (212), with a huge reduction in body fat percentage.  I don’t have the starting or current percentage, but let’s just say all 43lbs lost was fat, and I’ve gained a good amount of muscle in the last year as well…which is why the line flattened a bit before I kicked it up another notch with the 21-Day last month.

Now I’m not posting any more pictures, because that’s not the point of this post (but trust me…I look goooood :P).  My purpose is exactly what the subject says, priorities & perseverance.  What are you prioritizing in your life?  Are you putting too many people’s needs ahead of your own?  Are you happy as a result?  If you were like me, I already know the answer…but you don’t have to stay this way.  You only get one chance at this life, so make the most out of it.  Make the choice to put your happiness first, and I don’t mean selfishly…that’s called pleasure.  You’re happier when your loved ones are doing well and happy…you’re happier when you have friends who like you and that you can depend on…you’re happier when you kick ass at work…you’re happier when you kill it on the basketball court (or whatever activity you like).  Make the decision to be happy, set your goals, then persevere until you attain them…you will stumble along the way…and there will be those around you who either purposely or unknowingly discourage you, but stay focused…it’s not their life…it’s yours.  And when it gets really hard…just remember the wise words of Stuart Smalley.


Ronnie Tucker: Everything You Need to Know About Meizu MX4, the Upcoming Ubuntu Phone – Gallery

Planet Ubuntu - Thu, 2014-09-18 05:04

The new Ubuntu Touch operating system from Canonical will power the new Meizu MX4 phone and it will be out in December, according to the latest information posted by the Chinese company. We now take a closer look at this new phone to see how it will hold up with an Ubuntu experience.

Canonical hasn’t provided any kind of information about a timetable for the launch of the new Ubuntu phone from Meizu, and even the information that we have right now was initially posted on an Italian blog of the Chinese company. Basically, no one is saying anything officially, but that’s not really the point.

The new Meizu MX4 was announced just a couple of weeks ago and many Ubuntu users have asked themselves if this is the phone that will eventually feature the upcoming Ubuntu Touch. It looks like that is the case, so we now take a closer look at this powerful handset.

Source:

http://news.softpedia.com/news/Everything-You-Need-to-Know-About-Meizu-MX4-the-Upcoming-Ubuntu-Phone-458882.shtml

Submitted by: Silviu Stahie

Ayrton Araujo: Ubuntu shell overpowered

Planet Ubuntu - Thu, 2014-09-18 00:28

In order to be more productive in my environment, as a command-line-centric guy, I started using zsh as my default shell three years ago. For those who have never tried it, I would like to share my personal thoughts.

What are the main advantages?
  • Extended globbing: For example, *(.) matches only regular files, not directories, whereas a*z(/) matches directories whose names start with a and end with z. There are a bunch of other qualifiers;
  • Inline glob expansion: For example, type rm *.pdf and then hit tab. The glob *.pdf will expand inline into the list of .pdf files, which means you can change the result of the expansion, perhaps by removing from the command the name of one particular file you don’t want to rm;
  • Interactive path expansion: Type cd /u/l/b and hit tab. If there is only one existing path each of whose components starts with the specified letters (that is, if only one path matches /u/l/b*), then it expands in place. If there are two, say /usr/local/bin and /usr/libexec/bootlog.d, then it expands to /usr/l/b and places the cursor after the l. Type o, hit tab again, and you get /usr/local/bin;
  • Nice prompt configuration options: For example, my prompt is currently displayed as tov@zyzzx:/..cts/research/alms/talk. I prefer to see a suffix of my current working directory rather than have a really long prompt, so I have zsh abbreviate that portion of my prompt at a maximum length.
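The glob qualifiers above can be sketched as a few interactive commands. Note these are zsh-only and won't work in bash; the patterns shown are illustrative examples:

```shell
# zsh glob qualifiers -- run inside an interactive zsh session, not bash
ls *(.)       # regular files only
ls *(/)       # directories only
ls a*z(/)     # directories whose names start with "a" and end with "z"
ls *(.om[1])  # newest regular file (o = order by, m = modification time)
```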

Source: http://www.quora.com/What-are-the-advantages-and-disadvantages-of-using-zsh-instead-of-bash-or-other-shells

The Z shell is mainly praised for its interactive use: the prompt is more versatile, the completion is more customizable and often faster than bash-completion, and it is easy to write plugins. One of my favorite integrations is with git, for better visibility of the current repository status.

As zsh focuses on interactive use, it is a good idea to keep writing your shell scripts with a #!/bin/bash shebang for interoperability reasons. Bash is still the most mature and stable choice for shell scripting, in my view.

So, how to install and set up?

sudo apt-get install zsh zsh-lovers -y

zsh-lovers will provide to you a bunch of examples to help you understand better ways to use your shell.

To set zsh as the default shell for your user:

chsh -s /bin/zsh

Don't set zsh as the default shell for the whole system, or some things may stop working.

Two friends of mine, Yuri Albuquerque and Demetrius Albuquerque (brothers of a former hacker family =x), also recommended using https://github.com/robbyrussell/oh-my-zsh. Thanks for the tip.

How to install oh-my-zsh as a normal user?

curl -L http://install.ohmyz.sh | sh

My $ZSH_THEME is set to "bureau" in my $HOME/.zshrc. You can try "random" or other themes located inside $HOME/.oh-my-zsh/themes.
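For reference, the relevant lines of ~/.zshrc look something like the fragment below; the theme name and plugin list are just examples, and the oh-my-zsh installer generates the rest of the file for you:

```shell
# ~/.zshrc fragment -- illustrative; adjust theme and plugins to taste
export ZSH="$HOME/.oh-my-zsh"
ZSH_THEME="bureau"   # or "random" to get a different theme each session
plugins=(git)        # the git plugin shows branch/status info in the prompt
source "$ZSH/oh-my-zsh.sh"
```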

For command-not-found integration:

echo "source /etc/zsh_command_not_found" >> ~/.zshrc

If you don't have the command-not-found package:

sudo apt-get install command-not-found -y

And if you use Ruby under RVM, I also recommend reading this:
http://rvm.io/integration/zsh

Happy hacking :-)

Stuart Langridge: Responsive Dummies

Planet Ubuntu - Wed, 2014-09-17 13:11

After Remy “Remington” Sharp and Bruce “Bruce” Lawson published Introducing HTML5 in 2010, the web development community have been eager to see what they’ll turn their talents to next.1 Now their new book is out, Responsive Design for Dummies.

It’s… got its good points and its bad points. As the cover proudly proclaims, they fully embrace the New World Order of delivering essential features via Web Components. I particularly liked their demonstration of how to wrap a whole site inside a component, thus making your served HTML just be <bruces-site> and so saving you bandwidth2. Their recommendation that Flickr and Facebook use this approach to stop users stealing images may be the best suggestion for future-proofing the web that we’ve heard in 2014 so far. The sidebar on how to use this approach and hash-bang JavaScript URLs together ought to become the new way that we build everything, and I’m eager to see libraries designed for slow connections and accessibility such as Angular.js adopt the technique.

Similarly, the discussion of how Service Workers can deliver business advantages on the Apple iWatch was welcome, particularly given the newness of the release. It’s rare to see a book this up-to-date and this ready to engage with driving the web forward. Did Bruce and Remy get early access to iWatch prototypes or something? I am eager to start leveraging these techniques with my new startup3.

It’s not all perfect, though. I think that devoting three whole chapters to a Dawkins-esque hymn of hatred for everyone who opposed the <picture> element was a bit more tactless than I was hoping for. You won, chaps, there’s no need to rub it in.4

I’d also like to see, if I’m honest, ideas for when breakpoints are less appropriate. I appreciate that the book comes with a free $500 voucher for Getty Images, but after, at Bruce and Remy’s recommendation, I downloaded separate images for breakpoints at 17px, 48px, 160px, 320px, 341px, 600px, 601px, 603px, 631px, 800px, 850px, 900px, 1280px, 2560px, and 4200px for retina Firefox OS devices, I only had $2.17 left to spend and my server had run out of disc space. Even after using their Haskell utility to convert the images to BMP and JPEG2000 formats I still only score 13.6% on the Google Pagespeed test, and my router has melted. Do better next time, chaps.

Nonetheless, despite these minor flaws, and obvious copy-editing flubs such as “responsive” being misspelled on the cover itself5, I’d recommend this book. Disclaimer: I know both the authors biblically personally, and while Bruce has indeed promised me “a night to remember” for a positive review, that has not affected at all my judgement of this book as the most important and seminal work in the Web field since Kierkegaard’s “Sarissa.js Tips and Tricks”.

Go and buy it. It’s so popular that it might actually be hard to find a copy, but if your bookseller doesn’t have it, you should shout at them.

  1. other than inappropriate swimwear, obviously
  2. I also liked their use of VML and HTML+TIME in a component
  3. it’s basically Uber for pie fillings
  4. although if you don’t rub it in it’ll stain the mankini
  5. clearly it was meant to say “ahahaha responsive design, what evaaaaar”, but maybe that didn’t fit

Mattia Migliorini: Windows: How to Solve Application Error 0xc0000142 and 0xc0000005

Planet Ubuntu - Wed, 2014-09-17 13:07

Windows applications sometimes fail to load. But why? Windows won’t tell you; it will instead show a generic and pointless “Application Error” message. Inside this message you will read something like this:

The application was unable to start correctly (0xc0000142). Click OK to close the application.

The only thing you can do here is close the application and search on the Internet for that cryptic error code. And maybe it’s the reason why you are reading this post.
It’s not that easy to find a solution to this problem, but I found it thanks to Up and Ready and want to share it with you.

The problem

Windows tells you that the application was unable to start. You can try a hundred times, but the error will not magically solve itself, because it is not random. The problem is that the DLL that launches the application is unsigned or its digital signature is no longer valid. And it’s not your fault; maybe you just downloaded the program from the official site.

The solution

To solve the Application Error you need an advanced Windows Sysinternals Tool called Autoruns for Windows. You can download it from the official website.


Extract the archive you downloaded, launch autoruns.exe and go to the AppInit tab, which lists all the DLLs on your computer that are unsigned or whose digital signatures are no longer valid. Right-click each of them, one at a time, go to Properties and rename them. After renaming each one, try launching the application again to find the problematic DLL.

If the previous method didn’t solve the application error, right click on the following entry:

HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Windows\AppInit_Dlls

and click on Jump to entry…

A new window opens: it’s the Registry Editor. Double-click LoadAppInit_DLLs and change the value from 1 to 0. Click OK to confirm and exit. Now launch the affected program and it’ll start.
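The same registry change can be scripted with reg.exe from an elevated Command Prompt. This is a sketch of the edit described above, not a substitute for understanding it; back up the key before changing it, since AppInit DLLs affect every process that loads user32.dll:

```shell
:: Inspect the current value first (Windows cmd, run as Administrator)
reg query "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Windows" /v LoadAppInit_DLLs

:: Set LoadAppInit_DLLs to 0 so AppInit DLLs are no longer loaded at startup
reg add "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Windows" /v LoadAppInit_DLLs /t REG_DWORD /d 0 /f
```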

Note: some applications may change that value back to 1 after they get launched!

The post Windows: How to Solve Application Error 0xc0000142 and 0xc0000005 appeared first on deshack.

Ronnie Tucker: Torvalds says he has no strong opinions on systemd

Planet Ubuntu - Wed, 2014-09-17 05:03

Linux creator Linus Torvalds is well-known for his strong opinions on many technical things. But when it comes to systemd, the init system that has caused a fair degree of angst in the Linux world, Torvalds is neutral.

“When it comes to systemd, you may expect me to have lots of colourful opinions, and I just don’t,” Torvalds told iTWire in an interview. “I don’t personally mind systemd, and in fact my main desktop and laptop both run it.”

Source:

http://www.itwire.com/business-it-news/open-source/65402-torvalds-says-he-has-no-strong-opinions-on-systemd

Submitted by: Sam Varghese

Ubuntu Kernel Team: Kernel Team Meeting Minutes – September 16, 2014

Planet Ubuntu - Tue, 2014-09-16 17:15
Meeting Minutes

IRC Log of the meeting.

Meeting minutes.

Agenda

20140916 Meeting Agenda


Release Metrics and Incoming Bugs

Release metrics and incoming bug data can be reviewed at the following link:

  • http://people.canonical.com/~kernel/reports/kt-meeting.txt


Status: Utopic Development Kernel

The Utopic kernel remains based on a v3.16.2 upstream stable kernel and
is uploaded to the archive, ie. linux-3.16.0-15.21. Please test and let
us know your results.
I’d also like to point out that our Utopic kernel freeze date is about 3
weeks away on Thurs Oct 9. Please don’t wait until the last minute to
submit patches needing to ship in the Utopic 14.10 release.
—–
Important upcoming dates:
Mon Sep 22 – Utopic Final Beta Freeze (~1 week away)
Thurs Sep 25 – Utopic Final Beta (~1 week away)
Thurs Oct 9 – Utopic Kernel Freeze (~3 weeks away)
Thurs Oct 16 – Utopic Final Freeze (~4 weeks away)
Thurs Oct 23 – Utopic 14.10 Release (~5 weeks away)


Status: CVE’s

The current CVE status can be reviewed at the following link:

http://people.canonical.com/~kernel/cve/pkg/ALL-linux.html


Status: Stable, Security, and Bugfix Kernel Updates – Trusty/Precise/Lucid

Status for the main kernels, until today (Sept. 16):

  • Lucid – verification & testing
  • Precise – verification & testing
  • Trusty – verification & testing

    Current opened tracking bugs details:

  • http://kernel.ubuntu.com/sru/kernel-sru-workflow.html

    For SRUs, SRU report is a good source of information:

  • http://kernel.ubuntu.com/sru/sru-report.html

    Schedule:

    cycle: 29-Aug through 20-Sep
    ====================================================================
    29-Aug Last day for kernel commits for this cycle
    31-Aug – 06-Sep Kernel prep week.
    07-Sep – 13-Sep Bug verification & Regression testing.
    14-Sep – 20-Sep Regression testing & Release to -updates.


Open Discussion or Questions? Raise your hand to be recognized

No open discussion.

Elizabeth K. Joseph: Ubuntu at Fossetcon 2014

Planet Ubuntu - Tue, 2014-09-16 17:01

Last week I flew out to the east coast to attend the very first Fossetcon. The conference was on the smaller side, but I had a wonderful time meeting up with some old friends, meeting some new Ubuntu enthusiasts and finally meeting some folks I’ve only communicated with online. The room layout took some getting used to, but the conference staff was quick to put up signs and direct attendees the right way, leading to a pretty smooth conference experience.

On Thursday the conference hosted a “day zero” that had training and an Ubucon. I attended the Ubucon all day, which kicked off with Michael Hall doing an introduction to the Ubuntu on Phones ecosystem, including Mir, Unity8 and the telephony features that needed to be added to support phones (voice calling, SMS/MMS, cell data, SIM card management). He also talked about the improved developer portal with more resources aimed at app developers, including the Ubuntu SDK and simplified packaging with click packages.

He also addressed the concern of many about whether Ubuntu could break into the smartphone market at this point, arguing that it’s a rapidly developing and changing market, with every current market leader only having been there for a handful of years, and that new ideas are what it takes to win. Canonical feels that convergence between phone and desktop/laptop gives Ubuntu a unique selling point, and that users will like it because of its intuitive design, with lots of swiping and scrolling actions that give apps the most screen space possible. It was interesting to hear that partners/OEMs can offer operator differentiation as a layer without fragmenting the actual operating system (something that Android struggles with), leaving the core operating system independently maintained.

This was followed up by a more hands on session on Creating your first Ubuntu SDK Application. Attendees downloaded the Ubuntu SDK and Michael walked through the creation of a demo app, using the App Dev School Workshop: Write your first app document.

After lunch, Nicholas Skaggs and I gave a presentation on 10 ways to get involved with Ubuntu today. I had given a “5 ways” talk earlier this year at the SCaLE in Los Angeles, so it was fun to do a longer one with a co-speaker and have his five items added in, along with some other general tips for getting involved with the community. I really love giving this talk, the feedback from attendees throughout the rest of the conference was overwhelmingly positive, and I hope to get some follow-up emails from some new contributors looking to get started. Slides from our presentation are available as pdf here: contributingtoubuntu-fossetcon-2014.pdf


Ubuntu panel, thanks to Chris Crisafulli for the photo

The day wrapped up with an Ubuntu Q&A Panel, which had Michael Hall and Nicholas Skaggs from the Community team at Canonical, Aaron Honeycutt of Kubuntu and myself. Our quartet fielded questions from moderator Alexis Santos of Binpress and the audience, on everything from the Ubuntu phone to challenges of working with such a large community. I ended up drawing from my experience with the Xubuntu community a lot in the panel, especially as we drilled down into discussing how much success we’ve had coordinating the work of the flavors with the rest of Ubuntu.

The next couple of days brought Fossetcon proper, which I’ll write about later. The Ubuntu fun continued though! I was able to give away 4 copies of The Official Ubuntu Book, 8th Edition, which I signed, and got José Antonio Rey to sign as well since he had joined us for the conference from Peru.

José ended up doing a talk on Automating your service with Juju during the conference, and Michael Hall had the opportunity to give a talk on Convergence and the Future of App Development on Ubuntu. The Ubuntu booth also looked great and was one of the most popular of the conference.

I really had a blast talking to Ubuntu community members from Florida, they’re a great and passionate crowd.

Ubuntu LoCo Council: New SubLoCo Policy

Planet Ubuntu - Tue, 2014-09-16 15:24

Hi, after a lot of work, thinking and talking about the problem of the LoCo Organization and the SubLoCos, we came up with the following policy:

  • Each team will be a country (or state in the United States). We will call this a ‘LoCo’.
  • Each LoCo can have sub-teams. These sub-teams will be created at the will and need of each LoCo.
  • A LoCo may have sub-teams or not have sub-teams.
  • In the event a LoCo does have sub-teams, a Team Council needs to be created.
  • A Team Council is composed of at least one member from each sub-team.
  • The members that will be part of the Team Council will be chosen by other current members of the team.
  • The Team Council will have the power to make decisions regarding the LoCo.
  • The Team Council will also have the power to request partner items, such as conference and DVD packs.
  • The LoCo Council will only recognize one team per country (or state in the United States). This is the team that will be in the ~locoteams team in Launchpad.
  • In the event a LoCo wants to go through the verification process, the LoCo will go through it, and not individual sub-teams.
  • LoCos not meeting the criteria of country/state teams will be denied verification.
  • In the event what is considered a sub-team wants to be considered a LoCo, it will need to present a request to the LoCo Council.
  • The LoCo Council will provide a response, which is, in no way, related to verification. The LoCo will still have to apply for verification if wanted.

We encourage the LoCo teams to see if this new form of organization fits you; if so, please start forming sub-teams as you find useful. If a team needs help with this or anything else, contact us. We are here to help!

The Fridge: Ubuntu Weekly Newsletter Issue 383

Planet Ubuntu - Mon, 2014-09-15 23:51

Welcome to the Ubuntu Weekly Newsletter. This is issue #383 for the week September 8 – 14, 2014, and the full version is available here.

In this issue we cover:

This issue of the Ubuntu Weekly Newsletter is brought to you by:

  • Elizabeth K. Joseph
  • Jose Antonio Rey
  • And many others

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!

Except where otherwise noted, content in this issue is licensed under a Creative Commons Attribution ShareAlike 3.0 License.
