The Beta version of SteamOS, the Debian-based distribution developed by Valve for its hybrid PC / console, has just received an update with numerous new packages.
Valve has two builds for SteamOS. One is a stable version (sort of) and the other one is a Beta (Alchemist). The two versions are not all that different from one another, but the Valve developers are using the Beta release to test some of the new updates before they hit the stable branch.
This is just the Beta version of SteamOS, and not all of the packages included are stable. It will take a while until all these changes are added to the Stable branch. The system requirements for SteamOS haven’t changed and have been pretty much the same since the beginning: an Intel or AMD 64-bit capable processor, 4GB or more memory, a 250GB or larger disk, an NVIDIA, Intel, or AMD graphics card, and a USB port or DVD drive for installation. Check the official announcement for more details about this release.
Submitted by: Silviu Stahie
Noticeably absent from the trial and much of the media attention are the phone companies. Did they know their networks could be so systematically abused? Did they care?
In any case, the public has never been fully informed about how phones have been hacked. Speculation has it that phone hackers were guessing PINs for remote voicemail access, typically trying birthdates and weak PINs like 0000 or 1234. There is more to it than that.
Those in the industry know that there are additional privacy failings in mobile networks, especially the voicemail service. It is not just in the UK either.
There are various reasons for not sharing explicit details on a blog like this and comments concerning such techniques can't be accepted.
Nonetheless, there are some points that do need to be made:
- it is still possible for phones, especially voicemail, to be hacked on demand
- an attacker does not need expensive equipment nor do they need to be within radio range (or even the same country) as their target
- the attacker does not need to be an insider (phone company or spy agency employee)
The bottom line is that the only way to prevent voicemail hacking is to disable the phone's voicemail service completely. Voicemail is not really necessary given that most phones support email now. For those who feel they need it, consider running the voicemail service on your own private PBX using free software like Asterisk or FreeSWITCH. Some Internet telephony service providers also offer third-party voicemail solutions that are far more secure than those default services offered by mobile networks.
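For instance, a self-hosted mailbox in Asterisk takes only a few lines of configuration. This is a minimal sketch: the mailbox number, PIN, name, and email below are made-up examples, and a dialplan entry is still needed to route unanswered calls into VoiceMail():

```
; voicemail.conf -- minimal example mailbox
[general]
format = wav

[default]
; mailbox => PIN,display name,email for message delivery
6001 => 8462,Example User,user@example.com
```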
To disable voicemail, simply do two things:
- send a letter to the phone company telling them you do not want any voicemail box in their network
- in the mobile phone, select the menu option to disable all diversions, or manually disable each diversion one by one (e.g. disable forwarding when busy, disable forwarding when not answered, disable forwarding when out of range)
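As an illustration, the standard GSM supplementary service codes below will usually clear every diversion from the handset's dialler. Treat these as a sketch: support varies by network and country, so verify with your operator that the diversions are really gone.

```
##002#   erase and deactivate all call diversions
##21#    erase unconditional call forwarding
##67#    erase forwarding when busy
##61#    erase forwarding when not answered
##62#    erase forwarding when out of range
```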
Hello and welcome to Ubuntu GNOME Community Guide for Newcomers
If you are interested in joining the Ubuntu GNOME community as a volunteer to help, or you have already joined and are a newcomer to the Ubuntu GNOME community, then this simple guide is for you.
Three Simple Steps:
- First, you need to read Ubuntu GNOME Community Wiki Page.
- If you require further details, here is a list of ALL Ubuntu GNOME Wiki Pages.
- If the above two steps were not enough, please Contact Us.
That is all you need to know and/or do if you are interested in joining the Ubuntu GNOME team, or if you have already joined but can’t easily find your way and need some help.
For those who would like even further details, here is our Getting Involved Guide. This guide will explain to you from A-Z how to get involved with Ubuntu GNOME.
As always, thank you for choosing and joining Ubuntu GNOME!
Ubuntu GNOME Leaders Board
A few interesting things happened after I got a macbook air.
Firstly, I got a lot of shit from my peers and friends about it. This was funny to me; nothing about it really bothered me, but I can see it becoming really tiresome at events like hackathons or conferences.
As a byproduct, there’s a strong feeling in the hardcore F/OSS world that Apple hardware is the incarnation of evil.
As a result of both of the above, hardcore F/OSS (and Distro hackers) don’t buy apple hardware.
Therefore, GNU/Linux is complete garbage on Apple hardware. Apple’s firmware bugs don’t help, but we’re BAD.
Some might ask why this is a big deal. The fact is, this is one of the most used platforms for Open Source development (note I used that term exactly).
Are we to damn these users to a nonfree OS because we want to maintain our purity?
I had to give back my Air, but I still have a Mac Mini that I’ve been using for testing bugs on OS X in code I have. Very soon, my Mac Mini will be used to help fix the common bugs in the install process.
Some things you can do:
- Consider not giving off an attitude to people with Apple hardware. Be welcoming.
- Consider helping to support your favorite distro on Apple hardware. Props to Fedora for doing such a great job; in particular, mjg59 and Peter Jones for all they do with it.
- Help me make Debian Apple installs one-click.
In : lp
Out: <launchpadlib.launchpad.Launchpad at 0x7f49ecc649b0>
In : lp.distributions
Out: <launchpadlib.launchpad.DistributionSet at 0x7f49ddf0e630>
In : lp.distributions['ubuntu']
Out: <distribution at https://api.launchpad.net/1.0/ubuntu>
In : lp.distributions['ubuntu'].display_name
In : lp.distributions['ubuntu'].summary
Out: 'Ubuntu is a complete Linux-based operating system, freely available with both community and professional support.'
In : import sys; print(sys.version)
3.4.1 (default, Jun 9 2014, 17:34:49)
There is not much yet, but it's a start. A python3 port of launchpadlib is coming soon. It has been attempted a few times before, and I am leveraging that work. Porting this stack has proven to be the most difficult python3 port I have ever done. But there is always python-libvirt that still needs porting ;-)
Some of the above exists only as merge proposals against launchpadlib & lazr.restfulclient, and requires modules not yet packaged in the archive. When trying it out, I'm still getting a lot of run-time asserts and hitting things that haven't been picked up by e.g. pyflakes3 and have not been unit-tested yet.
Following the success of our new stand design at MWC earlier this
year, we applied the same design principles to the Ubuntu stand at
last month's Mobile Asia Expo in Shanghai.
With increased floor space, compared to last year, and a new stand
location that was approachable from three key directions, we were
faced with a few new design challenges:
- How to effectively incorporate existing 7m wide banners into
the new 8m wide stand?
- How to make the stand open and approachable from three sides
while making optimum use of floor space and maintaining the
maximum amount of storage space possible?
- How to maintain our strong brand presence after any necessary
Proposed layout ideas
The final design utilised the maximum floor space and incorporated the
positioning of our bespoke demo pods, which proved successful at MWC,
along with strong branding featuring our folded paper background,
large graphics showcasing app and scope designs, and a new aisle
banner. The main stand banners were then positioned in an alternating
arrangement, aligned to the left and to the right above the stand.
This is my monthly summary of my free software related activities. If you’re among the people who made a donation to support my work (168.17 €, thanks everybody!), then you can learn how I spent your money. Otherwise it’s just an interesting status update on my various projects.

Debian LTS
After having put in place the infrastructure allowing companies to contribute financially to Debian LTS, I spent quite some time drafting the announcement of the launch of Debian LTS (on a suggestion of Moritz Mühlenhoff, who pointed out to me that there had been no such announcement yet).
I’m pretty happy with the result, because we managed to mention a commercial offer without generating any pushback from the community. The offer is (in my necessarily biased opinion) clearly in the interest of Debian, but the money still doesn’t go to Debian, so we took extra precautions. When I got in touch with the press officers, I included the Debian Project Leader in the discussion, and his feedback was very helpful in improving the announcement. He also officially “acked” the press release to give the press officers some confidence that they were doing the right thing.
Lucas also pushed me to seek public review of the draft press release, which I did. The discussion was constructive and the draft got further improved.
The news got widely relayed, but on the flip side, the part with the call for help got almost no attention from the press. Even Linux Weekly News skipped it!
On the Freexian side, we just crossed 10% of a full-time position (funded by 6 companies) and we are in discussion with a few other companies. But we’re far from our goal yet, so we will have to actively reach out to more companies. Do you know companies that are still running Debian 6 servers? If so, please send me the details (name + url + contact info if possible) at email@example.com so that I can get in touch and invite them to contribute to the project.

Distro Tracker
In the continuation of the Debian France game, I continued to work together with Joseph Herlant and Christophe Siraut on multiple improvements to distro tracker in order to prepare for its deployment on tracker.debian.org (which I just announced \o/).

Debian France
Since the Debian France game was over, I shipped the rewards. 5 books have been shipped to:
- Joseph Herlant and Christophe Siraut for their distro-tracker work
- Dylan Aissi for his help within the Debian Med team
- Samuel Dorsaz and Thomas Debesse for their work towards better support of Brother printers
I orphaned sql-ledger and made a last upload to change the maintainer to Debian QA (with a new upstream version).
After having been annoyed a few times by dch breaking my name in the changelog, I filed #750855 which got quickly fixed.
I disabled a broken patch in quilt to fix RC bug #751109.
I filed #751771 when I discovered an incorrect dependency on ruby-uglifier (while doing packaging work for Kali Linux).
I tested newer versions of ruby-libv8 on armel/armhf at the request of the upstream author, to whom I had reported those build failures (github ticket here).

Thanks
See you next month for a new summary of my activities.
Here’s a reminder about next Monday’s 7th of July Ubuntu HTML5 apps session in Barcelona.
At this free event, I’ll be presenting Ubuntu’s HTML5 development story, together with a live coding session and a Q&A round at the end. You’ll learn how to use the Ubuntu SDK and the UI toolkit to easily reuse your web skills to create stunning Ubuntu apps.
HTML5 is the other side of the coin of the Ubuntu app developer offering, where both web and native are first class citizens, offering a very flexible yet focused approach for application development. Teaming up with BeMyApp meetups, the session will start at 7 p.m. at Barcelona’s Mobile World Centre.
I look forward to seeing you there!
You may remember that last year I mentored a Google Summer of Code project whose aim was to replace our well known Package Tracking System with something more modern, usable by derivatives, and more easily hackable. The result of this project is a new Django-based software called Distro Tracker.
With the help of the Debian System Administrators, it’s now setup on tracker.debian.org!
This service is also managed by the Debian QA team. It’s deployed in /srv/tracker.debian.org/ (on ticharich.debian.org, a VM), if you want to verify something on the live installation, and it runs under the “qa” user (so members of the “qa-core” group can administer it).
That said, you can reproduce the setup on your workstation quite easily, just by checking out the git repository and applying this change:

--- a/distro_tracker/project/settings/local.py
+++ b/distro_tracker/project/settings/local.py
@@ -10,6 +10,7 @@ overrides on top of those type-of-installation-specific settings.
 from .defaults import INSTALLED_APPS
 from .selected import *
+from .debian import *
 ## Add your custom settings here
Speaking of contributing, the documentation includes a “Contributing” section to get you up and running, ready to do your first contribution!
Versions of the WordPress plugin Diagnostic Tool prior to 1.0.7 contain several vulnerabilities:
Persistent XSS in the Outbound Connections view. An attacker who is able to cause the site to request a URL containing an XSS payload will have this XSS stored in the database, and when an admin visits the Outbound Connections view, the payload will run. This can be trivially seen, for example, by running a query for http://localhost/<script>alert(/xss/)</script> on that page, then refreshing the page to see the content run, as the view is not updated in real time. This is CVE-2014-4183.
Reflected XSS in DNS resolver test page. When a reverse lookup is performed, the results of gethostbyaddr() are inserted into the DOM unescaped. An attacker who (mis-) configures a DNS server to send an XSS payload as a reverse lookup may be able to either trick the administrator into performing a lookup, or (more likely) use the CSRF vulnerability documented below to trigger the XSS.
AJAX handlers do not have any CSRF protection on them. This allows an attacker to trigger the server into sending test emails (low severity), performing DNS lookups (high severity when combined with the reflected XSS above), and requesting the loading of pages by the server (including URLs that contain XSS payloads, triggering the persistent XSS documented above). Additionally, the last two vulnerabilities could be used to trigger an information leak for WordPress servers that are behind a DDoS protection service (e.g., Cloudflare) or are being run as Tor hidden services, by forcing the server to request a page from the attacker's server or perform a DNS query against the attacker's DNS server, allowing the attacker to learn the real IP of the server hosting WordPress. This is CVE-2014-4182.
- 2014/06/15: Vulnerabilities discovered & reported to developers.
- 2014/06/30: Developers release Diagnostic Tool 1.0.7, fixing issues.
- 2014/07/04: Public disclosure.
We’re back with Season Seven, Episode Fourteen of the Ubuntu Podcast! Alan Pope, Mark Johnson, Tony Whitmore, and Laura Cowen are drinking tea and eating Foxes Ginger Crunch Creams biscuits in Studio L.
In this week’s show:
- We also discuss:
- We share some Command Line Lurve from commandlinefu.com:
mount | column -t
df | column -t
- And we read your feedback, including:
We’ll be back next week, so please send your comments and suggestions to: firstname.lastname@example.org
Join us on IRC in #uupc on Freenode
Leave a voicemail via phone: +44 (0) 203 298 1600, sip: email@example.com and skype: ubuntuukpodcast
Follow us on Twitter
Find our Facebook Fan Page
Follow us on Google+
But I was missing a search engine for Firefox...
Firefox Search Engine for Explain Shell
Download from here and save it into your Firefox profile folder like:
Enjoy it! :)
But that's coming in towards the end of the story. Let's start at the beginning.
What You're Going to See

This post is a bit long. Here are the sections I've divided it into:
- What You're Going to See
- Introducing ACME
- Categorizing Teams
- Category Example
- Calculating the Levenshtein Distance of Teams
- Sorting and Interpretation
However, you don't need to read the whole thing to get the main benefits. You can get the Cliff Notes version by reading the Premise, Categorizing Teams, Interpretation, and Conclusion sections.
Premise

Companies grow. Teams expand. If you're well-placed in your industry and providing in-demand services or products, this is happening to you. Individuals and small teams tend to deal with this sort of change pretty well. At an organizational level, however, this sort of change tends to have an impact that can bring a group down, or rocket it up to the next level.
Among the many issues faced by companies (or rapidly growing orgs within large companies) is this: "Our old structures, though comfortable, won't scale well with all these new teams and all the new hires joining our existing teams. How do we reorganize? Where do we put folks? Are there natural lines along which we can provide better management (and vision!) structure?"
The answer, of course, is "yes" -- but! It requires careful analysis and a deep understanding of every team in your org.
The remainder of this post will set up a scenario and then figure out how to do a re-org. I use a software engineering org as an example, but that's just because I have a long and intimate knowledge of such orgs and understand the ways in which one can classify their teams. These same methods could be applied to a Sales group, Marketing groups, etc., as long as you know the characteristics that define the teams of which these orgs are comprised.
Introducing ACME

ACME Corporation is the leading producer of some of the most innovative products of the 20th century. The CTO had previously tasked you, the VP of Software Development, with bringing this product line into the digital age -- and you did! Your great ideas for the updated suite are the new hotness that everyone is clamouring for. Subsequently, the growth of your teams has been fast and, dare we say, exponential.
More details on the scenario: your Software Development Group has several teams of engineers, all working on different products or services, each of which supports ACME Corporation in different ways. In the past 2 years, you've built up your org by an order of magnitude in size. You've started promoting and hiring more managers and directors to help organize these teams into sensible encapsulating structures. These larger groups, once identified, would comprise the whole Development Group.
Ideally, the new groups would represent some aspect of the company, software development, engineering, and product vision -- in other words, some sensible clustering of teams doing related work. How would you group the teams in the most natural way?
Simply dividing along language or platform lines may seem like the obvious answer, but is it the best choice? There are some questions that can help guide you in figuring this out:
- How do these teams interact with other parts of the company?
- Who are the stakeholders in feature development?
- Which sorts of customers does each team primarily serve?
ACME Software Development has grown the following teams, some of which focus on products, some on infrastructure, some on services, etc.:
- Digital Anvil Product Team
- Giant Rubber Band App Team
- Digital Iron Carrot Team
- Jet Propelled Unicycle Service Team
- Jet Propelled Pogo Stick Service Team
- Ultimatum Dispatcher API Team
- Virtual Rocket Powered Roller Skates Team
- Operations (release management, deployments, production maintenance)
- QA (testing infrastructure, CI/CD)
- Community Team (documentation, examples, community engagement, meetups, etc.)
Early SW Dev team hacking the ENIAC

Categorizing Teams

Each of those teams started with 2-4 devs hacking on small skunkworks projects. They've now blossomed to the extent that each team has significant sub-teams working on new features and prototyping for the product they support. These large teams now need to be characterized using a method that will allow them to be easily compared. We need the ability to see how closely related one team is to another, across many different variables. (In the scheme outlined below, we end up examining 50 bits of information for each team.)
Keep in mind that each category should be chosen such that it would make sense for teams categorized similarly to be grouped together. A counter example might be "Team Size"; you don't necessarily want all large teams together in one group, and all small teams in a different group. As such, "Team Size" is probably a poor category choice.
Here are the categories which we will use for the ACME Software Development Group:
- Implementation Focus
- Supported OS
- Deployment Type
- License Type
- Industry Segment
- Customer Type
- Corporate Priority
Category Example

(Things are going to get a bit more technical at this point; for those who care more about the outcomes than the methods used, feel free to skip to the section at the end: Sorting and Interpretation.)
In all cases, we will encode these values as binary digits -- this allows us to very easily compare teams using Levenshtein distance, since the total of all characteristics we are filtering on can be represented as a string value. An example should illustrate this well.
(The Levenshtein distance between two words is the minimum number of single-character edits -- such as insertions, deletions or substitutions -- required to change one word into the other. It is named after Vladimir Levenshtein, who defined this "distance" in 1965 when exploring the possibility of correcting deletions, insertions, and reversals in binary codes.)
Let's say the Software Development Group supports the following languages, with each one assigned a binary value:
- LFE - #b0000000001
- Erlang - #b0000000010
- Elixir - #b0000000100
- Ruby - #b0000001000
- Python - #b0000010000
- Hy - #b0000100000
- Clojure - #b0001000000
- Java - #b0010000000
- CoffeeScript - #b1000000000
We could then compare this to a team that used LFE, Hy, and Clojure (#b0001100001), which has a Levenshtein distance of 1 with the previous language category value. A team that used Ruby and Elixir (#b0000001100) would have a Levenshtein distance of 5 from the LFE/Hy/Clojure team (which makes sense: a total of 5 languages between the two teams, with no languages shared in common).
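To make the encoding concrete, here is a sketch of it in Python (the post's own code is LFE; these helpers are illustrative, not the post's actual functions):

```python
# Each language gets one bit; a team's languages are OR'd together and
# rendered as a fixed-width binary string, mirroring the #b... values above.
LANGS = {
    "LFE": 1 << 0, "Erlang": 1 << 1, "Elixir": 1 << 2, "Ruby": 1 << 3,
    "Python": 1 << 4, "Hy": 1 << 5, "Clojure": 1 << 6, "Java": 1 << 7,
    "CoffeeScript": 1 << 9,
}

def team_bits(languages, width=10):
    """Combine a team's language bits into one fixed-width binary string."""
    value = 0
    for lang in languages:
        value |= LANGS[lang]
    return format(value, "0{}b".format(width))

def levenshtein(a, b):
    """Minimum number of single-character edits turning a into b."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

print(team_bits(["LFE", "Hy", "Clojure"]))  # 0001100001
print(team_bits(["Ruby", "Elixir"]))        # 0000001100
print(levenshtein(team_bits(["LFE", "Hy", "Clojure"]),
                  team_bits(["Ruby", "Elixir"])))  # 5
```

The distance of 5 matches the hand count above: five differing bit positions, and no cheaper mix of insertions and deletions exists for these two strings.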
Calculating the Levenshtein Distance of Teams

As a VP who is keen on deeply understanding your teams, you have put together a spreadsheet with a break-down of not only the languages used by each team, but lots of other categories, too. For easy reference, you've put a "legend" for the individual category binary values at the bottom of the linked spreadsheet.
In the third table on that sheet, all of the values for each column are combined into a single binary string. This (or a slight modification of this) is what will be the input to your calculations. Needless to say, as a complete fan of LFE, you will be writing some Lisp code :-)
(If you would like to try the code out yourself while reading, and you have lfetool installed, simply create a new project and start up the REPL:

$ lfetool new library ld; cd ld && make-shell

That will download and compile the dependencies for you. In particular, you will then have access to the lfe-utils project -- which contains the Levenshtein distance functions we'll be using. You should be able to copy-and-paste functions, vars, etc., into the REPL from the Github gists.)
Let's create a couple of data structures that will allow us to more easily work with the data you collected about your teams in the spreadsheet:
We can use a quick copy and paste into the LFE REPL for two of those numbers to do a sanity check on the distance between the Community Team and the Digital Iron Carrot Team:
That result doesn't seem unreasonable, given that at a quick glance we can see both of these strings have many differences in their respective character positions.
It looks like we're on solid ground, then, so let's define some utility functions to more easily work with our data structures:
Now we're ready to roll; let's try sorting the data based on a comparison with one of the teams:
It may not be obvious at first glance, but what the levenshtein-sort function did for us is compare our "control" string to every other string in our data set, providing both the distance and the string that the control was compared to. The first entry in the results is our control string, and we see what we would expect: the Levenshtein distance with itself is 0 :-)
The result above is not very easily read by most humans ... so let's define a custom sorter that will take human-readable text and then output the same, after doing a sort on the binary strings:
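Here is a sketch of that sorting step in Python (the post's actual implementation is LFE, and the team bit strings below are made-up stand-ins, not the spreadsheet's real values):

```python
def levenshtein(a, b):
    """Minimum number of single-character edits turning a into b."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1,
                           prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

# Made-up teams and bit strings standing in for the spreadsheet's real data.
teams = {
    "Digital Iron Carrot Team":      "1100110001",
    "Ultimatum Dispatcher API Team": "1100110010",
    "Community Team":                "0011001100",
}

def levenshtein_sort(control, team_bits):
    """Compare the control team's string to every team's; nearest first."""
    control_bits = team_bits[control]
    return sorted((levenshtein(control_bits, bits), name)
                  for name, bits in team_bits.items())

for distance, name in levenshtein_sort("Digital Iron Carrot Team", teams):
    print(distance, name)
# The control team itself comes first, at distance 0.
```

A human-readable sorter like the one described next only needs to map names to bit strings before calling this and back again afterwards.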
(If any of that doesn't make sense, please stop in and say "hello" on the LFE mail list -- ask us your questions! We're a friendly group that loves to chat about LFE and how to translate from Erlang, Common Lisp, or Clojure to LFE :-) )
Sorting and Interpretation

Before we try out our new function, we should ponder which team will be compared to all the others -- the sort results will change based on this choice. Looking at the spreadsheet, we see that the "Digital Iron Carrot Team" (DICT) has some interesting properties that make it a compelling choice:
- it has stakeholders in Sales, Engineering, and Senior Leadership;
- it has a "Corporate Priority" of "Business critical"; and
- it has both internal and external customers.
Here we're making the request "Show me the sorted results of each team's binary string compared to the binary string of the DICT." Here are the human-readable results:
For a better visual on this, take a look at the second tab of the shared spreadsheet. The results have been applied to the collected data there, and then colored by major groupings. The first group shares these things in common:
- Lisp- and Python-heavy
- Middleware running on BSD boxen
- Mostly proprietary
- Externally facing
- Focus on apps and APIs
Next on the list is Operations and QA -- often a natural pairing, and this process bears out such conventional wisdom. These two are good candidates for a second group.
Things get a little trickier at the end of the list. Depending upon the number of developers in the Java-heavy Giant Rubber Band App Team, they might make up their own group. However, both that one and the next team on the list have frontend components written in Angular.js. They both are used internally and have Engineering as a stakeholder in common, so let's go ahead and group them.
The next two are cloud-deployed Finance APIs running on the Erlang VM. These make a very natural pairing.
Which leaves us with the oddball: the Community Team. The Levenshtein distance for this team is the greatest of all the teams ... but don't be misled. Because it has something in common with all teams (the Community Team supports every product with docs, example code, Sales and TAM support, evangelism for open source projects, etc.), it will have many differing bits with each team. This team really should be in a group all its own, so that the structure represents reality: all teams depend upon the Community Team. A good case could also probably be made for having the manager of this team report directly up to you.
The other groups should probably have directors that the team managers report to (keeping in mind that the teams have grown to anywhere from 20 to 40 per team). The director will be able to guide these teams according to your vision for the Software Group and the shared traits/common vision you have uncovered in the course of this analysis.
Let's go back to the Community Team. Perhaps in working with them, you have uncovered a hidden fact: the community interactions your devs have are seriously driving market adoption through some impressive and passionate service and open source docs+evangelism. You are curious how your teams might be grouped if sorted from the perspective of the Community Team.
Let's find out!
As one might expect, most of the teams remain grouped in the same way ... the notable exception being the split-up of the Anvil and Rubber Band teams. Mostly no surprises, though -- the same groupings persist in this model.
To be fair, if this is something you'd want to fully explore, you should bump the "Corporate Priority" for the Community Team much higher, recalculate its overall bits, regenerate your data structures, and then re-sort. It may not change too much in this case, but you'd be applying consistent methods, and that's definitely the right thing to do :-) You might even see the Anvil and Rubber Band teams get back together (left as an exercise for the reader).
As a last example, let's throw caution and good sense to the wind and get crazy. You know, like the many times you've seen bizarre, anti-intuitive re-orgs done: let's do a sort that compares a team of middling importance and a relatively low corporate impact with the rest of the teams. What do we see then?
This ruins everything. Well, almost everything: the only group that doesn't get split up is the middleware product line (Jet Propelled and Iron Carrot). Everything else suffers from a bad re-org.
If you were to do this because a genuine change in priority had occurred, where the Giant Rubber Band App Team was now the corporate leader/darling, then you'd need to recompute the bit values and do re-sorts. Failing that, you'd just be falling into a trap that has beguiled many before you.
Conclusion

If there's one thing that this exercise should show you, it's this: applying tools and analyses from one field to fresh data in another -- completely unrelated -- field can provide pretty amazing results that turn mystery and guesswork into science and planning.
If we can get two things from this, the other might be: knowing the parts of the system may not necessarily reveal the whole (cf. Complex Systems), but it may provide you with the data that lets you better predict emergent behaviours and identify patterns and structure where you didn't see them (or even think to look!) before.
I’ve previously blogged about how I sometimes set up a webcam to take pictures and turn them into videos. I thought I’d update that here with something new I’ve done: fully automated time lapse videos on Ubuntu. Here’s what I came up with:
(apologies for the terrible music, I added that from a pre-defined set of options on YouTube)
(I quite like the cloud that pops into existence at ~27 seconds in)
Over the next few weeks there’s an Air Show where I live and the skies fill with all manner of strange aircraft. I’m usually working so I don’t always see them as they fly over, but usually hear them! I wanted a way to capture the skies above my house and make it easily available for me to view later.
So my requirements were basically this:-
- Take pictures at fairly high frequency – one per second – the planes are sometimes quick!
- Turn all the pictures into a time lapse video – possibly at one hour intervals
- Upload the videos somewhere online (YouTube) so I can view them from anywhere later
- Delete all the pictures so I don’t run out of disk space
- Automate it all
I’ve already covered this really, but for this job I have tweaked the .webcamrc file to take a picture every second, only save images locally & not to upload them. Here’s the basics of my .webcamrc:

[ftp]
dir = /home/alan/Pictures/webcam/current
file = webcam.jpg
tmp = uploading.jpeg
debug = 1
local = 1

[grab]
device = /dev/video0
text = popeycam %Y-%m-%d %H:%M:%S
fg_red = 255
fg_green = 0
fg_blue = 0
width = 1280
height = 720
delay = 1
brightness = 50
rotate = 0
top = 0
left = 0
bottom = -1
right = -1
quality = 100
once = 0
archive = /home/alan/Pictures/webcam/archive/%Y/%m/%d/%H/snap%Y-%m-%d-%H-%M-%S.jpg
Key things to note: "delay = 1" gives us an image every second. The archive directory is where the images will be stored, in sub-folders for easy management and later deletion. That's it; put that in the home directory of the user taking pictures and then run webcam. Watch your disk space get eaten up.

Making the video
This is pretty straightforward and can be done in various ways. I chose to do two-pass x264 encoding with mencoder. In this snippet we take the images from one hour – in this case midnight to 1AM on 2nd July 2014 – from /home/alan/Pictures/webcam/archive/2014/07/02/00 and make a video in /home/alan/Pictures/webcam/2014070200.avi and a final output in /home/alan/Videos/webcam/2014070200.avi, which is the one I upload.

mencoder "mf:///home/alan/Pictures/webcam/archive/2014/07/02/00/*.jpg" -mf fps=60 -o /home/alan/Pictures/webcam/2014070200.avi -ovc x264 -x264encopts direct=auto:pass=1:turbo:bitrate=9600:bframes=1:me=umh:partitions=all:trellis=1:qp_step=4:qcomp=0.7:direct_pred=auto:keyint=300 -vf scale=-1:-10,harddup

mencoder "mf:///home/alan/Pictures/webcam/archive/2014/07/02/00/*.jpg" -mf fps=60 -o /home/alan/Pictures/webcam/2014070200.avi -ovc x264 -x264encopts direct=auto:pass=2:bitrate=9600:frameref=5:bframes=1:me=umh:partitions=all:trellis=1:qp_step=4:qcomp=0.7:direct_pred=auto:keyint=300 -vf scale=-1:-10,harddup -o /home/alan/Videos/webcam/2014070200.avi

Upload videos to YouTube
The project youtube-upload came in handy here. It’s pretty simple, with a bunch of command line parameters – most of which should be pretty obvious – to upload to YouTube from the command line. Here’s a snippet with some credentials redacted.

python youtube_upload/youtube_upload.py --email=########## --password=########## --private --title="2014070200" --description="Time lapse of Farnborough sky at 00 on 02 07 2014" --category="Entertainment" --keywords="timelapse" /home/alan/Videos/webcam/2014070200.avi
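The hour stamp appears three times in that command (title, description, and file name), so it helps to derive all three from a single value. Here’s a minimal sketch using only POSIX parameter expansion; the variable names and helper structure are mine, not from the original script.

```shell
#!/bin/sh
# Derive the title and description used in the upload command above
# from one YYYYMMDDHH stamp, so all three stay in sync.
STAMP=2014070200          # normally computed with date; fixed for this sketch
Y=${STAMP%??????}; R=${STAMP#????}    # year, remainder (MMDDHH)
M=${R%????}; R=${R#??}                # month
D=${R%??}; H=${R#??}                  # day, hour
TITLE=$STAMP
DESC="Time lapse of Farnborough sky at $H on $D $M $Y"
echo "$TITLE"
echo "$DESC"
```

With the real script, STAMP would come from the date command and the same variables would feed --title, --description, and the .avi path.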
I have set the videos all to be private for now, because I don’t want to spam any subscriber with a boring video of clouds every hour. If I find an interesting one I can make it public. I did consider making a second channel, but the youtube-upload script (or rather the YouTube API) doesn’t seem to support specifying a different channel from the default one. So to work around this I’d have to switch to a different default channel, and then make them all public by default, maybe.
In addition, YouTube sends me a patronising “Well done Alan” email whenever a video is uploaded, so I know when it breaks: I stop getting those mails.
This is easy: I just rm the /home/alan/Pictures/webcam/archive/2014/07/02/00 directory once the upload is done. I don’t bother to check whether the video uploaded okay first, because even if it fails to upload I still want to delete the pictures, or my disk will fill up. I already have the videos archived, so I can upload those later if the script breaks.

Automate it all
webcam runs constantly in a ‘screen’ window, so that part is easy. It has been known to crash now and then, so I could detect when it dies and re-spawn it, maybe. I’ll get to that when it happens.
I created a cron job which runs at 10 minutes past the hour and collects all the images from the previous hour.

10 * * * * /home/alan/bin/encode_upload.sh
I learned about the useful “1 hour ago” option to the GNU date command. This lets me pick up the images from the previous hour and deals with all the silly calculations needed to figure out what the previous hour was.
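A quick sketch of that trick, assuming GNU date (the -d relative-date option is a GNU extension, not POSIX); the paths match the archive and output locations shown earlier:

```shell
#!/bin/sh
# The previous hour, formatted both as the archive sub-directory the
# images were saved into and as the stamp used for the video filename.
HOUR_DIR=$(date -d '1 hour ago' +%Y/%m/%d/%H)
STAMP=$(date -d '1 hour ago' +%Y%m%d%H)
echo "/home/alan/Pictures/webcam/archive/$HOUR_DIR"
echo "/home/alan/Videos/webcam/$STAMP.avi"
```

This handles the awkward edge cases (midnight, month and year boundaries) for free, since date does the calendar arithmetic.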
Here (on github) is the final script. Don’t laugh.
It was less than a month ago that we announced crossing the 10,000 users milestone for Ubuntu phones and tablets, and we’ve already reached another: 100,000 app downloads!
The new Ubuntu store, used by phones, tablets, and soon the desktop as well, provides app developers with some useful statistics about how many times their app was downloaded, which version was downloaded, and what country the download originated from. This is very useful, as it lets the developer gauge how many users they currently have for their app, and how quickly those users are updating to new versions. One side-effect of these statistics is that we can see how many total downloads there have been across all of the apps in the store, and this week we reached (and quickly passed) the 100,000th download.

Users
We’re getting close to having Ubuntu phones go on sale from our partners at Bq and Meizu, but there are still no devices on the market that shipped with Ubuntu. This means we’ve reached this milestone solely thanks to developers and enthusiasts who have installed Ubuntu on one of their own devices (probably a Nexus device) or on the device emulator.
The continued growth in the download numbers validates the earlier milestone of 10,000 users: a large number of them are clearly still using Ubuntu on their device (or emulator) and keeping their apps up to date (the number represents both new app installs and updates). This means that not only are people trying Ubuntu, many of them are sticking with it too. Yet another datapoint in support of this is the 600 new unique users who have used the store since the last milestone announcement.
To supply all of these users with the apps they want, we’re continuing to build our community of app developers around Ubuntu. The first of these have already received their limited edition t-shirts, and are listed on the Ubuntu Pioneers page of the developer portal.
There is still time to get your app published, and claim your place on that page and your t-shirt, but they’re filling up fast so don’t delay. Go to our Developer Portal and get started today, you could be only a few hours away from publishing your first app in the store!
Today the Wiki Sub-Team of the Ubuntu Doc Team had a mini-sprint on the team’s homepage (https://wiki.ubuntu.com/DocumentationTeam). We cleaned it up by removing some of the unwanted sub-pages and rewriting some of the pages to make it easier for new people to get involved with us, among other improvements.
Hopefully this step will help people understand our team’s workflow.
When users visit the Ubuntu download page on our main website, ubuntu.com, they are given the option to make a donation to show their appreciation and help further the work that goes into making and distributing Ubuntu. Donations ear-marked for “Community projects” are made available to members of our community, and for events that our members attend.
In keeping with our core principles of openness and transparency, the way these community funds were spent is detailed in a regular report and made available for everybody to see. These requests for funding are an opportunity for us to invest in our community, and every dollar spent has benefited the Ubuntu project through improved contributions, organization and outreach.
Once again everybody on this list deserves a big thanks for their continued contributions to Ubuntu. The funds they were given may cover immediate expenses, but they in no way cover all of the time, energy, and passion that these contributors have put into our community.
The latest funding report, using the same format as the previous one, can be viewed here.
Published on behalf of Michael Hall of the Community Team
Last week in Portland, Oregon, we had our second release management team work week of the year focusing on our goals and work ahead in Q3 of 2014. I was really excited to meet the new manager of the team, our new intern and two other team members I had not yet met.
It was quite awesome to have face-to-face time with the team to knock out some discussions and work that required the kind of collaboration a work week offers. What I enjoyed most was discussing the current success of the Early Feedback Release Manager role I have held on the team, along with ideas for improving the pathways for future contributors while also creating new opportunities and a new pathway for me to continue to grow.
One thing unique about this work week is that we also took some time to participate in Open Source Bridge, a local conference that Mozilla happened to be sponsoring at The Eliot Center, and that Lukas Blakk from our team was speaking at. Lukas used her keynote talk to introduce the awesome project she is working on, the Ascend Project, which she will be piloting soon in Portland.
While this was a great work week and I think we accomplished a lot, I hope future work weeks are either out of town, or that I can block off other life obligations to spend more time on-site, as I did have to drop off a few times for things that came up, or run off to the occasional meeting or Vidyo call.
Thanks to Lawrence Mandel for being such an awesome leader of our team and seeing the value in operating open by default. Thanks to Lukas for being a great mentor and awesome person to contribute alongside. Thanks to Sylvestre for bringing us French Biscuits and fresh ideas. Thanks to Bhavana for being so friendly and always offering new ideas and thanks to Pranav for working so hard on picking up where Willie left off and giving us a new tool that will help our release continue to be even more awesome.
In my situation, my smartphone can’t connect directly to my preferred wireless access point, so I need to set up a simple wireless repeater.
First things first: I need to connect to the main wireless access point using my PC. After that, I configure my PC to relay the internet to my secondary access point, which already has its own DHCP server, so I don’t need to configure any DHCP server myself.
The wireless card connected to the main wireless access point is wlan1. This is how I do it:
internet --> wlan1 (on my pc) --> ethernet (on my pc too) --> network cable --> tp-link wireless access point --> smartphone
For your info, I already configured my TP-Link wireless access point to match my Ethernet configuration (IPs, subnets, gateways).
All I need to do after connecting to the primary wireless access point is run these commands:
ip link set up dev eth0
ip addr add 192.168.137.1/24 dev eth0
sysctl net.ipv4.ip_forward=1
iptables -t nat -A POSTROUTING -o wlan1 -j MASQUERADE
iptables -A FORWARD -i eth0 -o wlan1 -j ACCEPT
iptables -A FORWARD -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
Note that my Ethernet interface is eth0.
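To show how the same setup generalises, here are those commands with the interface names pulled into variables. This is my own re-arrangement, printed as a dry run: the run() wrapper echoes each command instead of executing it, so drop the echo and run as root to apply it for real.

```shell
#!/bin/sh
# Same relay setup as above, parameterised by interface name.
# Dry run only: run() prints each command rather than executing it,
# since the real commands need root and the actual interfaces.
UPLINK=wlan1    # interface associated with the main access point
LAN=eth0        # interface cabled to the secondary access point
run() { echo "$@"; }

run ip link set up dev "$LAN"
run ip addr add 192.168.137.1/24 dev "$LAN"
run sysctl net.ipv4.ip_forward=1
run iptables -t nat -A POSTROUTING -o "$UPLINK" -j MASQUERADE
run iptables -A FORWARD -i "$LAN" -o "$UPLINK" -j ACCEPT
run iptables -A FORWARD -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
```

The MASQUERADE rule rewrites outgoing packets to use the uplink's address, and the two FORWARD rules allow new traffic out from the LAN and established replies back in.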
Back in February, when I decided I didn’t want to use the local provider with Juju because my internet connection has a download speed of 400KBps, I opened an AWS account. This gives me 750 hours per month to use on t1.micro instances, which are awesome for Juju testing… until you hit some problems.
The main problem with t1.micro instances is that they only have 613MB of RAM. This is fine for testing charms which do not require a lot of memory, but there are some (such as nova-cloud-controller) which do require more memory to run properly. Even worse, they require memory just to finish installing.
I should note that, in general, my experience with t1.micro instances and the AWS free tier has been awesome, but in these cases there is no solution other than getting a bigger instance. If you are testing in the cloud and you see a weird error you don’t understand, it may be that the machine has run out of memory, so try a bigger instance. If that doesn’t solve it, go ahead and report a bug. If it’s something in a charm’s code, we’ll look into it.
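One way to get that bigger instance is Juju's deploy-time memory constraint. A sketch, printed as a dry run since it needs a bootstrapped environment; the mem=2G figure is my assumption, not from the post, so size it to what the charm actually needs:

```shell
#!/bin/sh
# Dry run: print the deploy command rather than executing it (a real
# run needs a bootstrapped Juju environment). mem=2G is an assumed
# value that clears the 613MB ceiling of a t1.micro.
cmd='juju deploy --constraints mem=2G nova-cloud-controller'
echo "$cmd"
```

With the constraint set, the EC2 provider picks an instance type with at least that much RAM instead of defaulting to the smallest one.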