Planet Ubuntu
Planet Ubuntu - http://planet.ubuntu.com/

David Tomaschik: CVE-2014-4182 & CVE-2014-4183: XSS & XSRF in Wordpress 'Diagnostic Tool' Plugin

Fri, 2014-07-04 07:00

Versions of the Wordpress plugin Diagnostic Tool earlier than 1.0.7 contain several vulnerabilities:

  1. Persistent XSS in the Outbound Connections view. An attacker who is able to cause the site to request a URL containing an XSS payload will have this XSS stored in the database, and when an admin visits the Outbound Connections view, the payload will run. This can be demonstrated trivially by running a query for http://localhost/<script>alert(/xss/)</script> on that page, then refreshing the page to see the content run, as the view is not updated in real time. This is CVE-2014-4183.

  2. Reflected XSS in DNS resolver test page. When a reverse lookup is performed, the results of gethostbyaddr() are inserted into the DOM unescaped. An attacker who (mis-) configures a DNS server to send an XSS payload as a reverse lookup may be able to either trick the administrator into performing a lookup, or (more likely) use the CSRF vulnerability documented below to trigger the XSS.

  3. AJAX handlers do not have any CSRF protection on them. This allows an attacker to cause the server to send test emails (low severity), perform DNS lookups (high severity when combined with the reflected XSS above), and request the loading of pages by the server (including URLs that contain XSS payloads, triggering the persistent XSS documented above). Additionally, the last two vulnerabilities could be used to trigger an information leak for Wordpress servers that are behind a DDoS protection service (e.g., Cloudflare) or are being run as Tor hidden services, by forcing the server to request a page from the attacker's server or perform a DNS query against the attacker's DNS server, allowing the attacker to learn the real IP of the server hosting Wordpress. This is CVE-2014-4182.

Timeline:

  • 2014/06/15: Vulnerabilities discovered & reported to developers.
  • 2014/06/30: Developers release Diagnostic Tool 1.0.7, fixing issues.
  • 2014/07/04: Public disclosure.

Ubuntu Podcast from the UK LoCo: S07E14 – The One with the Tea Leaves

Thu, 2014-07-03 20:35

We’re back with Season Seven, Episode Fourteen of the Ubuntu Podcast! Alan Pope, Mark Johnson, Tony Whitmore, and Laura Cowen are drinking tea and eating Fox’s Ginger Crunch Creams biscuits in Studio L.


In this week’s show:

  • The UUPC Big Clock (by @sil)
  • reset
  • mount | column -t
  • df | column -t

We’ll be back next week, so please send your comments and suggestions to: podcast@ubuntu-uk.org
Join us on IRC in #uupc on Freenode
Leave a voicemail via phone: +44 (0) 203 298 1600, sip: podcast@sip.ubuntu-uk.org and skype: ubuntuukpodcast
Follow us on Twitter
Find our Facebook Fan Page
Follow us on Google+

Costales: Firefox Search Engine for Explain Shell web page

Thu, 2014-07-03 20:20
If you're a Linux sysadmin you'll really like the Explain Shell web page.
But I was missing a search engine for Firefox...

Firefox Search Engine for Explain Shell
Download from here and save it into your Firefox profile folder like:
~/.mozilla/firefox/<your_profile>/searchplugins/explainshell.xml

Enjoy it! :)

Duncan McGreggor: Uncovering Inherent Structures in Organizations

Thu, 2014-07-03 16:04
This post should have a subtitle: "Using Team Analysis and Levenshtein Distance to Reveal said Structure." It's the first part of that subtitle that is the secret, though: being able to correctly analyze and classify individual teams. Without that, using something clever like Levenshtein distance isn't going to be much help.

But that's coming in towards the end of the story. Let's start at the beginning.

What You're Going to See

This post is a bit long. Here are the sections I've divided it into:

  • What You're Going to See
  • Premise
  • Introducing ACME
  • Categorizing Teams
  • Category Example
  • Calculating the Levenshtein Distance of Teams
  • Sorting and Interpretation
  • Conclusion

However, you don't need to read the whole thing to get the main benefits. You can get the Cliff Notes version by reading the Premise, Categorizing Teams, Sorting and Interpretation, and Conclusion sections.

Premise

Companies grow. Teams expand. If you're well-placed in your industry and providing in-demand services or products, this is happening to you. Individuals and small teams tend to deal with this sort of change pretty well. At an organizational level, however, this sort of change tends to have an impact that can bring a group down, or rocket it up to the next level.

Among the many issues faced by companies (or rapidly growing orgs within large companies) is this one: "Our old structures, though comfortable, won't scale well with all these new teams and all the new hires joining our existing teams. How do we reorganize? Where do we put folks? Are there natural lines along which we can provide better management (and vision!) structure?"

The answer, of course, is "yes" -- but! It requires careful analysis and a deep understanding of every team in your org.

The remainder of this post will set up a scenario and then figure out how to do a re-org. I use a software engineering org as an example, but that's just because I have a long and intimate knowledge of them and understand the ways in which one can classify such teams. These same methods could be applied to a Sales group, Marketing groups, etc., as long as you know the characteristics that define the teams of which these orgs are comprised.



Introducing ACME

ACME Corporation is the leading producer of some of the most innovative products of the 20th century. The CTO had previously tasked you, the VP of Software Development, to bring this product line into the digital age -- and you did! Your great ideas for the updated suite are the new hotness that everyone is clamouring for. Subsequently, the growth of your teams has been fast and, dare we say, exponential.

More details on the scenario: your Software Development Group has several teams of engineers, all working on different products or services, each of which supports ACME Corporation in different ways. In the past 2 years, you've built up your org by an order of magnitude in size. You've started promoting and hiring more managers and directors to help organize these teams into sensible encapsulating structures. These larger groups, once identified, would comprise the whole Development Group.

Ideally, the new groups would represent some aspect of the company, software development, engineering, and product vision -- in other words, some sensible clustering of teams doing related work. How would you group the teams in the most natural way?

Simply dividing along language or platform lines may seem like the obvious answer, but is it the best choice? There are some questions that can help guide you in figuring this out:
  • How do these teams interact with other parts of the company? 
  • Who are the stakeholders in feature development? 
  • Which sorts of customers does each team primarily serve?
There are many more questions you could ask (some are implicit in the analysis data linked below), but this should give a taste.

ACME Software Development has grown the following teams, some of which focus on products, some on infrastructure, some on services, etc.:
  • Digital Anvil Product Team
  • Giant Rubber Band App Team
  • Digital Iron Carrot Team
  • Jet Propelled Unicycle Service Team
  • Jet Propelled Pogo Stick Service Team
  • Ultimatum Dispatcher API Team
  • Virtual Rocket Powered Roller Skates Team
  • Operations (release management, deployments, production maintenance)
  • QA (testing infrastructure, CI/CD)
  • Community Team (documentation, examples, community engagement, meetups, etc.)

Categorizing Teams

Each of those teams started with 2-4 devs hacking on small skunkworks projects. They've now blossomed to the extent that each team has significant sub-teams working on new features and prototyping for the product they support. These large teams now need to be characterized using a method that will allow them to be easily compared. We need the ability to see how closely related one team is to another, across many different variables. (In the scheme outlined below, we end up examining 50 bits of information for each team.)

Keep in mind that each category should be chosen such that it would make sense for teams categorized similarly to be grouped together. A counter example might be "Team Size"; you don't necessarily want all large teams together in one group, and all small teams in a different group. As such, "Team Size" is probably a poor category choice.
Here are the categories which we will use for the ACME Software Development Group:
  • Language
  • Syntax
  • Platform
  • Implementation Focus
  • Supported OS
  • Deployment Type
  • Product?
  • Service?
  • License Type
  • Industry Segment
  • Stakeholders
  • Customer Type
  • Corporate Priority
Each category may be either single-valued or multi-valued. For instance, the categories ending in question marks will be booleans. In contrast, multiple languages might be used by the same team, so the "Language" category will sometimes have several entries.

Category Example

(Things are going to get a bit more technical at this point; for those who care more about the outcomes than the methods used, feel free to skip to the section at the end: Sorting and Interpretation.)

In all cases, we will encode these values as binary digits -- this allows us to very easily compare teams using Levenshtein distance, since the total of all characteristics we are filtering on can be represented as a string value. An example should illustrate this well.

(The Levenshtein distance between two words is the minimum number of single-character edits -- such as insertions, deletions or substitutions -- required to change one word into the other. It is named after Vladimir Levenshtein, who defined this "distance" in 1965 when exploring the possibility of correcting deletions, insertions, and reversals in binary codes.)
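(Before diving into the LFE tooling, here's a minimal Python sketch of that definition -- the standard dynamic-programming formulation -- for anyone who wants to experiment outside the REPL:)

  def levenshtein(s, t):
      """Minimum number of single-character insertions, deletions,
      or substitutions required to turn s into t."""
      prev = list(range(len(t) + 1))  # distances from "" to each prefix of t
      for i, sc in enumerate(s, start=1):
          curr = [i]  # distance from s[:i] to ""
          for j, tc in enumerate(t, start=1):
              cost = 0 if sc == tc else 1
              curr.append(min(prev[j] + 1,          # deletion
                              curr[j - 1] + 1,      # insertion
                              prev[j - 1] + cost))  # substitution
          prev = curr
      return prev[len(t)]

  print(levenshtein("kitten", "sitting"))  # prints 3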
Let's say the Software Development Group supports the following languages, with each one assigned a binary value:
  • LFE - #b0000000001
  • Erlang - #b0000000010
  • Elixir - #b0000000100
  • Ruby - #b0000001000
  • Python - #b0000010000
  • Hy - #b0000100000
  • Clojure - #b0001000000
  • Java - #b0010000000
  • JavaScript - #b0100000000
  • CoffeeScript - #b1000000000
A team that used LFE, Hy, and Clojure would obtain its "Language" category value by XOR'ing the three supported languages, and would thus be #b0001100001. In LFE, that could be done by entering the following code into the REPL:
(bxor (bxor #b0000000001 #b0000100000) #b0001000000)
; => 97, which is #b0001100001

We could then compare this to a team that used just Hy and Clojure (#b0001100000), which has a Levenshtein distance of 1 from the previous language category value. A team that used Ruby and Elixir (#b0000001100) would have a Levenshtein distance of 5 from the LFE/Hy/Clojure team (which makes sense: a total of 5 languages between the two teams with no languages shared in common).
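(To make those numbers concrete, here is the same check in the Python sketch, using the flag values assigned in the list above and the levenshtein function from earlier; zero-padding to 10 bits mirrors the #b notation:)

  LANGS = {
      "LFE": 0b0000000001,        "Erlang": 0b0000000010,
      "Elixir": 0b0000000100,     "Ruby": 0b0000001000,
      "Python": 0b0000010000,     "Hy": 0b0000100000,
      "Clojure": 0b0001000000,    "Java": 0b0010000000,
      "JavaScript": 0b0100000000, "CoffeeScript": 0b1000000000,
  }

  def lang_bits(*langs):
      """Combine per-language flags into a 10-character bit string."""
      value = 0
      for lang in langs:
          value ^= LANGS[lang]  # XOR; identical to OR here, as flags are disjoint
      return format(value, "010b")

  team_a = lang_bits("LFE", "Hy", "Clojure")  # '0001100001'
  team_b = lang_bits("Hy", "Clojure")         # '0001100000'
  team_c = lang_bits("Ruby", "Elixir")        # '0000001100'
  print(levenshtein(team_a, team_b))          # 1
  print(levenshtein(team_a, team_c))          # 5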

Calculating the Levenshtein Distance of Teams

As a VP who is keen on deeply understanding your teams, you have put together a spreadsheet with a break-down of not only the languages used in each team, but lots of other categories, too. For easy reference, you've put a "legend" for the individual category binary values at the bottom of the linked spreadsheet.

In the third table on that sheet, all of the values for each column are combined into a single binary string. This (or a slight modification of this) is what will be the input to your calculations. Needless to say, as a complete fan of LFE, you will be writing some Lisp code :-)

(If you would like to try the code out yourself while reading, and you have lfetool installed, simply create a new project and start up the REPL:

$ lfetool new library ld; cd ld && make-shell

That will download and compile the dependencies for you. In particular, you will then have access to the lfe-utils project -- which contains the Levenshtein distance functions we'll be using. You should be able to copy-and-paste functions, vars, etc., into the REPL from the Github gists.)
Let's create a couple of data structures that will allow us to more easily work with the data you collected about your teams in the spreadsheet:
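(The post's own definitions live in the linked LFE gists; as a stand-in, here's what the shape of that data might look like in the Python sketch, with hypothetical bit strings in place of the spreadsheet's real 50-bit values:)

  teams = {
      # Hypothetical stand-in values; the real 50-bit strings
      # come from the spreadsheet legend.
      "Digital Iron Carrot Team": "0001100001" + "1010011100" * 4,
      "Community Team":           "1110111010" + "0101100111" * 4,
      # ... one entry for each of the ten teams
  }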


We can use a quick copy and paste into the LFE REPL for two of those numbers to do a sanity check on the distance between the Community Team and the Digital Iron Carrot Team:
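(In the Python sketch, that same sanity check is a single call; the exact distance printed depends on the stand-in strings above:)

  d = levenshtein(teams["Community Team"], teams["Digital Iron Carrot Team"])
  print(d)  # large-ish: the two bit strings differ at many positions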


That result doesn't seem unreasonable: even at a quick glance, we can see that these two strings differ at many of their respective character positions.

It looks like we're on solid ground, then, so let's define some utility functions to more easily work with our data structures:


Now we're ready to roll; let's try sorting the data based on a comparison with one of the teams:
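(Here is a rough Python equivalent of that sort, continuing the sketch; the post's own levenshtein-sort comes from the lfe-utils gists:)

  def levenshtein_sort(control, strings):
      """Pair each string with its distance from the control, closest first."""
      return sorted((levenshtein(control, s), s) for s in strings)

  control = teams["Digital Iron Carrot Team"]
  for distance, bits in levenshtein_sort(control, teams.values()):
      print(distance, bits)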


It may not be obvious at first glance, but what the levenshtein-sort function did for us is compare our "control" string to every other string in our data set, providing both the distance and the string that the control was compared to. The first entry in the results is our control string, and we see what we would expect: the Levenshtein distance with itself is 0 :-)

The result above is not very easily read by most humans ... so let's define a custom sorter that will take human-readable text and then output the same, after doing a sort on the binary strings:
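(Continuing the Python sketch, a name-keyed sorter might look like this:)

  def sort_named(control_name, teams):
      """Order team names by Levenshtein distance from the named control team."""
      control = teams[control_name]
      return sorted(teams, key=lambda name: levenshtein(control, teams[name]))

  for name in sort_named("Digital Iron Carrot Team", teams):
      print(name)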


(If any of that doesn't make sense, please stop in and say "hello" on the LFE mail list -- ask us your questions! We're a friendly group that loves to chat about LFE and how to translate from Erlang, Common Lisp, or Clojure to LFE :-) )


Sorting and Interpretation

Before we try out our new function, we should ponder which team will be compared to all the others -- the sort results will change based on this choice. Looking at the spreadsheet, we see that the "Digital Iron Carrot Team" (DICT) has some interesting properties that make it a compelling choice:

  • it has stakeholders in Sales, Engineering, and Senior Leadership;
  • it has a "Corporate Priority" of "Business critical"; and
  • it has both internal and external customers.
Of all the products and services, it seems to be the biggest star. Let's try a sort now, using our new custom function -- inputting something that's human-readable: 


Here we're making the request "Show me the sorted results of each team's binary string compared to the binary string of the DICT." Here are the human-readable results:


For a better visual on this, take a look at the second tab of the shared spreadsheet. The results have been applied to the collected data there, and then colored by major groupings. The first group shares these things in common:

  • Lisp- and Python-heavy
  • Middleware running on BSD boxen
  • Mostly proprietary
  • Externally facing
  • Focus on apps and APIs
It would make sense to group these three together.
Next on the list is Operations and QA -- often a natural pairing, and this process bears out such conventional wisdom. These two are good candidates for a second group.
Things get a little trickier at the end of the list. Depending upon the number of developers in the Java-heavy Giant Rubber Band App Team, they might make up their own group. However, both that one and the next team on the list have frontend components written in Angular.js. They both are used internally and have Engineering as a stakeholder in common, so let's go ahead and group them.
The next two are cloud-deployed Finance APIs running on the Erlang VM. These make a very natural pairing.
Which leaves us with the oddball: the Community Team. The Levenshtein distance for this team is the greatest of all the teams ... but don't be misled. Because it has something in common with all teams (the Community Team supports every product with docs, example code, Sales and TAM support, evangelism for open source projects, etc.), it will have many differing bits with each team. This really should be a group all its own, so that the structure represents reality: all teams depend upon the Community Team. A good case could also probably be made for having the manager of this team report directly up to you.
The other groups should probably have directors that the team managers report to (keeping in mind that the teams have grown to anywhere from 20 to 40 people per team). The director will be able to guide these teams according to your vision for the Software Group and the shared traits/common vision you have uncovered in the course of this analysis.
Let's go back to the Community Team. Perhaps in working with them, you have uncovered a hidden fact: the community interactions your devs have are seriously driving market adoption through some impressive and passionate service and open source docs+evangelism. You are curious how your teams might be grouped if sorted from the perspective of the Community Team.
Let's find out!


As one might expect, most of the teams remain grouped in the same way ... the notable exception being the split-up of the Anvil and Rubber Band teams. Mostly no surprises, though -- the same groupings persist in this model.

To be fair, if this is something you'd want to fully explore, you should bump the "Corporate Priority" for the Community Team much higher, recalculate its overall bits, regenerate your data structures, and then re-sort. It may not change too much in this case, but you'd be applying consistent methods, and that's definitely the right thing to do :-) You might even see the Anvil and Rubber Band teams get back together (left as an exercise for the reader).

As a last example, let's throw caution and good sense to the wind and get crazy. You know, like the many times you've seen bizarre, anti-intuitive re-orgs done: let's do a sort that compares a team of middling importance and a relatively low corporate impact with the rest of the teams. What do we see then?


This ruins everything. Well, almost everything: the only group that doesn't get split up is the middleware product line (Jet Propelled and Iron Carrot). Everything else suffers from a bad re-org.

If you were to do this because a genuine change in priority had occurred, where the Giant Rubber Band App Team was now the corporate leader/darling, then you'd need to recompute the bit values and do re-sorts. Failing that, you'd just be falling into a trap that has beguiled many before you.


Conclusion

If there's one thing that this exercise should show you, it's this: applying tools and analyses from one field to fresh data in another -- completely unrelated -- field can provide pretty amazing results that turn mystery and guesswork into science and planning.

If we can get two things from this, the other might be: knowing the parts of the system may not necessarily reveal the whole (cf. complex systems), but it may provide you with the data that lets you better predict emergent behaviours and identify patterns and structure where you didn't see them (or even think to look!) before.

Alan Pope: Creating Time-lapse Videos on Ubuntu

Thu, 2014-07-03 15:17

I’ve previously blogged about how I sometimes set up a webcam to take pictures and turn them into videos. I thought I’d update that here with something new I’ve done: fully automated time lapse videos on Ubuntu. Here’s what I came up with:-

(apologies for the terrible music, I added that from a pre-defined set of options on YouTube)

(I quite like the cloud that pops into existence at ~27 seconds in)

Over the next few weeks there’s an Air Show where I live and the skies fill with all manner of strange aircraft. I’m usually working so I don’t always see them as they fly over, but usually hear them! I wanted a way to capture the skies above my house and make it easily available for me to view later.

So my requirements were basically this:-

  • Take pictures at fairly high frequency – one per second – the planes are sometimes quick!
  • Turn all the pictures into a time lapse video – possibly at one hour intervals
  • Upload the videos somewhere online (YouTube) so I can view them from anywhere later
  • Delete all the pictures so I don’t run out of disk space
  • Automate it all
Taking the picture

I’ve already covered this really, but for this job I have tweaked the .webcamrc file to take a picture every second, only save images locally & not to upload them. Here’s the basics of my .webcamrc:-

[ftp]
dir = /home/alan/Pictures/webcam/current
file = webcam.jpg
tmp = uploading.jpeg
debug = 1
local = 1

[grab]
device = /dev/video0
text = popeycam %Y-%m-%d %H:%M:%S
fg_red = 255
fg_green = 0
fg_blue = 0
width = 1280
height = 720
delay = 1
brightness = 50
rotate = 0
top = 0
left = 0
bottom = -1
right = -1
quality = 100
once = 0
archive = /home/alan/Pictures/webcam/archive/%Y/%m/%d/%H/snap%Y-%m-%d-%H-%M-%S.jpg

Key things to note: “delay = 1” gives us an image every second. The archive directory is where the images will be stored, in sub-folders for easy management and later deletion. That’s it: put that in the home directory of the user taking pictures and then run webcam. Watch your disk space get eaten up.

Making the video

This is pretty straightforward and can be done in various ways. I chose to do two-pass x264 encoding with mencoder. In this snippet we take the images from one hour – in this case midnight to 1AM on 2nd July 2014 – from /home/alan/Pictures/webcam/archive/2014/07/02/00 and make a video in /home/alan/Pictures/webcam/2014070200.avi and a final output in /home/alan/Videos/webcam/2014070200.avi, which is the one I upload.

mencoder "mf:///home/alan/Pictures/webcam/archive/2014/07/02/00/*.jpg" -mf fps=60 -o /home/alan/Pictures/webcam/2014070200.avi -ovc x264 -x264encopts direct=auto:pass=1:turbo:bitrate=9600:bframes=1:me=umh:partitions=all:trellis=1:qp_step=4:qcomp=0.7:direct_pred=auto:keyint=300 -vf scale=-1:-10,harddup

mencoder "mf:///home/alan/Pictures/webcam/archive/2014/07/02/00/*.jpg" -mf fps=60 -o /home/alan/Pictures/webcam/2014070200.avi -ovc x264 -x264encopts direct=auto:pass=2:bitrate=9600:frameref=5:bframes=1:me=umh:partitions=all:trellis=1:qp_step=4:qcomp=0.7:direct_pred=auto:keyint=300 -vf scale=-1:-10,harddup -o /home/alan/Videos/webcam/2014070200.avi

Upload videos to YouTube

The project youtube-upload came in handy here. It’s a pretty simple script with a bunch of command line parameters – most of which should be pretty obvious – for uploading videos to YouTube. Here’s a snippet with some credentials redacted.

python youtube_upload/youtube_upload.py --email=########## --password=########## --private --title="2014070200" --description="Time lapse of Farnborough sky at 00 on 02 07 2014" --category="Entertainment" --keywords="timelapse" /home/alan/Videos/webcam/2014070200.avi

I have set the videos all to be private for now, because I don’t want to spam any subscriber with a boring video of clouds every hour. If I find an interesting one I can make it public. I did consider making a second channel, but the youtube-upload script (or rather the YouTube API) doesn’t seem to support specifying a different channel from the default one. So I’d have to switch the default channel to work around this, and then make them all public by default, maybe.

In addition, YouTube sends me a patronising “Well done Alan” email whenever a video is uploaded, so I know when it breaks: I stop getting those mails.

Delete the pictures

This is easy: I just rm the /home/alan/Pictures/webcam/archive/2014/07/02/00 directory once the upload is done. I don’t bother to check if the video uploaded okay first because if it fails to upload I still want to delete the pictures, or my disk will fill up. I already have the videos archived, so I can upload those later if the script breaks.

Automate it all

webcam is running constantly in a ‘screen’ window, so that part is easy. I could detect when it dies and re-spawn it, maybe; it has been known to crash now and then. I’ll get to that when it happens.

I created a cron job which runs at 10 mins past the hour, and collects all the images from the previous hour.

10 * * * * /home/alan/bin/encode_upload.sh

I learned the useful “1 hour ago” option to the GNU date command. This lets me pick up the images from the previous hour and deals with all the silly calculations needed to figure out what the previous hour was.
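(For illustration, here’s the same “previous hour” calculation sketched in Python, using the archive layout from the .webcamrc above:)

  from datetime import datetime, timedelta

  # Directory holding the previous hour's snapshots
  prev_hour = datetime.now() - timedelta(hours=1)
  print(prev_hour.strftime("/home/alan/Pictures/webcam/archive/%Y/%m/%d/%H"))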

Here (on github) is the final script. Don’t laugh.

Ubuntu App Developer Blog: 100,000 App Downloads

Thu, 2014-07-03 14:43

It was less than a month ago that we announced crossing the 10,000 users milestone for Ubuntu phones and tablets, and we’ve already reached another: 100,000 app downloads!

Downloads

The new Ubuntu store used by phones, tablets, and soon the desktop as well, provides app developers with some useful statistics about how many times their app was downloaded, which version was downloaded, and what country the download originated from. This is very useful as it lets the developer gauge how many users they currently have for their app, and how quickly they are updating to new versions. One side-effect of these statistics is that we can see how many total downloads there have been across all of the apps in the store, and this week we reached (and quickly passed) the 100,000th download.

Users

We’re getting close to having Ubuntu phones go on sale from our partners at Bq and Meizu, but there are still no devices on the market that come with Ubuntu pre-installed. This means that we’ve reached this milestone solely from developers and enthusiasts who have installed Ubuntu on one of their own devices (probably a Nexus device) or on the device emulator.

The continued growth in the download number validates the earlier milestone of 10,000 users: a large number of them are clearly still using Ubuntu on their device (or emulator) and keeping their apps up to date (the number represents new app installs and updates). This means that not only are people trying Ubuntu already, many of them are sticking with it too. Yet another datapoint in support of this is the 600 new unique users who have been using the store since the last milestone announcement.

Pioneers

To supply all of these users with the apps they want, we’re continuing to build our community of app developers around Ubuntu. The first of these developers have already received their limited edition t-shirts, and are listed on the Ubuntu Pioneers page of the developer portal.

There is still time to get your app published and claim your place on that page and your t-shirt, but the spots are filling up fast, so don’t delay. Go to our Developer Portal and get started today – you could be only a few hours away from publishing your first app in the store!

Svetlana Belkin: Doc Team Wiki Page Clean Up

Wed, 2014-07-02 22:36

Today the Wiki Sub-Team of the Ubuntu Doc Team had a mini-sprint on the team’s homepage (https://wiki.ubuntu.com/DocumentationTeam). We cleaned it up by removing some of the unwanted sub-pages, rewriting some of the pages to make it easier for new people to get involved with us, and much more.

Hopefully this is the best step towards helping people understand our team’s workflow.


The Fridge: Community Donations Funding Report, Q1 2014

Wed, 2014-07-02 21:03

When users visit the Ubuntu download page from our main website, ubuntu.com, they are given the option to make a donation to show their appreciation and help further the work that goes into making and distributing Ubuntu. Donations ear-marked for “Community projects” are made available to members of our community, and to events that our members attend.

In keeping with our core principles of openness and transparency, the way these community funds were spent is detailed in a regular report and made available for everybody to see. These requests for funding are an opportunity for us to invest in our community, and every dollar spent has benefited the Ubuntu project through improved contributions, organization and outreach.

Once again everybody on this list deserves a big thanks for their continued contributions to Ubuntu. The funds they were given may cover immediate expenses, but they in no way cover all of the time, energy, and passion that these contributors have put into our community.

The latest funding report, using the same format as the previous one, can be viewed here.

Published on behalf of Michael Hall of the Community Team

Benjamin Kerensa: Release Management Work Week

Wed, 2014-07-02 19:46


Last week in Portland, Oregon, we had our second release management team work week of the year focusing on our goals and work ahead in Q3 of 2014. I was really excited to meet the new manager of the team, our new intern and two other team members I had not yet met.

It was quite awesome to have the face-to-face time with the team to knock out some discussions and work that required the kind of collaboration that a work week offers. What I liked most was discussing the current success of the Early Feedback Release Manager role I have had on the team, along with ideas for improving the pathways for future contributors to the team while also creating new opportunities and a new pathway for me to continue to grow.

One thing unique about this work week is that we also took some time to participate in Open Source Bridge, a local conference that Mozilla happened to be sponsoring at The Eliot Center and that Lukas Blakk from our team was speaking at. Lukas used her keynote talk to introduce the Ascend Project, an awesome project she is working on and will soon be piloting in Portland.


While this was a great work week and I think we accomplished a lot, I hope future work weeks are either out of town or that I can block off other life obligations to spend more time on-site, as I did have to drop off a few times for things that came up or run off to the occasional meeting or Vidyo call.

Thanks to Lawrence Mandel for being such an awesome leader of our team and seeing the value in operating open by default. Thanks to Lukas for being a great mentor and awesome person to contribute alongside. Thanks to Sylvestre for bringing us French Biscuits and fresh ideas. Thanks to Bhavana for being so friendly and always offering new ideas and thanks to Pranav for working so hard on picking up where Willie left off and giving us a new tool that will help our release continue to be even more awesome.

Mohamad Faizul Zulkifli: Sharing Wireless Internet To Ethernet

Wed, 2014-07-02 13:55
Sometimes we need to share our wireless internet with ethernet-connected clients or anything else, such as wireless access points.
In my situation, my smartphone can't connect directly to my preferred wireless access point, so I need to set up a simple wireless repeater.
First things first: I need to connect to the main wireless access point using my PC. After that, I configure my PC to relay the internet to my secondary access point, which already has its own DHCP server, so I don't need to configure a DHCP server myself.
The wireless card connected to the main wireless access point is wlan1. This is how I do it:
internet --> wlan1 (on my pc) --> ethernet (on my pc too) --> network cable --> tp-link wireless access point --> smartphone
For your info, I already configured my TP-Link wireless access point to match my ethernet configuration (IPs, subnets, gateways).
All I need to do after connecting to the primary wireless access point is run these commands:
ip link set up dev eth0
ip addr add 192.168.137.1/24 dev eth0
sysctl net.ipv4.ip_forward=1
iptables -t nat -A POSTROUTING -o wlan1 -j MASQUERADE
iptables -A FORWARD -i eth0 -o wlan1 -j ACCEPT
iptables -A FORWARD -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
Take note that my ethernet interface is eth0.
