You can’t be too careful

Having a web page is probably the most complex of the 'simple' tasks available. The typical pipeline begins with DNS, which converts a human-friendly name (registered through one of the many registrars on the Internet) into an IP address. That address, allocated from your ISP's address block, leads to your public router or load balancer, which routes valid traffic (and only the valid traffic) to the appropriate machine on your network. That machine could be a GNU/Linux box, an embedded device, or an arbitrary standalone application that just happens to open a suitable port. It then relies on the server software and (sometimes) the underlying operating system to determine which files are available to which users.
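The very first step of that pipeline, resolving a name into addresses, can be sketched in a few lines of Python. This is a minimal illustration, not a full DNS client; `"localhost"` is used here only as a placeholder hostname:

```python
import socket

def resolve(hostname):
    """Return the unique IP addresses that the resolver reports for hostname.

    Uses the operating system's resolver (which may consult a local
    hosts file as well as DNS), just as a browser would before it can
    open a connection to the server.
    """
    infos = socket.getaddrinfo(hostname, 80, proto=socket.IPPROTO_TCP)
    # Each entry's sockaddr tuple starts with the IP address string.
    return sorted({info[4][0] for info in infos})

if __name__ == "__main__":
    for addr in resolve("localhost"):
        print(addr)
```

Everything downstream of this call (routing, firewalling, the server process itself) is invisible to the client; it simply connects to whichever address comes back.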

And at every stage there's software involved that could be buggy, broken, or suffering from planet-sized security flaws. Each configuration file is an opportunity for human error, opening the holes wider. Every registration service discloses a little more of your private information to the general public. With so many steps involved, is it any wonder that problems exist?
