Securing Ajax Applications by Christopher Wells


Chapter 4. Protecting the Server

So, you want to run a web server in your basement to create the next big thing, and you’re looking for some cheap security advice on how to get started? Well, my first and best suggestion is don’t do it. I’m just saying if NASA—you know, rocket scientists—can’t keep hackers out of its web servers, what makes you think you can? Go find some ISP that has the services you are looking for, and pay the ISP to do it. The job of administering a web server on your own can consume every waking moment, and unless you don’t ever want to leave the house, it is well worth the money to let the pros handle the frontend work.

Are you really still reading? Picture this: you find that perfect somebody. You plan a romantic evening and go out to a movie and have a nice dinner. Just when things start to get interesting your phone trumpets out the cavalry charge ring tone informing you of 15 unauthorized login attempts on the web server. After apologizing to those around you for disrupting their dinner, your date raises an eyebrow and decides to skip dessert.

Still there, eh? I’m sorry. I know, it must sound glamorous to have your very own web server, but unless you have spent time thinking like a hacker, odds are whatever you put on the Internet will be vulnerable to attack.

Ajax applications require a web server to work. After all, what good is the XMLHttpRequest object without a web server to talk to on the backend? So, Ajax security starts with the web server. If your web server is not secure, neither is your application. You need to know what role the web server plays in security. Securing a web server is a non-trivial task that requires an understanding of the web server's relationship with the network. By knowing what security measures are in place on the web server, you can balance the security necessary within your applications. In this chapter, I will look at how to ensure the network is secure, and then go through the steps for making a secure and dynamite web server. I will also address what to do in the event of an attack.

Network Security

See that funny-looking telephone-like cable coming out of your DSL/cable modem? That’s the Internet. Before we can set up a web server, we must first prepare the network. You don’t want to plug the web server into the Internet with a giant Hack Me sign on it, do you? We must take some precautions first.

What we really need to do is separate us from them, right? Us being—you know—us, and them being—well—the bad guys. We need a wall—make that a firewall—to keep them out.


A firewall is a device sitting between a private network and a public network. Part of what helps make a private network private is, in fact, the firewall. The firewall’s job is to control traffic between computer networks with different zones of trust—for example, an internal, trusted zone, such as a private network, and an external, nontrusted zone, such as the Internet.

Trust boundaries

Different trust zones meet at what are known as trust boundaries. A trust boundary is like a seam in the network and, as mentioned earlier, seams require added security attention. We need to make sure that all the gaps are filled and that the firewall allows only the right kind of traffic. We do this with firewall rules. Firewall rules establish a security policy governing what traffic is allowed to flow through the firewall and in what direction.

The ultimate goal is to provide a controlled interface between the different trust zones and enforce common security policy on the traffic that flows between them based on the following security principles:

Principle of least privilege

A user should be allowed to do only what she is required to do.

Separation of duties

Define roles for users and assign different levels of access control. Control how the application is developed, tested, and deployed and who has access to application data.

Firewalls are good at making quick decisions about whether one machine should be allowed to talk to another. The easiest way for the firewall to do this is to base its decisions on source address and destination address.
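To make that concrete, here is a sketch of what source/destination-based decisions look like as rules, in Linux iptables syntax. The addresses are hypothetical placeholders from the 192.0.2.0/24 documentation range; shown commented out because they are illustrative, not a working policy:

```shell
# Allow one trusted source host to reach this server; anything that
# matches no ACCEPT rule falls through to the DROP.
# (Addresses are hypothetical placeholders.)
# iptables -A INPUT -s 192.0.2.25 -d 192.0.2.10 -j ACCEPT
# iptables -A INPUT -d 192.0.2.10 -j DROP
```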

Security concerns

Hey, what’s this rule for? Far too often firewalls are found with rules that nobody remembers adding. This happens because administrators fear something will break if they remove them. When firewall rules are introduced, there should be a well-defined procedure for keeping track of each rule and its purpose.

Another problem is that to see whether a firewall is actually doing what it is supposed to be doing you need to beat on it with a penetration-testing tool and monitor it with intrusion detection software. In other words, you have to hack it to see if it breaks.

Port 80

That’s just web traffic, right? Port 80 is sometimes called the firewall bypass port. This is because many times any traffic will be allowed in and out of the firewall on port 80. Firewall administrators open port 80 for web traffic, and developers take advantage of the open port by running things such as web services through it—so much for firewall security.


SSL must be terminated at or before the firewall if the firewall is to inspect the data and make decisions about the content being sent or received; otherwise, the data passing through is still encrypted with SSL and opaque to inspection. The catch is that if the firewall, or some proxy in front of or behind the firewall, terminates SSL, the user won't see a lock icon in her browser and may become confused or concerned that she cannot do secure online banking, for example.

SSL proxies

There is a crafty solution to the SSL problem: an SSL proxy server. A proxy server can set up its own outbound SSL connection to the server the user wants to contact. The proxy server then negotiates a separate SSL connection with the user’s browser. The user’s browser doesn’t know what is on the other side of the proxy, so it cannot get to the other side without the proxy’s help.

The proxy then impersonates the destination web server by generating and signing, on the fly, a certificate for that web destination. The only way this works is if the user's browser trusts the proxy as a certificate authority: if the browser has a Certificate Authority (CA) certificate from the company in its trusted store of certificates, it will accept the proxy's generated certificate as legitimate.
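To see how the on-the-fly signing works mechanically, here is a sketch using the openssl command line. The CA name and hostname are hypothetical, and a real proxy does all of this in memory, per connection, rather than with files on disk:

```shell
# Hypothetical demo of what an SSL-inspecting proxy does internally: it
# holds a CA key that client browsers trust, and mints a certificate for
# whatever hostname the user asked for.
cd "$(mktemp -d)"

# 1. The proxy's own CA (this is what gets pushed into browsers' trust stores)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
    -subj "/CN=Example Corp Inspection CA" \
    -keyout ca.key -out ca.crt 2>/dev/null

# 2. On the fly: a key and signing request for the site the user requested
openssl req -newkey rsa:2048 -nodes \
    -subj "/CN=www.examplebank.com" \
    -keyout site.key -out site.csr 2>/dev/null

# 3. Sign it with the proxy's CA
openssl x509 -req -in site.csr -CA ca.crt -CAkey ca.key \
    -set_serial 1 -days 1 -out site.crt 2>/dev/null

# Any client that trusts ca.crt will now accept site.crt as legit
openssl verify -CAfile ca.crt site.crt
```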

Once this sort of proxy is set up, it is possible to thoroughly inspect all content flowing through it, with no worry about encryption getting in the way. Of course, now that it is possible to inspect the contents of the web transaction, an organization such as the Electronic Frontier Foundation (http://www.eff.org) might complain about the loss of the user's privacy.

Network tiers and the DMZ

Multiple firewalls can be used to build tiers within trust boundaries. By building a tier with a firewall, all the rules controlling access to that tier can be managed at each end. This allows for a flexible yet restrictive network configuration.

Where we see this type of configuration most is in the setup of a traditional demilitarized zone (DMZ) style firewall configuration. Figure 4-1 shows a typical tiered network.

A tiered network architecture
Figure 4-1. A tiered network architecture

If an attack happens within the DMZ it is isolated to this segment of the network, thereby limiting the damage an attacker can do. The secondary firewall protects the internal network in the event a DMZ machine is compromised.

Separation of duties

Boy, that’s a beefy machine you got there. It’s going to make a fine web server. However, you might be thinking it’s big enough to do everything (Web, FTP, news, mail, and so on), and it might be. But, the problem is that if the machine is compromised, everything is compromised. You don’t want that; that would be bad.

Thus it is a good practice to isolate these services and spread out functionality by creating a separate hardened machine for each major Internet service:

  • Firewalls

  • Proxies and gateway servers

  • Web servers

  • Application servers

  • Database servers

  • Logging servers

  • Email servers

  • FTP servers

Running these services separately limits the impact of an attack and reduces the surface area with which the attacker has to work. Yep, that’s right. Now you have an excuse to buy more machines! Remember, you are the one who wanted to get into the web site hosting business, right?

At the very least, there should be a point on your network before the web server that you can use as a point of inspection and detection. You may not need a full DMZ type setup, but if you are going to play on the Internet, I advise that you at least have a well-configured router and a firewall. Now that the network is prepared we can go back to building that web server.

Host Security

Imagine your web server as a gladiator about to go into battle. If it's going to have any chance of survival, it must be battle ready. Basically, you want something more like Russell Crowe and less like Mel Brooks.

Additionally, the server should be hardened as though there were no firewall on the network. Firewalls, such as in the case of port 80, are not a silver bullet. Servers behind firewalls can still be compromised. So, each server needs to look after and take care of itself.

In the following section I am going to build a secure server using a distribution of Linux called Ubuntu Server Edition. However, most, if not all, of these concepts can be applied equally to other operating systems.


Ubuntu comes from an African word meaning humanity to others. The Ubuntu distribution of Linux brings the spirit of Ubuntu to the software world.

Built on a branch of the Debian distribution of Linux—known for its robust server installations and glacial release cycle—the Ubuntu Server has a strong heritage for reliable performance and predictable evolution. The first Ubuntu release with a separate server edition was 5.10, in October 2005. Figure 4-2 shows the bootup screen for the Ubuntu server installation disk.

The Ubuntu installation screen
Figure 4-2. The Ubuntu installation screen

A key lesson from the Debian heritage is that of security by default. The Ubuntu Server has no open ports after installation and contains only the essential software needed to build a secure server. This makes for an ideal place to start when thinking about building a web server.

Automatic LAMP

Additionally, in about 15 minutes, the time it takes to install Ubuntu Server Edition, you can have a LAMP (Linux, Apache, MySQL, and PHP) server up and ready to go.

When booting off the Ubuntu installation disk you are presented with the option to install a LAMP server. This option saves all the time and trouble associated with integrating Linux, Apache, MySQL, and PHP. Ubuntu integrates these things for you with security and ease of deployment in mind.


If you want to follow along with me, you may download and install the Ubuntu Server Edition from http://www.ubuntu.com. There is also an excellent tutorial available online at http://www.howtoforge.com/perfect_setup_ubuntu_6.06.

OS Hardening

Hardening a server’s operating system is not a trivial task—especially when it is your goal to make the server available on the Internet. Therefore extra precautions need to be taken, and every facet of the OS needs to be examined. Most modern operating systems are designed to be flexible and often configure things by default that can be potential security risks.


Mick Bauer’s book, Linux Server Security (O’Reilly) is one of the best guides for installing and securing everything Linux, and creating real solid bastion servers. If you’re serious about wanting a secure bastionized server, I highly recommend you read this book.

I am starting with a completely clean system. I went out to the Ubuntu web site, downloaded the newest version of the Ubuntu Server, and accepted all the default installation options.

Also—because it's so cool—I chose the LAMP option to get the as-advertised quick build of Apache installed, secured, and configured. Now, the installer has left me with a clean Linux build with no open ports, an administrator account, and a disabled root account.

Figure 4-3 shows the screen after the Ubuntu installation is complete.

Ubuntu finished installation screen
Figure 4-3. Ubuntu finished installation screen

By default, the root account has been disabled for login. Ubuntu is one of the few Linux distributions to enforce this recommended security policy by default. Don’t worry, you still can perform administration tasks using superuser do (sudo).

I am going to log in to the system using the administration account I declared as part of the install process and then type:

sudo -i

This command provides an interactive (root) shell using sudo, so I don’t have to type sudo in front of every command.

Accounts management

Remember, we’re not building an ordinary laptop or desktop; we’re building a secure server. Very few people—only administrators—should be able to log in. Therefore, we must strictly control who and what is going to have access to this machine.

This starts by identifying all users. On my fresh Ubuntu install, and most other versions of Linux or Unix, you simply list the contents of the /etc/passwd file to reveal the system’s users.

The format of the passwd file is as follows:

Username:coded-password:UID:GID:user information:home-directory:shell

Example 4-1 shows the contents of my /etc/passwd file after my fresh installation.

Example 4-1. The /etc/passwd file
list:x:38:38:Mailing List Manager:/var/list:/bin/sh
gnats:x:41:41:Gnats Bug-Reporting System (admin):/var/lib/gnats:/bin/sh
mysql:x:103:104:MySQL Server,,,:/var/lib/mysql:/bin/false

Look at that; 24 accounts were created on a fresh install! Most people don’t even know for what these accounts are used. Several of these accounts are not necessary for a web server, so I will disable them by assigning a shell that cannot log in (/bin/false):

list:x:38:38:Mailing List Manager:/var/list:/bin/false
gnats:x:41:41:Gnats Bug-Reporting System (admin):/var/lib/gnats:/bin/false
mysql:x:103:104:MySQL Server,,,:/var/lib/mysql:/bin/false

Assigning a shell of /bin/false prevents a real person from being able to log in to the system via that account. After some time has passed, you may want to remove these accounts entirely.
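A quick way to audit your progress is to print only the accounts that still have a real login shell. The sample file below is hypothetical; on a live system you would point the awk command at /etc/passwd itself:

```shell
# A hypothetical sample of /etc/passwd to audit (on a live system,
# run the awk command below against /etc/passwd itself).
cat > sample_passwd <<'EOF'
root:x:0:0:root:/root:/bin/bash
list:x:38:38:Mailing List Manager:/var/list:/bin/false
admin:x:1000:1000:Admin,,,:/home/admin:/bin/bash
mysql:x:103:104:MySQL Server,,,:/var/lib/mysql:/bin/false
EOF

# Field 7 is the login shell; print every account that still has one.
awk -F: '$7 != "/bin/false" && $7 != "/usr/sbin/nologin" {print $1, $7}' sample_passwd
```

Only root and admin should show up; the accounts already assigned /bin/false stay quiet.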


On a Windows machine you can do this by right-clicking on My Computer and selecting Manage → System Tools → Local Users and Groups → Users.

For what are these accounts used, and why do I need to have them enabled? Excellent questions. For a program to run as a process, make connection, or read and write from the file system it has to “run as” a user. The user accounts are for programs and processes that are part of the core install. If you can determine that a service is not necessary for your machine, you can disable the service and delete the corresponding account.

Finally, the security principle of least privilege should also apply to users. No user, application, or process should have more privileges than it needs to perform its functions. A common way for an attacker to gain higher operating privileges is to cause a buffer overflow in a program already running with superuser privileges. Software defects that allow a user to execute with superuser privileges are a huge security issue, and the fixing of such software is a major part of maintaining a secure system.

Running services

In the case of a bastion web server sitting out on the Internet we want to be running as little as possible, and certainly not running any services that open up connections other than the web server itself.

Here is a list of the default services installed on my fresh Ubuntu system:

Sysklogd - the system logger
klogd - the kernel logging facility
mysql - the mysql database
mysql-ndb-mgm - supporting mysql service
makedev - create the devices in /dev used to interface with drivers in the kernel
mysql-ndb - supporting mysql service
rsync - facility for remote syncing of files
atd - at daemon for running commands at a specified time
cron - cron daemon for running commands on a periodic schedule
apache2 - the apache2 web server
rmnologin - removes /etc/nologin, allowing users to log in to your machine


On a Windows machine you can do this by right-clicking on My Computer and selecting Manage → Services and Applications → Services.

Start by looking through the list of running services and identify them. A modern operating system has many services, too many. For each one ask yourself whether the service is something that should be running on a web server.

In the case of this list, I plan on using everything listed. Your mileage may vary. For example, I chose Ubuntu's LAMP install, which installed the MySQL database services. If I didn't want to run the database, I would disable it.

After you identify all the running services, make sure you know what each service is and what it does. The goal is to turn off as much as possible.
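If you decide a service has to go, on an Ubuntu system of this vintage you can stop it and remove its startup links. A sketch only, shown commented out; mysql here stands in for whatever service you are retiring (on my LAMP build I'm keeping it):

```shell
# Stop the running service, then remove its boot-time startup links so
# it stays off after a reboot. (Hypothetical example -- substitute the
# service you actually want to retire.)
# /etc/init.d/mysql stop
# update-rc.d -f mysql remove
```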


Some commands run with a special bit set, the set user ID (SUID) bit, that instructs the OS to run the command as a privileged user.

The idea is that some commands or daemon processes need to run with higher permissions than that of the user. Take for example the passwd command. If a user wants to change his password he executes the passwd command, but the user does not normally have permission to write to the /etc/passwd file. With the SUID bit set, the command can perform its function with superuser privileges.

This is obviously a security concern. It is critical that any command or process that has this bit set be something that is necessary and make sense given the system that we are creating. The best way to find these sorts of files is to issue a command that looks like this:

find / -perm +4000 -user root -type f -print

This command finds all the SUID files owned by root. Examine the list and remove or disable any unnecessary items you find.
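If you want to see the mechanics of the SUID bit safely, you can stage it in a scratch directory. The file names here are made up, and note that the real audit runs against / as root, as shown above:

```shell
# Stage a scratch directory with one normal file and one SUID file.
dir=$(mktemp -d)
touch "$dir/normal_tool" "$dir/suid_tool"
chmod 0755 "$dir/normal_tool"
chmod 4755 "$dir/suid_tool"    # the leading 4 sets the SUID bit

# -perm -4000 matches any file with the SUID bit set; only suid_tool
# should appear. (The real audit adds -user root and starts at /.)
find "$dir" -perm -4000 -type f -print
```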

Logging and Auditing

A critical factor to a web server’s security is its logging. If there is an attack, often the most critical evidence will be found in the logs. Therefore, it is vital that the logs and logging mechanisms be securely implemented.


Syslog is the default logging facility on most Unix/Linux-based systems. It records events coming from the kernel (via klogd, a system daemon that intercepts and logs Linux kernel messages) and from any program or process running on the system. It can even record remote messages sent from other network devices and servers.

Facilities and priorities

Syslog categorizes its messages by facility. Facilities are system-named buckets for reporting syslog messages. Supported facilities on most Linux/Unix systems are:

auth

For many security events

authpriv

For access control related messages

cron

Events that occur during cron jobs

daemon

For system processes and daemons

kern

For kernel messages

lpr

For printer and printing related messages

mail

For mail handling messages

mark

Messages generated by syslog itself

news

Messages having to do with the news service

syslog

More messages generated by syslog

user

The default facility when none is defined

uucp

For logging uucp related messages

local0 through local7

Miscellaneous default services

Unlike facilities, priorities are hierarchical levels designed to indicate the urgency of the message being logged. The following is a list of priorities in increasing order of urgency:

debug

Debug information, for debugging software

info

Just thought you might like to know

notice

Something that should be noted

warning

Something bad may have or could happen

err

Something bad happened

crit

Something really bad happened

alert

Hey! Something bad is happening! Call the cell phone!

emerg

Quick, pull the plug, shut down the Internet!

Syslog comes preconfigured on most distributions of Linux including my fresh Ubuntu install. The default location for log files is located at /var/log.

Syslog configuration file (/etc/syslog.conf)

Although the default configuration is acceptable, the /etc/syslog.conf file is still worth exploring, as you’ll see in Example 4-2.

Example 4-2. The /etc/syslog.conf file
#  /etc/syslog.conf     Configuration file for syslogd.
#                       For more information see syslog.conf(5)
#                       manpage.

# First some standard logfiles.  Log by facility.

auth.info;authpriv.*            /var/log/auth.log
*.*;auth,authpriv.none          -/var/log/syslog
#cron.*                         /var/log/cron.log
daemon.*                        -/var/log/daemon.log
kern.*                          -/var/log/kern.log
lpr.*                           -/var/log/lpr.log
mail.*                          -/var/log/mail.log
user.*                          -/var/log/user.log
uucp.*                          /var/log/uucp.log

# Logging for the mail system.  Split it up so that
# it is easy to write scripts to parse these files.
mail.info                       -/var/log/mail.info
mail.warn                       -/var/log/mail.warn
mail.err                        /var/log/mail.err

# Logging for INN news system
news.crit                       /var/log/news/news.crit
news.err                        /var/log/news/news.err
news.notice                     -/var/log/news/news.notice

# Some `catch-all' logfiles.
*.=debug;\
        auth,authpriv.none;\
        news.none;mail.none     -/var/log/debug
*.=info;*.=notice;*.=warn;\
        auth,authpriv.none;\
        cron,daemon.none;\
        mail,news.none          -/var/log/messages

# Emergencies are sent to everybody logged in.
*.emerg                         *

# I like to have messages displayed on the console, but only
# on a virtual console that I usually leave idle.
#daemon,mail.*;\
#       news.=crit;news.=err;news.=notice;\
#       *.=debug;*.=info;\
#       *.=notice;*.=warn       /dev/tty8

# The named pipe /dev/xconsole is for the `xconsole' utility.  To
# use it, you must invoke `xconsole' with the `-file' option:
#    $ xconsole -file /dev/xconsole [...]
# NOTE: adjust the list below, or you'll go crazy if you have a reasonably
#      busy site..
daemon.*;mail.*;\
        news.err;\
        *.=debug;*.=info;\
        *.=notice;*.=warn       |/dev/xconsole

At the very least, the auth facility should have a priority of info or higher:

auth.info         /var/log/auth.log

Disk space is cheap, so capturing everything is not completely out of the question:

*.*             /var/log/all_messages

Decide what is important to you and run with it.


Logs mean nothing unless you do something with them. They must be processed, monitored, and reviewed. Sometimes logs are all that you have after an attack—if you’re lucky, and the attacker didn’t destroy or alter them.

With that in mind, decide for what things it is worth interrupting dinner, and which ones can go unnoticed.

Process accounting

After syslog is configured, you should also enable process accounting. Process accounting is good for recording all commands users execute on the system. On my Ubuntu install I use apt-get to install the base process accounting (acct) package.

apt-get install acct

Selecting previously deselected package acct.
(Reading database ... 16507 files and directories currently installed.)
Unpacking acct (from .../acct_6.3.99+6.4pre1-4ubuntu1_i386.deb) ...
Setting up acct (6.3.99+6.4pre1-4ubuntu1) ...
Starting process accounting: Turning on process accounting, file set to

After downloading and installing acct, you need to create an accounting database.

touch /var/log/account/pacct
chown root /var/log/account/pacct
chmod 0644 /var/log/account/pacct

The acct database is stored in binary as a single file /var/log/account/pacct, so it is not easily editable. This forces an attacker to delete the whole file to cover her tracks. The deletion of the file, however, by itself confirms that something suspicious happened.

Now, if you ever want to audit what a particular user has done, you can do so by running:

lastcomm [user-name]


Many have complained about Windows and how it handles logs. The complaints stem from the fact that most logging is disabled by default, and that the locations for the log files can be problematic for some situations. Even with these limitations, some prudent steps can be taken to help ensure that the system retains some valuable log information.

You should enable security auditing. Windows does not enable security auditing by default. To do so, two configuration changes are required.

On Windows you can enable audit logging by changing the policy settings located at Start → Settings → Control Panel → Administrative Tools → Local Security Policy.

Minimally, you should enable auditing for the following events:

  • Logon and logoff

  • User and group management

  • Security policy changes

  • Restart, shutdown, and system

You can also enable auditing of any file or directory structure by setting its properties (Security → Advanced Settings → Auditing).

A logging server

The best idea is to dedicate a server on your network, harden it, and send log messages to it from all your other machines. This way, the logs do not get compromised when the server does.

Having a centralized, hardened, logging server is ideal for log management. You can harden the server to allow only logging from specific IP addresses and to lock down all the listening ports except for the one for syslog. Having the logs stored in a different location than the web server means an attacker may be able to add false messages, but he won’t be able to destroy any logged messages.

Syslogd will accept logging messages remotely if it is instructed to do so on startup with the -r (for remote) startup option.
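As a sketch, the client side is one line in /etc/syslog.conf, and the server side is the -r flag. The host name loghost is a hypothetical placeholder; shown as a config fragment rather than commands to run:

```shell
# On each machine that should forward its logs, append to /etc/syslog.conf:
#   *.*        @loghost
#
# On the central log server, start syslogd with -r so it accepts remote
# messages; on Ubuntu that means editing /etc/default/syslogd:
#   SYSLOGD="-r"
# and then restarting the daemon:
#   /etc/init.d/sysklogd restart
```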

Keeping Up to Date

Now that the server is locked down with a minimal set of accounts and services, it is important to patch everything to make sure that everything is up-to-date. There are several update managers for Linux; the Advanced Packaging Tool (APT) comes with Ubuntu.

Keeping up-to-date is critical to the security of a web server. It used to be that there was a lag of months (30–120 days) between when a vulnerability was discovered and when it was successfully exploited on a system. Today, that lag has been reduced to mere hours.


The sources for APT reside in its configuration file /etc/apt/sources.list. You can edit this file to include other repositories on the Internet.

To update the system, basically, it’s as simple as:

apt-get update

Ign cdrom://Ubuntu-Server 6.10 _Edgy Eft_ - Release i386 (20061025.1) edgy/main
Ign cdrom://Ubuntu-Server 6.10 _Edgy Eft_ - Release i386 (20061025.1)
edgy/restricted Translation-en_US
Get:1 http://us.archive.ubuntu.com edgy Release.gpg [191B]
Ign http://us.archive.ubuntu.com edgy/main Translation-en_US
Get:2 http://security.ubuntu.com edgy-security Release.gpg [191B]
Ign http://security.ubuntu.com edgy-security/main Translation-en_US
Ign http://us.archive.ubuntu.com edgy/restricted Translation-en_US
Ign http://security.ubuntu.com edgy-security/restricted Translation-en_US
Hit http://security.ubuntu.com edgy-security Release
Get:3 http://us.archive.ubuntu.com edgy-updates Release.gpg [191B]
Ign http://us.archive.ubuntu.com edgy-updates/main Translation-en_US
Ign http://us.archive.ubuntu.com edgy-updates/restricted Translation-en_US
Get:4 http://us.archive.ubuntu.com edgy-backports Release.gpg [191B]
Ign http://us.archive.ubuntu.com edgy-backports/main Translation-en_US
Ign http://us.archive.ubuntu.com edgy-backports/restricted Translation-en_US
Hit http://us.archive.ubuntu.com edgy Release
Hit http://security.ubuntu.com edgy-security/main Packages
Get:5 http://us.archive.ubuntu.com edgy-updates Release [23.3kB]
Hit http://security.ubuntu.com edgy-security/restricted Packages
Hit http://security.ubuntu.com edgy-security/main Sources
Hit http://security.ubuntu.com edgy-security/restricted Sources
Hit http://us.archive.ubuntu.com edgy-backports Release
Hit http://us.archive.ubuntu.com edgy/main Packages
Hit http://us.archive.ubuntu.com edgy/restricted Packages
Hit http://us.archive.ubuntu.com edgy/main Sources
Hit http://us.archive.ubuntu.com edgy/restricted Sources
Get:6 http://us.archive.ubuntu.com edgy-updates/main Packages [53.8kB]
Get:7 http://us.archive.ubuntu.com edgy-updates/restricted Packages [14B]
Get:8 http://us.archive.ubuntu.com edgy-updates/main Sources [16.3kB]
Get:9 http://us.archive.ubuntu.com edgy-updates/restricted Sources [14B]
Hit http://us.archive.ubuntu.com edgy-backports/main Packages
Hit http://us.archive.ubuntu.com edgy-backports/restricted Packages
Hit http://us.archive.ubuntu.com edgy-backports/main Sources
Hit http://us.archive.ubuntu.com edgy-backports/restricted Sources
Fetched 93.6kB in 9s (9939B/s)
Reading package lists... Done

APT keeps an inventory of what you have installed and cross-checks it against a central repository on the Internet. If there is an update for a package, APT automatically goes out to the Internet and downloads it. Then you can control when the updates get applied using the upgrade option.

After APT has retrieved any updates for your installed packages, you can apply the updates with:

apt-get upgrade
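To make the "check regularly" part automatic, one approach is a cron entry that refreshes the package index nightly and reports what is pending. This is a hypothetical sketch shown as a config fragment; the -s flag simulates the upgrade, so nothing is actually changed without you:

```shell
# A hypothetical root crontab entry: refresh the package index at 4 a.m.
# and let cron mail you a report of packages with pending upgrades.
# (-s simulates the upgrade; nothing is installed automatically.)
# 0 4 * * *   apt-get update -qq && apt-get -s upgrade | grep '^Inst'
```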

Windows update

For all others in the world, there is of course Windows update. Microsoft tends to release its monthly patches on the second Tuesday of the month. So, on those Tuesdays, if you are running a Windows server, I would skip my dinner plans, kick off the download process, and order a pizza.

All the major operating systems have a vehicle for distributing patches. Figure out which one is right for you, and implement a procedure for checking for updates regularly.

Host Firewall

Remember, I said that this machine needs to act like there is no firewall or other device protecting it from unsavory network traffic. Most Linux systems, including my Ubuntu system, come with a firewall built-in. It’s called iptables—or ipchains if you are using a kernel of version 2.2 or older.

Using iptables

This is some black magic, but well worth it. On my Ubuntu system, iptables comes installed and enabled, but it is configured to let all network traffic through.

Because this machine must defend itself, we should alter this default configuration with some basic firewall rules locally. Example 4-3 shows an iptables script for a bastion server running HTTP.

Example 4-3. A sample IPTables script
# IPTables Local Firewall Script for bastion web servers.
# Adapted from bastion script found in:
# Bauer, Michael, Linux Server Security, second edition (O'Reilly)

# Please enter the name of your server
MYSERVER="myserver"

# Your server's IP Address (substitute your own)
IPADDRESS="192.0.2.10"

# IPTABLES Location
IPTABLES=/sbin/iptables
test -x $IPTABLES || exit 5

case "$1" in
start)
    echo -n "Loading $MYSERVER's ($IPADDRESS) Packet Filters..."

    # Load kernel modules first
    modprobe ip_tables
    modprobe ip_conntrack_ftp

    # Flush old custom tables
    $IPTABLES --flush
    $IPTABLES --delete-chain

    # Set default-deny policies for all three default chains
    $IPTABLES -P INPUT DROP
    $IPTABLES -P FORWARD DROP
    $IPTABLES -P OUTPUT DROP

    # Exempt Loopback address
    $IPTABLES -A INPUT -i lo -j ACCEPT

    # Spoofing this host?
    $IPTABLES -A INPUT -s $IPADDRESS -j LOG --log-prefix "Spoofed $MYSERVER!"
    $IPTABLES -A INPUT -s $IPADDRESS -j DROP

    # Add some generic Anti-spoofing rules
    $IPTABLES -A INPUT -s 255.0.0.0/8 -j LOG --log-prefix "Spoofed source IP!"
    $IPTABLES -A INPUT -s 255.0.0.0/8 -j DROP
    $IPTABLES -A INPUT -s 127.0.0.0/8 -j LOG --log-prefix "Spoofed source IP!"
    $IPTABLES -A INPUT -s 127.0.0.0/8 -j DROP
    $IPTABLES -A INPUT -s 10.0.0.0/8 -j LOG --log-prefix "Spoofed source IP!"
    $IPTABLES -A INPUT -s 10.0.0.0/8 -j DROP
    $IPTABLES -A INPUT -s 172.16.0.0/12 -j LOG --log-prefix "Spoofed source IP!"
    $IPTABLES -A INPUT -s 172.16.0.0/12 -j DROP
    $IPTABLES -A INPUT -s 192.168.0.0/16 -j LOG --log-prefix "Spoofed source IP!"
    $IPTABLES -A INPUT -s 192.168.0.0/16 -j DROP

    # Too Popular?
    $IPTABLES -A INPUT -s www.slashdot.org -j LOG --log-prefix "Slashdotted!"
    $IPTABLES -A INPUT -s www.slashdot.org -j DROP
    $IPTABLES -A INPUT -s www.digg.com -j LOG --log-prefix "Dugg!"
    $IPTABLES -A INPUT -s www.digg.com -j DROP

    # INBOUND POLICY ----------------------------------------

    # Accept inbound packets that are part of previously-OK'ed sessions
    $IPTABLES -A INPUT -j ACCEPT -m state --state ESTABLISHED,RELATED

    # Accept inbound packets that initiate HTTP sessions
    $IPTABLES -A INPUT -p tcp -j ACCEPT --dport 80 -m state --state NEW

    # Accept inbound packets that initiate Secure HTTP sessions
    $IPTABLES -A INPUT -p tcp -j ACCEPT --dport 443 -m state --state NEW

    # Allow inbound SSH (22)
    #$IPTABLES -A INPUT -p tcp --dport 22 -m state --state NEW -j ACCEPT

    # OUTBOUND POLICY ----------------------------------------

    # If it's part of an approved connection, let it out
    $IPTABLES -A OUTPUT -m state --state ESTABLISHED,RELATED -j ACCEPT

    # Allow outbound DNS queries
    $IPTABLES -A OUTPUT -p udp --dport 53 -m state --state NEW -j ACCEPT

    # Allow outbound HTTP (80) for web services?
    $IPTABLES -A OUTPUT -p tcp --dport 80 -m state --state NEW -j ACCEPT

    # Allow outbound ping (debug)
    #$IPTABLES -A OUTPUT -p icmp -j ACCEPT --icmp-type echo-request

    # Allow outbound SMTP (25) for notifications
    #$IPTABLES -A OUTPUT -p tcp --dport 25 -m state --state NEW -j ACCEPT

    # Allow outbound SSH (22)
    #$IPTABLES -A OUTPUT -p tcp --dport 22 -m state --state NEW -j ACCEPT

    # Allow outbound NTP (123) for time sync?
    #$IPTABLES -A OUTPUT -p udp --dport 123 -m state --state NEW -j ACCEPT

    # Log everything that gets rejected/DROP'd
    $IPTABLES -A INPUT -j LOG --log-prefix "Packet dropped by default (INPUT):"
    $IPTABLES -A OUTPUT -j LOG --log-prefix "Packet dropped by default (OUTPUT):"
    ;;

wide_open)
    echo -n "*** WARNING ***"
    echo -n "Unloading $MYSERVER's ($IPADDRESS) Packet Filters!"
    # Flush current table
    $IPTABLES --flush
    # Open up the gates.
    $IPTABLES -P INPUT ACCEPT
    $IPTABLES -P FORWARD ACCEPT
    $IPTABLES -P OUTPUT ACCEPT
    ;;

stop)
    echo "Shutting down packet filtering..."
    $IPTABLES --flush
    ;;

status)
    echo "$MYSERVER Firewall (IPTables) running status:"
    $IPTABLES --line-numbers -v --list
    ;;

*)
    echo "Usage: $0 {start|stop|wide_open|status}"
    exit 1
    ;;
esac

Running this script is a good place to start. It sets up the basics. I really can’t get into an in-depth discussion about iptables here, but if you are interested in more information on the subject, I again urge you to read Linux Server Security (O’Reilly) or read any number of online resources to learn this powerful yet complicated packet filtering system.

Intrusion Detection

It’s a big bad Internet, and many curious people all over the world are interested in seeing what you have. If you put a server on the Internet it will be attacked; the question is whether you will know it.

Sometimes it is obvious. If all the pictures of people have been replaced with monkeys then you might suspect there has been an incident. But not all attacks are so obvious. Sometimes the goal for the attacker was merely to log in, or to place some code on your server to help her out later on. If you want to detect intruders, there are some standard places to start.

Log examination

It’s late, you’re having a hard time getting to sleep, so you fire up vi and start reading through your logs. You get about a third of the way into the http_access.log and notice several odd HTTP requests. These could be attacks. The fact that the log entries are still there may indicate that the server was attacked but not compromised.
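A simple grep pass can surface the obvious probes before you resort to reading the log line by line. Here is a minimal sketch; the log path and the sample entries are fabricated for illustration, so point it at your real access log in practice:

```shell
#!/bin/sh
# Sketch: flag suspicious requests in an Apache access log.
# The log file and its entries below are fabricated samples.
LOG=/tmp/demo_access.log

cat > "$LOG" <<'EOF'
10.0.0.5 - - [01/Mar/2007:10:00:01] "GET /index.html HTTP/1.1" 200 512
10.0.0.9 - - [01/Mar/2007:10:00:02] "GET /../../etc/passwd HTTP/1.1" 404 208
10.0.0.9 - - [01/Mar/2007:10:00:03] "GET /scripts/cmd.exe HTTP/1.1" 404 208
EOF

# Directory-traversal attempts and Windows command-shell probes
grep -E '\.\./|cmd\.exe' "$LOG"
```

The two flagged lines are exactly the sort of thing worth investigating further: a traversal attempt against /etc/passwd and a probe for a Windows command shell.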

File integrity checks

One way to make sure nothing has been altered on the system is to compare the existing file system to that of a stored snapshot. This can be done by using file integrity checkers that keep a database of all the files on the system, their sizes, and other relevant information and use that data to compare against the current running system. If something changes, notifications can be sent to the appropriate people.
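The idea can be sketched with nothing more than sha256sum: record a baseline of file hashes, then compare the current state against it later. The paths here are hypothetical and the scope is a single file; a real deployment covers the whole system and stores the baseline on read-only media:

```shell
#!/bin/sh
# Sketch of the file-integrity idea using plain sha256sum.
# Paths are hypothetical; real tools (e.g., Tripwire) track far more
# metadata and keep the baseline somewhere an intruder cannot alter it.
set -e
mkdir -p /tmp/integrity-demo
cd /tmp/integrity-demo
echo "original" > app.conf

# 1. Record a baseline of file hashes
sha256sum app.conf > baseline.sha256

# 2. Simulate an intruder altering the file
echo "tampered" > app.conf

# 3. Compare the current state against the baseline
if ! sha256sum -c baseline.sha256 >/dev/null 2>&1; then
    echo "ALERT: file integrity check failed"
fi
```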

One of the more popular of these programs is called Tripwire. Tripwire is a host-based intrusion detection system available for free at http://sourceforge.net/projects/tripwire. It keeps track of a system’s current file state and reports any changes. If an intruder adds, deletes, or modifies files on the file system, Tripwire can detect and report on the changes.


Tripwire can also serve many other purposes, such as integrity assurance, change management, policy compliance, and more.

Network monitoring

Another way to detect attacks is to inspect the network traffic directly and see if there is anything nefarious going on. Again, we don’t have to reinvent the wheel. Good network inspection programs are available, too.

Snort is perhaps the most popular network monitoring tool. Snort is also available for free on the Internet (http://www.snort.org). Snort is a network intrusion detection application that can inspect network traffic and react to suspicious activity. Snort acts in realtime, analyzing each packet of data on the wire and can inspect for content matching, probe signatures, OS fingerprinting attempts, buffer overflow attempts, and many other types of behavior.

Snort can be used with other software, such as SnortSnarf, OSSIM, sguil, and Snort’s graphical user interface, the Basic Analysis and Security Engine (BASE).

Make a Copy

Whew! That was a lot of work. Now, quick! Before you do anything else go and make a copy of everything. If you ever want to do this again, it would be easier to make a copy of what you just built than to do it all over again, don’t you think? After the server is fully up to date you should make an image of the entire operating system to serve as a template for future systems.


Partimage is a Ubuntu (Universe) package that will copy the entire contents of a Linux partition to a backup file. Creating an image file is great for:

  • Making a backup of the entire system

  • Installing the same configuration on several machines

  • Taking a snapshot in time, so as to record the system’s current state

A very good tutorial on how to back up an Ubuntu partition with Partimage is located at http://www.psychocats.net/ubuntu/partimage.


dd_rescue is a total system recovery utility designed to copy, byte by byte, the entire contents of a partition.

dd_rescue /dev/hda1 /dev/sda1

This will overwrite the contents of /dev/sda1 with a copy of /dev/hda1. If you do not want to destroy the contents of /dev/sda1 and have enough space, you can mount it (say, at /mnt/sda1) and write the image to a file instead:

dd_rescue /dev/hda1 /mnt/sda1/hda1backup.img

Recovery then looks something like this:

sudo mkdir /recovery
sudo mount -o loop /mnt/sda1/hda1backup.img /recovery

Incident Response

Incidents can and do happen. Security is a weakest link problem, and as long as you’re plugged into the Internet you have to be aware of the dangers and what can happen. So, if an incident does happen you need to be prepared for it. By being prepared you can minimize the damage of an attack and act swiftly instead of wondering what to do next.

So, why would anyone attack you? The answer could be as simple as because they can. However, usually attackers have a reason: there is something they want on your machine. Common attacks against Internet servers include:

  • Attacks against the server itself (to gain access)

  • Attacks against the content (defacement)

  • Attacks against the entity (theft, data, information gathering, defacement, slander)

Knowing which one of these attacks is more likely to happen to your server will help in preparing possible recovery actions and responses.

Have a plan (disaster recovery plan)

Sometimes you have to plan for the worst. Right now, you should stop and think about what you would do if your machine were attacked. Imagine the types of attacks that could happen. What is the worst thing that could happen? Scary, huh? Now imagine how you would respond. What would you do? Who would you call?

By identifying assets, visualizing the types of attack, and thinking of possible outcomes you can come up with a disaster recovery plan that can be executed in the event of an incident:

Identify your assets

What assets do you need to protect? What is on the server that should not fall into the hands of an attacker? How is that information being protected?

Visualize an attack path

How would it happen? What is the worst that could happen? Knowing everything you know about the server, how would you try to break in?

Evaluate the risk of that asset being compromised

What is the risk?

Formulate a response

What’s the best course of action to take if the asset is compromised? Who needs to know; what needs to be done?

Take a reference snapshot of the file system and store it on removable media

In the event of an incident, this will be useful in identifying the extent of the damage.

Create a forensics disk that has known versions of programs, so you know it’s safe to use

A good set of common tools has already been assembled as part of a SourceForge project called Live View: http://liveview.sourceforge.net.

Document all your findings

Create a procedure for each potential event and a contacts list.

Report the incident

Contact all the people on the contacts list and notify them of the incident.

HELP! I’ve been hacked!

Don’t panic. Take a deep breath. Everything is going to be OK. Do you have a plan? If you do, now is the time to execute it. If you don’t, we need to try to contain what happened. To do this, we need to retake control of the system using reliable tools:

  1. Create a forensics toolkit CD complete with all the executables you will need to assess the system—such as Live View (http://liveview.sourceforge.net).

  2. Before you unplug anything, create an image of the current state of the system to preserve any evidence.

  3. Use the forensics toolkit CD.

  4. Check the file system for commands that may have been tampered with—such as ps, ls, netstat. Do a file integrity scan and perform a file system audit. Check all running processes, and make sure that a root kit or a Trojan is not running. Inspect the logs for evidence.

  5. Report the incident to the proper authorities.

The main goal is to try to determine the source of the attack. Once that is discovered, you can alter firewall rules and do a more solid job of locking down.

Web Server Hardening

Now that we have a secure, stable, bastionized host to begin with we can look at the web server itself. First, you are going to have to decide which web server to use. Ubuntu came with Apache2—at least that is what was installed after I chose the install LAMP option—so, I am going to start there. But several web servers are available, some part of larger frameworks like application servers.

The following are some general guidelines to protecting web servers/traffic:

  • Run SSL. One of the best security investments you can make is a digital certificate (http://www.verisign.com) for your web server. In an age when Internet attacks are on the rise, it is hard to tell a secure site from an insecure one. SSL goes a long way toward solving that problem.

  • Require that all cookies going to the client are marked secure.

  • Authenticate users before initiating sessions.

  • Do server monitoring.

  • Read the logs.

  • Validate file integrity.

  • Review web applications for software flaws and vulnerabilities.

  • Consider running web applications behind a web proxy server, which prevents requests from directly accessing the application. This creates a place where content filtering can be done before data reaches the application.
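Some of these guidelines can be spot-checked from the command line. The sketch below audits a canned set of response headers for cookies missing the secure flag; the header values are made up, and in practice you would feed in live output from `curl -sI https://yoursite/`:

```shell
#!/bin/sh
# Sketch: find Set-Cookie headers that lack the "secure" attribute.
# The headers below are canned examples for illustration.
cat > /tmp/headers.txt <<'EOF'
HTTP/1.1 200 OK
Set-Cookie: sessionid=abc123; path=/; secure
Set-Cookie: prefs=blue; path=/
EOF

# Print any cookie sent without the secure flag
grep -i '^Set-Cookie' /tmp/headers.txt | grep -vi 'secure' \
    && echo "WARNING: cookie sent without the secure flag"
```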

Now, let’s look at the specific web servers and see what we can do to secure them.

Apache HTTP Server

The Apache HTTP Server is the most popular web server on the Internet, which helps explain why it comes as the default web server on so many systems. The Apache HTTP Server Project is an effort to develop and maintain an open source HTTP server for modern operating systems including Unix and Windows. The goal of this project is to provide a secure, efficient, and extensible server that provides HTTP services in sync with the current HTTP standards.

The following is a set of hardening guidelines for securing Apache:

  1. The Apache process should run as its own user and not root.

  2. Establish a group for web administration and allow that group to read/write configuration files and read the Apache log files:

    groupadd webadmin
    chgrp -R webadmin /etc/apache2
    chgrp -R webadmin /var/apache2
    chgrp -R webadmin /var/log/apache2
    chmod -R g+rw /etc/apache2
    chmod -R g+r /var/log/apache2
    usermod -aG webadmin user1
    usermod -aG webadmin user2
  3. Establish a group for web development.

    groupadd webdev
    chmod -R g+r /etc/apache2
    chmod -R g+rw /var/apache2
    chmod -R g+r /var/log/apache2
    usermod -aG webdev user1
    usermod -aG webdev user2
    usermod -aG webdev user3
    usermod -aG webdev user4
  4. Establish a group for compiling and other development.

    groupadd development
    chgrp development `which gcc` `which cc`
    chmod 550 `which gcc` `which cc`
    usermod -aG development user1
    usermod -aG development user2
  5. Disable any modules you are not using.

  6. Manage access controls from within httpd.conf instead of .htaccess files. In the server configuration file, put:

    <Directory />
    AllowOverride None
    </Directory>
  7. Enable mod_security. This module intercepts requests to the web server and validates them before processing. The filter can also be applied to HTTP responses to keep sensitive information from being disclosed. (Note: enabling this module does have performance implications, but the security benefits far outweigh the performance impact for a web site with moderate web traffic.)

  8. Enable mod_dosevasive. This module limits the number of requests a client can make during a given time period. (Note: enabling this module does have performance implications, but the security benefits far outweigh the performance impact for a web site with moderate web traffic.)
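As a sketch of step 8, a mod_dosevasive configuration block might look like the following. The module name in the IfModule test varies by version, and the thresholds here are illustrative; tune them to your traffic:

```apache
# Block a client that requests the same page more than 4 times a second,
# or more than 100 pages site-wide per second, for a 10-second period.
<IfModule mod_dosevasive20.c>
    DOSHashTableSize    3097
    DOSPageCount        4
    DOSPageInterval     1
    DOSSiteCount        100
    DOSSiteInterval     1
    DOSBlockingPeriod   10
</IfModule>
```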

Security concerns

Protect server files by default

Inside the Apache configuration file (httpd.conf) have the following directory directive:

<Directory />
  <LimitExcept GET POST>
    Deny from all
  </LimitExcept>
  Order Allow,Deny
  Allow from all
  Options None
  AllowOverride None
</Directory>

<Directory /var/apache2/htdocs/>
  <LimitExcept GET POST>
    Deny from all
  </LimitExcept>
  Options -Indexes -FollowSymLinks -MultiViews -Includes
  Order Allow,Deny
  Allow from all
  AllowOverride None
</Directory>

Script aliasing

From a security perspective it is better to designate which directories can employ dynamic functionality or execute scripts. By using script aliases administrators can control which directories and resources will be allowed to execute scripts. If a site needs the ability to execute scripts this approach is preferred.
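For example, a configuration along these lines confines script execution to a single aliased directory while the document root stays static (the paths are hypothetical):

```apache
# Only /cgi-bin/ URLs may execute scripts; everything else is static.
ScriptAlias /cgi-bin/ "/var/apache2/cgi-bin/"

<Directory "/var/apache2/cgi-bin">
    Options None
    AllowOverride None
    Order Allow,Deny
    Allow from all
</Directory>
```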

Server side includes (SSI)

Server side includes are directives found in HTML pages that Apache evaluates while serving a page. If SSIs are enabled they allow dynamic execution of content without having to initiate another CGI program.

Generally I recommend not using SSIs. There are better options for serving dynamic content. SSI is easy to implement but because of its flexibility hard to secure.


Users may still use <!--#include virtual="..." --> to execute CGI scripts if these scripts are in directories designated by a ScriptAlias directive.


mod_security

mod_security is a web application firewall: an Apache Web Server add-on module that provides intrusion detection, content filtering, and web-based attack protection. It is good at detecting and stopping many known web attacks, such as SQL injection, cross-site scripting, directory traversal, and many more.


mod_security does come with a performance cost. Because the module must inspect web traffic going both to and from the web server it can cripple sites with high user loads. In most cases, however, the security benefits far outweigh the performance costs.


You can get the mod_security packages using apt:

apt-get install libapache2-mod-security
a2enmod mod-security
/etc/init.d/apache2 force-reload

The file /etc/httpd/conf.d/mod_security.conf should now exist.

Basic configuration

mod_security.conf contains an example mod_security configuration. The example configuration has a lot of stuff in it that we may not need, so I recommend trimming the file down a bit and starting with the basics:

<IfModule mod_security.c>
    # Turn the filtering engine On or Off
    SecFilterEngine On

    # Make sure that URL encoding is valid
    SecFilterCheckURLEncoding On

    # Unicode encoding check
    SecFilterCheckUnicodeEncoding Off

    # Only allow bytes from this range
    SecFilterForceByteRange 0 255

    # Only log actionable requests
    SecAuditEngine RelevantOnly

    # The name of the audit log file
    SecAuditLog /var/log/apache2/audit_log

    # Debug level set to a minimum
    SecFilterDebugLog /var/log/apache2/modsec_debug_log
    SecFilterDebugLevel 0

    # Should mod_security inspect POST payloads
    SecFilterScanPOST On

    # By default log and deny suspicious requests
    # with HTTP status 500
    SecFilterDefaultAction "deny,log,status:500"

    # Add custom SecFilter rules here

</IfModule>

From here, we can look at what actions we can configure.


Table 4-1 lists the most important actions mod_security can apply to an event caught by the filtering ruleset.

Table 4-1. mod_security actions

Action       Description
allow        Skip remaining rules and allow the matching request.
auditlog     Write the request to the audit log.
chain        Chain the current rule with the rule that follows.
deny         Deny the request.
exec         Execute (launch) an external script or process as a result of this request.
log          Log the request (Apache error_log and audit log).
msg          Message that will appear in the log.
noauditlog   Do not log the match to the audit log.
nolog        Do not log the match to any log.
pass         Proceed to the next rule.
redirect     If the request is denied, redirect to this URL.
status       Use the supplied status code if a request is denied.
Now, we can configure a few basic rules specific to our environment that enable mod_security to protect our applications.


Let’s say some of our applications pass parameters around that may end up in our MySQL database. Let’s also say we were lazy and did not positively validate those fields before trying to INSERT them into the database. Then, some wily hacker comes along and tries to perform a SQL injection attack.

So, how does this really work? With mod_security’s filters we can write rules that look for these kinds of attacks:

SecFilter "drop[[:space:]]table"
SecFilter "select.+from"
SecFilter "insert[[:space:]]+into"
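Individual rules can also carry their own actions from Table 4-1 rather than falling back on the configured default. For example, a hypothetical selective rule that inspects only the request parameters and answers an obvious script-injection probe with a 404 instead of the default 500:

```apache
# Inspect only request arguments; deny, log, and return 404 on a match.
SecFilterSelective ARGS "<[[:space:]]*script" "deny,log,status:404"
```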


Ivan Ristic has provided a thorough primer on mod_security in his book Apache Security (O’Reilly). Go pick up a copy and have a look. I also highly recommend a visit to the site http://www.modsecurity.org if you intend on using mod_security. There you will find documentation, tools, and additional downloads.


PHP

PHP has grown from a set of tools for getting web sites up and working fast into one of the most popular languages for web site development. The following are some recommendations for hardening web servers that use or support PHP.

Hardening guidelines

  1. Apply all the Apache security hardening guidelines.

  2. Disable allow_url_fopen in php.ini.

  3. Using disable_functions, disable everything you are not using.

  4. Disable enable_dl in php.ini.

  5. Set error_reporting to E_STRICT.

  6. Disable file_uploads from php.ini.

  7. Enable log_errors and ensure the log files have restricted permissions.

  8. Do not use or rely on magic_quotes_gpc for data escaping or encoding.

  9. Set a memory_limit to cap how much memory a PHP script may consume. 8M is a good default.

  10. Set a location for open_basedir.
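Translated into a php.ini fragment, the checklist above looks something like this. The paths and the disable_functions list are examples only; tailor them to your applications:

```ini
; Hardening fragment for php.ini (illustrative values)
allow_url_fopen   = Off
enable_dl         = Off
error_reporting   = E_STRICT
display_errors    = Off
log_errors        = On
error_log         = /var/log/php_errors.log
file_uploads      = Off
magic_quotes_gpc  = Off
memory_limit      = 8M
open_basedir      = /var/apache2/htdocs
disable_functions = exec,passthru,shell_exec,system,proc_open,popen
```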

Microsoft Internet Information Server (IIS)

Microsoft Internet Information Services (IIS) is an HTTP server that provides web application infrastructure for most versions of Windows.

In versions of IIS prior to 6.0, the server was not “locked down” by default. This open configuration, although flexible, was not very secure. Many unnecessary services were enabled by default. As threats to the server have increased, so too has the need to harden the server. In these older versions of IIS, hardening the server is a manual process and often difficult to get right.

Lock down server

With IIS 6.0, administrators have more control over how, when, and what gets installed with the IIS server. Unlike previous versions, an out-of-the-box installation results in an IIS server that accepts requests only for static files until it is configured to handle web applications; server timeouts and other security policy settings are also configured aggressively by default.

Secure configurations for web servers

Microsoft also provides a Security Configuration Wizard (SCW) that helps administrators through the configuration of the web server’s security policy.

Hardening guidelines

  1. Make sure that the system IIS is installed in a secured and hardened Windows environment. Additionally, make sure the server is configured to discourage Internet surfing and email use.

  2. Web site resources, HTML files, images, CSS, and so on should be located on a nonsystem file partition.

  3. The Parent Paths setting should be disabled.

  4. Potentially dangerous virtual directories, including IISSamples, IISAdmin, IISHelp, and Scripts should all be disabled or removed.

  5. The MSADC virtual directory should be secured or removed.

  6. Include directories should not have Read Web permission.

  7. No directories should allow anonymous access.

  8. Only allow Script access when SSL is enabled.

  9. Only allow Write access to a folder when SSL is enabled.

  10. Disable FrontPage extensions (FPSE).

  11. Disable WebDav.

  12. Map all extensions not used by the IIS applications to 404.dll (.idq, .htw, .ida, .shtml, .shtm, .stm, .idc, .htr, .printer, and so on).

  13. Disable all unnecessary ISAPI filters.

  14. Access to IIS metabase (%systemroot%\system32\inetsrv\metabase.bin) should be restricted via NTFS file permissions.

  15. IIS banner information should be restricted. (IP address in content location should be disabled.)

  16. Make sure certificates are valid, up to date, and have not been revoked.

  17. Use certificates appropriately. (For example, do not use web certificates for email.)

  18. Protect resources with HttpForbiddenHandler.

  19. Remove unused HttpModules.

  20. Disable tracing (Machine.config).

  21. Disable Debug Compilation (Machine.config).

  22. Enable Code Access security.

  23. Remove All Permissions from the local Intranet Zone.

  24. Remove All Permissions from the Internet Zone.

  25. Run the IISLockdown tool from Microsoft.

  26. Filter HTTP requests using URLScan.

  27. Secure or disable remote administration of the server.

  28. Set a low session timeout (15 minutes).

  29. Set account lockouts.

Security concerns

  • Do not install the IIS server on a domain controller.

  • Do not connect an IIS server to the Internet until it is fully hardened.

  • Do not allow anyone to log on to the machine locally except for the administrator.

Application Server Hardening

Like web servers, application servers are flexible in their configuration. This flexibility allows them to be integrated into diverse environments. However, in many cases the out-of-the-box installation will not be hardened for Internet usage. Steps need to be taken to configure these servers so that they are secure. The following are some hardening guidelines for application servers.

Java and .NET

The following are hardening recommendations for all next generation web application servers, but particularly for Java and .NET servers.

Hardening guidelines

  1. Run all applications over SSL.

  2. Do not rely on client-side validation. Make input validation decisions on the server.

  3. Use the HttpOnly cookie option to help protect against cross-site scripting.

  4. Plan how authentication and access controls work before implementation.

  5. Employ role-based authorization checks for resources such as pages and directories.

  6. Divide the file structure of the site into public and restricted areas and provide proper authentication and access controls to restricted areas.

  7. Validate all input for type, length, and format. Employ positive validation and check for known acceptable data before filtering for bad data.

  8. Handle exceptions securely by not providing debug or infrastructure details as part of the exception.

  9. Use absolute URLs when sites contain secure and unsecure items.

  10. Ensure parameters used in SQL statements or data access code are validated for length and type of data to help prevent SQL injection.

  11. Mark cookies as “secure.” Restrict authentication cookies by requiring the use of the secure cookie property.

  12. Ensure authentication cookies are not persisted or logged.

  13. Make sure cookies have unique path/name combinations.

  14. Keep personalization cookies separate from authentication cookies.

  15. Require error-directives or error pages for all web applications.

  16. Implement strong password policies for authentication.

  17. Define a low session timeout (15 minutes).

  18. Avoid generic server resource mappings such as wildcards (/*.do).

  19. Protect resources by storing them under the WEB-INF directory and not allowing direct access to them.

  20. Do not store sensitive data (passwords, private data, and so on) in a web application root directory or other browsable location.

For More Information

Apache. “Apache HTTP Server Project.” http://httpd.apache.org.

CERT. “Creating a Computer Security Incident Response Team: A Process for Getting Started.” http://www.cert.org/csirts/Creating-A-CSIRT.html.

Howtoforge. “Secure Your Apache with mod_security.” http://www.howtoforge.com/apache_mod_security.

Microsoft. Technical Overview of Internet Information Services (IIS) 6.0. http://download.microsoft.com/download/8/a/7/8a700c68-d1af-4c8d-b11e-5f974636a7dc/IISOverview.doc (accessed Dec. 1, 2006).

Microsoft. “Checklist: Securing Your Web Server.” http://msdn2.microsoft.com/en-us/library/aa302351.aspx.

Microsoft. “Checklist: Securing ASP.NET.” http://msdn2.microsoft.com/en-us/library/ms178699.aspx.

O’Reilly ONLamp.com. LAMP: The Open Source Platform. http://www.onlamp.com.

PHP. “Hypertext Preprocessor.” http://www.php.net.

Ristic, Ivan. Apache Security. California: O’Reilly Media, Inc., 2005.

Security Focus. “Incident Response Tools For Unix, Part One: System Tools.” http://www.securityfocus.com/infocus/1679.

Ubuntu. “What Is Ubuntu?” http://www.ubuntu.com.
