La Défense (source: Cocoparisienne via Pixabay)

Hardening servers can be described as the art of creating the most secure configuration possible without compromising the system's ability to perform its primary business function. This can be a particularly difficult balancing act: restricting access for users and processes must be tempered by the fact that the server must still perform its primary function properly, and system administrators must still be able to access the system to perform their duties.

Disable services

Every service (daemon) that runs is executing code on the server. If there is a vulnerability within that code, then it is a potential weakness that can be leveraged by an attacker, not to mention it is consuming resources in the form of RAM and CPU cycles.

Many operating systems ship with a number of services enabled by default, many of which you may not use. These services should be disabled to reduce the attack surface on your servers. Of course, you should not just start disabling services with reckless abandon; before disabling a service it is prudent to ascertain exactly what it does and determine if you require that service.

The easiest way to discover which services are running on a Unix server is to use the ps command to list running services. Exact argument syntax can vary between versions, but the ps -ax syntax works on most systems and will list all currently running processes. For minor variations in syntax on your operating system check the manual page for ps using the command man ps.

Services should typically be disabled in startup scripts (rc or init, depending on the operating system), as using the kill command will merely stop the currently running service, which will start once more during a reboot. On Linux, the command is typically one of rc-update, update-rc.d, or service, depending on the distribution. On BSD-based systems, you typically edit the file /etc/rc.conf. For example, on several flavors of Linux the service command can be used to stop the httpd service:

$ sudo service httpd stop

Older Unix operating systems may use inetd or xinetd to manage services rather than rc or init scripts. xinetd is used to preserve system resources by being the only service running and starting other services on demand rather than leaving them all running all of the time. If this is the case, services can be disabled by editing the inetd.conf or xinetd.conf files, typically located in the /etc/ directory.
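As an illustrative sketch (the service name and server path are assumptions; check the stanzas in your own configuration), a service managed by xinetd can be disabled by setting disable = yes in its stanza, typically found under /etc/xinetd.d/:

```
# /etc/xinetd.d/telnet -- illustrative stanza only; the service
# name and server path will differ on your system
service telnet
{
        disable         = yes
        socket_type     = stream
        wait            = no
        user            = root
        server          = /usr/sbin/in.telnetd
}
```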

File permissions

Most Unix filesystems have a concept of permissions: which files users and groups can read, write, or execute. Most also have the setuid (set user ID upon execution) permission, which allows a non-root user to execute a file with the permissions of the owning user, typically root. This is used when the normal operation of a command, even for a non-root user, requires root privileges; su and sudo are common examples.

Typically an operating system will set adequate permissions on system files during installation. However, as you create files and directories, permissions are set according to your umask settings. As a general rule, the umask on a system should only be made more restrictive than the default; cases where a less restrictive umask is required should be infrequent enough that chmod can be used to resolve the issue. Your umask settings can be viewed and edited using the umask command. See man umask for further details.
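As a brief sketch of the effect (the scratch filename is arbitrary), setting a restrictive umask of 077 causes newly created files to carry no group or other permissions at all:

```shell
# Apply a restrictive umask for this shell session; new files
# will receive mode 666 & ~077 = 600.
umask 077

# Create a fresh file and inspect its permissions.
rm -f /tmp/umask_demo
touch /tmp/umask_demo
ls -l /tmp/umask_demo    # permissions column shows -rw-------
```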

Incorrect file permissions can leave files readable by users other than the intended users. Many people wrongly believe that because a user has to be authenticated to log into a host, leaving world or group readable files on disk is not a problem; however, they do not consider that services also run using their own user accounts.

Take, for example, a system running a web server such as Apache, NGINX, or lighttpd; these web servers typically run under a user ID of their own, such as “www-data.” If files you create are readable by “www-data,” then the web server, if configured to do so (accidentally or otherwise), has permission to read those files and potentially serve them to a browser. By restricting filesystem-level access, we can prevent this from happening even if the web server is configured to serve the file, as it will no longer have permission to open it.

As an example, in the following, the file “test” can be read and written to by the owner “_www,” it can be read and executed by the group “staff,” and it can be read by anybody. This is denoted by the rw-, r-x, and r-- permissions in the directory listing:

$ ls -al test
-rw-r-xr--  1 _www  staff  1228 16 Apr 05:22 test

In a Unix directory listing, the permissions field is 10 characters long. The first character indicates the file type (for example, - for a regular file or d for a directory), and the remaining 9 correspond to the read, write, and execute permissions for owner, group, and other (everyone). If a - is shown, that permission is not set; if the r, w, or x is present, it is. Other special characters appear less often; for example, an s in an execute position signifies that the setuid (or setgid) flag has been set.

If we wish to ensure that other can no longer see this file, we can modify the permissions using the chmod command (“o=” sets the other permissions to nothing):

$ sudo chmod o= test
$ ls -la test
-rw-r-x---  1 _www  staff  1228 16 Apr 05:22 test

Note that the r representing the read permission for other is now a -.
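The same permissions can also be expressed in octal notation, where read = 4, write = 2, and execute = 1 are summed for each of the owner, group, and other fields. The rw-r-x--- permissions above therefore correspond to 650; a quick sketch on a scratch file (the filename is arbitrary):

```shell
# 6 = rw- (4+2), 5 = r-x (4+1), 0 = --- for other.
touch /tmp/perm_demo
chmod 650 /tmp/perm_demo
ls -l /tmp/perm_demo    # permissions column shows -rw-r-x---
```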

Host-based firewalls

Many people consider firewalls to be appliances located at strategic points around a network to permit or deny various types of connection. While this is true, most Unix operating systems also have local firewall software built in so that hosts can firewall themselves. Enabling and configuring this functionality offers the server not only some additional protection should the network firewall fail to operate as expected, but also protection against hosts on the local LAN, which can communicate with the server directly rather than via a network appliance firewall.

Typical examples of firewall software on Unix systems are iptables/netfilter, ipchains, pf, ipf, and ipfw—the configuration and use of which will, of course, vary from platform to platform. The end goal, however, is the same: to create a ruleset that permits all traffic required for the server to successfully complete its tasks and any related administration, and nothing else.
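As a sketch only (the permitted ports are assumptions about what this host provides; your ruleset must reflect your server's actual role), a minimal ruleset in iptables-restore format that allows inbound SSH and HTTP and drops everything else might look like this:

```
*filter
:INPUT DROP [0:0]
:FORWARD DROP [0:0]
:OUTPUT ACCEPT [0:0]
# Allow loopback traffic and replies to established sessions
-A INPUT -i lo -j ACCEPT
-A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
# Permit only the services this host actually provides
# (assumed here to be SSH and HTTP)
-A INPUT -p tcp --dport 22 -j ACCEPT
-A INPUT -p tcp --dport 80 -j ACCEPT
COMMIT
```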

One point to note is that using a stateful firewall on a host will consume RAM and CPU on tracking sessions and maintaining a TCP state table. This is because a stateful firewall does not permit and deny packets based on IP addresses and port numbers alone, but also tracks features such as TCP handshake status in a state table. On a busy server, a simple packet filter (i.e., one permitting and denying based on IP addresses, port numbers, protocols, etc., on a packet-by-packet basis) will consume far fewer resources while still providing an increased level of protection from unwanted connections.

Managing file integrity

File integrity management tools monitor key files on the filesystem and alert the administrator in the event that they change. These tools can be used to ensure that key system files are not tampered with (as is the case with a rootkit) and that files are not added to directories or configuration files modified without the administrator’s permission (as can be the case with backdoors in web applications, for example).

There are free and open source file integrity management tools, such as Samhain and OSSEC, available through your preferred package management tool. If you are looking to spend money to obtain extra features, such as integration with your existing management systems, there are also a number of commercial tools available.

Alternatively, if you cannot, for whatever reason, install file integrity monitoring tools, many configuration management tools can be configured to report on modified configuration files on the filesystem as part of their normal operation. This is not their primary function and does not offer the same level of coverage, so this solution is not as robust; however, if you are in a situation where you cannot deploy security tools but do have configuration management in place, this may be of some use.
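In that situation, another crude interim measure is to record cryptographic checksums of key files and re-verify them periodically. Unlike a real file integrity tool, this neither protects the baseline from tampering nor watches for new files, but it will catch simple modifications. The file choices and baseline location below are illustrative only:

```shell
# Record a baseline of checksums for files we want to watch.
sha256sum /etc/hosts /etc/passwd > /var/tmp/baseline.sha256

# Later, verify against the baseline; any modified file is
# reported as FAILED and the command exits non-zero.
sha256sum -c /var/tmp/baseline.sha256
```

The baseline itself should be stored somewhere an attacker on the host cannot rewrite it, such as removable or remote storage.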

Separate disk partitions

Disk partitions within Unix can be used not only to distribute the filesystem across several physical or logical partitions but also to restrict certain types of action depending on the partition on which they take place.

Options can be placed upon each mount point in /etc/fstab.

There are some minor differences between different flavors of Unix; consulting the system manual page, using man mount, before using options is recommended.

Some of the most useful and common mount point options from a security perspective are:

nodev—Do not interpret character or block special devices on the filesystem. If no device files are expected, this option should be used; typically only the /dev/ mount point would contain special device files.

nosuid—Do not allow setuid execution. Certain core system functions, such as su and sudo, will require setuid execution, thus this option should be used carefully. Attackers can use setuid binaries as a method of backdooring a system to quickly obtain root privileges from a standard user account. Setuid execution is probably not required outside of the system installed bin and sbin directories. You can check for the location of setuid binaries using the following command:

$ sudo find / -perm -4000

Binaries that are specifically setuid root, as opposed to any setuid binary, can be located using the following variant:

$ sudo find / -user root -perm -4000
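To see the permission test in action on a scratch file (the path is arbitrary, and no sudo is needed for a file you own):

```shell
# Create a scratch file and set the setuid bit (octal 4000,
# here combined with rwxr-xr-x as mode 4755).
touch /tmp/suid_demo
chmod 4755 /tmp/suid_demo

# -perm -4000 matches any file with the setuid bit set.
find /tmp -maxdepth 1 -perm -4000 -name suid_demo
```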

ro—Mount the filesystem read only. If data does not need to be written or updated, this option may be used to prevent modification. This removes the ability for an attacker to modify files stored in this location, such as config files, static website content, and the like.

noexec—Prevents execution, of any type, from that particular mount point. This can be set on mount points used exclusively for data and document storage to prevent an attacker from using this as a location to execute tools that they may load onto a system and can defeat certain classes of exploit.
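Tying these options together, illustrative /etc/fstab entries for a data-only partition and a read-only static content partition might look like the following (the devices, mount points, and filesystem type are assumptions about your system):

```
# /etc/fstab -- illustrative entries only
# <device>  <mount point>  <type>  <options>                     <dump> <pass>
/dev/sda3   /srv/data      ext4    defaults,nodev,nosuid,noexec  0      2
/dev/sda4   /srv/www       ext4    defaults,nodev,nosuid,ro      0      2
```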

chroot—Alters the apparent root directory of a running process and any child processes. The most important aspect of this is that the process inside the chroot jail cannot access files outside of its new apparent root directory, which is particularly useful for ensuring that a poorly configured or exploited service cannot access anything more than it needs to.

There are two ways in which chroot can be initiated:

  • The process in question can use the chroot system call and chroot itself voluntarily. Typically these processes will contain chroot options within their configuration files, most notably allowing the user to set the new apparent root directory.
  • The chroot wrapper can be used on the command line when executing the command. Typically this would look something like:

    sudo chroot /chroot/dir/ /chroot/dir/bin/binary -args

For details of the specific chroot syntax for your flavor of Unix, consult man chroot.

There is a common misconception that chroot is a complete security mechanism; it simply is not. Chroot jails are not impossible to break out of, especially if the process within the chroot jail is running with root privileges. Typically, processes that are specifically designed to use chroot will drop their root privileges as soon as possible to mitigate this risk. Additionally, chroot does not offer the process any protection from privileged users outside of the chroot on the same system. Neither of these is a reason to abandon chroot, but both should be considered when designing use cases: chroot is not an impenetrable fortress, but rather a method of further restricting filesystem access.

Mandatory access controls

Various flavors of Unix support mandatory access controls (MAC), some of the most well known being SELinux, TrustedBSD, and the grsecurity patches. The method of configuration, granularity, and features of MAC vary across systems; however, the high-level concepts remain consistent.

MAC allows policies to be enforced that are far more granular in nature than those offered by traditional Unix filesystem permissions. The ability to read, write, and execute files is set in policies with more fine-grained controls, allowing a user to be granted or denied access on a per-file basis rather than for all files within the group to which they belong, for example.

Using MAC with a defined policy allows the owner of a system to enforce the principle of least privilege, that is, only permitting access to those files and functions that a user requires to perform their job and nothing more. This limits the account's access and reduces the chances of accidental or deliberate abuse from that account.

MAC can also be used with enforcement disabled, that is, operating in a mode in which violations of policy are not blocked but are logged, creating a more granular level of logging for user activity. The reasons for this will be discussed later in this book in the Logging & Monitoring section.

