Chapter 4. Looking for Vulnerabilities
After you perform reconnaissance and gather information about your target, you would normally move on to identifying entry points to remote systems. You are looking for vulnerabilities in the organization's systems that may be open to exploitation. You can identify vulnerabilities in various ways; based on your reconnaissance, you may have even identified one or two already, drawn from the different pieces of information you obtained through open sources.
Vulnerability scanning is a common task for penetration testers, but also for information security teams everywhere. A lot of commercial tools are available to scan for vulnerabilities, as well as some open source scanners. Some of the tools that Kali provides are designed to look across different types of systems and platforms. Other tools are designed specifically to look for vulnerabilities in devices like routers and switches. It may not be much of a surprise that there are scanners for Cisco devices as well.
Most of the tools we’ll be looking at in this chapter search for existing vulnerabilities: ones that are known and that can be identified through interactions with the system or its applications. Sometimes, though, you may want to identify new vulnerabilities. Tools are available in Kali that can help generate application crashes, which can become vulnerabilities, though the tools won’t create associated exploits. These tools are commonly called fuzzers. Fuzzing is a comparatively easy way of generating a lot of malformed data to provide to applications to see how the inputs are handled.
To even start this process, though, you need to understand what a vulnerability is. It can be easy to misunderstand vulnerabilities or confuse them with other concepts. One important notion to keep in mind is that just because you have identified vulnerabilities does not mean they are going to be exploitable. Even if an exploit matches the vulnerability you find, it doesn’t mean that the exploit will work. It’s hard to overstate the importance of this idea: vulnerabilities do not necessarily lead to exploitation.
Understanding Vulnerabilities
Before going any further, let’s make sure we’re all on the same page when it comes to the definition of a vulnerability. Vulnerabilities are sometimes confused with exploits, and when we start talking about risk and threats, these terms can get really muddled. A vulnerability is a weakness in a system or piece of software. This weakness is a flaw in the configuration or development of the system or software. If that vulnerability can be taken advantage of to gain access or impair the system, it is exploitable. The process of taking advantage of that weakness is the exploit. A threat is the possibility of harm to a system or of having it become unavailable. Risk is the intersection of loss and probability: there must be a measurable loss or damage, and a probability that the loss or damage becomes actualized.
This is all fairly abstract, so let’s talk about this in concrete terms. Say someone leaves default usernames and passwords configured on a system. This was a very common thing, especially in devices like home wireless access points or cable modems. Leaving the default username and password in place is a vulnerability because default usernames and passwords can easily be tried. The process of trying the password is the exploit of the vulnerability of leaving it in place. This is an example of a vulnerability that comes from a misconfiguration. The vulnerabilities that are more regularly recognized are programmatic in nature and may come from programming mistakes like buffer overflows.
If you’re interested in vulnerabilities and keeping track of the work that goes into discovering them, you can subscribe to mailing lists like Full Disclosure. You can get details about vulnerabilities that have been found, sometimes including the proof-of-concept code that can be used to exploit the discovered vulnerability. With so much software out in the world, including web applications, a lot of vulnerabilities are found daily. Some are more trivial than others, which can make the process of keeping up with everything challenging. The archive for Full Disclosure is available at the SecLists website. You can subscribe from that page, as well as look through all the older disclosures.
We’re going to take a look at a couple of types of vulnerabilities. The first are local vulnerabilities. These vulnerabilities can be triggered only if you are logged in to the system with local access. It doesn’t mean that you are sitting at the console, just that you have some interactive access to the system. You could be accessing it remotely with either terminal or graphical desktop access. Local vulnerabilities include privilege escalation vulnerabilities, where a user with regular permissions gains higher-level privileges, up to administrative rights. Through a privilege escalation, users may gain access to resources they shouldn’t otherwise have access to. They may also get full administrative rights to perform tasks like creating users and services or gaining access to sensitive data.
The contrasting vulnerability to a local vulnerability is a remote vulnerability. This is a vulnerability that can be triggered without local access. This does, though, require that a service be exposed that an attacker can get to. Remote vulnerabilities may be either authenticated or unauthenticated. If an unauthenticated user can exploit a vulnerability to get local access to the system, that would be a bad thing. Not all remote vulnerabilities lead to local or interactive access to a system. Vulnerabilities can lead to denial of service, data compromise, integrity compromise, or possibly complete, interactive access to the system.
Network devices like switches and routers are also prone to vulnerabilities. If one of these devices were to be compromised, it could be devastating to the availability or even the confidentiality of the network. Someone who has access to a switch or a router can potentially redirect traffic to devices that shouldn’t otherwise receive it. Kali comes with tools that can be used to test for vulnerabilities on network devices. As Cisco is a prominent vendor, it’s not surprising that a majority of the tools for network device vulnerabilities target Cisco equipment.
Vulnerability Types
The Open Web Application Security Project (OWASP) maintains a list of common vulnerability categories. Periodically, OWASP updates a list of the top 10 application security issues. Software is released and updated each year, and every piece of software has bugs in it. When it comes to security-related bugs that create vulnerabilities, some common ones should be considered. Before we get into how to search for these vulnerabilities, you should understand a little bit about what each of these vulnerabilities is.
Buffer Overflow
Buffer overflow is a common vulnerability and has been for decades. Some languages perform a lot of checking on the data being entered into the program, as well as data being passed around within the program, but not all of them do; whether these checks happen is up to the language and how it builds the executable. Checking data automatically creates overhead, and not all languages want to force that sort of overhead on programmers and programs. Newer languages are much better about being memory safe, including Go, Rust, and Swift. The C programming language has long been notorious for offering no or limited protection against memory errors like buffer overflows.
A buffer overflow takes advantage of the way data is structured in memory. Each program gets a chunk of memory. Some of that memory is allocated for the code, and some is allocated for the data the code is meant to act on. Part of that memory is a data structure called a stack. Think about going through a cafeteria line or even a buffet. The plates or trays are in a stack. Someone coming through pulls from the top of the stack, but when the plates or trays are replenished, the new plates or trays are put on the top of the stack. When the stack is replenished in this way, you can think about pushing onto the stack. However, when the topmost item is removed, you can think about popping off the top of the stack.
Programs work in the same way. Programs are generally structured through the use of functions. A function is a segment of code that performs a clearly defined action or set of actions. It allows for the same segment of code to be called multiple times in multiple places in the program without having to duplicate that segment each time it is needed. It also allows for nonlinear code execution. Rather than having one long program that is run serially, using functions allows the program to alter its flow of execution by jumping around in memory. When functions are called, they are often called with parameters, meaning pieces of information. These parameters are the data the function acts on. When a function is called, the parameters and the local variables to the function are placed on the stack. This block of data is called a stack frame.
Inside the stack frame is not only the data associated with the function but also the address the program should return to after the function is completed. This is how programs can run nonlinearly. The CPU doesn’t maintain the entire flow of the program. Instead, before a function is called, the address within the code block where the program was last executing is also pushed on the stack.
Buffer overflows become possible because these parameters are allocated a fixed-size space on the stack. Let’s say you expect to take in data from the user that is 10 bytes long. If the user enters 15 characters, that’s 5 more bytes (assuming a single byte per character, which isn’t necessarily the case) than the space that was allocated for the variable the data is being copied into. Because of the way the stack is structured, all the variables and data come before the return instruction pointer. If the language runtime doesn’t check and truncate the data ahead of time, the extra data has nowhere to go; it simply writes over the next addresses in memory. This can result in the return instruction pointer being overwritten if you send in enough data to reach where the instruction pointer is stored.
Figure 4-1 shows a simplified example of a stack frame for an individual function. Some elements that belong on the stack frame aren’t demonstrated here. Instead, we’re focusing on just the parts that we care about. If the function is reading into Var2, the attacker can input more than the 32 characters expected. Once the 32 characters have been exceeded, any additional data will be written into the address space where the return instruction address is stored. When the function returns, that value will be read from the stack, and the program will try to jump to that address. A buffer overflow tries to get the program to jump to a location known by or under the control of the attacker to execute the attacker’s code.
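To make the scenario in Figure 4-1 concrete, here is a minimal sketch of the kind of C function that creates this exposure. The function and variable names are illustrative only and don’t come from any real program.

#include <stdio.h>
#include <string.h>

/* Copies attacker-supplied input into a fixed 32-byte stack buffer.
   strcpy() performs no length check, so input longer than 31 characters
   (plus the terminating NUL) writes past the end of the buffer, toward
   the saved return address on the stack. */
void greet(const char *input)
{
    char buffer[32];            /* plays the role of Var2 in Figure 4-1 */
    strcpy(buffer, input);      /* unchecked copy: the overflow */
    printf("Hello, %s\n", buffer);
}

int main(int argc, char *argv[])
{
    if (argc > 1)
        greet(argv[1]);         /* argv[1] is entirely under user control */
    return 0;
}

A bounded copy such as snprintf(buffer, sizeof(buffer), "%s", input) removes the overflow, which is exactly the kind of check that memory-safe languages perform for you automatically. Modern compilers also add protections such as stack canaries that make this pattern harder to exploit, but the unchecked copy remains the underlying flaw.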
An attacker running code they want to run, rather than the program’s code, is referred to as arbitrary code execution. The attacker can control the flow of the program’s execution, meaning they control what the program does, including the code it runs. An attacker who can do that can potentially get access to resources the program owner has permissions to access. Attackers commonly use this control to open a command shell on the remote system, which is why the code injected into the buffer is called shellcode: it runs a shell.
Race Condition
Any program running does not have exclusive access to the processor. While a program is in a running state, it is being swapped into and out of the processor queue so its code can be executed. Modern programs may be multithreaded: they have multiple, simultaneous paths of execution. These execution threads still have access to the same data space, and if two threads are both altering a variable and somehow get out of sequence, problems can arise in the way the program operates. Example 4-1 shows a small section of C code. This is not how you should be writing programs, of course, since it uses a global variable and there are better ways to protect shared data, but it’s enough to demonstrate the concept.
Example 4-1. Simple C function
#include <stdio.h>

int x;

void update(int y)
{
    x = x + y;
    if (x == 100) {
        printf("we are at the value");
    }
}
Let’s say we have two threads running that function simultaneously. The global variable x is being incremented by an unknown value by two separate threads. This variable is a single place in memory. By contrast, the two threads executing the update function will get their own instances of the variable y, which is passed into the function. The x variable, though, has to be shared by function instances. A race condition is what happens when two separate execution paths are accessing the same set of data at the same time. When the memory isn’t locked, a read can be taking place at a time when an unexpected write has happened. A second read to the memory location may retrieve a different value. It all depends on timing.
In this case, let’s look at the line x = x + y. First, the values in the memory locations referred to by x and y need to be read. Let’s say when we retrieve the value of x, it has a value of 11. We then add the value of y. Perhaps before the resulting value gets written back out to the memory location referred to by x, another instance of the function has read that same location and also seen 11. If y was set to 5 in our instance, we write back 16. If y was 10 in the other instance, it writes back 21, and then our write lands and replaces the 21 with 16. Had the two calls run one after the other, x would have ended up at 26. Which value is correct? With programming like this, you get unpredictable behavior in the program.
If the correct flow of a program requires specific timing, there is a chance of a race condition. Variables may be altered before a critical read that controls the functionality of the program. You may have something like a filename that gets changed between the time it is checked and the time it is read and operated on. Race conditions can be tricky to find and isolate because of the asynchronous nature of programs with multiple threads. Without controls like semaphores to indicate when values are in a state in which they can be read or written safely, you may get inconsistent behavior simply because the programmer can’t directly control which thread gets access to the CPU in which order.
Of course, what we’ve looked at here is a simple example just to clearly demonstrate the point. Far subtler programming errors can lead to unpredictable behaviors as a result of race conditions.
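As a sketch of the kind of control just mentioned, the following variation on Example 4-1 wraps the shared update in a POSIX mutex, so the read-modify-write of x happens atomically with respect to other threads. This is illustrative only and assumes a POSIX threads environment; a semaphore or an atomic type would work as well.

#include <pthread.h>
#include <stdio.h>

int x;                                    /* shared global, as in Example 4-1 */
static pthread_mutex_t x_lock = PTHREAD_MUTEX_INITIALIZER;

void update(int y)
{
    pthread_mutex_lock(&x_lock);          /* only one thread past this point */
    x = x + y;                            /* read-modify-write is now atomic */
    if (x == 100) {
        printf("we are at the value\n");
    }
    pthread_mutex_unlock(&x_lock);
}

With the lock in place, the interleaving described earlier can’t happen: whichever thread acquires the mutex first completes its read, add, and write before the other thread begins.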
Input Validation
Input validation is a broad term that somewhat encompasses buffer overflows as well as other vulnerabilities. If the buffer passed in is too long and hasn’t been checked, that’s an input validation problem. However, input validation extends beyond buffer overflows. Buffer overflows come from failing to check the size of input; other types of errors come from input containing values that could be detrimental to the program or to the system the program is running on. Example 4-2 shows a small fragment of C code that could easily be vulnerable to attack without proper input validation.
Example 4-2. C Program with potential input validation errors
#include <stdlib.h>

int tryThis(char *value)
{
    int ret;

    ret = system(value);
    return ret;
}
This is a small function that takes a string as a parameter. The parameter is passed directly to the C library function system, which passes execution to the operating system. If the value useradd attacker were passed in, it would go straight to the operating system, and if the program had the right permissions because of the user it was running as, it would create a user called attacker. Any operating system command could be passed through like this. Without proper input validation, this is a significant issue, especially if the program under attack is running with more permissions than it needs. This is one reason, frankly, that Kali Linux no longer has users log in directly as the root user. A programming error exploited in a program running as root gives the attacker the entire run of the system to do what they want.
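One way to reduce the exposure in Example 4-2 is to refuse anything that isn’t explicitly expected before it ever reaches system. The following is a rough sketch using a character allowlist; the permitted character set is an illustrative assumption, not a complete defense, and avoiding system entirely in favor of execve with fixed arguments would be safer still.

#include <stdlib.h>
#include <string.h>

/* Reject any value containing characters outside a small allowlist, so
   shell metacharacters like ;, |, &, and spaces never reach system(). */
int tryThisSafely(const char *value)
{
    const char *allowed = "abcdefghijklmnopqrstuvwxyz"
                          "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
                          "0123456789._-";

    if (value == NULL || *value == '\0')
        return -1;
    if (strspn(value, allowed) != strlen(value))
        return -1;                  /* disallowed character found */

    return system(value);           /* still risky; kept only to mirror Example 4-2 */
}

With this check in place, a value like useradd attacker is rejected because of the space, well before the operating system ever sees it.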
This sort of input-validation issue is perhaps more likely to be seen in web applications. Command injection, SQL injection, and XML injection attacks are all examples of poor input validation. Values are being passed into elements of an application without being checked. This input could potentially be an operating system command or SQL code, as examples. If the programmer isn’t properly validating input before acting on it, bad things can happen.
Access Control
Access control is a bit of a catchall category from a vulnerability perspective. On its face, access control is just determining who can get access to resources and what level of access they get. One area where access control can become a vulnerability is when programs are run as users that have more permissions or privileges than the program strictly needs to function. Any program running as root, for example, is potentially problematic. If the code can be exploited, as with poorly validated input or a buffer overflow, anything the attacker does will have root permissions.
This is not strictly limited to programs running as root. Any program runs with the permissions of the user it runs as, often referred to as the program’s owner. If that user has permissions to access a resource on the system, an exploit of the program can give an attacker access to that resource. These types of attacks can lead to a privilege escalation: a user gets access to something they shouldn’t have access to in the normal state of affairs within the system.
This particular issue could be alleviated, at least to a degree, by requiring authentication within the application. That’s a hurdle for an attacker to clear before just exploiting a program—they would have to circumvent the authentication either by a direct attack or by acquiring or guessing a password. Sometimes the best we can hope for is to make getting access an annoyance.
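Beyond authentication, another mitigation for the excessive-privilege problem is for a program that genuinely needs elevated rights only briefly, say to bind a low port or open a protected file, to drop those rights as soon as that work is done. Here is a minimal sketch, assuming the program starts as root and should continue as an ordinary account; the UID and GID values are placeholders.

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void)
{
    uid_t unprivileged_uid = 1000;   /* placeholder: an ordinary user account */
    gid_t unprivileged_gid = 1000;

    /* ... perform the one task that actually requires root here ... */

    /* Drop the group first, then the user; once the process is no longer
       root, it can't change its group. A fuller implementation would also
       clear supplementary groups with setgroups(). */
    if (setgid(unprivileged_gid) != 0 || setuid(unprivileged_uid) != 0) {
        perror("failed to drop privileges");
        exit(EXIT_FAILURE);
    }

    /* Anything exploited past this point runs with ordinary permissions. */
    return 0;
}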
Vulnerability Scanning
Vulnerability scanning is the process of looking primarily for known vulnerabilities. In the rest of this chapter, we will be looking at vulnerability scanners, but when people think about vulnerability scanners, they may think of general-purpose scanners that can look at both local and remote vulnerabilities. A lot of commercial scanners are available. Back in the 1990s, an early scanner was the Security Administrator Tool for Analyzing Networks (SATAN); people who were offended by the acronym could run a program that changed the name to SANTA instead. SATAN eventually became SAINT, the Security Administrator’s Integrated Network Tool, which is still available as a commercial scanner. SATAN, however, was open source and freely available. Not long after came another open source, freely available scanner called Nessus.
Nessus was originally developed in 1998, but by 2005, its developers decided to close the source and turn it into commercial software. My recollection at the time was that the developers were tired of being the only ones contributing. The community wasn’t contributing, so they closed the source. All this is to explain the foundation of an open source vulnerability scanner called OpenVAS, which started out as a fork of Nessus. Early versions used the graphical program and the foundation of Nessus, so there wasn’t much difference.
OpenVAS, like so many other vulnerability scanners, and frankly a lot of commercial software, has moved to a web-based interface. While you can get one of the commercial scanners, install it, and use it on Kali Linux, OpenVAS is available as a package. It is not installed by default, so you need to install the openvas package. Once the package is installed, you need to do the work of preparing everything needed to use OpenVAS. This is not necessarily a straightforward process. First, it requires the PostgreSQL database to be installed, which is OK because Metasploit, which is installed by default, also requires PostgreSQL. The first step in configuring OpenVAS is seen in Example 4-3: you need to run gvm-setup. This is not a short process. In addition to setting up the database, it downloads all the signatures.
Example 4-3. Installing OpenVAS
┌──(kilroy@badmilo)-[~]
└─$ sudo gvm-setup
[>] Starting PostgreSQL service
[>] Creating GVM's certificate files
[>] Creating PostgreSQL database
[*] Creating database user
[*] Creating database
[*] Creating permissions
CREATE ROLE
[*] Applying permissions
GRANT ROLE
[*] Creating extension uuid-ossp
CREATE EXTENSION
[*] Creating extension pgcrypto
CREATE EXTENSION
[>] Migrating database
[>] Checking for GVM admin user
[*] Creating user admin for gvm
[*] Please note the generated admin password
[*] User created with password 'af6c349c-5a7e-4a84-862a-de61f00e807d'.
[*] Configure Feed Import Owner
[*] Define Feed Import Owner
[>] Updating GVM feeds
[*] Updating NVT (Network Vulnerability Tests feed from Greenbone Security ↩
    Feed/Community Feed)
Vulnerability scanners know what a vulnerability looks like because they look for signatures or patterns. Vulnerability scanners do not attempt to exploit vulnerabilities, and they don’t find vulnerabilities that haven’t previously been identified. They look for patterns in their interactions with programs and systems. These patterns are developed by the OpenVAS maintainers based on vulnerability announcements. Don’t assume, however, that because vulnerability scanners don’t exploit vulnerabilities to validate them, the scanner is perfectly safe. It’s possible for a scanner to inadvertently cause damage to systems, including outages. When you are sending a lot of data looking for vulnerabilities, there is a chance the receiving system will end up having issues. Applications that are really fragile may end up having problems that result in failures in the application.
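As a simplified illustration of what a signature-style check can look like, the sketch below connects to a service, reads its banner, and compares the version string against one associated with a known vulnerability. This is not how OpenVAS is implemented; the target address, port, and version string are placeholder assumptions, and a match proves nothing by itself, which is exactly why scanners produce false positives.

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>

/* Grab a service banner and flag it if the version string matches a
   release believed to be vulnerable. Host, port, and signature are
   placeholders for illustration only. */
int main(void)
{
    const char *host = "192.168.1.10";       /* placeholder target */
    const char *signature = "OpenSSH_7.2";   /* placeholder "vulnerable" version */

    int s = socket(AF_INET, SOCK_STREAM, 0);
    if (s < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(22);
    inet_pton(AF_INET, host, &addr.sin_addr);

    if (connect(s, (struct sockaddr *) &addr, sizeof(addr)) != 0) {
        perror("connect");
        close(s);
        return 1;
    }

    char banner[256] = {0};
    ssize_t n = read(s, banner, sizeof(banner) - 1);
    close(s);

    if (n > 0 && strstr(banner, signature) != NULL)
        printf("possible match (unvalidated): %s", banner);
    else if (n > 0)
        printf("banner: %s", banner);

    return 0;
}

Real scanners layer thousands of these checks, many far more involved than a banner comparison, but the principle of matching observed behavior against a known pattern is the same.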
The gvm-setup process downloads hundreds or thousands of XML files that contain vulnerability information. Each vulnerability has to be cataloged so you can use it in your vulnerability scan. Example 4-4 shows some of these XML files being downloaded. Depending on the speed of your disk, processor, and network, this can take several hours. Be patient, walk away, find something else to do, and just wait until everything has been installed. You will, though, need to pay attention to the end of the installation output: it includes the password for the administrator user, which you will need to log into OpenVAS for the first time.
Example 4-4. Downloading signatures
nvdcve-2.0-2010.xml
     22,577,713 100%  983.91kB/s    0:00:22 (xfr#10, to-chk=33/44)
nvdcve-2.0-2011.xml
     22,480,816 100%  969.40kB/s    0:00:22 (xfr#11, to-chk=32/44)
nvdcve-2.0-2012.xml
     25,153,405 100%  995.05kB/s    0:00:24 (xfr#12, to-chk=31/44)
nvdcve-2.0-2013.xml
     28,559,864 100%  989.41kB/s    0:00:28 (xfr#13, to-chk=30/44)
nvdcve-2.0-2014.xml
     30,569,278 100%  991.56kB/s    0:00:30 (xfr#14, to-chk=29/44)
nvdcve-2.0-2015.xml
     32,900,521 100%  634.44kB/s    0:00:50 (xfr#15, to-chk=28/44)
Once the installation has completed, you will see output like Example 4-5. The password for the admin user appears toward the end of the output, and it is a long, generated value. While you can add users on the command line, it’s probably easiest to copy this password and store it somewhere until you can get logged into the web interface. You will also see the suggestion that you run gvm-check-setup. You can get through gvm-setup and still have a broken OpenVAS installation, so make sure you run gvm-check-setup.
Example 4-5. Completed OpenVAS configuration
sent 735 bytes  received 106,076,785 bytes  986,767.63 bytes/sec
total size is 106,049,031  speedup is 1.00
[+] GVM feeds updated
[*] Checking Default scanner
[*] Modifying Default Scanner
Scanner modified.
[+] Done
[*] Please note the password for the admin user
[*] User created with password 'af6c349c-5a7e-4a48-862a-de61f00e708d'.
[>] You can now run gvm-check-setup to make sure everything is correctly configured
Ideally, you should get through gvm-check-setup without any problems. If you do, you will see the output shown in Example 4-6. If you run into errors, I can tell you from personal experience that resolving them can be very challenging. Try to just do a straightforward, clean installation without deviating from the basic approach described here.
Example 4-6. Running gvm-check-setup
┌──(kilroy@badmilo)-[~]
└─$ sudo gvm-check-setup
gvm-check-setup 22.4.1
  Test completeness and readiness of GVM-22.4.1
Step 1: Checking OpenVAS (Scanner)...
        OK: OpenVAS Scanner is present in version 22.4.1.
        OK: Notus Scanner is present in version 22.4.4.
        OK: Server CA Certificate is present as /var/lib/gvm/CA/servercert.pem.
        Checking permissions of /var/lib/openvas/gnupg/*
        OK: _gvm owns all files in /var/lib/openvas/gnupg
        OK: redis-server is present.
        OK: scanner (db_address setting) is configured properly using the ↩
            redis-server socket: /var/run/redis-openvas/redis-server.sock
        OK: redis-server is running and listening on socket: ↩
            /var/run/redis-openvas/redis-server.sock.
        OK: redis-server configuration is OK and redis-server is running.
        OK: the mqtt_server_uri is defined in /etc/openvas/openvas.conf
        OK: _gvm owns all files in /var/lib/openvas/plugins
        OK: NVT collection in /var/lib/openvas/plugins contains 85634 NVTs.
        OK: The notus directory /var/lib/notus/products contains 301 NVTs.
        Checking that the obsolete redis database has been removed
        OK: No old Redis DB
        OK: ospd-OpenVAS is present in version 22.4.6.
Step 2: Checking GVMD Manager ...
        OK: GVM Manager (gvmd) is present in version 22.4.2.
Step 3: Checking Certificates ...
        OK: GVM client certificate is valid and present as ↩
            /var/lib/gvm/CA/clientcert.pem.
        OK: Your GVM certificate infrastructure passed validation.
Step 4: Checking data ...
        OK: SCAP data found in /var/lib/gvm/scap-data.
        OK: CERT data found in /var/lib/gvm/cert-data.
Step 5: Checking Postgresql DB and user ...
        OK: Postgresql version and default port are OK.
 gvmd | _gvm | UTF8 | en_US.UTF-8 | en_US.UTF-8 | | libc |
 16435|pg-gvm|10|2200|f|22.4.0||
        OK: At least one user exists.
Step 6: Checking Greenbone Security Assistant (GSA) ...
        OK: Greenbone Security Assistant is present in version 22.04.1~git.
Step 7: Checking if GVM services are up and running ...
        OK: ospd-openvas service is active.
        OK: gvmd service is active.
        OK: gsad service is active.
Step 8: Checking few other requirements...
        OK: nmap is present.
        OK: ssh-keygen found, LSC credential generation for GNU/Linux targets is ↩
            likely to work.
        OK: nsis found, LSC credential package generation for Microsoft Windows ↩
            targets is likely to work.
        OK: xsltproc found.
Step 9: Checking greenbone-security-assistant...
        OK: greenbone-security-assistant is installed

It seems like your GVM-22.4.1 installation is OK.
You should now have a working OpenVAS setup. You can start it by using gvm-start. If the services are running and you want to stop them, you can run gvm-stop. If you ever have problems with your OpenVAS installation, it’s worth running gvm-check-setup again. Once you have OpenVAS installed, you can access it with any browser at https://127.0.0.1:9392. Remember, the username is admin and the password is whatever was output from your setup process. Each installation is going to have a different password, so make sure to keep track of yours.
Local Vulnerabilities
Local vulnerabilities require some level of access to the system. The object of exploiting a local vulnerability is not to gain access; you have to already have local access to execute a program that has such a vulnerability. The idea of exploiting a local vulnerability is often to gain access to something the attacker doesn’t otherwise have access to, meaning it could be a privilege escalation.
Local vulnerabilities can occur in any program on a system. This includes running services—programs that are running in the background without direct user interaction and often called daemons—as well as any other program that a user can get access to. A program like passwd is setuid to allow any user to run it and get temporary root privileges. A setuid program sets the user ID of the running program to the owner of the file. This is necessary because changing a user’s password requires changes to a file that only root can write to. If I wanted to change my password, I could run passwd, but because the password database has to be changed, the passwd program needs to have root privileges to write to the needed file. If there were a vulnerability in the passwd program, that program would be running temporarily as root, which means any exploit during the time period the program was running as root would have root permissions.
Note
A program that has the setuid bit set starts up as the user that owns the file. Normally, the user that owns a file would be root because users need to be able to perform tasks that require root privileges, like changing their own password. However, you can create a setuid program for any user. No matter the user that started the program, when it’s running, it will appear as though the owner of the program on disk is running the program.
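A quick way to see the distinction the note describes from inside a program is to compare the real and effective user IDs; in a running setuid executable, the two differ. This is a small illustrative sketch and assumes a POSIX system.

#include <stdio.h>
#include <unistd.h>

int main(void)
{
    /* The real UID is the user who ran the program; the effective UID is
       what permission checks are made against. For a setuid-root binary
       run by an ordinary user, getuid() is nonzero while geteuid() is 0. */
    printf("real UID: %d, effective UID: %d\n",
           (int) getuid(), (int) geteuid());
    return 0;
}

Compiled, given to root, and marked with chmod u+s, this program run by a regular user reports an effective UID of 0, which is exactly the window a vulnerability in a setuid program hands to an attacker.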
Using lynis for Local Checks
Programs are available on most Linux distributions that can run tests for local vulnerabilities, and Kali is no different. One of these programs is lynis, a vulnerability scanner that runs on the local system, working through numerous checks for settings that would be common in a hardened operating system installation. Operating systems that are hardened are configured to be resistant to attacks. This can mean enabling logging, tightening permissions, and choosing other settings carefully.
The program lynis has settings for various scan types. You can do quick scans or complete scans, depending on the depth you want to go to. There is also the possibility of running in pentest mode, which is an unprivileged scan. This limits what can be checked: anything that requires root access, like reading protected configuration files, can’t be checked in pentest mode. It can, though, give you good insight into what an attacker could do after gaining access to a regular, unprivileged account. Example 4-7 shows partial output of a run of lynis against a basic Kali installation.
Example 4-7. Output from lynis
[+] Kernel
------------------------------------
  - Checking default run level                               [ RUNLEVEL 5 ]
  - Checking CPU support (NX/PAE)
    CPU support: PAE and/or NoeXecute supported              [ FOUND ]
  - Checking kernel version and release                      [ DONE ]
  - Checking kernel type                                     [ DONE ]
  - Checking loaded kernel modules                           [ DONE ]
    Found 120 active modules
  - Checking Linux kernel configuration file                 [ FOUND ]
  - Checking default I/O kernel scheduler                    [ NOT FOUND ]
  - Checking for available kernel update                     [ OK ]
  - Checking core dumps configuration
    - configuration in systemd conf files                    [ DEFAULT ]
    - configuration in /etc/profile                          [ DEFAULT ]
    - 'hard' configuration in /etc/security/limits.conf      [ DEFAULT ]
    - 'soft' configuration in /etc/security/limits.conf      [ DEFAULT ]
    - Checking setuid core dumps configuration               [ DISABLED ]
  - Check if reboot is needed                                [ NO ]

[+] Memory and Processes
------------------------------------
  - Checking /proc/meminfo                                   [ FOUND ]
  - Searching for dead/zombie processes                      [ NOT FOUND ]
  - Searching for IO waiting processes                       [ NOT FOUND ]
  - Search prelink tooling                                   [ NOT FOUND ]

[+] Users, Groups, and Authentication
------------------------------------
  - Administrator accounts                                   [ OK ]
  - Unique UIDs                                              [ OK ]
  - Unique group IDs                                         [ OK ]
  - Unique group names                                       [ OK ]
  - Password file consistency                                [ SUGGESTION ]
  - Checking password hashing rounds                         [ DISABLED ]
  - Query system users (non daemons)                         [ DONE ]
  - NIS+ authentication support                              [ NOT ENABLED ]
  - NIS authentication support                               [ NOT ENABLED ]
  - Sudoers file(s)                                          [ FOUND ]
  - PAM password strength tools                              [ SUGGESTION ]
  - PAM configuration files (pam.conf)                       [ FOUND ]
  - PAM configuration files (pam.d)                          [ FOUND ]
  - PAM modules                                              [ FOUND ]
  - LDAP module in PAM                                       [ NOT FOUND ]
  - Accounts without expire date                             [ OK ]
  - Accounts without password                                [ OK ]
  - Locked accounts                                          [ OK ]
  - Checking user password aging (minimum)                   [ DISABLED ]
  - User password aging (maximum)                            [ DISABLED ]
  - Checking Linux single user mode authentication           [ OK ]
  - Determining default umask
    - umask (/etc/profile)                                   [ NOT FOUND ]
    - umask (/etc/login.defs)                                [ SUGGESTION ]
  - LDAP authentication support                              [ NOT ENABLED ]
  - Logging failed login attempts                            [ ENABLED ]
As you can see from the output, lynis found problems with the pluggable authentication module (PAM) password-strength tools, enough that it was willing to offer a suggestion. It also found a problem with password file consistency, though for more details I would have to look at the suggestion. Additionally, it found a problem with the default file permission settings; this is the umask setting that it checked in /etc/login.defs. The full output from the tool has lots of other recommendations, but this was a full audit of the system, and it’s mostly an out-of-the-box installation. It’s interesting to note that in the previous edition of the book, running lynis found problems with single-user mode authentication. Single-user mode is a minimal boot mode commonly used for critical system administration, when you don’t want anything else touching the system, like the filesystem, while you are performing tasks. That issue has apparently been resolved since that version of Kali, as it’s no longer a problem here.
The console output provides one level of detail, but a logfile is also created, stored in the home directory of the user running the command. Additional log details can be found in /var/log/lynis.log. Example 4-8 shows a fragment of the output from the logfile that was stored in my home directory when I ran it as my user. The output in this logfile shows every step taken by the program as well as the outcome from each step. You will also notice that when there are findings, the program indicates them in the output. You will see in the case of libpam-tmpdir that there is a suggestion for further hardening the operating system against attack.
Example 4-8. Logfile from a run of lynis
2023-07-10 19:31:33 ====
2023-07-10 19:31:33 Discovered directories: /bin, /sbin, /usr/bin, /usr/sbin, /usr/local/bin, /usr/local/sbin
2023-07-10 19:31:33 DEB-0001 Result: found 7981 binaries
2023-07-10 19:31:33 Status: Starting Authentication checks...
2023-07-10 19:31:33 Status: Checking if libpam-tmpdir is installed and enabled...
2023-07-10 19:31:33 ====
2023-07-10 19:31:33 Performing test ID DEB-0280 (Checking if libpam-tmpdir is ↩
installed and enabled.)
2023-07-10 19:31:33 - libpam-tmpdir is not installed.
2023-07-10 19:31:33 Hardening: assigned partial number of hardening points ↩
(0 of 2). Currently having 0 points (out of 2)
2023-07-10 19:31:33 Suggestion: Install libpam-tmpdir to set $TMP and $TMPDIR for ↩
PAM sessions [test:DEB-0280] [details:-] [solution:-]
2023-07-10 19:31:33 Status: Starting file system checks...
2023-07-10 19:31:33 Status: Starting file system checks for dm-crypt, ↩
cryptsetup & cryptmount...
2023-07-10 19:31:33 ====
2023-07-10 19:31:33 Skipped test DEB-0510 (Checking if LVM volume groups or file ↩
systems are stored on encrypted partitions)
2023-07-10 19:31:33 Reason to skip: Prerequisites not met (ie missing tool, other ↩
type of Linux distribution)
2023-07-10 19:31:33 ====
2023-07-10 19:31:33 Skipped test DEB-0520 (Checking for Ecryptfs)
2023-07-10 19:31:33 Reason to skip: Prerequisites not met (ie missing tool, other ↩
type of Linux distribution)
2023-07-10 19:31:34 Status: Starting Software checks...
This is a program that can be used on a regular basis by anyone who operates a Linux system so they can be aware of issues they need to correct. As someone involved in penetration or security testing, though, you can be running this program on Linux systems that you get access to. If you are working hand in hand with the company you are testing for, performing local scans will be easier. You may be provided local access to the systems so you can run programs like this. You would need this program installed on any system you wanted to run it against, of course. In that case, you wouldn’t be running it from Kali itself. However, you can get a lot of experience with lynis by running it on your local system and referring to the output.
OpenVAS Local Scanning
You are not limited to testing on the local system for local vulnerabilities. By this I mean that you don’t have to be logged in, running programs on the target yourself, to perform the testing. Instead, you can use a remote vulnerability scanner and provide it with login credentials. This will allow the scanner to log in remotely and run its checks through a login session. In the Greenbone Security Assistant web interface, most of what we will be doing in this section is in the Configuration menu. This is where you configure all the essential elements for creating a scan.
Earlier, we installed OpenVAS; now we can take a look at using it to scan for vulnerabilities. While it is primarily a remote vulnerability scanner, as you will see, it can be provided with credentials to log in. Those login credentials, shown being configured in Figure 4-2, will be used by OpenVAS to log in remotely and run tests locally through the login session. You can also select the option for OpenVAS to autogenerate a password for a specified username.
The credential creation is only part of the process, though. You still need to configure a scan that can use the credentials. The first thing to do is to either identify or create a scan configuration that includes local vulnerabilities for the target operating systems you have. As an example, Figure 4-3 shows a dialog box displaying a section of the vulnerability families available in OpenVAS. You can see a handful of operating systems listed with local vulnerabilities. This includes Debian and Ubuntu. Other operating systems are included, and each family may have hundreds, if not thousands, of vulnerabilities.
Once you have your vulnerabilities selected, you need to create targets and apply your credentials. Figure 4-4 shows the dialog box in OpenVAS creating a target. This requires that you specify an IP address, or an IP address range, or a file that includes the list of IP addresses that are meant to be the targets. Although this dialog box provides other options, the ones that we are most concerned with are those where we specify credentials. The credentials configured here have been selected to be used against targets that have SSH servers running on port 22. If you have previously identified other SSH servers that may be running in a nonstandard configuration, you can specify other ports. In addition to SSH, you can select SMB and ESXi as protocols to log in with.
Each operating system is going to be different, and this is especially true with Linux, which is why there are different families in OpenVAS for local vulnerabilities. Each distribution is configured a little differently and has different sets of packages. Each package may have different default configuration settings. Beyond the distribution, users can have a lot of choices for package categories. Once the base is installed, hundreds of additional packages could typically be installed, and each of those packages can introduce vulnerabilities.
Note
One common approach to hardening is to limit the number of packages that are installed. This is especially true when it comes to server systems in which the bare-minimum amount of software necessary to operate the services should be installed.
Once you have all your configurations in place, you still need to create a scan task. Under the Scans menu, you would select Tasks. As with the other pages, you should see an icon that looks like a sheet of paper with an asterisk in the upper-right corner. This is how you add a new configuration, and on this page, you are creating a new scan task. Figure 4-5 shows the dialog box where you create a new scan task. The important settings here are the Scan Targets and Scan Config pull-downs. You would select the scan target you created that had your credentials. Then you would select the scan config you created that has the right set of families selected for your scan.
Once you’ve configured the new scan task, it will show up in the list of tasks. During the task configuration, you could set it to run on a schedule, but if you want to, you can also run it on demand. Just click the play button that looks a little like a sideways triangle.
Root Kits
While not strictly a vulnerability scanner, Rootkit Hunter is worth knowing about. This program can be run locally on a system to determine whether it has been compromised and has a root kit installed. A root kit is a software package that is meant to conceal and support a piece of malware. It may include replacement operating system utilities that hide the existence of the running malware. For example, the ps program may be altered to not show the processes associated with the malware, and ls may hide the malware’s files. Root kits may also implement a backdoor that allows attackers remote access.
If root kit software has been installed, it may mean that a vulnerability somewhere has been exploited. It also means that software you don’t want is running on your system. You may want to run Rootkit Hunter on any system where your scanners have found vulnerabilities, since an exposed vulnerability may mean the system has already been compromised. Running Rootkit Hunter will allow you to determine whether root kits are installed on the system.
The name of the executable is rkhunter, and it’s easy to run, though it’s not installed in a default build of the current Kali Linux distribution. rkhunter runs checks to determine whether root kits have been installed. To start with, it runs checks on file permissions, which you can see a sample of in Example 4-9. Beyond that, rkhunter does pattern searches for signatures of what known root kits look like. Just like most antivirus programs, rkhunter can’t find what it doesn’t know about. It will look for anomalies, like incorrect file permissions. It will look for files that it knows about from known root kits. If there are root kits it doesn’t know about, those won’t be detected.
Example 4-9. Running Rootkit Hunter
┌──(kilroy@badmilo)-[~]
└─$ sudo rkhunter --check
[ Rootkit Hunter version 1.4.6 ]

Checking system commands...

  Performing 'strings' command checks
    Checking 'strings' command                               [ OK ]

  Performing 'shared libraries' checks
    Checking for preloading variables                        [ None found ]
    Checking for preloaded libraries                         [ None found ]
    Checking LD_LIBRARY_PATH variable                        [ Not found ]

  Performing file properties checks
    Checking for prerequisites                               [ OK ]
    /usr/sbin/adduser                                        [ OK ]
    /usr/sbin/chroot                                         [ OK ]
    /usr/sbin/cron                                           [ OK ]
    /usr/sbin/depmod                                         [ OK ]
    /usr/sbin/fsck                                           [ OK ]
    /usr/sbin/groupadd                                       [ OK ]
    /usr/sbin/groupdel                                       [ OK ]
    /usr/sbin/groupmod                                       [ OK ]
    /usr/sbin/grpck                                          [ OK ]
As with lynis, this is a software package; you would need to install Rootkit Hunter on a system that you were auditing for malicious software. You can’t run it from your Kali instance on a remote system. If you are doing a lot of work with testing and exploits on your Kali instance, it’s not a bad idea to keep checking your own system. Anytime you run software from a source you don’t necessarily trust completely, which may be the case if you are working with proof-of-concept exploits, you should be checking your system for viruses and other malware. Yes, this is just as true on Linux as it is on other platforms. Linux is not invulnerable to attacks or malware. It’s best to keep your system as clean and safe as you can.
Remote Vulnerabilities
While you may sometimes be given access to systems by working closely with your target, you definitely will have to run remote checks for vulnerabilities when you are doing security testing. When you get complete access, which may include credentials to test with, desktop builds to audit without impacting users, or configuration settings from network devices, you are doing clear-box testing. If you have no cooperation from the target, aside from a clear agreement with them about what you are planning on doing, you are doing opaque-box testing; you don’t know anything at all about what you are testing. You may also do gray-box testing. This is somewhere between clear box and opaque box, though there are a lot of gradations in between.
When testing for remote vulnerabilities, a vulnerability scanner gives you a head start. While OpenVAS is not the only vulnerability scanner that can be used, it is freely available and included with the Kali Linux repositories. It should be considered a starting point for your vulnerability testing. If all it took was running a scanner, anyone could do it; running vulnerability scanners isn’t hard. The value of someone doing security testing isn’t loading up a bunch of automated tools. Instead, it’s the interpretation and validation of the results, as well as going beyond the automated tools. The hard work is understanding the output of the scanner and being able to determine whether the findings are legitimate, as well as the actual priority of each vulnerability.
Earlier, we explored how OpenVAS can be used for local scanning. It can also be used, and perhaps is more commonly known, for scanning for remote vulnerabilities. This is what we’re going to be spending some time looking at now. OpenVAS is a fairly dense piece of software, so we’ll be skimming through some of its capabilities rather than providing a comprehensive overview. The important part is to get a handle on how vulnerability scanners work.
Note
As stated earlier, the OpenVAS project began as a fork of the Nessus project. Since that time, significant architectural changes have occurred in the design of OpenVAS. Although Nessus has also moved to a web interface, there is no longer any resemblance between OpenVAS and Nessus, whether in the interface or in the underlying scanner architecture.
OpenVAS, like any vulnerability scanner, relies on a collection or database of known vulnerabilities. This collection should be regularly updated, just like antivirus signatures. When you set up OpenVAS, one of the first things that happens is that the current collection of vulnerability definitions is downloaded. If you have the system running regularly with the OpenVAS services, your definitions will get updated for you. If OpenVAS has been down for a while and you want to run a scan, it’s worth making sure that all your signatures are updated. You can do this on the command line by using the command greenbone-nvt-sync. This needs to be run as the _gvm user created for OpenVAS, so you would run sudo -u _gvm greenbone-nvt-sync, which runs the command as the specified user. OpenVAS uses the Security Content Automation Protocol (SCAP) to exchange information between your installation and the remote servers where the content is stored.
OpenVAS uses a web interface, much like a lot of other applications today. To get access to the web application, you go to https://localhost:9392. When you log in, you are presented with a dashboard. This includes graphs related to your own tasks. The dashboard also presents information about the vulnerabilities it knows about and their severities. In Figure 4-6, you can see a web page open to the dashboard. You can see the number of tasks (it’s a new installation so there is only one) as well as a chart showing the vulnerabilities that are in the database.
The menus for accessing features and functions are along the top of the page. From there, you can access features related to the scans, assets, and configurations, as well as the collection of security information that OpenVAS knows about, with all the vulnerabilities it is aware of.
Quick Start with OpenVAS
While OpenVAS is certainly a dense piece of software, providing a lot of capabilities for customization, it does provide a simple way to get started. A scan wizard allows you to just provide a target and get started scanning. If you want to get a quick sense of common vulnerabilities that may be found on the target, this is a great way to go. A simple scan using the wizard will use the defaults, which is a way to get you started quickly. To get started with the wizard, you navigate to the Scans menu and select Tasks. At the top left of that page, you will see some small icons. The purple one that looks like a wizard’s wand opens the Task Wizard. Figure 4-7 shows the menu that pops up when you roll your cursor over that icon.
From that menu, you can select the Advanced Task Wizard, which gives you more control over assets and credentials, among other settings. You can also select the Task Wizard, which you can see in Figure 4-8. Using the Task Wizard, you will be prompted for a target IP address. The IP address that is populated when it’s brought up is the IP address of the host from which you are connected to the server. You can enter not only a single IP address here but also an entire network, as seen in Figure 4-8. In my case, I would use 192.168.1.0/24. That is the entire network range from 192.168.1.0 through 192.168.1.255. The /24 says that the first 24 bits identify the network, which is a compact way of designating the range without writing out a subnet mask (255.255.255.0) or a start and end address. You will see this a lot; it’s commonly called CIDR notation, for Classless Inter-Domain Routing.
Once you have entered your target or targets, all you need to do is click Start Scan, and OpenVAS is off to the races, so to speak. You have started your very first vulnerability scan. This is the easiest way to get a scan started, but you don’t have any control over the type of scan or even when the scan will run. For that, we need to look at the Advanced Scan Wizard.
Tip
It may be useful to have some vulnerable systems around when you are running your scans. Although you can get various systems (and a simple web search for vulnerable operating systems will turn them up), one is really useful. Metasploitable 2 is a deliberately vulnerable Linux installation. Metasploitable 3 is the updated version based on Windows Server 2008, though there is also a version of Metasploitable 3 that is built on Ubuntu. Metasploitable 2 is a straight-up download. Metasploitable 3 is a build-it-on-your-own-system operating system. It requires VirtualBox and additional software.
We can look at the Advanced Scan Wizard, shown in Figure 4-9, to see the broader set of configuration settings you have access to while still using a wizard to help set all the values. This will give you a quick look ahead to what we will be working with on a larger scale when we move to creating scans from start to finish.
Creating a Scan
If you want more control of your scan, additional steps are required. There are a few places to start, because you need several components in place before you can start the scan. A simple starting point is the same place in the interface where we were setting up local scans. You need to establish targets. If you want to run local scans as part of your overall scan, you would set up your credentials as we did earlier, going to the Configuration menu and selecting Credentials. Once you have set whatever credentials you need, you can go to Configuration/Targets to access the dialog box that allows you to specify targets.
From there, you add in or configure any credentials you may have, and your targets are set up. You need to think about the kind of scan you want to do. This is where you need to go to Scan Configs, also under the Configuration menu. This is something else we looked at quickly under “Local Vulnerabilities”. OpenVAS does come with scan configs built in, and you can see the list in Figure 4-10. These are canned configurations that you won’t be able to make changes to. Also in this list, you will see a couple of configurations I created. If you want something different from what the canned scans offer you, you need to either clone one of these and edit it or create your own.
When you want to create your own scan configuration, you can start with a blank configuration or a full and fast configuration. Once you have decided where you want to start, you can begin selecting the scan families to include in your scan configuration. Additionally, you can alter the way the scanner behaves. You can see a set of configuration settings in Figure 4-11 that will change the way the scan is run and the locations it uses. One area to point out specifically here is the Safe Checks setting. This indicates that the only checks to run are ones that are known to be safe, meaning they aren’t as likely to cause problems with the target systems. This does mean that some checks won’t get run, and they may be the checks that test the very vulnerabilities you are most concerned with. After all, if just probing for a vulnerability can cause problems on the remote system, that’s something the company you are working with should be aware of.
Vulnerability scanners aren’t intended to exploit vulnerabilities. However, just poking at software to evaluate its reaction can be enough to cause application crashes. In the case of the operating system, as with network stack problems, you may be talking about crashing the operating system and causing a denial of service, even if that’s not what you were looking to do. This is an area where you need to be clear up front with the people you are doing the testing for. If they are expecting clean testing, and you are working in cooperation with them, you need to be clear that sometimes, even if you aren’t going for outages, outages will happen. Because Safe Checks disables the tests that have the potential to damage or even disable the remote service, it is a setting to be careful with, and you should be very aware of what you are doing when you turn it off.
Although you can also adjust additional settings, you are ready to go after you have set your scan configuration and your targets. Before you get started, you may want to consider setting some schedules. This can be helpful if you are working with a company and want to do the testing off-hours. If you are doing security testing or a penetration test, you likely want to monitor the scan. However, if this is a routine scan, you may want to set it to run overnight so as not to affect day-to-day operations of the business. While you may not be impacting running services or systems, you will be generating network traffic and using up resources on systems, which will have an impact if you do it while the business is operating.
Let’s assume, though, that you have your configurations in place. You just want to get a scan started with everything you have configured. From here, you need to go to the Scans menu and select Tasks. Then click the New Task icon. This brings up another dialog box, which you can see in Figure 4-12. In this dialog box, you give the task a name, which then shows the additional options, and then you can select your targets and your scan config. You can also select a schedule, if you created one.
On our simple installation, we will have the choice of a single scanner to use. That’s the scanner on our Kali system. In a more complex setup, you may have multiple scanners to select from and manage all from a single interface. You will also be able to select the network interface you want to run the scan on. While this will commonly be handled by the routing tables on your system, you can indicate a specific source interface. This may be useful if you want all your traffic to source from one IP address range while you are managing from another interface.
Finally, you have the choice of storing reports within the OpenVAS server. You can indicate how many you want to store so you can compare one scan result to another to demonstrate progress. Ultimately, the goal of all your testing, including vulnerability scanning, is to improve the security posture of your target. If the organization is getting your recommendations and then not doing anything with them, that’s worse than not running the scans at all. What happens when you present a report to the organization you are working for is that they become aware of the vulnerabilities you have identified. This information can then be used against them if they don’t do anything with what you have told them.
OpenVAS Reports
The report is the most important aspect of your work. You will be writing your own report when you are done testing, but the report that is issued from the vulnerability scanner is helpful for you to understand where you might start looking. You should be aware of two things when you start to look at vulnerability scanner reports. First, the vulnerability scanner uses specific signatures to determine whether the vulnerability is there. This may be something like banner grabbing to compare version numbers. You can’t be sure that the vulnerability exists because a tool like OpenVAS does not exploit the vulnerability. Second, and this is related, you can get false positives. A false positive is an indication that the vulnerability exists when it doesn’t. Since the vulnerability scanner does not exploit the vulnerability, the best it can do is get a probability.
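To make that concrete, the following is a minimal sketch, in Python, of the kind of check a banner-grabbing signature performs: connect to a service, read whatever it announces, and compare the advertised version against a range believed to be vulnerable. The target address, port, and version threshold here are placeholders, and a banner match on its own proves nothing about exploitability.

import re
import socket

TARGET = ("192.168.1.20", 21)    # hypothetical FTP service on the lab network

# Grab whatever the service announces when a client connects.
with socket.create_connection(TARGET, timeout=3) as sock:
    banner = sock.recv(256).decode(errors="replace").strip()
print("banner:", banner)

# Compare the advertised version against an assumed vulnerable range.
match = re.search(r"vsFTPd (\d+)\.(\d+)\.(\d+)", banner)
if match:
    version = tuple(int(part) for part in match.groups())
    if version <= (2, 3, 4):     # hypothetical threshold for this illustration
        print("version may be vulnerable; verify by hand before reporting it")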
If you are not running a scan with credentials, you are going to miss detecting a lot of vulnerabilities. You will also have a higher potential for getting false positives. This is why a report from OpenVAS or any other scanner isn’t sufficient. Since there is no guarantee that the vulnerability actually exists, you need to be able to validate the reports so your final report presents legitimate vulnerabilities that need to be remediated.
However, enough with the remonstration. Let’s get on with looking at the reports so we can start determining what is legitimately troubling and what may be less concerning. The first thing we need to do is go back to the OpenVAS web interface after the scan is complete. Scans of large networks with a lot of services can be very time-consuming, especially if you are doing deep scans. In the Scans menu, you will find the item Reports. From there, you get to the Report dashboard. That will give you a list of all the scans you have done as well as some graphs of the severity of the findings from your scans. You can see the Report dashboard in Figure 4-13.
When you select the scan you want the report from, you will be presented with a list of all vulnerabilities that were found. When I use the word report, it may sound like we are talking about an actual document, which you can certainly get, but really all we’re looking for is the list of findings and their details. We can get all of that just as easily from the web interface as we can from a document. I find it easier in most cases to be able to click back and forth from the list to the details as needed. Your own mileage will, of course, vary, depending on what’s most comfortable for you. Figure 4-14 shows the list of vulnerabilities resulting from the scan of my network. I like to keep some vulnerable systems around for fun and demonstration purposes. Having everything up-to-date wouldn’t yield us much to look at.
You’ll see eight columns in the list of vulnerabilities. Some of these are fairly self-explanatory. The Vulnerability and Severity columns should be clear. The vulnerability is a short description of the finding. The severity is worth talking about, though. This assessment is based on the impact that may result from the vulnerability being exploited. The issue with the severity provided by the vulnerability scanner is that it doesn’t take anything else into account. All it knows is the severity that goes with that vulnerability, regardless of any other mitigations that are in place that could limit the exposure to the vulnerability. This is where having a broader idea of the environment can help. As an example, let’s say there is an issue with a web server, like a vulnerability in PHP, a programming language for web development. However, the website could be configured with two-factor authentication, and special access could be granted just for this scan. This means only authenticated users could get access to the site to exploit the vulnerability.
Just because mitigations are in place for issues that may reduce their overall impact on the organization doesn’t mean those issues should be ignored. All it means is that the bar is higher for an attacker, not that it’s impossible for the exploit to happen. Experience and a good understanding of the environment will help you key in on your findings. The objective shouldn’t be to frighten the bejeebers out of people but instead to provide them with a reasonable expectation of where they sit from the standpoint of exposure to attack. Working with the organization will ideally get them to improve their overall security posture.
The next column to talk about is the QoD, or Quality of Detection, column. As noted earlier, the vulnerability scanner can’t be absolutely certain that the vulnerability exists. The QoD rating indicates the scanner’s level of certainty that the vulnerability exists. The higher the score, the more certain the scanner is. If you have a high QoD and a high severity, this is probably a vulnerability that someone should be investigating. As an example, one of the findings is shown in Figure 4-15. This has a QoD of 97% and a severity of 10, which is as high as the scanner goes. OpenVAS considers this a serious issue that it believes is confirmed. This is shown by the output received from the system under test.
Each finding will tell you how the vulnerability was detected. In this case, OpenVAS had gotten access to the local system and reviewed a list of the installed packages. OpenVAS identified that the version installed has an open vulnerability that was disclosed by the developer. To verify this, you can review the CVE report. You can also look at the list of installed packages to verify the version number installed. Finally, and perhaps most importantly, you can review the security advisory provided by Canonical, the company behind Ubuntu. In some cases, remediations may be in place to limit the exposure, while the application still carries the same version number as that provided by the upstream package maintainer.
When you get results from some services, it's worth trying as best as you can to duplicate them manually. This is where you may want to turn up the logging as high as you can. You can do this by going to the scanner preferences and turning on Log Whole Attack. You can also check the application log on the target to see exactly what was done. Repeating the attack and then modifying it in useful ways can be important. You may get an error message from the listening service or, if it's a web application, from the application or application server. Low-quality detection results can still point at serious issues, so they need to be verified by hand.
If you need help performing the additional research and validation, the findings will have a list of resources. These web pages will have more details on the vulnerability, which can help you understand the attack so you can work on duplicating it. Often, these resources point to the announcement of the vulnerability. They may also provide details from vendors about fixes or workarounds.
Another column to take a look at from Figure 4-14 is the second column, which is labeled with just an icon. This column indicates the solution type. The solutions may include workarounds, vendor fixes, or mitigations. Each finding provides additional details about the workarounds or fixes that may be possible. One of the vulnerabilities detected involved features of an SMTP server that could leak information about email addresses to an attacker. Figure 4-16 shows one of the findings and its solution. This particular solution is a vendor fix; in this case, the fix is to update the installed piece of software to the latest version. You may also find workarounds in identified vulnerabilities, and the workaround will be documented.
The final columns to look at are the Host and Location columns. The host tells you which system had the vulnerability. This is important so your organization knows the system it needs to perform the configuration work on. The location tells you which port the targeted service runs on. This lets you know where you should target your additional testing. When you provide details to the organization, the system that’s impacted is important to include. I also include any mitigations or fixes that may be available when I write reports for clients.
Network Device Vulnerabilities
OpenVAS is capable of testing network devices. If your network devices are accessible over the networks you are scanning, they can get touched by OpenVAS, which can detect the type of device and apply the appropriate tests. However, programs are also included with Kali that are specific to network devices and vendors. Since Cisco is a common networking vendor, there is a better chance that someone will be developing tools and exploits against those devices. Cisco has majority market share in routing and switching, so those devices make good targets for attacks.
Network devices are often managed over networks. This can be done through web interfaces using HTTP(S) or on a console through a protocol like SSH or—far less ideal but still a remote possibility—Telnet. Once you have any device on a network, it has the potential to be exploited. Using the tools available in Kali, you can start to identify potential vulnerabilities in the critical network infrastructure.
Auditing Devices
First, we will use a tool to do some basic auditing of Cisco devices on the network. The Cisco Auditing Tool (CAT) attempts logins to devices you provide, working through a word list of candidate passwords. The downside to using this tool is that it uses Telnet to attempt connections rather than SSH, which would be more common on well-secured networks. Any management traffic over Telnet can be intercepted and read, because it's transmitted in plain text. Since management of network devices includes passwords, it's more common to use encrypted protocols like SSH for management.
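As a rough illustration of what a tool like CAT is doing, here is a minimal sketch in Python that walks a word list and tries each entry as a Telnet password. It assumes the standard library's telnetlib module (deprecated in recent Python releases and removed in 3.13), and the target address, prompt strings, and word list file are all placeholders that will vary from device to device.

import telnetlib   # deprecated in newer Python; still present through 3.12

TARGET = "192.168.1.1"        # hypothetical device under test
WORDLIST = "passwords.txt"    # one candidate password per line

def try_password(password, timeout=5):
    """Return True if the device appears to accept the password."""
    with telnetlib.Telnet(TARGET, 23, timeout) as tn:
        tn.read_until(b"Password:", timeout)       # prompt text varies by device
        tn.write(password.encode() + b"\n")
        response = tn.read_until(b">", timeout)    # many devices show '>' on success
        return b">" in response

with open(WORDLIST) as fh:
    for candidate in (line.strip() for line in fh):
        if candidate and try_password(candidate):
            print(f"Login succeeded with password: {candidate}")
            break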
Note
Many of the tools in this section will not be installed in Kali by default. The packages are available, but they are probably not going to be there when you try to run them for the first time. Fortunately, Kali will typically notice what you are trying to do and suggest a package that will install the tool you are trying to run. You can always find and install the packages ahead of time, but you can also just try running the tool and let Kali help you get it installed.
CAT can also investigate a system by using the Simple Network Management Protocol (SNMP). The version of SNMP used by CAT is outdated, though that is not to say that some devices don't still run outdated versions of protocols like SNMP. SNMP can be used to gather information about configuration as well as system status. The older version of SNMP uses a community string for authentication, which is provided in clear text because the first version of SNMP doesn't use encryption. CAT uses a word list of potential community strings; for a long time it was common for the read-only community string to be "public" and the read-write community string to be "private." They were the defaults in many cases, and unless the configuration of the system was changed, that's what you would need to supply.
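The idea behind the SNMP checks can be sketched in a few lines of Python as well. The example below tries a short list of candidate community strings against a device by requesting sysDescr.0 over SNMPv1; it assumes the classic synchronous pysnmp hlapi (the pysnmp 4.x interface), and the target address and candidate strings are placeholders.

from pysnmp.hlapi import (
    SnmpEngine, CommunityData, UdpTransportTarget, ContextData,
    ObjectType, ObjectIdentity, getCmd,
)

TARGET = "192.168.1.1"                       # hypothetical device
CANDIDATES = ["public", "private", "cisco"]  # common default community strings

for community in CANDIDATES:
    # Request sysDescr.0 (1.3.6.1.2.1.1.1.0) using SNMPv1 (mpModel=0).
    err, status, index, var_binds = next(getCmd(
        SnmpEngine(),
        CommunityData(community, mpModel=0),
        UdpTransportTarget((TARGET, 161), timeout=2, retries=0),
        ContextData(),
        ObjectType(ObjectIdentity("1.3.6.1.2.1.1.1.0")),
    ))
    if err is None and not status:
        print(f"community '{community}' accepted: {var_binds[0]}")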
CAT is an easy program to run. It’s a Perl script that calls individual modules for SNMP and brute-force runs. As I’ve noted, it does require you to provide the hosts. You can provide a single host or a text file with a list of hosts in it. Example 4-10 shows the help output for CAT and how to run it against Cisco devices.
Example 4-10. CAT output
┌──(kilroy@badmilo)-[/etc/default]
└─$ CAT

Cisco Auditing Tool - g0ne [null0]

Usage:
        -h hostname      (for scanning single hosts)
        -f hostfile      (for scanning multiple hosts)
        -p port #        (default port is 23)
        -w wordlist      (wordlist for community name guessing)
        -a passlist      (wordlist for password guessing)
        -i [ioshist]     (Check for IOS History bug)
        -l logfile       (file to log to, default screen)
        -q quiet mode    (no screen output)
The program cisco-torch can be used to scan for Cisco devices. One of the differences between this and CAT is that cisco-torch can be used to scan for available SSH ports/services. Additionally, Cisco devices can store and retrieve configurations from Trivial File Transfer Protocol (TFTP) servers. cisco-torch can be used to fingerprint both TFTP and Network Time Protocol (NTP) servers. This will help identify infrastructure related to both Cisco Internetwork Operating System (IOS) devices and the supporting infrastructure for those devices. IOS is the operating system that Cisco uses on its routers and enterprise switches. Example 4-11 shows a scan of a local network looking for Telnet, SSH, and Cisco web servers. All these protocols can be used to remotely manage Cisco devices.
Note
Cisco has been using its IOS for decades now. IOS should not be confused with iOS, which is what Apple calls the operating system that controls its mobile devices.
Example 4-11. Output from cisco-torch
┌──(kilroy@badmilo)-[~]
└─$ cisco-torch -t -s -w 192.168.1.0/24
Using config file torch.conf...
Loading include and plugin ...
###############################################################
#   Cisco Torch Mass Scanner                                  #
#   Because we need it...                                     #
#   http://www.arhont.com/cisco-torch.pl                      #
###############################################################

List of targets contains 256 host(s)
Will fork 50 additional scanner processes
Range Scan from 192.168.1.0 to 192.168.1.5
528028: Checking 192.168.1.0 ...
HUH db not found, it should be in fingerprint.db
Skipping Telnet fingerprint
Range Scan from 192.168.1.48 to 192.168.1.53
528036: Checking 192.168.1.48 ...
Range Scan from 192.168.1.24 to 192.168.1.29
528032: Checking 192.168.1.24 ...
HUH db not found, it should be in fingerprint.db
HUH db not found, it should be in fingerprint.db
Skipping Telnet fingerprint
Range Scan from 192.168.1.30 to 192.168.1.35
Skipping Telnet fingerprint
Range Scan from 192.168.1.72 to 192.168.1.77
528040: Checking 192.168.1.72 ...
528033: Checking 192.168.1.30 ...
HUH db not found, it should be in fingerprint.db
Range Scan from 192.168.1.66 to 192.168.1.71
528039: Checking 192.168.1.66 ...
Skipping Telnet fingerprint
HUH db not found, it should be in fingerprint.db
Skipping Telnet fingerprint
Range Scan from 192.168.1.84 to 192.168.1.89
528042: Checking 192.168.1.84 ...
Cisco devices have known vulnerabilities. This says nothing at all about Cisco or its developers but everything about having a lot of code in complex devices as well as a long run of being a very common choice for companies buying network gear. As always, attackers spend time looking for vulnerabilities in commonly used systems. Running network scans or other tools that identify Cisco devices on the network is one thing, but at some point you will also want to identify vulnerabilities in those devices. Fortunately, in addition to OpenVAS, which can also look for vulnerabilities in network devices like those manufactured by Cisco, a Perl script comes with Kali that looks specifically for Cisco vulnerabilities. This script, cge.pl, knows about specific vulnerabilities related to Cisco devices. Example 4-12 shows the list of vulnerabilities that can be tested with cge.pl as well as how to run the script, which takes a target and a vulnerability number.
Example 4-12. Running cge.pl for Cisco vulnerability scanning
┌──(kilroy@badmilo)-[~]
└─$ cge.pl
Usage :
perl cge.pl <target> <vulnerability number>

Vulnerabilities list :
[1] - Cisco 677/678 Telnet Buffer Overflow Vulnerability
[2] - Cisco IOS Router Denial of Service Vulnerability
[3] - Cisco IOS HTTP Auth Vulnerability
[4] - Cisco IOS HTTP Configuration Arbitrary Administrative Access Vulnerability
[5] - Cisco Catalyst SSH Protocol Mismatch Denial of Service Vulnerability
[6] - Cisco 675 Web Administration Denial of Service Vulnerability
[7] - Cisco Catalyst 3500 XL Remote Arbitrary Command Vulnerability
[8] - Cisco IOS Software HTTP Request Denial of Service Vulnerability
[9] - Cisco 514 UDP Flood Denial of Service Vulnerability
[10] - CiscoSecure ACS for Windows NT Server Denial of Service Vulnerability
[11] - Cisco Catalyst Memory Leak Vulnerability
[12] - Cisco CatOS CiscoView HTTP Server Buffer Overflow Vulnerability
[13] - 0 Encoding IDS Bypass Vulnerability (UTF)
[14] - Cisco IOS HTTP Denial of Service Vulnerability
One final Cisco tool to look at is cisco-ocs. This is another Cisco scanner, but no parameters are needed to perform the testing. You don't choose what cisco-ocs does; it just does it. All you need to do is provide the range of addresses. You can see a run of cisco-ocs in Example 4-13. After you give it the start and stop IP addresses for the range, the tool tests each address in turn for entry points and potential vulnerabilities.
Example 4-13. Running cisco-ocs
┌──(kilroy@badmilo)-[~]
└─$ cisco-ocs 192.168.1.1 192.168.1.254
*********************************  OCS v 0.2  *****************************
****                                                                   ****
****                        coded by OverIP                            ****
****                        overip@gmail.com                           ****
****                        under GPL License                          ****
****                                                                   ****
****        usage: ./ocs xxx.xxx.xxx.xxx yyy.yyy.yyy.yyy               ****
****                                                                   ****
****              xxx.xxx.xxx.xxx = range start IP                     ****
****              yyy.yyy.yyy.yyy = range end IP                       ****
****                                                                   ****
**************************************************************************

(192.168.1.1) Filtered Ports
(192.168.1.2) Filtered Ports
As you can see from the tools here, several programs are looking for Cisco devices and potential vulnerabilities. If you can find these devices, and they show either open ports to test logins or, even worse, vulnerabilities, it’s definitely worth flagging them as devices to look for exploits. This is not to say that Cisco devices are the only networking devices available, but they have been around long enough and have enough installed devices around the world that they make an attractive target for tool development. Over time, as more companies like Palo Alto Networks and others get more traction than they already have, you can expect open source tools that scan for them and identify vulnerabilities to become available.
Database Vulnerabilities
Database servers commonly hold a lot of sensitive information, though they are often placed on isolated networks. This is not always the case, however. Some organizations may also believe that isolating the database protects it, which is not true. If an attacker can get through the web server or the application server, both of those systems may have trusted connections to the database. This exposes a lot of information to attack. When you are working closely with a company, you may get direct access to the isolated network to look for vulnerabilities. Regardless of where the system resides, organizations should definitely be locking down their databases and remediating any vulnerabilities found.
Oracle is a large company that built its business on enterprise databases. If a company needs large databases with sensitive information, it may well have gone to Oracle. The program oscanner, which comes installed in Kali, runs a series of checks against Oracle databases. The program uses a plug-in architecture to enable tests of Oracle databases, including trying to get the system identifiers (SIDs) from the database server, list accounts, crack passwords, and perform several other attacks. oscanner is written in Java, so it should be portable across multiple operating systems.
oscanner also comes with several lists, including lists of accounts, users, and services. Some of the files don't have a lot of possibilities in them, but they are starting points for attacks against Oracle. As with so many other tools you will run across, you will gather your own collection of service identifiers, users, and potential passwords as you go. You can add to these files for better testing of Oracle databases. As you test more and more systems and networks, you should be expanding the word lists you have available for these checks, which will, over time, increase your chances of success. Keep in mind that when you are running word lists for usernames and passwords, you will be successful only if the username or password configured on the system exactly matches something in the word list.
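One simple way to grow those lists is to merge the entries you collect on engagements into the files the tool already ships with, removing duplicates as you go. The short Python sketch below does exactly that; the filenames are placeholders, since where the oscanner lists live on your system may vary.

def merge_wordlists(base_path, extra_path, out_path):
    """Combine two word lists into one, dropping duplicates but keeping order."""
    seen = set()
    merged = []
    for path in (base_path, extra_path):
        with open(path) as fh:
            for line in fh:
                entry = line.strip()
                if entry and entry not in seen:
                    seen.add(entry)
                    merged.append(entry)
    with open(out_path, "w") as fh:
        fh.write("\n".join(merged) + "\n")

# Hypothetical filenames; point these at your own collected lists.
merge_wordlists("accounts.default", "my_accounts.txt", "accounts.merged")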
Identifying New Vulnerabilities
Software has bugs. It’s the nature of the beast. Software, especially larger pieces of software, is complex. The more complexity, the more chance for error. Think about all the choices that are made in the course of running a program. If you start calculating all the potential execution paths through a program, you will quickly get into large numbers. How many of those complete execution paths get tested when software testing is performed? Chances are, only a subset of the entire set of execution paths. Even if all the execution paths are being tested, what sorts of input are being tested?
Some software testing may be focused on functional testing. This is about verifying that the functionality specified is correct. You can do this by positive testing—making sure that what happens is expected to happen. There may also be some amount of negative testing: you want to make sure that your program fails politely if something unexpected happens. It’s this negative testing that can be difficult to accomplish, because if you have a set of data you expect, it’s only a partial set compared with everything that could possibly happen in the course of running a program, especially one that takes user input at some point.
Boundary testing occurs when you go after the bounds of expected input. You test the edges of the maximum or minimum values and just outside the maximum or minimum, checking for errors and correct handling of the input.
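A quick sketch shows what boundary cases look like in practice. Here the input is assumed to be documented as accepting integers from 1 through 100, and validate_quantity is a stand-in for whatever code is actually under test; the interesting part is the choice of values at and just outside the edges.

# Values at the edges of the documented range of 1-100, plus one step outside.
BOUNDARY_CASES = [
    (0, False),    # just below the minimum
    (1, True),     # the minimum itself
    (2, True),     # just above the minimum
    (99, True),    # just below the maximum
    (100, True),   # the maximum itself
    (101, False),  # just above the maximum
]

def validate_quantity(value):
    """Stand-in for the real code under test."""
    return 1 <= value <= 100

for value, expected in BOUNDARY_CASES:
    result = validate_quantity(value)
    status = "ok" if result == expected else "FAIL"
    print(f"{status}: validate_quantity({value}) -> {result}, expected {expected}")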
Sending applications data they don’t expect is a way to identify bugs in a program. You may get error messages that provide information that may be useful, or you may get a program crash. One way of accomplishing this is to use a class of applications called fuzzers. A fuzzer generates random or variable data to provide to an application. The input is programmatically generated based on a set of rules.
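The core of that idea fits in a short Python sketch: pick a protocol command, attach a payload of random bytes, send it to the service, and see what comes back. The target address and command list here are placeholders, and real fuzzers like the ones discussed next use far richer rules, but the shape of the loop is the same.

import random
import socket

TARGET = ("127.0.0.1", 25)                  # hypothetical SMTP service
COMMANDS = [b"HELO ", b"MAIL FROM:", b"RCPT TO:"]

def fuzz_case(seed):
    """Build one malformed message from a seed so the case can be replayed."""
    random.seed(seed)
    command = random.choice(COMMANDS)
    payload = bytes(random.randrange(256) for _ in range(random.randrange(1, 5000)))
    return command + payload + b"\r\n"

for seed in range(10):
    data = fuzz_case(seed)
    try:
        with socket.create_connection(TARGET, timeout=3) as sock:
            sock.sendall(data)
            reply = sock.recv(1024)
            print(f"case {seed}: sent {len(data)} bytes, reply {reply[:40]!r}")
    except OSError as exc:
        # A refused or dropped connection may mean the service went down.
        print(f"case {seed}: connection failed ({exc}); check the service")

Because each case is generated from a seed, any case that causes trouble can be regenerated and replayed exactly, which matters when you need to show a developer how to reproduce a crash.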
Note
Fuzzing may be considered opaque-box testing by some people, because the fuzzing program has no knowledge of the inner workings of the service application. It sends in data, regardless of what the program is expecting the input to look like. Even if you have access to the source code, you are not developing the tests you run with a fuzzer with respect to the way the source code looks. From that standpoint, the application may as well be an opaque box, even if you have the source code.
Kali has a few fuzzers installed and more that can be installed. The first one to look at, sfuzz, is used to send network traffic to servers. sfuzz has a collection of rule files that tell the program how to create the data that is being sent. Some of these are based on particular protocols. For instance, Example 4-14 shows the use of sfuzz to send SMTP traffic to an email server. The -T flag indicates that we are using TCP, and the -s flag says we are going to do sequence fuzzing rather than literal fuzzing. The -f flag says to use the file /usr/share/sfuzz-db/basic.smtp as input for the fuzzer to use. Finally, the -S and -p flags indicate the target IP address and port, respectively.
Example 4-14. Using sfuzz to fuzz an SMTP server
┌──(kilroy@badmilo)-[~]
└─$ sudo sfuzz -T -s -f /usr/share/sfuzz-db/basic.smtp -S 127.0.0.1 -p 25
[17:37:30] dumping options:
        filename: </usr/share/sfuzz-db/basic.smtp>
        state: <8>
        lineno: <14>
        literals: [30]
        sequences: [31]
        symbols: [0]
        req_del: <200>
        mseq_len: <50050>
        plugin: <none>
        s_syms: <0>
<-- snip -->
[17:37:30] info: beginning fuzz - method: tcp, config from: ↩
    [/usr/share/sfuzz-db/basic.smtp], out: [127.0.0.1:25]
[17:37:30] attempting fuzz - 1 (len: 50057).
[17:37:30] info: tx fuzz - (50057 bytes) - scanning for reply.
[17:37:30] read:
220 badmilo.washere.com ESMTP Postfix (Debian/GNU)
250 badmilo.washere.com
========================================================================
[17:37:30] attempting fuzz - 2 (len: 50057).
[17:37:30] info: tx fuzz - (50057 bytes) - scanning for reply.
[17:37:30] read:
220 badmilo.washere.com ESMTP Postfix (Debian/GNU)
250 badmilo.washere.com
========================================================================
[17:37:30] attempting fuzz - 3 (len: 50057).
[17:37:30] info: tx fuzz - (50057 bytes) - scanning for reply.
[17:37:30] read:
220 badmilo.washere.com ESMTP Postfix (Debian/GNU)
250 badmilo.washere.com
========================================================================
[17:37:30] attempting fuzz - 4 (len: 50057).
[17:37:30] info: tx fuzz - (50057 bytes) - scanning for reply.
[17:37:31] read:
220 badmilo.washere.com ESMTP Postfix (Debian/GNU)
250 badmilo.washere.com
========================================================================
[17:37:31] attempting fuzz - 5 (len: 50057).
[17:37:31] info: tx fuzz - (50057 bytes) - scanning for reply.
[17:37:31] read:
220 badmilo.washere.com ESMTP Postfix (Debian/GNU)
250 badmilo.washere.com
=========================================================================
One of the issues with using fuzzing attacks is that they may generate program crashes. While this is ultimately the intent of the exercise, the question is how to determine when the program has actually crashed. You can do it manually, of course, by running the program under test in a debugger session so the debugger catches the crash. The problem with this approach is that it may be hard to know which test case caused the crash, and while finding a bug is good, just getting a program crash isn’t enough to identify vulnerabilities or create exploits that take advantage of the vulnerability. A bug, after all, is not necessarily a vulnerability. It may simply be a bug. Software packages can be used to integrate program monitoring with application testing. You can use a program like valgrind to instrument your analysis. Example 4-15 shows starting up a POP3 server with the memcheck tool in valgrind. This will watch for memory leaks.
Example 4-15. Memory leak checking with valgrind
┌──(kilroy@badmilo)-[~]
└─$ sudo valgrind --tool=memcheck popa3d
==552080== Memcheck, a memory error detector
==552080== Copyright (C) 2002-2022, and GNU GPL'd, by Julian Seward et al.
==552080== Using Valgrind-3.19.0 and LibVEX; rerun with -h for copyright info
==552080== Command: popa3d
==552080==
+OK
Once you have valgrind running with a service, you can then run a tool like sfuzz against it to see whether you can trigger memory errors or leaks. Of course, valgrind also comes with other tools in addition to memcheck that you can use to instrument your applications. The challenge with a tool like valgrind is that it needs the program you call to stay running. Many services run in daemon mode, meaning the process you start forks off the process that does the real work and then exits, so from the perspective of the terminal it appears to terminate cleanly. valgrind is watching the process it launched, not the child, so as soon as that process exits, valgrind has nothing left to watch. valgrind does have options, such as --trace-children, to follow child processes, which can help in these circumstances. This tool will give you some insight into what is happening with the program under test as you work through your application testing.
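You can also watch for crashes yourself without instrumenting the target. The sketch below launches a copy of the target, replays seeded fuzz cases against it (reusing the fuzz_case generator from the earlier sketch), and records which case preceded the process exiting. The target command and port here are placeholders for whatever service you are actually testing.

import socket
import subprocess
import time

TARGET_CMD = ["./vulnerable_service", "--port", "2525"]   # hypothetical target
TARGET_ADDR = ("127.0.0.1", 2525)

proc = subprocess.Popen(TARGET_CMD)
time.sleep(1)                                  # give the service time to start

for case_number in range(1000):
    data = fuzz_case(case_number)              # from the earlier fuzzing sketch
    try:
        with socket.create_connection(TARGET_ADDR, timeout=3) as sock:
            sock.sendall(data)
    except OSError:
        pass                                   # the poll() below tells the real story
    if proc.poll() is not None:                # a negative return code means a signal
        print(f"target exited with {proc.returncode} after case {case_number}")
        break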
In some cases, you may find programs that are targeted at specific applications or protocols. Whereas sfuzz is a general-purpose fuzzing program that can go after multiple protocols, programs like protos-sip are designed specifically to test the Session Initiation Protocol (SIP), a common protocol used in VoIP implementations. The protos-sip package is a Java application that was developed as part of a research program. The research turned into the creation of a company that sells software developed to fuzz network protocols.
Not all applications are services that listen on networks for input. Many applications take input in the form of files. Even something like sfuzz that takes definitions as input takes those definitions in the form of files. Certainly word processing, spreadsheet programs, presentation programs, and a wide variety of other types of software use files. Some fuzzers are developed for the purpose of testing applications that take files as input.
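Before looking at a purpose-built tool, it may help to see how simple the underlying idea is. The following sketch does crude mutation fuzzing of a file format: flip a few random bytes in a known-good sample, hand the mutated file to the consuming program, and watch for abnormal exits. The sample file and target command are placeholders, and the tool discussed next does this far more systematically.

import random
import subprocess

SAMPLE = "sample.pdf"                     # a known-good input file (placeholder)
TARGET = ["pdf-parser", "-a"]             # program that consumes the file

with open(SAMPLE, "rb") as fh:
    original = bytearray(fh.read())

for seed in range(20):
    random.seed(seed)
    mutated = bytearray(original)
    for _ in range(8):                                # flip eight random bytes
        mutated[random.randrange(len(mutated))] = random.randrange(256)
    with open("mutated.pdf", "wb") as fh:
        fh.write(mutated)
    try:
        result = subprocess.run(TARGET + ["mutated.pdf"],
                                capture_output=True, timeout=10)
    except subprocess.TimeoutExpired:
        print(f"seed {seed}: target hung")
        continue
    if result.returncode < 0:                         # killed by a signal
        print(f"seed {seed}: target crashed (signal {-result.returncode})")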
One program you can use to do a wider range of fuzz testing is zzuf. This program can manipulate input into a program so as to feed it unexpected data. Example 4-16 shows a run of zzuf against the program pdf-parser, which is a Python script used to gather information out of a PDF file. What we are doing is passing the run of the program to zzuf as a command-line parameter after we have told zzuf what to do. There is one wrinkle, though: pdf-parser is an older tool and hasn't been tested against more current versions of Python, so you'll see warnings about that in the output.
Example 4-16. Fuzzing pdf-parser with zzuf
┌──(kilroy@badmilo)-[~]
└─$ zzuf -s 0:10 -c -C 0 -T 3 pdf-parser -a fuzzing.pdf
This program has not been tested with this version of Python (3.11.4)
Should you encounter problems, please use Python version 3.11.1
Comment: 151
XREF: 1
Trailer: 1
StartXref: 1
Indirect object: 1
Indirect objects with a stream:
 1: 2020
Unreferenced indirect objects: 2020 2 R
This program has not been tested with this version of Python (3.11.4)
Should you encounter problems, please use Python version 3.11.1
Comment: 56
XREF: 0
Trailer: 1
StartXref: 0
Indirect object: 32
Indirect objects with a stream: 2022, 2030, 2033, 2034, 2037, 2040, 2, 6, 9, 11, ↩
  12, 14, 16, 18, 20, 21
 15: 2022, 2021, 2033, 2034, 2036, 2037, 2040, 2, 3, 5, 12, 18, 24, 26, 27
 /EztGState 1: 2029
 /Font 2: 2025, 2026
 /FontDescrip4or 1: 2027
 /OCG 1: 2123
 /OCMD 1: 2030
 /ObjStm 7: 6, 9, 14, 16, 17, 20, 21
 /OâjStm 1: 11
 /Page 1: 2024
 /Pages 1: 25
 /XRef 1: 2041
Unreferenced indirect objects: 2 0 R, 3 1 R, 5 0 R, 6 0 R, 9 0 R, 11 0 R, 12 0 R, ↩
  14 0 R, 16 0 R, 17 0 R, 18 0 R, 20 0 R, 21 0 R, 24 0 R, 26 0 R, 2022 0 R, ↩
  2024 0 R, 2036 0 R, 2037 0 R, 2041 0 R, 2123 0 R
Unreferenced indirect objects without /ObjStm objects: 2 0 R, 3 1 R, 5 0 R, ↩
  11 0 R, 12 0 R, 18 0 R, 24 0 R, 26 0 R, 2022 0 R, 2024 0 R, 2036 0 R, ↩
  2037 0 R, 2041 0 R, 2123 0 R
On the command line for zzuf, we are telling it to use seed values (-s) and to fuzz input only on the command line. Any program that reads in configuration files for its operation wouldn’t have those configuration files altered in the course of running. We’re looking to alter only the input from the file we are specifying. Specifying -C 0 tells zzuf not to stop after the first crash. Finally, -T 3 says we should time out after 3 seconds so that the testing doesn’t get hung up.
Using a tool like this can provide a lot of potential for identifying bugs in applications that read and process files—specifically, a PDF reader, in this case. As a general-purpose program, zzuf has potential even beyond the limited capacities shown here. Beyond file fuzzing, it can be used for network fuzzing. If you are interested in locating vulnerabilities, a little time using zzuf could be well spent.
Summary
Vulnerabilities are the potentially open doors that attacks can come through by using exploits. Identifying vulnerabilities is an important task for someone doing security testing, since remediating vulnerabilities is an important element in an organization’s security program. Here are some ideas to take away:
- A vulnerability is a weakness in a piece of software or a system. A vulnerability is a bug, but a bug may not be a vulnerability.
- An exploit is a means of taking advantage of a vulnerability to obtain something the attacker shouldn't have access to.
- OpenVAS is an open source vulnerability scanner that can be used to scan for both remote and local vulnerabilities.
- Local vulnerabilities require someone to have some sort of authenticated access, which may make them less critical to some people, but they are still essential to remediate since they can be used to escalate privileges.
- Network devices are also open to vulnerabilities and can provide an attacker access to alter traffic flows. Scanning for vulnerabilities in network devices can be done using OpenVAS or other specific tools, including those focused on Cisco devices.
- Identifying previously unknown vulnerabilities can take some work, but tools like fuzzers can be useful in triggering program crashes, which may turn out to be vulnerabilities.
Useful Resources
- Mateusz Jurczyk's Black Hat slide deck, "Effective File Format Fuzzing"
- Jose Ramon Palanco's blog, "The Amazing World of File Fuzzing"
- Hanno Böck's tutorial, "Beginner's Guide to Fuzzing"
- Hacker Target, "OpenVAS Tutorial"
- The Craft of Coding, "Defensive Programming, Validating Input in C and Fortran"