
The fundamentals of forensic methodology are an important component of forensic science, covering the methods and techniques used to investigate cases. The field combines the theoretical foundations and practical skills needed to effectively detect, acquire, preserve, and analyze evidence.
Creating a baseline means taking a snapshot of certain parts of a system so it can be compared with a future state to highlight changes.
For example, you can calculate and store the hash of each file in the file system to be able to find out which files have been modified. This can also be done with created user accounts, running processes, running services, and anything else that should not change significantly or at all.
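As a minimal sketch of such a baseline on Linux, assuming /etc is the directory you want to monitor (adjust the path and output file to your case), file hashes can be recorded and later re-checked with standard tools:

find /etc -type f -exec sha256sum {} \; > baseline.sha256   #Record a baseline of file hashes
sha256sum -c baseline.sha256 2>/dev/null | grep -v ': OK$'  #Later, re-verify and show only files that changed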
File Integrity Monitoring (FIM) is a critical security technique that protects IT environments and data by tracking changes to files. It involves two key steps:
Baseline comparison: Establish a baseline using file attributes or cryptographic checksums (such as MD5 or SHA-2) for future comparisons to detect changes.
Real-time change notifications: Receive instant notifications when files are accessed or modified, usually through an OS kernel extension.
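As an illustrative sketch of the second step on Linux (using inotify-tools rather than the kernel-extension approach mentioned above; /var/www is only an example path):

inotifywait -m -r -e modify,create,delete,attrib /var/www   #Watch a directory tree and print every modification, creation, deletion or attribute change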
An attacker may be interested in changing the timestamps of files to avoid detection. Timestamps can be found inside the MFT in the $STANDARD_INFORMATION and $FILE_NAME attributes.
Both attributes have 4 timestamps: modification, access, creation, and MFT record modification (MACE or MACB).
Windows Explorer and other tools show information from $STANDARD_INFORMATION.
Anti-forensic tools such as Timestomp change the timestamp information inside $STANDARD_INFORMATION, but not the information inside $FILE_NAME. Thus, it is possible to detect this suspicious activity.
The USN Journal (Update Sequence Number Journal) is an NTFS feature that tracks changes to a volume. The UsnJrnl2Csv tool can be used to examine these changes.
In the output of the tool you can see that some changes were made to the file.
All metadata changes to the file system are logged in a process known as write-ahead journaling. The logged metadata is stored in a file named $LogFile, located in the root directory of the NTFS file system. Tools such as LogFileParser can be used to analyze this file and identify changes.
Again, you can see in the output of the tool that some changes have been made. Using the same tool, you can determine to what time the timestamps were changed:
CTIME: file creation time
ATIME: file modification time
MTIME: modification of the MFT file registry
RTIME: file access time
$STANDARD_INFORMATION and $FILE_NAME comparison
Another way to identify suspicious modified files would be to compare the time on both attributes for inconsistencies.
NTFS timestamps have a precision of 100 nanoseconds, so finding files with timestamps like 2010-10-10 10:10:00.000:0000 is very suspicious.
SetMace is an anti-forensic tool.
This tool can modify both the $STANDARD_INFORMATION and $FILE_NAME attributes. However, from Windows Vista onward, a live OS is required to modify this information.
NTFS uses a cluster size and a minimum information size. This means that if a file occupies one and a half clusters, the remaining half will never be used until the file is deleted. It is therefore possible to hide data in this slack space.
There are tools like slacker that allow you to hide data in this "hidden" space. However, analysis of $LogFile and $UsnJrnl may show that some data was added:
The free space can then be recovered using tools such as FTK Imager. Note that this kind of tool may store the content obfuscated or even encrypted.
UsbKill is a tool that shuts down the computer if it detects any change in the USB ports. A way to discover it is to check the running processes and review each Python script that is running.
Live Linux distributions run entirely in RAM. The only way to detect their use is if the NTFS file system is mounted with write permissions; if it is mounted read-only, it will be impossible to detect the intrusion.
Several Windows logging methods can be disabled to make forensic investigation much more difficult.
UserAssist is a registry key that maintains the dates and times when each executable was run by the user.
There are two steps to disable UserAssist:
Set the two registry keys HKEY_CURRENT_USER\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\Advanced\Start_TrackProgs and HKEY_CURRENT_USER\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\Advanced\Start_TrackEnabled both to zero to signal that we want to disable UserAssist.
Clear the registry subtrees that look like HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Explorer\UserAssist\<hash>.
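A sketch of those two steps using reg.exe (the <hash> subkey name is a placeholder for the GUID-like subkeys described above):

reg add "HKCU\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\Advanced" /v Start_TrackProgs /t REG_DWORD /d 0 /f
reg add "HKCU\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\Advanced" /v Start_TrackEnabled /t REG_DWORD /d 0 /f
REM Delete one of the UserAssist subtrees (replace <hash> with the actual subkey name)
reg delete "HKCU\Software\Microsoft\Windows\CurrentVersion\Explorer\UserAssist\<hash>" /f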
Prefetch saves information about the programs that are executed in order to improve Windows performance. However, it can also be useful for forensic analysis.
Run regedit
Navigate to HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management\PrefetchParameters
Right-click on both EnablePrefetcher and EnableSuperfetch
Select Change for each to change the value from 1 (or 3) to 0
Restart
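The same change can be scripted with reg.exe instead of regedit; this is a sketch assuming the standard PrefetchParameters key location:

reg add "HKLM\SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management\PrefetchParameters" /v EnablePrefetcher /t REG_DWORD /d 0 /f
reg add "HKLM\SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management\PrefetchParameters" /v EnableSuperfetch /t REG_DWORD /d 0 /f
REM A reboot is still required for the change to take effect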
Whenever a folder is opened from an NTFS volume on a Windows NT server, the system takes time to update a timestamp field on each listed folder, called the last access time. On a heavily used NTFS volume, this can affect performance.
Open the registry editor (Regedit.exe).
Navigate to HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\FileSystem.
Look for NtfsDisableLastAccessUpdate. If it doesn’t exist, add this DWORD and set it to 1, which will disable the process.
Close the registry editor and restart the server.
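As a sketch, the same result can be achieved from the command line, either through the registry value described above or via fsutil:

reg add "HKLM\SYSTEM\CurrentControlSet\Control\FileSystem" /v NtfsDisableLastAccessUpdate /t REG_DWORD /d 1 /f
REM Equivalent built-in switch (1 = disable last access updates)
fsutil behavior set disablelastaccess 1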
All USB device entries are stored in the Windows registry under the USBSTOR registry key, which contains subkeys created every time you connect a USB device to a PC or laptop. You can find this key at HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Enum\USBSTOR. Deleting this key will delete your USB history. You can also use the USBDeview tool to delete the entries and to confirm they were removed.
Another file that stores USB information is the setupapi.dev.log file in C:\Windows\INF. This should also be removed.
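A sketch of both cleanup steps from an elevated command prompt (destructive; shown only to illustrate the technique being described):

REM Remove the USBSTOR key and all of its device subkeys
reg delete "HKLM\SYSTEM\CurrentControlSet\Enum\USBSTOR" /f
REM Remove the setup log that also records USB device installations
del C:\Windows\INF\setupapi.dev.log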
List shadow copies by running vssadmin list shadowstorage. Delete them by running vssadmin delete shadows.
You can also delete them using the GUI by following the instructions offered at https://www.ubackup.com/windows-10/how-to-delete-shadow-copies-windows-10-5740.html
To disable shadow copies, follow these steps:
Open the Services program by typing “services” in the search text box after clicking the Windows Start button.
Find Volume Shadow Copy in the list, select it, and then right-click to open Properties.
In the Startup Type drop-down menu, select Disabled, then confirm the changes by clicking Apply and OK.
You can also change, in the registry, the configuration of which files will be copied to the shadow copy:
HKLM\SYSTEM\CurrentControlSet\Control\BackupRestore\FilesNotToSnapshot
You can use the Windows tool cipher /w:C. This tells cipher to wipe any data from the available unused disk space on the C drive.
You can also use tools like Eraser.
reg add "HKLM\SYSTEM\CurrentControlSet\Services\eventlog" /v Start /t REG_DWORD /d 4 /f
In the Services section, disable the Windows Event Log service
WEvtUtil.exe clear-log (or its short form WEvtUtil.exe cl) can be used to clear event logs.
fsutil usn deletejournal /d c:
Suppose a docker container is suspected of having been compromised:
docker ps
CONTAINER ID   IMAGE            COMMAND      CREATED         STATUS         PORTS     NAMES
cc03e43a052a   lamp-wordpress   "./run.sh"   2 minutes ago   Up 2 minutes   80/tcp    wordpress
You can easily find changes made to this image container using:
docker diff wordpress
C /var
C /var/lib
C /var/lib/mysql
A /var/lib/mysql/ib_logfile0
A /var/lib/mysql/ib_logfile1
A /var/lib/mysql/ibdata1
A /var/lib/mysql/mysql
A /var/lib/mysql/mysql/time_zone_leap_second.MYI
A /var/lib/mysql/mysql/general_log.CSV
...
In the previous command, C stands for Changed and A stands for Added . If you find that some interesting file like /etc/shadow has been modified, you can download it from the container to check for malicious activity with:
docker cp wordpress:/etc/shadow .
You can also compare it to the original one by starting a new container and extracting the file from it:
docker run -d lamp-wordpress
docker cp b5d53e8b468e:/etc/shadow original_shadow #Get the file from the newly created container
diff original_shadow shadow
If you find that some suspicious file has been added, you can access the container and check it:
docker exec -it wordpress bash
When you are given an exported docker image (probably in .tar format), you can use container-diff to extract a summary of the changes:
docker save <image> > image.tar #Export the image to a .tar file
container-diff analyze -t sizelayer image.tar
container-diff analyze -t history image.tar
container-diff analyze -t metadata image.tar
You can then extract the image and access the blobs to search for suspicious files that you might have found in the change history:
tar -xf image.tar
You can get basic information from the running image:
docker inspect <image>
You can also get a summary of the change history using:
docker history --no-trunc <image>
You can also create a docker file from an image using:
alias dfimage="docker run -v /var/run/docker.sock:/var/run/docker.sock --rm alpine/dfimage"
dfimage -sV=1.36 madhuakula/k8s-goat-hidden-in-layers>
To find added/changed files in docker images, you can also use the dive utility (download it from releases ):
#First you need to load the image in your docker repo
sudo docker load < image.tar
Loaded image: flask:latest

#And then open it with dive:
sudo dive flask:latest
This allows you to navigate between different docker image blocks and check which files have been changed/added. Red means added and yellow means changed. Use tab to switch to another view and space bar to collapse/open folders.
With dive you will not be able to access the contents of the different stages of the image. To do this, you will need to unpack each layer and access it. You can extract all the image layers from the directory where the image was extracted by running:
tar -xf image.tar
for d in `find * -maxdepth 0 -type d`; do cd $d; tar -xf ./layer.tar; cd ..; done
Note that when you run a docker container inside a host, you can see the processes running in the container from the host simply by running ps -ef. This way (as root) you can take a memory dump of those processes from the host and search it for credentials.
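A minimal sketch of that idea, assuming gdb's gcore utility is available on the host and <PID> is a container process identified with ps -ef (process name and output path are placeholders):

ps -ef | grep <container_process_name>        #Find the container process as seen from the host
gcore -o /tmp/proc_dump <PID>                 #Dump its memory (run as root); produces /tmp/proc_dump.<PID>
strings /tmp/proc_dump.<PID> | grep -i 'password\|passwd'   #Search the dump for credentials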
DD
#This will generate a raw copy of the disk
dd if=/dev/sdb of=disk.img
dcfldd
#Raw copy with hashes along the way (more secure as it checks hashes while it's copying the data)
dcfldd if=<subject device> of=<image file> bs=512 hash=<algorithm> hashwindow=<chunk size> hashlog=<hash file>
dcfldd if=/dev/sdc of=/media/usb/pc.image hash=sha256 hashwindow=1M hashlog=/media/usb/pc.hashes
You can download FTK Imager here.
ftkimager /dev/sdb evidence --e01 --case-number 1 --evidence-number 1 --description 'A description' --examiner 'Your name'
You can create a disk image using the ewf tools.
ewfacquire /dev/sdb
#Name: evidence
#Case number: 1
#Description: A description for the case
#Evidence number: 1
#Examiner Name: Your name
#Media type: fixed
#Media characteristics: physical
#File format: encase6
#Compression method: deflate
#Compression level: fast
#Then use default values
#It will generate the disk image in the current directory
On Windows, you can try using the free version of Arsenal Image Mounter ( https://arsenalrecon.com/downloads/ ) to mount a forensic image.
#Get file type
file evidence.img
evidence.img: Linux rev 1.0 ext4 filesystem data, UUID=1031571c-f398-4bfb-a414-b82b280cf299 (extents) (64bit) (large files) (huge files)

#Mount it
mount evidence.img /mnt
#Get file type
file evidence.E01
evidence.E01: EWF/Expert Witness/EnCase image file format

#Transform to raw
mkdir output
ewfmount evidence.E01 output/
file output/ewf1
output/ewf1: Linux rev 1.0 ext4 filesystem data, UUID=05acca66-d042-4ab2-9e9c-be813be09b24 (needs journal recovery) (extents) (64bit) (large files) (huge files)

#Mount
mount output/ewf1 -o ro,norecovery /mnt
Arsenal Image Mounter is a Windows program for mounting volumes. You can download it here: https://arsenalrecon.com/downloads/
If you get the error cannot mount /dev/loop0 read-only, you need to use the flags -o ro,norecovery.
If you get the error wrong fs type, bad option, bad superblock on /dev/loop0, missing codepage or helper program, or other error, the mount failed because the offset of the file system differs from that of the disk image. You need to find the sector size and the starting sector:
fdisk -l disk.img
Disk disk.img: 102 MiB, 106954648 bytes, 208896 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x00495395

Device     Boot Start    End Sectors  Size Id Type
disk.img1        2048 208895  206848  101M  1 FAT12
Note that the sector size is 512 and the start is 2048 . Then mount the image like this:
mount disk.img /mnt -o ro,offset=$((2048*512))
Initial information gathering
First of all, it is recommended to have a USB drive with known-good binaries and libraries on it (you can just grab an Ubuntu system and copy the /bin, /sbin, /lib, and /lib64 folders), then mount the USB and modify the environment variables to use those binaries:
export PATH=/mnt/usb/bin:/mnt/usb/sbin export LD_LIBRARY_PATH=/mnt/usb/lib:/mnt/usb/lib64
Once you’ve configured your system to use good and known binaries, you can start getting some basic information:
date #Date and time (Clock may be skewed, Might be at a different timezone)
uname -a #OS info
ifconfig -a || ip a #Network interfaces (promiscuous mode?)
ps -ef #Running processes
netstat -anp #Processes and ports
lsof -V #Open files
netstat -rn; route #Routing table
df; mount #Free space and mounted devices
free #Mem and swap space
w #Who is connected
last -Faiwx #Logins
lsmod #What is loaded
cat /etc/passwd #Unexpected data?
cat /etc/shadow #Unexpected data?
find /directory -type f -mtime -1 -print #Find files modified during the last day in the directory
While getting the basics, you should check for odd things like:
Root processes usually run with low PIDs, so a root process with a large PID should raise suspicion.
Check /etc/passwd for registered logins of users that have no shell.
Check /etc/shadow for password hashes of users that have no shell.
To capture the memory of a running system, it is recommended to use LiME. To compile it, you need to use the same kernel as the victim machine.
Remember that you cannot install LiME or anything else on the victim machine, as that would make several changes to it.
So if you have an identical version of Ubuntu you can use apt-get install lime-forensics-dkms . In other cases, you need to download LiME from github and compile it with the correct kernel headers. To get the exact kernel headers of the victim machine, you can simply copy the /lib/modules/<kernel version> directory to your machine and then compile LiME using them:
make -C /lib/modules/<kernel version>/build M=$PWD
sudo insmod lime.ko "path=/home/sansforensics/Desktop/mem_dump.bin format=lime"
LiME supports 3 formats:
Raw (each segment combined together)
Padded (same as raw, but with zeros in the right bits)
Lime (recommended format, with metadata)
LiME can also be used to send a dump over the network instead of storing it on the system using something like: path=tcp:4444
First of all, you will need to shut down the system. This is not always an option, as sometimes the system will be a production server that the company cannot afford to shut down. There are two ways to shut down the system: a normal shutdown and a "pull the plug" shutdown. The former will allow processes to terminate as usual and the file system to be synchronized, but it will also allow potential malware to destroy evidence. The "pull the plug" approach may cause some information loss (not much, since we have already taken a memory image), and malware will have no opportunity to do anything about it. Therefore, if you suspect that there may be malware, just run the sync command on the system and pull the plug.
It is important to note that before connecting the computer to anything related to the case, you should make sure that it is mounted as read-only to avoid changing any information.
#Create a raw copy of the disk dd if=<subject device> of=<image file> bs=512 #Raw copy with hashes along the way (more secure as it checks hashes while it's copying the data) dcfldd if=<subject device> of=<image file> bs=512 hash=<algorithm> hashwindow=<chunk size> hashlog=<hash file> dcfldd if=/dev/sdc of=/media/usb/pc.image hash=sha256 hashwindow=1M hashlog=/media/usb/pc.hashes
Analyzing a disk image with no further data about it:
#Find out if it's a disk image using "file" command
file disk.img
disk.img: Linux rev 1.0 ext4 filesystem data, UUID=59e7a736-9c90-4fab-ae35-1d6a28e5de27 (extents) (64bit) (large files) (huge files)

#Check which type of disk image it's
img_stat -t evidence.img
raw
#You can list supported types with
img_stat -i list
Supported image format types:
        raw (Single or split raw file (dd))
        aff (Advanced Forensic Format)
        afd (AFF Multiple File)
        afm (AFF with external metadata)
        afflib (All AFFLIB image formats (including beta ones))
        ewf (Expert Witness Format (EnCase))

#Data of the image
fsstat -i raw -f ext4 disk.img
FILE SYSTEM INFORMATION
--------------------------------------------
File System Type: Ext4
Volume Name:
Volume ID: 162850f203fd75afab4f1e4736a7e776
Last Written at: 2020-02-06 06:22:48 (UTC)
Last Checked at: 2020-02-06 06:15:09 (UTC)
Last Mounted at: 2020-02-06 06:15:18 (UTC)
Unmounted properly
Last mounted on: /mnt/disk0
Source OS: Linux
[...]

#ls inside the image
fls -i raw -f ext4 disk.img
d/d 11: lost+found
d/d 12: Documents
d/d 8193: folder1
d/d 8194: folder2
V/V 65537: $OrphanFiles

#ls inside folder
fls -i raw -f ext4 disk.img 12
r/r 16: secret.txt

#cat file inside image
icat -i raw -f ext4 disk.img 16
ThisisTheMasterSecret
Linux offers tools to ensure the integrity of system components, which is critical to identifying potentially problematic files.
RedHat-based systems: use rpm -Va for comprehensive verification.
Debian-based systems: use dpkg --verify for initial verification, followed by debsums | grep -v "OK$" (after installing debsums with apt-get install debsums) to detect any problems.
To effectively search for installed programs on Debian and RedHat systems, consider using system logs and databases along with manual checks in common directories.
For Debian, check /var/lib/dpkg/status and /var/log/dpkg.log for package installation details, using grep to filter for specific information.
RedHat users can query the RPM database with rpm -qa --root=/mntpath/var/lib/rpm to get a list of installed packages.
To find software installed manually or outside of these package managers, look in directories such as /usr/local, /opt, /usr/sbin, /usr/bin, /bin, and /sbin. Combine directory listings with system commands to identify executables unrelated to known packages, improving the search for all installed programs.
# Debian package and log details
cat /var/lib/dpkg/status | grep -E "Package:|Status:"
cat /var/log/dpkg.log | grep installed

# RedHat RPM database query
rpm -qa --root=/mntpath/var/lib/rpm

# Listing directories for manual installations
ls /usr/sbin /usr/bin /bin /sbin

# Identifying non-package executables (Debian)
find /sbin/ -exec dpkg -S {} \; | grep "no path found"

# Identifying non-package executables (RedHat)
find /sbin/ -exec rpm -qf {} \; | grep "is not"

# Find executable files
find / -type f -executable | grep <something>
Imagine a process that was executed from /tmp/exec and then deleted. It is still possible to recover it:
cd /proc/3746/ #PID with the exec file deleted
head -1 maps #Get address of the file. It was 08048000-08049000
dd if=mem bs=1 skip=08048000 count=1000 of=/tmp/exec2 #Recover it
cat /var/spool/cron/crontabs/* \
    /var/spool/cron/atjobs \
    /var/spool/anacron \
    /etc/cron* \
    /etc/at* \
    /etc/anacrontab \
    /etc/incron.d/* \
    /var/spool/incron/*

#MacOS
ls -l /usr/lib/cron/tabs/ /Library/LaunchAgents/ /Library/LaunchDaemons/ ~/Library/LaunchAgents/
Paths by which malware can be installed as a service:
/etc/inittab : Calls initialization scripts such as rc.sysinit, redirecting to startup scripts.
/etc/rc.d/ and /etc/rc.boot/ : Contains scripts to start the service, the latter is present in older versions of Linux.
/etc/init.d/ : Used in certain versions of Linux, such as Debian, to store startup scripts.
Services can also be enabled via /etc/inetd.conf or /etc/xinetd/ , depending on the Linux variant.
/etc/systemd/system : Directory for system and service management scripts.
/etc/systemd/system/multi-user.target.wants/ : Contains references to services that should be run at the multi-user level.
/usr/local/etc/rc.d/ : for your own or third-party services.
~/.config/autostart/ : for special autostart programs that can be hiding places for malware targeting users.
/lib/systemd/system/ : default system-wide module files provided by installed packages.
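On systemd-based systems, a quick sketch of how these locations can be reviewed (the 5-day modification window is an arbitrary example):

systemctl list-unit-files --state=enabled   #Services enabled to start automatically
find /etc/systemd/system /lib/systemd/system -name '*.service' -mtime -5 -ls   #Recently modified unit files, which may point to newly installed persistence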
Linux kernel modules, often used by malware as components of rootkits, are loaded at system boot time. Directories and files critical to these modules include:
/lib/modules/$(uname -r) : contains modules for the running version of the kernel.
/etc/modprobe.d : Contains configuration files to control module loading.
/etc/modprobe and /etc/modprobe.conf : files for global module settings.
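A short sketch of reviewing these locations on a live or mounted system (the module name is a placeholder):

lsmod   #Modules currently loaded into the kernel
ls -la /lib/modules/$(uname -r)/ /etc/modprobe.d/   #Modules shipped for the running kernel and the load-time configuration
modinfo <module_name>   #Inspect a specific module that looks suspicious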
Linux uses various files to automatically launch programs after user login, potentially containing malware:
/etc/profile.d/ *, /etc/profile and /etc/bash.bashrc : Executed for any user login.
~/.bashrc , ~/.bash_profile , ~/.profile , and ~/.config/autostart : Custom user files that are started upon login.
/etc/rc.local : Runs after all system services have started, marking the end of the transition to a multi-user environment.
Linux systems track user activity and system events using various log files. These logs are key to detecting unauthorized access, malware infection, and other security incidents. Key log files include:
/var/log/syslog (Debian) or /var/log/messages (RedHat): store system-wide messages and activities.
/var/log/auth.log (Debian) or /var/log/secure (RedHat): log of authentication attempts, successful and failed logins. Use grep -iE "session opened for|accepted password|new session|not in sudoers" /var/log/auth.log to filter relevant authentication events.
/var/log/boot.log : Contains system startup messages.
/var/log/maillog or /var/log/mail.log : Logs mail server activities, useful for tracking mail-related services.
/var/log/kern.log : Stores kernel messages, including errors and warnings.
/var/log/dmesg : Stores device driver messages.
/var/log/faillog : Logs failed login attempts, aiding in security breach investigations.
/var/log/cron : logs execution of cron jobs.
/var/log/daemon.log : Tracks background service (daemon) activity.
/var/log/btmp : Documents failed login attempts.
/var/log/httpd/ : Contains Apache HTTPD error and access logs.
/var/log/mysqld.log or /var/log/mysql.log : Logs MySQL database activity.
/var/log/xferlog : Logs FTP file transfers.
/var/log/ : Always check here for unexpected logs.
Linux syslogs and auditing subsystems can be disabled or removed as a result of an intrusion or malware incident. Because logs on Linux systems usually contain some of the most useful information about malicious activity, attackers regularly delete them. Therefore, when examining available log files, it is important to look for gaps or irregular entries that may indicate deletion or tampering.
Linux supports a per-user command history that is stored in:
~/.bash_history
~/.zsh_history
~/.zsh_sessions/*
~/.python_history
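A sketch for collecting those history files across home directories (paths assume a standard layout; .zsh_sessions/* is not matched by this pattern and should be listed separately):

find /root /home -maxdepth 3 -name '.*history' -ls 2>/dev/null   #Locate per-user command history files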
In addition, the last -Faiwx command provides a list of user logins. Check it for unknown or unexpected entries. Also check for files that may grant unexpected privileges:
Check /etc/sudoers for unexpected user privileges that may have been granted.
Check /etc/sudoers.d/ for unexpected user privileges that may have been granted.
Check /etc/groups for unusual group memberships or permissions.
Check /etc/passwd for unusual entries, such as unexpected accounts or shells.
Some programs also create their own logs:
SSH : Check ~/.ssh/authorized_keys and ~/.ssh/known_hosts for unauthorized remote connections.
Gnome Desktop : Look in ~/.recently-used.xbel for files that have been recently accessed by Gnome applications.
Firefox/Chrome : Check your browser and download history in ~/.mozilla/firefox or ~/.config/google-chrome for suspicious activity.
VIM : See ~/.viminfo for usage details such as file access paths and search history.
Open Office : Check for recently accessed documents that may indicate compromised files.
FTP/SFTP : Check the logs in ~/.ftp_history or ~/.sftp_history for file transfers that may be unauthorized.
MySQL : Examine ~/.mysql_history for executed MySQL queries, which could potentially reveal unauthorized database activity.
Less : Parse ~/.lesshst for usage history, including files viewed and commands executed.
Git : Check ~/.gitconfig and project .git/logs for changes in repositories.
usbrip is a small program written in pure Python 3 that parses Linux log files (/var/log/syslog* or /var/log/messages*, depending on the distro) to build USB event history tables.
It’s interesting to know all the USBs that have been used, and it would be more useful if you have an authorized list of USBs to find “violations” (uses of USBs that are not on the list).
pip3 install usbrip
usbrip ids download #Download USB ID database
usbrip events history #Get USB history of your current linux machine
usbrip events history --pid 0002 --vid 0e0f --user kali #Search by pid OR vid OR user

#Search for vid and/or pid
usbrip ids download #Download database
usbrip ids search --pid 0002 --vid 0e0f #Search for pid AND vid
Check /etc/passwd , /etc/shadow , and security logs for unusual names or accounts created and/or used in close proximity to known unauthorized events. Also check for possible sudo brute-force attacks. Also, check files like /etc/sudoers and /etc/groups for unexpected privileges granted to users. Finally, look for accounts without passwords or passwords that are easy to guess.
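Two quick checks that follow from this, sketched with awk (run against the mounted evidence or the live files):

awk -F: '($2 == "") {print $1}' /etc/shadow   #Accounts with an empty password field
awk -F: '($3 == 0 && $1 != "root") {print $1}' /etc/passwd   #Accounts with UID 0 other than root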
Analysis of file system structures during malware investigation
When investigating malware incidents, the file system structure is a key source of information that reveals both the sequence of events and the content of the malware. However, malware authors develop methods to thwart this analysis, such as altering file timestamps or avoiding the file system for data storage.
To counter these anti-forensic methods, it is important to:
Perform a thorough timeline analysis using tools such as Autopsy to visualize event timelines or the Sleuth Kit's mactime to obtain detailed timeline data.
Investigate unexpected scripts in the system’s $PATH, which may include shell or PHP scripts used by attackers.
Check /dev for unusual files, as it traditionally contains special files, but may contain files related to malware.
Look for hidden files or directories with names like “.. ” (dot dot space) or “..^G” (dot dot control-G) that may be hiding malicious content.
Identify setuid root files using the command find / -user root -perm -04000 -print. This finds files with elevated permissions that can be abused by attackers.
Review the deletion timestamps in the inode tables to detect mass file deletions that may indicate the presence of rootkits or trojans.
Check consecutive inodes for adjacent malicious files after identifying one, as they may have been placed together.
Check the shared binary directories ( /bin , /sbin ) for recently modified files, as they may have been modified by malware.
# List recent files in a directory:
ls -laR --sort=time /bin

# Sort files in a directory by inode:
ls -lai /bin | sort -n
Note that an attacker can change the time to make the files look legitimate, but they cannot change the inode. If you find that a file indicates that it was created and modified at the same time as the rest of the files in the same folder, but the inode is unexpectedly larger, then the file’s timestamps have been modified.
To compare file system versions and pinpoint changes, we use simplified git diff commands:
To find new files, compare the two directories:
git diff --no-index --diff-filter=A path/to/old_version/ path/to/new_version/
For changed content, list the changes, ignoring certain lines:
git diff --no-index --diff-filter=M path/to/old_version/ path/to/new_version/ | grep -E "^\+" | grep -v "Installed-Time"
To detect deleted files:
git diff --no-index --diff-filter=D path/to/old_version/ path/to/new_version/
Filter options (--diff-filter) help narrow down specific changes, such as added (A), deleted (D), or modified (M) files.
A: Added files
C: copied files
D: Deleted files
M: Modified files
R: Renamed files
T: type changes (eg file to symlink)
U: Unmerged files
X: Unknown files
B: Broken (pairing broken) files
Yara (install)
sudo apt-get install -y yara
Use the following script to download and merge all the yara malware rules from github: https://gist.github.com/andreafortuna/29c6ea48adf3d45a979a78763cdc7ce9. Create the rules directory and run the script. A file called malware_rules.yar containing all the yara rules for malware will be created.
wget https://gist.githubusercontent.com/andreafortuna/29c6ea48adf3d45a979a78763cdc7ce9/raw/4ec711d37f1b428b63bed1f786b26a0654aa2f31/malware_yara_rules.py
mkdir rules
python malware_yara_rules.py
Scan
yara -w malware_rules.yar image #Scan 1 file
yara -w malware_rules.yar folder #Scan the whole folder
ClamAV (install)
sudo apt-get install -y clamav
sudo freshclam #Update rules
clamscan filepath #Scan 1 file
clamscan folderpath #Scan the whole folder
Capa detects potentially malicious capabilities in executables: PE, ELF, and .NET. It will find things like ATT&CK tactics or suspicious capabilities such as:
check for OutputDebugString error
run as a service
create processes
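A minimal usage sketch, assuming the capa binary is installed and sample.exe is a placeholder for the file under analysis:

capa sample.exe       #Summary of detected capabilities mapped to ATT&CK
capa -vv sample.exe   #Verbose output showing where each capability was found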
IOC stands for Indicator of Compromise. An IOC is a set of conditions that identify potentially unwanted software or confirmed malware. Blue Teams use such definitions to search for this kind of malicious file in their systems and networks. Sharing these definitions is very useful: when malware is identified on a computer and an IOC is created for it, other Blue Teams can use it to identify the malware faster.
Tools for creating or modifying IOCs include the IOC Editor. You can use tools such as Redline to search for defined IOCs on a device.
Loki is a scanner for simple indicators of compromise. Detection is based on four detection methods:
1. File Name IOC: Regex match on full file path/name
2. Yara Rule Check: Yara signature matches on file data and process memory
3. Hash Check: Compares known malicious hashes (MD5, SHA1, SHA256) with scanned files
4. C2 Back Connect Check: Compares process connection endpoints with C2 IOCs (new since version v.10)
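A usage sketch, assuming Loki was downloaded from its releases and /mnt/evidence is the mounted file system to scan (the -p path option is assumed from Loki's usual command-line usage):

python loki.py -p /mnt/evidence   #Scan the mounted evidence path with the four checks above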
Linux Malware Detect (LMD) is a Linux malware scanner released under the GNU GPLv2 license designed to detect threats in shared environments. It uses threat data from network intrusion detection systems to extract malware actively used in attacks and generates signatures for detection. In addition, threat data is also obtained from user reports using the LMD scan feature and resources from the malware community.
rkhunter
Tools such as rkhunter can be used to scan the file system for possible rootkits and malware.
sudo ./rkhunter --check -r / -l /tmp/rkhunter.log [--report-warnings-only] [--skip-keypress]
FLOSS is a tool that will try to find obfuscated lines in executables using various methods.
PEpper checks some basic things inside the executable (binary data, entropy, urls and ips, some yara rules).
PEstudio is a tool that allows you to get information about Windows executables, such as imports, exports, headers, as well as check the total number of viruses and find potential attack methods.
DiE is a tool for determining if a file is encrypted and for finding packers.
NeoPI is a Python script that uses a variety of statistical techniques to detect obfuscated and encrypted content in text/script files. NeoPI’s intended purpose is to help discover hidden web shell code.
PHP-malware-finder does its best to detect obfuscated/tricky code, as well as files that use PHP functions that are often used in malware/webshells.
When checking a malware sample, you should always check the signature of the binary, as the developer who signed it may already be associated with the malware.
#Get signer
codesign -vv -d /bin/ls 2>&1 | grep -E "Authority|TeamIdentifier"

#Check if the app's contents have been modified
codesign --verify --verbose /Applications/Safari.app

#Check if the signature is valid
spctl --assess --verbose /Applications/Safari.app
If you know the date on which the files in a web server folder were last updated, check the creation and modification dates of all files on the web server; if any date is suspicious, examine that file.
Baselines
If the folder’s files should not have been changed, you can calculate the hash of the folder’s original files and compare them to the current ones. Any change will be suspicious.
Statistical analysis
When information is stored in logs, you can check statistics such as how many times each web server file was accessed, since a web shell may be one of the most frequently accessed files.
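A sketch of that kind of statistic with standard tools, assuming an Apache/Nginx combined-format access log at the path shown (adjust the path to your server):

awk '{print $7}' /var/log/apache2/access.log | sort | uniq -c | sort -rn | head -20   #Count requests per URL and show the most requested files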