Linux Fundamentals

 


Linux is an operating system used on personal computers, servers, and even mobile devices. It stands as a fundamental pillar of cybersecurity thanks to its robustness, flexibility, and open-source nature. Linux traces its lineage to the Unix operating system, developed by Ken Thompson at Bell Labs in the 1970s. In 1991, Linus Torvalds released the first version of the Linux kernel, and today around 600 distributions are available. The most popular and most widely used include Ubuntu, Debian, Fedora, Manjaro, and Red Hat. Because Linux is open source, anyone can modify its source code.

The five core principles of the Linux philosophy are: everything is a file; small, single-purpose programs; the ability to chain programs together to perform complex tasks; avoid captive user interfaces; and configuration data stored in text files. The main components are the bootloader, OS kernel, daemons, OS shell, graphics server, window manager, and utilities.

The Linux file system:

  • / : The top-level directory is the root filesystem and contains all of the files required to boot the operating system.
  • /bin: Contains essential command binaries.
  • /etc: Local system configuration files. Configuration files for installed applications may be saved here as well.
  • /home: Each user on the system has a subdirectory here for storage.
  • /opt: Optional files such as third-party tools can be saved here.
  • /root: The home directory for the root user.
  • /sbin: Contains executables used for system administration (system binaries).
  • /usr: Contains executables, libraries, and man pages.
  • /var: Contains variable data files such as log files, email inboxes, web application related files, cron files, and more.
Shell: A Linux terminal, also called a shell or command line, provides a text-based input/output (I/O) interface between users and the kernel. The most commonly used shell in Linux is the Bourne-Again Shell (Bash), which is part of the GNU project. Everything we can do through the GUI we can also do in the shell, and the shell often gives us more possibilities to interact with programs and processes and to get information faster. In addition, many tasks can be automated with smaller or larger scripts that greatly reduce manual work.
Besides Bash, other shells exist, such as Tcsh/Csh, Ksh, Zsh, and Fish.

Prompt: The Bash prompt is the text displayed in the terminal that indicates the shell is ready to accept a command. It is part of the Bash shell, a command-line interpreter used on Unix-based systems like Linux and macOS. A tilde (~) in the prompt stands for the current user's home directory. The trailing symbol indicates that commands can now be typed: usually $ for regular users and # for the root user.

The prompt can be customized using special characters and variables in the shell’s configuration file (.bashrc for the Bash shell). For example, we can use: the \u character to represent the current username, \h for the hostname, and \w for the current working directory. 

  • \d : Date (e.g. Mon Feb 6)
  • \D{%Y-%m-%d} : Date (YYYY-MM-DD)
  • \H : Full hostname
  • \j : Number of jobs managed by the shell
  • \n : Newline
  • \r : Carriage return
  • \s : Name of the shell
  • \t : Current time, 24-hour (HH:MM:SS)
  • \T : Current time, 12-hour (HH:MM:SS)
  • \@ : Current time, 12-hour with am/pm
  • \u : Current username
  • \w : Full path of the current working directory
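For instance, the following ~/.bashrc snippet builds a prompt from these escapes; the color codes and layout here are just one illustrative choice, not a required setting:

```shell
# ~/.bashrc snippet: green username, then @host:cwd$
PS1='\[\e[32m\]\u\[\e[0m\]@\h:\w\$ '
# apply it to the current shell after editing ~/.bashrc:
# source ~/.bashrc
```

With this setting, a regular user kali on host kali working in /tmp would see a prompt like kali@kali:/tmp$.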

Customizing the prompt can be a useful way to make your terminal experience more personalized and efficient. It can also be a helpful tool for troubleshooting and problem-solving, as it can provide important information about the system's state at any given time. Some commands to get system information:

  • whoami : Displays the current username.
  • id : Returns the user's identity (UID, GID, and group memberships).
  • hostname : Prints the name of the current host.
  • uname : Prints basic information about the OS and hardware.
  • pwd : Prints the current working directory.
  • ifconfig : Displays network interface addresses.
  • ip : Shows and manipulates routing, network interfaces, and tunnels.
  • netstat : Prints the network status.
  • ss : Investigates sockets.
  • ps : Prints the process status.
  • who : Displays who is logged in.
  • env : Prints the environment, or sets variables and executes a command.
  • lsblk : Lists block devices.
  • lsusb : Lists USB devices.
  • lsof : Lists open files.
  • lspci : Lists PCI devices.
  • whatis : Displays a one-line description of a tool or command.
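A quick way to try several of these at once is a small summary script (the output will of course differ per system):

```shell
#!/bin/sh
# print a short system summary using the commands above
echo "User:      $(whoami)"
echo "Identity:  $(id)"
echo "Host:      $(hostname)"
echo "Kernel:    $(uname -r)"
echo "Directory: $(pwd)"
```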
Navigation

Navigation is essential when working in the terminal. You often need to know which directory you are in, or to list all files and folders, including hidden ones. For this, you can use the following commands:

ls : lists the files and folders in the current directory.
ls -l : shows a long listing with details such as permissions, owner, size, and date.
ls -la : shows a long listing including hidden files and folders.
ls -l /directory : shows the contents of a specific directory by giving its path.

You can use cd .. to go back one directory. To go directly to your home directory, type cd or cd ~; cd / takes you to the root of the file system, and cd - returns to the previous directory.
The long listing (ls -l) also shows additional information about each entry, such as permissions, owner, group, size, and modification time.
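As a small illustration (the directory names are made up for the example):

```shell
#!/bin/sh
# create a scratch directory tree and move around in it
mkdir -p /tmp/navdemo/sub
cd /tmp/navdemo/sub
pwd                  # prints the absolute path of the current directory
cd ..                # go up one level
pwd
cd ~                 # jump to the home directory
ls -la /tmp/navdemo  # long listing, including hidden entries
```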




                                                 Working with Files and Directories

Linux is popular in large part because of its command line, which provides enormous power to the user. You can create, edit, delete files and much more directly from the terminal. Let's see some commands:

touch                   : creates a file. You can also create a file directly in a specific directory by giving a path; a leading dot (.) refers to the current directory, e.g. touch ./dev/phishing/strombreaker/newfolder/new.txt

file filename       : shows the type of a file (use cat filename to view its contents).

mkdir name        : creates a directory.

mkdir -p path     : creates nested subdirectories in one go, e.g. mkdir -p folder/subfolder/sub/then/last/folder

less                      : views content one page at a time. Inside less, lowercase g goes to the first line, uppercase G goes to the end, /word searches for a word, and q quits.

cp                         : copies a file. For example, to copy new.txt to /home/kali/desktop/phishing/zphisher, type cp new.txt /home/kali/desktop/phishing/zphisher. You can also copy multiple files with the same extension using a wildcard, e.g. cp *.txt /home/kali/desktop/phishing/zphisher

cp -r                     : copies a folder recursively, e.g. cp -r folder/ /home/kali/desktop/phishing/zphisher

cp -i                     : asks for confirmation before overwriting, e.g. cp -i file /home/kali/desktop/phishing/zphisher

mv                         : renames a file: mv filename newfilename

mv                         : also moves files or directories: mv filename /path/to/destination

rm                         : removes a file: rm filename

rm -f                     : forces removal without prompting.

rm -i                     : asks for confirmation before removing.

rmdir                     : removes an empty directory.

rm -rf                    : removes a directory and everything inside it recursively (use with care).

vi, vim, nano, gedit, leafpad : text editors.

which filename  : returns the path of the executable or link that would run for that name.

find / -type f -name filename : finds files (use -type d for directories). You can also filter by size with options such as -size +20k, -20k, +200k, or -100k, or by owner with -user root. To find all files with the same extension, use an asterisk, e.g. -name "*.conf".

locate name   : searches a prepared database, so it is much faster than find (run updatedb to refresh the database).
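A couple of concrete find invocations (the paths and patterns are only examples):

```shell
#!/bin/sh
# build a small tree to search in
mkdir -p /tmp/finddemo/conf
echo "x" > /tmp/finddemo/conf/app.conf
echo "y" > /tmp/finddemo/notes.txt

# find all .conf files under the tree
find /tmp/finddemo -type f -name "*.conf"

# find non-empty files owned by the current user
find /tmp/finddemo -type f -size +0c -user "$(whoami)"
```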
                                               
                                                  File Descriptors and Redirections

By now, we've become familiar with many commands and their output, which brings us to our next subject: I/O (input/output) streams. By default, the first three file descriptors in Linux are: the data stream for input, STDIN (<), fd 0; the data stream for output, STDOUT (>), fd 1; and the data stream for error output, STDERR, fd 2. By default, the echo command takes its input (standard input, or stdin) from the keyboard and returns its output (standard output, or stdout) to the screen. That is why typing echo Hello World in your shell prints Hello World on the screen. However, I/O redirection allows us to change this default behavior, giving us greater flexibility. Now let's see some commands:
           STDOUT (Standard Out)

echo text       : prints the text to the screen.

echo text > filename   : saves the text to a file. The > is a redirection operator that changes where standard output goes: it sends the output of echo to a file instead of the screen. If the file does not already exist, it is created; if it does exist, it is overwritten. If you don't want to overwrite a file such as peanuts.txt, there is a redirection operator for that as well: >>.

echo text >> filename  : appends to the file instead of overwriting the previous contents.

            STDIN (Standard In)

cat < filename  : redirects stdin. Normally cat takes a file argument; here we redirect peanuts.txt to be its stdin instead. Combined with output redirection, cat < peanuts.txt > banana.txt sends the contents (e.g. Hello World) into another file called banana.txt.

cat << EOF > filename  : a here-document; everything typed until the line EOF (end-of-file marker) is written to the file.

cat < inputfile > outputfile     : takes input from one file and writes the output to another file.
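The stdout and stdin redirections above can be sketched end to end (the file names are just examples):

```shell
#!/bin/sh
cd /tmp
echo "Hello World" > peanuts.txt   # create or overwrite the file
echo "Second line" >> peanuts.txt  # append instead of overwriting
cat < peanuts.txt > banana.txt     # stdin from one file, stdout to another
cat banana.txt
# prints:
# Hello World
# Second line
```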

          STDERR (Standard Error)

This stream carries error messages such as "Permission denied" or "No such file or directory". For example, running find /etc -name shadow as a regular user prints several permission errors among the results.

We can discard these by redirecting STDERR to /dev/null, the "null device," which discards all data written to it: find /etc -name shadow 2>/dev/null. Now no errors are shown.

You can also redirect STDERR into an output of its own, e.g. 2> errors.txt, or merge it with STDOUT using 2>&1.
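A runnable sketch of the STDERR redirections (the file names are made up):

```shell
#!/bin/sh
cd /tmp
# listing a path that does not exist writes a message to STDERR, not STDOUT
ls nosuchfile_12345 2>/dev/null || true       # error discarded, prints nothing
ls nosuchfile_12345 2> stderr.txt || true     # error captured in its own file
ls nosuchfile_12345 > all.txt 2>&1 || true    # STDOUT and STDERR merged into one file
cat stderr.txt
```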
                                                    Filter Content

From a command's output you can filter out just the results you want. Here are some filtering commands:

pipe (|)     : The pipe operator, represented by a vertical bar, takes the stdout of one command and makes it the stdin of another process.

less   & more   : Using more and less, you can easily scroll through large files, search for text, and navigate forward or backward without modifying the file itself. This is especially useful when you're working with large logs or text files that don't fit neatly into one screen.

tee                    : writes the output both to the screen and to a file. For example, ls | tee ok.txt shows the directory listing and also stores the same result in ok.txt.

head & tail      : head shows the first lines of the input (10 by default) and tail shows the last lines.

sort                 : sorts the lines of the input.

grep                : searches the input for lines matching a given pattern.

wc & nl          : wc counts words, lines, and characters; nl numbers the lines.

uniq                : filters out adjacent duplicate lines (usually combined with sort).

tr                    : translates or deletes characters, e.g. tr 'a-z' 'A-Z'.

cut                  : extracts selected fields or columns from each line.

join & split      : join merges lines of two files on a common field; split breaks a file into pieces.

column            : formats the output into columns.

sed                  : a stream editor, commonly used to replace content: sed 's/old/new/g'
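Several of these filters chained together with pipes (the sample data is invented for the demo):

```shell
#!/bin/sh
# count how often each shell appears in a passwd-style file
printf 'root:/bin/bash\ndaemon:/usr/sbin/nologin\nkali:/bin/bash\n' > /tmp/users.txt
# cut the shell field, sort it, count adjacent duplicates, sort by count
cut -d: -f2 /tmp/users.txt | sort | uniq -c | sort -rn
```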
                                                                                          

                                         regex (Regular Expressions)

Regular expressions are a powerful tool to do pattern based selection. It uses special notations similar to those we've encountered already such as the * wildcard. We'll go through a couple of the most common regular expressions, these are almost universal with any programming language.  They allow you to find, replace, and manipulate data with incredible precision. Think of RegEx as a highly customizable filter that lets you sift through strings of text, looking for exactly what you need—whether it's analyzing data, validating input, or performing advanced search operations. let's see some of these examples:

^word                : the caret (^) anchors the match to the beginning of a line.

word$                : the dollar sign ($) anchors the match to the end of a line.

w.rd                  : a dot (.) matches any single character.

[ ]                     : brackets can be a little tricky; they match any one of the characters found within them. For example, to match words that start with d, end with g, and have i, l, or k in the middle, use d[ilk]g, which matches dig, dlg, and dkg. You can also use ranges such as [a-z]; be careful about case sensitivity, since [A-Z] and [a-z] are not the same and give different results. Patterns can also be combined: (b.*passwd) matches b followed by anything and then passwd, and alternation with a pipe, e.g. (mysql|bin), matches when either side occurs.
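Trying these anchors and classes with grep (the sample words are invented):

```shell
#!/bin/sh
printf 'dig\ndlg\ndog\nconfig\n' > /tmp/words.txt
grep '^d' /tmp/words.txt             # lines beginning with d
grep 'g$' /tmp/words.txt             # lines ending with g
grep 'd[ilk]g' /tmp/words.txt        # dig or dlg, but not dog
grep -E 'dig|config' /tmp/words.txt  # alternation: either word matches
```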

                                                   Permission Management

In Linux, permissions are like keys that control access to files and directories. These permissions are assigned to both users and groups, much like keys being distributed to specific individuals and teams within an organization. Each user can belong to multiple groups, and being part of a group grants additional access rights, allowing users to perform specific actions on files and directories. so it's very important to us maintaining the permissions management. there are three types of permissions a file or directory. rwx means read, write and execute. The permissions can be set for the owner, group or other.

Change Permissions:

We can modify permissions using the chmod command, permission group references (u - owner, g - group, o - others, a - all users), and either a [+] or a [-] to add or remove the designated permissions.

chmod u+x file, chmod g-w file, chmod a+r file, or simply chmod +x file      : add (+) or remove (-) the permission you need for the chosen group reference.

We can also set permissions using octal values, where each digit is the sum of read (4), write (2), and execute (1) for owner, group, and others. For example, chmod 777 grants everyone read, write, and execute; 755 and 754 are common, more restrictive alternatives.

chown user filename    : changes the file's owner.

chgrp group filename : changes the file's group.

You can also change both at once with chown, separating them with a colon (:): chown owner:group filename.

sticky bit : this permission bit "sticks" a directory so that only a file's owner or the root user can delete or modify the files inside it. This is very useful for shared directories such as /tmp. It is set the same way: chmod +t directory.
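A hands-on sketch of these permission changes (the file names are examples):

```shell
#!/bin/sh
cd /tmp
touch perm_demo.sh
chmod 754 perm_demo.sh             # owner: rwx, group: r-x, others: r--
ls -l perm_demo.sh                 # shows -rwxr-xr--
chmod u-x perm_demo.sh             # remove execute from the owner: -rw-r-xr--
mkdir -p shared && chmod +t shared # sticky bit on a shared directory
```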

                                                   User Management

Effective user management is a fundamental aspect of Linux system administration. Administrators frequently need to create new user accounts or assign existing users to specific groups to enforce appropriate access controls. Additionally, executing commands as a different user is often necessary for tasks that require different privileges. For example, certain groups may have exclusive permissions to view or modify specific files or directories, which is essential for maintaining system security and integrity. This capability allows us to gather more detailed information locally on the machine, which can be critically important for troubleshooting or auditing purposes. 

Here is a list that will help us to better understand and deal with user management.

sudo             : executes a command as a different user (root by default).

su                 : switches to another user; without an argument it switches to the superuser (root).

useradd / userdel  : creates a new user or deletes an existing one.

usermod       : modifies user accounts.

addgroup / delgroup   : adds or deletes a group.

passwd            : changes a user's password.
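The account database behind these commands is readable without privileges, so we can at least inspect it; the administrative commands themselves (shown as comments) require root:

```shell
#!/bin/sh
# inspect the current identity and the account database
id
grep '^root:' /etc/passwd | cut -d: -f1,3,6   # name, UID, home directory

# the following require root privileges (illustrative only):
# sudo useradd -m -s /bin/bash newuser
# sudo usermod -aG sudo newuser
# sudo passwd newuser
# sudo userdel -r newuser
```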

                                            Package Management

Whether working as a system administrator, maintaining our own Linux machines at home, or building/upgrading/maintaining our penetration testing distribution of choice, it is crucial to have a firm grasp on the available Linux package managers and the various ways to utilize them to install, update, or remove packages. Packages are archives that contain binaries of software, configuration files, information about dependencies and keep track of updates and upgrades. The features that most package management systems provide are: 

Package downloading, Dependency resolution, A standard binary package format, Common installation and configuration locations, Additional system-related configuration and functionality, Quality control. We can use many different package management systems that cover different types of files like ".deb", ".rpm", and others. If an installed software has been deleted, the package management system then retakes the package's information, modifies it based on its configuration, and deletes files. There are different package management programs that we can use for this.  like :
dpkg (Debian/Ubuntu), apt (Debian-based), aptitude, gem (Ruby), pip (Python), git, snap, yum or dnf, rpm (Red Hat)

It is highly recommended to set up our virtual machine (VM) locally to experiment with it. Let us experiment a bit in our local VM and extend it with a few additional packages. First, let us install git by using apt. let see different package installation and removal process.  

apt installs and removes packages from a repository:
apt install/remove <package> || ex: apt install golang || apt remove golang

yum/dnf installs and removes packages from a repository:
dnf install/remove <package>   || ex: dnf install golang  || dnf remove golang

dpkg installs (-i) or removes (-r) a local .deb package:
dpkg -i <package>.deb          || ex: dpkg -i go1.24.3.linux-amd64.deb || dpkg -r golang

rpm installs (-i) or erases (-e) a local .rpm package:
rpm -i <package>.rpm || ex: rpm -i go1.24.3.linux-amd64.rpm || rpm -e golang

  zip and tar packages
gzip is a program used to compress files in Linux; compressed files end in a .gz extension. To compress, use gzip filename; to uncompress, use gunzip filename.gz.

 Creating archives with tar

Unfortunately, gzip can't add multiple files into one archive for us. Luckily we have the tar program which does. When you create an archive using tar, it will have a .tar extension.
             ex:            tar cvf mytarfile.tar mycoolfile1 mycoolfile2 ...
c = create, v = verbose, f = file (the archive name follows), then the files you want to archive.

Unpacking archives with tar

To extract the contents of a tar file, use:

         ex:      tar xvf tarfile.tar  ➡ x = extract, v = verbose, f = file.

Compressing/uncompressing archives with tar and gzip

Many times you'll see a tar file that has been compressed such as: mycompressedarchive.tar.gz, all you need to do is work outside in, so first remove the compression with gunzip and then you can unpack the tar file. Or you can alternatively use the z option with tar, which just tells it to use the gzip or gunzip utility. 
Create a compressed tar file:

 tar czf filename.tar.gz files... ➡ c = create, z = compress with gzip, f = file.

Uncompress and unpack:

$ tar -xzf filename.tar.gz
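Putting the compress/uncompress cycle together (the file names are invented):

```shell
#!/bin/sh
cd /tmp && mkdir -p tardemo && cd tardemo
echo "one" > a.txt
echo "two" > b.txt
tar czf backup.tar.gz a.txt b.txt   # create a gzip-compressed archive
mkdir -p restore
tar xzf backup.tar.gz -C restore    # unpack into another directory
cat restore/a.txt
# prints: one
```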
                                                                
                                                Service and Process Management

Processes are the programs running on your machine. They are managed by the kernel, and each process has an ID associated with it called the process ID (PID). Run the ps command to see a list of running processes. With ps aux, the a displays processes of all users, including those run by other users; the u shows more details about each process; and the x includes processes that don't have a TTY associated with them. These show a ? in the TTY field and are most commonly daemon processes launched as part of system startup. Another very useful command is top, which gives you real-time information about the processes running on your system instead of a snapshot. The nice command sets the priority of a new process, and renice changes the priority of an existing one.
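A short demonstration of starting, inspecting, and stopping a process (sleep stands in for any long-running program):

```shell
#!/bin/sh
sleep 30 &                 # start a long-running background process
pid=$!                     # $! holds the PID of the last background job
ps -p "$pid"               # show just that process (PID, TTY, TIME, CMD)
kill "$pid"                # send SIGTERM to stop it
wait "$pid" 2>/dev/null || true
```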

                                                           Network Services 

In Linux, managing various network services is essential. Proficiency in handling these services is crucial for several reasons. Network services are designed to perform specific tasks, many of which enable remote operations. It is important to have the knowledge and skills to communicate with other computers over the network, establish connections, transfer files, analyze network traffic, and configure these services effectively. This expertise allows us to identify potential vulnerabilities during penetration testing. Additionally, understanding the configuration options of each service enhances our overall comprehension of network security. Secure Shell (SSH) is a network protocol that allows the secure transmission of data and commands over a network. It is widely used to manage remote systems securely, execute commands, or transfer files. In order to connect to our own or a remote Linux host via SSH, a corresponding SSH server must be available and running. The most commonly used SSH server is the OpenSSH server, a free and open-source implementation of the SSH protocol. You can easily install it from the Kali repositories by typing sudo apt install openssh-server -y. OpenSSH can be configured and customized by editing the file /etc/ssh/sshd_config with a text editor. Here we can adjust settings such as the maximum number of concurrent connections, the use of passwords or keys for logins, host key checking, and more. However, it is important to note that changes to the OpenSSH configuration file must be made carefully.

Network File System (NFS) is a network protocol that allows us to store and manage files on remote systems as if they were stored on the local system. It enables easy and efficient management of files across networks. For example, administrators use NFS to store and manage files centrally (for Linux and Windows systems) to enable easy collaboration and management of data. For Linux, there are several NFS servers, including NFS-UTILS (Ubuntu), NFS-Ganesha (Solaris), and OpenNFS (Redhat Linux). 

A Virtual Private Network (VPN) functions like a secure, invisible tunnel that connects us to another network, allowing seamless and protected access as if we were physically present within it. This is achieved by establishing an encrypted tunnel between the client and the server, ensuring that all data transmitted through this connection remains confidential and safeguarded from unauthorized access. We can install the server and client with the following command: 
sudo apt install openvpn -y. To connect: sudo openvpn --config file.ovpn

                                          Working with Web Services

Another crucial element in web development is the communication between browsers and web servers. Setting up a web server on a Linux operating system can be done in several ways, with popular options including Nginx, IIS, and Apache. Among these, Apache is one of the most widely used web servers. Think of Apache as the engine that powers your website, ensuring smooth communication between your website and visitors. If you haven't already, let's install Apache: sudo apt install apache2 -y
Now, we can start the server using the apache2ctl, systemctl, or service commands. There is also an apache2 binary, but it is generally not used directly to start the server (this is due to the use of environment variables in the default configuration). After Apache has started, we can navigate with our browser to the default page (http://localhost). By default, Apache serves on HTTP port 80, and your browser will default to this port whenever you enter an HTTP URI (unless otherwise specified). If you are using the Pwnbox, you might experience an error when attempting to start Apache; this is due to port 80 being occupied by another service. To set an alternate port for our web server, we can edit the /etc/apache2/ports.conf file; here, we have set it to port 8080. Another important aspect of working with web servers is learning how to communicate with them using command-line tools like curl and wget. These tools are incredibly useful when we want to systematically analyze the content of a webpage hosted on a web server.

cURL is a tool that allows us to transfer files from the shell over protocols like HTTP, HTTPS, FTP, SFTP, FTPS, or SCP, and in general, gives us the possibility to control and test websites remotely via command line. Besides the remote servers' content, we can also view individual requests to look at the client's and server's communication. Usually, cURL is already installed on most Linux systems. 

An alternative to curl is the tool wget. With this tool, we can download files from FTP or HTTP servers directly from the terminal, and it serves as a solid download manager. If we use wget in the same way, the difference to curl is that the website content is downloaded and stored locally.
Another option that is often used when it comes to data transfer is the use of Python 3. In this case, the web server's root directory is where the command is executed to start the server. For this example, we are in a directory where WordPress is installed and contains a "readme.html." Now, let us start the Python 3 web server and see if we can access it using the browser.
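These pieces can be combined locally: start a Python 3 web server in a directory and query it with curl (the port and file name are arbitrary choices for the demo, and curl is assumed to be installed):

```shell
#!/bin/sh
mkdir -p /tmp/webdemo && cd /tmp/webdemo
echo "<h1>hello</h1>" > index.html
python3 -m http.server 8080 >/dev/null 2>&1 &   # serve the current directory
srv=$!
sleep 1                                          # give the server a moment to start
curl -s -o copy.html http://127.0.0.1:8080/index.html  # fetch and store the page
cat copy.html
kill "$srv"
```

wget -q http://127.0.0.1:8080/index.html would download the same file; as noted above, the difference from curl is that wget stores the content locally by default.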

                                                           Backup and Restore

Linux systems provide a range of powerful tools for backing up and restoring data, designed to be both efficient and secure. These tools help ensure that our data is not only protected from loss or corruption, but also easily accessible when we need it. Think of your data as valuable treasures stored in a house. The backup tools on Linux, such as Rsync, Duplicity, and Deja Dup, act like different kinds of safes. Rsync is like a fast-moving transport that only carries what's new or changed, making it the ideal way to send updates to a remote vault. Duplicity is a high-security safe that not only stores the treasure but also locks it with a complex code, ensuring no one else can access it. Deja Dup is a simple, accessible safe that anyone can operate, while still offering the same level of protection. Encrypting your backups adds an additional lock on your safe, ensuring that even if someone finds it, they can't get inside. For users who prefer a simpler, more user-friendly option, Deja Dup offers a graphical interface that makes the backup process straightforward. Behind the scenes, it also uses Rsync, and like Duplicity, it supports encrypted backups. Deja Dup is ideal for users who want quick, easy access to backup and restore options without needing to dive into the command line. We can use the apt package manager to install Rsync, e.g. sudo apt install rsync -y. This will install the latest version of Rsync on the system. Once the installation is complete, we can begin using the tool to back up and restore data.
                                           

                               File System Management

The best file system choice depends on the specific requirements of the application or user such as: 

  • ext2 is an older file system with no journaling capabilities, which makes it less suited for modern systems but still useful in certain low-overhead scenarios (like USB drives).

  • ext3 and ext4 are more advanced, with journaling (which helps in recovering from crashes), and ext4 is the default choice for most modern Linux systems because it offers a balance of performance, reliability, and large file support.

  • Btrfs is known for advanced features like snapshotting and built-in data integrity checks, making it ideal for complex storage setups.

  • XFS excels at handling large files and has high performance. It is best suited for environments with high I/O demands

  • NTFS, originally developed for Windows, is useful for compatibility when dealing with dual-boot systems or external drives that need to work on both Linux and Windows systems.

Linux's file system architecture is based on the Unix model, organized in a hierarchical structure. In Linux, files can be stored in one of several key types:

    • Regular files
    • Directories
    • Symbolic links

Each category of user can have different permission levels.

                                             Network Configuration

One of the primary tasks in network configuration is managing network interfaces. This involves assigning IP addresses, configuring network devices such as routers and switches, and setting up various network protocols. A deep understanding of network protocols, including TCP/IP (the core protocol suite for Internet communications), DNS (domain name resolution), DHCP (for dynamic IP address allocation), and FTP (file transfer), is critical. We must also be familiar with different types of network interfaces—whether wired or wireless—and be able to troubleshoot connectivity issues. 

Network Access Control

Another vital component of network configuration is network access control (NAC). As penetration testers, we need to be well-versed in how NAC can enhance network security and the various technologies available. Key NAC models include: 

Type Description
Discretionary Access Control (DAC) This model allows the owner of the resource to set permissions for who can access it.
Mandatory Access Control (MAC) Permissions are enforced by the operating system, not the owner of the resource, making it more secure but less flexible.
Role-Based Access Control (RBAC) Permissions are assigned based on roles within an organization, making it easier to manage user privileges.

The ifconfig command is still widely used in many Linux distributions and continues to be a reliable tool for network management. To add a new route, we can use the route command with the add option. DNS settings can be adjusted by updating the /etc/resolv.conf file, a simple text file containing the system's DNS information. By adding the appropriate DNS server addresses (e.g. Google's public DNS, 8.8.8.8 or 8.8.4.4), the system can correctly resolve domain names to IP addresses, ensuring smooth communication over the network. After completing the necessary modifications to the network configuration, it is essential to ensure that these changes are saved to persist across reboots. This can be achieved by editing the /etc/network/interfaces file, which defines network interfaces for Linux-based operating systems. Thus, it is vital to save any changes made to this file to avoid any potential issues with network connectivity. Opening the interfaces file in an editor such as vim, we can add the network configuration settings to the file.
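For example, a static address could be defined in /etc/network/interfaces along these lines (the interface name and addresses are placeholders, not values from a real system):

```
# /etc/network/interfaces -- static configuration for eth0 (example values)
auto eth0
iface eth0 inet static
    address 192.168.1.50
    netmask 255.255.255.0
    gateway 192.168.1.1
    dns-nameservers 8.8.8.8 8.8.4.4
```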
As we navigate the world of Linux, we inevitably encounter a wide range of technologies, applications, and services that we need to become familiar with. This is a crucial skill, particularly if we work in cybersecurity and strive to improve our expertise continuously. For this reason, we highly recommend dedicating time to learning about configuring important security measures such as SELinux, AppArmor, and TCP wrappers on your own. 

                             Remote Desktop Protocols in Linux

Remote desktop protocols are used in Windows, Linux, and macOS to provide graphical remote access to a system. Two of the most common protocols for this type of access are:

  • Remote Desktop Protocol (RDP): Primarily used in Windows environments. RDP allows administrators to connect remotely and interact with the desktop of a Windows machine as if they were sitting right in front of it.

  • Virtual Network Computing (VNC): A popular protocol in Linux environments, although it is also cross-platform. VNC provides graphical access to remote desktops, allowing administrators to perform tasks on Linux systems in a similar way to RDP on Windows. 

For these VNC connections, many different tools are used; common examples include TigerVNC, TightVNC, RealVNC, and UltraVNC.
                                              Linux Security

All computer systems have an inherent risk of intrusion. Some present more of a risk than others, such as an internet-facing web server hosting multiple complex web applications. Linux systems are also less prone to viruses that affect Windows operating systems and do not present as large an attack surface as Active Directory domain-joined hosts. Regardless, it is essential to have certain fundamentals in place to secure any Linux system. One of the most important security measures is keeping the OS and installed packages up to date, which can be achieved with a command such as apt update && apt upgrade -y. If firewall rules are not appropriately set at the network level, we can use the Linux firewall and/or iptables to restrict traffic into/out of the host. In addition, applications and services such as Snort, chkrootkit, rkhunter, Lynis, and others can contribute to Linux's security. TCP wrappers are a security mechanism used in Linux systems that allows the system administrator to control which clients are allowed access to services; the rules are defined in /etc/hosts.allow and /etc/hosts.deny.

On Linux systems, most common services have default locations for access logs:

Service Description
Apache      Access logs are stored in the /var/log/apache2/access.log file (or similar, depending on the distribution).
Nginx  Access logs are stored in the /var/log/nginx/access.log file (or similar).
OpenSSH      Access logs are stored in the /var/log/auth.log file on Ubuntu and in /var/log/secure on CentOS/RHEL.
MySQL  Access logs are stored in the /var/log/mysql/mysql.log file.
PostgreSQL  Access logs are stored in the /var/log/postgresql/postgresql-version-main.log file.
Systemd  Access logs are stored in the /var/log/journal/ directory.

Security logs

These security logs and their events are often recorded in a variety of log files, depending on the specific security application or tool in use. For example, the Fail2ban application records failed login attempts in the /var/log/fail2ban.log file, while the UFW firewall records activity in the /var/log/ufw.log file. Other security-related events, such as changes to system files or settings, may be recorded in more general system logs such as /var/log/syslog or /var/log/auth.log. As penetration testers, we can use log analysis tools and techniques to search for specific events or patterns of activity that may indicate a security issue and use that information to further test the system for vulnerabilities or potential attack vectors.
