Cybersecurity Risk Management


Cybersecurity risk management is an ongoing balancing act between the cost of fixing a network vulnerability to a particular threat and the benefit of the fix, based on the potential magnitude of the vulnerability's impact.  Let's break this statement down a little more, as it contains some key definitions:

  • Risk: The likelihood that a threat will exploit a vulnerability.
  • Threat: A circumstance or event with the potential to cause harm.
  • Vulnerability: A weakness that a threat can exploit.
  • Impact: The magnitude of harm caused by a threat that exploits a vulnerability.

Successful hackers and administrators must be aware of a range of security threats and how to manage them.

What Is Risk Management In Cybersecurity?

Understanding how risk evolves where threats, vulnerabilities, and assets intersect.

The primary goal of risk management is to reduce risk to a level the organization will accept. As the term suggests, risk management doesn't eliminate risks; it identifies, monitors, and limits them to a reasonable level, with the amount of remaining risk termed "residual" risk.

Management chooses the acceptable risk level based on organizational goals, balancing the available resources (such as money, hardware, and time) to dedicate to mitigating the risk.
There are multiple risk response/management methods an organization can invoke, including:

  • Avoid: Eliminate the risk by not providing a service or not participating in a risky activity.
  • Transfer: Transfer or share the risk by purchasing insurance or outsourcing.
  • Mitigate: Implement controls that reduce risk by either reducing the vulnerabilities or reducing the impact of the threat.
  • Accept: When the cost of a control outweighs the risk, an organization will often accept the risk. Similarly, even after implementing controls, residual risk remains, and the organization accepts this residual risk.

Risk Assessment: An important component of risk management is quantifying or qualifying risks based on different values or judgments. A risk assessment is a point-in-time assessment, or a snapshot; it assesses risks based on current conditions, such as current threats, vulnerabilities, and existing controls.  It's common to perform risk assessments on new systems or applications in order to determine whether the potential risks outweigh the potential gains.  Risk assessments start by first identifying:

  • Assets: A product, system, resource, or process that an organization values.
  • Asset value: Identifies the worth of the asset to the organization. It can be a specific monetary value or subjective value, such as Low, Medium, and High.
    • Helps an organization focus on the high-value assets and avoid wasting time on low-value assets.

After identifying asset values, a risk assessment:

  • Identifies threats and vulnerabilities in order to determine the likelihood that a threat will attempt to exploit a vulnerability.
  • Ranks the impact of potential threats and the associated harm, prioritizing risks based on likelihood of occurrence and impact.
  • Includes recommendations on which controls to implement to mitigate risks.

Risk management in cybersecurity involves:

  • Risk assessment methods.
  • Vulnerability scanners and penetration testing.

It's important to realize that you can't eliminate the risk of threats exploiting vulnerabilities, so users and organizations must practice risk management to reduce risk using different types of security controls.

Threat Assessments: A threat assessment helps organizations identify and categorize threats, attempting to predict each threat's likelihood of occurrence and potential impact. Once threats are identified and prioritized, organizations manage them by implementing security controls to protect against the most serious ones.  As previously defined, a threat is a circumstance or event that potentially compromises the confidentiality, integrity, or availability of data or a system, and it falls into one of three categories:

  • Accidental human threats: Accidental deletion or corruption of data, usually because users access data they shouldn't be able to access or an administrator makes a configuration change in error.
  • Environmental threats: Events such as power failures and natural disasters.
  • Malicious human threats: Involves the intentional launching of attacks, including network attacks, system attacks, and the release of malware.


The purpose of a threat assessment is to improve the security posture of a system or application by ensuring that limited resources are used appropriately.  Common threat assessments include:

  • Environmental: An environmental threat assessment evaluates the likelihood of an environmental threat occurring.
  • External: An external threat assessment evaluates threats from outside an organization. This includes any threats from external attackers. It also includes any natural threats, such as hurricanes, earthquakes, and tornadoes.
  • Manmade: A manmade threat assessment evaluates all threats from humans. These include both malicious human threats and accidental human threats.
  • Internal: An internal threat assessment evaluates threats from within an organization, including malicious employees, accidents and equipment failure.

Vulnerabilities: A vulnerability is a weakness in software, hardware, or a process that a threat could exploit to breach security. Common vulnerabilities include:

  • Use of default configurations: System hardening requires changing default hardware and software configurations, including changing default usernames and passwords. 
  • No malware protection/failing to update definitions: Antivirus and anti-spyware methods protect systems from malware, but if they aren’t used and kept up to date, systems are vulnerable to malware attacks.
  • Misconfiguration of firewalls: If firewalls aren’t properly configured, systems are more vulnerable to attacks.
  • Organizational policies not implemented: If job separation, mandatory vacations, and job rotation policies aren’t implemented, an organization may be more susceptible to fraud and collusion from employees.
  • Not installing updates: If systems aren’t kept up to date with patches, hot fixes, and service packs, they are vulnerable to bugs and flaws in the software.

 

Quantitative Risk Assessment

Quantitative measurements use numbers, such as monetary figures representing costs and asset values, making it easier to prioritize risks. Asset value, either the revenue value (which is usually the higher value) or the replacement value of an asset, is an important element in a quantitative risk assessment.   One commonly used quantitative model uses the following values to determine risk:

  • Single loss expectancy (SLE). The SLE is the cost of any single loss.
  • Annual rate of occurrence (ARO). The ARO indicates how many times the loss will occur in a year.
    • If the ARO is less than 1, it is represented as a percentage. For example, if you anticipate the occurrence once every two years, the ARO is 50 percent, or 0.5.
  • Annual loss expectancy (ALE). The ALE is the value of SLE x ARO.

Managers use these two simple guidelines for most of these decisions:
• If the cost of the control is less than the savings, purchase it.
• If the cost of the control is greater than the savings, accept the risk.
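The SLE/ARO/ALE model and the purchase-or-accept guideline above can be sketched in a few lines of Python; the dollar figures below are hypothetical examples, not values from any real assessment:

```python
# Quantitative risk assessment sketch (all values are invented examples).

def ale(sle, aro):
    """Annual loss expectancy = single loss expectancy x annual rate of occurrence."""
    return sle * aro

# A loss costing $4,000, expected once every two years (ARO = 0.5):
loss_sle = 4000
loss_aro = 0.5
loss_ale = ale(loss_sle, loss_aro)  # 2000.0 per year

# Guideline: buy the control if it costs less than the annual savings it provides.
control_cost = 1500
decision = "purchase control" if control_cost < loss_ale else "accept risk"
```

With these numbers the control costs less than the expected annual loss, so the guideline says to purchase it.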

Qualitative Risk Assessment

Some qualitative risk assessments use surveys or focus groups of experts to provide their best judgments and then tabulate the results in order to categorize risks based on:

  • Likelihood of occurrence: The probability that an event will occur, such as the likelihood that a threat will attempt to exploit a vulnerability.
  • Impact: Magnitude of harm resulting from a risk, including the negative results of an event, such as the loss of confidentiality, integrity, or availability of a system or data.

Expert panels typically rate each category as low, medium, or high and assign numbers to these judgments, say 1, 2, and 3, so that when the results are tabulated, they can be used to compare different risks objectively and allocate resources accordingly.  One of the challenges with a qualitative risk assessment is gaining consensus on probability and impact, as both are often subject to debate.
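As a sketch of this qualitative approach, the snippet below maps low/medium/high judgments to 1/2/3 and ranks risks by likelihood times impact; the risk names and ratings are invented for illustration:

```python
# Qualitative risk scoring sketch (risk names and ratings are hypothetical).

SCALE = {"low": 1, "medium": 2, "high": 3}

def risk_score(likelihood, impact):
    """Combine two expert judgments into a single comparable number."""
    return SCALE[likelihood] * SCALE[impact]

risks = {
    "web server defacement": ("medium", "high"),
    "laptop theft": ("high", "medium"),
    "data center flood": ("low", "high"),
}

# Highest-scoring risks first, so resources go to them first.
ranked = sorted(risks, key=lambda r: risk_score(*risks[r]), reverse=True)
```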

 

Risk Assessment Report

The final phase of the assessment is the report, which identifies the risks discovered during the assessment and the recommended controls; management uses this report to decide which controls to implement and which risks to accept.   To an attacker, this report is worth its weight in gold, as it lists all of a system's vulnerabilities, risks, and controls.

Risk Registers: Depending on which standard you're following, there are different definitions of a risk register. Regardless, an easy way to create one is as a table with columns such as:

  • Category: Risk categories could include downtime due to hardware failures, outages from an attack, downtime due to database server failure, data breaches, and more.
  • Specific risk.
  • Likelihood of occurrence.
  • Impact.
  • Risk score.
  • Security controls or mitigation steps.
  • Contingencies.
  • Risk score with security controls.
  • Action assigned to: Documents who has responsibility for implementing the security control.
  • Action deadline: When the security control should be implemented.
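A minimal risk register along the lines of the columns above could be modeled as rows in code; the entries below are invented for illustration, with the risk score computed as likelihood times impact:

```python
# Risk register sketch (all entries are hypothetical examples).

register = [
    {"category": "outage", "risk": "DB server failure", "likelihood": 2,
     "impact": 3, "controls": "clustering", "residual_score": 2,
     "assigned_to": "DBA team", "deadline": "2024-06-01"},
    {"category": "data breach", "risk": "stolen credentials", "likelihood": 3,
     "impact": 3, "controls": "MFA", "residual_score": 3,
     "assigned_to": "security team", "deadline": "2024-05-01"},
]

# Compute the pre-control risk score for each row.
for row in register:
    row["risk_score"] = row["likelihood"] * row["impact"]

# Sort so the highest-scoring risks appear first in the register.
register.sort(key=lambda r: r["risk_score"], reverse=True)
```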

 

Supply Chain Assessment:  As a company’s supply chain includes all the elements required to produce and sell a product, a supply chain assessment evaluates these elements—the raw materials supply sources and all the processes required to create, sell, and distribute the product in order to identify risks. 

Scanning and Penetration Testing Tools

Before attempting to scan, detect vulnerabilities on, or penetrate a network you do not own, it is important to acquire specific, written authorization.  Any agreement to perform testing for an organization requires a written rules-of-engagement document that identifies the boundaries and risks of the penetration test, so that if testing does result in network issues, liabilities are avoided.

Penetration testing tools can be classified two ways:

  • Intrusive versus non-intrusive testing: Scans can be either intrusive (potentially disruptive) or non-intrusive (non-compromising):
    • Penetration tests involve probing a system and attempting to exploit discovered vulnerabilities, which can potentially disrupt services and even take a system down; these tests are intrusive and more invasive than vulnerability scans.
    • Vulnerability scans are generally non-intrusive and less invasive than penetration tests because they do not exploit the vulnerabilities they find.
  • Passive versus active tools: A passive tool tests systems non-intrusively, with little chance of compromising a system. An active tool uses intrusive and invasive methods and can potentially affect the operations of a system.
    • Vulnerability scanning is passive in that it does not take any action to exploit the vulnerabilities it identifies.

 

How To Check For Network Vulnerabilities

Checking for vulnerabilities is where the rubber meets the road.   As an administrator, it is a check on your skill set; as a hacker, it allows you to exploit network shortcomings. Vulnerability assessments and network scans will help you assess the security posture of an organization's systems and networks.

Vulnerability assessments, part of an overall risk management plan, identify vulnerabilities, or weaknesses, within systems, networks, and organizations, and include:

  • Reviewing security policies and logs.
  • Interviewing personnel.
  • Testing systems with a variety of scans and penetration tests.

A typical vulnerability assessment includes the following high-level steps:

  • Identify assets and capabilities.
  • Prioritize assets based on value.
  • Identify vulnerabilities and prioritize them.
  • Recommend controls to mitigate serious vulnerabilities.

Password Crackers

A password cracker attempts to discover a password. Passwords are typically encrypted or hashed so that they aren’t easily readable.  If passwords are protected with weak methods, a password cracker can discover the password.  There are two categories of password crackers—offline and online:

  • An offline password cracker attempts to discover passwords by analyzing a database or file containing hashed or encrypted passwords in order to discover the actual passwords. A key benefit of an offline password cracking attack is that attackers have unlimited time to analyze the passwords.
  • An online password cracker attempts to discover passwords by guessing them in a brute force attack. For example, some online password crackers attempt to discover the passwords for specific accounts by trying to log on to the accounts remotely. Other online password crackers collect network traffic and attempt to crack any passwords sent over the network.
Screenshot of the John the Ripper password cracking tool.
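A toy illustration of offline cracking: given a file of stolen, unsalted MD5 hashes, hash each candidate word and look for matches. Real tools such as John the Ripper or hashcat use far larger wordlists plus rule-based mutations; the hashes and wordlist here are contrived:

```python
# Offline password-cracking sketch against unsalted MD5 hashes.
import hashlib

stolen_hashes = {
    "5f4dcc3b5aa765d61d8327deb882cf99",  # md5("password")
    "e10adc3949ba59abbe56e057f20f883e",  # md5("123456")
}

wordlist = ["letmein", "password", "123456", "qwerty"]

cracked = {}
for word in wordlist:
    digest = hashlib.md5(word.encode()).hexdigest()
    if digest in stolen_hashes:
        cracked[digest] = word
# Weakly protected passwords fall immediately; this is why fast,
# unsalted hashes are themselves a vulnerability.
```

Because the attacker works on a copy of the hashes, there is no lockout and unlimited time, exactly the benefit described above.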

Network Scanners

Network scanners gather information about hosts within a network and include tools like Nmap, Netcat and Nessus; these network scanners make use of the following methods:

  • ARP ping scan: Address Resolution Protocol (ARP) resolves IP addresses to media access control (MAC) addresses.  A host receiving an ARP packet addressed to its IP address responds with its MAC address; if the host responds, the network scanner knows that a host is operational with that IP address.
  • Ping scan: Sends an Internet Control Message Protocol (ICMP) ping to a range of IP addresses in a network; if a host responds, the network scanner knows there is an operational host with that IP address.  As a caution, firewalls often block ICMP, so this scan can give inconsistent results.
  • SYN stealth scan:  As seen in the diagram below, a host sends out a SYN (synchronize) packet to initiate a TCP session. The other host responds with a SYN/ACK (synchronize/acknowledge) packet. The first host then completes the handshake with an ACK packet to establish the connection. A SYN stealth scan sends a single SYN packet to each IP address in the scan range. If a host responds with a SYN/ACK, the scanner knows that a host is operational with that IP address; however, instead of responding with an ACK packet, the scanner typically sends an RST (reset) response to close the connection.
A simplified diagram of a prototypical three-way TCP handshake.
  • Port scan: Typically scans well-known ports designated by the Internet Assigned Numbers Authority (IANA) for open ports, which indicate that an underlying protocol is running on the system.
  • Service scan: Verifies the protocol or service running on a system.
  • OS detection: Analyzes packets from an IP address to identify the OS, in a process referred to as TCP/IP fingerprinting. OS detection techniques don't rely on a single value but typically evaluate multiple values included in responses from systems.
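The scan methods above can be approximated with a simple TCP connect scan, which completes the full three-way handshake rather than the half-open SYN technique (crafting raw SYN packets requires elevated privileges). This is a minimal sketch; only run it against hosts you are authorized to test:

```python
# Minimal TCP connect port scanner (noisier than a SYN stealth scan).
import socket

def scan_ports(host, ports, timeout=0.5):
    """Return the subset of ports that accept a full TCP connection."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            # connect_ex returns 0 when the three-way handshake succeeds.
            if s.connect_ex((host, port)) == 0:
                open_ports.append(port)
    return open_ports

# Example (against the local machine):
# scan_ports("127.0.0.1", [22, 80, 443])
```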

Network Mapping

A network mapping tool, like nmap seen in the screenshot below, is used to uncover devices connected to a network. Mapping is often done as part of a network scan, but it focuses only on connectivity; in contrast, a full network scan also includes additional scans to identify open ports, running services, and OS details.

Wireless Scanners

Wireless scanners, like Wireshark, can typically use both passive and active scans.

  • When using a passive scan, a scanner just listens to all the traffic being broadcast on known channels within the 2.4 GHz and 5 GHz frequency ranges.
  • When using an active scan, a wireless scanner acts like a scanner/cracker and can gain more information about an AP by sending queries to it. 

Wireless scanning is also useful for rogue system detection, finding APs placed into service without authorization.   Because an administrator knows which APs are authorized, rogue AP discovery with a wireless scan is easy: investigate any unknown SSIDs.  With a portable wireless scanner, you can locate rogue APs by walking around: as you move closer to a rogue AP, the signal becomes stronger, and as you move farther away from it, the signal becomes weaker.

Banner Grabbing with Network Scanning

Banner grabbing is a technique used to gain information about remote systems, and many network scanners make use of it. It is often used to identify the operating system, along with information about some applications, so you can then find system vulnerabilities.  In the screenshot below, you can see the server is running Linux Ubuntu, among other technologies.

Using nmap for banner grabbing.
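A minimal banner grab can be done with a plain socket: connect to a port and read whatever greeting the service volunteers. Many daemons (SSH, SMTP, FTP) announce a version string as soon as you connect; the host and port in the usage comment are placeholders:

```python
# Minimal banner-grabbing sketch using a raw TCP socket.
import socket

def grab_banner(host, port, timeout=2.0):
    """Connect to a service and return the first line it sends."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(timeout)
        s.connect((host, port))
        return s.recv(1024).decode(errors="replace").strip()

# Example: grab_banner("scanme.nmap.org", 22) might return a string
# such as "SSH-2.0-OpenSSH_..." identifying the SSH implementation.
```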

Vulnerability Scanning

Vulnerability scanners identify a wide range of weaknesses and known security issues that attackers can exploit, showing which systems are susceptible to attack.  Administrators can scan specific systems or an entire network to detect unauthorized changes or missing key security settings that result in a vulnerability.  Most vulnerability scanners combine a number of functions, including:

  • Identification of misconfigurations and vulnerabilities
  • Passive testing of security controls

Vulnerability scanners can run as a credentialed scan, using the credentials of an account, or as a non-credentialed scan, without any user credentials.

  • Non-credentialed scans: Upon an initial attack, attackers typically lack internal credentials, so they run non-credentialed scans.  Although a credentialed scan is more accurate, administrators also run non-credentialed scans to see what a non-credentialed attacker sees.
  • Credentialed scans: Security administrators often run credentialed scans with the privileges of an administrator account, because attackers use privilege escalation techniques to gain administrative access, allowing them to run a credentialed scan against a network.  A credentialed scan checks for deeper security issues than a non-credentialed scan, with more accurate test results and fewer false positives.

 

Configuration compliance scanners typically run on a set schedule as credentialed scans, ensuring they can accurately read the configuration of systems during the scan, in order to provide configuration validation. They use a file identifying the proper configuration for systems; when scanning, the scanner verifies that target systems have the same configuration as defined by that configuration file.
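The comparison a configuration compliance scanner performs can be sketched as a diff between a baseline file and each system's reported settings; the setting names and values below are invented for illustration:

```python
# Configuration-compliance sketch (baseline keys/values are hypothetical).

baseline = {
    "password_min_length": 12,
    "firewall_enabled": True,
    "guest_account_enabled": False,
}

def compliance_check(system_config, baseline):
    """Return the settings that deviate from the baseline."""
    return {
        key: system_config.get(key)
        for key, expected in baseline.items()
        if system_config.get(key) != expected
    }

host_config = {"password_min_length": 8, "firewall_enabled": True,
               "guest_account_enabled": False}
violations = compliance_check(host_config, baseline)
# violations == {"password_min_length": 8}
```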

Identifying Vulnerabilities and Misconfigurations

Vulnerability scanners utilize a database of known vulnerabilities, such as the Common Vulnerabilities and Exposures (CVE) list seen below, which is maintained by the MITRE Corporation, to test systems for vulnerabilities.

Screenshot of the CVE website.

Risk scores quantify risks, allowing security experts to prioritize vulnerabilities. The Security Content Automation Protocol (SCAP) also includes risk scores for items in the CVE list.

Beyond just looking for vulnerabilities, attackers use vulnerability scanners that leverage the National Vulnerability Database (NVD), which includes lists of common misconfigurations, security-related software flaws, and impact ratings or risk scores, to detect misconfigured settings such as:

  • Default accounts/passwords: Basic operating system and application hardening requires the removal of these defaults; vulnerability scanners can discover whether they have not been modified.
  • Open ports: A vulnerability if the services associated with these ports aren’t monitored/managed.
  • Security and configuration errors: Vulnerability scans can also check the system against a configuration or security baseline to identify unauthorized changes.
  • Sensitive data: Some scanners include data loss prevention (DLP) techniques to detect sensitive data sent over the network. 
  • Weak passwords: Many scanners include a password cracker that can discover weak passwords or verify that users are creating strong passwords in compliance with an organization’s policy. 
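One of the checks above, detecting unchanged default credentials, can be sketched as a simple lookup against a list of well-known pairs; the device inventory and credential pairs here are illustrative, not from any real product list:

```python
# Default-credential misconfiguration check (inventory and pairs are invented).

DEFAULT_CREDS = {("admin", "admin"), ("root", "toor"), ("admin", "password")}

devices = [
    {"host": "10.0.0.5", "user": "admin", "password": "admin"},
    {"host": "10.0.0.9", "user": "admin", "password": "Xk9!vR2#"},
]

# Flag any device still using a well-known default pair.
flagged = [d["host"] for d in devices
           if (d["user"], d["password"]) in DEFAULT_CREDS]
# flagged == ["10.0.0.5"]
```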

Passively Testing Security Controls

Vulnerability scanners do not attempt to exploit any vulnerabilities; they just identify them.  Administrators then assess the vulnerabilities to determine which ones to mitigate.

Vulnerability scanners can also identify missing security controls, such as the lack of up-to- date patches or the lack of antivirus software. 

False Positive: Vulnerability scanners can report a vulnerability when it doesn’t actually exist. False positives can result in higher administrative overhead because administrators have to investigate them. 

 

Penetration Testing

Penetration testing (pen-testing) is geared toward assessing a system's security controls.  While it starts with passive reconnaissance, like a network or vulnerability scan, its goal is the exploitation of vulnerabilities by simulating or performing an attack.  The ultimate goal of any pen test is to help an organization model threat impacts on a system by determining the extent of damage that an attacker could inflict by exploiting a vulnerability.

Another aspect of pen-testing is that it allows organizations to develop responses to system attacks as well as find flaws in policy implementation.   As mentioned earlier, since penetration testing exploits vulnerabilities, it has the potential to disrupt business operations, so the test should end right before causing real damage.   To avoid interference with live systems, testers sometimes work on test systems instead of live production systems.

Penetration test activities include:

  • Passive reconnaissance: Collects information about a targeted system, network, or organization using open-source intelligence. Because it doesn't engage a target, it isn't illegal; it does not include using tools to send information to targets and analyze the responses, though it can include using tools to gather information from systems other than the target. Sources include:
    • Social media posts about the target.
    • News reports.
    • The organization's web site.
    • Broadcast wireless network information, such as network SSIDs.
  • Active reconnaissance: Should never be done without prior written authorization, as active reconnaissance engages targets and is almost always illegal without it. It involves using tools such as network and vulnerability scanners to send data to targets and analyze the responses, identifying active IP addresses, ports, and services, as well as the operating system running on individual systems.
  • Initial exploitation: After discovering vulnerabilities, testers attempt to exploit them to gain full access to the system, in order to install software on the exploited system.
  • Escalation of privilege: Many times, the tester first accesses a low-level system or account and then uses various scripts to scan the network for vulnerabilities in order to gain more privileges.
  • Pivot: Gaining additional information from the system by using the initial point of compromise.
  • Persistence: Tools that allow testers to stay within a network for a prolonged period of time without detection; a commonly used technique is creating a backdoor into the network.

White, Gray, and Black Box Testing

Pen-testers are classified based on the level of knowledge of the system they have prior to starting the test:

  • White box testers: Have full knowledge of the environment before starting the test.
  • Gray box testers: Have some knowledge of the environment prior to starting.
  • Black box testers: Have zero knowledge of the environment prior to starting; they approach the test the way an attacker would.

Exploitation Frameworks

Exploitation frameworks store information about security vulnerabilities and are used by penetration testers and hackers to locate and exploit software; they include tools to check for vulnerabilities and execute exploits against any that are discovered.  Common exploitation frameworks are:

  • Browser Exploitation Framework/BeEF: An open source web browser exploitation framework that focuses on identifying web browser vulnerabilities, allowing attacks from within an exploited web browser.
  • Metasploit Framework: Has data on over 1,600 exploits and includes methods to develop, test, and use exploit code.
  • Web Application Attack and Audit Framework/w3af:  Focuses on web application vulnerabilities. 

Packet Sniffing with a Protocol Analyzer

As we've previously mentioned, a protocol analyzer, sometimes called a packet sniffer, is used by sysadmins and hackers to capture and analyze network packets in order to view IP headers.

  • Administrators use protocol analyzers to troubleshoot communication issues between network systems, or identify potential attacks using manipulated or fragmented packets.
  • Attackers use the tool to capture data sent across a network in cleartext by connecting an unauthorized switch within a network to capture traffic and forward it to a system running a protocol analyzer. 

 

Before running your protocol analyzer, you need your network interface card in a special mode.  Normally, a network interface card (NIC), like the one in your computer, uses non-promiscuous mode and only processes packets addressed directly to its IP address; protocol analyzers use promiscuous mode to process all packets regardless of the destination IP address. This allows the protocol analyzer to capture all packets that reach the NIC.

The screenshot below of Wireshark, a free protocol analyzer, shows three panes:

  • The top pane: shows the source and destination IP addresses and the Server Message Block (SMB) protocol used to send files over the network, and this packet includes the contents of that file. 
  • The middle pane: shows detail from a packet with the Internet Protocol version 4 header information partially expanded. 
  • The bottom pane: shows the entire contents of the packet displayed in hexadecimal and ASCII characters.
Wireshark interface showing three panes of data: 1) the Packet List, 2) Packet Details and 3) Packet Bytes.

The image above shows Wireshark after it captured packets transmitted over the network; a packet capture can easily include thousands of packets, so Wireshark includes filters that administrators use to focus on specific types of traffic. These filters also allow them to quantify the traffic.  There is a lot to be learned from a captured packet:

  • The protocol analyzer allows you to verify manipulation of header flags, as manipulation of these flags is associated with different types of attacks.
  • You can also see the source and destination IP addresses within the IP header field. You can expand the Ethernet II section to show the media access control (MAC) addresses of the source and destination computers.
  • You can view unencrypted credentials, because SMB sends them in cleartext if they were not encrypted before sending.
  • You can identify the actual source of traffic; if an attacker is manipulating or fragmenting packets as part of an attack, you can use the source IP address to identify the potential source of the attack.
    • The source IP address doesn’t always identify the actual attacker:
      • Attackers take control of other computers and launch attacks from them without the knowledge of the owner.
      • Port Address Translation (PAT) translates public and private IP addresses so if the traffic goes through a device using PAT, the protocol analyzer only captures the translated IP address, not the original IP address.

Command Line Tools for Penetration Testing

  • Tcpdump: A command-line packet analyzer that allows you to capture packets from the command line. Many admins use tcpdump to capture packets and later use Wireshark to analyze the capture.
      • As with most Linux command-line tools, tcpdump is case sensitive and the switches must be entered with the proper case.
        • “-c”: represents count and indicates the capture should stop after receiving the specified number of packets. 
        • “-C”: file size and indicates the maximum size (in millions of bytes) of a packet capture. When the file reaches this size, tcpdump closes it and starts storing packets in a new file.
  • Nmap: A network scanner with many functions, including identifying all the active hosts and their IP addresses in a network, the protocols and services running on each of these hosts, and the operating system of each host. When running the command, you include the scan type(s), any options, and the target specification.
Using nmap for banner grabbing.

The screenshot above shows the execution of the nmap command "nmap -sV -O scanme.nmap.org", which includes two switches:

-sV: Probe open ports to determine service/version info.

-O: Enable OS detection.

  • Netcat: Used for remotely accessing Linux systems. It doesn't include native encryption, so it's common to use SSH to secure the session; other uses of Netcat include:
    • Transferring files.
    • Port scanning: You can run a port scan against a single IP address, specify the range of ports to scan (such as 10 through 1024), and randomize the order of ports scanned to evade detection.

Logs and Cybersecurity Risk Management

As logs record the what, when, where, and who of events that happen on a system, they are a critical part of cybersecurity risk management, which is why we have addressed the topic in two previous articles, “Linux System Logs: How Do They Work” and “How Do Linux Logs Work”, so we will just deal with the high-level items below.

Linux logging

It's tempting to set up logging to record every event and provide as much detail as possible, as most logging systems support a verbose mode that logs additional details. If a system is attacked, you may be able to learn details of the attack by reviewing the operating system logs; depending on the type of attack, any of the operating system logs may be useful.  However:

  • The amount of available disk space limits what gets logged.
  • When logging is enabled, there is an implicit responsibility to review the logs, so the more you log, the more you must review.

Windows Operating System Event Logs: Viewable using the Windows Event Viewer; they include:

  • The Security log: functions as a security log, an audit log, and an access log.
    • records auditable events, such as when a user logs on or off, or when a user accesses a resource.
    • Some auditing is enabled by default in some systems, but administrators can add additional auditing.
    • Records audited events as successes or failures.
      • Success indicates an audited event completed successfully, such as a user successfully logging on or successfully deleting a file.
      • Failure indicates that a user tried to perform an action but failed, such as failing to log on or trying to delete a file but receiving a permission error instead.
  • The Application log: Records events from any applications or programs running on the system that are configured to do so. Any application has the capability of recording errors in the Application log.
  • The System log: Records events related to the functioning of the operating system, including when it starts, when it shuts down, information on services starting and stopping, drivers loading or failing, or any other system component event deemed important by the system developers.

Firewall and Router Access Logs: Can be configured to record specific information that is useful for troubleshooting connectivity issues and identifying potential intrusions or attacks.   These logs include information on where the packet came from (the source) and where it is going (the destination) by capturing IP addresses, MAC addresses, and ports.

Linux Logs: You can view logs on Linux systems using the System Log Viewer or the cat command from the terminal. The key logs to look at are:

  • /var/log/auth.log: The authentication log contains information related to successful and unsuccessful logins.
  • /var/log/boot.log: Log entries created when the system boots are contained here.
  • /var/log/faillog: This log contains information on failed login attempts. It can be viewed using the faillog command.
  • /var/log/httpd/: If the system is configured as an Apache web server, you can view access and error logs within this directory.
  • /var/log/kern.log: The kernel log contains information logged by the system kernel, which is the central part of the Linux operating system.
  • /var/log/messages: This log contains a wide variety of general system messages, including some logged during startup and some related to mail, the kernel, and authentication. It stores general system activity log entries.
  • Some Linux distributions include the utmp, wtmp, and btmp files within the /var/log folder (but might be elsewhere):
    • btmp file: Records failed login attempts.
    • utmp file: Maintains information on the current status of the system, including who is currently logged in. The who command queries this file to display a list of users currently logged in.
    • wtmp file: An archive of the utmp file. Depending on how it is implemented, it can be a circular file, overwriting itself when it reaches a predetermined size.
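A common use of the authentication log is spotting failed logins with grep. The sketch below stands in a sample file for /var/log/auth.log; the entry format is an assumption modeled on typical sshd messages:

```shell
#!/bin/sh
# Count failed login attempts in an auth-style log. The sample file
# below substitutes for /var/log/auth.log; its sshd-like format is an
# invented illustration.
cat > /tmp/auth.sample <<'EOF'
Jan 10 09:15:01 host sshd[101]: Accepted password for alice from 10.0.0.5
Jan 10 09:16:44 host sshd[102]: Failed password for bob from 10.0.0.9
Jan 10 09:16:50 host sshd[103]: Failed password for bob from 10.0.0.9
EOF

# -c prints the number of matching lines; prints 2 for the sample above.
grep -c 'Failed password' /tmp/auth.sample
```

On a real system, point grep at /var/log/auth.log (or use the faillog command, which reads /var/log/faillog directly).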

Antivirus logs: Record all antivirus activity, including when scans were run and whether any malware was detected. These logs also identify whether malware was removed or quarantined.

Application logs: Many server applications include logging capabilities within the application. 

Performance logs: Performance logs can monitor system performance and give an alert when preset performance thresholds are exceeded.
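A threshold alert can be as simple as comparing a measured value against a preset limit. A sketch that warns when root-filesystem usage crosses a threshold (the 90% figure is an arbitrary example):

```shell
#!/bin/sh
# Alert when disk usage on / exceeds a preset threshold.
# THRESHOLD=90 is an example value, not a recommendation.
THRESHOLD=90

# df -P gives POSIX single-line output; field 5 is "Use%" (e.g. "42%").
used=$(df -P / | awk 'NR==2 {gsub("%",""); print $5}')

if [ "$used" -gt "$THRESHOLD" ]; then
  echo "ALERT: / is ${used}% full (threshold ${THRESHOLD}%)"
else
  echo "OK: / is ${used}% full"
fi
```

In practice the same pattern applies to CPU, memory, or queue-depth metrics: measure, compare against a preset threshold, alert on breach.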

Security Information and Event Management Systems

Security Information and Event Management Systems (SIEMs) are designed for large enterprises that need to continuously monitor massive amounts of data and activity from multiple sources. The location of the SIEM and its components depends on how the SIEM is used; however, it’s common to locate the SIEM within the private network, even if it collects some data from the demilitarized zone (DMZ), as the internal network provides the best protection for the log data. They combine:

  • Security event management (SEM):  Provides real-time monitoring, analysis, and notification of security events, such as suspected security incidents.
  • Security information management (SIM):  Provides long-term storage of data, along with methods of analyzing the data looking for trends, or creating reports needed to verify compliance of laws or regulations.

SIEMs typically have the following common features and functionality:

  • Aggregation: The combining of several dissimilar items into a single item. A SIEM can collect data from multiple sources, such as firewalls, intrusion detection systems, proxy servers, and more. While these devices format their logs differently, the SIEM aggregates and stores the data in a way that makes it easy to analyze and search.
  • Automated alerting: Comes with predefined alerts to notify admins of suspicious events, and administrators can create new alerts.
  • Automated triggers: Cause an action in response to predefined events.
  • Correlation engine: Collects and analyzes event log data from various systems within the network, looking for common attributes, and then uses advanced analytic tools to detect patterns of potential security events and raise alerts.
  • Database storage: Stores collected log data from devices throughout the network in a database, making it easy to search and analyze.
  • Event deduplication: A SIEM stores only a single copy of any duplicate log entries.
  • Logs/WORM: Includes methods to prevent modification of log entries, referred to as write once read many (WORM).
  • Time synchronization: All servers sending data to the SIEM should be synchronized to the same time, which is important when investigating an incident so that security investigators know when events occurred. Because large organizations have locations in different time zones, the SIEM needs to compensate for the time offset, usually by converting times to Greenwich Mean Time (GMT).
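Two of these features, deduplication and time normalization, can be shown in miniature with standard tools. The log lines are invented samples, and the timestamp conversion assumes GNU date (`-d` string parsing is a GNU extension):

```shell
#!/bin/sh
# Miniature illustration of two SIEM ideas: event deduplication and
# normalizing timestamps to GMT/UTC. The event lines are invented.
cat > /tmp/events.sample <<'EOF'
fw01 DROP src=10.0.0.9 dst=10.0.0.1
fw01 DROP src=10.0.0.9 dst=10.0.0.1
ids01 ALERT portscan from 10.0.0.9
EOF

# Deduplication: keep a single copy of identical entries.
sort -u /tmp/events.sample > /tmp/events.dedup
wc -l < /tmp/events.dedup    # 2 unique events remain

# Time normalization: convert a local timestamp to UTC (GNU date).
# 09:00 EST is 14:00 UTC.
date -u -d "2024-01-10 09:00:00 EST" '+%Y-%m-%d %H:%M:%S UTC'
```

A real SIEM does both at ingestion time across millions of events, but the underlying operations are the same: collapse duplicates and put every event on one clock.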

In very large organizations, the aggregation and correlation engine can consume a lot of processing power, so organizations sometimes off-load these processes to another server so that the primary SIEM appliance can focus on alerts and triggers.

Continuous monitoring: Monitoring all relevant security controls, including performing periodic threat assessments, vulnerability assessments, and risk assessments, with the goal of maintaining a strong security posture.

Additionally, many organizations perform routine vulnerability scans and penetration tests, as well as routine audits and reviews, such as usage auditing reviews and permission auditing reviews.

Auditing and Reviews

Usage auditing: Logging user activity information, which can include when a user:

  • Accesses a file
  • Logs on: Configuring logging of logon attempts is an important security step for system monitoring. After configuring logging, a system records the time and date when users log on, and when they access systems within a network. When users first log on to their account, it’s recorded as a logon action. Additionally, when users access a resource over the network (such as a file server), it is also recorded as a logon action. Many systems utilize single sign-on (SSO), so users don’t have to provide their credentials again. However, their access is still recorded as a logon action.
  • Accesses a network share
  • Reads a file
  • Modifies a file
  • Creates a file
  • Prints a file
  • Uses a proxy server to access a website

Usage auditing review: Looks at the logs to see what users are doing, as logs create an audit trail of what happened. Usage auditing reviews are often done to re-create the audit trail, or reconstruct what happened in the past.

Permission Auditing and Review: Identifies the rights and permissions (seen in the image below) granted to users and compares them to what the users actually need, to verify that the principle of least privilege is enforced. Most organizations perform permission auditing reviews at least once a year, if not more often, as the goal is to catch potential problems and prevent security incidents. Unless automated, these reviews become burdensome if security administrators must perform them too often, such as daily or even weekly.

linux file ownership
Screenshot demonstrating file listing with ownership and permissions being displayed.

Permission audits detect privilege creep, a common problem that violates the principle of least privilege. Privilege creep occurs when users obtain more and more privileges due to changing job requirements, but unneeded privileges are never removed. A way to avoid this is to use role-based access control with group privileges.
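One concrete permission-audit check is searching for world-writable files, a frequent least-privilege violation. A sketch that builds a sample directory and scans it (in practice you would point the search at real data directories):

```shell
#!/bin/sh
# Find world-writable files, a common least-privilege violation.
# AUDIT_DIR here is a throwaway sample; audit real paths in practice.
AUDIT_DIR=$(mktemp -d)
touch "$AUDIT_DIR/ok.txt"
touch "$AUDIT_DIR/loose.txt"
chmod 644 "$AUDIT_DIR/ok.txt"
chmod 666 "$AUDIT_DIR/loose.txt"

# -perm -0002 matches files where the "other" write bit is set;
# only loose.txt is printed.
find "$AUDIT_DIR" -type f -perm -0002
```

Scheduling checks like this, alongside reviews of group memberships, helps keep granted permissions aligned with what users actually need.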


Summary: Cybersecurity Risk Management

  • Threats are potential dangers, and a threat assessment evaluates potential threats, which can be classified in numerous ways:
    • Environmental threats include natural threats such as weather events.
    • Manmade threats are any potential dangers from people and can be either malicious or accidental.
    • Internal threats typically refer to employees within an organization.
    • External threats can come from any source outside the organization.
  • It is not possible to eliminate risk, but you can take steps to manage it. An organization can avoid a risk by not providing a service or not participating in a risky activity. Insurance transfers the risk to another entity. You can mitigate risk by implementing controls, but when the cost of the controls exceeds the cost of the risk, an organization accepts the remaining, or residual, risk.
  • A quantitative risk assessment uses specific monetary amounts to identify cost and asset values. The SLE identifies the amount of each loss, the ARO identifies the number of failures in a year, and the ALE identifies the expected annual loss. You calculate the ALE as SLE × ARO. A qualitative risk assessment uses judgment to categorize risks based on likelihood of occurrence and impact.
  • A risk register is a comprehensive document listing known information about risks. It typically includes risk scores along with recommended security controls to reduce the risk scores. A supply chain assessment evaluates everything needed to produce and sell a product. It includes all the raw materials and processes required to create and distribute a finished product.
  • Password crackers attempt to discover passwords and can identify weak passwords, or poorly protected passwords. Network scanners can detect all the hosts on a network, including the operating system and services or protocols running on each host.
  • A vulnerability scanner can identify vulnerabilities, misconfigured systems, and the lack of security controls such as up-to-date patches. Vulnerability scans are passive and non-intrusive, having little impact on a system during a test. In contrast, a penetration test is active and intrusive, and can potentially compromise a system.
  • Usage auditing records user activity in logs. A usage auditing review looks at the logs to see what users are doing and it can be used to re-create an audit trail.
  • Permission auditing reviews help ensure that users have only the access they need and no more and can detect privilege creep issues.
