Threats, Vulnerabilities, and Controls

In the previous lesson, we introduced security concepts including the CIA Triad: Confidentiality, Integrity, and Availability. This lesson explores threats, vulnerabilities, and controls, which collectively provide a means to analyze and harden computer systems.

Video Lecture

Watch at Internet Archive

Threats

In terms of information security, threats are possible dangers that could compromise the confidentiality, integrity, or availability of a computer system or service. Threats come in a variety of forms. They can be external threats, which are those posed by threat actors located outside the system or company (for example, hackers on the Internet). However, threats need not be external: internal threats are possible dangers from within the organization (such as a rogue user or disgruntled employee). Moreover, a good design considers both malicious (intentional) threats and accidental threats (such as a user pushing the wrong button by mistake).
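
To make this taxonomy concrete, the two axes (origin and intent) can be captured in a small data model. The sketch below is purely illustrative, with hypothetical names of our own choosing, and assumes Python 3.9 or later for the built-in generic set[str].

```python
from dataclasses import dataclass
from enum import Enum

class Origin(Enum):
    EXTERNAL = "external"    # e.g., hackers on the Internet
    INTERNAL = "internal"    # e.g., a rogue or careless employee

class Intent(Enum):
    MALICIOUS = "malicious"      # deliberate attack
    ACCIDENTAL = "accidental"    # honest mistake, like pushing the wrong button

@dataclass
class Threat:
    description: str
    origin: Origin
    intent: Intent
    affects: set[str]  # which CIA properties are at risk: "C", "I", and/or "A"

# A thorough threat analysis covers all four origin/intent combinations,
# not just external, malicious attackers.
threats = [
    Threat("Hacker steals customer records", Origin.EXTERNAL, Intent.MALICIOUS, {"C"}),
    Threat("Disgruntled employee deletes files", Origin.INTERNAL, Intent.MALICIOUS, {"I", "A"}),
    Threat("User pushes the wrong button", Origin.INTERNAL, Intent.ACCIDENTAL, {"I", "A"}),
]
```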

When identifying potential threats to a system, we must first determine the ways in which it can be compromised. If a compromise occurs, we must assess how that compromise would reduce the confidentiality, integrity, and/or availability of the system or data. A common mistake in performing threat analysis is to assume that all threats are both external and malicious. Failing to design a system to account for internal threats and honest accidents will leave the system insecure.

Cat with a pen (photo)

Figure 1: A cat can easily move a small object, such as a pen. Like any good cat, he clearly shows no interest in the small object while the camera is out. However, he was always fond of knocking remote controls and other items off tables. [Bear Murphy, 2003-2020]

Consider a silly, non-technical example (Figure 1): a domestic cat can easily manipulate a small object, such as a pen. While there is (hopefully) not a confidentiality concern in this case, there are possible threats to integrity and availability. The cat could chew on the pen, damaging it and thereby affecting its integrity. More likely, the cat would threaten the availability of the pen, perhaps by taking it away from the human and hiding it under the refrigerator. We’ll continue this example in the subsequent sections.

Vulnerabilities

Weaknesses in a system that permit a threat to be realized, compromising the confidentiality, integrity, and/or availability of the system, are called vulnerabilities. A critical vulnerability is a flaw in the system that can be exploited by an attacker with the correct tools, or by someone in the correct situation. The distinction is that a system may have an identifiable weakness even when there is no practical way to exploit it; such a vulnerability is not critical. A vulnerability becomes critical whenever a practical exploit becomes available. Note that a non-critical vulnerability today could become a critical one tomorrow.

Identifying vulnerabilities begins after identifying the threats to a system. For each threat, the security analyst needs to determine how the system is potentially affected by the threat and whether or not the threat could be realized due to some weakness in the system. Each identified weakness that would allow a threat to materialize is a vulnerability, and that vulnerability becomes critical if a practical means of exploiting it is available.
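
Continuing our illustrative model, this definition of criticality translates naturally into a property of a vulnerability record: a vulnerability is critical exactly when a practical exploit is available, and that status can change over time. As before, these names are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Vulnerability:
    weakness: str
    threats_enabled: list[str]       # which identified threats this weakness permits
    practical_exploit: bool = False  # is a usable exploit available today?

    @property
    def is_critical(self) -> bool:
        # By the definition above, a vulnerability is critical exactly
        # when a practical means of exploiting it exists.
        return self.practical_exploit

# A non-critical vulnerability today can become critical tomorrow:
vuln = Vulnerability("pen is light enough to carry", ["cat hides the pen"])
print(vuln.is_critical)        # False while no practical exploit exists
vuln.practical_exploit = True  # e.g., the pen is left out on the table
print(vuln.is_critical)        # True: the weakness is now exploitable
```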

Using our silly example above (Figure 1), several characteristics of the pen make it vulnerable to the cat. First, the pen is small enough for the cat to manipulate it easily, whether with paws or teeth. Second, the pen is light enough that the cat could move it or even carry it away somewhere else. These two characteristics of the pen are vulnerabilities that allow the cat to threaten the pen’s integrity and availability. As shown in the photo, these vulnerabilities are critical, since the pen is clearly accessible to the cat.

Controls

Controls are safeguards implemented to close vulnerabilities and mitigate threats, in order to protect the confidentiality, integrity, and availability of the system. Controls can be physical, procedural, or technical in nature. For example, locking the data center where the systems are housed is a type of physical control. Requiring a system administrator to document changes to the system is a type of procedural control. Deploying a firewall on a system is a type of technical control.

Once threats and vulnerabilities have been identified, security professionals select controls to address them. Each control is a safeguard intended to close a specific vulnerability or mitigate a specific threat, making the system less vulnerable overall.
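
One way to picture this workflow is as a mapping from each identified vulnerability to its candidate controls, tagged by the control's nature. Again, this is only an illustrative sketch with made-up names.

```python
from enum import Enum

class ControlType(Enum):
    PHYSICAL = "physical"       # e.g., locking the data center
    PROCEDURAL = "procedural"   # e.g., documenting system changes
    TECHNICAL = "technical"     # e.g., deploying a firewall

# Map each identified vulnerability to the candidate controls that close it.
candidate_controls = {
    "data center door can be opened by anyone": [
        (ControlType.PHYSICAL, "install a badge-controlled lock"),
        (ControlType.PROCEDURAL, "log every entry to the room"),
    ],
    "pen is reachable by the cat": [
        (ControlType.PHYSICAL, "secure the pen inside a drawer"),
    ],
}

for vulnerability, controls in candidate_controls.items():
    print(vulnerability)
    for ctype, action in controls:
        print(f"  [{ctype.value}] {action}")
```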

In our running silly example, there are several possible controls that we could implement to reduce the vulnerability of the pen. We could close the vulnerability directly by increasing the size or weight of the pen to the point where the cat could no longer move it. Alternatively, we could prevent exploitation of the vulnerability without modifying the pen by securing it inside a drawer, where the cat cannot reach it (Figure 2).

Securing a pen in a drawer (photo)

Figure 2: Securing the pen inside a drawer makes it inaccessible to the cat.

This silly example also provides one more threat mitigation opportunity: we could remove the cat from the room where the pen is located, thereby directly mitigating the threat. While removing the cat is a legitimate control in this situation, directly mitigating threats by removing them is not usually practical. For example, we cannot possibly eliminate all remote attackers from the Internet. Nor can we completely prevent all insider threats to the system, since a trustworthy employee could go rogue at some future time. Controls in information processing systems are best approached as ways to live with threats, as opposed to trying to eliminate them altogether.

Trade-offs with Controls

A critical problem with implementing security controls is that the controls themselves can negatively impact the CIA Triad compliance of a system, rendering it insecure by definition. In particular, security controls have a tendency to reduce the availability of a system by limiting its functionality to a point where the system no longer performs adequately for its intended purpose. One extreme example of a control run amok is to provide physical security for laptop computer systems by permanently affixing them to employees’ desks. While the control is effective at preventing the physical loss of the system (partially providing some confidentiality and integrity), the whole point of a laptop computer is to be portable. If the business need for acquiring the laptop in the first place was to provide employees with a way of computing while traveling, this particular control causes the system to become insecure by definition, since it destroys availability.

Alternate controls may be equally effective with fewer side effects. In the above case of the laptops, a combination of insurance, encryption on the devices, and perhaps some tracing software might provide adequate confidentiality and integrity while preserving availability. In our silly example, placing the pen together with other pens in a cup on a desk (Figure 3) might be enough to make the cat lose interest. While the cat could still reach the pen, the risk is reduced if the pen is not as easily manipulated. In most cases, as with the pen, we have to accept a certain amount of risk in order to operate a system.

Pens in a cup (photo)

Figure 3: Placing pens together in a cup makes an individual pen less likely to be attacked by the cat.

For the vast majority of business cases, practical security is really a matter of protecting the system to a sufficient level that the cost of attacking it is more than the benefit provided by doing so. Unless your system is of exceptionally high value so as to be worth attacking in its own right (such as a government system holding military secrets), most attackers will move on to softer targets. A large number of security exploits are motivated by opportunity and available vulnerabilities. Security controls therefore need to safeguard the system to an acceptable level of risk, maintaining availability to its intended users for their intended purposes.
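
A deliberately simplified model can make this cost-benefit framing concrete. The likelihood-times-impact formula below is a common simplification in risk analysis, and the numbers are our own invention, not figures from any real deployment.

```python
def annual_risk(likelihood: float, impact: float) -> float:
    # Simplified model: expected yearly loss is the probability of a
    # successful attack in a year times the loss if it succeeds.
    return likelihood * impact

# Hypothetical numbers for the traveling-laptop scenario above.
baseline = annual_risk(likelihood=0.10, impact=50_000)        # unencrypted laptop
with_encryption = annual_risk(likelihood=0.10, impact=5_000)  # stolen data unreadable
control_cost = 200  # per laptop, per year (assumed)

savings = baseline - with_encryption
print(f"Risk reduced by ${savings:,.0f}/year against a ${control_cost}/year control")
# The control is worth deploying when the risk reduction exceeds its cost,
# and, unlike bolting the laptop to a desk, it preserves availability.
```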

Cat with toys (photo)

Figure 4: Providing the cat with appropriate small objects might divert attention from the pens. On the other hand, the cat might simply decide to use the toys as a bed and continue to play with pens.

To conclude our silly example, we can provide softer targets for the cat, thereby potentially diverting his attention away from objects that we would like to protect. A basket of cat toys (Figure 4) provides the cat with plenty of other small objects that can be manipulated, potentially making our pen less interesting. As anyone who has a cat knows, this approach is only ever partially successful, as cats tend to become bored with toys and will eventually move back to other objects.

Human threat actors tend to be more goal-driven than cats, but a similar technique can be used on them. A honeypot is a specially designed system or environment set up to encourage a threat actor to attack it, while giving the impression that the attack is succeeding. Honeypots can serve as early warning systems that reveal the presence of a threat actor. In addition, it is sometimes possible to identify who is attacking a honeypot, which allows procedural controls to be implemented, such as removing network access, terminating a rogue employee, or prosecuting a criminal actor.
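
To make the honeypot idea concrete, here is a minimal sketch of a listener that pretends to be an SSH-like service and simply logs whoever connects. The port, banner, and logging choices are arbitrary and our own; a real honeypot is far more elaborate and must be carefully isolated so that a compromise of the honeypot cannot spread.

```python
import socket
from datetime import datetime, timezone

# Listen on an otherwise-unused port and log every connection attempt.
# No legitimate user has any reason to connect, so any hit is an early
# warning that someone is probing the network.
HOST, PORT = "0.0.0.0", 2222  # arbitrary port chosen for this sketch

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as server:
    server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    server.bind((HOST, PORT))
    server.listen()
    while True:
        conn, (addr, src_port) = server.accept()
        with conn:
            stamp = datetime.now(timezone.utc).isoformat()
            # Recording who touched the honeypot helps identify the threat
            # actor so that procedural controls can follow.
            print(f"{stamp} connection attempt from {addr}:{src_port}")
            conn.sendall(b"SSH-2.0-OpenSSH_8.9\r\n")  # plausible-looking banner
```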

This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.