
 Information Security Assessment Variants

There are several differing variants of security assessment within the domain of cyber security, and they’re not always simple to tell apart.

This blog by AICorespot is a brief feature on the dominant variants of security assessment, combined with what distinguishes each from its typically confused cousins.

Security Assessment Variations 

  • Vulnerability Assessment – A vulnerability assessment is a technical assessment developed to yield as many vulnerabilities as feasible in an environment, combined with severity and remediation-prioritization data. 

Typically mistaken with: The vulnerability assessment is usually mistaken and/or conflated with the Penetration Test. This is mainly because sales personnel believe the latter sounds cooler.  

Ideally leveraged when: The vulnerability assessment is ideally leveraged when security maturity is low to medium, when you require a prioritized listing of everything that’s going wrong, and where the objective is to rectify as many things as feasible as effectively as possible. 
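The prioritization half of a vulnerability assessment can be sketched in a few lines. This is a minimal, hypothetical illustration: the finding names and CVSS-style severity scores below are invented, not taken from any real assessment.

```python
# Hypothetical sketch: prioritizing vulnerability assessment findings.
# Finding names and CVSS-style scores are invented for illustration.

findings = [
    {"issue": "Unpatched web server", "severity": 9.8, "effort": "low"},
    {"issue": "Weak TLS configuration", "severity": 5.3, "effort": "low"},
    {"issue": "Stale user accounts", "severity": 7.5, "effort": "medium"},
]

# Rank the highest-severity issues first so remediation effort
# goes where it matters most.
prioritized = sorted(findings, key=lambda f: f["severity"], reverse=True)

for rank, f in enumerate(prioritized, start=1):
    print(f"{rank}. {f['issue']} (severity {f['severity']}, effort {f['effort']})")
```

Real assessments often weigh more than raw severity (exploitability, asset value, remediation effort), but the output shape is the same: an ordered to-do list.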

  • Penetration Test: A penetration test, or pentest, is a technical evaluation developed to accomplish a particular objective, for instance, to steal and/or compromise consumer data, to gain domain administrator access, or to alter sensitive salary data. 

Typically mistaken with: The Pentest is most usually mistaken and/or conflated with the first entry of this blog, the vulnerability assessment. One way to distinguish them is to think of vulnerability assessments as searching for security issues when you know they are present, and pentesting as validating a configuration when you believe it is secure. 

Ideally leveraged when: As a pentest is developed to accomplish one or more particular objectives, they should not be commissioned by low- or medium-security organizations in most scenarios. Carrying out a pentest against a low- or medium-security shop will merely yield recommendation stalwarts such as “Implement patching across the organization”, “Disable inactive users”, and the all-time great – “Understand where your sensitive data is.” Don’t expend financial resources on a penetration test unless you’ve already undergone one (or seventeen) vulnerability assessments and subsequently remediated everything that was identified. Penetration tests are for validating security postures that are presumed to be robust, not for cataloguing everything that’s wrong in a weak one. 

  • Red Team Assessment: A Red Team “assessment” is something of a misnomer within the industry, as corporate Red Team services should ideally be ongoing rather than point-in-time. So it ought to be more of a service than an assessment. Notwithstanding that distinction, the central objective of a corporate Red Team is to enhance the quality of the corporate information security defences, which, where one is present, means the organization’s Blue Team. As a matter of fact, that’s what a lowercase “red team” is: an independent group that challenges an organization to enhance its effectiveness. In the scenario of corporate Red Teams, the group they’re enhancing is the Blue Team. 

Red Team services ought to always have the following five aspects: Organizational Independence, Defensive Coordination, Continuous Operation, Adversary Emulation, and Efficacy Measurement. 

Typically mistaken with: Red Team services are usually mistaken with Penetration Testing. Sales and marketing groups use the terms almost interchangeably, as do several internal security groups. Individuals mistaking the two are essentially seeing “Red Teaming” as a sexier, more elite variant of Penetration Test. They aren’t the same. A Penetration Test is a defined, scoped, point-in-time assessment that has particular criteria for success or failure. A corporate Red Team (whether internal or external) is an ongoing service that emulates real-world malicious actors for the purpose of enhancing the Blue Team. They might share TTPs at times, but they have differing purposes. 

Ideally leveraged when: Red Team Services are ideally leveraged when an enterprise has the fundamentals of robust vulnerability management in place and has at least some capacity to identify and respond to malicious or suspect behaviour in the environment. If an enterprise still has issues with fundamental asset management, patching, egress traffic control, and other basics, it’s usually best to solve those before hiring or developing a “Red Team”. Red Teams are for evaluating mature security postures in a real-world fashion, not for enumerating problems in low-maturity environments. If you don’t have a Blue Team, you likely don’t need a Red Team. 

  • Audit: An audit can be technically-based and/or documentation-based, and concentrates on how a current configuration measures up against a desired standard. This is a critical point. It does not prove or validate security; it validates conformance with a provided viewpoint on what security means. These two things should not be mistaken for each other. 

Typically mistaken with: Audits are usually mistaken with pretty much any other variant of security evaluation where the objective is to identify vulnerabilities and rectify them. That could be an aspect of an audit, if there’s an item in the standard that states you shouldn’t have vulnerabilities, but the critical attribute is mapping the present state against a given standard. 

Ideally Leveraged When: Enterprises leverage audits as a demonstration of compliance. Critically, compliance ought not to be leveraged to demonstrate security. Secure enterprises are considerably more probable to be compliant (if checked), but compliant organizations should lay no claim to being secure merely because they are in line with standard X or Y. 

  • White/Grey/Black-box assessment: The white/grey/black-box parlance is leveraged to signify how much internal data a tester will know or leverage during a provided technical evaluation. The shades map to internal transparency: a white-box evaluation is one where the tester has complete access to all internal data available, like network diagrams, source code, etc. A grey-box assessment is the next degree of opacity down from white, implying that the tester has some data but not all of it; the amount varies. A black-box assessment, as you’re probably guessing, is one where the tester has no internal knowledge of the environment, that is, it is carried out from the malicious actor’s viewpoint. 

Typically mistaken with: The biggest source of confusion with regards to white/grey/black-box nomenclature is the failure to realize that they aren’t really an assessment variant but instead an aspect of one. They’re most usually combined with vulnerability assessments, where you’re attempting to identify as many problems as possible, and that furnishes considerable incentive to open the curtains up a bit. Recall that the objective of a vulnerability assessment is to identify as many problems as feasible, so hiding internal data from a tester in a way that keeps them from identifying problems does not hurt the tester; it hurts the organization. Don’t confuse wanting to know what attackers can see/do with wanting to know what problems you have. These are two independent things and need to be approached independently. If you wish to know what an attacker can do, rectify all of your problems until you’re confident you’re as secure as possible, then execute a penetration test. 

Ideally Leveraged When: White-box assessments are ideally paired with vulnerability assessments, as you wish to identify as many problems as feasible, regardless of how the tester came to find out about them. Grey-box assessments are usually leveraged when individuals are confused about the difference between a pentest and a vulnerability assessment: they wish to provide some data, but not all. Let’s be clear: if you’re attempting to identify all of your problems, you shouldn’t hold back data from the tester. If you’re performing a pentest, however, you shouldn’t provide the tester anything, which makes it a black-box assessment. Keep these things clear in your head and you’ll be fine. 

  • Risk Assessment: Risk assessments, like threat models, vary widely in both how they’re understood and how they’re carried out. At the highest level, a risk assessment should involve determining what the present level of acceptable risk is, measuring the present risk level, and then deciding what can be done to bring the two in line where there are mismatches. Risk assessments usually consist of rating risks in two dimensions: probability and impact, and both quantitative and qualitative models are leveraged. In several ways, risk assessments and threat modelling are similar exercises, as the objective of each is to decide on a course of action that will bring risk to an acceptable level. 

Typically mistaken with: Risk assessments are usually mistaken with threat assessments, as both pursue similar objectives. The main differentiator is where the assessments begin and where they place their concentration. Threat models concentrate on attack scenarios and then shift into the agents, the vulnerabilities, the controls, and the prospective impacts. Risk assessments often begin from the asset side, rating the value of the asset and then mapping onto it the possible threats, probability of loss, impact of losses, etc. 

Ideally leveraged when: Risk assessments should debatably be viewed as an umbrella term for deciding what you possess that is of value, how it can be attacked, what you would lose if those attacks were successful, and what should be done to tackle the issues. It’s critical, when somebody states they’re going to perform a risk assessment, that you delve deeper into exactly what is implied by that, that is, what strategy or methodology will be leveraged, what the artifacts will be, etc. 
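The probability-and-impact rating described above can be sketched as a tiny quantitative model. This is a hypothetical illustration only: the asset names, scales, and numbers are invented, and real risk methodologies vary widely.

```python
# Hypothetical sketch of a simple quantitative risk rating
# (risk score = probability x impact). All entries and scales
# are invented for illustration; real models differ widely.

risks = [
    {"asset": "Customer database", "probability": 0.4, "impact": 9},
    {"asset": "Public website",    "probability": 0.7, "impact": 4},
    {"asset": "Build server",      "probability": 0.2, "impact": 6},
]

for r in risks:
    # Higher composite score = higher risk to the enterprise.
    r["score"] = r["probability"] * r["impact"]

# The output of a risk assessment is always a prioritized list.
ranked = sorted(risks, key=lambda r: r["score"], reverse=True)
for r in ranked:
    print(f"{r['asset']}: {r['score']:.1f}")
```

Note how a high-impact asset with moderate probability (the customer database) can outrank a more frequently attacked but lower-impact one; that trade-off is exactly what the two-dimensional rating is meant to surface.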

  • Threat Assessment: A threat assessment is a variant of security review that’s a little different from the others specified. Generally, it pertains more to physical attacks than tech, but the lines are blurring. The main concentration of a threat assessment is to decide if a threat (think bomb threat or violence threat) that was made, or that was identified in some other fashion, is credible. The driver for the assessment is to decide how many resources – if any – ought to be spent on tackling the issue in question. 

Typically mistaken with: The term “threat” is leveraged in various ways within the domain of security, which leads to considerable confusion. In this scenario, the term is leveraged as in “a threat was made”, or “determining if the threat was real”, as opposed to the “threat-agent” usage. The usage originates with the Secret Service investigating school violence, where the challenge was identifying which of the thousands of threats they received should be reacted to with really restricted resources. This is in stark contrast to what several think of when they hear threat assessment, which is investigating possible threat-agents, like hackers, governments, etc. 

Ideally Leveraged When: A threat assessment is ideally leveraged in scenarios where somebody has made a claim around carrying out an attack in the future, or such a possibility is uncovered in some way. The objective in that scenario would be to learn if the threat is worth spending resources on tackling. 

  • Threat Modelling: Threat modelling is not a well-understood variant of security assessment to a majority of enterprises, and part of the problem is that it means several different things to several different people. At the most fundamental level, threat modelling is the process of capturing, documenting, and usually visualizing how threat-agents, vulnerabilities, attacks, countermeasures, and impacts to the enterprise are connected for a provided environment. As the name indicates, the concentration often begins with the threat-agent and a provided attack scenario; the subsequent workflow then captures what vulnerabilities might be taken advantage of, what exploits might be leveraged, what countermeasures might exist to stop or diminish that type of attack, and what business impact may result. 

Typically Mistaken With: Threat modelling can be a source of confusion, generally speaking. A lot of the conflict stems from debates over definitions and semantics, as threat modelling often consists of discussions regarding threats, threat-agents, vulnerabilities, exploits, risks, controls, and impacts. Each of these terms is loaded by itself, and when you begin attempting to have a conversation involving all of them simultaneously, religious wars are often the outcome. The other problem is that individuals lose track of the objective because there are various elements in play. Are we attempting to detect vulnerabilities? Are we attempting to profile threat-agents? Are we recording possible business impacts? The ideal summary is that Threat Modelling brings a dose of possible reality to a security posture. It demonstrates, via attack scenarios, where gaps are present that could have real-world consequences. 

Ideally Leveraged When: Enterprises should be leveraging threat modelling early and frequently, and it should surely be included in the development process. Threat models are a way to make sure that known possible attack scenarios can actually be managed by a provided security posture. They can also be really illuminating from a pure documentation and visibility perspective. Observing your possible threat-actors, how they’re probable to attack your app or system, leveraging what vulnerabilities and what exploits, and what it’ll probably do to your organization is usually a sobering experience. They’re particularly useful for demonstrating to non-security-people how compliance and security products do not a security program make. 
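The linkage a threat model captures (threat-agent, attack, vulnerability, countermeasure, impact) can be documented as a simple structure. This is a hypothetical sketch: the scenario content is invented for illustration, and real threat models usually also rate likelihood and residual risk.

```python
# Hypothetical sketch of the chain a threat model documents:
# threat-agent -> attack -> vulnerability -> countermeasure -> impact.
# The scenario below is invented for illustration.

from dataclasses import dataclass

@dataclass
class ThreatScenario:
    agent: str           # who attacks
    attack: str          # how they attack
    vulnerability: str   # what weakness is exploited
    countermeasure: str  # what control could stop or diminish it
    impact: str          # what the business stands to lose

scenarios = [
    ThreatScenario(
        agent="External criminal group",
        attack="Credential stuffing against the login page",
        vulnerability="No rate limiting or MFA on authentication",
        countermeasure="Enforce MFA and lockout thresholds",
        impact="Account takeover and customer data theft",
    ),
]

# Even plain documentation like this makes gaps visible at a glance.
for s in scenarios:
    print(f"{s.agent} -> {s.attack} -> mitigated by: {s.countermeasure}")
```

A scenario with an empty `countermeasure` field is exactly the kind of gap the exercise is meant to expose.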

  • Bug Bounty: A Bug Bounty is a variant of technical security assessment that harnesses crowdsourcing to identify vulnerabilities in a system. The core concept is simple: security testers, regardless of quality, possess their own array of strengths, weaknesses, experiences, biases, and preferences, and these combine to yield differing discoveries for the same system when it is evaluated by differing individuals. To put it differently, you can provide 100 experienced security testers the exact same evaluation methodology and they’re likely to identify broadly varying vulnerabilities. The bug bounty concept is to embrace this difference rather than fighting it, by leveraging several testers on a single assessment. 

Typically Mistaken With: Bug bounties are a comparatively new strategy for technical security evaluation, and there is some confusion about whether they should be carried out instead of another security test or in addition to one. The ideal answer, we’d argue, is that a bug bounty should be considered like a vulnerability assessment in its objective of identifying as many issues to remediate as feasible, but like a pentest in that you should do traditional vulnerability assessments first. The reason is that bug bounties, as they leverage several individuals, excel at identifying uncommon and eccentric issues, and the exercise is somewhat wasted on identifying the common problems that can be unveiled leveraging automation and single-tester assessments. 

Ideally Leveraged When: Bug bounties are ideally leveraged when you have already carried out one or more traditional vulnerability assessments (which should have integrated both automated and manual testing) and have remediated everything that was identified. Consider them an optional step between traditional vulnerability assessments and a pentest, which, as observed above, does not look to discover all issues but rather to confirm that the security posture is where it needs to be by going after particular objectives. 

Most Frequently Confused 

The following are some of the most typical errors made when thinking about these assessment variants: 

  • If you aren’t confident in your security posture, or already know that it’s not robust, you should be performing Vulnerability Assessments – not Pentesting. Pentesting is for evaluating your posture after you have it where you want it. 
  • The ideal way to think about Bug Bounties is as an improvement to the discovery phase of a Vulnerability Assessment. Vulnerability Assessments have two pieces: Discovery (identifying as many issues as feasible), and Prioritization (ranking what should be rectified first). Bug bounties are great at the first part, and not good at the second. As such, they are ideally leveraged when you have already performed several vulnerability assessments and identified the simple stuff. Bug bounties excel at identifying issues not discovered leveraging other methods. 
  • As marketing and sales drive the infosec industry, individuals consistently conflate Red Teaming and Pentesting. As Red Teams are meant to emulate the adversary, they typically only function if they are ongoing and run over long periods – ideally permanently. So if an organization offers to do a 2-week “Red Team” engagement, it is probably better described as a Pentest. The critical distinctions are the emulation of real-world attackers, which includes their tenacity, the ongoing duration of the attack, the TTP sophistication, etc. Assessments that lack those elements are Pentests, not Red Team engagements. 


Here’s the summarized version. 

  • Vulnerability assessments are developed to identify as many susceptibilities as feasible for the purpose of prioritizing remediation attempts. The output is a listing of prioritized issues. 
  • Penetration tests are developed to decide if an attacker can accomplish particular objectives when encountering your present security posture, like stealing sensitive data or other activities that would harm the enterprise. The output is a report stating whether the objectives were accomplished, and any other observations that might have been made along the way. Pentests do not furnish a total list of vulnerabilities, or necessarily any prioritization of what was discovered; the result is mostly a yes or no for accomplishing the agreed-upon objectives. 
  • Red Teams are developed to continuously and efficiently mimic an enterprise’s real-world attackers for the purpose of enhancing its defensive capabilities. Red Teams operate on an ongoing basis, with near-full-scope and really limited restrictions, and consistently evolve their strategies to match and/or surpass the capacities of the enterprise’s actual attackers. 
  • Audits are developed to determine how a provided enterprise measures against a set standard. Audits, as a rule, do not evaluate security directly, but instead evaluate compliance with a standard. The standard being evaluated against might have a robust or weak link to actual security, and ought not to be confused with a vulnerability assessment or pentest. The output of an audit is a listing of areas that must be rectified in order to accomplish compliance. 
  • White/Grey/Black-box Assessments are a measure of how much data is being furnished to a security evaluation organization during an assessment. These could be internal, external, application-driven, network-driven, with or without exploitation, etc. The only consideration for $SHADE-box assessments is the amount of data being shared with the evaluation party. 
  • Risk Assessments are for deciding the most critical risks facing a provided enterprise for the purpose of making sure that they are brought within acceptable levels for the enterprise. They can take several forms, but the output is always a listing of prioritized risks followed by recommendations. 
  • Threat Assessments are for deciding whether a provided threat (often, but not necessarily, physical in nature) is worth spending restricted resources on. Output is typically a recommendation of what amount of effort – if any – should be devoted to the issue. 
  • Threat Models are for deciding the several threats, threat situations, threat-actors, vulnerabilities, exploits, controls, and impacts that are connected to a provided system. They are ideally carried out early and often during the development process, and can also be repeated after considerable changes. Output usually includes documentation of each of the above, combined with residual risk after controls are considered, combined with recommendations for enhancement. 
  • Bug Bounties are projects that harness crowdsourcing for the discovery of vulnerabilities in a system. They are a utility in the vulnerability assessment toolbox. The strategies leveraged by those taking part in a bounty can vary broadly, as can the variant of system being evaluated. The critical part is that instead of an internal team, or a specific set of contracted staff members, performing the work, it’s rather a large collection of independent researchers who all bring their own viewpoints to the testing. 