Before the design of new security solutions can begin, the security analysts must first understand the current state of the organization and its relationship to security.
This module examines the processes necessary to undertake formal risk management activities in the organization. Risk management is the process of identifying, assessing, and reducing risk to an acceptable level and implementing effective control measures to maintain that level of risk.
This is done with a number of processes from risk analysis through various types of feasibility analyses.
When you complete this module, you will be able to:
· Define risk management, risk identification, and risk control.
· Understand how risk is identified and assessed.
· Assess risk based on the probability of its occurrence and impact on an organization.
· Grasp the fundamental aspects of documenting risk through the creation of a risk assessment.
· Describe the risk mitigation strategy options for controlling risks.
· Identify the categories that can be used to classify controls.
· Recognize the conceptual frameworks that exist for evaluating risk controls, and be able to formulate a cost benefit analysis.
· Understand how to maintain and perpetuate risk controls.
Textbook: Principles of Information Security, Michael E. Whitman and Herbert J. Mattord, Second Edition (2005), Chapter 4.
Unless otherwise specified, all definitions and materials used in this module have been sourced from Principles of Information Security, Second Edition (2005), Chapter 4, and the instructor’s manual for the text. The materials from the instructor’s manual are used with the permission of the publisher, Thomson Course Technology.
Risk identification is the formal process of examining and documenting the current information technology security situation.
Risk identification is conducted within the larger process of identifying and justifying risk controls, known as risk management.
To keep up with the competition, organizations must design and create a safe environment in which business processes and procedures can function.
This environment must maintain the confidentiality, privacy and integrity of organizational data. These objectives are met through the application of the principles of risk management.
Risk management is the process of identifying vulnerabilities in an organization’s information systems and taking carefully reasoned steps to assure the confidentiality, integrity, and availability of all the components in the organization’s information system.
First, we must identify, examine, and understand the information and systems currently in place.
Informed of our own nature, and aware of our own weaknesses, we must then know the enemy.
For information security this means identifying, examining, and understanding the threats that most directly affect our organization and the security of our organization’s information assets.
· Information security: understand the threats and attacks that introduce risk into the organization.
· Management and users: play a part in the early detection and response process, and ensure that sufficient resources are allocated.
· Information technology: assist in building secure systems and operating them safely.
General management, IT management, and information security management are collectively accountable for identifying and classifying risks.
A risk management strategy calls on us to “know ourselves” by identifying, classifying, and prioritizing the organization’s information assets.
These assets are the targets of various threats and threat agents, and the goal is to protect the assets from the threats.
Once we have gone through the process of self-examination, we can move into threat identification.
We begin the process by identifying and assessing the value of our information assets.
This iterative process begins with the identification of assets, including all of the following elements of an organization’s system: people, procedures, data, software, hardware, and network.
Then, we classify and categorize the assets, adding details as we dig deeper into the analysis.
Automated tools can sometimes uncover the system elements that make up the hardware, software, and network components.
Once stored, the inventory list must be kept current by using a tool that periodically refreshes the data.
When deciding which information assets to track, you may want to consider including these asset attributes:
· IP address
· MAC address
· Element type (e.g., DeviceClass = S for a server, DeviceOS = W2K for Windows 2000, DeviceCapacity = AS for Advanced Server)
· Serial number
· Manufacturer’s name
· Manufacturer’s model number or part number
· Software version, update revision, or FCO number
· Physical location
· Logical location
· Controlling entity
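The attribute list above could be captured in a simple, structured inventory record. The sketch below is purely illustrative: the class and field names are hypothetical, chosen to mirror the attributes listed, and are not a prescribed format.

```python
from dataclasses import dataclass

@dataclass
class HardwareAsset:
    """Illustrative inventory record for a hardware/network asset.

    Field names are hypothetical, mirroring the attribute list above.
    """
    name: str
    ip_address: str
    mac_address: str
    element_type: str        # e.g., DeviceClass = "S" (server)
    serial_number: str
    manufacturer: str
    model_number: str
    software_version: str
    physical_location: str
    logical_location: str
    controlling_entity: str

# One example entry in the asset inventory (all values invented)
asset = HardwareAsset(
    name="web01",
    ip_address="10.0.0.12",
    mac_address="00:1A:2B:3C:4D:5E",
    element_type="S",
    serial_number="SN-12345",
    manufacturer="ExampleCorp",
    model_number="X100",
    software_version="2.3",
    physical_location="Data center, rack 4",
    logical_location="DMZ",
    controlling_entity="IT Operations",
)
print(asset.element_type)
```

A structured record like this is what an automated refresh tool would update periodically to keep the inventory current.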
The human resources, documentation, and data information assets of an organization are not as easily identified and documented as tangible assets, such as hardware and software.
These assets should be identified, described, and evaluated by people, using knowledge, experience, and judgment.
Some attributes will be unique to a class of elements.
· People: Position name/number/ID (try to avoid names and stick to identifying positions, roles or functions); supervisor; security clearance level; special skills
· Procedures: Description; intended purpose; relationship to hardware, software, and networking elements; storage location for reference; storage location for update
· Data: Classification; owner/creator/manager; size of data structure; type of data structure used (sequential or relational); online or offline; location; backup procedures employed
Many organizations already have a classification scheme.
Examples of these kinds of classifications are confidential data, internal data, and public data. Informal organizations may have to organize themselves to create a useable data classification model.
The other side of the data classification scheme is the personnel security clearance structure, which identifies the level of information individuals are authorized to view based on what each person needs to know.
As each asset of the organization is assigned to a category, posing a number of questions assists in developing the weighting criteria to be used for asset valuation. These questions include:
· Which information asset is the most critical to the success of the organization?
· Which information asset generates the most revenue?
· Which information asset generates the most profitability?
· Which information asset would be the most expensive to replace?
· Which information asset would be the most expensive to protect?
· Which information asset would be the most embarrassing or cause the greatest liability if revealed?
The organization must also decide which of these factors is the most important to it.
Once each question has been weighted, calculating the importance of each asset is straightforward. The final step is to list the assets in order of importance. This can be achieved by using a weighted factor analysis worksheet.
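The weighted factor analysis described above can be sketched in a few lines. The criteria, weights, and per-asset scores below are invented for illustration; each criterion weight is a fraction of 1.0, and each asset is scored from 0 to 100 on each criterion.

```python
# Hypothetical weighting criteria (weights sum to 1.0)
criteria_weights = {"revenue impact": 0.5, "profitability": 0.3, "public image": 0.2}

# Hypothetical per-asset scores (0-100 on each criterion)
assets = {
    "Customer order via SSL":  {"revenue impact": 100, "profitability": 100, "public image": 100},
    "EDI document (inbound)":  {"revenue impact": 90,  "profitability": 90,  "public image": 90},
    "Customer service e-mail": {"revenue impact": 55,  "profitability": 35,  "public image": 100},
}

def weighted_score(scores, weights):
    """Weighted factor analysis: sum of (score * criterion weight)."""
    return sum(scores[c] * w for c, w in weights.items())

# Final step: list the assets in order of importance
ranked = sorted(assets, key=lambda a: weighted_score(assets[a], criteria_weights), reverse=True)
for name in ranked:
    print(f"{name}: {weighted_score(assets[name], criteria_weights):.1f}")
```

The output is the ranked list a weighted factor analysis worksheet would produce, with the highest-scoring asset first.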
Corporate and military organizations use a variety of classification schemes.
Information owners are responsible for classifying the information assets for which they are responsible.
The U.S. Military Classification Scheme has a more complex categorization system than required by most corporations. For most information, the military uses a five-level classification scheme: Unclassified, Sensitive But Unclassified (i.e., For Official Use Only), Confidential, Secret, and Top Secret.
Most organizations do not need the detailed level of classification used by the military or federal agencies. A simple scheme, such as Public, For Official Use Only, Sensitive, and Classified, will allow the organization to protect its sensitive information.
For each user of data in the organization, a single level of authorization must be assigned that indicates the level of classification he or she is authorized to view.
Before an individual is allowed access to a specific set of data, he or she must meet the need-to-know requirement.
Management of classified data includes its storage, distribution, portability, and destruction.
Information that is not unclassified or public must be clearly marked with its classification level.
When classified data is stored, it must be available only to authorized individuals.
When an individual carries classified information, it should be inconspicuous, as in a locked briefcase or portfolio.
The clean desk policy requires employees to secure all information in appropriate storage containers at the end of each day.
When copies of classified information are no longer valuable or excessive copies exist, proper care should be taken to destroy them by means of shredding, burning or transferring to an authorized document destruction service.
Some individuals would not hesitate to engage in dumpster diving to retrieve information that could embarrass an organization or compromise information security.
Each of the identified threats has the potential to attack any of the protected assets.
If you assume every threat can and will attack every information asset, the project scope will quickly become so complex that it will overwhelm the ability to plan.
To make this part of the process manageable, each step in the threat and vulnerability identification processes is managed separately and then coordinated at the end of the process.
Each threat must be examined to assess its potential impact on the targeted organization. This is referred to as a threat assessment.
To frame the discussion of threat assessment, address each threat with a few questions:
· Which threats present a danger to this organization’s assets in the given environment?
· Which threats represent the most danger to the organization’s information?
· How much would it cost to recover from a successful attack?
· Which of these threats would require the greatest expenditure to prevent?
We now face the challenge of reviewing each information asset for the threats it faces and creating a list of the vulnerabilities.
Vulnerabilities are specific avenues that threat agents can exploit to attack an information asset.
Now we examine how each of the possible or likely threats could be perpetrated, and we list the organization’s assets along with their vulnerabilities.
The process works best when groups of people with diverse backgrounds within the organization work iteratively in a series of brainstorming sessions.
At the end of the process, a list of information assets and vulnerabilities is developed. This list is the starting point for the next step, risk assessment.
We can determine the relative risk for each of the vulnerabilities through a process called risk assessment.
Risk assessment assigns a risk rating or score to each information asset, which is useful in gauging the relative risk to each vulnerable information asset and making comparative ratings later in the risk control process.
Likelihood is the probability that a specific vulnerability will be attacked. Likelihood values are assigned a number between 0.1 (low) and 1.0 (high).
Valuation of Information Assets. We can assign weighted scores for the value of each information asset to the organization. Scores can range from 1 to 100, with 100 reserved for mission-critical assets.
Factors in Risk Estimation
Percentage of Risk Mitigated by Current Controls. If a vulnerability is fully managed by an existing control, it no longer needs to be considered for additional controls and can be set aside. If it is partially controlled, estimate what percentage of the vulnerability has been controlled.
Uncertainty. It is not possible to know everything about each vulnerability, such as how likely it is to occur and how much impact a successful attack would have. We must apply judgment to add a factor into the equation to allow for an estimation of the uncertainty of the information.
For the purpose of relative risk assessment, risk equals the likelihood of vulnerability occurrence times the value (or impact), minus the percentage of risk already controlled, plus an element of uncertainty.
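Under one common reading of this formula, the percentage controlled and the uncertainty allowance are both applied to the likelihood-times-value product. The figures below are illustrative, not prescriptive:

```python
def relative_risk(likelihood, value, pct_controlled, uncertainty):
    """Relative risk score for one asset-vulnerability pair.

    likelihood:      0.1 (low) to 1.0 (high)
    value:           weighted asset value, 1 to 100
    pct_controlled:  fraction of the risk already mitigated (0.0 to 1.0)
    uncertainty:     allowance for imperfect knowledge (0.0 to 1.0)
    """
    base = likelihood * value
    # Subtract the portion already controlled, then add the uncertainty allowance
    return base * (1 - pct_controlled) + base * uncertainty

# Illustrative asset: value 50, likelihood 1.0, no current controls, 10% uncertainty
print(relative_risk(1.0, 50, 0.0, 0.10))  # 55.0
```

Because the inputs are rough estimates, the resulting score is useful only for comparative ranking, not as an absolute measure of risk.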
For each threat and its associated vulnerabilities that have residual risk, create a preliminary list of control ideas.
Residual risk is the risk that remains to the information asset even after the existing control has been applied.
One particular application of controls is in the area of access controls, which specifically address admission of a user into a trusted area of the organization.
There are a number of approaches to controlling access. Access controls can be discretionary, mandatory, or non-discretionary.
Discretionary access controls (DAC) are implemented at the discretion or option of the data user.
Mandatory access controls (MAC) are structured and coordinated with a data classification scheme, and are required.
Non-discretionary access controls are determined by a central authority in the organization and can be based on an individual’s role (role-based controls), a specified set of tasks the individual is assigned (task-based controls), or the specified lists maintained on subjects or objects.
Yet another type of non-discretionary access is a lattice-based access control, in which a lattice structure (or matrix) contains subjects and objects, and the boundaries between each pair are demarcated. This specifies the level of access (if any) that each subject has to each object.
In a lattice-based access control, the column of attributes associated with a particular object (such as a printer) is referred to as an access control list or ACL.
The row of attributes associated with a particular subject (such as a user) is referred to as a capabilities table.
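The lattice/matrix view can be sketched as a table of subjects against objects: an object’s column is its ACL, and a subject’s row is its capabilities table. The subjects, objects, and rights below are all invented for illustration:

```python
# Access matrix: rows are subjects, columns are objects, cells are sets of rights.
# All names and rights here are hypothetical.
access_matrix = {
    "alice": {"printer": {"print"},          "payroll_db": {"read", "write"}},
    "bob":   {"printer": {"print", "admin"}, "payroll_db": set()},
}

def acl(obj):
    """Access control list: the column of rights associated with one object."""
    return {subject: rights[obj] for subject, rights in access_matrix.items()}

def capabilities(subject):
    """Capabilities table: the row of rights associated with one subject."""
    return access_matrix[subject]

print(acl("printer"))        # each subject's rights on the printer
print(capabilities("alice")) # alice's rights on every object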
The goal of this process has been to identify the organization’s information assets that have specific vulnerabilities and list them, ranked according to those that most need protection.
In preparing this list, we have collected and preserved a wealth of factual information about the assets, the threats they face, and the vulnerabilities they experience.
We should also have collected some information about the controls that are already in place.
The process you develop for risk identification should include defining what function the reports will serve, who is responsible for preparing the reports, and who is responsible for reviewing them.
The ranked vulnerability risk worksheet is the working document for the next step in the risk management process: assessing and controlling risk.
When management has determined that risks from information security threats are creating a competitive disadvantage, they empower the information technology and information security communities to control the risks.
Once the project team for information security development has created the ranked vulnerability risk worksheet, the team must choose one of four basic strategies to control the risks that result from these vulnerabilities.
The four strategies are:
· Apply safeguards that eliminate or reduce the remaining uncontrolled risks for the vulnerability (avoidance).
· Transfer the risk to other areas or to outside entities (transference).
· Reduce the impact should the vulnerability be exploited (mitigation).
· Understand the consequences and accept the risk without control or mitigation (acceptance).
Avoidance is the risk control strategy that attempts to prevent the realization or exploitation of the vulnerability. This is the preferred approach, as it seeks to avoid risk in its entirety rather than dealing with it after it has been realized.
Avoidance is accomplished through countering threats, removing vulnerabilities in assets, limiting access to assets, and adding protective safeguards.
There are three common methods of risk avoidance: avoidance through application of policy, avoidance through application of training and education, and avoidance through application of technology.
Transference is the control approach that attempts to shift the risk to other assets, other processes, or other organizations.
If an organization does not already have quality security management and administration experience, it should hire individuals or firms that provide such expertise.
This allows the organization to transfer the risk associated with the management of these complex systems to another organization with established experience in dealing with those risks.
Mitigation is the control approach that attempts to reduce the impact caused by the exploitation of vulnerability through planning and preparation.
This approach includes three types of plans: disaster recovery plans (DRP), business continuity plans (BCP), and incident response plans (IRP).
Mitigation begins with the early detection that an attack is in progress.
The most common mitigation procedure is the disaster recovery plan.
The DRP includes the entire spectrum of activities used to recover from an incident. The DRP can include strategies to limit losses before and during the disaster.
DRPs usually include all preparations for the recovery process, strategies to limit losses during the disaster, and detailed steps to follow when the disaster has ended.
The actions that an organization should take while the incident is in progress are defined in a document called the incident response plan or IRP.
The IRP provides answers to questions that victims might pose in the midst of a disaster.
It answers these questions:
· What do I do NOW?
· What should the administrators do first?
· Whom should they contact?
· What should they document?
DRP and IRP planning overlap to a degree. In many ways, the DRP is the subsection of the IRP that covers disastrous events.
While some DRP and IRP decisions and actions are the same, their urgency and results can differ dramatically.
The DRP focuses more on preparations completed before and actions taken after the incident, while the IRP focuses on intelligence gathering, information analysis, coordinated decision-making, and urgent, concrete actions.
The third type of planning document within the mitigation strategy is the business continuity plan or BCP.
The BCP is the most strategic and long term of the three plans. It encompasses the continuation of business activities if a catastrophic event occurs, such as the loss of an entire database, building, or operations center.
The BCP includes planning the steps necessary to ensure the continuation of the organization when the scope or scale of a disaster exceeds the DRP’s ability to restore operations.
With the acceptance control approach, an organization evaluates the risk of a vulnerability and allows the risky state to continue as is.
The only acceptance strategy that is recognized as valid occurs when the organization has:
· Determined the level of risk.
· Assessed the probability of attack.
· Estimated the potential damage that could occur from these attacks.
· Performed a thorough cost-benefit analysis.
· Evaluated controls using each appropriate type of feasibility.
· Decided that the particular function, service, information, or asset did not justify the cost of protection.
Acceptance of risk is the choice to do nothing to protect a vulnerability and to accept the outcome of its exploitation.
This control, or rather lack of control, is based on the assumption that it may be a prudent business decision to examine the alternatives and determine that the cost of protecting an asset does not justify the security expenditure.
The term risk appetite is used to describe the degree to which an organization is willing to accept risk as a trade-off for the expense of applying controls.
The level of threat and value of the asset should play a major role in the selection of a risk control strategy.
The following rules of thumb can be applied in selecting the preferred strategy:
· When a vulnerability exists, implement security controls to reduce the likelihood of a vulnerability’s being exercised.
· When a vulnerability can be exploited, apply layered protections, architectural designs, and administrative controls to minimize the risk or prevent this occurrence.
· When the attacker’s cost is less than his potential gain, apply protections to increase the attacker’s cost (e.g., use system controls to limit what a system user can access and do, thereby significantly reducing an attacker’s gain).
· When potential loss is substantial, apply design principles, architectural designs, and technical and non-technical protections to limit the extent of the attack, thereby reducing the potential for loss.
Controlling risk through avoidance, mitigation, or transference may be accomplished by implementing controls or safeguards.
There are four categories of controls:
· Control function
· Architectural layer
· Strategy layer
· Information security principle
Controls or safeguards designed to defend a vulnerability are either preventive or detective.
Preventive controls stop attempts to exploit a vulnerability by implementing enforcement of an organizational policy or a security principle, such as authentication or confidentiality.
Detective controls warn of violations of security principles, organizational policies, or attempts to exploit vulnerabilities. Detective controls use techniques such as audit trails, intrusion detection, or configuration monitoring.
Some controls apply to one or more layers of an organization’s technical architecture.
Among the commonly used architectural layer designators are: organizational policy, external networks, extranets (or demilitarized zones), intranets (WANs and LANs), network devices that interface between network zones (switches, routers, firewalls, and hubs), systems (mainframe, server, or desktop computers), and applications.
Controls are sometimes classified by the risk control strategy they operate within: avoidance, mitigation, or transference.
Controls operate within one or more of the commonly accepted information security principles, such as confidentiality, integrity, availability, authentication, authorization, accountability, and privacy.
Before deciding on the strategy for a specific vulnerability, all the economic and non-economic consequences of the vulnerability facing the information asset must be explored.
We need to ask, “What are the actual and perceived advantages of implementing a control, as opposed to the actual and perceived disadvantages of implementing it?”
The approach most commonly considered for a project of information security controls and safeguards is the economic feasibility of implementation.
An organization begins by evaluating the worth of the information assets to be protected and the loss in value if those information assets are compromised by the specific vulnerability.
It is only common sense that an organization should not spend more to protect an asset than it is worth.
The formal process to document this is called a cost benefit analysis or an economic feasibility study.
Just as it is difficult to determine the value of information, it is also difficult to determine the costs of safeguards.
Some of the items that impact the cost of a control or safeguard include:
· Cost of development or acquisition
· Training fees
· Cost of implementation
· Service costs
· Cost of maintenance
Benefit is the value that the organization recognizes by using controls to prevent losses associated with a specific vulnerability.
This is usually determined by valuing the information asset or assets exposed by the vulnerability, and then determining how much of that value is at risk and how likely the loss is.
Asset valuation is the process of assigning financial value or worth to each information asset. Some will argue that it is virtually impossible to accurately determine the true value of information and information-bearing assets.
The valuation of assets involves estimation of real and perceived costs associated with the design, development, installation, maintenance, protection, recovery, and defense against market loss, and litigation for every set of information bearing systems or information assets.
Some of the components of asset valuation include:
· Value retained from the cost of creating or acquiring the information asset.
· Value retained from past maintenance of the information asset.
· Value implied by the cost of replacing the information.
· Value from providing the information.
· Value incurred from the cost of protecting the information.
· Value to owners.
· Value of intellectual property.
· Value to adversaries.
· Loss of productivity while the information assets are unavailable.
· Loss of revenue while information assets are unavailable.
The organization must be able to place a dollar value on each collection of information and the information assets it comprises.
This value is based on the answers to these questions:
· How much did it cost to create or acquire this information?
· How much would it cost to recreate or recover this information?
· How much does it cost to maintain this information?
· How much is this information worth to the organization?
· How much is this information worth to the competition?
Once an organization has estimated the worth of various assets, it can begin to examine the potential loss that could occur from the exploitation of a vulnerability or a threat occurrence. This process results in the estimate of potential loss per risk.
The questions that must be asked here include:
· What damage could occur, and what financial impact would it have?
· What would it cost to recover from the attack, in addition to the financial impact of damage?
· What is the single loss expectancy for each risk?
The expected value of a loss can be stated in the following equation:

Annualized Loss Expectancy (ALE) = Single Loss Expectancy (SLE) × Annualized Rate of Occurrence (ARO)

where SLE equals the asset value times the exposure factor (EF).
The ARO is how often you expect a specific type of attack to occur. SLE is the calculation of the value associated with the most likely loss from an attack. EF is the percentage loss that would occur from a given vulnerability being exploited.
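These definitions translate directly into a short calculation. The asset value, exposure factor, and rate of occurrence below are invented for illustration:

```python
def single_loss_expectancy(asset_value, exposure_factor):
    """SLE = asset value * EF (the fraction of value lost in one incident)."""
    return asset_value * exposure_factor

def annualized_loss_expectancy(sle, aro):
    """ALE = SLE * ARO (expected incidents per year)."""
    return sle * aro

# Hypothetical web site: value $1,000,000; a defacement destroys 10% of that
# value (EF = 0.10); such an attack is expected once every two years (ARO = 0.5).
sle = single_loss_expectancy(1_000_000, 0.10)  # $100,000 per incident
ale = annualized_loss_expectancy(sle, 0.5)     # $50,000 per year
print(sle, ale)
```

The ALE gives the organization a yearly figure it can weigh against the yearly cost of a safeguard.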
In its simplest definition, the CBA determines whether or not the control alternative being evaluated is worth the associated cost incurred to control the specific vulnerability.
The CBA is calculated using the ALE from earlier assessments.
CBA = ALE(prior) – ALE(post) – ACS
ALE(prior) is the annualized loss expectancy of the risk before the implementation of the control.
ALE(post) is the ALE examined after the control has been in place for a period of time.
ACS is the annual cost of the safeguard.
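The CBA formula is a one-line calculation; the dollar figures below are invented to show the arithmetic:

```python
def cost_benefit(ale_prior, ale_post, annual_cost_of_safeguard):
    """CBA = ALE(prior) - ALE(post) - ACS.

    A positive result means the safeguard saves more per year than it costs.
    """
    return ale_prior - ale_post - annual_cost_of_safeguard

# Hypothetical: a control cuts the ALE from $50,000 to $10,000
# and costs $15,000 per year to operate.
print(cost_benefit(50_000, 10_000, 15_000))  # 25000
```

Here the control is economically justified: it yields a net benefit of $25,000 per year.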
Instead of determining the financial value of information and then implementing security as an acceptable percentage of that value, an organization could take a different approach and look to peer institutions for benchmarks.
Benchmarking is the process of seeking out and studying the practices used in other organizations that produce the results you desire in your organization.
When benchmarking, an organization typically uses one of two measures to compare practices: metrics-based measures or process-based measures.
Metrics-based measures are comparisons based on numerical standards, such as:
· Number of successful attacks.
· Staff hours spent on systems protection.
· Dollars spent on protection.
· Number of security personnel.
· Estimated value in dollars of the information lost in successful attacks.
· Loss in productivity hours associated with successful attacks.
An organization uses this information to rank itself against competitive businesses of a similar size or in a similar market and to determine how it measures up to the competition.
Process-based measures are generally less focused on numbers and more strategic than metrics-based measures.
For each of the areas the organization is interested in benchmarking, process-based measures enable the organization to examine the activities an individual company performs in pursuit of its goal, rather than the specifics of how goals are attained.
The primary focus is the method the organization uses to accomplish a particular process, rather than the outcome.
In information security, two categories of benchmarks are used: standards of due care and due diligence, and best practices.
Within best practices, gold standard is a sub-category of practices that are typically viewed as “the best of the best.”
When organizations adopt levels of security for a legal defense, they may need to show that they have done what any prudent organization would do in similar circumstances. This is referred to as a standard of due care.
Organizations cannot implement these standards and then ignore them. The application of controls at or above the prescribed levels and the maintenance of those standards of due care show that the organization has performed due diligence.
Due diligence is the demonstration that the organization is diligent in ensuring that the implemented standards continue to provide the required level of protection.
Failure to support a standard of due care or due diligence can open an organization to legal liability, provided it can be shown that the organization was negligent in its application or lack of application of information protection.
Security efforts that seek to provide a superior level of performance in the protection of information are referred to as best business practices or simply best practices or recommended practices.
Best security practices (BSPs) are those security efforts that are among the best in the industry, balancing the need for access with the need to provide adequate protection.
Best practices seek to provide as much security as possible for information and systems while maintaining a solid degree of fiscal responsibility.
When considering the adoption of best practices in your organization, consider the following:
· Does your organization resemble the identified target organization of the best practice?
· Are the resources you can expend similar to those identified in the best practice? A best practice proposal that assumes unlimited funding and does not identify needed tradeoffs will be of limited value if your approach has strict resource limits.
· Are you in a threat environment similar to the one assumed by the best practice? A best practice proposed months or even weeks ago may not be appropriate for the current threat environment.
Problems with Benchmarking and Best Practices
The biggest problem with benchmarking in information security is that organizations don’t talk to each other.
Another problem with benchmarking is that no two organizations are identical.
A third problem is that best practices are a moving target. What worked well two years ago may be completely worthless against today’s threats.
One last issue to consider is that simply knowing what was going on a few years ago, as in benchmarking, doesn’t necessarily tell us what to do next.
Baselining is the analysis of measures against established standards.
In information security, a baseline of security activities and events is established so that the organization’s future performance can be compared against it.
When baselining, it is useful to have a guide to the overall process.
In addition to economic feasibility, several other methods can be used to determine whether a proposed control is practicable.
Organizational feasibility examines how well the proposed information security alternatives will contribute to the efficiency, effectiveness, and overall operation of an organization.
Above and beyond the impact on the bottom line, the organization must determine how the proposed alternatives contribute to the business objectives of the organization.
Operational feasibility addresses user acceptance and support, management acceptance and support, and the overall requirements of the organization’s stakeholders. Operational feasibility is sometimes known as behavioral feasibility, because it measures the behavior of users.
One of the fundamental principles of systems development is obtaining user buy-in on a project. A common method for obtaining user acceptance and support is through user involvement. User involvement can be obtained through three simple steps: communication, education, and involvement.
In addition to the straightforward feasibilities associated with the economic costs and benefits of the controls, the project team must also consider the technical feasibilities associated with the design, implementation, and management of controls.
Technical feasibility examines whether or not the organization has the technology necessary to implement and support the control alternatives.
For some organizations, the most significant feasibility evaluated may be political. Within organizations, political feasibility defines what can and cannot occur based on the consensus and relationships between the communities of interest.
The limits placed on an organization’s actions or behaviors by the information security controls must fit within the realm of the possible before they can be effectively implemented, and that realm includes the availability of staff resources.
At a minimum, each information asset-vulnerability pair should have a documented control strategy that clearly identifies any residual risk remaining after the proposed strategy has been executed.
Some organizations document the outcome of the control strategy for each information asset-vulnerability pair as an action plan.
This action plan includes concrete tasks, each with accountability assigned to an organizational unit or to an individual.
We must convince budget authorities to spend up to the value of a particular asset in order to protect it from an identified threat.
Each control or safeguard that is implemented will impact more than one threat-asset pair.
Between the impossible task associated with the valuation of information assets, and the dynamic nature of the ALE calculations, it’s no wonder organizations are looking for a more straightforward method of implementing controls that doesn’t involve such imperfect calculations.
The spectrum of steps previously described was performed with actual values or estimates. This is known as a quantitative assessment.
However, an organization could determine that it cannot put specific numbers on these values. Fortunately, it is possible to repeat these steps using estimates based on a qualitative assessment.
Instead of using specific values, scales can be developed to simplify the process.
How do you calculate the values and scales of either qualitative or quantitative assessment? One technique for accurately estimating scales and values is the Delphi Technique.
The Delphi Technique, named for the Oracle at Delphi, is a process whereby a group rates or ranks a set of information.
The individual responses are compiled and then returned to the group for another iteration.
This process continues until the group is satisfied with the result.
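The compile-and-return loop of the Delphi Technique can be mechanized in a few lines. This is a toy sketch under simplifying assumptions (real Delphi rounds are anonymous, moderated surveys, and the revision rule here is invented): each participant sees the group summary and moves toward it, and the loop stops once the spread of ratings falls within a tolerance.

```python
from statistics import mean, pstdev

def delphi_round(ratings):
    """Compile one round: the group summary returned to participants."""
    return {"mean": mean(ratings), "spread": pstdev(ratings)}

def delphi(initial_ratings, revise, max_rounds=10, tolerance=0.5):
    """Iterate until the spread of ratings falls within tolerance (consensus)."""
    ratings = list(initial_ratings)
    for _ in range(max_rounds):
        summary = delphi_round(ratings)
        if summary["spread"] <= tolerance:
            return summary  # the group is satisfied with the result
        # Each participant revises their rating after seeing the summary
        ratings = [revise(r, summary) for r in ratings]
    return delphi_round(ratings)

# Toy revision rule: move each rating halfway toward the group mean
result = delphi([2, 5, 9], lambda r, s: r + (s["mean"] - r) / 2)
print(result)
```

With this revision rule the group mean is preserved while the spread halves each round, so the process converges on a consensus rating.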
Access control list
Annualized loss expectancy (ALE)
Annualized rate of occurrence (ARO)
Best business practices
Business continuity plan (BCP)
Business recovery site
Clean desk policy
Cost benefit analysis
Disaster recovery plan (DRP)
Discretionary access control
Economic feasibility study
Incident response plan (IRP)
Lattice-based access control
Mandatory access control
Performance gap
Policy
Single loss expectancy (SLE)