1
Auditing Networks, Perimeters and Systems
Unit 3 – STAR Case Study
The SANS Institute
Copyright 2001 Marchany
2
TBS Case Study (Sort of…)
We applied some, but not all, TBS concepts in our first attempt to determine the status of our asset security. The process took about 12 months, with the security committee meeting once every 2-3 weeks. We're starting the fourth phase and are applying more TBS concepts this time.

This is an example of how we applied TBS techniques to our site's risk analysis process, using some but not all of the TBS methodologies mentioned in the previous slides. It is the "real world" example of the theory, and this section describes the process behind the creation of the STAR technique. The fruits of the committee's labors are available on our web site.
3
The Committee
Management and technical personnel from the major areas of IS: University Libraries, Educational Technologies, University Network Management Group, University Computing Center, and Administrative Information Systems. These areas are the departments of our IS division.

Our security committee consisted of upper management (VPs, directors), middle managers, and techies (sysadmins, netadmins). As impressive as that sounds, there were only 9 people on the committee. Why an odd number? There can never be a tie, so a decision is always made.
4
The Committee's Scope
Information Systems Division only. The committee identified and prioritized the ASSETS, the RISKS associated with those ASSETS, and the CONTROLS that may be applied to the ASSETS to mitigate the RISKS. It did NOT specifically consider assets outside IS control; however, those assets are included as clients when considering access to the assets we wish to protect.

The committee dealt with the IS assets only but recognized that the threats to those assets were constant across the entire organization. We also tried to categorize controls for the risks (a slight variation on TBS techniques). Here we follow the TBS strategy of identifying our assets; this information came from our machine room managers, network managers, and property inventory control groups.
5
The Committee’s Charge
From our VP for Information Systems:
"Establish whether IS units are taking all reasonable precautions to protect info resources and to assure the accurate & reliable delivery of service."
"Investigate and advise the VPIS as to the security of systems throughout the university…. Provide documentation of the security measures in place."

Our Vice President for Information Systems reports directly to the President, and this was his charge to the committee.
6
Identifying the Assets
Compiled a list of IS assets (100+ systems) and categorized them as critical, essential, or normal.
Critical - VT can't operate without this asset for even a short period of time.
Essential - VT could work around the loss of the asset for up to a week, but the asset needs to be returned to service as soon as possible.
Normal - VT could operate without this asset for a finite period, but entities may need to identify alternatives.

Our asset matrix contained about 100 individual systems, and we prioritized them according to the 3 categories listed above. We modified the original TBS asset categories to fit our needs.
7
Prioritizing the Assets
The network (routers, bridges, cabling, etc.) was treated as a single entity and deemed critical. Some assets were classified as critical and then rank ordered using a matrix prioritization technique: each asset was compared to the others, and members voted on their relative importance. Members could split their vote.

Rather than deal with the individual components of the network (cabling, routers, switches, bridges, etc.), we lumped them into a single entity called the "network." The second bullet is a long-winded way of saying that the committee voted based on their prejudices about the value of the assets to their work. Real users, such as managers, use the computer as a tool to help them do their real job; in other words, the computer is like a stapler. The techies (sysadmins) don't want to mess with the machines and think the users should adapt to the machines rather than the other way around.
8
Prioritizing the Assets
Asset weight values were calculated by a simple formula: Weight = sum of vote values. Criteria: criticality, value to the organization, impact of an outage. Team members vote for their top 5 assets in order:
First place vote = 5 points times # of votes received
Second place vote = 4 points times # of votes received
Third place vote = 3 points times # of votes received
Fourth place vote = 2 points times # of votes received
Fifth place vote = 1 point times # of votes received
This determines the criticality of the assets listed in Exhibit A.

How did we select which assets were critical, essential, or normal? This was the original formula we used. There were certainly 1 or 2 systems that everyone agreed were critical. The individual committee members were asked to vote for their top 5 critical assets; once the votes were counted, the formula above helped us define the list of critical assets. We followed a similar strategy in determining the list of essential assets, and the remaining assets were categorized as normal. The assets listed in Exhibit A are the critical assets, and this is how we determined which ones were critical. What happened to the essential and normal assets? We need to prioritize our defenses, so we pick the critical assets first; what applies to a critical asset will apply to the other categories. Once the assets were organized into critical, essential, or normal categories, we needed to determine which was the most critical of the critical assets, so we calculated weight values for the critical assets. These values are shown in Exhibit B.
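Below is a minimal sketch of this scoring scheme in Python. The asset names and ballots are hypothetical, not the committee's actual data; only the 5-4-3-2-1 point values come from the slide.

```python
# Sketch of the committee's ranked-vote weighting scheme: a first-place vote
# is worth 5 points, second place 4, and so on down to 1 for fifth place.
# The asset names and ballots below are hypothetical.

POINTS = {1: 5, 2: 4, 3: 3, 4: 2, 5: 1}   # rank -> point value

# Each ballot is one member's top-5 assets, most critical first.
ballots = [
    ["network", "mail server", "dns", "student records", "payroll"],
    ["network", "dns", "mail server", "payroll", "web server"],
    ["student records", "network", "payroll", "mail server", "dns"],
]

weights = {}
for ballot in ballots:
    for rank, asset in enumerate(ballot, start=1):
        weights[asset] = weights.get(asset, 0) + POINTS[rank]

# Sort by total weight: the highest totals are the most critical assets.
for asset, weight in sorted(weights.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{weight:3d}  {asset}")
```

Running the same tally with different criteria (scope of impact, probability of an incident) gives the risk weights described on a later slide.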
9
Identifying the Risks
A RISK was selected if it could cause an incident that would:
Be extremely expensive to fix
Result in the loss of a critical service
Result in heavy, negative publicity, especially outside the university
Have a high probability of occurring
Risks were prioritized using the matrix prioritization technique.

The 4 criteria for considering a risk are listed above. The negative publicity requirement was added from bitter experience. The probability of the risk occurring was important as well: a virus has a higher probability of occurring than a meteor hitting the data center. Again, the committee members voted based on their knowledge and prejudices about the risks.
10
Prioritizing the Risks
Same formula as for prioritizing the assets. Criteria: scope of impact and probability of an incident. Weight = sum of vote values. This determines the criticality of the risks shown in Exhibit B.

This vote was done in a similar manner to the voting process for prioritizing the assets. Members were allowed to split their vote.
11
Prioritizing the Assets & Risks
The values in the first (white) column of Exhibits B and D are the weight values assigned to the asset or risk. The ordering of the assets and risks was determined by a simple vote: how many members think asset 1 is more critical than asset 2? The same was done for the risks. The votes are shown in the white squares.
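Here is a small sketch of that pairwise vote. The assets and the counts in the "white squares" are invented for illustration; with an odd number of voters (9 on the real committee), a pair can only tie if someone splits a vote.

```python
# Hypothetical pairwise vote tallies: votes[(a, b)] = number of members who
# judged a more critical than b (the "white square" counts in the exhibit).
votes = {
    ("network", "mail server"): 7, ("mail server", "network"): 2,
    ("network", "dns"): 6,         ("dns", "network"): 3,
    ("dns", "mail server"): 5,     ("mail server", "dns"): 4,
}

# Order the assets by how many pairwise contests each one wins.
assets = {a for pair in votes for a in pair}
wins = {a: sum(1 for (x, y), n in votes.items()
               if x == a and n > votes[(y, x)]) for a in assets}
print(sorted(assets, key=lambda a: wins[a], reverse=True))
# ['network', 'dns', 'mail server']
```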
12
Mapping Risks and Assets
We built a matrix that maps the ordered list of critical assets against the ordered list of risks, regardless of whether:
A particular risk actually applies to the asset
Controls exist and/or are already in place
The matrix provides general guidance about the order in which each asset/risk pair is examined; all assets and risks need to be examined eventually.

Once we had the assets and risks prioritized by vote, we combined the two into the matrix. Position is based on importance, with the more important risks and assets placed toward the upper left corner of the matrix. The cell values are simply the product of the two weights divided by 100.
13
Mapping Risks and Assets
The more critical assets and risks, as determined by the matrices in Exhibits B & D, are closer to the upper left corner of the matrix. An example of this is Exhibit E. The weight of an Asset-Risk pair = (Asset Weight * Risk Weight) / 100; these values are listed in the cells of Exhibit E.
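A minimal sketch of the Exhibit E calculation, using hypothetical weights rather than the values in Exhibits B and D:

```python
# Each cell of the asset/risk matrix is (asset weight * risk weight) / 100.
# The weights below are hypothetical placeholders for Exhibits B and D.
asset_weights = {"network": 45, "mail server": 30, "dns": 25}
risk_weights = {"intrusion": 40, "outage": 35, "virus": 25}

# Rows and columns are already ordered most critical first, so the largest
# cell values fall toward the upper left corner of the matrix.
matrix = {
    (asset, risk): (aw * rw) / 100
    for asset, aw in asset_weights.items()
    for risk, rw in risk_weights.items()
}

print(matrix[("network", "intrusion")])   # 18.0 -- the upper-left cell
```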
14
Identifying Controls
Specific controls identified by the committee were put in a matrix. The controls were then mapped against the list of risks; the cells contain the control IDs that can mitigate a particular risk for a particular asset.

We listed all the controls that would mitigate the risks to the assets. These controls are listed in no particular order; it is simply the order in which the committee thought of them.
15
Mapping Controls to the R/A Matrix
Exhibit G shows the controls that apply to a particular Asset-Risk pair; Exhibit F lists the controls that could be applied to a Risk. Example: for the Site 1-Sysadmin Practice pair, the cell at the intersection of the 2 items lists controls 7, 13, 14, 30, 33 as possible controls to mitigate the risk on the asset.
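One possible way to represent Exhibits F and G in code is sketched below. The control IDs come from the Site 1-Sysadmin Practice example above; the data structures themselves are an assumption, not the committee's format.

```python
# Exhibit F (sketch): control IDs that could be applied to each risk.
controls_for_risk = {
    "Sysadmin Practice": [7, 13, 14, 30, 33],
}

# Exhibit G (sketch): controls that mitigate a given risk on a given asset.
controls_for_pair = {
    ("Site 1", "Sysadmin Practice"): [7, 13, 14, 30, 33],
}

print(controls_for_pair[("Site 1", "Sysadmin Practice")])
# [7, 13, 14, 30, 33]
```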
16
The Overall Compliance Matrix
This is a 1-page overall report of the status of the assets listed in the Exhibit E (Asset/Risk) matrix. Assets are listed on the X-axis, risks on the Y-axis, and color codes show whether the asset is protected from the risks. It is shown in Appendix 2.

At this point, we have the following: a list of assets in a matrix, a list of risks in a matrix, a mapping of risks to assets, and a list of controls for the risks. These tables give us the information needed to assign our scarce security dollars. The next set of matrices is used to determine our compliance. They are the reporting matrices that could be used by either the auditor or the sysadmin/security officer to determine the security level of an asset. The list is tailored to the specific risks and assets listed in the previous set of matrices, so we have a useful set of information. The individual action items are color coded red, yellow, green, or gray and list whether a particular action was taken.
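A sketch of how the color codes might be derived is shown below. The exact meaning of the four colors is an assumption here (the slides only say the colors show whether the asset is protected): green = all required controls in place, yellow = some, red = none, gray = the risk does not apply.

```python
# Hypothetical data: controls required for an asset/risk pair vs. those
# actually installed. The color rules are an assumption, not from the slides.
required = {("Site 1", "Sysadmin Practice"): {7, 13, 14, 30, 33}}
installed = {("Site 1", "Sysadmin Practice"): {7, 13, 30}}

def cell_color(pair):
    need = required.get(pair)
    if not need:
        return "gray"                      # risk does not apply to this asset
    have = installed.get(pair, set()) & need
    if have == need:
        return "green"                     # fully protected
    return "yellow" if have else "red"     # partially protected / unprotected

print(cell_color(("Site 1", "Sysadmin Practice")))   # yellow
```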
17
The Asset/Risk Compliance Matrix
Another way of displaying the report.
18
The Control Compliance Matrix
Lists the controls from Exhibit F and shows whether each control is installed on a particular asset. This is a quick way to determine which controls are on which asset.
19
The Individual Action Compliance Matrix
Assets are listed on the X-axis and risks on the Y-axis. Subcategories of the risks are listed, and compliance is shown by color coding the cells. The Audit/Security checklist (shown at the end of Appendix B) contains the actual commands to perform each task.
20
The Audit/Security Checklist - I
The detailed commands used to check an asset. Based on the Defense Information Infrastructure (DII) and Common Operating Environment (COE) initiative. We took the checklists from that site, modified them according to our R/A matrix, and built checklists for Sun, IBM, and NT. Our thanks to the unknown author who wrote the original document. A fragment is shown in Appendix 3; the full document is available in the Checklists section.

The compliance matrices are the "report" piece of the security review. The actual checklist is shown in the appendix. This document contains the actual commands used to perform the tests; they could be automated or done manually. Again, the tests are tailored specifically to the requirements of the security review. We waste our time, in other words, only on the things that matter to us. The original checklists were available on the WWW from a .gov site; our deep appreciation to the unknown author for saving us an incredible amount of time.
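Below is a hypothetical sketch of automating a single checklist item. The check shown (a world-writable /etc/passwd) is a generic Unix example, not an item taken from the DII/COE-derived checklist itself.

```python
# Hypothetical checklist runner: each named check returns True (pass) or
# False (fail); errors are reported separately.
import os
import stat

def passwd_not_world_writable(path="/etc/passwd"):
    """Pass if the password file is not writable by 'other'."""
    mode = os.stat(path).st_mode
    return not (mode & stat.S_IWOTH)

checks = {"passwd file permissions": passwd_not_world_writable}

for name, check in checks.items():
    try:
        status = "PASS" if check() else "FAIL"
    except OSError:
        status = "ERROR"                    # file missing or unreadable
    print(f"{status:5s}  {name}")
```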
21
The Audit/Security Checklist - II
We're now using the CIS Benchmark Rulers as our checklists. The CIS provides a scanning tool that lets us check the status of our systems quickly; the scanning tool and the checklist are available for download.

Another example of changing times…. The original checklists mentioned in the previous slide proved to be very comprehensive but also very unwieldy and cumbersome. We've started to use the CIS Benchmark Rulers as a base set of procedures, modifying them to fit our STAR requirements. The CIS scanning tool checks your system against the benchmark and gives you a "score" of its relative security rating. We'll talk more about the CIS Rulers later in this course.
22
STAR Lab Exercise
We're going to walk through the STAR process as a group. I'll provide the asset matrix, and we are going to rank the assets. I'll provide the risk matrix, and we are going to rank the risks. We'll then map the asset-risk matrix to see how our votes create an "audit" strategy. I expect a lively discussion.
23
Recommendations
The STAR process recommends a general order in which IS should apply scarce resources to perform a cost-benefit analysis for the various assets & risks. For each asset, as directed by management, appropriate staff should:
Review the risks & controls
Add any further risks/controls not identified
Assess the potential cost of an incident
Assess the cost of control purchases and deployment
Analyze cost vs. benefit for each asset
Submit results to management, which retains the responsibility to weigh investments and make implementation decisions
These recommendations were made by the committee in its report to upper management.
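A minimal sketch of the cost vs. benefit comparison is shown below. The dollar figures and the expected-loss estimate (probability times incident cost) are assumptions; the slides only list the assessment steps.

```python
# Hypothetical cost/benefit comparison for one asset and one control.
incident_cost = 250_000      # estimated cost of a single incident (assumed)
annual_probability = 0.10    # chance of that incident in a year (assumed)
control_cost = 15_000        # purchase plus deployment of the control (assumed)

expected_annual_loss = incident_cost * annual_probability
net_benefit = expected_annual_loss - control_cost
print(f"Expected annual loss: {expected_annual_loss:,.0f}")
print(f"Net benefit of deploying the control: {net_benefit:,.0f}")
```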
24
Conclusions
TBS provides a quantitative, repeatable method of prioritizing your assets. The matrices provide an easy-to-read summary of the state of your assets, and they can be used to provide your auditors with the information they need. The checklist contains the detailed commands to perform the audit/security check.

The matrices in the appendices should give you a quick way to list the results of your security review, determine your compliance with the action items, and provide you with the platform-specific commands to actually do the review.
25
Course Revision History
v1.1 – March 2001
v1.2 – edited by J. Kolde, May 2001
v1.3 – edited by R. Marchany, 5/29/01
v1.4 – rcm, 6/19/01