SSCP : System Security Certified Practitioner (SSCP) : Part 46

  1. What is the Maximum Tolerable Downtime (MTD)?

    • Maximum elapsed time required to complete recovery of application data
    • Minimum elapsed time required to complete recovery of application data
    • Maximum elapsed time required to move back to primary site after a major disruption
    • It is maximum delay businesses can tolerate and still remain viable

    Explanation:

    The Maximum Tolerable Downtime (MTD) is the maximum length of time a BUSINESS FUNCTION can endure without being restored, beyond which the BUSINESS is no longer viable

    NIST SAYS:
    The ISCP Coordinator should analyze the supported mission/business processes and, with the process owners, leadership, and business managers, determine the acceptable downtime if a given process or specific system data were disrupted or otherwise unavailable. Downtime can be identified in several ways.

    Maximum Tolerable Downtime (MTD). The MTD represents the total amount of time the system owner/authorizing official is willing to accept for a mission/business process outage or disruption, and includes all impact considerations. Determining the MTD is important because failing to do so could leave contingency planners with imprecise direction on the selection of an appropriate recovery method and on the depth of detail that will be required when developing recovery procedures, including their scope and content.

    Other BCP and DRP terms you must be familiar with are:

    Recovery Time Objective (RTO). RTO defines the maximum amount of time that a system resource can remain unavailable before there is an unacceptable impact on other system resources, supported mission/business processes, and the MTD. Determining the information system resource RTO is important for selecting appropriate technologies that are best suited for meeting the MTD. When it is not feasible to immediately meet the RTO and the MTD is inflexible, a Plan of Action and Milestone should be initiated to document the situation and plan for its mitigation.

    Recovery Point Objective (RPO). The RPO represents the point in time, prior to a disruption or system outage, to which mission/business process data can be recovered (given the most recent backup copy of the data) after an outage. Unlike RTO, RPO is not considered as part of MTD. Rather, it is a factor of how much data loss the mission/business process can tolerate during the recovery process. Because the RTO must ensure that the MTD is not exceeded, the RTO must normally be shorter than the MTD. For example, a system outage may prevent a particular process from being completed, and because it takes time to reprocess the data, that additional processing time must be added to the RTO to stay within the time limit established by the MTD.
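
    To see how these terms fit together, here is a minimal sketch in Python using hypothetical numbers (a 24-hour MTD, a 16-hour RTO, and 4 hours of reprocessing for the data lost within the RPO window); the values are illustrative only, not from the NIST reference.

    ```python
    # Hypothetical values, in hours -- chosen only to illustrate the relationship.
    MTD = 24   # maximum tolerable downtime for the business process
    RTO = 16   # target time to restore the supporting system
    RPO = 4    # data loss window: time since the most recent usable backup

    # Reprocessing the data lost during the RPO window takes additional time
    # that must be added on top of the RTO before the process is caught up.
    reprocessing_time = 4

    total_recovery_time = RTO + reprocessing_time

    # The RTO (plus any reprocessing) must fit inside the MTD, which is why
    # the RTO is normally shorter than the MTD.
    if total_recovery_time <= MTD:
        print(f"OK: recovery completes in {total_recovery_time} h, within the {MTD} h MTD")
    else:
        print(f"Problem: {total_recovery_time} h exceeds the {MTD} h MTD -- revisit the strategy")
    ```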

    References used for this question:
    KRUTZ, Ronald L. & VINES, Russel D., The CISSP Prep Guide: Mastering the Ten Domains of Computer Security, John Wiley & Sons, 2001, Page 276.
    and
    http://csrc.nist.gov/publications/nistpubs/800-34-rev1/sp800-34-rev1_errata-Nov11-2010.pdf

  2. Out of the steps listed below, which one is not one of the steps conducted during the Business Impact Analysis (BIA)?

    • Alternate site selection
    • Create data-gathering techniques
    • Identify the company’s critical business functions
    • Select individuals to interview for data gathering
    Explanation:

    Selecting an Alternate Site would not be done within the initial BIA; it would be done at a later stage of the BCP and DRP recovery effort. All of the other choices are steps that would be conducted during the BIA. See below the list of steps performed during the BIA.

    A BIA (business impact analysis) is considered a functional analysis, in which a team collects data through interviews and documentary sources; documents business functions, activities, and transactions; develops a hierarchy of business functions; and finally applies a classification scheme to indicate each individual function’s criticality level.

    BIA Steps
    1. Select individuals to interview for data gathering.
    2. Create data-gathering techniques (surveys, questionnaires, qualitative and quantitative approaches).
    3. Identify the company’s critical business functions.
    4. Identify the resources these functions depend upon.
    5. Calculate how long these functions can survive without these resources.
    6. Identify vulnerabilities and threats to these functions.
    7. Calculate the risk for each different business function.
    8. Document findings and report them to management.
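
    To make steps 3 and 5 concrete, here is a minimal sketch that classifies business functions by how long they can survive without their resources. The function names and criticality thresholds are hypothetical examples, not from the reference.

    ```python
    # Hypothetical BIA data: business function -> survivable downtime in hours (step 5).
    survivable_downtime = {
        "order processing": 4,
        "payroll": 72,
        "marketing website": 24,
        "archived records retrieval": 168,
    }

    def criticality(hours: int) -> str:
        """Apply a simple (hypothetical) classification scheme to a downtime figure."""
        if hours <= 8:
            return "critical"
        if hours <= 24:
            return "urgent"
        if hours <= 72:
            return "important"
        return "normal"

    # Step 8: document findings for management, most critical first.
    for function, hours in sorted(survivable_downtime.items(), key=lambda kv: kv[1]):
        print(f"{function:30} survives {hours:>3} h without resources -> {criticality(hours)}")
    ```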

    Reference(s) used for this question:
    Harris, Shon (2012-10-18). CISSP All-in-One Exam Guide, 6th Edition (p. 905-909). McGraw-Hill. Kindle Edition.

  3. When you update records in multiple locations or you make a copy of the whole database at a remote location as a way to achieve the proper level of fault tolerance and redundancy, it is known as?

    • Shadowing
    • Data mirroring
    • Backup
    • Archiving
    Explanation:

    Updating records in multiple locations or copying an entire database to a remote location as a means to ensure the appropriate levels of fault tolerance and redundancy is known as database shadowing. Shadowing is the technique in which updates are shadowed in multiple locations; it is like copying the entire database onto a remote location.

    Shadow files are an exact live copy of the original active database, allowing you to maintain live duplicates of your production database, which can be brought into production in the event of a hardware failure. They are used for security reasons: should the original database be damaged or incapacitated by hardware problems, the shadow can immediately take over as the primary database. It is therefore important that shadow files do not run on the same server, or at least not on the same drive, as the primary database files.
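
    The following toy sketch illustrates the idea behind shadowing (it is not a real database engine): every update is applied to the primary copy and shadowed to copies kept at other locations.

    ```python
    # Simplified illustration of database shadowing: each update is applied to the
    # primary copy and to every shadow copy, so a shadow can take over immediately.

    primary = {}                      # primary database (key -> record)
    shadows = [{}, {}]                # shadow copies held at remote locations

    def apply_update(key, record):
        """Write the update to the primary and shadow it to every remote copy."""
        primary[key] = record
        for shadow in shadows:
            shadow[key] = record      # the same update, shadowed at each location

    apply_update("customer:42", {"name": "Acme Corp", "balance": 1200})
    apply_update("customer:42", {"name": "Acme Corp", "balance": 900})

    # If the primary is lost, any shadow already holds the current data.
    assert all(shadow == primary for shadow in shadows)
    print("shadow copy:", shadows[0]["customer:42"])
    ```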

    The following are incorrect answers:

    Data mirroring: In data storage, disk mirroring is the replication of logical disk volumes onto separate physical hard disks in real time to ensure continuous availability. It is most commonly used in RAID 1. A mirrored volume is a complete logical representation of separate volume copies.

    Backup: In computing, the term backup means to copy files to a second medium (a disk or tape) as a precaution in case the first medium fails. One of the cardinal rules in using computers is to back up your files regularly. Backups are useful in recovering information or a system in the event of a disaster.

    Archiving is the storage of data that is not in continual use for historical purposes. It is the process of copying files to a long-term storage medium for backup.

    Reference(s) used for this question:
    Hernandez CISSP, Steven (2012-12-21). Official (ISC)2 Guide to the CISSP CBK, Third Edition ((ISC)2 Press) (Kindle Locations 27614-27626). Auerbach Publications. Kindle Edition.
    http://en.wikipedia.org/wiki/Disk_mirroring
    http://www.webopedia.com/TERM/A/archive.html
    http://ibexpert.net/ibe/index.php?n=Doc.DatabaseShadow

  4. Recovery Site Strategies for the technology environment depend on how much downtime an organization can tolerate before the recovery must be completed. What would you call a strategy where the alternate site is internal, standby ready, with all the technology and equipment necessary to run the applications?

    • External Hot site
    • Warm Site
    • Internal Hot Site
    • Dual Data Center
    Explanation:

    Internal Hot Site—This site is standby ready with all the technology and equipment necessary to run the applications positioned there. The planner will be able to effectively restart an application in a hot site recovery without having to perform any bare metal recovery of servers. If this is an internal solution, then often the organization will run non-time-sensitive processes there, such as development or test environments, which will be pushed aside for recovery of production when needed. When employing this strategy, it is important that the two environments be kept as close to identical as possible so that problems with OS levels, hardware differences, capacity differences, etc., do not prevent or delay recovery.
    Recovery Site Strategies: Depending on how much downtime an organization has before the technology recovery must be complete, recovery strategies selected for the technology environment could be any one of the following:
    Dual Data Center—This strategy is employed for applications that cannot accept any downtime without negatively impacting the organization. The applications are split between two geographically dispersed data centers and either load balanced between the two centers or hot swapped between the two centers. The surviving data center must have enough headroom to carry the full production load in either case.

    External Hot Site—This strategy has equipment on the floor waiting, but the environment must be rebuilt for the recovery. These are services contracted through a recovery service provider. Again, it is important that the two environments be kept as close to identical as possible so that problems with OS levels, hardware differences, capacity differences, etc., do not prevent or delay recovery. Hot site vendors tend to have the most commonly used hardware and software products to attract the largest number of customers to utilize the site. Unique equipment or software would generally need to be provided by the organization, either at time of disaster or stored there ahead of time.
    Warm Site—A leased or rented facility that is usually partially configured with some equipment, but not the actual computers. It will generally have all the cooling, cabling, and networks in place to accommodate the recovery, but the actual servers, mainframes, and other equipment are delivered to the site at time of disaster.
    Cold Site—A cold site is a shell or empty data center space with no technology on the floor. All technology must be purchased or acquired at the time of disaster.
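
    As a rough illustration of how tolerable downtime drives the choice of strategy, the sketch below maps a recovery time requirement to one of the strategies described above; the thresholds are hypothetical and would vary by organization.

    ```python
    def recovery_strategy(max_downtime_hours: float) -> str:
        """Pick a recovery site strategy from the tolerable downtime.
        Thresholds are hypothetical, for illustration only."""
        if max_downtime_hours < 1:
            return "Dual Data Center (no downtime acceptable)"
        if max_downtime_hours <= 24:
            return "Hot Site (internal or external)"
        if max_downtime_hours <= 72:
            return "Warm Site"
        return "Cold Site"

    for hours in (0.5, 8, 48, 240):
        print(f"{hours:>6} h tolerable downtime -> {recovery_strategy(hours)}")
    ```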

    Reference(s) used for this question:
    Hernandez CISSP, Steven (2012-12-21). Official (ISC)2 Guide to the CISSP CBK, Third Edition ((ISC)2 Press) (Kindle Locations 21265-21291). Auerbach Publications. Kindle Edition.

  5. Which of the following items is NOT a benefit of cold sites?

    • No resource contention with other organisations
    • Quick Recovery
    • A secondary location is available to reconstruct the environment
    • Low Cost
    Explanation:

    A cold site is a permanent location that provides you with your own space that you can move into in case of a disaster or catastrophe. It is one of the cheapest solutions available as a rental place, but it is also the one that would take the most time to recover. A cold site usually takes one to two weeks for recovery.

    Although major disruptions with long-term effects may be rare, they should be accounted for in the contingency plan. The plan should include a strategy to recover and perform system operations at an alternate facility for an extended period. In general, three types of alternate sites are available:

    Dedicated site owned or operated by the organization. Also called redundant or alternate sites;
    Reciprocal agreement or memorandum of agreement with an internal or external entity; and
    Commercially leased facility.

    Regardless of the type of alternate site chosen, the facility must be able to support system operations as defined in the contingency plan. The three alternate site types commonly categorized in terms of their operational readiness are cold sites, warm sites, or hot sites. Other variations or combinations of these can be found, but generally all variations retain similar core features found in one of these three site types.

    Progressing from basic to advanced, the sites are described below:

    Cold Sites are typically facilities with adequate space and infrastructure (electric power, telecommunications connections, and environmental controls) to support information system recovery activities.

    Warm Sites are partially equipped office spaces that contain some or all of the system hardware, software, telecommunications, and power sources.

    Hot Sites are facilities appropriately sized to support system requirements and configured with the necessary system hardware, supporting infrastructure, and support personnel.

    As discussed above, these three alternate site types are the most common. There are also variations, and hybrid mixtures of features from any one of the three. Each organization should evaluate its core requirements in order to establish the most effective solution.

    Two examples of variations to the site types are:

    Mobile Sites are self-contained, transportable shells custom-fitted with specific telecommunications and system equipment necessary to meet system requirements.

    Mirrored Sites are fully redundant facilities with automated real-time information mirroring. Mirrored sites are identical to the primary site in all technical respects.

    There are obvious cost and ready-time differences among the options. In these examples, the mirrored site is the most expensive choice, but it ensures virtually 100 percent availability. Cold sites are the least expensive to maintain, although they may require substantial time to acquire and install necessary equipment. Partially equipped sites, such as warm sites, fall in the middle of the spectrum. In many cases, mobile sites may be delivered to the desired location within 24 hours, but the time necessary for equipment installation and setup can increase this response time. The selection of fixed-site locations should account for the time and mode of transportation necessary to move personnel and/or equipment there. In addition, the fixed site should be in a geographic area that is unlikely to be negatively affected by the same hazard as the organization’s primary site.

    The following reference(s) were used for this question:
    http://csrc.nist.gov/publications/nistpubs/800-34-rev1/sp800-34-rev1_errata-Nov11-2010.pdf

  6. Qualitative loss resulting from the business interruption does NOT usually include:

    • Loss of revenue
    • Loss of competitive advantage or market share
    • Loss of public confidence and credibility
    • Loss of market leadership
    Explanation:

    This question is testing your ability to evaluate whether items on the list are Qualitative or Quantitative. All of the items listed are Qualitative except Loss of Revenue, which is Quantitative.

    There are mainly two approaches to risk analysis; see a description of each below:

    A quantitative risk analysis is used to assign monetary and numeric values to all elements of the risk analysis process. Each element within the analysis (asset value, threat frequency, severity of vulnerability, impact damage, safeguard costs, safeguard effectiveness, uncertainty, and probability items) is quantified and entered into equations to determine total and residual risks. It is more of a scientific or mathematical approach to risk analysis compared to qualitative.

    A qualitative risk analysis uses a “softer” approach to the data elements of a risk analysis. It does not quantify that data, which means that it does not assign numeric values to the data so that they can be used in equations.

    Qualitative and quantitative impact information should be gathered and then properly analyzed and interpreted. The goal is to see exactly how a business will be affected by different threats.

    The effects can be economical, operational, or both. Upon completion of the data analysis, it should be reviewed with the most knowledgeable people within the company to ensure that the findings are appropriate and that they describe the real risks and impacts the organization faces. This will help flush out any additional data points not originally obtained and will give a fuller understanding of all the possible business impacts.

    Loss criteria must be applied to the individual threats that were identified. The criteria may include the following:

    Loss in reputation and public confidence
    Loss of competitive advantages
    Increase in operational expenses
    Violations of contract agreements
    Violations of legal and regulatory requirements
    Delayed income costs
    Loss in revenue
    Loss in productivity

    Reference used for this question:
    Harris, Shon (2012-10-18). CISSP All-in-One Exam Guide, 6th Edition (p. 909). McGraw-Hill. Kindle Edition.

  7. Within the realm of IT security, which of the following combinations best defines risk?

    • Threat coupled with a breach
    • Threat coupled with a vulnerability
    • Vulnerability coupled with an attack
    • Threat coupled with a breach of security
    Explanation:

    The Answer: Threat coupled with a vulnerability. Threats are circumstances or actions with the ability to harm a system. They can destroy or modify data or result in a DoS. Threats by themselves are not acted upon unless there is a vulnerability that can be taken advantage of. Risk enters the equation when a vulnerability (flaw or weakness) exists in policies, procedures, personnel management, hardware, software, or facilities and can be exploited by a threat agent. Vulnerabilities do not cause harm, but they leave the system open to harm. The combination of a threat with a vulnerability increases the risk to the system of an intrusion.
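
    A toy sketch of this combination: a risk exists only where a threat and an exploitable vulnerability are both present.

    ```python
    # Toy illustration: risk arises only when a threat can exploit a vulnerability.
    def risk_exists(threat_present: bool, vulnerability_present: bool) -> bool:
        """A threat with nothing to exploit, or a flaw with no threat agent,
        does not by itself create risk; the combination does."""
        return threat_present and vulnerability_present

    print(risk_exists(threat_present=True,  vulnerability_present=False))  # False
    print(risk_exists(threat_present=False, vulnerability_present=True))   # False
    print(risk_exists(threat_present=True,  vulnerability_present=True))   # True
    ```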

    The following answers are incorrect:

    Threat coupled with a breach. A threat is the potential that a particular threat-source will take advantage of a vulnerability. Breaches get around security. It does not matter whether a breach is discovered or not; it has still occurred and is not a risk of something occurring. A breach would quite often be termed an incident or intrusion.

    Vulnerability coupled with an attack. Vulnerabilities are weaknesses (flaws) in policies, procedures, personnel management, hardware, software, or facilities that may result in a harmful intrusion to an IT system. An attack takes advantage of the flaw or vulnerability. Attacks are explicit attempts to violate security, and are more than risk as they are active.

    Threat coupled with a breach of security. This is a distractor. Although a threat agent may take advantage of (breach) vulnerabilities or flaws in system security, a threat coupled with a breach of security is more than a risk, as this is active.

    The following reference(s) may be used to research this question:

    ISC2 OIG, 2007 p. 66-67
    Shon Harris AIO v3 p. 71-72

  8. How often should a Business Continuity Plan be reviewed?

    • At least once a month
    • At least every six months
    • At least once a year
    • At least Quarterly
    Explanation:

    As stated in SP 800-34 Rev. 1:
    To be effective, the plan must be maintained in a ready state that accurately reflects system requirements, procedures, organizational structure, and policies. During the Operation/Maintenance phase of the SDLC, information systems undergo frequent changes because of shifting business needs, technology upgrades, or new internal or external policies.

    As a general rule, the plan should be reviewed for accuracy and completeness at an organization-defined frequency (at least once a year for the purpose of the exam) or whenever significant changes occur to any element of the plan. Certain elements, such as contact lists, will require more frequent reviews.

    Remember, there could be two good answers as specified above: either once a year or whenever significant changes occur to the plan. You will of course get only one of the two presented within your exam.

    Reference(s) used for this question:
    NIST SP 800-34 Revision 1

  9. Which of the following best describes what would be expected at a “hot site”?

    • Computers, climate control, cables and peripherals
    • Computers and peripherals
    • Computers and dedicated climate control systems.
    • Dedicated climate control systems
    Explanation:

    A Hot Site contains everything needed to become operational in the shortest amount of time.

    The following answers are incorrect:

    Computers and peripherals. Is incorrect because no mention is made of cables. You would not be fully operational without those.

    Computers and dedicated climate control systems. Is incorrect because no mention is made of peripherals. You would not be fully operational without those.

    Dedicated climate control systems. Is incorrect because no mention is made of computers, cables, and peripherals. You would not be fully operational without those.

    According to the OIG, a hot site is defined as a fully configured site with complete customer required hardware and software provided by the service provider. A hot site in the context of the CBK is always a RENTAL place. If you have your own site fully equipped that you make use of in case of disaster that would be called a redundant site or an alternate site.

    Wikipedia: “A hot site is a duplicate of the original site of the organization, with full computer systems as well as near-complete backups of user data.”

    References:

    OIG CBK, Business Continuity and Disaster Recovery Planning (pages 367 – 368)
    AIO, 3rd Edition, Business Continuity Planning (pages 709 – 714)
    AIO, 4th Edition, Business Continuity Planning , p 790.
    Wikipedia – http://en.wikipedia.org/wiki/Hot_site#Hot_Sites

  10. Which of the following is the BEST way to detect software license violations?

    • Implementing a corporate policy on copyright infringements and software use.
    • Requiring that all PCs be diskless workstations.
    • Installing metering software on the LAN so applications can be accessed through the metered software.
    • Regularly scanning PCs in use to ensure that unauthorized copies of software have not been loaded on the PC.
    Explanation:

    The best way to prevent and detect software license violations is to regularly scan PCs in use, either from the LAN or directly, to ensure that unauthorized copies of software have not been loaded on the PC.

    The other options are not detective controls.

    A corporate policy is not necessarily enforced and followed by all employees.

    Software can be installed by means other than floppies or CD-ROMs (from a LAN or even downloaded from the Internet), and software metering only concerns applications that are registered.
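
    A minimal sketch of this detective approach, assuming a hypothetical inventory of installed titles gathered from each scanned PC and a list of licensed titles maintained by the organization:

    ```python
    # Hypothetical data: software inventories collected from scanned PCs and the
    # list of titles the organization is licensed for.
    licensed = {"office-suite", "antivirus", "pdf-reader"}

    scanned_pcs = {
        "PC-001": {"office-suite", "antivirus"},
        "PC-002": {"office-suite", "pdf-reader", "game-x"},        # unauthorized title
        "PC-003": {"antivirus", "cracked-cad-tool"},               # unauthorized title
    }

    # Detective control: flag every installed title that is not in the licensed list.
    for pc, installed in scanned_pcs.items():
        violations = installed - licensed
        if violations:
            print(f"{pc}: possible license violations -> {sorted(violations)}")
    ```
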
    Source: Information Systems Audit and Control Association, Certified Information Systems Auditor 2002 review manual, Chapter 3: Technical Infrastructure and Operational Practices (page 108).

  11. In what way can violation clipping levels assist in violation tracking and analysis?

    • Clipping levels set a baseline for acceptable normal user errors, and violations exceeding that threshold will be recorded for analysis of why the violations occurred.
    • Clipping levels enable a security administrator to customize the audit trail to record only those violations which are deemed to be security relevant.
    • Clipping levels enable the security administrator to customize the audit trail to record only actions for users with access to user accounts with a privileged status.
    • Clipping levels enable a security administrator to view all reductions in security levels which have been made to user accounts which have incurred violations.
    Explanation:

    Companies can set predefined thresholds for the number of certain types of errors that will be allowed before the activity is considered suspicious. The threshold is a baseline for violation activities that may be normal for a user to commit before alarms are raised. This baseline is referred to as a clipping level.
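
    A minimal sketch of the idea, using a hypothetical clipping level of three failed logins per user per day:

    ```python
    from collections import Counter

    # Hypothetical audit data: one entry per failed login observed today.
    failed_logins = ["alice", "bob", "alice", "carol", "alice", "alice", "bob"]

    CLIPPING_LEVEL = 3   # errors at or below this threshold are treated as normal mistakes

    counts = Counter(failed_logins)
    for user, count in counts.items():
        if count > CLIPPING_LEVEL:
            # Only activity beyond the clipping level is escalated for analysis.
            print(f"VIOLATION: {user} exceeded the clipping level with {count} failed logins")
        else:
            print(f"normal: {user} had {count} failed logins (within clipping level)")
    ```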

    The following are incorrect answers:
    Clipping levels enable a security administrator to customize the audit trail to record only those violations which are deemed to be security relevant. This is not the best answer; you would not record ONLY security-relevant violations. All violations would be recorded, as well as all actions performed by authorized users that may not trigger a violation. This could allow you to identify abnormal activities or fraud after the fact.

    Clipping levels enable the security administrator to customize the audit trail to record only actions for users with access to user accounts with a privileged status. This is incorrect because clipping levels can record security violations whether the user is a normal user or a privileged user.

    Clipping levels enable a security administrator to view all reductions in security levels which have been made to user accounts which have incurred violations. The keyword “ALL” makes this choice wrong. It may detect some, but not all, violations. For example, application-level attacks may not be detected.

    Reference(s) used for this question:

    Harris, Shon (2012-10-18). CISSP All-in-One Exam Guide, 6th Edition (p. 1239). McGraw-Hill. Kindle Edition.
    and
    TIPTON, Hal, (ISC)2, Introduction to the CISSP Exam presentation.

  12. Prior to a live disaster test, also called a Full Interruption test, which of the following is most important?

    • Restore all files in preparation for the test.
    • Document expected findings.
    • Arrange physical security for the test site.
    • Conduct of a successful Parallel Test
    Explanation:

    A live disaster test, or Full Interruption test, is an actual simulation of the Disaster Recovery Plan. All operations are shut down and brought back online at the alternate site. This test poses the biggest threat to an organization and should not be performed until a successful Parallel Test has been conducted.

    1. A Checklist test would be conducted first, where each of the key players gets a copy of the plan and reads it to make sure it has been properly developed for the specific needs of their department.

    2. A Structured Walk-Through would be conducted next. This is when all key players meet together in a room and walk through the test together to identify shortcomings and dependencies between departments.

    3. A Simulation test would be next. In this case you go through a disaster scenario up to the point where you would move to the alternate site. You do not actually move to the alternate site; you learn from your mistakes and improve the plan. It is the right time to find shortcomings.

    4. A Parallel Test would be done. You go through a disaster scenario, move to the alternate site, and process from both sites simultaneously.

    5. A Full Interruption test would be conducted last. You move to the alternate site and resume processing there.

    The following answers are incorrect:

    Restore all files in preparation for the test. Is incorrect because you would restore the files at the alternate site as part of the test not in preparation for the test.

    Document expected findings. Is incorrect because it is not the best answer. Documenting the expected findings won’t help if you have not performed tests prior to a Full interruption test or live disaster test.

    Arrange physical security for the test site. Is incorrect because it is not the best answer. Physical security for the test site is important, but if you have not conducted a successful Parallel Test prior to performing a Full Interruption test or live disaster test, you might have some unexpected and disastrous results.

  13. Which of the following should be emphasized during the Business Impact Analysis (BIA) considering that the BIA focus is on business processes?

    • Composition
    • Priorities
    • Dependencies
    • Service levels
    Explanation:

    The Business Impact Analysis (BIA) identifies time-critical aspects of the critical business processes and determines their maximum tolerable downtime. The BIA helps to identify organization functions, the capabilities of each organizational unit to handle outages, the priority and sequence of functions and applications to be recovered, the resources required for recovery of those areas, and their interdependencies.

    In performing the Business Impact Analysis (BIA) it is very important to consider what the dependencies are. You cannot bring a system up if it depends on another system to be operational. You need to look at not only internal dependencies but external ones as well. You might not be able to get the raw materials for your business, so dependencies are a very important aspect of a BIA.
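
    The sketch below illustrates why dependencies drive the recovery sequence: given a hypothetical dependency map, it orders systems so that nothing is brought up before the systems it depends on.

    ```python
    from graphlib import TopologicalSorter   # Python 3.9+

    # Hypothetical dependency map: system -> systems it depends on.
    dependencies = {
        "order entry":   {"database", "network"},
        "database":      {"storage", "network"},
        "web front end": {"order entry"},
        "storage":       set(),
        "network":       set(),
    }

    # A valid recovery sequence brings each system up only after its dependencies.
    recovery_order = list(TopologicalSorter(dependencies).static_order())
    print("Recovery sequence:", " -> ".join(recovery_order))
    ```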

    The BIA committee will not truly understand all business processes, the steps that must take place, or the resources and supplies these processes require. So the committee must gather this information from the people who do know—department managers and specific employees throughout the organization. The committee starts by identifying the people who will be part of the BIA data-gathering sessions. The committee needs to identify how it will collect the data from the selected employees, be it through surveys, interviews, or workshops. Next, the team needs to collect the information by actually conducting surveys, interviews, and workshops. Data points obtained as part of the information gathering will be used later during analysis. It is important that the team members ask about how different tasks—whether processes, transactions, or services, along with any relevant dependencies—get accomplished within the organization.

    The following answers are incorrect:
    Composition: This is incorrect because it is not the best answer. While the makeup of the business may be important, if you have not determined the dependencies first you may not be able to bring the critical business processes to a ready state or have the materials on hand that are needed.

    Priorities: This is incorrect because it is not the best answer. While the priorities of processes are important, if you have not determined the dependencies first you may not be able to bring the critical business processes to a ready state or have the materials on hand that are needed.

    Service levels: This is incorrect because it is not the best answer. Service levels are not as important as dependencies.

    Reference(s) used for this question:
    Schneiter, Andrew (2013-04-15). Official (ISC)2 Guide to the CISSP CBK, Third Edition: Business Continuity and Disaster Recovery Planning (Kindle Locations 188-191). Kindle Edition.
    and
    Harris, Shon (2012-10-25). CISSP All-in-One Exam Guide, 6th Edition (Kindle Locations 18562-18568). McGraw-Hill. Kindle Edition.

  14. Who should direct short-term recovery actions immediately following a disaster?

    • Chief Information Officer.
    • Chief Operating Officer.
    • Disaster Recovery Manager.
    • Chief Executive Officer.
    Explanation:

    The Disaster Recovery Manager should also be a member of the team that assisted in the development of the Disaster Recovery Plan. Senior-level management needs to support the process but would not be involved with the initial recovery process.

    The following answers are incorrect:

    Chief Information Officer. Is incorrect because the Senior-level management are the ones to authorize the recovery plan and process but during the initial recovery process they will most likely be heavily involved in other matters.

    Chief Operating Officer. Is incorrect because the Senior-level management are the ones to authorize the recovery plan and process but during the initial recovery process they will most likely be heavily involved in other matters.

    Chief Executive Officer. Is incorrect because the Senior-level management are the ones to authorize the recovery plan and process but during the initial recovery process they will most likely be heavily involved in other matters.

  15. Which one of the following represents an ALE calculation?

    • single loss expectancy x annualized rate of occurrence.
    • gross loss expectancy x loss frequency.
    • actual replacement cost – proceeds of salvage.
    • asset value x loss expectancy.
    Explanation:

    Single Loss Expectancy (SLE) is the dollar amount that would be lost if there was a loss of an asset. Annualized Rate of Occurrence (ARO) is the estimated frequency with which a threat to an asset is expected to occur in one year (for example, if there is a chance of a flood occurring once in 10 years, the ARO would be 0.1; if there is a chance of a flood occurring once in 100 years, the ARO would be 0.01).
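
    A worked example with hypothetical figures: a $200,000 facility where a flood is expected once every 10 years (ARO = 0.1) and would damage 25 percent of the asset.

    ```python
    # Hypothetical figures for illustration only.
    asset_value     = 200_000     # value of the asset in dollars
    exposure_factor = 0.25        # fraction of the asset lost in one flood

    SLE = asset_value * exposure_factor   # Single Loss Expectancy = $50,000
    ARO = 1 / 10                          # flood expected once every 10 years = 0.1

    ALE = SLE * ARO                       # Annualized Loss Expectancy
    print(f"SLE = ${SLE:,.0f}, ARO = {ARO}, ALE = ${ALE:,.0f} per year")   # ALE = $5,000
    ```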

    The following answers are incorrect:

    gross loss expectancy x loss frequency. Is incorrect because this is a distractor.
    actual replacement cost – proceeds of salvage. Is incorrect because this is a distractor.
    asset value x loss expectancy. Is incorrect because this is a distractor.

  16. A periodic review of user account management should not determine:

    • Conformity with the concept of least privilege.
    • Whether active accounts are still being used.
    • Strength of user-chosen passwords.
    • Whether management authorizations are up-to-date.
    Explanation:

    Organizations should have a process for (1) requesting, establishing, issuing, and closing user accounts; (2) tracking users and their respective access authorizations; and (3) managing these functions.

    Reviews should examine the levels of access each individual has, conformity with the concept of least privilege, whether all accounts are still active, whether management authorizations are up-to-date, whether required training has been completed, and so forth. These reviews can be conducted on at least two levels: (1) on an application-by-application basis, or (2) on a system wide basis.
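
    A minimal sketch of such a review, using hypothetical account records: it flags accounts that appear inactive or whose management authorization has lapsed, and deliberately says nothing about password strength.

    ```python
    from datetime import date, timedelta

    today = date(2024, 6, 1)   # fixed "today" so the example is reproducible

    # Hypothetical account records gathered for the periodic review.
    accounts = [
        {"user": "alice", "last_login": date(2024, 5, 30), "authorization_expires": date(2024, 12, 31)},
        {"user": "bob",   "last_login": date(2023, 11, 2), "authorization_expires": date(2024, 12, 31)},
        {"user": "carol", "last_login": date(2024, 5, 20), "authorization_expires": date(2024, 3, 31)},
    ]

    INACTIVITY_LIMIT = timedelta(days=90)   # hypothetical threshold for "still being used"

    for acct in accounts:
        findings = []
        if today - acct["last_login"] > INACTIVITY_LIMIT:
            findings.append("account appears inactive")
        if acct["authorization_expires"] < today:
            findings.append("management authorization is out of date")
        if findings:
            print(f"{acct['user']}: {'; '.join(findings)}")
    ```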

    The strength of user passwords is beyond the scope of a simple user account management review, since it requires specific tools to try and crack the password file/database through either a dictionary or brute-force attack in order to check the strength of passwords.

    Reference(s) used for this question:
    SWANSON, Marianne & GUTTMAN, Barbara, National Institute of Standards and Technology (NIST), NIST Special Publication 800-14, Generally Accepted Principles and Practices for Securing Information Technology Systems, September 1996 (page 28).

  17. Due care is not related to:

    • Good faith
    • Prudent man
    • Profit
    • Best interest
    Explanation:

    Officers and directors of a company are expected to act carefully in fulfilling their tasks. A director shall act in good faith, with the care an ordinarily prudent person in a like position would exercise under similar circumstances and in a manner he reasonably believes is in the best interest of the enterprise. The notion of profit would tend to go against the due care principle.

    Source: ANDRESS, Mandy, Exam Cram CISSP, Coriolis, 2001, Chapter 10: Law, Investigation, and Ethics (page 186).

  18. Why would anomaly detection IDSs often generate a large number of false positives?

    • Because they can only identify correctly attacks they already know about.
    • Because they are application-based and are more subject to attacks.
    • Because they can’t identify abnormal behavior.
    • Because normal patterns of user and system behavior can vary wildly.
    Explanation:

    Unfortunately, anomaly detectors and the Intrusion Detection Systems (IDS) based on them often produce a large number of false alarms, as normal patterns of user and system behavior can vary wildly. Being able to correctly identify only attacks they already know about is a characteristic of misuse detection (signature-based) IDSs. Application-based IDSs are a special subset of host-based IDSs that analyze the events transpiring within a software application; they are more vulnerable to attacks than host-based IDSs. Not being able to identify abnormal behavior would not cause false positives, since the abnormal behavior would simply not be identified.
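
    A toy sketch of why this happens, assuming a naive detector that flags any day more than two standard deviations above the historical mean: when normal behavior varies widely, legitimate activity gets flagged.

    ```python
    import statistics

    # Hypothetical baseline: logins per day for one user over two weeks.
    baseline = [12, 15, 9, 14, 40, 11, 13, 16, 10, 38, 12, 15, 14, 11]   # a few busy days

    mean = statistics.mean(baseline)
    stdev = statistics.stdev(baseline)
    threshold = mean + 2 * stdev

    # Today the user works late on a deadline -- legitimate, but unusually busy.
    today_logins = 45
    if today_logins > threshold:
        print(f"ALERT: {today_logins} logins exceeds threshold {threshold:.1f} (a false positive here)")
    else:
        print("no alert")
    ```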

    Source: DUPUIS, Clément, Access Control Systems and Methodology CISSP Open Study Guide, version 1.0, March 2002 (page 92).

  19. What is the essential difference between a self-audit and an independent audit?

    • Tools used
    • Results
    • Objectivity
    • Competence
    Explanation:

    To maintain operational assurance, organizations use two basic methods: system audits and monitoring. Monitoring refers to an ongoing activity whereas audits are one-time or periodic events and can be either internal or external. The essential difference between a self-audit and an independent audit is objectivity, thus indirectly affecting the results of the audit. Internal and external auditors should have the same level of competence and can use the same tools.

    Source: SWANSON, Marianne & GUTTMAN, Barbara, National Institute of Standards and Technology (NIST), NIST Special Publication 800-14, Generally Accepted Principles and Practices for Securing Information Technology Systems, September 1996 (page 25).

  20. What setup should an administrator use for regularly testing the strength of user passwords?

    • A networked workstation so that the live password database can easily be accessed by the cracking program.
    • A networked workstation so the password database can easily be copied locally and processed by the cracking program.
    • A standalone workstation on which the password database is copied and processed by the cracking program.
    • A password-cracking program is unethical; therefore it should not be used.
    Explanation:

    Poor password selection is frequently a major security problem for any system’s security. Administrators should obtain and use password-guessing programs frequently to identify those users having easily guessed passwords.

    Because password-cracking programs are very CPU intensive and can slow the system on which they are running, it is a good idea to transfer the encrypted passwords to a standalone (not networked) workstation. Also, by doing the work on a non-networked machine, any results found will not be accessible by anyone unless they have physical access to that system.

    Out of the four choices presented above, this is the best choice.
    However, in real life you would have strong password policies that enforce complexity requirements and do not let the user choose a simple or short password that can be easily cracked or guessed. That would be the best choice if it were one of the choices presented.

    Another issue with password cracking is one of privacy. Many password-cracking tools address this by reporting only that a password was cracked, without showing what the password actually is; the actual password is masked from the person doing the cracking.
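
    A minimal sketch of the offline, privacy-preserving approach described above: on the standalone machine, hash a small wordlist of known-weak passwords and report only which accounts matched, never the password itself. The account data and wordlist are hypothetical, and real password databases use salted, slow hashes rather than plain SHA-256.

    ```python
    import hashlib

    def sha256(text: str) -> str:
        return hashlib.sha256(text.encode()).hexdigest()

    # Hypothetical copy of the password database, processed on a standalone workstation.
    # (Plain SHA-256 keeps the example short; real systems use salted, slow hashes.)
    password_hashes = {
        "alice": sha256("Str0ng&Unlikely!Passphrase"),
        "bob":   sha256("password123"),
        "carol": sha256("letmein"),
    }

    # Small dictionary of known-weak passwords to test against.
    weak_wordlist = ["123456", "password123", "letmein", "qwerty"]
    weak_hashes = {sha256(word) for word in weak_wordlist}

    # Report only THAT an account is weak, not the password itself (privacy).
    for user, pw_hash in password_hashes.items():
        if pw_hash in weak_hashes:
            print(f"{user}: password matches a known-weak entry -- require a change")
    ```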

    Source: National Security Agency, Systems and Network Attack Center (SNAC), The 60 Minute Network Security Guide, February 2002, page 8.
