CISA : Certified Information Systems Auditor : Part 151

  1. Which of the following exposures associated with the spooling of sensitive reports for offline printing should an IS auditor consider to be the MOST serious?

    • Sensitive data can be read by operators.
    • Data can be amended without authorization.
    • Unauthorized report copies can be printed.
    • Output can be lost in the event of system failure.

    Unless controlled, spooling for offline printing may enable additional copies to be printed. Print files are unlikely to be available for online reading by operations. Data on spool files are no easier to amend without authority than any other file. There is usually a lesser threat of unauthorized access to sensitive reports in the event of a system failure.

  2. Applying a retention date on a file will ensure that:

    • data cannot be read until the date is set.
    • data will not be deleted before that date.
    • backup copies are not retained after that date.
    • datasets having the same name are differentiated.
    A retention date will ensure that a file cannot be overwritten before that date has passed. The retention date will not affect the ability to read the file. Backup copies would be expected to have a different retention date and therefore may be retained after the file has been overwritten. The creation date, not the retention date, will differentiate files with the same name.
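The retention check above can be sketched in a few lines. This is a minimal illustration (the function name and dates are hypothetical, not from any real tape-management product): the retention date gates overwriting only, never reading.

```python
from datetime import date

def can_overwrite(retention_date: date, today: date) -> bool:
    """A retention date blocks overwriting, not reading, until it has passed."""
    return today > retention_date

print(can_overwrite(date(2024, 1, 1), date(2024, 6, 1)))  # True: date has passed
print(can_overwrite(date(2025, 1, 1), date(2024, 6, 1)))  # False: still retained
```

Note that nothing here restricts reads, which is why choice A is wrong: a retention date and read access control are independent mechanisms.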
  3. Which of the following is a network diagnostic tool that monitors and records network information?

    • Online monitor
    • Downtime report
    • Help desk report
    • Protocol analyzer
    Protocol analyzers are network diagnostic tools that monitor and record network information from packets traveling in the link to which the analyzer is attached. Online monitors (choice A) measure telecommunications transmissions and determine whether transmissions were accurate and complete. Downtime reports (choice B) track the availability of telecommunication lines and circuits. Help desk reports (choice C) are prepared by the help desk, which is staffed or supported by IS technical support personnel trained to handle problems occurring during the course of IS operations.
  4. An intruder accesses an application server and makes changes to the system log. Which of the following would enable the identification of the changes?

    • Mirroring the system log on another server
    • Simultaneously duplicating the system log on a write-once disk
    • Write-protecting the directory containing the system log
    • Storing the backup of the system log offsite
    A write-once CD cannot be overwritten. Therefore, the system log duplicated on the disk could be compared to the original log to detect differences, which could be the result of changes made by an intruder. Write-protecting the system log does not prevent deletion or modification, since the superuser can override the write protection. Backup and mirroring may overwrite earlier files and may not be current. 
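The detection described above amounts to comparing the live log against its write-once duplicate. A minimal sketch, with hypothetical log entries: a hash mismatch signals tampering, and a line-level comparison recovers what the intruder erased.

```python
import hashlib

def digest(log: bytes) -> str:
    return hashlib.sha256(log).hexdigest()

# Hypothetical entries: the write-once copy keeps what the intruder erased.
worm_copy = b"01:00 login admin\n02:00 login intruder\n02:05 log edited\n"
live_log = b"01:00 login admin\n"  # intruder removed traces from the live log

tampered = digest(live_log) != digest(worm_copy)
erased = set(worm_copy.splitlines()) - set(live_log.splitlines())
print(tampered, sorted(erased))
```

The comparison only works because the write-once medium cannot itself be altered; hashing the live log alone would prove nothing.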
  5. IT operations for a large organization have been outsourced. An IS auditor reviewing the outsourced operation should be MOST concerned about which of the following findings?

    • The outsourcing contract does not cover disaster recovery for the outsourced IT operations.
    • The service provider does not have incident handling procedures.
    • Recently a corrupted database could not be recovered because of library management problems.
    • Incident logs are not being reviewed.
    The lack of a disaster recovery provision presents a major business risk. Incorporating such a provision into the contract will provide the outsourcing organization leverage over the service provider. Choices B, C and D are problems that should be addressed by the service provider, but are not as important as contract requirements for disaster recovery.
  6. Which of the following BEST ensures the integrity of a server’s operating system?

    • Protecting the server in a secure location
    • Setting a boot password
    • Hardening the server configuration
    • Implementing activity logging
    Hardening a system means to configure it in the most secure manner (install latest security patches, properly define the access authorization for users and administrators, disable insecure options and uninstall unused services) to prevent nonprivileged users from gaining the right to execute privileged instructions and thus take control of the entire machine, jeopardizing the OS’s integrity. Protecting the server in a secure location and setting a boot password are good practices, but do not ensure that a user will not try to exploit logical vulnerabilities and compromise the OS. Activity logging has two weaknesses in this scenario: it is a detective control (not a preventive one), and an attacker who has already gained privileged access can modify or disable the logs.
  7. The MOST significant security concern when using flash memory (e.g., a USB removable disk) is that the:

    • contents are highly volatile.
    • data cannot be backed up.
    • data can be copied.
    • device may not be compatible with other peripherals.
    Unless properly controlled, flash memory provides an avenue for anyone to copy any content with ease. The contents stored in flash memory are not volatile. Backing up flash memory data is not a control concern, as the data are sometimes stored as a backup. Flash memory will be accessed through a PC rather than any other peripheral; therefore, compatibility is not an issue.
  8. The database administrator (DBA) suggests that DB efficiency can be improved by denormalizing some tables. This would result in:

    • loss of confidentiality.
    • increased redundancy.
    • unauthorized accesses.
    • application malfunctions.
    Normalization is a design or optimization process for a relational database (DB) that minimizes redundancy; therefore, denormalization would increase redundancy. Redundancy, which is usually considered positive when it is a question of resource availability, is negative in a database environment, since it demands additional and otherwise unnecessary data-handling effort.
    Denormalization is sometimes advisable for functional reasons. It should not cause loss of confidentiality, unauthorized accesses or application malfunctions.
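The extra data-handling effort can be made concrete with a small sketch (table and column names are hypothetical). In a denormalized orders table the customer name is repeated on every row, so correcting one logical fact means touching every redundant copy.

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()

# Denormalized design: the customer name is duplicated on every order row.
cur.execute("CREATE TABLE orders (order_id INTEGER, customer_name TEXT)")
cur.executemany("INSERT INTO orders VALUES (?, ?)",
                [(1, "Acme"), (2, "Acme"), (3, "Acme")])

# Renaming the customer requires updating every redundant copy.
cur.execute("UPDATE orders SET customer_name = 'Acme Ltd' "
            "WHERE customer_name = 'Acme'")
print(cur.rowcount)  # 3 rows changed for one logical fact
```

In a normalized design the name would live once in a customers table, and the same change would be a single-row update.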
  9. Web and e-mail filtering tools are PRIMARILY valuable to an organization because they:

    • protect the organization from viruses and nonbusiness materials.
    • maximize employee performance.
    • safeguard the organization’s image.
    • assist the organization in preventing legal issues.
    The main reason for investing in web and e-mail filtering tools is that they significantly reduce risks related to viruses, spam, mail chains, recreational surfing and recreational e-mail. Choice B could be true in some circumstances (i.e., it would need to be implemented along with an awareness program, so that employee performance can be significantly improved). However, in such cases, it would not be as relevant as choice A. Choices C and D are secondary or indirect benefits.
  10. The BEST way to minimize the risk of communication failures in an e-commerce environment would be to use:

    • compression software to minimize transmission duration.
    • functional or message acknowledgments.
    • a packet-filtering firewall to reroute messages.
    • leased asynchronous transfer mode lines.
    Leased asynchronous transfer mode lines are a way to avoid using public and shared infrastructures from the carrier or Internet service provider that have a greater number of communication failures. Choice A, compression software, is a valid way to reduce the problem, but is not as good as leased asynchronous transfer mode lines. Choice B is a control based on higher protocol layers and helps if communication lines are introducing noise, but not if a link is down. Choice C, a packet-filtering firewall, does not reroute messages.
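The acknowledgment control of choice B can be sketched as a retry loop (all names here are illustrative, not a real protocol API): the sender retransmits until the receiver acknowledges, which recovers from lost or garbled messages but cannot help if the link itself is down.

```python
def send_with_ack(message, transmit, max_retries=3):
    """Resend until the receiver acknowledges, recovering lost transmissions."""
    for _ in range(max_retries):
        if transmit(message) == "ACK":
            return True
    return False  # link may be down entirely; acknowledgments cannot fix that

# Simulated flaky link that drops the first transmission.
attempts = []
def flaky_link(msg):
    attempts.append(msg)
    return None if len(attempts) == 1 else "ACK"

delivered = send_with_ack("order#42", flaky_link)
print(delivered, len(attempts))  # delivered on the second attempt
```

This illustrates why choice B mitigates noise but not outages: when every attempt fails, the loop simply exhausts its retries.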
  11. An IS auditor reviewing an organization’s data file control procedures finds that transactions are applied to the most current files, while restart procedures use earlier versions. The IS auditor should recommend the implementation of:

    • source documentation retention.
    • data file security.
    • version usage control.
    • one-for-one checking.
    For processing to be correct, it is essential that the proper version of a file is used. Transactions should be applied to the most current database, while restart procedures should use earlier versions. Source documentation should be retained for an adequate time period to enable documentation retrieval, reconstruction or verification of data, but it does not aid in ensuring that the correct version of a file will be used. Data file security controls prevent access by unauthorized users who could then alter the data files; however, it does not ensure that the correct file will be used. It is necessary to ensure that all documents have been received for processing, one-for-one; however, this does not ensure the use of the correct file.
  12. Which of the following BEST limits the impact of server failures in a distributed environment?

    • Redundant pathways
    • Clustering
    • Dial backup lines
    • Standby power
    Clustering allows two or more servers to work as a unit, so that when one of them fails, the other takes over. Choices A and C are intended to minimize the impact of channel communications failures, but not a server failure. Choice D provides an alternative power source in the event of an energy failure.
  13. When reviewing a hardware maintenance program, an IS auditor should assess whether:

    • the schedule of all unplanned maintenance is maintained.
    • it is in line with historical trends.
    • it has been approved by the IS steering committee.
    • the program is validated against vendor specifications.
    Though maintenance requirements vary based on complexity and performance workloads, a hardware maintenance schedule should be validated against the vendor-provided specifications. For business reasons, an organization may choose a more aggressive maintenance program than the vendor’s program. The maintenance program should include maintenance performance history, be it planned, unplanned, executed or exceptional. Unplanned maintenance cannot be scheduled. Hardware maintenance programs do not necessarily need to be in line with historical trends. Maintenance schedules normally are not approved by the steering committee.
  14. An IS auditor observes a weakness in the tape management system at a data center in that some parameters are set to bypass or ignore tape header records. Which of the following is the MOST effective compensating control for this weakness?

    • Staging and job setup
    • Supervisory review of logs
    • Regular back-up of tapes
    • Offsite storage of tapes
    If the IS auditor finds that there are effective staging and job setup processes, this can be accepted as a compensating control. Choice B is a detective control while choices C and D are corrective controls, none of which would serve as good compensating controls.
  15. To verify that the correct version of a data file was used for a production run, an IS auditor should review:

    • operator problem reports.
    • operator work schedules.
    • system logs.
    • output distribution reports.
    System logs are automated reports which identify most of the activities performed on the computer. Programs that analyze the system log have been developed to report on specifically defined items. The auditor can then carry out tests to ensure that the correct file version was used for a production run. Operator problem reports are used by operators to log computer operation problems. Operator work schedules are maintained to assist in human resources planning.
    Output distribution reports identify all application reports generated and their distribution.
  16. Which of the following is the BEST type of program for an organization to implement to aggregate, correlate and store different log and event files, and then produce weekly and monthly reports for IS auditors?

    • A security information event management (SIEM) product
    • An open-source correlation engine
    • A log management tool
    • An extract, transform, load (ETL) system
    A log management tool is a product designed to aggregate events from many log files (with distinct formats and from different sources), store them and typically correlate them offline to produce many reports (e.g., exception reports showing different statistics including anomalies and suspicious activities), and to answer time-based queries (e.g., how many users have entered the system between 2 a.m. and 4 a.m. over the past three weeks?). A SIEM product has some similar features. It correlates events from log files, but does it online and normally is not oriented to storing many weeks of historical information and producing audit reports. A correlation engine is part of a SIEM product. It is oriented to making an online correlation of events. An extract, transform, load (ETL) is part of a business intelligence system, dedicated to extracting operational or production data, transforming that data and loading them to a central repository (data warehouse or data mart); an ETL does not correlate data or produce reports, and normally it does not have extractors to read log file formats.
  17. Doing which of the following during peak production hours could result in unexpected downtime?

    • Performing data migration or tape backup
    • Performing preventive maintenance on electrical systems
    • Promoting applications from development to the staging environment
    • Replacing a failed power supply in the core router of the data center
    Choices A and C are processing events which may impact performance, but would not cause downtime. Enterprise-class routers have redundant hot-swappable power supplies, so replacing a failed power supply should not be an issue. Preventive maintenance activities should be scheduled for non-peak times of the day, and preferably during a maintenance window time period. A mishap or incident caused by a maintenance worker could result in unplanned downtime.
  18. Which of the following would BEST maintain the integrity of a firewall log?

    • Granting access to log information only to administrators
    • Capturing log events in the operating system layer
    • Writing dual logs onto separate storage media
    • Sending log information to a dedicated third-party log server
    Establishing a dedicated third-party log server and logging events in it is the best procedure for maintaining the integrity of a firewall log. When access control to the log server is adequately maintained, the risk of unauthorized log modification will be mitigated, therefore improving the integrity of log information. To enforce segregation of duties, administrators should not have access to log files. This primarily contributes to the assurance of confidentiality rather than integrity. There are many ways to capture log information: through the application layer, network layer, operating systems layer, etc.; however, there is no log integrity advantage in capturing events in the operating systems layer. For a highly mission-critical information system, it may be worthwhile to run the system in a dual-log mode. Having logs in two different storage devices will primarily contribute to the assurance of the availability of log information, rather than to maintaining its integrity.
  19. Which of the following will prevent dangling tuples in a database?

    • Cyclic integrity
    • Domain integrity
    • Relational integrity
    • Referential integrity
    Referential integrity ensures that a foreign key in one table is either null or equal to the value of a primary key in the referenced table. For every tuple carrying a foreign key there should be a corresponding tuple in the referenced table; if this condition is not satisfied, a dangling tuple results. Cyclical checking is the control technique for the regular checking of accumulated data on a file against authorized source documentation; there is no cyclical integrity testing. Domain integrity testing ensures that a data item has a legitimate value in the correct range or set. Relational integrity is performed at the record level and is ensured by calculating and verifying specific fields.
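A foreign-key constraint is how a relational DBMS enforces this. A minimal sketch using SQLite (table names are hypothetical; note SQLite enforces foreign keys only when the pragma is enabled on the connection):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("PRAGMA foreign_keys = ON")  # SQLite enforces FKs only when enabled
con.execute("CREATE TABLE dept (id INTEGER PRIMARY KEY)")
con.execute("CREATE TABLE emp (id INTEGER PRIMARY KEY, "
            "dept_id INTEGER REFERENCES dept(id))")
con.execute("INSERT INTO dept VALUES (10)")
con.execute("INSERT INTO emp VALUES (1, 10)")  # parent tuple exists: accepted

rejected = False
try:
    con.execute("INSERT INTO emp VALUES (2, 99)")  # no dept 99: dangling tuple
except sqlite3.IntegrityError:
    rejected = True
print(rejected)
```

The second insert is rejected precisely because it would create a tuple whose foreign key references nothing, i.e., a dangling tuple.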
  20. The objective of concurrency control in a database system is to:

    • restrict updating of the database to authorized users.
    • prevent integrity problems when two processes attempt to update the same data at the same time.
    • prevent inadvertent or unauthorized disclosure of data in the database.
    • ensure the accuracy, completeness and consistency of data.
    Concurrency controls prevent data integrity problems, which can arise when two update processes access the same data item at the same time. Access controls restrict updating of the database to authorized users, and controls such as passwords prevent the inadvertent or unauthorized disclosure of data from the database. Quality controls, such as edits, ensure the accuracy, completeness and consistency of data maintained in the database.
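The lost-update problem that concurrency control prevents can be shown with a small sketch (a simplified analogy using an application-level lock rather than a DBMS's own mechanism): the lock serializes each read-modify-write so that two updates of the same data item cannot interleave.

```python
import threading

balance = 100
lock = threading.Lock()

def withdraw(amount):
    global balance
    with lock:  # serialize the read-modify-write; prevents lost updates
        current = balance
        balance = current - amount

threads = [threading.Thread(target=withdraw, args=(1,)) for _ in range(100)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(balance)  # every withdrawal applied exactly once
```

Without the lock, two threads could both read the same `current` value and one withdrawal would be silently lost; a DBMS achieves the same guarantee with locking or multiversion concurrency control.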