CISA : Certified Information Systems Auditor : Part 152

  1. Which of the following controls would provide the GREATEST assurance of database integrity?

    • Audit log procedures
    • Table link/reference checks
    • Query/table access time checks
    • Rollback and roll forward database features

    Explanation: 
    Performing table link/reference checks serves to detect table linking errors (which affect the completeness and accuracy of the database's contents), and thus provides the greatest assurance of database integrity. Audit log procedures enable recording of all events that have been identified and help in tracing them; however, they only point to the event and do not ensure the completeness or accuracy of the database's contents. Querying/monitoring table access times helps designers improve database performance, but not integrity. Rollback and roll forward database features ensure recovery from an abnormal disruption; they assure the integrity of the transaction that was being processed at the time of disruption, but provide no assurance on the integrity of the contents of the database.
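
    A minimal sketch of such a table link/reference check, using Python's built-in sqlite3 module, is shown below. The table and column names (customers, orders, customer_id) are hypothetical; a real check would iterate over every declared relationship in the schema.

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.executescript("""
            CREATE TABLE customers (customer_id INTEGER PRIMARY KEY, name TEXT);
            CREATE TABLE orders (order_id INTEGER PRIMARY KEY, customer_id INTEGER);
            INSERT INTO customers VALUES (1, 'Acme');
            INSERT INTO orders VALUES (100, 1), (101, 2);  -- order 101 points at a missing customer
        """)

        # Find detail rows whose reference does not resolve to a master row.
        orphans = conn.execute("""
            SELECT o.order_id
            FROM orders o
            LEFT JOIN customers c ON c.customer_id = o.customer_id
            WHERE c.customer_id IS NULL
        """).fetchall()

        print(orphans)  # [(101,)] -- a linking error an audit log alone would not reveal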

  2. An IS auditor analyzing the audit log of a database management system (DBMS) finds that some transactions were partially executed as a result of an error and were not rolled back. Which of the following transaction processing features has been violated?

    • Consistency
    • Isolation
    • Durability
    • Atomicity
    Explanation: 
    Atomicity guarantees that either the entire transaction is processed or none of it is. Consistency ensures that the database is in a legal state when the transaction begins and ends. Isolation means that, while in an intermediate state, the transaction's data are invisible to external operations. Durability guarantees that a successful transaction will persist and cannot be undone.
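
    A minimal sketch of atomicity at work, assuming a SQLite database (the accounts table and amounts are hypothetical): the first update succeeds, the second fails, and the rollback undoes the partial work, so the transaction takes effect entirely or not at all. The auditor's finding above was precisely the absence of this rollback.

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY,"
                     " balance INTEGER NOT NULL CHECK (balance >= 0))")
        conn.execute("INSERT INTO accounts VALUES (1, 100), (2, 50)")
        conn.commit()

        try:
            conn.execute("UPDATE accounts SET balance = balance - 60  WHERE id = 1")  # succeeds
            conn.execute("UPDATE accounts SET balance = balance - 100 WHERE id = 2")  # violates the CHECK
            conn.commit()
        except sqlite3.IntegrityError:
            conn.rollback()  # the partial update to account 1 is also undone

        print(conn.execute("SELECT balance FROM accounts ORDER BY id").fetchall())  # [(100,), (50,)]
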
  3. During maintenance of a relational database, several values of the foreign key in a transaction table have been corrupted. The consequence is that:

    • the detail of involved transactions may no longer be associated with master data, causing errors when these transactions are processed.
    • there is no way of reconstructing the lost information, except by deleting the dangling tuples and reentering the transactions.
    • the database will immediately stop execution and lose more information.
    • the database will no longer accept input data.
    Explanation: 
    When the foreign key of a transaction is corrupted or lost, the application system will normally be incapable of directly attaching the master data to the transaction data. This will normally cause the system to undertake a sequential search, slowing down processing. If the files involved are large, this slowdown will be unacceptable. Choice B is incorrect, since a system can recover the corrupted foreign key by reindexing the table. Choices C and D would not result from a corrupted foreign key.
  4. In a relational database with referential integrity, the use of which of the following keys would prevent deletion of a row from a customer table as long as the customer number of that row is stored with live orders on the orders table?

    • Foreign key
    • Primary key
    • Secondary key
    • Public key
    Explanation: 
    In a relational database with referential integrity, the use of foreign keys would prevent events such as primary key changes and record deletions that would leave orphaned relations within the database. It should not be possible to delete a row from a customer table when the customer number (primary key) of that row is stored with live orders in the orders table (where it serves as the foreign key to the customer table). A primary key works in one table only, so it cannot by itself provide or ensure referential integrity. Secondary keys that are not foreign keys are not subject to referential integrity checks. A public key is related to encryption and is not linked in any way to referential integrity.
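
    A minimal sketch of this behavior in SQLite, which enforces foreign keys only when the pragma is switched on; the schema is hypothetical, and the point is that the declared foreign key blocks deletion of a customer who still has a live order.

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.execute("PRAGMA foreign_keys = ON")  # SQLite enforces foreign keys per connection
        conn.executescript("""
            CREATE TABLE customers (customer_id INTEGER PRIMARY KEY, name TEXT);
            CREATE TABLE orders (
                order_id INTEGER PRIMARY KEY,
                customer_id INTEGER NOT NULL REFERENCES customers(customer_id)
            );
            INSERT INTO customers VALUES (1, 'Acme');
            INSERT INTO orders VALUES (100, 1);  -- a live order for customer 1
        """)

        try:
            conn.execute("DELETE FROM customers WHERE customer_id = 1")
        except sqlite3.IntegrityError as e:
            print("delete blocked:", e)  # FOREIGN KEY constraint failed
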
  5. When performing a database review, an IS auditor notices that some tables in the database are not normalized. The IS auditor should next:

    • recommend that the database be normalized.
    • review the conceptual data model.
    • review the stored procedures.
    • review the justification.
    Explanation: 
    If the database is not normalized, the IS auditor should review the justification since, in some situations, denormalization is recommended for performance reasons. The IS auditor should not recommend normalizing the database until further investigation takes place. Reviewing the conceptual data model or the stored procedures will not provide information about normalization.
  6. A database administrator has detected a performance problem with some tables, which could be solved through denormalization. This situation will increase the risk of:

    • concurrent access.
    • deadlocks.
    • unauthorized access to data.
    • a loss of data integrity.
    Explanation: 
    Normalization is the removal of redundant data elements from the database structure. Denormalizing a relational database reintroduces redundancy and, with it, the risk that the redundant copies will not be kept consistent, with a consequent loss of data integrity. Deadlocks are not caused by denormalization. Access to data is controlled by defining user rights to information and is not affected by denormalization.
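
    A minimal sketch of the risk, under the hypothetical assumption that a customer's name has been copied into the orders table for performance: an update applied to only one copy leaves the redundant copies inconsistent.

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.executescript("""
            CREATE TABLE customers (customer_id INTEGER PRIMARY KEY, name TEXT);
            CREATE TABLE orders (order_id INTEGER PRIMARY KEY, customer_id INTEGER,
                                 customer_name TEXT);  -- denormalized copy of customers.name
            INSERT INTO customers VALUES (1, 'Acme');
            INSERT INTO orders VALUES (100, 1, 'Acme');
            UPDATE customers SET name = 'Acme Ltd' WHERE customer_id = 1;  -- copy in orders missed
        """)

        print(conn.execute("""
            SELECT c.name, o.customer_name
            FROM orders o JOIN customers c USING (customer_id)
        """).fetchall())  # [('Acme Ltd', 'Acme')] -- the two copies now disagree
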
  7. An IS auditor finds that client requests were processed multiple times when received from different independent departmental databases, which are synchronized weekly. What would be the BEST recommendation?

    • Increase the frequency of data replication between the different department systems to ensure timely updates.
    • Centralize all request processing in one department to avoid parallel processing of the same request.
    • Change the application architecture so that common data is held in just one shared database for all departments.
    • Implement reconciliation controls to detect duplicates before orders are processed in the systems.
    Explanation: 
    Keeping the data in one place is the best way to ensure that data are stored without redundancy and that all users have the same data on their systems. Although increasing the frequency may help to minimize the problem, the risk of duplication cannot be eliminated completely because parallel data entry is still possible. Business requirements will most likely dictate where data processing activities are performed. Changing the business structure to solve an IT problem is not practical or politically feasible. Detective controls do not solve the problem of duplicate processing, and would require that an additional process be implemented to handle the discovered duplicates.
  8. Which of the following database controls would ensure that the integrity of transactions is maintained in an online transaction processing system’s database?

    • Authentication controls
    • Data normalization controls
    • Read/write access log controls
    • Commitment and rollback controls
    Explanation: 
    Commitment and rollback controls are directly relevant to integrity. These controls ensure that the database operations forming a logical transaction unit are completed in their entirety or not at all; i.e., if, for some reason, a transaction cannot be fully completed, then incomplete inserts/updates/deletes are rolled back so that the database returns to its pretransaction state. None of the other choices would address transaction integrity.
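
    A minimal sketch of a commitment and rollback control at the application level, assuming SQLite and a hypothetical ledger table: used as a context manager, the connection commits the logical transaction unit only if every statement succeeds, and otherwise rolls it back to its pretransaction state.

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE ledger (entry_id INTEGER PRIMARY KEY, amount INTEGER NOT NULL)")

        try:
            with conn:  # commits on success, rolls back on any exception
                conn.execute("INSERT INTO ledger VALUES (1, 500)")
                conn.execute("INSERT INTO ledger VALUES (1, -500)")  # duplicate key: the unit fails
        except sqlite3.IntegrityError:
            pass

        print(conn.execute("SELECT COUNT(*) FROM ledger").fetchone())  # (0,) -- all or nothing
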
  9. An IS auditor finds that, at certain times of the day, the data warehouse query performance decreases significantly. Which of the following controls would it be relevant for the IS auditor to review?

    • Permanent table-space allocation
    • Commitment and rollback controls
    • User spool and database limit controls
    • Read/write access log controls
    Explanation: 
    User spool limits restrict the space available for running user queries. This prevents poorly formed queries from consuming excessive system resources and impacting general query performance. Limiting the space available to users in their own databases prevents them from building excessively large tables. This helps to control space utilization, which itself helps performance by maintaining a buffer between the actual data volume stored and the physical device capacity. Additionally, it prevents users from consuming excessive resources in ad hoc table builds (as opposed to scheduled production loads, which can often run overnight and are optimized for performance). Because a data warehouse does not run online transactions, commitment and rollback controls have no impact on its performance. The other choices are not as likely to be the root cause of this performance issue.
  10. Which of the following is widely accepted as one of the critical components in networking management?

    • Configuration management
    • Topological mappings
    • Application of monitoring tools
    • Proxy server troubleshooting
    Explanation: 
    Configuration management is widely accepted as one of the key components of any network, since it establishes how the network will function both internally and externally. It also deals with the management of configuration and the monitoring of performance. Topological mappings merely provide outlines of the components of the network and its connectivity. The application of monitoring tools is not essential, and proxy server troubleshooting is, as its name implies, limited to troubleshooting purposes.
  11. Which of the following controls will MOST effectively detect the presence of bursts of errors in network transmissions?

    • Parity check
    • Echo check
    • Block sum check
    • Cyclic redundancy check
    Explanation: 
    The cyclic redundancy check (CRC) can check a block of transmitted data. The sending workstation generates the CRC and transmits it with the data; the receiving workstation computes its own CRC and compares it to the transmitted one. If the two are equal, the block is assumed to be error free. Unlike a parity check or an echo check, a CRC can detect multiple errors; in general, it detects all single-bit and double-bit errors. A parity check (known as a vertical redundancy check) involves adding a bit (the parity bit) to each character during transmission. In the presence of bursts of errors (i.e., impulse noise during high transmission rates), it has a reliability of approximately 50 percent, and at higher transmission rates this limitation is significant. Echo checks detect line errors by retransmitting data to the sending device for comparison with the original transmission.
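
    A minimal sketch of why a CRC catches bursts that parity misses, using Python's built-in CRC-32 (zlib.crc32) in place of the shorter CRC polynomials used on real links: a burst that flips an even number of bits within a character leaves its parity bit valid, while the CRC still changes.

        import zlib

        original  = bytes([0b01101001])  # four 1 bits -> even parity
        corrupted = bytes([0b01100110])  # burst flipped four bits; still four 1 bits

        parity = lambda data: bin(data[0]).count("1") % 2
        print(parity(original) == parity(corrupted))          # True  -> parity check passes (error missed)
        print(zlib.crc32(original) == zlib.crc32(corrupted))  # False -> the CRC detects the burst
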
  12. Which of the following types of firewalls provide the GREATEST degree and granularity of control?

    • Screening router
    • Packet filter
    • Application gateway
    • Circuit gateway
    Explanation: 
    The application gateway is similar to a circuit gateway, but it has specific proxies for each service. To handle web services, it has an HTTP proxy that acts as an intermediary between external and internal hosts, specifically for HTTP. This means that it not only checks the packet's IP addresses (layer 3) and the ports to which it is directed (in this case port 80, at layer 4), but also checks every HTTP command (layers 5 and 7). Therefore, it works in a more detailed (granular) way than the others. A screening router and a packet filter (choices A and B) work at the protocol, service and/or port level; that is, they analyze packets at layers 3 and 4, not at higher levels. A circuit gateway (choice D) is based on a proxy or program that acts as an intermediary between external and internal accesses: during an external access, instead of a single connection being opened to the internal server, two connections are established, one from the external server to the proxy (which forms the circuit gateway) and one from the proxy to the internal server. Layers 3 and 4 (IP and TCP) and some general features of higher-level protocols are used to perform these tasks.
  13. Which of the following is MOST directly affected by network performance monitoring tools?

    • Integrity
    • Availability
    • Completeness
    • Confidentiality
    Explanation:
    In case of a disruption in service, one of the key functions of network performance monitoring tools is to ensure that the information has remained unaltered. It is a function of security monitoring to assure confidentiality by using tools such as encryption. However, the most important aspect of network performance is assuring the ongoing connectivity on which the business depends. Therefore, the characteristic that benefits the most from network monitoring is availability.
  14. A review of wide area network (WAN) usage discovers that traffic on one communication line between sites, synchronously linking the master and standby database, peaks at 96 percent of the line capacity. An IS auditor should conclude that:

    • analysis is required to determine if a pattern emerges that results in a service loss for a short period of time.
    • WAN capacity is adequate for the maximum traffic demands since saturation has not been reached.
    • the line should immediately be replaced by one with a larger capacity to provide approximately 85 percent saturation.
    • users should be instructed to reduce their traffic demands or distribute them across all service hours to flatten bandwidth consumption.
    Explanation:
    The peak at 96 percent could be the result of a one-off incident, e.g., a user downloading a large amount of data; therefore, analysis to establish whether this is a regular pattern, and what causes it, should be carried out before expenditure on a larger line capacity is recommended. Since the link serves a standby database, a short loss of this service should be acceptable. If the peak is established to be a regular occurrence without any other opportunity for mitigation (use of a bandwidth reservation protocol or other means of prioritizing network traffic), the line should be replaced, as there is a risk of loss of service as traffic approaches 100 percent. If, however, the peak is a one-off or can be shifted to other time frames, then user education may be an option.
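
    A minimal sketch of the analysis step, assuming hourly utilization samples (percent of line capacity) are available from the network management system; the readings below are invented.

        samples = [41, 38, 96, 44, 40, 96, 39, 96, 42, 37]  # hypothetical percent-utilization readings

        peaks = [hour for hour, used in enumerate(samples) if used >= 95]
        print(f"{len(peaks)} near-saturation events at hours {peaks}")
        # Regularly recurring peaks suggest a scheduled job (a candidate for rescheduling
        # or prioritization); a single isolated peak suggests a one-off transfer and no
        # immediate need for a larger line.
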
  15. While reviewing the IT infrastructure, an IS auditor notices that storage resources are continuously being added. The IS auditor should:

    • recommend the use of disk mirroring.
    • review the adequacy of offsite storage.
    • review the capacity management process.
    • recommend the use of a compression algorithm.
    Explanation:
    Capacity management is the planning and monitoring of computer resources to ensure that available IT resources are used efficiently and effectively. Business criticality must be considered before recommending a disk mirroring solution, and offsite storage is unrelated to the problem. Though data compression may save disk space, it could affect system performance.
  16. In a small organization, an employee performs computer operations and, when the situation demands, program modifications. Which of the following should the IS auditor recommend?

    • Automated logging of changes to development libraries
    • Additional staff to provide separation of duties
    • Procedures that verify that only approved program changes are implemented
    • Access controls to prevent the operator from making program modifications
    Explanation:
    While strict separation of duties would be preferred, and additional staff could be recruited as suggested in choice B, this is not always possible in small organizations. An IS auditor must look at recommended alternative processes. Of the choices, C is the only practical one that has an impact. An IS auditor should recommend processes that detect changes to production source and object code, such as code comparisons, so that the changes can be reviewed on a regular basis by a third party. This would be a compensating control process. Choice A, logging of changes to development libraries, would not detect changes to production libraries. Choice D in effect requires a third party to make the changes, which may not be practical in a small organization.
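
    A minimal sketch of such a code-comparison process: hash every production source member and compare the digests against a baseline recorded at the last independent review. The library path, file extension and baseline file are hypothetical.

        import hashlib, json, pathlib

        PROD_SRC = pathlib.Path("/prod/src")      # hypothetical production source library
        BASELINE = pathlib.Path("baseline.json")  # digests recorded at the last review

        current = {p.name: hashlib.sha256(p.read_bytes()).hexdigest()
                   for p in sorted(PROD_SRC.glob("*.cbl"))}
        baseline = json.loads(BASELINE.read_text())

        changed = [name for name, digest in current.items() if baseline.get(name) != digest]
        print("members changed since the last review:", changed)  # for third-party follow-up
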
  17. Vendors have released patches fixing security flaws in their software. Which of the following should an IS auditor recommend in this situation?

    • Assess the impact of patches prior to installation. 
    • Ask the vendors for a new software version with all fixes included.
    • Install the security patch immediately.
    • Decline to deal with these vendors in the future.
    Explanation:
    The effect of installing the patch should be immediately evaluated, and installation should occur based on the results of the evaluation. Installing the patch without knowing what it might affect could easily cause problems. New software versions with all fixes included are not always available, and a full installation could be time consuming. Declining to deal with these vendors does not take care of the flaw.
  18. Which of the following controls would be MOST effective in ensuring that production source code and object code are synchronized?

    • Release-to-release source and object comparison reports
    • Library control software restricting changes to source code
    • Restricted access to source code and object code
    • Date and time-stamp reviews of source and object code
    Explanation:
    Date and time-stamp reviews of source and object code would ensure that the source code that was compiled matches the production object code. This is the most effective way to ensure that the approved production source code is the one that was compiled and is being used.
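
    A minimal sketch of a date and time-stamp review, assuming each production object module should carry a later time stamp than (i.e., have been compiled after) its source member; the directory layout and file extensions are hypothetical.

        import os, pathlib

        SRC = pathlib.Path("/prod/src")  # hypothetical source library
        OBJ = pathlib.Path("/prod/obj")  # hypothetical object library

        for src in sorted(SRC.glob("*.cbl")):
            obj = OBJ / (src.stem + ".obj")
            if not obj.exists() or os.path.getmtime(obj) < os.path.getmtime(src):
                # the source changed after the last compile: source and object are out of sync
                print("out of sync:", src.name)
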
  19. Change management procedures are established by IS management to:

    • control the movement of applications from the test environment to the production environment.
    • control the interruption of business operations from lack of attention to unresolved problems.
    • ensure the uninterrupted operation of the business in the event of a disaster.
    • verify that system changes are properly documented.
    Explanation:
    Change management procedures are established by IS management to control the movement of applications from the test environment to the production environment. Problem escalation procedures control the interruption of business operations from lack of attention to unresolved problems, and quality assurance procedures verify that system changes are authorized and tested.
  20. In regard to moving an application program from the test environment to the production environment, the BEST control would be to have the:

    • application programmer copy the source program and compiled object module to the production libraries.
    • application programmer copy the source program to the production libraries and then have the production control group compile the program.
    • production control group compile the object module to the production libraries using the source program in the test environment.
    • production control group copy the source program to the production libraries and then compile the program.
    Explanation:
    The best control would be provided by having the production control group copy the source program to the production libraries and then compile the program. This ensures that the object code running in production was compiled from the source code held in the production libraries, keeping source and object code synchronized, and it keeps the application programmer from having direct access to the production libraries.