SSCP: Systems Security Certified Practitioner (SSCP): Part 49

  1. Which of the following embodies all the detailed actions that personnel are required to follow?

    • Standards
    • Guidelines
    • Procedures
    • Baselines

    Explanation:

    Procedures are step-by-step instructions in support of the policies, standards, guidelines, and baselines. A procedure indicates how the policy will be implemented and who does what to accomplish the tasks.

    Standards is incorrect. Standards are mandatory statements of minimum requirements that support some part of a policy. The standards in this case are your own company's standards, not external standards such as the ISO standards.

    Guidelines is incorrect. “Guidelines are discretionary or optional controls used to enable individuals to make judgments with respect to security actions.”

    Baselines is incorrect. Baselines “are a minimum acceptable level of security. This minimum is implemented using specific rules necessary to implement the security controls in support of the policy and standards.” For example, requiring a password of at least 8 characters would be an example. Requiring all users to have a minimum of an antivirus, a personal firewall, and an anti-spyware tool could be another example.

    References:

    CBK, pp. 12 – 16. Note especially the discussion of the “hammer policy” on pp. 16-17 for the differences between policy, standard, guideline and procedure.
    AIO3, pp. 88-93.

  2. Which of the following is not a method to protect objects and the data within the objects?

    • Layering
    • Data mining
    • Abstraction
    • Data hiding
    Explanation:

    Data mining is used to reveal hidden relationships, patterns and trends by running queries on large data stores.

    Data mining is the act of collecting and analyzing large quantities of information to determine patterns of use or behavior and use those patterns to form conclusions about past, current, or future behavior. Data mining is typically used by large organizations with large databases of customer or consumer behavior. Retail and credit companies will use data mining to identify buying patterns or trends in geographies, age groups, products, or services. Data mining is essentially the statistical analysis of general information in the absence of specific data.
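    To make this concrete, here is a small, hedged sketch of the kind of query data mining runs against a large data store. The pandas library and the column names are illustrative assumptions, not from the referenced texts.

    ```python
    # Hypothetical purchase records; a real data store would hold millions of rows.
    import pandas as pd

    purchases = pd.DataFrame({
        "age_group": ["18-25", "18-25", "26-40", "26-40", "41-60"],
        "product":   ["phone", "phone", "laptop", "phone", "laptop"],
        "amount":    [500, 650, 1200, 700, 1100],
    })

    # Reveal a hidden pattern: which products dominate each age group,
    # and what is the typical spend?
    pattern = purchases.groupby(["age_group", "product"])["amount"].agg(["count", "mean"])
    print(pattern)
    ```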

    The following are incorrect answers:

    They are incorrect as they all apply to protecting objects and the data within them. Layering, abstraction, and data hiding are related concepts that can work together to produce modular software that implements an organization's security policies and is more reliable in operation.

    Layering is incorrect. Layering assigns specific functions to each layer and communication between layers is only possible through well-defined interfaces. This helps preclude tampering in violation of security policy. In computer programming, layering is the organization of programming into separate functional components that interact in some sequential and hierarchical way, with each layer usually having an interface only to the layer above it and the layer below it.

    Abstraction is incorrect. Abstraction “hides” the particulars of how an object functions or stores information and requires the object to be manipulated through well-defined interfaces that can be designed to enforce security policy. Abstraction involves the removal of characteristics from an entity in order to easily represent its essential properties.

    Data hiding is incorrect. Data hiding conceals the details of information storage and manipulation within an object by only exposing well-defined interfaces to the information rather than the information itself. For example, the details of how passwords are stored could be hidden inside a password object with exposed interfaces such as check_password, set_password, etc. When a password needs to be verified, the test password is passed to the check_password method and a boolean (true/false) result is returned to indicate if the password is correct without revealing any details of how/where the real passwords are stored. Data hiding maintains activities at different security levels to separate these levels from each other.
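    Since the explanation above describes a password object with check_password and set_password interfaces, a minimal sketch may help. The salted-hash storage scheme below is an assumption chosen for illustration; only the boolean result is ever exposed to callers.

    ```python
    import hashlib
    import os

    class PasswordStore:
        """Exposes set_password/check_password; hides how passwords are stored."""

        def __init__(self):
            self._salt = os.urandom(16)   # storage details are hidden from callers
            self._digest = None

        def set_password(self, password: str) -> None:
            self._digest = hashlib.sha256(self._salt + password.encode()).digest()

        def check_password(self, candidate: str) -> bool:
            # Returns only True/False; never reveals how or where passwords live.
            trial = hashlib.sha256(self._salt + candidate.encode()).digest()
            return trial == self._digest

    store = PasswordStore()
    store.set_password("s3cret")
    print(store.check_password("s3cret"))  # True
    print(store.check_password("guess"))   # False
    ```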

    The following reference(s) were used for this question:

    Hernandez CISSP, Steven (2012-12-21). Official (ISC)2 Guide to the CISSP CBK, Third Edition ((ISC)2 Press) (Kindle Locations 27535-27540). Auerbach Publications. Kindle Edition.
    and
    Hernandez CISSP, Steven (2012-12-21). Official (ISC)2 Guide to the CISSP CBK, Third Edition ((ISC)2 Press) (Kindle Locations 4269-4273). Auerbach Publications. Kindle Edition.

  3. What is the formal acceptance of the adequacy of a system’s overall security by management called?

    • Certification
    • Acceptance
    • Accreditation
    • Evaluation
    Explanation:

    Accreditation is the authorization by management to implement software or systems in a production environment. This authorization may be either provisional or full.

    The following are incorrect answers:

    Certification is incorrect. Certification is the process of evaluating the security stance of the software or system against a selected set of standards or policies. Certification is the technical evaluation of a product. This may precede accreditation but is not a required precursor.

    Acceptance is incorrect. This term is sometimes used as the recognition that a piece of software or system has met a set of functional or service-level criteria (the new payroll system has passed its acceptance test). Certification is the better term in this context.

    Evaluation is incorrect. Evaluation is certainly a part of the certification process but it is not the best answer to the question.

    Reference(s) used for this question:
    The Official Study Guide to the CBK from ISC2, pages 559-560

    AIO3, pp. 314 – 317
    AIOv4 Security Architecture and Design (pages 369 – 372)
    AIOv5 Security Architecture and Design (pages 370 – 372)

  4. What is it called when a computer uses more than one CPU in parallel to execute instructions?

    • Multiprocessing
    • Multitasking
    • Multithreading
    • Parallel running
    Explanation:

    A system with multiple processors is called a multiprocessing system.

    Multitasking is incorrect. Multitasking involves sharing the processor among all ready processes. Though it appears to the user that multiple processes are executing at the same time, only one process is running at any point in time.

    Multithreading is incorrect. The developer can structure a program as a collection of independent threads to achieve better concurrency. For example, one thread of a program might be performing a calculation while another is waiting for additional input from the user.

    “Parallel running” is incorrect. This is not a real term and is just a distraction.
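    The distinction between these terms can be illustrated with a short, hedged sketch using only the Python standard library; the workload function is an invented stand-in for any CPU-bound task.

    ```python
    import concurrent.futures
    import math

    def cpu_bound(n):
        # A deliberately heavy calculation that keeps one CPU core busy.
        return sum(math.isqrt(i) for i in range(n))

    if __name__ == "__main__":
        jobs = [2_000_000] * 4

        # Multiprocessing: each task runs on its own CPU in parallel.
        with concurrent.futures.ProcessPoolExecutor() as pool:
            print(list(pool.map(cpu_bound, jobs)))

        # Multithreading: threads share one process; in CPython only one thread
        # executes Python bytecode at a time, so CPU-bound threads interleave
        # (as in multitasking) rather than truly run in parallel.
        with concurrent.futures.ThreadPoolExecutor() as pool:
            print(list(pool.map(cpu_bound, jobs)))
    ```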

    References:

    CBK, pp. 315-316
    AIO3, pp. 234 – 239

  5. What can be defined as an abstract machine that mediates all access to objects by subjects to ensure that subjects have the necessary access rights and to protect objects from unauthorized access?

    • The Reference Monitor
    • The Security Kernel
    • The Trusted Computing Base
    • The Security Domain
    Explanation:

    The reference monitor refers to an abstract machine that mediates all access to objects by subjects.

    This question is asking for the concept that governs access by subjects to objects, thus the reference monitor is the best answer. While the security kernel is similar in nature, it is what actually enforces the concepts outlined in the reference monitor.

    In operating systems architecture a reference monitor concept defines a set of design requirements on a reference validation mechanism, which enforces an access control policy over subjects’ (e.g., processes and users) ability to perform operations (e.g., read and write) on objects (e.g., files and sockets) on a system. The properties of a reference monitor are:

    The reference validation mechanism must always be invoked (complete mediation). Without this property, it is possible for an attacker to bypass the mechanism and violate the security policy.
    The reference validation mechanism must be tamperproof (tamperproof). Without this property, an attacker can undermine the mechanism itself so that the security policy is not correctly enforced.
    The reference validation mechanism must be small enough to be subject to analysis and tests, the completeness of which can be assured (verifiable). Without this property, the mechanism might be flawed in such a way that the policy is not enforced.
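    A minimal sketch of the concept follows, assuming a toy in-process policy table; a real reference validation mechanism lives in the kernel or hardware, not in application code, and the names below are illustrative.

    ```python
    class ReferenceMonitor:
        """Mediates every access by a subject to an object (complete mediation)."""

        def __init__(self, policy):
            # policy maps (subject, object) -> set of permitted operations
            self._policy = dict(policy)  # kept private, loosely echoing tamperproofness

        def access(self, subject, obj, operation):
            # Every request must pass through this single check.
            if operation not in self._policy.get((subject, obj), set()):
                raise PermissionError(f"{subject} may not {operation} {obj}")
            return f"{subject} performed {operation} on {obj}"

    monitor = ReferenceMonitor({("alice", "payroll.db"): {"read"}})
    print(monitor.access("alice", "payroll.db", "read"))  # permitted
    # monitor.access("alice", "payroll.db", "write")      # raises PermissionError
    ```

    Note that the class is small enough to inspect in full, loosely echoing the verifiability property, though real verification is far more rigorous.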

    For example, Windows 3.x and 9x operating systems were not built with a reference monitor, whereas the Windows NT line, which also includes Windows 2000 and Windows XP, was designed to contain a reference monitor, although it is not clear that its properties (tamperproof, etc.) have ever been independently verified, or what level of computer security it was intended to provide.

    The claim is that a reference validation mechanism that satisfies the reference monitor concept will correctly enforce a system’s access control policy, as it must be invoked to mediate all security-sensitive operations, must not be tampered with, and has undergone complete analysis and testing to verify correctness. The abstract model of a reference monitor has been widely applied to any type of system that needs to enforce access control, and is considered to express the necessary and sufficient properties for any system making this security claim.

    According to Ross Anderson, the reference monitor concept was introduced by James Anderson in an influential 1972 paper.

    Systems evaluated at B3 and above by the Trusted Computer System Evaluation Criteria (TCSEC) must enforce the reference monitor concept.

    The reference monitor, as defined in AIO V5 (Harris) is: “an access control concept that refers to an abstract machine that mediates all access to objects by subjects.”

    The security kernel, as defined in AIO V5 (Harris) is: “the hardware, firmware, and software elements of a trusted computing base (TCB) that implement the reference monitor concept. The kernel must mediate all access between subjects and objects, be protected from modification, and be verifiable as correct.”

    The trusted computing base (TCB), as defined in AIO V5 (Harris) is: “all of the protection mechanisms within a computer system (software, hardware, and firmware) that are responsible for enforcing a security policy.”

    The security domain “builds upon the definition of domain (a set of resources available to a subject) by adding the fact that resources within this logical structure (domain) are working under the same security policy and managed by the same group.”

    The following answers are incorrect:

    “The security kernel” is incorrect. One of the places a reference monitor could be implemented is in the security kernel but this is not the best answer.

    “The trusted computing base” is incorrect. The reference monitor is an important concept in the TCB but this is not the best answer.

    “The security domain” is incorrect. The reference monitor is an important concept in the security domain but this is not the best answer.

    Reference(s) used for this question:
    Official ISC2 Guide to the CBK, page 324

    AIO Version 3, pp. 272 – 274
    AIOv4 Security Architecture and Design (pages 327 – 328)
    AIOv5 Security Architecture and Design (pages 330 – 331)

    Wikipedia article at https://en.wikipedia.org/wiki/Reference_monitor

  6. Who should DECIDE how a company should approach security and what security measures should be implemented?

    • Senior management
    • Data owner
    • Auditor
    • The information security specialist
    Explanation:

    Senior management is responsible for the security of the organization and the protection of its assets.

    The following answers are incorrect because :

    Data owner is incorrect as data owners should not decide what security measures should be applied.

    Auditor is also incorrect as the auditor cannot decide what security measures should be applied.

    The information security specialist is also incorrect as they may have the technical knowledge of how security measures should be implemented and configured, but they should not be in a position of deciding what measures should be applied.

    Reference: Shon Harris AIO v3, Chapter 3: Security Management Practices, Page 51.

  7. Which of the following is responsible for MOST of the security issues?

    • Outside espionage
    • Hackers
    • Personnel
    • Equipment failure
    Explanation:

    Personnel cause more security issues than hacker attacks, outside espionage, or equipment failure.

    The following answers are incorrect because:

    Outside espionage is incorrect as it is not the best answer.
    Hackers is also incorrect as it is not the best answer.
    Equipment failure is also incorrect as it is not the best answer.
    Reference: Shon Harris AIO v3, Chapter 3: Security Management Practices, Page 56

  8. Which of the following is BEST defined as a physical control?

    • Monitoring of system activity
    • Fencing
    • Identification and authentication methods
    • Logical access control mechanisms
    Explanation:

    Physical controls are items put into place to protect facility, personnel, and resources. Examples of physical controls are security guards, locks, fencing, and lighting.

    The following answers are incorrect answers:

    Monitoring of system activity is considered to be an administrative control.

    Identification and authentication methods are considered to be a technical control.

    Logical access control mechanisms are also considered to be a technical control.

    Reference(s) used for this question:
    Harris, Shon (2012-10-25). CISSP All-in-One Exam Guide, 6th Edition (Kindle Locations 1280-1282). McGraw-Hill. Kindle Edition.

  9. Which of the following is given the responsibility of the maintenance and protection of the data?

    • Data owner
    • Data custodian
    • User
    • Security administrator
    Explanation:

    The data custodian is usually responsible for maintaining and protecting the data.

    The following answers are incorrect:

    Data owner is usually a member of management, in charge of a specific business unit, and is ultimately responsible for the protection and use of the information.

    User is any individual who routinely uses the data for work-related tasks.

    Security administrator’s tasks include creating new system user accounts and implementing new security software.
    References: Shon Harris AIO v3, Chapter 3: Security Management Practices, Pages 99-103

  10. According to private sector data classification levels, how would salary levels and medical information be classified?

    • Public.
    • Internal Use Only.
    • Restricted.
    • Confidential.
    Explanation:

    Typically there are three to four levels of information classification used by most organizations:
    Confidential: Information that, if released or disclosed outside of the organization, would create severe problems for the organization. For example, information that provides a competitive advantage, is important to the technical or financial success of the organization (like trade secrets, intellectual property, or research designs), or protects the privacy of individuals would be considered confidential. Information may include payroll information, health records, credit information, formulas, technical designs, restricted regulatory information, senior management internal correspondence, or business strategies or plans. These may also be called top secret, privileged, personal, sensitive, or highly confidential. In other words, this information is acceptable within a defined group in the company, such as marketing or sales, but is not suited for release to anyone else in the company without permission.

    The following answers are incorrect:

    Public: Information that may be disclosed to the general public without concern for harming the company, employees, or business partners. No special protections are required, and information in this category is sometimes referred to as unclassified. For example, information that is posted to a company’s public Internet site, publicly released announcements, marketing materials, cafeteria menus, and any internal documents that would not present harm to the company if they were disclosed would be classified as public. While there is little concern for confidentiality, integrity and availability should be considered.

    Internal Use Only: Information that could be disclosed within the company, but could harm the company if disclosed externally. Information such as customer lists, vendor pricing, organizational policies, standards and procedures, and internal organization announcements would need baseline security protections, but do not rise to the level of protection as confidential information. In other words, the information may be used freely within the company but any unapproved use outside the company can pose a chance of harm.

    Restricted: Information that requires the utmost protection or, if discovered by unauthorized personnel, would cause irreparable harm to the organization would have the highest level of classification. There may be very few pieces of information like this within an organization, but data classified at this level requires all the access control and protection mechanisms available to the organization. Even when information classified at this level exists, there will be few copies of it.

    Reference(s) Used for this question:
    Hernandez CISSP, Steven (2012-12-21). Official (ISC)2 Guide to the CISSP CBK, Third Edition ((ISC)2 Press) (Kindle Locations 952-976). Auerbach Publications. Kindle Edition.

  11. Which of the following would be the best criterion to consider in determining the classification of an information asset?

    • Value
    • Age
    • Useful life
    • Personal association
    Explanation:

    Information classification should be based on the value of the information to the organization and its sensitivity (reflection of how much damage would accrue due to disclosure).

    Age is incorrect. While age might be a consideration in some cases, the guiding principles should be value and sensitivity.

    Useful life is incorrect. While useful lifetime is relevant to how long data protections should be applied, the classification is based on information value and sensitivity.

    Personal association is incorrect. Information classification decisions should be based on the value of the information and its sensitivity.

    References
    CBK, pp. 101 – 102.

  12. What are the three FUNDAMENTAL principles of security?

    • Accountability, confidentiality and integrity
    • Confidentiality, integrity and availability
    • Integrity, availability and accountability
    • Availability, accountability and confidentiality
    Explanation:

    The three fundamental principles of security, often called the CIA triad, are confidentiality, integrity, and availability.

    The following answers are incorrect because:

    Accountability, confidentiality and integrity is not the correct answer, as accountability is not one of the fundamental principles of security.

    Integrity, availability and accountability is not the correct answer, for the same reason.

    Availability, accountability and confidentiality is not the correct answer, again because accountability is not one of the fundamental principles of security.

    References: Shon Harris AIO v3, Chapter 3: Security Management Practices, Pages 49-52

  13. Within the context of the CBK, which of the following provides a MINIMUM level of security ACCEPTABLE for an environment?

    • A baseline
    • A standard
    • A procedure
    • A guideline
    Explanation:

    Baselines provide the minimum level of security necessary throughout the organization.

    Standards specify how hardware and software products should be used throughout the organization.

    Procedures are detailed step-by-step instruction on how to achieve certain tasks.

    Guidelines are recommended actions and operational guides to personnel when a specific standard does not apply.
    Source: HARRIS, Shon, All-In-One CISSP Certification Exam Guide, McGraw-Hill/Osborne, 2002, chapter 3: Security Management Practices (page 94).

  14. One of these statements about the key elements of a good configuration process is NOT true:

    • Accommodate the reuse of proven standards and best practices
    • Ensure that all requirements remain clear, concise, and valid
    • Control modifications to system hardware in order to prevent resource changes
    • Ensure changes, standards, and requirements are communicated promptly and precisely
    Explanation:

    Configuration management isn’t about preventing change; it is about ensuring the integrity of IT resources by preventing unauthorized or improper changes.

    According to the Official ISC2 guide to the CISSP exam, a good CM process is one that can:

    (1) accommodate change;
    (2) accommodate the reuse of proven standards and best practices;
    (3) ensure that all requirements remain clear, concise, and valid;
    (4) ensure changes, standards, and requirements are communicated promptly and precisely; and
    (5) ensure that the results conform to each instance of the product.

    Configuration management
    Configuration management (CM) is the detailed recording and updating of information that describes an enterprise’s computer systems and networks, including all hardware and software components. Such information typically includes the versions and updates that have been applied to installed software packages and the locations and network addresses of hardware devices. Special configuration management software is available. When a system needs a hardware or software upgrade, a computer technician can access the configuration management program and database to see what is currently installed. The technician can then make a more informed decision about the upgrade needed.

    An advantage of a configuration management application is that the entire collection of systems can be reviewed to make sure any changes made to one system do not adversely affect any of the other systems.

    Configuration management is also used in software development, where it is called Unified Configuration Management (UCM). Using UCM, developers can keep track of the source code, documentation, problems, changes requested, and changes made.
    Change management
    In a computer system environment, change management refers to a systematic approach to keeping track of the details of the system (for example, what operating system release is running on each computer and which fixes have been applied).

  15. Which of the following is based on the premise that the quality of a software product is a direct function of the quality of its associated software development and maintenance processes?

    • The Software Capability Maturity Model (CMM)
    • The Spiral Model
    • The Waterfall Model
    • Expert Systems Model
    Explanation:

    The Capability Maturity Model (CMM) is a service mark owned by Carnegie Mellon University (CMU) and refers to a development model elicited from actual data. The data was collected from organizations that contracted with the U.S. Department of Defense, who funded the research, and became the foundation from which CMU created the Software Engineering Institute (SEI). Like any model, it is an abstraction of an existing system.

    The Capability Maturity Model (CMM) is a methodology used to develop and refine an organization’s software development process. The model describes a five-level evolutionary path of increasingly organized and systematically more mature processes. CMM was developed and is promoted by the Software Engineering Institute (SEI), a research and development center sponsored by the U.S. Department of Defense (DoD). SEI was founded in 1984 to address software engineering issues and, in a broad sense, to advance software engineering methodologies. More specifically, SEI was established to optimize the process of developing, acquiring, and maintaining heavily software-reliant systems for the DoD. Because the processes involved are equally applicable to the software industry as a whole, SEI advocates industry-wide adoption of the CMM.

    The CMM is similar to ISO 9001, one of the ISO 9000 series of standards specified by the International Organization for Standardization (ISO). The ISO 9000 standards specify an effective quality system for manufacturing and service industries; ISO 9001 deals specifically with software development and maintenance. The main difference between the two systems lies in their respective purposes: ISO 9001 specifies a minimal acceptable quality level for software processes, while the CMM establishes a framework for continuous process improvement and is more explicit than the ISO standard in defining the means to be employed to that end.
    CMM’s Five Maturity Levels of Software Processes

    At the initial level, processes are disorganized, even chaotic. Success is likely to depend on individual efforts, and is not considered to be repeatable, because processes would not be sufficiently defined and documented to allow them to be replicated.
    At the repeatable level, basic project management techniques are established, and successes could be repeated, because the requisite processes would have been established, defined, and documented.
    At the defined level, an organization has developed its own standard software process through greater attention to documentation, standardization, and integration.
    At the managed level, an organization monitors and controls its own processes through data collection and analysis.
    At the optimizing level, processes are constantly being improved through monitoring feedback from current processes and introducing innovative processes to better serve the organization’s particular needs.

    When it is applied to an existing organization’s software development processes, it allows an effective approach toward improving them. Eventually it became clear that the model could be applied to other processes. This gave rise to a more general concept that is applied to business processes and to developing people.
    CMM is superseded by CMMI

    The CMM model proved useful to many organizations, but its application in software development has sometimes been problematic. Applying multiple models that are not integrated within and across an organization could be costly in terms of training, appraisals, and improvement activities. The Capability Maturity Model Integration (CMMI) project was formed to sort out the problem of using multiple CMMs.

    For software development processes, the CMM has been superseded by Capability Maturity Model Integration (CMMI), though the CMM continues to be a general theoretical process capability model used in the public domain.
    CMM is adapted to processes other than software development

    The CMM was originally intended as a tool to evaluate the ability of government contractors to perform a contracted software project. Though it comes from the area of software development, it can be, has been, and continues to be widely applied as a general model of the maturity of processes (e.g., IT Service Management processes) in IS/IT (and other) organizations.

    Source:

    http://searchsoftwarequality.techtarget.com/sDefinition/0,,sid92_gci930057,00.html
    and
    http://en.wikipedia.org/wiki/Capability_Maturity_Model

  16. Which of the following determines that the product developed meets the project’s goals?

    • verification
    • validation
    • concurrence
    • accuracy
    Explanation:

    Software Development Verification vs. Validation:

    Verification determines if the product accurately represents and meets the design specifications given to the developers. A product can be developed that does not match the original specifications. This step ensures that the specifications are properly met and closely followed by the development team.

    Validation determines if the product provides the necessary solution for the intended real-world problem. It validates whether or not the final product is what the user expected in the first place and whether or not it solves the problem it was intended to solve. In large projects, it is easy to lose sight of the overall goal. This exercise ensures that the main goal of the project is met.
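    A hedged sketch of the distinction in code terms; the specification and function below are invented for illustration.

    ```python
    # Design specification (assumed): "monthly_rate returns the annual rate / 12".
    def monthly_rate(annual_rate: float) -> float:
        return annual_rate / 12

    # Verification: does the product match the specification given to developers?
    assert monthly_rate(0.12) == 0.01

    # Validation asks a different question: does the product solve the user's
    # real-world problem? That is typically answered by acceptance testing with
    # users (e.g., do the quotes the system produces match what customers were
    # actually promised?), not by unit tests alone.
    ```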

    From DITSCAP:
    6.3.2. Phase 2, Verification. The Verification phase shall include activities to verify compliance of the system with previously agreed security requirements. For each life-cycle development activity, DoD Directive 5000.1 (reference (i)), there is a corresponding set of security activities, enclosure 3, that shall verify compliance with the security requirements and evaluate vulnerabilities.

    6.3.3. Phase 3, Validation. The Validation phase shall include activities to evaluate the fully integrated system to validate system operation in a specified computing environment with an acceptable level of residual risk. Validation shall culminate in an approval to operate.

    NOTE:
    DIACAP has replaced DITSCAP, but the definitions above are still valid and applicable for the purposes of the exam.

    Reference(s) used for this question:
    Harris, Shon (2012-10-25). CISSP All-in-One Exam Guide, 6th Edition (p. 1106). McGraw-Hill. Kindle Edition.
    and
    http://iase.disa.mil/ditscap/DITSCAP.html

  17. A ‘Pseudo flaw’ is which of the following?

    • An apparent loophole deliberately implanted in an operating system program as a trap for intruders.
    • An omission when generating Pseudo-code.
    • Used for testing for bounds violations in application programming.
    • A normally generated page fault causing the system to halt.
    Explanation:

    A Pseudo flaw is something that looks like it is vulnerable to attack, but really acts as an alarm or triggers automatic actions when an intruder attempts to exploit the flaw.
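    A hedged sketch of the idea follows, with an invented decoy account name and a plain logging alert standing in for a real alarm or automated response:

    ```python
    import logging

    logging.basicConfig(level=logging.WARNING)

    def login(username: str, password: str) -> bool:
        """An apparently forgotten backdoor account is planted as a pseudo flaw;
        any attempt to use it is treated as evidence of an intrusion."""
        if username == "debug_admin":  # the deliberately implanted 'loophole'
            logging.warning("Pseudo flaw triggered by %r; alerting security", username)
            # A real system might lock the source IP or page an analyst here.
            return False  # access is never actually granted
        return False      # normal authentication would happen elsewhere

    login("debug_admin", "anything")  # fires the alarm
    ```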

    The following answers are incorrect:

    An omission when generating Pseudo-code is incorrect because it is a distractor.
    Used for testing for bounds violations in application programming is incorrect; this is a testing methodology.
    A normally generated page fault causing the system to halt is incorrect because it is a distractor.

  18. Which of the following is considered the weakest link in a security system?

    • People
    • Software
    • Communications
    • Hardware
    Explanation:

    The Answer: People. The other choices can be strengthened and counted on (for the most part) to remain consistent if properly protected. People are fallible and unpredictable. Most security intrusions are caused by employees. People get tired, careless, and greedy. They are not always reliable and may falter in following defined guidelines and best practices. Security professionals must install adequate prevention and detection controls and properly train all system users. Proper hiring and firing practices can eliminate certain risks. Security awareness training is key to ensuring people are aware of risks and their responsibilities.

    The following answers are incorrect: Software. Although software exploits are a major threat and cause for concern, people are the weakest point in a security posture. Software can be removed, upgraded, or patched to reduce risk.

    Communications. Although many attacks from inside and outside an organization use communication methods such as the network infrastructure, this is not the weakest point in a security posture. Communications can be monitored, devices installed or upgraded to reduce risk and react to attack attempts.

    Hardware. Hardware components can be a weakness in a security posture, but they are not the weakest link of the choices provided. Access to hardware can be minimized by such measures as installing locks and monitoring access in and out of certain areas.

    The following reference(s) were/was used to create this question:

    Shon Harris AIO v3, pp. 19, 107-109
    ISC2 OIG 2007, pp. 51-55

  19. Which of the following is NOT a basic component of security architecture?

    • Motherboard
    • Central Processing Unit (CPU)
    • Storage Devices
    • Peripherals (input/output devices)
    Explanation:
    The CPU, storage devices, and peripherals each have specialized roles in the security architecture. The CPU, or microprocessor, is the brains behind a computer system and performs calculations as it solves problems and performs system tasks. Storage devices provide both long- and short-term storage of information that the CPU has either processed or may process. Peripherals (scanners, printers, modems, etc.) are devices that either input data or receive the data output by the CPU.

    The motherboard is the main circuit board of a microcomputer and contains the connectors for attaching additional boards. Typically, the motherboard contains the CPU, BIOS, memory, mass storage interfaces, serial and parallel ports, expansion slots, and all the controllers required to control standard peripheral devices.

    Reference(s) used for this question:
    TIPTON, Harold F., The Official (ISC)2 Guide to the CISSP CBK (2007), page 308.

  20. Which of the following is a set of data processing elements that increases the performance in a computer by overlapping the steps of different instructions?

    • pipelining
    • complex-instruction-set-computer (CISC)
    • reduced-instruction-set-computer (RISC)
    • multitasking
    Explanation:

    Pipelining is a natural concept in everyday life, e.g. on an assembly line. Consider the assembly of a car: assume that certain steps in the assembly line are to install the engine, install the hood, and install the wheels (in that order, with arbitrary interstitial steps). A car on the assembly line can have only one of the three steps done at once. After the car has its engine installed, it moves on to having its hood installed, leaving the engine installation facilities available for the next car. The first car then moves on to wheel installation, the second car to hood installation, and a third car begins to have its engine installed. If engine installation takes 20 minutes, hood installation takes 5 minutes, and wheel installation takes 10 minutes, then finishing all three cars when only one car can be assembled at once would take 105 minutes. On the other hand, using the assembly line, the total time to complete all three is 75 minutes. At this point, additional cars will come off the assembly line at 20 minute increments.

    In computing, a pipeline is a set of data processing elements connected in series, so that the output of one element is the input of the next one. The elements of a pipeline are often executed in parallel or in time-sliced fashion; in that case, some amount of buffer storage is often inserted between elements. Pipelining is used in processors to allow overlapping execution of multiple instructions within the same circuitry. The circuitry is usually divided into stages, including instruction decoding, arithmetic, and register fetching stages, wherein each stage processes one instruction at a time.
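    The assembly-line arithmetic above can be reproduced with a few lines; the closed-form formula used below holds for this example because the slowest (bottleneck) stage paces the line.

    ```python
    stages = [20, 5, 10]   # engine, hood, wheels (minutes)
    cars = 3

    # Without pipelining, each car occupies the whole line by itself.
    sequential = cars * sum(stages)                      # 3 * 35 = 105 minutes

    # With pipelining, the first car takes the full 35 minutes and each
    # subsequent car emerges one bottleneck interval (20 minutes) later.
    pipelined = sum(stages) + (cars - 1) * max(stages)   # 35 + 2*20 = 75 minutes

    print(sequential, pipelined)  # 105 75
    ```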

    The following were not correct answers:

    CISC: a CPU design where single instructions execute several low-level operations (such as a load from memory, an arithmetic operation, and a memory store) within a single instruction.

    RISC: a CPU design based on simplified instructions that can provide higher performance, as the simplicity enables much faster execution of each instruction.

    Multitasking: a method by which multiple tasks share common processing resources, such as a CPU, through fast scheduling that gives the appearance of parallelism, although in reality only one task is being performed at any one time.

    Reference:
    KRUTZ, Ronald L. & VINES, Russel D., The CISSP Prep Guide: Mastering the Ten Domains of Computer Security, pages 188-189.
    Also see
    http://en.wikipedia.org/wiki/Pipeline_(computing)
