350-601 : Implementing and Operating Cisco Data Center Core Technologies (DCCOR) : Part 05

  1. An application receives the following output after querying an API by using an HTTPS GET method:

    {
      "id": 12345,
      "fname": "John",
      "lname": "Doe",
      "group": "Accounting",
      "role": "Receivables"
    }

    Which of the following is the architecture upon which the API is most likely constructed?

    • the Java API
    • GraphQL
    • a REST API
    • a SOAP API

    Explanation:

    Most likely, representational state transfer (REST) is the architecture upon which the Application Programming Interface (API) in this scenario is constructed. Open APIs, such as REST APIs, can be used to enable services such as billing automation and centralized management of cloud infrastructure.
    REST is an API architecture that uses Hypertext Transfer Protocol (HTTP) or HTTP Secure (HTTPS) to enable external resources to access and make use of programmatic methods that are exposed by the API. For example, a web application that retrieves user product reviews from an online marketplace for display on third-party websites might obtain those reviews by using methods provided in an API that is developed and maintained by that marketplace. REST APIs can return data in Extensible Markup Language (XML) format or in JavaScript Object Notation (JSON) format. The output in this scenario is in JSON format.
    It is not likely that the API in this scenario was constructed as a Simple Object Access Protocol (SOAP) API. SOAP is an older API messaging protocol that uses HTTP and XML to enable communication between devices. SOAP APIs are typically more resource-intensive than REST APIs and, therefore, slower. Unlike REST APIs, SOAP APIs do not return JSON-formatted output.
    It is not likely that the API in this scenario was constructed by using Graph Query Language (GraphQL). GraphQL is an API query language and a runtime that is intended to lower the burden of making multiple API calls to obtain a single set of data. For example, data that requires three or four HTTP GET requests when constructed from a standard REST API might take only one request when using GraphQL. Similar to a REST API, GraphQL output is in JSON format. Although GraphQL can use HTTP or HTTPS, it is not limited to those protocols.
    It is not likely that the API in this scenario was constructed by using the Java API. Unlike SOAP, REST, and GraphQL, the Java API is typically accessed by Java applications that are running in the Java virtual machine (VM), which is the Java component that executes compiled Java programs. The Java API is a collection of Java classes that developers can use for data collection or to build interfaces.
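    Because REST APIs commonly return JSON bodies like the one in the question, the response can be parsed with any standard JSON library. A minimal Python sketch; the payload below mirrors the fields from the scenario:

```python
import json

# JSON text mirroring the API response from the scenario
response_body = """
{
  "id": 12345,
  "fname": "John",
  "lname": "Doe",
  "group": "Accounting",
  "role": "Receivables"
}
"""

# Parse the JSON text into a Python dictionary and read fields from it
record = json.loads(response_body)
print(record["lname"])  # Doe
print(record["role"])   # Receivables
```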

  2. A user retrieves data in JSON format from an application by submitting a GET request on TCP port 80.

    Which of the following technologies are most likely in use? (Choose two.)

    • XML
    • SOAP
    • HTTPS
    • HTTP
    • REST API

    Explanation:

    Most likely, representational state transfer (REST) Application Programming Interface (API) and Hypertext Transfer Protocol (HTTP) are in use if a user retrieves data in JavaScript Object Notation (JSON) format from an application by submitting a GET request on Transmission Control Protocol (TCP) port 80. REST is an API architecture that uses HTTP or HTTP Secure (HTTPS) to enable external resources to access and make use of programmatic methods that are exposed by the API. In this way, users can interact with specific portions of a data structure from a remote system. By default, HTTP operates on TCP port 80. A GET request is an HTTP method of retrieving information from an HTTP server.
    It is not likely that HTTPS is in use in this scenario, because the TCP port on which the GET request is being made is the HTTP port. If the encrypted HTTPS protocol was being used in this scenario, the TCP port on which the request is being made would most likely be TCP port 443. By default, HTTPS servers listen for traffic on TCP port 443. On Cisco Application Policy Infrastructure Controller (APIC) devices, HTTPS, not HTTP, is enabled by default. It is possible to enable HTTP on an APIC device. However, HTTP is less secure than HTTPS and is therefore not recommended for that purpose.
    It is not likely that Simple Object Access Protocol (SOAP) API is being used in this scenario, because SOAP APIs return output in XML format, not JSON. SOAP is an older API messaging protocol that uses HTTP and XML to enable communication between devices. SOAP APIs are typically more resource-intensive than more modern APIs and, therefore, slower. Open APIs can be used to enable services such as billing automation and centralized management of cloud infrastructure.
    Extensible Markup Language (XML) is not in use in this scenario. XML is an output format that is supported by REST APIs. However, in this scenario, the user has retrieved the data in JSON format.
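    The port-number reasoning above can be checked against Python's standard HTTP client, which encodes the default TCP ports for HTTP and HTTPS. A small sketch; no network connection is made:

```python
import http.client

# A GET request on TCP port 80 implies plain HTTP;
# HTTPS would instead default to TCP port 443.
print(http.client.HTTPConnection.default_port)   # 80
print(http.client.HTTPSConnection.default_port)  # 443
```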

  3. You want to allow remote users to log in to a Nexus 7000 Series switch nondefault VDC by using TACACS+. The TACACS+ configuration has been previously completed on the switch. You issue the following commands:

    switchto vdc MyVDC

    configure terminal

    aaa user default-role

    exit

    copy running-config startup-config

    Which of the following user roles will occur when a remote user logs in to the VDC named MyVDC by using TACACS+?

    • The user will be assigned the vdc-operator role.
    • The user will be assigned the network-admin role.
    • The user will not be assigned a role and will be denied login.
    • The user will be assigned the vdc-admin role.
    • The user will be assigned the network-operator role.

    Explanation:

    The user will be assigned the vdc-operator role when the remote user logs in by using Terminal Access Controller Access-Control System Plus (TACACS+) in this scenario. The vdc-operator role has read-only access to a specific virtual device context (VDC) on the switch. In this scenario, the aaa user default-role command has been issued in the VDC named MyVDC, which is a nondefault VDC on the switch. The aaa user default-role command configures the Authentication, Authorization, and Accounting (AAA) feature on the switch to automatically assign remote users the default user role at login. The default remote user role for nondefault VDCs on a Cisco Nexus switch is the vdc-operator role.
    Cisco Nexus switches use role-based access control (RBAC) to assign management privileges to a given user. By default, a Nexus 7000 switch is configured with the following user roles:
    – network-admin — has read and write access to all VDCs on the switch
    – network-operator — has read-only access to all the VDCs on the switch
    – vdc-admin — has read and write access to a specific VDC on the switch
    – vdc-operator — has read-only access to a specific VDC on the switch
    The user will not be assigned the network-admin role. In addition, the user will not be assigned the network-operator role. These roles are applied to users who have access to all VDCs that are configured on the switch, not a specific nondefault VDC. If the aaa user default-role command had been issued in the default VDC in this scenario, remote users who log in to the default VDC would be assigned a network-operator user role.
    The user will not be assigned the vdc-admin user role. The vdc-admin user role allows read and write access to a specific VDC on the switch. If remote users were automatically assigned the vdc-admin role when logging in to the VDC named MyVDC, those users would have administrative access to the VDC, which is a security risk.
    The user will be assigned a role and will not be denied login. In this scenario, TACACS+ is already configured on the Cisco Nexus 7000 Series switch. In addition, the aaa user default-role command has been issued. If the command had not been issued or if the no aaa user default-role command had been issued later in the configuration, remote users who attempt to log in to the VDC named MyVDC would be denied access because no user role is assigned to those users.

  4. Which of the following default Cisco UCS Manager user roles have read-write access to system logs? (Choose two.)

    • AAA Administrator
    • Network Administrator
    • Operations
    • Read-Only
    • Administrator

    Explanation:

    Of the available choices, only the Cisco Unified Computing System (UCS) Manager Administrator user role and the Cisco UCS Manager Operations user role have read-write access to system logs. The Cisco UCS System is installed with several default user roles for its role-based access control (RBAC) system. User accounts in Cisco UCS Manager are assigned one of the default roles or a custom role that is defined by an administrator. These roles define the access privileges for each account. Cisco UCS Manager is web-based software that can be used to manage a single UCS domain.
    The Cisco UCS default user role of Administrator is automatically assigned to the default admin account. Administrators have read-write access to the entire system. Although the Administrator role can be assigned to other user accounts, this role cannot be removed from the default admin account. The Operations user role has read-write access to system logs and read access to the rest of the system. An Operations user can read and write to Syslog servers and faults.
    The AAA Administrator user role does not have read-write access to system logs. The AAA Administrator user role has read-write access to users, roles, and the system’s authentication, authorization, and accounting (AAA) configuration. AAA Administrators can read the rest of the system but cannot write to it.
    The Network Administrator user role does not have read-write access to system logs. The Network Administrator user role has read-write access to the fabric interconnect and network security. However, this role has only read access to the rest of the system.
    The Read-Only user role does not have read-write access to system logs. The Read-Only user role has read access to the configuration. In addition, the Read-Only user role cannot modify the system state.
    There are five other default Cisco UCS Manager user roles:
    – Facility Manager — has read-write access to power management
    – Server Equipment Administrator — has read-write access to physical server operations
    – Server Profile Administrator — has read-write access to logical server operations
    – Server Security Administrator — has read-write access to security operations
    – Storage Administrator — has read-write access to storage operations
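    The ten default roles above can be restated as a simple lookup table. This Python sketch only summarizes the list for quick reference; it is not a Cisco UCS Manager API:

```python
# Read-write scope of each default Cisco UCS Manager user role,
# restated from the list above (not an actual UCS Manager API).
ucs_default_roles = {
    "Administrator": "entire system",
    "Operations": "system logs",
    "AAA Administrator": "users, roles, and AAA configuration",
    "Network Administrator": "fabric interconnect and network security",
    "Read-Only": None,  # read access only; cannot modify system state
    "Facility Manager": "power management",
    "Server Equipment Administrator": "physical server operations",
    "Server Profile Administrator": "logical server operations",
    "Server Security Administrator": "security operations",
    "Storage Administrator": "storage operations",
}

# Roles with read-write access to system logs (the two correct answers):
log_writers = [role for role, scope in ucs_default_roles.items()
               if scope in ("entire system", "system logs")]
print(log_writers)  # ['Administrator', 'Operations']
```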

  5. Which of the following Cisco UCS Director Orchestrator components are stateful processes that are created when a workflow executes?

    • service requests
    • workflows
    • tasks
    • rollbacks
    • approvals

    Explanation:

    Cisco Unified Computing System (UCS) Director Orchestrator service requests are stateful processes that are created when a workflow executes. Cisco UCS Director uses hardware abstraction to convert hardware and software into programmable actions that can then be combined into an automated custom workflow. Cisco UCS Director Orchestrator is the Cisco UCS Director engine that enables this automation.
    Service requests are Cisco UCS Director processes that can exist in one of several states. For example, a service request that has not run yet might exist in a scheduled state. A service request that has been successfully executed exists in a completed state. A service request that was attempted but not successfully executed might exist in a failed state.
    Tasks are best described as the atomic units of work in Cisco UCS Director Orchestrator. A task is a single action and is therefore the smallest unit of work. A workflow is a series of tasks; it is a container that defines the order in which tasks should be performed. However, it is possible for a workflow to contain a single task. Workflows can be created and deployed from workflow templates.
    Approvals are points in a workflow that require user intervention. For example, a service request might exist in a blocked state if the request cannot complete until an administrator approves the service request. Approvals enable administrators to provide input values that can affect the product of a given workflow.
    Rollbacks can be used to undo the results of workflows. For example, a workflow that creates unintended objects or components in a system can be rolled back so that those objects or components are removed. Cisco UCS Director Orchestrator rollbacks are different from relational database rollbacks in that they are not transactional. Instead, tasks in the workflow are executed in reverse order when a workflow is rolled back.
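    The service-request lifecycle described above can be sketched as a small state model. The state names follow the explanation, but the classes themselves are illustrative and not part of any Cisco SDK:

```python
from enum import Enum

class RequestState(Enum):
    # States a UCS Director service request can pass through,
    # per the explanation above (illustrative, not a Cisco SDK).
    SCHEDULED = "scheduled"   # not run yet
    BLOCKED = "blocked"       # waiting on an approval
    COMPLETED = "completed"   # executed successfully
    FAILED = "failed"         # attempted but did not succeed

class ServiceRequest:
    """Stateful process created when a workflow executes."""
    def __init__(self, workflow_name: str):
        self.workflow = workflow_name
        self.state = RequestState.SCHEDULED

    def execute(self, approved: bool = True, succeeds: bool = True):
        if not approved:
            self.state = RequestState.BLOCKED   # approval point reached
        elif succeeds:
            self.state = RequestState.COMPLETED
        else:
            self.state = RequestState.FAILED

req = ServiceRequest("provision-vm")
req.execute(approved=False)
print(req.state)  # RequestState.BLOCKED
```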

  6. How does each APIC server in the Cisco ACI communicate with ACI nodes and other APIC servers during the APIC cluster discovery process?

    • by using LLDP
    • by using MAC addresses
    • by using public IP addresses
    • by using private IP addresses

    Explanation:

    Each Cisco Application Policy Infrastructure Controller (APIC) server in the Cisco Application Centric Infrastructure (ACI) communicates with ACI nodes and other APIC servers during the APIC cluster discovery process by using private Internet Protocol (IP) addresses. Private IP addresses are not routable over the Internet. When an ACI fabric is booted, an internal private IP addressing scheme is used to enable the APIC to communicate with other nodes and controllers.
    APIC controllers do not use Link-Layer Discovery Protocol (LLDP) to communicate with ACI nodes and other APIC servers during the APIC cluster discovery process. However, LLDP is used by APIC controllers to discover the private IP addresses and other information assigned to other APIC controllers in the cluster. LLDP is a standard protocol that detects neighboring devices of any type. An ACI fabric also uses LLDP along with Dynamic Host Configuration Protocol (DHCP) to discover switch nodes and to assign IP addresses to virtual extensible local area network (VXLAN) tunnel endpoints (VTEPs). LLDP is also used by APIC to detect virtual switches, although it is also possible to use Cisco Discovery Protocol (CDP) for that purpose.
    APIC controllers do not use public IP addresses to communicate with ACI nodes and other APIC servers during the APIC discovery process. Unlike private IP addresses, public IP addresses are routable over the Internet.
    APIC controllers do not use Media Access Control (MAC) addresses to communicate with ACI nodes and other APIC servers during the APIC cluster discovery process. However, an ACI fabric does use destination MAC addresses to perform Layer 2 switching.
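    The private-versus-public distinction drawn above can be demonstrated with Python's ipaddress module. The 10.0.0.1 address below is a hypothetical example of a fabric-internal address, not a documented APIC default:

```python
import ipaddress

# RFC 1918 private address, the kind an ACI fabric might assign
# internally during discovery (hypothetical example value):
fabric_addr = ipaddress.ip_address("10.0.0.1")
print(fabric_addr.is_private)  # True - not routable over the Internet

# A public address, by contrast, is globally routable:
public_addr = ipaddress.ip_address("8.8.8.8")
print(public_addr.is_global)   # True
```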

  7. Which of the following OTV technologies is a physical or logical Layer 3 interface?

    • OTV edge device
    • OTV internal interface
    • OTV join interface
    • OTV overlay interface

    Explanation:

    An Overlay Transport Virtualization (OTV) join interface is the OTV technology that is a physical or logical Open Systems Interconnection (OSI) networking model Layer 3 interface. OTV is a means of simplifying the deployment of data center interconnect (DCI), enabling the extension of Layer 2 applications between data centers. An OTV join interface is used to connect an overlay network to remote overlay edge devices. An OTV join interface can be a physical interface, subinterface, or logical interface such as a port channel.
    An OTV edge device is not a physical or logical Layer 3 interface. An OTV edge device is typically a virtual device context (VDC) running on a Cisco Nexus switch, such as a Nexus 7000 Series switch. The edge device is responsible for receiving Layer 2 traffic for virtual local area networks (VLANs) that are extended. Ethernet frame traffic is encapsulated into Internet Protocol (IP) packets and transmitted to the remote network.
    An OTV internal interface is a Layer 2 interface, not a Layer 3 interface. The OTV internal interface is an interface on the OTV edge device that connects to the VLANs that are being extended. Typically, the OTV internal interface operates similarly to any other Layer 2 switch trunk port or access port and does not require configuration specific to OTV.
    An OTV overlay interface is a logical interface that is defined by the user. The OTV overlay interface receives and forwards any Layer 2 frames that must be transmitted to the remote site.
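    As an illustration of how the join interface differs from the overlay interface, a minimal NX-OS OTV configuration might look like the following sketch. The interface numbers, addresses, VLAN ranges, and multicast groups are hypothetical values:

```
feature otv
otv site-identifier 0x1
otv site-vlan 99

interface Ethernet1/1
  description OTV join interface (physical Layer 3 interface)
  ip address 192.0.2.1/30
  ip igmp version 3
  no shutdown

interface Overlay1
  description OTV overlay interface (logical, user-defined)
  otv join-interface Ethernet1/1
  otv control-group 239.1.1.1
  otv data-group 232.1.1.0/26
  otv extend-vlan 100-110
  no shutdown
```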

  8. You issue the port-channel load-balance src-dst-port module 4 command on a Cisco Nexus 7000 Series switch running NX-OS 6.2.

    How is distribution loaded for port channels on slot 4?

    • based on the source and destination port
    • based on the source and destination MAC address
    • based on the defaults because the command contains invalid syntax
    • based on the source IP address and destination port
    • based on the source MAC address and destination port only

    Explanation:

    Distribution for port channels on slot 4 in this scenario is loaded based on the source and destination port because the command has been issued with the src-dst-port keyword and the module 4 keyword. The port-channel load-balance command can be used to modify the load distribution criteria used by EtherChannel. The basic syntax of the port-channel load-balance command is port-channel load-balance method [module slot], where method is one of the following 10 keywords:
    – dst-ip
    – dst-mac
    – dst-port
    – src-dst-ip
    – src-dst-mac
    – src-dst-port
    – src-ip
    – src-mac
    – src-port
    – vlan-only
    The dst-ip keyword, dst-mac keyword, and dst-port keyword load port channel distribution based on the destination Internet Protocol (IP) address, Media Access Control (MAC) address, and port number, respectively. Similarly, the src-ip keyword, src-mac keyword, and src-port keyword load port channel distribution based on the source IP address, source MAC address, and port number, respectively. The src-dst-ip keyword, src-dst-mac keyword, and src-dst-port keyword load port channel distribution based on the source and destination IP addresses, MAC addresses, and port numbers, respectively. The vlan-only keyword loads distribution on only the virtual local area network (VLAN) modules.
    The optional module keyword accepts a slot number value. If you configure the port-channel load-balance command with the module keyword, the configuration applies only to the specified slot. Otherwise, the configuration applies to the entire device. By default on a Cisco Nexus switch, a port channel load balances Layer 2 packets based on the source and destination MAC addresses. Layer 3 packets, on the other hand, are load balanced based on the source and destination IP addresses. You must be operating in the default virtual device context (VDC) on the switch in order to issue the port-channel load-balance command.
    The command in this scenario does not contain invalid syntax, because the Nexus switch in this scenario is running NX-OS 6.2. Prior to NX-OS 5.1(3), the port-channel load-balance command required an ethernet keyword. Therefore, the valid syntax for NX-OS versions older than 5.1(3) is port-channel load-balance ethernet method [module slot]. In NX-OS 5.1(3), Cisco removed that keyword.
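    Port-channel load balancing works by hashing the selected header fields to pick a member link, so all packets of one flow take the same link. This toy Python sketch mimics src-dst-port distribution; the XOR hash and link count are illustrative only, not Cisco's actual hardware algorithm:

```python
# Toy model of src-dst-port load distribution across port-channel
# members; real NX-OS uses a hardware hash, not this XOR function.
def pick_member_link(src_port: int, dst_port: int, num_links: int) -> int:
    """Hash source and destination ports onto a member link index."""
    return (src_port ^ dst_port) % num_links

# Every packet of the same flow maps to the same member link:
flow = (49152, 443)
print(pick_member_link(*flow, num_links=2))
print(pick_member_link(*flow, num_links=2))  # same link every time
```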

  9. You want to migrate a VMware VM and its datastore from one ESXi server to another ESXi server. You do not want to power off the VM to perform the migration.

    Which of the following solutions should you choose?

    • copying or cloning
    • cold migration
    • Storage vMotion
    • vSphere vMotion

    Explanation:

    Of the available choices, you should choose Storage vMotion if you want to migrate a VMware virtual machine (VM) and its datastore from one VMware ESXi server to another ESXi server without powering off the VM. VMware’s ESXi server is a bare-metal server virtualization technology, which means that ESXi is installed directly on the hardware it is virtualizing instead of running on top of another operating system (OS). This layer of hardware abstraction enables tools like Storage vMotion to migrate ESXi VMs and their datastores from one host to another without powering off the VM, enabling the VM’s users to continue working without interruption.
    You should not choose vSphere vMotion to perform the migration in this scenario, because vMotion allows migration of only the VM and its virtual components; it does not allow migration of the datastore. The datastore is the repository of VM-related files, such as logs and virtual disks. When migrating a VM by using vMotion, only the virtualized environment moves to a new host, not the datastore.
    You should not choose cold migration, copying, or cloning in this scenario. Cold migration is the process of powering down a VM and moving the VM or the VM and its datastore to a new location. While a cold migration is in progress, no users can perform tasks inside the VM. Both copying and cloning create new instances of a given VM. Therefore, neither action is a form of migrating a VM to another host. Typically, a VM must be powered off or suspended in order to successfully copy or clone it.

  10. Which of the following is a software environment that runs a separate OS from its host OS?

    • a hypervisor
    • a mezzanine card
    • a VM
    • an API

    Explanation:

    A virtual machine (VM) is a software environment that runs a separate operating system (OS) from its host OS. However, a VM does not itself abstract the hardware on which the host OS is installed. VMs provide the hardware environment for guest OSes and the applications they run by making calls to the hypervisor, which then makes calls to either the host OS or the bare-metal server, depending on the type of hypervisor installed on the device.
    A hypervisor is software that has two roles: the abstraction of physical hardware and the creation of VMs. Hypervisors are capable of virtualizing the physical components of computer hardware. Virtualization enables the creation of multiple VMs that can be configured and run in separate instances on the same hardware. Hardware abstraction is the use of software to emulate physical hardware. Hardware abstraction enables device-independent software development and allows a given VM to become portable between physical devices.
    An Application Programming Interface (API) is typically used to enable an application to perform functions on a remote framework, database, or application. For example, representational state transfer (REST) is an API architecture that uses Hypertext Transfer Protocol (HTTP) or HTTP Secure (HTTPS) to enable external resources to access and make use of programmatic methods that are exposed by the API. A web application that retrieves user product reviews from an online marketplace for display on third-party websites might obtain those reviews by using methods provided in an API that is developed and maintained by that marketplace.
    A mezzanine card is a computer hardware component that can be plugged into expansion slots on a main board. Cisco Unified Computing System (UCS) B-Series blade servers use mezzanine cards to add a variety of network interfaces to the system. For example, the Cisco UCS Virtual Interface Card (VIC) 1280 is a mezzanine card that adds 10-gigabit-per-second (Gbps) Ethernet port and Fibre Channel over Ethernet (FCoE) capabilities to a Cisco UCS B-Series blade server.

  11. Which of the following are most likely to operate in the control plane of a Nexus switch? (Choose three.)

    • OSPF
    • cut-through switching
    • BGP
    • store-and-forward switching
    • EIGRP
    • SNMP

    Explanation:

    Of the available choices, Enhanced Interior Gateway Routing Protocol (EIGRP), Open Shortest Path First (OSPF), and Border Gateway Protocol (BGP) are all most likely to operate in the control plane of a Nexus switch. A Nexus switch consists of three operational planes: the data plane, which is also known as the forwarding plane, the control plane, and the management plane. The control plane is responsible for gathering and calculating the information required to make the decisions that the data plane needs for forwarding. Routing protocols operate in the control plane because they enable the collection and transfer of routing information between neighbors. This information is used to construct routing tables that the data plane can then use for forwarding.
    Cut-through switching and store-and-forward switching are most likely to operate in the data plane of a Nexus switch. Of the three, the data plane is where traffic forwarding occurs. Cut-through switching allows a switch to begin forwarding a frame before the frame has been received in its entirety. Store-and-forward switching receives an entire frame and stores it in memory before forwarding the frame to its destination.
    Simple Network Management Protocol (SNMP) is an Internet Protocol (IP) network management protocol that operates in the management plane of a Nexus switch. The management plane is responsible for monitoring and configuration of the control plane. Therefore, network administrators typically interact directly with protocols running in the management plane.

  12. Which of the following frame fields does Cisco FabricPath use to identify the unique FabricPath topology that a unicast frame is traversing?

    • FTAG
    • IS-IS
    • LID
    • STP
    • TTL

    Explanation:

    Of the available choices, Cisco FabricPath uses the forwarding tag (FTAG) frame field to identify the unique FabricPath topology that a unicast frame is traversing. Each topology in FabricPath is assigned a unique tag. For multicast or broadcast traffic, the 10-bit FTAG field contains an ID for a forwarding tree that contains multiple destinations within the topology. The FTAG field is the second of three fields that reside in the FabricPath tag field. It is preceded by the 16-bit Ethertype field and succeeded by the Time to Live (TTL) field.
    Cisco FabricPath frames are classic Ethernet frames that are encapsulated with a 16-byte FabricPath header. This header contains a 48-bit outer destination address (ODA), which is a Media Access Control (MAC) address, and a 48-bit outer source address (OSA). In addition, the header contains the 32-bit FabricPath tag. The classic Ethernet frame’s cyclic redundancy check (CRC) field is replaced by a new CRC field that is updated to reflect the additional header data in the frame.
    Cisco FabricPath does not use the TTL frame field to identify the unique FabricPath topology that a unicast frame is traversing. The TTL field is the Cisco FabricPath frame field that FabricPath uses to mitigate temporary Open Systems Interconnection (OSI) networking model Layer 2 loops. The TTL field in a FabricPath frame operates similarly to the TTL field in IP networking in that the field is decremented by a value of 1 each time it traverses a new hop. If the TTL expires, the frame is discarded. The TTL field is a 6-bit field that resides at the end of the FabricPath tag field.
    Cisco FabricPath does not use an Intermediate System-to-Intermediate System (IS-IS) frame field to identify the unique FabricPath topology that a unicast frame is traversing. In addition, FabricPath does not use a Spanning Tree Protocol (STP) frame field. However, the IS-IS routing protocol is used as a Layer 3 replacement for traditional STP in a Cisco FabricPath topology. In traditional networking, STP is used to prevent Layer 2 switching loops in a topology that contains redundant links. The use of the IS-IS routing protocol ensures that Cisco FabricPath operates as a multipath environment for Layer 2 packets. In other words, IS-IS ensures that Cisco FabricPath is capable of Layer 2 multipath forwarding.
    Cisco FabricPath does not use the local ID (LID) frame field to identify the unique FabricPath topology that a unicast frame is traversing. Instead, the LID field stores a 16-bit value that identifies the edge port that a packet is either destined to or sent from. The LID field is the last field in the ODA and OSA fields of a Cisco FabricPath header. The edge port can be either a physical port or a logical port. In addition, the value in the LID field is locally significant to the switch that the frame is traversing.
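    The 32-bit FabricPath tag layout described above (16-bit Ethertype, then the 10-bit FTAG, then the 6-bit TTL) can be modeled with simple bit arithmetic. This Python sketch is an illustrative model of the field layout only; it does not build real FabricPath frames:

```python
# Pack and unpack the 32-bit FabricPath tag field described above:
# 16-bit Ethertype | 10-bit FTAG | 6-bit TTL (illustrative model only).
def pack_fp_tag(ethertype: int, ftag: int, ttl: int) -> int:
    assert 0 <= ftag < 2**10 and 0 <= ttl < 2**6
    return (ethertype << 16) | (ftag << 6) | ttl

def unpack_fp_tag(tag: int) -> tuple:
    return (tag >> 16) & 0xFFFF, (tag >> 6) & 0x3FF, tag & 0x3F

# 0x8903 is the Ethertype Cisco uses for FabricPath frames.
tag = pack_fp_tag(ethertype=0x8903, ftag=5, ttl=32)
print(unpack_fp_tag(tag))  # (35075, 5, 32)
```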

  13. You issue the following commands on SwitchA and SwitchB, which are Cisco Nexus 7000 Series switches:

    SwitchA(config)#vpc domain 101
    
    SwitchA(config-vpc-domain)#peer-keepalive destination 192.168.1.2 source 192.168.1.1 vrf default
    
    SwitchA(config-vpc-domain)#exit
    
    SwitchA(config)#interface range ethernet 2/1 - 2
    
    SwitchA(config-if-range)#switchport
    
    SwitchA(config-if-range)#channel-group 1 mode active
    
    SwitchA(config-if-range)#interface port-channel 1
    
    SwitchA(config-if)#switchport mode trunk
    
    SwitchB(config)#vpc domain 101
    
    SwitchB(config-vpc-domain)#peer-keepalive destination 192.168.1.1 source 192.168.1.2 vrf default
    
    SwitchB(config-vpc-domain)#exit
    
    SwitchB(config)#interface range ethernet 2/1 - 2
    
    SwitchB(config-if-range)#switchport
    
    SwitchB(config-if-range)#channel-group 1 mode active
    
    SwitchB(config-if-range)#interface port-channel 1
    
    SwitchB(config-if)#switchport mode access

    Which of the following is most likely a problem with this configuration?

    • The Ethernet port range is using the wrong channel group mode on SwitchB.
    • The vpc peer-link command is missing from SwitchA.
    • The vpc peer-link command must be issued on both switches.
    • Port-channel 1 on both switches should be an access port.
    • The vPC domain ID on SwitchB should be a different value.

    Explanation:

    Most likely, the vpc peer-link command must be issued on both switches to complete the configuration in this scenario. A virtual port channel (vPC) peer link should always be composed of 10-gigabit-per-second (Gbps) Ethernet ports. Peer links are configured as a port channel between the two members of the vPC domain. You should configure vPC peer links after you have successfully configured a peer keepalive link. Cisco recommends connecting two 10-Gbps Ethernet ports from two different input/output (I/O) modules. To configure a peer link, you should issue the vpc peer-link command in interface configuration mode. For example, the following commands configure a peer link on Port-channel 1:

    SwitchA(config)#interface port-channel 1
    
    SwitchA(config-if)#switchport mode trunk
    
    SwitchA(config-if)#vpc peer-link
    
    SwitchB(config)#interface port-channel 1
    
    SwitchB(config-if)#switchport mode trunk
    
    SwitchB(config-if)#vpc peer-link

    Port-channel 1 on both switches should be a trunk port. Trunk ports are used to carry traffic from multiple virtual local area networks (VLANs) across physical switches. Access ports can only carry data from a single VLAN and are typically connected to end devices, such as hosts or servers.
    It is not a problem that the channel group mode is configured to active on the Ethernet ports in this scenario. It is important to issue the correct channel-group commands on a port channel’s member ports prior to configuring the port channel. For example, if you are creating Port-channel 1 by using the Ethernet 2/1 and Ethernet 2/2 interfaces, you could issue the following commands on each switch to correctly configure those interfaces as members of the port channel:

    SwitchA(config)#interface range ethernet 2/1 - 2
    SwitchA(config-if-range)#switchport
    SwitchA(config-if-range)#channel-group 1 mode active
    SwitchB(config)#interface range ethernet 2/1 - 2
    SwitchB(config-if-range)#switchport
    SwitchB(config-if-range)#channel-group 1 mode active

    The vPC domain ID on SwitchB should be the same value as the vPC domain ID on SwitchA. A vPC domain is composed of exactly two switches. Each switch in the vPC domain must be configured with the same vPC domain ID. To enable vPC configuration on a Cisco Nexus 7000 Series switch, you should issue the feature vpc command on both switches. To assign the vPC domain ID, you should issue the vpc domain domain-id command, where domain-id is an integer in the range from 1 through 1000, in global configuration mode. For example, issuing the vpc domain 101 command on a Cisco Nexus 7000 Series switch configures the switch with a vPC domain ID of 101.
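    The peer-link commands above can also be delivered programmatically. As a minimal sketch, the following builds the JSON-RPC payload that Cisco NX-API expects for CLI commands when POSTed to the switch's /ins endpoint; the HTTP request itself is omitted so the example stays self-contained, and the switch address and credentials would be deployment-specific:

```python
import json

def build_nxapi_payload(commands):
    """Build a JSON-RPC payload for the NX-API 'cli' method.

    Each CLI command becomes one JSON-RPC request object; NX-API
    executes them in order when the list is POSTed to the switch.
    """
    return [
        {
            "jsonrpc": "2.0",
            "method": "cli",
            "params": {"cmd": cmd, "version": 1},
            "id": i + 1,
        }
        for i, cmd in enumerate(commands)
    ]

# The same peer-link configuration shown above, expressed as a payload:
peer_link_cmds = [
    "interface port-channel 1",
    "switchport mode trunk",
    "vpc peer-link",
]
print(json.dumps(build_nxapi_payload(peer_link_cmds), indent=2))
```

    Because configuration commands are simply CLI strings in the payload, the same builder works for any of the vPC commands discussed in this section.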

  14. You issue the following commands on SwitchA:

    feature vpc
    vpc domain 101
    peer-keepalive destination 192.168.1.2 source 192.168.1.1 vrf mgmt
    interface port-channel 1
    switchport mode trunk
    vpc peer-link
    interface port-channel 3
    switchport mode trunk
    vpc 3

    You issue the following commands on SwitchB:

    feature vpc
    vpc domain 101
    peer-keepalive destination 192.168.1.1 source 192.168.1.2 vrf mgmt
    interface port-channel 1
    switchport mode trunk
    vpc peer-link
    interface port-channel 3
    switchport mode trunk
    vpc 3

    Which of the following is true?

    • The switch port mode is not correctly configured for port-channel 3.
    • The switch port mode is not correctly configured for port-channel 1.
    • The switches will successfully form vPC domain 101.
    • The vPC number in port-channel 3 is not correctly configured.
    • The peer keepalive configuration is in the wrong VRF instance.

    Explanation:

    In this scenario, the switches will successfully form virtual port channel (vPC) domain 101. A vPC enables you to bundle ports from two switches into a single Open Systems Interconnection (OSI) Layer 2 port channel. Similar to a normal port channel, a vPC bundles multiple switch ports into a single high-speed trunk port. A single vPC domain cannot contain ports from more than two switches. For ports on two switches to successfully form a vPC domain, all the following must be true:
    – The vPC feature must be enabled on both switches.
    – The vPC domain ID must be the same on both switches.
    – A peer keepalive link must be configured on both switches, and the peer link should be composed of 10-gigabit-per-second (Gbps) or faster ports.
    – The vPC number must be the same on both switches.
    In this scenario, the feature vpc command has been issued on both SwitchA and SwitchB. This command ensures that the vPC feature is enabled on each switch. In addition, the vpc domain 101 command has been issued on both switches. This command ensures that each switch is configured to operate in the same vPC domain.
    The switch port modes are correct for port-channel 1 and port-channel 3 in this scenario. Port-channel 1 on SwitchA and SwitchB is configured as a trunk port. In addition, this port channel is configured as the peer link for this vPC domain. The peer link must be a trunk port. Port-channel 3 is also configured as a trunk port. Normal port channels can be configured as either access ports or trunk ports. However, you should configure normal port channels as trunk ports if you intend to have traffic from multiple virtual local area networks (VLANs) traverse the port.
    The vPC number is correctly configured in this scenario. In order to correctly form a vPC domain between two switches, the vPC number must be the same on each switch. In this scenario, a vPC number of 3 has been configured on port-channel 3 on each switch. Port-channel 3 is a normal trunk port channel.
    The peer keepalive link is not in the wrong virtual routing and forwarding (VRF) instance in this scenario. Peer keepalive links can be configured to operate in any VRF, including the management VRF. The peer keepalive link operates at Layer 3 of the OSI networking model; it is used to ensure that vPC switches are capable of determining whether a vPC domain peer has failed. On SwitchA, the peer-keepalive command has been issued with a source Internet Protocol (IP) address of 192.168.1.1 and a destination IP address of 192.168.1.2. On SwitchB, the peer-keepalive command has been issued with a source IP address of 192.168.1.2 and a destination IP address of 192.168.1.1. Based on this information, you can conclude that SwitchA is configured with an IP address of 192.168.1.1 and that SwitchB is configured with an IP address of 192.168.1.2.
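    The formation requirements listed above can be checked mechanically. The sketch below uses hypothetical field names (not a Cisco API) to compare two switch configurations and report which vPC formation requirements, if any, fail:

```python
def vpc_formation_errors(switch_a, switch_b):
    """Return a list of reasons two switch configs cannot form a vPC domain.

    Each config is a dict with hypothetical keys: 'vpc_enabled',
    'domain_id', 'keepalive_configured', and 'vpc_numbers' (a set).
    An empty list means the domain should form.
    """
    errors = []
    if not (switch_a["vpc_enabled"] and switch_b["vpc_enabled"]):
        errors.append("vPC feature not enabled on both switches")
    if switch_a["domain_id"] != switch_b["domain_id"]:
        errors.append("vPC domain IDs differ")
    if not (switch_a["keepalive_configured"] and switch_b["keepalive_configured"]):
        errors.append("peer keepalive link not configured on both switches")
    if switch_a["vpc_numbers"] != switch_b["vpc_numbers"]:
        errors.append("vPC numbers do not match")
    return errors

# The scenario above: both switches enable the feature, share domain 101,
# configure keepalives, and assign vPC number 3, so no errors are reported.
switch_a = {"vpc_enabled": True, "domain_id": 101,
            "keepalive_configured": True, "vpc_numbers": {3}}
switch_b = {"vpc_enabled": True, "domain_id": 101,
            "keepalive_configured": True, "vpc_numbers": {3}}
assert vpc_formation_errors(switch_a, switch_b) == []
```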

  15. You want to deploy services in OSI Layers 4 through 7 in a Cisco ACI fabric.

    Which of the following should you configure first?

    • an EPG
    • a bridge domain
    • a filter
    • a contract
    • a tenant

    Explanation:

    If you want to deploy services in Open Systems Interconnection (OSI) networking model Layers 4 through 7 in a Cisco Application Centric Infrastructure (ACI) fabric, you should first configure a tenant. Tenants are containers that can be used to represent organizations, domains, or specific groupings of information. Typically, tenants are configured to ensure that different policy types are isolated from each other, similar to user groups or roles in a role-based access control (RBAC) environment.
    You do not need to configure an endpoint group (EPG) first, because EPGs are a primary element of a tenant. EPGs are logical groupings of endpoints that provide the same application or components of an application. For example, a collection of Hypertext Transfer Protocol Secure (HTTPS) servers could be logically grouped into an EPG labeled WEB. EPGs are typically collected within application profiles. EPGs can communicate with other EPGs by using contracts.
    You do not need to configure a contract first, because contracts are a primary element of a tenant. Contracts are policy objects that define how EPGs communicate with each other. There are three types of contracts that can be applied in an ACI fabric:
    – Regular: applies filters to matching traffic and typically follows taboo contracts
    – Taboo: denies and logs matching traffic
    – Out-of-band (OOB): applies to OOB traffic from the management tenant
    You do not need to configure a bridge domain first, because bridge domains are a primary element of a tenant. Bridge domains are logical Layer 2 forwarding configurations within an ACI fabric that use switched virtual interfaces (SVIs) for gateways and can be configured to span multiple physical devices. In this respect, bridge domains are similar to virtual local area networks (VLANs). However, the purpose of a bridge domain is to define the Media Access Control (MAC) address space and flood domain.
    You do not need to configure a filter first, because filters are a primary element of a tenant. Filters are low-level ACI objects that help define EPG contracts. Filters operate at Layer 2, Layer 3, and Layer 4 of the OSI networking model.
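    The tenant-first ordering is visible in the APIC REST object model, where the bridge domain, application profile, EPG, and contract objects are all children of the tenant object. A minimal sketch of such a payload follows; the class names follow the public APIC conventions (fvTenant, fvBD, fvAp, fvAEPg, vzBrCP), while the tenant, bridge domain, and contract names are invented for illustration:

```python
import json

# A tenant payload: the bridge domain, application profile/EPG, and
# contract are all children of fvTenant, which is why the tenant must
# exist before any of them can be configured.
tenant = {
    "fvTenant": {
        "attributes": {"name": "ExampleTenant"},   # invented name
        "children": [
            {"fvBD": {"attributes": {"name": "ExampleBD"}}},
            {"fvAp": {
                "attributes": {"name": "ExampleApp"},
                "children": [
                    {"fvAEPg": {"attributes": {"name": "WEB"}}},
                ],
            }},
            {"vzBrCP": {"attributes": {"name": "web-contract"}}},
        ],
    }
}
print(json.dumps(tenant, indent=2))
```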

  16. Which of the following Cisco switches can be configured only by first using Telnet or SSH to access a parent device?

    • Nexus 7000 Series
    • Nexus 2000 Series
    • Nexus 5000 Series
    • Nexus 9000 Series

    Explanation:

    Of the available choices, Cisco Nexus 2000 Series switches can be configured only by first using Telnet or Secure Shell (SSH) to access a parent device. The Cisco Nexus 2000 Series of switches are fabric extenders (FEXs) and cannot operate as standalone switches. FEX technologies depend on parent switches, such as a Cisco Nexus 5500 Series switch or a Cisco Nexus 7000 Series switch, to provide forwarding tables and control plane functionality. FEX technologies are intended to extend the network to edge devices. Typically, FEX devices in the Cisco Nexus 2000 Series are managed by first connecting to the parent device by using either Telnet or SSH and then configuring the FEX.
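    Because a FEX has no management plane of its own, all of its configuration is entered in the parent switch's session. As a sketch of what an automation script would send over that Telnet or SSH session, the following builds the usual NX-OS command sequence for attaching a Nexus 2000 FEX; the FEX number and uplink interface are placeholders:

```python
def fex_provision_commands(fex_id, uplink_if):
    """Return the parent-switch CLI commands that attach a FEX.

    Follows the standard NX-OS pattern: enable the feature, then bind
    an uplink interface to the FEX number in fex-fabric mode.
    """
    return [
        "feature fex",
        f"interface {uplink_if}",
        "switchport mode fex-fabric",
        f"fex associate {fex_id}",
    ]

print("\n".join(fex_provision_commands(101, "ethernet 1/1")))
```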
    Cisco Nexus 5000 Series switches operate as standalone physical switches. Cisco Nexus 5000 Series switches are data center access layer switches that can support 10-gigabit-per-second (Gbps) or 40-Gbps Ethernet, depending on the model. Native Fibre Channel (FC) and FC over Ethernet (FCoE) are also supported by Cisco Nexus 5000 Series switches.
    Cisco Nexus 7000 Series switches operate as standalone physical switches. Cisco Nexus 7000 Series switches are typically used as an end-to-end data center solution, which means that the series is capable of supporting all three layers of the data center architecture: core layer, aggregation layer, and access layer. In addition, the Cisco Nexus 7000 Series supports virtual device contexts (VDCs). The Cisco Nexus 7000 Series can support up to 100-Gbps Ethernet.
    Cisco Nexus 9000 Series switches operate as standalone physical switches. Cisco Nexus 9000 Series switches can operate either as traditional NX-OS switches or in an Application Centric Infrastructure (ACI) mode. Unlike Cisco Nexus 7000 Series switches, Cisco Nexus 9000 Series switches do not support VDCs or storage protocols.

  17. Which of the following statements best describes vPC domains?

    • There can be only two peers per domain.
    • They monitor the status of vPC peers.
    • They synchronize the state between two vPC peers.
    • They synchronize the control plane and the data plane.

    Explanation:

    There can be only two peers, or switches, per virtual port channel (vPC) domain. A vPC enables you to bundle ports from two peers, which form a domain, into a single Open Systems Interconnection (OSI) Layer 2 port channel. Similar to a normal port channel, a vPC bundles multiple switch ports into a single high-speed trunk port. A single vPC domain cannot contain ports from more than two switches. For ports on two switches to successfully form a vPC domain, all the following must be true:
    – The vPC feature must be enabled on both switches.
    – The vPC domain ID must be the same on both switches.
    – A peer keepalive link must be configured on both switches, and the peer link should be composed of 10-gigabit-per-second (Gbps) or faster ports.
    – The vPC number must be the same on both switches.
    A vPC peer link, not a vPC domain, synchronizes the state between two vPC peers. A vPC peer link is typically comprised of a port channel made up of two physical ports on each switch. This link synchronizes Media Access Control (MAC) address tables between switches and serves as a transport for data plane traffic. Bridge protocol data unit (BPDU) and Link Aggregation Control Protocol (LACP) packets are also forwarded to the second peer over this link, which causes the vPC peers to appear to be a single control plane.
    A vPC peer keepalive link, not a vPC domain, monitors the status of vPC peers. The peer keepalive link operates at Layer 3 of the OSI networking model; it is used to ensure that vPC switches are capable of determining whether a vPC domain peer has failed. Peer keepalive links can be configured to operate in any virtual routing and forwarding (VRF) instance, including the management VRF. Each vPC peer keepalive link is configured with the remote peer’s IP address as its destination IP address and the local peer’s IP address as its source address. Because the peer keepalive link is a routed Layer 3 link, it does not need to be a trunk port.
    Cisco Fabric Services, not a vPC domain, synchronizes the control plane and the data plane. Cisco Fabric Services is a messaging protocol that operates between vPC peers. Control plane and data plane information is synchronized over the vPC peer link.

  18. An administrator configures a Cisco GSS and migrates DNS services to the GSS.

    Which of the following is most likely being implemented?

    • a GSLB service
    • an APIC solution
    • a hypervisor running DNS services
    • a FabricPath architecture

    Explanation:

    Most likely, a Cisco global server load balancing (GSLB) service is being implemented if an administrator configures a Cisco Global Site Selector (GSS) and migrates Domain Name System (DNS) services to the GSS. A Cisco GSLB solution is designed to optimize DNS infrastructure, thereby ensuring business continuity in the event of disaster. When DNS services are migrated to GSS, disaster recovery is enhanced by the global load balancing of server load balancers (SLBs) across data centers in disparate geographic locations.
    It is not likely that the administrator is implementing a Cisco FabricPath architecture. In addition, it is not likely that the administrator is implementing a Cisco Application Policy Infrastructure Controller (APIC) solution. The Cisco APIC is a means of managing the Cisco Application Centric Infrastructure (ACI). A Cisco ACI architecture requires both the APIC and a fabric of spine and leaf switches to complete the architecture. The APIC communicates with the spine and leaf nodes and provides policy distribution as well as centralized management.
    Spine switches are the Cisco FabricPath component that form the backbone of Cisco FabricPath’s switching fabric. Typically, leaf switches are connected to every spine switch along the backbone so that the spine switches provide connectivity between the leaf switches. A leaf switch is the Cisco FabricPath component that provides access layer connectivity. End hosts and classic Ethernet (CE) networks are typically directly connected to leaf switches by using edge ports. Leaf switches connect to spine switches by using core ports.
    It is not likely that the administrator is implementing a hypervisor running DNS services. A hypervisor is hardware virtualization software that runs either on a bare-metal server or as an application on an operating system (OS); hypervisors are used to create and run one or more virtual machines (VMs). Bare-metal server hypervisors are known as Type 1 hypervisors. Application hypervisors are known as Type 2 hypervisors. Neither type of hypervisor will run DNS services by itself. Instead, a VM would need to be created and configured with a guest OS. The DNS service would then need to be configured within the guest OS.

  19. Which of the following FCoE switch port types might require you to consider an STP configuration?

    • a VF port
    • a SPAN port
    • a VE port
    • a VN port

    Explanation:

    A Fibre Channel over Ethernet (FCoE) virtual fabric (VF) interface port type might require you to consider a Spanning Tree Protocol (STP) configuration. FCoE is used in data centers to encapsulate Fibre Channel (FC) over an Ethernet network. This encapsulation enables the FC protocol to communicate over 10-gigabit-per-second (Gbps) Ethernet. There are two types of FCoE switch ports: a VF port and a virtual expansion (VE) port.
    An FCoE VF port typically connects to an end host. If the end host is connected to an Ethernet network that is configured with virtual local area networks (VLANs), the STP configuration might require extra attention, especially if the Ethernet fabric is not using Per-VLAN Spanning Tree Plus (PVST+). A proper STP configuration on the Ethernet fabric prevents the Ethernet topology from affecting storage area network (SAN) traffic.
    An FCoE VE port typically connects to a port on another FC forwarder (FCF). STP does not operate on VE ports, because these ports typically connect two FCFs. FC does not require switching loop prevention, because FCFs have no concept of switching loops. VE ports typically default to trunk mode.
    A virtual node (VN) port is a port on an end host, such as a host bus adapter (HBA) port, not a port on an FC switch. It is this type of port to which VF ports are typically connected. Although a VN port might participate in an Ethernet VLAN that is using STP, in this scenario you have been asked to identify a switch port type for which an STP configuration might be a consideration.
    A switched port analyzer (SPAN) port, which is also known as a mirroring port, is a type of port that is used to collect copies of packets transmitted over another port, over a given device, or over a network. In an FCoE configuration, a SPAN destination port can be either an FC interface or an Ethernet interface. SPAN source ports, on the other hand, can be FC interfaces, virtual FC (vFC) interfaces, a virtual SAN (vSAN), a VLAN, an Ethernet interface, a port channel interface, or a SAN port channel interface.
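    The port-type relationships described above can be summarized in a small lookup, offered here purely as a study aid (not a Cisco data structure): each FCoE/FC virtual port type has an expected peer, and only the VF side faces an Ethernet segment where STP may run.

```python
# Expected peer for each FCoE/FC virtual port type (study aid).
expected_peer = {
    "VF": "VN",   # FCF switch port facing an end host's HBA/CNA port
    "VN": "VF",   # end-host port facing the FCF
    "VE": "VE",   # FCF-to-FCF inter-switch link; defaults to trunk mode
}

# Of the switch-side port types (VF and VE), only VF faces an Ethernet
# segment where STP may run, so only VF warrants STP consideration.
stp_consideration = {"VF"}

assert expected_peer["VN"] == "VF"
assert "VE" not in stp_consideration
```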

  20. Which of the following is not a benefit of server virtualization?

    • reduces network bandwidth hot spots
    • can aid configuration standardization
    • reduces facility expenses
    • can grow and shrink based on resource need
    • reduces maintenance downtime

    Explanation:

    Server virtualization causes, not reduces, network bandwidth hot spots. Therefore, reducing network bandwidth hot spots is not a benefit of server virtualization. Server virtualization is the process of using a virtual machine (VM) hypervisor to create and maintain multiple server VMs on a single hardware server. Because all the VMs are using the same physical network connection, it is possible that required network bandwidth could surpass network capacity. It is therefore important that VM administrators understand the network demands of each VM installed on a given physical server and migrate or deploy new VMs on new hardware as necessary.
    Server virtualization can reduce maintenance downtime. Because server VMs can be migrated from one hypervisor to another without powering down the VM, maintenance on a hypervisor’s host need not hinder virtualized server availability.
    Server virtualization can grow and shrink based on resource need. This feature is known as elasticity and is commonly used by cloud-based virtual servers. A virtual server that requires more hardware resources can grow to consume those resources and shrink when those resources are no longer required.
    Server virtualization can reduce facility expenses. When servers are virtualized, it is not necessary to purchase new hardware for each new server that is deployed at the facility. Instead, new servers can be quickly instantiated on existing hardware, thereby eliminating the cost of purchasing additional hardware.
    Server virtualization can aid configuration standardization. Because VMs can be cloned, administrators can create what is known as a golden image, which is a single VM that is equipped with a standardized, secure configuration. This golden image can then be cloned to create new VMs that are already equipped with that standard, secure configuration. Deploying and instantiating VMs that are already configured with a standard, secure configuration reduces administrative overhead and prevents accidental deployment of an insecure configuration.
