CCIE DC TIPs

On the Nexus side, the uplink ports toward the FI should be configured with spanning-tree port type edge trunk for fast convergence.
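A minimal sketch of the Nexus side, assuming Ethernet1/1 and Ethernet1/2 are the ports facing FI-A and FI-B (interface numbers are placeholders):

interface Ethernet1/1
  description Uplink to FI-A
  switchport mode trunk
  spanning-tree port type edge trunk
interface Ethernet1/2
  description Uplink to FI-B
  switchport mode trunk
  spanning-tree port type edge trunk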

Gen 1 hardware does not support a port channel between the UCS chassis IOM and the FI; both ends must be Gen 2.

Traffic received on one FI uplink port is never forwarded back out another uplink port.

Within a chassis, each IOM connects to only one FI; IOM A cannot connect to both FI A and FI B.

The VIC adapters on a host come in 1280 and 1240 variants. The 1280 has four 10G paths to each of IOM A and IOM B, for 80G of total bandwidth; the 1240 has two 10G paths to each IOM, for 40G total. The IOMs come in 2208 and 2204 variants: the 2208 has eight uplinks (80G), the 2204 has four (40G). A chassis with two 2208s therefore has 160G of uplink bandwidth toward the FIs, while a chassis full of blades running 1280s at full rate could generate 8 × 80G = 640G of traffic.

Storage traffic from a host leaves through the FC ports of the FI specified by the SAN pin group; the host-to-FI path is internal FCoE and is handled automatically. FC end-host mode automatically puts the FI into NPV mode, which requires the upstream FC switch to support NPIV — in other words, NPIV must be enabled on the MDS.

To convert Unified Ports on a Nexus between Ethernet and FC, use the slot command, define the type on each port, and then reload. Once FCoE is enabled, the new FC ports can be brought up.
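A sketch of the conversion on a 5548UP, assuming slot 1 ports 25-32 are to become FC (the range is a placeholder; FC ports must occupy the last, contiguous ports of the slot), followed by the required reload:

feature fcoe
slot 1
  port 25-32 type fc
copy running-config startup-config
reload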

VDC ha-policy (a configuration sketch follows the list):

The default VDC's ha-policy cannot be modified; for it, reset equals reload.

reload: reloads the whole switch; VDC configuration is preserved (copy start run).

restart: does not reload the switch; the VDC is torn down and recreated (no vdc, then copy start run).

bringdown: takes no recovery action; the failed VDC is simply brought down.

switchover: performs a supervisor switchover.
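A minimal sketch of setting the policy on a non-default VDC, assuming a VDC named Prod with ID 2 (name, ID, and chosen actions are illustrative; keyword availability depends on the NX-OS release and supervisor count):

vdc Prod id 2
  ha-policy single-sup restart
  ha-policy dual-sup switchover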

NPV-NPIV

An NPV-mode switch has no E ports — only F, NP, and SD ports.

Between two NPIV-enabled switches the link should presumably be an E port????

qos/mtu

You can only apply input to a qos policy; you can apply both input and output to a queuing policy.

A qos policy classifies and tags inbound traffic, a queuing policy handles bandwidth allocation, and a network-qos policy reshapes the traffic in each qos group (class-default carries no CoS marking, so if you are changing the MTU it should be applied to class-default).

The 1000v needs the MTU applied under the port-profile, with qos as a service-policy under the port-profile; the 5K needs it applied in a network-qos policy attached under "system qos" as a service-policy.
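A sketch of the 1000v side of that note, assuming an Ethernet (uplink) port-profile named SYSTEM-UPLINK and an existing qos policy named VM-QOS (both names are placeholders, and mtu under the port-profile is assumed to be supported as the note above states); the matching 5K network-qos configuration is shown later in the MTU section:

port-profile type ethernet SYSTEM-UPLINK
  switchport mode trunk
  switchport trunk allowed vlan 10-20
  mtu 9000
  service-policy type qos input VM-QOS
  no shutdown
  state enabled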


CCIE Data Center Prep

Main differences between line cards in the N7K series

http://www.cisco.com/c/en/us/products/switches/nexus-7000-series-switches/models-comparison.html

The initial series of line cards launched by Cisco for the Nexus 7K were M1 and F1. M1 cards handle all the major Layer 3 operations such as MPLS and routing, while F1 cards are essentially Layer 2 cards used for FEX, FabricPath, FCoE, and so on. If there is only an F1 card in the chassis, you cannot do Layer 3 routing; an M1 card must be installed so the F1 card can send traffic to it for proxy routing. The fabric capacity of an M1 line card is 80 Gbps. Since F1 cards have no L3 functionality, they are cheaper and provide a fabric capacity of 230 Gbps.
Later, Cisco released the M2 and F2 series. An F2 card can also do basic Layer 3 functions, but it cannot be used for OTV or MPLS. An M2 card's fabric capacity is 240 Gbps, while F2 cards have a fabric capacity of 480 Gbps.
The newer F3 cards support everything, including OTV and MPLS.

All card types can be mixed in the same VDC EXCEPT the F2. The F2 card has to be in its own VDC; you cannot mix F2 cards with M1/M2 or F1 in the same VDC. According to Cisco, this is a hardware limitation and mixing them creates forwarding issues.

A FEX can be dual-homed to two parent switches, but only to Nexus 5Ks — a Nexus 2K CANNOT be connected to two Nexus 7Ks. Dual-homed FEX to a pair of 5Ks is a supported design.

Fabric module

Fabric modules are the hardware modules that provide backplane connectivity between the I/O modules and the supervisors. In traditional switches like the 6500, the crossbar switch fabric was integrated into the chassis and there was no redundancy if it failed; in the Nexus 7K, fabric redundancy is provided by the switch fabric modules.
They are inserted into the back of the chassis and are hot-swappable, the same as line card (I/O) modules; however, they have no ports for connecting external devices, so they are not alternatives to line cards. They appear in the show module output as "Xbar". The Nexus 7010 and 7018 can hold up to 5 fabric modules.

There are two series of fabric modules, FAB1 and FAB2.
Each FAB1 has a maximum throughput of 46 Gbps per slot, so with the chassis running at full capacity — five FAB1s installed — the total per-slot bandwidth is 230 Gbps. Each FAB2 has a maximum throughput of 110 Gbps per slot, so five FAB2s give 550 Gbps per slot. These are the fabric-module capacities; the actual throughput of a line card depends on the type of card and its fabric connection.

Bandwidth sharing when allocating vdc ports

As the original picture showed, consecutive even or odd ports sit on one side, and each group of four ports shares the same hardware ASIC. That is a port-group, and the first port of each group is the one marked yellow in the diagram.

Given that the N7K-M132XP-12 has 32 10G ports, each port-group (a group of 4 ports on this card) shares 10G among its members. Yes — not every port gets 10G of dedicated bandwidth. The total capacity of the card is therefore 80G, not the 320G you might expect, since there are 8 port-groups of 4 ports each. The design assumes that all attached devices are unlikely to transmit at the same time. Ports 1, 3, 5, 7 are in one port-group, 2, 4, 6, 8 in another, and so on.
So the 4 ports in a group share the total available bandwidth of 10G.
What if a critical application needs a dedicated 10G? The first port of a port-group can be put into "DEDICATED" mode, and that port is always the first one of the group, i.e. the one marked yellow in the picture above. So ports 1, 2, 9, 10, 17, 18, 25, 26 can be put into dedicated mode, and once a port in a group is dedicated, the other 3 ports in that group are disabled and cannot be configured. If Eth1/2 is in dedicated mode and you try to configure Eth1/4, you get: "ERROR: Ethernet1/4: Config not allowed, as first port in the port−grp is dedicated".

Shared mode is the default. To put a port into dedicated mode, first shut it down:
N7K# config t
N7K(config)# interface Eth1/2
N7K(config-if)# shutdown
N7K(config-if)# rate-mode dedicated
N7K(config-if)# no shutdown

When allocating interfaces to VDCs, 1G ports can be allocated individually, while 10G ports are generally allocated per port-group. On F1 cards the 10G ports are grouped in twos; on M1/F2/M2 cards the 10G ports are grouped in fours.

If the vPC peer-link sits on an M1 (10G) card, dedicated mode is recommended to guarantee bandwidth. The peer keepalive should preferably not use mgmt0: during a supervisor switchover the supervisor reloads, which affects vPC peer monitoring and turns a link that should never go down into one that depends on the supervisor.
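A sketch of moving the keepalive off mgmt0 onto a dedicated front-panel link in its own VRF (addresses, the VRF name, and the interface are placeholders):

vrf context VPC-KA
interface Ethernet1/48
  no switchport
  vrf member VPC-KA
  ip address 10.255.255.1/30
  no shutdown
vpc domain 10
  peer-keepalive destination 10.255.255.2 source 10.255.255.1 vrf VPC-KA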

VSS vs VPC

VSS is a stack-like feature on the 6500 that merges two 6500s into one managed switch, so there is no HSRP.

vPC keeps the two Nexus switches independently managed; they are logically combined to share VLAN forwarding, so attached devices see them as one device, but in day-to-day management they remain two separate switches — which is why HSRP still exists.

VPC-HM vs VPC-HM MAC pinning (used on the VSM)

When a 1000v connects to two non-clustered upstream switches that support port channels (but not vPC), use VPC-HM (Host Mode). The 1000v automatically uses CDP information to place the interfaces connected to the same upstream device into the same subgroup. Relying on CDP detection means giving up LACP, so the upstream port-channel mode should be on.

If the upstream does not support port channels at all, use mac-pinning. With mac-pinning there is no port channel, so each vEth maps to a single subgroup; by default the subgroup numbering increments starting from 1, whereas with the relative keyword the count starts from 0.

(Figure: Nexus 1000V port channels)

Taking the figure above as an example: UCS does not support port channels on its server-facing interfaces, but the VSM can still form a port channel; the VEM splits it into two subgroups, and a given source MAC from the host only ever appears on one of the links, which guarantees the upstream switch never receives the same MAC on multiple interfaces.

VPC-HM MAC pinning builds on plain host mode by load-balancing traffic to the subgroups based on source MAC.

Upstream with CDP enabled: channel-group auto mode on sub-group cdp

Upstream without CDP: channel-group auto mode on sub-group manual

Upstream without port-channel support: channel-group auto mode on mac-pinning (a port-profile sketch follows)
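As a sketch, the CDP sub-group case on the VSM side, assuming an uplink port-profile named UPLINK-HM and upstream switches whose port channels are set to mode on (names and VLANs are placeholders); for the mac-pinning or manual sub-group cases only the channel-group line changes:

port-profile type ethernet UPLINK-HM
  vmware port-group
  switchport mode trunk
  switchport trunk allowed vlan 10-20
  channel-group auto mode on sub-group cdp
  no shutdown
  state enabled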

 

 VDC

Depending on the license, a Nexus 7000 with SUP1 supports 4 VDCs, SUP2 supports 4 + 1 admin VDC, and SUP2E supports 8 + 1.

The default VDC on a Nexus is VDC #1. All unallocated interfaces belong to VDC 0; by default, all physical interfaces belong to the default VDC.

Features unique to the default VDC #1:

• VDC creation/deletion/suspend
• Resource allocation: interfaces and memory
• NX-OS upgrade across all VDCs
• EPLD upgrade (for new hardware features)
• Ethanalyzer captures: control-plane/data-plane (with ACL) traffic
• Feature-set installation for Nexus 2000, FabricPath, and FCoE
• CoPP
• Port channel load balancing
• Hardware IDS checks control

The storage VDC creates a "virtual" MDS within the Nexus 7000 with the following feature set:
• Participates as a full FCF in the network.
• The default VDC cannot be the storage VDC.
• Zoning, FC aliases, FC domains, IVR, Fabric Binding, and so on.
• FCoE target support.
• FCoE ISLs to other switches: Nexus 7000, 5000, and MDS.
• Only one storage VDC per chassis.
• Does not require the Advanced (VDC) license.
• Does count toward the total VDC count.
• Shared interfaces — the exception to the rule that an interface can exist in only one VDC. The shared-interface concept applies to F1, F2, and F2e modules with a CNA installed in the server. Traffic is split based on the L2 Ethertype; the Ethernet VDC "owns" the interface, and the storage VDC sees the interface as well.

M-series cards have no FCoE support; only F-series cards do, so M cards cannot be used as shared interfaces.

The Cisco Nexus 5000 Series switch does not support multiple VDCs. All switch resources are managed in the default VDC.

To enable FCoE on a Nexus 7K you need SUP2 plus an F-series module, and a storage VDC must be created before FCoE can be enabled — otherwise the option is not even visible. The 5K supports FCoE natively.

The 5K does not do zoning; it relies on the upstream 7K for that.

The "UP" in 5548UP stands for unified ports: any SFP can be inserted, and each port can run either Ethernet or FC.

Fex

FEX uplink modes are plain static pinning and port channel. With static pinning, each FEX host interface is pinned to one specific uplink; show interface xx fex-intf shows which FEX ports are pinned to that uplink. With a port channel, all the uplinks are bundled into one group and the FEX ports are pinned to the group; show interface poXX fex-intf shows every FEX port mapped to the port channel, while show interface xx fex-intf against an individual member shows nothing.

Plain static pinning therefore involves setting the maximum number of uplinks (pinning max-links), whereas with a port channel the whole bundle counts as one uplink, so max-links only needs to be 1. A sketch of both follows.
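A sketch of both uplink styles on a 5K parent, assuming FEX 100 is statically pinned over four uplinks and FEX 101 hangs off a fabric port channel (FEX numbers and interfaces are placeholders):

fex 100
  pinning max-links 4
interface Ethernet1/1-4
  switchport mode fex-fabric
  fex associate 100

fex 101
  pinning max-links 1
interface Ethernet1/5-8
  switchport mode fex-fabric
  fex associate 101
  channel-group 101
interface port-channel101
  switchport mode fex-fabric
  fex associate 101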

STP

Diameter calculation: the largest number of switches that traffic has to cross between any two switches in the topology.
spanning-tree vlan 10 root primary diameter 3;
When an ordinary standalone switch connects to a Nexus pair over vPC, the maximum distance between any two switches is three devices (counting the starting switch itself). Once the diameter is changed, the system automatically adjusts the hello and other timers — convenient for users who do not know which timer combination suits them best.

Spanning-tree port types:

Normal: what you get when nothing is changed.
Network: Normal plus Bridge Assurance — if the interface stops receiving BPDUs it is automatically blocked, and if the VLAN databases on the two bridges differ, STP for those VLANs is blocked on the interface.
Edge: simply PortFast (see the example below).
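For example (interface numbers are placeholders):

interface Ethernet1/10
  spanning-tree port type network
interface Ethernet1/20
  spanning-tree port type edge trunk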

VPC STP Pseudo Information

http://www.cisco.com/c/en/us/support/docs/routers/7000-series-routers/116140-config-nexus-peer-00.html

With peer-switch enabled, the Nexus pair advertises the same virtual MAC (0023.04ee.bexx), so devices attached over vPC links see them as a single switch. Devices attached over non-vPC links see each Nexus's real MAC (that of int vlan 1), so they can tell exactly which box they are connected to and which one is root. Pseudo information is a fine-tuning knob, on top of the global spanning-tree vlan 1-10 priority, for exactly this situation: devices on non-vPC links compute the root using the values defined under pseudo-information. With the configuration in the figure below, vPC-attached devices see the root as priority 4096 with the vMAC, while non-vPC-attached devices use the designated priority to pick the real MAC of the root. It is recommended that the pseudo root priority be lower (better) than the global VLAN priority.

(Figure: 116140-config-nexus-peer-03 — pseudo-information configuration example from the Cisco document above)

FabricPath

Fabricpath Basic Concept Summary

FabricPath Architecture Design

FabricPath is an MPLS-like Layer 2 technology, similar to OTV, except that it encapsulates Layer 2 frames inside a new Layer 2 header (OTV puts Layer 2 inside a Layer 3 overlay). It uses IS-IS for best-path selection and, like MPLS, uses switch IDs to identify next hops, so the core switches do not learn MAC addresses; MAC learning is pushed out to the FabricPath edge switches, somewhat like a stub area in OSPF. This relieves the core of conventional MAC learning.

Configuring FP only requires enabling switchport mode fabricpath on the relevant interfaces and making sure the VLANs are also set to mode fabricpath. Each switch has its own FabricPath switch ID, defined with fabricpath switch-id xx; with vPC+, an additional switch ID can be defined under the vpc domain so that the two switches share one virtual SID, and attached devices then see them as a single FabricPath switch.
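A minimal FabricPath/vPC+ sketch, assuming VLAN 100 is a FabricPath VLAN, Ethernet1/1 faces the core, and switch-IDs 11 and 1000 are placeholders:

install feature-set fabricpath
feature-set fabricpath
fabricpath switch-id 11
vlan 100
  mode fabricpath
interface Ethernet1/1
  switchport mode fabricpath
vpc domain 10
  fabricpath switch-id 1000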

When the FabricPath fabric connects to multiple STP domains, it presents the same bridge ID (c84c.75fa.6000) so that STP believes those domains are attached to a single switch. That way, even though FabricPath itself does not carry STP information, STP's own calculation is unaffected.

When FabricPath connects to STP, every FabricPath edge switch must be root for the FabricPath VLANs. Inside the FabricPath domain, the root is elected on root priority, then system ID (the VDC MAC), then switch ID — higher is better; the priority field under fabricpath domain default exists for exactly this election. There are two ways to make an FP edge switch root: 1) use spanning-tree vlan x root directly, or 2) fine-tune with spanning-tree pseudo-information, which gives the switch an extra knob after spanning-tree vlan x priority has already set a value (pseudo-information only affects STP-attached devices???). Because the edge switches are all root, they get a root-guard-like behavior: if a superior BPDU arrives, the port blocks itself to keep the fabric as root. When multiple FP edge switches connect to the same STP device, make sure those edges are all root and share the same spanning-tree domain ID.

Each FabricPath edge switch must be configured as root for all FabricPath VLANs. If you are connecting STP devices to the FabricPath fabric, make sure you configure all edge switches as STP root by using the spanning‑tree vlan x root primary command (or manually configure the bridge priority on each switch to force the switch to be root). Additionally, if multiple FabricPath edge switches connect to the same STP domain, make sure that those edge switches use the same bridge priority value.

In order to ensure the FabricPath fabric acts as STP root, all FabricPath edge ports have the STP root-guard function enabled implicitly – if a superior BPDU is received on a FabricPath edge port, the port is placed in the “L2 Gateway Inconsistent” state until the condition is cleared (that is, superior BPDUs are no longer being received). A syslog is also generated when this condition occurs:

%STP-2-L2GW_BACKBONE_BLOCK: L2 Gateway Backbone port inconsistency blocking port port-channel100 on VLAN0010.

On all Cisco FabricPath switches that have Classic Ethernet (CE) ports connected to Classic Ethernet switches, configure the same root priority using the spanning-tree pseudo-information command shown here:

S100(config)# spanning-tree pseudo-information
S100(config-pseudo)# vlan 10-100 root priority 4096

Make sure that the configured priority is the best (lowest) in the network, so that the Cisco FabricPath region is the root of the spanning tree. If the Classic Ethernet edge ports receive a superior STP Bridge Protocol Data Unit (BPDU), those ports will be blocked from forwarding traffic.

When a traditional spanning-tree domain is connected to a Cisco FabricPath network, the whole Cisco FabricPath network is perceived as a single spanning-tree switch, which, however, does not pass STP BPDUs by default. To provide correct and consistent end-to-end MAC address learning, Cisco FabricPath IS-IS can transport topology change notification (TCN) information to the Cisco FabricPath edge switches and initiate a flush of the MAC address table. To achieve this behavior, additional steps are required:

● Identify all Cisco FabricPath edge ports connected to the same external spanning-tree domain.
● Configure all identified Cisco FabricPath edge ports with the identical spanning-tree domain ID using the following command: spanning-tree domain <domain-id>.

Note that a given Cisco FabricPath leaf can be configured with only a single spanning-tree domain ID. These steps help ensure a loop-free environment by forwarding relevant BPDUs between all Cisco FabricPath edge ports configured with the identical spanning-tree domain ID (Figure 12).

By default, Cisco FabricPath creates two MDTs (multidestination trees) in the default Topology 0, and multidestination traffic is mapped to one of the two trees for load-balancing purposes. MDT1 carries broadcast and unknown-unicast traffic; MDT2 carries multicast traffic.

The first tree is built by electing a root; much like STP, the root is elected based on:

– Highest root priority

– Highest system ID

– Highest switch ID

The local switch's FabricPath root priority is set under fabricpath domain default; the switch with the highest priority becomes the root of MDT1, and the next highest becomes the root of MDT2.


There is no spanning tree inside FabricPath; existing frames are re-encapsulated and automatically tagged, and only switchport mode fabricpath is needed on the interface. Note that an interface used for vPC cannot also be used for FabricPath. The layout is like OSPF areas: edge-to-host interfaces are normal CE ports, edge-to-core interfaces are FabricPath ports, and between core switches FabricPath runs with vPC+. The vPC+ peer link must be on F-series cards.

(Figure: fabric1)

M-series cards have no FabricPath capability, so to bring VLANs on an M card into FabricPath you must create a separate VDC and cable the M-card VDC to the F-card FabricPath VDC.

(Figure: fabric2)

The figure above shows the HSRP concept with mixed FabricPath and CE connectivity. HSRP in FabricPath works like vPC: active/standby in the control plane, active/active in the data plane.

There are two ways to get active/active HSRP: vPC+ as in the figure above, or Anycast HSRP as in the figure below.

(Figure: fabric3)

In this design, thanks to the IS-IS-based FabricPath computation, HSRP is freed from its timers: as soon as the control-plane active HSRP router fails, the other routers take over based on FabricPath state. This is Anycast HSRP. The design also gives ECMP: S100 has four paths to the spine group, and although only S10 is control-plane active, S10 through S40 all answer with the same vMAC, so from S100's perspective there are four paths to the HSRP gateway.

vPC+ achieves active/active HSRP by merging two FP switches into one via the virtual SID; Anycast HSRP achieves it by having multiple FP switches answer with the same vMAC.

EIGRP

EIGRP has no separate version specifically for IPv6; under router eigrp <name> you use address-family ipv4 or ipv6 to distinguish them, unlike OSPF, which has a dedicated OSPFv3 for IPv6.

PIM

Nexus supports only PIM sparse mode.

ip pim rp-address 10.1.1.1 group-list 224.0.0.0/4 (defines 10.1.1.1 as the RP for these groups)
ip pim anycast-rp 10.1.1.1 192.168.1.1            (the real addresses of the switches that can act as RP 10.1.1.1)
ip pim anycast-rp 10.1.1.1 192.168.2.2

interface loopback1
  ip address 192.168.1.1/32
  ip router ospf 1 area 0.0.0.0
  ip pim sparse-mode

Multiple Nexus switches can act as the RP: use ip pim anycast-rp to build a set and list the devices eligible to be the RP; which one is actually used is decided by the devices themselves.

The kickstart image is the Linux kernel image; the system image contains the switch features.

show diff rollback-patch checkpoint test running-config shows the parts of the running config that differ from the checkpoint "test". For example, if you first save checkpoint test and then issue no username xx, this command shows the diff as no username xx.
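For example, assuming a throwaway account named labuser:

checkpoint test
configure terminal
 no username labuser
end
show diff rollback-patch checkpoint test running-config
rollback running-config checkpoint test

The last command rolls the running config back to the saved checkpoint.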

NPV-Network

Interfaces in the Cisco storage VDC can run FC and NPV. A switch running NPV looks, to its upstream switch, like a host with NPIV enabled: the upstream switch's port is an F port, the NPV switch's upstream-facing port is an NP port, and its host-facing ports are F ports. An NPV switch does not switch locally; it uses the core switch's domain ID and acts as a proxy toward the upstream switch, which does the local switching. An NPV edge is like a hub: it aggregates traffic and hands it to the upstream switch, so there is no protocol-interoperability problem between different devices — everything is collected and processed centrally upstream. The upstream switch connected to an NPV switch must have NPIV enabled, and the NPV switch itself does not need a domain ID; to the upstream switch it looks like an NPIV-enabled host. NPIV is like a blade whose many VMs need WWNs to reach their own LUNs; NPV is like many standalone servers connecting to one switch to reach their LUNs. Because the edge effectively extends the core, some FC operations cannot be done on the edge and must be done on the core.

NPIV

NPIV removes the old limitation of a single WWPN per N_Port, allowing multiple WWPNs to be carried on one N_Port; in a virtualized environment, the WWNs of the VMs on a blade can all exit through one N_Port and reach their LUNs on storage. NPIV must be enabled on all switches involved.

VSAN

A SAN is a physically cabled network dedicated to storage; a VSAN uses a VLAN-like concept to carry multiple SAN topologies over the same physical topology. The FCID of one host can appear in multiple VSANs. Routing, zoning, domains, and so on are independent per VSAN.

By default all interfaces belong to VSAN 1; if no other VSAN is created, all FC interfaces stay in VSAN 1. VSAN 4094 is the isolated VSAN: when the VSAN an interface belongs to is deleted, the interface is automatically moved to VSAN 4094 to avoid conflicts.

VSAN 1 cannot be deleted, but it can be suspended.

The Nexus 5000 does not support automatically assigning a port to a VSAN based on WWN.

VSAN trunking only works on FC interfaces between Cisco FC switches. Trunking is enabled on FC interfaces by default, but when connected to another vendor's switch the trunk is disabled. The modes are on, off, and auto; auto-auto yields no trunking — at least one side must be on for trunking to come up.

For an F-port port channel, feature fport-channel-trunk must be enabled before a port channel can be used on F-mode interfaces.

On the MDS side, if a port channel connects to the FI, go into interface san-port-channel # on the MDS and configure channel mode active so the channel is actively negotiated (a sketch follows).
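A sketch of the MDS side, assuming fc1/1 and fc1/2 are the links toward the FI and san-port-channel 1 is used (numbers are placeholders):

feature npiv
feature fport-channel-trunk
interface fc1/1 - 2
  channel-group 1
  no shutdown
interface san-port-channel 1
  channel mode active
  switchport mode F
  no shutdown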

Load balancing can be defined both per VSAN and per port channel. On the 5000 and 7000 the VSAN default is src-dst-oxid. src-dst-id behaves like active/standby (only one link is used per flow pair), while src-dst-oxid behaves like active/active (all links are used). Under the SAN port channel the option names differ slightly: source-destination-ip corresponds to src-dst-id and source-destination-port to src-dst-oxid, with source-dest-ip as the default.

VFC

VFC = FCoE. A Nexus has two kinds of ports, FC and Ethernet; FCoE means carrying the FC data received on FC interfaces out of an Ethernet port, and the vfc interface is the configuration construct that makes this possible, much like the overlay interface in OTV.

A vfc cannot do VSAN trunking, cannot be used as a SPAN destination, does not support port tracking, and cannot have its speed changed.

Because lossless FC traffic has to cross a lossy Ethernet path, FCoE relies on PFC (Priority Flow Control): the receiving end tells the sender about its congestion so the sender pauses or resumes transmission. The default CoS value used for this is 3.
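A sketch of the basic vfc plumbing on a 5K, assuming FCoE VLAN 100 maps to VSAN 10 and the CNA sits on Ethernet1/10 (all numbers are placeholders):

feature fcoe
vsan database
  vsan 10
vlan 100
  fcoe vsan 10
interface Ethernet1/10
  switchport mode trunk
  switchport trunk allowed vlan 1,100
  spanning-tree port type edge trunk
interface vfc10
  bind interface Ethernet1/10
  no shutdown
vsan database
  vsan 10 interface vfc10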

MTU

First of all, this blog post is very good: http://www.ccierants.com/2013/10/ccie-dc-definitive-jumbo-frames.html

From my own testing, the largest size that actually passes an interface is 8976 (the usable size is generally the MTU minus 28 bytes — with an MTU of 9000 the usable size is 8972); even with the interface MTU set to 9216, the maximum that passes is still 8976.

The importance of jumbo frames for iSCSI speaks for itself. The Nexus 5K and 7K handle MTU differently, and the rules also differ between L2 and L3 interfaces.

The rule of thumb: servers are set to MTU 9000, while the Nexus, or the UCS QoS classes, are set to 9216.

The Best Effort class value limits the maximum MTU UCS accepts globally, acting just like system jumbomtu. Even if no adapter uses the Best Effort class, it still applies to the whole UCS domain. For example, with Best Effort = 5000 and Gold = 9216, adapters using Gold can in practice only accept 5000. Conversely, with Best Effort = 9216 and Gold = 5000, adapters using Gold get 5000 and all other adapters get 9216 or whatever their QoS class MTU is.

5K: the MTU is configured through QoS and must be defined in a policy map.

7K: system jumbomtu 9216 (range 2158-9216) defines the system MTU, serving the same purpose as QoS on the 5K; the difference is that the MTU can also be set directly on individual interfaces.

L2 interfaces: the MTU can only be 1500 or the system jumbo MTU.

L3 interfaces: any even value between 1500 and 9216.

The system jumbomtu value is what L2 ports use. To change the MTU of a port channel, first change the MTU on each member interface, then change it on the port channel itself.
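A 7K sketch of the L2/L3 split, assuming Ethernet1/1-2 are L2 members of port-channel 10 and Ethernet1/3 is a routed port (numbers are placeholders):

system jumbomtu 9216
interface Ethernet1/1-2
  mtu 9216
interface port-channel10
  mtu 9216
interface Ethernet1/3
  no switchport
  mtu 9000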

QoS configuration on the 5K:

class-map type network-qos class-fcoe            assigns qos-group 1 to the FCoE class
match qos-group 1
class-map type network-qos class-default        defines the default class; no group number given, so it defaults to group 0
class-map type network-qos class-all-flood
match qos-group 2
class-map type network-qos class-ip-multicast
match qos-group 2
policy-map type network-qos jumbo9216         the MTU 9216 network-qos policy map; it has to carry
class type network-qos class-fcoe                    both FCoE and iSCSI, so it needs both 2158 and 9216
pause no-drop
mtu 2158
class type network-qos class-default
mtu 9216
multicast-optimize
policy-map type network-qos default-nq-policy
class type network-qos class-default
multicast-optimize
policy-map type network-qos fcoe-default-nq-policy       the system-generated QoS policy cannot be
class type network-qos class-fcoe                                   modified, hence the new jumbo9216 policy with
pause no-drop                                                               the FCoE MTU written out again
mtu 2158
class type network-qos class-default
multicast-optimize
system qos
service-policy type queuing input fcoe-default-in-policy
service-policy type queuing output fcoe-default-out-policy
service-policy type qos input fcoe-default-in-policy
service-policy type network-qos jumbo9216                   applies jumbo9216

UCS is similar to the 5K: create a 9216 jumbo-frame QoS class, then change the MTU to 9000 on the NIC.

UCS

The 6100 cannot use expansion-card ports as server ports; the 6200 can use any port as a unified port, and therefore as any port type.

vCons are used to balance placement across adapters at the blade level; the vCon-to-adapter assignment differs depending on which adapters the blade carries:

(Figure: vCon placement table)

vCon placement can be forced; the options are:

  • All—All configured vNICs and vHBAs can be assigned to the vCon, whether they are explicitly assigned to it, unassigned, or dynamic. This is the default.
  • Assigned Only—vNICs and vHBAs must be explicitly assigned to the vCon. You can assign them explicitly through the service profile or the properties of the vNIC or vHBA.
  • Exclude Dynamic—Dynamic vNICs and vHBAs cannot be assigned to the vCon. The vCon can be used for all static vNICs and vHBAs, whether they are unassigned or explicitly assigned to it.
  • Exclude Unassigned—Unassigned vNICs and vHBAs cannot be assigned to the vCon. The vCon can be used for dynamic vNICs and vHBAs and for static vNICs and vHBAs that are explicitly assigned to it.
  • Exclude usNIC—Cisco usNICs cannot be assigned to the vCon. The vCon can be used for all other configured vNICs and vHBAs, whether they are explicitly assigned to it, unassigned, or dynamic.

VSG

http://www.cisco.com/c/en/us/products/collateral/interfaces-modules/virtual-security-gateway-nexus-1000v-series-switch/deployment_guide_c07-647435.html

How policies are applied to port-profiles in VSG is shown in the figure:

(Figure: VSG policy / port-profile relationship)

A more mature approach is to create a vPath: attach multiple policy sets to one vPath and reference it with vservice path xxx. That way multiple ACL sets can be combined without rewriting vservice node xxx profile xxx every time an ACL changes.

The network relationships among VSM, VSG, ESX, and vCenter matter a great deal. For security, management traffic can be placed in its own VLAN. The VSM controls the VSG and ESX hosts directly, not through vCenter; this means connectivity between the VSM and them must be guaranteed. The fact that vCenter can reach them and the VSM can reach vCenter (vCenter has two interfaces) does not mean the VSM can control the VSG.

UCS

The LAN Uplinks Manager controls which uplink on which FI each VLAN is assigned to.

Pin groups specify which uplinks a given vNIC uses.

One is control at the host level, the other at the FI level. Starting with release 2.0, for a VLAN pinned to specific uplinks (via the VLAN/uplink manager), only requests or replies toward the host that arrive on those uplinks are accepted; traffic for that VLAN arriving on other interfaces is dropped — this is RPF. If a newly created VLAN has no uplinks specified, by default the VLAN is allowed on all uplinks.

VSM summary

version 4.2(1)SV2(2.2)

svs switch edition essential

no feature telnet

feature lacp

banner motd #Nexus 1000v Switch#

ip domain-lookup

ip host Nexus1000v 10.10.1.101

hostname Nexus1000v

errdisable recovery cause failed-port-state

vem 3

host id 2fb52500-0000-0000-0000-000000000004

vem 4

host id 2fb52500-0000-0000-0000-000000000003

vem 5

host id 2fb52500-0000-0000-0000-000000000002

vem 6

host id 2fb52500-0000-0000-0000-000000000001
