Network Enhancers - "Delivering Beyond Boundaries"

Monday, March 30, 2015

Top 10 SDN and NFV Directory Products – Feb. 2015

Feb. ’15 Product Rank: Last Month & Peak


10. Nuage Virtualized Services Controller (VSC)
Nuage kicks off our list with its first appearance (well, since we started the list last month). Could it be the January Webinar Q&A that brought the VSC here? No, that webinar covered the VNS, which makes this first entry for the VSC that much more impressive.
Last Month: –
Peak: 10


9. HP Virtual Application Networks (VAN) SDN Controller
HP’s SDN controller moved up a spot. Though the controller itself didn’t make the news, HP sure did: it launched a new line of branded white box switches and also launched the HP Networking Channel on SDxCentral. That could explain the jump.
Last Month: 10
Peak: 9


8. Cisco Application Policy Infrastructure Controller (APIC)
Back at No. 8, a spot up from last month, the Cisco APIC continues to interest readers and customers. It must be SDxCentral’s page, What is Cisco APIC? It could also be the fact that two of the top ten biggest stories last month, on Cisco claiming to be serious about open source and how it’s ready to fight VMware, had readers interested in Cisco products.
Last Month: 9
Peak: 8


7. Dell S-Series S4810 High-Performance 10-/40-Gb/s Top-of-Rack Switch
Sitting at No. 7 for three months in a row, the Dell S-Series S4810 doesn’t want to budge from its spot. What’s interesting is that Dell hasn’t had any big breaking news on SDxCentral’s feed since November (not including mentions).
Last Month: 7
Peak: 7


6. Juniper Contrail
Juniper Contrail drops two spots from last month, still bouncing around in the middle. February certainly hasn’t been shy of Juniper news, with Craig Matsumoto’s exclusive on why Juniper ousted CEO Shaygan Kheradpir and our two breaking news stories last week, Jim Dolce’s return and how Juniper is bringing open source to its NFV story.
Last Month: 4
Peak: 4


5. VMware NSX
VMware NSX moves up. Will it catch its peak again? Three breaking news stories within a week of each other may explain this little jump: the latest NSX tally, VMware mixing NSX and vCloud Air, and VCE adopting VMware NSX.
Last Month: 6
Peak: 4


4. Huawei U2000 Network Management System (NMS)
U2000 NMS wasn’t in the news, but Huawei was for the first time since June 2014. Did it take Justin Dustzadeh joining Visa to help NMS reach the number four spot? Probably not, but it did spike interest in Huawei on SDxCentral. See the connection I’m making there?
Last Month: 5
Peak: 4


3. Cisco Extensible Network Controller (XNC)
How often do you type XNC into your favorite search engine? Cisco did a DemoFriday on the XNC way back in 2013, and it’s still snagging the No. 3 spot for the third month in a row. If you’re wondering what the XNC is, check out SDxCentral’s What is a Cisco XNC?
Last Month: 3
Peak: 3


2. EstiNet 8.1 OpenFlow Network Simulator and Emulator
Planting itself at No. 2 for the second month in a row, this simulator and emulator supports OpenFlow, so maybe that’s what piques interest.
Last Month: 2
Peak: 2


1. Virtual CPE (vCPE) Framework
At No. 1 for the third month in a row. Why so many clicks? As with last month, I’d have to go with the DemoFriday and resulting Q&A that Aricent rocked back in December. Or perhaps strong interest in vCPE as a whole results in searches that land here, thanks to the SEO value products get in our directory.
Last Month: 1
Peak: 1


Saturday, March 28, 2015

Understanding NFV in 6 videos

If the adage says a picture is worth a thousand words, then a video should be worth a million. In today’s post I offer you a quick way to understand Network Functions Virtualization (NFV), Software Defined Networking (SDN), and some of their related trends through six short videos, ranging from the very basics of virtualization and cloud concepts to the depths of the architectures proposed today for NFV installations.

What “the heck” are virtualization and cloud about?

A short and self-explanatory video from John Qualls, President and CEO of Bluelock, covering the very basics of data centers’ transition toward virtualized models.

What is the difference between NFV and SDN?

This great summary from Prayson Pate, Chief Technologist at Overture Networks, highlights the differences and similarities between NFV and SDN, and how the two complement each other in the telecom industry.

Let us talk about the architecture

Now that the basics are established, we can look at the overall architecture. Take a look at these diagrams from HP and Intel, which show the main components involved.

So, wait a minute, what is that thing they call OpenFlow?

The following video from Jimmy Ray Purser, technical host for Cisco TechWise and BizWise TV, explains OpenFlow in a quick and straightforward way.

What about OpenStack?

This piece from Rackspace, featuring Niki Acosta and Scott Sanchez, offers a great summary of OpenStack, its origin, and its place in the industry.

Now, what are the challenges faced and some real cases for the carriers?

Now that the concepts are clear and defined, we can study a couple of real use-case scenarios in carriers’ networks and their architecture, as well as methods for addressing the challenges faced in the NFV evolution. In the following video Tom Nolle, Chief Architect at CloudNFV, introduces Charlie Ashton, VP of Marketing and US Business Development at 6Wind, and Martin Taylor, CTO at Metaswitch Networks, covering use cases such as the Evolved Packet Core (EPC) and Session Border Controllers (SBCs) based on NFV.

Wrapping up, where are the vendors and the operators at with NFV?

The following pitch features Barry Hill, VP of Sales & Marketing at Connectem Inc., at the IBM Smart Camp 2013 hosted in Silicon Valley. It covers the market opportunity for NFV, Connectem’s solution for the operators’ EPC, and a brief check on the carriers’ status with it.

Although the ETSI ISG for NFV will most likely not publish its standards until a year from now, NFV is already a reality, and vendors and operators alike are working on it in one way or another. Whether you are just starting to explore this trend or already mastering it, I hope these videos showed you something about it you did not know before.

Monday, March 23, 2015

Will TV Viewing Habits Change Metro Architecture?

According to a couple of recent surveys, TV viewing is dropping in the 18-34-year-old age group. Some are already predicting that this will mean the end of broadcast TV, cable, and pretty much the media World as We Know It. Certainly there are major changes coming, but the future is more complicated than the “New overtakes the Old” model. It’s really dependent on what we could call lifestyle phases, and of course it’s really complicated. To make things worse, video could impact metro infrastructure planning as much as NFV could, and it’s also perhaps the service most at risk of itself being impacted by regulatory policy. It’s another of those industry complications, perhaps one of the most important.

Let’s start with video and viewing changes, particularly mobile broadband.  “Independence” is what most young people crave.  They start to grow up, become more socially aware, link with peer groups that eventually influence them more than their parents do.  When a parent says “Let’s watch TV” to their kids, the kids hear “Stay where I can watch you!”  That’s not an attractive option, and so they avoid TV because they’re avoiding supervision.  This was true fifty years ago and it’s still true.

Kids roaming the streets or hanging out in Starbucks don’t have a TV there to watch, and mobile broadband and even tablets and WiFi have given them an alternative entertainment model, which is streaming video.  So perhaps ten years ago, we started to see youth viewing behavior shift because technology opened a new viewing option that fit their supervision-avoidance goal.

Few people will watch a full hour-long TV show, much less a movie, on a mobile device. The mobile experience has to fit into the lives of people on the move, so shorter clips like music videos or YouTube’s proverbial stupid pet tricks caught on. When things like Facebook and Twitter came along, they reinforced the peer-group community sense, and they also provided a way of sharing viewing experiences through a link.

Given all this, it’s hardly surprising that youth has embraced streaming.  So what changes that?  The same thing that changes “youth”, which is “aging”.  Lifestyles march on with time.  The teen goes to school, gets a job and a place to live, enters a partner relationship, and perhaps has kids of his/her own.

Fast forward ten years.  Same “kid” now doesn’t have to leave “home” to avoid supervision, but they still hang out with friends and they still remember their streaming habits.  Stupid pet tricks seem a bit more stupid, and a lot of social-media chatter can interfere with keying down after a hard day at the office.  Sitting and “watching TV” seems more appealing.  My own research says that there’s a jump in TV viewing that aligns with independent living.

Another jump happens two or three years later when the “kid” enters a stable partner relationship.  Now that partner makes up a bigger part of life, the home is a better place to spend time together, and financial responsibilities are rising and creating more work and more keying down.  There’s another jump in TV viewing associated with this step.

And even more if you add children to the mix.  Kids don’t start being “independent” for the first twelve years or so on the average.  While they are at home, the partner “kids” now have to entertain them, to build a set of shared experiences that we would call “family life”.  Their TV viewing soars at this point, and while we don’t have full data on how mobile-video-exposed kids behave as senior citizens yet, it appears that it may stay high for the remainder of their lives.

These lifecycle changes drive viewing changes, and this is why Nielsen and others say that TV viewing overall is increasing even as it declines as a percentage of viewing by people between 18 and 34. If you add to this mix the fact that in any stage of life you can find yourself sitting in a waiting room or on a plane, bored to death (and who shows in-flight movies anymore?), you see that mobile viewing of video is here to stay…sort of.

The big problem that TV faces now isn’t “streaming” per se, it’s “on-demand” in its broadest sense—time-shifted viewing.  Across all age groups we’re seeing people get more and more of their “TV” in non-broadcast form.  Competition among the networks encourages them to pile into key slots with alternative shows while other slots are occupied by the TV equivalent of stupid pet tricks.  There are too many commercials and reruns.  Finally, we’re seeing streaming to TV become mainstream, which means that even stay-at-homes can stream video instead of watching “what’s on”.

I’ve been trying to model this whole media/video mess with uncertain results, largely because there are a huge number of variables.  Obviously network television creates most of the original content, so were we to dispense with it we’d have to fund content development some other way.  Obviously cable networks could dispense with “cable” and go directly to customers online, and more importantly directly to their TV.  The key for them would be monetizing this shift, and we’re only now getting some data from “on-demand” cable programming regarding advertising potential for that type of delivery.  I’m told that revenue realization from streaming or on-demand content per hundred views is less than a third of channelized real-time viewing.

I think all of this will get resolved, and be resolved in favor of streaming/on-demand in the long run. It’s the nature of the current financial markets to value only the current quarter, which means that media companies will self-destruct the future to make a buck in the present.  My model suggests that about 14% of current video can sustain itself in scheduled-viewing broadcast form, but that ignores the really big question—delivery.

If I’m right that only 14% of video can sustain broadcast delivery then it would be crazy for the cable companies to allocate the capacity for all the stuff we have now, a view that most of the cable planners hold privately already.  However, the traffic implications of streaming delivery and the impact on content delivery networks and metro architecture would be profound.

My model suggests that you end up with what I’ll call simultaneity classes.  At the top of the heap are original content productions that are released on a schedule whether they’re time-shifted in viewing or not and that command a considerable audience.  This includes the 14% that could sustain broadcast delivery and just a bit more—say 18% of content.  These would likely be cached in edge locations because a lot of people would want them.  There’s another roughly 30% that would likely be metro-cached in any significant population center, which leaves about 52% that are more sparsely viewed and would probably be handled as content from Amazon or Netflix is handled today.

The top 14% of content would likely account for about two-thirds of views, and the next 30% for 24% of views, leaving 10% for all the rest.  Thus it would be this first category of viewing, widely seen by lots of people, that would have the biggest impact on network design.  Obviously all of these categories would require streaming or “personalized delivery”, which means that the total traffic volume to be handled could be significant even if everyone were watching substantially the same shows.
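As a back-of-envelope check, the simultaneity-class split described above can be sketched in a few lines of Python. The content and view shares are the article's rough estimates; the class labels and the "concentration" metric are my own framing, not something from the model itself:

```python
# Back-of-envelope model of the "simultaneity classes" described above.
# Shares are the article's rough estimates, not measured data.
classes = [
    # (cache placement, share of content, share of views)
    ("edge-cached",   0.18, 0.66),   # scheduled prime material, ~two-thirds of views
    ("metro-cached",  0.30, 0.24),
    ("origin-served", 0.52, 0.10),   # the sparsely viewed long tail
]

for label, content_share, view_share in classes:
    # concentration: views attracted per unit of content, relative to average
    concentration = view_share / content_share
    print(f"{label}: {content_share:.0%} of content draws {view_share:.0%} of views "
          f"(~{concentration:.1f}x average)")
```

The takeaway is the skew: the edge-cached class attracts nearly four times the average views per unit of content, which is why it dominates cache sizing even though it is the smallest slice of the catalog.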

“Could” may well be the important qualifier here.  In theory you could multicast video over IP, and while that wouldn’t support traditional on-demand programming there’s no reason it couldn’t be used with prime-time material that’s going to be released at a particular time/date.  I suspect that as on-demand consumption increases, in fact, there will be more attention paid to classifying material according to whether it’s going to be multicast or not.  The most popular material might well be multicast at its release and perhaps even at a couple of additional times, just to control traffic loads.
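The traffic argument for multicasting popular releases can be put in numbers. This is purely illustrative: the 5 Mbit/s stream rate and the audience sizes are assumed, and the model counts copies on a single shared link rather than modeling a full distribution tree:

```python
# Illustrative comparison of unicast vs. multicast delivery on one shared link.
# The stream rate and viewer counts are assumed for the sake of the example.
STREAM_MBPS = 5.0  # assumed per-viewer stream rate

def unicast_mbps(viewers: int) -> float:
    """Unicast carries one copy of the stream per concurrent viewer."""
    return STREAM_MBPS * viewers

def multicast_mbps(viewers: int) -> float:
    """Multicast carries a single copy on the link, whatever the audience size."""
    return STREAM_MBPS if viewers > 0 else 0.0

for n in (1_000, 100_000):
    print(f"{n:>7} viewers: unicast {unicast_mbps(n):>9,.0f} Mbit/s, "
          f"multicast {multicast_mbps(n):.0f} Mbit/s")
```

The gap grows linearly with audience size, which is why scheduled releases of the most popular material are the natural multicast candidates: that is exactly where unicast delivery is most expensive.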

The impact of on-demand on networking would focus on the serving/central office for wireline service, and on locations where you’d likely find SGWs today for mobile services (clusters of cells). 

The goal of operators will be to push caches forward to these locations to avoid having to carry multiple copies of the same videos (time-shifted) to users over a lot of metro infrastructure.  So the on-demand trend will tend to encourage forward caching, which in turn would likely encourage at least mini-data-center deployments in larger numbers.

What makes this a bit harder to predict is the neutrality momentum.  The more “neutral” the Internet is, the less operators can hope to earn from investing in it.  It seems likely that the new order (announced but not yet released) will retain previous exemptions for “interior” elements like CDNs.  That would pose some interesting challenges because current streaming giants like Amazon and Netflix don’t forward-cache in most networks.  Do operators let them use forward caches, charge for the use of them, or what?

There’s even a broader question, which is whether operators take a path like that of AT&T (and in a sense Verizon) and deploy an IP-non-Internet video model.  For the FCC to say that AT&T had to share U-verse would be a major blow to customers and shareholders, but if they don’t say that then they are essentially sanctioning the bypassing of the Internet for content in some form.  The only question would be whether bypassing would be permitted for more than just content.

On-demand video is yet another trend acting to reshape networking, particularly in the metro sense. Its complicated relationship with neutrality regulations means it’s hard to predict what would happen even if consumer video trends themselves were predictable. Depending on how video shakes out, how NFV shakes out, and how cloud computing develops, we could see major changes in metro spending, which means major spending changes overall. If video joins forces with NFV and the cloud, then changes could come very quickly indeed.

Sunday, March 15, 2015

Data Center Tier Levels

Your enterprise's old data center has reached the end of the road, and the whole kit and caboodle is moving to a colocation provider. What should you be looking for in the data center, and just how much uptime comes from within?

A lot of the work measuring data center reliability has been done for you. The Uptime Institute's simple data center Tier levels describe what should be provided in terms of overall availability by the particular technical design of a facility.

There are four Uptime Tiers. Each Tier must meet or exceed the capabilities of the previous Tier. Tier I is the simplest and least highly available, and Tier IV is the most complex and most available.

Tier I: Single non-redundant power distribution paths serve IT equipment with non-redundant capacity components, leading to an availability target of 99.671% uptime. Capacity components include items such as uninterruptable power supply, cooling systems and auxiliary generators. Any capacity component failure will result in downtime for a Tier I data center, as will scheduled maintenance.

Tier II: A redundant site infrastructure with redundant capacity components leads to an availability target of 99.741% uptime. The failure of any capacity component can be handled by manually switching over to a redundant item, with a short period of downtime, and scheduled maintenance still requires downtime.

Tier III: Multiple independent distribution paths serve IT equipment; there are at least dual power supplies for all IT equipment and the availability target is 99.982% uptime. Planned maintenance can be carried out without downtime. However, a capacity component failure still requires manual switching to a redundant component, which will result in downtime.

Tier IV: All cooling equipment is dual-powered and a completely fault-tolerant architecture leads to an availability target of 99.995% uptime. Planned maintenance and capacity component outages trigger automated switching to redundant components. Downtime should not occur.
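The availability targets above translate directly into permitted downtime per year. A quick sketch, assuming a standard 8,760-hour year:

```python
# Convert the Uptime Institute availability targets into annual downtime budgets.
HOURS_PER_YEAR = 24 * 365  # 8,760, ignoring leap years

uptime_targets = {
    "Tier I":   99.671,
    "Tier II":  99.741,
    "Tier III": 99.982,
    "Tier IV":  99.995,
}

for tier, pct in uptime_targets.items():
    downtime_h = (1 - pct / 100) * HOURS_PER_YEAR
    print(f"{tier}: {pct}% uptime allows ~{downtime_h:.1f} hours "
          f"(~{downtime_h * 60:.0f} minutes) of downtime per year")
```

Run it and the spread becomes vivid: Tier I permits nearly 29 hours of downtime a year, while Tier IV permits under half an hour. That last hour or so of availability between Tier III and Tier IV is where much of the cost difference sits.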

In most cases, costs reflect Tiering -- Tier I should be the cheapest, and Tier IV should be the most expensive. But a well-implemented, well-run Tier III or IV facility could have costs that are comparable to a badly run lower-Tier facility.

Watch out for colocation vendors who say their facility is Tier III- or Tier IV-"compliant"; this is meaningless. Quocirca has even seen instances of facility owners saying they are Tier III+ or Tier 3.5. If they want to use the Tier nomenclature, they should have the facility certified by the Institute.

Saturday, March 14, 2015

Data center design standards bodies

Several organizations produce data center design standards, best practices and guidelines. This glossary lets you keep track of which body produces which standards, and what each acronym means.

ASHRAE: The American Society of Heating, Refrigerating and Air-Conditioning Engineers produces data center standards and recommendations for heating, ventilation and air conditioning installations. The technical committee develops standards for data centers' design, operations, maintenance and energy efficiency. Data center designers should consult all technical documents from ASHRAE TC 9.9: Mission Critical Facilities, Technology Spaces and Electronic Equipment.

BICSI: The Building Industry Consulting Service International Inc. is a global association that covers cabling design and installation. ANSI/BICSI 002-2014, Data Center Design and Implementation Best Practices, covers electrical, mechanical and telecommunications structure in a data center, with comprehensive considerations from fire protection to data center infrastructure management.

BREEAM: The BRE Environmental Assessment Method (BREEAM) is an environmental standard for buildings in the U.K. and nearby countries, covering design, construction and operation. The code is part of a framework for sustainable buildings that takes into account economic and social factors as well as environmental. It is managed by BRE Global, a building science center focused on research and certification.

The Green Grid Association: The Green Grid Association is well-known for its PUE metric, defined as power usage effectiveness or efficiency. PUE measures how well data centers use power by a ratio of total building power divided by power used by the IT equipment alone. The closer to 1 this ratio comes, the more efficiently a data center is consuming power. Green Grid also publishes metrics for water (WUE) and carbon (CUE) usage effectiveness based on the same concept.
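PUE is a simple ratio, so it is easy to sketch. The power readings below are made-up illustrative numbers, not measurements from any real facility:

```python
# PUE (power usage effectiveness), as defined by The Green Grid:
# total facility power divided by the power drawn by IT equipment alone.
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    if it_equipment_kw <= 0:
        raise ValueError("IT load must be positive")
    return total_facility_kw / it_equipment_kw

# Illustrative readings: 1,500 kW at the utility meter, 1,000 kW at the IT load.
print(f"PUE = {pue(1500.0, 1000.0):.2f}")  # 1.50
```

A PUE of 1.50 means that for every watt delivered to servers, storage and network gear, another half watt goes to cooling, power distribution losses, lighting and other overhead; the closer the ratio gets to 1.0, the leaner the facility.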

IDCA: The International Data Center Authority is primarily known as a training institute, but also publishes a holistic data center design and operations ranking system: the Infinity Paradigm. Rankings cover seven layers of data centers, from location and facility through data infrastructure and applications.

IEEE: The Institute of Electrical and Electronics Engineers provides more than 1,300 standards and projects for various technological fields. Data center designers and operators rely on the Ethernet network cabling standard IEEE 802.3ba, as well as IEEE 802 standards, for local area networks such as IEEE 802.11 wireless LAN specifications.

ISO: The International Organization for Standardization is an overarching international conglomeration of standards bodies. The ISO releases a wide spectrum of data center standards, several of which apply to facilities. ISO 9001 measures companies' quality control capabilities. ISO 27001 certifies an operation's security best practices, regarding physical and data security as well as business protection and continuity efforts. Other ISO standards that data center designers may require include environmental practices, such as ISO 14001 and ISO 50001.

LEED: The Leadership in Energy and Environmental Design is an international certification for environmentally conscious buildings and operations managed by the U.S. Green Building Council. Five rating systems -- building design, operations, neighborhood development and other areas -- award a LEED level -- certified, silver, gold or platinum -- based on amassed credits. The organization provides a data-center-specific project checklist, as the LEED standard includes adaptations for the unique requirements of data centers.

NFPA: The National Fire Protection Association publishes codes and standards to minimize and avoid damage from hazards, such as fire. No matter how virtualized or cloudified your IT infrastructure, fire regulations still govern your workloads. NFPA 75 and 76 standards dictate how data centers contain cold/cool and hot aisles with obstructions like curtains or walls. NFPA 70 requires an emergency power off button for the data center to protect emergency respondents.

NIST: The National Institute of Standards and Technology oversees measurements in the U.S. NIST's mission includes research on nanotechnology for electronics, building integrity and diverse other industries. For data centers, NIST offers recommendations on authorization and access. Refer to special publications 800-53, Recommended Security Controls for Federal Information Systems, and SP 800-63, Electronic Authentication Guideline.

OCP: The Open Compute Project is known for its server and network design ideas. But OCP, started by Internet giant Facebook to promote open source in hardware, also branches into data center design. OCP's Open Rack and optical interconnect projects call for 21 inch rack slots and intra-rack photonic connections. OCP's data center design optimizes thermal efficiency with 277 Volts AC power and tailored electrical and mechanical components. 

OIX: The Open IX Association focuses on Internet peering and interconnect performance from data centers and network operators, along with the content creators, distribution networks and consumers. It publishes technical requirements for Internet exchange points and data centers that support them. The requirements cover designed resiliency and safety of the data center, as well as connectivity and congestion management.

Telcordia: Telcordia is part of Ericsson, a communications technology company. The Telcordia GR-3160 Generic Requirements for Telecommunications Data Center Equipment and Spaces particularly relates to telecommunications carriers, but the best practices for network reliability and organizational simplicity can benefit any data center that delivers applications to end users or host applications for third-party operators. The standard deals with environmental protection and testing for hazards, ranging from earthquakes to lightning surges.

TIA: The Telecommunications Industry Association produces communications standards that target reliability and interoperability. The group's primary data center standard, ANSI/TIA-942-A, covers network architecture and access security, facility design and location, backups and redundancy, power management and more. TIA certifies data centers to ranking levels on TIA-942, based on redundancy in the cabling system.

The Uptime Institute: The Uptime Institute certifies data center designs, builds and operations on a basis of reliable and redundant operating capability to one of four tier levels. Data center designers can certify plans; constructed facilities earn tier certification after an audit; operating facilities can prove fault tolerance and sustainable practices. Existing facilities, which cannot be designed to meet tier level certifications, can still obtain the Management Operations Stamp of Approval from Uptime.

Thursday, March 12, 2015

Cisco ACI CLI Commands "Cheat Sheet"

The goal of this document is to provide a concise list of useful commands to be used in the ACI environment. For in-depth information regarding these commands and their uses, please refer to the ACI CLI Guide.

Please note that legacy style commands (show firmware, show version, etc) will not be included in this guide. The below commands are new for ACI. Legacy commands may be added later on, but the point of this document is to be short and sweet.

This document is formatted in the following way: commands are surrounded by <> in bold and possible user-given arguments within commands (if necessary) are surrounded by () with a | in between multiple arguments. Brackets [] will be used for mandatory verbatim arguments. A dash (-) will be the barrier between a command and the explanation for a command. For example:

     - shows the status of a given interface as well as statistics
        interface ID is in () because it is a user-specified argument, you can put any interface you want

     - show the MAC port status
        ns|alp and 0|1 are in brackets because you must use either one of those arguments

Command Completion and Help
Context sensitive help and command completion in ACI is a bit different than in other command line interfaces from Cisco.  Since iShell builds mostly on Bash, these features tend to build off of the standard bash Programmable Completion feature.  

  • Tab - Use the tab key to auto complete commands.  In cases where there are multiple commands that match the typed characters, all options should be displayed horizontally.  

    Example Usage:

    admin@tsi-apic1-211:~> mo
    moconfig     mocreate     modelete     modinfo      modprobe     modutil      mofind       moprint      more         moset        mostats      mount        mount.fuse   mount.nfs    mount.nfs4   mountpoint   mountstats   mount.tmpfs
    admin@tsi-apic1-211:~> mo

    This is more than just iShell, it includes all Bash commands.  Hitting Tab before typing any CLI command on the APIC results in:
    Display all 1430 possibilities? (y or n)
  • Esc Esc - Use Double escape to get context sensitive help for available ishell commands.  This will display short help for each command.  [Side note: In early beta code, Double Escape after typing a few characters would only show one of the matching commands rather than all of them.  This is addressed via CSCup27989 ]

    Example Usage:

     attach           Show a filesystem object
     auditlog         Display audit-logs
     controller       Controller configuration
     create           create an MO via wizard
     diagnostics      Display diagostics tests for equipment groups
     dn               Display the current dn
     eraseconfig      Erase configuration, restore to factory settings
     eventlog         Display event-logs
     fabricnode       Commission/Decommission/Wipeout a fabric node
     faults           Display faults
     firmware         Add/List/Upgrade firmware
     health           Display health info
     loglevel         Read/Write loglevels
     man              Show man page help
     moconfig         Configuration commands
     mocreate         Create an Mo
     modelete         Delete an Mo
  • man  - All commands should have man pages.  [Side note: If you find an iShell command without a man page - open a bug]  The manual page for the commands will give you more detailed info on what the commands do and how to use them.

Cisco Application Centric Infrastructure CLI Commands (APIC, Leaf/Spine)

Clustering User Commands
 - shows the current cluster size and state of APICs
- changes the size of the cluster
 - Decommissions the APIC of the given ID
 - Factory resets APIC and after reboot will load into setup script
 - Reboots the APIC of the given ID
 - shows replicas which are not healthy
 - shows the state of one replica
 - large output which will show cluster size, chassisID, if node is active, and summary of replica health
 - shows fabric node vector
 - shows appliance vector
 - verifies APIC hardware
 - shows link status
 - shows the status of bond link
 - shows dhcp client information to confirm dhcp address from APIC
 - commissions, decommissions, or wipes out the given node. A wipeout completely wipes out the node, including its configuration. Use sparingly.

SSL Troubleshooting
 - tries to connect ssl between APIC and Node and gives output of SSL information
 - shows DME logs for the node
 - shows policy-element logs for SSL connectivity
Can also check logs in the /var/log/dme/log directory

Switch Cert Verification
 - Next to PRINTABLESTRING, it will list Insieme or Cisco Manufacturing CA. Cisco means the new secure certs are installed; Insieme means the old unsecure certs are installed
- Shows start and end dates of certificate. Must be within range for APIC to accept
- Shows keypairs of specified cert

Switch Diagnostics
 - shows bootup tests and diagnostics of given module
 - shows ongoing tests of given module
 - shows diagnostic result of given module or all modules
 - shows diagnostic result of given test on given module
 - show debug information for the diagnostic modules

Debug Commands
 - shows debug output of given argument
 - enables/disables given argument on all modules
 - gets the interval of given argument
 - EPC mon statistics
 - EPC mon statistics
 - EOBC/EPC switch status (0: EOBC, 1: EPC)
 - SC card broadcom switch status

Insieme ELTM VRF, VLAN, Interface Commands
 - dumps ELTM trace to output file
 - dumps eltm trace to console
 - shows vrf table of given vrf
 - vrf summary, shows ID, pcTag, scope
 - shows vlan information. Can substitute (brief) for a vlan ID

OSPF CLI Commands
 - shows OSPF neighbors of given vrf
 - shows OSPF routes of given vrf
 - shows ospf interfaces of given vrf
 - shows ospf information of given vrf
 - shows ospf traffic of given vrf

External Connectivity
 - shows arp entries for given vrf
 - shows ospf neighbors for given vrf
 - shows bgp sessions/peers for given vrf
 - shows ospf routes for given vrf
 - shows bgp unicast routes for given vrf
 - shows static routes for given vrf
 - shows routes for given vrf
 - shows external LPMs
 - shows next hops towards NorthStar ASIC or external router
 - HigigDstMapTable Indexed using DMOD/DPORT coming from T2. Provides a pointer to DstEncapTable. 
 - DstEncapTable Indexed using the HigigDstMapTable’s result. Gives tunnel forwarding data.
 - RwEncapTable Indexed using the HigigDstMapTable’s result. Gives tunnel encap data.
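The two-stage lookup described above can be modeled with plain dictionaries. This is a hedged sketch of the data flow only (all table contents below are invented for illustration): HigigDstMapTable is indexed by (DMOD, DPORT) from the T2 and yields a pointer, and that same pointer indexes both DstEncapTable (tunnel forwarding data) and RwEncapTable (tunnel encap/rewrite data).

```python
# (DMOD, DPORT) -> pointer, as produced by HigigDstMapTable
higig_dst_map = {(4, 7): 12}
# pointer -> tunnel forwarding data (DstEncapTable)
dst_encap = {12: {"tunnel_if": "tunnel5"}}
# pointer -> tunnel encap/rewrite data (RwEncapTable)
rw_encap = {12: {"outer_dmac": "00:22:bd:f8:19:ff"}}

def resolve(dmod: int, dport: int):
    """Chain the lookups: Higig map result indexes both encap tables."""
    ptr = higig_dst_map[(dmod, dport)]
    return dst_encap[ptr], rw_encap[ptr]

fwd, encap = resolve(4, 7)
print(fwd["tunnel_if"], encap["outer_dmac"])
```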

ISIS Fabric Unicast Debugging
 - shows ISIS statistics
 - shows ISIS adjacencies for given vrf. Can also add detail
 - shows LLDP neighbor status
 - shows interface status information and statistics
 - shows isis database, can also add detail
 - shows isis route information
 - shows isis traffic information
 - shows all discovered tunnel end points
 - shows isis statistics of given vrf
 - shows isis event history
 - shows isis memory statistics
 - provides isis tech-support output for TAC

ASIC Platform Commands
 - shows the MAC port status
 - shows the MAC port counters
 - shows ASIC block counters for given ASIC. Can also add [detail] for more details
 - shows interrupts for given ASIC

ASIC Platform Commands - T2 Specific
 - shows receive counters for T2
 - shows transmit counters for T2
 - shows per port packet type counters
 - shows ingress drop counters
 - shows egress drop counters
 &  - sets a register to a specific trigger. There are 9 registers per port (0-8)
    ex:  - sets the 4th register to select the RFILDR selector (bit 13)
 - checks the stats for the above command
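The register-selection arithmetic above can be sketched as simple bit manipulation. This assumes only what the note states (9 registers per port, RFILDR selected by setting bit 13); the function and layout are illustrative, not the real T2 programming interface.

```python
RFILDR_BIT = 13       # per the example above: RFILDR selector is bit 13
REGS_PER_PORT = 9     # 9 registers per port (0-8)

def select_trigger(regs, reg_index, bit):
    """Set the chosen selector bit in one of a port's 9 debug registers."""
    assert 0 <= reg_index < REGS_PER_PORT
    regs = list(regs)          # copy so the caller's list is untouched
    regs[reg_index] |= 1 << bit
    return regs

port_regs = [0] * REGS_PER_PORT
port_regs = select_trigger(port_regs, 4, RFILDR_BIT)  # "4th register", RFILDR
print(hex(port_regs[4]))  # 0x2000
```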

ASIC Platform Commands - NS Specific
 - shows port counters
 - shows internal port counters
 - shows vlan counters
 - shows per-tunnel counters
 - shows ASIC block counters
 - shows well-defined tables

Fabric Multicast - General
 - shows current state of FTAG, cost, root port, OIF list
 - shows GM-LSP database
 - shows GIPO routes, Local/transit, OIF list
 - shows topology and compute stats, MRIB update stats, Sync+Ack packet stats, Object store stats
 - shows isis multicast event history logs
 - more detailed than above command, specifically dealing with forwarding events and forwarding updates

Fabric Multicast Debugging - MFDM
 - flood/OMF/GIPi membership
 - per BD

 - GIPi membership
 - specific
 - per BD
 - specific per BD

 - flood membership
 - per BD

 - OMF membership
 - per BD

 - IPMC membership 
 - specific IPMC

Fabric Multicast Debugging - L2 Multicast
 - flood/OMF/GIPi membership
 - per BD

 - GIPi membership
 - specific
 - per BD
 - specific per BD

 - flood membership
 - per BD

 - MET membership
 - specific MET
 - flood MET
 - per BD
 - specific per BD
 - IPMC membership
 - specific IPMC

Fabric Multicast Debugging - MRIB
 - shows IP multicast routing table for given vrf

Fabric Multicast Debugging - MFIB
 - shows FTAGs
 - shows GIPO routes

Fabric Multicast Debugging - IGMP
 - shows multicast route information in IGMP
 - shows multicast router information IGMP
 - shows the FD-to-BD VLAN mapping. IGMP gets FD and G from Istack; it needs the BD to create (BD, G) state
 - verifies BD membership of a port in IGMP. Joins are processed only when the port is part of the BD
 - verifies the tunnel-to-IF mapping in IGMP. IGMP uses this to identify the groups on the VPC and sync only those
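The FD-to-BD translation and the BD-membership gate described above can be modeled in a few lines. This is a hedged sketch of the logic only; the table contents, ports, and group below are invented for illustration.

```python
# FD vlan -> BD vlan mapping (what IGMP consults to build (BD, G) state)
fd_to_bd = {101: 5}
# BD -> member ports; joins are processed only for ports in the BD
bd_membership = {5: {"eth1/1", "eth1/2"}}

def process_join(fd, group, port):
    """Translate (FD, G) from Istack into the (BD, G) state IGMP creates,
    dropping the join if the port is not a member of the BD."""
    bd = fd_to_bd.get(fd)
    if bd is None or port not in bd_membership.get(bd, set()):
        return None            # no BD mapping or port not in BD: ignore join
    return (bd, group)

print(process_join(101, "239.1.1.1", "eth1/1"))  # (5, '239.1.1.1')
print(process_join(101, "239.1.1.1", "eth1/9"))  # None
```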

Fabric Multicast Debugging - MFDM
 - shows IPv4 multicast routing table for given vrf
 - verifies the FD-to-BD VLAN mapping. MFDM gets (FD, port) memberships from vlan_mgr and uses this information to create BD flood lists
 - BD to GIPO mapping. GIPO is used by Mcast in Fabric
 - FD-vxlan to GIPO mapping
 - tunnel to phy mapping

Fabric Multicast Debugging - M2rib
 - shows multicast route information in M2rib
 - shows multicast route information in M2rib

Fabric Multicast Debugging - PIXM
 - RID to IPMC mapping. IFIDX is RID and LTL is IPMC

Fabric Multicast Debugging - VNTAG Mgr
 - IPMC to DVIF mapping. LTL is IPMC
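The PIXM and VNTAG manager mappings above chain together: PIXM maps a RID (shown as IFIDX) to an IPMC index (shown as LTL), and the VNTAG manager maps that IPMC to DVIFs. A minimal sketch with invented identifiers:

```python
# PIXM: RID (IFIDX) -> IPMC index (LTL)
rid_to_ipmc = {0x1A: 0x7F0}
# VNTAG manager: IPMC (LTL) -> list of DVIFs
ipmc_to_dvif = {0x7F0: [33, 34, 35]}

def dvifs_for_rid(rid):
    """Chain the two lookups; unknown RIDs resolve to an empty DVIF list."""
    return ipmc_to_dvif.get(rid_to_ipmc.get(rid, -1), [])

print(dvifs_for_rid(0x1A))  # [33, 34, 35]
```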

EP Announce - Debugging

iBash CLI

 - show endpoint information

BCM Table Dump

Fabric QoS Debugging - CoPP CLI

 - CoPP statistics (red = dropped, green = allowed)
 - shows QoS classes configured
 - shows QoS classes/policies configured per vlan
 - shows ppf details
 - shows QoS classes configured in hardware
 - shows the QoS DSCP/dot1p policy configured for a vlan in HW
 - shows QoS DSCP/dot1p policy summary
 - shows QoS DSCP/dot1p policy in detail
 - shows T2 TCAM entries for specified group
 - shows QoS counters on each port
 - shows QoS counters on each port (internal)
 - shows QoS counters for each class for all ports

 - shows the edge port config on the HIF (FEX) ports, the internal VLAN mapping and the STP TCN packet statistics received on the fabric ports
 - shows mcp information by interface
 - shows stats for all interfaces
 - shows mcp information per vlan
 - shows stats for all vlans
 - shows mcp information per msti region
 - shows stats for all msti regions

iTraceroute CLI
 - node traceroute
 - Tenant traceroute for vlan encapped source EP
 - Tenant traceroute for vxlan encapped source EP

ELAM Setup and debugging (follow commands in order)
 - starts ELAM on given ASIC
 - sets trigger for ELAM
 - sets source and destination mac addresses
 - Starts capture
 - shows capture status
 - shows report of the capture

VMM Troubleshooting
 - shows VM controllers and their attributes such as IP/hostname, state, model, serial number
 - shows hypervisor inventory of given VM controller

TOR Sync Troubleshooting

 - shows which VLANs are learn-disabled
 - shows which VLANs are learn-disabled
 - shows whether a timer is attached to the VLAN/VRF

OpFlex Debugging
 - shows whether OpFlex is online (status = 12 means OpFlex is online; remoteIP is the anycast IP; intra vlan is the VLAN used by the VTEP; FTEP IP is the iLeaf's IP)
 - check if DPA is running
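The status check above (status 12 = online) can be expressed as a one-line predicate. The field names below mirror the note's output labels but are otherwise illustrative:

```python
OPFLEX_ONLINE = 12   # per the note above: status 12 means OpFlex is online

def opflex_online(status_fields: dict) -> bool:
    """Return True when the parsed status field reports the online value."""
    return status_fields.get("status") == OPFLEX_ONLINE

print(opflex_online({"status": 12, "remoteIP": "10.0.0.32"}))  # True
print(opflex_online({"status": 3}))                            # False
```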

 - uplinks and the VTEP should be in the forwarding state. The PC-LTL of the uplink port should be non-zero
 - Check port channel type
 - if the port channel type is LACP, this command shows the individual uplink LACP state
 - verify if the VTEP received a valid DHCP IP address

SPAN Debugging

BPDU Debugging
 - shows if BPDU Guard/Filter is enabled or disabled
 - check if the bpdu-drop stats are incrementing on the uplinks/virtual ports

VEM Misc Commands
 - show channel status
 - check port status
 - check per EPG flood lists
 - check vLeaf multicast membership
 - show packet stats
 - show packet counters

 - debug vxlan packet path
 - debug vxlan packet path
 - show above logging output

FEX Troubleshooting
 - shows all FEXs and their states
 - gives detailed stats of given FEX
 - gives environmental stats of FEX
 - shows FEX version
 - shows FEX fabric interface information
 - shows logging information for FEX
 - shows transceiver information for FEX
 - show FEX reset reason
 - shows FEX module information
 - shows debugging information; grep the output to find what you need
 - use to find which service is failing in the sequence so that process can be debugged further
