Network Enhancers - "Delivering Beyond Boundaries"


Tuesday, April 2, 2013

Evaluating Network Gear Performance

 
Choosing the right equipment for your network is hard. Beyond the ever-growing roster of features one must account for when evaluating candidate hardware, it's important not to overlook performance limitations. Below are some of the most crucial characteristics to consider when doing your research.

Throughput

Throughput is the rate at which a device can convert input to output. This is different from bandwidth, which is the rate at which data travels across a medium. An Ethernet switch, for example, might have 48 ports running at an individual bandwidth of 1 Gbps each but be able to switch only a total of 12 Gbps among the ports at any given time. This is said to be the switch's maximum throughput.
 
Throughput is measured in two units: bits per second (bps) and packets per second (pps). Bits per second is the more familiar of the two: it is the amount of data that flows past a particular point within a duration of one second, typically expressed as megabits (Mbps) or gigabits (Gbps) per second. Capitalization is important here. A lowercase 'b' indicates bits, whereas an uppercase 'B' indicates bytes. Data rates are always expressed in bits per second, with a lowercase 'b' (Kbps or Mbps).
 
Packets per second, similarly expressed most often as Kpps or Mpps, is another way of evaluating throughput. It conveys the number of packets or frames that can be processed in one second. This measurement exposes the limits of a device's processing power, because shorter packets demand more frequent forwarding decisions. For example, a router might claim a throughput of 30 Mbps using full-size packets, yet be limited to processing 40 Kpps. If every packet received were the minimum size of 64 bytes (512 bits), the router could achieve only 20.48 Mbps of throughput (512 bits × 40,000 pps = 20,480,000 bps).
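
To make the arithmetic concrete, here is a minimal Python sketch of the same calculation. The 30 Mbps and 40 Kpps figures are the illustrative ones from the example above; substitute your own device's limits. Effective throughput is whichever of the two limits binds first.

    def effective_throughput_mbps(packet_bytes, max_pps, max_mbps):
        # Throughput is bounded by both the bps limit and the pps limit.
        pps_bound_mbps = packet_bytes * 8 * max_pps / 1_000_000
        return min(pps_bound_mbps, max_mbps)

    # Router from the example: 30 Mbps claimed, 40 Kpps processing limit.
    print(effective_throughput_mbps(1500, 40_000, 30.0))  # 30.0  (bps limit binds)
    print(effective_throughput_mbps(64, 40_000, 30.0))    # 20.48 (pps limit binds)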
 
Cisco maintains often-cited baseline performance measurements for its most popular routers and switches. If you work out the math, you can see that the Mbps numbers listed in the router performance document were derived using minimum-length (64-byte) packets. These numbers thus represent a worst-case scenario. Packets on a production network typically vary widely in size, and larger packets will yield higher bits-per-second rates.
 
Keep in mind that these benchmarks were taken with no features other than IP routing enabled. Enabling additional features and services, such as access control lists or network address translation, may reduce throughput. Unfortunately, it's impractical for a vendor to list throughput rates with and without myriad combinations of features enabled, so you'll have to do some testing yourself.

Oversubscription

Ethernet switches are often built with oversubscribed backplanes. Oversubscription refers to a point of congestion within a system where the potential rate of input is greater than the potential rate of output. For example, a switch with 48 1 Gbps ports might have a backplane throughput limitation of only 16 Gbps, meaning that only 16 ports can transmit at wire rate (the physical maximum throughput) at any point in time. This isn't usually a problem at the network edge, where few users or servers ever need to transmit at these speeds for a prolonged time. In the data center or network core, however, oversubscription is a much more critical consideration.
 
As an example, let's look at the 16-port 10 Gbps Ethernet module WS-X6816-10G-2T for the Cisco Catalyst 6500 switch. Although the module provides an aggregate of 160 Gbps of potential throughput, its connection to the chassis backplane is only 40 Gbps. The module is oversubscribed at a ratio of 4:1. This module should only be used in situations where the aggregate throughput demand from all interfaces is not expected to exceed 40 Gbps.
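
As a quick sanity check during design, the oversubscription ratio can be computed with a short sketch like this one; the figures are those from the two examples above.

    def oversubscription_ratio(ports, port_gbps, backplane_gbps):
        # Ratio of aggregate front-panel capacity to backplane capacity.
        return (ports * port_gbps) / backplane_gbps

    # 48 x 1 Gbps switch with a 16 Gbps backplane:
    print(oversubscription_ratio(48, 1, 16))   # 3.0 -> 3:1 oversubscribed
    # WS-X6816-10G-2T: 16 x 10 Gbps ports, 40 Gbps to the chassis:
    print(oversubscription_ratio(16, 10, 40))  # 4.0 -> 4:1 oversubscribed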

IP Route Capacity

The maximum number of routes a router can hold in its routing table is limited by its available memory (on hardware-forwarding platforms, forwarding entries are typically held in ternary content-addressable memory, or TCAM). Although a low-end router may be able to run BGP and exchange routes with BGP peers, it likely won't have sufficient memory to accept the full IPv4 Internet routing table, which comprises over 400,000 routes. (Of course, low-end routers should never be deployed in a position where they would need to receive the full routing table.) Virtual routing contexts, in which a router stores copies of a route in multiple separate forwarding tables, multiply routing table memory consumption, further elevating the importance of properly sizing routers for the role they play.
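
A back-of-the-envelope sizing check can be sketched as follows. The bytes-per-prefix figure is an assumption chosen purely for illustration; real per-route overhead varies widely by platform, so take actual numbers from vendor sizing guides.

    def rib_memory_mb(prefixes, vrf_copies=1, bytes_per_prefix=200):
        # Rough routing-table memory estimate; bytes_per_prefix is assumed.
        return prefixes * vrf_copies * bytes_per_prefix / 1_000_000

    # Full IPv4 table (~400,000 prefixes), single routing context:
    print(rib_memory_mb(400_000))                  # 80.0 (MB)
    # The same table replicated into 4 VRFs:
    print(rib_memory_mb(400_000, vrf_copies=4))    # 320.0 (MB)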

Maximum Concurrent Sessions

Firewalls and intrusion prevention systems perform stateful inspection of traffic transiting from one trust zone to another. These devices must be able to keep up with demand not only in bits per second and packets per second but also in the number of concurrent stateful sessions. A single web request from an internal host might trigger the initiation of one or two dozen TCP connections to various content servers, so the firewall or IPS must be able to track the state of, and inspect, potentially thousands of sessions at any point in time. If the device's maximum capacity is reached, attempts to open new sessions may be rejected until a number of current sessions close or expire. Such devices are likewise limited in how quickly they can set up new sessions (connections per second).
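
For sizing purposes, a crude demand estimate like the sketch below can be held up against a vendor's stated session limit. The host count and per-host session figures here are assumptions for illustration only.

    def peak_sessions(hosts, sessions_per_host):
        # Crude estimate of peak concurrent sessions the device must track.
        return hosts * sessions_per_host

    # Assumed: 2,000 internal hosts averaging 40 concurrent sessions each
    # at the busiest time of day.
    demand = peak_sessions(2_000, 40)
    print(demand)             # 80,000
    print(demand <= 100_000)  # True: fits a device rated for 100,000 sessions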
 

Wednesday, March 27, 2013

A Next-generation Enterprise WAN architecture

Courtesy - Network World
 
 
Enterprise WANs have changed very little in the last 15 or so years. While price/bit for the Enterprise WAN has improved somewhat over that time, it hasn't improved at the pace of Moore's Law, as computing, storage, Internet access, LAN switching … and pretty much everything else associated with IT has. And while Internet connections have seen Moore's Law bring about quantum improvements in price/bit, the unaided public Internet is still not reliable enough to deliver business-class quality of service and performance predictability, which is why the overwhelming majority of Enterprise WANs are based not on IPsec VPNs over the Internet but instead on private MPLS services from telcos like AT&T, Verizon and BT.
Starting now and over the next few years, however, the Enterprise WAN will have a revolution of its own. Closely correlated with the rise of cloud computing, this Next-generation Enterprise WAN (NEW, for short) architecture will help organizations move to cloud computing more quickly, but will be applicable even for those enterprises that don’t plan to use public cloud services for many years, if ever.
This column will cover this NEW architecture: why it will happen, what it will look like, the key technologies enabling it, the implications for various applications, including next-generation applications, the impact on enterprises’ relationships with their service providers, how it enables a smooth migration to leveraging cloud services, and much more.
This Next-generation Enterprise WAN architecture will happen precisely because private WANs need a revolution in price/performance to cost-effectively support the next wave of applications and the move on the computing side of the house toward cloud computing. Also driving this trend is the fact that the plain old public Internet is not reliable enough for the corporate WAN today and won't by itself deliver the reliability or performance predictability that most enterprises demand from their WAN and applications.
One undeniable trend of the last several years has been data center consolidation. This began because of benefits on the computing side and in OpEx, rather than anything to do with the network; indeed, the trend put even more pressure on the WAN (and on the data center LAN, but that issue is beyond our scope). Server virtualization technology and WAN Optimization technology have further enabled and accelerated the data center consolidation trend. In fact, these are two of the technologies key to the NEW architecture.
Three additional technologies also play a critical role. Distributed replicated file services, such as DFS Replication from Microsoft, and similar file synchronization services (e.g. DropBox and Box.net in the public cloud world) have been around for some time, but have come into their own more recently as network bandwidth has become more available and more affordable. One might argue that this is more of a computing/storage/application technology than a network technology; there is some truth to that. Nevertheless, we include it here as one of the key enablers of the next-gen WAN.
Colocation (colo) facilities, and in particular carrier-neutral colo facilities, are our fourth component. While colos have been around for a while, and many IT folks are familiar with them for public-facing websites and perhaps know them as the location for many public cloud services, the nearly infinite amount of diverse, very inexpensive bandwidth available at colos will make them a critical component of this NEW architecture.
Our final technology is the newest one: WAN Virtualization. WAN Virtualization does for the WAN what RAID did for storage, enabling organizations to combine diverse sources of bandwidth and build WANs that have 20 to 100 times the bandwidth, with monthly WAN costs reduced by 40% to 80% or even more, and more reliability and performance predictability than any single-vendor MPLS network. WAN Virtualization is the catalyst of our NEW architecture.
With the combination of these technologies, Enterprise WANs will have far lower monthly telecom costs, far higher bandwidth, and will be more reliable. If that troika alone isn’t enough, this NEW architecture also delivers lower OpEx (people) costs, significantly better application performance and, just as importantly, better application performance predictability. It will enable next-generation applications, e.g. HD videoconferencing.
This architecture also enables benefits and changes beyond those to the WAN itself. It can enable further server consolidation, up to the elimination of all branch-based servers if desired. It will facilitate the centralization of network and IT complexity, e.g. for Internet access and remote site backup.
It will allow enterprises to leverage cloud computing – public, private or hybrid – in an incremental, secure and reliable way. Enterprise WAN managers can prepare and enable their WAN for the move to private or public cloud computing, at whatever pace the computing side of the organization wants to go, without sacrificing the network reliability and network security they have today.
By doing all of these things, the NEW architecture helps lower overall IT CapEx and OpEx, not just networking OpEx. Wide area network design is, for the first time in a long time, strategic.
One of the most beautiful points is that most of this next-generation network upgrade pays for itself out of the WAN OpEx budget. It also provides a long-term way to leverage Internet economics and Moore’s Law, giving enterprises a way to cost effectively scale their WANs and leverage new WAN technologies, even those that are consumer-oriented, as they appear. It gives enterprises leverage with their telecom service providers for the first time.
Just as cloud computing is making now an interesting and exciting time to be on the computing side of IT, the confluence of these five technologies (server virtualization, WAN Optimization, distributed/replicated/synchronized file services, colocation and WAN Virtualization) is making this an interesting and exciting time for the Enterprise WAN.

Friday, August 17, 2012

Important Points in MPLS Network Design

Courtesy - Broadband Nation

Before jumping into MPLS (Multi-Protocol Label Switching) for your network design, there are important items to consider. Take a step back and first consider what you need your network to do, how it should do it, and what must happen if there are issues.

The intent of the network is definitely a critical piece. It is so important to understand what you are putting over the network in order to engineer the optimal design. I've had this conversation with clients many times when they realize they can't have as many voice call paths as they would like and still be able to surf the net.

Also, business continuity is definitely key. Documenting the plan and understanding how traffic should flow when the primary path is unavailable for any reason ensures you have survivability in the event of a service interruption or outage of any kind.

If you understand the intent, you can accurately plan for outages or interruptions in your disaster plan. Most times, you don't need every type of traffic to pass over the MPLS during an outage. You need to understand what is most important to your business and what has the biggest impact on your revenue, and then design a plan that ensures you don't lose that piece of the puzzle for any length of time.

I think the most important piece is not even the disaster recovery plan, but the "business continuity" piece. The reality is that you want the design to address the rare failure before it happens. Having a business continuity plan in place allows you to continue business seamlessly in the event something does happen. A disaster recovery plan only addresses how you recover from an outage and the cost and time associated with the disaster; business continuity minimizes the impact of all three and ensures you remain operational throughout. The other important pieces, of course, are the speed and security of the network.

Also key, now that I think about it, is default route pathing. This ties into the intent of the network. Even if the original intent of the MPLS network is not to pass Internet traffic, having dynamic routing on the core so that one site can piggyback off another's Internet connection during an outage at one of the sites is one of the most useful side features, and one that everyone always seems to forget or overlook. Sometimes this is done manually, by changing your firewall/router's default gateway to point to the MPLS hop, and sometimes dynamically with OSPF or EIGRP.
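
The failover decision itself is simple enough to sketch in a few lines of Python. This only illustrates the choice that OSPF/EIGRP would make dynamically (or that you would make manually), and the gateway addresses are hypothetical.

    def default_next_hop(local_internet_up, local_gw="203.0.113.1", mpls_hop="10.0.0.1"):
        # Prefer the site's own Internet gateway; if it's down, fall back to the
        # MPLS next hop so this site piggybacks on another site's connection.
        return local_gw if local_internet_up else mpls_hop

    print(default_next_hop(True))   # 203.0.113.1 (normal operation)
    print(default_next_hop(False))  # 10.0.0.1 (local Internet outage)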

Monday, June 18, 2012

VoIP – Per Call Bandwidth Consumption


One of the most important factors to consider when you build packet voice networks is proper capacity planning. Within capacity planning, per-call bandwidth calculation is essential when you design and troubleshoot packet voice networks for good voice quality.


This document explains voice codec bandwidth calculations and features to modify or conserve bandwidth when Voice over IP (VoIP) is used.


http://www.cisco.com/en/US/tech/tk652/tk698/technologies_tech_note09186a0080094ae2.shtml
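
In the spirit of that document, here is a minimal Python sketch of the standard per-call calculation. The G.711 figures (64 kbps codec, 20 ms packetization, so a 160-byte payload at 50 pps) and the header sizes (40 bytes of IP/UDP/RTP plus 18 bytes of Ethernet) are the commonly cited defaults; verify them against your own codec, packetization interval and transport.

    def voip_call_bandwidth_kbps(codec_kbps, payload_bytes, overhead_bytes):
        # Per-call bandwidth = (payload + headers) * 8 bits * packets per second.
        pps = codec_kbps * 1000 / (payload_bytes * 8)
        return (payload_bytes + overhead_bytes) * 8 * pps / 1000

    # G.711 at 20 ms packetization: 160-byte payload, 50 pps.
    # Overhead: 40 bytes IP/UDP/RTP + 18 bytes Ethernet = 58 bytes.
    print(voip_call_bandwidth_kbps(64, 160, 58))  # 87.2 kbps per call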

Tuesday, May 10, 2011

Network Design - Simple Explanation

 
  • Three Layer Model:
    • Access layer: where end users are connected; switches intra-VLAN traffic.
    • Distribution layer: where access layer switches are aggregated; performs inter-VLAN routing.
    • Core layer: where distribution layer switches are aggregated; the center of the network for all users.

  • Access layer:
    • Low cost per port.
    • High port density.
    • Scalable uplinks to higher layers.
    • Resiliency through multiple uplinks.
    • User access functions such as VLAN assignment, traffic and protocol filtering, and QoS.

  • Distribution layer:
    • High L3 throughput for packet handling.
    • Access lists, packet filtering, and QoS features.
    • Scalable and resilient high-speed links to the access and core layers.
    • Acts as the L3 boundary for access VLANs; broadcasts shouldn't travel across the distribution layer.

  • Core layer:
    • Very high L3 throughput.
    • Advanced QoS and L3 protocol functions.
    • Redundancy and resilience for high availability (HA).

  • Switch block:
    • A collection of access layer switches together with their pair of distribution switches.
    • Sized based on traffic types and behavior, and on the size and number of workgroups (see the sizing sketch after this list).
    • Redundancy is needed within the switch block.
    • A broadcast from a PC should be confined within its switch block.

  • Core block:
    • The enterprise/campus network backbone.
    • Collapsed core: the distribution and core layers are unified; one router performs both layers' functions.
    • Dual core: two core routers, with each switch block connected to both core routers in a redundant fashion.
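
As a rough illustration of switch block sizing, the sketch below computes the uplink oversubscription an access switch presents toward its distribution pair. The port counts and uplink speeds are assumed figures for illustration only; substitute your own.

    def access_uplink_oversubscription(user_ports, port_gbps, uplinks, uplink_gbps):
        # Ratio of user-facing capacity to uplink capacity on an access switch.
        return (user_ports * port_gbps) / (uplinks * uplink_gbps)

    # Assumed example: 48 x 1 Gbps user ports, dual 10 Gbps uplinks
    # (one to each distribution switch in the block).
    print(access_uplink_oversubscription(48, 1, 2, 10))  # 2.4 -> 2.4:1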
