Network Enhancers - "Delivering Beyond Boundaries"

Wednesday, January 30, 2013

Big Data - Data: From Cave Paintings to iPads



http://humanfaceofbigdata.com/

Big Data—everybody’s talking about it. But what does it mean?
More importantly, what does it mean for you?
Some people think Big Data is simply more information than can be stored on a personal computer. Others think of it as overlapping sets of data that reveal unseen patterns, helping us understand our world—and ourselves—in new ways.


Still others think that our smart phones are turning each of us into human sensors and that our planet is developing a nervous system. Below, experience how Big Data is shaping your life.


Click on "Lets Go" in this link to understand about Big Data.
http://www.studentfaceofbigdata.com/


Storage 101 - Part 6


Storage 101 - Part 5

This article concludes this series by discussing point to point and ring topologies for Fibre Channel.

Introduction

In the previous article in this series, I discussed the Fibre Channel switched fabric topology. As I mentioned in that article, switched fabric is by far the most common Fibre Channel topology used in Storage Area Networks. Even so, there are two additional Fibre Channel topologies that I want to show you.

The Point to Point Topology


Point to point is by far the simplest Fibre Channel topology. It is so simple, in fact, that it is unsuitable for use in SAN environments.

Point to point topology can best be thought of as a direct connection between two Fibre Channel nodes. In this topology, the N_Port of one node is connected to the N_Port of another. The cable that is used to make the connection performs a crossover so that the traffic transmitted by the first node arrives at the receive port of the second node. Likewise, the second node's transmit port sends traffic to the first node's receive port. The process is very similar to connecting two Ethernet devices together without a switch by using a crossover cable.

As you can see, a point to point topology is extremely simple in that no switches are used. The downside to point to point connectivity is that it severely limits your options, because the design can't be scaled to service more complex storage requirements without switching to a different topology.

Arbitrated Loop Topology


The other type of topology that is sometimes used with Fibre Channel is known as an arbitrated loop. This type of topology is also sometimes referred to simply as Loop or as FC-AL.

The Arbitrated Loop topology has been historically used as a low cost alternative to the switched fabric topology that I discussed in the previous article. Switched fabric topologies can be expensive to implement because of their reliance on Fibre Channel switches. In contrast, the arbitrated loop topology does not use switches.

It is worth noting that today Fibre Channel switches are less expensive than they once were, which makes the use of a switched fabric more practical than it was a few years ago. I mention this because in a SAN environment you really should be using a switched fabric: it provides the highest level of flexibility and the highest resiliency when a component failure occurs. Even so, an arbitrated loop can be a valid option in smaller organizations with limited budgets, so I wanted to at least talk about it.

Just as the fabric topology can be implemented (cabled) in several different ways, so too can the ring topology. Although the phrase ring topology implies that devices will be cabled together in a ring, this concept does not always hold true.

The first way in which a ring topology can be cabled is in a ring. In doing so, the Fibre Channel devices are arranged in a circle (at least from a cabling standpoint anyway) and each device in the circle has a physical connection to the device to its left and to the device to its right.

This type of design has one major disadvantage (aside from the limitations that are shared by all forms of the ring topology). That disadvantage is that the cabling can become a single point of failure for the ring. If a cable is damaged or unplugged then the entire ring ceases to function. This occurs because there is no direct point to point connection between devices. If one device wants to communicate with another device then the transmission must be passed from device to device until it reaches its intended destination.

Another way in which the ring topology can be implemented is through the use of a centralized Fibre Channel hub. From a cabling perspective, this topology is not a ring at all, but rather a star topology. Even so, the topology is still defined as a ring topology because it makes use of NL_Ports (node loop ports) rather than the N_Ports that are used with a switched star topology.

So why would an organization use a Fibre Channel hub as a part of a ring topology? It’s because using a hub prevents the ring’s cabling from becoming a single point of failure. If a cable is broken or unplugged it will cause the associated device to become inaccessible, but the hub ensures that the affected device is bypassed and that the rest of the ring can continue to function. This would not be the case without the Fibre Channel hub. If the same failure were to occur on a Fibre Channel ring that was not based around a hub then the entire ring would cease to function.

The other advantage to using a Fibre Channel hub is that the hub can increase the ring’s scalability. I will talk more about scalability in a moment, but for right now I’m sure that some of you are curious as to the cost of a Fibre Channel hub. The price varies among vendors and is based on the number of ports on the hub. However, prices for Fibre Channel hubs start at less than a hundred dollars, and higher end hubs typically cost less than a thousand dollars. By way of comparison, some low end Fibre Channel switches cost less than five hundred dollars, but most cost several thousand.

Public Loops

Occasionally Fibre Channel loops use a design known as a public loop. A public loop is a hub based Fibre Channel loop that is also tied into a switched fabric. In this type of topology, the devices within the ring connect to the hub using NL_Ports. However, the hub itself is also equipped with a single FL_Port that connects the loop to a single port on a Fibre Channel switch. Needless to say, this is a low performance design since the switch port’s bandwidth must be shared by the entire ring.

Scalability


Arbitrated loops are limited to a total of 127 ports. When a hub is not used, each device in the loop requires two ports, because it must link to the device on its left and the device on its right. When a hub is used, each device only requires a single port, so the loop could theoretically accommodate up to 127 devices (although hardware constraints often reduce the actual number of devices that can be used).

On the lower limit, an arbitrated loop can have as few as two devices. Although a loop consisting of two devices is cabled similarly to a point to point topology, it is a true ring topology because unlike a point to point topology, NL_Ports are being used.

One last thing that you need to know about the ring topology is that it is a serial architecture, and the ring's bandwidth is shared among all of the devices on the ring. In other words, only one device can transmit at a time. This is in stark contrast to a switched fabric, in which multiple communications can occur simultaneously.
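
To make the shared-bandwidth behavior concrete, below is a minimal Python sketch of loop arbitration. It is purely illustrative: the device names and AL_PA values are invented, and real FC-AL arbitration involves far more signaling, but it captures the idea that the device with the lowest AL_PA (highest priority) wins and everyone else waits.

# Hypothetical AL_PA (Arbitrated Loop Physical Address) assignments.
al_pa = {"host_a": 0x01, "host_b": 0x02, "storage_1": 0xE0, "storage_2": 0xE1}

def arbitrate(requesting):
    # Among the devices asking to send, the one with the lowest AL_PA
    # wins the loop; every other device must wait -- only one
    # transmitter at a time, so the loop's bandwidth is shared.
    winner = min(requesting, key=lambda dev: al_pa[dev])
    for dev in sorted(requesting):
        state = "transmits" if dev == winner else "waits"
        print(f"{dev} (AL_PA {al_pa[dev]:#04x}) {state}")

arbitrate({"host_a", "storage_1", "storage_2"})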

Conclusion

As you can see, Fibre Channel uses some unique hardware, but the technology does share some similarities with Ethernet, at least as far as networking topologies are concerned.
 

Tuesday, January 29, 2013

Cloud Service Assurance Process

 
This post discusses the service assurance aspect of services provisioned through the cloud service fulfillment process, covering the fundamental service assurance processes and corresponding best practices that are typically employed to successfully manage the services delivered to customers.
 
 
Service assurance is a combination of fault and performance management. Cloud service assurance requires fault and performance management of the cloud infrastructure -- network, compute, and storage -- in addition to the applications that run on the services platform itself. Reporting an incident to the appropriate management system for technical remediation, and onward to informational dashboards in near real time for end-user notification, is a fundamental requirement. It is also essential that the cloud service fulfillment processes be closely coupled with the cloud assurance processes, since cloud customers may only be using the infrastructure for a defined period of time. For example, customers may use a cloud service as a test/dev environment only while they are developing or testing a service. Previously, customers stayed on the network for very long periods of time, so spinning services up and down, with the appropriate assurance monitoring and management, did not happen as part of the service lifecycle.
 
Cloud service providers must resolve service-related problems quickly to minimize outages and revenue loss. Cloud assurance solutions give the customer and the operational team maximum visibility into service performance and cost-effective management of SLAs coupled with service impact analysis.
 

Cloud End-to-End Service Assurance Flow

Figure 1 below shows the typical end-to-end service assurance steps:
Figure 1: Cloud Assurance Flow – data collection to display
Unlike the cloud service fulfillment cycle, which starts with the customer and ends with a provisioning task on the service platform resources, service assurance events occur in the resources themselves, affecting services with variable levels of degradation. Eventually the customer is made aware of the service degradation via notification, unless it is fixed proactively, before the customer perceives it.
The steps illustrated in Figure 1 are explained below:
 
 
  1. The incident/fault and performance events are sent by the infrastructure devices, which comprise network devices, compute/server platforms, storage devices, and applications.
  2. The domain managers collect these events from infrastructure devices through the CLI, Simple Network Management Protocol (SNMP) polling, or traps sent by the infrastructure devices. Typically separate domain managers are used for the network, compute, and storage devices.
  3. The domain managers receive the messages from the infrastructure devices and de-duplicate/filter the events.
  4. The device events received by the domain managers carry device information only and do not contain any service information. The domain manager enriches the events by looking into the service catalog. In other words, the events are mapped to services to determine which services are affected by the events in the infrastructure. It is also possible to map events to customers and determine which customers are impacted (see the sketch after this list).
  5. The service impact is assessed and the information is shown on a dashboard for the operations personnel, helping them prioritize the remediation efforts.
  6. The service impact information is forwarded to other locations, including mobile devices.
  7. The service impact information is sent to the service desk (SD).
  8. The service impact information is sent to the service-level manager (SLM).
  9. The SD proactively manages customers, informs them of service impacts, and keeps them up to date on the remediation efforts. Also, the service impact is checked against the SLA to determine any SLA violations and business impacts.
  10. The events, SLA violations, trouble tickets, and so on are displayed on a portal for various consumers, such as customers, operations people, suppliers, and business managers.
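
As a concrete illustration of step 4, here is a simplified Python sketch of event enrichment. The event fields and catalog structure are invented for illustration; a real domain manager would query a live service catalog rather than a hard-coded dictionary.

# Hypothetical service catalog mapping infrastructure devices to the
# services (and customers) that depend on them.
service_catalog = {
    "switch-07": {"service": "web-hosting-gold", "customers": ["acme", "globex"]},
    "storage-03": {"service": "backup-standard", "customers": ["initech"]},
}

def enrich(event):
    # Map a device-level event to the affected service and customers.
    entry = service_catalog.get(event["device"])
    if entry is None:
        return {**event, "service": "unknown", "customers": []}
    return {**event, "service": entry["service"], "customers": entry["customers"]}

raw_event = {"device": "switch-07", "severity": "critical", "detail": "port 12 down"}
print(enrich(raw_event))
# The enriched event now carries the service impact, ready for the
# dashboard (step 5) and the service desk (step 7).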

Best Practices for Cloud Service Assurance using ITILv3 Principles

 
ITILv3 provides the IT life cycle processes: service strategy, service design, service transition, service operate, and Continuous Service Improvement (CSI). Applying these processes is a good way to establish service assurance processes for data center virtualization and cloud management. Figure 2 shows the cloud service assurance flow based on ITILv3:
Figure 2: Cloud service provisioning flow based on ITIL V3 principles

Mapping ITILv3 phases to ‘Infrastructure as a Service’ requirements

 
 
Figure 2 also shows some of the items that need to be considered in each of the five phases of the cloud service life cycle for cloud assurance. More details are provided in the following sections along the lines of ITIL V3 phases.
1. Service Strategy: In the service strategy phase, consider the following topics for cloud assurance:
  • Architecture Assessment
  • Business Requirements
  • Demand Management
 
 
2. Service Design:
The following items should be considered, taking input from the service strategy phase:
  • Service Catalog Management
  • Service Level Management
  • Availability Management
  • Capacity Management
  • Incident Management
  • Problem Management
  • Supplier Management
  • Information Security Management
  • Service Continuity Management
 
 
3. Service Transition:
Consider the following items in this phase:
  • Change Management
  • Configuration and setting up all the Assurance Systems
  • Service asset management in the CMS (CMDB)
  • Migration from current state to target state (people, processes, products, and partners)
  • Staging and Validation of all systems and processes for assurance
 
 
4. Service Operate: The cloud operate phase is where the service provider takes over the management of cloud operations from the equipment vendors, system integrators, and partners, and monitors and audits the service using the monitoring systems (FCAPS) to ensure the SLAs are met. Consider the following items in the cloud operate phase:
  • Service Desk (function)
  • Incident Management
  • Problem Management
  • Event Management
  • Other IT day-to-day activities
 
 
5. Cloud CSI Phase: Continuous Service Improvement (also referred to as Optimization) for cloud assurance involves improving operations by adding best practices to the processes, tools, and configurations.
  • Auditing the configurations against best practices and changing the configurations as appropriate
  • Fine tuning the tools and processes based on best practices
  • Adding new products and services, and performing assessments to ensure the new services can be incorporated into the current environment. If not, determine the changes required and go through the cloud life cycle, starting from cloud strategy.
 

Monday, January 28, 2013

Storage 101 - Part 5

 
 
This article continues the discussion of Fibre Channel SANs by discussing Fibre Channel topologies and ports.

In the previous article in this series, I spent some time talking about Fibre Channel switches and how they work together to form a switched fabric. Now I want to turn my attention to the individual switch ports.

One of the big ways that Fibre Channel differs from Ethernet is that not all switch ports are created equal. In an Ethernet environment, all of the Ethernet ports are more or less identical. Sure, some Ethernet ports support a higher throughput than others, and you might occasionally encounter an uplink port that is designed to route traffic between switches, but in this day and age most Ethernet ports are auto-sensing. This means that Ethernet ports are more or less universal, plug-and-play network ports.

The same cannot be said for Fibre Channel. There are a wide variety of Fibre Channel switch port types. The most common type of Fibre Channel switch port that you will encounter is known as an N_port, which is also known as a Node Port. An N_port is simply a basic switch port that can exist on a host or on a storage device.

Node ports exist on hosts and on storage devices, but in a SAN environment traffic from a host almost always passes through a switch before it gets to a storage device. Fibre Channel switches do not use N_ports. If you need to connect an N_port device to a switch then the connection is made through an F_port (known as a Fabric Port).

Another common port type that you need to know about is an E_port. An E_port is an Expansion Port. Expansion ports are used to connect Fibre Channel switches together, much in the same way that Ethernet switches can be connected to one another. In Fibre Channel, an E_port link between two switches is known as an ISL, or Inter-Switch Link.

One last port type that I want to quickly mention before I move on is a D_port. A D_port is a diagnostic port. As the name implies, this port is used solely for switch troubleshooting.
It is worth noting that these are the most basic types of ports that are used in Fibre Channel SANs. There are about half a dozen more standard port types that you might occasionally encounter and some of the vendors also define their own proprietary port types. For example, some Brocade Fibre Channel switches offer a U_port, or Universal Port.
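
For quick reference, here is the handful of port types mentioned so far, gathered into a small Python mapping. The one-line descriptions simply paraphrase this article, and the U_port entry is Brocade-specific.

fc_port_types = {
    "N_port": "Node port, found on a host or storage device",
    "F_port": "Fabric port; connects an N_port device to a switch",
    "E_port": "Expansion port; links two switches to form an ISL",
    "D_port": "Diagnostic port, used solely for switch troubleshooting",
    "U_port": "Universal port (Brocade proprietary)",
}

for port, role in fc_port_types.items():
    print(f"{port:7s} - {role}")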

Right about now I’m sure that many of you must be wondering why there are so many additional port types and why I haven’t addressed each one individually. In a Fibre Channel SAN the types of ports that are used depend largely on the Fibre Channel topology that is being used.

Fibre Channel topologies loosely mimic the topologies that are used by other types of networks. The most common topology is the switched fabric topology. The switched fabric topology uses the same basic layout as most Ethernet networks. Ethernet networks use a lot of different names for this topology. It is often referred to as a hub and spoke topology or a star topology.

Regardless of the name, the basic idea behind this topology is that each device is connected to a central switch. In the case of Ethernet, each network node is connected to an Ethernet switch. In the case of Fibre Channel, network nodes and storage devices are all connected to a switch. The switch port types that I discussed earlier are all found in switched fabric topologies.

It is important to understand that describing a switched fabric topology as a star or as a hub and spoke topology is a bit of an oversimplification. When you describe a network topology as a star or as a hub and spoke, the assumption is that there is a central switch and that each device on the network ties into that switch.

Although this type of design is a perfectly viable option for Fibre Channel networks, most Fibre Channel networks use multiple switches that are joined together through E_ports. Oftentimes a single switch lacks a sufficient number of ports to accommodate all of the network devices, so multiple switches must be joined together in order to provide enough ports for the devices.

Another reason why the star or hub and spoke model is a bit of an oversimplification is that in a SAN environment it is important to provide full redundancy. That way, a switch failure cannot bring down the entire SAN. In some ways a redundant switched fabric could still be thought of as a star or a hub and spoke topology; it's just that the redundancy requirement creates multiple "parallel stars".
To give you a more concrete idea of what I am talking about, check out the diagrams below. Figure A shows a basic switched fabric that completely adheres to the star or hub and spoke topology. In this figure you can see that hosts and storage devices are all linked to a central Fibre Channel switch.


Figure A: This is the most basic example of a switched fabric.

In contrast, the Fibre Channel network shown in Figure B uses the same basic topology, but with redundancy. The biggest difference between the two diagrams is that in Figure B, each host and each storage device is equipped with two Host Bus Adapters. There is also a second switch present. Each host and storage device maintains two separate Fibre Channel connections -- one connection to each switch. This eliminates any single point of failure on the SAN. If a switch fails, storage traffic is simply routed through the remaining switch. Likewise, this design is also immune to the failure of a host bus adapter or a Fibre Channel cable, because the network is fully redundant. (I'll sketch this failover logic in code just before the conclusion.)


Figure B: This is a switched fabric with redundancy.

As you look at the diagram above, you will notice that there is no connection between the two Fibre Channel switches. If this were a real Fibre Channel network then the switches would in all likelihood be equipped with expansion ports (E_ports). Even so, using them is unnecessary in this situation. Remember, our goal is to provide redundancy, not just to increase the number of available F_ports.
In a larger SAN there would typically be many more switches than what is shown in the diagram. That's because you would typically use expansion ports to provide greater capacity while continuing to provide redundancy. To see how this works, check out Figure C.

Figure C: This is a redundant multi-switch fabric.

In the figure above I have omitted all but one node and one storage device for the sake of clarity. Even so, the diagram uses the same basic design as the diagram shown in Figure B. Each node and each storage device uses multiple host bus adapters to connect to two redundant switches.

The thing that makes this diagram different is that we have made use of the switch’s expansion ports and essentially formed two parallel, redundant networks. Each side of the network uses a series of switches that are linked together through their expansion ports to provide the needed capacity, but the two networks are not joined to one another at the switch level.
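
As promised, here is what the failover logic enabled by these redundant designs conceptually looks like, as a minimal Python sketch. It is illustrative only: in practice the host's multipathing software (e.g. MPIO) handles this, and all names here are invented.

# Each host HBA is cabled to a different switch, giving two fully
# independent paths from the host to the storage device.
paths = [
    {"hba": "hba0", "switch": "fc-switch-a", "healthy": True},
    {"hba": "hba1", "switch": "fc-switch-b", "healthy": True},
]

def pick_path():
    # Return the first healthy path; because the paths share no
    # components, one switch, cable, or HBA failure never strands the host.
    for path in paths:
        if path["healthy"]:
            return path
    raise RuntimeError("no healthy path to storage")

paths[0]["healthy"] = False   # simulate the failure of switch A
print(pick_path())            # traffic simply moves to hba1 / switch B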

Conclusion

Although switched fabric is the most common Fibre Channel topology, it is not the only topology that can be used in a SAN. In the next article in this series, I will show you the point to point topology and the ring topology. As I do, I will discuss the unique port requirements for Fibre Channel rings.
 

Wireless Basics


Way back in the early days of Wireless LAN (WLAN) development, there were a whole lot of folks trying different types of technologies to get wireless LAN communications to work. Eventually some clear winners started to rise to the top, and it became apparent that interoperability between these technologies needed to exist. It was this desire for interoperability that eventually led to the creation of the wireless LAN standards that we have today.
 
But what are standards? Standards let companies that build wireless networking devices know that their equipment will work with other manufacturers' wireless equipment. These standards are known today as IEEE 802.11a, 802.11b, 802.11g, and the most recently ratified 802.11n.
 

Jamming out to the Band

 
2.4 Gigahertz Radio Band
 
I'm a big music fan and love a great band, but today we're talking about the radio frequency band. The 2.4 GHz band was thought by many to be the best band for commercial inroads to consumers. The problem was, and still is, that it's a very crowded RF band, used by many home appliances, including your microwave oven!
 
IEEE 802.11 - The Beginning of Wireless Networking Standards
 
At the beginning of any new technology, inventors and technology developers tend to each take a different view on the specific technology at hand. Wireless networking was no exception. At that time, players within the industry were looking at all types of methods of transferring LAN data without the use of wires. As things evolved, people realized their technologies weren't compatible with each other. Again, standards were needed to help move the technology to consumers and businesses.
To be honest, standards don't always help get the technology to consumers quickly, but they do ensure interoperability once it reaches the consumer.
Eventually, certain functions, features, and terminology that were common among the manufacturers were taken into the standard. It essentially became a popularity contest of features and terminology.

And the winners of the Wireless Networking popularity contest are:

  • Access Point (AP)
  • Basic Service Set (BSS)
  • Extended Service Set (ESS)
  • SSID (Service Set Identifier)
  • WEP (Wired Equivalent Privacy)
  • Ad hoc Networking
  • Infrastructure Networking
 
These are just a few terms that made it into the standard and all of these date back to the first 802.11 wireless networks.
 
The IEEE 802.11b Standard

IEEE 802.11b was the first major upgrade to the WLAN specification. It was exciting news because the new standard ratified wireless speeds up to 11 Mbps. The typical range for an IEEE 802.11b wireless network is about 100 feet (30 meters), depending on the environment.
Though the radio band was still overcrowded, the 802.11b standard provided much needed relief for places not accessible by wire AND at a decent usable speed.
 
The IEEE 802.11g Standard

 

 
The next major improvement in WLAN networking in the 2.4 GHz RF band was IEEE 802.11g. The "g" standard provided even more network speed, allowing up to 54 Mbps, AND was an easy upgrade for users of 802.11b because the new radios were backward compatible.
 
802.11g was able to achieve these cool new speeds through the use of Orthogonal Frequency Division Multiplexing (OFDM). Multiplexing is a technique that takes multiple pieces of data and combines them into a single unit to be modulated and sent over the same radio channel. OFDM takes the data that needs to be transmitted and spreads it across 52 sub-carriers, which are multiplexed together into a single data stream. Because there are 52 sub-carriers, each one can run at a slower symbol rate, providing better reliability and greater range while still delivering more data overall.
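
As a back-of-the-envelope check, here is the arithmetic behind the 54 Mbps top rate, in Python. It uses the standard 802.11a/g OFDM parameters: of the 52 sub-carriers, 48 carry data and 4 are pilot tones.

data_subcarriers = 48      # 52 sub-carriers total, minus 4 pilot tones
bits_per_subcarrier = 6    # 64-QAM carries 6 coded bits per sub-carrier
coding_rate = 3 / 4        # 3 data bits kept for every 4 coded bits
symbol_duration = 4e-6     # one OFDM symbol lasts 4 microseconds

bits_per_symbol = data_subcarriers * bits_per_subcarrier * coding_rate
rate_mbps = bits_per_symbol / symbol_duration / 1e6
print(rate_mbps)           # -> 54.0, the 802.11g/a top rate in Mbps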
 
5 Gigahertz Radio Band
 
While IEEE 802.11b was gaining wide acceptance in the 2.4 Gigahertz band, 802.11a was quietly getting some use in the 5 Gigahertz RF range.
 
IEEE 802.11a was 54 Mbps before 802.11g was even born
 
One of the big advantages that 802.11a had over 802.11b was that its speed was 54 Mbps while operating in the less cluttered 5 GHz RF band. It also used OFDM as the modulation technique, so why didn't it take off like 802.11b did?
 
The biggest problem was that it wasn't compatible with devices that ran in the 2.4 GHz band, namely 802.11b and g. And it tended to be a bit more expensive (at the time).

Today's Wireless Networking technologies allow you to operate in both frequencies.

IEEE 802.11n
 
IEEE 802.11n uses both OFDM and MIMO (multiple-in, multiple-out) RF techniques. This allows for a maximum throughput of 600 Mbps using four MIMO streams, or 150 Mbps using a single stream. It operates in both RF bands, 2.4 GHz and 5 GHz, and is backward compatible with the other standards.
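
The 600 Mbps headline number is simply the single-stream maximum multiplied across four spatial streams. A quick sanity check using the standard 802.11n parameters for that top rate (a 40 MHz channel, 64-QAM at rate-5/6 coding, and the short guard interval):

data_subcarriers = 108     # a 40 MHz 802.11n channel carries 108 data sub-carriers
bits_per_subcarrier = 6    # 64-QAM
coding_rate = 5 / 6
symbol_duration = 3.6e-6   # with the short (400 ns) guard interval

per_stream = data_subcarriers * bits_per_subcarrier * coding_rate / symbol_duration / 1e6
print(per_stream)          # -> 150.0 Mbps per spatial stream
print(per_stream * 4)      # -> 600.0 Mbps with four MIMO streams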
 
The biggest problem with 802.11n is that it has only recently been ratified and many device manufacturers have not released firmware updates for the new standard.
 
All in all, WLAN technologies have come a long way and are getting better every day. They're more secure, faster, cheaper, and more scalable. It's easy to see that every day we are becoming more and more connected, and wireless networking has only just begun to become part of the overall network infrastructure.
 

Sunday, January 27, 2013

Cloud Service Fulfillment Process

Courtesy - Venkata Josyula, Malcolm Orr, Greg Page

This Chalk Talk from Greg Page and Venkata (Josh) Josyula discusses the fundamental processes and corresponding best practices that are typically employed to successfully fulfill a customer's "service request" for IT infrastructure in a real-time environment.


Cloud End-to-End Service Provisioning Flow



Figure 1 below shows the typical end-to-end provisioning steps for a customer interacting with a Cloud Service Provider (CSP) through a self-service portal and ordering a service. When appropriate, the customer would receive a confirmation from the CSP that the service request is fulfilled and is ready for use (typically via the portal in addition to a confirmation email).



Figure 1: Typical End-to-End IaaS provisioning steps




The steps illustrated in Figure 1 are explained as follows:


  1. The customer logs on to the portal and is authenticated by the identity management system.
  2. Based on the customer’s entitlement, the portal extracts a subset of services that the user can order from the service catalogue and constructs a ‘request catalogue’.
  3. The customer selects a service, e.g. a virtualized web server. Associated with this service is a set of technical requirements, such as the amount of vRAM, vCPU, etc., in addition to business requirements such as high availability or SLA requirements.
  4. The portal now raises a change request with the service desk which, when approved, will create a service instance and notify the portal. In most cases the approval process is automatic and happens quickly. The service request state is maintained in the service desk and can be queried by the customer through the self-service portal.
  5. The service desk raises a request with the IT process automation tool to fulfill the service. The orchestration tool extracts the technical service information from the service catalogue and decomposes the service into individual parts, such as compute resource configuration, network configuration, and so on (see the sketch after this list).
  6. In the case of our virtualized web server running on Cisco UCS (Unified Computing System), we have three service parts: the server part, the network part, and the infrastructure part. The provisioning process is initiated.
  7. The virtual machine running on the blade or server is provisioned using the server/compute domain manager.

8 & 9. The network, including firewalls and load balancers, as well as the storage, is provisioned by the network, network services, and storage domain managers.

10-13. Charging is initiated for billing/chargeback, the change management case is closed, and the customer is notified accordingly.
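
To make steps 5-9 more concrete, here is a highly simplified Python sketch of how an orchestrator might decompose a catalogue service into parts and dispatch each part to its domain manager. All names and structures here are invented for illustration; real orchestration tools are far more elaborate.

# Hypothetical technical definition pulled from the service catalogue.
catalog_entry = {
    "service": "virtualized-web-server",
    "parts": {
        "compute": {"vcpu": 2, "vram_gb": 4},
        "network": {"firewall": True, "load_balancer": True},
        "storage": {"size_gb": 100, "tier": "gold"},
    },
}

# One (stub) domain manager per infrastructure domain, as in steps 7-9.
domain_managers = {
    "compute": lambda spec: print(f"compute DM: provisioning VM {spec}"),
    "network": lambda spec: print(f"network DM: configuring {spec}"),
    "storage": lambda spec: print(f"storage DM: allocating {spec}"),
}

def fulfill(entry):
    # Decompose the service into parts and hand each to its domain manager.
    for domain, spec in entry["parts"].items():
        domain_managers[domain](spec)
    print(f"service '{entry['service']}' provisioned; notify the customer")

fulfill(catalog_entry)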




Best Practices for Cloud Service Fulfillment using ITILv3 Principles





ITILv3 provides the IT life cycle processes: service strategy, service design, service transition, service operate, and Continuous Service Improvement (CSI). Applying these processes is a good way to establish service provisioning processes for data center virtualization and cloud provisioning. Figure 2 shows the cloud service provisioning flow based on ITILv3.



Figure 2: Cloud service provisioning flow based on ITIL V3 principles





Figure 2 shows some of the items that need to be considered in each of the five phases of the cloud service life cycle to provision a cloud. Data center virtualization and cloud-computing technologies can have a significant impact on IT service delivery, cost, and continuity of services; but as with any transformative technology, adoption is greatly influenced by up-front preparedness and strategy. IT governance can help CIOs become agents of change and active partners in laying out the company's strategy.


The IT organization's success factors include the following:

  • Technology decisions driven by a business strategy (not the other way around)
  • Sustaining the IT activities as efficiently as possible
  • Speed to market
  • Technology architecture aligning with the business initiatives.

Mapping ITILv3 phases to ‘Infrastructure as a Service’ requirements


1. Service Strategy: With the preceding principles in mind, the following high-level tasks are done during the service strategy phase:
  • Cloud architecture assessment
  • Operations (people, processes, products, and partners [the 4Ps])
  • Demand management
  • Financial management or value creation (ROI)
  • Risk management

2. Service design: The following items should be considered, taking input from the service strategy phase:
  • Service catalogue management
  • Orchestration
  • Security design
  • Network configuration and change management (NCCM)
  • Service-level agreements (SLA)
  • Billing and chargeback

3. Service Transition: The following items are considered in this phase:
  • Change management
  • Service asset and configuration management (maintained in the CMS/CMDB)
  • Orchestration and integration
  • Migration, staging and validation




4. Service Operate: All ITILv3 phases are important, but this phase draws the most attention because 60%-70% of the IT budget is spent dealing with day-to-day operations. In this phase, the service provider monitors and audits the service to ensure that the SLAs are met. The following items are considered in the service operate phase:

  • Service desk (function)
  • Incident management
  • Problem management
  • Service fulfillment
  • Event management
  • Access management



5. Cloud CSI Phase: This phase is, as its name implies, a proactive methodology for improving the IT organization's activities using best practices, as opposed to reactive responses. CSI interacts with all the aforementioned phases, improving each of them through feedback loops. Specific activities that can be quickly adopted in this phase include:

  • Audit the configurations in all the infrastructure devices. The inventory and collection of data from all devices must be done to ensure the manageability of the cloud infrastructure.
  • Identify infrastructure that is end of life (EOL) or end of service (EOS).
  • Fine-tune the management tools and processes based on best practices.




Finally, note that adding new products and services requires assessment to ensure that the new services can be incorporated into the current operating environment without sacrificing the quality of service to customers.
 

Saturday, January 26, 2013

Network Management Commands - Must Know


1. Ping - the basic and most commonly used command for testing the physical network

ping 192.168.0.1 -t  (the -t parameter keeps pinging until the user interrupts the test)
Ping is the most commonly used network management command; for convenience, some router products, such as the Sea Spider router, build this feature in.

2. View DNS, IP, and MAC information
A. Win98: winipcfg
B. Win2000 and later: ipconfig /all
C. nslookup - query DNS. For example (using a DNS server in Hebei):
C:> nslookup
Default Server: ns.hesjptt.net.cn
Address: 202.99.160.68
> server 202.99.41.2  (changes the DNS server used for lookups to 202.99.41.2)
> pop.pcpop.com
Server: ns.hesjptt.net.cn
Address: 202.99.160.68
Non-authoritative answer:
Name: pop.pcpop.com
Address: 202.99.160.212

3. Network messenger service (a frequently asked topic)
net send <computer name | IP | *> <message>  (* broadcasts; note that messages cannot cross network segments)
net stop messenger  (stops the Messenger service; this can also be changed under Control Panel > Services)
net start messenger  (starts the Messenger service)

4. Probe another computer's name, workgroup, domain, and user name
ping -a <IP> -t  (shows only the NetBIOS name)
nbtstat -a 192.168.1.1  (shows everything)

5. netstat -a  (shows all open ports on your computer)
netstat -s -e  (displays more detailed network statistics, including TCP, UDP, ICMP, and IP)

6. Probe the ARP binding list (dynamic and static) of all computers connected to yours, displaying their IP and MAC addresses:
arp -a


7. Bind an IP address to a MAC address on the proxy server side to stop IP address theft within the local area network:

arp -s 192.168.10.59 00-50-ff-6c-08-75
To delete a NIC's IP-to-MAC binding:
arp -d <NIC IP>


8. Hide your computer from the Network Neighborhood (so others cannot see you!)
net config server /hidden:yes
net config server /hidden:no  (makes it visible again)



9. Several net commands
A. net view - displays the list of servers in the current workgroup; used without options, it displays the computers in the current domain or network.
For example, to view the shared resources of a given IP:
C:> net view 192.168.10.8
Shared resources at 192.168.10.8
Share name   Type   Used as   Comment
--------------------------------------
Website      Disk
The command completed successfully.
B. net user - view the list of user accounts on the computer
C. net use - view network connections
For example: net use z: \\192.168.10.8\movie  (maps the 'movie' shared directory of that IP to the local Z: drive)
D. net session - lists connection records
For example:
C:> net session
Computer              User name   Client type         Opens   Idle time
-------------------------------------------------------------------------
\\192.168.10.110      ROME        Windows 2000 2195   0       00:03:12
\\192.168.10.51       ROME        Windows 2000 2195   0       00:00:39
The command completed successfully.


10. Trace route commands
A. tracert pop.pcpop.com  (displays the route to the destination)
B. pathping pop.pcpop.com  (in addition to displaying the route, spends about 325 seconds analyzing and calculating the percentage of lost packets)
To make network management more convenient, the Sea Spider router builds in common management tools: ping tests, trace route (tracert), subnet calculation (netcalc), whois queries, IP location lookups, and domain name queries (nslookup). With these common tools, network management is even easier.
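
For administrators who want to script these checks rather than run them one at a time, here is a minimal Python sketch that sweeps a range of addresses with the same ping command discussed above. It assumes a Windows host (the -n and -w flags below are Windows ones); on Linux or macOS, adjust the flags accordingly.

import subprocess

def ping_sweep(prefix="192.168.0.", start=1, end=10):
    # Ping each address once and report which hosts respond.
    alive = []
    for host in range(start, end + 1):
        addr = prefix + str(host)
        # '-n 1' sends a single echo request; '-w 500' waits up to 500 ms
        result = subprocess.run(["ping", "-n", "1", "-w", "500", addr],
                                capture_output=True)
        if result.returncode == 0:
            alive.append(addr)
    return alive

print(ping_sweep())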
 

Thursday, January 24, 2013

Enterprise Storage Heats Up Again!


Once you spend any time inside the storage marketplace, you'll come to appreciate there are many segments and subsegments.
The need to store information is ubiquitous -- the approaches are not.

Sit down and attempt to segment the storage marketplace, and you'll quickly end up with a fairly complicated model.
One familiar category is what is imprecisely called "enterprise storage" or sometimes "tier 1" -- the storage that supports the enterprise's most critical applications.

These are no-compromise environments that demand the best in predictability: performance, availability, recoverability and so on.

A subsegment of this market appears to be heating up very quickly: the entry-level enterprise-class storage array.

EMC has its very successful VMAX 10K. Hitachi has recently offered the HUS VM. And HP has invested heavily in enhancing 3PAR in this segment. IBM, NetApp, et al. really don't play in this game, despite what their aspirational marketing might say.

So -- one has to ask -- why is this particular subsegment of a rather familiar storage landscape becoming popular all at once?

And -- of course -- what is EMC doing about it?


What Makes Enterprise Storage Different

Since no one governs the use of the term, you'll see it applied to all sorts of storage products. I'm a bit more demanding: just because someone uses a storage product in an enterprise doesn't make it enterprise storage in my book.
Think consolidation of many demanding enterprise workloads, with an iron-clad requirement for predictability. Superior performance, even under duress. The ability to withstand multiple component failures. Sophisticated replication at scale. Advanced management capabilities. And so on.

Historically, these mission-critical applications ran on dedicated physical servers, but that's changed as well -- the preference is quickly becoming virtual-first, which introduces even more demanding requirements into the environment.

From an architectural perspective, it's hard to claim to be an enterprise-class storage array if you only have two storage controllers. Why? If the first one fails over to its twin, you're looking at a 50% performance degradation. While that might be acceptable in many environments, it's unacceptable in the most demanding. So let's presume multi-controller designs as at least one of the defining characteristics.

So why are smaller (and more cost-effective!) versions of these multi-controller arrays becoming so darn popular?

The Challenges Are The Same Everywhere

If you're a bank, or a telco, or any other large-scale operation, there are a handful of applications that keep you in business. Bad IT days are to be avoided at all reasonable costs. If you're operating in, say, one of the western economies, you're probably operating at a decent scale -- and can justify full-blown implementations of these storage architectures.

But let's say you're doing business in China, or Brazil, or the Middle East. The business requirements are still the same, but you probably have a lot less capacity to deal with. And, indeed, there's surprisingly strong demand for these entry-level enterprise-class arrays outside the western economies.
But there's more at play here ...

A Longer Term Perspective Emerges

I've seen storage buying habits evolve now for almost 20 years, and -- not surprisingly -- a lot of storage gets sold in response to a specific requirement.

Here's what we need for SAP. Here's what we need for email. Sure, there's some opportunistic capacity sharing going on, but it's not the main thought.
Doing storage this way means that your landscape will grow willy-nilly over the years, and at some point there's a forcing function to consider a more consolidated approach, which motivates the customer to think in terms of a smaller number of bigger arrays.

Of course, going to that sort of consolidated storage environment isn't the easiest IT project in the world ...
I think there are now enough intelligent enterprise IT buyers out there who have seen this movie before, and want to avoid it if possible.

As a result, they're willing to pay a small premium for an enterprise-class storage architecture that starts at reasonable size, but can expand to accommodate dramatic growth in application requirements, number of applications, performance, protection, availability, etc.
They want to start out on the right foot, so to speak.

Now, we here at EMC don't want to be caught napping when the market moves, so we targeted this specific space last year with the VMAX 10K. It’s been surprisingly successful. But, since then, both Hitachi and HP have gotten into the game.

So we need to bring more game.

Introducing The New VMAX 10K
Not one to rest on our laurels, there's now a new member of the VMAX family to consider -- a significantly enhanced version of the 10K.

More performance. More functionality. Aggressive price. And it's a VMAX.

I think it's going to do extremely well in this segment.

VMAX 10K At A Glance
Having been around this business for so long, I continue to be amazed at just how far these technologies have evolved.
Case in point: the "baby" VMAX 10K array is fairly impressive in its own right:

  • one to four storage engines (that's a total of eight controllers)
  • up to 512 GB of storage cache
  • up to 1,560 drives and 64 ports, max 1.5 PB usable capacity
  • can start as small as a single storage engine and 24 drives
Plus, of course, all the rich software that's integral to the VMAX: Enginuity, SRDF, FAST VP, et al.
So, with that as our baseline -- what's new?

2X Performance Bump

Nice, but where does this particular claim come from?
A bunch of little things, and a few big ones -- the use of 2.8 GHz Intel Westmere Xeon processors, which now deliver 12 cores @ 2.8 GHz per storage engine vs. the previous 8 cores @ 2.4 GHz. The 10K also now uses the same internal interconnect as its big brother, the VMAX 40K.

Translated, this means roughly 2x the back-end IOPS, and a useful 30% bump on front-end bandwidth.

Of course, the *exact* performance improvement a specific customer may see is dependent on all sorts of factors, but just about everyone should see something noticeable.
More performance also means more workloads that can be consolidated, larger effective capacities can be driven harder, more advanced functionality (e.g. tiering, replication) can be used with less impact, and so on.
More performance is always a good thing.

Federated Tiered Storage Improvements

Over the last few years, many enterprise storage arrays have learned a neat trick: they can connect to existing block storage arrays and "front end" them.
This can be especially useful if you've got the typical 'stranded asset' problem -- the old storage becomes somewhat more useful: it can benefit from the caching done by the newer array, it can be used with all the cool software features of the new enterprise array, and so on.
Both EMC and Hitachi have done this for a while. As far as I know, HP doesn't do this with 3PAR for architectural reasons.

Not surprisingly, EMC does this particular trick a bit better than Hitachi:

- EMC implements an error-checking protocol on data transfers between the VMAX and the older storage
- EMC allows the external capacity to play at any tier as part of FAST VP
This last bit is more useful than it might sound. While there's plenty of need to recycle older, slower storage as a capacity tier, we're guessing that people will eventually want to consolidate newer, faster arrays (maybe flash-based?) as part of a larger, consolidated tiering environment.

And, while we're on FTS (federated tiered storage), don't underestimate the power of bringing newer features to older storage: things like VPLEX and RecoverPoint just work, of course, but so do snaps, clones, remote replicas, storage management tools, and so on. I think it's an under-utilized capability in most shops.

Data At Rest Encryption (D@RE)

Another small bit of magic that the VMAX team has come up with is their implementation of data encryption. It's about as seamless and transparent as you can imagine.

Because it uses an encryption engine on the controller (vs. on the drive), it can encrypt any storage type, including externally attached storage arrays. There is *no* measurable performance impact. And it's now integrated with the RSA Key Manager for a somewhat easier deployment.

If someone comes to you and says "hey, we should be encrypting all of our drives", it's sort of a turn-it-on-and-forget-about-it situation for most customers. If you think you're going to need the encryption feature, it should be specified when you order the product -- it can't be easily added later.

Data Compression For Old Data

In this release of Enginuity, we've got the first implementation of yet another efficiency technology to put into play -- in-place compression of stale data.

The customer sets a threshold (e.g. not accessed in the last 60 days), and the VMAX compresses the qualifying data. Start accessing the data again, and it expands. While this doesn't meet the ultimate goal of real-time, on-the-fly dynamic space reduction, it's still very useful in most production environments.
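
The policy itself is easy to model. Here is an illustrative Python sketch of the threshold-then-compress idea -- not EMC's implementation, which happens inside the array, but the same logic applied to ordinary files using their last access times.

import os, time, zlib

STALE_AFTER = 60 * 24 * 60 * 60   # 60 days, expressed in seconds

def compress_if_stale(path):
    # Compress a file's contents in memory if it has not been
    # accessed within the staleness threshold.
    age = time.time() - os.path.getatime(path)
    if age < STALE_AFTER:
        return None               # still hot; leave it alone
    with open(path, "rb") as f:
        data = f.read()
    compressed = zlib.compress(data)
    print(f"{path}: {len(data)} -> {len(compressed)} bytes")
    return compressed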

Quick thought -- how much of your storage capacity hasn't been accessed in the last 60 days?

Host I/O Limits

Most storage arrays try to please everyone -- turn around each and every I/O request just as fast as you can.
While that's an admirable goal, it's not ideal for everyone.
For example, we've got a growing cadre of service provider customers who deliver tiered storage services using VMAX.
This new feature helps them articulate *exactly* what customers are getting when they sign up for a particular class of storage service, e.g. 2000 IOPS.

Customers who sign up for a low storage performance level shouldn't get a "free lunch", as there would be less incentive to move up to a higher service class.

More pragmatically, customers and service providers continually want to push these arrays to their max, and these controls help them better utilize aggregated resource limits: CPU, memory, bandwidth. If you can limit what one group of applications can get (no need to overprovision) you can get much more useful work out of a given asset.

Not exactly a typical requirement, but it does exist.
This release of Enginuity implements host I/O controls (defined either by storage group and/or port group) to limit how much bandwidth and/or IOPS are allowed. This function is integrated into Unisphere for VMAX, which makes it straightforward to set up and monitor.
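
Conceptually, a limit like "2000 IOPS for this storage group" behaves like a token bucket. The Python sketch below illustrates that general technique -- it is not Enginuity's actual mechanism, just the standard rate-limiting idea the feature evokes.

import time

class IOPSLimiter:
    # Token-bucket limiter: admit at most `rate` I/Os per second.
    def __init__(self, rate):
        self.rate = rate          # tokens added per second
        self.tokens = rate        # start with a full bucket
        self.last = time.monotonic()

    def admit(self):
        now = time.monotonic()
        self.tokens = min(self.rate, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True           # this I/O proceeds immediately
        return False              # this I/O is delayed, enforcing the tier

limiter = IOPSLimiter(rate=2000)  # the hypothetical 2000 IOPS service class
print(limiter.admit())            # -> True while the bucket has tokens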
There's a longer list of other features in the new version of Enginuity as well -- available on all the VMAXen -- but these are the ones that stood out for me.

Packaging

The VMAX 10K now supports popular third-party racks. Again, a little thing that means a lot to certain customers.
Customers can either use the EMC-supplied enclosures, or go with a decent number of popular racking options for the VMAX 10K components. There's even a nifty Cylon-style VMAX bezel available to preserve the nice looks :)

While we're on physical packaging, the VMAX 10K now supports what's dubbed "system bay dispersion", allowing two enclosures to be separated to aid in weight distribution (these are heavy boxes) or just give you a bit more flexibility as to where everything goes.

There's now support for intermixed 2.5" drives as well as the more familiar 3.5".
There are more than a few IT organizations who've put a lot of thought and effort into their data center physical infrastructure, and they've standardized on racking for all the right reasons. Service providers, in particular, care about this greatly.

The Partner Opportunity
Historically, the VMAX hasn't been what you'd describe as a "partner friendly" product for a variety of reasons.

Well, with the VMAX 10K, that's apparently changed. I was quite pleased to see just how many VMAX 10Ks have been sold as part of partner-led engagements.

For partners, you can see why it's an attractive proposition: the VMAX is a very differentiated offering, it targets customers who are looking for a high-value solution, and there's typically a nice services component that goes with it.
And, of course, the product has a helluva good reputation :)

What All Of This Might Mean
This particular segment of the market (entry-level enterprise-class storage arrays) has recently become popular -- which sort of surprises me.

If I could point to a root cause, I'd suggest it's a maturation of perspective: enterprise storage is much more than just a bunch of disks sitting behind a pair of controllers.

Like any growth area, it draws new competitive entrants. We at EMC have to compete vigorously in each and every segment of the broader storage marketplace. Nothing is easy in the storage market -- there are plenty of hungry companies out there.

If anything, the new VMAX 10K sends a clear message: we're bringing our best game to our customers, each and every day.

 

VMware vCloud Director 5.1 New Features For Starters

 
 
For starters, just check out some of the increases in the scalability:
 
 
Item                          Supported in 5.0    Supported in VCD 5.1 (*)
# of VMs                      20,000              30,000
# of powered on VMs           10,000              10,000
# VMs per vApp                64                  128
# of hosts                    2,000               2,000
# of VCs                      25                  25
# of users                    10,000              10,000
# of Orgs                     10,000              10,000
Max # vApps per org           500                 3,000
# of vDCs                     10,000              10,000
# of Datastores               1,024               1,024
# of consoles                 300                 500
# of vApp Networks            -                   1,000
# of External Networks        -                   512
# of Isolated vDC Networks    -                   2,000
# of Direct vDC Networks      -                   10,000
# of Routed vDC Networks      -                   2,000
# Network Pools               -                   25
# Catalogs                    1,000               10,000
 
(Note: The numbers represented here are 'soft' numbers and do not reflect hard limits that cannot be exceeded.)
You’ll notice that there are some line items here that refer to ‘vDC Networks’. This is a new construct in 5.1, which replaces the organization network concept in previous versions. Organization vDC networks simplify the virtual network topology present in vCloud Director and facilitate more efficient use of resources.
 
That’s not all the networking changes present though! Major enhancements have been introduced with the Edge Gateway. Some highlights include:
 
- The ability to have two different deployment models (compact and full) to provide users a choice over resource consumption and performance.
 
- High availability provided by having a secondary Edge Gateway that can seamlessly take over in the event of a failure of the primary
 
- Multiple interfaces. In previous versions the vShield Edge device supported 2 interfaces. The Edge Gateways in vCloud Director 5.1 now support 10 interfaces and can be connected to multiple external networks.
 
The networking services that are provided out of the box with vCloud Director 5.1 have also been enhanced. DHCP can be provided on isolated networks. NAT services now allow for the specification of SNAT and DNAT rules to provide a finer degree of control. There’s also support for a virtual load balancer that can direct network traffic to a pool of servers using one of several algorithms.
 
Additionally, vCloud Director 5.1 introduces support for VXLAN. This provides ‘virtual wires’ that the cloud administrator can use to define thousands of networks that can be consumed on demand.
 
Providing the ability to have a L2 domain with VXLAN that encompasses multiple clusters gives rise to the need to support the use of multiple clusters within a Provider VDC. This is part of the Elastic VDC feature that has now been extended to support the Allocation Pool resource model, along with the Pay-As-You-Go model.
 
Support for Storage Profiles provides the ability for cloud administrators to quickly provide multiple tiers of storage to the organizations. Previously, to do this, one had to define multiple Provider VDCs. For those who have done this, a feature has also been added to allow for the merger of multiple Provider VDCs into a single object.
 
Numerous changes were also added to increase the usability. Top of the list is the support for snapshots! The UI has also been updated to make it easier for the end user to create new vApps, reset leases, and find items within the catalog. Support for tagging objects with metadata is also provided through the UI as well.
 
I’m sure you’ll agree that this represents a lot of features… And I haven’t even gotten into the API extensibility features
or the support for Single Sign-On (SSO)! For now, if you want more information, I’d suggest reading the What’s New whitepaper here:
 
 

Wednesday, January 23, 2013

Storage 101 - Part 4

Storage 101 - Part 2
Storage 101 - Part 3


This article continues the discussion of Storage Area Networking by discussing Fibre Channel switches.

Introduction


In my previous article in this series, I talked about some of the fabric topologies that are commonly used in Storage Area Networks. In case you missed that particular article, a fabric is essentially either a single switch or a collection of switches that are joined together to form the Storage Area Network. The way in which the switches are connected forms the basis of the topologies that I discussed in the previous article.

Fibre Channel Switches

Technically a SAN can be based on either iSCSI or Fibre Channel, but Fibre Channel SANs are far more common than iSCSI SANs. Fibre Channel SANs make use of Fibre Channel switches.

Before I get too far into a discussion of Fibre Channel switches, I need to explain that although a fabric is defined as one or more switches used to form a storage area network, the fabric and the SAN are not always synonymous with one another. The fabric is the basis of the SAN, but a SAN can consist of multiple fabrics. Typically, multi-fabric SANs are only used in large, enterprise-class organizations.

There are a couple of reasons why an organization might opt to use a multi-fabric SAN. One reason has to do with storage traffic isolation. There might be situations in which an organization needs to isolate certain storage devices from everything else, either for business reasons or because of a regulatory requirement.

Another reason why some organizations might choose to use a multi-fabric SAN is because doing so allows an organization to overcome limitations inherent in Fibre Channel. Like Ethernet, Fibre Channel limits the total number of switches that can be used on a network. Fibre Channel allows for up to 239 switches to be used within a single fabric.

Given this limitation, it is easy to assume that you can get away with building a single-fabric SAN so long as you use fewer than 239 switches. For the most part this idea holds true, but there are some additional limitations that must be taken into consideration with regard to fabric design.

Just as the Fibre Channel specification limits the number of switches that can be used within a fabric, there are also limitations to the total number of switch ports that can be supported. Therefore, if your Fibre Channel fabric is built from large switches with lots of ports then the actual number of switches that you can use will likely be far fewer than the theoretical limit of 239.

Fibre Channel Switching Basics

One of the first things that you need to know about Fibre Channel switches is that not all switches are created equal. Fibre Channel is a networking standard, and every Fibre Channel switch is designed to adhere to that standard. However, many of the larger switch manufacturers incorporate proprietary features into their switches. These proprietary features are not directly supported by the Fibre Channel specification.

That being the case, the functionality that can be achieved within a SAN varies widely depending upon the switches that are used. It is perfectly acceptable to piece a SAN together using Fibre Channel switches from multiple vendors. In fact, doing so is fairly common, simply because of how quickly some vendors offer and then discontinue various models of switches. For example, an organization might purchase a Fibre Channel switch and later decide to expand its SAN by adding an additional switch. By the time the second switch is needed, the vendor that supplied the original switch might have stopped making that model. The organization could end up using a different model of switch from the same vendor, or it might choose to use a different vendor’s switch.

When a fabric contains switches from multiple vendors (such as HP and IBM), the fabric is said to be heterogeneous. Such a fabric is also sometimes referred to as an open fabric. When a SAN consists of one or more open fabrics, each switch’s proprietary features must usually be disabled. This allows one vendor’s Fibre Channel switch to work with another vendor’s switch, since each switch then adheres to the common set of Fibre Channel standards.

The alternative, of course, is to construct a homogeneous fabric, one in which all of the switches are provided by the same vendor. The advantage of constructing a homogeneous fabric is that the switches can operate in native mode, which allows the organization to take full advantage of all of the switches’ proprietary features.
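As a rough model of the native-mode versus open-fabric trade-off, the sketch below (with invented vendor and feature names) treats a fabric's usable feature set as the intersection of what every member switch supports: a single-vendor fabric keeps its proprietary extras, while a mixed fabric drops to the common standards.

```python
# Sketch: why a mixed-vendor (open) fabric falls back to standards-only mode.
STANDARD = {"fabric login", "zoning", "name server"}   # FC-standard features

vendor_features = {
    "VendorA": STANDARD | {"A-trunking", "A-qos"},     # proprietary extras
    "VendorB": STANDARD | {"B-trunking"},
}

def usable_features(switch_vendors):
    # An open fabric can only rely on what EVERY switch supports,
    # which in practice means the common Fibre Channel standards.
    return set.intersection(*(vendor_features[v] for v in set(switch_vendors)))

print(sorted(usable_features(["VendorA", "VendorA"])))  # full native set
print(sorted(usable_features(["VendorA", "VendorB"])))  # standards only
```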

The main disadvantage of a homogeneous fabric is something that I like to call vendor lock. Vendor lock is a situation in which an organization buys its products from a single vendor for so long that it becomes completely dependent upon that vendor. Vendor dependency can lead to inflated pricing, poor customer service, and ongoing sales pressure.

Regardless of which vendor’s switches you choose to use, Fibre Channel switches generally fall into two different categories – Core and Edge.

Core switches are also sometimes called director switches. They are used primarily in situations in which redundancy is essential. Typically, a core switch is built into a rack-mounted chassis that uses a modular design. The reason for the modular design is redundancy: a core switch is generally designed to prevent individual components within the switch from becoming single points of failure.

The other type of switch is called an edge switch. Edge switches tend to have fewer configuration options and less redundancy than core switches. However, some edge switches do have at least some degree of redundancy built in.

It is important to understand that the concepts of core switches and edge switches are not a part of the Fibre Channel specification. Instead, vendors market various models of switches as either core switches or edge switches based on how they intend for a particular switch model to be used. The terms core and edge give customers an easy way to get a basic idea of what they can expect from the switch.

SAN Ports

I plan to talk in detail about switch ports in Part 5 of this series, but for right now I wanted to introduce you to the concept of Inter Switch Linking. Fibre Channel switches can be linked to one another through the use of an Inter Switch Link (ISL). ISLs allow storage traffic to flow from one switch to another.

As you will recall, I spent some time earlier talking about how some vendors build proprietary features into their switches that will only work if you use switches from that vendor. Some of these features come into play with regard to ISLs.

ISLs are a Fibre Channel standard, but some vendors use them in non-standard ways. For example, most switch vendors support a form of ISL aggregation in which multiple ISLs are combined to act as a single, very high-bandwidth logical link. Cisco calls its implementation a PortChannel, whereas Brocade refers to it as trunking. The point is that if you want to use ISL aggregation, you will need to stay consistent with a single vendor for your Fibre Channel switches.
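As an illustration of what aggregation buys you, here is a small Python sketch that models a trunk as a set of member ISLs presenting their combined bandwidth as one logical link; the class, the round-robin distribution, and the 8 Gbit/s figures are all illustrative simplifications, not any vendor's actual API or algorithm.

```python
# Sketch: an aggregated ISL (trunk) presenting its members as one logical link.
from itertools import cycle

class IslTrunk:
    def __init__(self, member_gbps):
        self.member_gbps = list(member_gbps)   # per-ISL bandwidth
        self._rr = cycle(range(len(self.member_gbps)))

    @property
    def aggregate_gbps(self):
        # The trunk advertises the sum of its member links.
        return sum(self.member_gbps)

    def pick_member(self):
        # Simplified round-robin; real switches typically hash frames or
        # exchanges onto members so that in-order delivery is preserved.
        return next(self._rr)

trunk = IslTrunk([8, 8, 8, 8])          # four 8 Gbit/s ISLs
print(trunk.aggregate_gbps)             # 32 -> one logical 32 Gbit/s ISL
print([trunk.pick_member() for _ in range(6)])  # [0, 1, 2, 3, 0, 1]
```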

Conclusion

In this article I have tried to familiarize you with some of the basics of Fibre Channel switches. In Part 5, I plan to talk about Fibre Channel switch ports.

 

Tuesday, January 22, 2013

Cisco Just Found A Brilliant New Way To Beat VMware

 
Cisco just bought a small 1 percent stake in Parallels, best known for its software that lets you run Windows apps on a Mac.

The stake, however, gives Cisco a seat on Parallels' board.
It's an interesting move for Cisco with a lot of implications.
 
The software that lets you run Windows on a Mac, or more generally multiple operating systems on the same computer, is called virtualization.
 
That's what VMware does. And Cisco, once friendly with VMware, is increasingly butting heads with it in the data center.
Although Parallels is best known for its desktop software, the company also has server virtualization technology. VMware's server virtualization software is its bread-and-butter product.
 
Cisco and VMware had been close partners until last summer, with Cisco heavily relying on VMware's software and on storage products from VMware's parent company, EMC. The three of them were so cozy, they actually had a joint venture, VCE. VCE sells hardware and services to data centers and was pretty successful, reportedly selling around $1 billion worth of gear in 2012.
 
Then VMware bought a startup, Nicira, for $1.26 billion last summer. Nicira plays in the field of software-defined networking, which completely changes how companies build networks. It's a disruptive technology that could really hurt Cisco.
 
The Nicira acquisition drew a line in the sand between Cisco and VMware with VCE caught in the middle.
 
But Cisco was far too dependent on VMware to declare all-out war immediately. Virtualization is a must-have feature for servers, so Cisco needs it for a popular line of servers it sells. Plus Cisco is a huge VMware customer for its internal IT systems.
 
This stake in Parallels gives Cisco direct access to alternative virtualization technology to use in its servers, which could eventually position it to sever the VCE relationship.
 
It's not the first move Cisco has made to cut ties with VMware. In October, Cisco released its own version of a cloud-building tech called OpenStack. OpenStack scares VMware. It competes with VMware's prize cloud operating system, vCloud.
 
Armed with its own version of OpenStack, and a stake in Parallels, Cisco has a way to ditch its internal use of VMware's software.
 
Plus, last month, Cisco bought cloud management company Cloupia, for an undisclosed sum. That means Cisco is assembling the pieces to not just ditch VMware, but to compete head on.

 

Monday, January 21, 2013

Storage 101 - Part 3


This article continues the discussion of storage area networks by talking about the storage fabric and about the three most commonly used fabric topologies.

Introduction

In the second part of this article series, I talked all about hosts and host hardware. In this article, I want to turn my attention to the storage fabric.


As previously explained, SANs consist of three main layers – the host layer (which I talked about in the previous article), the fabric layer, and the storage layer. The fabric layer consists of networking hardware that establishes connectivity between the host and the storage target. The fabric layer can consist of things like SAN hubs, SAN switches, fiber optic cable, and more.

Fabric Topologies


Before I get too far into my discussion of the fabric layer, I need to explain that SANs are really nothing more than networks that are dedicated to the sole purpose of facilitating communications between hosts and storage targets. That being the case, it should come as no surprise that there are a number of different topologies that you can implement. In some ways SAN fabric topologies mimic the topologies that can be used on regular, non-SAN networks. There are three main fabric topologies that you need to know about. These include point to point, arbitrated loop, and switched fabric.

Point to Point

Point to point is the simplest and least expensive SAN fabric topology. However, it is also the least practical. A point to point topology is essentially a direct connection between a host and a storage target. The simplicity and cost savings come into play in the fact that no additional SAN hardware is needed (such as switches and routers). Of course the price for this simplicity is that the fabric can only include two devices – the host and the storage target. The fabric cannot be expanded without switching to a different topology. Because of this, some would argue that point to point isn’t even a true SAN topology.

Arbitrated Loop


The simplest and least expensive “true SAN” topology is an arbitrated loop. An arbitrated loop makes use of a Fibre Channel hub. Hubs are kind of like switches in that they contain ports and devices can communicate with each other through these ports. The similarities end there however.

Fibre Channel hubs lack the intelligence of a switch, and they do not segment communications the way a switch does. This leads to a couple of important limitations. For starters, all of the devices attached to a hub share a single loop, which means that only one device can transmit data at a time; the loop's arbitration protocol decides which device gets control before each transmission.

Because of the way that Fibre Channel hubs work, each hub provides for a certain amount of bandwidth and that bandwidth must be shared by all of the devices that are connected to the hub. This means that the more devices you connect to a Fibre Channel hub, the more each device must compete with other devices for bandwidth.
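A quick back-of-the-envelope sketch of that sharing effect, assuming a hypothetical 1 Gbit/s loop:

```python
# Sketch: effective per-device bandwidth on a shared arbitrated loop.
# Unlike a switch, a hub gives every attached device the SAME shared pipe.

def per_device_gbps(loop_gbps, active_devices):
    """Average bandwidth each device sees when all are contending."""
    return loop_gbps / active_devices

for n in (2, 8, 24):
    print(f"{n:>2} active devices on a 1 Gbit/s loop -> "
          f"{per_device_gbps(1.0, n):.3f} Gbit/s each")
# 2 -> 0.500, 8 -> 0.125, 24 -> 0.042
```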

Because of these bandwidth limitations and device-capacity limits, the arbitrated loop topology is suitable only for small or medium-sized businesses. The limit on the number of devices that can be connected to an arbitrated loop is 127, which is the number of valid loop addresses the protocol provides. Even though this probably sounds like a lot of devices, it is important to remember that this is a theoretical limit, not a practical one.

In the real world, Fibre Channel hubs are becoming more difficult to find, but you can still buy them. Most of the Fibre Channel hubs that I have seen recently offer only eight ports. That isn't to say that you are limited to eight devices, however. You can use a technique called hub cascading to join multiple hubs together into a single arbitrated loop.

As the arbitrated loop grows in size, there are a few things to keep in mind. First, the 127-device limit that I mentioned previously applies to the entire loop, not just to a single hub. You can't exceed the device limit by connecting more hubs together.

Another thing to consider is that each hub itself counts as a device. Therefore, an eight-port hub with a device plugged into each port would actually count as nine devices.

Probably the most important thing to remember with regard to hub cascading is that hardware manufacturers have their own rules about it. For example, many of the Fibre Channel hubs on the market can be cascaded twice, which means that the maximum number of hubs you could use in your arbitrated loop would be three. If you assume that each hub contains eight ports, then the entire arbitrated loop would max out at 24 devices (although the actual device count would be 27, because each of the three hubs counts as a device).

Keep in mind that this represents a best-case scenario (assuming that the manufacturer does impose a three-hub limit). In some cases you might have to use a device port to connect the next hub in the cascade; some hubs offer dedicated cascade ports separate from the device ports, but others do not.
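To put numbers on those cascading rules, the sketch below computes the device budget for a chain of hubs, assuming (per the discussion above) that each hub counts as a loop device and that, on hubs without dedicated cascade ports, each cascade link consumes one device port on each of the two hubs it joins.

```python
# Sketch: device budget for a cascaded arbitrated loop.
AL_DEVICE_LIMIT = 127   # valid loop addresses for the ENTIRE loop

def loop_budget(hubs, ports_per_hub, dedicated_cascade_ports):
    cascade_links = hubs - 1
    # Without dedicated cascade ports, each link burns one device port
    # on each of the two hubs it connects.
    lost_ports = 0 if dedicated_cascade_ports else 2 * cascade_links
    device_ports = hubs * ports_per_hub - lost_ports
    loop_members = device_ports + hubs      # the hubs count as devices too
    return device_ports, min(loop_members, AL_DEVICE_LIMIT)

# Three 8-port hubs, as in the example above:
print(loop_budget(3, 8, dedicated_cascade_ports=True))   # (24, 27)
print(loop_budget(3, 8, dedicated_cascade_ports=False))  # (20, 23)
```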

Earlier I mentioned that using an arbitrated loop is the cheapest and easiest way to build a SAN. The reason is that Fibre Channel hubs do not typically require any configuration: devices can simply be plugged in, and the hub does the rest. Keep in mind, however, that arbitrated loops tend to be slow compared to switched fabrics, and that they lack the flexibility for which SANs have come to be known.

Switched Fabric


The third topology is known as a switched fabric. Switched fabric is probably the most widely used Fibre Channel topology today. It is by far the most flexible of the three topologies, but it is also the most expensive to implement. When it comes to SANs however, you usually get what you pay for.

As the name implies, the switched fabric topology makes use of a Fibre Channel switch. Fibre Channel switches are not subject to the same limitations as hubs. Whereas an arbitrated loop has a theoretical limit of 127 devices, a switched fabric can theoretically scale to millions of devices, thanks to Fibre Channel's 24-bit fabric addressing. Furthermore, because of the way that a switched fabric works, any device within the fabric is able to communicate with any other device.

As you can see, Fibre Channel switches are very powerful, but they also have the potential to become a single point of failure. A switch failure can bring down the entire SAN. As such, switched fabrics are usually designed in a way that uses redundant switches. This allows the SAN to continue to function in the event of a switch failure. I will discuss switched fabrics in much more detail in the next article in this series.
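A common way to provide that redundancy is a dual-fabric design, in which every host and storage port attaches to two independent switches or fabrics and multipathing software shifts traffic when one side fails. Here is a toy Python sketch of the idea (the fabric names and the path-selection logic are purely illustrative):

```python
# Sketch: dual-fabric redundancy - a host keeps a path to storage
# even when one switch (or an entire fabric) fails.

class Fabric:
    def __init__(self, name):
        self.name = name
        self.healthy = True

def pick_path(fabrics):
    """Crude stand-in for host multipathing: use the first healthy fabric."""
    for fabric in fabrics:
        if fabric.healthy:
            return fabric.name
    raise RuntimeError("no path to storage: all fabrics down")

fabrics = [Fabric("fabric-A"), Fabric("fabric-B")]
print(pick_path(fabrics))        # fabric-A
fabrics[0].healthy = False       # simulate a switch/fabric failure
print(pick_path(fabrics))        # fabric-B: the SAN keeps working
```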

Conclusion

In this article, I have introduced you to the three main topologies that are used for SAN communications. In Part 4 of this article series, I plan to talk more in depth about the switched fabric topology and about Fibre Channel switches in general.
 
