Friday, November 30, 2012

Cisco launches new Cloud and Managed Services Partner Program


Cisco has announced the reworking of its cloud provider and reseller programs, and its Managed Services program, merging them into a single unified program.
 "Every analyst confirms that the cloud is only becoming bigger, with IDC forecasting nearly 30% of all IT will be in the cloud by 2020," said Ricardo Moreno, Senior. Director, Channels Programs & Strategy at Cisco.
 
"It's huge and it's there for partners to capture the opportunity."  Moreno acknowledged the new program represents a significant rethink of how Cisco is approaching the cloud.  "Since two years ago, when we came up with the strategy, our Go-to-Market strategy has been partner based," Moreno said.
 
"We have learned from our original programs and are evolving them now."  
 
Cisco's cloud strategy -- in common with several other vendors -- defines three key roles for partners: cloud builder, cloud provider and cloud reseller. The cloud builder role, which Moreno said was closest to the traditional integration business, was evolved in September into the Master Cloud Builder specialization.  "The idea was to leverage the model that the partners know best, and we added some elements including application knowledge to the Cloud Builder specialization," Moreno said.  
 
"Now, with this new announcement the cloud provider and reseller roles are being combined with the Managed Services program which we have had in place for six years into a new Cloud and Managed Services portfolio," Moreno said.  
 
"There is much greater simplification from turning two program tracks into a simple program," Moreno said.
 
These simplifications include streamlined audits and simplified pricing. The pricing will be consistent and predictable globally, allowing partners to position and sell Cisco-Powered managed and cloud services more effectively.  Rebates have also been enhanced. CMSP partners can take advantage of the Cisco Value Incentive Program (VIP), effective January 27, 2013.
 
This replaces the current Cisco Managed Services Channel Program (MSCP) rebates.  "Cloud providers before did not have access to rebates, and neither they nor managed service providers have had access to VIP through that program," Moreno said.
 
"This makes one single rebate process to manage. The previous program had some flat rebates and now they have all been enhanced and harmonized."  
 
CMSP partners are also eligible for additional Cisco incentive programs including Opportunity Incentive Program (OIP), Teaming Incentive Program (TIP), Solution Incentive Program (SIP), and Technology Migration Program (TMP).  
 
"In the past, all these were not available to these partners," Moreno said.  
In addition, "Cisco Powered" branding will be a part of CMSP, to provide partners with strong branding.
 
"This is something we hadn't been promoting in recent years in managed services, but we will be now," Moreno said.  
 
Once certified, CMSP partners can now sell and deliver cloud & managed services globally. "Being able to offer services outside of their region without qualifying again is important for cloud partners," Moreno said.  
 
Finally, to assist partners in building, marketing, and selling their services, Cloud Market Development Funds, business acceleration tools and services, sales training and Cisco's Cloud Marketplace (http://marketplace.cisco.com/cloud) will be available.  
 
"Any CMSP participant can advertise themselves on Cisco Cloud Marketplace to other partners," Moreno said. "This promotes partner to partner collaboration. This was announced in September, but very much connects to what we are doing here."
 
The transition process will last until August 1, 2013, after which the old MSCP managed services program will disappear. "That should be ample time to transition -- 9 to 23 months depending on the anniversary date," Moreno said.
 
The biggest change in qualification requirements will be that under the CMSP, Cisco Powered Service designations are being elevated to mandatory requirements.  "There are two main changes," said Arjun Lahiri, Senior Manager, Worldwide Channels at Cisco.  
 
"CMSP Master partners will need to have a minimum of two Cisco Powered Services, and CMSP Advanced Partners will be required to have one. Today, these are not mandates, but with the new program they will be mandatory."
The CMSP Express partner tier will not be required to qualify on any Cisco Powered Services, but Lahiri said they will be required to have two Cisco-based services, with the difference being that these may have some other vendor equipment, while the Cisco Powered services are exclusively Cisco validated and tested.
 

10 Things to Know about Cisco UCS


“10 Things to Know about Cisco UCS.” Here we go….

  • The most important feature of UCS is its management architecture. The hardware was all designed with unified management in mind in order to reduce the administrative overhead of today’s server environments. As companies move to more highly virtualized environments and cloud architectures, automation and orchestration becomes key. UCS provides the management and provisioning tools at a hardware level to quickly realize the benefits of these types of environments and maximize the inherent cost reductions.

  • UCS is not just about blades. The management and I/O infrastructure is designed from the ground up to manage the entire server infrastructure including rack-mount servers. While blade adoption rates continue to grow, 60% of all servers are still rack-mount. UCS’s ability to manage both rack-mount and blade servers under one platform is a key differentiator with major ROI benefits. This ability will be available by the end of the calendar year.

  • UCS is based on industry standards such as the 802 Ethernet standards and x86 hardware architecture, making it vendor neutral and fully compatible with other systems. The UCS system is interoperable with any existing infrastructure and can be tied into management and monitoring applications already being utilized.

  • Using the Virtual Interface Card (VIC) or Generation 1 Converged Network Adapters (CNA) from Emulex or QLogic, UCS has the unique capability of detecting network failures and failing over traffic paths in hardware on the card. This allows network administrators to design and configure network failover end-to-end, ensuring consistent policies and bandwidth utilization. Additionally, this unique feature provides faster failover and higher redundancy than other systems.

  • The management infrastructure of UCS is designed to allow an organization to provision and manage the system in the way that most closely fits its process. If a more dynamic process is desired, UCS allows a single administrator to cross traditional boundaries in order to increase operational flexibility. If the current organizational structure is rigid and changes are not desired, UCS provides tight Role Based Access Control (RBAC) tools to maintain strict boundaries that match the current customer environment. If an organization is looking to UCS to provide an Infrastructure as a Service (IaaS) type environment, the benefits of UCS can be extended into custom self-service portals using the UCS XML interface (a brief sketch follows this list).

  • UCS reduces infrastructure components and costs by providing advanced tools for I/O consolidation. The UCS system is designed to converge disparate I/O networks onto a single Ethernet infrastructure. This consolidation is not limited to FCoE deployments; it extends these benefits to NFS, iSCSI, RDMA and any other protocol utilizing Ethernet for Layer 2 communication.

  • Current UCS hardware provides up to 80Gbps of converged I/O to each chassis of 4-8 blades. This is done using a pair of redundant I/O modules which both operate in an active fashion. This is not a bandwidth limitation of the mid-plane which was designed for 40Gbps Ethernet and above. Future I/O modules will provide additional bandwidth to the chassis and blades as data center I/O demands increase.

  • The single-point-of-management for the server access layer provided by UCS can be extended to the VMware virtual switching infrastructure, further reducing administrative overhead. Using Pass-Through Switching (PTS) on UCS, the VMware virtual switching environment can be managed through the UCS service profile the same way physical blades are managed.

  • Memory extension on the UCS B250-M1 and B250-M2 blades provides industry-leading 384GB of memory density for 2-socket servers. Moreover, because this increased density is gained through additional DIMM slots, lower-density DIMMs can be used at significantly lower cost to reach up to 192GB of memory. In addition to the B250 blades, the B440 adds support for 2 or 4 Xeon 7500 processors with 4, 6, or 8 cores depending on processor model.

  • While the UCS architecture was designed to amplify the benefits of server virtualization and Virtual Desktop Infrastructure (VDI), the platform is standards-based and can be used with any bare-metal x86-based operating system such as Windows, SUSE/Red Hat Linux, etc. UCS can operate with any mix of server operating systems desired for any given customer.
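To make the XML-interface point above concrete, here is a rough Python sketch of how a self-service portal might talk to the UCS Manager XML API. The address, credentials and attribute names are illustrative and drawn from memory of the API (aaaLogin and configResolveClass posted to the /nuova endpoint), so treat it as a starting point to verify against the official UCS XML API documentation rather than as working tooling.

import xml.etree.ElementTree as ET
import requests

UCSM_URL = "https://ucs-manager.example.com/nuova"   # placeholder UCS Manager address

def xml_call(body: str) -> ET.Element:
    """POST a raw XML request to UCS Manager and return the parsed reply."""
    resp = requests.post(UCSM_URL, data=body, verify=False, timeout=30)
    resp.raise_for_status()
    return ET.fromstring(resp.text)

# Authenticate; the reply carries a session cookie used by every later request.
login = xml_call('<aaaLogin inName="admin" inPassword="password" />')
cookie = login.get("outCookie")

# Pull the blade inventory by class; a portal would wrap calls like this behind its UI.
blades = xml_call(
    f'<configResolveClass cookie="{cookie}" classId="computeBlade" inHierarchical="false" />'
)
for blade in blades.iter("computeBlade"):
    print(blade.get("dn"), blade.get("model"), blade.get("operState"))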
 

Thursday, November 29, 2012

Cisco announces new job-role focused data center certifications

 
Today, Cisco is announcing a new comprehensive, job-role-focused training and certification program for its data center architecture. It includes a Career Certification portfolio consisting of the Cisco CCNA Data Center, CCNP Data Center and CCIE Data Center, as well as a robust product training portfolio.
 
"The key message here is that our career certifications now fully embrace the unified datacenter architecture -- unified computing, unified fabric and unified management," said Antonella Corno, product manager for Data Center virtualization and cloud product portfolio at Cisco. "These announcements focus on the first two unified computing and unified fabric. Unified management is cloud-focused and isn't as mature as the others, and so is not ready to receive a full career certification yet."
 
Corno said that as technology has evolved to put more intelligence into the network, the industry has discovered a growing knowledge gap.
 
"Those tech trends were responding to specific business challenges, and IT organizations had to rethink the way they leverage the infrastructure, which was leading the industry into new roles and skillsets, and we had to address the gaps that were forming in training," she said.
 
"Moving to more complex infrastructures needs better trained people to lower the cost of operations, increase the functional excellence of staff and ensure the network evolves with business," Corno added.
 
The new specialist certifications cover every data center product. Joining CCIE Data Center, which was announced in February 2012, are CCNA Data Center and CCNP Data Center. All see two key pillars of the Cisco Unified Data Center architecture -- Cisco Unified Computing and Cisco Unified Fabric -- addressed across the job roles of design, implementation and troubleshooting.
 
CCNA Data Center lays the foundation for basic level job roles such as Data Center Networking Administrators that require competence in areas including network and server virtualization and storage and IP networking convergence. Good for three years, the certification requires two exams, one on routing and switching (DCICN), the other on data center technologies (DCICT).
 
The CCNP Data Center certification is a more advanced certification, aligned specifically to the job role of Data Center Network Designers and professional-level Data Center networking practitioners. It consists of six exams, in Troubleshooting, Implementation and Design, for both Unified Computing Support and Unified Fabric Support. The candidate needs to complete four of them, depending on their specialization.
 
The certification validates an individual's capabilities, including troubleshooting a virtualized computing environment based on the Cisco Unified Computing System platform -- covering storage and network connectivity, installation, memory, adapter connectivity and booting issues, drivers and BIOS -- and deploying a virtualized environment.
 
"The CCNP prerequisites are the CCNA and 3-5 years of experience," Corno said.
 
While the CCIE exam was announced earlier this year, Cisco is announcing that its written exam is now available, and the lab exam will be available December 10, 2012.
 
Cisco also announced an expanded product training portfolio for the Cisco Nexus 1000V, 2000/5000 and 7000, the Cisco MDS 9000 series and the Cisco Unified Computing System (Cisco UCS) B- and C-Series.
 
CCNA Data Center and CCNP Data Center exams are now available through Pearson VUE. Training courses are available through Cisco Authorized Learning Partners.
 

Wednesday, November 28, 2012

Digging into the Software Defined Data Center

 
The software defined data center is a relatively new buzzword embraced by the likes of EMC and VMware.  This post is intended to take it a step deeper as I seem to be stuck at 30,000 feet for the next five hours with no internet access and no other decent ideas. For the purpose of brevity (read: laziness) I’ll use the acronym SDDC for Software Defined Data Center, whether or not it is being used elsewhere.
 
First let’s look at what you get out of a SDDC:
 
Legacy Process:
 
In a traditional legacy data center the workflow for implementing a new service would look something like this:
  1. Approval of the service and budget
  2. Procurement of hardware
  3. Delivery of hardware
  4. Rack and stack of new hardware
  5. Configuration of hardware
  6. Installation of software
  7. Configuration of software
  8. Testing
  9. Production deployment
This process would vary greatly in overall time, but 30-90 days is probably a good ballpark (I know, I know, some of you are wishing it happened that fast.)
 
Not only is this process complex and slow but it has inherent risk. Your users are accustomed to on-demand IT services in their personal life. They know where to go to get it and how to work with it. If you tell a business unit it will take 90 days to deploy an approved service they may source it from outside of IT. This type of shadow IT poses issues for security, compliance, backup/recovery etc.
 
SDDC Process:
 
As described in the link above, an SDDC provides a complete decoupling of the hardware from the services deployed on it. This provides a more fluid system for IT service change: growing, shrinking, adding and deleting services. Conceptually the overall infrastructure would maintain an agreed upon level of spare capacity and would be added to as thresholds were crossed. This would provide an ability to add services and grow existing services on the fly in all but the most extreme cases. Additionally the management and deployment of new services would be software driven through intuitive interfaces rather than hardware driven and based on disparate CLIs.
 
The process would look something like this:
  1. Approval of the service and budget
  2. Installation of software
  3. Configuration of software
  4. Testing
  5. Production deployment
The removal of four steps is not the only benefit. The remaining five steps are streamlined into automated processes rather than manual configurations. Change management systems and trackback/chargeback are incorporated into the overall software management system providing a fluid workflow in a centralized location. These processes will be initiated by authorized IT users through self-service portals. The speed at which business applications can be deployed is greatly increased providing both flexibility and agility.
 
Isn’t that cloud?
 
Yes, no and maybe. Or as we say in the IT world: ‘It depends.’ SDDC can be cloud: with on-demand self-service, flexible resource pooling, metered service, etc., it fits the cloud model. The difference is really in where and how it’s used. A public cloud based IaaS model, or any given PaaS/SaaS model, does not lend itself to legacy enterprise applications. For instance you’re not migrating your Microsoft Exchange environment onto Amazon’s cloud. Those legacy applications and systems still need a home. Additionally those existing hardware systems still have value. SDDC offers an evolutionary approach to enterprise IT that can support both legacy applications and new applications written to take advantage of cloud systems. This provides a migration approach as well as investment protection for traditional IT infrastructure.
 
How it works:
 
The term ‘cloud operating system’ is thrown around frequently in the same conversation as SDDC. The idea is that compute, network and storage are raw resources that are consumed by the applications and services we run to drive our businesses. Rather than look at these resources individually, and manage them as such, we plug them into a management infrastructure that understands them and can utilize them as services require them. Forget the hardware underneath and imagine a dashboard of your infrastructure something like the following graphic.
 
 
[Figure: infrastructure dashboard mock-up]
 
The hardware resources become raw resources to be consumed by the IT services. For legacy applications this can be very traditional virtualization or even physical server deployments. New applications and services may be deployed in a PaaS model on the same infrastructure, allowing for greater application scale and redundancy and even less of a tie to the hardware underneath.
 
Lifting the kimono:
Taking a peek underneath the top level reveals a series of technologies, both new and old. Additionally there are some requirements that may or may not be met by current technology offerings. We’ll take a look through the compute, storage and network requirements of SDDC one at a time, starting with compute and working our way up.
 
Compute is the layer that requires the least change. Years ago we moved to the commodity x86 hardware which will be the base of these systems. The compute platform itself will be differentiated by CPU and memory density, platform flexibility and cost. Differentiators traditionally built into the hardware such as availability and serviceability features will lose value. Features that will continue to add value will be related to infrastructure reduction and enablement of upper level management and virtualization systems. Hardware that provides flexibility and programmability will be king here and at other layers as we’ll discuss.
 
Other considerations at the compute layer will tie closely into storage. As compute power itself has grown by leaps and bounds our networks and storage systems have become the bottleneck. Our systems can process our data faster than we can feed it to them. This causes issues for power, cooling efficiency and overall optimization. Dialing down performance for power savings is not the right answer. Instead we want to fuel our processors with data rather than starve them. This means having fast local data in the form of SSD, flash and cache.
 
Storage will require significant change, but changes that are already taking place or foreshadowed in roadmaps and startups. The traditional storage array will become more and more niche as it has limited capacities of both performance and space. In its place we’ll see new options including, but not limited to migration back to local disk, and scale-out options. Much of the migration to centralized storage arrays was fueled by VMware’s vMotion, DRS, FT etc. These advanced features required multiple servers to have access to the same disk, hence the need for shared storage. VMware has recently announced a combination of storage vMotion and traditional vMotion that allows live migration without shared storage. This is available in other hypervisor platforms and makes local storage a much more viable option in more environments.
 
Scale-out systems on the storage side are nothing new. LeftHand and EqualLogic pioneered much of this market before being bought by HP and Dell respectively. The market continues to grow with products like Isilon (acquired by EMC) making a big splash in the enterprise as well as plays in the Big Data market. NetApp’s cluster mode is now in full effect with ONTAP 8.1, allowing their systems to scale out. In the SMB market new players with fantastic offerings like Scale Computing are making headway and bringing innovation to the market. Scale out provides a more linear growth path as both I/O and capacity increase with each additional node. This is contrary to traditional systems, which are always bottlenecked by the storage controller(s).
 
We will also see moves to central control, backup and tiering of distributed storage, such as storage blades and server cache. Having fast data at the server level is a necessity but solves only part of the problem. That data must also be made fault tolerant as well as available to other systems outside the server or blade enclosure. EMC’s VFCache is one technology poised to help with this by adding the server as a storage tier for software tiering. Software such as this places the hottest data directly next to the processor, with tier options all the way back to SAS, SATA and even tape.
 
By now you should be seeing the trend of software-based features and control. The last stage is the network, which will require the most change. Networking has held strong to proprietary hardware and high margins for years while the rest of the market has moved to commodity. Companies like Arista look to challenge the status quo by providing software feature sets, or open programmability, layered onto fast commodity hardware. Additionally, Software Defined Networking has been validated by both VMware’s acquisition of Nicira and Cisco’s spin-in Insieme, which by most accounts will expand upon the Cisco ONE concept with a Cisco-flavored SDN offering. In any event the race is on to build networks based on software flows that are centrally managed rather than the port-to-port configuration nightmare of today’s data centers.
 
This move is not only for ease of administration, but is also required to push our systems to the levels required by cloud and SDDC. These multi-tenant systems running disparate applications at various service tiers require tighter quality of service controls and bandwidth guarantees, as well as more intelligent routes. Today’s physically configured networks can’t provide these controls. Additionally applications will benefit from network visibility, allowing them to request specific flow characteristics from the network based on application or user requirements. Multiple service levels can be configured on the same physical network allowing traffic to take appropriate paths based on type rather than physical topology. These network changes are required to truly enable SDDC and Cloud architectures.
 
Further up the stack from the Layer 2 and Layer 3 transport networks comes a series of other network services that will be layered in via software. Features such as load balancing, access control and firewall services will be required for the services running on these shared infrastructures. These network services will need to be deployed with new applications and tiered to the specific requirements of each. As with the L2/L3 services, manual configuration will not suffice and a ‘big picture’ view will be required to ensure that network services match application requirements. These services can be layered in from both physical and virtual appliances but will require configurability via the centralized software platform.
 
Summary:
 
By combining current technology trends, emerging technologies and layering in future concepts the software defined data center will emerge in evolutionary fashion. Today’s highly virtualized data centers will layer on technologies such as SDN while incorporating new storage models bringing their data centers to the next level. Conceptually picture a mainframe pooling underlying resources across a shared application environment. Now remove the frame.
 

Tuesday, November 27, 2012

VXLAN Deep Dive

Courtesy - definethecloud.com
 
By far the most popular network virtualization technique in the data center is VXLAN. This has as much to do with Cisco and VMware backing the technology as the tech itself. That being said, VXLAN is targeted specifically at the data center and is one of several similar solutions, such as NVGRE and STT. VXLAN’s goal is allowing dynamic, large-scale, isolated virtual L2 networks to be created for virtualized and multi-tenant environments. It does this by encapsulating frames in VXLAN packets. The standard for VXLAN is under the scope of the IETF NVO3 working group.
 
[Figure: VXLAN packet format with the 24-bit VXLAN header]
 
The VXLAN encapsulation method is IP based and provides for a virtual L2 network. With VXLAN the full Ethernet Frame (with the exception of the Frame Check Sequence: FCS) is carried as the payload of a UDP packet. VXLAN utilizes a 24-bit VXLAN header, shown in the diagram, to identify virtual networks. This header provides for up to 16 million virtual L2 networks.
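The header layout described above is simple enough to show in a few lines of Python. This is a minimal sketch of the 8-byte VXLAN header (an "I" flag marking the VNI as valid, reserved bits, and the 24-bit VNI), carried as the start of the UDP payload (IANA port 4789); it is illustrative rather than a production encoder, and the original Ethernet frame would follow the header.

import struct

VXLAN_UDP_PORT = 4789      # IANA-assigned destination port for VXLAN
FLAG_VNI_VALID = 0x08      # the "I" flag: VNI field is valid

def build_vxlan_header(vni: int) -> bytes:
    """Return the 8-byte VXLAN header for a 24-bit VNI."""
    if not 0 <= vni < 2 ** 24:
        raise ValueError("VNI must fit in 24 bits")
    # flags (1 byte) + 3 reserved bytes + VNI in the top 24 bits of the last 4 bytes
    return struct.pack("!B3xI", FLAG_VNI_VALID, vni << 8)

def parse_vni(header: bytes) -> int:
    """Extract the 24-bit VNI from an 8-byte VXLAN header."""
    flags, vni_field = struct.unpack("!B3xI", header)
    assert flags & FLAG_VNI_VALID, "VNI-valid flag not set"
    return vni_field >> 8

header = build_vxlan_header(1001)
assert len(header) == 8 and parse_vni(header) == 1001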
 
Frame encapsulation is done by an entity known as a VXLAN Tunnel Endpoint (VTEP.) A VTEP has two logical interfaces: an uplink and a downlink. The uplink is responsible for receiving VXLAN frames and acts as a tunnel endpoint with an IP address used for routing VXLAN encapsulated frames. These IP addresses are infrastructure addresses and are separate from the tenant IP addressing for the nodes using the VXLAN fabric. VTEP functionality can be implemented in software such as a virtual switch or in the form of a physical switch.
 
VXLAN frames are sent to the IP address assigned to the destination VTEP; this IP is placed in the Outer IP DA. The IP of the VTEP sending the frame resides in the Outer IP SA. Packets received on the uplink are mapped from the VXLAN ID to a VLAN and the Ethernet frame payload is sent as an 802.1Q Ethernet frame on the downlink. During this process the inner MAC SA and VXLAN ID are learned in a local table. Packets received on the downlink are mapped to a VXLAN ID using the VLAN of the frame. A lookup is then performed within the VTEP L2 table using the VXLAN ID and destination MAC; this lookup provides the IP address of the destination VTEP. The frame is then encapsulated and sent out the uplink interface.
 
 
[Figure: VTEP forwarding example]
Using the diagram above for reference a frame entering the downlink on VLAN 100 with a destination MAC of 11:11:11:11:11:11 will be encapsulated in a VXLAN packet with an outer destination address of 10.1.1.1. The outer source address will be the IP of this VTEP (not shown) and the VXLAN ID will be 1001.
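A toy Python lookup makes the mapping in that example explicit. The tables below are hand-populated with the values from the diagram (VLAN 100, VNI 1001, MAC 11:11:11:11:11:11, remote VTEP 10.1.1.1) purely for illustration; a real VTEP builds and ages these entries dynamically.

# Hand-built tables mirroring the example above (illustrative only).
vlan_to_vni = {100: 1001}                              # downlink VLAN -> VXLAN ID
l2_table = {                                           # (VNI, inner dest MAC) -> remote VTEP IP
    (1001, "11:11:11:11:11:11"): "10.1.1.1",
}

def forward_from_downlink(vlan: int, dst_mac: str):
    """Map a frame arriving on the 802.1Q downlink to (VNI, outer destination IP)."""
    vni = vlan_to_vni[vlan]
    remote_vtep = l2_table.get((vni, dst_mac))         # a miss triggers the flood behavior described below
    return vni, remote_vtep

print(forward_from_downlink(100, "11:11:11:11:11:11"))   # -> (1001, '10.1.1.1')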
 
In a traditional L2 switch a behavior known as flood and learn is used for unknown destinations (i.e., a MAC not stored in the MAC table). This means that if there is a miss when looking up the MAC the frame is flooded out all ports except the one on which it was received. When a response is sent the MAC is then learned and written to the table. The next frame for the same MAC will not incur a miss because the table will reflect the port it exists on. VXLAN preserves this behavior over an IP network using IP multicast groups.
 
Each VXLAN ID has an assigned IP multicast group to use for traffic flooding (the same multicast group can be shared across VXLAN IDs.) When a frame is received on the downlink bound for an unknown destination it is encapsulated using the IP of the assigned multicast group as the Outer DA; it’s then sent out the uplink. Any VTEP with nodes on that VXLAN ID will have joined the multicast group and therefore receive the frame. This maintains the traditional Ethernet flood and learn behavior.
 
VTEPs are designed to be implemented as a logical device on an L2 switch. The L2 switch connects to the VTEP via a logical 802.1Q VLAN trunk. This trunk contains a VXLAN infrastructure VLAN in addition to the production VLANs. The infrastructure VLAN is used to carry VXLAN encapsulated traffic to the VXLAN fabric. The only member interfaces of this VLAN will be the VTEP’s logical connection to the bridge itself and the uplink to the VXLAN fabric. This interface is the ‘uplink’ described above, while the logical 802.1Q trunk is the downlink.
 
[Figure: VTEP uplink and downlink connections to the bridge]
 
Summary of part I:
 
VXLAN is a network overlay technology designed for data center networks. It provides massively increased scalability over VLAN IDs alone while allowing for L2 adjacency over L3 networks. The VXLAN VTEP can be implemented in both virtual and physical switches allowing the virtual network to map to physical resources and network services. VXLAN currently has both wide support and hardware adoption in switching ASICs and hardware NICs, as well as virtualization software.
 
 
Part - II
 
This part will dive deeper into how VXLAN operates on the network.
 
Let’s start with the basic concept that VXLAN is an encapsulation technique. Basically the Ethernet frame sent by a VXLAN connected device is encapsulated in an IP/UDP packet. The most important thing here is that it can be carried by any IP capable device. The only time added intelligence is required in a device is at the network bridges known as VXLAN Tunnel End-Points (VTEP) which perform the encapsulation/de-encapsulation. This is not to say that benefit can’t be gained by adding VXLAN functionality elsewhere, just that it’s not required.
 
[Figure: VXLAN encapsulation of an Ethernet frame in IP/UDP]
 
Providing Ethernet Functionality on IP Networks:
 
As discussed in Part 1, the source and destination IP addresses used for VXLAN are the Source VTEP and destination VTEP. This means that the VTEP must know the destination VTEP in order to encapsulate the frame. One method for this would be a centralized controller/database. That being said VXLAN is implemented in a decentralized fashion, not requiring a controller. There are advantages and drawbacks to this. While utilizing a centralized controller would provide methods for address learning and sharing, it would also potentially increase latency, require large software driven mapping tables and add network management points. We will dig deeper into the current decentralized VXLAN deployment model.
 
VXLAN maintains backward compatibility with traditional Ethernet and therefore must maintain some key Ethernet capabilities. One of these is flooding (broadcast) and ‘Flood and Learn behavior.’ I cover some of this behavior here (http://www.definethecloud.net/data-center-101-local-area-network-switching) but the summary is that when a switch receives a frame for an unknown destination (MAC not in its table) it will flood the frame to all ports except the one on which it was received. Eventually the frame will get to the intended device and a reply will be sent by the device, which will allow the switch to learn of the MAC’s location. When switches see source MACs that are not in their table they will ‘learn’ or add them.
 
VXLAN encapsulates frames in IP, and IP networks are typically designed for unicast traffic (one-to-one). This means there is no inherent flood capability. In order to mimic flood and learn on an IP network VXLAN uses IP multi-cast. IP multi-cast provides a method for distributing a packet to a group. This IP multi-cast use can be a contentious point within VXLAN discussions because most networks aren’t designed for IP multi-cast, IP multi-cast support can be limited, and multi-cast itself can be complex depending on implementation.
 
Within VXLAN, each VXLAN segment ID will be subscribed to a multi-cast group. Multiple VXLAN segments can subscribe to the same group; this minimizes configuration but increases unneeded network traffic. When a device attaches to a VXLAN segment on a VTEP that was not previously in use, the VTEP will join the IP multi-cast group assigned to that segment and start receiving messages.
 
[Figure: VXLAN unicast forwarding between VTEPs]
In the diagram above we see the normal operation in which the destination MAC is known and the frame is encapsulated in IP using the source and destination VTEP address. The frame is encapsulated by the source VTEP, de-encapsulated at the destination VTEP and forwarded based on bridging rules from that point. In this operation only the destination VTEP will receive the frame (with the exception of any devices in the physical path, such as the core IP switch in this example.)
 
[Figure: VXLAN flood of an unknown destination via the IP multi-cast group]
 
In the example above we see an unknown MAC address (the MAC to VTEP mapping does not exist in the table.) In this case the source VTEP encapsulates the original frame in an IP multi-cast packet with the destination IP of the associated multicast group. This frame will be delivered to all VTEPs participating in the group. VTEPs participating in the group will ideally only be VTEPs with connected devices attached to that VXLAN segment. Because multiple VXLAN segments can use the same IP multicast group this is not always the case. The VTEP with the connected device will de-encapsulate and forward normally, adding the mapping from the source VTEP if required. Any other VTEP that receives the packet can then learn the source VTEP/MAC mapping if required and discard it. This process will be the same for other traditionally flooded frames such as ARP, etc. The diagram below shows the logical topologies for both traffic types discussed.
 
[Figure: logical topologies for unicast and flooded VXLAN traffic]
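A short Python sketch of that receive-side behavior, with made-up table structures: the VTEP learns the inner source MAC to source VTEP mapping from any packet delivered via the multicast group, then either bridges the frame locally or discards it if the segment has no locally attached devices.

l2_table = {}                # (VNI, inner source MAC) -> source VTEP IP, learned on receive
local_vnis = {1001}          # segments with locally attached devices (illustrative)

def receive_flooded_packet(vni: int, inner_src_mac: str, src_vtep_ip: str) -> str:
    """Handle a VXLAN packet delivered via the segment's multi-cast group."""
    # Learn the mapping so later traffic toward this MAC can be sent unicast.
    l2_table.setdefault((vni, inner_src_mac), src_vtep_ip)
    if vni not in local_vnis:
        return "discard"     # group shared with another segment; nothing attached here
    return "decapsulate and bridge"

print(receive_flooded_packet(1001, "22:22:22:22:22:22", "10.1.1.2"))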
 
As discussed in Part 1 VTEP functionality can be placed in a traditional Ethernet bridge. This is done by placing a logical VTEP construct within the bridge hardware/software. With this in place VXLANs can bridge between virtual and physical devices. This is necessary for physical server connectivity, as well as to add network services provided by physical appliances. Putting it all together the diagram below shows physical servers communicating with virtual servers in a VXLAN environment. The blue links are traditional IP links and the switch shown at the bottom is a standard L3 switch or router. All traffic on these links is encapsulated as IP/UDP and broken out by the VTEPs.
 
[Figure: physical and virtual servers connected through VTEPs over an IP network]
 
Summary of Part II:
 
VXLAN provides backward compatibility with traditional VLANs by mimicking broadcast and multicast behavior through IP multicast groups. This functionality provides for decentralized learning by the VTEPs and negates the need for a VXLAN controller.

Monday, November 26, 2012

HP EVI vs. Cisco OTV: A Technical Look

 
HP announced two new technologies in the late summer, Multitenant Device Context (MDC) and Ethernet Virtual Interconnect (EVI), that target private clouds. Mike Fratto outlined the business and market positions, particularly in regard to Cisco's Overlay Transport Virtualization (OTV) and Virtual Device Context. However, the technology is also interesting because it's a little different than Cisco's approach. This post will drill into HP's EVI and contrast it with Cisco's OTV, as well as with VPLS.
 
HP EVI supports Layer 2 Data Center Interconnect (L2 DCI). L2 DCI technology is a broad term for technologies that deliver VLAN extension between data centers. Extending VLANs lets virtual machines move between data centers without changing a VM's IP address (with some restrictions). The use cases for such a capability include business continuity and disaster recovery. For a more extensive discussion of L2 DCI, please see the report The Long-Distance LAN.
 
HP EVI is a MAC-over-GRE-over-IP solution. Ethernet frames are encapsulated into GRE/IP at ingress to the switch. The GRE/IP packets are then routed over the WAN connection between the data centers.
 
EVI adds a software process that acts as a control plane to distribute the MAC addresses in each VLAN between the EVI-enabled switches. Thus, the switch in data center A updates the MAC address table in data center B and vice versa. By contrast, in traditional use, Ethernet MAC addresses are auto-discovered as frames are received by the switch.
 
Because GRE packets are IP packets, they can be routed over any WAN connection, making EVI widely useful for customers. In a neat bit of synergy, the HP Intelligent Resilient Framework (IRF) chassis redundancy feature means that WAN connections are automatically load-balanced because switches that are clustered in an IRF configuration act as a single switch (a Borg architecture, not an MLAG architecture). Therefore, multiple WAN connections between IRF clusters are automatically load-balanced by the control plane either as LACP bundles or through ECMP IP routing, which is a potential improvement over Cisco's OTV L2 DCI solution.
 
However, note that load balancing of the end-to-end traffic flow is not straightforward because there are three connections to be considered: LAN-facing, to the servers using MLAG bundles; WAN-facing, where the WAN links go from data center edge switches to the service provider; and intra-WAN, or within the enterprise or service provider WAN. Establishing the load balancing capabilities of each will take some time.
[Table: comparing HP EVI with Cisco OTV and VPLS]

Because HP has chosen to use point-to-point GRE, the EVI edge switch must perform packet replication. Ethernet protocols such as ARP rely heavily on broadcasts to function. In a two-site network this isn't a problem, but for three sites or more, the EVI ingress switch needs to replicate a broadcast EVI frame to every site. HP assures me that this can be performed at line rate, for any speed, for any number of data centers. That may be so, but creating full mesh replication for n * (n-1) WAN circuits could result in poor bandwidth utilization in networks that have high volumes of Ethernet broadcasts.
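The arithmetic behind that concern is easy to sanity-check. Here is a small illustrative Python calculation (not from the article) of full-mesh circuit counts and the per-broadcast replication load on an ingress switch:

def full_mesh_circuits(n_sites: int) -> int:
    """Unidirectional WAN circuits in a full mesh: n * (n - 1)."""
    return n_sites * (n_sites - 1)

def copies_per_broadcast(n_sites: int) -> int:
    """Copies the ingress switch must generate for one broadcast frame."""
    return n_sites - 1

for n in (2, 3, 5, 8):
    print(f"{n} sites: {full_mesh_circuits(n)} circuits, "
          f"{copies_per_broadcast(n)} copies per broadcast at the ingress switch")
# With multicast replication in the WAN core, the ingress switch sends a single copy instead.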
 
Cisco's OTV is also MAC-over-GRE-over-IP (using EoMPLS headers), but it adds a small OTV label into the IP header. The OTV control plane acts to propagate the MAC address routing table.
 
Like HP's EVI, OTV can complicate load balancing. Cisco's Virtual Port Channel (vPC) shares the control plane, while HP's IRF shares the data plane. Although a vPC-enabled pair of Nexus 7000 switches run as autonomous control planes, NX-OS can load balance evenly using IP. OTV load balances by using a 5-tuple hash and will distribute traffic over multiple paths for the WAN.
 
OTV also supports the use of multicast routing in the WAN to deliver a much more efficient replication of Ethernet broadcasts in large-scale environments. Instead of meshing a large DCI core, a Source Specific Multicast (SSM) design (with good reason) should be more efficient for multiple sites. Badly designed applications, such as Microsoft NLB, will be much more efficient using multicast.
 
EVI Compared To MPLS/VPLS
 
For many enterprises, MPLS is not a consideration. MPLS is a relatively complex group of protocols that requires a fair amount of time to learn and comprehend. However, building mission-critical business services that aren't MPLS is really hard. Service providers can offer L2 DCI using their MPLS networks with VPLS. Operationally, enterprise infrastructure is diverse and customised to each use case. Service provider networks tend toward homogeneity and simplicity because of the scale.
 
Some enterprises will buy managed VPLS services from service providers. They will also discover that such VPLS services are of variable quality, offer poor loop prevention, and can be expensive and inefficient. (For more, see the above-referenced report.) This is what drives Cisco and HP to deliver better options in OTV and EVI.
 
Other EVI Claims
 
HP notes that its solution doesn't require "multicast in the default configuration." HP wants to contrast itself to Cisco's OTV, which uses Source Specific Multicast in the WAN core, because many network engineers might consider configuring multicast too hard. Building an SSM design over a Layer 3 WAN core is a substantial requirement and not a technology that most enterprise engineers would be comfortable configuring. On the other hand, configuring SSM over a Layer 2 WAN core (using Dark Fibre or DWDM) is trivial.
 
However, Cisco OTV has a unicast mode that works in a similar way to HP EVI, which most engineers would choose for simplicity. That said, the SSM WAN core offers scaling and efficiency if you need it, while HP's EVI does not.
 
The HP EVI approach is potentially more effective at load balancing WAN circuits than 5-tuple hashing in Cisco OTV, but it's unlikely to make much difference in deployment.
 
EVI's Enterprise Value
 
HP EVI is aimed at enterprises with private clouds. The technology looks like a good strategy. HP says EVI will be available in the A12500 switch in December. HP has a poor history of delivering on time (we're still waiting for TRILL and EVB), so plan accordingly. Cisco OTV is shipping and available in the Nexus 7000 and ASR products (for substantial license fees). HP says it won't charge for EVI.
 
Private clouds are shaping up to be a huge market in the next five years, and HP is addressing this space early by bringing L2 DCI capabilities to its products. HP EVI looks to be a good technology to meet customer needs. Combined with Multitenant Device Context, it should keep HP on competitive footing with Cisco. Of course, it's easy to make this kind of technology work in a PowerPoint. We'll have to wait and see how it works in real deployments.
 
 

Stateless Transport Tunneling (STT)

 
 
STT is another tunneling protocol along the lines of the VXLAN and NVGRE proposals. As with both of those, the intent of STT is to provide a network overlay, or virtual network, running on top of a physical network. STT was proposed by Nicira and is therefore, not surprisingly, written from a software-centric view, in contrast to other proposals written from a network-centric view. The main advantage of the STT proposal is its ability to be implemented in a software switch while still benefitting from NIC hardware acceleration. The other advantage of STT is its use of a 64-bit network ID rather than the 24-bit IDs used by NVGRE and VXLAN.
 
The hardware offload STT grants relieves the server CPU of a significant workload in high bandwidth systems (10G+). This separates it from its peers, whose IP encapsulation in the soft switch negates the NIC’s LSO and LRO functions. The way STT goes about this is by having the software switch insert header information into the packet to make it look like a TCP packet, along with the required network virtualization fields. This allows the guest OS to send frames of up to 64k to the hypervisor, which are encapsulated and sent to the NIC for segmentation. While this does allow the HW offload to be utilized, STT’s use of valid TCP headers that don’t carry real TCP sessions causes issues for many network appliances or “middle boxes.”
 
STT is not expected to be ratified and is considered by some to have been proposed for informational purposes, rather than with the end goal of a ratified standard. With its misuse of a valid TCP header it would be hard pressed for ratification. STT does bring up the interesting issue of hardware offload.
 
The IP tunneling protocols mentioned above create extra overhead on host CPUs due to their inability to benefit from NIC acceleration techniques. VXLAN and NVGRE are intended to be implemented in hardware to solve this problem. Because they are intended for hardware implementation, both VXLAN and NVGRE use a 24-bit network ID; this space provides for 16 million tenants. Hardware implementation is coming quickly in the case of VXLAN, with vendors announcing VXLAN capable switches and NICs.
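For scale, a quick illustrative calculation of the ID-space arithmetic behind those numbers:

vlan_ids = 2 ** 12           # 802.1Q VLAN ID: 4,096 segments
vxlan_nvgre_ids = 2 ** 24    # 24-bit VNI/VSID: 16,777,216 segments ("16 million tenants")
stt_context_ids = 2 ** 64    # STT's 64-bit context ID
print(f"{vlan_ids:,} vs {vxlan_nvgre_ids:,} vs {stt_context_ids:,}")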
 

Sunday, November 25, 2012

Packets Are the Past, Flows Are the Future

Courtesy - Greg Ferro
 
Speeds and feeds have been the networking story for the last two decades. Vendors have battled to see which device can push out the most packets per second, or pare down Ethernet frame latency to the fewest microseconds, or offer up the biggest MAC tables. And as Ethernet speeds increased, so did feeds: 100-Mbps Ethernet is now 10 Gbps, while port counts have jumped from tens to hundreds per chassis.
 
However, networking is changing because the future is about flows. Packets and ports are no longer the metrics by which the network will be judged. Application clients and servers aren't aware of packets or ports. The OS opens a session with a remote host, makes the connection via the transport layer using TCP, and dispatches a stream of data over the network. (That's why TCP is defined as a protocol for carrying a stream of data.)
 
That stream, or flow, is run through the IP process to segment the stream into a consistent packet data structure, which is easily routed through network paths. Packets enable resilient network designs by creating uniform, stateless, independent data chunks. Packet networking has been the preferred method for networking for the last 20 years (but it's not the only one). The use of packets remains a sound technical concept for abstracting the forwarding path from the payload: Focus on the network, don't be concerned about the data.
 
But consider common use cases for the network. The user wants to see a Web page load smoothly and quickly. The Web developer wants the Web browser to request an image, an HTML file or a script file from the Web server and load rapidly. For a sys admin, the OS opens a socket to manage a connection to a remote system and to stream data through it. These use cases are all about flows of data, but the network supports the delivery as a series of packets.
 
And consider the hosts at either end of the connection. In the past, servers were always located on a single port of a switch. Clients were located at a specific WAN connection or on a specific switch in the campus LAN. Packets ingressed and egressed the network at known locations--the ports.
 
The time of packets and ports is passing. In virtualized server environments, the network ingress for a given application isn't physical, can change regularly, and is configured by the VM administrator. Users are moving from desktops to handsets and laptops. The network must now deliver data flows from server to client. That's a fundamental change in network services. Clearly, networking by packets doesn't match the use cases, yet today's network engineer is concerned only with packets and ports.
 
Communication between application clients and servers has always been about flows, not packets, but the technical challenges of configuring the network have hidden that issue behind the technical abstraction of packets. As technology has improved, packet forwarding has become commoditized. The port density and performance of the current generation of network devices is good enough for the majority of the market.
 
And that's why emerging technologies such as software-defined networking (SDN) and OpenFlow are gaining acceptance. They are designed to enable applications and data flows, not just to move packets from point A to point B. In a subsequent post, I'll look at how SDN is a business technology while OpenFlow is a networking technology.

Saturday, November 24, 2012

OSPF Basics


What Is OSPF?
• Open Shortest Path First
• Link State Protocol using the Shortest Path First algorithm (Dijkstra) to calculate loop-free routes
• Used purely within the TCP/IP environment
• Designed to respond quickly to topology changes but using minimal protocol traffic
• Used in both Enterprise and Service Provider Environment
• Uses IP protocol 89
• Metric is cost, based on interface bandwidth by default (10^8 / BW in bps); a worked example follows this list
• Sends partial route updates only when there are changes
• Sends hello packets every 10 sec with a dead timer of 40 sec over Point-to-Point & Broadcast networks
• Sends hello packets every 30 sec with a dead timer of 120 sec over NBMA networks
• Uses multicast address 224.0.0.5 (ALL SPF Routers)
• Uses multicast address 224.0.0.6 (ALL DR Routers)
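Here is a quick Python sketch of the default cost formula from the list above (reference bandwidth of 10^8 bps divided by interface bandwidth); the floor of 1 reflects that an OSPF interface cost is an integer and never less than 1.

REFERENCE_BW_BPS = 100_000_000   # 10^8 bps, the default reference bandwidth

def ospf_cost(interface_bw_bps: int) -> int:
    """Default OSPF interface cost; integer result, never less than 1."""
    return max(1, REFERENCE_BW_BPS // interface_bw_bps)

for name, bw in [("T1 (1.544 Mbps)", 1_544_000), ("Ethernet (10 Mbps)", 10_000_000),
                 ("FastEthernet (100 Mbps)", 100_000_000), ("GigabitEthernet (1 Gbps)", 1_000_000_000)]:
    print(name, "->", ospf_cost(bw))
# GigabitEthernet and faster all cost 1 unless the reference bandwidth is raised.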

Different Types of OSPF LSAs
1. Router Link State Advertisement (Type 1 LSA)
2. Network Link State Advertisement (Type 2 LSA)
3. Summary Link State Advertisement (Type 3 and Type 4 LSA)
4. External Link State Advertisement (Type 5 LSA)

Different types of OSPF Packet
1. Hello
2. Database description
3. Link State Request
4. Link State Update
5. Link State Acknowledgement

Different Types of OSPF Areas
Regular Area: ABRs forward all LSAs from backbone
• Summary LSA (summarized/non-summarized) from other areas injected
• External links injected

Stub Area: A stub area is an area with a single exit point (if you need multiple exit points then configure it as an NSSA) into which External LSAs are not flooded
• Summary LSAs from other areas injected
• LSA type 5 not injected
• Default LSA injected into area as Summary LSA
• Define all routers in area as stub
• External link flaps will not be injected
Consolidates specific external links into a default route (0.0.0.0)
Used in networks with a lot of LSA Type 5

Totally Stubby Area
A Totally Stubby Area forwards only a default route (0.0.0.0)
The ABR will block not only the AS External LSAs but also all Summary LSAs, except a single Type 3 LSA to advertise the default route.

Not So Stubby Areas (NSSA)
Benefits of stub area, but ASBR is allowed

New type external LSA (type 7)
• Type 7 LSAs flooded throughout the area
• No type 5 external LSAs in the area
• Type 7 LSAs will be converted into type 5 LSAs when flooded into area 0 by ABRs
Filtering and summaries allowed at ABRs

Areas are used to make OSPF Scale
• OSPF uses a 2 level hierarchical model
• One SPF per area, flooding done per area
• Regular, Stub, Totally Stubby and NSSA Area Types
• A router has a separate LS database for each area to which it belongs
• All routers belonging to the same area should have identical databases
• SPF calculation is performed independently for each area
• LSA flooding is bounded by area
• If any link flaps in the area, the SPF run takes on the order of n log n calculations, where n is the number of links in the area


 

Friday, November 23, 2012

OSPF High Availability with SSO, NSF and NSR


Nonstop Forwarding with Stateful Switchover (NSF/SSO) are redundancy mechanisms for intra-chassis route processor failover.

SSO synchronizes Layer 2 protocol state, hardware L2/L3 tables (MAC, FIB, adjacency table), configuration, ACL and QoS tables.

NSF gracefully restarts routing protocol neighbor relationships after an SSO fail-over

1. Newly active redundant route processor continues forwarding traffic using synchronized HW forwarding tables.
2. NSF capable routing protocol (e.g., OSPF) requests graceful neighbor restart. Routing neighbors reform with no traffic loss.
3. Both the Cisco-proprietary and the standards-based (RFC 3623) graceful restart mechanisms are supported.
4. Graceful Restart capability must be enabled for all protocols, but only on routers with dual route processors that will be performing switchovers.
5. Graceful Restart awareness is on by default for non-TCP based interior routing protocols (OSPF, ISIS and EIGRP). These protocols will start operating in GR mode as soon as one side is configured.
6. TCP based protocols (BGP) must enable GR on both sides of the session and the session must be reset to enable GR. The information enabling GR is sent in the Open message for these protocols.

Nonstop Routing (NSR) is a stateful redundancy mechanism for intra chassis route processor (RP) failover.

NSR, unlike NSF with SSO:
1. Allows the routing process on the active RP to synchronize all necessary data and states with the routing protocol process on the standby RP.
2. When switchover occurs, routing process on newly active RP has all necessary data and states to continue running without requiring any help from its neighbor(s).
3. Standards are not necessary, as NSR does NOT require additional communication with protocol peers.
4. NSR is desirable in cases where a routing protocol peer doesn’t support the Cisco or IETF Graceful Restart capability exchange.
5. NSR uses more system resources due to the information transfer to the standby processor.

Deployment Considerations for SSO/NSF and NSR

1. From a high level, you need to protect the interfaces (SSO), the forwarding plane (NSF) and the control plane (GR or NSR).
2. Enabling SSO also enables NSF.
3. Each routing protocol peer needs to be examined to ensure that both its capability has been enabled and that its peer has awareness enabled.
4. While configuring OSPF with NSF, make sure all the peer devices that participate in OSPF are made OSPF NSF-aware.
5. While configuring OSPF with Nonstop Routing (NSR), peer devices do not need to be NSR capable; NSR only requires more system resources. Both NSF and NSR can be active at the same time, with NSF used as a fallback.


Configuration:
OSPF with Nonstop Forwarding
! Enabling SSO on the redundant route processors also enables NSF support
redundancy
mode sso
!
router ospf 1
! Choose the graceful restart flavor: Cisco-proprietary or IETF (RFC 3623)
nsf [cisco | ietf]


OSPF with Nonstop Routing
! Use an RP switchover (rather than a process restart) when a routing process failure occurs
nsr process-failures switchover
!
router ospf 1
! Enable NSR for this OSPF process; peers need no special support
nsr
 

Cisco Shells Out $1.2B for Meraki



Only days after unveiling the acquisition of cloud management software vendor Cloupia, Cisco Systems Inc. (Nasdaq: CSCO) has announced a deal to buy "cloud networking" specialist Meraki Networks Inc. for US$1.2 billion in cash.

Meraki has been developing its centralized, remote management capabilities since 2006 and now has a range of tools and products (Ethernet switches, security devices, Wi-Fi access points) that enable network managers to run their networks using a central, remote (or "cloud") management platform.

Meraki, which has 330 staff, more than 10,000 customers and an order book currently running at an annual rate of about $100 million, will form the core of Cisco's new Cloud Networking Group once the acquisition is complete. That is expected to happen some time in the next couple of months. According to a letter sent to staff by CEO Sanjit Biswas, Meraki had been planning an IPO but the recent takeover offer from Cisco was too good to turn down. The company had raised more than $80 million from its investors, which include Google (Nasdaq: GOOG), Sequoia Capital and DAG Ventures Management .

Why this matters

The centralized management of networks is the infrastructure and application control model that looks set to dominate in the future, so it makes sense that Cisco would want to maintain its role as a key provider of networking capabilities to enterprises and service providers by acquiring key players in this space.

What's interesting about Meraki, though, is its long-time focus on the remote management of wireless networking capabilities. Those capabilities not only make its technology increasingly relevant in an enterprise world that is grappling with the challenge of mobile security and access rights in the bring-your-own-device (BYOD) age, but make it even more relevant for Cisco as it targets mobile operators with its carrier Wi-Fi and small-cell products.

And given that Cisco has also recently acquired Wi-Fi traffic analyzer startup ThinkSmart, it seems very likely that the networking giant might still be looking to flesh out its wireless management and cloud networking portfolio further with other targeted Service Provider Information Technology (SPIT) acquisitions.
 

Wednesday, November 21, 2012

Great reasons to adopt ERP solutions on the Cloud

 
 
It’s no secret that businesses need to keep up with the latest technological developments if they are to stay competitive within their industry. Among the most recent advancements to sweep through offices around the world were cloud-based solutions (also known as SaaS, or ‘Software as a Service’). With this new standard, businesses helped themselves with the introduction of new, cloud-supported systems such as Customer Relationship Management (CRM) and Human Capital Management (HCM), both of which managed to bring obvious benefits to organisations everywhere.
 
 
Despite such obvious benefits, not all new cloud-based solutions manage to spread as quickly as they perhaps should throughout the global business ecosystem. An example of this type of relative “slow starter” can be seen in Enterprise Resource Planning (ERP). One of the main reasons often cited for the modest uptake of cloud-based ERP solutions was fear about security. Though this was always going to be a natural concern, it is one that arises from misconceptions about SaaS ERP services and how they function. Time has a way of vindicating genuinely useful technologies, and following its tentative beginnings, businesses are now beginning to realise that cloud-based ERP solutions can be even more secure than traditional on-premises ERP systems.
 
The global rise in adoption of SaaS ERP systems is only expected to continue over the upcoming years; Forrester Research even claims that it will see an increase of a staggering 21% worldwide by the year 2015. Take a look at the following clip on Epicor Express, a SaaS-based ERP solution for small businesses.
 
 
Even with this in mind, some companies remain hesitant to step into the future and reap the benefits of a SaaS ERP system. This is understandable, as the benefits are not always fully understood. However, there are many good reasons to join the growing number of companies making this change:
 
Low, manageable costs
 
People tend to think ‘upgrade’ and ‘expensive’ are synonymous; however, this is certainly not the case with SaaS solutions. Because these solutions are provided on a subscription basis, companies with smaller budgets can keep their initial capital costs low. In addition, the ERP system becomes a monthly expenditure, which allows the cost to be spread out over time. Reputable SaaS ERP providers will also give their customers peace of mind by capping any potential fee increases, saving the headache of having to “shop around” for the most competitive provider.
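As a rough illustration of how subscription pricing spreads cost over time, here is a small Python sketch comparing a hypothetical up-front on-premises purchase with a capped monthly SaaS fee; every figure is invented for the example and is not vendor pricing.

# Hypothetical comparison of on-premises vs. SaaS ERP cash flow.
# All figures are illustrative, not real vendor pricing.
upfront_license = 120_000      # one-time on-premises license plus hardware
annual_maintenance = 20_000    # yearly support contract
saas_monthly_fee = 3_000       # subscription per month
saas_annual_cap = 0.05         # provider caps fee increases at 5% per year

def on_prem_cost(years: int) -> float:
    return upfront_license + annual_maintenance * years

def saas_cost(years: int) -> float:
    total, fee = 0.0, saas_monthly_fee
    for _ in range(years):
        total += fee * 12
        fee *= 1 + saas_annual_cap   # worst-case capped increase
    return total

for y in (1, 3, 5):
    print(y, "years:", on_prem_cost(y), "on-prem vs.", round(saas_cost(y)), "SaaS")

The point is not the specific totals but the shape of the cash flow: the SaaS route trades a large initial outlay for a predictable, capped operating expense.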
 
Speedy set up, quicker value
 
One of the major advantages of a SaaS service is that it is completely owned and managed by the vendor. This means there is nothing to install and no time wasted setting up complicated software. Nowadays, the majority of quality services come ‘pre-configured’, allowing customers to “hit the ground running” and see a return on their investment much sooner.
 
No ownership, no extras
 
By using a cloud-based ERP system, a company avoids one of the most costly aspects of such a system: the hardware. An on-site ERP system requires costly peripheral resources, which weigh heavily if and when the company expands. With a SaaS system, the process is much simpler: customers simply add a new user to their existing subscription at a price that has already been specified.
 
Improved reliability
 
By specialising in a service, SaaS providers are often able to offer their customers a much greater level of reliability than those customers could achieve by “going it alone”. The guarantee of service and operations is typically set out in contractual form, allowing customers to be confident that their ERP system will be accessible more than 99% of the time.
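To put that availability figure in context, the short Python sketch below converts an SLA percentage into the maximum downtime it permits, assuming a simple 24x7 year with no carve-outs for planned maintenance.

# Convert an availability SLA into maximum allowed downtime per year.
# Assumes a plain 24x7 year; real SLAs may exclude planned maintenance windows.
HOURS_PER_YEAR = 365 * 24

def max_downtime_hours(availability_pct: float) -> float:
    return HOURS_PER_YEAR * (1 - availability_pct / 100)

for sla in (99.0, 99.5, 99.9):
    print(f"{sla}% uptime -> up to {max_downtime_hours(sla):.1f} hours of downtime per year")

Even a 99% guarantee still allows roughly 87 hours of downtime a year, which is why the wording of the service-level agreement deserves a careful read.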
 
Improved support
 
The majority of issues with any ERP system arise from how it is used rather than from the system itself. That is why quality SaaS vendors offer their customers superb support and troubleshooting services. This translates into less downtime and fewer frustrated staff.
 
Great business focus
 
Ultimately, a business needs to be able to achieve what it has set out to achieve within its chosen sector. Although ERP is undoubtedly important, it is a means to an end and should not become the sole focus of the company using it. Businesses around the world are realising that passing the burden of maintaining and operating their ERP systems to a cloud-based vendor makes sense from every angle.
 
 

Tuesday, November 20, 2012

Cisco Identity Services Engine – an independent look

Aman Diwakar, Security Solutions Architect at World Wide Technology, presents an overview of Cisco ISE on BrightTalk.


 

Monday, November 19, 2012

Digital Malls: The Next Generation of Self-Service Shopping


All the players in the U.S. retail ecosystem today—mall developers, retailers, vending operators, and consumer product manufacturers—are facing key demographic, economic, and technological changes. The “new normal” world of retailing is challenging retail players to reverse vacancy rates and sales declines, create enhanced customer experiences, reduce labor and construction costs, deepen brand differentiation, optimize small urban formats, and justify investment in innovation.




In the midst of these challenges, three emerging, technology-enabled, self-service retail trends offer the glimmer of a new opportunity.

Innovative vending machines, micro-markets, and virtual stores are developing on separate tracks today, but combined, they could create a completely new retail business model—something Cisco IBSG calls “Digital Malls.”

Unattended retailing has gone far beyond the days of “put in a dollar, get a soda.” Innovative vending machines are smart, networked devices that enable more convenient, interactive consumer experiences with video, mobile, and social capabilities. Micro-markets are unattended food stores in large workplaces where consumers pay for their own purchases at video-enabled kiosks. Virtual stores are physical e-commerce installations where consumers click on product pictures using a touchscreen or their cell phones and place orders for later home delivery.

However, as exciting as these three trends are, independently they are not enough to tackle the challenges facing the retail ecosystem.

Consumers still need retail destinations with an array of merchandise, food, and entertainment experiences to inspire repeat visits. Individual innovative vending machines, micro-markets, and virtual stores will not accomplish this or create the scale necessary to lower labor, building, or networking costs.

However, Cisco IBSG believes that these three trends could be combined to create “Digital Malls,” highly engaging self-service shopping environments placed in densely populated venues—such as airports, transit stations, shopping malls, amusement parks, stadiums, universities, hotels, fairs and festivals, frequent-flyer clubs, and large workplaces and condo complexes—where consumers have the time, need, and enthusiasm to shop.

Illustration of a Digital Mall



 Digital Malls would offer retailers and manufacturers access to shoppers in many additional distribution channels, with no or low labor costs. Digital Malls could be indoor or outdoor, stationary or mobile—an always-available shopping area or an exciting temporary installation at a major event—in a fun, highly graphic, branded environment.

Within Digital Malls, the next generation of virtual stores could offer an array of retailers as well as entertainment using gesture technology, 3D, or augmented reality to create an immersive shopping experience. And the next generation of micro-markets could offer immediate-need general merchandise and convenient services such as dry cleaning machines and remote banking.

Cisco IBSG estimates that Digital Malls could be a US$7 billion revenue opportunity across thousands of high-traffic venues, with payback of investment in two years or less. To find out more, read our Point of View and listen to our slide cast.
 

Cisco Announces Intent to Acquire Cloupia

 
Based in Santa Clara, Calif., Cloupia is a software company that automates converged data center infrastructure, allowing enterprises and service providers to speed the deployment and configuration of physical and virtual infrastructure from a single management console. Together, Cisco and Cloupia will extend the converged management benefits of the Cisco Unified Computing System (UCS) Manager and UCS Central beyond compute to include server, network, storage, and virtualization functions, simplifying the IT administrator’s operations and improving overall reliability in system deployment.
 
With the introduction of the Unified Computing System in 2009, Cisco has led the industry in moving towards converged infrastructure solutions that are managed as a whole system instead of as individual components. Unfortunately, many IT administrator tools still only solve part of the puzzle, leaving much of their work reliant on tedious manual operations. Cisco alleviates the complexity of the evolution to cloud-based IT service delivery by unifying infrastructure and allowing it to be managed as a whole. Cisco data center products allow IT administrators to rapidly provision applications across bare-metal and virtual environments and manage their entire lifecycle through automated, policy-based processes.
 
Cisco’s acquisition of Cloupia benefits Cisco’s Data Center strategy by providing single “pane-of-glass” management across Cisco and partner solutions including FlexPod, VSPEX, and Vblock. Cloupia’s products will integrate into the Cisco data center portfolio through UCS Manager, UCS Central, and Nexus 1000V, strengthening Cisco’s overall ecosystem strategy by providing open APIs for integration with a broad community of developers and partners.
 
Similar to previous acquisitions in cloud management, such as Tidal, LineSider and NewScale, the acquisition of Cloupia also complements Cisco’s Intelligent Automation for Cloud (IAC) solution.
Mergers, acquisitions and investments are a key part of Cisco’s build, buy, and partner innovation framework and support our strategy of providing best-in-class solutions for customers. The Cloupia acquisition aligns with Cisco’s strategic goals to develop innovative data center, virtualization, and cloud technologies, while also cultivating top talent. The Cloupia team will join Cisco’s Data Center Group, led by David Yen, senior vice president.
 
As we continue to use all of Cisco’s assets to drive innovation, acquisitions such as Cloupia will help bring additional expertise, new technology, and unique business models into Cisco. The Cloupia acquisition reinforces Cisco’s commitment to deliver an intelligent network by providing market leading infrastructure across the data center.
 

Thursday, November 15, 2012

MIT researchers may have solved the spectrum crunch



Researchers at MIT's Research Laboratory of Electronics said they have discovered a way to improve wireless data transmissions without adding base stations or finding more spectrum. The researchers said they figured out a way for devices to use algebra to seamlessly weave together data streams across Wi-Fi and LTE networks without dropping packets of data.


According to MIT's Technology Review, Professor Muriel Medard is leading the effort and the technology has been developed by researchers at MIT, the University of Porto in Portugal, Harvard University, Caltech and the Technical University of Munich.


Typically, a percentage of data packets are dropped due to interference or congestion when they are transmitted over a wireless network. Dropped packets cause delays and generate back-and-forth traffic on the network to replace those packets, which causes more congestion.


The MIT technology changes the way data packets are sent: instead of transmitting the packets themselves, the sender transmits algebraic equations that describe batches of packets. If a packet goes missing, the receiving device solves for the missing packet itself rather than asking the network to resend it.
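The coverage describes this only at a high level; the approach is in the spirit of random linear network coding, in which the sender transmits random linear combinations of a batch of packets and the receiver recovers the originals by solving the resulting system of equations. Below is a minimal, self-contained Python sketch of that idea over GF(2) (plain XOR), with invented packet contents; the actual coded-TCP work uses larger finite fields and far more engineering.

import random

# A batch of equal-length packets to protect (contents invented for the example).
PACKETS = [b"pktA", b"pktB", b"pktC", b"pktD"]
K = len(PACKETS)
LENGTH = len(PACKETS[0])

def encode():
    """One coded packet: a random (non-zero) XOR combination of the batch."""
    coeffs = [0] * K
    while not any(coeffs):
        coeffs = [random.randint(0, 1) for _ in range(K)]
    payload = bytearray(LENGTH)
    for c, pkt in zip(coeffs, PACKETS):
        if c:
            payload = bytearray(a ^ b for a, b in zip(payload, pkt))
    return coeffs, bytes(payload)

def decode(coded):
    """Gaussian elimination over GF(2): recover the batch from coded packets."""
    rows = [(list(c), bytearray(p)) for c, p in coded]
    for col in range(K):
        pivot = next((i for i in range(col, len(rows)) if rows[i][0][col] == 1), None)
        if pivot is None:
            raise ValueError("need more coded packets")   # not yet full rank
        rows[col], rows[pivot] = rows[pivot], rows[col]
        pc, pp = rows[col]
        for i, (c, p) in enumerate(rows):
            if i != col and c[col] == 1:
                for j in range(K):
                    c[j] ^= pc[j]
                for j in range(LENGTH):
                    p[j] ^= pp[j]
    return [bytes(rows[i][1]) for i in range(K)]

# Receiver keeps collecting coded packets until it can solve for the batch;
# no specific lost packet ever needs to be retransmitted.
received = []
while True:
    received.append(encode())
    if len(received) >= K:
        try:
            print(decode(received))   # -> [b'pktA', b'pktB', b'pktC', b'pktD']
            break
        except ValueError:
            continue

The key property the sketch demonstrates is that no particular lost packet ever has to be retransmitted: any sufficiently large set of independent combinations lets the receiver reconstruct the whole batch.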


The new technology has been tested in Wi-Fi networks at MIT. The researchers managed to boost speeds from 1 Mbps to 16 Mbps in systems where 2 percent of data packets are typically lost. In situations where 5 percent of data packets were typically lost, the bandwidth jumped from 0.5 Mbps to 13.5 Mbps.


MIT researchers said several companies have licensed the underlying technology but MIT is under nondisclosure agreements and can't reveal those firms. The licensing is being handled by the MIT/Caltech startup, Code-On Technologies.

Although the technology is still in its early stages, the improvements are viewed as a breakthrough and some experts believe it could be widely deployed within two to three years.

Wednesday, November 14, 2012

Facebook Wants to Build Dissolving Data Centers



Are cardboard computers in our future? Facebook and Purdue University are partnering to take green IT a step further by developing the first biodegradable server chassis.


Tech companies will spend a whopping $45 billion by 2016 to make data centers green, according to a report by Pike Research. But Facebook wants to take the concept of "green IT" a step further, by developing biodegradable data centers.


The social network is sponsoring an Open Compute Foundation contest with Purdue University's College of Technology to develop a more sustainable server chassis. Servers, according to Purdue, are replaced about every four years, which results in a lot of waste.


Does the contest seem a little far-fetched to you? Here’s a bit more on the compost concept behind it:


"Open Compute wants to change [the amount of waste] starting with the server chassis. These are typically made of steel, which is recyclable, but even recycling generates waste. What would happen if these chassis could be placed in compost instead?"


Purdue's participating students will receive a server to use to test new designs, and the winners will attend the Open Compute Summit to present the design and have "a chance to be a rock star in the open source hardware movement."


Should the designs be successful, could we expect to see more biodegradable tech hit the market? Given the fast-paced innovation and how quickly new tech becomes outdated, it certainly seems plausible.


Would you buy easily compostable gadgets?



Juniper Is Building Data Center Network For Terra


Terra, a global digital media company, has selected Juniper and its QFabric technology to help build its data center network. Terra draws an audience of around 100 million every month for its sports, news and entertainment content. The company has to maintain its service levels and enhance the customer experience every day, so it needed a networking vendor that could meet its criteria: a solution that would scale across multiple data centers while keeping operations simple.


Terra has chosen the QFX3500 from Juniper as an on-ramp to QFabric. Apart from simplifying operations, QFabric will also improve performance and reduce total cost of ownership (TCO).


Since selecting the QFX series, Terra has completed its data center transformation, converting its switches into nodes in a single tier. In the near future, events such as the 2014 World Cup and the 2016 Summer Olympics will put huge demands on Terra’s services.

SDN: Experimental networking tech gets a commercial boost


A new technology for tackling networking problems was given a commercial boost on Tuesday, when startup Big Switch Networks pushed out a major software-defined networking product suite.


The Open Software-Defined Networking product suite was launched by Big Switch Networks on Tuesday, alongside a large set of partners including Microsoft, Citrix, Dell, F5 and Juniper Networks.


The new products (the Big Network Controller, the Big Tap monitoring tool and the Big Virtual Switch) allow network administrators to cram more virtual machines onto servers. They do this by increasing the efficiency of the overall network and giving the servers advanced monitoring capabilities, the Palo Alto-based company said.


Software-defined networking (SDN) is causing disruption in the networking industry, as it lets businesses move many networking features off expensive networking gear and onto cheap servers. It goes hand in hand with virtualisation as a key technology for making efficient, easily configurable private and public clouds.


"SDN is the most disruptive and transformative trend to hit the networking industry in over 20 years," Guido Appenzeller, the chief executive and co-founder of Big Switch Networks, said in a statement.


For example, SDN allows businesses to change the structure of a network via its software, without having to move physical equipment around. It also gives admins greater insight into a network's traffic flow, meaning they can spot and remedy bottlenecks earlier, as well as letting them avoid proprietary networking technologies.


OpenFlow savvy

The fundamental technology beneath Big Switch Networks's suite is the OpenFlow network communications protocol — something the company knows a lot about, given that Appenzeller led the Stanford University team that developed the first version of the protocol.


OpenFlow technology lets servers communicate with networking equipment, making it possible to hive off some network gear duties to servers, allowing for both cost savings and more efficient, flexible network structures.
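As a rough, library-free illustration of the split that OpenFlow makes possible, the Python sketch below models a switch as a simple match/action flow table that controller code running on an ordinary server populates; the field names and rules are invented for the example and do not mirror Big Switch Networks' products or the OpenFlow data model exactly.

from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class Match:
    in_port: Optional[int] = None
    dst_mac: Optional[str] = None

    def hits(self, pkt: dict) -> bool:
        return ((self.in_port is None or pkt["in_port"] == self.in_port) and
                (self.dst_mac is None or pkt["dst_mac"] == self.dst_mac))

@dataclass
class FlowRule:
    match: Match
    out_port: int          # action: forward out of this port
    priority: int = 0

class Switch:
    """Forwarding element: only performs flow-table lookups."""
    def __init__(self):
        self.flow_table: list[FlowRule] = []

    def install(self, rule: FlowRule):        # invoked by the controller
        self.flow_table.append(rule)
        self.flow_table.sort(key=lambda r: -r.priority)

    def forward(self, pkt: dict) -> Optional[int]:
        for rule in self.flow_table:
            if rule.match.hits(pkt):
                return rule.out_port
        return None                            # table miss: would go to the controller

# "Controller" logic runs on an ordinary server and pushes rules to the switch.
sw = Switch()
sw.install(FlowRule(Match(dst_mac="aa:bb:cc:dd:ee:01"), out_port=2, priority=10))
sw.install(FlowRule(Match(), out_port=1, priority=0))   # default route

print(sw.forward({"in_port": 3, "dst_mac": "aa:bb:cc:dd:ee:01"}))  # -> 2
print(sw.forward({"in_port": 3, "dst_mac": "ff:ff:ff:ff:ff:ff"}))  # -> 1

In a real OpenFlow deployment the install step corresponds to the controller sending a flow-modification message to the switch; the point of the model is that the forwarding decision logic lives in software on a server rather than in the switch firmware.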


The Big Network Controller is an application platform that communicates with OpenFlow software switches embedded in hypervisors and OpenFlow-compatible network interface cards (NIC). This allows admins to use advanced software tools to manage both the physical and the virtual network.


The Big Tap provides monitoring capabilities for an SDN, while the Big Virtual Switch helps create and manage virtual network segments on top of the physical network.


Big Switch Networks's main SDN competitor is Nicira, which was acquired by VMware in the summer for over a billion dollars.


However, Nicira's products are designed to manage virtual networks rather than physical ones, giving Big Switch Networks an edge in terms of the breadth of IT environments it can cover.


All the OSDN suite products are available now, Big Switch Networks said, though it did not specify in which regions. Pricing for the Big Network Controller starts at $1,700 a month, while Big Tap begins at $500 and Big Virtual Switch at $4,200.