Network Enhancers - "Delivering Beyond Boundaries"

Sunday, February 22, 2015

Cisco Stretching the ACI Networking Fabric


The latest release of the APIC software includes a feature that enables the ACI Fabric to link switches at sites up to 18.6 miles apart.


Cisco Systems has seen strong adoption over the past year of its Application Centric Infrastructure technology, the vendor's answer to the growing network virtualization movement.

In the last financial quarter, Cisco saw the number of customers for the Application Centric Infrastructure (ACI) and the foundational Nexus 9000 switches increase to more than 1,700, and revenues for the switches increase 350 percent. It was enough to convince an ebullient CEO John Chambers to declare that Cisco was winning the contest in a software-defined networking (SDN) market that was supposed to threaten the company's future.

Now Cisco is enabling customers to stretch the ACI Fabric between data centers that are almost 20 miles apart. In the latest software release (1.0(3f)) of the Application Policy Infrastructure Controller (APIC) for the Nexus 9000 switches, Cisco is offering a feature—the Stretched Fabric—that enables the leaf and spine switches that make up a fabric to be located up to 30 kilometers (about 18.6 miles) apart.

ACI fabrics are normally deployed at a single site, with a full mesh design connecting every leaf switch to every spine switch in the fabric for improved throughput and convergence, Ravi Balakrishnan, senior product marketing manager for data center solutions at Cisco, wrote in a post on the company blog. When the fabric is stretched across sites, however, full mesh connectivity is typically expensive or impossible, given the lack of fiber connections between the sites or the cost of connecting every switch across them.

The Stretched Fabric capabilities leverage transit leaf switches, which are used to connect the spine switches at each site, without the need for special requirements or configurations, according to Balakrishnan.

According to the Cisco website, when using the ACI Stretched Fabric capabilities, customers should provision at least two APIC nodes at one of the sites—usually the main site or the one with the most switches—and a third APIC node at the other site. The APIC nodes automatically synchronize and provide full controller functions to both sites, according to Cisco officials.
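
For readers who want to poke at the fabric from the API side, the sketch below is a minimal, unofficial example of asking an APIC which switches have registered with its fabric. The hostname, credentials and the use of the fabricNode object class are illustrative assumptions; check the APIC REST API documentation for your software release before relying on any of it.

    # Minimal sketch: list the nodes an APIC knows about in its fabric.
    # Hostname, credentials and attribute names are assumptions for illustration.
    import requests

    APIC = "https://apic1.example.com"          # hypothetical APIC address
    LOGIN = {"aaaUser": {"attributes": {"name": "admin", "pwd": "secret"}}}

    session = requests.Session()
    session.verify = False                      # lab convenience only

    # Authenticate; the APIC returns a session cookie reused on later calls.
    session.post(APIC + "/api/aaaLogin.json", json=LOGIN)

    # Query the fabricNode class for every registered leaf, spine and controller.
    resp = session.get(APIC + "/api/node/class/fabricNode.json")
    for obj in resp.json().get("imdata", []):
        attrs = obj["fabricNode"]["attributes"]
        print(attrs["name"], attrs["role"], attrs["fabricSt"])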

"The key benefits of stretched fabric include workload portability and VM [virtual machine] mobility," Balakrishnan wrote. "The stretched ACI fabric behaves the same way as a regular ACI fabric, supporting full VMM [Virtual Machine Manager] integration. For example, one VMware vCenter operates across the stretched ACI fabric sites. The ESXi hosts from both sites are managed by the same vCenter and Distributed Virtual Switch (DVS). They are stretched between the two sites."

SDN and network-functions virtualization (NFV) are designed to help businesses and service providers that are trying to manage the rapidly changing network demands brought by such trends as mobility, big data and the cloud. Organizations are looking for the ability to build networks that are more automated, scalable, programmable and affordable than traditional infrastructures.

SDN and NFV do this by putting the control plane and network tasks into software that can run on less-expensive commodity hardware, such as white boxes. Many industry observers have said this threatens the core networking businesses of such companies as Cisco and Juniper Networks, which sell expensive networking gear.

However, white boxes bring their own challenges in such areas as management and integration, convincing some OEMs—such as Dell, Juniper and, most recently, Hewlett-Packard—to create branded open switches that can run software from a range of third parties, a trend Gartner analysts have termed "brite boxes."

Cisco introduced ACI in 2013, announcing a strategy that aims to get the best performance out of applications by using a combination of software and hardware to create networking infrastructures that can be optimized in both physical and virtual environments and that can leverage partnerships with other vendors. Chambers stressed the architectural aspect of ACI, which integrates networking, servers, storage appliances and security capabilities in a way that white-box systems do not.

"We are pulling away from our competitors and leading in both the SDN thought leadership and customer implementations," he said last week. "ACI and APIC will become the cornerstone of the next generation of networking architectures for many years."

Saturday, February 21, 2015

Juniper Unveils New Routers for Mobile Networks


The devices, as well as new capabilities in the Junos Space management platform, aim to help carriers improve bandwidth and create new services.


Juniper Networks is rolling out new routers and features in its Junos Space network management software to help mobile service providers that are under pressure to supply more bandwidth and to spin out new services to customers more quickly.

Juniper officials on Feb. 18 unveiled the new ACX500 and ACX5000 routers, which offer such capabilities as improved accuracy in detecting signal hand-offs, improved routing of small cell signals, embedded virtualized functions and the aggregation of multiple networks, from small cells and macro cells to residential and commercial broadband networks.

New features in the Junos Connectivity Services Director and Cross Provisioning Platform software enable the automated provisioning and unified management of Carrier Ethernet services on multivendor networks, which essentially reduces the amount of time spent deploying services and decreases the overall cost of operation, according to Juniper officials.

The new offerings come as mobile service providers are being pushed to supply greater network bandwidth and to more quickly provision services for their customers. They are addressing those demands to some degree with small cells, which can be used to deliver greater bandwidth and faster services. However, the use of these devices creates the need to better aggregate them and make management easier via a stronger backhaul, officials said.


Juniper is looking to enable service providers to build better backhaul for small cells and improve Ethernet deployments through the integration of both the physical infrastructure and virtual services, according to Mike Marcellin, senior vice president of marketing and strategy at Juniper.

"These new solutions enhance the mobile subscriber experience and improve profit margins for service providers by cost-effectively meeting operational challenges with zero-touch rapid deployment and delivering seamless access for new, high-bandwidth services," Marcellin said in a statement.

The ACX500 is designed to reduce capacity issues that are arising with increasingly complex LTE networks. At the same time, the router comes with an integrated GPS receiver, which officials said will reduce the number of dropped calls and improve location-based services, such as directions and E911 emergency calls. In addition, the router's capabilities will improve the quality of video streaming, Juniper officials said. Small cell routing will be improved via integrated security and high-precision timing with less than 0.5-microsecond phase accuracy, they said.

The router, which will be available in the second quarter, has no fans and is rugged enough to be used indoors or outdoors.

"The ACX500 routers also support Juniper Networks Junos OS, extensive Layer 2 and Layer 3 features, IP/MPLS with traffic engineering support, rich network management, fault management, service monitoring and OAM capabilities," a Juniper blogger noted in a post on the company site. "By converging all of these features into a single device, the ACX500 line of routers delivers a level of scalability and reliability that will improve your customers' overall quality of experience while lowering costs for deploying, maintaining and updating the network infrastructure."

The ACX5000, which will be available this quarter, is a dense, terabit-scale Carrier Ethernet router that can aggregate multiple networks. It offers Gigabit Ethernet, 10GbE and 40GbE capabilities and gives carriers an efficient Metro Ethernet platform that will help cut operating expenses.
The router supports Kernel Virtual Machine (KVM)-compliant network-functions virtualization (NFV) that enables carriers to quickly create such services as network analytics and security services, as well as virtualized network functions that are customized for subscribers.

Tuesday, February 17, 2015

REST APIs in SDN: An introduction for network engineers


Network engineers and administrators are at a crossroads. On one hand, protocols like BGP, IS-IS and MPLS still play a critical role in networks, so it's vital to maintain a base knowledge of traditional networking. On the other hand, software-defined networking is here to stay, and new skills like network programmability present a slew of new technologies to conquer. This article is designed to help network professionals by diving into a potentially cryptic component of SDN and network programmability: application programming interfaces (APIs).


A network engineer's view of APIs

Simply put, an API is an interface presented by software (such as a network operating system) that provides the capability to collect information from or make a change to an underlying set of resources. The inner workings of software may seem alien, but the concept is quite familiar to us networking pros. Let's use the Simple Network Management Protocol (SNMP) as an analogy. SNMP provides the means to request or retrieve data like interface statistics from forwarding elements. SNMP also allows applications to write configuration to networking devices. Although that's not a common use case for SNMP, it's helpful to keep in mind because APIs provide this same functionality for a wider array of software applications.
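
To make the analogy concrete, here is a rough sketch showing an SNMP read next to a REST read. It assumes a device at 192.0.2.1 running an SNMPv2c agent, a purely hypothetical controller URL, and the pysnmp and requests Python libraries; treat it as an illustration of the parallel rather than a recipe for any particular product.

    # SNMP and REST side by side: both are interfaces for reading state from
    # an underlying system. The addresses and the REST path are illustrative.
    import requests
    from pysnmp.hlapi import (getCmd, SnmpEngine, CommunityData,
                              UdpTransportTarget, ContextData,
                              ObjectType, ObjectIdentity)

    # SNMP GET: pull an interface counter from a forwarding element.
    error_ind, error_stat, error_idx, var_binds = next(getCmd(
        SnmpEngine(),
        CommunityData('public', mpModel=1),        # SNMPv2c community string
        UdpTransportTarget(('192.0.2.1', 161)),
        ContextData(),
        ObjectType(ObjectIdentity('IF-MIB', 'ifInOctets', 1))))
    for var_bind in var_binds:
        print(' = '.join(item.prettyPrint() for item in var_bind))

    # REST GET: pull comparable operational data from a software system
    # (an SDN controller, for example) over HTTP.
    resp = requests.get('https://controller.example.com/api/interfaces/1/stats')
    print(resp.json())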


APIs in the context of SDN

The definition of an API may not, on its own, be overly useful for a network engineer, so let's take that definition and examine it in the context of SDN. In an open SDN model, a common interface discussed is the northbound interface (NBI). The NBI is the interface between software applications, such as operational support systems, and a centralized SDN controller.


Getting hands-on with SDN and REST APIs

Since many engineers learn new technologies best by getting direct experience with them, let's look at some next steps to get our hands on some REST APIs. The following three steps outline how to get up and running.

First, if you have no previous programming experience, acquire a tool to generate REST API calls. The Chrome browser, for example, has multiple plug-ins to generate REST API messages. These include Postman and the Advanced REST Client. Firefox has the RESTClient add-on for the same functionality. For those more comfortable with a command-line interface, the curl utility may also be used.

Second, get access to an SDN controller or a controller-like platform that supports REST APIs. Ryu and ONOS are open source options that fit the bill. For those wanting to align to a particular vendor, there are options such as NEC's ProgrammableFlow Controller or Juniper's OpenContrail.

Lastly, dig up the relevant REST API documentation for the SDN controller or platform, such as the documentation for the Ryu controller or for OpenContrail. Although the formatting of the documentation varies, look for the following items: the URI string for the requested resource, the HTTP method (e.g., GET, POST, PUT, DELETE), and the JSON/XML payload and/or parameters. Both the Ryu and OpenContrail documentation provide examples illustrating how to send a valid REST API message.
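
As a concrete illustration of those three items (URI, HTTP method, JSON payload), the sketch below talks to Ryu's ofctl_rest application. It assumes Ryu is running locally with ofctl_rest loaded on its default REST port (8080) and that at least one OpenFlow switch is connected; the match and action values are made up for the example.

    # Sketch of a short REST API conversation with a local Ryu controller.
    import requests

    BASE = 'http://127.0.0.1:8080'

    # HTTP GET: the URI /stats/switches returns the datapath IDs of the
    # switches currently connected to the controller.
    switches = requests.get(BASE + '/stats/switches').json()
    print('Connected switches:', switches)

    # HTTP POST with a JSON payload: install a flow entry on the first switch
    # that forwards traffic arriving on port 1 out of port 2.
    flow = {
        'dpid': switches[0],
        'priority': 100,
        'match': {'in_port': 1},
        'actions': [{'type': 'OUTPUT', 'port': 2}],
    }
    resp = requests.post(BASE + '/stats/flowentry/add', json=flow)
    print('Flow install returned HTTP', resp.status_code)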

To sum up, the goal here has been to demystify APIs in the context of software-defined networks. Why? Because SDN presents new technologies and novel ways for networking professionals to think about and solve networking challenges.



Saturday, February 14, 2015

Introducing “6-pack”: the first open hardware modular switch


As Facebook’s infrastructure has scaled, we’ve frequently run up against the limits of traditional networking technologies, which tend to be too closed, too monolithic, and too iterative for the scale at which we operate and the pace at which we move. Over the last few years we’ve been building our own network, breaking down traditional network components and rebuilding them into modular disaggregated systems that provide us with the flexibility, efficiency, and scale we need.

We started by designing a new top-of-rack network switch (code-named “Wedge”) and a Linux-based operating system for that switch (code-named “FBOSS”). Next, we built a data center fabric, a modular network architecture that allows us to scale faster and easier. For both of these projects, we broke apart the hardware and software layers of the stack and opened up greater visibility, automation, and control in the operation of our network.

But even with all that progress, we still had one more step to take. We had a TOR, a fabric, and the software to make it run, but we still lacked a scalable solution for all the modular switches in our fabric. So we built the first open modular switch platform. We call it “6-pack.”






The platform

The “6-pack” platform is the core of our new fabric, and it uses “Wedge” as its basic building block. It is a full mesh non-blocking two-stage switch that includes 12 independent switching elements. Each independent element can switch 1.28Tbps. We have two configurations: One configuration exposes 16x40GE ports to the front and 640G (16x40GE) to the back, and the other is used for aggregation and exposes all 1.28T to the back. Each element runs its own operating system on the local server and is completely independent, from the switching aspects to the low-level board control and cooling system. This means we can modify any part of the system with no system-level impact, software or hardware. We created a unique dual backplane solution that enabled us to create a non-blocking topology.
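
A quick back-of-the-envelope check of those figures (the numbers come from the post itself; the arithmetic below only shows how they fit together):

    # How the "6-pack" capacity figures add up.
    elements = 12                      # independent switching elements
    per_element_tbps = 1.28            # Tbps switched by each element

    front_gbps = 16 * 40               # 16 x 40GE toward the racks = 640G
    back_gbps = 16 * 40                # matching 640G toward the fabric

    print('Per element:', front_gbps + back_gbps, 'Gbps')         # 1280 Gbps = 1.28T
    print('Whole chassis:', elements * per_element_tbps, 'Tbps')  # 15.36 Tbps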




We run our networks in a split control configuration. Each switching element contains a full local control plane on a microserver that communicates with a centralized controller. This configuration, often called hybrid SDN, provides us with a simple and flexible way to manage and operate the network, leading to great stability and high availability.

The only common elements in the system are the sheet metal shell, the backplanes, and the power supplies, which makes it very easy for us to change the shell to create a system of any radix with the same building blocks.

Below you can see the high-level “6-pack” block diagram and the internal network data path topology we picked for the “6-pack” system.



The line card

If you’re familiar with “Wedge,” you’ll recognize the central switching element: on that platform it is used as a standalone system utilizing only 640G of its switching capacity. On the “6-pack” line card we leveraged all the “Wedge” development efforts (hardware and software) and simply added the backside 640Gbps Ethernet-based interconnect. The line card has an integrated switching ASIC, a microserver, and server support logic to make it completely independent and to make it possible for us to manage it like a server.






The fabric card

The fabric card is a combination of two line cards facing the back of the system. It creates the full mesh locally on the fabric card, which in turn enables a very simple backplane design. For convenience, the fabric card also aggregates the out-of-band management network, exposing an external interface for all line cards and fabrics.






Bringing it together

With “6-pack,” we have created an architecture that enables us to build any size switch using a simple set of common building blocks. And because the design is so open and so modular – and so agnostic when it comes to switching technology and software – we hope this is a platform that the entire industry can build on. Here's what we think separates “6-pack” from the traditional approaches to modular switches:




“6-pack” is already in production testing, alongside “Wedge” and “FBOSS.” We plan to propose the “6-pack” design as a contribution to the Open Compute Project, and we will continue working with the OCP community to develop open network technologies that are more flexible, more scalable, and more efficient.


Thursday, February 12, 2015

Preparing for the Data Center of the Future



Data center operators are under more pressure than ever to deliver data as quickly and reliably as possible while balancing demands for higher computing power and efficiency. Meanwhile, their use of virtualization, cloud architectures and security techniques, as well as software-defined networking and storage, has given rise to increasingly complex environments.

Given this already challenging environment, the term “future-proofing” is often chalked up to vendor-speak, meant to scare customers into buying oversized equipment for just-in-case scenarios that may actually never happen.

However, future-proofing, or the attempt to anticipate future demands, is an important element of data center management and planning. New IT devices are coming to market with unforeseen capabilities at record volume and pace, consuming more and more data and making it seemingly impossible to anticipate what the future will bring and how it will affect the data center.

Future-Proof Without the Cost

There are ways to future-proof the data center without having to make costly investments, while still ensuring that IT infrastructure can adapt and change over time to meet evolving business needs – even in a rapidly changing, unpredictable landscape.

Through data center infrastructure management (DCIM) software and prefabricated, modular infrastructure, data center operators can build adaptability and flexibility into existing and new facilities. Data centers can anticipate and respond to current and future needs by:


  • Accounting for increasing demands for processing power and storage capacity
  • Applying a more sophisticated level of monitoring, analysis and management
  • Enabling system management integration between facilities and IT
  • Providing smart energy management and increased control capabilities


This allows data centers to meet evolving company needs, future technologies and new environmental factors, while extending the service life of existing infrastructure.

Taking Advantage of Prefabricated Data Centers

In January 2014, we looked at the reasons why prefabricated, modular data center infrastructure can help owners and operators meet challenges related to traditional data center builds, such as having too many parties involved, the complexity of long-duration builds, quality and cost inconsistencies and incompatibility of equipment.

However, the biggest advantage of prefabricated, modular data center infrastructure in terms of future-proofing is closely tied to its ability to easily scale capacity up or down. Not only does this reduce both upfront capital and ongoing operational expenses (CAPEX and OPEX), but owners and operators can quickly add power and cooling capacity to meet increasing demands and actual business needs.

This is compared to the traditional method of installing the power and cooling infrastructure as part of the data center building and sizing the facilities according to potential maximum future needs – an almost impossible (and costly) task that uses up valuable real estate, increases utility bills and decreases efficiency – without truly guaranteeing that estimates will ever match actual requirements.

What if facility power, power distribution, cooling and IT physical infrastructure were not built into the building, but instead were prefabricated building blocks that could be deployed and changed as needed throughout the life cycle of the data center? Owners and operators could deploy prefabricated IT building blocks and raise density or availability levels by adding extra matching power and cooling building blocks, or even swap out AC-powered prefabricated building blocks for DC-powered ones, which could become the standard in the future.

Monitor and Manage the Data Center of the Future

Intelligent, informed data center planning with an eye toward future needs can help owners and operators avoid being caught off-guard by unanticipated changes within the IT environment or changing business landscapes. Planning is most effective when decisions are informed by past and real-time data collected from IT and facility systems via DCIM solutions, which turn that data into actionable insight.

The ability to use DCIM data as a benchmarking tool in the planning process is perhaps the most effective method for preparing for future data center needs. This is because DCIM solutions can aggregate data collected from both the physical building structure and IT equipment within the facility – from the building down to the server level – not only bridging the all-too-common gap between facilities and IT, but also allowing owners and operators to identify trends, develop action plans and prepare for potential problems or needs down the line.

DCIM solutions also show how adding, moving or changing physical equipment can affect operations, thereby providing accurate insight for common planning questions and optimizing existing infrastructure capacities.

By increasing data center flexibility through prefabricated, modular data center infrastructure and DCIM software, data center operators can transition their facility from a cost center to a business driver, enabling organizations to better mitigate risk and prepare for the future.

Wednesday, February 11, 2015

Why Cisco is Warming to Non-ACI Data Center SDN



There are three basic ways to do software defined networking in a data center. One way is to use OpenFlow, an SDN standard often criticized for poor scalability. Another, much more popular way, is to use virtual network overlays. And the third is Cisco’s way.

The networking giant proposed its proprietary Application Centric Infrastructure as an alternative to open-standards-based data center SDN in 2013. It is similar in concept to virtual overlays but works on Cisco gear only.

Some Cisco switches support OpenFlow, but not the current-generation Nexus 9000 line, although the company has said it is planning to change that sometime in the future. Now, however, there’s a new, more interesting twist to Cisco’s SDN story.

Last week the company announced that Nexus 9000 switches will soon (before the end of the month) support an open protocol called BGP EVPN. Older-generation lines will also support it in the near future. The aim is essentially to make it easier to implement virtual overlays on the latest Cisco switches. Theoretically, support of the protocol would also open doors for third-party SDN controllers to be used to manage these environments.

The move indicates a realization on Cisco’s part that ACI is still a tough sell for many customers, and that interest in Nexus 9000 switches is there, while interest in going all-in with its proprietary SDN is trailing behind. The company’s own sales figures confirm that reality. According to Cisco, out of the 1,000 or so Nexus 9000 customers, only about 200 have bought the Application Policy Infrastructure Controller, the SDN controller for ACI environments, since it started shipping in August.

“This is an interesting move by Cisco in that [the open protocol-based overlay technology] is a difference from their ACI stack,” Mike Marcellin, senior vice president of strategy and marketing at Cisco rival Juniper Networks, said. “Cisco is acknowledging that ACI is not for everybody, and [that] they need to think a little more broadly.”

A Control Plane for VXLAN

BGP EVPN is a control plane technology for a specific type of virtual overlays: VXLAN. The VXLAN framework was created by Storvisor, Cumulus Networks, Arista, Broadcom, Cisco, VMware, Intel, and Red Hat. While it describes creation of the overlay, it does not describe a control plane, which is where BGP EVPN comes in.

EVPN is a protocol generally used to connect multiple data centers over Wide Area Networks. It enables a service provider to create multiple virtual connections between sites for a number of customers over a single physical network while keeping each customer’s traffic private.

Cisco, Juniper, Huawei, Verizon, Bloomberg, and Alcatel-Lucent have authored a standard for using EVPN as a network virtualization overlay. An NVO is essentially a way to isolate traffic between virtual machines belonging to lots of different tenants in a data center. It is powerful because it enables private connectivity between VMs sitting in different parts of a data center or in entirely different data centers. It also enables entire VMs to move from one host device to another.

The BGP protocol has played a key role in enabling the Internet. Disparate systems operated by different organizations use it to exchange routing and reachability information. In other words, BGP is the language used by devices on the Internet to tell other devices that they exist and how they can be reached.

EVPN relies on BGP to distribute control plane traffic, to discover “Provider Edge” devices (devices that connect the provider’s network to the customer’s network), and to ensure that traffic meant to stay within a specific EVPN instance doesn’t venture outside.

Making VXLAN on Cisco Easier

Customers have been able to set up VXLAN overlays on Cisco switches before, but without BGP EVPN it was a tedious, complicated task, Gary Kinghorn, product marketing manager for Cisco’s Data Center Solutions Group, said.

The VXLAN spec requires users to enable multicast routing on the core switches in their networks. With multicast-based flood-and-learn, when one VM needs to reach another whose location isn’t yet known, copies of the packet are sent to every switch participating in that virtual network, and only one of them has the intended recipient behind it. BGP EVPN introduces an alternative, more scalable approach: each switch advertises the hosts attached to it via BGP, so the fabric keeps track of where each VM resides and a single copy of a packet can be sent directly to its destination.
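
A toy model of that difference, written from the description above rather than from any Cisco code, is sketched below: with an EVPN-learned table a frame is sent to exactly one VTEP, while an unknown destination falls back to replication toward every other VTEP in the segment.

    # Toy illustration of flood-and-learn versus an EVPN-populated table.
    vteps = ['10.0.0.1', '10.0.0.2', '10.0.0.3']      # VTEPs sharing one VNI

    # MAC-to-VTEP mappings learned from BGP EVPN advertisements.
    evpn_table = {
        '00:11:22:33:44:55': '10.0.0.2',
        '00:aa:bb:cc:dd:ee': '10.0.0.3',
    }

    def destinations(dst_mac, local_vtep='10.0.0.1'):
        vtep = evpn_table.get(dst_mac)
        if vtep:                                      # location known: one copy
            return [vtep]
        return [v for v in vteps if v != local_vtep]  # unknown: flood copies

    print(destinations('00:11:22:33:44:55'))   # ['10.0.0.2']
    print(destinations('de:ad:be:ef:00:01'))   # ['10.0.0.2', '10.0.0.3']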

The new control plane also gels with OpenStack, so users of the popular open source cloud platform can use it to provision and automate their virtual networks.

While BGP is not a requirement for using EVPN, most people that adopt EVPN will use BGP, because it’s so widespread and well known, Kinghorn said.

Juniper No Stranger to EVPN

Juniper has supported EVPN for some time now. “It’s always been our preferred data center connectivity solution, because it is standards-based,” Marcellin said. “We support EVPN today on our routing platform. We basically built it into Junos.” Junos is the company’s network operating system.

Juniper has its own SDN controller, called Contrail, which also leverages BGP. While it can potentially be used with VXLAN overlays, it is really an alternative overlay technology. Contrail can interoperate with Cisco gear or even create overlays on top of Cisco switches, he added.

Interest is growing in using the combination of EVPN and VXLAN, although Marcellin hasn’t seen widespread adoption at this point. “It’s actually a pretty elegant way to solve basic challenges some people face.”

In Nascent Market, Variety is a Good Thing

By adding support for EVPN, Cisco is broadening its potential reach in the growing SDN market. This is a new area, and it’s too early to tell which of the various technological approaches will ultimately see the most use. While it may no longer have the reputation of a cutting-edge tech company, Cisco still has the biggest installed base among its rivals in the data center market, and broadening the variety of data center SDN approaches it supports only puts it in a better position to preserve its market share in this quickly changing environment.

Monday, February 9, 2015

VNI Mobile Forecast



Networks are an essential part of business, education, government, and home communications. Many residential, business, and mobile IP networking trends are being driven largely by a combination of video, social networking, and advanced collaboration applications, termed visual networking. The Cisco Visual Networking Index (VNI) is our ongoing effort to forecast and analyze the growth and use of IP networks worldwide.


VNI Mobile Forecast

In February 2015, Cisco released the Cisco VNI Global Mobile Data Traffic Forecast, 2014 - 2019. Global highlights from the updated study include the following projections:

By 2019:

  • There will be 5.2 billion global mobile users, up from 4.3 billion in 2014
  • There will be 11.5 billion mobile-ready devices and connections, more than 4 billion more than there were in 2014
  • The average mobile connection speed will increase 2.4-fold, from 1.7 Mbps in 2014 to 4.0 Mbps by 2019
  • Global mobile IP traffic will reach an annual run rate of 292 exabytes, up from 30 exabytes in 2014 (see the quick conversion below)
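
For context, converting those traffic figures (which come from the forecast itself) into a monthly rate and a rough annual growth rate is simple arithmetic:

    # Derived figures only; the inputs are the forecast numbers quoted above.
    traffic_2014_eb_per_year = 30
    traffic_2019_eb_per_year = 292

    print('2019 monthly run rate: %.1f EB' % (traffic_2019_eb_per_year / 12))
    growth = traffic_2019_eb_per_year / traffic_2014_eb_per_year
    cagr = growth ** (1 / 5) - 1
    print('About %.1fx growth over five years (~%.0f%% per year)' % (growth, cagr * 100))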


VNI Complete Forecast

In June 2014, Cisco released the complete VNI Global IP Traffic Forecast, 2013 – 2018. Global highlights from the updated study include the following projections:

  • By 2018, there will be nearly four billion global Internet users (more than 51 percent of the world's population), up from 2.5 billion in 2013
  • By 2018, there will be 21 billion networked devices and connections globally, up from 12 billion in 2013
  • Globally, the average fixed broadband connection speed will increase 2.6-fold, from 16 Mbps in 2013 to 42 Mbps by 2018
  • Globally, IP video will represent 79 percent of all traffic by 2018, up from 66 percent in 2013





Cisco offers ACI alternative for Nexus 9000 switches



Cisco is adding a new control plane capability to its Nexus 9000 switches for customers not yet opting for or needing a full-blown application policy infrastructure.

Cisco’s BGP Control Plane for VXLAN is designed to appeal to operators of multitenant clouds looking for familiar BGP routing protocol features with which to scale their networks and make them more flexible for the demands of cloud networking. VXLAN, which extends Layer 2 segmentation to 16 million virtual network segments, does not specify a control plane and relies on a flood-and-learn mechanism for host and endpoint discovery, which can limit scalability, Cisco says.

BGP Control Plane for VXLAN can also serve as an alternative to Cisco’s Application Centric Infrastructure (ACI) control plane for the Nexus 9000s. The ACI fabric is based on VXLAN routing and an application policy controller called Application Policy Infrastructure Controller (APIC).

“This is definitely an alternative deployment model,” said Michael Cohen, director of product management in Cisco’s Insieme Networks Business Unit. “It’s a lighter weight (ACI) and some customers will just use this.”

BGP Control Plane for VXLAN runs on the standalone mode versions of the Nexus 9000, which requires a software upgrade to operate in ACI mode.

Cohen sidestepped questions on whether Cisco would now offer another controller just for the BGP Control Plane for VXLAN environments in addition to the ACI APIC and APIC Enterprise Module controllers it now offers.

Cisco says BGP Control Plane for VXLAN will appeal to customers who do not want to deploy multicast routing or who have scalability concerns related to flooding. It removes the need for multicast flood-and-learn to enable VXLAN tunnel overlays for network virtualization.

The new control plane uses the Ethernet virtual private network (EVPN) address-family extension of Multiprotocol BGP to distribute overlay reachability information. EVPN is a Layer 2 VPN technology that uses BGP as a control-plane for MAC address signaling / learning and VPN endpoint discovery.

The EVPN address family carries both Layer 2 and 3 reachability information, which allows users to build either bridged overlays or routed overlays. While bridged overlays might be simpler to deploy, routed ones are easier to scale out, Cisco says.
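
To illustrate the bridged-versus-routed distinction, the snippet below uses a simplified, made-up representation of an EVPN MAC/IP advertisement (it is not the actual BGP message format): a bridged overlay needs only the MAC, while a routed overlay carries the host IP as well.

    # Simplified stand-in for EVPN Type-2 (MAC/IP) reachability information.
    bridged_entry = {
        'route_type': 2,
        'mac': '00:11:22:33:44:55',
        'ip': None,                    # bridged overlay: Layer 2 only
        'l2_vni': 10100,
        'next_hop_vtep': '10.0.0.2',
    }

    routed_entry = dict(bridged_entry, ip='192.168.10.5', l3_vni=50001)

    for entry in (bridged_entry, routed_entry):
        kind = 'routed' if entry['ip'] else 'bridged'
        print(kind, 'overlay:', entry['mac'], entry['ip'], '->', entry['next_hop_vtep'])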

BGP authentication and security constructs provide more secure multitenancy, Cisco says, and BGP policy constructs can enhance scalability by constraining route updates where they are not needed.

The BGP Control Plane for VXLAN now allows the Cisco Nexus 9300 and 9500 switches to support VXLAN in both multicast flood-and-learn and the BGP-EVPN control plane. Cisco says dual capability allows resiliency in connectivity for servers attached to access or leaf switches with efficient utilization of available bandwidth.

The 9300 leaf switch can also route VXLAN overlay traffic through a custom Cisco ASIC, which the company touts as a benefit over Broadcom Trident II-based platforms from competitors such as Arista. VXLAN routing at the leaf allows customers to bring the boundary between Layer 2 and Layer 3 overlays down to the leaf/access layer, which Cisco says facilitates a more scalable design, contains network failures, enables transparent mobility, and offers better abstraction of connectivity and policy.

Cisco says BGP Control Plane for VXLAN works with platforms that are consistent with the IETF draft for EVPN. Several vendors, including Juniper and Alcatel-Lucent, have implemented or have plans to implement EVPN in network virtualization offerings. AT&T and Verizon are co-authors of some of the IETF drafts on this capability.

BGP Control Plane for VXLAN is available now on the Nexus 9300 and 9500 switches. It will be available on the Cisco Nexus 7000 switches and ASR 9000 routers in the second quarter.



Wednesday, February 4, 2015

The Role of “VNFaaS”


The cloud and NFV have a lot in common.  Most NFV is expected to be hosted in the cloud, and many of the elements of NFV seem very “cloud-like”.  These obvious similarities have been explored extensively, so I’m not going to bother with them.  Are there any other cloud/NFV parallels, perhaps some very important ones?  Could be.

NFV is all about services, and the cloud is all about “as-a-service”, but which one?  Cloud computing in IaaS form is hosted virtualization, and so despite the hype it’s hardly revolutionary.  What makes the cloud a revolution in a multi-dimensional way is SaaS, software-as-a-service.  SaaS displaces more costs than IaaS and requires less technical skill on the part of the adopter.  With IaaS alone, it will be hard to get the cloud to 9% of IT spending, while with SaaS and nothing more you could get to 24%.  With “platform services” that create cloud-specific developer frameworks, you could go a lot higher.

NFV is a form of the cloud.  It’s fair to say that current conceptions of function hosting justified by capex reductions are the NFV equivalent of IaaS, perhaps doomed to the same low level of penetration of provider infrastructure spending.  It’s fair to ask whether there’s any role for SaaS-like behavior in NFV, perhaps Virtual-Network-Function-as-a-Service, or VNFaaS.

In traditional NFV terms we create services by a very IaaS-like process.  Certainly for some services that’s a reasonable approach.  Could we create services by assembling “web services” or SaaS APIs?  If a set of VNFs can be composed, why couldn’t we compose a web service that offered the same functionality?  We have content and web and email servers that support a bunch of independent users, so it’s logical to assume that we could create web services to support multiple VNF-like experiences too.

At the high level, it’s clear that VNFaaS elements would probably have to be multi-tenant, which means that the per-tenant traffic load would have to be limited.  A consumer-level firewall might be enough to tax the concept, so what we’d be talking about is representing services of a more transactional nature, the sort of thing we already deliver through RESTful APIs.  We’d have to be able to separate users through means other than virtualization, of course, but that’s true of web and mail servers today and it’s done successfully.  So we can say that for at least a range of functions, VNFaaS would be practical.

From a service model creation perspective, I’d argue that VNFaaS argues strongly for my often-touted notion of functional orchestration.  A VNFaaS firewall is a “Firewall”, and so is one based on a dedicated VNF or on a real box.  We decompose the functional abstraction differently for each of these implementation choices.  So service modeling requirements for VNFaaS aren’t really new or different; the concept just validates function/structure separation as a requirement (one that sadly isn’t often recognized).
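
The sketch below is my own toy rendering of that function/structure separation, not something drawn from an NFV specification: one functional abstraction, several interchangeable decompositions.

    # One abstract "Firewall" function, three structural implementations.
    from abc import ABC, abstractmethod

    class Firewall(ABC):
        """Functional abstraction referenced by the service model."""
        @abstractmethod
        def deploy(self, tenant):
            ...

    class ApplianceFirewall(Firewall):
        def deploy(self, tenant):
            return 'configure a dedicated physical box for ' + tenant

    class VnfFirewall(Firewall):
        def deploy(self, tenant):
            return 'instantiate a per-tenant virtual machine for ' + tenant

    class VnfaasFirewall(Firewall):
        def deploy(self, tenant):
            # Multi-tenant: nothing is instantiated, the tenant is simply
            # registered against an always-on shared service endpoint.
            return 'bind ' + tenant + ' to the shared firewall service'

    # The orchestrator picks a decomposition; the service model above it only
    # ever refers to "Firewall".
    for implementation in (ApplianceFirewall(), VnfFirewall(), VnfaasFirewall()):
        print(implementation.deploy('tenant-42'))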

Managing a VNFaaS element would be something like managing any web service, meaning that you’d either have to provide an “out-of-band” management interface that lets you ask a system “What’s the status of VNFaaS-Firewall?” or send the web service for the element a management query as a transaction.  This, IMHO, argues in favor of another of my favorite concepts, “derived operations”, where management views are synthesized by running a query against a big-data repository in which VNFaaS elements and other components have their status stored.  That way, the fact that a service component had to be managed in what would be, in hardware-device terms, a peculiar way wouldn’t matter.
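
A minimal sketch of that "derived operations" idea, assuming nothing more than an in-memory list standing in for the status repository:

    # Management views synthesized from a shared status repository rather than
    # polled from each element directly.
    status_repo = [
        {'element': 'VNFaaS-Firewall', 'tenant': 'tenant-42', 'state': 'up',       'ts': 100},
        {'element': 'VNFaaS-Firewall', 'tenant': 'tenant-42', 'state': 'degraded', 'ts': 250},
        {'element': 'VNFaaS-DNS',      'tenant': 'tenant-42', 'state': 'up',       'ts': 240},
    ]

    def derived_status(element):
        records = [r for r in status_repo if r['element'] == element]
        if not records:
            return 'unknown'
        return max(records, key=lambda r: r['ts'])['state']   # latest record wins

    print(derived_status('VNFaaS-Firewall'))   # 'degraded'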

What we can say here is that VNFaaS could work technically.  However, could it add value?  Remember, SaaS is a kind of populist concept; the masses rise up and do their own applications defying the tyranny of internal IT.  Don’t tread on me.  It’s hard to see how NFV composition becomes pap for the masses, even if we define “masses” to mean only enterprises with IT staffs.  The fact is that most network services are going to be made up of elements in the data plane, which means that web-service-and-multi-tenant-apps may not be ideal.  There are other applications, though, where the concept of VNFaaS could make sense.

A lot of things in network services are transactional in nature and not continuous flows.  DNS comes to mind.  IMS offers another example of a transactional service set, and it also demonstrates that it’s probably necessary to be able to model VNFaaS elements if only to allow something like IMS/HSS to be represented as an element in other “services”.  You can’t deploy a DNS or an IMS every time somebody sets up a service or makes a call.  Content delivery is a mixture of flows and transactions.  And it’s these examples that just might demonstrate where VNFaaS could be heading.

“Services” today are data-path-centric because they’re persistent relationships between IT sites or users.  If we presumed that mobile users gradually moved us from being facilities-and-plans-centric to being context-and-event-centric, we could presume that a “service” would be less about data and more about answers, decisions.  A zillion exchanges make a data path, but one exchange might be a transaction.  That means that as we move toward mobile/behavioral services, contextual services, we may be moving toward VNFaaS, to multi-tenant elements represented by objects but deployed for long-term use.

Mobile services are less provisioned than event-orchestrated.  The focus of services shifts from the service model to a contextual model representing the user.  We coerce services by channeling events based on context, drawing from an inventory of stuff that looks a lot like VNFaaS.  We build “networks” not to support our exchanges but to support this transfer of context and events.

If this is true, and it’s hard for me to see how it couldn’t be, then we’re heading away from fixed data paths and service relationships and toward extemporaneous decision-support services.  That is a lot more dynamic than anything we have now, which would mean that the notion of service agility and the management of agile, dynamic, multi-tenant processes is going to be more important than the management of data paths.  VNFs deployed in single-tenant service relationships have a lot of connections because there are a lot of them.  VNFaaS links, multi-tenant service points, have to talk to other process centers but only edge/agent processes have to talk to humans, and I shrink the number of connections, in a “service” sense, considerably.  The network of the future is more hosting and less bits, not just because bits are less profitable but because we’re after decisions and contextual event exchanges—transactions.

This starts to look more and more like a convergence of “network services” and “cloud services”.  Could it be that VNFaaS and SaaS have a common role to play because NFV and the cloud are converging and making them two sides of the same coin?  I think that’s the really profound truth of our time, NFV-wise.  NFV is an accommodation of cloud computing to two things—flows of information and increasing levels of dynamism.  In our mobile future we may see both services and applications become transactional and dynamic, and we may see “flows” developing out of aggregated relationships among multi-tenant service/application components.  It may be inevitable that whatever NFV does for services, it does for the cloud as well.

Sunday, February 1, 2015

PMP Series – Project Scope Management - Part 1



The Project Management Professional (PMP) exam is one of those exams that can be overwhelming at the beginning because of the amount of information a candidate has to go through in order to be ready for it.

This series of posts will provide a guide to preparing for the Project Management Professional exam. In this particular post, we will focus on Project Scope Management, one of the knowledge areas that you have to be proficient in to pass the PMP exam. We will explore the five processes in this knowledge area, including the inputs, tools and techniques, and outputs of each process.

Important Note: PMI’s reference book for the PMP exam is A Guide to the Project Management Body of Knowledge (the PMBOK Guide), also known as the Project Manager’s Bible. However, reading this book alone can be overwhelming, so you should use other support materials that you might find easier to read. A prerequisite for the exam is formal project management training. I strongly recommend that you take the training because, aside from receiving guidance from an instructor, you get to discuss topics with people from different projects and learn from their experiences.

Projects are organized into five process groups:

  • Initiating
  • Planning
  • Executing
  • Monitoring/Controlling
  • Closing


The PMBOK also divides project management into nine knowledge areas:

  • Project Scope Management
  • Project Time Management
  • Project Cost Management
  • Project Quality Management
  • Project Integration Management
  • Project Communications Management
  • Project Human Resource Management
  • Project Risk Management
  • Project Procurement Management

For each of these knowledge areas, there are some project management processes that need to be carried out in order to be able to successfully manage the project. PMI will test your understanding of these processes and how they interact together to aid the successful initiation, planning, execution, monitoring, controlling and closure of projects.

In total, there are 42 processes spread across these five process groups. For the exam, you should take some time to get familiar with the processes and their process groups/knowledge areas. You should also understand how the processes relate to one another; the outputs of some processes serve as inputs to others.

In order to pass the exam you should assume the role of a project manager and be able to justify your decisions based not only on the concepts that you have learnt while studying for the exam but also on your work experience both as a project team member and as a project manager.
The rest of this post will be dedicated to understanding Project Scope Management.

PROJECT SCOPE MANAGEMENT

Project scope management involves managing the extent of the project. The concept here is to ensure that ALL the work to be done is included and ONLY the work to be done is included. Scope management also involves ensuring that the work that was agreed to be done is the work that has been done before the project is certified as completed. Project Scope Management includes 5 processes:


  1. Collect Requirements
  2. Define Scope
  3. Create Work Breakdown Structure
  4. Verify Scope
  5. Control Scope


The Requirements Collection process involves gathering all the requirements from the stakeholders. The stakeholders are usually documented in a stakeholder register, and the high-level documentation of the project is contained in the project charter. The project charter also appoints and authorizes the project manager to take charge of the project. These two documents (the project charter and the stakeholder register) serve as inputs to the requirements collection process.

A stakeholder is anyone who has an impact on (or is impacted by) the project. Assuming we have a project where we are required to build an e-commerce website, in order to collect requirements you need to talk to every one whose opinion will count when the website is eventually built. This ranges from the sponsor of the project all the way to the eventual user of the website.

Armed with these documents, you will conduct your research qualitatively (using interviews, focus groups and facilitated workshops) and quantitatively (using observations, surveys and prototypes) in order to determine the requirements of the project. The tools and techniques employed (interviews, focus groups, etc.) help you generate the requirements, a plan for managing them and a document for tracking their origin and state throughout the project. This last document is called the requirements traceability matrix. These three documents are the outputs of this process.

Scope Definition typically involves creating the project scope statement from the high level information (in the project charter) and the requirements you have gathered during the requirements collection process. As a project manager, you should employ expert advice from your team in doing this. Close attention should be paid to the eventual goal of the project and alternatives should also be explored. The output of this process is the scope statement. You should sign off the scope statement with the client to ensure that you are on the same page.

Work Breakdown Structure Creation involves dividing the entire project work into smaller pieces until you reach manageable units of work that can be easily controlled as individual entities. The smallest unit of work is called a work package. The goal is that the units must be clear, measurable, assignable and easy to control. This technique, called decomposition, breaks down the requirements and project scope statement until you reach manageable units of work.

The outputs of this process are the work breakdown structure (WBS), a chart that shows the decomposed project, and a WBS dictionary created to explain the content of the WBS. The WBS, the WBS dictionary and the project scope statement are the three documents that form the scope baseline of a project.

The first three processes in the Project scope management are part of the planning process group of the project. You should ensure that you have a work breakdown structure stating all your work packages before you move into the execution phase of the project. The remaining two processes in project scope management are in the monitoring and controlling process group of the project.

Scope Verification happens after we have started receiving deliverables on the project. It involves formally accepting the deliverables of a project. For the PMI exam, it is important to note that the goal of this process is NOT validating the deliverables. The process of validating the deliverables is carried out in a different process (Quality Control). This process focuses on inspecting the validated deliverables to ensure they meet the original requirements of the project as defined in the requirements documents.

As a project manager, you should carry out scope verification with the project sponsor and any issues raised during the scope verification process should be addressed using change requests. A change request typically expands, adjusts, or reduces the project scope. You should also pay attention to the impact that changes in the scope would cause on the other constraints of the project (time, cost and quality). 

One of the most critical aspects of the project manager’s job is controlling the scope throughout the project. This involves monitoring the project to ensure that there are no uncontrolled changes to the scope baseline. As a project manager, your job is not to forbid changes entirely; instead, your job is to ensure that if changes are to occur, the correct integrated change control process is followed. This ensures that all the stakeholders are aware of the change and, more importantly, that they are aware of the impact the change has on the entire project. This stakeholder expectation management is a critical part of your job as the project manager.

When changes to the project scope are not controlled, the situation is called scope creep. Scope creep can originate with either the client or the project team, and it’s the job of the project manager to prevent it. Even when you estimate that you can over-deliver on a project and exceed the client’s expectations, best practice is that you should not. Exceeding the customer’s expectations beyond the scope of the project is called gold plating, and it is a form of scope creep. A good project manager performs regular checks to measure changes (also called variances) against the baseline in order to catch scope creep before it’s too late.

A summary of the five processes that make up the project scope management process is shown in the diagram below.

