Network Enhancers - "Delivering Beyond Boundaries"

Thursday, October 30, 2014

Network as a service: The core of cloud connectivity

One of the complexities of the cloud is the fact that the cloud model has so many variations -- public, private, hybrid -- and many implementation options driven by multiple cloud providers and cloud software vendors. One common element in all clouds is cloud networking, which is always essential in connecting cloud resources to users, and increasingly for making connections among cloud resources and providers. Any enterprise that's building a cloud strategy has to develop a cloud networking strategy, and that strategy has to be versatile enough to accommodate rather than limit cloud choices.

A good place to start is with the notion of network as a service, or NaaS. Public cloud services include NaaS features implicitly (the Internet is an "access NaaS" for most public clouds) or explicitly (with VPN capabilities), and private and hybrid clouds will almost always control network connections using a NaaS model.

NaaS, for a cloud network builder, is an abstract model of a network service that can be at either Layer 2 (Ethernet, VLAN) or Layer 3 (IP, VPN) in the OSI model. A cloud user defines the kind of NaaS that their cloud connectivity requires, and then uses public or private tools to build that NaaS. NaaS can define how users access cloud components, and also how the components themselves are connected in a private or hybrid cloud.

The best-known example of NaaS in the public cloud space is Amazon's Elastic IP address service. This service lets any cloud host in EC2, wherever it is located, be represented by a constant IP address. The Elastic IP NaaS makes the cloud look like a single host. This is an example of an access-oriented NaaS application.
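The idea can be sketched in a few lines of Python. This is a toy model of the stable-address mapping only, with invented names, and is not the actual EC2 API:

```python
class ElasticIPTable:
    """Toy model of a stable-address NaaS mapping (illustrative, not the EC2 API)."""

    def __init__(self):
        self._assoc = {}  # public IP -> currently associated instance

    def associate(self, public_ip, instance_id):
        # Re-pointing the same public IP at a new instance is invisible to users,
        # who keep addressing the constant IP.
        self._assoc[public_ip] = instance_id

    def resolve(self, public_ip):
        return self._assoc.get(public_ip)


table = ElasticIPTable()
table.associate("54.0.0.10", "i-aaa111")   # initial host
table.associate("54.0.0.10", "i-bbb222")   # host replaced; address unchanged
```

Users keep addressing 54.0.0.10 throughout; only the mapping behind it changes.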

NaaS as cloud connectivity

In the private cloud, the most common example of NaaS comes from OpenStack's Neutron APIs, which let users build models of network services at Layer 2 or Layer 3 and then add their virtual machine (VM) instances to these models. This NaaS model builds inter-component connections that link cloud application elements with each other, and also defines internetwork gateways to publish application services to users. Not all private cloud stacks have NaaS or Neutron-like capabilities, though, and where they don't exist it will be necessary to use management and orchestration tools, popularly called DevOps tools, to build NaaS services in a private cloud deployment.
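A Neutron-style workflow, defining the network model first and then attaching VM instances to it, can be sketched like this. The class and method names are invented for illustration; the real Neutron API is considerably richer:

```python
class NaasModel:
    """Sketch of a Neutron-style NaaS model: define networks, then attach VMs."""

    def __init__(self):
        self.networks = {}   # network name -> layer ("L2" or "L3")
        self.ports = {}      # VM id -> network name it is attached to

    def create_network(self, name, layer="L2"):
        self.networks[name] = layer

    def attach_vm(self, vm_id, network):
        # The network model must exist before instances can join it.
        if network not in self.networks:
            raise ValueError("define the network model before attaching instances")
        self.ports[vm_id] = network


model = NaasModel()
model.create_network("app-tier", layer="L2")
model.attach_vm("vm-1", "app-tier")
```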

For hybrid clouds -- the direction most cloud users expect to be going with their own cloud plans -- NaaS is likely to be a three-step process. First, you'll need to define the public cloud NaaS service, then the cloud connectivity you'll need for your private cloud, and finally the bridge between them. In most cases, this "hybrid bridge" is a gateway between the two NaaS services in your cloud, but it's often a gateway that operates on two levels. First, it has to provide an actual network connection between the public and private cloud, which in many cases will mean setting up a router device or software-based router. Second, it has to ensure that the directories that provide addressing for cloud application components (DHCP to assign addresses, DNS to resolve names to addresses) are updated when components are deployed or moved. This is actually a part of cloud integration, and it may be done using DevOps tools or commercial products for cloud integration.
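The directory-update level of the hybrid bridge can be sketched as follows. The names are hypothetical, and a real deployment would drive DNS and DHCP updates from its orchestration tooling rather than direct calls:

```python
class CloudDirectory:
    """Sketch of the directory-update step: when a component moves, its name
    must resolve to the new address (roughly what a DNS update accomplishes)."""

    def __init__(self):
        self._records = {}  # component name -> current address

    def deploy(self, name, address):
        self._records[name] = address

    def move(self, name, new_address):
        # An orchestration/DevOps hook would call this when a component migrates
        # between the private and public sides of the hybrid cloud.
        self._records[name] = new_address

    def lookup(self, name):
        return self._records[name]


directory = CloudDirectory()
directory.deploy("billing-api", "10.0.1.5")     # private cloud address
directory.move("billing-api", "172.31.7.9")     # migrated to the public side
```

Users looking up "billing-api" always get the current address, regardless of where the component runs.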

Cloud networking is critical for the cloud's success, and approaching it as the union of NaaS domains is a good way to plan and implement the necessary elements, and keep them running optimally for efficient cloud use.

Wednesday, October 29, 2014

Carrier SDN vs. Enterprise SDN

The software-defined networking trend has been picking up momentum, and new use cases have evolved continuously since the concept first emerged. In this article, we will look at this evolution, from the very basics to carrier-class SDN, through the lens of fundamental SDN architectural characteristics. These basic ingredients ultimately define the type and scale of market applications, as well as the road map of products implementing them. We will also show how these ingredients center on the pivotal question of SDN control-plane distribution.

To start, you may find it interesting that SDN for enterprises and carriers assumes a new model for networking, one different from traditional IP. The basic IP network model consists of autonomous junctions directing packets based on address-prefix tables, a kind of location-based set of "area codes": 1.1.x goes left, 1.2.x goes right. This model gives anyone joining the Internet a clear way to send and receive data packets from anyone else, as long as the destination's location coordinates (the equivalent of county, zip, town, and street) are known, that is, its IP address. In this model, packets find their way and zero in on their target hop by hop, from source to destination.

The SDN model for networking is completely different, looking more like a programmable crossbar of sources and destinations, consumers and producers, subscribers and services or functions. Such a matrix assumes that you can physically get from any row to any column, recursively. But it also assumes that each specific “patch paneling” is under strict programmable control for both tapping and actual connectivity. No entity can talk to any other unless explicitly provisioned to do so. This is achieved using the key SDN element — “the controller” software entity that allocates whole flows or network conversations. This is a significant shift from the model of the IP “ant farm” of packets making their way from one junction to the next.

The straightforward architecture implementation of the model outlined above was initially defined as a centralized controller setting up each flow, from every source to every destination. However, this flow setup needs to be completed through every physical hop from source to destination. This is an implementation detail resulting from the structure of how networks are physically scaled, where not every endpoint is directly connected to any other endpoint.

But is this really just an implementation detail? Not exactly. It turns out that the job of the controller becomes exponentially more complex as the diameter of the network, or the average number of hops from source to destination, increases. This scaling aspect immediately identifies the first key distinction of carrier SDN: federation. Non-carrier SDN products that stick with a centralized approach are restricted to networks with a diameter of one or two, namely spine-leaf enterprise data centers or point-to-point site circuits. And even in these environments, due to the physical meshing factor, scaling to large size has to assume moderate dynamics in flow allocation for centralization to work.
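A back-of-the-envelope model makes the scaling point concrete. Assuming a centralized controller must program every hop on a flow's path, its message load grows with both the flow-setup rate and the network diameter; in this simple model the growth per path is linear, and real controllers face worse once state and meshing are counted. The figures below are illustrative only:

```python
def controller_messages_per_second(flow_setups_per_sec, avg_hops):
    """Rough model: a centralized controller sends one programming message
    per hop for every flow it sets up."""
    return flow_setups_per_sec * avg_hops


# A spine-leaf fabric (diameter ~2) vs. a carrier-scale path (~10 hops),
# both at the same hypothetical 10,000 flow setups per second:
dc_load = controller_messages_per_second(10_000, 2)
wan_load = controller_messages_per_second(10_000, 10)
```

The same flow-setup rate costs the controller five times the work on the wider network, which is why centralization only survives in small-diameter fabrics.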

Most SDN architectures that are on the path to carrier grade, or to more generalized and less restricted SDN, choose a federated, distributed-overlay architecture. In this model, the network is surrounded by SDN edges, taking the hop-to-hop topology factor out of the scale equation. SDN edges control the mesh, allocate dynamic patch-points in the flow tables, and link the "outer-lay" identities, while letting traditional IP bridging and routing autonomous junctions connect the underlay locations. This federated SDN architecture opens up the possibilities for carrier-scale use cases, where SDN overlays surround and leverage the carrier distribution center networks, carrier metropolitan networks, and national backbones that are already in place. Carriers no longer need a greenfield network in order to deploy SDN applications.

Now that we have an SDN overlay in place, we can look at global information propagation across the overlay network. This is the most fundamental and mission-critical SDN requirement and distribution element. Why? Since we have federated SDN control at the edges of the network, how will each SDN edge node know what's behind every other SDN edge node, or where the logical row and column identities of the model physically reside? We could keep global information centralized and distribute just the flow setups, but that blocking point would limit the dynamics and overall flows per second, restricting the performance of the network. Because of this, most overlay architectures distribute global awareness by pre-pushing the global data records to all edge nodes. This information push lets SDN edges apply tenancy filtering and set up new flows concurrently.
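The pre-push model can be sketched as follows, with invented names: the controller replicates every global record to every edge up front, so flow setup never blocks on a controller round-trip:

```python
class Controller:
    """Sketch of the pre-push model: global records replicated to every edge."""

    def __init__(self, edges):
        self.global_table = {}   # endpoint identity -> edge node where it lives
        self.edges = edges

    def publish(self, identity, edge_name):
        self.global_table[identity] = edge_name
        for edge in self.edges:                 # push to all edges up front
            edge.local_table[identity] = edge_name


class EdgeNode:
    def __init__(self, name):
        self.name = name
        self.local_table = {}

    def setup_flow(self, dst_identity):
        # No round-trip to the controller: the answer is already local.
        return self.local_table[dst_identity]


edges = [EdgeNode("edge-a"), EdgeNode("edge-b")]
ctl = Controller(edges)
ctl.publish("tenant42/vm7", "edge-b")
```

Every edge can now resolve tenant42/vm7 concurrently from its local copy, which is exactly what makes the replication cost explode as the identity base grows.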

This data push approach is once again a clear demarcation from traditional approaches, distancing carrier SDN and carrier use-case categorization from SDN on enterprise networks. Carrier use cases that require subscriber awareness when setting up flows cannot pre-push all subscriber information to all locations; maintaining such massive replication and distribution of data consistently is simply not feasible. Subscriber-aware carrier SDN architectures have to allow for a non-blocking, pull and publish-subscribe method of sharing global information. This is typically done by leveraging the underlay IP network not only for hop-to-hop transport but also for implementing a distributed, non-blocking hash table or IP directory, also termed mapping. Many carrier SDN use cases are subscriber-aware or content-aware; these include mobile function chaining, evolved packet core virtualization, multimedia services, content distribution, virtual customer premises equipment, virtual private networks, and more. In general, most SDN use cases that connect subscriber flows to network functions require subscriber-aware lookup, pull, and classification. Otherwise, every possible service permutation must be managed statically, and every new function may double the static permutation maintenance, an exponential burden for global carrier network management and operations.
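By contrast, the pull and publish-subscribe mapping approach can be sketched like this, again with invented names: an edge resolves an identity on first use, caches it, and is notified only if that identity later moves:

```python
class MappingService:
    """Sketch of the pull/publish-subscribe model: edges resolve identities on
    demand and subscribe, so later moves are pushed only to interested edges."""

    def __init__(self):
        self._records = {}      # subscriber identity -> current location
        self._subs = {}         # identity -> set of subscribed edges

    def register(self, identity, location):
        self._records[identity] = location

    def lookup(self, identity, edge):
        self._subs.setdefault(identity, set()).add(edge)  # remember who asked
        return self._records[identity]

    def move(self, identity, new_location):
        self._records[identity] = new_location
        for edge in self._subs.get(identity, ()):  # notify only subscribers
            edge.cache[identity] = new_location


class Edge:
    def __init__(self):
        self.cache = {}

    def resolve(self, identity, service):
        if identity not in self.cache:             # pull on first use only
            self.cache[identity] = service.lookup(identity, self)
        return self.cache[identity]


svc = MappingService()
svc.register("subscriber-0712", "pop-paris")
edge = Edge()
edge.resolve("subscriber-0712", svc)        # first use: pull and subscribe
svc.move("subscriber-0712", "pop-lyon")     # mobility: only this edge is updated
```

No edge ever holds the full subscriber database, which is the property that makes this model viable at carrier scale.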

Lastly, we look at carrier SDN solution element density and element extendibility and programmability. Just like information sharing, these qualities are a direct result of SDN model distribution and provide distinctions for carrier-class use cases and carrier SDN applicability. As far as SDN edge density goes, non-carrier-class SDN can potentially afford to push overlay edges all the way into the tenant system or host as software, trading a hardware "real OpenFlow" approach for convenience. This is a viable low-density SDN edge option mainly for hosting, since we don't assume large scopes per software network; rather, we expect many small enterprise tenants. Hence we do not assume massive global information sharing comparable to a multimillion-subscriber database or a multibillion machine-to-machine identity base. For enterprise SDN, we also don't assume many physical geographic locations or heavy-duty SDN edge device packaging. This, of course, is not the case for carrier SDN applications. For those, global information sharing is massive, the number of SDN edge nodes must be kept at hundreds to thousands rather than hundreds of thousands, and therefore the density of each SDN edge node must be quite high. Carrier SDN edge node capacity is a full rack's worth of bandwidth, with thousands of flow setups and mapping lookups per second per node. We also expect millions of concurrent flows and millions of concurrent publish-subscribe states kept per node.

Similarly, an additional direct result of SDN distribution is that if SDN programmability is no longer "locked" inside the "controller," a clear method is needed to specify how distributed programmability and extendibility are delivered in each of the nodes, and how they are synced across the solution. If we refer to such distributed programmable logic as "FlowHandlers" and the variance of such logic as FlowHandler.Lib, then once again we see another immediate distinction of carrier SDN. While basic SDN solutions will have a very limited FlowHandler.Lib, basically for handling multi-tenant virtual L2/L3, carrier use cases will have a far more elaborate FlowHandler.Lib, with distinct flow-mapping logic for protocols such as SIP, SCTP, GTP, GRE, NFS, and more.
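One way to picture a FlowHandler.Lib is as a dispatch table keyed by protocol, where basic SDN ships only a default virtual L2/L3 entry and carrier SDN registers many protocol-specific handlers. The handler names and actions below are invented for illustration:

```python
# Sketch of a "FlowHandler.Lib": per-protocol flow-mapping logic registered in
# a table, so each SDN edge dispatches new flows to the right handler.

FLOW_HANDLERS = {}

def flow_handler(protocol):
    """Decorator that registers a handler for one protocol."""
    def register(fn):
        FLOW_HANDLERS[protocol] = fn
        return fn
    return register

@flow_handler("GTP")
def handle_gtp(flow):
    return f"map {flow} to mobile-core chain"

@flow_handler("SIP")
def handle_sip(flow):
    return f"map {flow} to voice/media chain"

def dispatch(protocol, flow):
    # Basic SDN ships a tiny table (virtual L2/L3 only);
    # a carrier-grade library fills it with many protocol handlers.
    handler = FLOW_HANDLERS.get(protocol)
    if handler is None:
        return f"default L2/L3 forwarding for {flow}"
    return handler(flow)
```

Extending the solution then means registering a new handler and syncing the table to every node, rather than upgrading a central controller.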

These protocols enable applications such as mobility, VoIP, voice over LTE, content distribution, transcoding, and header enrichment, to name a few. Carrier-class FlowHandler.Lib will also handle jitter buffers and TCP flow control when SDN overlays span large geo-distributions, and/or connect very different mediums such as wide area networks (WAN) and radio access networks (RAN). Carrier FlowHandlers by law, and by default, should also be able to account for each and every flow connected over the public network. Accounting for patterns is an important public network security consideration; for instance, network functions such as firewalls or tapping may be opted in and out of dynamically per carrier FlowHandler decision. Additional important functions of a carrier-class FlowHandler are the abilities to apply traffic engineering and segment flow routes in the underlay IP network, optimize long-lived backup and replication flows, and load balance core-spine links and core routes.

To summarize, we touched upon the differences and distinctions of SDN in general, and carrier SDN vs. enterprise SDN in particular. There are many differences in technology and use cases, but the architectural ones are even more key: federation, mapping, density, and programmability. Just as with basic Ethernet and carrier Ethernet, carrier SDN requires far more structure for scale and services. As we saw, most of these qualitative architectural distinctions have to do with SDN control-plane distribution: the distribution model, the sharing of information, the density or level of distribution, and differentiated programmability and carrier logic in each node. Done correctly, carrier SDN can provide both multibillion-endpoint carrier scale and five-nines-class availability for the network functions virtualization (NFV) era.

Monday, October 27, 2014

Per Device, Per Month or Per User? Tips for Pricing Managed Services

Which managed services pricing model works best: per device, per month or per user? Mike Byrne, director of partner enablement at AVG Technologies (AVG), shared his thoughts on how managed service providers (MSPs) can price their offerings during a breakout session at this week's AVG Cloud Partner Summit in Phoenix.

Byrne noted standardization is key when it comes to pricing managed services. "Pricing is not an easy process, but it's an easy process if you just apply the same methodology to it," he told summit attendees.

Byrne also pointed out there are several factors that impact managed services pricing, including the competitive landscape, hardware costs, software costs, and technical and account management costs.

So how should an MSP price its services? Byrne provided the following tips:

  1. Know your labor costs -- "Labor is key," Byrne said. "You really have to focus on all of the interactions between your team and the software you've deployed."
  2. Know the competition -- An MSP needs to review the competitive landscape to ensure it is offering managed services that meet its customers' needs and are competitively priced.
  3. Know your personnel and internal costs -- "When you're building a managed services offering, the only figure you need to determine is what you're paying your staff," Byrne said. "From an internal cost perspective, it's all about understanding your labor and understanding how long it takes for you to provide all of your deliverables."
  4. Know your service delivery costs -- It is important for an MSP to determine alert and notification, patch management and other service delivery costs before pricing its managed services.
  5. Know your devices -- Checklists are vital for MSPs because they allow service providers to monitor and evaluate all of the devices that they support.
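As a sketch of how these cost inputs might roll up into a price, consider the following. Every figure here is hypothetical, and a real MSP would substitute its own measured labor, delivery and software costs:

```python
def monthly_price_per_user(labor_cost, delivery_cost, software_cost, target_margin):
    """Roll per-user monthly cost inputs into a price at a target gross margin.
    All inputs are hypothetical; real MSP pricing needs real cost data."""
    total_cost = labor_cost + delivery_cost + software_cost
    return round(total_cost / (1 - target_margin), 2)


# e.g. $40 labor + $10 service delivery + $15 software, at a 35% margin target
price = monthly_price_per_user(40.0, 10.0, 15.0, 0.35)
```

The same roll-up works per device or per contract; only the unit the costs are attributed to changes.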

Byrne ultimately recommended per month or contract-based pricing for MSPs because it can help them better manage their costs.

"Per device pricing, to me, is the single biggest roadblock for MSPs, and the problem with per user is you run into scenarios where you have office staff versus non-office staff users," Byrne added. "But MSPs can put every cost into all of their contracts."

Sunday, October 26, 2014

Why SDN & NFV - Evolution & Simpler Explanation

Difference between disaster recovery and business continuity

The terms business continuity and disaster recovery are often mistakenly used interchangeably. And while cloud computing services can be used to address both business continuity and disaster recovery, you must have a fundamental understanding of the differences to do effective planning.

Disaster recovery (DR) refers to having the ability to restore the data and applications that run your business should your data center, servers, or other infrastructure get damaged or destroyed. One important DR consideration is how quickly data and applications can be recovered and restored. Business continuity (BC) planning refers to a strategy that lets a business operate with minimal or no downtime or service outage.

The design of both solutions must balance a company’s tolerance for time to restore full function against the budget available to fund protection. In almost all cases, utilizing an externally managed service to accomplish DR or BC will result in lower costs and usually higher performance – less waiting time at a lower cost.

Make data protection your no. 1 concern

Whichever strategy you pursue, protecting your company’s data is critical. If your company lost some or all of its data, you’d likely be unable to continue operations. You wouldn’t know what to bill to your customers, what they already owe you, and what you owe vendors and service providers. Inventory information, manufacturing processes, contractual obligations, and competitive intelligence would all be gone. 

The question isn’t whether or not to implement DR or BC solutions, but rather how to balance the two. Depending upon the transaction velocity of your business, you may want to focus on one more than the other. But there are other factors affecting your decision as to how much protection and how much loss you can afford.

Disaster recovery plans are developed so that everyone knows exactly what to do to help the business recover in the aftermath of a major catastrophic event. Earthquakes, hurricanes, floods, and acts of war have all caused big companies to either activate their DR plans or deeply regret not having one. IT elements include having recent off-site stored backups of all data available for restoration once a new data center has been established. The more recent the backups, the better, meaning that the planning, scheduling, and rotation of data to offsite facilities is an integral part of a good DR plan.

Disasters happen. And when they do, they can destroy or incapacitate entire buildings, towns, and cities. This is where the concept of redundancy becomes critical. You may back up your data locally, and should a server or storage device fail, you simply replace it and restore the local copy of your data. But when a major outage hits your building, your neighborhood, perhaps even your entire city or region, you'll want to be sure company data is replicated far away in a remote data center, perhaps more than one, and is available for restoration as soon as you've secured a new physical location from which to operate.

Business continuity planning is much more granular. Even brief lapses in operation can threaten an enterprise’s existence. Highly transactional environments almost always require a BC plan.

BC measures need to be put into place at multiple levels. For example, redundant servers, redundant storage, even redundant data centers may be required to provide enough availability to support true continuity of the business. Anything that could fail must be backstopped. Even personnel and physical premises! Alternate personnel need to be ready to step in, and substitute locations must be designated where employees can work should a calamity befall the operation. There is clearly overlap here with DR.

You can see why people try to use these terms interchangeably. True continuity of business operations requires high availability, which is the lowest level of fault tolerance, and the ability to recover from a disaster almost instantly.

Beyond replicating your valuable data, if your company can’t afford to stop doing business, you’ll want to replicate your entire infrastructure. When an outage or disaster occurs, your network “fails over” to the redundant data center and your people continue working as if nothing has happened. Users unable to access the company’s network can connect to the secondary data center easily from wherever they can securely access the internet.
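The failover decision itself is conceptually simple, as this sketch shows; a real deployment would drive it with automated health checks and DNS or BGP failover rather than a function call:

```python
def active_site(primary_healthy, secondary_healthy):
    """Sketch of the failover decision: traffic follows the first healthy site."""
    if primary_healthy:
        return "primary-dc"
    if secondary_healthy:
        return "secondary-dc"
    return None  # total outage: invoke the DR plan


# Primary data center down, redundant site up: users keep working.
site = active_site(primary_healthy=False, secondary_healthy=True)
```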

Need for speed vs. budget

You may be unsure of just how much you need to invest to achieve the level of resilience appropriate for your business. You don’t want to overspend, but you don’t want to under protect either. Begin your process by assessing the value of each critical data asset, and create a specific plan for each. Compare the approximate cost of each plan against the value of the asset to establish an acceptable ratio. From there, the rest of the process is one of logistics.
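That assessment step can be expressed as a simple ratio check; the 10% threshold below is purely illustrative, not a recommendation:

```python
def protection_ratio(annual_plan_cost, asset_value):
    """Cost-to-value ratio for one data asset's protection plan."""
    return annual_plan_cost / asset_value


def verdict(ratio, acceptable=0.10):
    # The 10% threshold is illustrative; pick one that fits your business.
    return "acceptable" if ratio <= acceptable else "overspending"


# A $25K/year plan protecting an asset valued at $500K:
r = protection_ratio(annual_plan_cost=25_000, asset_value=500_000)
```

Repeating this for each critical asset gives you a per-asset view of where you are over- or under-protecting.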

Saturday, October 25, 2014

SDN Use Cases including Network Functions Virtualization (NFV), Network Virtualization, OpenFlow and Software Defined Networking

Use Case: Network Virtualization – Multi-Tenant Networks
Location: Datacenter
Why SDN Needed: To dynamically create segregated, topologically-equivalent networks across a datacenter, scaling beyond the typical 4K limit of VLANs today.
Benefits Achieved: Better utilization of datacenter resources (claimed 20-30% improvement); faster turnaround in creating segregated networks, from weeks to minutes, via automation APIs.

Use Case: Network Virtualization – Stretched Networks
Location: Datacenter
Why SDN Needed: To create location-agnostic networks, across racks or across datacenters, with VM mobility and dynamic reallocation of resources.
Benefits Achieved: Simplified applications that can be made more resilient without complicated coding; better use of resources as VMs are transparently moved to consolidate workloads; improved recovery times in disasters.

Use Case: Service Insertion (or Service Chaining)
Location: Datacenter / Service Provider DMZ / WAN
Why SDN Needed: To create dynamic chains of L4-7 services on a per-tenant basis to accommodate self-service L4-7 service selection or policy-based L4-7 (e.g. turning on DDoS protection in response to attacks, self-service firewall, IPS services in hosting environments, DPI in mobile WAN environments).
Benefits Achieved: Provisioning times reduced from weeks to minutes; improved agility; self-service allows for new revenue and service opportunities at substantially lower cost to serve.

Use Case: Tap Aggregation
Location: Datacenter / campus access networks
Why SDN Needed: To provide visibility and troubleshooting capabilities on any port in a multi-switch deployment without numerous expensive network packet brokers (NPBs).
Benefits Achieved: Dramatic cost reduction, with savings of $50-100K per 24 to 48 switches in the infrastructure; less overhead in initial deployment, reducing the need to run extra cables from NPBs to every switch.

Use Case: Dynamic WAN Reroute
Location: Service Provider / Enterprise Edge
Why SDN Needed: To provide dynamic yet authenticated programmable access to flow-level bypass using APIs to network switches and routers, moving large amounts of trusted data past expensive inspection devices.
Benefits Achieved: Savings of hundreds of thousands of dollars in unnecessary investment in 10Gbps or 100Gbps L4-7 firewalls, load balancers, and IPS/IDS that would otherwise process unnecessary traffic.

Use Case: Dynamic WAN Interconnects
Location: Service Provider
Why SDN Needed: To create dynamic interconnects at Internet interchanges between enterprise links or between service providers using cost-effective high-performance switches.
Benefits Achieved: Ability to connect instantly; reduces the operational expense of creating cross-organization interconnects; enables self-service.

Use Case: Bandwidth on Demand
Location: Service Provider
Why SDN Needed: To enable programmatic controls on carrier links to request extra bandwidth when needed (e.g. DR, backups).
Benefits Achieved: Reduced operational expense through customer self-service; increased agility, saving weeks of manual provisioning.

Use Case: Virtual Edge – Residential and Business
Location: Service Provider Access Networks
Why SDN Needed: In combination with NFV initiatives, to replace existing Customer Premises Equipment (CPE) at residences and businesses with lightweight versions, moving common functions and complex traffic handling into POPs (points of presence) or the SP datacenter.
Benefits Achieved: Increased usable lifespan of on-premises equipment; improved troubleshooting; fewer truck rolls; flexibility to sell new services to business and residential customers.

What is the difference between declarative and imperative architecture?

It's becoming common to hear about declarative versus imperative architectures when listening to vendors talk about modern networking, automation and policy solutions. And usually, the discussions center on Promise Theory, making the conversation even more confusing.

So what is the difference between declarative and imperative architectures? Simply put, imperative focuses on the "how," while declarative focuses on the "what." As an example, in an imperative architecture using the OpenFlow protocol, a controller would explicitly tell the switch how to handle network traffic. On the other hand, in a declarative architecture using OpFlex or modern DevOps IT automation tools, a controller/master would tell the switch what the desired state should be. The desired state could be "give high priority" or "drop this particular traffic," and that is interpreted by the individual target device without the controller being aware of how the device will achieve the desired state. In the imperative architecture, the exact commands or instruction set would be sent down to the device to make the change.
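The distinction can be made concrete with a small sketch. The device commands and state names below are invented; the point is that both styles reach the same device configuration, but only the imperative controller needs to know the device's command vocabulary:

```python
class Switch:
    # How *this* device realizes each desired state (vendor-specific knowledge).
    VENDOR_COMMANDS = {
        "drop":          ["match flow", "action drop"],
        "high-priority": ["match flow", "set-queue 7"],
    }

    def __init__(self):
        self.applied = []

    def run_commands(self, commands):
        # Imperative: the controller already decided the "how"; obey verbatim.
        self.applied.extend(commands)

    def apply_state(self, desired):
        # Declarative: the controller sent only the "what"; translate locally.
        self.applied.extend(self.VENDOR_COMMANDS[desired])


imperative_sw, declarative_sw = Switch(), Switch()
imperative_sw.run_commands(["match flow", "action drop"])  # controller knows the how
declarative_sw.apply_state("drop")                          # controller states the what
```

Both switches end up with identical rules; the difference is where the device-specific knowledge lives.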

Friday, October 24, 2014

NFV Essentials

What is NFV – Network Functions Virtualization?

Network Functions Virtualization (NFV) offers a new way to design, deploy and manage networking services. NFV decouples network functions, such as network address translation (NAT), firewalling, intrusion detection, domain name service (DNS), and caching, from proprietary hardware appliances so they can run in software. It's designed to consolidate and deliver the networking components needed to support a fully virtualized infrastructure, including virtual servers, storage and even other networks. It utilizes standard IT virtualization technologies that run on high-volume server, switch and storage hardware to virtualize network functions. It is applicable to any data plane processing or control plane function in both wired and wireless network infrastructures.

Example of How a Managed Router Service Would be Deployed with NFV. 

History of NFV

The concept of Network Functions Virtualization originated with service providers who were looking to accelerate the deployment of new network services to support their revenue and growth objectives. They felt the constraints of hardware-based appliances, so they wanted to apply standard IT virtualization technologies to their networks. To accelerate progress toward this common goal, several providers came together and created an ETSI Industry Specification Group for NFV. The group's goal is to define the requirements and architecture for the virtualization of network functions; it is currently working on the standards and expects to deliver the first specifications soon.

The Benefits of NFV

NFV virtualizes network services via software to enable operators to:

  • Reduce CapEx: reducing the need to purchase purpose-built hardware and supporting pay-as-you-grow models to eliminate wasteful overprovisioning.
  • Reduce OpEx: reducing space, power and cooling requirements of equipment and simplifying the rollout and management of network services.
  • Accelerate Time-to-Market: reducing the time to deploy new networking services to support changing business requirements, seize new market opportunities and improve return on investment of new services. Also lowers the risks associated with rolling out new services, allowing providers to easily trial and evolve services to determine what best meets the needs of customers.
  • Deliver Agility and Flexibility: quickly scale up or down services to address changing demands; support innovation by enabling services to be delivered via software on any industry-standard server hardware.

Why SDN – Software-Defined Networking or NFV – Network Functions Virtualization Now?

Software-defined networking (SDN), network functions virtualization (NFV) and network virtualization (NV) are giving us new ways to design, build and operate networks. Over the past two decades, we have seen tons of innovation in the devices we use to access the network, the applications and services we depend on to run our lives, and the computing and storage solutions we rely on to hold all that "big data" for us. However, the underlying network that connects all of these things has remained virtually unchanged. The reality is that the demands of the exploding number of people and devices using the network are stretching its limits. It's time for a change.

The Constraints of Hardware

Historically, the best networks, meaning those that are the most reliable, have the highest availability and offer the fastest performance, have been those built with custom silicon (ASICs) and purpose-built hardware. The "larger" the "box," the higher the premium vendors can command, which only incentivizes the development of bigger, even more complex, monolithic systems.

Because it takes a significant investment to build custom silicon and hardware, rigorous processes are required to ensure vendors get the most out of each update or new iteration. This means adding features ad hoc is virtually impossible. Customers that want new or different functionality to address their requirements end up beholden to the vendor's timeline. It is so challenging to make any changes to these systems, even those that are "open," that most companies keep a team of experts on hand (many of whom are trained and certified by the networking companies themselves) just to keep the network up and running.

The hardware predominance has truly stifled the innovation in the network. It’s time for ‘out of the box’ thinking; it’s time to free the software and change everything…

The Time for Changes in Networking is Now

Thanks to the advances in today's off-the-shelf hardware, developer tools and standards, a seismic technology shift in networking to software can finally take place. It's this shift that underlies all SDN, NFV and NV technologies: software can finally be decoupled from the hardware, so that it's no longer constrained by the box that delivers it. This is the key to building networks that can:

  • Reduce CapEx: allowing network functions to run on off-the-shelf hardware.
  • Reduce OpEx: supporting automation and algorithmic control through increased programmability of network elements to make it simple to design, deploy, manage and scale networks.
  • Deliver Agility and Flexibility: helping organizations rapidly deploy new applications, services and infrastructure to quickly meet their changing requirements.
  • Enable Innovation: enabling organizations to create new types of applications, services and business models.

Which is Better – SDN or NFV?

Software-defined networking (SDN), network functions virtualization (NFV) and network virtualization (NV) are all complementary approaches. They each offer a new way to design, deploy and manage the network and its services:

  • SDN – separates the network’s control (brains) and forwarding (muscle) planes and provides a centralized view of the distributed network for more efficient orchestration and automation of network services.
  • NFV – focuses on optimizing the network services themselves. NFV decouples network functions, such as DNS and caching, from proprietary hardware appliances so they can run in software, accelerating service innovation and provisioning, particularly within service provider environments.
  • NV – ensures the network can integrate with and support the demands of virtualized architectures, particularly those with multi-tenancy requirements.
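The division of labor in the list above can be sketched in a few lines of framework-free Python. Everything here is invented for illustration (class and rule names are not from any real controller): a central controller computes a path and pushes simple match-to-action entries into each switch's table, so the forwarding devices themselves hold no routing logic.

```python
# Toy illustration of SDN's control/forwarding split.
# The controller (control plane) decides; switches (forwarding plane)
# only look up match -> action entries that the controller installed.

class Switch:
    """Forwarding plane: a dumb match -> action table."""
    def __init__(self, name):
        self.name = name
        self.flow_table = {}          # match (dst host) -> action (out port)

    def install_flow(self, match, action):
        self.flow_table[match] = action

    def forward(self, dst):
        # Unknown traffic would normally be punted to the controller.
        return self.flow_table.get(dst, "send-to-controller")

class Controller:
    """Control plane: central view of the topology, computes paths."""
    def __init__(self, switches):
        self.switches = switches

    def provision_path(self, dst, hops):
        # hops: list of (switch_name, out_port) pairs along the path
        for sw_name, port in hops:
            self.switches[sw_name].install_flow(dst, port)

switches = {"s1": Switch("s1"), "s2": Switch("s2")}
ctl = Controller(switches)
ctl.provision_path("10.0.0.2", [("s1", 2), ("s2", 1)])

print(switches["s1"].forward("10.0.0.2"))   # 2
print(switches["s1"].forward("10.0.0.9"))   # send-to-controller
```

The point of the sketch is the asymmetry: all decisions live in one place, and the switches become interchangeable commodity boxes.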


SDN, NFV and NV each aim to advance a software-based approach to networking for more scalable, agile and innovative networks that can better align and support the overall IT objectives of the business.  It is not surprising that some common doctrines guide the development of each. For example, they each aim to:

  • Move functionality to software
  • Use commodity servers and switches over proprietary appliances
  • Leverage programmatic application interfaces (APIs)
  • Support more efficient orchestration, virtualization and automation of network services

SDN and NFV Are Better Together

These approaches are mutually beneficial but not dependent on one another; you do not need one to have the other. In reality, though, SDN makes NFV and NV more compelling, and vice versa. SDN contributes network automation that enables policy-based decisions about which network traffic goes where, while NFV focuses on the services and NV ensures the network's capabilities align with the virtualized environments they support. Advancing all of these technologies is the key to evolving the network to keep pace with the innovation of all the people and devices it's connecting.
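How the two complement each other can be shown in a small Python sketch (all function and policy names below are hypothetical): the NFV side implements network functions as plain software, while an SDN-style policy decides which chain of functions each class of traffic is steered through.

```python
# Toy sketch: NFV provides network functions as software; an SDN-style
# policy table steers each traffic class through a chain of them.

def firewall(pkt):
    pkt = dict(pkt)
    pkt["inspected"] = True          # mark the packet as inspected
    return pkt

def nat(pkt):
    pkt = dict(pkt)
    pkt["src"] = "203.0.113.1"       # rewrite the source address
    return pkt

# Policy: traffic class -> ordered service chain (the SDN decision)
policy = {
    "web":  [firewall, nat],
    "voip": [firewall],              # skip NAT for latency-sensitive traffic
}

def steer(pkt, traffic_class):
    for fn in policy[traffic_class]:
        pkt = fn(pkt)
    return pkt

out = steer({"src": "10.0.0.5", "dst": "198.51.100.7"}, "web")
print(out["src"])        # 203.0.113.1
```

Swapping a hardware firewall for a new software one here is a one-line change to the policy table, which is the agility argument in miniature.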

How Does ETSI NFV Operate?

The European Telecommunications Standards Institute (ETSI) is an independent standardization organization that has been instrumental in developing standards for information and communications technologies (ICT) within Europe. It was created in 1988 as a nonprofit by the European Conference of Postal and Telecommunications Administrations (CEPT), which has been the coordinating body for European telecommunications and postal organizations since 1959. The not-for-profit organization has more than 700 member organizations representing more than 62 countries around the world.

How the Work of ETSI NFV Gets Done

Most of the work of the Institute is done in committees and working groups made up of experts from member organizations. They tackle technical issues and the development of specifications and standards to support the needs of the broad membership and the European ICT industry at large.

Committees, often referred to as Technical Bodies, typically meet one to six times a year. There are three recognized types of Technical Bodies:

  • Technical Committees – semi-permanent entities within ETSI organized around standardization activities for a specific technology area.
  • ETSI Projects – established based on the needs of a particular market sector; these tend to exist for a finite period of time.
  • ETSI Partnership Projects – activities within ETSI that require cooperation with other organizations to achieve a standardization goal.

Industry Specification Groups (ISG) supplement the work of Technical Bodies to address work needed around a specific technology area. Recently, a group was formed to drive standards for Network Functions Virtualization (NFV).

Service providers came together and formed an industry specification group within ETSI called the "Network Functions Virtualization" Group, with over 100 members. The Group is focused on addressing the complexity of integrating and deploying new network services within software-defined networking (SDN) and networks that support OpenFlow.

They are working on defining the requirements and architecture for the virtualization of network functions to:

  • Simplify ongoing operations
  • Achieve high performance, portable solutions
  • Support smooth integration with legacy platforms and existing EMS, NMS, OSS, BSS and orchestration systems
  • Enable an efficient migration to new virtualized platforms
  • Maximize network stability and service levels and ensure the appropriate level of resilience

The group is currently working on the standards and will be delivering the first specifications soon.

What is OPNFV?

In September 2014, the Linux Foundation announced another open source reference platform — the Open Platform for NFV Project (OPNFV). OPNFV aims to be a carrier-grade, integrated platform that introduces new products and services to the industry more quickly. OPNFV will work closely with the European Telecommunications Standards Institute (ETSI or ETSI NFV) and others to press for consistent implementation of open standards.

The Linux Foundation, the non-profit known for its commitment to an open community, hosted the OpenDaylight Project in April 2013 to advance software-defined networking (SDN) and network functions virtualization (NFV). The project was created as a community-led and industry-supported open source framework. SDN and NFV together are part of the industry’s transition toward virtualization of networks and applications. With the integration of both, significant changes are expected in the networking environment.

OPNFV will promote an open source network that brings companies together to accelerate innovation, as well as market new technologies as they are developed. OPNFV will bring together service providers, cloud and infrastructure vendors, developers, and customers in order to create an open source platform to speed up development and deployment of NFV.

OPNFV Goals and Objectives

Not only will OPNFV put industry leaders together to hone NFV capabilities, but it will also provide consistency and interoperability. Since many NFV foundational elements already are in use, OPNFV will help with upstream projects to manage continued integration and testing, as well as address any voids in development.

In the initial phase, OPNFV will focus on building NFV infrastructure (NFVI) and Virtualized Infrastructure Management (VIM). Other objectives include:

  • Create an integrated and verified open source platform that can investigate and showcase foundational NFV functionality
  • Secure proactive cooperation from end users to validate that OPNFV's strides address community needs
  • Form an open environment for NFV products founded on open standards and open source software
  • Contribute to and engage with open source projects that will be incorporated in the OPNFV reference platform

What is NFV MANO?

Network functions virtualization (NFV) has needed proper management from its early stages. That need is now addressed by NFV management and orchestration (MANO). NFV MANO is a working group (WG) of the European Telecommunications Standards Institute Industry Specification Group (ETSI ISG NFV). It is the ETSI-defined framework for the management and orchestration of all resources in the cloud data center, including computing, networking, storage, and virtual machine (VM) resources. The main focus of NFV MANO is to allow flexible on-boarding and to sidestep the chaos that can be associated with rapid spin-up of network components.

NFV MANO is broken up into three functional blocks:

  • NFV Orchestrator: Responsible for on-boarding of new network services (NS) and virtual network function (VNF) packages; NS lifecycle management; global resource management; validation and authorization of network functions virtualization infrastructure (NFVI) resource requests
  • VNF Manager: Oversees lifecycle management of VNF instances; plays a coordination and adaptation role for configuration and event reporting between NFVI and E/NMS
  • Virtualized Infrastructure Manager (VIM): Controls and manages the NFVI compute, storage, and network resources
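The split of responsibilities across the three blocks can be compressed into a purely illustrative Python sketch. The class and method names below are invented for the example, not taken from the ETSI specification:

```python
# Illustrative split of NFV MANO responsibilities (all names invented).

class VIM:
    """Virtualized Infrastructure Manager: owns NFVI compute resources."""
    def __init__(self, vcpus):
        self.free_vcpus = vcpus

    def allocate(self, vcpus):
        # Resource requests exceeding capacity are rejected.
        if vcpus > self.free_vcpus:
            raise RuntimeError("insufficient NFVI resources")
        self.free_vcpus -= vcpus
        return {"vcpus": vcpus}

class VNFManager:
    """Lifecycle management of individual VNF instances."""
    def __init__(self, vim):
        self.vim = vim
        self.instances = []

    def instantiate(self, vnf_name, vcpus):
        res = self.vim.allocate(vcpus)
        self.instances.append((vnf_name, res))
        return vnf_name

class Orchestrator:
    """On-boards network services built from VNF packages."""
    def __init__(self, vnfm):
        self.vnfm = vnfm

    def onboard_service(self, vnfs):
        # vnfs: list of (name, vcpus) making up the network service
        return [self.vnfm.instantiate(name, vcpus) for name, vcpus in vnfs]

vim = VIM(vcpus=8)
nfvo = Orchestrator(VNFManager(vim))
nfvo.onboard_service([("vFirewall", 2), ("vDNS", 1)])
print(vim.free_vcpus)   # 5
```

The layering matters: the orchestrator never touches infrastructure directly, which is what lets each block be swapped or scaled independently.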

For the NFV MANO architecture to work properly and effectively, it must be integrated with open application program interfaces (APIs) in the existing systems. The NFV MANO layer works with templates for standard VNFs, and gives users the power to pick and choose from existing NFVI resources to deploy their platform or element.

At the Internet Engineering Task Force (IETF) meeting in March 2014, NFV MANO announced a series of adopted interfaces for the MANO architecture and said improvements were ongoing. An important next step for NFV MANO is to include a software-defined networking (SDN) Controller in the architecture.

Ongoing NFV MANO Work



The European Telecommunications Standards Institute (ETSI), an independent standardization group, has been key in developing standards for information and communications technologies (ICT) in Europe. Created in 1988 as a nonprofit, ETSI was established by the European Conference of Postal and Telecommunications Administrations (CEPT). With more than 700 member organizations, over 62 countries are represented by ETSI.

The ETSI Industry Specification Group for Network Functions Virtualization (ETSI ISG NFV) is the group charged with developing requirements and architecture for the virtualization of various functions within telecoms networks. ETSI ISG NFV launched in January 2013, bringing together seven leading telecoms network operators: AT&T, BT, Deutsche Telekom, Orange, Telecom Italia, Telefonica, and Verizon. These companies were joined by 52 other network operators, telecoms equipment vendors, IT vendors, and technology providers. Not long after, the ETSI ISG NFV community grew to over 230 individual companies, including many global service providers.

ETSI ISG NFV exists side by side with the current Technical Organization, but ISGs have their own membership, which can comprise both ETSI and non-ETSI members (under some conditions). ISGs have their own voting rules, approve their own deliverables, and independently choose their own work programs.

Why Do We Need ETSI ISG NFV?

Telecoms networks are made up of an array of proprietary hardware devices. Launching a new service often means adding more devices, which means finding the space and power to accommodate those appliances, and this has become increasingly difficult. Hardware-based devices now have shorter and shorter life cycles due to rapid innovation, lowering the return on investment of deploying new services and limiting innovation as the industry is driven toward network-centric solutions.

Network functions virtualization (NFV) focuses on addressing these problems. By evolving standard IT virtualization technology, NFV implements network functions in software so that they can run on a range of industry-standard server hardware and can easily be moved to various locations within the network as needed. With NFV, the necessity to install new equipment is eliminated. The result is lower CapEx and OpEx, shorter time-to-market for network services, higher return on investment, more flexibility to scale up or down, a more open virtual appliance ecosystem, and more opportunity to test and deploy new services with lower risk.

The ETSI ISG NFV helps by setting requirements and architecture specifications for the hardware and software infrastructure needed to make sure virtualized functions are maintained. ETSI ISG NFV also manages guidelines for developing network functions.

Thursday, October 23, 2014

How Saudi Arabia Will Kick Its Oil Habit

Saudi Arabia, the Middle East’s biggest economy and the world’s largest oil producer, is running out of its black gold. Some estimate that the wells will run dry as early as 2030. That’s a huge deal. Oil revenue reached $312 billion this year and accounts for almost half the economy and 90 percent of export revenue. It also makes the kingdom the Persian Gulf’s economic powerhouse.

That’s why diversification is no longer a luxury. Opening its notoriously insulated stock exchange to foreign investors and investing in solar power, poultry, dairy, petrochemicals and innovative technology — these are the threads stitching together the kingdom’s safety net. Here are the companies leading the way forward:


The Pitch: Global petrochemical leader

Size: $103.4 billion market value

CEO: Mohamed Al-Mady

Recent Moves: Recently crowned the Middle East’s big-biz king, SABIC is among the world’s largest petrochemical companies. Petrochemical usually means plastics and fertilizer, but SABIC is looking ahead with a host of new polycarbonate technologies, like solar panels, a film for touch screens and the first polycarbonate automobile wheel. Though it’s rolling back European operations, it’s venturing eagerly into the U.S., attracted by the shale oil boom. And with Saudi Arabia about to crack open its stock market to foreign investors, non-oil multinationals are poised for global prominence. Expect SABIC to receive the most lavish treatment from foreigners.


The Pitch: Leading the kingdom’s startup revolution

Size: Acquired for $16 million by SAS Holdings

Founder: Khalid Alkhudair

Recent Moves: To kick their country’s oil addiction, Saudi leaders have launched hundred-million-dollar venture funds and incubators nationwide. The goal? A knowledge-based economy by 2025. The pre-eminent career portal for Saudi women is a poster child for the kingdom’s startup offensive, not to mention female workplace empowerment. Glowork’s market is potentially tremendous: A third of women are unemployed, and because most of them are college educated, they represent a real opportunity cost. And a government cost, too, because the jobless receive $800 a month from the government — a total of $1.6 billion down the drain, according to Alkhudair.

An oil exploration rig operated by the Saudi Arabian drilling company Saudi Aramco near Abqaiq, Saudi Arabia.

Saudi Aramco

The Pitch: Jolly green oil giant

Size: 9.5 million barrels of crude oil production daily; 54,000 employees

CEO: Khalid A. Al-Falih

Recent Moves: The Saudi government is planning an ambitious solar renaissance — the kingdom wants its energy mix to include 23 percent solar by 2030 and 39 percent by 2050. With lots of sunshine, low-cost funding and abundant space, Saudi Arabia has already developed one of the world’s cheapest solar models. Saudi Arabia might be the site of a perfect solar storm.

Saudi Aramco is poised to helm it. The state-owned behemoth and undisputed world petroleum champion was once valued at $7 trillion, almost half of the U.S. GDP ($16.8 trillion in 2013). The coming oil crunch has it staking a claim in solar and other alternative energies.

Although progress toward solar salvation has been slow, Aramco is still investing heavily in other potentially revolutionary alternative energy solutions. This month, it spearheaded a $30 million investment in San Francisco-based Siluria Technologies, which plans to produce low-cost gasoline from natural gas — for $1 per gallon.


The Pitch: Moving Saudi Arabia from fuel to food

Size: $10.4 billion market cap; 16,000 employees

Founder: Prince Sultan bin Mohammed bin Saud Al Kabeer

Recent Moves: The region’s largest food company is in the right place at the right time. Saudi Arabia is the world’s largest importer of broiler meat, mostly chicken, to the tune of 875,000 metric tons in 2013. Domestic competitors like Almarai are ripe for growth. Already, poultry production is on the rise in the kingdom and is expected to swell 52 percent by 2018.

Almarai is showing other signs of international ambition. In partnership with PepsiCo, it launched a $345 million investment in Egypt this June, and its CEO has stated he plans to boost that figure to $560 million in five years. The funds will go toward a new juice factory, a 5,000-cow dairy farm and expanding existing facilities for Egyptian beverage firm Beyti. Drink and eat up!

How To Stay Calm And Perform Under Pressure

The most important task of any manager in very stressful, high-pressure and extremely demanding situations is to keep calm and keep one's shirt on. It's not about finding quick solutions in the first place. It's about avoiding chaos, stabilizing the situation, and re-injecting self-confidence into the organization, and only afterwards looking for adequate solutions.

To reduce the risk of stressed leadership, I'd like to present two formulas that help me stay even-tempered and focused during critical times and crises. They are straightforward in concept and, with will and repeated practice, in execution too:

The START Formula – This is a preventive concept: knowing and applying it helps minimize the risk of encountering possible headless-chicken situations.

The SWITCH Formula – Once you find yourself in a situation where you are about to lose your head, you had better exercise this method to stay on course and keep your cool.


If you live by the START formula, you significantly reduce the risk of being dragged into situations where you might lose your temper and your head.

S - Stand Up
Make your point and explain your strategy and action plan. Don't allow others to push you around. Don't be afraid to say no, especially if you’ve already got too much on your plate.

T - Trust
Trust in yourself and others.

A - Action
Action your strategy and plans. Push back if needed. Eliminate all possible distractions like unnecessary meetings, phone calls, etc.

R - Respond
Be responsive and responsible. Keep your line managers, peers, and all other stakeholders always informed and regularly ask for their opinions. If you need help, be brave enough to ask for it.

T - Take It Easy
Don't take yourself and your tasks too seriously. We are all replaceable. Don't give in to stress or anxiety, no matter how far behind you are or how badly you've messed up. Relax.


Once you find yourself in deep and unknown waters, applying this method will help you keep your wits and your calm.

S – Stop
Stop running and chasing your tail. Sit down. No further movements.

W – Wait
Make sure you allocate yourself as much downtime as necessary.

I – Inhale
Inhale and breathe. It clears your head. It'll balance the analytical processes of the mind with your emotions and gut.

T – Think
Contemplate the situation you're in. Try to understand the root causes and interdependencies. Important: Always allow more time than you think you’ll need.

C – Calculate
Calculate and plan. Set clear goals for yourself and your team. Break them down into monthly, weekly, and daily milestone targets. They should be specific, measurable, actionable, realistic, and time-bound (SMART). Leave a sufficiently big buffer between tasks.

H - Head and Proceed
Once you are clear about your objectives and your action plan, and only then, move on and execute.


We all know that the more we rush, the higher the chances we fail. Still, we often do it, with or without realizing it. As a consequence, we create more waste and churn, trigger unnecessary activities, put too much pressure on others and make them feel uncomfortable, and may fail to spot real problems or real opportunities to improve.

Wednesday, October 22, 2014

Start with basic programming and scripting skills now

Regardless of what level of certification engineers seek from Cisco or other hardware vendors, they must also start learning programming basics -- and Python is the first step.

"Python allows you to start moving in your mind between procedural programming to object-oriented programming," said McNamara. "Switching into that mind of a software developer allows you to start working with SDN controllers like OpenDaylight, as well as OpenStack and, to a point, NSX."
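McNamara's procedural-to-object-oriented point can be made concrete with a trivial Python example (the interface inventory below is made up): the same task, counting active interfaces, written first as a procedure over plain data, then as a method on a class.

```python
# Same task twice: procedural style first, then object-oriented.

interfaces = [
    {"name": "eth0", "up": True},
    {"name": "eth1", "up": False},
    {"name": "eth2", "up": True},
]

# Procedural: a function operating on plain data structures.
def count_up(ifaces):
    return sum(1 for i in ifaces if i["up"])

# Object-oriented: state and behavior bundled together, the style
# most SDN controller frameworks expect you to work in.
class Device:
    def __init__(self, interfaces):
        self.interfaces = interfaces

    def count_up(self):
        return sum(1 for i in self.interfaces if i["up"])

print(count_up(interfaces))           # 2
print(Device(interfaces).count_up())  # 2
```

Nothing here is controller-specific; the shift is purely in how state and behavior are organized, which is exactly the mental switch being described.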

Along those same lines, network engineers should learn Linux, and begin playing with APIs. "You will need those basic interaction skills," McNamara said.

MacVittie advises engineers to seek training that is relevant to IT management, such as Project Management Professional or ScrumMaster certifications.

Odom suggests that admins start their career path with a vendor's basic routing and switching certification. Then they should learn data center virtualization, and finally move on to OpenStack Neutron.

From there, network engineers should pick an SDN focus. The vendors and open source organizations are now differentiated enough in their strategies that engineers will have to pick a road, he said.

Monday, October 20, 2014

SDN Essentials - Part 4

Click Here for SDN Essentials - Part 1

Click Here for SDN Essentials - Part 2

Click Here for SDN Essentials - Part 3

What is an OpenFlow Controller?

An OpenFlow Controller is a type of SDN Controller that uses the OpenFlow protocol. An SDN Controller is the strategic control point in a software-defined network (SDN). An OpenFlow Controller uses the OpenFlow protocol to connect and configure the network devices (routers, switches, etc.) and to determine the best path for application traffic. There are also other SDN protocols a Controller can use, such as OpFlex, YANG, and NETCONF, to name a few.

SDN Controllers can simplify network management, handling all communications between applications and devices to effectively manage and modify network flows to meet changing needs. When the network control plane is implemented in software, rather than firmware, administrators can manage network traffic more dynamically and at a more granular level. An SDN Controller relays information to the switches/routers ‘below’ (via southbound APIs) and the applications and business logic ‘above’ (via northbound APIs).
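The relay role described above can be sketched with a toy Python controller (all names here are invented for illustration): an application states an intent through a northbound method, and the controller translates it into concrete per-switch flow entries on the southbound side.

```python
# Toy sketch of a controller relaying between its northbound API
# (application intent) and southbound API (per-switch flow entries).

class Controller:
    def __init__(self):
        self.switch_flows = {}   # switch name -> list of flow entries

    # Southbound: push one flow entry to one switch.
    def _push_flow(self, switch, entry):
        self.switch_flows.setdefault(switch, []).append(entry)

    # Northbound: an application states intent; the controller
    # translates it into concrete entries on every managed switch.
    def block_host(self, ip, switches):
        for sw in switches:
            self._push_flow(sw, {"match": {"ip_dst": ip}, "action": "drop"})

ctl = Controller()
ctl.block_host("10.0.0.66", ["s1", "s2"])
print(len(ctl.switch_flows["s1"]))         # 1
print(ctl.switch_flows["s2"][0]["action"]) # drop
```

The application never names a switch or a port; that translation from intent to device configuration is the controller's whole job.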

In particular, OpenFlow Controllers create a central control point to oversee a variety of OpenFlow-enabled network components. The OpenFlow protocol is designed to increase flexibility by eliminating proprietary protocols from hardware vendors.

OpenFlow Controller Protocol

SDN and OpenFlow Use Cases

When choosing an SDN Controller, IT organizations should evaluate the OpenFlow functionality supported by the Controller, as well as the vendor roadmap. IT organizations should understand existing functionality and ensure newer versions of OpenFlow and optional features are supported (for example, IPv6 support is not part of the OpenFlow v1.0 standard but is part of v1.3).

A sampling of OpenFlow Controllers include:

  • Reference Learning Switch Controller: This Controller comes with the Reference Linux distribution, and can be configured to act as a hub or as a flow-based learning switch. It is written in C.
  • NOX: NOX is a Network Operating System that provides control and visibility into a network of OpenFlow switches. It supports concurrent applications written in Python and C++, and it includes a number of sample controller applications.
  • Beacon: Beacon is an extensible Java-based OpenFlow Controller. Built on an OSGi framework, it allows OpenFlow applications built on the platform to be started, stopped, refreshed and installed at run-time without disconnecting switches.
  • Helios: Helios is an extensible C-based OpenFlow Controller built by NEC, targeting researchers. It also provides a programmatic shell for performing integrated experiments.
  • Programmable Flow: Programmable Flow from NEC automates and simplifies network administration for better business agility, and provides a network-wide programmable interface to unify deployment and management of network services with the rest of IT infrastructure. Programmable Flow supports both OpenFlow 1.3 and 1.0, and was the first to be certified by the Open Networking Foundation.
  • Vyatta: The Brocade Vyatta Controller is based on OpenDaylight’s Helium release, announced in September 2014. It incorporates OpenStack orchestration, and while it is open sourced, Brocade will offer a commercial version.
  • BigSwitch: BigSwitch released a closed-source Controller based on Beacon that targets production enterprise networks. It features a user-friendly CLI for centrally managing your network.
  • SNAC: SNAC is a Controller targeting production enterprise networks. It is based on NOX 0.4, and features a flexible policy-definition language and a user-friendly interface for configuring devices and monitoring events.
  • Maestro: Maestro is an extensible Java-based OpenFlow Controller released by Rice University. It has support for multi-threading and targets researchers.

OpenFlow is currently being driven by the Open Networking Foundation (ONF).

What is an OpenDaylight Controller?

OpenDaylight is an open source project aimed at enhancing software-defined networking (SDN) by offering a community-led and industry-supported framework. It is open to anyone, including end users and customers, and it provides a shared platform for those with SDN goals to work together to find new solutions.

Under the Linux Foundation, OpenDaylight includes support for the OpenFlow protocol, but can also support other SDN standards. It is meant to be configurable in order to support a plethora of SDN interfaces, including, but not limited to, the OpenFlow protocol.

The OpenFlow protocol, considered the first SDN standard, defines the open communications protocol that allows the Controller to work with the forwarding plane and make changes to the network. This gives businesses the ability to better adapt to their changing needs, and have greater control over their networks.

The OpenDaylight Controller can be deployed in a variety of production network environments. It is built on a modular controller framework and can provide support for other software-defined networking standards and upcoming protocols. The OpenDaylight Controller exposes open northbound APIs, which are used by applications. These applications use the Controller to collect information about the network, run algorithms to conduct analytics, and then use the Controller to create new rules throughout the network.

The OpenDaylight Controller is implemented solely in software and runs within its own Java Virtual Machine (JVM). This means it can be deployed on any hardware and operating system platform that supports Java. For best results, it is suggested that the OpenDaylight Controller run on a recent Linux distribution with JVM 1.7 or later.

Overview of OpenDaylight Controller

What is a Floodlight Controller?

Floodlight is an SDN Controller offered by Big Switch Networks that works with the OpenFlow protocol to orchestrate traffic flows in a software-defined networking (SDN) environment. OpenFlow is one of the first and most widely used SDN standards; it defines the open communications protocol in an SDN environment that allows the Controller (brains of the network) to speak to the forwarding plane (switches, routers, etc.) to make changes to the network.

The SDN controller is responsible for maintaining all of the network rules and providing the necessary instructions to the underlying infrastructure on how traffic should be handled. This enables businesses to better adapt to their changing needs and have better control over their networks.

Released under an Apache 2.0 license, Floodlight can be included in commercial packages. The Apache License is a free software license that allows users to use Floodlight for almost any purpose.

The Floodlight Controller can be advantageous for developers because it is written in Java and offers them the ability to easily adapt software and develop applications. Included are representational state transfer application program interfaces (REST APIs) that make it easier to interface with the product, and the Floodlight website offers coding examples that help developers build on the product.
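As an illustration of that REST interface, the sketch below only builds (and never sends) an HTTP request against Floodlight's switch-listing endpoint. The host and port are assumed defaults, and the endpoint path is taken from Floodlight's documented examples, so treat the URL as illustrative of the pattern rather than a tested call.

```python
import urllib.request

# Build (but do not send) a request against Floodlight's REST API.
# Host/port are assumed defaults; the path follows the Floodlight
# documentation's switch-listing example.
controller = "http://127.0.0.1:8080"
req = urllib.request.Request(
    controller + "/wm/core/controller/switches/json",
    headers={"Accept": "application/json"},
)

print(req.get_method())   # GET
print(req.full_url)
# Actually sending it would be: urllib.request.urlopen(req).read()
```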

Tested with both physical and virtual OpenFlow-compatible switches, the Floodlight Controller can work in a variety of environments and can coincide with what businesses already have at their disposal. It can also support networks where groups of OpenFlow-compatible switches are connected through traditional, non-OpenFlow switches.

The Floodlight Controller is compatible with OpenStack, a set of software tools that help build and manage cloud computing platforms for both public and private clouds. Floodlight can be run as the network backend for OpenStack using a Neutron plugin that exposes a networking-as-a-service model with a REST API that Floodlight offers.

How Floodlight Controller works in SDN Environments

What is Ryu Controller?

Ryu, Japanese for “flow,” is an open, software-defined networking (SDN) Controller designed to increase the agility of the network by making it easy to manage and adapt how traffic is handled. In general, the SDN Controller is the “brains” of the SDN environment, communicating information “down” to the switches and routers with southbound APIs, and “up” to the applications and business logic with northbound APIs.

The Ryu Controller provides software components, with well-defined application program interfaces (APIs), that make it easy for developers to create new network management and control applications. This component approach helps organizations customize deployments to meet their specific needs; developers can quickly and easily modify existing components or implement their own to ensure the underlying network can meet the changing demands of their applications.

How a Ryu Controller Fits in SDN Environments

The Ryu Controller source code is hosted on GitHub and managed and maintained by the open Ryu community. OpenStack, an open collaboration focused on developing a cloud operating system that can control an organization's compute, storage and networking resources, supports deployments of Ryu as the network Controller.

Ryu is written entirely in Python, and all of its code is available under the Apache 2.0 license, open for anyone to use. The Ryu Controller supports the NETCONF and OF-Config network management protocols, as well as OpenFlow, one of the first and most widely deployed SDN communications standards.

The Ryu Controller can use OpenFlow to interact with the forwarding plane (switches and routers) to modify how the network will handle traffic flows. It has been tested and certified to work with a number of OpenFlow switches, including Open vSwitch and offerings from Centec, Hewlett-Packard, IBM, and NEC.
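The canonical first Ryu application is a MAC-learning switch. The framework-free Python sketch below shows only the core learning logic such an app implements; the Ryu event classes and OpenFlow message plumbing are deliberately omitted, so this is a sketch of the idea, not a runnable Ryu app.

```python
# Core logic of a MAC-learning switch (the classic first Ryu app),
# stripped of the Ryu/OpenFlow plumbing.

FLOOD = "flood"

class LearningSwitch:
    def __init__(self):
        self.mac_to_port = {}   # learned MAC address -> switch port

    def packet_in(self, src_mac, dst_mac, in_port):
        # Learn which port the sender lives behind.
        self.mac_to_port[src_mac] = in_port
        # Forward out the known port, or flood if the destination is
        # unknown (a real Ryu app would also install a flow entry here
        # so future packets never reach the controller).
        return self.mac_to_port.get(dst_mac, FLOOD)

sw = LearningSwitch()
print(sw.packet_in("aa:aa", "bb:bb", 1))   # flood (bb:bb unknown)
print(sw.packet_in("bb:bb", "aa:aa", 2))   # 1 (aa:aa was learned)
```

In an actual Ryu application this logic lives inside a packet-in event handler, and the returned decision becomes an OpenFlow packet-out or flow-mod message.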

What is a Cisco XNC (Extensible Network Controller)?

In order to keep up with changing software-defined networking (SDN) environments, Cisco created the XNC (Extensible Network Controller); its support of OpenFlow, the most widely used SDN communications standard, helps it integrate into varied SDN deployments, enabling organizations to better control and scale their networks.

As an SDN Controller, which is the “brains” of the network, Cisco XNC uses OpenFlow to communicate information “down” to the forwarding plane (switches and routers), with southbound APIs, and “up” to the applications and business logic, with northbound APIs. It enables organizations to deploy and even develop a variety of network services, using representational state transfer (REST) application program interfaces (API), as well as Java APIs.

The XNC is Cisco’s implementation of the OpenDaylight stack. Cisco is a contributor to the OpenDaylight initiative, which is focused on developing open standards for SDN that promote innovation and interoperability. Cisco XNC is designed to deliver cutting-edge OpenDaylight technologies as commercial, enterprise-ready solutions.

As a result, Cisco XNC provides functionality required for production environments, such as:

  • Monitoring, topology-independent forwarding (TIF), high availability and network slicing applications
  • Advanced troubleshooting and debugging capabilities
  • Support for the Cisco Open Network Environment (ONE) Platform Kit (onePK), in addition to its OpenFlow support

Cisco XNC can run on a virtual machine (VM) or on a bare-metal server, and can be used to manage third-party switches, as long as they support OpenFlow. It uses the Open Services Gateway Initiative (OSGi) framework, which offers the modularity and extensibility that business-critical applications require.

What is the Brocade Vyatta SDN Controller?

An SDN Controller in a software-defined network (SDN) is the “brains” of the network. It is the strategic control point in the SDN network, relaying information to the switches/routers ‘below’ (via southbound APIs) and the applications and business logic ‘above’ (via northbound APIs). Recently, as organizations deploy more SDN networks, Controllers have also been tasked with federating between SDN Controller domains, using common application interfaces, such as the Open vSwitch Database (OVSDB).

The OpenDaylight Project, announced in April 2013 and hosted by the Linux Foundation, was created to advance SDN and NFV adoption as a community-led, industry-supported open source framework. The aim of the OpenDaylight Project is to offer a functional SDN platform that users can deploy directly, without the need for other components. In addition, contributors and vendors can deliver add-ons and other pieces that offer more value on top of OpenDaylight.

Brocade, as a charter member of the OpenDaylight Project, announced its own OpenDaylight-based SDN Controller in September 2014. The Brocade Vyatta Controller was in development for five months prior to its launch, and will be based on the OpenDaylight Project’s Helium code release, due out this fall. General availability for the Vyatta Controller is on track for November 2014.

OpenDaylight Controller Architecture

Vyatta Controller Features

The Vyatta Controller, the first commercial Controller built directly from OpenDaylight code, allows users to freely optimize their network infrastructure to address the needs of their specific workloads. Features of the Vyatta Controller include:

  • A common SDN domain for multi-vendor networks and virtual machines (VM)
  • Single-source technical support for Brocade SDN Controller domains
  • Easy-to-use installation tools and developer support
  • Pre-tested packages and services optimized for the different needs of service providers and traditional network operations
  • Flexibility for OpenDaylight applications developed on the Vyatta Controller

The Vyatta Controller is part of Brocade’s “new IP” vision and will incorporate OpenStack orchestration, OpenDaylight-based control, as well as virtual and physical network gear. The Vyatta Controller is open source, but Brocade will offer a commercial version.

Brocade will also offer applications for the Vyatta Controller. The first two will be sold separately from the Controller:

  • The Path Explorer: calculates optimal paths through the network; available with the first release of the Vyatta Controller
  • Volumetric Traffic Management: recognizes elephant flows and volume-based attacks; slated for early 2015

What is VMware NSX?

VMware NSX is the network virtualization and security platform that emerged from VMware after it acquired Nicira in 2012. The solution de-couples network functions from the physical devices, in a way that is analogous to de-coupling virtual servers (VMs) from physical servers. In order to de-couple the new virtual network from the traditional physical network, NSX natively re-creates the traditional network constructs (ports, switches, routers, firewalls, etc.) in virtual space. In the past, it was possible to see and touch the switch port that a server connected to; now it isn't. These constructs still exist with VMware NSX, but they can no longer be touched, which is why the virtual network is sometimes harder to conceptualize.

There are two different product editions of NSX: NSX for vSphere and NSX for Multi-Hypervisor (MH). It's speculated that they will merge down the road, but for current or prospective users of NSX, it doesn't matter, because the two editions support different use cases. NSX for vSphere is ideal for VMware environments, while NSX for MH is designed to integrate into cloud environments that leverage open standards, such as OpenStack.

NSX for vSphere
The most talked about and documented version of VMware NSX is purpose-built for vSphere environments, otherwise referred to as NSX for vSphere. NSX for vSphere will be deployed 90% of the time, as it has native integration with other VMware platforms, such as vCenter and vCloud Automation Center (vCAC). NSX for vSphere offers logical switching, in-kernel routing, in-kernel distributed firewalling, and edge-border L4-7 devices that offer VPN, load balancing, dynamic routing, and firewall capabilities.

It is the culmination of the original networking solution from VMware, vCloud Networking and Security (vCNS), and the Network Virtualization Platform (NVP) from Nicira. In addition, NSX acts as a platform and integrates with third parties, such as Palo Alto Networks and F5.

NSX for MH

The second edition of VMware NSX is the next-generation NVP product that originally emerged out of Nicira. NSX for MH has no native integration with vCenter because it was purpose-built from the ground up to support open cloud environments, such as OpenStack and CloudStack. As an example, NSX for MH offers native integration into OpenStack by supporting the OpenStack Neutron APIs. This means OpenStack can be deployed as the cloud management platform (CMP), while NSX takes responsibility for creating and configuring logical ports, logical switches, logical routers, security groups, and other networking services.

While there isn’t native integration with vCenter, NSX for MH does still support the vSphere, KVM, and Xen hypervisors, though it has fewer networking features than NSX for vSphere. The integration isn’t “native” in the sense that a user would not configure NSX for MH through a GUI; it is meant to be API-driven from a cloud platform.

What is Open vSwitch?

In order to define what Open vSwitch (OVS) is, it's important to first understand virtual switching and the new access layer. In the past, servers physically connected to a hardware-based switch located in the data center. When VMware popularized server virtualization, the access layer changed from having to connect to a physical switch to being able to connect to a virtual switch. This virtual switch is a software layer that resides in a server hosting virtual machines (VMs). VMs, and now also containers, such as Docker, have logical or virtual Ethernet ports, and these logical ports connect to a virtual switch.

There are three popular virtual switches – VMware virtual switch (standard & distributed), Cisco Nexus 1000V, and Open vSwitch (OVS).

OVS was created by the team at Nicira, which was later acquired by VMware. OVS was created to meet the needs of the open source community, since there was no feature-rich virtual switch offering designed for Linux-based hypervisors, such as KVM and Xen. OVS has quickly become the de facto virtual switch for Xen environments, and it is now playing a large part in other open source projects, like OpenStack.

OVS supports NetFlow, sFlow, port mirroring, VLANs, LACP, etc. From a control and management perspective, OVS leverages OpenFlow and the Open vSwitch Database (OVSDB) management protocol, which means it can operate both as a soft switch running within the hypervisor, and as the control stack for switching silicon.

OVS differs from the commercial offerings from VMware and Cisco. One point worth noting about OVS is that there is no native Controller or manager, like the Virtual Supervisor Module (VSM) in the Cisco 1000V or vCenter in the case of VMware's distributed switch. Open vSwitch is meant to be controlled and managed by third-party Controllers and managers. For example, it can be driven by an OpenStack plug-in or directly from an SDN Controller, such as OpenDaylight. This doesn't mean a Controller is necessary; it is possible to deploy OVS on all servers in an environment and let them operate with traditional MAC learning functionality.
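The fallback behavior just described can be sketched in a few lines. This is a minimal, purely illustrative model of MAC learning, not OVS source code:

```python
class LearningSwitch:
    """Toy model of the MAC-learning behavior a switch falls back to
    when no controller is attached."""

    def __init__(self):
        self.mac_table = {}  # MAC address -> port it was last seen on

    def handle_frame(self, src_mac, dst_mac, in_port, num_ports):
        # Learn: remember which port the source MAC arrived on.
        self.mac_table[src_mac] = in_port
        # Forward: known destination goes out one port; unknown floods
        # out every port except the one the frame came in on.
        if dst_mac in self.mac_table:
            return [self.mac_table[dst_mac]]
        return [p for p in range(1, num_ports + 1) if p != in_port]
```

After both hosts have sent a frame, traffic between them is delivered out a single port instead of being flooded.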

What is OVSDB?

Simply put, Open vSwitch Database (OVSDB) is a management protocol in a software defined networking (SDN) environment. OVSDB was created by the Nicira team that was later acquired by VMware. OVSDB was part of Open vSwitch (OVS), which is a feature-rich, open source virtual switch designed for Linux-based hypervisors. Most network devices allow for remote configuration using legacy protocols, such as simple network management protocol (SNMP). The focus with OVS was to create a modern, programmatic management protocol interface – OVSDB was the answer.

OVSDB Diagram

While it’s sometimes assumed that OpenFlow can do it all, this is not the case. For SDN Controller deployments with OVS, OpenFlow is still used to program flow entries, but OVSDB is used to configure OVS itself. Configuring OVS means doing things like creating, deleting, and modifying bridges, ports, and interfaces. If OVS is deployed in a standalone (non-OpenFlow) environment, there is no reason OVSDB can’t be used by itself to configure OVS. While this is possible, very few standalone network management platforms exist that support OVS or, specifically, native OVSDB; some management platforms may simply use scripting with bash or Python.
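To make the division of labor concrete, the sketch below constructs an OVSDB JSON-RPC "transact" request of the kind a management client would send to create a bridge. It follows the wire format later standardized in RFC 7047; note that a real transaction would also update the root table's bridges column to reference the new row, which is omitted here for brevity.

```python
import json

def ovsdb_add_bridge(bridge_name, request_id=0):
    """Build an OVSDB JSON-RPC 'transact' request that inserts a row
    into the Bridge table (simplified; see RFC 7047 for the full form)."""
    op = {
        "op": "insert",
        "table": "Bridge",
        "row": {"name": bridge_name},
        "uuid-name": "new_bridge",   # symbolic handle for the new row
    }
    return {
        "method": "transact",
        "params": ["Open_vSwitch", op],  # database name, then operations
        "id": request_id,
    }

# The serialized message a client would write to the OVSDB socket.
msg = json.dumps(ovsdb_add_bridge("br0"))
```

This is the same kind of operation the `ovs-vsctl add-br br0` command performs under the hood.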

While OVSDB was introduced to the world along with OVS, the Open vSwitch Database is now supported by switch platforms other than OVS. Network vendors such as Cumulus, Arista, and Dell, to name a few, now support OVSDB. By supporting the Open vSwitch Database, these vendors are integrating their hardware platforms with SDN and network virtualization solutions.

What is OpenStack Networking?

OpenStack is an open source, community-driven cloud management platform, sometimes referred to as a cloud operating system. In reality, it's a collection of projects and APIs that can be implemented with open source or commercial technologies. OpenStack Networking, otherwise referred to as Neutron, is one of many core projects within OpenStack; another core and popular project is Nova, or OpenStack Compute. Prior to being called Neutron, OpenStack Networking was called Quantum, but a trademark conflict forced the OpenStack Foundation to change the name. Going back a little further, networking was actually part of the Nova project and was called Nova-networking.

A lack of flexibility and features in Nova-networking resulted in OpenStack Networking becoming a standalone project. OpenStack Networking, or Neutron, provides APIs for network constructs such as ports, interfaces, switches, routers, floating IPs, security groups, etc. The APIs exposed by Neutron can almost be thought of as a software-defined networking (SDN) northbound API, but in reality, Neutron itself has nothing to do with SDN.
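As a rough illustration of what these APIs look like on the wire, the helpers below build request bodies in the shape the Neutron v2.0 API expects for creating a network and a port. The field names follow the OpenStack Networking API reference; the helper functions themselves are hypothetical.

```python
import json

def build_network_create(name, admin_state_up=True):
    """JSON body for POST /v2.0/networks (Neutron v2.0 API)."""
    return {"network": {"name": name, "admin_state_up": admin_state_up}}

def build_port_create(network_id, device_id=None):
    """JSON body for POST /v2.0/ports, attaching a port to a network."""
    port = {"network_id": network_id}
    if device_id is not None:
        port["device_id"] = device_id   # e.g. the Nova instance UUID
    return {"port": port}

# Create a logical network for a web tier.
req = json.dumps(build_network_create("web-tier"))
```

Whatever plug-in is loaded (Open vSwitch, a vendor SDN controller, or a hardware switch integration) receives these same abstract constructs and decides how to realize them.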

How OpenStack Networking is Managed

As long as a networking technology has a plug-in for OpenStack Neutron, it can smoothly integrate with OpenStack. This means it is up to the implementer to determine how the network constructs are actually deployed: a port, interface, switch, or router can be physical or virtual, and part of an SDN or a traditional network environment; it doesn't matter. The only requirement is that the vendor (or open source technology) support the Neutron APIs. For example, Big Switch, VMware, Cisco, Nuage, PLUMgrid, Arista, Juniper, Brocade, and many more support OpenStack Networking. Arista integrates natively by configuring hardware switches to support OpenStack, while VMware integrates natively by configuring virtual switches to tie back into OpenStack.

From an end user’s perspective, this is a huge win. This means that as an end user, you should be able to replace one vendor with another quite easily in an OpenStack environment. In theory, this is true, but many vendors are supporting proprietary extensions with their Neutron integration efforts, potentially making them hard to swap out. This is something to consider as an end user, especially if you are thinking about deploying a commercial offering integrated with OpenStack.

What Is Cisco ACI?

Cisco ACI (Application Centric Infrastructure) is the solution that emerged from Cisco following its acquisition of Insieme, a spin-in company it funded for more than two years. ACI is seen by many as Cisco's software-defined networking (SDN) offering for data center and cloud networks.

How Cisco ACI Works

Cisco ACI is a tightly coupled, policy-driven solution that integrates software and hardware. The hardware for Cisco ACI is based on the Cisco Nexus 9000 family of switches. The software and integration points for ACI include several components, such as the Data Center Policy Engine, additional data center pods, and non-directly attached virtual and physical leaf switches. While there isn't an explicit reliance on any specific virtual switch, at this point policies can only be pushed down to virtual switches if Cisco's Application Virtual Switch (AVS) is used, though there has been talk about extending this to Open vSwitch in the near future.

To a large extent, the network for Cisco ACI is no different than what has been deployed over the past several years in enterprise data centers. What is different, however, is the management and policy framework, along with the protocols used in the underlying fabric.

In a leaf-spine ACI fabric, Cisco provisions a native Layer 3 IP fabric that supports equal-cost multipath (ECMP) routing between any two endpoints in the network, but uses overlay protocols, such as Virtual Extensible LAN (VXLAN), under the covers to allow any workload to exist anywhere in the network. Supporting overlay protocols is what gives the fabric the ability to place machines, either physical or virtual, in the same logical network (Layer 2 domain), even while running Layer 3 routing down to the top of each rack. Cisco ACI supports VLAN, VXLAN, and Network Virtualization using Generic Routing Encapsulation (NVGRE), which can be combined and bridged together to create a logical network/domain as needed.
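The VXLAN encapsulation that makes this possible is simple at the packet level: an 8-byte header carries a 24-bit VXLAN Network Identifier (VNI) that names the logical Layer 2 domain a frame belongs to. A minimal sketch of packing and parsing that header, per RFC 7348:

```python
import struct

def vxlan_header(vni):
    """Pack the 8-byte VXLAN header (RFC 7348): flags byte 0x08
    (VNI-valid bit set), 3 reserved bytes, 24-bit VNI, 1 reserved byte."""
    if not 0 <= vni < 2 ** 24:
        raise ValueError("VNI must fit in 24 bits")
    # First 32-bit word: flags in the top byte; second word: VNI << 8.
    return struct.pack("!II", 0x08 << 24, vni << 8)

def parse_vni(header):
    """Recover the VNI from a VXLAN header."""
    _, word2 = struct.unpack("!II", header)
    return word2 >> 8
```

The 24-bit VNI is why VXLAN can express roughly 16 million logical segments, compared with the 4,096 available to VLAN tags.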

From a management perspective, the central Controller of the ACI solution, the Application Policy Infrastructure Controller (APIC), manages and configures the policy on each of the switches in the ACI fabric. Hardware becomes stateless with Cisco ACI, much like with Cisco's UCS Computing Platform: no configuration is tied to the device. The APIC acts as a central repository for all policies and has the ability to rapidly deploy and re-deploy hardware as needed, using this stateless computing model.

Cisco ACI also serves as a platform for other services that are required within the data center or cloud environment. Through the APIC, third-party services can be integrated for advanced security, load balancing, and monitoring. Vendors and products such as SourceFire, Embrane, F5, Cisco ASA, and Citrix can integrate natively into the ACI fabric and be part of the policy defined by the admin. Through northbound APIs on the APIC, ACI can also integrate with different types of cloud environments.

How Cisco ACI Integrates with Other Products

What is Cisco Application Policy Infrastructure Controller (APIC)?

The Cisco Application Policy Infrastructure Controller (APIC) is the single point of policy and management of a Cisco Application Centric Infrastructure (ACI) fabric. Cisco APIC re-defines how Cisco networks are managed and operated. In traditional Cisco networks, each node is managed independently, via the command-line interface (CLI), which is time consuming, tedious, and error prone. In ACI networks, network admins use the APIC to manage the network – they no longer need to access the CLI on every node to configure or provision network resources.

Cisco APIC differs from more traditional software-defined networking (SDN) Controllers and designs in that the control plane is not de-coupled from the data plane. Cisco APIC is only used to configure the policy; the policy is then delivered to and instantiated on each of the nodes in the network. This allows the Cisco APIC to implement higher orders of logic and better integrate with the consumers of the network: the systems and application teams.

A common example is deploying a 3-tier application. In the past, doing so meant administrators needed to know the VLANs, IP ranges, firewall policy, load-balancing policy, and so on. These are all network-centric terms that the consumers of the network didn't know. The value of Cisco ACI is in changing this paradigm and becoming application-centric, as opposed to network-centric. There will still be low-level parameters that need to be configured by a network admin, but these are hidden and abstracted away from the server and application administrators.

With Cisco ACI, endpoint groups (EPGs) are created that may be “web,” “app,” and “db.” Contracts are created between EPGs to implement the desired functionality once (e.g., QoS, firewalling, load balancing). As the business demands grow and more hosts and VMs are required, all that has to happen is that the new machine be placed in the proper EPG; every other change happens dynamically. One of the values of EPGs is that they are not, or do not have to be, based on traditional network constructs, like IP subnets or VLANs.
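The EPG-and-contract idea can be captured in a toy model. The class below is purely illustrative (it is not the APIC object model): endpoints are placed in EPGs, contracts permit traffic between EPGs on specific ports, and adding a new machine requires nothing beyond assigning its EPG.

```python
class Fabric:
    """Toy model of ACI-style endpoint groups (EPGs) and contracts."""

    def __init__(self):
        self.epg_of = {}        # endpoint name -> EPG name
        self.contracts = set()  # (consumer_epg, provider_epg, port)

    def add_endpoint(self, endpoint, epg):
        # Placing a machine in an EPG is the only per-host step;
        # existing contracts then apply to it automatically.
        self.epg_of[endpoint] = epg

    def add_contract(self, consumer, provider, port):
        self.contracts.add((consumer, provider, port))

    def allowed(self, src, dst, port):
        # Within one EPG traffic is permitted; across EPGs it needs
        # a matching contract.
        s, d = self.epg_of[src], self.epg_of[dst]
        return s == d or (s, d, port) in self.contracts
```

Note that nothing here mentions VLANs or subnets: policy is expressed entirely in terms of groups, which is the point of the abstraction.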

Industry shifts are redefining IT at every level, creating a need for application agility that enables businesses to address changes quickly. Traditional methods use a siloed operational stance, with no common operational model for the applications, networks, security, and cloud teams. A common operational model offers simpler operations, better performance, and scalability.

To address these needs, Cisco introduced its Application Centric Infrastructure (ACI). It resides in the data center and is built with centralized automation and policy-driven application profiles. Cisco positions ACI as offering the flexibility of software with the scalability of hardware performance.

Along with the Cisco Nexus 9000 Series Switches and the Cisco Application Virtual Switch (AVS), a major component of Cisco ACI is the Cisco Application Policy Infrastructure Controller (APIC). APIC is the single point of automation and management in both physical and virtual environments, allowing operators to build fully automated and multitenant networks with scalability. The main function of Cisco APIC is to offer policy authority and resolution methods for the Cisco ACI, as well as devices attached to Cisco ACI.
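As a small example of that programmability, APIC's REST API is driven by posting JSON. The sketch below builds the authentication body commonly sent to the /api/aaaLogin.json endpoint; the field names follow Cisco's published REST examples, but treat the helper as illustrative rather than a complete client.

```python
import json

def apic_login_body(username, password):
    """JSON body for POST /api/aaaLogin.json on a Cisco APIC
    (simplified sketch; a real client would also handle the
    session token returned in the response)."""
    return {"aaaUser": {"attributes": {"name": username, "pwd": password}}}

# Serialized body a client would POST to https://<apic>/api/aaaLogin.json
body = json.dumps(apic_login_body("admin", "secret"))
```

After authenticating, the same pattern (JSON objects posted to the APIC) is used to create tenants, EPGs, and contracts.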

Cisco APIC Features

  • The capability to build and enforce application-centric network policies
  • An open standards framework, with support for northbound and southbound application program interfaces (APIs)
  • Integration of third-party Layer 4-7 services, virtualization, and management
  • Scalable security for multitenant environments
  • A common policy platform for physical, virtual, and cloud computing environments

The Cisco APIC uses Cisco OpFlex, a southbound protocol in software-defined networking (SDN), to enable policies to be applied across physical and virtual switches.

The OpFlex approach differs from the OpenFlow communications protocol, one of the first and most widely deployed SDN standards, in that OpFlex focuses mainly on ensuring consistent policy enforcement across the underlying infrastructure. While OpFlex centralizes policies, OpenFlow looks to centralize all functions on the SDN Controller. OpFlex's creators believe this shift allows the Controller to offer greater resiliency, availability, and scalability, by moving some of the intelligence to hardware devices using established network protocols.

With the Cisco APIC, northbound APIs allow for swift integration with existing management and orchestration frameworks. It is also compatible with OpenStack, which is developing an open cloud operating system to control the compute, storage and networking resources across the organization. This provides consistency across physical, virtual, and cloud environments when using the Cisco ACI policy. Southbound APIs enable users to extend the Cisco ACI policies to existing virtualization and Layer 4-7 services, as well as networking components.
