Network Enhancers - "Delivering Beyond Boundaries"

Showing posts with label NFV. Show all posts

Monday, July 13, 2015

Oracle Unveils Four New VNFs - Policy Management, SBC, Converged Application Server & Services Gatekeeper


Oracle has recently announced a portfolio of four fully virtualized network function (VNF) solutions for communications service providers (CSPs). These Network Functions Virtualization-enabled products - Oracle Communications Policy Management, Oracle Communications Converged Application Server, Oracle Communications Services Gatekeeper, and Oracle Communications Session Border Controller - are aimed at helping CSPs conquer the layers of complexity inherent in bridging physical and virtual environments as they continue on their journey toward NFV.

According to Oracle, the newest version of Oracle Communications Policy Management is a key component in the deployment of next-generation LTE networks. As network virtualization continues to advance, CSPs require the flexibility, reliability, and depth of feature functionality that enables them to evolve their networks through LTE, Voice over LTE (VoLTE), IP Multimedia Subsystem (IMS) and virtualization. In addition, the solution is designed to enable tight integration with charging and billing systems, to provide valuable network insights via an integrated policy analytics solution and to allow providers to serve all subscribers from a single policy management instance regardless of network access type.

Sunday, July 12, 2015

Ooredoo Kuwait rolls out unified cloud for NFV


Ooredoo Kuwait has successfully deployed network functions virtualisation (NFV) architecture and its IT applications on a single, unified cloud, it was announced this week.

The single cloud is based on VMware's vCloud for NFV platform, and was rolled out as part of Ooredoo's 'Unify' initiative, which aims to take advantage of a software-defined data centre architecture, as well as NFV and software-defined networking (SDN).

VMware's professional services team partnered with the operator and its virtual network functions (VNF) vendor, Huawei, to design and deploy the VMware vCloud for NFV platform and virtual network functions into a test environment in less than three months. The solution supported the seamless transfer of the virtualised Core IMS (IP Multimedia Subsystem) from test environment to Ooredoo's production IT environment, and enabled Ooredoo to conduct its first voice-over-LTE (VoLTE) call.

"Reliability and availability were key factors in our decision to work with VMware. The production-proven VMware technology and our previous experience with the high levels of technical support offered by VMware professional services gave us the confidence to move to a unified cloud platform," said Mijbil Al-Ayoub, director of Corporate Comms, Ooredoo Kuwait.

"The speed at which we have been able to trial our unified cloud and onboard the VoLTE service functions into our IT network has exceeded our expectations. We did a joint R&D project that took only two months to complete, and we finalised the development of our vIMS product that can be deployed in a production, commodity infrastructure, automatically in only 3.5 hours."

The Ooredoo platform uses VMware's policy-based resource allocation features to maintain application service level agreement (SLA) enforcement, and the software-defined networking capabilities of the VMware NSX network virtualisation platform to address NFV network scalability needs, multi-tenancy with micro-segmentation, capacity on demand and QoS. VMware NSX is a main pillar in segregating tenants across Ooredoo's single converged private cloud.

"We're delighted to be working with such forward-thinking customers who understand the value of a platform-based deployment strategy for NFV," said David Wright, vice president, Telecommunications and NFV Group, VMware.

"By making the decision to deploy and manage a horizontally virtualised platform capable of supporting multivendor VNFs and IT workloads, Ooredoo Kuwait has created a powerful service environment capable of delivering high-value services to customers while managing operational costs."

Monday, April 6, 2015

Cisco enhances NFV offering with Embrane acquisition


Networking giant Cisco announced its intent to acquire network function virtualisation (NFV) specialist Embrane for an undisclosed sum this week, a move intended to bolster the company’s networking automation capabilities, reports Business Cloud News.

“With agility and automation as persistent drivers for IT teams, the need to simplify application deployment and build the cloud is crucial for the datacentre,” explained Cisco’s corporate development lead Hilton Romanski.

“As we continue to drive virtualization and automation, the unique skillset and talent of the Embrane team will allow us to move more quickly to meet customer demands. Together with Cisco’s engineering expertise, the Embrane team will help to expand our strategy of offering freedom of choice to our customers through the Nexus product portfolio and enhance the capabilities of Application Centric Infrastructure (ACI),” he said, adding that the purchase also builds on previous commitments to open standards, open APIs, and playing nicely in multi-vendor environments.

Beyond complementing Cisco’s ACI efforts, Dante Malagrinò, one of the founders of Embrane and its chief product officer, said the move will help further the company’s goal of driving software-hardware integration in the networking space, and offer Embrane a level of scale few vendors in this space have.

“Joining Cisco gives us the opportunity to continue our journey and participate in one of the most significant shifts in the history of networking:  leading the industry to better serve application needs through integrated software-hardware models,” he explained.

“The networking DNA of Cisco and Embrane together drives our common vision for an Application Centric Infrastructure.  We both believe that innovation must be evolutionary and enable IT organizations to transition to their future state on their own terms – and with their own timelines.  It’s about coexistence of hardware with software and of new with legacy in a way that streamlines and simplifies operations.”

Cisco is quickly working to consolidate its NFV offerings, and more recently its OpenStack services, as the vendor continues to target cloud service providers and telcos looking to revamp their datacentres. In March it was revealed Cisco struck a big deal with T-Systems, Deutsche Telekom’s enterprise-focused subsidiary, that will see the German incumbent roll out Cisco’s OpenStack-based infrastructure in its datacentre in Biere, near Magdeburg, as well as a virtual hotspot service for SMEs.


Saturday, March 28, 2015

Understanding NFV in 6 videos


If the adage says a picture is worth a thousand words, then a video should be worth a million. In today’s post I offer you a quick way to fully understand Network Functions Virtualization (NFV), Software Defined Networking (SDN), and some related trends through six short videos, ranging from the very basics of virtualization and cloud concepts to the depths of the architecture proposed for today’s NFV installations.

What “the heck” are virtualization and cloud about?

A short and self-explanatory video from John Qualls, President and CEO of Bluelock, covering the very basics of the transition of data centres towards virtualized models.



What is the difference between NFV and SDN?

This great summary from Prayson Pate, Chief Technologist at Overture Networks, highlights the differences and similarities between NFV and SDN, and how they complement each other in the telecoms industry.



Let us talk about the architecture

Now that the basics are established, we can look at the overall architecture. These diagrams from HP and Intel show the main components involved.








So, wait a minute, what is that thing they call OpenFlow?

The following video from Jimmy Ray Purser, technical host for Cisco TechWise and BizWise TV, explains OpenFlow in a quick and straightforward way.




What about OpenStack?

This piece from Rackspace, featuring Niki Acosta and Scott Sanchez, gives a great summary of OpenStack, its origin, and its situation in the industry.





Now, what are the challenges faced and some real cases for the carriers?

Now that the concepts are clear and defined, we can study a couple of real use-case scenarios in carrier networks and their architecture, as well as methods for addressing the challenges faced in the NFV evolution. In the following video Tom Nolle, Chief Architect for CloudNFV, presents Charlie Ashton, VP Marketing and US Business Development at 6Wind, and Martin Taylor, CTO at Metaswitch Networks, covering use cases such as the Evolved Packet Core (EPC) and Session Border Controllers (SBC) based on NFV.


Wrapping up, where are the vendors and the operators at with NFV?

The following pitch features Barry Hill, VP Sales & Marketing at Connectem Inc., at the IBM Smart Camp 2013 hosted in Silicon Valley. It covers the market opportunity for NFV, Connectem's specific solution for operators' EPC, and a brief check on where carriers stand with it.




Although the ETSI ISG for NFV will most likely not publish its standards for another year, NFV is already a reality, and vendors and operators alike are working on it in one way or another. Whether you are just starting to explore this trend or already mastering it, I hope these videos gave you something about it you did not know before.




Wednesday, February 4, 2015

The Role of “VNFaaS”


The cloud and NFV have a lot in common.  Most NFV is expected to be hosted in the cloud, and many of the elements of NFV seem very “cloud-like”.  These obvious similarities have been explored extensively, so I’m not going to bother with them.  Are there any other cloud/NFV parallels, perhaps some very important ones?  Could be.

NFV is all about services, and the cloud is all about “as-a-service”, but which one?  Cloud computing in IaaS form is hosted virtualization, and so despite the hype it’s hardly revolutionary.  What makes the cloud a revolution in a multi-dimensional way is SaaS, software-as-a-service.  SaaS displaces more costs than IaaS and requires less technical skill on the part of the adopter.  With IaaS alone, it will be hard to get the cloud to 9% of IT spending, while with SaaS and nothing more you could get to 24%.  With “platform services” that create cloud-specific developer frameworks, you could go a lot higher.

NFV is a form of the cloud.  It’s fair to say that current conceptions of function hosting justified by capex reductions are the NFV equivalent of IaaS, perhaps doomed to the same low level of penetration of provider infrastructure spending.  It’s fair to ask whether there’s any role for SaaS-like behavior in NFV, perhaps Virtual-Network-Function-as-a-Service, or VNFaaS.

In traditional NFV terms we create services by a very IaaS-like process.  Certainly for some services that’s a reasonable approach.  Could we create services by assembling “web services” or SaaS APIs?  If a set of VNFs can be composed, why couldn’t we compose a web service that offered the same functionality?  We have content and web and email servers that support a bunch of independent users, so it’s logical to assume that we could create web services to support multiple VNF-like experiences too.

At the high level, it’s clear that VNFaaS elements would probably have to be multi-tenant, which means that the per-tenant traffic load would have to be limited.  A consumer-level firewall might be enough to tax the concept, so what we’d be talking about is representing services of a more transactional nature, the sort of thing we already deliver through RESTful APIs.  We’d have to be able to separate users through means other than virtualization, of course, but that’s true of web and mail servers today and it’s done successfully.  So we can say that for at least a range of functions, VNFaaS would be practical.
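To make the multi-tenancy point concrete, here is a minimal sketch (in Python, with entirely hypothetical names; no standard VNFaaS API exists) of a single shared "firewall" element serving several tenants, separated by tenant ID rather than by per-tenant virtualization:

```python
# Hypothetical sketch of a multi-tenant VNFaaS element: one shared
# "firewall" service instance serves many tenants, separating them by
# tenant ID rather than by per-tenant virtualization. All names are
# illustrative.

class FirewallService:
    """A single shared instance holding per-tenant rule sets."""

    def __init__(self):
        self._rules = {}  # tenant_id -> list of blocked IP prefixes

    def add_rule(self, tenant_id, blocked_prefix):
        self._rules.setdefault(tenant_id, []).append(blocked_prefix)

    def check(self, tenant_id, src_ip):
        """Transactional, REST-like query: is this source allowed?"""
        for prefix in self._rules.get(tenant_id, []):
            if src_ip.startswith(prefix):
                return "DENY"
        return "ALLOW"

fw = FirewallService()            # one instance, many tenants
fw.add_rule("tenant-a", "10.0.")
print(fw.check("tenant-a", "10.0.0.5"))   # DENY
print(fw.check("tenant-b", "10.0.0.5"))   # ALLOW (separate tenant)
```

The point is that per-tenant state is just data, so one instance can serve many low-traffic tenants, which is exactly the transactional, RESTful profile described above.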

From a service model creation perspective, I’d argue that VNFaaS argues strongly for my often-touted notion of functional orchestration.  A VNFaaS firewall is a “Firewall”, and so is one based on a dedicated VNF or on a real box.  We decompose the functional abstraction differently for each of these implementation choices.  So service modeling requirements for VNFaaS aren’t really new or different; the concept just validates function/structure separation as a requirement (one that sadly isn’t often recognized).

Managing a VNFaaS element would be something like managing any web service, meaning that you’d either have to provide an “out-of-band” management interface that let you ask a system “What’s the status of VNFaaS-Firewall?” or send the web service for the element a management query as a transaction.  This, IMHO, argues in favor of another of my favorite concepts, “derived operations” where management views are synthesized by running a query against a big-data repository where VNFaaS elements and other stuff has their status stored.  That way the fact that a service component had to be managed in what would be in hardware-device terms a peculiar way wouldn’t matter.
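A minimal sketch of that "derived operations" idea, assuming an invented repository schema: element status records land in a shared store, and the management view is synthesized by querying it rather than by interrogating each element directly:

```python
# Sketch of "derived operations": element status is written to a shared
# repository, and a management view is synthesized by querying it.
# The repository schema here is invented for illustration.

status_repo = [
    {"element": "VNFaaS-Firewall", "tenant": "a", "state": "up"},
    {"element": "VNFaaS-Firewall", "tenant": "b", "state": "degraded"},
    {"element": "VNFaaS-DNS",      "tenant": "a", "state": "up"},
]

def derived_status(element):
    """Answer "What's the status of <element>?" from the repository."""
    states = {r["state"] for r in status_repo if r["element"] == element}
    if not states:
        return "unknown"
    return "up" if states == {"up"} else "degraded"

print(derived_status("VNFaaS-Firewall"))  # degraded
print(derived_status("VNFaaS-DNS"))       # up
```

Because the view is computed from stored status, it doesn't matter that the underlying component is a multi-tenant web service rather than a hardware device.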

What we can say here is that VNFaaS could work technically.  However, could it add value?  Remember, SaaS is a kind of populist concept; the masses rise up and do their own applications defying the tyranny of internal IT.  Don’t tread on me.  It’s hard to see how NFV composition becomes pap for the masses, even if we define “masses” to mean only enterprises with IT staffs.  The fact is that most network services are going to be made up of elements in the data plane, which means that web-service-and-multi-tenant-apps may not be ideal.  There are other applications, though, where the concept of VNFaaS could make sense.

A lot of things in network service are transactional in nature rather than continuous flows.  DNS comes to mind.  IMS offers another example of a transactional service set, and it also demonstrates that it’s probably necessary to be able to model VNFaaS elements, if only to allow something like IMS/HSS to be represented as an element in other “services”.  You can’t deploy a DNS or IMS every time somebody sets up a service or makes a call.  Content delivery is a mixture of flows and transactions.  And it’s these examples that just might demonstrate where VNFaaS could be heading.

“Services” today are data-path-centric because they’re persistent relationships between IT sites or users.  If we presumed that mobile users gradually moved us from being facilities-and-plans-centric to being context-and-event-centric, we could presume that a “service” would be less about data and more about answers, decisions.  A zillion exchanges make a data path, but one exchange might be a transaction.  That means that as we move toward mobile/behavioral services, contextual services, we may be moving toward VNFaaS, to multi-tenant elements represented by objects but deployed for long-term use.

Mobile services are less provisioned than event-orchestrated.  The focus of services shifts from the service model to a contextual model representing the user.  We coerce services by channeling events based on context, drawing from an inventory of stuff that looks a lot like VNFaaS.  We build “networks” not to support our exchanges but to support this transfer of context and events.

If this is true, and it’s hard for me to see how it couldn’t be, then we’re heading away from fixed data paths and service relationships and toward extemporaneous decision-support services.  That is a lot more dynamic than anything we have now, which would mean that the notion of service agility and the management of agile, dynamic, multi-tenant processes is going to be more important than the management of data paths.  VNFs deployed in single-tenant service relationships have a lot of connections because there are a lot of them.  VNFaaS links, multi-tenant service points, have to talk to other process centers but only edge/agent processes have to talk to humans, and I shrink the number of connections, in a “service” sense, considerably.  The network of the future is more hosting and less bits, not just because bits are less profitable but because we’re after decisions and contextual event exchanges—transactions.

This starts to look more and more like a convergence of “network services” and “cloud services”.  Could it be that VNFaaS and SaaS have a common role to play because NFV and the cloud are converging and making them two sides of the same coin?  I think that’s the really profound truth of our time, NFV-wise.  NFV is an accommodation of cloud computing to two things—flows of information and increasing levels of dynamism.  In our mobile future we may see both services and applications become transactional and dynamic, and we may see “flows” developing out of aggregated relationships among multi-tenant service/application components.  It may be inevitable that whatever NFV does for services, it does for the cloud as well.

Monday, November 17, 2014

One Operations Model for Networks and Services

Courtesy - Tom Nolle, President of CIMI Corp


OSS/BSS is part and parcel of the business of network operators, the way they manage their people, sell their services, bill and collect, plan…you get the picture.  The question of how OSS/BSS will accommodate new stuff like SDN and NFV is thus critical, and anything critical generates a lot of comment.  Anything that generates comment generates misinformation and hype these days, unfortunately, so it’s worth taking a look at the problem and trying to sort out the real issues.

To start with, the truth is that accommodating SDN and NFV isn’t really the issue.  The challenge operations systems face these days arises from changes in the business of the operators more than the technology.  Operators acknowledge that their traditional connection services are returning less and less on investment in infrastructure.  That means that they have to improve their revenue line and control costs.  It’s those requirements that are really framing the need for operations changes.  They’re also what’s driving SDN and NFV, so the impact of new technologies on OSS/BSS is really the impact of common drivers.

Human intervention in service processes is expensive, and that’s particularly true when we’re talking about services directed at consumers or services with relatively short lifespans.  You can afford to manually provision a 3-year VPN contract covering a thousand sites, but not a 30-minute videoconference.  We already know that site networking is what’s under the most price pressure, so most useful new services would have to move us to shorter-interval commitments (like our videoconference).  That shifts us from depending on human processes to depending on automated processes.

Human processes are easily conceptualized as workflows because people attack tasks in an orderly and sequential way.  When we set up services from an operations center, we drive a human process that when finished likely records its completion.  When we automate service setups, there’s a tendency to follow that workflow notion and visualize provisioning as an orderly sequential series of actions.

If we look at the problem from a higher perspective, we’d see that automated provisioning should really be based on events and states.  A service order is in the “ordered” state when it’s placed.  When we decide to “activate” it, we would initiate tasks to commit resources, and as these tasks evolved they’d generate events representing success or failure.  Those events would normally change the state of the service to “activated” or “failed” over time.  The sum of the states and the events represents the handling model for a given service.  This is proven logic, usually called “finite-state machine” behavior.  Protocol handlers are written this way, even described this way with diagrams.

The processes associated with setting up a service can now be integrated into the state/event tables.  If you get an “activate” in the “order” state, you initiate the provisioning process and you transition to the “activating” state.  If that provisioning works, the success event then transitions you to the “activated” state where you initiate the process of starting billing and notifying the customer of availability.  You then move to “in-service”.  If provisioning fails, you can define how you want to handle the failure, define what processes you want invoked.  This is what event-driven means.
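The state/event handling described above can be sketched as a table keyed by (state, event) pairs; the states, events, and attached processes here are illustrative, not drawn from any particular OSS:

```python
# A minimal finite-state-machine sketch of the service lifecycle:
# each (state, event) pair maps to a next state and the process to
# run on that transition. States and process names are illustrative.

HANDLERS = {
    ("ordered",    "activate"):   ("activating", "start provisioning"),
    ("activating", "success"):    ("activated",  "start billing, notify customer"),
    ("activating", "failure"):    ("failed",     "run failure handling"),
    ("activated",  "in_service"): ("in-service", "begin normal operation"),
    ("in-service", "failure"):    ("degraded",   "run in-service recovery"),
}

def handle(state, event):
    """Look up the (state, event) pair; return (next_state, process)."""
    if (state, event) not in HANDLERS:
        raise ValueError(f"no handler for event {event!r} in state {state!r}")
    return HANDLERS[(state, event)]

# Walk one orderly path through the table.
state = "ordered"
for event in ["activate", "success", "in_service"]:
    state, process = handle(state, event)
    print(state, "->", process)
```

Note how the same "failure" event invokes different processes depending on the current state, which is precisely why a fixed workflow can't substitute for a state/event table.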

The reason for this discourse is that OSS/BSS systems need to be event-driven to support service automation, for the simple reason that you can’t assume that automated activity is going to generate orderly progression.  A failure during service activation is not handled the same way as one when the service is “in-service” to the customer, and we can’t use the same processes to handle the two.  So what is necessary in operations systems is to become event-driven, and that is an architectural issue.

We always hear about conferences on things like billing systems and their response to SDN or NFV.  That’s a bad topic, because we should not be talking about how processes respond to technologies.  We should be talking about a general model for event-driven operations.  If we have one, billing issues resolve themselves when we map billing processes into our state/event structure.  If we don’t have one, then we’d have to make every operations process technology-aware, and that’s lousy design not to mention impractical in terms of costs and time.

But a “service state/event table” isn’t enough.  If we have a VPN with two access points, we have three interdependent services, each of which would have to have its own state/event processes, and each of which would have to contribute and receive events to/from the “master” service-level table.  What I’m saying is that every level of service modeling needs to have its own state/event table, each synchronized with the higher-layer tables and each helping synchronize subordinate tables.  The situation isn’t unlike how multi-layer protocols work.  Every protocol layer has a state/event table, and all the tables are synchronized by event-passing across the layer boundaries.

Where do our new technologies come into this?  First, they come into it in the same way the old ones do.  You can’t have automated operations that sometimes work and sometimes don’t depending on what you’ve translated to SDN or NFV and what’s still legacy.  All service and network operations have to be integrated or you lose the benefits of service automation.  Second, this illustrates that we have a level of modeling and orchestration that’s independent of technology—higher levels where we ask for “access” or a “VPN” and lower levels where we actually do the stuff needed based on the technology we have to manipulate to get the functionality required.

We could deploy SDN and NFV inside a “black box” that could also contain equivalent legacy equipment functionality.  “AccessNetwork” or “IPCore” could be realized many different ways, but could present a common high-level state/event process table and integrate with operations processes via that common table.  Any technology-specific stuff could then be created and managed inside the box.  Or, we could have a common architecture for state/event specification and service modeling that extended from the top to the bottom.  In this case, operations can be integrated at all levels, and service and network automation fully realized.

Our dilemma today is that every operator is looking for the benefits of event-driven operations, but there’s really nobody working on it from top to bottom.  If you are going to mandate operations integration into SDN or NFV state/event-modeled network processes, then you have to define how that’s done.  But SDN and NFV aren’t doing management.  Management bodies like the TMF are really not doing “SDN” or “NFV” either; they’re defining how SDN or NFV black boxes might integrate with them.

We can’t solve problems by sticking them inside an abstraction and then asserting that it’s someone else’s responsibility to peek through the veil.  We have to tear down the barriers and create a model of service automation that works for all our services, all our technology choices.

Saturday, November 15, 2014

SDN and NFV Strategies: Global Service Provider Survey


Providers believe that NFV and SDN are a fundamental change in telecom network architecture that will deliver benefits in new services and revenue, operational efficiency, and capex savings.

Top drivers for service provider NFV and SDN investments and deployments are:


  • Service agility and quick time to revenue
  • A global view of network conditions across different vendors’ equipment, network layers, and technologies (routers, switches, DSLAMs, mobile core, mobile backhaul, etc.)
  • The ability to simplify provisioning of services and virtualize their networks through a consolidated management plane, obviating the operational tedium of utilizing various vendor-specific management systems


Key findings and recommendations:


  • Nearly every operator we talked to was likely to deploy SDN or NFV in some aspect of their network at some point: 97% will deploy SDN, 93% will deploy NFV, and the rest don’t know yet.
  • Many carriers in 2014 are moving from their proof of concept (PoC) investigations/evaluations for SDN and NFV to working with vendors on the development and productization of their software that will become the basis for commercial deployments.
  • Operators want SDN/NFV in most parts of their networks.
  • Service providers are targeting many more use cases than these top 5 (see discussions/definitions later) for NFV in 2014-2015:
    • Business vE-CPE
    • Service chaining
    • vIMS core
    • vCDNs
    • vPE


Wednesday, November 12, 2014

Which is Better – SDN or NFV?



Software-defined networking (SDN), network functions virtualization (NFV) and network virtualization (NV) are all complementary approaches. They each offer a new way to design, deploy and manage the network and its services:


  • SDN – separates the network’s control (brains) and forwarding (muscle) planes and provides a centralized view of the distributed network for more efficient orchestration and automation of network services.
  • NFV - focuses on optimizing the network services themselves. NFV decouples the network functions, such as DNS, Caching, etc., from proprietary hardware appliances, so they can run in software to accelerate service innovation and provisioning, particularly within service provider environments.
  • NV – ensures the network can integrate with and support the demands of virtualized architectures, particularly those with multi-tenancy requirements.


Commonalities

SDN, NFV and NV each aim to advance a software-based approach to networking for more scalable, agile and innovative networks that can better align and support the overall IT objectives of the business.  It is not surprising that some common doctrines guide the development of each. For example, they each aim to:


  • Move functionality to software
  • Use commodity servers and switches over proprietary appliances
  • Leverage programmatic application interfaces (APIs)
  • Support more efficient orchestration, virtualization and automation of network services


SDN and NFV Are Better Together

These approaches are mutually beneficial, but are not dependent on one another.  You do not need one to have the other. However, the reality is that SDN makes NFV and NV more compelling, and vice versa.  SDN contributes network automation that enables policy-based decisions to orchestrate which network traffic goes where, while NFV focuses on the services, and NV ensures the network’s capabilities align with the virtualized environments they are supporting. The advancement of all these technologies is key to evolving the network to keep pace with the innovations of all the people and devices it’s connecting.

Wednesday, November 5, 2014

What to know about Cisco's NX-API 1.0 update

Courtesy - Matt Oswalt

On his Keeping It Classless blog, author Matt Oswalt reviews Cisco's update of its NX-API, which is now shipping as version 1.0. The API debuted on Cisco's Nexus 9000 platform and offers a more programmatic way of interacting with a Cisco Nexus switch, writes Oswalt. Although he mentions that Cisco is really just playing "catch up" with the release of the interface, it is worth looking into.

Oswalt reviews specific updates, including the introduction of JSON-RPC, a format for communicating information bidirectionally. Included in the post are screenshots of how JSON-RPC is used. Oswalt also looks at the NX-API Sandbox, which has a better format despite not being that useful for experienced developers.
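For readers who want to see the format, the JSON-RPC payload NX-API accepts is a batch of per-command objects using "method": "cli". The sketch below only builds the payload; the switch URL and credentials shown in the commented request are deployment-specific placeholders, not real values:

```python
import json

# Sketch of the JSON-RPC 2.0 request format NX-API accepts: a batch of
# per-command objects posted to the switch's /ins endpoint. This only
# builds the payload; no request is sent, and the URL/credentials in
# the comment below are placeholders.

def nxapi_jsonrpc_payload(commands):
    """Build a JSON-RPC batch payload for a list of CLI commands."""
    return [
        {
            "jsonrpc": "2.0",
            "method": "cli",
            "params": {"cmd": cmd, "version": 1},
            "id": i,
        }
        for i, cmd in enumerate(commands, start=1)
    ]

payload = nxapi_jsonrpc_payload(["show version", "show interface brief"])
print(json.dumps(payload, indent=2))

# Sending it would look roughly like (illustrative):
# requests.post("https://<switch>/ins", data=json.dumps(payload),
#               headers={"content-type": "application/json-rpc"},
#               auth=("admin", "<password>"))
```

Each command gets its own `id`, so responses in the batch can be matched back to the command that produced them.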



Sunday, November 2, 2014

Five commercial SDN controllers to know about


While open source SDN controllers were some of the first to emerge, a number of vendors have begun to offer commercial SDN controllers as part of their programmable networking portfolios.

As with open source controllers, it's still the early days for commercial controllers, said Andrew Lerner, research director at Gartner.

"We would estimate there are less than 1,000 mainstream production SDN deployments globally," Lerner said. The most well-known commercial controllers to date include platforms that manage SDN overlays, as well as those that control hardware and software network switches.

Lerner added that when considering controllers, it's important to recognize the amount of "SDN-washing" in the market -- or vendors referring to non-SDN concepts as SDN. "Orchestration, agility and dynamic provisioning are all fantastic and solve real problems but by themselves are not SDN."

We rounded up five key commercial SDN controllers.

1. Cisco Application Policy Infrastructure Controller (APIC) is considered a distributed system implemented as a cluster of controllers. Within Cisco's Application Centric Infrastructure (ACI), APIC acts as a single point of control. It provides a central API, a central repository of global data and a repository of policy data. The controller can automatically apply application-centric network policies and functions with data model-based declarative provisioning. The primary goal of APIC is to provide policy authority and policy resolution mechanisms for Cisco ACI devices to optimize application performance and network efficiency. Automation is said to occur as a direct result of policy resolution and of rendering its effects onto the ACI fabric.

APIC communicates with ACI spine and leaf switch nodes to distribute policies and deliver a number of administrative functions. Because the controller is not directly involved in data plane forwarding, the cluster won't lose any data center functionality if APIC components are disconnected.

2. HP Virtual Application Networks (VAN) SDN Controller controls policy and forwarding decisions in an SDN network running OpenFlow-enabled switches in the data center or campus infrastructure. HP is also working with Verizon and Intel to develop an app used for WAN bandwidth provisioning using the VAN controller.

The controller also enables centralized control and automation. Within an HP SDN environment, the VAN controller delivers integration between the network and business system. It uses programmable interfaces that enable the orchestration of application and automation of network functions. The controller also provides control of the network, including functions such as network topology discovery.

The VAN controller can also be clustered, allowing one controller to take over the functions of another if it fails. With regard to security, the controller uses authentication and authorization methods: authorized SDN applications can interact with the controller, while unauthorized applications cannot gain network access. The southbound connections between the OpenFlow switches and the HP controller are also secured and encrypted.

3. NEC ProgrammableFlow PF6800 Controller is at the center of NEC's ProgrammableFlow OpenFlow-based Network Fabric. It provides a central point of control and management for both virtual and physical networks. The controller is programmable as well as standards-based, and it integrates with both OpenStack and Microsoft System Center Virtual Machine Manager for added network management and orchestration. The controller also includes NEC's virtual tenant network technology, which allows for isolated, multi-tenant networks.

4. Nuage Networks Virtualized Services Controller (VSC) provides a full view of per-tenant network and service topologies while externalizing network service templates defined through Nuage Networks' Virtualized Services Directory. The directory is a policy engine that uses network analytics and rules to enforce role-based permissions. The VSC sends messages based on those rules to Nuage's Virtual Routing and Switching platform. When the platform senses the creation or deletion of a virtual machine, it asks the SDN controller whether a policy is in place for that tenant; if a rule exists, network connectivity is established immediately.

5. VMware NSX Controller is considered a distributed state management system that controls virtual networks and overlay transport tunnels. It is the central control point for all logical switches within a network. The controller maintains information of virtual machines, hosts, logical switches and VXLANs, while using northbound APIs to talk to applications.

When working with the controller, applications communicate what they require, and the controller programs all vSwitches under NSX control in a southbound direction to meet those requirements. The controller can run in two ways within NSX: either as a distributed cluster of virtual machines in a vSphere environment, or on physical appliances for environments with mixed hypervisors.

Thursday, October 30, 2014

Network as a service: The core of cloud connectivity


One of the complexities of the cloud is the fact that the cloud model has so many variations -- public, private, hybrid -- and many implementation options driven by multiple cloud providers and cloud software vendors. One common element in all clouds is cloud networking, which is always essential in connecting cloud resources to users, and increasingly for making connections among cloud resources and providers. Any enterprise that's building a cloud strategy has to develop a cloud networking strategy, and that strategy has to be versatile enough to accommodate rather than limit cloud choices.

A good place to start is with the notion of network as a service, or NaaS. Public cloud services include NaaS features implicitly (the Internet is an "access NaaS" for most public clouds) or explicitly (with VPN capabilities), and private and hybrid clouds will almost always control network connections using a NaaS model.

NaaS, for a cloud network builder, is an abstract model of a network service that can be at either Layer 2 (Ethernet, VLAN) or Layer 3 (IP, VPN) in the OSI model. A cloud user defines the kind of NaaS that their cloud connectivity requires, and then uses public or private tools to build that NaaS. NaaS can define how users access cloud components, and also how the components themselves are connected in a private or hybrid cloud.

The best-known example of NaaS in the public cloud space is Amazon's Elastic IP address service. This service lets any cloud host in EC2, wherever it is located, be represented by a constant IP address. The Elastic IP NaaS makes the cloud look like a single host. This is an example of an access-oriented NaaS application.
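The behavior is easy to picture with a toy model: clients keep targeting one fixed address, and the cloud remaps that address whenever the backing host changes. Everything below (class and method names, addresses, instance IDs) is hypothetical and only illustrates the idea, not the EC2 API.

```python
# Toy model of the "stable address" idea behind an Elastic IP: clients
# always target one fixed address, and the cloud remaps it when the
# backing host changes. Names here are illustrative, not the EC2 API.

class ElasticIPTable:
    def __init__(self):
        self._assoc = {}              # elastic IP -> current instance id

    def associate(self, eip, instance_id):
        self._assoc[eip] = instance_id

    def resolve(self, eip):
        return self._assoc.get(eip)

nat = ElasticIPTable()
nat.associate("203.0.113.10", "i-aaaa")       # initial host
assert nat.resolve("203.0.113.10") == "i-aaaa"

nat.associate("203.0.113.10", "i-bbbb")       # host replaced; address unchanged
assert nat.resolve("203.0.113.10") == "i-bbbb"
```

From the user's perspective the cloud looks like a single host at 203.0.113.10, which is exactly the access-oriented NaaS abstraction described above.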


NaaS as cloud connectivity

In the private cloud, the most common example of NaaS comes from OpenStack's Neutron APIs, which let users build models of network services at Layer 2 or Layer 3 and then add their virtual machine (VM) instances to these models. This NaaS model builds inter-component connections that link cloud application elements with each other, and also defines internetwork gateways to publish application services to users. Not all private cloud stacks have NaaS or Neutron-like capabilities, though, and where they don't exist it will be necessary to use management/orchestration tools, popularly called DevOps tools, to build NaaS services in a private cloud deployment.
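The Neutron-style workflow, create a network, attach a subnet, then plug VM ports into it, can be sketched with a small in-memory model. The object shapes loosely echo Neutron's network/subnet/port resources, but all names here are illustrative rather than the real API.

```python
# In-memory sketch of the Neutron-style NaaS workflow: define an L2
# network, attach an L3 subnet, then plug VM ports into it. The shapes
# loosely mirror Neutron's resources; this is not the real API.

import ipaddress

class NaaS:
    def __init__(self):
        self.networks, self.ports = {}, []

    def create_network(self, name):
        self.networks[name] = {"subnets": []}

    def create_subnet(self, net, cidr):
        hosts = ipaddress.ip_network(cidr).hosts()   # address pool
        self.networks[net]["subnets"].append({"cidr": cidr, "pool": hosts})

    def create_port(self, net, vm):
        # Neutron-like behavior: a port draws an address from the pool.
        subnet = self.networks[net]["subnets"][0]
        port = {"network": net, "device": vm, "ip": str(next(subnet["pool"]))}
        self.ports.append(port)
        return port

naas = NaaS()
naas.create_network("app-net")
naas.create_subnet("app-net", "10.0.0.0/24")
p = naas.create_port("app-net", "vm-1")
print(p["ip"])   # -> 10.0.0.1, the first host address in 10.0.0.0/24
```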

For hybrid clouds -- the direction most cloud users expect to be going with their own cloud plans -- NaaS is likely to be a three-step process. First, you'll need to define the public cloud NaaS service, and then the cloud connectivity you'll need for your private cloud, and finally the bridge between them. In most cases, this "hybrid bridge" is a gateway between the two NaaS services in your cloud, but it's often a gateway that operates on two levels. First, it has to provide an actual network connection between the public and private cloud, which in many cases will mean setting up a router device or software-based router. Second, it has to ensure that the directories that provide addressing for cloud application components (DHCP to assign addresses, DNS to decode URLs to addresses) are updated when components are deployed or moved. This is actually a part of cloud integration, and it may be done using DevOps tools or commercial products for cloud integration.
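The second gateway duty, keeping the naming directory in sync as components move between clouds, can be sketched as a tiny registry; all service names and addresses below are hypothetical.

```python
# Sketch of the hybrid bridge's directory duty: when a cloud component
# is redeployed or moved, the naming records must be updated so users
# still reach it at the same name. All names here are hypothetical.

class CloudDirectory:
    def __init__(self):
        self.dns = {}                      # service name -> current address

    def deploy(self, service, address):
        self.dns[service] = address        # register on first deployment

    def migrate(self, service, new_address):
        self.dns[service] = new_address    # rebind after a move

d = CloudDirectory()
d.deploy("orders.example.internal", "10.1.0.5")     # private cloud address
d.migrate("orders.example.internal", "172.16.9.8")  # burst to public cloud
assert d.dns["orders.example.internal"] == "172.16.9.8"
```

In a real deployment the rebinding step would be driven by DevOps tooling updating DHCP reservations and DNS records, as the text notes.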

Cloud networking is critical for the cloud's success, and approaching it as the union of NaaS domains is a good way to plan and implement the necessary elements, and keep them running optimally for efficient cloud use.


Wednesday, October 29, 2014

Carrier SDN vs. Enterprise SDN


The software-defined networking trend has been picking up momentum and new use cases have been continuously evolving since its initial launch. In this article, we will take a look at this evolution, from the very basics to carrier-class SDN. We will do this through the lens of fundamental SDN architectural characteristics. These basic ingredients ultimately define the type and scale of market applications, as well as the road map of products implementing them. Additionally, we will demonstrate how these ingredients are centered on the pivotal SDN control plane distribution.

To start, you may find it interesting that SDN for enterprises and carriers assumes a new model for networking, a model different from traditional IP. The basic IP network model consists of autonomous junctions directing packets based on address-prefix tables, a kind of location-based set of “area codes” — 1.1.x goes left, 1.2.x goes right. This model represents a clear way for anyone joining the World Wide Web to send and receive data packets from anyone else, as long as we know the location coordinates (country, zip code, town, street), or in this case the IP address. In this model, packets find their way and zero in on their target hop by hop, from source to destination.

The SDN model for networking is completely different, looking more like a programmable crossbar of sources and destinations, consumers and producers, subscribers and services or functions. Such a matrix assumes that you can physically get from any row to any column, recursively. But it also assumes that each specific “patch paneling” is under strict programmable control for both tapping and actual connectivity. No entity can talk to any other unless explicitly provisioned to do so. This is achieved using the key SDN element, the “controller”: a software entity that allocates whole flows or network conversations. This is a significant shift from the IP “ant farm” model of packets making their way from one junction to the next.
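The default-deny, explicitly provisioned crossbar can be captured in a few lines; the controller holds the only record of who may talk to whom. All names below are purely illustrative, not any vendor's API.

```python
# Minimal sketch of the "programmable crossbar": no pair of endpoints
# may exchange traffic unless the controller has explicitly provisioned
# that flow. Pure illustration, not any vendor's API.

class Controller:
    def __init__(self):
        self.allowed = set()              # provisioned (src, dst) pairs

    def provision(self, src, dst):
        self.allowed.add((src, dst))

    def admit(self, src, dst):
        # Default deny: only explicitly provisioned conversations pass.
        return (src, dst) in self.allowed

ctl = Controller()
ctl.provision("subscriber-7", "video-cache")
assert ctl.admit("subscriber-7", "video-cache")      # provisioned: passes
assert not ctl.admit("subscriber-7", "billing-db")   # never provisioned: dropped
```

Contrast this with IP, where reachability is the default and must be restricted after the fact; here connectivity exists only where it has been programmed.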

The straightforward implementation of the model outlined above was initially defined as a centralized controller setting up each flow, from every source to every destination. However, this flow setup needs to be completed through every physical hop from source to destination. This is an implementation detail resulting from how networks are physically scaled, where not every endpoint is directly connected to every other endpoint.

But is this really just an implementation detail? Not exactly. It turns out that the job of the controller becomes exponentially more complex as the diameter of the network, or the average number of hops from source to destination, increases. This scaling aspect immediately identifies the first key distinction of carrier SDN: federation. Non-carrier SDN products that stick with a centralized approach are restricted to networks with a diameter of one or two, namely spine-leaf enterprise data centers or point-to-point site circuits. And even in these environments, due to the physical meshing factor, scaling large enough has to assume moderate dynamics in flow allocation for centralization to work.

Most SDN architectures that are on the path to carrier-grade, or more generalized and less restricted SDN, choose federated distributed-overlay architecture. In this model, the diameter of the network is surrounded by SDN edges, taking the hop-to-hop topology factor out of the scale equation. SDN edges control the mesh, allocate dynamic patch-points in the flow tables, and link the “outer-lay” identities while letting traditional IP bridging and routing autonomous junctions connect the underlay locations. This federated SDN architecture opens up the possibilities for carrier-scale use cases where SDN overlays are surrounding and leveraging the carrier distribution center networks, carrier metropolitan networks, and also the national backbones that are already in place. Carriers no longer need a greenfield network in order to deploy SDN applications.

Now that we have an SDN overlay in place, we can look at global information propagation across the overlay network. This is becoming the most fundamental and mission-critical SDN requirement and distribution element. Why? Since we have federated SDN control at the edges of the network, how will each SDN edge node know what’s behind every other SDN edge node, or where the logical row and column identities of the model physically reside? We could keep global information centralized and distribute just the flow setups, but that blocking point would limit the dynamics and overall flows per second, restricting the performance of the network. Because of this, most overlay architectures distribute global awareness by pre-pushing the global data records to all nodes on the edge. This information push allows SDN edges to know tenancy filtering and to set up new flows concurrently.
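The pre-push pattern can be sketched as follows: the central system replicates the global reachability table to every edge, and each edge then resolves flow setups from its local replica with no per-flow round trip to the center. All names are illustrative.

```python
# Sketch of the "pre-push" pattern: the central system replicates the
# global reachability table to every SDN edge so each edge can set up
# flows locally and concurrently, without a round-trip per flow.

import copy

GLOBAL_MAP = {"vm-a": "edge-1", "vm-b": "edge-2", "vm-c": "edge-2"}

class Edge:
    def __init__(self, name):
        self.name, self.table = name, {}

    def receive_push(self, table):
        self.table = copy.deepcopy(table)   # full replica held at the edge

    def setup_flow(self, dst):
        # Resolved from the local replica -- no call back to the center.
        return self.table[dst]

edges = [Edge("edge-1"), Edge("edge-2")]
for e in edges:
    e.receive_push(GLOBAL_MAP)              # the pre-push

assert edges[0].setup_flow("vm-b") == "edge-2"
```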





This data push approach is once again a clear demarcation from traditional approaches, distancing carrier SDN and carrier use-case categorization from SDN on enterprise networks. Carrier use cases that require subscriber awareness when setting up flows cannot pre-push all subscriber information to all locations; maintaining such a massive replication and distribution of data consistently is simply not feasible. Subscriber-aware carrier SDN architectures have to allow for a non-blocking, pull and publish-subscribe method of sharing global information. This is typically done by leveraging the underlay IP network not only for hop-to-hop transport, but also for implementing a distributed, non-blocking IP hash table or IP directory, also termed mapping. Many carrier SDN use cases are subscriber-aware or content-aware; these include mobile function chaining, evolved packet core virtualization, multimedia services, content distribution, virtual customer premises equipment, virtual private networks, and more. In general, most SDN use cases that connect subscriber flows to network functions require subscriber-aware lookup, pull, and classification. Otherwise, every possible service permutation must be managed statically, and every single new function may double the static permutation maintenance, which would be exponentially burdensome for global carrier network management and operations.
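A pull plus publish-subscribe mapping system might look like the following sketch: an edge pulls a subscriber mapping on first use, caches it, and is subscribed to invalidations so later updates are published to it. The record keys and values are hypothetical.

```python
# Sketch of the pull/publish-subscribe alternative: subscriber records
# are too numerous to replicate everywhere, so an edge pulls a mapping
# on first use, caches it, and subscribes to updates. Illustrative only.

class MappingSystem:
    def __init__(self, records):
        self.records = records
        self.subscribers = {}                 # key -> set of subscribed edges

    def pull(self, key, edge):
        self.subscribers.setdefault(key, set()).add(edge)
        return self.records[key]

    def update(self, key, value):
        self.records[key] = value
        for edge in self.subscribers.get(key, ()):   # publish the change
            edge.cache[key] = value

class Edge:
    def __init__(self):
        self.cache = {}

    def lookup(self, key, mapper):
        if key not in self.cache:                    # pull on demand only
            self.cache[key] = mapper.pull(key, self)
        return self.cache[key]

mapper = MappingSystem({"imsi-001": "pgw-east"})
edge = Edge()
assert edge.lookup("imsi-001", mapper) == "pgw-east"
mapper.update("imsi-001", "pgw-west")                # the subscriber moves
assert edge.lookup("imsi-001", mapper) == "pgw-west" # cache refreshed by publish
```

Only the records an edge actually serves are held locally, which is what makes the approach feasible at multimillion-subscriber scale.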

Lastly, we look at carrier SDN solution element density and element extendibility and programmability. Just like information sharing, these qualities are a direct result of the SDN distribution model and provide distinctions for carrier-class use cases and carrier SDN applicability. As far as SDN edge density goes, non-carrier-class SDN can potentially afford to push overlay edges all the way into the tenant system or host. This software edge is for convenience only and doesn’t provide a “real OpenFlow” approach. It is a viable low-density SDN edge option mainly for hosting, since we don’t assume large scopes per software network; rather, we expect many “small” enterprise tenants. Hence we do not assume massive global information sharing comparable to a multimillion-subscriber database or a multibillion machine-to-machine identity base. For enterprise SDN, we also don’t assume too many physical geographic locations or heavy-duty SDN edge device packaging. This, of course, is not the case for carrier SDN applications. For those, global information sharing is massive, and the number of SDN edge nodes must be kept at hundreds to thousands rather than hundreds of thousands, so the density of each SDN edge node should be quite high. Carrier SDN edge node capacity is a full rack's worth of bandwidth, with thousands of flow setups and mapping lookups per second per node, millions of concurrent flows, and millions of concurrent publish-subscribe states kept per node.

Similarly, another direct result of SDN distribution is that if SDN programmability is no longer “locked” inside the “controller,” a clear method needs to specify how distributed programmability and extendibility are delivered in each of the nodes, and how they are synced across the solution. If we refer to such distributed programmable logic as “FlowHandlers” and the variance of such logic as FlowHandler.Lib, then once again we see another immediate distinction of carrier SDN. While basic SDN solutions will have a very limited FlowHandler.Lib, basically for handling multi-tenancy virtual L2/3, carrier use cases will have a far more elaborate FlowHandler.Lib, with distinct flow-mapping logic for protocols such as SIP, SCTP, GTP, GRE, NFS, etc.

These protocols enable applications such as mobility, VoIP, voice over LTE, content distribution, transcoding, and header enrichment, to name a few. Carrier-class FlowHandler.Lib will also handle jitter buffers and TCP flow control when SDN overlays span large geo-distributions, and/or connect very different mediums such as wide area networks (WAN) and radio access networks (RAN). Carrier FlowHandlers by law, and by default, should also be able to account for each and every flow connected over the public network. Accounting for patterns is an important public network security consideration; for instance, network functions such as firewalls or tapping may be opted in and out of dynamically per carrier FlowHandler decision. Additional important functions of a carrier-class FlowHandler are the abilities to apply traffic engineering and segment flow routes in the underlay IP network, optimize long-lived backup and replication flows, and load balance core-spine links and core routes.

To summarize, we touched upon the differences and distinctions of SDN in general, and of carrier SDN vs. enterprise SDN in particular. There are many differences in technology and use cases, but even more key are the architectural ones: federation, mapping, density, and programmability. Just as with basic Ethernet and Carrier Ethernet, carrier SDN requires a lot more structure for scale and services. As we saw, most of these qualitative architectural distinctions have to do with SDN control plane distribution: the distribution model, the sharing of information, the density or level of distribution, and the differentiated programmability and carrier logic in each node. Done correctly, carrier SDN can provide both multibillion-endpoint carrier scale and five-nines-class availability for the network functions virtualization (NFV) era.


Saturday, October 25, 2014

SDN Use Cases including Network Functions Virtualization (NFV), Network Virtualization, OpenFlow and Software Defined Networking



Use Case: Network Virtualization – Multi-Tenant Networks
Location: Datacenter
Why SDN is needed: To dynamically create segregated, topologically-equivalent networks across a datacenter, scaling beyond the typical VLAN limit of roughly 4K.
Benefits achieved: Better utilization of datacenter resources (a claimed 20-30% improvement); faster turnaround in creating segregated networks, from weeks to minutes via automation APIs.

Use Case: Network Virtualization – Stretched Networks
Location: Datacenter
Why SDN is needed: To create location-agnostic networks, across racks or across datacenters, with VM mobility and dynamic reallocation of resources.
Benefits achieved: Simplified applications that can be made more resilient without complicated coding; better use of resources as VMs are transparently moved to consolidate workloads; improved recovery times in disasters.

Use Case: Service Insertion (or Service Chaining)
Location: Datacenter / Service Provider DMZ / WAN
Why SDN is needed: To create dynamic chains of L4-7 services on a per-tenant basis to accommodate self-service L4-7 service selection or policy-based L4-7 (e.g., turning on DDoS protection in response to attacks, self-service firewall, IPS services in hosting environments, DPI in mobile WAN environments).
Benefits achieved: Provisioning times reduced from weeks to minutes; improved agility and self-service allow for new revenue and service opportunities with substantially lower costs to serve.

Use Case: Tap Aggregation
Location: Datacenter / campus access networks
Why SDN is needed: To provide visibility and troubleshooting capabilities on any port in a multi-switch deployment without numerous expensive network packet brokers (NPBs).
Benefits achieved: Dramatic cost reduction, with savings of $50-100K per 24 to 48 switches in the infrastructure; less overhead in initial deployment, reducing the need to run extra cables from NPBs to every switch.

Use Case: Dynamic WAN Reroute – move large amounts of trusted data while bypassing expensive inspection devices
Location: Service Provider / Enterprise Edge
Why SDN is needed: To provide dynamic yet authenticated programmable access to flow-level bypass using APIs to network switches and routers.
Benefits achieved: Savings of hundreds of thousands of dollars in unnecessary investment in 10Gbps or 100Gbps L4-7 firewalls, load balancers and IPS/IDS that would otherwise process unnecessary traffic.

Use Case: Dynamic WAN Interconnects
Location: Service Provider
Why SDN is needed: To create dynamic interconnects at Internet interchanges between enterprise links or between service providers using cost-effective, high-performance switches.
Benefits achieved: Ability to connect instantly; reduced operational expense in creating cross-organization interconnects; ability to enable self-service.

Use Case: Bandwidth on Demand
Location: Service Provider
Why SDN is needed: To enable programmatic controls on carrier links to request extra bandwidth when needed (e.g., DR, backups).
Benefits achieved: Reduced operational expense by allowing customer self-service; increased agility, saving weeks of manual provisioning.

Use Case: Virtual Edge – Residential and Business
Location: Service Provider Access Networks
Why SDN is needed: In combination with NFV initiatives, to replace existing Customer Premises Equipment (CPE) at residences and businesses with lightweight versions, moving common functions and complex traffic handling into POPs (points of presence) or the SP datacenter.
Benefits achieved: Increased usable lifespan of on-premises equipment; improved troubleshooting; fewer truck rolls; flexibility to sell new services to business and residential customers.

Friday, October 24, 2014

NFV Essentials



What is NFV – Network Functions Virtualization?



Network Functions Virtualization (NFV) offers a new way to design, deploy and manage networking services. NFV decouples network functions, such as network address translation (NAT), firewalling, intrusion detection, domain name service (DNS) and caching, from proprietary hardware appliances so they can run in software. It is designed to consolidate and deliver the networking components needed to support a fully virtualized infrastructure, including virtual servers, storage and even other networks. It utilizes standard IT virtualization technologies that run on high-volume server, switch and storage hardware to virtualize network functions. It is applicable to any data plane processing or control plane function in both wired and wireless network infrastructures.

Example of How a Managed Router Service Would be Deployed with NFV. 





History of NFV

The concept of Network Functions Virtualization (NFV) originated with service providers looking to accelerate the deployment of new network services in support of their revenue and growth objectives. Feeling the constraints of hardware-based appliances, they wanted to apply standard IT virtualization technologies to their networks. To accelerate progress toward this common goal, several providers came together and created an ETSI Industry Specification Group for NFV, whose goal is to define the requirements and architecture for the virtualization of network functions. The group is currently working on the standards and will be delivering the first specifications soon.

The Benefits of NFV

NFV virtualizes network services via software to enable operators to:


  • Reduce CapEx: reducing the need to purchase purpose-built hardware and supporting pay-as-you-grow models to eliminate wasteful overprovisioning.
  • Reduce OpEx: reducing space, power and cooling requirements of equipment and simplifying the rollout and management of network services.
  • Accelerate Time-to-Market: reducing the time to deploy new networking services to support changing business requirements, seize new market opportunities and improve return on investment of new services. Also lowers the risks associated with rolling out new services, allowing providers to easily trial and evolve services to determine what best meets the needs of customers.
  • Deliver Agility and Flexibility: quickly scale up or down services to address changing demands; support innovation by enabling services to be delivered via software on any industry-standard server hardware.



Why SDN – Software-Defined Networking or NFV – Network Functions Virtualization Now?


Software-defined networking (SDN), network functions virtualization (NFV) and network virtualization (NV) are giving us new ways to design, build and operate networks. Over the past two decades, we have seen tons of innovation in the devices we use to access the network, the applications and services we depend on to run our lives, and the computing and storage solutions we rely on to hold all that “big data” for us. However, the underlying network that connects all of these things has remained virtually unchanged. The reality is that the demands of the exploding number of people and devices using the network are stretching its limits. It’s time for a change.

The Constraints of Hardware

Historically, the best networks – those that are the most reliable, have the highest availability and offer the fastest performance – have been built with custom silicon (ASICs) and purpose-built hardware. The “larger” the “box,” the higher the premium vendors can command, which only incentivizes the development of bigger, even more complex monolithic systems.

Because it takes a significant investment to build custom silicon and hardware, rigorous processes are required to ensure vendors get the most out of each update or new iteration. This means adding features ad hoc is virtually impossible; customers that want new or different functionality end up beholden to the vendor’s timeline. It is so challenging to make any changes to these systems, even those that are “open,” that most companies keep a team of experts (many of whom are trained and certified by the networking vendors themselves) on hand just to keep the network up and running.

The hardware predominance has truly stifled innovation in the network. It’s time for ‘out of the box’ thinking; it’s time to free the software and change everything…

The Time for Changes in Networking is Now

Thanks to advances in today’s off-the-shelf hardware, developer tools and standards, a seismic technology shift in networking to software can finally take place. It’s this shift that underlies all SDN, NFV and NV technologies: software can finally be decoupled from the hardware, so that it’s no longer constrained by the box that delivers it. This is the key to building networks that can:


  • Reduce CapEx: allowing network functions to run on off-the-shelf hardware.
  • Reduce OpEx: supporting automation and algorithmic control through increased programmability of network elements, making it simple to design, deploy, manage and scale networks.
  • Deliver Agility and Flexibility: helping organizations rapidly deploy new applications, services and infrastructure to quickly meet their changing requirements.
  • Enable Innovation: enabling organizations to create new types of applications, services and business models.


Which is Better – SDN or NFV?


Software-defined networking (SDN), network functions virtualization (NFV) and network virtualization (NV) are all complementary approaches. They each offer a new way to design, deploy and manage the network and its services:


  • SDN – separates the network’s control (brains) and forwarding (muscle) planes and provides a centralized view of the distributed network for more efficient orchestration and automation of network services.
  • NFV – focuses on optimizing the network services themselves. NFV decouples the network functions, such as DNS, Caching, etc., from proprietary hardware appliances, so they can run in software to accelerate service innovation and provisioning, particularly within service provider environments.
  • NV – ensures the network can integrate with and support the demands of virtualized architectures, particularly those with multi-tenancy requirements.


Commonalities

SDN, NFV and NV each aim to advance a software-based approach to networking for more scalable, agile and innovative networks that can better align and support the overall IT objectives of the business.  It is not surprising that some common doctrines guide the development of each. For example, they each aim to:


  • Move functionality to software
  • Use commodity servers and switches over proprietary appliances
  • Leverage programmatic application interfaces (APIs)
  • Support more efficient orchestration, virtualization and automation of network services


SDN and NFV Are Better Together

These approaches are mutually beneficial but not dependent on one another; you do not need one to have the other. However, the reality is that SDN makes NFV and NV more compelling, and vice versa. SDN contributes network automation that enables policy-based decisions to orchestrate which network traffic goes where, while NFV focuses on the services, and NV ensures the network's capabilities align with the virtualized environments they support. The advancement of all these technologies is the key to evolving the network to keep pace with the innovations of all the people and devices it's connecting.


How Does ETSI NFV Operate?


The European Telecommunication Standards Institute (ETSI) is an independent standardization organization that has been instrumental in developing standards for information and communications technologies (ICT) within Europe. It was created in 1988 as a nonprofit by the European Conference of Postal and Telecommunications Administration (CEPT), which has been the coordinating body for European telecommunications and postal organizations since 1959. The not-for-profit organization has more than 700 member organizations representing more than 62 countries around the world.

How the Work of ETSI NFV Gets Done

Most of the work of the Institute is done in committees and working groups made up of experts from member organizations. They tackle technical issues and the development of specifications and standards to support the needs of the broad membership and the European ICT industry at large.

Committees, often referred to as Technical Bodies, typically meet one to six times a year. There are three recognized types of Technical Bodies:


  • Technical Committees – semi-permanent entities within ETSI organized around standardization activities for a specific technology area.
  • ETSI Projects are established based on the needs of a particular market sector and tend to exist for a finite period of time.
  • ETSI Partnership Projects are activities within ETSI that require cooperation with other organizations to achieve a standardization goal.


Industry Specification Groups (ISG) supplement the work of Technical Bodies to address work needed around a specific technology area. Recently, a group was formed to drive standards for Network Functions Virtualization (NFV).

ETSI NFV’s Role
Service providers came together and formed an industry specification group within ETSI called the “Network Functions Virtualization” Group, which has over 100 members. The Group is focused on addressing the complexity of integrating and deploying new network services within software-defined networking (SDN) and networks that support OpenFlow.

They are working on defining the requirements and architecture for the virtualization of network functions to:


  • Simplify ongoing operations
  • Achieve high performance, portable solutions
  • Support smooth integration with legacy platforms and existing EMS, NMS, OSS, BSS and orchestration systems
  • Enable an efficient migration to new virtualized platforms
  • Maximize network stability and service levels and ensure the appropriate level of resilience


The group is currently working on the standards and will be delivering the first specifications soon.


What is OPNFV?


In September 2014, the Linux Foundation announced another open source reference platform — the Open Platform for NFV Project (OPNFV). OPNFV aims to be a carrier-grade, integrated platform that introduces new products and services to the industry more quickly. OPNFV will work closely with the European Telecommunications Standards Institute (ETSI or ETSI NFV) and others to press for consistent implementation of open standards.

The Linux Foundation, the non-profit known for its commitment to an open community, hosted the OpenDaylight Project in April 2013 to advance software-defined networking (SDN) and network functions virtualization (NFV). The project was created as a community-led and industry-supported open source framework. SDN and NFV together are part of the industry’s transition toward virtualization of networks and applications. With the integration of both, significant changes are expected in the networking environment.

OPNFV will promote an open source network that brings companies together to accelerate innovation, as well as market new technologies as they are developed. OPNFV will bring together service providers, cloud and infrastructure vendors, developers, and customers in order to create an open source platform to speed up development and deployment of NFV.

OPNFV Goals and Objectives

Not only will OPNFV put industry leaders together to hone NFV capabilities, but it will also provide consistency and interoperability. Since many NFV foundational elements already are in use, OPNFV will help with upstream projects to manage continued integration and testing, as well as address any voids in development.

In the initial phase, OPNFV will focus on building NFV infrastructure (NFVI) and Virtualized Infrastructure Management (VIM). Other objectives include:


  • Create an integrated and verified open source platform that can investigate and showcase foundational NFV functionality
  • Secure proactive participation of end users to validate that OPNFV addresses community needs
  • Form an open environment for NFV products founded on open standards and open source software
  • Contribute to and engage with upstream open source projects that will be leveraged in the OPNFV reference platform


What is NFV MANO?

Network functions virtualization (NFV) has needed proper management from its early stages, and this is now addressed by NFV management and orchestration (MANO). NFV MANO is a working group (WG) of the European Telecommunications Standards Institute Industry Specification Group (ETSI ISG NFV). It is the ETSI-defined framework for the management and orchestration of all resources in the cloud data center, including computing, networking, storage, and virtual machine (VM) resources. The main focus of NFV MANO is to enable flexible on-boarding and to sidestep the chaos that can accompany the rapid spin-up of network components.

NFV MANO is broken up into three functional blocks:


  • NFV Orchestrator: Responsible for on-boarding of new network services (NS) and virtual network function (VNF) packages; NS lifecycle management; global resource management; validation and authorization of network functions virtualization infrastructure (NFVI) resource requests
  • VNF Manager: Oversees lifecycle management of VNF instances; plays a coordination and adaptation role for configuration and event reporting between the NFVI and element/network management systems (EM/NMS)
  • Virtualized Infrastructure Manager (VIM): Controls and manages the NFVI compute, storage, and network resources


For the NFV MANO architecture to work properly and effectively, it must be integrated with open application program interfaces (APIs) in the existing systems. The NFV MANO layer works with templates for standard VNFs, and gives users the power to pick and choose from existing NFVI resources to deploy their platform or element.
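As a loose illustration of how the three functional blocks divide responsibility, the sketch below models them as plain Python classes: the orchestrator on-boards and validates a template-style VNF descriptor, the VNF manager drives instantiation, and the VIM allocates NFVI resources. All class names, fields, and the descriptor format are hypothetical, invented for this example rather than taken from the ETSI specifications.

```python
# Hypothetical sketch of the NFV MANO functional blocks; names and the
# descriptor format are illustrative, not from any ETSI specification.

class VIM:
    """Virtualized Infrastructure Manager: controls NFVI resources."""
    def __init__(self, vcpus, storage_gb):
        self.free = {"vcpus": vcpus, "storage_gb": storage_gb}

    def allocate(self, req):
        # Reject requests that exceed remaining NFVI capacity.
        if any(req[k] > self.free[k] for k in req):
            raise RuntimeError("insufficient NFVI resources")
        for k, v in req.items():
            self.free[k] -= v
        return dict(req)

class VNFManager:
    """VNF Manager: lifecycle management of VNF instances."""
    def __init__(self, vim):
        self.vim = vim
        self.instances = {}

    def instantiate(self, vnfd):
        alloc = self.vim.allocate(vnfd["resources"])
        self.instances[vnfd["name"]] = {"state": "RUNNING", "alloc": alloc}
        return vnfd["name"]

class NFVOrchestrator:
    """NFV Orchestrator: on-boards packages, validates resource requests."""
    def __init__(self, vnfm):
        self.vnfm = vnfm
        self.catalog = {}

    def onboard(self, vnfd):
        # Validation step: every descriptor must declare its resource needs.
        assert "resources" in vnfd, "VNFD missing resource requirements"
        self.catalog[vnfd["name"]] = vnfd

    def deploy(self, name):
        return self.vnfm.instantiate(self.catalog[name])

# A template-style descriptor for a standard VNF (hypothetical format).
firewall_vnfd = {"name": "vFirewall",
                 "resources": {"vcpus": 2, "storage_gb": 20}}

vim = VIM(vcpus=8, storage_gb=100)
nfvo = NFVOrchestrator(VNFManager(vim))
nfvo.onboard(firewall_vnfd)
nfvo.deploy("vFirewall")
print(vim.free)  # remaining NFVI capacity: {'vcpus': 6, 'storage_gb': 80}
```

The point of the sketch is the separation of concerns: the orchestrator never touches infrastructure directly, and the VNF manager obtains resources only through the VIM, mirroring the layering described above.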

At the Internet Engineering Task Force (IETF) meeting in March 2014, NFV MANO announced a series of adopted interfaces for the MANO architecture and noted that improvements were ongoing. An important next step for NFV MANO is to incorporate a software-defined networking (SDN) controller into the architecture.


What is ETSI ISG NFV?

The European Telecommunications Standards Institute (ETSI), an independent standardization group, has been key in developing standards for information and communications technologies (ICT) in Europe. Created in 1988 as a nonprofit, ETSI was established by the European Conference of Postal and Telecommunications Administrations (CEPT). With more than 700 member organizations, ETSI represents over 62 countries.

The ETSI Industry Specification Group for Network Functions Virtualization (ETSI ISG NFV) is a group charged with developing requirements and an architecture for the virtualization of various functions within telecoms networks. ETSI ISG NFV launched in January 2013, bringing together seven leading telecoms network operators: AT&T, BT, Deutsche Telekom, Orange, Telecom Italia, Telefonica, and Verizon. These companies were joined by 52 other network operators, telecoms equipment vendors, IT vendors, and technology providers to make up the ETSI ISG NFV. Not long after, the ETSI ISG NFV community grew to over 230 individual companies, including many global service providers.

ETSI ISG NFV exists side by side with ETSI's current Technical Organization, but ISGs have their own membership, which can comprise both ETSI and non-ETSI members (under certain conditions). ISGs have their own voting rules, approve their own deliverables, and independently choose their own work program.

Why Do We Need ETSI ISG NFV?

Telecoms networks are made up of an array of proprietary hardware devices. Launching a new service often means adding more devices, which means finding the space and power to accommodate those appliances, and this has become increasingly difficult. Hardware-based devices now have ever-shorter life cycles due to rapid innovation, lowering the return on investment of deploying new services and limiting innovation as the industry is driven toward network-centric solutions.

Network functions virtualization (NFV) addresses these problems. By evolving standard IT virtualization technology, NFV implements network functions in software that can run on a range of industry-standard server hardware and can easily be moved to various locations within the network as needed. With NFV, the need to install new equipment is eliminated. The result is lower CapEx and OpEx, faster time-to-market for network services, higher return on investment, greater flexibility to scale up or down, a more open virtual appliance market, and more opportunity to test and deploy new services at lower risk.

The ETSI ISG NFV helps by setting requirements and architecture specifications for the hardware and software infrastructure needed to ensure that virtualized functions can be maintained. ETSI ISG NFV also maintains guidelines for developing network functions.

