Wednesday, January 21, 2015
Ethernet or GPON: Which technology is best suited for Information Transportation Systems (ITS) of the 21st Century?
In this article, we are going to learn how the next generation of Digital Information Transportation Systems will be deployed across the enterprise and service provider network.
Information Transportation Systems (ITS):
Information Transportation Systems can be broken into three different networks: the enterprise network, the service provider access network and the transmission network. ITS comprises not only active components but also passive components. The figure below gives an overview of the different types of wired and wireless communication network solutions available in rural and urban areas.
Previously, every service such as cable TV, data and voice ran over an independent network with its own operator or service provider. Even today, most of South Asia’s network is covered by those individual operators. In countries like India, CATV or telephone operators provide Internet service over co-axial or PSTN networks. A person from the developed world might not believe that in most rural parts of India the average network speed is between 128 and 256 Kbps.
In some rural areas of the United States, people also use VSAT technology to get triple-play (voice, data & video) services. These services are not very cost effective; they are unreliable and often suffer downtime due to severe weather (snow, rain or dust storms). Many of India’s rural areas run only 2G networks, which is not sufficient for reliable data transactions. These factors are major obstacles to development in most of rural India. Proper ICT planning for the digitization of Information Transportation Systems must be implemented; otherwise, business cannot be developed in rural areas.
In the early 2000s, DSL and VDSL were the popular choices for providing broadband connectivity to businesses. But they had bandwidth limitations (up to 52 Mbit/s downstream and 16 Mbit/s upstream), which was adequate for data and voice traffic. The demand for streaming video, however, is making networks ever more bandwidth hungry, and our legacy copper network does not have the capability to provide the bandwidth that will be required in the decades to come.
Fibre-based solutions are becoming cheaper every day (2-core single-mode fibre now costs almost the same as CAT 6A cable). Most service providers have already chosen to deploy some sort of fibre-optic solution in the backbone to provide broadband connectivity to their end users. In the diagram below I used the well-known Fibre to the Curb (FTTC) topology, mainly used in multi-dwelling building environments: fibre runs up to a cabinet inside or outside the building, and the existing CAT 6 wiring then connects individual customers to the access equipment in that cabinet. This solution is called last-mile Ethernet technology.
These hybrid (fibre & copper) solutions are quite good when you want to reuse a building’s existing copper cabling, which is very useful for old buildings where installing new block wiring is tedious and time consuming. But the technology has several drawbacks, chief among them a single point of failure: if the active component (the access layer switch) loses power or fails, every subscriber in the building loses connectivity, and the service provider must send a network technician to restore and troubleshoot the switch. This increases operating cost (OPEX), which is not good for running a smooth business.
There is another option, shown in the above diagram. Instead of placing active components at the curb, service providers can use an all-fibre Passive Optical Network in which every customer has an Optical Network Terminal (ONT) on their premises, connected back to the Optical Line Terminal (OLT) located at the central office/local exchange of the service provider network. This solution is very popular, and we will discuss it in more detail later in this article.
Converged Information Transportation Systems:
In the below diagram, you will see today’s converged Information Transportation Systems (ITS) architecture using PON & Ethernet Technology for transporting data from a user located in Enterprise network via Access & Transmission Networks of Service Provider to the Internet cloud.
In today’s enterprise network, most horizontal cabling still runs over copper (CAT 6A), while the backbone runs mostly over multi-mode fibre (up to 400 metres). In the access network, single-mode fibre can reach up to 60 km from ONT to OLT. The transmission network uses a Multi-Service Transport Platform (MSTP) to connect different types of network clouds (CATV, TDM & Internet). The Australian carrier Telstra chose Ericsson for its next-generation optical network to deploy an all-IP transport network; this will improve Telstra’s optical technology, increase bandwidth capacity and lower latency, which is of growing importance as more and more operations move to the cloud. It is impossible to cover everything about transmission networks in one article, so the rest of this article focuses on enterprise networking and the service provider access network.
Access Network of Service Provider:
In today’s next-generation broadband network, the service provider’s access network is critically important and has a major role to play. Search engine giant Google is also investing heavily in fibre-optic technology to provide 1 Gbps connectivity to subscribers in Kansas City.
Gigabit Passive Optical Network (GPON):
Questions may come to mind: what is Google using to provide 1 Gbps connectivity, and how is it different from other PON-based solutions?
Many people think that Google is using active point-to-point Ethernet FTTH, but in reality it uses shared Passive Optical Network technology (GPON), just like other service providers such as Verizon. GPON gives operators a more cost-effective approach to delivering gigabit services to the customer. Active/P2P Ethernet is superior in terms of delivering dedicated, symmetric bandwidth, but the reality is that very few end users will fully utilise a dedicated gigabit Ethernet port, especially on a 24/7/365 basis. It is also worth remembering that Google is neither the only service provider using GPON-based solutions to provide FTTH services, nor the first: many Internet service providers are working with Alcatel, Huawei, CommScope and Cisco to deliver GPON-based FTTH solutions to their end users.
From the above diagram, you can see that every customer is equipped with a separate Optical Network Unit/Terminal (ONU/ONT). Depending on the requirements of its subscribers (businesses & homes), the service provider chooses different types of ONT (Alcatel, Cisco, Huawei, etc.) as the Customer Premises Equipment. The ONT converts optical signals to electrical signals and offers different types of ports, such as Gigabit Ethernet, Fast Ethernet, T1/E1 and CATV ports. The other side of the ONT is connected to the nearest splitter (located either in the building or in a roadside manhole) using 2-core single-mode fibre, and the splitter in turn is connected to the Optical Line Terminal (OLT) at the nearest central office. Every subscriber shares the bandwidth under a Time Division Multiplexing (TDM) scheme of 2.488 Gbps downstream and 1.244 Gbps upstream. Each OLT GPON port can support up to 64 subscribers, with up to 8 VLANs per ONT physical port, and each Optical Line Terminal can support up to 3,600 subscribers (based on a maximum of 64 subscribers per GPON port). There is also a newer 10G-GPON technology, a continuation of the evolution of GPON, which increases the downstream bandwidth four-fold, reaches from 20 km up to 60 km and supports split ratios from 64 to 128.
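To put those shared-bandwidth figures in perspective, here is a minimal Python sketch of the arithmetic. The split ratios and the overhead-free division are illustrative assumptions, not vendor specifications.

```python
# Rough, illustrative GPON bandwidth math: ignores framing/FEC overhead and
# the fact that dynamic bandwidth allocation reassigns idle capacity.
GPON_DOWNSTREAM_GBPS = 2.488
GPON_UPSTREAM_GBPS = 1.244

def avg_rate_mbps(line_rate_gbps, split_ratio):
    """Average per-subscriber rate if every subscriber is active at once."""
    return line_rate_gbps * 1000 / split_ratio

for split in (32, 64, 128):
    down = avg_rate_mbps(GPON_DOWNSTREAM_GBPS, split)
    up = avg_rate_mbps(GPON_UPSTREAM_GBPS, split)
    print(f"1:{split} split -> ~{down:.1f} Mbps down / ~{up:.1f} Mbps up per subscriber")
```

In practice, statistical multiplexing means each subscriber usually sees far more than this worst-case average, which is why a shared GPON port can still deliver gigabit-class bursts.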
Enterprise Networking:
According to the tech website Techopedia, an enterprise network is an enterprise’s communications backbone that helps connect computers and related devices across departments and workgroup networks, facilitating insight and data accessibility. This definition covers data communication only, but remember that present-day enterprise networks must provide other services such as voice, video, teleconferencing, building automation, and security and surveillance systems on a single networking platform. In the diagram below, I have given an example of different types of enterprise networking by industry vertical, based on networking equipment manufacturers such as Cisco, Huawei, etc.
This type of definition of enterprise networking is quite good for business development managers at companies like Cisco or Alcatel who manage their channel partners and clients. As an ICT architect, I prefer to look at enterprise networking in a slightly different way. For me, enterprise networking consists of two parts: the local area network (the inside plant network) and the wide area/metropolitan area network (the outside plant network).
I hope it is clear that every building is at heart a simple LAN carrying different services depending on its business requirements. In a campus environment, we need to connect all the buildings via a fibre-optic backbone (either GPON- or Ethernet-based).
Ethernet vs GPON in Enterprise Networking or in Campus Environment:
Companies like Tellabs, which are pushing Fibre-to-the-Desktop solutions, have developed very compact ONTs (such as the Tellabs 100 mini series) that can be mounted on the back of a desktop.
Alcatel-Lucent, a pioneer of GPON technology, is vocal about the possibility of using it in enterprise networking, and GPON vendors even claim that GPON is greener and more cost effective than Ethernet networking. On the other hand, Cisco, the leader in enterprise networking, claims that its switching equipment is much greener than Motorola’s GPON-based solutions. Several papers have been published on both technologies (arguing which is best), and I am not going to take sides here; I would ask readers to go through the links and reference list below for a better understanding of security, reliability, design, cost and power consumption for both technologies.
As an ICT architect, I see both technologies as equally important to businesses. In businesses like hotels and hospitals we can use GPON-based solutions, but they cannot provide the same scalability as Ethernet networking. Ethernet technology can deliver Power over Ethernet, which gives it a significant advantage over GPON-based solutions. There are, however, some Optical Network Terminals available (e.g. the Huawei MA5650 series) that can provide power over CAT 6 cable. Such a device is good for a house or apartment, where it can also serve as an access layer switch, but there is no advantage to using it in businesses (at least from a power consumption standpoint).
From the above design, you can see that unlike Ethernet, which is a three-layer technology, GPON needs only two layers of active equipment. GPON-based solutions are very good for residential projects, where individual apartments can be connected and controlled through their respective ONTs. Service providers can offer value-added services to the customer, which will have a significant impact on the IoT-enabled businesses of the 21st century: a customer can get real-time updates about his or her home from the service provider’s cloud-based solution. This is a great opportunity for service providers to generate business from home automation.
Case Study: Shopping Mall:
Last year, I met a customer (a shopping mall owner) who wanted a separate, independent network for each individual shop owner. In Qatar, by law, IP-CCTV and security systems must be connected to and controlled by a separate Ethernet-based network and must not be connected to the main network. For me this is a no-brainer, but businesses must comply with this law or they will not get approval from the Ministry of Interior.
They told me they did not want to connect other building engineering services (HVAC, fire, AV, power, etc.) to the building’s IP network; they simply wanted to provide one telephone and one data point inside every shop on their own network. Unfortunately, other contractors had already proposed Ethernet-based centralised network solutions. As per that design (see below), every shop gets two UTP ports, with CAT 6 cable running horizontally to the nearest access layer switch located in a telecom room (IDF). Every IDF connects to the Main Distribution Frame (the data centre core switch) over redundant multi-mode fibre uplinks, where firewalls would also be installed as a combined Internet gateway and first line of defence. For controlling the IP phones, they proposed an IP-based Private Branch Exchange such as Cisco Call Manager or Avaya IP Office.
The above solution works, but it is expensive in terms of capital expenditure and will also cost the mall operator significantly in day-to-day operation. Instead of an Ethernet-based solution, I proposed a GPON-based solution in which each shop is equipped with its own ONT (see the diagram below).
The mall owner only needs to provide telecommunication block wiring using single-mode fibre to each shop. Service providers install their ONTs inside the shops and their splitters in a rack/cabinet inside the main telecommunication room. The mall owner liked this solution very much because it reduces both the initial investment (CAPEX) and the cost of running the network day to day (OPEX).
Individual shop owners also have more flexibility to use their own systems, whether IP or digital telephones, IP-based point-of-sale devices or older analogue POS devices. They can run their own wireless routers for their mobile users, and they are billed directly by the service provider according to their usage, unlike the Ethernet-based solution, where the bill would have gone to the mall operator.
Conclusion:
As a vendor-independent ICT professional, I would say that both GPON- and Ethernet-based technologies are an opportunity to serve humanity in better ways. I believe developing countries like India must invest heavily in fibre-based Information Transportation Systems to become smart, technology-driven nations. Internet connectivity should be a basic human right, accessible in every corner of the USA, India, Africa and the rest of the world. This will not be achieved in a day, or even in decades, but we ICT professionals must design networks (vendor-independently) in efficient ways that reduce both the initial capital investment (CAPEX) and the daily maintenance cost (OPEX).
Why keep employee salaries confidential?
Somewhere along the growing-up years, we all learnt to equate success with how much we earn. The media played it up by discussing the salaries of C-level executives and of fresh hires who landed a whopping first salary. HR professionals made it harder still by treating salaries like trade secrets and warning you to keep the magic figure confidential. Payscale made a business out of this by creating an online compensation & benefits information portal. Water-cooler conversations after annual increments revolve around rumours of “who got what”. Your salary can be a source of either joy or sorrow until you learn someone else’s. Sometimes, in a weak moment, you share that number with your best friend or a colleague. You get the drift - this is serious stuff, with many emotions running high. Why then is salary a closely guarded secret? Can we do otherwise?
To answer the first question, it is necessary to understand the process of salary fitment for a job. This process is not an exact science; it is based on evaluating jobs. Job evaluations aim to make a systematic comparison between jobs and assess their relative worth, and they set a basis for deciding the monetary value accorded to a job. The prices of similar jobs in the market are then compared and a decision on pricing the job is made. This is where it begins to get fuzzy. For one, organizations decide where they wish to price jobs with respect to the market. Further, the salary for a job is not a single number but a range from minimum to maximum. This helps take care of subsequent salary increases for the job holder; it also means starting salaries can be fitted at any point in the range. So employees joining a particular job can have different starting salaries based on what they earned in their previous job and how they negotiated when they joined the company.
The biggest reason for keeping salaries confidential is to mask the pay differences between those performing the same job. Answering queries and grievances on pay disparity is an HR manager’s nightmare. While the relative value of differing jobs can be explained, pay differences within the same job cannot be explained rationally. Further, for differing jobs too, the market price of niche or emerging jobs (where no historical salary data is available) is based on perceived value or the business need for the skill. Here jobs are priced according to the buying capacity of the organization and the demand and supply of the skill set. Explaining the salaries of such jobs to other job holders can also be tricky, and here again keeping them confidential helps. Pay increases in most organizations are based on performance, and though rules are followed in according increases, the outcome may be far from objective in the eyes of the employee. Firstly, the issue of correctly determining performance levels has to be addressed. Then decisions on increases have to be made in a way that irons out pay differences for the same job. Pay differences also arise between employees hired from the market and those who have grown into a position from within the organization. Unfortunately, it is costlier to buy than to build, or it is perceived that those who grow from within don’t mind lower salaries in exchange for developmental opportunities. Explaining all this to an employee can become very tough, and the easier route seems to be keeping the information confidential. Radical transparency may just open a can of worms!!
That, however, may be far from true. The Great Place to Work Institute has, through its research and data from millions of employees, ascertained that trust is the foundation of a great place to work. Transparency is one of the key drivers of trust. It kills the rumour mill, and thus removes the distractions, fears, and negativity that sap concentration. Trust brings with it more agility and helps bring forth feedback; it makes talking about difficult things and challenges easier, and it opens the door to better functioning and improved productivity. The biggest case for being transparent about employee salaries is that it opens the door to a higher level of trust in the organization. It also brings more accountability to those who administer salaries, from the business leader who approves the salary to the recruiter who makes the offer. It calls for a systematic process for making salary decisions and an ability to explain differences.
Among the companies that have adopted this policy to their benefit is Buffer, a social media management company. Buffer is the creator of an application (of the same name) designed to schedule posts to Twitter, LinkedIn and Facebook. One of the core values at Buffer is “Default to transparency”, and in line with this value, salary information at Buffer is open to all. You can read more about Buffer’s salary formula here.
In today’s world, we have become more open to sharing more of ourselves through social media. We live in an unprecedented age in which we consume more information than our ancestors did. Our children, born into such a culture and destined to be the employees of tomorrow, wish to be better informed about matters affecting them. Organizations of tomorrow have the choice of being early adopters of new and disruptive policies to make the workplace more relevant for the new workforce.
Tuesday, January 20, 2015
Telstra Turns On 100G Subsea Routes
To accommodate the growth of cloud-based data operators and connect the increasing number of data centres managed by service and content providers, Telstra today launched new 100 gigabits per second (100G Wavelength) connectivity across multiple ultra-long haul submarine cable routes globally.
Darrin Webb, Chief Operating Officer, Telstra Global Enterprise & Services, said Telstra’s 100G Wavelength service was designed to scale smartly and would also help deliver the connectivity and capacity needed to support market demand for larger bandwidth applications, including High Definition video services and emerging Ultra High Definition Television.
“As the volume of data generated and consumed worldwide continues to increase exponentially, it’s critical the infrastructure responsible for delivering it can cater to this need. Our job, as a trusted network partner, is to adapt and ensure there is capacity where it is required most by our customers.
“However the move to 100G is much more than just raw capacity. Alongside enhanced efficiency, 100G can help customers reduce operational expenditure and simplify network maintenance thanks to the service’s ability to consolidate bandwidths. It is also flexible enough to meet the requirements of most cable companies by offering landing station and point of presence options too.”
Delivered via Telstra’s Asia America Gateway (AAG Dedicated Fiber Pair), Reach North Asia Loop (RNAL), Telstra Endeavour, Australia-Japan Cable (AJC) and UNITY cable systems, the new 100G Wavelength service will cover Japan, Hong Kong, Taiwan, Korea, Australia and the United States.
Monday, January 12, 2015
The Future of Networking and the Network Engineer
Courtesy - Jason Edelman
I mentioned in my previous post that Nick McKeown, the uber-smart entrepreneur and Professor at Stanford, gave what I thought was the finest presentation of the week at the Open Networking Summit hosted in Santa Clara this week. As I wait to board my plane back to the East Coast, here is a more detailed recap of the presentation and what I took away from it…
Nick McKeown arguably gave the most well thought out and intellectually engaging presentation of the week. In my opinion, it was the best I saw. He painted a picture that should have been done in the keynote on Tuesday. McKeown started by looking at IBM and the closed nature of the mainframe and PC business 20-30 years ago. It was a closed industry and innovation was slow until the x86 instruction set was developed. This allowed others to build operating systems and even more to build applications, rapidly increasing the pace of innovation in this market. Quickly, we saw pictures of BFRs (Big *** Routers) on the screen that may have been a Cisco CRS-1. I couldn’t tell from where I was sitting, but the goal was to compare this big router to IBM and their legacy closed model. One vendor provided the ecosystem of hardware, OS, and applications riding on top.
Is this why innovation has been lacking in networking? Maybe. McKeown talked about the architecture that could change this.
Enter OpenFlow. Enter Software Defined Networking (SDN). This is where the x86 instruction set always comes up as an analogy to OpenFlow. From here, many can probably speculate on what was said over the next few minutes. It was nothing more than has been talked or blogged about for the past 18 months. Yes, it is understood OpenFlow can separate the control and data planes, a network OS can be created (more than one of them), and applications can be written to interface on top of the network OS as easily and quickly as they are built for iPhones and Android devices. Okay, maybe not that easy!
The next few slides went on to depict how rigorous HW/ASIC and software development is within their respective industries. They showed the demanding processes entailed and how important it is to get it right before going to production. Do you really want to deploy a defective ASIC? The costs associated with that could be catastrophic. Numbers ($$) were then shown on how large the development and testing industries are; they were upwards of $10B. Then we saw sample numbers on how many books, papers, and classes there are for these at universities. I don’t recall the numbers for each, but they were large. Dare you ask about the testing industry in the world of networking? No money, no papers, no books, and no classes*. We saw a slide that listed the few things network guys do when issues arise – ping, traceroute, tcpdump, etc. – some fairly primitive commands and functionality. But think about it: how do you really troubleshoot? There are methodologies used by experienced network engineers, but no automated tools, right? Is this sufficient for us? Have we really become “masters of complexity?” Quite possibly, yes.
*Feel free to check the slides when they are posted for the exact numbers and examples presented by Nick McKeown. There is a chance I got something wrong as I’m writing from memory.
**Masters of Complexity – this is in quotes because it’s a phrase quoted quite often now from Scott Shenker’s presentation at the last ONS.
Over the next section of his presentation, McKeown showed some pretty in-depth slides on the value a centralized control plane can provide to the network. Relating back to hardware and software regression testing and development, his focus was on automated debugging and testing tools, proactive and reactive, for a Software Defined Network. Before I go on, I’m going to quickly summarize a feature of Cisco ASA appliances to help make the case for what Nick McKeown went on to say.
If an ASA is deployed and packets are being dropped, one popular debugging tool is Packet-Tracer. Packet-Tracer analyzes a synthetic packet whose details the admin supplies (source ip/port, dest ip/port) from the moment it would enter the ASA until it would exit, and since the device knows the order in which policies are applied, it walks through checks such as NAT policy, ACLs, etc. As one can imagine, this uncovers out-of-order ACLs or mis-configured NAT policies. It has become a pretty important tool to have in Cisco ASA environments.
Have you ever thought about having similar functionality in a switch or router? It sounds interesting given the even more complex configs you’ll have on L2/L3 devices, ranging from long routing policies to filters, NAT, PBR, QoS, etc. Do we have that functionality today? Not that I know of. Regardless, McKeown is thinking bigger, much bigger. Imagine you have this Packet-Tracer functionality NETWORK WIDE – that’s right, network wide. Imagine wanting to test how a change may affect the network campus-wide or enterprise-wide, or debugging an issue where a packet doesn’t reach its destination. Within an SDN, this becomes possible. This was actually eye opening for me in terms of looking at the product development and testing of software-based solutions. His team at Stanford is already well underway on developing these types of tools for the Stanford campus network. Imagine specifying a flow in a simple UI, clicking submit, and the output quickly showing where the dropped packet is in a multi-node network. The output may show the error is with the ACL on Router 5 port 3. Well, you really don’t have to imagine this because you can go to Stanford and see these types of developments.
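To make the idea concrete, here is a toy Python sketch of what such a network-wide check could look like: walk the path hop by hop and report the first device and ACL entry that would drop a synthetic packet. The devices, ACLs and path below are entirely hypothetical and are not Stanford's actual tooling.

```python
# Hypothetical example: each hop has an ordered ACL of (action, prefix, port);
# find where a synthetic packet to a given IP/port would be dropped.
import ipaddress

PATH = [
    {"device": "Router1", "acl": [("permit", "10.0.0.0/8", 80)]},
    {"device": "Router5", "acl": [("deny", "10.1.0.0/16", 80),
                                  ("permit", "0.0.0.0/0", 80)]},
]

def matches(dst_ip, dst_port, prefix, port):
    return ipaddress.ip_address(dst_ip) in ipaddress.ip_network(prefix) and dst_port == port

def trace(dst_ip, dst_port):
    for hop in PATH:
        for idx, (action, prefix, port) in enumerate(hop["acl"]):
            if matches(dst_ip, dst_port, prefix, port):
                if action == "deny":
                    return f"dropped at {hop['device']}, ACL entry {idx}"
                break  # permitted at this hop, continue to the next one
    return "delivered"

print(trace("10.1.2.3", 80))   # -> dropped at Router5, ACL entry 0
```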
This was just a glimpse of what McKeown spoke about, and it is one of the industries that he thinks will grow significantly as SDN adoption begins to gain traction. Unfortunately, this isn’t good news for me and for those I will call “traditional” network folks. When a participant asked whether this type of automation for testing, development, and debugging will “whack” us, you heard a light chuckle from the audience, but believe me, everyone wanted to know Nick McKeown’s thoughts. He answered honestly, saying he sees an overall increase in jobs in the network industry, BUT there will be fewer admins/engineers needed to maintain and operate networks.
The large surplus of jobs will be in software and application development for network systems. This is no surprise to those who have been following these trends. It will be interesting, though, to see how it plays out over the coming years (don’t worry, it’s not weeks or months!). The network engineer of yesterday could very well become the PBX phone guy of the late ’90s and early 2000s. These network guys will either pick up new skills such as programming and system administration or find a company that won’t adopt new technologies so they can stay in their comfort zone.
What will I do? Who the heck knows? Maybe I’ll take on writing full time. Just joshin' with ya, this is a great new opportunity for Integrators and Partners to add value for their customers as well!
On a side note, I’m not going to lie; it felt pretty good for people to recognize me by this blog while walking around at the ONS (everyone was wearing badges that had your name and the company you represented). It wasn’t many, but a few, and it was indeed still very cool. Glad to see people are actually reading this thing =).
Sunday, January 11, 2015
Learning Python
A list of links for getting started with Python to develop applications for an SDN controller.
Docs
Tutorials
Tools
Slides
Saturday, January 10, 2015
A NetOps to DevOps Training Plan
Courtesy - Dave Tucker
My recommendation was basically for Networkers to be open to change, and to start broadening their horizons. DevOps is coming to networking and that is a FACT. You might be wondering what skills a Network DevOps Engineer needs and here I attempt to answer that.
It's still about NETWORKING
I'm going to state this upfront here. You need to be good at Networking for any of the other skills here to be useful. Continue along vendor certification tracks, follow the IETF, join NANOG, experiment with new technologies. This is all invaluable.
Software Engineering Fundamentals
A lot of the DevOps skills have roots in Software Engineering. Being a Network Guy™, this may seem like a bit of a paradigm shift, but here's something cool. Would you believe that some of these software engineering concepts have more to do with engineering best practice than with software, and are in fact relevant to the work you are doing today? Also, your SysAdmin buddies already know this and started their DevOps pilgrimage a while ago.
Unit/Functional/Integration Testing, Version Control, Agile, Test-Driven Development (TDD) and Behaviour Driven Development (BDD) are all things that you could benefit from today.
Fortunately, there is an easy way to pick these skills up. The folks over at Software Carpentry have put together a set of tutorials to help research scientists get to grips with Python and supporting tools. The lessons are put together in such a way that they are easy for mere mortals to understand (unlike a lot of CS textbooks and lectures).
Know your *nix
An understanding of Linux is going to stand you in good stead in the transition from NetOps to DevOps. As much as people like to talk about the "Death of the CLI", they don't realise how much time Developers and SysAdmins spend in the terminal. Whether it's checking in code with git, extracting information from Open vSwitch, or using the OpenStack CLI clients, you will likely spend a lot of time in the terminal too. Learning how to be productive here is essential, and a good understanding of Linux will help when troubleshooting complex issues.
LPIC
There are vendor-neutral *nix certifications worth a look, like LPIC-1. While I haven't gone through it myself, I have read some LPIC study materials and found them extremely useful. If you want a vendor certification, Red Hat has certifications available too.
Have some fun
Learning Linux doesn't have to be boring. I prefer a more practical approach, so you may find attempting one of the following a nice project to hone your Linux-Fu:
- Install Arch Linux
- Replace your ESXi Lab with KVM, Libvirt and Open vSwitch
- Write command aliases to save yourself some typing
- Learn vim, and make yourself a .vimrc
Learn some Python
I'm biased towards Python, but I feel it's the most approachable Programming Language for Network Engineers to pick up.
- It has an "Interactive Interpreter" which is a lot like a CLI and lets you enter statements to see what happens
- It can be used for basic scripting or for beautifully designed object-oriented software, but it doesn't force you to do things one way or another.
- There is a rich ecosystem of libraries that simplify doing everyday tasks
- It's being embedded in Network Devices AND network vendors are providing Python libraries for their software.
You don't need to know much Python to start getting real value. Think of how many things you could automate! People joke about automation not saving time (as it takes time to automate), but during that time you are getting a better understanding of Python, so it's not a total loss. Whether it's your weekly report, Mining the Social Web or something more network-centric, undertaking a Python project will be really worthwhile... and if you can, host the result on GitHub.
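As a trivially small example of the kind of automation the paragraph above is talking about, here is a hedged Python sketch that checks whether a few devices respond to ping and writes a dated report, the sort of weekly chore worth scripting. The host list, timeout value and report filename are placeholders.

```python
# Minimal example: ping a list of devices and write a small report.
# Assumes a Linux-style 'ping' binary; hosts and file name are placeholders.
import datetime
import subprocess

DEVICES = ["192.0.2.1", "192.0.2.2", "192.0.2.3"]

def is_reachable(host):
    """Return True if a single ICMP echo gets a reply within 2 seconds."""
    result = subprocess.run(["ping", "-c", "1", "-W", "2", host],
                            capture_output=True)
    return result.returncode == 0

def write_report(path):
    today = datetime.date.today().isoformat()
    with open(path, "w") as report:
        report.write(f"Reachability report {today}\n")
        for host in DEVICES:
            status = "UP" if is_reachable(host) else "DOWN"
            report.write(f"{host}: {status}\n")

if __name__ == "__main__":
    write_report("weekly_report.txt")
```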
There are hundreds of good tutorials online, but if you are just getting started I would recommend CodeAcademy.
Get your head around "Infrastructure as Code"
"Infrastructure as Code" is the battle cry of DevOps. To really understand what this is about and to get a handle on the what/why/how for Networking I'd recommend that you spend some time with:
Run through the tutorials, bootstrap a server with Chef, use Puppet to deploy a LAMP server and, if you are feeling brave, write a Chef Cookbook or Puppet Manifest. I couldn't mention this and not mention the awesome work being done on the Netdev library for Puppet and Chef.
What about some SDN?
You could take a course on Coursera, but why not get some practical experience? Download OpenDaylight and follow one of Brent Salisbury's awesome tutorials. You can simulate massive networks using Mininet and have some fun pushing paths using the REST API or experimenting with OpenStack integration. The nice thing about the OpenStack integration piece is that this requires you to get DevStack working, which is not easy, and it gives you some OpenStack experience.
Conclusion
Looking into my crystal ball, I predict that the Network DevOps engineer will need:
- Strong Networking Skills
- Knowledge of Linux System Administration
- Experience with Puppet/Chef/Ansible/CFEngine/SaltStack would be desirable
- Scripting skills in Bash, PHP, Ruby or Python
- Ability to work under Source Control (git)
- Experience in consuming (REST) APIs
- Experience with OpenStack and OpenStack Networking
- Appreciation of Software Defined Networking (SDN)
- Knowledge of Agile Methodologies
- Knowledge of Test-Driven Development
- Ability to write Unit and Integration Tests
Appendix A: Relevant Open Source Projects
Appendix B: Tools of the Trade
Appendix C: Further Reading
Kyle Mestery (@mestery) pointed me to a great slide deck that shows his thoughts on this topic. This is definitely worth a look!
Friday, January 9, 2015
NETCONF, YANG, RESTCONF and NetOps in SDN World
Courtesy - Dave Tucker
What is NETCONF
NETCONF is defined in RFC 6241 which describes it as follows:
The Network Configuration Protocol (NETCONF) defined in this document provides mechanisms to install, manipulate, and delete the configuration of network devices. It uses an Extensible Markup Language (XML)-based data encoding for the configuration data as well as the protocol messages. The NETCONF protocol operations are realized as remote procedure calls (RPCs).
It's not a new technology, as work started on this approximately 10 years ago, but what it gives us is an extensible and robust mechanism for managing network devices.
NETCONF understands the difference between configuration data and state data. As somebody who has been bitten by trying to perform a create operation and faced validation issues as I've mistakenly sent (or worse, edited) a read-only field in a request, I feel this is really valuable.
Another great thing from an operations perspective is the ability to test/validate configuration before it's applied to the device. NETCONF allows you to set a test-option for an edit-config operation that will either test only, or test and then set the configuration.
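As a sketch of what that looks like from the operator's side, here is a minimal Python example using the ncclient library. The hostname, credentials, namespace and configuration payload are placeholders, and whether test-then-set is honoured depends on the capabilities the device advertises.

```python
# Sketch only: push a config change over NETCONF with ncclient, asking the
# device to validate the change before applying it.
from ncclient import manager

CONFIG = """
<config>
  <interfaces xmlns="urn:example:interfaces">
    <interface>
      <interface-name>eth0</interface-name>
      <speed>1000</speed>
      <duplex>full</duplex>
    </interface>
  </interfaces>
</config>
"""

with manager.connect(host="192.0.2.10", port=830,
                     username="admin", password="admin",
                     hostkey_verify=False) as m:
    # test-then-set: the device tests the change and only then applies it
    m.edit_config(target="candidate", config=CONFIG,
                  test_option="test-then-set")
    m.validate(source="candidate")   # explicit validation of the datastore
    m.commit()                       # promote candidate to running
```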
Being XML-based, we can also validate our NETCONF against an XML Schema Document (XSD).
NETCONF supports devices with multiple config datastores, e.g. running/startup or candidate/running/startup.
Furthermore, we can also subscribe to notifications or perform other Remote Procedure Calls (RPCs) using NETCONF.
What is YANG
YANG is defined in RFC 6020 which describes it as follows:
YANG is a data modeling language used to model configuration and state data manipulated by the Network Configuration Protocol (NETCONF), NETCONF remote procedure calls, and NETCONF notifications.
I am going to make a bold assertion here:
Machines love XML.
Humans do not love XML.
-- Dave Tucker
Unfortunately it's humans that write standards, and standards dictate data models. Therefore it's in our interest to have a modeling language that people unfamiliar with XML can use and this is where YANG really shines.
YANG is hierarchical (like XML) and supports all of the niceties of a programming language, like re-usable types and groupings and, more importantly, extensibility. It has a powerful feature called "Augmentation" that allows you to extend an existing tree with additional information. As it's designed for NETCONF, it also lets you model NETCONF-specific items like additional RPCs and the contents of notifications.
YANG is supported by some awesome open source tooling like pyang.
NETCONF <3 YANG
NETCONF is XML-based, which means that somebody (your network vendor) needs to model their configuration structure appropriately (unless they cheat and use a CLI format). Yang is the perfect way to do this, and also acts as good user documentation when parsed through pyang.
Consider the following yang snippet:
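The original snippet was an image that has not survived in this copy of the post; based on the description that follows, it would have looked roughly like the following. The module name and namespace are my own illustrative choices, not the original figure.

```yang
// Illustrative reconstruction, not the original figure.
module example-interfaces {
  namespace "urn:example:interfaces";
  prefix exif;

  container interfaces {
    list interface {
      key "interface-name";
      leaf interface-name { type string; }
      leaf speed          { type uint32; }
      leaf duplex {
        type enumeration {
          enum half;
          enum full;
        }
      }
    }
  }
}
```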
It doesn't take too much brain power to work out that this is a list of interfaces, the unique key is interface-name, and each interface has a speed and duplex. The accompanying XML would then be:
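(Again an illustrative reconstruction, since the original figure is missing:)

```xml
<!-- Illustrative reconstruction of the missing figure -->
<interfaces xmlns="urn:example:interfaces">
  <interface>
    <interface-name>eth0</interface-name>
    <speed>1000</speed>
    <duplex>full</duplex>
  </interface>
  <interface>
    <interface-name>eth1</interface-name>
    <speed>100</speed>
    <duplex>half</duplex>
  </interface>
</interfaces>
```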
My hope for NETCONF and YANG is that the IETF and other SDOs standardize as many data models as they can. In this way, we can have a standard set of models that can be used for true multi-vendor network management. We don't want hundreds of proprietary MIB files, and I hope that the ease of modeling in Yang will encourage this.
So what has this got to do with SDN?
Even in SDN, we still have persistent state on network devices. OpenFlow doesn't automatically configure itself which is why OF-Config, which uses NETCONF, was developed. Open vSwitch, the de-facto standard for virtual switching, uses Open vSwitch Database Management Protocol (OVSDB) defined in informational RFC7047 which uses JSON-RPC.
Where I see NETCONF adding value is that we have a single protocol for managing configuration for both the traditional and the software defined network. I also don't want to get into an Open Source vs Open Standards debate, but where interoperability is concerned, open standards are essential, and having a standard set of Yang models would be advantageous.
It also has one other benefit, enabled by RESTCONF. Device-level Northbound API standardization.
What is RESTCONF you say? RESTCONF is currently an IETF Draft.
This document describes a REST-like protocol that provides a programmatic interface over HTTP for accessing data defined in YANG, using the datastores defined in NETCONF.
Now, device-level NBIs aren't exactly SDN in my book, but they are pretty useful for Network DevOps. What RESTCONF does is enable simple Yang models to be accessed over HTTP in a RESTful-ish style.
Why is this so awesome?
NETCONF is really powerful, but it's a little cumbersome for small tasks. RESTCONF is the more nimble cousin that allows people who are already versed in REST API work to perform small tasks without needing to learn an entirely new skill set. That's a real win for DevOps in my book.
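For a feel of what that looks like in practice, here is a minimal Python sketch using the requests library against the illustrative interfaces model above. The device address, credentials, media type and data path follow general RESTCONF conventions rather than any specific product, so treat them as assumptions; a real device documents its own resource tree.

```python
# Sketch: read one interface's configuration over a RESTCONF-style API.
# URL, credentials and the data path are illustrative placeholders.
import requests

BASE = "https://192.0.2.10/restconf/data"
AUTH = ("admin", "admin")
HEADERS = {"Accept": "application/yang-data+json"}

resp = requests.get(
    f"{BASE}/example-interfaces:interfaces/interface=eth0",
    auth=AUTH, headers=HEADERS, verify=False)   # verify=False: lab use only
resp.raise_for_status()
print(resp.json())
```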
Tuesday, January 6, 2015
10 Skills IT Pros Need for Cloud Computing
IT professionals have to work constantly to ensure that their skills are up to date. Today, IT pros must guarantee that their cloud-focused skills are resume-ready. And just what are those skills? According to a Forbes.com blog post by contributor Joe McKendrick, there are eight key skills. To those, I add two more.
1. Business and financial skills
The intersection of business and technology is always an overarching concern, but it is especially so when it comes to cloud-based computing.
2. Technical skills
With the cloud, organizations can streamline their IT resources, offloading much of their day-to-day systems and application management. But that doesn’t mean IT abdicates all responsibility. There is a need for programming-language skills to build applications that run well over the Internet.
3. Enterprise architecture and business needs analysis
Cloud computing requires that IT pros cross disciplines, especially where service-oriented architecture comes into play.
4. Project management skills
Organizations must not let the flexibility of the cloud lead to missed deadlines or amorphous goals. That could negate the cloud cost advantage.
5. Contract and vendor negotiation skills
To deal with service-level agreements—and the problems involved when those SLAs are breached—IT pros need experience with contract and vendor negotiations.
6. Security and compliance
IT professionals dealing with the cloud must have a firm grasp of security protocols and the regulatory mandates related to their industries, both inside and outside the United States.
7. Data integration and analysis skills
IT pros may not be data scientists, but to take advantage of big data, they do need to help data scientists hook up big data, internal ERP, data warehouse and other data systems, and work with the business side to make effective use of big data.
8. Mobile app development and management
Organizations need to think about what kind of mobile experience they are offering to customers via the cloud and how they would like to improve that down the line.
As enterprise cloud computing evolves, it is important to add two more skills to the list:
9. Knowledge of open hybrid clouds
IT is not homogeneous, and neither should your cloud computing model be. IT pros need to understand how to build and extend their companies’ cloud computing infrastructure in a way that is open.
10. Understanding of OpenStack
In order to build the kind of flexible, secure, interoperable cloud infrastructure mentioned above, IT pros must have a strong understanding of the technology required to make it so. OpenStack is a key component. OpenStack, a collaboration of developers and cloud computing technologists, comprises a series of projects delivering various components for a cloud infrastructure solution.
Friday, January 2, 2015
GETTING STARTED BUILDING HANDS-ON SDN SKILLS
When your company trials #SDN solutions, do you want to be involved in the project? Do you want to hit the ground running, or just hear about it afterwards? To be ready, you need more than knowledge of what the vendors are doing; you need some core hands-on skills. Today’s post explores some suggestions for first steps to build hands-on SDN skills, along with the why and wherefore.
BIG PICTURE OF SDN AND WHY TO START MIDDLE-DOWN
If you’ve been thinking about SDN for a while, then you’ve probably already figured out your first steps for learning. For the rest of you, let me define a few terms so the first steps make more sense.
The classic SDN definition begins with the separation of the control plane and data plane. A basic SDN architectural model uses three layers, with the controller in the center, some components above (north of) the controller, and some below (south of) it. The controller and the SDN applications above it implement the control plane functions. The network – that is, networking devices connected by links – implements the data plane, forwarding messages between devices.
Figure 1: SDN Architecture – Control and Data Plane Perspective
Why start middle-down? Network engineers should be able to build skills with the lower half of the model quickly. Networkers know protocols, so picking up another protocol (OpenFlow) shouldn’t be too hard. You know existing data plane and control plane concepts, so now you just have to learn about the SDN alternatives. And even if you know how to program and want to write SDN apps one day, you will still need a controller and some switches to use when testing.
I’ve reduced some suggestions to four steps, which I’ll explain in a little more depth for the rest of this post:
- Create a lab with both the data plane and a controller, using the Mininet simulator, and focus on learning Mininet options and commands.
- Learn OpenFlow protocol using Wireshark while testing with Mininet.
- Learn More OpenFlow using different start options with the POX controller.
- Use different SDN controllers to compare features and learn controller architectures.
STEP 1: LEARN DATA PLANE AND CONTROLLER BASICS USING MININET
Mininet is a freely available open source tool that simulates OpenFlow switches and SDN controllers. You can run it on a VM on your own computer, simulate an SDN-based network, run a controller, learn what’s happening, experiment all you like.
Mininet comes with the ability to simulate switches (and routers), along with a couple of included SDN controllers. The default reference controller works well for just kicking the tires, but for learning, the included POX controller has some nice features as well (see suggested step 3).
Getting started can truly be as easy as following the steps in the Mininet Walkthrough page. Install Mininet, follow the walkthrough, and you’ll learn a lot. Make sure to experiment with the following (a minimal Python-API sketch follows this list):
- Creating different switch topologies
- Displaying the topologies
- Pinging and other test traffic between Mininet hosts
- Starting and using Wireshark (to be ready for other upcoming tasks)
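Beyond the mn command line, Mininet also exposes a Python API, and a few lines of script are often the easiest way to make an experiment repeatable. The sketch below is a minimal example using the stock single-switch topology and the default reference controller: it builds a three-host network, runs a reachability test and drops you into the Mininet CLI.

```python
#!/usr/bin/env python
# Minimal Mininet Python-API example: one switch, three hosts, ping test.
# Run inside the Mininet VM with: sudo python this_script.py
from mininet.net import Mininet
from mininet.topo import SingleSwitchTopo
from mininet.cli import CLI

def run():
    topo = SingleSwitchTopo(k=3)   # one switch with three attached hosts
    net = Mininet(topo=topo)       # uses the default reference controller
    net.start()
    net.pingAll()                  # basic reachability test between hosts
    CLI(net)                       # drop into the interactive mininet> prompt
    net.stop()

if __name__ == "__main__":
    run()
```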
2) LEARN OPENFLOW PROTOCOLS USING WIRESHARK AND MININET
The pre-installed Wireshark in the Mininet VM makes it easy to quickly see packet captures of the OpenFlow messages that flow between the controller and the networking devices in Mininet. So, rather than just reading the OpenFlow protocol document, you can see the protocol messages that relate to the specific Mininet network you built.
Let me make a few suggestions for an exercise:
- Begin reading the OpenFlow 1.0 standards doc.
- Pick one Mininet topology, start Wireshark, and start Mininet.
- Read the OpenFlow messages you see in Wireshark before you send any test traffic.
- Compare those messages to the OpenFlow standard, and think about what flows have been programmed into which switches.
- Ping between hosts to generate traffic.
- Read the OpenFlow messages found in Wireshark, noticing the messages created as a result of each ping, and again compare that to the OpenFlow standard.
- Do not worry if you do not understand it all – digest what you can, and move on.
Note that for beginning your learning, OpenFlow 1.0 makes the most sense. First, 1.0 was the first version intended for widespread release in products. OpenFlow 1.3 appears to be the next release that is likely to see widespread implementation in products. (Mininet supports OpenFlow 1.0 and 1.3, by the way, and uses 1.0 by default). So the OpenFlow 1.0 doc is a good place to start.
For example, OpenFlow 1.0 defines a 12-tuple of field values that can be used to match a message to a flow in a switch’s flow table. Section 5.2.3 of the OpenFlow 1.0 standard details those fields. Figure 2 shows an example output from Wireshark for an OpenFlow FlowMod message that happens to direct a switch to program a flow with these criteria:
- 11 fields should be considered wildcards (that is, they always match)
- 1 field (Ethernet Type) that should be 0x0806
In other words, the flow definition in this trace matches IPv4 ARP messages, which have Ethernet Type 0x0806.
Figure 2: Wireshark Trace of OpenFlow 1.0 Flow Definition to Match ARP
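As a toy illustration of that matching logic (plain Python, not OpenFlow code), the sketch below mimics a flow entry whose twelve fields are all wildcarded except the Ethernet type, so only ARP frames (0x0806) match it.

```python
# Toy model of an OpenFlow 1.0-style 12-tuple match: a field set to None is a
# wildcard; only non-None fields must equal the packet's values.
MATCH_ARP = {"in_port": None, "dl_src": None, "dl_dst": None,
             "dl_vlan": None, "dl_vlan_pcp": None, "dl_type": 0x0806,
             "nw_src": None, "nw_dst": None, "nw_proto": None,
             "nw_tos": None, "tp_src": None, "tp_dst": None}

def matches(match, packet):
    return all(value is None or packet.get(field) == value
               for field, value in match.items())

arp_frame = {"dl_type": 0x0806, "dl_src": "00:00:00:00:00:01"}
ip_frame = {"dl_type": 0x0800, "nw_dst": "10.0.0.1"}
print(matches(MATCH_ARP, arp_frame))   # True
print(matches(MATCH_ARP, ip_frame))    # False
```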
3) LEARN EVEN MORE OPENFLOW WITH POX CONTROLLER OPTIONS
The POX controller gives us some convenient ways to learn OpenFlow. First, the Mininet VM comes with POX already installed. POX can then be started with different parameters from the Linux command line, with those parameters making the controller program the flows with a different mindset.
Using different POX options lets you run controlled experiments in Mininet. First, you can use the same Mininet network topology over and over, and run the same traffic generation tests over and over (for example, repeat the same pings each time). However, with different POX parameters each time, the controller thinks differently, programs different flows, so the OpenFlow messages differ, and each Wireshark trace differs. So, just run the tests and investigate the differences.
For instance, one POX startup option tells the controller to make the switches act like hubs, while another tells the controller to program flows between specific pairs of hosts based on their MAC address. (Check the POX documentation wiki for the list of options.)
Here’s a suggested outline for a lab:
- Start and capture your Wireshark trace for each pass. (You will know how to do this already if you work through the Mininet Walkthrough, as mentioned in Step 1.)
- Start POX with a parameter from the next list (e.g., ./pox.py forwarding.l2_learning)
- Start Mininet, pointing to POX as a remote controller (e.g., sudo mn --controller=remote,ip=w.x.y.z --topo=linear,3 --mac)
- Look at the Wireshark trace for any flows populated before you generate any test traffic
- Ping from host H1 to each of the other hosts
- Look at the Wireshark trace, take notes, and save
- Stop Mininet and POX
Then repeat the steps, with a different POX option, and look for the different results. Here are a couple of recommended options for POX startup options for learning OpenFlow:
- forwarding.hub
- forwarding.l2_learning
- forwarding.l2_multi
4) TRY A FEW OTHER SDN CONTROLLERS
Once you are comfortable with Mininet, I would suggest kicking the tires on a few different SDN controllers, particularly those with a graphical interface. The controllers that come installed with Mininet have some great features, but they do not have extensive GUIs, so getting some experience with other controllers can be a great next learning step.
This option works well in part because the controller can sit anywhere compared to the SDN switches, at least anywhere that’s reachable with IP. Literally, you could run the controller somewhere in the Internet, and Mininet on your Laptop, and as long as no firewalls blocked the TCP port used by the controller (default 6633), the controller could control the network. That’s not the preferred design, but it would work.
More normally, you could just install another controller on another Linux VM and use Mininet to create the network on the same Mininet VM as usual. You ignore the controller you have been using on the Mininet VM – instead, when you start Mininet, you point to the IP address of the other controller. Figure 3 shows the idea, with the usual Mininet VM on the top, and the new controller running in the VM on the bottom.
Figure 3: Using an External SDN Controller in Lab
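If you prefer the Python API to the mn command shown earlier, pointing Mininet at an external controller looks roughly like the sketch below; the controller IP address is a placeholder for wherever your VAN, Floodlight or OpenDaylight instance is listening.

```python
# Point Mininet at an external SDN controller instead of the bundled one.
# The IP address below is a placeholder for your controller VM.
from mininet.net import Mininet
from mininet.node import RemoteController
from mininet.topo import LinearTopo

net = Mininet(topo=LinearTopo(k=3),
              controller=lambda name: RemoteController(
                  name, ip="192.168.56.101", port=6633))
net.start()
net.pingAll()
net.stop()
```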
Just as a for instance, here’s a sample of what you get from Cisco eXtensible Network Controller (XNC) with a Mininet network:
Figure 4: Sample Screen Shot from Cisco XNC
So, which controllers should you test? Well, for a home lab rather than a lab at work, it helps to have some free options. I’ve listed three controllers that you can get for free without ambiguity. Two of them are open source, and are likely to be freely available for their natural lives. The other one, the HP VAN controller, has a 60-day trial period, so you can legally use it, without your company having to be in the middle of a product evaluation. (I do not see other commercial SDN controllers with a clear trial download option, but please feel free to let me know.)
- Get HP’s Virtual Application Networks (VAN) controller: 60-day trial
- Get Floodlight Controller: free (open source, and similar to Big Switch Networks’ Big Tap Controller)
- Get OpenDaylight: free (open source)
When you get time to dig into each controller, ask yourself a few questions:
- What flows does the controller program into the switches to support ping and web traffic – even with no SDN apps running?
- What settings exist (from available documentation) to change the controller’s behavior?
- What features does the controller have for listing flows, displaying status, and troubleshooting the network?
- What can you learn about the controller’s architecture based on what you see from the user interface?