13 Investments to Generate Regular Income




There are several situations in which we look for regular income. This is especially true for retirees without a pension, and for new entrepreneurs who need regular income until their start-up stabilizes. Here are 13 investments that can generate regular income for you, along with their pros and cons.

1. Bank Fixed Deposit

This is the most popular investment avenue for regular cash flows. You can choose to have the interest credited to your savings account monthly, quarterly or annually.

Expected Returns: 7% to 8% for the general public and 7.5% to 8.5% for senior citizens. These rates keep changing with the interest rate cycle.

The Good:

It’s convenient to invest in and in most cases can be handled online.
The credit risk is very low, especially in the case of Government-owned banks and large private banks. However, investors should be careful with cooperative banks.
The income is guaranteed.

The Bad:

The interest earned is taxable according to the income tax slab of the person.
TDS (Tax Deducted at Source) is deducted by banks in case the annual interest income exceeds Rs 10,000. This is especially painful for people whose income is below the taxable range. However, eligible individuals can submit Form 15G/H to prevent TDS deduction.
Reinvestment Risk – For most banks, the maximum tenure of a fixed deposit is 10 years, so beyond 10 years you cannot be sure of the interest rates offered. They may be much lower than what you were getting.
There may be a penalty on closure of the account before maturity.

Useful Tips:

Prefer Government banks or large private banks for FDs. Cooperative banks are risky, so limit your exposure to them.
If eligible, submit Form 15G/H to avoid TDS deduction.
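As a rough illustration of the cash flows, the monthly payout and the bank's TDS deduction can be estimated as below (the Rs 10 lakh deposit and 7.5% rate are assumed figures, not a recommendation):

```python
# Rough estimate of monthly interest payout and TDS on a non-cumulative bank FD.
# Principal, rate and the 10% TDS rate below are illustrative assumptions.
principal = 1_000_000        # Rs 10 lakh
annual_rate = 0.075          # 7.5% p.a., paid out rather than compounded

annual_interest = principal * annual_rate
monthly_payout = annual_interest / 12

# Banks deduct 10% TDS when annual interest exceeds Rs 10,000
tds = annual_interest * 0.10 if annual_interest > 10_000 else 0.0

print(f"Monthly payout : Rs {monthly_payout:,.0f}")   # Rs 6,250
print(f"Annual TDS     : Rs {tds:,.0f}")              # Rs 7,500
```

Note that TDS is only a deduction at source; the final tax due still depends on the person's slab.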


2. Post Office Monthly Income Scheme (POMIS)

As the name suggests, this is a fixed deposit with the Post Office on which you get a regular monthly interest payment. The investment tenure is 5 years only.

Expected Return: 8.4% (revised by Government of India on April 1 every year)

The Good:

There is no credit risk, as post offices are Government-owned.
The income is guaranteed.
There is no TDS deducted on the interest paid.

The Bad:

The investment tenure is limited to 5 years. After maturity you can invest again, but at the then-prevailing interest rates, leading to reinvestment risk.
Though no TDS is deducted, the interest earned is taxable according to the income tax slab of the person.
Investing in Post Office schemes is not convenient. You need to visit the Post Office to invest and to withdraw on maturity. This can be difficult for the aged and for people who change address frequently.
Penalty on closure of the account before maturity.

3. Senior Citizen Saving Scheme (SCSS)

SCSS is a popular investment option for senior citizens. The interest is paid out quarterly into the bank account.

Expected Return: 9.3% (revised by Government of India on April 1 every year)

The Good:

There is no credit risk as the deposit is guaranteed by Government of India.
For FY 2015-16 the interest rate is 9.3% which is higher than most banks.
The investment up to Rs 1.5 lakhs in SCSS is eligible for tax deduction u/s 80C.
The income is guaranteed.

The Bad:

The interest earned is taxable according to the income tax slab of the person.
SCSS matures in 5 years. After maturity you can invest again but at prevailing interest rates. So it has reinvestment risk.
The maximum investment is limited to Rs 15 Lakhs.
TDS is deducted at 10% of the interest paid in case the annual interest is more than Rs 10,000 in a financial year.
Penalty on closure of the account before maturity.

Useful Tips:

You can open another account in your spouse's name if he/she satisfies all the other criteria.
SCSS accounts can be opened at approved banks or post offices. Prefer banks, as they offer online access and the account can be handled from different places.
Also Read: Know all about Senior Citizens’ Savings Scheme

4. Company Fixed Deposit

There are NBFCs and Companies (both Government owned and Private) which offer fixed deposit schemes with monthly/quarterly or annual payment of interest.

Expected Returns: 8% to 9% (additional 0.25% to 0.5% for senior citizens)

The Good:

The interest paid is generally higher than that offered by banks.
The income is guaranteed.

The Bad:

The interest earned is taxable according to the income tax slab of the person.
TDS (Tax Deducted at Source) is deducted by companies in case the annual interest income exceeds Rs 10,000. This is especially painful for people whose income is below the taxable range. However, eligible individuals can submit Form 15G/H to prevent TDS deduction.
The FD duration is generally 1 to 5 years. Some NBFCs offer tenure of up to 10 years. So there is reinvestment risk in the long run.
You might need to fill in forms and complete KYC formalities every time you make an investment. This is not as convenient as bank FDs.
Premature withdrawal can have heavy penalty. Always look for the penalty clause before investing.

Useful Tips:

Prefer Government organizations or high credit rated companies (AAA) as the credit default risk is lower.
Invest only some part of your “regular income generating portfolio” in one company. Diversify across companies.
If eligible, submit Form 15G/H to avoid TDS deduction.
Also Read: List of Company Fixed Deposits

5. Company Bonds (NCDs):

Companies offer NCDs (commonly known as bonds) from time to time. NCDs pay a fixed interest rate known as the coupon. You can buy NCDs directly on the NSE/BSE using your Demat account, or apply for them whenever they are issued by companies. NCDs pay interest directly into your bank account, and the payout can be monthly, quarterly or annual.

Expected Returns: 8% to 11% (depending on credit rating)

The Good:

The interest paid is higher than that offered by banks.
The income is guaranteed.
If you have Demat account, investment and selling can be done online.
No TDS is deducted if the investment is done in demat form.

The Bad:

The interest earned is taxable according to the income tax slab of the person.
The NCD duration is generally 3 to 8 years. So there is reinvestment risk in the long run.
Though NCDs are listed on the stock exchange and can be sold anytime, they are thinly traded, so getting the right price in an emergency can be a problem.

Useful Tips:

Prefer Government organizations or high credit rated companies (AAA) as the credit default risk is lower.
Invest only some part of your “regular income generating portfolio” in one company. Diversify across companies.
Some companies offer NCDs subscription in physical form too. In this case TDS is applicable.
Selling NCD before maturity leads to Capital Gains and is taxed accordingly.
Also Read: NCDs – Investment Tips, TDS and Taxation

6. Tax Free Bonds

Tax Free Bonds are a good source of regular income for people in the higher tax brackets. As the name suggests, the interest received is tax free. However, selling the bonds before maturity leads to capital gains tax. These bonds can be bought in the secondary market through a Demat account, or when companies open the bonds for initial subscription.

Expected Return: 6.75% – 7.25% (Tax Free)

The Good:

The interest paid is tax free, so it's good for people in higher tax brackets.
The income is guaranteed.
The tenure of these bonds is up to 20 years, so reinvestment risk is reduced to an extent.
Tax Free bonds are issued by big PSUs and have high credit rating, so have negligible credit default risk.
If you have Demat account, investment and selling can be done online.

The Bad:

Most bonds have only annual payout option. This can be difficult for people who need monthly payouts.

Useful Tips:

Some companies offer Tax Free Bonds subscription in physical form too.
Selling Tax Free Bonds before maturity leads to Capital Gains and is taxed accordingly.
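To see why these bonds suit the higher tax brackets, compare the taxable-equivalent yield, i.e. the pre-tax return a taxable instrument (such as an FD) must offer to match a tax-free coupon. This is a standard formula; the 30% slab rate below is an illustrative assumption:

```python
# Taxable-equivalent yield of a tax-free coupon for an investor
# in a given tax slab: rate / (1 - slab).
def taxable_equivalent_yield(tax_free_rate: float, tax_slab: float) -> float:
    return tax_free_rate / (1 - tax_slab)

# A 7% tax-free coupon for someone in the 30% slab is worth
# the same as a 10% taxable return:
print(round(taxable_equivalent_yield(0.07, 0.30) * 100, 2))  # 10.0
```

In other words, a 7% tax-free bond beats an 8%-9% taxable FD for investors in the top slab.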


7. Government Securities/Bonds (G-Secs)

G-Secs are government bonds issued by the RBI on behalf of the Government of India. These bonds have tenures of up to 30 years and carry no credit risk. They pay interest every 6 months.

Expected Return: 7.5% – 8% (depending on tenure) changes with interest rate cycle

The Good:

No Credit risk
Long investment tenure of up to 30 years, hence minimal reinvestment risk
Investment can be done online through Demat account or IDBI Samriddhi G-Sec portal
Liquid – Can be easily sold
No TDS on interest earned on G-Secs
Income is guaranteed

The Bad:

The interest earned is taxable according to the income tax slab of the person.
The price of G-Secs fluctuates with changes in the interest rate regime. But if you hold till maturity, this does not matter.

8. Annuity

Annuities are offered by insurance companies. The insurance company pays a fixed amount every month in return for a lump-sum investment. Returns vary depending on your age, gender and the type of annuity. Also, all NPS (National Pension Scheme) subscribers necessarily have to buy an annuity on withdrawal.

Expected Return: 5% to 7% (depending on age and type of annuity selected). Older investors get better returns.

The Good:

Annuities are easy to manage. Buying is a one-time process, and the money is paid regularly into your bank account.
The income is guaranteed.
There is no reinvestment risk.

The Bad:

Once you buy annuity you are locked in for life.
Returns are usually lower than bank FDs.
The interest earned is taxable according to the income tax slab of the person.

Useful Tips:

As the money is locked for life, choose your options carefully.

9. Rent from Real Estate:

Rental income from real estate is another popular option.

Expected Return: 1% to 4% rental yield for residential property and 5% to 12% for commercial property.

The Good:

The rental return generally goes up with inflation.
A 30% standard deduction, along with municipal taxes paid and home-loan interest, can be deducted from the rental income when computing income tax.

The Bad:

Initial investment is large.
Difficult to sell off at the right price in case of emergency.
Need to be involved in regular maintenance of property.
Income is not guaranteed, as the property may remain vacant for long periods.
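The 1%-4% figure above is the gross rental yield, i.e. annual rent as a percentage of the property's value. A quick sketch, with an assumed price and rent for illustration:

```python
# Gross rental yield = annual rent / property value, as a percentage.
# The rent and property price below are illustrative assumptions.
def rental_yield(monthly_rent: float, property_value: float) -> float:
    return monthly_rent * 12 / property_value * 100

# Rs 15,000/month rent on a Rs 60 lakh flat:
print(round(rental_yield(15_000, 6_000_000), 1))  # 3.0 (per cent)
```

This ignores maintenance costs, property tax and vacant months, so the net yield is lower still.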

10. Reverse Mortgage:

Reverse mortgage is a special type of loan where you can get a loan against your home. The loan is not paid in one go but in installments. You can think of it as a reverse EMI. This is offered by a lot of banks and housing finance companies.

The good:

Even though you mortgage the house, you can still live in it.
Your legal heirs can pay the loan (after your death) to the bank and get back the house.

The Bad:

You can outlive the reverse mortgage duration, as most banks offer a maximum tenure of 20 years.
This option is available for senior citizens only.
It involves a lot of paperwork.
The loan amount is capped by the lender, typically at Rs 50 lakh to Rs 1 crore, so it does not suit owners of expensive houses.

11. Systematic Withdrawal Plan in Debt/Arbitrage Mutual Funds

A Systematic Withdrawal Plan (SWP) in debt funds can be efficiently used to generate regular income. These funds have returns similar to bank FDs but are more tax efficient when the SWP is planned for more than 3 years. Arbitrage funds can also be used in place of debt funds; their advantage is that the returns are tax free after one year.

Expected Return: Similar to Bank Fixed Deposit

The Good:

The returns are more tax efficient than fixed deposits, so more suited for people in higher tax bracket.
It’s easy to manage. Everything can be handled online.

The Bad:

There is a risk of the capital running out if performance is lower than expected, or if the regular-income duration needs to be extended.
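The depletion risk can be sketched with a simple month-by-month simulation. The corpus size, withdrawal amount and returns below are assumptions for illustration, not a forecast:

```python
# How long a corpus lasts under a monthly SWP, given an assumed annual return.
# Uses a simple monthly-compounding assumption; all figures are illustrative.
def months_until_depleted(corpus: float, monthly_withdrawal: float,
                          annual_return: float, max_months: int = 1200) -> int:
    r = annual_return / 12
    months = 0
    while corpus >= monthly_withdrawal and months < max_months:
        corpus = corpus * (1 + r) - monthly_withdrawal
        months += 1
    return months

# A Rs 50 lakh corpus with a Rs 40,000/month withdrawal depletes eventually,
# since the withdrawal rate exceeds the assumed return; a lower return
# exhausts it sooner:
print(months_until_depleted(5_000_000, 40_000, 0.07))
print(months_until_depleted(5_000_000, 40_000, 0.05))
```

A small change in assumed return shifts the depletion date by years, which is why SWP plans need a safety margin.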

12. Dividend Income (MIP)

Some mutual funds, namely MIPs (Monthly Income Plans), claim to pay out money every month. MIPs are debt-oriented mutual funds that pay dividend income every month. The problem is that the payout can fluctuate, and at times funds can skip the dividend in certain months. So "MIP" is a misnomer!

There can be dividend income from equities too. But in that case as well, the income can fluctuate a lot, and there can be years when no dividend is declared.

Expected Return: Similar to Bank Fixed Deposit

The Good:

Easy to manage

The Bad:

The dividend received is tax free in the investor's hands, but mutual funds have to pay Dividend Distribution Tax while paying the dividend.
The income can fluctuate widely as the dividend depends on the performance of the mutual fund for the month. In extreme cases, funds may not declare any dividend for a month.

Useful Tip:

A Systematic Withdrawal Plan in debt mutual funds is a more tax-efficient route than dividend income for investment durations of more than 3 years.

13. Dividend Income (Stocks)

You can plan regular income through a portfolio of stocks and equity mutual funds. The problem is that the dividend can fluctuate every year. Also, the payout is not very regular and depends on the performance of the company.

The Good:

Dividend Income is tax free.
The portfolio can have capital appreciation in the long run.

The Bad:

The prices of stocks are volatile and so the portfolio can fluctuate a lot in value.
The income is not guaranteed. Even stocks with a long dividend history can skip the dividend in certain years.
You need good stock-selection skills and must continuously monitor the investment.

To Conclude:

As you can see, there are various investment avenues available for generating regular income. You should evaluate each of them and see what suits you the most. The choice depends on your risk appetite, the duration for which you require regular income, and the effort you are willing to put into handling the investments. Ideally, you should diversify across asset classes to minimize risk.




Tuesday, September 29, 2015

4 open-source monitoring tools that deserve a look


Network monitoring is a key component in making sure your network is running smoothly. However, it is important to distinguish between network monitoring and network management. For the most part, network monitoring tools report issues and findings, but as a rule provide no way to take action to solve reported issues.

We found all four products to be capable network monitoring tools that performed well in our basic monitoring tasks such as checking for host availability and measuring bandwidth usage. Beyond the basics, there were quite a few differences in terms of features, granularity and configuration options.

  • Overall we liked Zabbix, which was easy to install, has an intuitive user interface and enough granularity to perform most network monitoring tasks.
  • Cacti is great for what it does, has excellent graphing capabilities and is relatively easy to configure and use. But Cacti is somewhat limited in features: it does not provide a dashboard with infrastructure status, nor does it have the ability to generate alerts.
  • Observium is another capable product, but we did not like having to map everything to host names without the ability to use IP addresses directly. However, it has a modern interface and, like Cacti, offers graphing capabilities that provide good information at-a-glance.


All of the products offered basic network monitoring, using common protocols like ping, without requiring agents. Diving deeper required agents or SNMP, which must be installed and/or configured on the devices to be monitored. Zabbix offers both agent and agent-less configuration options. Since all of the products' servers run on Linux, to keep the playing field level we used a fresh install of Ubuntu 14.04 LTS prior to installing each product. The hardware was a quad-core, 64-bit, 8GB RAM server with adequate storage. Here are the individual reviews:


Observium

Observium is a Linux-based, command-line driven product with a web-based monitoring interface. Released under the QPL Open Source license, Observium is currently in version 0.14. Observium is available in both a community edition, which we tested, and a professional edition. Observium uses the RRDTool for certain features, such as buffer storage and graphing capabilities. It provides auto-discovery of a wide variety of devices from servers and switches to printers and power devices.

Observium is installed and configured through a set of command line inputs. Prerequisites include MySQL, PHP and Apache. We found a useful step by step installation guide on the Observium website which saved us time in performing the install. After installation, the server is accessible from a browser.

After completing the basic installation we loaded the Web interface, which displayed a large, blank Google map and a summary of devices, ports and sensors, all showing zero values. We decided to add a new device from the Web interface by entering the host name and the SNMP community name. This provided no results.

After some online searching we realized we needed to add our devices to the ‘hosts’ file in order for Observium to correctly resolve the host names. We were not running DNS on our test network and you cannot add devices by using IP addresses. Since Observium is set up using a configuration file, the Web interface provides essentially a read-only overview of the infrastructure. We added our first device with a simple command from the command line and then logged back into the Web interface, where we could then see our newly added Windows host. The map was populated with a location quite distant from our actual location, but we attribute this to using internal subnets.

Observium uses several protocols, such as CDP, FDP, EDP and LLDP, to discover new devices. When encountering a new device, it will attempt to contact it using an SNMP community name supplied in the configuration file. Once one or more devices have been added, the information for each device needs to be gathered using the discovery and polling commands from the command line.

This task can be automated by creating a Linux ‘crontab’ file that is called at set intervals. Most configuration changes are accomplished through editing the configuration file. We found this a bit cumbersome at first, but once the initial configurations and inevitable tweaks have been completed there should be no need to revisit this file on a daily basis. The configuration file content is available to view read-only from the global configuration link, which is helpful in getting a bird's eye view of the setup.
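The crontab entries look something like the following sketch. The `/opt/observium` path and the intervals are typical defaults from an Observium install and may differ on your system:

```
# /etc/cron.d/observium — illustrative scheduling; adjust paths to your install
# Rediscover all devices every 6 hours
33  */6 * * *   root    /opt/observium/discovery.php -h all >> /dev/null 2>&1
# Poll all devices for fresh data every 5 minutes
*/5 *   * * *   root    /opt/observium/poller.php -h all >> /dev/null 2>&1
```

Once these jobs are in place, the Web interface graphs update automatically without further command-line work.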

With our new devices configured and added, we re-loaded the Observium Web interface. The device list displayed our three hosts with some basic information about each (platform, OS type and uptime). Mousing over each device displays previews of various performance graphs such as processor and memory use. To drill down into more detail, you can click any device, which brings up a secondary screen with additional information about the device and the ability to view the collected data in different ways: general information, a graph view that includes a myriad of performance data, plus an event and system log view.

Observium has no direct export or reporting capabilities, which would be a nice addition for documenting performance or outputting usage data to hard copy. However, the on-screen reporting is very good and numerous filters are available to customize views. Although it doesn't aid much in actual configuration tasks, the Web interface has a modern, easy-to-read display and the navigation is intuitive with a horizontal, drop-down style menu across the top. We also like the start page overview with the ability to mouse over various items to see graphs for that item.

The Observium professional edition is available for an annual subscription fee and provides users with real-time updates and various support options. The professional version also has added features such as threshold alerting and traffic accounting, which can be helpful for organizations like ISPs that need to calculate and bill client bandwidth usage.

Cacti

Like the other products, Cacti is a Web-based application that runs on PHP/Apache with a MySQL backend database. Currently in version 0.8.8, it provides a custom front-end GUI to the RRDTool, an open source round robin database tool. It collects data via SNMP and there is also a selection of data collection scripts available for download from the Cacti website.

Although the Cacti server can be installed on Windows, it does require software mostly associated with Linux, such as PHP, Apache (although you can use IIS) and MySQL. This can be accomplished using WAMP server or by configuring each component individually using the Cacti installation guide.

Regardless of OS type, there are a number of configuration requirements and Cacti assumes the installer is fairly familiar with the aforementioned components. The user manual provides general guidelines for installation, but it did not provide specifics for our particular environment. As is often the case, we found an online third-party source that had a good step-by-step guide for our OS (Ubuntu).

Once the installation and initial configuration have been completed, you access the Cacti Web GUI from a Web browser. We found the Web interface clean and fairly easy to navigate once we became familiar with the overall layout. Cacti’s bread and butter is its graphing capabilities and it provides users with the tools to create custom graphs for various devices and their performance using SNMP. Devices can range from servers and routers to printers, essentially any networked devices with an IP address.

To set up a new device and indicate which values to monitor, you follow a short wizard-like step by step process where you first specify the basics, such as the IP address and type of device. To determine whether the device is available, Cacti can use a simple Ping command or a combination of Ping and SNMP.

Once a device has been created, it is time to create the graphs you want to monitor for that particular device. The graph setup uses a simple one-page form with a set of options based on the type of device being configured. You can select items ranging from interface traffic and memory usage to CPU utilization or the number of users logged in. We created a number of graphs for a couple of devices. Once a graph has been saved, it can take a while before data starts showing, but we found that it generally began displaying data within a few minutes. One thing we found helpful when creating graphs is that Cacti will tell you right away whether a data query responds with any data before proceeding. That way you don't end up with a bunch of empty graphs.

Cacti uses three types of XML templates for configuration purposes: data, graph and host templates. These allow administrators to create custom configurations that can be reused across multiple devices. The templates can be applied as you create a new device, graph or host. Settings may include values such as display settings for a graph or information on how data is to be collected for a certain host type.

Although Cacti does not require an agent to be installed on a device, SNMP needs to be installed and configured in order to take advantage of all the features available in Cacti. As is often the case with open source software, Cacti provides more options for Linux/UNIX out of the box; in order to better monitor Windows servers, we needed to install additional templates. Some of the online third-party tutorials are very good, but it should be noted that these are not one-click operations and require a steady hand to get everything configured properly. (Also Read: Cacti Makes Device Monitoring Simple.)

From the graph console you can call up any graph by filtering by device, custom date and time range, or you can even do a search. We found this interface to be very flexible, as you can display anything from one custom graph to literally thousands; however, displaying too many graphs per page will slow down the load time. The time/date range is very flexible, with a drop-down that allows for granular selections from 'last 30 minutes' to 'this year'. You can zoom in on any graph as well as export the graph values to a CSV file.

One feature commonly used by ISPs is the bandwidth measurement, especially the usage at the 95th percentile, which is often how bandwidth is measured and billed.
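The 95th-percentile method sorts the bandwidth samples for the billing period, discards the top 5%, and bills at the highest remaining sample, so short bursts are not charged. A minimal sketch of the calculation (the sample data is invented for illustration):

```python
# 95th-percentile billing: sort the samples, drop the top 5%,
# and bill at the highest remaining sample.
def percentile_95(samples_mbps: list[float]) -> float:
    ordered = sorted(samples_mbps)
    index = int(len(ordered) * 0.95) - 1   # last sample inside the lower 95%
    return ordered[max(index, 0)]

# 100 five-minute samples: steady 10 Mbps with 5 short 100 Mbps bursts
samples = [10.0] * 95 + [100.0] * 5
print(percentile_95(samples))  # 10.0 — the bursts fall in the discarded 5%
```

If the bursts lasted more than 5% of the period, they would push the billed rate up, which is exactly the behavior ISPs want to capture.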

Cacti provides custom user management that allows administrators to determine what information users can view and also what actions they can take from the console. These items include the ability to import/export data, change templates and adjust various graph settings. We found the granularity flexible enough without providing so many settings that it becomes cluttered.

Compared to the other products we tested, Cacti is somewhat limited in features. It does not provide a dashboard with infrastructure status, nor does it have the ability to generate alerts. However, that should not preclude you from considering Cacti: what it does, it does well. The interface is efficient and quick to navigate, with no need to sit around for minutes while pages load. Also, with no agents to be deployed to hosts, it is an unobtrusive monitoring product that gives administrators a good overview of network topology with little overhead.

Zabbix

Zabbix is an open-source network management solution released under the GPL2 license and is currently in version 2.4. It provides a Web interface for monitoring and stores collected data to one of several common databases such as MySQL, Oracle, SQLite or PostgreSQL. The Zabbix server itself runs only on Linux/UNIX and not on Windows; however, Zabbix agents are available for most Windows and Linux/UNIX server and desktop operating systems.

We installed Zabbix using one of the many available installation packages. The product can also be installed by compiling the source code or downloading a virtual appliance in formats such as VirtualBox, Hyper-V, ISO and VMware. In addition to the regular install, we also took a quick look at the available VM, a good option for those looking to evaluate Zabbix. The install was simple and straightforward using instructions available from the Zabbix website. We especially liked the condensed installation package, which requires just a few command line inputs and folds the Apache/PHP/MySQL setup into the main install, with no need for separate configuration unless there are special circumstances to consider.

When loading the Web interface for the first time, there is a short wizard that confirms that the pre-requisites and database connection are properly configured before loading the main dashboard. The first screen is the personal dashboard, which provides a general overview of the IT infrastructure with a list of hosts, system and host status. On a new installation this screen is largely blank with the exception of information related to the Zabbix server itself. This dashboard is customizable; you can add preferred graphs, maps and screens as you configure them.

Zabbix collects data in several ways: by installing agents on Linux/Windows hosts, by using protocols such as SNMP, ICMP, TCP and SSH, or over HTTP and SMTP for basic network health information. Zabbix can auto-discover network devices and also has the capability to perform low-level discovery. We started by configuring a discovery rule to map out our test network. The granularity of this configuration is very good: you can specify IP ranges, protocols and other criteria to determine how a network is mapped out. After a few minutes we started to see a list of devices, ranging from routers and printers to servers and desktops. The discovery provides a general network overview, but does not provide any in-depth information until you add the individual host/device to the Web interface. We added several hosts using the Zabbix agent for Linux and Windows, together with a few using SNMP.

We installed the agents using a single command on our Linux hosts. There are some configuration options that can be set in the 'zabbix_agentd.conf' configuration file, such as the server IP and host name, along with other custom options. The agents can perform either passive checks, where certain data (memory use, disk space, bandwidth) is essentially polled from the Zabbix server, or active checks, where the agent retrieves a 'to-do' list from the server and sends updated data to the server periodically. Installing as a Windows service is also fairly straightforward, using an executable and making a few tweaks to a configuration file to let Windows know where the Zabbix server resides.
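A minimal `zabbix_agentd.conf` covering both check modes might look like the sketch below. The IP address and host name are placeholders; the parameter names are the standard ones from the agent configuration file:

```
# zabbix_agentd.conf — minimal sketch; the IP and hostname are placeholders

# Zabbix server allowed to poll this agent (passive checks)
Server=192.168.1.5

# Server the agent reports to on its own schedule (active checks)
ServerActive=192.168.1.5

# Must match the host name configured for this host in the Web interface
Hostname=web-host-01
```

After editing the file, the agent service needs a restart for the changes to take effect.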

The Web interface is a bit complex and looks intimidating at first, but once we became more familiar with the various screens and terminology we found it easy to navigate. We do wish the fonts and graphics were a bit more prominent, as some of the information can be difficult to read. One of the features we liked is the dynamic link history that shows which sections you recently used, allowing you to quickly navigate back. The online user manual is comprehensive and up to date, and the Zabbix website has lots of detailed information on features, installation and configuration options.

Administrators can either use built-in templates or create their own triggers to build rules that send messages and/or perform commands when certain conditions are met. For instance, we created a rule that sent us a message when there was a general problem with one of the hosts and also restarted the agent on that host. Rules provide a lot of granularity and this was one of the few areas where we wish the online manual had a bit more detail on configuration options.

Most of the reporting is to the screen, with the ability to print. The print option essentially displays what is on the screen minus the navigation header and other extraneous information. This is not necessarily a bad configuration, but it does not make for the most elegant printouts. We would have liked to have seen some 'save-as-PDF' and export capabilities. That being said, the online reporting and graphs are excellent, with multiple customization options. As mentioned earlier, the custom graphs and screens can be added to the main dashboard and called up with a simple click.

Zabbix is entirely open source; there is no separate paid enterprise version, and all of the source code is available, which should be attractive to both small and large enterprises. Although Zabbix does not offer a commercial version, commercial support contracts are available in five levels ranging from 'Bronze' to 'Enterprise'. Zabbix also offers other paid services such as integration, turnkey solutions and general consulting.

Icinga

Initially created as a fork of Nagios in 2009, Icinga's latest version has been developed, per the vendor, "free from the constraints of a fork." Version 2 sports a new rules-driven, object-based configuration format. Icinga is still open source under the GPL, and the current releases include Core and Web 1.11.x versions along with a 2.x version. Icinga can monitor devices running both Linux and Windows, but the server itself runs only on Linux. Since the 2.x Web GUI was still in beta, we installed the core server version 2.x and used the latest 1.11.x version as the Web GUI.

Icinga has a modular design where you select the core server, your preferred GUI and add any desired plug-ins such as reporting and graphing tools. We installed the basic server using only two commands. Overall, we found the Icinga online documentation to be good; however, a quick start guide would have been helpful as there is no guidance from the get-go on which of the many configuration files needs to be tweaked, even for a basic installation.

We determined that we needed either a MySQL or PostgreSQL database in order to run the Web interface; in addition, Apache or Nginx plus PHP is required. There are a few steps involved in this install and configuration process, depending on how many incremental upgrade files are available for the Icinga 2 DB IDO module and also on how the Web server is configured. We went through the list of commands, and after a fair bit of trial and error we were able to access the Web GUI from a browser.

After logging in, a dashboard-type overview is displayed, with navigation organized into groups along the left (Icinga calls them ‘cronks’) and the main part of the screen used to display information. Along the top there is a section that provides an overview of the infrastructure using color-coded counts, with healthy hosts shown in green, warnings in yellow, critical problems in orange and hosts that are unavailable in red.

Icinga maximizes the use of most screens and, even if the first impression is a bit cluttered, overall we found the navigation and organization of data to be intuitive. Many of the screens are list based, displaying information about hosts and host issues sorted by various criteria such as availability, length of down time or severity of the issue. Icinga provides the ability to sort each list ascending or descending on each column, something we found very helpful. Furthermore, you can select which columns to display for each list on the fly, which provides a nice level of customization.

Icinga takes advantage of several common protocols for monitoring the network infrastructure, from simple PING commands to SNMP, POP3, HTTP and NNTP. Information can also be gathered from other devices such as temperature sensors and other network-accessible devices. Configuring which hosts to monitor and what to monitor is accomplished using the configuration files, and the granularity to which you can customize this is overwhelming: the ‘Basic Monitoring’ section of the user manual runs 50 pages. Luckily, you can use templates and reusable definitions to streamline this process.

For our environment we defined a few hosts by linking a name to the IP addresses and then added what is known as ‘check commands’. These are essentially protocol definitions such as PING and HTTP that instruct Icinga what to monitor for each host. You can then expand these configurations to include how often to query a host, when to escalate warnings and where to send email notifications of pending issues.
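The host and ‘check command’ definitions described above are plain-text objects. A minimal sketch in the Icinga 2 object syntax (the host name, address and intervals here are hypothetical, not taken from our test setup):

```
object Host "web01" {
  address = "192.0.2.10"
  check_command = "hostalive"
}

object Service "http" {
  host_name = "web01"
  check_command = "http"
  check_interval = 5m
  retry_interval = 1m
}
```

Templates can then factor out common attributes such as check intervals and notification settings, so each additional host needs only a few lines.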

Configuration files are the core of Icinga; we counted 14 main files, and some of these include additional files for more specific configurations. Some configuration files can be left with default values, but others must be configured specifically for the environment, such as hosts, email addresses for notifications and enabling/disabling services used for host monitoring. The configuration files can be modified or created using an editor like vi or nano, but there are also configuration add-ons available, plus third-party tools such as Chef and Puppet. In future releases Icinga will be adding the ability to configure via GUI, API and CLI, something that would be helpful for items that require ongoing changes, such as host configurations.

Icinga provides native support for Graphite for graphing purposes and an Icinga Reporting module is available; it is based on the open source Jasper Reports.

There is no paid version of Icinga available, but there are several organizations worldwide that offer paid support at different levels. Icinga also hosts several camps every year where users, developers and sponsors get together to discuss best practices and further development goals of the product.

Conclusion

When selecting monitoring tools it is important to have a clear goal from the outset. Are you looking to just send a ping to each device every 15 minutes to make sure the device responds? Or, do you need more comprehensive information such as CPU, RAM, disk and bandwidth usage? Installing agents and configuring SNMP to access more advanced features should be a consideration as this can be a time-consuming task that may not be practical in larger organizations. A workable hybrid approach could be to install agents on critical devices that need deep-dive monitoring, while monitoring other devices in agent-less mode.
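As a toy illustration of the agent-less baseline (the "just send a ping" end of the spectrum), a reachability probe can be as simple as a timed TCP connect. This is a minimal sketch, not a substitute for a real monitoring tool; the function name is our own and the hosts you feed it come from your own inventory:

```python
import socket

def is_reachable(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout.

    An agent-less liveness check: no SNMP, no installed agent, just
    evidence that the service endpoint accepts connections.
    """
    try:
        # create_connection resolves the name and applies the timeout
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # covers refused connections, timeouts and unreachable networks
        return False
```

Run something like this against a host list on a schedule and alert on state changes; anything deeper (CPU, RAM, disk, bandwidth) is where agents or SNMP come in.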


Sunday, September 27, 2015

10 TED Talks for techies


These inspiring, sometimes frightening presentations detail how technologies from bionics to big data to machine learning will change our world for good or ill -- and sooner than you might think.

Topics range from bionics, virtual reality and facial recognition to driverless cars, big data and the philosophical implications of artificial intelligence.


Hugh Herr: New bionics let us run, climb and dance

Hugh Herr is a bionics designer at MIT who creates bionic extremities that emulate the function of natural limbs. A double leg amputee, Herr designed his own bionic legs -- the world's first bionic foot and calf system called the BiOM.

Herr's inspirational and motivational talk depicts the innovative ways that computer systems can be used in tandem with artificial limbs to create bionic limbs that move and act like flesh and bone. "We want to close the loop between the human and the bionic external limb," he says. The talk closes with a moving performance by ballroom dancer Adrianne Haslet-Davis, who lost her left leg in the 2013 Boston Marathon bombings. She dances beautifully wearing a bionic leg designed by Herr and his colleagues.





Chris Milk: How virtual reality can create the ultimate empathy machine

This inspiring talk details how Chris Milk turned from an acclaimed music video director who wanted to tell emotional stories of the human condition into an experiential artist who does the same via virtual reality. He worked with the United Nations to make virtual reality films such as "Clouds Over Sidra," which gives a first-person view of the life of a Syrian refugee living in Jordan, so that U.N. workers can better understand how their actions can impact people's lives around the world.

Milk notes, "[Virtual reality] is not a video game peripheral. It connects humans to other humans in a profound way that I've never seen before in any other form of media... It's a machine, but through this machine, we become more compassionate, we become more empathetic, and we become more connected -- and ultimately, we become more human."





Topher White: What can save the rainforest? Your used cell phone

Topher White is a conservation technologist who started the Rainforest Connection, which uses recycled cell phones to monitor and protect remote areas of rainforests in real time. His extraordinary talk revolves around his 2011 trip to a Borneo gibbon reserve. He discovered that illegal logging was rampant in the area, but the sounds of animals in the rainforest were so loud that the rangers couldn't hear the chainsaws over the natural cacophony.

Resisting the urge to develop an expensive high-tech solution, White turned to everyday cell phones, encased in protective boxes and powered by solar panels. The devices are placed high in the trees and programmed to listen for chainsaws. If a phone hears a chainsaw, it uses the surprisingly good cellular connectivity in the rainforest to send the approximate location to the cell phones of rangers on the ground, who can then catch the illegal loggers in the act. In this way, White's startup has helped stop illegal logging and poaching operations in Sumatra, and the system is being expanded to rainforest reserves in Indonesia, Brazil and Africa.



Fei-Fei Li: How we teach computers to understand pictures

An associate professor of computer science at Stanford University, Fei-Fei Li is the director of Stanford's Artificial Intelligence Lab and Vision Lab, where experiments exploring how human brains see and think inform algorithms that enable computers and robots to see and think.

In her talk, Li details how she founded ImageNet, a project that has downloaded, labeled and sorted through a billion images from the Internet in order to teach computers how to analyze, recognize and label them via algorithms. It may not sound like much, but it's a vital step on the road to truly intelligent machines that can see as humans do, inherently understanding relationships, emotions, actions and intentions at a glance.





Pia Mancini: How to upgrade democracy for the Internet era

Argentine democracy activist Pia Mancini hopes to use software to inform voters, provide a platform for public debate and give citizens a voice in government decisions. She helped launch an open-source mobile platform called DemocracyOS that's designed to provide citizens with immediate input into the legislative process.

In her talk, Mancini suggests that the 18th-century democratic slogan "No taxation without representation" should be updated to "No taxation without a conversation" for the modern age. She poses the question, "If the Internet is the new printing press, then what is democracy for the Internet era?" Although it took some convincing, Mancini says, the Argentine Congress has agreed to discuss three pieces of legislation with citizens via DemocracyOS, giving those citizens a louder voice in government than they've ever had before.




Kenneth Cukier: Big data is better data

As data editor for The Economist and coauthor of Big Data: A Revolution That Will Transform How We Live, Work, and Think, Kenneth Cukier has spent years immersed in big data, machine learning and the impact both have had on society. "More data doesn't just let us see more," he says in his talk. "More data allows us to see new. It allows us to see better. It allows us to see different."

The heart of Cukier's talk focuses on machine learning algorithms, from voice recognition and self-driving cars to identifying the most common signs of breast cancer, all of which are made possible by a mind-boggling amount of data. But along with his clear enthusiasm for big data and intelligent machines, he sounds a note of caution: "In the big data age, the challenge will be safeguarding free will, moral choice, human volition, human agency." Like fire, he says, big data is a powerful tool -- one that, if we're not careful, will burn us.



Rana el Kaliouby: This app knows how you feel — from the look on your face

Technology has been blamed for lessening social and emotional connections among millennials, but what if it could sense emotion? In this talk, computer scientist Rana el Kaliouby, cofounder and chief strategy & science officer of Affectiva, outlines her work designing algorithms for an application used on mobile phones, tablets and computers that can read people's faces and recognize positive and negative emotions.

What good is that? el Kaliouby gives a few examples: Wearable glasses armed with emotion-sensing software could help autistic children or the visually impaired recognize particular emotions in others. A learning app could sense that the learner is confused or bored, and slow down or speed up accordingly. A car could sense a driver's fatigue and send an alert. "By humanizing technology," el Kaliouby concludes, "we have this golden opportunity to reimagine how we connect with machines, and therefore how we, as human beings, connect with one another."



Chris Urmson: How a driverless car sees the road

In this talk, roboticist Chris Urmson cites some of the dangers drivers face -- inclement weather; distractions that include answering phone calls, texting and setting the GPS; flawed, careless drivers -- as well as the staggering amount of time wasted each day by drivers stuck in traffic. The solution? Not surprisingly, Urmson, who has headed up Google's self-driving car project since 2009, says autonomous cars are the answer.

Urmson shows how driverless cars see and understand their environment -- the layout of the roads and intersections, other vehicles, pedestrians, bicyclists, traffic signs and signals, construction obstacles, special presences such as police and school buses, and so on -- and decide what action to take based on a vast set of behavioral models. It's a fascinating car's-eye look at the world.




Jeremy Howard: The wonderful and terrifying implications of computers that can learn

Jeremy Howard, a data scientist, CEO of advanced machine learning firm Enlitic and data science professor at Singularity University, imagines how advanced machine learning can improve our lives. His talk explores deep learning, an approach that enables computers to teach themselves new information via learning algorithms. A bit lengthy but fascinating, Howard's talk outlines different ways computers can teach themselves by "seeing," "hearing" and "reading."




Nick Bostrom: What happens when our computers get smarter than we are?

With a background in physics, computational neuroscience, mathematical logic and philosophy, Nick Bostrom is a philosophy professor at Oxford University and author of the book Superintelligence: Paths, Dangers, Strategies. He is also the founding director of the Future of Humanity Institute, a multidisciplinary research center that drives mathematicians, philosophers and scientists to investigate the human condition and its future.

This metaphysical discussion, reminiscent of a college philosophy course, explores how older A.I., programmed by code, has evolved into active machine learning. "Rather than handcrafting knowledge representations and features," Bostrom says, "we create algorithms that learn from raw perceptual data." In other words, machines can learn in the same ways that children do.

Bostrom theorizes that A.I. will be the last invention that humanity will need to make, and eventually machines will be better at inventing than humans -- which may leave us at their mercy as they decide what to invent next. A solution to control A.I., he suggests, is to make sure it shares human values rather than serving only itself.




Microsoft Azure Builds Its Own Networking OS For Switches


Microsoft has developed a new networking OS, Azure Cloud Switch (ACS), that will give it a say in how data center switches operate.

It’s “a cross-platform modular operating system for data center networking,” writes Kamala Subramanian, principal networking architect for Microsoft Azure, in a recent company blog.

Linux-based ACS is Microsoft’s first foray into offering its own SDN-like software for running network devices such as switches. It came about because cloud and enterprise networks are challenged by the task of integrating “different software running on each different type of switch into a cloud-wide network management platform,” Subramanian writes.

“Ideally, we would like all the benefits of the features we have implemented and the bugs we have fixed to stay with us, even as we ride the tide of newer switch hardware innovation,” she writes.

ACS allows Microsoft to share the same software stack across multiple switch vendors. Other benefits include the ability to scale down the software while developing features that are required for data center and networking needs, as well as debug, fix, and test software bugs at a faster rate.

A cynic might view ACS as Microsoft’s attempt to get commercial switches to behave the way it wants. Subramanian explains it this way: “We believe there are many excellent switch hardware platforms available on the market, with healthy competition between many vendors driving innovation, speed increases, and cost reductions.”

ACS was designed using the Switch Abstraction Interface (SAI) specification, which is an Open Compute Project (OCP) standard that has an API for programming network-switch chips. Microsoft is a founding member of SAI and continues to be a major contributor. In July, the SAI specification was officially accepted by OCP as a standardized C API to program ASICs.

Microsoft put its ACS through its paces publicly at the SIGCOMM conference in August. The ACS was demonstrated with four ASIC vendors (Mellanox, Broadcom, Cavium, and the Barefoot Networks software switch), six implementations of SAI (Broadcom, Dell, Mellanox, Cavium, Barefoot, and Metaswitch), and three application stacks (Microsoft, Dell, and Metaswitch).

In her blog, Subramanian endorses separating switch software from switch hardware as part of a growing trend in the networking industry, writing that “we would like to contribute our insights and experiences of this journey starting here.”


Wednesday, July 15, 2015

ASR5000 / ASR5500 Troubleshooting Guide


https://supportforums.cisco.com/document/12539481/asr5000-asr5500-troubleshooting-guide


Tuesday, July 14, 2015

Virtualization and its Different Types




Virtualization is the abstraction of computing and network resources. In the computing world, there can be various degrees of virtualization:

Network Virtualization

Segmenting a common network into separate virtual networks isolates user traffic within individual network domains; it involves logical separation of data-plane (and some control-plane) functionality. There can be several forms of network virtualization:
  • Virtual LANs (VLANs) - Separate L2 LAN broadcast domains.
  • Virtual Routing and Forwarding (VRF) - Separate L3 routing domains.
  • Virtual Private Networks (VPNs) - Creating virtual circuits in a shared network. Commonly deployed VPN technologies include MPLS-VPN, IPsec-VPN, etc.
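As a concrete illustration of the first two forms, here is how a VLAN and a VRF might be defined on a Cisco IOS device (a sketch; the names, numbers and addresses are hypothetical):

```
! VLAN: a separate L2 broadcast domain
vlan 10
 name Engineering
!
! VRF: an independent L3 routing table
ip vrf CUST-A
 rd 65000:10
!
! Bind an interface into the VRF
interface GigabitEthernet0/1
 ip vrf forwarding CUST-A
 ip address 10.0.0.1 255.255.255.0
```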

Device Virtualization

Segmenting a device into separate logical independent entities.

  • Virtual Contexts - Used on firewalls, load balancers, and other application networking platforms. Involves logical separation of data-plane, and some separation of configuration and management plane. Available on Cisco FWSM, ACE, etc.
  • Secure Domain Routers (SDR) - Creating separate logical routers, each using its own route processors and line cards, within the same physical chassis. SDRs are isolated from each other in terms of their resources, performance, and availability. Available in Cisco IOS-XR platforms.
  • Virtual Device Contexts (VDCs) - Logical separation of control-plane, data-plane, management, resources, and system processes that enables collapsing multiple logical networks into a single physical infrastructure. Available on Cisco Nexus 7000 platforms.
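On a Nexus 7000, for example, creating a VDC and giving it dedicated ports looks roughly like this (a sketch with hypothetical names; exact syntax varies by NX-OS release):

```
! Create a new virtual device context and allocate physical ports to it
vdc Prod-VDC
  allocate interface Ethernet2/1-8
! Move the admin session into the new context to configure it
switchto vdc Prod-VDC
```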


Server Virtualization

Hardware-assisted virtualization simulates a complete hardware environment, or Virtual Machine (VM), in which an unmodified "guest" operating system executes in complete isolation. A physical server is partitioned into multiple logical servers so that each has the appearance and capabilities of running on its own dedicated machine. Each virtual server can run its own full-fledged operating system, and each can be independently rebooted. With hardware-assisted full virtualization, multiple low-usage servers can be virtualized - each transformed into a Virtual Machine - and multiple VMs can be run simultaneously on the same physical server. VMs can also be moved from one server to another for load balancing or disaster recovery.

Desktop Virtualization or Virtual Desktop Infrastructure (VDI)

VDI is a server-centric computing model that provides the ability to host and centrally manage desktop virtual machines in the data center while giving end users a full PC desktop experience.

Storage Virtualization

Storage virtualization is the process of abstracting logical storage from physical storage. This abstraction can be done at any layer of the storage software and hardware stack. Virtualization of storage helps achieve location independence by abstracting the physical location of the data. The virtualization system presents to the user a logical space for data storage and itself handles the process of mapping it to the actual physical location. The logical storage can be a partition, volume, or virtual disk (vdisk). The abstraction can be host-based, storage device-based, or network-based. Storage area networks (SANs) can also be virtualized into zones and Virtual SANs (VSANs). Zoning is a distributed service common throughout the Fibre Channel (FC) fabric and prevents devices from communicating with other unauthorized devices. A VSAN provides the ability to create separate virtual fabrics on top of the same redundant physical infrastructure.
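For example, on a Cisco MDS fabric switch, VSAN creation and zoning look roughly like this (a sketch; the VSAN number, zone name and WWPNs are hypothetical):

```
! Create a VSAN - a virtual fabric on the shared physical fabric
vsan database
 vsan 10 name PROD-FABRIC
!
! Zone: only these two ports may talk to each other in VSAN 10
zone name host-to-array vsan 10
 member pwwn 10:00:00:00:c9:00:00:01
 member pwwn 50:06:01:60:00:00:00:01
!
! Group zones into a zoneset and activate it
zoneset name prod-zoneset vsan 10
 member host-to-array
!
zoneset activate name prod-zoneset vsan 10
```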

Monday, July 13, 2015

Oracle Unveils Four New VNFs - Policy Management, SBC, Converged Application Server & Services Gatekeeper


Oracle has recently announced the release of a portfolio of four fully virtualized network function (VNF) solutions for Communications Service Providers (CSPs). These Network Function Virtualization-enabled products - Oracle Communications Policy Management, Oracle Communications Converged Application Server, Oracle Communications Services Gatekeeper, and Oracle Communications Session Border Controller - are aimed at helping CSPs conquer the layers of complexity inherent in bridging physical and virtual environments as they continue on their journey toward NFV.

According to Oracle, the newest version of Oracle Communications Policy Management is a key component in the deployment of next-generation LTE networks. As network virtualization continues to advance, CSPs require the flexibility, reliability, and depth of feature functionality that enables them to evolve their networks through LTE, Voice over LTE (VoLTE), IP Multimedia Subsystem (IMS) and virtualization. In addition, the solution is designed to enable tight integration with charging and billing systems, to provide valuable network insights via an integrated policy analytics solution and to allow providers to serve all subscribers from a single policy management instance regardless of network access type.

Ooredoo Kuwait rolls out unified cloud for NFV


Ooredoo Kuwait has successfully deployed network functions virtualisation (NFV) architecture and its IT applications on a single, unified cloud, it was announced this week.

The single cloud is based on VMware's vCloud for NFV platform, and was rolled out as part of Ooredoo's 'Unify' initiative, which aims to take advantage of a software-defined data centre architecture, as well as NFV and software-defined networking (SDN).

VMware's professional services team partnered with the operator and its virtual network functions (VNF) vendor, Huawei, to design and deploy the VMware vCloud for NFV platform and virtual network functions into a test environment in less than three months. The solution supported the seamless transfer of the virtualised Core IMS (IP Multimedia Subsystem) from test environment to Ooredoo's production IT environment, and enabled Ooredoo to conduct its first voice-over-LTE (VoLTE) call.

"Reliability and availability were key factors in our decision to work with VMware. The production-proven VMware technology and our previous experience with the high levels of technical support offered by VMware professional services gave us the confidence to move to a unified cloud platform," said Mijbil Al-Ayoub, director of Corporate Comms, Ooredoo Kuwait.

"The speed at which we have been able to trial our unified cloud and onboard the VoLTE service functions into our IT network has exceeded our expectations. We did a joint R&D project that took only two months to complete, and we finalised the development of our vIMS product that can be deployed in a production, commodity infrastructure, automatically in only 3.5 hours."

The Ooredoo platform uses VMware's policy-based resource allocation features to maintain application service level agreement (SLA) enforcement, and software-defined networking capabilities of the VMware NSX network virtualisation platform to address NFV network scalability needs, multi-tenancy with microsegmentation, capacity on demand and QoS. VMware NSX is a main pillar in segregating tenants across Ooredoo's single converged private cloud.

"We're delighted to be working with such forward-thinking customers who understand the value of a platform-based deployment strategy for NFV," said David Wright, vice president, Telecommunications and NFV Group, VMware.

"By making the decision to deploy and manage a horizontally virtualised platform capable of supporting multivendor VNFs and IT workloads, Ooredoo Kuwait has created a powerful service environment capable of delivering high-value services to customers while managing operational costs."

Sunday, June 14, 2015

New Cisco CEO Announces Executive Team

Courtesy - Cisco Blog


Today I’m extremely proud and excited to announce my next generation executive leadership team who will lead Cisco into the digital age.

With the increasing pace and complexity of today’s market, it’s critical that our leadership team understands our customers, delivers results, brings diverse perspectives and experiences, and builds world-class, highly motivated teams. This is what will differentiate us as a much faster, innovative organization that delivers the best results for our customers.

We have been developing and attracting our next generation of leaders for many years, and I’m confident that this team is ready to lead Cisco’s next chapter. They know how Cisco works, what makes Cisco great, and how we can accelerate our current momentum. Some have been with the company for as long as I have or longer, a third have joined Cisco in the last 3 years, and others are new to Cisco.

They have the capabilities, accomplishments, and the values required to lead us into the future. Their combined vision, passion and authenticity, along with a focus on strategy, results, and innovation truly differentiate this team. These unique characteristics reflect the remarkable culture of Cisco that has motivated and energized me for the past 17 years.

Let me explain why each person is the ideal leader to move us forward.

Pankaj Patel, Executive Vice President, Development


  • Everything we do starts with our innovation. Pankaj leads Cisco’s 25,000 development engineers and the company’s $36 billion technology portfolio.
  • He joined Cisco through the acquisition of StrataCom and has since overseen Cisco’s innovation in the cloud, mobility, data center, security, collaboration, software and the Internet of Everything markets.
  • Over the last 2 years, Pankaj has led the transformation of Cisco’s engineering organization to drive focus and accelerate innovation. He is pioneering new ways of driving innovation at Cisco, including a new model of internal start-ups which he launched last year.


Kelly Kramer, Executive Vice President and Chief Financial Officer


  • Kelly joined Cisco three years ago after 20 years at GE working across numerous divisions and countries around the world.
  • She quickly established herself as a business leader capable of partnering across and influencing the entire organization, particularly with her no-nonsense direct style.
  • She has driven a disciplined focus on our financial model and delivered on our commitments to our shareholders.
  • Her promotion to CFO 3 quarters ago was seamless, and she is extremely well respected internally and externally.


Rebecca Jacoby, Senior Vice President, Operations


  • Previously Chief Information Technology Officer (CIO), Rebecca has a strong track record of operational excellence, innovative problem solving, and partnering cross-functionally. Her leadership and talent development skills have resulted in some of the best employee satisfaction scores in the company.
  • She has elevated the role of IT at Cisco and positioned us as one of the best in the industry. She has exemplified Fast IT, enabling $5.4B of incremental revenue in the last 4+ years with just $400M of incremental expense. She did all of this while driving down costs by over 5% each year.
  • Her experience in development, operations, supply chain and IT enables her to drive Cisco’s continued focus on profitability, accountability, and world-class operational excellence.
  • Her deep relationships with CIOs around the world make her extremely well respected across the industry and she was recently inducted into the CIO Hall of Fame.
  • Guillermo Diaz has been promoted to CIO reporting directly to Jacoby. Diaz, most recently Cisco senior vice president of IT—Connected IT, has been accountable for the company’s enterprise IT architecture, technology strategy, and IT services/operating model.


Francine Katsoudas, Senior Vice President, Chief People Officer


  • Fran is the architect of Our People Strategy and Human Resources Organization, focusing on how Cisco wins in the talent marketplace while creating a compelling and unique employee experience.
  • She accelerates company transformation through leadership, attracting and retaining the best talent and building a culture of passion and innovation.
  • Fran is committed to ground breaking HR solutions, analytics and new talent models.


Chris Dedicoat, Senior Vice President, Worldwide Sales


  • Chris joined Cisco in 1995 and has served as Senior Vice President of EMEAR for the past four years where he has led the region to solid growth in a very challenging market.
  • Chris has a keen understanding of technology and market opportunities, an ability to drive transformation while running the business, and an unparalleled ability to lead and motivate teams.


Joe Cozzolino, Senior Vice President, Services


  • Joe has extensive global General Management experience in all facets of the business, including engineering, sales, and services. He has a competitive edge and in his own words: “hates losing more than he loves winning.”
  • He began his career more than 25 years ago as a Systems Engineer designing video voice on fiber optics. For the last 2 years, Joe led Cisco’s Service Provider Mobility and Video Infrastructure businesses.
  • Before Cisco, he spent 12 years at Motorola in various executive roles successfully growing new businesses undergoing technology inflection.
  • Joe has an Electrical Engineering degree from the University of Massachusetts, Dartmouth, and an MBA from Anna Maria College.


Hilton Romanski, Senior Vice President and Chief Technology and Strategy Officer


  • Hilton has been responsible for the ‘buy’ in Cisco’s “build, buy, partner, integrate” strategy for growth and innovation.
  • He has led over $20 billion in acquisitions in 40 deals, including Sourcefire, Meraki, and Airespace and was named Deal Maker of the Year in 2014 by The Deal.
  • He has also led Cisco’s M&A and investment entry into the emerging markets by forming and expanding teams and activities in China, India, Russia, Eastern Europe, and Latin America.
  • Hilton oversees Cisco’s corporate venture investment portfolio, currently valued at $2 billion, one of the highest performing corporate venture capital funds globally.
  • In his new role, Hilton will lead CTSO and be chartered to drive strategic development and growth of Cisco by applying important tools to nurture technology disruption, build alliance partnerships, acquire companies, invest in start-ups, and engage the global marketplace of ideas to drive Cisco’s success.


Karen Walker, Senior Vice President and Chief Marketing Officer


  • Karen joined Cisco six years ago from Hewlett-Packard, where she held business and consumer leadership positions including Vice President of Alliances and Marketing for HP Services, and Vice President of Strategy and Marketing.
  • Her 20-plus years in the IT industry have included senior field and marketing leadership roles in Europe, North America, and the Asia Pacific region.
  • Karen is a Board member of the I.T. Services Marketing Association and a member of the CMO Council North America Advisory Board, the Marketers that Matter Council, Advancing Executive Women (AWE) in Silicon Valley, and CRN’s 2013 Women of the Channel. She also sponsors multiple initiatives to accelerate female leadership within Cisco.


Mark Chandler, Senior Vice President and General Counsel


  • Mark joined Cisco’s Legal Department in 1996 when it had 12 employees; today we have a phenomenal team of legal, contract and compliance professionals of more than 400 people that is regularly ranked among the industry’s best.
  • Before Cisco, Mark was General Counsel at Maxtor, a Fortune 500 disk drive manufacturer, and at StrataCom.
  • Mark is a strong business leader with a keen ability to innovate, disrupt and provide tremendous input into our strategy. In 2010, The National Law Journal named him one of the 40 Most Influential Lawyers of the Decade and in 2013, American Lawyer numbered Mark among the “Top 50 Big Law Innovators of the Last 50 Years.”


Ruba Borno, VP, Growth Initiatives and Chief of Staff


  • Ruba will join Cisco from The Boston Consulting Group where she is a Principal and leader in the Technology, Media & Telecommunications, and People & Organization practice groups.
  • She holds a Ph.D. and a Master of Science in Electrical Engineering from the University of Michigan, and a Bachelor of Science in Computer Engineering with honors from the University of North Carolina at Charlotte.
  • For the last seven years, Ruba has been advising enterprise and consumer technology executives on organizational change, increasing operational effectiveness, and accelerating business growth.
  • Ruba has been an Intel Ph.D. fellow at the National Science Foundation Engineering Research Center for Wireless Integrated MicroSystems, contributed to multiple peer-reviewed research publications, and is a supporter of Bay Area organizations tackling education and poverty challenges.


I am committed to investing in and developing Cisco’s extended leadership team over time. I plan to also look externally to fill several roles that will lead key growth initiatives in new markets. I am also committed to adding even more diversity of thought and experience over time, constantly strengthening both our bench and our decision making.

There are also a few individuals who have made tremendous contributions to Cisco who will be transitioning over the next few months. I am thankful for the years of partnerships I’ve had with these amazing leaders who will be leaving Cisco:


  • Wim Elfrink has served as Cisco’s Chief Globalization Officer since 2006 and opened Cisco’s second global headquarters in Bangalore, India. His leadership on smart cities and connected industries has helped define our vision for the next wave of the Internet, the Internet of Everything. Wim will retire from Cisco on July 25th and I want to thank him for his exceptional leadership and his passion for what is possible.
  • Padmasree Warrior is a highly respected leader who most recently served as Cisco’s Chief Strategy and Technology Officer. She is well known across the industry and the globe, and has been a champion internally for innovation, strategic partnerships, investments and mergers and acquisitions. Padmasree has led the success of many of our strategic partnerships and will remain with us until September to help finalize some of our key partnerships for the future. I am grateful for the impact she’s had on Cisco and her commitment to helping us finalize these important alliances.
  • Edzard Overbeek, in his role as Senior Vice President of Cisco Services, has been an incredible partner to me for many years. He has made the decision to leave Cisco after 15 years at the company and leadership roles in every region around the world. Edzard has agreed to stay on through the transition as a strategic advisor on key disruptive strategies that he has shown great passion for while at Cisco. His vision and energy will ensure his success in his next venture, something I hope will be closely connected to Cisco.


We are so fortunate that these leaders are able to remain with us in the near-term to finish key projects and ensure a smooth transition. I believe this is a testament to the Cisco culture as well as their commitment, service, and leadership.

Going forward, my new team will define and build the next chapter for Cisco together. I’m extremely confident we will move even faster, innovate like never before, and pull away from the competition. This is an incredible team with a diverse set of experiences, expertise and backgrounds to accelerate our innovation and execution, simplify how we do business, drive operational rigor in all we do, and inspire our amazing employees to be the best they can be.

This new leadership team, along with the deep talent and passion of all of Cisco, gives me absolute confidence that we will lead, and our customers will win. I’ve never been more excited to build our future together.


The Wait is Over - Cisco CCNA & CCNP Cloud Certifications Available



Cisco Cloud certification programs build and validate basic through advanced knowledge of the skills required to design, install, and maintain Cisco Cloud solutions. The curricula emphasize real-world best practices through labs and course materials.

Cisco Cloud certification programs are practical, relevant, and job-ready certification curricula, closely aligned with the specific tasks expected of today’s in-demand Cloud professionals. The Cisco Cloud certification program validates the skill set of individuals on industry-leading Cloud solutions and best practices as well as offering job-role-based curricula for all levels of IT staff.

The Cisco Cloud portfolio consists of associate-level (CCNA Cloud) and professional-level (CCNP Cloud) certifications. The program gives students an opportunity to take advantage of the promise of cloud and receive training that enables them to start a career in Cloud and/or make the transition from Data Center and Networking job roles.

The cloud portfolio covers the breadth of cloud products and technologies deployed in mid-sized to large networks utilizing Cisco cloud solutions. The labs and course materials cover:


  • Private and Hybrid Cloud Design
  • Cloud Security Design
  • Cloud Infrastructure Implementation
  • ACI and APIC Automation
  • Private and Hybrid IaaS Provisioning
  • Application Provisioning and Life Cycle Management
  • Cloud Systems Management 



CCNA Cloud

CCNA Cloud is a job role-based career certification that trains and certifies Cloud engineers, Cloud administrators, and network engineers. Cloud engineers and administrators can enhance and demonstrate key skills, while network engineers can extend their careers into Cloud. This certification prepares you for work in an SMB Cloud environment and to support a senior Cloud engineer in an enterprise environment.

Job duties include entry-level provisioning and support of Cisco Cloud solutions.

The CCNA Cloud certification can also provide career opportunities for employees of business and channel partner organizations transitioning from other technical areas such as Data Center or Networking.

Topics covered in CCNA Cloud include:

  • Cloud Characteristics and Models
  • Cloud Deployments
  • Basic Knowledge of Cloud Compute
  • Basic Knowledge of Cloud Networking
  • Provide End-User Support
  • Cloud Infrastructure Administration and Reporting
  • Chargeback and Billing Reports
  • Cloud Provisioning
  • Cloud Systems Management and Monitoring
  • Cloud Remediation


The CCNA Cloud certification is an associate-level certification that is valid for three years.

Achieving CCNA Cloud Certification

There are no prerequisites for CCNA Cloud certification. Completion of the CCNA Cloud certification requires the exams and recommended training shown in the table below.


Exams & Recommended Training

Required Exam(s) | Recommended Training
210-451 CLDFND | Understanding Cisco Cloud Fundamentals (CLDFND)
210-455 CLDADM* | Introducing Cisco Cloud Administration (CLDADM)*
*More information available July 2015
http://www.cisco.com/web/learning/certifications/associate/ccna_cloud/index.html



CCNP Cloud Certification

The CCNP Cloud is a job role-based career certification intended to distinguish Cloud engineers, administrators, designers, and architects, who design, implement, provision and troubleshoot Cisco Cloud and Intercloud solutions. It provides you with the skills to help your IT organization meet changing business demands, technology transitions, and deliver important business outcomes.

This certification prepares you to work in an SMB Cloud environment as well as perform the duties of a senior Cloud engineer in an enterprise environment.

The CCNP Cloud certification is a professional-level certification that is valid for three years.

Achieving CCNP Cloud Certification

Prerequisite Skills and Requirements

A valid Cisco CCNA Cloud or any Cisco CCIE certification can act as a prerequisite. Completion of the CCNP Cloud certification requires the exams and recommended training shown in the table below.


Exams & Recommended Training

Required Exam(s)* | Recommended Training*
300-504 CLDINF | Implementing and Troubleshooting the Cisco Cloud Infrastructure (CLDINF)
300-505 CLDDES | Designing the Cisco Cloud (CLDDES)
300-506 CLDAUT | Automating the Cisco Enterprise Cloud (CLDAUT)
300-507 CLDDACI | Building the Cisco Cloud with Application Centric Infrastructure (CLDDACI)
*More information available August 2015
http://www.cisco.com/web/learning/certifications/professional/ccnp_cloud/index.html

Saturday, June 13, 2015

Massive Route Leak Causes Internet Slowdown



Yesterday a massive route leak initiated by Telekom Malaysia (AS4788) caused significant network problems for the global routing system. Primarily affected were Level3 (AS3549, formerly known as Global Crossing) and their customers. Below are some of the details as we know them now.

Starting at 08:43 UTC on June 12th, AS4788 (Telekom Malaysia) started to announce about 179,000 prefixes to Level3 (AS3549, the Global Crossing AS), which in turn accepted these and propagated them to its peers and customers. Since Telekom Malaysia had inserted itself into the path for these thousands of prefixes, it was now responsible for delivering packets to the intended destinations.

This event resulted in significant packet loss and an Internet slowdown in all parts of the world. The Level3 network in particular suffered severe service degradation between the Asia-Pacific region and the rest of their network. The graph below, for example, shows the packet loss as measured by OpenDNS between London and Hong Kong over Level3. The same loss patterns were visible from other Level3 locations globally to destinations such as Singapore, Hong Kong and Sydney.


Packet loss London to Hong Kong over Level3


At the same time the round trip time between these destinations went up significantly, as can be seen in the graph below.


Round trip time London to Hong Kong over Level3

Time line
Just by looking at the number of BGP messages that BGPmon processed over time, as can be seen in the graph below, there is a clear point where the number of BGP updates suddenly increased. Looking closer at the data, this increase in BGP messages starts at 08:43 UTC and aligns exactly with the start of the leak and the start of the packet loss issues. At around 10:40 UTC we slowly observed improvements, and at around 11:15 UTC things started to clear up.
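The time-line analysis described above amounts to bucketing BGP update timestamps and looking for a sudden jump. A toy Python sketch of that idea, with invented timestamps rather than real BGPmon data:

```python
from collections import Counter
from datetime import datetime

# Hypothetical BGP update timestamps for illustration only.
updates = [
    "2015-06-12 08:42:10", "2015-06-12 08:42:55",
    "2015-06-12 08:43:01", "2015-06-12 08:43:02", "2015-06-12 08:43:40",
]

# Bucket the updates per minute; a leak shows up as a sharp spike.
per_minute = Counter(
    datetime.strptime(ts, "%Y-%m-%d %H:%M:%S").strftime("%H:%M")
    for ts in updates
)
print(per_minute)  # the count jumps at 08:43, matching the start of the leak
```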




BGP update messages



Let’s Look at an Example
An example of an affected prefix is 31.13.67.0/24, which is one of the Facebook prefixes. The AS path looked like this:

1103 286 3549 4788 32934

If we look at this path we see that AS32934, Facebook, is the originator of the prefix. Facebook announced it to its peer Telekom Malaysia (AS4788), which in turn announced it to Level3 (AS3549), which announced it to all of its peers and customers, essentially providing transit for a peer-learned route and causing a major routing leak.
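The peer-vs-transit distinction above is the crux of the leak, and it can be checked mechanically from the AS path: a route learned from a peer should originate inside that peer's customer cone. A minimal Python sketch of that check, using the leaked Facebook path; the customer-cone set here is a made-up assumption, since a real check needs an AS relationship database:

```python
# From Level3's perspective, AS4788 (Telekom Malaysia) is a peer, so
# routes learned from it should only originate inside AS4788's own
# customer cone. The cone below is hypothetical toy data.
TM_CUSTOMER_CONE = {4788, 38466}  # 38466 is a made-up customer ASN

def is_leak(as_path: str, peer_customer_cone: set) -> bool:
    """Return True when a route learned from a peer originates outside
    that peer's customer cone, i.e. the peer is providing unintended
    transit for the route."""
    hops = [int(asn) for asn in as_path.split()]
    origin = hops[-1]  # the last ASN in the path is the originator
    return origin not in peer_customer_cone

# The leaked path seen for 31.13.67.0/24 (origin AS32934, Facebook):
print(is_leak("1103 286 3549 4788 32934", TM_CUSTOMER_CONE))  # True: a leak
# A legitimate route originated by Telekom Malaysia itself:
print(is_leak("1103 286 3549 4788", TM_CUSTOMER_CONE))        # False
```

Production leak detection (as done by BGPmon and similar services) works on the same principle but uses inferred AS relationships, such as CAIDA's AS relationship datasets, instead of a hand-written cone.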

Because Telekom Malaysia did this for about 176,000 prefixes, they essentially signalled to the world that they could provide connectivity for all these prefixes and as a result attracted significantly more traffic than normal. All of this traffic had to be squeezed through their interconnects with Level3, and Telekom Malaysia was likely to hit capacity issues, which then resulted in the severe packet loss that users reported on Twitter and that we’ve shown with the data above.

The 176,000 leaked prefixes are likely all Telekom Malaysia’s customer prefixes combined with routes they learned from peers. This would explain another curious increase in the number of routes Level3 announced during the leak time frame.

The graph below shows the number of prefixes announced by Level3 to its customers. Normally Level3 announces ~534,000 prefixes on a full BGP feed. These are essentially all the IP networks on the Internet today. Interestingly, during the leak an additional 10,000 prefixes were observed. One explanation could be that these are more specific prefixes announced by peers of Telekom Malaysia to Telekom Malaysia, which are normally supposed to stay regional and not be visible via transit.
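The "more specifics" explanation above can be checked programmatically by diffing the prefix sets seen before and during the leak. A toy sketch using Python's `ipaddress` module, with made-up documentation prefixes rather than real leak data:

```python
import ipaddress

# Hypothetical example data: Level3's table before and during the leak.
baseline = {"203.0.113.0/24", "198.51.100.0/22"}
during_leak = baseline | {"198.51.100.0/24", "198.51.100.128/25"}

new_prefixes = during_leak - baseline

def is_more_specific(prefix: str, table: set) -> bool:
    """True if `prefix` is a strict subnet of some prefix already in `table`."""
    net = ipaddress.ip_network(prefix)
    return any(
        net != other and net.subnet_of(other)
        for other in map(ipaddress.ip_network, table)
    )

for p in sorted(new_prefixes):
    print(p, is_more_specific(p, baseline))  # both are subnets of the /22
```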






Number of prefixes on Level3 full IPv4 BGP table


Since Level3 was now announcing many more prefixes than normal, it would have hit max-prefix limits on BGP sessions with its peers. These peering sessions with other large tier-1 networks carry a significant portion of the world’s Internet traffic, and the shutdown of these sessions would cause traffic to shift around even more, exacerbating the performance problems as well as causing even more BGP churn.
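For the network engineers reading: the max-prefix protection mentioned here is configured per neighbor. An illustrative IOS-style sketch, with made-up ASNs, neighbor address and thresholds:

```
! Illustrative configuration only; ASNs, address and limits are invented.
router bgp 64500
 neighbor 192.0.2.1 remote-as 64501
 ! Tear down the session if the peer sends more than 550,000 prefixes;
 ! log a warning once 85% of that limit is reached.
 neighbor 192.0.2.1 maximum-prefix 550000 85
```

Tripping such a limit protects the local router, but, as described above, shutting down a tier-1 peering session shifts large traffic volumes onto the remaining paths.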


So in conclusion, what we saw yesterday morning was a major BGP leak of 176,000 prefixes by Telekom Malaysia to Level3. Level3 erroneously accepted these prefixes and announced them to their peers and customers. Starting at 08:43 UTC and lasting for about two hours, traffic was redirected toward Telekom Malaysia, which in many cases meant a longer route and also caused Telekom Malaysia to be overwhelmed with traffic. As a result, significant portions of traffic were dropped, latency increased, and users worldwide experienced a slower Internet service.