Tuesday, July 31, 2012

Why Cloud Computing?

In IT terms, the true value of cloud computing lies in the flexibility it delivers to the organisation: the capability to add new capacity without needing additional infrastructure, personnel or software licences. In hard cash terms, it takes the financial pain out of adding new applications or additional users by converting IT from a capital-intensive balance-sheet asset to a pay-as-you-go, subscription-based system.

So, why cloud computing? Simply put, it's cheaper and more efficient. But let's be more specific:

Increased efficiency through outsourcing servers.

Running your own servers in your own data centre means they need power, cooling and people with the expertise to run them. Outsourcing these hardware needs means dealing with a provider who does this on a vast scale, far larger than most businesses can afford, and so taking advantage of the economies of scale the provider realises. The need for server hardware disappears, along with the need for server licences. In undertaking this, IT staff are either redeployed or laid off; the old maxim that it's an IT man's job to make himself (or herself) redundant holds true.

Reduced hardware costs at the user level.

Computing in the cloud means users need very little computing power on the desktop to run the applications they need for their day-to-day work. The reason is simple: all computing (processing and so on) is done remotely by the servers at the data centre. Consequently, many users' applications can be delivered using either a dumb terminal or a mobile device. Traditionally, mobile devices have been laptops, which impose major costs on the organisation both directly and indirectly through theft, breakage or careless loss (leaving them in taxis, on aircraft and so on). As with mobile phones, it is now possible to dispel that overhead by paying the user a fixed allowance, typically monthly, and making them responsible for sourcing the device of their choice and for its availability, safekeeping, insurance and so on. This arrangement is known as Bring Your Own Device, or BYOD for short.

Elimination of need for software licences.

Not owning servers means not needing the server licences either! Any IT person will appreciate that this is a major cost saving. The same goes for desktop licences: outsourcing means not needing a licence for each machine you operate.

Organisational agility.

Traditionally, adding people who need IT, or adding new applications, has meant investment in new hardware and software to serve them. Conversely, ceasing use of a product or reducing headcount has meant bearing the cost of excess IT capacity in one form or another. Similarly, short-term project work such as sales campaigns and product launches has been onerous in terms of the cost burdens placed on IT. The real benefit of a cloud-based application is that usage can be turned on and off, with users added almost instantaneously and removed just as quickly when no longer needed. Consequently, short-term projects are straightforward to accommodate.

Green computing.

Outsourcing servers means outsourcing the energy costs of powering and cooling them; whether this is a total energy saving is open to question and depends on local conditions and energy costs. What you can say with certainty is that, taking full account of build and disposal costs, it represents a significant reduction in the environmental impact of IT on an organisation. Using lower-powered machines on the desktop further reduces that impact, both in terms of power usage and asset disposal. In addition, in-house data centres have typically utilised servers at around 5 to 15%, occasionally much less, whereas cloud data centres achieve 30 to 40%, which again significantly reduces the environmental impact of IT.

Reduction in staff costs.

Shifting IT into the cloud means you do not need staff to support your servers; it also drastically reduces the need to support users. The consequence is that IT staff can be redeployed onto more productive tasks or laid off.

Change to pay-as-you-go cost model.

Traditionally, IT has been a capital-intensive, long-term investment with assets firmly anchored on the balance sheet. Moving to the cloud computing model shifts the costs to a pay-as-you-go model, which in accounting terms moves them off the balance sheet and onto the statement of income. Looking again at the traditional IT model, desktop licensing has meant that all users are licensed for products they may never use or have any inkling of how to use. With a desktop delivered over the web, if you are using Word, Excel and PowerPoint, that is all you pay for.

Monday, July 30, 2012

What is Quality?

Quality is meeting customer expectations, simple as that.


Sourcing managers are frequently polled on their satisfaction with their services relationships. And usually more than half, sometimes closer to three quarters, say they are dissatisfied. Their expectations are not being met. Trillions of dollars are spent every year on services that do not deliver the quality required.

That level of failure is dramatic and suggests an industry in crisis. What are these failures? What's being done about them? Is it in service delivery, performance, continuous improvement objectives, tools or process, threat or issue management, knowledge management or the relationship itself? Surely incompetent service providers are at fault; perhaps it is unreasonable customers; or maybe outsourcing just does not deliver on expectations.

Some sourcing managers are satisfied, however; a minority, it seems. For them there are competent providers, customers are reasonable and outsourcing delivers on its promise. Something is making a difference.

The darkest secret of outsourcing is that in the majority of cases neither customers nor service providers make the necessary investment to ensure that quality expectations are met. They invest massively in production systems, ticketing systems, people (advisors and consultants at least, not so much talent development and training), ERP add-ons, sourcing projects and the commercial engagement.

However, neither party spends anything like the right amount of time and effort on the core capabilities required to ensure that attention is being paid, all day every day, to the small stuff: measuring things, understanding what matters, identifying issues, spotting potential problems and assigning actions, ensuring accountability is set and responsibility taken, planning for change (and recognizing it when it happens), and thinking about and acting on the correlation, or more simply the connection, between things. The core capability is governance.

It is this small stuff that makes the difference in the end. Doing the small stuff well all the time is how the service provider shows they care, that they pay attention and that they want to do better. If the culture of the customer is 'this stuff matters and we pay attention to it', that will be reflected in the attitude of their people and in how they expect service providers to behave. This is hard work not because of complexity or the intellect required, but because it is repetitive and tedious, takes time and effort, and is not usually considered high value or strategic; it only becomes so when things go wrong. It is not difficult work for the most part, and the preparation and care required to get it right at the beginning is relatively easily done. The hero who resolves the big problem of the day is lauded; the person who makes sure the problem never happens gets scant recognition. The result of not doing this well is poor quality, always. People can define this work, but it cannot be done reliably without a technology enabler, because this is what technology is good at: consistently doing things the same way all the time. What's this called? Good governance.

Using an enabling technology to support the governance of your services engagements and experience quality is the best investment you will make. The world's smartest outsourcing relationships rely on technology to ensure quality.

Introduction to Cisco Unified Communications Components

When getting started in the world of Cisco Unified Communications, it is easy to get overwhelmed by all of the available products, features and terms that make up the Cisco Unified Communications world. This article takes a look at five of the most commonly used components available from Cisco:
  • Cisco Unified Communications Manager (CUCM)
  • Cisco Unified Communications Manager – Express (CME)
  • Cisco Unity Connection (CUC)
  • Cisco Unity Express (CUE)
  • Cisco Unified Presence (CUP)
If you’re thinking of going for your Cisco CCNA Voice Certification then this is a great place to start getting familiar with the different Cisco Unified Communications components.

Cisco Unified Communications Manager

One of the keystone products of Cisco's Unified Communications portfolio is Cisco Unified Communications Manager (CUCM). CUCM enables the unification of voice, video, data and mobile applications, centralized through a single product. CUCM integrates with all of the other Cisco Unified products, enabling support for a very large number of potential Unified Communications solutions. CUCM can be deployed as an appliance on Cisco's 7800 Media Convergence Servers or on Cisco Unified Computing System (UCS) B200 and C210 M2 rack-mount and blade servers, as well as installed on a number of different third-party servers.

Cisco Unified Communications Manager – Express

Cisco Unified Communications Manager – Express (CME) provides a scaled-down version of CUCM that can be deployed on Cisco Integrated Services Routers (ISR). CME is intended for smaller businesses that don't require the full CUCM solution, or for enterprise branch offices that have limited connectivity to the main CUCM solution. CME provides most of the features required in these offices and is a very popular solution in these situations. It supports many features conventionally associated with key systems and private branch exchanges (PBX), including IP telephony, voice gateway services, voicemail and auto attendant features (with Cisco Unity Express).

Cisco Unity Connection

Cisco Unity Connection (CUC) is a voice and unified messaging platform that provides the ability to access and manage voice messages in a number of different ways, including through email, a web browser, an IP phone and smartphones. CUC also provides access to a powerful speech engine that can not only read text messages aloud but also perform speech recognition. CUC can be installed on Cisco UCS C210 M2 rack-mount servers and B200 M2 blade servers.

Cisco Unity Express

Cisco Unity Express (CUE) provides a subset of the functionality provided by CUC in a smaller package that is deployed on Cisco Integrated Services Routers (ISR). CUE can be integrated into a larger CUC and CUCM solution, or implemented with only CME for a small business or office. CUE specifically provides local storage and processing of integrated messaging, voicemail, fax, auto attendant, and interactive voice response (IVR). CUE can be deployed in a couple of different form factors depending on the series of ISR being used. In Cisco 2800 and 3800 series routers, CUE can be deployed using the Cisco Unity Express Network Module (NME-CUE) or the Cisco Unity Express Advanced Integration Module (AIM-CUE or AIM2-CUE-K9). In Cisco 2900 and 3900 series routers, CUE can be deployed using the Cisco Integrated Services-Ready Engine (ISM-SRE-900-K9) or the Service Module Services-Ready Engine (SM-SRE-700, 710, 900 and 910-K9).

Cisco Unified Presence

Cisco Unified Presence (CUP) provides an enterprise instant messaging (IM) and network-based presence solution that integrates with the Cisco Unified Communications products. CUP provides the ability for clients to support many different features, including instant messaging, presence, click-to-call, phone controls, voice, video, visual voicemail and web collaboration.

Summary

Cisco's unified communications solutions give an organization, regardless of size, several options for maintaining communications and collaboration, which is vital in modern organizations. This article provides just a brief introduction to the most popular of these options; if you are deploying this type of solution, take the time to review Cisco's full product offerings.

Thursday, July 26, 2012

Key ITIL Processes For Cloud Computing


With cloud computing, the key difference in the ITIL world is that ITIL processes can no longer be ignored. In my experience, ITIL processes today exist in silos in many organizations. Key processes like configuration management are ignored, and for most processes an option to bypass the process exists; that is, process adherence and compliance with best practices are significantly low.

But with cloud computing, ITIL processes can no longer be neglected. The Service Asset and Configuration Management (SACM) process will become critically important, along with the IT security and demand management processes. To highlight the key processes in a cloud computing environment, the five key processes are listed below from the customer's and the service provider's perspectives, in order of importance.

Customer perspective:
1) Security Management
2) Service Continuity Management
3) Incident Management
4) Change Management
5) Release & Deployment Management
This perspective is important for a customer when finalising a vendor; these are the processes which concern a customer the most.

Service provider perspective – External Facing
1) Service Level Management
2) Service Portfolio Management
3) Service Catalog Management
4) Financial Management
5) Supplier Management
These are the key processes which the service provider needs to focus on while approaching a customer.


Service provider perspective – Internal Facing
1) Service Asset & Configuration Management
2) Demand Management
3) Financial Management
4) Request Fulfillment
5) Capacity Management
These processes are important for a cloud service provider for their internal organization in order to provision customers’ requests.

Tuesday, July 24, 2012

4 Essential steps for Successful Incident Management

It never hurts to go back to basics. Recently, we were surprised at the confusion of some organizations about the process of incident management, so we thought – why not put a quick incident management primer down on paper?

For successful incident management, first you need a process – a repeatable sequence of steps and procedures. Such a process may include four broad categories of steps: detection, diagnosis, repair, and recovery.

1 – Detection

Problem identification can be handled using different tools. For instance, infrastructure monitoring tools help identify specific resource utilization issues, such as disk space, memory, CPU, etc. End-user experience tools can mimic user behavior and identify problems from the user's point of view, such as response time and service availability. Last but not least, domain-specific tools enable detecting problems within specific environments or applications, such as a database or an ERP system.

On the other hand, users can help you detect unknown problems that are not reported by infrastructure or user behavior monitoring tools. The drawback of problem detection by users is that it usually happens late (the problem is already there); moreover, the symptoms reported may point you in the wrong direction.

So which method should you use? Depending on your environment, a combination of multiple methods and tools is usually the best solution; unfortunately, no single tool will detect every problem.

Logging events will allow you to trace them at any point to improve your process. Properly logged incidents will help you investigate past trends and identify problems (repeated incidents of the same kind), as well as track ownership and responsibility.

Classification of events lets you categorize data for reporting and analysis purposes, so you know whether an event relates to hardware, software, service, etc. It is recommended to have no more than 5 levels of classification; otherwise it can get very confusing. You can start the top level with something like Hardware / Software / Service, or Problem / Service request.
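
To make this concrete, here is a minimal Python sketch of a three-level classification tree; the category names below are illustrative examples only, not a prescribed taxonomy:

    # A minimal sketch of a three-level incident classification tree.
    # Category names are illustrative; substitute your own taxonomy.
    CLASSIFICATION = {
        "Hardware": {
            "Server": ["Disk", "Memory", "Power supply"],
            "Network": ["Switch", "Router", "Cabling"],
        },
        "Software": {
            "Application": ["ERP", "Email", "Database"],
            "Operating system": ["Patching", "Configuration"],
        },
        "Service request": {
            "Access": ["New account", "Password reset"],
        },
    }

    def is_valid(path):
        """Check that a classification path exists in the tree."""
        node = CLASSIFICATION
        for level in path:
            if level not in node:
                return False
            node = node[level] if isinstance(node, dict) else {}
        return True

    print(is_valid(("Hardware", "Server", "Disk")))  # True
    print(is_valid(("Hardware", "Printer")))         # False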

Prioritization lets you determine the order in which the events should be handled and how to assign your resources. Prioritization of events requires a longer discussion, but be aware that you need to consider impact, urgency, and risk. Consider the impact as critical when a large group of users are unable to use a specific service. Consider the urgency as high when the impacted service is of critical nature and any downtime is affecting the business itself. The third factor, the risk, should be considered when the incident has not yet occurred, but has a high potential to happen, for example, a scenario in which the data center’s temperature is quickly rising due to an air conditioning malfunction. The result of a crashing data center is countless services going down, so in this case the risk is enormous, and the incident should be handled at the highest priority.
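
To make the impact/urgency/risk reasoning concrete, here is a minimal Python sketch of a priority matrix; the scales and resulting priority numbers are assumptions for illustration, not a standard:

    # A minimal sketch of an impact/urgency priority matrix (1 = highest).
    # Scales and weightings are illustrative assumptions.
    PRIORITY = {
        ("high", "high"): 1, ("high", "medium"): 2, ("high", "low"): 3,
        ("medium", "high"): 2, ("medium", "medium"): 3, ("medium", "low"): 4,
        ("low", "high"): 3, ("low", "medium"): 4, ("low", "low"): 5,
    }

    def priority(impact, urgency, at_risk=False):
        # at_risk covers incidents that have not happened yet but are
        # highly likely to, such as the overheating data centre above;
        # treat those at the highest priority.
        if at_risk:
            return 1
        return PRIORITY[(impact, urgency)]

    # Nothing is down yet, but the risk is enormous, so it goes first.
    print(priority("low", "low", at_risk=True))  # 1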

2 – Diagnosis

Diagnosis is where you figure out the source of the problem and how it can be fixed. This stage includes investigation and escalation.

Investigation is probably one of the most difficult parts of the process. In fact, some argue that when resolving IT problems, 80% of the time is spent on root cause analysis versus 20% on fixing the problem. For more straightforward problems, runbook procedures may be very helpful in accelerating an investigation, as they outline troubleshooting steps in a methodical way.

Runbook tip: The most crucial part of the runbook is the troubleshooting steps. They should be written by an expert, and be detailed enough so every team member can follow them quickly. Write all your runbooks using the same format, and insist on using the same terms in all of them. New team members who are not familiar yet with every system will be able to navigate through the troubleshooting steps much more easily.

Following a runbook manually can be very time-consuming and can lengthen the recovery time immensely. Instead, consider automating the diagnostic steps using runbook automation software. If you build the flow cleverly and weigh in all the steps that lead to a conclusion, automating the diagnostics process will give you quick answers and help you decide what your next step is.
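
To illustrate the idea (this is a rough sketch, not any particular runbook automation product), a diagnostic flow can be expressed as a series of automated checks, each of which either returns a conclusion or falls through to the next step:

    # A minimal sketch of an automated diagnostic flow. Each check
    # returns a conclusion string, or None to fall through to the next
    # step. Hosts, ports and thresholds here are placeholder assumptions.
    import shutil
    import socket

    def check_service_port(host="app-server", port=8080):
        try:
            with socket.create_connection((host, port), timeout=3):
                return None  # port answers; keep investigating
        except OSError:
            return "Service port closed: restart the application service"

    def check_disk_space(path="/", minimum_free_gb=5):
        free_gb = shutil.disk_usage(path).free / 1e9
        if free_gb < minimum_free_gb:
            return "Only %.1f GB free: run a disk cleanup" % free_gb
        return None

    RUNBOOK = [check_service_port, check_disk_space]

    def diagnose():
        for step in RUNBOOK:
            conclusion = step()
            if conclusion:
                return conclusion
        return "No known cause found: escalate to the next support level"

    print(diagnose())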

Escalation procedures are needed in cases when the incident needs to be resolved by a higher support level.

3 – Repair

The repair step, well… it fixes the problem. This may sometimes involve a gradual process, where a temporary fix or workaround is implemented primarily to bring back a service quickly. An incident repair may involve anything from a service restart to a hardware replacement, or even a complex software code change. Note that fixing the current incident does not mean that the issue won't recur, but more on that in the next step.
Here too, straightforward repairs such as a service restart, a disk cleanup and others can be automated.
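
As an illustrative sketch of such a repair (assuming a Linux host with systemd; the service name is a placeholder), note how the fix is verified rather than assumed:

    # A minimal sketch of an automated repair with verification.
    # Assumes a Linux host with systemd; "myapp" is a placeholder name.
    import subprocess

    def restart_service(name="myapp"):
        subprocess.run(["systemctl", "restart", name], check=True)
        state = subprocess.run(
            ["systemctl", "is-active", name],
            capture_output=True, text=True,
        )
        return state.stdout.strip() == "active"

    if restart_service():
        print("Service restored; close the incident and note the workaround.")
    else:
        print("Restart did not help; escalate to the next support level.")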

4 – Recovery

The recovery phase involves two parts: closure and prevention.

Closure means following up on any notifications previously sent to users about the problem, and on any escalation alerts, to announce that the problem has been resolved. Closure also entails the final closing of the incident in your logging system.

Prevention relates to the activities you undertake, where possible, to prevent a single incident from occurring again in the future and thereby becoming a problem. Implement two important tools to help you in this task:

Root Cause Analysis (RCA) process – the purpose of the RCA process is to investigate the root cause that led to the service downtime. It is important to mention that the RCA process should be performed by the service owners, who are not necessarily the ones who solved the specific incident. This is an additional reason why incident logging is so important – the information in the ticket is crucial for this investigation process.

And finally, Incident reports – while this report will not prevent the problem from occurring again, it will allow you to continually learn and improve your incident management process.

Monday, July 23, 2012

Network Manager's Free Toolkit

10 great free downloads for your network.

Got a small network, home network, medium-size network -- even an enterprise network -- and want to get the most out of it? Then I've got good news for you: 10 free pieces of software that can make your network easier to use, troubleshoot and maintain. These freebies will help everyone from networking pros to networking newbies and everyone in between.

There's plenty here for you -- great free tools for keeping your network secure; creating a quick, navigable network map; scanning networks and putting together a list of all connected devices; checking to see if your servers are up and running; even designing networks and more.

Note that I'm leaving out extremely popular and well-known free downloads, such as the Ethereal network protocol analyser (now known as Wireshark), and am concentrating instead on lesser-known downloads.
And as a bonus, I'm including a review of an extra, for-pay, try-before-you-buy download that can help your network as well.

Network Magic

If you're looking for a simple, free, all-in-one network management tool for a small peer-to-peer network, this is the one to get. It handles all the basic network chores, including adding new devices to the network, fixing broken network connections, setting up wireless encryption and protection, sharing printers and folders, reporting on the state of the security of each PC, and much more.

Wizards guide you through all these tasks and others. If you've got network experience, the wizards may or may not be useful, but those with moderate or less network experience will certainly find them helpful. But even if you're a network pro, there's a lot in this simple program you'll find worthwhile.

For example, the network map, pictured nearby, displays every device connected to your network, shows whether it's online or offline, and displays details about each, including the computer name, IP address, MAC address, operating system being used, shared folders, and system information such as its processor and RAM. It also lets you change the machine name, and it displays alerts about each device, such as if it isn't protected properly. Overall, it's far superior to Windows Vista's Network Map.

The software's Status Centre is also useful. It displays overall information about your network, such as whether there are any problems with overall security or with an individual PC. It also lets you troubleshoot connections, shows whether there are any intruders on the network, and displays information about wireless protection.

Parents will appreciate some of Network Magic's features. For example, the software can monitor the use of any individual PC on the network for the websites it visits, the times the computer is online and which programs are being used, and then mail a daily report about it to an email address. So it's ideal for parents who want to keep track of their kids' computer use. There's much more as well, including a bandwidth tester to show you your current Internet broadband speed.

Note that there are both paid and free versions of the software. The free version includes most basic features, such as repairing broken connections, issuing security alerts, monitoring network activity and the Network Map. The paid version, which costs from $24 to $40 (depending on how many PCs are on your network), delivers daily reports of Internet activity, supports remote access to your network's files and includes other advanced features.

When you install this program, you may need to tell your firewall to let this application access your network and the Internet.

Spiceworks IT Desktop

This freebie can help small or one-person shops with small and medium-size networks, although the complexity of its interface and some anomalies don't make it particularly useful for home networks. It's an all-in-one network inventory and management tool with a surprising number of features for a free piece of software.

The program will inventory your network and provide information about each device on it. It goes further than Network Magic and provides a significant amount of detail about each PC and device, including free and used disk space, antivirus software being used, problems on the device (such as server connection errors), and other information, as you can see in the nearby figure.

It will even provide an inventory of the software installed on each PC, in quite a bit of detail, finding not just popular applications such as Microsoft Office and Adobe Reader, but lesser-known ones such as the FileZilla FTP client. I discovered, however, that it had a more difficult time than Network Magic finding all of my network devices; you may need to fine-tune permissions and log-ins to get it to work properly.

Note that when you install this program, you may need to tell your firewall to let this application access your network and the Internet.

The program includes a variety of other tools, such as easy access to ping and traceroute functions.

And it attempts to be a help desk application as well. You can create help tickets with it, assign the ticket to others or yourself, and include due dates, priorities and so on. It's certainly no replacement for a full-blown help-desk application, but for a small office with a small IT staff, you can't argue with free.

Because the program doesn't always easily find all devices attached to the network, and it has some anomalies (some antivirus software may flag one of this software's components as a virus, for example), this isn't a perfect application. But it's free and simple to set up -- and for that reason alone, it's worth the download.

NetLimiter Monitor

There are also for-pay versions of this software available. NetLimiter Lite costs $8.95 to $16.95, depending on the number of licenses; and NetLimiter Pro costs $14.95 to $29.95, depending on the number of licenses.

What's the biggest problem on many small networks? Bandwidth hogs -- applications that suck up all or most of the available Internet and network bandwidth. Typically, it's tough or impossible to track down which applications or PCs are using all that bandwidth and harder still to do anything about it.

That's where NetLimiter comes in. It monitors bandwidth use so that you can identify the hogs. The free version of the software, though, won't let you actually set bandwidth limits. For that, you'll need to buy one of the paid versions. The paid versions let you set bandwidth limits, including total amount of data downloaded or uploaded, on a per-application or per-connection basis. You can fine-tune it quite a bit, for example, by setting different limits for uploading and downloading.

There's a lot more to this application as well, including a firewall, bandwidth monitor and other functions. This isn't the easiest program to use -- at first, it seems as if there's no way to limit the bandwidth for any application. To do it, you need to click the Grants tab at the bottom of the screen and then, for the application you want to limit, click the Grant column, enter a value for the bandwidth limit, and click the check box.

There are three different versions of this program, starting with the free version, which only monitors network use and won't let you limit bandwidth use. The Lite version will let you set limits but won't do much more, and the Pro version adds a slew of features, including a firewall, scheduler and more.

Network Notepad

Designing a network, or keeping a clear record of one you already have, can be an exceedingly frustrating task. Most drawing programs don't have adequate tools for creating network diagrams. And as for pencil and paper, the less said about them, the better.

If you're looking for a tool to help you design your network or keep visual track of one you already have, you'll want to get Network Notepad. With it, you can design your network and draw schematics that are more than flat documents -- they're live and include links so that you could, for example, Telnet into any device on your network just by clicking on a button on the diagram.

It comes with a palette of icons for routers, servers, printers, boxes, hubs, modems and other network devices. To design your network, choose graphics from the palette and drag them onto your diagram, and connect the devices using a set of drawing tools. You then define the properties of each device such as giving them names and IP addresses. You can also import a host file, and Network Notepad will automatically populate the devices with the right IP addresses.

You can also program five buttons to launch programs that act on a device when it is clicked. So you could click on a device to ping it, for example. Your diagram becomes a live, interactive drawing.

Advanced IP Scanner

This little free utility is a great way to get a quick list of all the devices connected to your network, listed by IP address, along with information about each. It does a lightning-fast scan of all IP addresses in a range that you specify, then reports whether a device is present at each address. For each device, it lists the status, the machine name, NetBIOS information, ping information and MAC address.

The program will do more than just scan your network. It also gives you a set of tools that lets you shut down PCs remotely, use the "Wake on LAN" feature for any PC whose network card supports that capability, and connect to remote PCs via RAdmin, if it's installed. You can also apply some operations, such as shutting down remote PCs, to a group of computers, not just individual ones.
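
For a feel of what the basic sweep does under the hood, here is a rough Python equivalent that shells out to the system ping command; the address range is an example, and the flags shown are for Linux (Windows uses "-n" and a different timeout switch):

    # A minimal sketch of an IP-range ping sweep. The range is an
    # example; the ping flags shown ("-c 1 -W 1") are Linux flags.
    import ipaddress
    import subprocess
    from concurrent.futures import ThreadPoolExecutor

    def probe(ip):
        result = subprocess.run(
            ["ping", "-c", "1", "-W", "1", str(ip)],
            capture_output=True,
        )
        return str(ip), result.returncode == 0

    hosts = list(ipaddress.ip_network("192.168.1.0/24").hosts())
    with ThreadPoolExecutor(max_workers=64) as pool:
        for ip, up in pool.map(probe, hosts):
            if up:
                print(ip, "is up")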

Advanced Net Tools (ANT)

Here's the Swiss Army Knife of network utilities, and you won't have to pay a penny for it. This freebie puts a whole suite of tools at your fingertips, including ones for conducting port scans, DNS lookups and pings, scanning for network shares, checking on routing tables and more.

The security modules are especially useful for quick-and-dirty network scans. There's a network port scanner that can scan all computers on your network and report on their open ports, and a share scanner that reports on all the shared drives on your network.

The information modules are also useful. With them, you can examine your routing table and add and delete entries in it. You can also find out what IP addresses are available to be assigned on your network. Other modules do advanced DNS lookups, let you view all the network adapters connected to computers on the network and add and remove their IP addresses, and more.

DreamSys Server Monitor

Want to know if your servers are up and running? Then get this utility that will monitor whether your servers are alive and, if they're not, take a variety of actions that you can choose. At a specified interval, it will check your servers to see if they're still running. You can also check the servers manually at any time.

You can also tell the program to take a variety of actions when it identifies a problem server, including sending an e-mail, rebooting the machine, starting a service, playing a sound or running a command. It can also play a sound or run a command when the server is running.

Be aware that it can be a bit confusing setting up the program to monitor a server. If you're going to monitor a server via TCP/IP, when you add a new server to monitor, make sure to click the Options tab and type in the TCP port you want to monitor. If you don't, you'll get an error message.
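
At its core, this kind of monitor is just a timed TCP check with an action hook; a minimal Python sketch (the server list and interval are placeholders) looks like this:

    # A minimal sketch of a TCP server monitor. The server list and
    # five-minute interval are placeholders for your environment.
    import socket
    import time

    def tcp_alive(host, port, timeout=5):
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    SERVERS = [("mail.example.com", 25), ("www.example.com", 80)]

    while True:
        for host, port in SERVERS:
            if not tcp_alive(host, port):
                # Hook your chosen action here: send an e-mail,
                # restart a service, run a command, play a sound...
                print("ALERT: %s:%d is not responding" % (host, port))
        time.sleep(300)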

NetBrute Scanner

A network is only as secure as its weakest link, and in many cases that's shared folders or mistakenly open ports. Trying to find all the shared folders and open ports on a network -- even a small one -- can be a difficult, time-consuming task.

This free suite of three simple security tools will put your network through a basic security check, looking for shared resources and open ports. As a bonus, you can also use it to test the security of any webservers on your network.

You can check for shared folders and resources, as well as open ports, on any individual PC on the network by using its network name or IP address. You can also scan an entire range of IP addresses, although I found that feature to be somewhat flaky; it didn't find all the PCs on my network. However, scanning individual PCs worked fine.

The program lists all shared resources and, better yet, lets you connect to those resources and browse them from the program as well. The program also scans the PCs on the network for open TCP ports, so you'll be able to find out what webservers, FTP servers, Telnet resources and the like are installed. More important, it will show you where your port vulnerabilities are.

The final utility in the suite checks the webservers on your network and sees whether it can break into them using a "dictionary attack" by trying combinations of usernames and passwords to gain access to the webmaster's account.

There are a variety of technical limitations to this program; before using it, it's a good idea to check out its details. Still, it's free, it's simple, and it's fast, and because of that, more than worth a try.

Technitium MAC Address Changer

There are plenty of ways to protect your home wireless network against intruders. One is to block anyone from connecting to your network except those who have network cards with specific MAC addresses. It's easy enough to set your router to block out intruders. But how do you know if it really works?

By checking it yourself. One of the best ways to do it is to spoof a MAC address, by giving one of your existing network cards a new address. You can do it with this software that lets you change your MAC address with a few simple clicks. Run the program, highlight the network card that you want to give a spoofed MAC address, click Random MAC Address, and then click the Change Now! button. That's all it takes. To restore to your original MAC address, highlight it and click Original MAC.

This program has other uses as well. It's a great way to show all the details about your network cards, including the manufacturer name; MAC address; and IP, Gateway and DNS information associated with each of your network cards. It includes other useful utilities, such as releasing and renewing an IP address for a card, which can help fix broken network connections.

RogueScanner

Here's an even better way to find out whether your network has any intruders on it: Run this program. Before you run it, put together a list of every PC and device on your network. Once you have that in hand, run RogueScanner. It lists every device on your network, including routers, printers, PCs and others. For each device, it lists the IP and MAC addresses. In addition, it peers deeper and tries to find other information, such as whether the device is a workstation, printer, server, router or PC, as well as the manufacturer and model number.

Compare what the program finds with the list of devices that you know are safe and secure. If you find a device on the network that's not on the list you drew up, you've got an intruder.

NetPeek

This one isn't free -- it's shareware, so it's free to try but costs $40 (£20) if you decide to keep it. It scans your network, identifies every device on it -- including computers, servers, printers and more -- and gives vital information about each. For every device, it tries to identify the IP address, the DNS name, the Ethernet address, the server software, the manufacturer of its network card, the user who's currently logged on, ping response and more, such as open ports. For each device, it also includes useful weblinks, such as a link to the network card manufacturer to get patches and firmware updates.

It's a pretty bare-bones program, and its best features aren't easy to access. For example, it's tough to know, at first, how you can scan a network range. To do it, you need to choose Scan Range from the File menu and fill in the form. Make sure you click "Log results to file" to create a log file so you can always refer back to the results. You can also use the program's Cache Manager tool to see information about all the devices on your network.

Be aware that this program takes its time going about its work, so if you have a lot of devices to scan, be prepared to wait. You'll be able to use NetPeek for free for 30 days or 500 scans, whichever comes first. After that, you'll have to pony up for the registration fee.

Fundamentals of VXLAN


Virtual Extensible LAN (VXLAN) delivers the scalability required for multi-tenancy isolation in the cloud. See why this joint effort from Cisco, VMware, Citrix, and Red Hat is fast becoming the first choice of engineers around the world. Host Robb Boyd guides you through the technical differences that make VXLAN a smarter, more scalable choice in your enterprise than the traditional VLAN.

Network Engineers' Favourite Free Network Tools

From sniffing to mapping and monitoring, these ten utilities perform surprisingly sophisticated tasks.


Wireshark

To be fair, Wireshark was mentioned in the original article as one of those tools that's so popular that including it in the original top 10 network tools would be essentially repeating old news. Some readers believed, however, that Wireshark is so good it deserved a mention.

Wireshark is a network protocol analyzer or sniffer and is the continuation of the well-known Ethereal project. A protocol analyzer "listens" to a network, records all of the packets seen on the connection and presents a detailed analysis of those captured packets. Properly placed, a good sniffer can provide reams of data invaluable for network troubleshooting and monitoring.

The problem is in the presentation of the information. A simple text file of raw packet output is difficult to analyze. A good protocol analyzer needs to be able to take that information and present it to a network administrator in a summary format, and Wireshark does just that.

Wireshark can provide deep inspection of hundreds of protocols, and more are added with each release. It can also import traces from other programs (tcpdump, Cisco IDS, Microsoft Network Monitor and Network General to name a few) so analyzing information from other sources is a breeze. It runs on Windows, Linux, Mac OS and other operating systems.

If you are going to administer a network, big or small, a protocol analyzer is a necessary tool. Wireshark fits the bill.
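
Wireshark itself is a GUI tool, but if you want to experiment with packet capture programmatically, here is a minimal sketch using the third-party scapy library (assuming it is installed; capturing also requires administrator or root privileges):

    # A minimal packet-sniffing sketch using scapy (pip install scapy).
    # Capturing requires administrator/root privileges.
    from scapy.all import sniff

    def show(packet):
        # Print a one-line summary per packet, e.g. "Ether / IP / TCP ..."
        print(packet.summary())

    # Capture 20 packets from the default interface, then stop.
    sniff(count=20, prn=show, store=False)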

The Dude

Knowing that services are available on your network is a good thing, but knowing when services go down as soon as (or, better yet, before) your users and customers do is essential. The Dude is a network management package that excels in so many facets that it has to be tried to be believed; it is hard to credit how much a freeware tool can offer.

After installation, like many network management packages, The Dude begins with a network discovery process. You input the IP address range or network to discover, plus the type of discovery (such as ping or services). This produces a basic network map from which you may customize types of monitoring. The colour of a network device's icon changes from green to orange if a service goes down, and to red if all connectivity is lost.

Monitoring includes simple pings, services based on TCP port number, SNMP probes and the ability to log into machines to acquire more specific data. The Dude comes with a preconfigured services set so as to not overwhelm monitoring, but it's trivial to add user-customized services. While it can do so, The Dude isn't designed for discovering services offered by machines on your network. For that you'll want Nmap, which is discussed later.

Without decent notification attributes though, network management packages lose usefulness. This isn't a problem for The Dude. In addition to the map, you can configure a variety of notification modes, from pop-up windows to e-mail messages. In one test, I manually shut off access to MySQL on my Linux Snort IDS box. The Dude popped up a flag and sent me a customized e-mail within a few seconds. You may wish to tweak probe intervals because a lot of false positives would be a distraction.

The Dude comes as a standard client/server package. You can run the client and server on one computer, or run the server on one computer and connect to it from another machine. It also offers a Web interface (http and/or https) for remote access. Various accounts can be created, from a read-only version for help-desk type operations to full administrative access for network managers.

The Dude has so many features and is so versatile that it easily can fit into just about any network monitoring environment. With the ability to nearly instantaneously inform a network administrator of problems, it can be a very cost-effective support tool that your end users will be glad you implemented.

Nmap/Zenmap

Nmap is one of those programs that has been around so long it's virtually considered a staple of a networker's bag of tools. But even though the functionality of Nmap has remained strong, it has grown beyond a Linux-based command-line tool. Today's Nmap provides quick information using a crisp graphical user interface (GUI) called Zenmap.

Nmap's function is simple: discover what ports are open on a target machine or range of target machines. Knowing what ports are open is helpful for many reasons. Not sure how many Web servers are running in your environment? Worried the firewall configuration you pushed out with Group Policy isn't effective? Then run Nmap, concentrating on those ports you assume are blocked by your firewall. Concerned that your users' machines may be running a Trojan known to listen on TCP port 25192? Then perform an Nmap scan (behind firewalls) for that port on your entire address space.

Zenmap runs common Nmap scan commands and displays the actual command-line command in a window for verification. You can also modify the command manually or run Nmap completely from the command prompt. Although Zenmap is a great interface for Nmap, it doesn't replace the need for knowing what it is you are actually scanning for.

Nmap is one of those "initial probe" tools that hackers love to use to discover vulnerabilities on a target network. Use it on your network before they do, or you may be in reactive mode when you could have been proactive.
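
The essence of what Nmap does for a single host is simple enough to sketch in a few lines of Python; only scan hosts you are authorised to test, and note that the target address and port list below are placeholders:

    # A minimal TCP connect-scan sketch. Only scan hosts you are
    # authorised to test; the target and ports are placeholders.
    import socket

    def scan(host, ports):
        open_ports = []
        for port in ports:
            try:
                with socket.create_connection((host, port), timeout=1):
                    open_ports.append(port)
            except OSError:
                pass
        return open_ports

    # Includes the hypothetical Trojan port from the example above.
    print(scan("192.168.1.10", [22, 25, 80, 443, 25192]))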

ZipTie

Admit it. You have many devices on your network, but no easy method of storing the configurations of your routers, switches and firewalls.

Maybe you do store configurations, but it's via a cumbersome file transfer process, cut and paste, or some other time-consuming method that is not only a drain on time but may not always work the way you would like it to.

Sure, some vendors have proprietary packages to manage the configurations of their equipment, but what about configuration management in a heterogeneous environment? How many networks out there are truly composed of a single vendor's equipment? Even in a single-vendor network, wouldn't it be wonderful to manage those configurations without paying the network vendor's licensing and maintenance fees for their packages?

ZipTie is an open-source, no-cost product designed to provide multivendor network equipment configuration management. It allows for discovery of network devices, backup and restoration of configurations, and comparison of configurations among devices or over time (to track changes). As a bonus feature, it also contains several basic network design and management tools, including a subnet calculator (who doesn't need one of those?).

There is nothing magic about ZipTie. It is, at the core, a nice front end to communication protocols (SNMP, Secure Shell (SSH), Telnet, HTTP, Trivial File Transfer Protocol and so on). But it uses those, and other protocols, to discover and consolidate information on network devices. Do you manage your network devices with HTTP running on a non-standard port? No problem; just create another protocol entry and specify the desired port.

One drawback is that ZipTie only supports a small number of network vendors in its core release. However, because it is open source, a large and growing database of user-submitted add-on modules extends its functionality significantly. These add-ons provide SNMP Management Information Base (MIB) data so that ZipTie can recognize the devices.

Installing ZipTie is somewhat more complicated than installing some of the other reviewed tools. Read the prerequisites page before downloading and installing. Links are provided for the Sun Java Development Kit and Perl for the server, and Sun Java Runtime for the client. Install these first. Be sure to change the default administrator password before using it on your production network. It's not intuitive how to do so but read the documentation; it requires that a command be run at the command line interface on the server.

ZipTie does operate in a true client/server model, so you can allow one source for your configuration management and still have multiple clients manage it via the client piece. It's definitely worth looking into. If a particular module doesn't exist for one of your network devices, consider submitting a module yourself. That is after all the backbone of open source.

NetStumbler

If you manage wireless networks and have never used NetStumbler, you need to. NetStumbler is, at the core, an interface between what your 802.11 wireless card "sees" and what you see. It presents all of the wireless networks found in different formats, including individual transmitter signal strength, or aggregate information grouped by Service Set Identifier (SSID), channel, or whether the network is secured or "open."

NetStumbler is the de facto tool for war drivers, as it easily identifies networks within range of a client. War drivers look for open wireless networks, and a corporate network that has improperly configured and/or installed wireless access points is ripe for exploitation. NetStumbler is a cheap tool for conducting surveys to find these potential network entry points.

What about strength of signal surveys? Do you have one of those "regular" help desk callers who insists that the wireless network always becomes hard to use at a certain time of day? Take a laptop with NetStumbler and let it run unattended (in a secured location, of course) on site. You'll have a real-time log of signal strength data for troubleshooting. At least you can conclusively show if there is a drop in your access point's signal -- or a drop in connectivity from interference associated with that 2:30 p.m. nuking of a burrito in the local (and leaky) microwave.

There's a reason why NetStumbler has been around for so long. It works, and it's useful. Anyone who manages a wireless network, or even those looking for a Wi-Fi hot spot, needs NetStumbler.

Nessus

Nessus has been one of the staples of a networker's bag of free tools for years. With more than 20,000 vulnerability checks (plug-ins), Nessus is a powerhouse application no network or security administrator should be without.

Like Nmap, in the early days using Nessus from the command line was rather cumbersome and the output difficult to decipher. It also ran on Linux, so a Linux server was necessary for scanning. But this isn't your father's Nessus, as it installs and runs easily on Windows with a crisp GUI.

After installation, scanning can commence immediately, or a regular download of updated scanning variables can be configured. There are two such plug-in feeds available: the Direct feed provides plug-ins as they become available and is available for a fee, while the Registered feed is free, but its plug-ins become available seven days after they appear in the Direct feed.

Updating your scans is important, and if you don't think that changes can occur in a short period of time, think again.

I went two weeks without updating my scan information and when I ran a new scan it found more than 7MB of new information I needed to download. So don't think that the free subscription database isn't kept up to date.

If your network infrastructure permits it, Nessus can run on anyone's machine. If you don't have the infrastructure to protect against scans, and if you have public access ports, beware; finding a vulnerability can be as easy as an intruder running Nessus on your net. The same advice applies here as for Nmap: run it before the hackers do.

PuTTY

It wasn't too long ago that managing network devices via Telnet was commonplace. Telnet, that venerable terminal emulation program, was the first main link between the old hard-wired terminals of the mainframe days and a distributed networked environment. Yet Telnet, in all its glory, has one major problem that makes it unsuitable for remote access today: it's unencrypted.

Enter PuTTY, a free SSH client for Windows platforms. It provides for encrypted command-line interface access to network equipment running an SSH server. For those older devices that will only respond to Telnet, there is a Telnet option as well.

PuTTY is a small program but big on options for secure access to your network equipment and servers running an SSH daemon.

As with many other terminal emulators, PuTTY allows for logging of sessions, and you can save your session settings as well. Also included with the package are a secure FTP client for encrypted file transfers and an RSA and DSA key-generation utility.

PuTTY is one of those rare small freeware packages with huge benefits. It should be the first tool on your networker's USB stick (everyone has one, right?) if you have a need for secure access to network equipment or secure file transfers, as you will use it often.
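
Where PuTTY gives you interactive SSH, the same encrypted access can also be scripted. Here is a minimal sketch using the third-party paramiko library (assuming it is installed; the host, credentials and command are placeholders):

    # A minimal scripted-SSH sketch using paramiko (pip install paramiko).
    # Host, credentials and command are placeholders.
    import paramiko

    client = paramiko.SSHClient()
    # Auto-accepting unknown host keys keeps the sketch short;
    # pin known host keys in production.
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    client.connect("switch1.example.com", username="admin", password="secret")

    stdin, stdout, stderr = client.exec_command("show version")
    print(stdout.read().decode())
    client.close()
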
And a couple of the author's favourites

Our readers had several good suggestions for tools, but to round out your tool kit, here are a few more utilities I have found to be indispensable over the years.

Active Ports

Active Ports is a small utility designed to show - in real time - what processes have what ports open on a machine. The processes are linked by program, making this a very handy tool for discovering programs using network resources that might not be obvious.

There isn't much to Active Ports. Running it produces a window showing the active (open) TCP and UDP ports on the user's system. True, you can get most of this information via the netstat command, but the difference here is easily finding the program that opened the connection.

Active Ports does what many of these tools do: take information available elsewhere and present it in a format that is easily accessible and understandable - two important considerations for a network administrator tracking problems.

Suppose you performed an analysis on your network with Wireshark because your Internet connection usage had suddenly spiked, and Wireshark showed that 95 percent of your bandwidth was used by one machine on your network listening on a specific TCP port. Or perhaps you performed a proactive Nmap scan and found that several machines on your network were listening on a specific TCP port. You would need to know what process has opened that port to be able to solve the root cause of the problems. Running Active Ports on a machine provides that valuable information instantly.
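
If you can run Python on the machine in question, the third-party psutil library exposes the same port-to-process mapping; here is a minimal sketch (assuming psutil is installed; seeing other users' processes may require elevated privileges):

    # A minimal sketch mapping listening ports to the programs that
    # opened them, using psutil (pip install psutil). Elevated
    # privileges may be needed to see other users' processes.
    import psutil

    for conn in psutil.net_connections(kind="inet"):
        if conn.status == psutil.CONN_LISTEN:
            name = psutil.Process(conn.pid).name() if conn.pid else "?"
            print("%s:%d  %s" % (conn.laddr.ip, conn.laddr.port, name))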

Multi Router Traffic Grapher

I have written about Multi Router Traffic Grapher (MRTG) before, but it deserves mention here because it's such a useful program and is very popular among network administrators. There are other graphic monitor programs out there, but nothing beats this old standard.

MRTG, like most of these tools, is a program that provides a useful representation of data gleaned from standard sources. The most common MIB variable that is polled is interface traffic statistics, but any MIB variable can be graphed. MRTG requires a Web server, and default displays give one day, one week and one year statistics.

The methodology is simple: poll network devices every five minutes via SNMP for the desired variable(s) and then present data via a graph in a Web page covering three basic periods of time.
Using this data for traffic usage, for example, it's trivial to establish a baseline for "normal" traffic on your network and determine when perhaps you need to throw more money at bandwidth.

MRTG takes SNMP data and displays it graphically so baselines can be recorded, trends analyzed and anomalies detected not just in traffic flow but any aspect of a network device that has an SNMP MIB attribute.

Because MRTG presents SNMP data, any such data can be graphed. It's not uncommon to graph ambient temperature, CPU utilization or the number of connected clients. The bottom line: if SNMP can report it, MRTG can graph it. Of course, because the data is displayed as an HTML page, it can be accessed from anywhere on the Internet, or standard controls such as .htaccess passwords can limit access to authorized personnel.
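
The polling loop at the heart of MRTG is easy to sketch. Here is a rough equivalent that shells out to the net-snmp snmpget command (assuming net-snmp is installed; the host, community string and interface index are placeholders, and counter wrap-around is ignored for brevity):

    # A minimal five-minute SNMP polling sketch using net-snmp's
    # snmpget. Host, community and interface index are placeholders;
    # IF-MIB::ifInOctets.1 is the inbound byte counter of interface 1.
    import subprocess
    import time

    def poll_counter(host="router1", community="public",
                     oid="IF-MIB::ifInOctets.1"):
        out = subprocess.run(
            ["snmpget", "-v2c", "-c", community, "-Ov", host, oid],
            capture_output=True, text=True, check=True,
        ).stdout
        return int(out.split()[-1])  # output looks like "Counter32: 12345"

    last = poll_counter()
    while True:
        time.sleep(300)
        now = poll_counter()
        print("bytes in over 5 minutes:", now - last)  # graph this value
        last = now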

SNMP Traffic Grapher

Like its big cousin, MRTG, SNMP Traffic Grapher (STG) takes SNMP data and presents it in a graph form. But it doesn't need a back-end Web server, nor does it need to be refreshed every time statistics are updated. Think of STG as a real-time MRTG application. In fact, it was developed to be a companion to MRTG.

STG can provide timely information just when you need it most. Think of when you want to make a network change and you're worried how it will affect traffic. Maybe you're loosening restrictions and afraid the egress bandwidth will spike. Or perhaps you're activating VPN on your firewall and are worried that CPU utilization will go up.

STG, like MRTG, can graph any SNMP MIB variable, but the difference is that information is displayed in real time. That's its main strength. STG is as configurable as it needs to be; enter the MIB value, the polling time and the display output. That's all.

Like MRTG, STG displays in a graphical format any SNMP MIB variable, such as inbound and outbound traffic as shown here.

STG is invaluable not so much for trending (use MRTG for that) but for checking in real time how network changes affect performance. We often have to make changes we don't want to in the middle of the business day. Knowing how that affects performance before the end user notices problems is essential.

Sunday, July 22, 2012

Dell Launches Military Data Centers-In-A-Box

Self-contained, portable, weatherproof data centers are easily transported alongside military operations.

Dell has introduced an "anytime, anywhere" data center for the military that can be transported by aircraft and operate in extreme weather conditions.

Dell's Tactical Mobile Data Center (TMDC) has been designed for maximum portability, using the military standard ISU-96 container, which measures 10 feet wide by 10 feet long, said John Fitzgerald, CTO for Dell Federal. "This one was inspired by a customer request for something that blended in, something that didn't look like a data center. It's just one more option about how you can configure and deploy [servers]," he said.

One container holds what Dell calls an IT pack, consisting of three server racks with power distribution and data connections. A second houses an AC/UPS pack, which includes a glycol closed-loop system for cooling the IT pack and battery backup. The data center can be powered by electricity, where available, or by generators. Each container is equipped with fire suppression, emergency power-off, high-density cooling, backup ventilation, remote environmental monitoring (including video), intrusion monitoring, and controls for temperature, humidity, and airflow.
The containers are certified for air transport in military fixed-wing or commercial aircraft, and can withstand shifting during takeoffs and landings or flying through rough weather. They can also be transported by forklift, helicopter, rail car, and ship. Dell says they are not only weatherproof, but dust- and sand-proof, as well.

Multiple units can be connected together, so the system can be expanded beyond a three-rack server configuration. And connection plugs for power and connectivity are on the outside, making it both easy and fast to dismantle hookups and move the units, an important consideration for military applications and emergency situations.

"The problem we're solving is how do you get the data and the computing power closer to the warrior," Fitzgerald said. "As you look at the explosion of data, information from drone planes, soldiers walking around with handhelds, [you've] got to get the processing power closer to them."
Physical proximity of computing power will support big data applications on the battlefield and in remote regions without requiring a WAN connection, Fitzgerald said. The TMDC could also be used by first responders, as well as for some commercial scenarios, he added.

While the TMDC is being introduced in this configuration, Fitzgerald said the mobile server container can be reconfigured to meet just about any requirement. If a military unit needs everything in a single container, a server rack comes out and the AC/UPS unit can be put in its place. "These are like Lego building blocks," he said. "You can piece together what you need."

Friday, July 20, 2012

Cisco Unified Fabric

What Is Cisco Unified Fabric?

Chief information officers (CIOs) and IT managers spend a lot of time considering standard data center metrics such as uptime and application response time. However, with IT budgets stagnant or even shrinking, CIOs need to consider not only traditional metrics but also such factors as data center consolidation; power; heating, ventilation, and air conditioning (HVAC); and the management of ongoing complexity. Market trends such as the growing deluge of data, the need for workload mobility, cloud computing, and the growing importance of video are all important elements in data center planning. Beyond all these is the basic goal of data center productivity. Business opportunities can be lost if IT cannot implement business initiatives quickly and efficiently. By reducing operational complexity, data center managers can shift IT staff resources from maintenance to deployment, so that new deployments that would otherwise be delayed or lost can be implemented and business impact increased.

Cisco® Unified Fabric is one of the pillars of the Cisco Unified Data Center, which unifies computing, storage, networking, and management resources to simplify IT operations, reduce costs, and increase performance. Products in the Cisco Unified Fabric portfolio include the Cisco Nexus® 7000, 5000, and 3000 Series Switches and 2000 Series Fabric Extenders; and the Cisco MDS 9500 Series Directors, 9200 Series Multilayer Switches, and 9100 Series Multilayer Fabric Switches. Cisco Unified Fabric provides a basis for the automation required to deliver the private cloud or public cloud data center. Cisco Unified Fabric delivers architectural flexibility, transparent convergence, scalability and intelligence with reduced total cost of ownership (TCO), quicker application deployment, and faster return on investment (ROI).

Specifically, a Cisco Unified Fabric is a data center network that supports both traditional LAN traffic and all types of storage traffic, including traditional non-IP-based protocols such as Fibre Channel and IBM Fibre Connection (FICON), tying everything together with a single OS (Cisco NX-OS Software), a single management GUI, and full interoperability between the Ethernet and non-Ethernet portions of the network (Figure 1).

Overall, the data center journey is from point virtualization to more strategic server and storage virtualization projects, evolving to private cloud initiatives. Cisco Unified Fabric supports this journey and provides investment protection throughout.

Convergence

Cisco Unified Fabric creates high-performance, low-latency, and highly available networks. These networks serve diverse data center needs, including the lossless requirements for block-level storage traffic. A Cisco Unified Fabric network carries multiprotocol traffic to connect storage (Fibre Channel, FCoE, Small Computer System Interface over IP [iSCSI], and network-attached storage [NAS]) as well as general data traffic. Fibre Channel traffic can run on its own fabric or as part of a converged fabric with FCoE. Offering the best of both LAN and SAN environments, Cisco Unified Fabric enables storage network users to take advantage of the economy of scale, robust vendor community, and aggressive performance roadmap of Ethernet while retaining the high-performance, lossless characteristics of a Fibre Channel network. Convergence reduces TCO through both reduced capital expenditures (CapEx: host interfaces, cables, and upstream switch ports) and reduced operating expenses (OpEx: management, power, cooling, rack space, and floor space), and it is designed for incremental adoption without major system upgrades and without disruption of existing LAN and SAN management and operation procedures.
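
To put rough numbers on the CapEx claim, here is a back-of-the-envelope sketch in Python; every per-server count below is an assumption for illustration, not a Cisco figure.

    SERVERS = 100

    # Traditional build: redundant Ethernet NICs plus redundant Fibre Channel
    # HBAs in every server (assumed counts).
    NICS, HBAS = 2, 2
    trad_host_ports = SERVERS * (NICS + HBAS)
    trad_cables = trad_host_ports            # one cable per host port
    trad_switch_ports = trad_host_ports      # matching upstream LAN/SAN ports

    # Converged build: one pair of converged network adapters (CNAs) per
    # server carries both LAN and FCoE storage traffic.
    CNAS = 2
    conv_host_ports = SERVERS * CNAS
    conv_cables = conv_host_ports
    conv_switch_ports = conv_host_ports

    print("host interfaces: %d -> %d" % (trad_host_ports, conv_host_ports))
    print("cables:          %d -> %d" % (trad_cables, conv_cables))
    print("switch ports:    %d -> %d" % (trad_switch_ports, conv_switch_ports))
    # With these assumptions, convergence halves every count: 400 -> 200.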


Scalability

Cisco uniquely offers multidimensional scalability for the data center network: switch size and performance, system scale, and geographic span. Cisco Unified Fabric scalability enables businesses to scale simultaneously in multiple areas to support changing traffic patterns in the data center, including the larger, more complex workloads brought about by virtualization, the proliferation of virtual machines, and the challenges of cloud computing. Cisco Unified Fabric transparently extends the network to encompass all locations, whether in a single extended environment or spanning data centers, allowing resources to be efficiently accessed and effectively used regardless of size or scope.

Intelligence

Cisco Unified Fabric continues and extends Cisco’s strategy of embedding policy-based, intelligent services directly in the network fabric to create a service platform. On the LAN side, these services include Layer 4 through 7 acceleration and load balancing throughout the data center in a consistent and uniform manner. For SAN traffic, services include acceleration of I/O over metropolitan area network (MAN) and WAN links, data migration across storage arrays, and encryption of data being written to tape and disk. Benefits from this approach include:

• Ubiquity: All services, whether physical applications, virtual workloads, network services, or other infrastructure elements, are available to all elements of the data center.
• Scalability: Service delivery capability automatically scales with changes in the size of the network.
• Agility: Applications can be deployed more quickly with policy-based compliance instead of physical infrastructure changes.

The intelligence in Cisco Unified Fabric is powered by Cisco NX-OS, the common operating system across the Cisco Nexus and Cisco MDS 9000 Families. Cisco NX-OS is a modern, modular, Linux-based operating system that provides consistent management and predictable response for the data center. Cisco NX-OS is managed by Cisco Data Center Network Manager (DCNM), which provides single-pane-of-glass management across the Cisco Nexus and Cisco MDS 9000 Families. Cisco DCNM simplifies operations and can manage, monitor, and automate the data center network.

Why Is Cisco Unified Fabric Needed?

Cisco Unified Fabric supports the data center evolution to virtualization and cloud architectures; enables improved deployment, operation, and end-user experience of virtualized resources; and meets growing bandwidth and computing requirements. Cross–data center processes, such as rapid backup and recovery and workload mobility, require this type of data center transformation.
Cisco Unified Fabric addresses these trends:
• Data center consolidation
• Limitations on server virtualization scale caused by I/O bottlenecks and the complexity of integration with network infrastructure
• Increasingly bandwidth-intensive multimedia applications
• Rapid storage growth
• Rising energy costs
Cisco Unified Fabric enables solutions such as private cloud computing, public cloud computing, workload consolidation, desktop virtualization and virtual desktop infrastructure, Web 2.0, backup and recovery, business continuity, and pre-integrated data center pods.

Cisco Unified Fabric consolidates and standardizes the way that servers and storage resources are connected, application delivery and core data center services are provisioned, servers and data center resources are interconnected to scale, and server and network virtualization is orchestrated.

Products Included in Cisco Unified Fabric

Cisco Unified Fabric includes:
• Cisco Nexus Family of data center switches
• Cisco MDS 9000 Family of storage network switches
• Cisco DCNM
• Cisco NX-OS
• Cisco Application Control Engine (ACE)
• Cisco Wide Area Application Services (WAAS)

Main Benefits of Cisco Unified Fabric

Cisco Unified Fabric delivers reliable, scalable, agile, and cost-effective network services to servers, storage, and applications while improving the user experience. It facilitates better support of virtualization and cloud services with improved staff utilization, more efficient resource utilization (more load on servers and storage), low-latency options, lower TCO, and better resiliency and uptime. Your data center can do more with less.
Cisco is the only vendor with server and switch platforms natively designed for integrated virtualized services. Cisco is a leader in LAN and SAN convergence standards bodies and is the first to bring intelligent virtualization to the network, enabling service and resource access anytime and anywhere. Cisco is the only vendor with a common operating system across data center LAN and SAN product lines. Cisco Unified Fabric is the networking pillar of Cisco Unified Data Center, bringing unified storage and data networking and supporting application performance, application delivery, automation, and services delivery. This approach enables overall solutions such as business continuity, virtualization, and low-latency, high-performance computing while providing energy-efficient, resilient, and secure data centers.

Thursday, July 19, 2012

Penalties and rewards in Telco Managed Services contracts

A lot of our time is spent helping Service Providers and Operators create a framework of KPIs for managing what matters most to both businesses.


From a commercial point of view this is crucial because the Service Provider’s performance against agreed KPIs will determine service-level penalties and rewards paid by the Operator. We have seen interesting differences in Managed Services contracts of varying sizes around the world in terms of the Operator’s approach to penalties and rewards.

Here’s a typical example:

  • KPI success >= 99.97% – reward of 5%
  • KPI success >= 99.93% but < 99.97% – no reward, no penalty
  • KPI success >= 99.88% but < 99.93% – penalty of 5%
  • KPI success < 99.88% – penalty of 10%

In this case, hitting 99.97% or above earns the Service Provider a 5% reward, whereas falling below 99.88% incurs a 10% penalty.
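
Expressed as code, the banding is just a cascade of thresholds. A minimal sketch in Python, assuming the example figures above:

    def service_credit(kpi_success):
        # Map a monthly KPI success percentage to a reward (+) or
        # penalty (-) in percent, using the example bands above.
        if kpi_success >= 99.97:
            return +5.0    # reward
        if kpi_success >= 99.93:
            return 0.0     # dead band: no reward, no penalty
        if kpi_success >= 99.88:
            return -5.0    # penalty
        return -10.0       # heavier penalty for serious underperformance

    print(service_credit(99.95))   # 0.0, falls in the dead band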

We have seen some contracts where there is no provision for reward but only small penalties; others where the balance is relatively equal; and others still that have enormous potential upsides and significant penalties for underperformance.


It is difficult to draw any universal conclusions or patterns in these approaches because it depends on so many different factors, including the maturity of the outsourcing relationship, the difference in size and importance of the Service Provider and the Operator, the level of development of the country’s infrastructure, and the experience of the individuals involved.

However, we are starting to see the discussion on rewards and penalties being framed as part of a wider debate about moving from technical network KPIs to end-to-end metrics that genuinely reflect the Operator’s business objectives. It is far more satisfactory for both parties to link the Service Provider’s rewards to KPIs that represent tangible business benefits the Operator can derive from the Managed Service.

So, when setting up this sort of scheme, make sure you incentivise and penalise the behaviour you actually want to influence, and be fair, in order to avoid running into relationship problems later in the contract period.


Wednesday, July 18, 2012

Cloud Computing 101

What Is Cloud Computing?

You may have heard the term cloud computing or 'the Cloud,' but could you describe what it is? There are so many definitions flying around that you wouldn't be alone if you struggled to define it. Cloud computing is simply a set of pooled computing resources and services delivered over the web. When you diagram the relationships between all the elements it resembles a cloud.

Cloud computing (not to be confused with grid computing, utility computing, or autonomic computing) involves the interaction of several virtualised resources. Cloud servers connect and share information based on the level of website traffic across the entire network. Cloud computing is often provided "as a service" over the Internet, typically in the form of infrastructure as a service (IaaS), platform as a service (PaaS), or software as a service (SaaS).

Cloud computing customers don't have to raise the capital to purchase, manage, maintain, and scale the physical infrastructure required to handle drastic traffic fluctuations. Instead of having to invest time and money to keep their sites afloat, cloud computing customers simply pay for the resources they use, as they use them. This particular characteristic of cloud computing (its elasticity) means that customers no longer need to predict traffic, but can promote their sites aggressively and spontaneously. Engineering for peak traffic becomes a thing of the past.

Why Cloud Computing

Cloud computing is a flexible, cost-effective and proven delivery platform for providing business or consumer IT services over the Internet. Cloud resources can be rapidly deployed and easily scaled, with all processes, applications and services provisioned "on demand," regardless of user location or device.

As a result, cloud computing gives organisations the opportunity to increase their service delivery efficiencies, streamline IT management and better align IT services with dynamic business requirements. In many ways, cloud computing offers the "best of both worlds," providing solid support for core business functions along with the capacity to develop new and innovative services.

Both public and private cloud models are now in use. Available to anyone with Internet access, public models include Software as a Service (SaaS) clouds like WebEx and salesforce.com, Platform as a Service (PaaS) clouds such as Force.com and Google App Engine, and Security as a Service (Security-aaS) clouds like the earthwave Vulnerability Management Security-aaS or SecurID Security-aaS.

Private clouds like the earthwave cleancloud are owned and used by a single organisation. They offer many of the same benefits as public clouds, and they give the owner organisation greater flexibility and control. Furthermore, private clouds can provide lower latency than public clouds during peak traffic periods. Many organisations embrace both public and private cloud computing by integrating the two models into hybrid clouds. These hybrids are designed to meet specific business and technology requirements, helping to optimise security and privacy with a minimum investment in fixed IT costs.

Cloud Computing Models

Cloud computing models vary: Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS). You manage your cloud computing service level via the surrounding management layer.
  • Infrastructure as a Service (IaaS). The IaaS layer offers storage and compute resources that developers and IT organizations can use to deliver business solutions.
  • Platform as a Service (PaaS). The PaaS layer offers black-box services with which developers can build applications on top of the compute infrastructure. These might include developer tools offered as a service, data access and database services, or billing services.
  • Software as a Service (SaaS). In the SaaS layer, the service provider hosts the software so you don't need to install it, manage it, or buy hardware for it. All you have to do is connect and use it. Examples of SaaS include customer relationship management as a service.

Inside the Cloud

Expanding the Cloud

Typically, cloud computing environments are able to add or remove resources like CPU cycles, memory, and network storage as needed.
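
A toy autoscaling loop makes the idea concrete. Everything here (the mycloud module, the CloudClient class and its methods, and the CPU thresholds) is hypothetical, standing in for whatever API a given provider exposes:

    from mycloud import CloudClient   # hypothetical provider SDK

    cloud = CloudClient(api_key="...")
    MIN_NODES, MAX_NODES = 2, 20      # assumed capacity bounds

    def rebalance():
        nodes = cloud.list_servers(tag="web")
        # Scale on average CPU across the pool; thresholds are assumptions.
        avg_cpu = sum(n.cpu_percent for n in nodes) / len(nodes)
        if avg_cpu > 75 and len(nodes) < MAX_NODES:
            cloud.create_server(image="web-app", tag="web")   # add capacity
        elif avg_cpu < 25 and len(nodes) > MIN_NODES:
            cloud.delete_server(nodes[-1].id)                 # shed capacity

Run on a schedule, a loop like this is all the "elasticity" a customer ever sees; the provider absorbs the hardware churn underneath.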

Why Cloud Hosting Is Relevant to Businesses

The bottom line is that cloud computing architectures can scale quickly to suit user demand and traffic spikes. Developers don't have to constantly re-engineer their environment and cost structures to handle peak loads. Businesses don't have to wrestle with the underlying infrastructure and core technologies or the day-to-day operational, performance and scalability issues of their platform. Instead, with cloud computing, they can truly focus their resources on developing their applications and sites.

The Long History of the Cloud

The terms "cloud" and "cloud computing" have only been around for a couple of years, but the underlying concepts of these architectures aren't new at all. Parallel processing and clustering of multiple computers to form a larger, more powerful single or virtual instance are proven solutions to performance and scalability challenges.

Charging for computing on a pay-per-use or subscription basis (common with grid and time-sharing environments) has been employed for decades. Hosted SaaS and cloud applications, such as email and collaboration tools, have also existed for years.

What's new with the evolution of the Cloud is fully abstracting these technologies behind a common user interface, which frees developers and other professionals from the operational aspects of their applications and sites.

Why Not Build Your Own Cloud

Cloud computing environments are designed to operate reliably, scale in a controlled manner, and be cost-effective to run. While all of this can be developed by competent in-house engineering teams (given enough time, money and specific expertise), the full value of cloud computing comes into play with cloud providers. They provide and guarantee all the advantages of the Cloud along with full developer service and support, for a fraction of the cost of creating, maintaining, supporting and operating this complex environment in-house.

The Technologies Behind a Cloud

Numerous underlying technologies can be incorporated into the basic architecture of the Cloud. The Internet, of course, is a common thread. And in most cases, clouds are built upon virtualisation technologies, like VMware and Xen, or scalable architectures based on semi-dedicated managed hosting models or grids.

Usually, public clouds (as opposed to in-house environments) utilize control panels and configuration management applications, much like Software-as-a-Service (SaaS). These facilitate application development activities and make raw technology readily consumable. Cloud computing environments typically provide access to LAMP and Windows stacks, web hosting and database technologies.

Migrating to a Cloud Environment

In most cases, dedicated applications can be migrated or adapted to operate in cloud computing environments with minimal effort. And the benefits in stability, reliability, and scalability can be realized immediately.

Limitless Scaling on the Cloud

In theory, cloud computing architectures are limitless. In practice, however, the size of a particular cloud footprint, the size of a cloud's data center, and the reliability and scalability of the underlying technology (network access, bandwidth, peering, etc.) all affect scalability. And they must be taken into account to properly assess the capacity of any specific cloud.

For practical purposes, most cloud providers offer enough scalability to successfully accommodate even the most massive spikes in usage or traffic.

Running Applications and Technologies on the Cloud

Since the Cloud is an architecture, theoretically almost anything can run on it. In reality, some cloud technologies, by design, are more suited to parallel or shared processing applications. Others are more suited to intensive single-threaded applications. Properly constructed clouds resolve this issue by leveraging the performance characteristics of each technology and implementing a mix of industry standard interfaces and custom integrations or applications to make the dissimilar technologies operate and scale smoothly.

Compute Platform Built to Suit Your Needs

You don't have to choose between dedicated hardware and cloud-based servers: with the earthwave cleancloud you can have the best of both worlds. With the hybrid hosting option, you can build your own custom configuration of dedicated servers and cloud-based servers, all working together seamlessly.

Myths about The Cloud

Myth: Cloud is just a fad.
Truth: Cloud as a term is new, but the concepts and requisite technologies have been evolving for years (many years in some cases). Cloud computing continues to emerge as a game-changing technology, with high adoption rates and investment. Gartner Research predicts that by 2012, 80% of Fortune 1000 enterprises will be paying for some form of cloud computing services. Cloud computing is here to stay.


Myth: The cloud is not secure.
Truth: Public clouds are fundamentally multi-tenant to justify the scale and economics of the cloud. As such, security is a common concern. Whereas the traditional security perimeter is a network firewall, the cloud security perimeter now becomes the hypervisor and/or underlying cloud application. So far, security in the cloud has been good, but this is very cloud-dependent and requires a solid design and operational rigor that prioritises security. Also, handing your data and systems to someone else requires proper internal controls to ensure that not just anyone has access. Be sure to ask potential cloud computing providers about security from technical, operational, and control perspectives, as well as what experience they have being stewards of customer systems and data. If the public cloud is fundamentally not secure enough, consider an on-premise cloud, virtual private cloud, or some sort of hybrid cloud solution (see Truth #10) that allows you to maintain the level of security you require.


Myth: The cloud is not reliable.
Truth: No system has 100% uptime, and neither does the Cloud. Given the scale, however, cloud computing services are typically designed to provide high redundancy and availability. While this same level of redundancy/availability is possible to achieve in-house or with dedicated hosting, it's generally cost prohibitive except for the most critical systems. The cloud enables a higher level of reliability at a fraction of the cost.


Myth: Performance is a problem in the cloud.
Truth: It depends. There are different types of clouds and use cases. In many instances, performance is higher in the cloud because there is more available capacity and scalability. In other cases (most notably running a database server), performance may be lower than on a traditional server. It's best to benchmark your application in the cloud to determine any performance impact (good or bad). If performance is an issue, consider a hybrid solution (see Truth #10) that combines the best of both worlds: the scalability and cost efficiencies of cloud computing and the performance of dedicated servers.
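
A benchmark needn't be elaborate. Here is a minimal sketch in Python, with a placeholder loop standing in for your real workload:

    import statistics
    import time

    def benchmark(workload, runs=10):
        # Time the workload several times and report the median, a crude but
        # serviceable way to compare a cloud server with dedicated hardware.
        samples = []
        for _ in range(runs):
            start = time.perf_counter()
            workload()
            samples.append(time.perf_counter() - start)
        return statistics.median(samples)

    # Placeholder CPU-bound workload; swap in your application's hot path.
    print("median: %.4fs" % benchmark(lambda: sum(i * i for i in range(10 ** 6))))

Run the same script on the cloud instance and on your current server, and compare the medians.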


Myth: Customers lose control in the cloud and get locked-in.
Truth: There are different types of clouds that offer different levels of customisation and flexibility. Clouds that implement standard technology stacks and participate in cloud standardisation efforts are your best bet for application mobility. Open clouds are gaining traction, and the future will involve federation between public clouds as well as between public and on-premise/hosted private clouds. Ask your cloud computing provider about their participation in and vision for cloud standardisation and federation.


Myth: The cloud is too complex.
Truth: Again, there are different types of clouds that have differing levels of complexity. Many clouds simplify management and involve little to no change in your application to move it to the cloud. Other clouds offer more power and control, but involve a change in application architecture. Simplicity and control are often at odds and the cloud is no different. Depending on your needs, the cloud can offer you a good balance.


Myth: Pay as you go cloud pricing will cost me more.
Truth: Cloud computing has huge economies of scale that get passed on to the consumer. In addition, cloud computing transfers what is typically CapEx (large upfront expenditures) into OpEx (ongoing operational costs) and enables pricing to be commensurate with usage. If pricing variability and budgeting are a concern, consider a pricing plan that offers a predictable price. Also, don't just look at raw cost: best-value solutions are generally superior to lowest-cost ones. Consider all the factors, including support, customer service, reputation and reliability, when measuring value.
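
The CapEx-versus-OpEx trade-off is easy to sanity-check with rough numbers; every figure below is an assumption you should replace with your own quotes:

    server_capex = 5000.0        # assumed purchase price of a dedicated server
    server_opex_month = 150.0    # assumed power/cooling/support per month
    cloud_rate_hour = 0.20       # assumed per-hour cloud instance price

    def three_year_cost_dedicated():
        return server_capex + 36 * server_opex_month

    def three_year_cost_cloud(hours_per_month):
        return 36 * hours_per_month * cloud_rate_hour

    # A lightly used instance (200 h/month) vs. an always-on one (730 h/month):
    print(three_year_cost_dedicated())    # 10400.0
    print(three_year_cost_cloud(200))     # 1440.0
    print(three_year_cost_cloud(730))     # 5256.0
    # With these assumptions pay-as-you-go wins; different numbers can flip it.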


Myth: The cloud is hard to integrate with existing systems.
Truth: Many applications are stand-alone and can be moved independent of other existing systems. For integrated applications that are service oriented, integration is relatively simple. For non-service oriented applications that require tight integration, hybrid solutions (see Truth #10) are designed to simplify integration with the cloud. As with all integration considerations, latency is likely a concern, so transparency about where your cloud application lives is important.


Myth: The cloud is not for enterprises.
Truth: The benefits of cloud computing apply to enterprises just as they do to SMBs, startups and consumers. Since enterprises are typically more risk averse, new technologies are generally adopted by small business first. That said, overall cloud adoption rates are increasing substantially and we are seeing enterprise adoption today. Expect a significant inflection point in the next several years, after which cloud will be a standard enterprise fixture (see Truth #1).


Myth: I should move everything to the cloud.
Truth: Not all applications are suitable for cloud computing. While the Cloud is here to stay, it will not replace traditional hosting or on-premise deployments, but rather complement them. There will always be situations where security requirements, flexibility, performance or control will preclude the cloud. In those cases, a hybrid solution involving both cloud and either traditionally hosted or on-premise servers may make sense. Beware of vendors who promote pure cloud for ALL applications. Instead, look for a cloud provider who can offer you hosting options that best fit your application needs. Also, if you are a Managed Hosting customer, recognise that today, the cloud is "unmanaged," meaning the onus for backups, patching, monitoring, etc. is back on you should you move to the Cloud. If management services are important to you (and they probably are if you are already a Managed Hosting customer), consider the ramifications of a move to the cloud and look for a cloud provider that will provide the level of support and service necessary for you to be successful.

Tuesday, July 17, 2012

Cisco CCIE Security Gets Freshened Up

New (v4.0) CCIE Security exams will go live starting November 19th. Both the written and lab exams will be updated, and objectives for the v4.0 written and lab exams are available on Cisco's website. If you're planning to take the v3.0 lab exam and haven't already registered, you may be out of luck: per discussion on Cisco's CCIE Security forum, there appears to be a shortage of lab exam seats as people rush to take the v3.0 version before the switchover.

New CDW Cloud Collaboration Solution Helps Organizations Move Communications to the Cloud

CDW Offers Cisco Hosted Solution to Ease IT Staff Workload While Enhancing Collaboration With Colleagues and Customers

CDW LLC (CDW), a leading provider of technology solutions to business, government, education and healthcare, today announced its new CDW Cloud Collaboration solution. CDW, a Cisco Gold Certified Partner, will host the new offering in its own facilities and will combine Cisco's collaboration platforms with CDW professional services to provide customers the security and personalization of a traditional, on-premise communications solution with the flexibility, scale and economics of a cloud solution.

Most organizations understand the value of collaboration technologies such as voice and video conferencing, voicemail or customer collaboration platforms, but many lack the deep technical expertise and staff resources to successfully deploy and manage them. With CDW's Cloud Collaboration offering, users can focus on critical business responsibilities and customer needs with peace of mind, knowing that CDW is behind them to deliver a strategic collaboration solution from design to implementation to management.

"The way organizations work is changing, with employees no longer confined to working in the same physical location and customers increasingly involved in product and service development. As a result, collaboration technology needs are evolving," said Christine Holloway, vice president of converged infrastructure solutions, CDW. "Today, organizations need new technologies that increase efficiency, accommodate a dispersed workforce and integrate customer communications, while freeing up IT staff to work on other projects. Built upon industry-leading technology from Cisco, CDW Cloud Collaboration includes all the features necessary to provide organizations with top collaboration services."

CDW's Cloud Collaboration solution, powered by the Cisco Hosted Collaboration Solution (HCS), provides organizations with exceptional flexibility in choosing the way that collaboration applications are deployed. The capability to choose a hosted deployment option can also help organizations deploy collaboration technologies faster, while potentially lowering capital expenditures and operating expenses.

"As enterprises adopt collaboration technologies to quickly connect people with a high level of security to the resources and information they need to get work done, many businesses are requiring cloud solutions and looking for options from partners," said Richard McLeod, senior director, Worldwide Partner Collaboration Sales at Cisco. "Based on Cisco HCS, CDW Cloud Collaboration provides customers with the flexibility to have collaboration applications delivered to them as services, with ongoing support and management provided by CDW."

CDW Cloud Collaboration is supported by a dedicated team of CDW solution architects and engineers, and each client will have its own highly secure, virtualized private intranet hosted in CDW's facilities. Partnering with Cisco helps ensure that CDW Cloud Collaboration meets a proven quality standard, with CDW's expert team ready to support implementation and management. CDW holds Cisco Master Certifications in unified communications, managed services and security, and the CDW team includes more than 600 Cisco certified engineers, including more than 50 with the CCIE certification - the highest technical certification offered by Cisco. Further demonstrating CDW's deep knowledge, Cisco recently named CDW its Global Partner of the Year - Americas and awarded the company its U.S. Nationals Architectural Excellence Award for Collaboration.

For more information about CDW's Cloud Collaboration offerings, please visit: http://www.cdw.com/cloudcollaboration

About CDW

CDW is a leading provider of technology solutions for business, government, education and healthcare. Ranked No. 270 on the FORTUNE 500 and No. 32 on Forbes' list of America's Largest Private Companies, CDW was founded in 1984 and employs more than 6,800 coworkers. For the trailing twelve months ended March 31, 2012, the company generated net sales of $9.8 billion. For more information, visit www.CDW.com.