Network Enhancers - "Delivering Beyond Boundaries"

Tuesday, January 17, 2012


A tool to measure maximum TCP bandwidth, allowing the tuning of various parameters and UDP characteristics

What is Iperf?

While tools to measure network performance, such as ttcp, exist, most are very old and have confusing options. Iperf was developed as a modern alternative for measuring TCP and UDP bandwidth performance.

Iperf allows the tuning of various parameters and UDP characteristics, and reports bandwidth, datagram loss, and delay jitter.

Here are some key features of "Iperf":

· Measure bandwidth
· Report MSS/MTU size and observed read sizes
· Support for TCP window size via socket buffers
· Multi-threaded if pthreads or Win32 threads are available. Client and server can have multiple simultaneous connections
· Client can create UDP streams of specified bandwidth
· Measure packet loss
· Measure delay jitter
· Multicast capable
· Where appropriate, options can be specified with K (kilo-) and M (mega-) suffixes, e.g. 128K instead of 131072 bytes
· Can run for specified time, rather than a set amount of data to transfer
· Picks the best units for the size of data being reported
· Server handles multiple connections, rather than quitting after a single test
· Print periodic, intermediate bandwidth, jitter, and loss reports at specified intervals
· Run the server as a daemon (Check out Nettest for running it as a secure daemon)
· Run the server as a Windows NT Service
· Use representative streams to test out how link layer compression affects your achievable bandwidth
· A library of useful functions and C++ classes
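
Iperf itself is written in C, but the core of any bandwidth test is simple: push bytes over a TCP connection as fast as possible and divide by the elapsed time. Here is a minimal Python sketch of that measurement — an illustration of the idea, not iperf's actual implementation:

```python
import socket
import threading
import time

def run_receiver(sock, chunk=65536):
    """Receive until the peer closes the connection; return total bytes."""
    total = 0
    while True:
        data = sock.recv(chunk)
        if not data:
            break
        total += len(data)
    return total

def measure_throughput(total_bytes=8 * 1024 * 1024, chunk=65536):
    """Send total_bytes over a local TCP connection; return (received, seconds, MB/s)."""
    server, client = socket.socketpair()
    result = {}

    def receiver():
        result["received"] = run_receiver(server, chunk)
        server.close()

    t = threading.Thread(target=receiver)
    t.start()

    payload = b"\x00" * chunk
    start = time.perf_counter()
    sent = 0
    while sent < total_bytes:
        client.sendall(payload)
        sent += len(payload)
    client.close()
    t.join()
    elapsed = time.perf_counter() - start

    return result["received"], elapsed, result["received"] / elapsed / 1e6

if __name__ == "__main__":
    received, elapsed, rate = measure_throughput()
    print(f"{received} bytes in {elapsed:.3f} s ({rate:.1f} MB/s)")
```

Over a local socket pair this mostly measures memory bandwidth; iperf does the same thing across a real network path, which is where window sizes and datagram loss start to matter.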

Test network throughput with TTCP

TTCP is a hidden Cisco IOS command designed to measure network throughput. To use TTCP you need to configure a sender and a receiver. Keep in mind that running it will increase router load.

R2#ttcp transmit
ttcp-t: buflen=8192, nbuf=2048, align=16384/0, port=5001  tcp  ->
ttcp-t: connect (mss 1460, sndwnd 4128, rcvwnd 4128)
ttcp-t: 16777216 bytes in 75696 ms (75.696 real seconds) (~215 kB/s) +++
ttcp-t: 2048 I/O calls
ttcp-t: 0 sleeps (0 ms total) (0 ms average)
R1#ttcp receive
ttcp-r: buflen=8192, align=16384/0, port=5001
rcvwndsize=0, delayedack=yes  tcp
ttcp-r: accept from (mss 1460, sndwnd 4128, rcvwnd 2668)
ttcp-r: 16777216 bytes in 75696 ms (75.696 real seconds) (~215 kB/s) +++
ttcp-r: 8330 I/O calls
ttcp-r: 0 sleeps (0 ms total) (0 ms average)
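
The numbers in this output are easy to sanity-check: nbuf × buflen gives the total bytes transferred, and dividing by the elapsed time reproduces the reported throughput. A quick check with the values above plugged in (using 1 kB = 1024 bytes):

```python
def ttcp_rate(nbuf, buflen, millisecs):
    """Reproduce ttcp's throughput figure from its reported counters."""
    total_bytes = nbuf * buflen           # 2048 buffers * 8192 bytes = 16777216
    seconds = millisecs / 1000.0          # 75696 ms = 75.696 s
    return total_bytes, total_bytes / seconds / 1024   # kB/s

total_bytes, rate = ttcp_rate(2048, 8192, 75696)
print(total_bytes, round(rate, 1))
```

This works out to roughly 216 kB/s, within a couple of percent of the ~215 kB/s ttcp printed; the small difference comes from ttcp's internal timing and rounding.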

Tuesday, January 10, 2012

Cisco To Offer Subscription-Based Telepresence To SMBs Through Channel Partners

Cisco (NSDQ:CSCO) will offer subscription-based telepresence services to SMBs through the channel -- one of several video-related announcements the vendor plans to make Wednesday along with several new products in its TelePresence portfolio.

Among the new telepresence wares launching Wednesday is Cisco's MX300 -- a larger, 55-inch version of the MX200 endpoint Cisco debuted in July, offering 1080p video at 30 frames per second. Also new is a second-generation version of its VX Clinical Assistant, a telemedicine-centric version of Cisco TelePresence offered to health care customers.

The big news for partners, however, is Cisco TelePresence Callway, which combines Cisco TelePresence endpoints with a subscription-based telepresence service hosted and managed by Cisco and sold through the channel. Callway service will start at $99 a month for SMB customers, a standard package that includes unlimited telepresence calling and data sharing. A premium version of the service, at $149 a month, adds higher resolution video.

Callway will work with Cisco's MX video systems, C40 codec, C20 quick set codec, and EX 60, EX90 and E20 video units, and Cisco customers can either buy or lease the endpoints. Cisco will also offer a service called MeetMe -- essentially a multiparty bridging capability that can support up to 12 callers -- for $349 a month.

U.S.-based Cisco channel partners will get first crack at selling Callway. Solution providers need to be Cisco Telepresence-certified in order to sell the product, according to Gina Clark, vice president and general manager in Cisco's TelePresence Cloud Business Unit.

In addition to the new endpoints and services, Cisco is also launching Jabber Video for Telepresence, a software client that customers can use to connect into Cisco telepresence meetings even if they don't have Cisco telepresence or other comparable endpoints. In essence, that participant can click on a link sent to them and then be able to do full instant messaging, voice and video capabilities using the Jabber client, which will work with Cisco's Video Communication Server (VCS) Expressway and the Callway services. It'll be released in beta in the first quarter of 2012, according to Cisco.

During a conference call for media and analysts earlier this week, Marthin De Beer, Cisco senior vice president and general manager, Emerging Business Group, said video overall is a $5 billion business for Cisco, but is expected to hit $18 billion over the next three years as interest in video increases. Cisco is investing approximately $1 billion in R&D related to video, said De Beer, who now manages all of its video businesses: consumer, enterprise and service provider.

Cisco's latest moves come as various competitors aggressively target Cisco's video dominance, from Polycom looking to bulk up its channel relationships, services opportunities and software strategy to smaller competitors like LifeSize Communications, Vidyo and a host of startups jockeying for video channel business. Other major vendors with substantial unified communications practices, such as Avaya, Microsoft (NSDQ:MSFT) and Siemens Enterprise Communications, are also pushing video and its role in the broader UC and collaboration market.

Cisco's argument is that it's the only vendor to offer a full collaboration portfolio that touches everything from call control to video infrastructure and management capabilities. All of Cisco's collaboration endpoints, from Cius, its Android-based tablet, to its WebEx collaboration platform and its IP phones, are now video-enabled -- a point Cisco executives repeatedly emphasized during the media conference this week.

Cisco also now offers interoperability between its video products and those of other vendors that use standards-based protocols like H.323 and SIP, and as of November, all of its video products will have touch user interfaces.

Many of Cisco's recent product announcements address flexibility and easier management of video infrastructure; during its most recent telepresence product blitz this past July, for example, Cisco released the Cisco Telepresence Conductor, which can process multi-party conferencing requests and automatically assign meetings to appropriate conference units -- meaning that telepresence meetings can be conducted on-the-fly, without so much pre-scheduling hassle.

O.J. Winge, senior vice president for Cisco (NSDQ:CSCO)'s TelePresence Technology Group, said Cisco Callway adds to Cisco's vision of offering telepresence to companies of all sizes. He and Clark both made reference to Callway being a third option for acquiring Cisco telepresence, on top of the TelePresence systems and endpoints it already sells to enterprises and the hosted TelePresence it offers through global service providers.

Winge said that Cisco doesn't think Cisco Callway will conflict with existing platforms like WebEx, which use video and are popular among small and midsized business users, but are more about collaboration than immersive video conferencing, he argued.

"What's important is the human relationship building -- the ability to have that in-person type of experience, and that's the clearly differentiating factor," said Winge in an interview with CRN. "I am looking at these tools as very complementary in terms of the ability to use them together."
There's enough room for both, Winge said; WebEx is a data-sharing collaboration platform that can involve video, whereas Callway and the telepresence endpoints offer the immersive conferencing ability where users can see life-size images of their colleagues.

"You'd use the WebEx capability to post questions and have a dialogue, but you wouldn't want that popping up on your telepresence screen and ruin the experience you're having there," he explained. "I believe these technologies will eventually blend into each other and create various use cases."

While the resale of video endpoints is less and less lucrative for VARs as video becomes a more ubiquitously deployed technology, Winge said Cisco believes there will continue to be growth in integration and services opportunities for solution providers behind that ubiquity, especially as customers look to bring video into different lines of business. Revenue in Cisco's TelePresence business has been growing 20 to 25 percent over the past few quarters, Winge said.

"Everyone is always afraid of cannibalization and partners are always saying, what is happening to my revenue stream going forward," Winge said. "I'd tilt the discussion to say that we're only just scratching the surface with the use of telepresence. How many people do you know that are exposed to telepresence on a daily basis? There is an enormous amount of growth in this space."

Winge also acknowledged the increasing role of service providers in video sales to small business and midmarket customers -- and the fear from some solution providers that with the coverage breadth the service providers have, those solution providers will be cut out of the market opportunity.
Cisco sells hosted telepresence through 14 of the major service providers, including AT&T, Verizon, BT and Tata, but according to Winge, the push by service providers into video is also creating partnership opportunities for traditional channel partners, too.

"You do see a lot of VARs that are actually partnering with these service providers, and you also see a lot of M&A-type activity," Winge said. "We see it as both a threat and an opportunity [for VARs], but the reality of this business is that it's both an extraordinarily global business and an extraordinarily local business."

Monday, January 9, 2012

Saudi Hacker Threatens to Release 1 Million Israeli Credit Card Numbers

After releasing 15,000 credit card numbers hacked from an Israeli website on Tuesday, the Saudi hacker known as 0xOmar has released 11,000 more today. He has threatened to release a further one million.

The hacker broke into a popular Israeli sports site, making off with hundreds of thousands of accounts' worth of personal information, including some credit card numbers.

Of the numbers released, credit companies claim only a few hundred dollars was illegally spent before the cards were closed down, according to the Washington Post.


On Tuesday, in a statement on the sports site, the hacker claimed to have stolen 400,000 identities. The message left on the sport site, according to CNN, included an introduction.
"Hi, it's 0xOmar from group-xp, largest Wahhabi hacker group of Saudi Arabia. We are anonymous Saudi Arabian hackers. We decided to release first part of our data about Israel."
Hacker News reported that his group claimed to be a part of the Anonymous hacking collective.
Yoram Hacohen, who heads up the Law, Information and Technology Authority at the Israeli Ministry of Justice, told CNN that "Israeli authorities have begun a criminal investigation, including a computer forensic probe to search for electronic evidence to try to locate the group." He is more worried about identity theft than credit card fraud.

This week, Israeli security companies have taken this opportunity to speak about the state of computer security in the country. According to Oren Levy, CEO of ZooZ:

"The core of the problem lies in the fact that payment information, such as credit cards, ID and phone numbers, and other information, is being processed and stored by tens of thousands of different merchants who aren't equipped to handle the information. There is a real need to separate merchants from this critical private data."

Is he or isn't he?

Haaretz reported that a blogger named Amir Fedida claimed to have unmasked the hacker as Omar Habib, a student from the United Arab Emirates who "works in a café, and studies computer science at the 'Hidalguense Cenhies' in Mexico."

In another report from the Jerusalem Post, the hacker denies he is anything other than what he claims, and says he's too well hidden to be unmasked.

Cyber-attacks, by both governmental and amateur hacking teams, have become an increasingly familiar part of the landscape of international relations in the last few years.

Worst Internet disasters of the decade

Now that this decade is coming to an end, we thought it would be a good time to list the very worst Internet disasters that happened between 2000 and 2009. And believe us, there have been some really big ones. Some you may remember, and some may be new to you, but they all affected a huge number of Internet users.

We focused on Internet service disruptions that lasted a significant amount of time and affected many people. Other criteria were that the incident shouldn’t be about any one single service or website and that it should be technical in nature (i.e. the dot-com bubble bursting in 2000 doesn’t count).

We have arranged the incidents in chronological order, oldest first.

DDoS attacks cripple web heavyweights

February 2000

A series of DDoS attacks crippled or disabled large websites like Yahoo, CNN, Amazon, eBay, ZDNet, and online trading sites like E*Trade and Datek. The attacks were spread out over days and targeted different sites, but were thought to be connected.
As an example of the attacks' scale, one of the targeted sites was hit with eight times more traffic than its maximum capacity.

The Code Red worm attacks web servers

July 2001
Code Red was a computer worm that spread itself via a security hole in the Microsoft IIS web server, even though a security patch had been out for months.
The infected websites were defaced by the worm, showing the following message:
HELLO! Welcome to! Hacked By Chinese!
At its peak, on July 19, 2001, as many as 359,000 web servers were infected with Code Red.
After the defacement, Code Red also had a second payload that activated itself after 20-27 days, when it launched DoS attacks on a set of pre-determined IP addresses. One of the sites the attacks targeted was the White House web server.

The SQL Slammer worm wreaks Internet havoc

January 2003
SQL Slammer was a computer worm that spread itself rapidly via a security hole in Microsoft SQL Server. A security patch had been available for six months, but many had not installed it. At least 22,000 systems were infected, possibly many more.

The entire worm was only 376 bytes, and spread itself by sending off a single UDP packet which could hold the entire code, making its distribution highly effective. It has been estimated that the number of infected computers initially doubled every 8.5 seconds (exponential growth) and that 90% of all computers with the vulnerability had been infected within 10 minutes.
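
That doubling figure makes the speed easy to appreciate. A toy model of unchecked exponential growth (real-world spread flattens out as the vulnerable population is exhausted, so this only describes the first couple of minutes):

```python
import math

DOUBLING_TIME = 8.5  # seconds, per the estimate above

def infected(t_seconds, initial=1):
    """Hosts infected after t seconds of unchecked doubling."""
    return initial * 2 ** (t_seconds / DOUBLING_TIME)

def time_to_reach(count, initial=1):
    """Seconds of doubling needed to reach `count` infected hosts."""
    return DOUBLING_TIME * math.log2(count / initial)

# Starting from a single host, ~22,000 infections take only about two minutes:
print(round(time_to_reach(22_000)))  # ~123 seconds
```

Under this model, roughly two minutes of doubling is enough to account for all 22,000 confirmed infections, which is consistent with the estimate that 90% of vulnerable hosts were hit within 10 minutes.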

The rapid spread and broadcasting of the worm effectively acted as a DDoS attack on the entire Internet. It overloaded routers all over the Internet, many of which crashed. The resulting routing changes and router restarts triggered a flood of communication between routers, which made ordinary Internet traffic either slow down or just stop.

Turkish ISP hijacks the Internet

December 24, 2004

A Turkish ISP (TT Net) made a mistake when configuring its routers, effectively announcing to the rest of the Internet that everything should be routed to them. Routers talk to each other and propagate this kind of information, so the configuration error spread and resulted in tens of thousands of networks on the Internet sending traffic to the wrong destination or not getting the traffic they were supposed to.

The result was that a lot of websites were unreachable for a large portion of the Internet population. This “traffic hijacking” to Turkey lasted for hours, and would most likely have been considerably more noticeable if it hadn’t happened on Christmas Eve.

Earthquake breaks Asian Internet

December 26, 2006

A massive earthquake with an epicenter off the coast of Taiwan broke a large number of important submarine communications cables. Internet traffic to and from China, Taiwan, Hong Kong, the Philippines, Malaysia, Singapore and many other places was severely affected by the incident, especially to the US.

As with most undersea cable breaks, repairs were complicated and took a long time to complete. Full service wasn’t restored until over a month later.

Big sites go dark as San Francisco datacenter loses power

July 24, 2007
When 365 Main’s datacenter in San Francisco lost power, it effectively took down a number of big websites and services like Craigslist, Typepad, LiveJournal, Yelp, Second Life, Technorati and Adbrite. All of them were hosted at this supposedly super-reliable co-location facility. The incident was made worse because several of the backup power generators failed to start. Although power was restored after about 45 minutes, it took hours before all the websites were back up and running.
Ironically, this incident happened the same day as 365 Main sent out a press release announcing that its San Francisco facility had had two years straight of 100% uptime.

Data center problems are fairly common, but this one had a huge impact since so many big sites were affected.

The Mediterranean submarine cable break

January-February 2008

This was actually three separate incidents, but they happened so closely together that the effect was enormous (and launched a number of conspiracy theories). Between January 23 and February 4, 2008, a total of five submarine data communications cables in the Mediterranean outside Egypt were cut. These cables were part of the Internet backbone and the disruption severely limited the Internet access to and from the Middle East and India.

Theories as to why the various cable breaks happened include damage done by ship anchors and bad weather conditions, although due to various circumstances there are some conspiracy theories about sabotage which have not been completely ruled out even by the UN (ITU).

“Honorable” mentions

While not making the list above for various reasons, there are undoubtedly some incidents from the last decade that at least deserve a mention. Here are some of the disasters that didn’t quite make the cut, but were awfully close.

Arranged in chronological order:
  • October, 2002: DDoS attack on the DNS backbone. This would definitely have made the cut, but the attack ultimately did not succeed in disabling DNS; it was more of a close call. Another attack was attempted in February, 2007.
  • August, 2003: The Northeast Blackout of 2003. The largest power outage in US history (hence the caps) of course had widespread effects on the Internet access for a huge number of people in that area. Parts of Canada were also affected.
  • August, 2007: The great Skype outage. Skype stopped working for almost two days for its millions of users due to a problem indirectly triggered by Windows Update. We use Skype a lot here at Pingdom, and have less-than-fond memories of this incident.
  • November, 2007: The Navisite datacenter migration. Around 175,000 websites were offline for days when a server migration between datacenters went wrong.
  • July, 2008: Apple’s MobileMe launch problems. The service was extremely rocky for weeks after its launch, and Steve Jobs later admitted in an internal (but leaked) email that Apple should have done more testing before launching it.

The major incidents on the Internet in 2010

In what has become something of a yearly tradition, it’s now time for us to present 10 of the most noteworthy incidents on the Internet from this past year. As you’ll see, 2010 has been very interesting.

Just like previous years, we have included problems ranging from website outages and service issues to large-scale network interruptions. If you’re an avid Web user, you are bound to recognize several of them.

Let’s get started! The major incidents on the Internet in 2010 were…

Wikipedia’s failover fail

Wikipedia has become so ubiquitous that it can’t go down for a minute without people noticing. According to Google Trends for Websites, the site has roughly 50 million visitors per day.
In March, when servers in Wikimedia’s European data center overheated and shut down, the service was supposed to fail over to a US data center. Unfortunately, the failover mechanism didn’t work properly and broke the DNS lookups for all of Wikipedia. This effectively rendered the site unreachable worldwide. It took several hours before everyone could access the site again.

WordPress.com’s big-blog crash

WordPress.com got a pretty bad start this year when a network issue caused the biggest outage the service had seen in four years. The outage became extra noticeable not just because of the sheer number of blogs it hosts (at the time 10 million, now many more), but also because so many high-profile blogs use it. The outage took down blogs such as TechCrunch, GigaOM and the Wired blogs for almost two hours in February.

China reroutes the Internet

In April, China Telecom spread incorrect traffic routes to the rest of the Internet. In this specific case it meant that for 18 minutes, potentially as much as 15% of the traffic on the Internet was sent via China because routers believed it was the best route to take.

Similar incidents have happened before, for example when YouTube was hijacked globally by a small Pakistani ISP two years ago. Normally this results in a crash since the ISP can’t handle the traffic. However, China Telecom was able to handle the traffic, so most people never noticed this. At most they noticed increased latency as traffic to the affected networks took a very long and awkward route across the Internet (via China).

Even though no serious outage happened as a result of this, we think it’s such a fascinating disruption of the traffic flow that we felt it was worth including here. This is an inherent weakness of today’s Internet infrastructure, which largely relies on the honor system. Renesys has a more in-depth explanation of this incident and how it could happen. We should state that it wasn’t necessarily an intentional hijacking.

Twitter’s World Cup woes

Twitter seemed like the ideal companion to the World Cup (soccer to you Americans, football to the rest of the world, John Cleese explains it best). Tweeting about the World Cup proved so popular that it slowed down or broke Twitter several times during the weeks of the event. The upside is that this effectively load tested Twitter’s infrastructure, revealing potential weaknesses. As a result, Twitter’s service today is most likely more stable than it might otherwise have been.

Facebook’s feedback loop

Facebook has become a true juggernaut with more than 500 million users. That hasn’t changed its development philosophy, “don’t be afraid to break things.” This aggressive approach to speedy development has been key to Facebook’s success, but, well, sometimes it will break things.
Facebook’s worst outage in four years came in September when a seemingly innocent update to Facebook’s backend code caused a feedback loop that completely overloaded its databases. The only way for Facebook to recover was to take down the entire site and remove the bad code before taking the site back online. Facebook was offline for approximately 2.5 hours.

Foursquare’s double whammy

Foursquare’s location-based social network has been a resounding success and has quickly gathered a following of millions, so when the service went down for roughly 11 hours early in October, people of course noticed. The culprit was an overloaded database. And as if to add insult to injury, almost exactly the same thing happened the day after, taking the site down for an additional six hours.

Paypal’s payment problems

When Paypal stumbles, so do the many thousands of merchants that rely on Paypal to handle payments, not to mention the millions of regular consumers who use Paypal for their online payments. You can imagine the effect, and sales lost, if Paypal stops working for hours on end. That is exactly what happened in October, when a problem with Paypal’s network equipment crippled the service for as much as 4.5 hours. At its peak the issue affected all of Paypal’s members worldwide for 1.5 hours.

Tumblr’s tumble

Tumblr was (and still is) one of the great social media successes of 2010, but with rapid growth comes scalability challenges. This has become increasingly noticeable, and culminated in a 24-hour outage early in December when all of Tumblr’s 11 million blogs were offline due to a broken database cluster.

The Wikileaks drama

If you’ve missed this you must have been hiding under a rock, which in turn was buried below a mountain of rocks. The site issues that Wikileaks experienced during the so-called Cablegate were significant. First the site was the victim of a large-scale distributed denial-of-service attack which forced Wikileaks to switch to a different web host. After Wikileaks moved to Amazon EC2 to better handle the increased traffic, Amazon soon shut them down. In addition to this, several countries blocked access to the Wikileaks site. And then the possibly largest blow came when the DNS provider for the official domain, EveryDNS, shut down the domain itself.
Without a working domain name in place, Wikileaks could for a time only be reached by its IP address. Since then, Wikileaks has spread itself out, mirroring the content over hundreds of sites and different domain names, including a new main site at

As if this wasn’t enough drama, you have to add the reactions from some of Wikileaks’ supporters (not from Wikileaks itself). The services that cut off Wikileaks in various ways (Paypal, VISA, Mastercard, Amazon, EveryDNS, etc.) were subjected to distributed denial-of-service attacks from upset supporters across the world, which resulted in even more downtime. There was also collateral damage, when some attackers mistook the DNS provider EasyDNS for EveryDNS, aiming their attacks at the wrong target.

The Wikileaks drama is without a doubt the Internet incident of the year.

Wednesday, January 4, 2012

Washington Upgrades State Network with Juniper Gear

Washington State's K-20 Education Network recently completed an upgrade of its network infrastructure to accommodate annual bandwidth growth that had reached 40 to 50 percent. The 15-year-old organization runs a wide area network (WAN) that links 498 locations at public schools, two-year colleges, and four-year colleges in the state. The upgrade to equipment in two of its major sites was completed during the third quarter of 2011. K-20 deployed MX480 3D Universal Edge Routers from Juniper Networks.

According to Tom Carroll, K-20's service manager, basic Internet access on the network has given way to more demand for support of online business applications and high-definition videoconferencing. "A child in a rural area like Forks, WA can talk on a video call to a psychologist at University of Washington in Seattle," he explained. "The network must provide high-fidelity video so that the psychologist can see the child's face. Without our network the child wouldn't get the care he or she needs."

Likewise, the new network is also supporting enterprise resource planning, time and attendance, payroll, learning management, and other school applications. "As mission critical applications migrate onto the network, the expectation for reliability goes up," Carroll said. "There's an expectation of five-nines of reliability--or better."

The original WAN used Juniper gear as well. According to Carroll, the equipment in use specifically in the Seattle and Vancouver installations had reached end of life and required replacement. The replaced core routers were redeployed to other WAN locations.

A major driver for the latest upgrade was to accommodate greater demand for increased bandwidth--to 10Gbps capacity--but without a concomitant increase in service charges. "Our customers have fixed line items in their budgets for bandwidth, and the only way we can keep costs stable while providing increased bandwidth is to adopt new technology and invest in infrastructure," Carroll explained. "The network architecture has to be flexible at both the high and low ends of the spectrum at a price they can afford."

The total cost of the project was under a million dollars, an expense incurred by K-20's members.
