Friday, September 28, 2012

List of Cloud Certifications


Cloud certifications and cloud computing certifications are still very young, but their value is growing fast. Managers and IT specialists want to extend their knowledge of vendor-neutral cloud topics as well as vendor-specific implementations.

Not many vendors have created cloud certifications yet.

A few of them, like Arcitura Education with its CloudSchool program, have created vendor-neutral certifications.

The biggest vendors, such as HP, EMC, Microsoft and IBM, also have cloud certifications in their portfolios that help you prove your skills with their products and technologies.

On the horizon we can see other vendors, such as Huawei and Cisco, preparing new certifications.

This is certainly a strong and welcome trend, both for companies at the management level and for engineers and IT architects.

List of Cloud Certifications
29 Certifications, 12 Vendors



Arcitura Education

Arcitura Education created a set of cloud certifications alongside its SOA certifications. Not all are available yet, but they are a great way to prove your proficiency in specific areas of cloud computing.

  • Certified Cloud Professional
  • Certified Cloud Technology Professional
  • Certified Cloud Architect
  • Certified Cloud Security Specialist
  • Certified Cloud Governance Specialist
  • Certified Cloud Storage Specialist
  • Certified Cloud Trainer


More about these cloud certifications on the Arcitura Education website.

Cloud Security Alliance (CSA)

The launch of CSA's CCSK program is an important step in improving security professionals' understanding of cloud security challenges and best practices, and it will lead to improved trust in, and increased use of, cloud services.



CompTIA

CompTIA has only one vendor-neutral cloud certification, aimed at people who want to understand the cloud concept.



Cloud Security Alliance

The Cloud Security Alliance's Certificate of Cloud Security Knowledge (CCSK) is the industry's first user certification program for secure cloud computing.



EMC

EMC has three certifications for cloud specialists in its portfolio: one for people starting out with cloud topics and EMC solutions, and two for architects (the highest level).

  • EMC Cloud Infrastructure and Services (EMCCIS)
  • EMC Cloud Architect (EMCCA) IT-as-a-Service Planning and Design
  • EMC Cloud Architect (EMCCA) Virtualized Infrastructure Specialty


More about these cloud certifications on the EMC website.

EXIN

EXIN has one cloud certification in its portfolio. It is aimed at the management level, but engineers also find it valuable. The EXIN certification was built by specialists from four companies.


HP

HP currently has two certifications for people who are familiar with HP cloud technologies.
These cloud certifications require you to demonstrate end-to-end architectural skills.

  • HP Accredited Technical Associate – Cloud
  • HP Accredited Solutions Expert – Cloud Architect


More about these cloud certifications on the HP website.

IBM

IBM has three certifications for people who want to demonstrate their knowledge of cloud computing infrastructure solutions.
One certification is focused on Tivoli systems, the other two on architectural concepts.

  • IBM Service Management Tivoli Cloud Computing Infrastructure
  • IBM Certified Solution Advisor – Cloud Computing Architecture
  • IBM Certified Solution Architect – Cloud Computing Infrastructure


More about these cloud certifications on the IBM website.

Microsoft

Microsoft is one of the biggest players in the certification market. Twenty years after creating its first IT certifications, Microsoft has developed its first cloud certification, and new ones will be available this year.



Oracle

Oracle has only one certification that touches on cloud concepts. In the future we should see many more.



RackSpace Hosting

CloudU is a cloud certification designed for IT professionals and business leaders who want to upgrade their knowledge of the fundamentals of cloud computing. The CloudU program is sponsored by RackSpace Hosting.


Salesforce.com

Salesforce is a cloud computing company, and all of the certifications in its portfolio are based on its cloud services.
Companies that use certified cloud specialists see smoother deployments and better use of Salesforce.

  • Salesforce.com Certified Administrator
  • Salesforce.com Certified Advanced Administrator
  • Salesforce.com Certified Developer
  • Salesforce.com Certified Advanced Developer
  • Salesforce.com Certified Sales Cloud Consultant
  • Salesforce.com Certified Service Cloud Consultant
  • Salesforce.com Certified Technical Architect

Thursday, September 27, 2012

12 free cloud storage options

 
With all the public cloud storage offerings on the market today, many vendors just want customers to sign up for their services. So, in return for a new account, many offer free cloud storage.
 
 
Using the following 12 public cloud storage options, you could theoretically get 112GB of free cloud storage. But not all services are the same: each has its pros and cons related to maximum upload file size, the pricing of additional cloud storage space, integration with various operating systems and mobile apps, and of course the security precautions the vendor takes.
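If you want to compare the paid tiers on an equal footing, a quick price-per-gigabyte calculation helps. The short Python sketch below uses four of the 100GB plans quoted later in this article; monthly prices are simply multiplied by 12, which is an approximation, and the figures are only those listed here.

# Rough price-per-GB-per-year comparison of 100GB plans quoted in this article.
# Monthly plans are annualized by multiplying by 12 (an approximation).
plans = {
    "Amazon Cloud Drive": 50.00,        # $50/year for 100GB
    "Microsoft SkyDrive": 50.00,        # $50/year for 100GB
    "Google Drive":       4.99 * 12,    # $4.99/month for 100GB
    "Dropbox Pro":        99.00,        # $99/year or $9.99/month for 100GB
}

for name, dollars_per_year in sorted(plans.items(), key=lambda kv: kv[1]):
    print("%-20s $%6.2f/year for 100GB  =  $%.2f per GB per year"
          % (name, dollars_per_year, dollars_per_year / 100))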
 
 
Amazon Cloud Drive
Name: Amazon Cloud Drive
Free cloud storage: 5GB
Extra storage: 20GB, $10/year; 50GB, $25/year; 100GB, $50/year; 200GB, $100/year; 1TB, $500/year. Cloud Music player: 250 imported songs free; 250,000 imported songs, $24.99/year.
 
More info: Music purchased and downloaded from Amazon is automatically stored in Amazon Cloud Drive for free. The service also backs up Kindle-branded tablets. Since launching in March 2011, the service has come under criticism for its access policies, which state that Amazon is allowed to access files stored in Amazon Cloud Drive.
 
 
Apple iCloud
Name: Apple iCloud
Free cloud storage: 5GB
Extra storage: 10GB (15GB total with 5GB free), $20/year; 50GB, $100/year.
 
 
More info: Automatically synchs files, photos, videos and even Web browsing tabs across Apple devices. Windows iCloud Control Panel is available. Apps such as Keynote, Pages and Numbers are used for document management/synchronization. Apple boasts a minimum 128-bit AES encryption for iCloud.
 
 
Box
Name: Box
Free cloud storage: 5GB
Extra storage: Personal account, 25GB for $9.99/month; 50GB, $19.99/month. Business account: $15/user/month, 3 to 500 users; 1TB with password-protected sharing, access management and user administration. Enterprise edition: Custom pricing, unlimited storage, offers customer branding, group access controls.
 
More info: Provides SSL AES 256-bit encryption behind the firewall. For business and enterprise accounts, files are stored encrypted with automatic redundancy. File size limits: 100MB for the free accounts, 1GB for paid personal accounts; Business editions have 2GB file size limit. Box allows document editing in the cloud through third-party apps, such as Zoho.
 
 
 
Dropbox
Name: Dropbox
Free cloud storage: 5GB
Extra storage: "Pro" accounts range from 100GB, $9.99/month or $99/year, to 500GB for $49.99/month or $499/year. "Teams" account, 1TB for $795/year for 5 users and $125 for each additional user.
 
More info: One of the best-known public cloud storage offerings, Dropbox uses SSL AES 256-bit encryption for its Pro and Teams editions. No limit on file size when uploading from the desktop application, which works on Windows, OS X and Linux; 300MB limit when uploading from the Dropbox website. Get 500MB of extra free storage when friends register, up to 16GB. Dropbox does not allow editing of documents directly in the service.
 
 
Google Drive
Name: Google Drive
Free cloud storage: 5GB -- Google Docs and files converted to Google Docs do not count against the storage limit. 1GB of free photo and video storage in Picasa Web Albums; unlimited storage of photos and videos (up to 15-minute videos) in Google+.
Extra storage: 25GB, $2.49/month; 100GB, $4.99/month; 200GB, $9.99/month; 1TB, $49.99/month; 16TB, $799.99/month.
 
More info: Google Drive allows users to store a lot more files in its cloud for free if the files are Google Docs. In many cases, files can be converted to this format simply by copying them into a Google document. Drive does have some file limits, including 2MB for converted files, or 10MB for non-Google Doc files. Spreadsheets have a 20MB limit, or 256 columns. Google Doc files can be edited in the application, but third-party apps are needed to edit non-Google Doc files, such as Microsoft Word files.
 
MediaFire
Name: MediaFire
Free cloud storage: 50GB
Extra storage: Pro edition features 250GB for $4.50/month, and Business edition offers 1TB for $49/month.
 
More info: Startup MediaFire offers a large amount of cloud storage, but it only has Windows, OS X and Linux desktop applications, with no mobile apps yet. For $1.50/month, users can get 50GB of storage with no advertisements through the "personal" edition. MediaFire markets its content distribution package heavily. The Pro edition allows 500GB/month of content distribution through 500 one-time links per day; the enterprise edition allows 4TB/month of distribution with 5,000 links per day. The free edition has a 200MB file size limit, the personal edition 1GB, Pro 4GB and Business 10GB per individual file.
 
 
Microsoft SkyDrive
Name: Microsoft SkyDrive
Free cloud storage: 7GB
Extra storage: 20GB, $10/year; 50GB, $25/year; 100GB, $50/year.
 
More info: Microsoft SkyDrive, which has a Windows 8-style interface, offers users one of the largest initial free storage accounts of the major cloud offerings. It limits uploads to 300MB per file via the Web browser and 2GB via the desktop application, which runs on Windows and OS X. It also supports iOS, Android and Windows Phone apps. It includes a "forgot something" feature that allows users to remotely retrieve a file on their PC that has not been uploaded to the cloud.
 
 
MiMedia
Name: MiMedia
Free cloud storage: 7GB
Extra storage: 100GB, $4.99/month; 500GB, $20/month or $199/year; 1TB, $35/month or $325/year.
 
 
More info: MiMedia offers one of the higher amounts of free cloud storage in the market. It bills itself as being a backup repository and cloud-access tool specifically for media, although it works the same with documents. For large uploads, the company will send a hard drive onto which you can upload an initial dump of information, then send it back to the company for uploading to MiMedia's cloud. Files are encrypted during upload transmission but not while stored on MiMedia servers. MiMedia does not yet support Mac OS X. It does have iOS and Android apps.
 
 
SpiderOak
Name: SpiderOak
Free cloud storage: 2GB
Extra storage: $10 per month or $100 per year for each additional 100GB increment.
 
More info: SpiderOak presents itself as the secure public cloud storage option. Boasting a "zero-knowledge" policy, SpiderOak's program does not store customers' passwords, and all customer data is encrypted both in transmission and in storage, using salted hashes and a combination of 2048-bit RSA and 256-bit AES encryption. For the developer crowd, the company has also begun open-sourcing some of the code used to create the product. SpiderOak offers personal, business and partner/reseller versions of its cloud service.
 
SugarSync
Name: SugarSync
Free cloud storage: 5GB
Extra storage: 30GB, $4.99/month or $49.99/year; 60GB, $9.99/month or $99.99/year; 500GB, $39.99/month or $399.99/year. Business account offers 100GB for three users for $29.99/month or $299.99/year.
 
More info: Up to 32GB of additional free storage is available if you refer others who sign up for the service. SugarSync has mobile apps available on iOS, Android, BlackBerry, Symbian and WinMobile platforms.
 
 
Symform
Name: Symform
Free cloud storage: Up to 10GB
 
Extra storage: Symform offers by far the largest amount of potentially free storage, but there's a catch. Its public cloud uses storage space donated by users, meaning other customers' encrypted data will be stored on your system when you contribute to the Symform cloud. The amount of storage each user gets is based on how much storage they contribute back to the Symform public cloud network. So, for example, if you contribute 2TB of storage, you can get 1TB of storage for free. Common use cases for this are disaster recovery and backup. Customers can also pay for the storage instead of contributing excess storage space. Symform encrypts files using 256-bit AES, then divides files stored in the cloud into 64 blocks that are distributed throughout the Symform cloud network so that no single user has access to a customer's complete set of encrypted data.
 
 
Syncplicity
Name: Syncplicity
Free cloud storage: 2GB
Extra storage: 50GB, $15/month for personal edition.
 
More info: Syncplicity is owned by EMC. Pricing for the business edition, which includes central access controls, starts at $45/month with tiered pricing up to unlimited storage. There is no limit on file size or number of files. AES 256-bit encryption is used in transmission and at rest.

Wednesday, September 26, 2012

Huawei Targeting Cisco in Telepresence Market

 

The networking competition between Cisco and Huawei is now moving into the immersive video conferencing space, where Huawei is looking to displace Cisco as the top vendor.

Huawei Technologies, which already competes with Cisco Systems in the global networking market, reportedly is now looking to challenge Cisco in the telepresence space.
 
Cisco is the dominant player in the $2.8 billion immersive telepresence arena, leading other major rivals like Polycom. However, as Huawei looks to expand its capabilities beyond the networking business, telepresence is becoming a focus, Li Jun, general manager of such telepresence products at the Chinese vendor’s Enterprise unit, told Bloomberg News.
 
Huawei had about $200 million in sales of telepresence equipment in 2011, Li said.
 
Telepresence technology enables businesses to conduct conferences via video in an immersive fashion that gives participants the feeling of being in the same room, even though they may be across the world from each other. The equipment involved with telepresence systems can include not only the technologies, screens and cameras, but also furniture for the rooms.
 
Telepresence and other video conferencing technologies have gotten a boost in recent years as businesses look to increase employee productivity and improve the lines of communication with workers, partners and customers, while at the same time driving down travel costs. However, with the growth of such trends as cloud computing, bring your own device (BYOD) and greater workforce mobility, the video conferencing market is changing, with an increasing emphasis on software-based solutions and enabling video collaboration on a range of devices, including PCs and mobile devices like smartphones and tablets.
 
The uncertain global economy and reduced spending in the public sector are conspiring to slow sales of video conferencing solutions overall, and the market for immersive telepresence products is being particularly hard hit, according to analysts at IDC. In the second quarter, overall video conferencing spending fell 10 percent from the same period in 2011, while revenue in the multi-codec telepresence segment fell 38.4 percent. Those numbers were a continuation of the trend seen in the first quarter, according to IDC.
 
“The high-end, immersive telepresence market has been taking a hit lately as lower-cost, HD-quality video solutions, along with a range of new video deployment options for customers, have emerged," Rich Costello, senior analyst for IDC’s Enterprise Communications Infrastructure business, said in a statement in May when announcing first-quarter market numbers.
 
Still, Huawei executives see telepresence technology as a key part of their plans to triple revenue to $100 billion by 2021, up from $32 billion in 2011. Huawei also is looking to push its way into other markets as well, including cloud computing and mobile devices such as smartphones and tablets.
 
Li indicated to Bloomberg that Huawei will look to offer less expensive telepresence systems as it competes with Cisco. The company's high-end telepresence offering, the TP3106, delivers high-definition images across three flat screens for about $160,000, while a similar system from Cisco could cost as much as $300,000, Li said.
 
Cisco executives have acknowledged Huawei as a strong rival in the networking space. In April, Cisco CEO John Chambers told The Wall Street Journal that he saw Huawei—rather than the likes of Hewlett-Packard and Juniper Networks—as its top competitor. That came after Huawei in 2011 launched an aggressive campaign to increase its presence in the North American market.
 
 

Monday, September 24, 2012

Cisco unveils Nexus 3548 with Algorithm Boost technology


 
Cisco Systems, Inc. has announced the Cisco Nexus 3548 with Algorithm Boost technology, a new networking innovation that delivers up to 60% network-access performance improvement over competing full-featured 10 Gigabit Ethernet switches.


Designed for use in high performance computing, high performance trading, and big data environments, this new switch offers network-access performance as low as 190 nanoseconds, a performance improvement enabled by the Algo Boost technology developed by silicon engineers at Cisco.


The Cisco Nexus 3548 with Algo Boost enhances the Cisco High Performance Trading Fabric architecture that helps enable business agility and intelligence for customers without imposing performance or latency penalties.


The Nexus 3548, a one-rack-unit (1RU) 10 Gigabit Ethernet switch, offers latencies as low as 190 ns when running in "warp mode" in environments with small to medium Layer 2 and Layer 3 scaling requirements.


The ultra-low-latency switch also facilitates the delivery of stock market data to financial trading servers in as little as 50 ns with the warp switch port analyzer (SPAN) feature. The Nexus 3548 also includes Hitless Network Address Translation (NAT), a critical feature to allow algorithmic traders to easily connect to any trading venue they desire without any latency penalty.


Cisco Nexus 3548 switch latency speeds were verified using Spirent TestCenter across various workloads and using testing specifications developed jointly with Spirent Communications.


David Yen, senior vice president of Data Center Group at Cisco, said: "Today, Cisco has leapfrogged our competitors in delivering a full featured switch that offers the lowest latency Ethernet in the networking industry. The Nexus 3548 with the unique Cisco Algo Boost technology implemented in ASIC provides a robust feature set to give financial traders more control over their sophisticated trading algorithms and respond more quickly to the changes in the market.


"In addition to the performance, this unique ultra-low-latency Ethernet technology is part of the unified data center fabric and offers strong total cost of ownership for commercial high-performance computing and big data environments as well as scale-out storage topologies."
 

Advanced Cloud Computing Interview Questions - Part 3

Introduction to Cloud Computing



What is a hypervisor in cloud computing, and what are its types?

A hypervisor is a virtual machine monitor (VMM) that manages resources for virtual machines. The name suggests that it acts as a supervisor for the virtual machines. There are two main types of hypervisors:

• Type-1: the guest VM runs directly on the host hardware, e.g. Xen, Hyper-V, VMware ESXi

• Type-2: the guest VM runs on the hardware through a host OS, e.g. KVM, Oracle VirtualBox
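
If you want to check which hypervisor a given host exposes, the libvirt Python bindings offer a quick way to ask. This is a minimal sketch, assuming the libvirt-python package is installed and a local libvirt daemon is running; the connection URI ("qemu:///system") is an assumption you would adjust for Xen or another driver.

# Minimal sketch: query the local hypervisor through libvirt's Python bindings.
# Assumes libvirt-python is installed and libvirtd is running; adjust the URI
# ("qemu:///system", "xen:///", ...) for your environment.
import libvirt

conn = libvirt.openReadOnly("qemu:///system")
if conn is None:
    raise SystemExit("Failed to connect to the hypervisor")

print("Hypervisor type:", conn.getType())         # e.g. "QEMU" or "Xen"
print("Hypervisor version:", conn.getVersion())
print("Defined guests:", [dom.name() for dom in conn.listAllDomains()])

conn.close()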

Are Type-1 hypervisors better in performance than Type-2 hypervisors, and why?

Yes, Type-1 hypervisors perform better than Type-2 hypervisors because they do not run through a host OS; they use the host hardware's resources directly. In cloud implementations, Type-1 hypervisors are used rather than Type-2 because cloud servers need to run multiple OS images, and if those images ran on top of a host OS, as in the Type-2 case, resources would be wasted.

On what characteristics should a cloud computing model be selected for implementing and managing a workload?

Scalability is the characteristic of cloud computing by which an increasing workload can be handled by increasing resource capacity in proportion. It allows the architecture to provide resources on demand as traffic raises the requirement. Elasticity, by contrast, is the characteristic that provides the concept of commissioning and decommissioning large amounts of resource capacity dynamically; it is measured by the speed with which resources become available on demand and by how well those resources are used.
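
The distinction is easier to see in code. The Python sketch below is purely illustrative: the target load, the provision and release helpers, and the way load is measured are all hypothetical. It shows the basic control loop an elastic platform runs, scaling out when load per instance exceeds a target and scaling back in when capacity sits idle.

# Illustrative autoscaling loop. Scalability: capacity grows in proportion to
# load. Elasticity: capacity is commissioned and decommissioned automatically
# and quickly. Names and thresholds are hypothetical.

TARGET_LOAD_PER_INSTANCE = 0.70   # aim to keep each instance ~70% busy
MIN_INSTANCES = 1

def desired_instance_count(total_load):
    """total_load is normalized so that 1.0 equals one fully busy instance."""
    return max(MIN_INSTANCES, int(round(total_load / TARGET_LOAD_PER_INSTANCE)))

def reconcile(total_load, instances, provision, release):
    """Commission or decommission capacity so it tracks the observed load."""
    needed = desired_instance_count(total_load)
    while len(instances) < needed:
        instances.append(provision())      # scale out
    while len(instances) > needed:
        release(instances.pop())           # scale back in
    return instances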

What do you understand by CaaS?

CaaS stands for Communication as a Service, a term used in the telecom industry. Voice over IP (VoIP) follows the same delivery model. CaaS can offer enterprise users features such as desktop call control, presence, unified messaging, and desktop faxing. In addition to the enterprise features, CaaS also offers a set of services for contact-center automation that includes IVR, ACD, call recording, multimedia routing (e-mail and text chat), and screen-pop integration.


Cloud computing architecture


What is the use of defining cloud architecture?

Cloud architecture describes how a software application uses on-demand services and accesses a pool of resources from the cloud. Cloud architecture acts as the platform on which applications are built. It provides the complete computing infrastructure, supplies resources only when they are required, and is used to elastically scale resources up or down according to the job being performed.

How does cloud architecture overcome the difficulties faced by traditional architecture?

Cloud architecture provides a large pool of dynamic resources that can be accessed whenever there is a requirement, something traditional architecture does not offer. In a traditional architecture it is not possible to dynamically add machines as the demand for infrastructure and services rises. Cloud architecture provides scalability to meet high infrastructure demand and gives users on-demand access.

What are the three differences that separate cloud architecture from the traditional one?

The three differences that set cloud architecture apart are:
1. Cloud architecture provides hardware according to demand; processes run only when they are required.
2. Cloud architecture can scale resources on demand; as demand rises, it provides the infrastructure and services to users.
3. Cloud architecture can manage and handle dynamic workloads without failure; it can recover from a machine failure and keeps the load on any particular machine to a minimum.

What are the advantages of cloud architecture?

- Cloud architecture uses simple APIs to provide services that are easily accessible to users over the internet.
- It provides scale-on-demand to increase industrial strength.
- It provides transparency between machines, so users don't have to worry about their data; they can simply use the functionality without knowing the complex logic implemented in the cloud architecture.
- It provides a high degree of optimization and utilization of the cloud platform.

What are the business benefits involved in cloud architecture?

1. Zero infrastructure investment: Cloud architecture lets users build large-scale systems with all the hardware, machines, routers, backups and other components they need, which reduces the startup cost of the business.

2. Just-in-time Infrastructure: It is very important to scale the infrastructure as demand rises. This can be done by building the application in the cloud with dynamic capacity management.

3. More efficient resource utilization: Cloud architecture allows users to use their hardware and resources more efficiently. This is achieved by having applications request and relinquish resources only when they are needed (on demand).

Cloud computing – Amazon 

What are the different components used in AWS?

The components that are used in AWS are:
1. Amazon S3: used to retrieve the input data sets involved in a cloud architecture and to store the output data sets produced from that input.
2. Amazon SQS: used to buffer requests received by the controllers; it is the component used for communication between the different controllers.
3. Amazon SimpleDB: used to store intermediate status, logs and the tasks performed by the user.
4. Amazon EC2: used to run large distributed processing jobs on a Hadoop cluster. It provides automatic parallelization and job scheduling.

What are the uses of Amazon web services?

Amazon Web Services includes a component called Amazon S3 that acts as both an input and an output data store. Processing reads its input from S3 and writes the corresponding output back to it. The input consists of data stored on Amazon S3 as objects and is updated frequently so that changes propagate through the whole architecture. It is needed because the data set grows on demand and requires persistent storage.
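
As a concrete illustration, here is a minimal sketch using the boto Python library; the bucket name, key names and payloads are hypothetical, and credentials are assumed to come from the environment or a boto config file. It stores an input object in S3 and later writes an output object back.

# Minimal boto sketch: S3 as the input and output data store.
# Bucket and key names are hypothetical; credentials come from the
# environment (AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY) or ~/.boto.
import boto

conn = boto.connect_s3()
bucket = conn.create_bucket("example-pipeline-bucket")   # returns the bucket if it already exists

# Store an input data set as an S3 object.
key = bucket.new_key("input/dataset-001.txt")
key.set_contents_from_string("raw input data goes here")

# ... processing happens elsewhere, e.g. on EC2 ...

# Store the result back as an output object.
out = bucket.new_key("output/dataset-001-result.txt")
out.set_contents_from_string("processed output goes here")

print([k.name for k in bucket.list()])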

How to use Amazon SQS?

Amazon SQS is a message-passing mechanism used for communication between components that are connected to each other. It acts as the communicator between the various Amazon components and holds the different functional components together. This keeps the components loosely coupled and produces an architecture that is more resilient to failure.
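
A short sketch with the boto Python library shows the pattern; the queue name and message body are hypothetical. One component writes a request to the queue, and another polls the queue, processes the message and deletes it.

# Minimal boto sketch: SQS as the buffer between two loosely coupled controllers.
# The queue name and payload are hypothetical.
import boto
from boto.sqs.message import Message

conn = boto.connect_sqs()
queue = conn.create_queue("example-request-queue")   # returns the queue if it already exists

# Producer side: buffer a request for another component to pick up.
msg = Message()
msg.set_body("process dataset-001")
queue.write(msg)

# Consumer side: poll the queue, handle the request, then delete it.
received = queue.read(visibility_timeout=30)
if received is not None:
    print("Handling request:", received.get_body())
    queue.delete_message(received)

Because neither side calls the other directly, either component can be restarted or scaled independently, which is exactly the loose coupling described above.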

How is a buffer used in Amazon Web Services?

A buffer is used to make the system more resilient to bursts of traffic or load by synchronizing different components. Components receive and process requests at uneven rates; the buffer keeps the balance between them so that they can work at a steady pace and provide faster service.

What is the need for the isolation feature in Amazon Web Services?

Isolation hides the architecture and gives users an easy, convenient way to consume the services without difficulty. When a message is passed between two controllers, a queue is maintained to hold the message; no controller calls another controller directly. Communication between controllers takes place by storing messages in a queue, which provides a uniform way to transfer messages between different application components. In this way all of the controllers are kept isolated from one another.

Cloud Computing – MapReduce


What do you understand by MapReduce?

MapReduce is a software framework that was created by Google. Its primary focus is to aid distributed computing, specifically processing large data sets on groups of many computers. The framework took its inspiration from the map and reduce functions of functional programming.

Explain how MapReduce works.

Processing can occur on data in a file system (unstructured) or in a database (structured). The MapReduce framework primarily works in two steps:
1. Map step
2. Reduce step

Map step: During this step the master node accepts an input (problem) and splits it into smaller sub-problems. The node then distributes the sub-problems to worker nodes so that they can be solved.

Reduce step: Once a sub-problem is solved, the worker node returns its solution to the master node, which accepts all of the workers' solutions and recombines them into a single solution. This solution is the answer to the input that was originally provided to the master node.
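
The classic word-count example makes the two steps concrete. The Python sketch below is a single-process simulation rather than a real cluster job, and the map_step and reduce_step names are hypothetical, but they have the same shape as the functions a distributed MapReduce framework would call on worker nodes.

# Single-process word-count sketch illustrating the map and reduce steps.
# In a real cluster the map calls run on worker nodes and the framework
# groups the intermediate (key, value) pairs before the reduce calls.
from collections import defaultdict

def map_step(document):
    """Emit a (word, 1) pair for every word in one input split."""
    return [(word.lower(), 1) for word in document.split()]

def reduce_step(word, counts):
    """Combine all the values emitted for one key."""
    return word, sum(counts)

documents = ["the cloud scales on demand", "the cloud is elastic"]

# "Shuffle": group the intermediate pairs by key, as the framework would.
grouped = defaultdict(list)
for doc in documents:
    for word, count in map_step(doc):
        grouped[word].append(count)

results = dict(reduce_step(word, counts) for word, counts in grouped.items())
print(results)   # e.g. {'the': 2, 'cloud': 2, 'scales': 1, ...}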

What is an input reader in reference to MapReduce?

The input reader, as the name suggests, primarily has two functions:
1. Reading the Input
2. Splitting it into sub-parts

The input reader accepts the user-supplied input and then divides/splits it into parts, each of which is assigned to a map function. An input reader will always read data from a stable storage source only, to avoid problems.
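
A minimal sketch of that idea follows; the file path and split size are arbitrary, and the scheduler call is hypothetical. The reader pulls data from stable storage and hands out fixed-size splits, one per map task.

# Illustrative input reader: read from stable storage and yield fixed-size
# splits, one per map task. The path and split size are hypothetical.
def read_splits(path, split_size=64 * 1024):
    with open(path, "rb") as stable_storage:
        while True:
            chunk = stable_storage.read(split_size)
            if not chunk:
                break
            yield chunk

# Each split would then be handed to one map task:
# for split in read_splits("/data/input.log"):
#     schedule_map_task(split)   # hypothetical scheduler call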



Ubuntu Cloud

 

What components has Ubuntu released for its cloud strategy?

To date, Ubuntu has released three components:

- Ubuntu Server Edition on Amazon EC2 (IaaS)
- Ubuntu enterprise cloud powered by Eucalyptus (IaaS)
- UbuntuOne (SaaS)

The first two components target the infrastructure layer of the computing stack, while UbuntuOne is meant for the software layer, also known as Software as a Service (SaaS).

List out a few of the uses of the private cloud concept.

The private cloud concept has a variety of usage scenarios. Some of them are:

- It enables an organization to rapidly develop and prototype cloud applications. This can also be done behind a firewall, enabling the development of privacy-sensitive applications such as classified data handling.
- Being an elastic platform, it enables the use of high-performance applications whose load fluctuates. The system is sized based on the aggregated peak loads of the different applications at a single point in time.
- Using the private cloud concept, the organization can assign a pool of hardware inside the firewall and hand it out to users through a common GUI, speeding up the process.

What does private cloud offer in building an infrastructure?

A private cloud offers a complete set of development tools and an easy-to-configure panel where you can customize and deploy prototype applications.
- It keeps private, sensitive applications separate and hidden from the outside world.
- It provides the ability to create high-performance applications and includes the concept of elasticity.
- It uses a firewall and keeps all resources in a pool that separates them from resources that are made public.

What elements are included in the Ubuntu cloud architecture?

The elements included in the Ubuntu cloud architecture are:

1. Cloud controller: the main controller; it manages communication between nodes and allows the parts of the system to communicate with each other.
2. Walrus storage controller: controls the storage of data and resources in one place for easy access.
3. Elastic block storage controller: applies the elasticity concept and allows resources to scale up as demand rises; this block consists of dynamic resources.
4. Cluster controller: controls the cloud clusters, which are made up of many nodes, and holds the configuration of all the nodes at a single point.
5. Node controller: consists of the hardware resources that are provided to the web or to the user through the cluster controller.
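
Because Ubuntu Enterprise Cloud is powered by Eucalyptus, which exposes an EC2-compatible API, the same boto Python library used with AWS can talk to the cloud controller. A hedged sketch follows: the endpoint, port, path and credentials are all assumptions, and you would take the real values from your own cloud controller.

# Hedged sketch: list instances on a Eucalyptus-based Ubuntu Enterprise Cloud
# through its EC2-compatible API. Endpoint, port, path and credentials are
# assumptions; substitute the values from your cloud controller.
import boto
from boto.ec2.regioninfo import RegionInfo

region = RegionInfo(name="eucalyptus", endpoint="cloud-controller.example.com")
conn = boto.connect_ec2(
    aws_access_key_id="YOUR-EC2-ACCESS-KEY",
    aws_secret_access_key="YOUR-EC2-SECRET-KEY",
    is_secure=False,
    region=region,
    port=8773,
    path="/services/Eucalyptus",
)

for reservation in conn.get_all_instances():
    for instance in reservation.instances:
        print(instance.id, instance.state, instance.ip_address)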


 

Software-defined networks - Five needs driving SDNs

 Courtesy - Network World

BYOD, cloud, Big Data, IT consumerisation and rapid service provisioning

Five trends in networking are driving the transition to software-defined networking and programmability.
 
 
They are:
 
• User, device and application mobility; 
• Cloud computing and services;
• Consumerisation of IT;
• Changing traffic patterns within data centres;
• Agile service delivery.
 
 
The trends stretch across multiple markets, including enterprise, service provider, cloud provider, massively scalable data centres - like those found at Google, Facebook, Amazon, etc. - and academia/research. And they require dynamic network adaptability and flexibility and scale, with reduced cost, complexity and increasing vendor independence, proponents say.
 
According to Cisco, which just released its ONE programmability architecture, enterprises need network programmability to automate the operation of private cloud deployments. These deployments include virtual workloads, virtual desktop infrastructures, and the orchestration of security profiles across them.
 
And within the enterprise data centre, traffic patterns have changed from the "north-south" directions of client/server to "east-west," in which applications access different databases and servers before delivering data back to the client. The Open Networking Foundation says this, as well as the increasing use of personal devices to access corporate data - the BYOD phenomenon - and the deployment of private, public and hybrid cloud infrastructures and services is also changing traffic patterns, requiring the automation, rapid reconfigurability and simplified extendability SDNs provide.
 
Other SDN/programmability/network virtualisation players say SDNs and the applications they enable can relieve VLAN exhaustion, facilitate data centre interconnect and disaster recovery, allow for granular, policy-based security, network isolation, service interposition, deterministic application performance and customization, among others. Policy-based security is particularly important to the BYOD trend; as the ONF notes, enterprise IT is under pressure to accommodate personal devices in a fine-grained manner while protecting corporate data and intellectual property, and meeting compliance mandates.
 
Service providers need SDNs for agile service delivery, proponents say. SDNs and network programmability can enable policy-based control and analytical data capture to help optimise and monetise service delivery, they say. Cloud service providers and webscale companies like Google, Facebook and Yahoo need SDNs to ease or automate network configuration and reconfiguration, and to quickly add more functionality without manually touching each and every switch or router in the network.
 
Such companies can use OpenFlow and SDNs to reroute traffic, balance traffic loads, provide bandwidth on demand for peak requirements, execute policies to scale and segregate the networks of different data centre or cloud tenants, and connect subscribers to content and services. Cloud providers in particular require programmability to support scalable multi-tenant environments through automated provisioning and virtualisation overlays that abstract complicated and distributed physical infrastructures from function.
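
To make that programmability concrete, here is a minimal sketch written in Python against the open-source Ryu OpenFlow controller framework, which is just one of several ways to program OpenFlow switches; the subnet, output port and priority are hypothetical, and a real rerouting or load-balancing application would be driven by live topology and traffic data. When a switch connects, the application installs a flow entry that steers traffic for one prefix out a chosen port.

# Minimal Ryu (OpenFlow 1.3) sketch: when a switch connects, install a flow
# entry that steers traffic for 10.0.0.0/24 out port 2. The subnet, port and
# priority are hypothetical values used for illustration only.
from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import CONFIG_DISPATCHER, set_ev_cls
from ryu.ofproto import ofproto_v1_3

class RerouteApp(app_manager.RyuApp):
    OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

    @set_ev_cls(ofp_event.EventOFPSwitchFeatures, CONFIG_DISPATCHER)
    def switch_features_handler(self, ev):
        datapath = ev.msg.datapath
        ofproto = datapath.ofproto
        parser = datapath.ofproto_parser

        match = parser.OFPMatch(eth_type=0x0800,
                                ipv4_dst=("10.0.0.0", "255.255.255.0"))
        actions = [parser.OFPActionOutput(2)]
        instructions = [parser.OFPInstructionActions(
            ofproto.OFPIT_APPLY_ACTIONS, actions)]

        datapath.send_msg(parser.OFPFlowMod(datapath=datapath,
                                            priority=100,
                                            match=match,
                                            instructions=instructions))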
 
From an IT perspective, deployment of and access to cloud services can be facilitated by SDNs that enable "elastic scaling" of compute, storage and network resources using a common suite of tools from a common viewpoint, according to the ONF. This is particularly important as organisations stress increased security, compliance, and auditing in cloud environments, or when abrupt changes emerge as businesses reorganise, consolidate or merge.
 
Massively scalable data centres are said by Cisco to require network flow management enabled through customisation using programmatic APIs to provide deep insight into network traffic. And academia and research require network "slicing," or partitioning to separate experimental network use - i.e., to investigate applicability of OpenFlow or SDNs - from production networks.
 
 
In both scenarios, Big Data plays a key role in requiring SDNs or network programmability. SDNs can help steer traffic between thousands of servers processing massive datasets in parallel; and can help data centre networks scale more efficiently while maintaining server-to-server, server-to-storage, and server/storage-to-network connectivity.
 
Whatever the requirement, today's networks are not up to the task of providing the adaptability and scale that SDNs promise, according to the ONF:
 
"The explosion of mobile devices and content, server virtualisation, and advent of cloud services are among the trends driving the networking industry to reexamine traditional network architectures.
 
Many conventional networks are hierarchical, built with tiers of Ethernet switches arranged in a tree structure. This design made sense when client-server computing was dominant, but such a static architecture is ill-suited to the dynamic computing and storage needs of today's enterprise data centres, campuses, and carrier environments."
 

Saturday, September 22, 2012

Free Cisco Labs for Certification

 
Looking for some practical experience with Cisco routers and switches for little to no cost? There's no shortage of free Cisco labs on the Web.
 
These sites are designed to assist network operators and Cisco certification students with common problems or challenges that crop up in configuring Cisco networks. In some instances, they may serve as teasers to get students to pay for more elaborate and comprehensive testing services.
 
Here's a sample of six free Cisco labs available either online or in person:
 
PacketLife Community Lab - Currently offline until November, the PacketLife community Lab provides free access to "modern" Cisco networking equipment for training purposes. Lab equipment and other costs are provided or sponsored by the site's owner, commercial sponsors, and voluntary contributions by community members.
 
Free Cisco Catalyst Switch Lab - This non-commercial effort provides free 24x7 access to Cisco Catalyst switches to learn networking and Cisco IOS and to prepare for certification exams. It requires no reservation or registration. Users telnet in and get 80-90 minutes per session on each console line, with a one-hour waiting period and a two-hour reset period. Users can have many logins and sessions per day on each console and can log in to as many available devices as they want. Each line is timed separately.
 
Free Cisco Lab -- Free Cisco Lab is dedicated to providing educational help for students pursuing Cisco certifications. It provides exam preps, practice exams and free lab scenarios for routing, switching, security, wireless, and VOIP. It is operated by Barry Burdette, a 15-year network industry veteran who has designed, installed and maintained network infrastructure during his career.
 
Free Cisco Lab Simulators - The Ciscoconsole site has a link to free Cisco lab simulators available for download. One is the GNS3 simulator, an open source program that allows users to design complex network topologies. The program enables emulation of many Cisco IOS router platforms, IPS, PIX and ASA firewalls, and Juniper's Junos. It also simulates simple Ethernet, ATM and frame relay switches, and enables connection of the simulated network to production networks. GNS3 also performs packet capture using Wireshark. It can be run on multiple operating systems, including Windows, Linux, and MacOS X.
 
Dynamips - Dynamips is an emulator program for Cisco routers. It emulates Cisco router hardware by booting a Cisco IOS image into the emulator. Dynamips emulates Cisco 1700, 2600, 3600, 3700, and 7200 series routers for testing and experimenting with IOS features, and checking configuration before production deployment. GNS3 can be a graphical front-end for Dynamips; another front-end is Dynagen. Dynamips runs on Linux, Mac OS X or Windows.
 
Free CCNA Workbook - This site provides a free lab to those who prefer to use real equipment over emulated gear from Dynamips/Dynagen/GNS3. The lab consists of Cisco 3725, 3550 and 2950 hardware running 12.4 images of IOS. Each lab session is limited to a total of three consecutive hours, which equates to eight sessions per day. Users are only permitted to schedule one session at any given time.
 
 

Go Daddy outage highlights need for network segmentation, visibility

 
What kind of lessons can enterprises draw from the recent Go Daddy outage?
 
While outside influences -- like a distributed denial-of-service (DDoS) attack -- can trigger a network outage, many outages are the result of avoidable network events brought on by system updates or configuration errors.
 
Domain name hosting provider GoDaddy.com recently suffered intermittent network outages that brought down customers' email and websites for six hours. The Go Daddy outage was originally thought to be the work of a DDoS attack, but later was determined to be the result of corrupted router data tables stemming from a series of internal network events. In a message posted on Go Daddy's website, CEO Scott Wagner said the provider wasn't hacked and no customer data was compromised.
 
Go Daddy has since implemented measures to prevent this type of network outage from occurring again, Wagner wrote.
 
Just because network outages are a common occurrence doesn't mean providers and enterprises should go down without a fight and accept an unreliable network. Better visibility, failover plans and network segmentation can help limit the impact of a network outage for an enterprise.
 

GoDaddy outage points out need for network monitoring and visibility

 
Six hours of service interruption is a long time for a popular provider like Go Daddy. Better network visibility could ensure earlier identification of such problems and faster resolution for a provider like Go Daddy and for an internal enterprise network, noted Tim Nichols, vice president of global marketing at New Zealand-based Endace, a network traffic recording and visibility provider.
 
Because so many businesses and customers rely on Go Daddy for connectivity, "[the provider] must invest real money in network visibility and network history tools in order to minimize the time it takes engineers to respond, establish root cause, and repair serious service-affecting problems," he said. That visibility must extend all the way into the changes network administrators make to a network. Because the source of the Go Daddy outage most likely stemmed from a corrupted router updating other router tables incorrectly, better configuration and patch management may have avoided the outage, said John Pironti, president of consultancy IP Architects LLC.
 
But given the reputation of Go Daddy, it is unlikely that one issue -- such as human error resulting in a corrupted update -- triggered the outage, and one solution may not have prevented it.
 
"This was most likely the result of a complicated set of updates or [a] number of controls failing," Pironti noted.
 
Early detection of networking issues is important, especially following any network updates. Enterprises and providers should closely monitor traffic flows and data to spot corruption as soon as possible, he said.
 

Network segmentation, design considerations slashes risk of network outages

 
Careful network design techniques can go a long way in preventing a failure similar to the Go Daddy outage. Network segmentation can lessen the impact of a failed network device, Pironti said.
 
"Outages are a reality, but both enterprises and providers should segment their infrastructure so that no single set of equipment or solutions can impact an entire environment if availably and uptime is the primary business need," Pironti said, noting that greater resiliency should be built into network infrastructure.
 
One method of network segmentation is running multiple, parallel systems -- a network design technique that allows organizations to make necessary updates one at a time in order to limit the risk of impact to the users.
 
"Having failover between the systems in case one should start behaving strangely is important because it allows traffic to be quickly moved to another system that has not been impacted," said Craig Mathias, principal with Ashland, Mass.-based Farpoint Group advisory firm.
 
A highly distributed network architecture is much more likely to survive and bounce back quickly from any type of network failure compared to a monolithic system, Mathias noted.
 
While running parallel systems is an option for an enterprise, it may not be an appropriate solution for a provider like Go Daddy, Pironti said.
 
While enterprises can benefit from segmenting two physical facilities -- what is done in most cases of network segmentation -- it's not clear that parallel systems would have solved the Go Daddy outage because the provider has many facilities, he said.
 

Final Go Daddy outage lesson: Don't single-source providers

 
While providers can adopt technologies and procedures to minimize failure impact to their customers, enterprises do not need to sit idly by while the service provider works out networking issues.
 
"[Enterprises] must understand that they are dealing with complex, imperfect systems that will fail and they do need a contingency plan so they don't become a victim," Mathias said.
 
If availability is a critical factor, enterprises should diversify with multiple providers to ensure a secondary environment in the event of a network outage of one provider, noted IP Architects' Pironti.
 
"The user has to take some responsibility," he added, noting that no provider is too big to fail.
 
Failing over to another provider is one way that enterprises can ensure that business will stay up and running, even though they may not have 100% functionality, noted Farpoint's Mathias.
 
An enterprise considering a multi-provider environment can also benefit from using vendors -- like Akamai Technologies -- that offer load balancing and redundancy between providers, Mathias added.
 
"Enterprises need scalable expansion that allows them to grow, without any single point of failure," he said.
 
 

Friday, September 21, 2012

Huawei previews Cisco-killin' E9000 modular system

 
 
Pulling the telecom-to-datacenter California alley-oop
 
Chinese telecom giant and increasingly important server player Huawei Technologies is moving from racks and blades into modular designs that use a mix of both approaches – and look very much like modular kit from Cisco Systems, IBM, and Hitachi, as well as the newer bladish iron from HP and Dell.
 
The likeness between the forthcoming Huawei servers and IBM and Hitachi machines announced back in April is enough to make you wonder if Huawei is actually manufacturing those companies' respective Flex System and Compute Blade 500 machines.
 
Huawei isn't – as far as we know – but as El Reg pointed out when Hitachi announced the CB500 machines, it sure does look like IBM and Hitachi are tag-teaming on manufacturing for modular systems. Possibly by using the same ODM to bend the metal and make the server node enclosures, perhaps?
 
The distinction between a blade and a modular system is a subtle one. With modular systems, server nodes are oriented horizontally in the chassis and are taller than a typical vertical blade is wide, allowing for hotter and taller processors as well as taller memory and peripheral cards than you can typically put in a skinny blade server.
 
The modular nodes can be half-width or full-width in the chassis and offer the same or slightly better compute density as a blade server in a similar-sized rack enclosure, and because of the extra room in the node, can accommodate GPU or x86 coprocessors as well. They are made for peripheral expansion and maximizing airflow around the nodes.
 
Modular systems generally have converged Ethernet networks for server and storage traffic, but also support an InfiniBand alternative to Ethernet for server networks and Fibre Channel for storage networks, just as do blade servers. Modular systems also tend to have integrated systems management that spans multiple compute node enclosures and are geared for virtualized server clouds. It's not a huge difference, when you get right down to it.
 
What is most important about modular systems, in this evolving definition, is that they look like – and compete with – the "California" Unified Computing System machines that Cisco put into the field three years ago when it broke into the server racket.
 
Cisco's business has been nearly doubling for the past two years and is bucking the slowdown big-time in serverland. Cisco is defining the look of the modern blade server and eating market share. Huawei wants to pull the same California maneuver, peddling its own servers to its installed base of networking and telecom gear customers and driving out the server incumbents.
 
Huawei lifted the veil on the Tecal E9000 modular machines at the Huawei Cloud Congress show recently in Shanghai, and says that the boxes won't actually ship until the first quarter of next year – Huawei is clearly not in any kind of a big hurry to get its Cisco-alike boxes out the door.
 
The Tecal E9000 modular system from Huawei
 
The Tecal E9000 is based on a 12U chassis that can support either eight full-width nodes or sixteen half-width nodes. The chassis has 95 per cent efficient power supplies, and a total of six supplies can go into the enclosure with redundant spares, rated at 3,000 watts a pop AC and 2,500 watts a pop DC.
The chassis and server nodes have enough airflow that they can operate at 40°C (104°F) without additional water blocks or other cooling mechanisms on the chassis or the rack. This is the big difference with modular designs, and one that was not possible with traditional blades. Blade enclosures ran hot because they were the wrong shape, and the fact that by simply reorienting the parts you can get the machines to have the same computing capacity in the same form factor just goes to show you that the world still needs engineers.
 
The Tecal E9000 server nodes are all based on Intel's Xeon E5-2600 or E5-4600 processors, which span two or four processor sockets in a single system image, respectively. There are a couple server node variants to give customers flexibility on memory and peripheral expansion. The nodes and the chassis are NEBS Level 3 certified (which means they can be deployed in telco networks) and also meet the European Telecommunications Standards Institute's acoustic noise standards (which means workers won't go deaf working on switching gear).
 
The Tecal CH121 server node
 
The CH121 is a single-width server node with two sockets that can be plugged with any of the Xeon E5-2600 series processors, whether they have four, six, or eight cores per socket. Each socket has a dozen DDR3 memory slots for a maximum capacity of 768GB across the two sockets using fat (and crazy expensive) 32GB memory sticks.
 
 
The node has two 2.5-inch disk bays, which can be jammed with SATA or SAS disk drives or solid state disks if you want lots of local I/O bandwidth but not as much capacity for storage on the nodes. The on-node disk controller supports RAID 0, 1, and 10 data protection on the pair of drives.
The CH121 machine has one full-height-half-length PCI-Express 3.0 x16 expansion card and two PCI-Express 3.0 x16 mezzanine cards that plug the server node into the midplane and then out to either top-of-rack switches through a pass-through module or to integrated switches in the E9000 enclosure.
 
The CH221 takes the same server and makes it a double-wide node, which gives it enough room to add six PCI-Express peripheral slots. That's two x16 slots in full-height, full-length form factors plus four x8 slots with full-height, half-length dimensions.
 
The double-wide Tecal CH221 server node
 
A modified version of this node, called the CH222, uses the extra node's worth of space for disk storage instead of PCI-Express peripherals. The node has room for the same two front-plugged 2.5-inch drives plus another thirteen 2.5-inch bays for SAS or SATA disks or solid state drives if you want to get all flashy. These hang off the two E5-2600 processors, and the node is upgraded with a RAID disk controller that has 512MB of cache memory and supports RAID 0, 1, 10, 5, 50, 6, and 60 protection algorithms across the drives. This unit steps back to one PCI-Express x16 slot and two x16 mezz cards into the backplane.
 
If you want more processing to be aggregated together in an SMP node, then Huawei is happy to sell you the CH240 node, a four-socket box based on the Xeon E5-4600. Like other machines in this class from other vendors, the CH240 has 48 memory slots, and that taps out at 1.5TB of memory using those fat 32GB memory sticks. The CH240 supports all of the different SKUs of Intel's Xeon E5-4600 chips, which includes processors with four, six, or eight cores.
 
The Tecal CH240 four-socketeer
 
The CH240 does not double-up on the system I/O even as it does double-up the processing and memory capacity compared to the CH221. It has the two PCI-Express x16 mezzanine cards to link into the midplane and then out to switches, but no other peripheral expansion beyond that in the base configuration.
 
 
This is a compute engine in and of itself, designed predominantly as a database, email, or server virtualization monster. It supports the same RAID disk controller used in the CH221, but because of all that memory crammed into the server node, there's only enough room for eight 2.5-inch bays for disks or SSDs in the front. If you want to sacrifice some local storage, you can put in a PCI-Express riser card, which lets you put one full-height, 3/4ths length x16 peripheral card into the CH240.
All of the machines are currently certified to run Windows Server 2008 R2, Red Hat Enterprise Linux 6, and SUSE Linux Enterprise Server 11, and presumably will be ready to run the new Windows Server 2012 when they start shipping early next year.
 
VMware's ESXi 5.X hypervisor and Citrix Systems' XenServer 6 hypervisor are certified as well, and, again, presumably Hyper-V 3.0 will get certified on the box at some point and maybe even Red Hat's KVM hypervisor. There is no technical reason to believe that the server nodes can't run any modern release of any of the popular x86 hypervisors, but there's always a question of driver testing and certification.
 
 
The CX series of switch modules for the E9000 enclosure
 
 
On the switch front, Huawei is sticking with three different switch modules, which slide into the back of the E9000 chassis and provide networking to the outside world. The CX110, on the right in the above image, has 32 Gigabit Ethernet ports downstream into the server midplane and out to the PCI-Express mezz cards, which is two per node. The CX110 switch module has a dozen Gigabit and four 10GbE uplinks to talk to aggregation switches in the network.
 
The CX311 switch module takes the networking up another notch, with 32 10GbE downstream ports and sixteen 10GbE uplinks. This switch also has an expansion slot that can have an additional eight 10GbE ports or eight 8Gb/sec Fibre Channel switch ports linking out to storage arrays.
Huawei also has a QDR/FDR InfiniBand switch model with sixteen downstream ports and eighteen upstream ports, which can run at either 40Gb/sec or 56Gb/sec speeds.
 
The current midplane in the E9000 chassis is rated at 5.6Tbit/sec of aggregate switching bandwidth across its four networking switch slots, which can be used to drive Ethernet or InfiniBand traffic (depending on the switch module you choose).
 
Here's the important thing: the Tecal E9000 midplane will have an upgrade option that will allow it to push that enclosure midplane bandwidth up to 14.4Tb/sec, allowing it to push Ethernet at 40 and 100 Gigabit speeds and next-generation InfiniBand EDR, which will run at 100Gb/sec; 16Gb/sec and 32Gb/sec Fibre Channel will also be supported after the midplane is upgraded. It is not clear when this upgraded midplane will debut.
 
Pricing on all of this Tecal E9000 gear has not been set yet, according to Huawei. ®
 
 

Cisco Nexus 3548 and Arista 7150: Duelling ultra-low-latency switches

 
 
Cisco and Arista Networks Inc. traded punches this week, both announcing new, ultra-low-latency top-of-rack switches that not only are fast, but pack many more features and functions than a typical switch in this class.
 
The Arista 7150 is the first switch in the industry to use Intel Corp.'s new Fulcrum Alta FM6000 networking chip, while Cisco's Nexus 3548 uses a new Cisco custom application-specific integrated circuit (ASIC), the Algorithm Boost or Algo Boost chip. At first glance, Cisco has taken a lead in the race to near-zero latency.
 

Arista 7150 and Cisco Nexus 3548: Fast and smart

 
Both top-of-rack switches push the state of the art in low-latency forwarding, a feature that is critical to the competitive high-frequency trading market and also attractive to high-performance computing shops, particularly in genomic research and oil and gas exploration.
 
The Arista 7150 has 350-nanosecond forwarding latency, a 30% improvement on previous generations of Arista switches. The Nexus 3548 ships with 250-nanosecond latency, a significant leap over Arista. The Algo Boost ASIC on the Nexus 3548 can also operate in "warp" mode to push latency down to 190 nanoseconds; it achieves this by reducing the size of the switch's address table from 64,000 to 8,000 hosts.
 
But these ultra-low-latency switches are smart as well as fast. They offer low-latency multicast and unicast routing and in-hardware network address translation (NAT).
 
The Arista 7150 now has "all the features and functions of a [Cisco] Catalyst 6500," according to Arista customer John Koehl, head of infrastructure and operations for Headland Technologies LLC, an algorithmic financial trading firm based in San Francisco and Chicago. As many financial trading firms do, Koehl colocates his switches with financial exchanges across the world so that trades can be made as close to the exchange as possible. In those environments, NAT becomes critical.
 
"If you're connecting to the exchange, [NAT] provides a little bit of security because you can mask what you're coming in as," Koehl said. "Second, you don't have to use up all the exchange IP addresses. You can use your private IP inside and just NAT to what the exchange is providing you."
 
Having features like NAT in an ultra-low-latency switch like the Nexus 3548 and the Arista 7150 means that network engineers don't need to place a firewall or another device inline to perform these functions in ultra-fast switching environments.
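
To make the in-switch NAT function a bit more concrete, here is a minimal sketch of a static source-NAT table of the kind such a switch applies in hardware (the addresses and the dictionary-based model are purely illustrative, not Cisco's or Arista's actual implementation):

# Conceptual sketch of a static source-NAT table. All addresses are made up.
STATIC_NAT = {
    # private trading host -> exchange-assigned address
    "10.0.1.10": "198.51.100.10",
    "10.0.1.11": "198.51.100.11",
}

def translate_outbound(packet):
    """Rewrite the private source address before the packet reaches the exchange."""
    translated = dict(packet)
    translated["src"] = STATIC_NAT.get(packet["src"], packet["src"])
    return translated

pkt = {"src": "10.0.1.10", "dst": "203.0.113.5", "payload": b"NEW ORDER"}
print(translate_outbound(pkt))
# {'src': '198.51.100.10', 'dst': '203.0.113.5', 'payload': b'NEW ORDER'}

The same table, read in reverse, handles return traffic, which is why the private addressing never has to be exposed to the exchange and only a handful of exchange-provided addresses are consumed.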
 
"We try to keep our switch footprint as small as possible so that we don't have a lot of switch hops," Koehl said. "So, the important thing at each data center is to be fast on the network and at the same time have enough switch port capacity and provide all the features and functionality that we need so that we ... can do everything in one box."
 
The Nexus 3548 ships with 48x10 Gigabit Ethernet ports, a 64,000-host address table and 16,000 IP routes. The Arista 7150 ships in three models ranging from 24x10 GbE to 64x10 GbE ports. It offers a 64,000-host table and 84,000 IP routes.
 

Arista 7150: Programmable forwarding plane for SDN and network virtualization flexibility

 
The Fulcrum Alta chip on the Arista 7150 has a programmable forwarding plane. Combined with EOS, Arista's fully programmable operating system, this chip allows the Arista 7150 to be upgraded to support new protocols in hardware without a device refresh. The switch is shipping with silicon support for Virtual Extensible LAN (VXLAN), for instance, but Arista could easily add native hardware support for Microsoft's alternative Network Virtualization using Generic Routing Encapsulation (NVGRE) standard with a simple software update. Arista demonstrated the VXLAN support at VMworld last month, showing how an Arista 7150 can serve as a gateway for attaching non-VXLAN network services to a VXLAN network.
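
For readers unfamiliar with VXLAN, the encapsulation itself is simple: an 8-byte header carrying a 24-bit VXLAN Network Identifier (VNI), wrapped in UDP (the IANA-assigned port is 4789, though some early implementations used other ports). A minimal Python sketch of building that header:

import struct

VXLAN_UDP_PORT = 4789  # IANA-assigned destination port for VXLAN

def vxlan_header(vni):
    """Build the 8-byte VXLAN header: flags (I bit set), reserved bits, 24-bit VNI, reserved byte."""
    if not 0 <= vni < 2**24:
        raise ValueError("VNI must fit in 24 bits")
    flags_and_reserved = 0x08 << 24  # only the 'I' (valid VNI) flag is set
    vni_and_reserved = vni << 8      # VNI sits in the upper 24 bits of the second word
    return struct.pack("!II", flags_and_reserved, vni_and_reserved)

print(vxlan_header(5000).hex())  # '0800000000138800'

The 24-bit VNI is what gives VXLAN room for roughly 16 million logical segments, compared with the 4,096 IDs available to traditional VLANs; NVGRE reaches the same scale with a 24-bit identifier carried in a GRE header rather than a UDP-encapsulated one.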
 
As other protocols emerge in the software-defined networking (SDN) industry, enterprises will be able to migrate to those new technologies without ripping out hardware, according to Martin Hull, senior product manager at Arista. "A fully programmable software stack doesn't necessarily get line-rate performance in hardware. A traditional fixed-logic switch gives performance but doesn't give you flexibility. The solution to all this is a hardware approach that has flexibility in the data plane and a programmable software stack, which is where Arista sits," he said.
 
That means that the Arista 7150 will have the flexibility to change on the fly as applications come out, especially on the forwarding plane with SDN or if network virtualization techniques are being deployed, said Rohit Mehra, director of enterprise communications infrastructure for Framingham, Mass.-based research firm IDC. "Arista will be able to leverage these technologies and make changes without rewriting and reworking the ASICs. Redoing the ASIC can take anywhere from 12 to 24 months," he said.
 

Advanced analytics in an ultra-low-latency switch

 
Both the Arista 7150 and the Nexus 3548 ship with advanced analytical capabilities that can track and analyze latency spikes and buffer utilization. These capabilities allow enterprises to tune their networks to avoid microbursts that can affect ultrafast applications.
 
Arista offers its Latency Analyzer (LANZ), which provides detailed visibility into buffer utilization and captures data held in the buffer when the switch experiences congestion. Arista has also added time-stamping to LANZ so that enterprises can know exactly when a latency microburst occurred. The Algo Boost ASIC on the Nexus 3548 has a similar analytics package that can operate in real time.
 
"We put functionality in the hardware to do fine-grained polling per port on buffer utilization," said Paul Perez, vice president and chief technology officer for Cisco's data center group. "Even down to the granularity of 10 nanoseconds, we can collect buffer utilization and extract that out into a software interface. It can be used in offline mode to do trend analysis, but also in real-time mode to be able to tune your environment."
 
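As a rough idea of what that kind of offline trend analysis could look like, here is a sketch that scans exported per-port buffer samples for microbursts (the sample format and threshold are invented for illustration; this is not the actual LANZ or Algo Boost export format):

# Sketch: find microbursts in per-port buffer-utilization samples.
# Each sample is (timestamp_ns, port, buffer_bytes_used); the format is assumed.

def find_microbursts(samples, threshold_bytes):
    """Yield (port, start_ns, end_ns, peak_bytes) for runs above the threshold."""
    active = {}  # port -> (start_ns, peak_bytes)
    for ts, port, used in samples:
        if used >= threshold_bytes:
            start, peak = active.get(port, (ts, used))
            active[port] = (start, max(peak, used))
        elif port in active:
            start, peak = active.pop(port)
            yield (port, start, ts, peak)

samples = [
    (100, "eth1", 2_000), (110, "eth1", 9_500), (120, "eth1", 11_000),
    (130, "eth1", 3_000), (140, "eth2", 1_000),
]
for burst in find_microbursts(samples, threshold_bytes=8_000):
    print(burst)  # ('eth1', 110, 130, 11000)

With time-stamped samples like these, a latency spike seen by a trading application can be lined up against the exact buffer event that caused it, which is the point of putting the instrumentation in the switch hardware.
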
Nexus 3548: Cisco's retreat from merchant silicon?
Cisco's first generation of ultra-low-latency switches, the Nexus 3000 series, used merchant silicon from Broadcom Corp. Rather than ride merchant silicon to even lower latency, Cisco elected to build the Algo Boost ASIC. Although Cisco has emphasized ASICs as a major differentiator, the company will continue to use merchant silicon where appropriate.
 
Cisco CTO Perez said his company believes in using custom ASICs "when you need it and merchant when you don't." "We have a non-religious technology strategy," he said. "We have 600 silicon designers and more than twice that in software developers to drive our custom capabilities. But we will take advantage of commercial silicon where appropriate."
 
Cisco will use much of the technology in the new Algo Boost ASIC to enhance other silicon in Cisco's product portfolio and not just in switching. "If you look in our server line, Unified Computing, we're differentiating and adding value in a highly competitive environment with custom silicon for expanded memory and also with our custom NIC [network interface card]," Perez said. "I think some synergy between this Algo Boost technology at the switching level coupled to that computing edge at the NIC is a very fertile area of exploration for my team in terms of how we can progress the engineering of the next generation of high-performance computing environments."
 

Wednesday, September 19, 2012

Juniper planning open source alternative to Cisco, VMware SDN controllers

 
 
 
Juniper is working with other industry players on an open source-based controller for software-defined networks (SDN) that would be an alternative to proprietary offerings from VMware and Cisco.
 
 
Juniper hopes to have an open source SDN controller emerge as a de facto standard that's broadly supported by the industry, much as Linux, Apache and Hadoop have become in open source operating systems, Web servers and big data analytics, said Bob Muglia, executive vice president of Juniper's Software Solutions Division. He does not consider current open source SDN controllers, like Floodlight, Nox, Trema and others, to have attained that status yet.
 
 
"We think the likely thing and the best thing for the industry is to have an open source controller emerge that becomes a third standard controller," Muglia said in an interview with Network World this week. "One that is available broadly across companies and supports the broad set of capabilities that are needed. That has yet to emerge. And we are looking closely and working with a number of players in the industry to determine how it is likely to emerge."
 
 
A couple of the other participants Muglia mentioned are IBM and Microsoft. Muglia was a longtime Microsoft executive before moving to Juniper a year ago.
 
 
Establishing that de facto open source controller standard seems to be a priority for the company because VMware and Cisco, with their leadership positions in server virtualisation and data centre switching and major investments in programmable networking, will have significant market positions in SDNs.
 
 
VMware just bought network virtualization startup Nicira for $1.26 billion, and Cisco funded startup and potential spin-in Insieme Networks with $100 million, and may acquire it for $750 million more.

SDN is a significant growth opportunity for Juniper

"We're actually working kind of hard on it," Muglia says of the open source SDN controller effort. "It's sensible for Juniper because we want to be disruptive in this space."
 
 
Juniper recently laid out its SDN strategy, which includes adding OpenFlow support to specific products next year. The company considers SDN, which makes multi-vendor networks programmable through software, a significant growth opportunity for Juniper because it's coming from a small installed base of LAN and data centre switching. Juniper has about a 3 percent share of the $20 billion Ethernet switch market, good for a No. 3 position behind Cisco and HP, but small enough to have "a lot of upside," Muglia says.
 
 
And the data centre and campus networks are where Juniper sees the immediate short-term benefits of SDNs because the infrastructure is geographically concentrated. Programming a network is harder when the elements are geographically dispersed.
 
 
"When a disruptive thing comes in like SDN, there's a significant opportunity to change some of the dynamics of the market share," Muglia says. "So we see this as being very, very positive for us."
Yet, while working on a standard open source controller, Muglia also says Juniper will support whatever VMware does in SDNs due to its market influence and intrinsic technology.

Juniper's SDN controller will not be based on OpenFlow

"They're an important part of the data centre hypervisor and orchestration market and in the cloud space," he says. "We will support what they do."
 
 
Support does not, however, imply fully embracing a technology or direction or strategy. Muglia did characterise the VMware/Nicira controller as proprietary; open source code is renowned for being just that: open and freely accessible.
 
 
Still, Muglia could not say whether Juniper will brand its own SDN controller or source another. One thing is for sure though - while it will support the OpenFlow protocol, it will not be based on it.
 
 
"OpenFlow is one of the protocols for this controller to support," Muglia says. "There are scenarios in the WAN case where OpenFlow has provided a benefit," such as Google's data centre interconnection application. "When we look in the data center domain, and in particular in the switching aspects of the data centre, it is not clear that OpenFlow will be the key protocol. OpenFlow is about controlling the networking services, the Layer 2/Layer 3 services of the switches. And we're not at all clear that there's a lot of control that needs to be done there. We certainly don't believe that control has to be done on a per flow basis.
 
 
"On the other hand, when you deal with the policy that has to be set on a given application, for a given tenant in a cloud, that's interesting. And that needs to get established probably between the virtual switches and the hypervisor. That seems to be the way the industry is going, with Layer 2 overlay services sitting on top of a Layer 3 routed network. There, the protocols that seem to be more relevant to us, that have emerged thus far, are VXLAN and NVGRE. "