Monday, October 29, 2012

SSD storage technology - Finally some new technology

Courtesy - Beth Cohen is a senior cloud architect


Until recently, the massive changes to IT infrastructure brought on by widespread cloud architecture adoption have barely touched the relatively mature, multi-billion dollar storage industry. The old adage that storage is cheap but management of storage is expensive is as valid as it was 10 years ago. That is about to be turned on its head as prices on solid state drive (SSD) technology continue to plummet and the demand for massive amounts of highly scalable cloud storage surges.


The traditional storage market has settled into several distinct streams. On the high end is the EMC approach, which is to build very complex storage systems with multiple layers of redundant hardware plus many configuration options and features. This very expensive approach works for enterprises that really need fast, reliable storage and have the budget to justify the investment. At the low end of the spectrum, there are lots of options for small storage pools -- half a petabyte or so -- which are generally sold through channels. The prices for these systems have remained relatively stable, in the range of $1 million per petabyte.


For companies that want massive amounts (15 petabytes or more) of cheap storage or a horizontally scalable cloud-type architecture, the options have been limited to a few vendors or, for the technologically adventurous, a roll-your-own system. All this is happening despite the fact that there are now only three hard disk manufacturers left (a de facto oligopoly), and most users treat hard drives as interchangeable commodities with excessively high failure rates that require massive amounts of duplication. Any vendor that can demonstrate a system that reduces the need for three data copies to maintain integrity, by improving on the current failure rate of 10-15% a year, would deliver a big win for everyone.


As the cloud storage business grows, traditional vertically scalable storage becomes less and less viable. Several companies are already working on building horizontally scalable storage; Scality and CloudBytes are two that come to mind. There are also Open Source projects, such as OpenStack Swift, that are taking on the large data store architecture problem. Not all the cloud storage vendors are using a pooled block of storage approach: ScaleIO turns all the disks in the servers into a pool of storage. This radically different approach depends on the right use case and could have network bottleneck issues. The newer horizontally scaled cloud architectures, such as Mezeo and OpenStack Swift, have finally opened up the possibility of building attractively priced massive data pools.


The biggest innovation of late is the incorporation of SSD technology. Amazon is already offering an all-SSD cloud storage service, and a number of new storage vendors, such as SolidFire, are building SSD-only storage systems from the ground up. SolidFire isn't the first vendor with an SSD offering; Nimbus Data began selling all-SSD systems more than a year ago. EMC jumped in with an all-SSD Symmetrix this summer, and others are expected to follow shortly. Texas Memory Systems, Avere, Violin Memory and Alacritech all have all-SSD systems designed to speed SAN and NAS performance for internal storage solutions. Most storage vendors now give customers the option of installing some SSD alongside hard drives in their systems. In the long run, this is just an interim step, as SSD prices will continue to fall over the next few years. The future is a move to all-SSD units coming to a data center near you.


It is good to see innovation coming back to the storage industry after years of stagnation. Expect to see more hardware innovation as SSD technology becomes the standard for fast, reliable cloud storage and new systems take advantage of more reliable hard disk hardware and the continued rise in data densities. My crystal ball says that we might see a repeat of the massive changes that occurred in the 1990s, when smaller disks rapidly swept away all the incumbent vendors who were focused on their established large customers. Watch your back, EMC…


Nine 'everything-as-a-service' (XaaS) companies to watch




Everything as a Service (XaaS) describes, well, everything that can be offered as a cloud-based IT service. It's a pretty wide swath of service categories with a long list of products in each. To narrow the field a bit, we asked industry analysts and practitioners to list the companies they are watching in the three XaaS categories that are currently bubbling to the top in terms of enterprise interest: Unified Communications as a Service (UCaaS), Monitoring/Management as a Service (MaaS) and Network as a Service (NaaS).




M5 Networks
XaaS category: UCaaS
Why we’re watching: ShoreTel picked up this start-up (which previously had about 2,000 customers) in February to quickly offer UCaaS to customers looking to deploy IP communications through a hosted model. We’re watching to see how quickly ShoreTel customers are going to jump into the cloud.



Microsoft
XaaS category: UCaaS
Why we are watching: Not exactly a start-up, we have to admit. But we’re watching to see whether the proliferation of Office365 makes Microsoft’s underlying UCaaS engine, Lync, a no-brainer.



Thinking Phone Networks
XaaS category: UCaaS
Why we’re watching: Thinking Phone has been named as a visionary twice by Gartner. We’re watching the company because it has placed a strong focus on providing a deep dive analytics engine that helps enterprises understand how their cloud-based unified communications is operating across the company.



Boundary
XaaS category: MaaS
Why we’re watching: It’s all about real-time monitoring with the Boundary cloud application monitoring offering by the same name. The company deploys agents on an enterprise’s applications running in the cloud that examine every packet they see cross the application. So, the devil is in the details on this one. We’re watching to see if the company can hit its goal of being able to analyze 17 billion records per day by the end of the year, and then serve up useful information to customers on just what their cloud applications are doing.


enStratus Networks
XaaS category: MaaS
Why we’re watching: enStratus is focused on providing a unified management platform for hybrid cloud environments so that enterprise IT departments have a way to orchestrate virtual infrastructure available online with what exists within their own data centers. Seeing as hybrid clouds are a corporate reality, we’re watching to see how quickly the company can amass IaaS partners to give customers more choice in who they can build hybrid clouds with and still have centralized control.



RightScale
XaaS category: MaaS
Why we’re watching: RightScale is sitting on a very interesting perch in the scheme of the cloud: right in between cloud users and their cloud Infrastructure as a Service providers, multiple providers, in fact. RightScale has struck up partnerships with a dozen cloud providers so that subscribers can manage all of their cloud infrastructure from a single, integrated pane.



BigSwitch
XaaS category: NaaS
Why we’re watching: Back in January, BigSwitch released an open-source controller based on OpenFlow, a switching and communications protocol that handles packet routing on a software layer, separate from the physical network infrastructure. We’re watching to see if the open source move will attract application development around the controller before the company releases a commercial version later this year.



Insieme (still in stealth mode)
XaaS category: NaaS
Why we’re watching: This is the software defined network (SDN) company that Cisco invested $100 million into last spring. Started by three Cisco employees, Insieme is still operating in stealth mode. But should Cisco choose to buy the company once its product development is more mature, it will signal that Cisco is serious about embracing SDN, a concept that currently threatens its core networking hardware business.



Nicira
XaaS category: NaaS
Why we’re watching: VMware paid 25 times what this tiny startup had amassed in venture capital funds for the company, which has built a virtual networking engine out of open source software. We’ll be watching to see what VMware does with its new networking division.



Sunday, October 28, 2012

Cloud Success: Focus on Fundamentals

Courtesy - Bill Peldzus

There are more reasons than ever to make a move to cloud computing. Whether it’s savings, cost avoidance, rapid deployment, agile development, or speeding time-to-market, the cloud has the potential to provide more flexibility, scalability, and elasticity for your developers’ toolbox than ever.


Nevertheless, many customers that I’ve visited lately still think that cloud computing will solve all the issues that plague them; they are still seeing the cloud as a panacea for all IT problems. If getting a server takes too long, if a development platform is unstable, if they lose critical data or code updates can’t be restored – the cloud will fix everything, right?


W.C. Fields once said, “The best cure for insomnia is to get a lot of sleep.” Yes, sometimes the obvious is overlooked. If I have a cold, going to a different doctor won’t cure it any faster. I may get a different prescription, may get different advice, but the bottom line is that it’s still a cold and it will still take seven days to go away. The same holds true for IT and infrastructure – if your shop is a mess, moving to the cloud will not cure the fundamental issues.


To use cloud computing successfully, pay attention to the basics: standards, strategic and tactical policies, vetted operating procedures, service catalogues, service level agreements (SLAs), and other fundamentals. A move to the cloud, whether public, private, or hybrid, doesn’t make IT service weaknesses go away; often it just makes matters worse, because you have added another layer of complexity to your IT portfolio. If you can’t successfully back up (and, more importantly, restore) data in your own shop, what makes you think it will work in the cloud? It won’t. What makes the cloud so successful is not that it is easier and highly automated; it is the fundamental policies, processes, tools and testing wrapped around the technology that make the approach successful.


Certainly you should be looking to the cloud for your next generation of IT services. However, during this process, you may first want to take a step back, and get some help putting the fundamentals in order before taking that next big leap. Use the breather as a chance to leapfrog current bottlenecks and align processes to your desired future state. If you don’t embrace these changes in a cloud environment, you will be missing out on all those touted cloud benefits. Think of it as a chance to reassess what was done well and what has failed. By taking this slower approach, you will definitely position your organization for a successful cloud deployment.

Cisco Predicts Sixfold Increase in Cloud Traffic by 2016


"The biggest driver is that the network has become a more cost-effective storage repository. With computing in the old days, the cheapest way to store data was to put it on a local hard drive," said analyst Zeus Kerravala of Cisco's prediction. "When you look at the price performance...it's more cost-effective to leverage the network for content."


Global data center traffic is set to grow fourfold, hitting 6.6 zettabytes a year by 2016. And cloud traffic, the fastest-growing component of data center traffic, will grow 44 percent annually, or sixfold, in the same time frame. So says Cisco's second annual Global Cloud Index.

To put the growth into perspective, 6.6 zettabytes is equivalent to 92 trillion hours of streaming music, 16 trillion hours of business Web conferencing, and 7 trillion hours of online high-definition (HD) video streaming.
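As a quick sanity check on those equivalences (assuming a zettabyte is 10^21 bytes and using Cisco's stated stream-hour figures), the implied per-stream bitrates come out at plausible values:

```python
ZB = 10 ** 21  # bytes in a zettabyte

annual_bytes = 6.6 * ZB  # Cisco's forecast 2016 data center traffic

# Implied bitrate if a year's traffic were all streaming music (92 trillion hours)
music_bps = annual_bytes / 92e12 * 8 / 3600
print(f"music: ~{music_bps / 1e3:.0f} kbps per stream")  # ~159 kbps

# Implied bitrate for the HD video equivalence (7 trillion hours)
video_bps = annual_bytes / 7e12 * 8 / 3600
print(f"HD video: ~{video_bps / 1e6:.1f} Mbps per stream")  # ~2.1 Mbps
```

Both figures are in the right ballpark for compressed audio and 2012-era HD video streams, which suggests the equivalences were derived the same way.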


"This year's forecast confirms that strong growth in data center usage and cloud traffic are global trends, driven by our growing desire to access personal and business content anywhere, on any device," said Doug Merritt, senior vice president of Corporate Marketing at Cisco.


"When you couple this growth with projected increases in connected devices and objects, the next-generation Internet will be an essential component to enabling much greater data center virtualization and a new world of interconnected clouds."


What's Driving the Cloud?


Cisco predicts global cloud traffic will account for nearly two-thirds of total global data center traffic by 2016. Globally, cloud traffic will grow from 39 percent of total data center traffic in 2011 to 64 percent of total data center traffic.


In fact, global cloud traffic will grow faster than overall global data center traffic, as the ongoing transition to cloud services concentrates growth there.


"The biggest driver is that the network has become a more cost-effective storage repository. With computing in the old days, the cheapest way to store data was to put it on a local hard drive," said Zeus Kerravala, principal analyst at ZK Research.


"When you look at the price performance of the network, considering broadband, fiber, FiOS and other networking technologies, from a consumer standpoint it's more cost-effective to leverage the network for content. Instead of storing the content locally, you just go to the network and get it when you want it."


Shifting Workloads


From 2011 to 2016, data center workloads will grow 2.5-fold and cloud workloads will grow 5.3-fold, Cisco predicted. In 2011, 30 percent of workloads were processed in the cloud, with 70 percent being handled in a traditional data center.


Cisco also predicts 2014 will be the first year when the majority of workloads shift to the cloud. By then, 52 percent of all workloads will be processed in the cloud versus 48 percent in the traditional IT space. And by 2016, 62 percent, or nearly two-thirds of total workloads, will be processed in the cloud.


Finally, Cisco predicts the average workload per physical cloud server will grow from 4.2 in 2011 to 8.5 by 2016. In comparison, the average workload per traditional data center physical server will grow from 1.5 in 2011 to 2.0 in 2016.
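Those multipliers are internally consistent with the share figures above; a quick sketch (starting from a hypothetical base of 100 workloads) recovers the roughly two-thirds cloud share:

```python
# Start from a hypothetical 100 workloads in 2011: 30% cloud, 70% traditional
cloud_2011 = 30.0
total_2011 = 100.0

cloud_2016 = cloud_2011 * 5.3  # cloud workloads grow 5.3-fold
total_2016 = total_2011 * 2.5  # all workloads grow 2.5-fold

share = cloud_2016 / total_2016
print(f"implied 2016 cloud share: {share:.0%}")  # ~64%, close to Cisco's 62%
```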

Friday, October 26, 2012

Amazon's Cloud Outage Offers Lessons for the Enterprise


"We recommend spreading your services across separate cloud vendors to provide the best protection from disasters," said technology consultant Damian Bramanis. "Ultimately, staying safe in the cloud is a matter of trust. It's important to trust your cloud vendor at the best of times, but it's critical in the face of major problems."


It wasn't the first time Amazon Web Services saw outages -- and it probably won't be the last. Amazon's cloud service saw outages on the East Coast on Monday. Popular sites like Pinterest, Reddit, FastCompany and Flipboard went dark.


Could this give cloud computing a black eye? Or just Amazon's service? And why do the outages keep occurring? There are still more questions than answers, but analysts are weighing in on what the latest outage, which the hacker group Anonymous is taking credit for, means for the cloud.


Zeus Kerravala, principal analyst at ZK Research, told us the outage may give Amazon's customers cause for concern.


"As long as they've been selling cloud, you would think they would be one of the more stable cloud services," Kerravala said. "If Amazon is going to invest anywhere, they should invest to make sure this doesn't happen again."


Cloud Services Best Practices


Damian Bramanis, director of Advisory Services at Sentinus.com.au, said businesses need to use best practices for cloud services in order to avoid outages like the one Amazon just suffered. That, he said, means not putting all of your eggs in one basket, so cloud disasters don't become your disaster.


As he sees it, many cloud services can cope with failures by spreading the service across different data centers and availability zones. Businesses that followed this best practice model would have seen their operations continue despite the Amazon disruption.


"We recommend spreading your services across separate cloud vendors to provide the best protection from disasters," Bramanis said. "Ultimately, staying safe in the cloud is a matter of trust. It's important to trust your cloud vendor at the best of times, but it's critical in the face of major problems."


Fast Reactions to Outages


Quality of service is one of the top barriers to adopting cloud services for many companies, Sentinus.com.au reports. Bramanis said he's seeing businesses rate quality of service as the second-most important issue for cloud computing. The first is data security.


The good news is, cloud provider outages are typically handled swiftly. Reaction times have been promising, Bramanis said, and vendors are not only working hard to fix the problems, they're keeping customers informed with regular updates.


Kerravala said the flexibility an enterprise gains with the cloud outweighs the loss of control, "assuming that those outages are minimized. That makes the cloud provider you use very important. They are not all created equal."


Top Trends in Managed Services


Among strategies and recent initiatives employed by the Top 50, several trends are clearly present. They include…




Sustainability. Unsurprisingly, contractors are following the broader cultural imperative toward more local sourcing, the use of organics and the reuse of waste through recycling and composting initiatives. This is especially apparent for companies operating in the college segment, where these issues are most resonant, but can also be seen in other segments. One prominent example is Sodexo’s Stop Wasting Food campaign in partnership with Lean Path but in truth, just about every company in the Top 50 has (or claims to have) a sustainability program of one kind or another.



Nutrition/wellness. Nutrition education is a huge issue in K-12 schools, where First Lady Michelle Obama’s campaign against childhood obesity has prompted a spate of moves toward not only healthier meals (now required by statute) but an emphasis on developing healthy lifetime habits. Contractors have responded by developing nutrition education curricula and special event programs to satisfy the demand among client districts for such services. Meanwhile, menu programs designed to meet increasingly stringent federal mandates for reimbursable meals have been developed by companies operating in the K-12 segment, such as Nutrition Group with its Choose Two initiative.



In other segments like B&I and healthcare, contractors are developing wellness programs that mesh with larger institutional objectives designed to promote healthier lifestyles among the employees. For instance, Prince Food Systems offers a point-based system tied to nutritional information about its offerings that allows customers to choose healthier meals that accord with the institution’s wellness goals. These programs can also focus on specific dietary regimens. Vegetarian and even vegan menus are already standard chapters in most contractors’ menu programs, but gluten-free menus are still in the growth stage, a recent example being Metz’s Gluten-Free Zone concept for college clients.



Mobile computing. The rush to get on Facebook and connect with customers through Twitter is on. A number of contractors have introduced mobile computing apps that allow customers to see menus, gather nutritional data and even place and pay for orders through mobile devices. A corollary of this trend is the deployment of mobile ordering kiosks in or adjacent to serveries to facilitate the transaction process.



Value. Hard times call for biting the bullet on check averages by offering value-priced items/combos to drive participation. This is happening not just in the school and workplace but in the entertainment arena as well, an example being the Victory Menu (“fan friendly favorites at fan friendly prices”) initiative from Centerplate designed for sports stadiums. Meanwhile, programs to encourage impulse sales, such as Compass Group’s Cookie initiative, are deployed to at least partially offset the damage to margins and the bottom line wreaked by the value pricing.



Expanding the Client Base. The traditional B&I mantra states that, absent subsidies, there is a radically diminishing rate of profitable return that is tied to site population size. But in these difficult times, contractors are looking at ways to expand that range of profitability through technology. Several contractors with substantial vending programs have deployed unmanned mini-kiosks that offer broader foodservice than the traditional bank of vending machines. Designed for small-population sites with limited foodservice footprint availability, they are one promising avenue for building B&I business. Another approach is the “pop-up" café concept developed by CulinArt for multi-building accounts that foregoes a fixed foodservice real estate allocation in favor of a flexible, mobile approach that can better serve a dispersed customer base.



Interactivity. Chefs by and large are showoffs, and contractors are using their extrovert talents to engage customers, who are already used to the Food Network/celebrity chef paradigm. Culinary demos and cooking exhibitions are now a standard way to draw an audience to the cafeteria who will, hopefully, stay and eat. This has been going on for a while in segments like B&I and college, with their guest chef programs and exhibition stations, but now even K-12 is getting in on the, er, act.



The Roving Chef program from Southwest Foodservice Excellence, Revolutionizing Pomptonian’s Menus from Pomptonian Food Service and Traveling Display Stations from Quest Food Management are just three examples of K-12 specialist contractors taking their culinarians to meet customers and put on a show (while teaching some health and nutrition lessons). Even hospital contractor Luby’s has gotten in on the trend with a program, Chef’s Table, that lets young patients at children’s hospitals into the kitchen to interact with Luby’s chefs to top pizzas, bake cookies and engage in other fun kitchen antics.



Food Trucks. Oh yes, perhaps THE hottest concept in foodservice has certainly come to the attention of contractors, several of which have deployed these trendy meals on wheels outlets in appropriate venues, especially college campuses. Parkhurst, for example, recently launched mobile catering vehicles at two college accounts while CulinArt used food truck style cuisine (especially international street foods) as the central concept in a new menu program for the fixed servery.



Thursday, October 25, 2012

How-to: Get started with Amazon EC2


Amazon cloud skills are in high demand. This easy, step-by-step guide will help start you on your path to cloud mastery.


If your company hasn't ventured an Amazon cloud deployment already, the day may be fast approaching. Amazon's pay-as-you-go cloud is no longer "just" a popular playground for developers, a magnet for technology startups, and the clandestine home of "shadow IT" projects. It's also increasingly a component of official IT operations.


Working with the Amazon EC2 cloud isn't especially difficult, but it is different. This quick guide will get you up and running and on your way to cloud mastery. When your company finally embarks on that Amazon deployment or the next stop in your career requires cloud skills, you'll be ready to answer the call.


Learning your way around Amazon


A first look at the Amazon Web Services dashboard confronts a bewildering array of services. Where to start? The truth is that a few of these resources will do almost everything you need. Others you may use little or not at all. The following services are the ones that will loom largest on your radar.


EC2 (Elastic Compute Cloud). EC2 instances are the servers on which you run your workload. Although you use a Web interface or API call to provision the servers and bring them into your collection, ultimately they are real computers with CPUs, memory, and access to physical storage.


S3 (Simple Storage Service). So-called simple storage, S3 is used for persistent and very cheap storage. S3 integrates with CloudFront, Amazon's content delivery solution. If you have website content such as images and CSS files, these would typically be stored in S3 and fetched by your Web server at delivery time.


EBS (Elastic Block Storage). EBS is essentially a virtualized storage area network or SAN solution that all of your servers can share. Slice out chunks of storage for use by your instances as root or alternate volumes. You can then take snapshots of them to use for backups -- just as you would with Linux's LVM (Logical Volume Manager).


RDS (Relational Database Services). Amazon RDS is Amazon's managed relational database solution based on MySQL, Oracle, or SQL Server under the hood. When you launch a database instance, you choose the database engine you want.


ElastiCache. This is an Amazon-managed Memcached solution. You can add and remove nodes easily, and with CloudWatch monitoring, you can have Amazon replace nodes for you if they fail.


Route 53. Route 53 is an Amazon-hosted DNS solution that allows you to associate names to your provisioned computing resources. Because instances in Amazon change their IP addresses whenever they are stopped and started again, reaching those boxes via names can be much more convenient and easier to support than relying on IP addresses.


VPC (Virtual Private Cloud). VPC is a superb addition to the Amazon portfolio of services, one that may very well benefit your enterprise. VPC essentially allows you to dynamically scale your existing data center using Amazon resources. Connecting the Amazon cloud with your data center via VPN, VPC allows your existing network to route Amazon instances privately, as though they were physical machines in your data center. Get all the benefits of the cloud with none of the security headaches.


There are of course many other Amazon services available, including email sending, message queueing, workflow, search, NoSQL, MapReduce, and alternative authentication solutions. But the above are the main services to understand.


In addition to these core services, you're sure to encounter a number of Amazon vocabulary terms again and again. Before you get started, it will pay to be familiar with the following concepts.


EC2 Instance. An instance is a unit of computing power, with CPUs, memory, and attached storage.


Amazon Machine Images. An Amazon Machine Image (AMI) is essentially a snapshot of a root volume. It may initially be difficult to wrap your head around this idea, but imagine the Linux Logical Volume Manager. Like LVM, an AMI allows you to snapshot your root volume and create a block-by-block copy of everything stored on the disk. That includes the master boot record, the kernel image, and so forth. The hypervisor layer in EC2 allows you to boot from these images on generic commodity servers in the Amazon data centers.


EBS Volumes. Volumes are the network-attached block devices you mount on your server instances; snapshots are point-in-time backups of those volumes. In other words, EBS volumes persist independently of the instances themselves.


Security Groups. Amazon doesn't go with traditional perimeter security unless you're using the Virtual Private Cloud services. That means each server is its own universe, governed by security rules enforced by the hypervisor layer. This is real security, though the new paradigm may take some getting used to. Think of putting servers in groups by role, such as a database tier group, a Web server tier group, and so forth. You might even spin up a t1.micro instance and use it as a jump box. Make this instance the only machine in your environment with SSH access allowed, then grant access to all your servers' port 22 (for SSH) only from this jump box.
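A sketch of that jump-box pattern using the classic EC2 API tools (the group names, CIDR block, and account ID below are placeholders, not values from this walkthrough):

```shell
# Create a group for the jump box and one for the server tier (hypothetical names)
ec2-create-group jump-box -d "SSH gateway"
ec2-create-group web-tier -d "Web servers"

# Allow SSH to the jump box only from your office network (placeholder CIDR)
ec2-authorize jump-box -P tcp -p 22 -s 203.0.113.0/24

# Allow SSH to the web tier only from members of the jump-box group
# (replace 111111111111 with your own AWS account ID)
ec2-authorize web-tier -P tcp -p 22 -o jump-box -u 111111111111
```

The effect is that port 22 on the web tier is reachable only from instances inside the jump-box group, not from the Internet at large.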


Load balancers. A load balancer in AWS becomes another facility that you can configure in a completely virtual way. Here's where you start to see the real power of the AWS environment. You can associate your instances to the load balancer by instance ID even if they are in different availability zones. You can configure the listener and cookie stickiness policies as well.


Availability Zones. Availability Zones are distinct data centers in the Amazon environment, but deployment is nevertheless transparent. All resources can be deployed easily whether on the East Coast, the West Coast, or the other side of the world. Storing mission-critical resources in multiple Availability Zones is your hedge against the inevitable Amazon outage.


Install the Amazon EC2 API Tools

Now that you're familiar with the core offerings and vocabulary, let's try out some of the services. You'll need to create an AWS account before going any further. Note that a free usage tier is available for new users.


First, we'll want to install the API tools. These Java-based tools allow you to issue Amazon commands from any terminal window, whether it be your local laptop, another server, or even an instance hosted in Amazon itself. Bootstrapping indeed!


The first step is to download the tools from Amazon. Next you'll set up a couple of environment variables:


export JAVA_HOME=/usr
export EC2_HOME=/home/sean/api-tools



These are examples of the commands for Linux and Unix. For more detail on these, and for the corresponding commands on Windows, see Amazon's documentation.


Create your access keys

The Amazon dashboard provides an easy way to set up your keys.


1. Go to aws.amazon.com and log in.
2. Under your account name in the upper right, click the menu and select Security Credentials.
3. Click the first link, Access Credentials.
4. Click "Create new access key" and follow the instructions.
5. The last step will involve downloading two .pem files. Save these locally.
6. So that your Amazon tools can locate these .pem files, set these two environment variables:

export EC2_PRIVATE_KEY=/home/sean/keys/pk-A5X4ZTZRLDEMYVHGXCQHU2HW3HALFS3T.pem

export EC2_CERT=/home/sean/keys/cert-A5X4ZTZRLDEMYVHGXCQHU2HW3HALFS3T.pem


Choose an Availability Zone and Region

Availability Zones are distinct data centers. It is incredible that we can distill a data center down to a short identifier such as us-east-1a or us-west-1c, but that is the beauty of cloud computing and Amazon Web Services. As you build more complex applications with more resilient architecture, you'll pay more attention to which Availability Zone you deploy components in. For now, pick the one that's physically closest to your location.

You'll find the menu for selecting your Availability Zone right next to your account name in the upper-right corner of the EC2 dashboard.


Choose an Amazon Machine Image

Next stop on your Amazon tour is to decide which AMI to use. There are nearly 1,000 AMIs to choose from, and you can easily browse or search for what you need.


At this stage I wouldn't spend an inordinate amount of time deciding. Go with an Ubuntu image as a default. Also be sure to pick an EBS root AMI. There are very few use cases for Instance Store now that EBS is mature. I'm personally partial to Eric Hammond's images, which are well maintained, well supported, and well respected in the community.


A note on 32-bit versus 64-bit images: Only micro, small, and medium instances are available in 32-bit. As a general rule, it's best to go with 64-bit for everything unless you have a particular and compelling reason to require 32-bit. With 64-bit, your images will work on all instance types, and you can vertically scale easily.


Spin up your EC2 instance

You have your tools installed, you have your keys, you've picked an AMI and availability zone. Now you're finally ready to create a real Amazon instance. At the command line, enter:


$ ec2-run-instances ami-31814f58 -k my-keypair -t t1.micro -z us-east-1a


Notice I chose a micro instance. Micro instances fall under the AWS free usage tier, so they're a great option for trying out the tools.


Connect to your instance

Now that you have a running instance in EC2, you'll want to connect. Let's find out its name:


$ ec2-describe-instances


RESERVATION r-d1a71cc1 046997127105 default

INSTANCE i-17086273 ami-31814f58 ec2-64-21-210-168.compute-1.amazonaws.com ip-10-44-61-104.ec2.internal running my-keypair 0 t1.micro 2012-06-15T13:11:05+0000 us-east-1a aki-417d2539 monitoring-disabled 64.21.210.168 10.46.63.204 ebs paravirtual xen sg-65f4ec0a default

BLOCKDEVICE /dev/sda1 vol-3f1ac253 2012-06-15T13:11:32.000Z


Once you know the public DNS name or IP address of the box, go ahead and connect (note that Ubuntu AMIs typically use the ubuntu login rather than ec2-user):


$ ssh -i my-keypair ec2-user@64.21.210.168
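If you'd rather not read the address by eye, the public DNS name can be pulled out of the INSTANCE line with a one-liner. A sketch, assuming the tools emit whitespace-separated fields on the INSTANCE line; here a captured sample stands in for a live `ec2-describe-instances` run:

```shell
# The INSTANCE line's 4th whitespace-separated field is the public DNS name.
# The here-document stands in for live `ec2-describe-instances` output.
dns=$(awk '$1 == "INSTANCE" { print $4 }' <<'EOF'
RESERVATION r-d1a71cc1 046997127105 default
INSTANCE i-17086273 ami-31814f58 ec2-64-21-210-168.compute-1.amazonaws.com ip-10-44-61-104.ec2.internal running my-keypair 0 t1.micro
EOF
)
echo "$dns"
```

In a script you would pipe the live command into the same awk filter and feed the result straight to ssh.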


A few routine tasks

Folks familiar with Linux Volume Manager know that you can easily snapshot a disk volume. In Amazon, snapshots are a powerful facility for creating backups, protecting you from instance failure, and even creating new AMIs from your custom server setups. Look at the BLOCKDEVICE line above. You'll see the volume ID. That's all you need:


$ ec2-create-snapshot vol-3f1ac253


A few details to keep in mind: Although you can snapshot a running server, some tools will stop your instance in order to snapshot the root volume. This is for extra protection against corruption of the file system. If you're using a journaling file system such as ext3, ext4, or xfs, snapshotting a running system will leave your volume in a state similar to a crashed server: upon startup, the journal is replayed and incomplete writes are repaired. In the case of a database mount such as MySQL, however, you should issue these additional commands from the MySQL shell:


mysql > flush tables with read lock;

mysql > system xfs_freeze -f /data
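Putting those commands together: the read lock only survives as long as the MySQL session that took it, so the whole sequence should run inside a single session. A sketch, assuming the data lives on an XFS volume mounted at /data and using the volume ID from the output above:

```shell
# Hold FLUSH TABLES WITH READ LOCK for the entire freeze/snapshot window
# by doing everything inside one mysql session. The /data mount point and
# the volume ID are assumptions taken from the examples in this article.
consistent_snapshot() {
    mysql <<'SQL'
FLUSH TABLES WITH READ LOCK;
system xfs_freeze -f /data
system ec2-create-snapshot vol-3f1ac253
system xfs_freeze -u /data
UNLOCK TABLES;
SQL
}
```

The lock and the freeze are released before the session ends, so writes resume as soon as the snapshot has been initiated (EBS snapshots complete in the background).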



For an in-depth explanation of how to do this, see the article "Autoscaling MySQL on Amazon EC2."



When instances are started, Amazon automatically assigns a new IP address to them. Dynamic addresses are fine for playing around, but you'll undoubtedly want static, global IP addresses for some machines eventually. That's where elastic IP addresses enter the picture; you allocate them to your AWS account (with ec2-allocate-address) and can then bind one to your new instance using a simple command-line call:


$ ec2-associate-address 10.20.30.40 -i i-17086273


You're all set.


Now that you've had a taste of Amazon, you'll want to explore more. With the command-line tools installed and your security keys set up, you have everything you need to go further -- and get comfortable with different instance types, various AMIs, the Availability Zones your instances and volumes are stored in, how load balancers work, and beyond. The further you go, the more you'll appreciate that Amazon's documentation is as copious as its services.




Tuesday, October 23, 2012

VMware Interview Questions - Part 3


Click here for Part 1

Click here for Part 2



What is the maximum Hosts in a Linked Mode environment?

• 1000

What is the maximum Hosts per datacenter?

• 100

What is the maximum Hosts per vCenter Server if the vCenter Server is running on a 64-Bit OS ?
• 300

What is the maximum Hosts per vCenter Server if the vCenter Server is running on a 32-Bit OS ?

• 200

What is the maximum Linked vCenter Server systems?

• 10


What is the Maximum Floppy controllers per virtual machine?

• 1

What is the Maximum VMDirectPath SCSI targets per virtual machine?

• 60

What is the Maximum Virtual SCSI adapters per virtual machine?

• 4

What is the Maximum Virtual SCSI targets per virtual SCSI adapter?

• 15

What is the Maximum IDE controllers per virtual machine?

• 1

What is the Maximum Parallel ports per virtual machine?

• 3

What is the Maximum Floppy devices per virtual machine?

• 2

What is the Maximum Virtual SCSI targets per virtual machine?

• 60

What is the Maximum VMDirectPath PCI/PCIe devices per virtual machine?

• 2

What is the Maximum RAM per virtual machine?

• 255GB

What is the Maximum Virtual machine swap file size?

• 255GB

What is the Maximum IDE devices per virtual machine?

• 4

What is the Maximum Virtual CPUs per virtual machine (Virtual SMP) ?

• 8

What is the Maximum Concurrent remote console connections to a virtual machine?

• 40

What is the Maximum Virtual NICs per virtual machine?

• 10

What is the Maximum Serial ports per virtual machine?

• 4

What is the Maximum virtual machine Disk Size?

• 2TB minus 512B

Monday, October 22, 2012

Juniper Networks unveils edge services engine enabling service

Juniper Networks, a provider of network infrastructure, has unveiled a virtualized platform, an edge services engine, enabling rapid deployment of multivendor applications.
Combined, the new Juniper Networks MX2020 3D Universal Edge Router and new JunosV App Engine transform the network edge into a platform for rapid service deployment, speeding time to revenue by up to 69% compared to other solutions. This helps service providers reverse declining revenue models by removing the complexity and time associated with service deployment, the company said.


"Our vision for the new network is to provide innovations that help transform service providers into super providers," said Rami Rahim, senior vice president, Edge and Aggregation Business Unit, Juniper Networks.


"Thousands of MX customers have validated the need for massive scaling and reduced complexity driven by emerging video, applications, and cloud-based services. We see our new edge services engine as the first solution that truly empowers service providers to take control of their business by removing barriers currently hindering innovation and revenue expansion."


This solution offers new levels of scale for service providers by delivering a gateway for everything the network has to offer: one system, innovative software services, with unlimited revenue possibilities at unconstrained scale.


Mike Wright, executive director of Networks and Access Technologies at Telstra, said: "With data on Telstra's fixed network doubling approximately every 17 months we need a network that is scalable, resilient and integrated. The Juniper Edge Services Router assists Telstra to manage content delivery, is highly scalable and integrates our networks. Using this equipment in our network assists us to manage the ever increasing volumes of traffic on the Telstra network."


Friday, October 19, 2012

Cisco Showcases Growth in Cloud Collaboration and Announces New Collaboration Capabilities and Services for Enterprises, Service Providers and Partners

 
At its Collaboration Summit Cisco unveiled enhancements to its collaboration portfolio by: (1) bringing into its Hosted Collaboration Service (HCS) both TelePresence bridging and many of the capabilities of Cisco Unified Communications Release 9.0; and (2) extending Cisco WebEx to the Private Cloud. The offering of feature parity between premises and cloud gives customers more flexibility in their UCC consumption choices. And partners will be provided with additional “as-a-Service” revenue opportunities via the HCS enhancements to grow their business, as well as an expanded addressable market through the new way to deploy WebEx for those customers who prefer an on-premises solution.

Expanding Cisco HCS

New HCS technologies are targeted for global availability in Q4 CY 2012.

Extending TelePresence Capabilities in HCS

The HCS solution with TelePresence Exchange (CTX) integrates a shared multitenant media solution for video to enable a “static bridge” service option, also known as “rendezvous conferencing,” for secure B2B services for video and telepresence. The service creation platform offers simplified management and coordination of media resources, configurable service levels, and open application programming interface (API) for further service customization. This solution enables partners to extend their hosted UCC service portfolios to include telepresence as a service. Through the integration of CTX with HCS, customers have the ability to connect multiple locations, including external vendors and suppliers, at the same time. Via the hosted model, customers simply buy or lease the endpoints and the partner provides the backend technology.

Bringing Cisco Unified Communications Release 9.0 Capabilities into HCS

Cisco is extending many of the latest Cisco UC capabilities announced earlier this summer to the cloud via HCS.
 
Contact Center integration: Through HCS enhancements for Customer Collaboration, partners can easily manage multiple contact centers for customers. Additionally, HCS partners can arm customers with a highly customizable Web 2.0 collaboration desktop that puts relevant information that call center agents need in a single, modifiable cockpit. Agents can use this information to assist callers faster, better, and with greater accuracy.
 
Extend and Connect: Controlled by Jabber and enabled by Cisco Unified Communications Manager (CUCM) 9.0, Extend and Connect brings third-party phones into the Cisco UC environment. This feature offers those enterprise customers migrating to CUCM 9.0 an opportunity to leverage their existing investments further without compromising the ability of their employees to benefit from new UCC capabilities. In addition, telecommuters and business travelers can initiate a Jabber session on their PC device, input the phone number of their preferred voice device, and have CUCM route all voice traffic directly to that phone number while all the call control remains anchored in the Jabber client. In this way a knowledge worker can initiate a conference call from home, for example, with office features and billing associated with his/her office phone.
 
Extend and Connect operates on signaling and call control data, not VoIP, so it requires no Quality of Service (QoS) and is bandwidth stingy. That means users will be able to leverage the Cisco UC environment despite slow or unreliable connections, including those sometimes found in cafes, hotels, and home offices. Jabber is a software client and includes IM/P, voice, voice messaging, video, desktop sharing and conferencing.
 
Connecting the dots between enterprise and mobile networks: With IP Multimedia Subsystem (IMS) integration capabilities, Cisco HCS partners can offer Fixed Mobile Convergence (FMC) capabilities to their customers. This helps mobile service providers connect enterprise and mobile networks, offering seamless services between the two. This offering also enables operators to provide voice over 4G data networks.

Cisco WebEx Meetings Server

Cisco WebEx Meetings Server, targeted for global availability in Q4 CY 2012, is an on-premise version of WebEx that gives customers the capability to run WebEx out of their own private cloud datacenter. It’s a full collaboration solution including WebEx clients for PC, Mac, iPhone and iPad; high quality video; desktop sharing, annotation, and collaboration tools; recording and playback. Integration with Cisco’s UC suite that extends IP telephony to conferencing, and provides escalation from a Jabber IM session to a full WebEx meeting directly from the Jabber client is targeted for availability January, 2013. As subsequent releases come out, other mobile devices will be added and movement to HD video will occur.

What This Means to You

To Customers: Collaboration Summit places a lot of emphasis on giving customers choice in the way that they deploy collaboration solutions – be that private cloud, public cloud or hybrid.
For example, those customers who purchased CUCM 9.0 as a premises-based solution can now actually have those services deployed in the cloud as well via HCS.
 
Then, there are customers who are very sensitive to data privacy and are just unwilling to trust cloud-based solutions. In addition, there are some markets where the regulations are such that cloud-based WebEx is not a viable solution, or parts of the world where a cloud-based solution is infeasible. And there are still customers who want to have the choice between a CapEx model and an OpEx model. These are customers who are willing to say, "look, I'll pay the CapEx upfront and run it myself. I think I can save more money that way than going to the OpEx model." The WebEx Meetings Server announcement serves these types of markets.
 
To Partners: Cisco has been very focused on putting partners at the center of its cloud strategy. Last month Cisco announced the Master Cloud Builder Specialization that offers elite branding, deeper engagement with Cisco sales teams and financial incentives. At Collaboration Summit Cisco will be discussing with partners what it has done to make WebEx resale viable and the new WebEx Advanced Technology Partner (ATP) specialization that will be available for them as well. The WebEx ATP specialization is for partners who are actually incorporating WebEx into their own offerings.
 
The total WebEx addressable market will likely increase because partners can now get to those customers who historically were not willing to embrace a cloud-based web conferencing solution.
Through the integration of CTX into HCS, partners can more efficiently manage the infrastructure necessary to enable TelePresence and videoconferencing, while also leveraging that infrastructure across their entire customer base. This enables partners to avoid making separate infrastructure investments for each customer. In addition, this solution enables partners to extend their hosted UCC service portfolios to include telepresence as a service.
 
Cisco is also expanding HCS by bringing in many of the capabilities of CUCM 9.0, including the video and customer collaboration features. This opens new ways for partners to create recurring revenue streams because it offers multiple insertion points. Partners will now be able to approach customers by helping them initiate their contact center and then upsell them with upgrades to their UCC infrastructure. The fact that Cisco can now offer feature parity between premises and cloud, so customers don't have to make a choice when they pick a deployment model, is a big attraction point for partners.

 

VMware Interview Questions - Part 2

Click Here for Part 1


1. Explain the physical topology of Virtual Infrastructure 3 Data Centre?

A typical VMware Infrastructure data center consists of basic physical building blocks such as x86 compute servers, storage networks and arrays, IP networks, a management server, and desktop clients.

2. How do you configure Clusters, Hosts, and Resource Pools in VI3?

A cluster is a group of servers working together closely as a single unit to provide high availability, load balancing, and high performance. A host is a single x86 server with its own compute and memory resources. Resource pools divide the available resources into pieces for proper distribution.

3. What are resource pools & what’s the advantage of implementing them?

A VMware ESX resource pool is a pool of CPU and memory resources. Inside the pool, resources are allocated based on the CPU and memory shares that are defined. The pool can have associated access control and permissions, giving clear management of resources for the virtual machines.

4. Explain why VMware ESX Server is preferred over Virtual Server or Workstation for enterprise implementation?

For better resource management: ESX has a virtualization layer in its kernel that communicates with the hardware directly, rather than running on top of a host OS.

5. In what different scenarios or methods can you manage a VI3 ?

Using the Virtual Infrastructure Client we can manage one ESX server; using Virtual Center we can manage more than one ESX server; and we can also use the service console to manage a host.
http://searchvmware.techtarget.com/tip/0,289483,sid179_gci1280576_mem1,00.html

6. Explain the difference between access through Virtual Infrastructure Client (vi client), Web access, Service Console access(ssh) ?

Using the VI Client we can access the ESX server as well as the Virtual Center Server, with either Unix-style or Windows authentication. To access the service console, Unix-style authentication is preferred; AD authentication is possible via esxcfg-auth, but not all functions are supported that way, and full functionality is available only with the root account (the service console is based on a Red Hat Linux kernel). Using web access we can also manage Virtual Center as well as a single host, but not all enterprise features are supported.

Console access to the Service Console

The disadvantages to this mode are

you must be at the console (or connect using an IP KVM) and
you must know Linux to accomplish your task (no GUI).

SSH to the Service Console

You can SSH to the console prompt of an ESX server and receive the same Linux text console access as I showed above. Telnet is not allowed. To use this method, the ESX server must be working on the network and you must have an SSH client on your PC to connect. Again, in this mode, you don't get a GUI interface.

VMware Virtual Infrastructure (VI) Web Access to the ESX Server

This is the VMware VI Web Access interface. The benefit to using this is that you get a GUI client for your ESX server without having to install a client on your local machine. The downside to the web interface is that you can only perform basic ESX functions like controlling existing machines (start/stop/pause) and console remote access. You cannot add new VMs, work with VM storage, or VM networks. Still, this is a great interface if you just need to check the status of your ESX VMs, restart a VM, or use console remote control.

VMware Virtual Infrastructure Client (VI Client) to the Server

The benefits to the VI client are that you have full access to do whatever is needed on the ESX Server and you get a GUI client to do it in. The only downside is that you must install the VI client application to do this. However, the installation is negligible and the VI client is the absolute best way to administer your ESX Server.

VMware Virtual Infrastructure Client (VI Client) to the Virtual Center Server (VC Server)

From this VI VC interface, you can manage all ESX servers, VM storage, VM networks, and more. Virtual Center, of course, is an optional product that requires additional licenses and hardware.

7. Explain advantages or features of VMware Virtual Machine File System (VMFS) ?

It’s a clustered file system, excellent support for sharing between ESX servers in a cluster.

Features

Allows access by multiple ESX Servers at the same time by implementing per-file locking. SCSI Reservations are only implemented when LUN meta data is updated (e.g. file name change, file size change, etc.)

Add or delete an ESX Server from a VMware VMFS volume without disrupting other ESX Server hosts.
The LVM layer allows adaptive block sizing and addressing for growing files, and lets you increase a VMFS volume on the fly (by spanning multiple VMFS extents).
With ESX/ESXi4 VMFS volumes also can be expanded using LUN expansion
Optimize your virtual machine I/O with adjustable volume, disk, file and block sizes.
Recover virtual machines faster and more reliably in the event of server failure with Distributed journaling.

Limitations

Can be shared with up to 32 ESX Servers.
Can support LUNs with max size of 2TB and a max VMFS size of 64 TB as of version 4 (vSphere).
"There is a VMFS-3 limitation where each tree of linked clones can only be run on 8 ESX servers. For instance, if there is a tree of disks off the same base disk with 40 leaf nodes in the tree, all 40 leaf nodes can be simultaneously run but they can only run on up to 8 ESX hosts."
VMFS-3 limits files to 262,144 (2^18) blocks, which translates to 256 GB for 1 MB block sizes (the default) up to 2 TB for 8 MB block sizes.
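The arithmetic behind those figures is easy to verify with plain shell arithmetic (nothing VMFS-specific here):

```shell
# VMFS-3 max file size = block count (2^18) x block size.
blocks=$((1 << 18))      # 262144 blocks
mb_1=$((blocks * 1))     # 1 MB blocks -> 262144 MB = 256 GB
mb_8=$((blocks * 8))     # 8 MB blocks -> 2097152 MB = 2 TB
echo "${mb_1} MB (1 MB blocks), ${mb_8} MB (8 MB blocks)"
```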

8. What are the types of data stores supported in ESX3.5 ?

iSCSI datastores, FC SAN datastores, Local VMFS, NAS and NFS

9. How can you configure these different types of datastores on ESX3.5 ?

If FC cards are installed on the ESX servers, go to the storage configuration and rescan for LUNs; the other datastore types are added from the same Storage section after enabling the relevant initiator or supplying the NFS server and export path.

10. What is VMware Consolidated Backup (VCB) ? Explain your work exposure in this area ?

VMware Consolidated Backup is a backup framework that enables third-party tools to take backups. VCB is used to help you back up your VMware ESX virtual servers. Essentially, VCB is a "backup proxy server". It is not backup software; if you use VCB, you still need backup software. It is commonly installed on its own dedicated Windows physical server.

Here are the benefits of VMware's VCB:

 •Centralize backups of VMware ESX Virtual Servers
 •Provide file-level backups of VMware ESX Virtual Servers - both full and incremental (file-level backup is available only for Windows guests)
 •Provide image-level backups
 •Prevent you from having to load a backup agent on every Virtual Machine
 •Prevent you from having to shutdown Virtual Machines to get a backup
 •Provides LAN-Free backup because the VCB server is connected to the SAN through your fibre channel adaptor
 •Provides centralized storage of Virtual Server backups on the VCB server, that is then moved to your backup tapes through the 3rd party backup agent you install
 •Reduces the load on the VMware ESX servers by not having to load a 3rd party backup agent on either the VMware ESX service console or on each virtual machine.
 •Utilizes VMware Snapshots

Basically, here is how VCB works:

 •If you are doing a file-level backup, VCB does a snapshot of the VM, mounts the snapshot, and allows you to back up that mounted "drive" through VCB to your 3rd party backup software
 •If you are doing an image-level backup of the VM, VCB does a snapshot of the VM, copies the snapshot to the VCB server, removes the snapshot from the VM, and allows you to back up the copied snapshot image with your 3rd party backup software.


11. How do you configure VMware Virtual Centre Management Server for HA & DRS ? What are the conditions to be satisfied for this setup?

HA & DRS are properties of a cluster. A cluster can be created only when more than one host is added; we then configure HA & DRS to provide high availability and load balancing between hosts and for the virtual machines.

12.Explain your work related to below terms :
VM Provisioning: virtual machine creation.
Alarms & Event Management: alarms are used to track the resource usage of a VM; events are used to monitor the tasks that take place on the ESX servers or in Virtual Center.
Task Scheduler: used to schedule a task, for example moving a VM from one host to another, or shutting down/rebooting a VM.
Hardware Compatibility List: lists the hardware that is compatible with the ESX OS.

13.What SAN or NAS boxes have you configured VMware with ? How did you do that ?

The storage team will provide the LUN information; with that, we add those LUNs to the ESX hosts as VM storage.

14.What kind of applications or setups you have on you Virtual Machines ?

Exchange Server and SharePoint (for demo purposes), Citrix Presentation Server, etc.

15. Have you ever faced ESX server crashing and Virtual Centre Server crash? How do you know the cause of these crashes in these cases ?

16. Will HA work if Virtual Center Server is down ?

A1) HA continues to work if VC is down; the agents are initially configured by Virtual Center, but HA operations are controlled by local agents on the ESX hosts. VC does NOT monitor the ESX servers for HA; the ESX servers monitor each other.
DRS does not work while VC is down.
A2) For DRS, the configuration and logic are completely in VC.
For HA, only the configuration is in VC. The logic is in the service consoles, and that's where the reaction comes from. VC will notice the HA reaction afterwards, the next time it connects to the service consoles.

17. What are the situations which triggers vMotion automatically?

Resource contention between virtual machines (DRS)
Distributed Power Management (DPM)

18. What is DRS/HA/DPM/dvSwitch/FT/vApps/vSafe/vShields ? :-)

DRS : Distributed Resource Scheduling
HA : High Availability
DPM : Distributed Power Management
dvSwitch : Distribute vSwitch – It’s a new feature introduced in vSphere4.0
FT : Fault Tolerance for Virtual Machines – it’s a new feature introduced in vSphere4.0
vApps : vApp is a container same as resource pool, but it is having some features of virtual machines, a vApp can be powered on or powered off, and it can be cloned too.
http://communities.vmware.com/message/1308457#1308457

vmSafe : VMsafe's application programming interfaces are designed to help third-party vendors create virtualization security products that better secure VMware ESX, vShield Zones is a security tool targets the VMware administrator.

vShield : VShield Zones is essentially a virtual firewall designed to protect VMs and analyze virtual network traffic. This three-part series describes vShield Zones, explains how to install it and provides useful management tips. To begin, let's get started with the basics: what vShield Zones is and how it works.

http://searchvmware.techtarget.com/tip/0,289483,sid179_gci1363051_mem1,00.html

19. What are the requirement for FT ?
http://communities.vmware.com/thread/209955

20. What are the differences between ESX and ESXi ?

ESX is a full-featured hypervisor with a Linux-based service console; ESXi drops the service console and ships as a small (roughly 32 MB) image with a limited local management interface.

21. Which are the new features introduced in vSphere 4 ? *****

1. 64-bit hypervisor - Although not everyone realized it, the hypervisor in ESX Server 3.5 was 32-bit. As a result, ESX Server 3.5 couldn't take full advantage of today's more powerful 64-bit hardware platforms. ESX Server 4.0 uses a native 64-bit hypervisor that provides significant performance and scalability enhancements over the previous versions. However, the new hypervisor does require a 64-bit hardware platform.

2. Increased VM scalability - ESX Server 4.0's new 64-bit architecture provides significant increases in scalability. ESX Server 4.0 supports virtual machines (VMs) with up to 255GB of RAM per VM. In addition, the vSphere 4.0 Enterprise Plus edition provides support for up to 8-way virtual SMP per VM. The other editions support up to 4-way virtual SMP. These gains are available on both Windows and Linux guests.

3. Hot add CPU, RAM, and virtual disks - This important enhancement in vSphere 4.0 is designed to create a dynamic IT infrastructure through the ability to add CPU, RAM, and virtual disks to a running VM. The hot add capability lets you dynamically increase your VMs' performance during periods of high resource demands.

4. Thin provisioning - This feature is nothing new to Microsoft virtualization users; vSphere now offers a thin-provisioning feature that's essentially the equivalent of Hyper-V's dynamic disks. Thin provisioning lets you create and provision a virtual disk (VMDK), but the host uses only the amount of storage that's actually required by the VM rather than the disk's full allocated size.

5. VMware Fault Tolerance - Fault Tolerance is a new high-availability feature in vSphere 4.0. Fault Tolerance works only between two systems. It uses a technology called vLockstep to provide protection from system failure with absolutely no downtime. VMware's vLockstep technology keeps the RAM and the virtual processors of two VMs in sync at the instruction level.

6. vNetwork Distributed Switch—vSphere 4.0's vNetwork Distributed Switch lets you create and share network configurations between multiple servers. The vNetwork Distributed Switch spans multiple ESX Server hosts, letting you configure and manage virtual networks at the cluster level. It also lets you move network configuration and state with a VM when the VM is live migrated between ESX Server hosts.

7. IPv6 support - Another enhancement in vSphere 4.0 is support for IPv6. Many organizations are planning to move to IPv6. vSphere's IPv6 support lets customers manage vCenter Server and ESX Server hosts in mixed IPv4/IPv6 network environments.

8. vApps—vApps essentially lets you manage as a single entity multiple servers that comprise an n-tiered application. Using vApps, you can combine multiple VMs, their interdependencies, and their resource allocations together as a unit. You can manage all the components of the vApps as a single unit, letting you power off, clone, and deploy all the vApps components in the same operations.

9. vSphere Host Update Utility—The new vSphere Host Update Utility lets you centrally update your ESXi and ESX Server 3.0 and later hosts to ESX Server 4.0. The UI displays the status of the remote updates in real time.

10. VMware vShield Zones—VMware's new vShield Zones let customers enforce network access protection between VMs running in the virtual data center. The vShield Zones feature lets you isolate, bridge, and firewall traffic across vCenter deployments.

22. Which are the traffic shaping options available to configure?

Average bandwidth, peak bandwidth, and burst size, set per port group on the vSwitch.

23. What is promiscuous mode ?

If promiscuous mode is enabled for a switch, traffic passing through that switch is visible to the VMs connected to it, not just the frames addressed to each VM; in effect the traffic is broadcast.

24. What makes iSCSI and FC different ?
Both carry SCSI block-level data; the difference is the transport. FC carries the traffic over a dedicated Fibre Channel fabric with its own addressing scheme, while iSCSI encapsulates SCSI in TCP/IP. The cabling also differs: FC uses fibre cable and iSCSI typically uses standard Ethernet (RJ45).
25. What is the format for iSCSI addressing ?

Targets are named with an IQN (iSCSI Qualified Name), e.g. iqn.1998-01.com.vmware:esx-host1, and reached over an IP address (default port 3260).

26. VM's Task Manager shows performance normal, But vCenter reports high resource utilization, what is the reason ?
Search KEY WORDS : VM's performance normal,  vCenter reports high resource utilization
http://communities.vmware.com/message/897975

27. What are the different types of memory management tricks available under ESX ?

http://en.wordpress.com/tag/esx-memory-management/

http://www.cs.northwestern.edu/~fabianb/classes/cs-443-s05/ESX.pps

28. What is vmmemctl ?

http://pubs.vmware.com/vi3/resmgmt/wwhelp/wwhimpl/common/html/wwhelp.htm?context=resmgmt&file=vc_advanced_mgmt.11.24.html

29. How we can list pNICs & status using command line ?

esxcfg-nics -l (ifconfig -a on the service console shows interfaces, not the physical NICs)

30. What is resource pool ? What are the use of it ?
A resource pool is a logical abstraction for flexible management of resources. Resource pools can be grouped into hierarchies and used to hierarchically partition available CPU and memory resources.

31. How HA works.
VMware HA provides high availability for virtual machines by pooling them and the hosts they reside on into a cluster. Hosts in the cluster are monitored and in the event of a failure, the virtual machines on a failed host are restarted on alternate hosts.

32. Is HA dependent on virtual center
(Only for Install)
33. What is the Maximum Host Failure allowed in a cluster
(4)
34. How does HA know to restart a VM from a dropped Host
(storage lock will be removed from the metadata)

35.How many iSCSI targets will ESX support
8 for 3.01, (64 for 3.5)
36 How Many Fiber Channel targets
(256) (128 on Install)

37 What is Vmotion

(ability to move running vm from one host to another)

38 What is virtual SMP –
when and why should you give a VM multiple vCPUs - part of their answer should be that best practice is to start with a single vCPU, because you can run into performance issues due to CPU scheduling

39 Ask what version of Linux kernel does ESX run

if they are truly experienced they should say ESX is not Linux and does not use a Linux kernel - and give them an extra point if they explain that the service console runs a modified version of Red Hat Enterprise Linux 3 -

40. Does HA use VMotion ?

The answer is no; the VM stops and restarts on another ESX host.

41. What is the difference between using the VI Client to connect to VirtualCenter versus directly to the ESX server itself ?

When you connect to VirtualCenter, you manage the ESX server via vpxa (the agent on the ESX server). Vpxa then passes those requests to hostd (the management service on the ESX server). When you connect to the ESX server directly, you connect to hostd, bypassing vpxa. You can extend this into a troubleshooting case: when connecting to the ESX host shows one thing and connecting to VirtualCenter shows another, the problem is most likely that hostd and vpxa are out of sync, and "service vmware-vpxa restart" should take care of it.
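A sketch of that troubleshooting idea: compare the VM inventory reported through VirtualCenter with the one reported by the host directly. Both lists here are hypothetical captures; on a real system you would pull them from the VI Client and from "vmware-cmd -l" on the host before deciding to restart vpxa:

```shell
# Hypothetical VM inventories: what vCenter (via vpxa) reports versus
# what hostd on the ESX host reports. Any entry appearing in only one
# list means the two agents are out of sync.
vc_view='vm01
vm02'
host_view='vm01
vm02
vm03'

echo "$vc_view"   | sort > /tmp/vc_list.txt
echo "$host_view" | sort > /tmp/host_list.txt

# comm -3 prints only the lines unique to one of the two files.
drift=$(comm -3 /tmp/vc_list.txt /tmp/host_list.txt | tr -d '\t')
echo "out-of-sync VMs: $drift"
```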

42. What was the most difficult VMWare related problem/issue you faced in a production environment and what were the specific steps you took to resolve it?

HA issues – because of DNS problems, the hosts were unable to communicate with each other. Corrected by adding every server's IP to each server's /etc/hosts file.
A VM would not power on – the swap file was locked by another host, so the power-on attempt failed. After releasing the lock, it powered on.

43. When was the last time you called VM Support and what was the issue?

Licensing related issues.

44. What was the most performance-intensive production app that you supported in VMware, and what were some of the challenges that it posed?

On an Exchange/SharePoint demo project, we ran into a lot of VLAN issues. (That was my experience; you can describe yours.)

45. How would you determine that a performance-intensive app is a good candidate ? Specifically, what tools would you use to identify candidates, and within those tools, which metrics would you use ?

46. What is your philosophy on how much of the data center can be virtualized ? (If the interviewer wants maximum virtualization but the interviewee is not convinced that this is a good idea, this could be a deal breaker.)

47. What is your opinion on the virtualization vendors (MS vs. VMware vs. Citrix, etc.) and why ? (Just trying to figure out if the candidate is keeping up with this ever-changing virtualization market.)

48. I believe another good question would be to ask the candidate to briefly describe VST, VGT & EST modes and 802.1Q trunking. Networking is such an important part of VMware implementations and ongoing support that you don't really want a VMware engineer working in your environment if they lack knowledge of these concepts (unless, of course, they are only delegated low-level permissions for generic VM operations).

More information on these modes can be found here: www.vmware.com/pdf/esx3_vlan_wp.pdf
Also ask the candidate to explain why one mode would be used as opposed to another; remember that there can be numerous reasons for choosing different modes depending on your company's or client's network, security policies, etc.

49. If you are interviewing for a consultant role, it would also be a good scenario to provide a brief overview of a fictional network and ask the candidate to do a whiteboard draft of how the network would be laid out if, say, the ESX servers have 6 NICs or 8 NICs, etc.

50. What are notable files that represent a VM?

.vmx – configuration settings for VM
.vmxf – configuration settings used to support an XML-based VM configuration API
.vmtx – configuration settings for a Template VM (replaces the .vmx file)
.vmdk – virtual disk file. (Note: if a thick disk is used, a -flat.vmdk file that represents the actual monolithic disk file will exist but will be hidden from the vSphere Client.)
.nvram – non-volatile memory (BIOS)
.vswp – swap file used by ESX/ESXi per VM to overcommit memory, i.e. to use more memory than is physically available. It is created automatically by the host when a VM powers on and deleted (by default) when the VM powers off. Swap files can linger and take up space if a host fails before a VM is shut down properly. Normally the swap file is stored in the same location as the VM's configuration files, but it can optionally be placed elsewhere, for example on local storage for performance reasons; if the VM lives on NAS/NFS, local swap should be used.
.vmss – suspend file (if placed into suspend power mode)
.vmsd – for snapshot management
.vmsn – snapshot file
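A quick way to internalize the list above is to look at a VM's datastore directory. The sketch below fakes one in a temp directory for a hypothetical VM named web01 (a real one would live under /vmfs/volumes/<datastore>/web01):

```shell
# Stand-in for a VM's datastore directory; the file names follow the
# naming convention described above for a VM called "web01".
vmdir=$(mktemp -d)
touch "$vmdir/web01.vmx"  "$vmdir/web01.vmdk" "$vmdir/web01-flat.vmdk" \
      "$vmdir/web01.nvram" "$vmdir/web01.vswp"

# The -flat.vmdk holds the actual disk blocks; the .vmdk is its
# small text descriptor. The .vswp only exists while the VM runs.
files=$(ls "$vmdir" | sort)
echo "$files"
```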

51. What licensing is required for Host Profiles ?
Available with the vSphere Enterprise Plus edition.

52. Can Host Profiles work with ESX/ESXi 3.x hosts ?
•No. Only starting with ESX/ESXi 4.0.

53. Can Host Profiles be used with a cluster running both ESX and ESXi hosts ?
•Yes, but remember to use an ESX host and not an ESXi host to create a profile for use.
•In theory, Host Profiles should work with mixed host clusters, since it translates ESX settings to ESXi, but be careful: there are enough differences between ESX and ESXi that you can make self-inflicted errors when applying Host Profiles. The easiest method is to keep clusters homogeneous and maintain a separate profile for each of the two types.

54. Can Host Profiles work when using the Cisco Nexus 1000v ?
•No, because Host Profiles was designed around the generic vNetwork Distributed Switch. The Cisco Nexus 1000v gives administrators finer-grained control of networking beyond what Host Profiles can apply.

55. What are host profiles ?
A set of best-practice configuration rules that can be applied to an entire cluster or to an individual host, so that all the hosts stay in sync with each other; this avoids VMotion, DRS and HA problems.
56. Could not power on VM: no swap file
My ESXi 3.5 machine normally runs 8-10 VMs (Win2k3 and WinXP). At the moment, 5 of them cannot power on: they seem to start and then complain "Could not power on VM: no swap file". I had a look with the datastore browser. It's a small installation, so the .vswp files ought to be in the same directory as the .vmx file (I did not intentionally put them anywhere else). Of course, I don't see a .vswp file there because the machine is not running. I don't know enough about the .vmx file structure to identify whether anything is wrong in the specifications. I have downloaded one of the .vmx files and attached it here. Please either tell me what to change in that .vmx file, or suggest another approach to get the machines to start.
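One place to look with a complaint like this is the VM's swap-related .vmx entries; a commonly cited fix is to delete a stale sched.swap.derivedName line that points at a vanished .vswp, so the host regenerates it at the next power-on. A small sketch of inspecting those keys (the sample .vmx content and path below are made up):

```shell
# A .vmx is plain "key = value" text, so a stale swap-file reference
# is easy to spot with grep. The file below is a fabricated sample.
vmx=$(mktemp)
cat > "$vmx" <<'EOF'
config.version = "8"
memsize = "1024"
sched.swap.derivedName = "/vmfs/volumes/ds1/web01/web01-xxx.vswp"
EOF

# Show any swap-placement entries the host recorded for this VM.
swapline=$(grep '^sched\.swap' "$vmx")
echo "$swapline"
```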
57. What are the available Storage options for virtual machines ? Raw device mappings, VMFS
There are two ways to provision storage for virtual machines (VMs) on a storage area network (SAN). One way is to use VMFS, the proprietary, high-performance clustered file system provided with VMware Infrastructure (VI). Using virtual disks (VMDK files) on VMFS is the preferred option for most enterprise applications, and as such supports the full range of functionality available in a VI implementation, including VM snapshots, VMotion, Storage VMotion, and VMware Consolidated Backup (VCB).
The other way to provision storage is Raw Device Mapping (RDM). RDMs are sometimes needed in instances where virtualized access to the underlying storage would interfere in the operation of software running within the VM. One such example is SAN management software, which typically requires direct access to the underlying hardware; and thus would need to use an RDM instead of a virtual disk. In this tip, I'll discuss what RDMs are and when to use them over a virtual disk.
Defining raw device mappings
An RDM is a file that resides within a VMFS volume that acts as a proxy, or an intermediary, for a raw physical device. One can think of an RDM as a symbolic link to a raw LUN. The RDM contains metadata and other information about the raw physical device being accessed and can, depending upon the configuration of the RDM, add features like VMotion support and snapshots to VMs that are using raw LUNs.
Why use RDMs instead of virtual disks

58. What are the differences between Virtual and Physical compatibility modes when mapping the Raw Devices to virtual machines?

You can configure RDM in two ways:
Virtual compatibility mode—this mode fully virtualizes the mapped device, which appears to the guest operating system as a virtual disk file on a VMFS volume. Virtual mode provides such benefits of VMFS as advanced file locking for data protection and use of snapshots.

Physical compatibility mode—this mode provides access to most hardware characteristics of the mapped device. The VMkernel passes all SCSI commands through to the device, with one exception (the REPORT LUNs command is virtualized), thereby exposing all the physical characteristics of the underlying hardware. The mapping works as follows: when you create a mapping, its configuration is stored in a file, and that file is kept with the VM's files in the datastore. The file points to the raw device and makes it accessible to the VM.

59. What are RDM Limitations?

RDM limitations
There are two types of RDMs: virtual compatibility mode RDMs and physical compatibility mode RDMs. Physical mode RDMs, in particular, have some fairly significant limitations:
•No VMware snapshots
•No VCB support, because VCB requires VMware snapshots
•No cloning VMs that use physical mode RDMs
•No converting VMs that use physical mode RDMs into templates
•No migrating VMs with physical mode RDMs if the migration involves copying the disk
•No VMotion with physical mode RDMs

Virtual mode RDMs address some of these issues, allowing raw LUNs to be treated very much like virtual disks and enabling functionality like VMotion, snapshots, and cloning. Virtual mode RDMs are acceptable in most cases where RDMs are required. For example, virtual mode RDMs can be used in a virtual-to-virtual cluster across physical hosts. Note, though, that physical-to-virtual clusters across boxes require physical mode RDMs.

While virtual disks will work for the large majority of applications and workloads in a VI environment, the use of RDMs, either virtual mode or physical mode, can help eliminate potential compatibility issues or allow applications to run virtualized without any loss of functionality.
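The constraints above can be condensed into a toy chooser (not a VMware tool; the feature names are my own shorthand):

```shell
# Toy decision helper reflecting the RDM limitations described above:
# given the one feature a VM must keep, suggest a disk choice.
pick_disk_type() {
  case "$1" in
    # physical-mode RDMs break snapshots, VCB, cloning and templates
    snapshots|vcb|cloning|templates)  echo "virtual-mode RDM or VMFS disk" ;;
    # direct SCSI access and phys-to-virt clusters need physical mode
    san-management|p2v-cluster)       echo "physical-mode RDM" ;;
    # with no special requirement, a plain virtual disk is preferred
    *)                                echo "VMFS virtual disk" ;;
  esac
}

choice_snap=$(pick_disk_type snapshots)
choice_san=$(pick_disk_type san-management)
echo "$choice_snap / $choice_san"
```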

Thursday, October 18, 2012

Cisco Enhances its Unified Communications Portfolio

 
Cisco has announced several new improvements to its collaboration portfolio, showcasing its growth in cloud collaboration. Included in its new offerings are Telepresence, web conferencing, UC, and contact center solutions, which are delivered via public, private, or hybrid cloud models.
 
As part of the enhancements, Cisco is expanding its Hosted Collaboration Solution (HCS) to include advancements in many of its features. From TelePresence and Customer Collaboration to Unified Communications, Cisco is providing its partners and customers with a more robust UC experience as an "as-a-service" cloud-based model.
 
Furthermore, the Cisco WebEx Web Conferencing has been extended to the private cloud, as per the increased demand from its customers, providing the same collaboration capabilities and user experience that they're used to.
 
Cisco CloudVerse, Cisco's set of solutions for building and managing clouds, continues to improve and advance, helping businesses reap the benefits of the cloud and providing an integrated collaboration experience with improved agility and security.
 
"Cisco is taking advantage of its dominance in network connectivity to push cloud-based communications, through both public services from leading service providers/carriers, as well as aiming at private and hybrid cloud applications that can be customized by channel partners that have the skills to do so," says Art Rosenberg, UC Expert at UCStrategies. "Although it is becoming clear that UC-enabled applications can be most efficiently implemented, supported, and managed in a cloud environment, rather than physically on premise-based equipment, the migration to cloud-based applications will require hybrid approaches and integrations with existing communication applications. Cisco's approach to its cloud infrastructure, however, is flexible enough to cover a variety of end user needs, including providing the cloud integration expertise through their partners that IT organizations don't have."
 
Rosenberg adds: "Cisco's announcement was quite comprehensive, in terms of the functional communication tools and the flexibility of cloud implementation options, and will be a significant challenge to competitors in the business communications market. A key strategy here, of course, is to exploit their many channel partners to take on the responsibilities for helping customers migrate to customized cloud applications and services. Anyone with competing technologies will have to do the same, so the battle for new channel partners is on!
 
"Although Cisco has put the 'collaboration' label on their new offering, Hosted Collaboration Solution (HCS), the functional emphasis is on person-to-person business contacts and interactions. That includes mobile UC capabilities, as well as contact center customer interactions. What I didn't see (yet), is the role that Cisco’s cloud services will play in supporting UC-enabled online applications for multi-modal mobile users, as well as CEBP process-to-person contacts for time-sensitive alerts and notifications. As I have stressed in the past, these will be particularly important as consumers/customers increasingly use smartphones and tablets to give or get personalized information and messages directly with automated business applications, not necessarily through people. UC-enablement, however, will allow click-to-connect contextually with people, when necessary."