Enter the computer room of the average mid-sized company and you’re likely to see a few high-end servers running mission-critical applications, and next to them a bunch of desktop-style PCs providing secondary services. The reason is simple: cost. It’s difficult to justify the expense of a proper server — with redundant drives, fans, and power supplies, multiple network cards, and loads of memory — for an application of limited importance and benefit to the company. Only mission-critical systems warrant that expense.
Further, many applications don’t play well together. You may have a powerful, multi-processor server, but if you try to run several disparate applications on it, even applications that in themselves have a small footprint or light requirements, you’re likely to bring the server to a crawl, if it will run at all. So you run those applications on separate pieces of hardware, sized — as closely as feasible — to each application’s needs.
The problem is that low-end servers can cost just as much to maintain as their high-end counterparts, if not more. Operating systems must still be patched, backups must still be performed, and anti-virus and other security measures must still be maintained. And if a hard drive on a PC fails? There’s usually no redundancy in place to keep the system up and running while you replace the drive. So you have to rebuild the system and restore from backup, which takes time and costs money.
What if you could combine all those low-end servers into one high-end system, letting them share processors, memory, disk, and network, while keeping them separated so they don’t interfere with one another? You can! It’s called “virtualization.”
What is Virtualization?
A virtual server looks and acts like a regular server, but it has no hardware of its own. Instead, it shares the physical resources of a host machine with other virtual servers. For example, suppose you have a host server with 8 processors, 12 GB of memory, and a terabyte of disk. The host runs its own operating system; which one depends on the virtualization software you use. (We’ll discuss those options in a moment.) Residing inside that system you could have several other servers — guests — each with its own operating system, each accessing its own portion of the hardware.
In some virtualization schemes, each guest system is allocated a specific processor, specific network cards, and so on. Thus, on a 4-CPU host you could run at most 3 guests, with one CPU reserved for the host itself. This is similar to many partitioning systems.
A more flexible arrangement pools the resources of the host and presents virtual processors to the virtual servers. So, on a 4-processor host you could run a dozen or more guests. The host may have only one or two network cards, but each guest would see its own private connection to the LAN.
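The difference between the two schemes can be sketched as a toy model. Everything below is illustrative only — the class, names, and numbers are invented for this article and don’t reflect any real hypervisor’s behavior:

```python
# Toy model contrasting dedicated-CPU partitioning with pooled
# virtual CPUs. Purely illustrative; not a real hypervisor API.

class Host:
    def __init__(self, cpus):
        self.cpus = cpus

    def partition(self, guests):
        """Static scheme: each guest gets a whole, dedicated CPU,
        and one CPU stays reserved for the host itself."""
        available = self.cpus - 1
        if len(guests) > available:
            raise ValueError(f"only {available} dedicated CPUs free")
        return {g: 1.0 for g in guests}   # one full CPU per guest

    def pool(self, guests):
        """Pooled scheme: every guest sees a virtual CPU, and the
        real CPUs' time is divided among all of them."""
        share = self.cpus / len(guests)
        return {g: round(share, 2) for g in guests}

host = Host(cpus=4)
# Partitioning caps us at 3 guests on a 4-CPU host...
print(host.partition(["web", "mail", "dns"]))
# ...while pooling lets a dozen guests share the same 4 CPUs.
print(host.pool([f"guest{i}" for i in range(12)]))
```

The trade-off the model makes visible: partitioning gives each guest guaranteed capacity but strictly limits how many guests fit, while pooling admits many more guests at the cost of each receiving only a slice of real CPU time.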
The benefits of virtualization are obvious: a single high-end host can support multiple low-end guests. A secondary server that wouldn’t warrant redundant hardware on its own can still get it by sharing a redundant host with other secondary servers. If a hard drive on the host server fails, the guests can continue to run while you replace it. If you add a second network card to the host, all of the guests on that host benefit from the additional bandwidth.
What are the Options?
Perhaps the most popular virtualization vendor today is VMware, recently acquired by EMC. It offers several flavors of its product: a workstation version tailored for developers who need to run multiple configurations on their local systems; a mid-range version (GSX) that runs atop a Windows or Linux host and can support up to 4 guests per CPU on a platform with up to 8 CPUs; and a high-end version (ESX) that installs its own host operating system on hardware with up to 16 CPUs, supporting up to 8 guests per CPU.
Using VMware Workstation, an IT administrator can develop and test a server’s configuration in an isolated environment and, once it’s ready, deploy it to a VMware GSX or ESX host. It’s also possible to pre-build server images so that a complete, functioning server can be brought online in a matter of minutes.
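The pre-built image idea works because a guest is ultimately just a set of files on the host: a small configuration file plus the virtual disks. Cloning a template is therefore little more than a recursive copy followed by giving the copy its own identity. The sketch below illustrates that idea only — the paths, file names, and file contents are invented for this article, not real VMware data:

```python
# Illustrative sketch of cloning a pre-built guest "image".
# All paths and contents are hypothetical stand-ins.
import pathlib
import shutil

# Pretend this folder is a pre-built template guest: one config
# file standing in for the config-plus-virtual-disks bundle.
template = pathlib.Path("/tmp/vm/templates/base")
template.mkdir(parents=True, exist_ok=True)
(template / "base.vmx").write_text('displayName = "base"\n')

# Bringing a new server online starts with an ordinary copy...
guest = pathlib.Path("/tmp/vm/guests/intranet01")
guest.parent.mkdir(parents=True, exist_ok=True)
shutil.copytree(template, guest, dirs_exist_ok=True)

# ...followed by giving the clone its own name in its config file.
cfg = guest / "base.vmx"
cfg.write_text(cfg.read_text().replace('"base"', '"intranet01"'))
print(cfg.read_text())
```

In practice a real clone also needs unique network and machine identifiers inside the guest, which is why vendors supply proper cloning and deployment tools rather than leaving it to a file copy.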
Using VMware, many large organizations have been able to reduce the number of physical servers they maintain from hundreds to tens, saving hundreds of thousands of dollars in the process.
Microsoft has recently entered the market with its own Virtual Server 2005, in a Standard Edition (up to 4 processors) and an Enterprise Edition (up to 32 processors). Officially, Virtual Server 2005 currently supports only Windows guests, although Microsoft claims it will soon support various x86-based Linux distributions.
One of the problems with the virtualization systems we’ve discussed is that they make deploying new servers too easy. It used to be that end-users would have to build a solid business case for the purchase of a new server because it required such a significant investment in hardware and labor from IT. Now, with tools like VMware and MS Virtual Server, new servers can be switched on almost as quickly and easily as turning on a light bulb, with no additional investment in hardware. Of course, that new server still requires administration: licensing, security, patching, backup, and everything else required of a physical server.
With discipline, planning, and careful procedures, virtual servers may have a very real place in your business.