
Virtualization in the SMB Environment

Created 07/04/2007

Virtualization has long been the domain of large enterprises. Beginning with time-sharing technology on massive mainframes, virtualization required large data centers and larger IT budgets. The advent of high-performance workstations and servers based on Windows, Linux and similar technologies brought the benefits of virtualization to small- and mid-size businesses that might have only a limited IT staff and an even more limited budget.

Today, virtual operating systems from companies such as Microsoft, VMware, SWsoft and XenSource allow companies of all sizes to take advantage of hardware that would have made yesterday's IT managers salivate in envy. Multicore processors, inexpensive system memory and commodity-priced massive disk drives are putting the disk farms and mainframes of years ago into a small chassis. The result: technology that once was the province of multinational companies can now be used by small- and mid-sized businesses as well — assuming they have the technological wherewithal to implement these capabilities.

For example, instead of being confined to a single operating system on each physical computer, companies can leverage virtual server technology to deploy multiple environments on the same box. Companies can use virtual servers to eliminate costs of managing and upgrading legacy hardware by migrating older applications onto virtual machines running on new, reliable hardware. They can also consolidate low-use departmental servers onto a single physical server to decrease management complexity.

For an SMB to take advantage of virtualization, several technological issues need to be understood and exploited. Once harnessed, a world of opportunity exists.

  • Deploying a Server: Migration from physical to virtual, virtual to physical, virtual to virtual and physical to physical.
  • Hardware and Software Support: Support for multiple hardware platforms and operating systems, including both 32- and 64-bit servers as well as VMware, Microsoft, XenSource and Parallels virtual environments.
  • Customizing the Migration Process: Migrate an entire system or specific files to another server.
  • Working with Data: Migrating either live data or data at rest, on-line or off-line, with minimal disruption.
  • Disaster Recovery: Take a backup image and migrate that to a new server for historic data retrieval purposes.

Does this sound impossible for an SMB with only limited IT capabilities? It isn't. It comes down to the old axiom: use the right tool for the job.


Deploying a Server

In an SMB, deploying a server generally requires building the system from scratch, including installing the operating system and applications, configuring the applications and then configuring the network. This is a time- and personnel-intensive task. Depending on the server being built, it could take literally days to build, test, configure, test, debug, test and deploy. Then, when you build another system, you start all over.

It would be a lot more efficient to build a single system, and then deploy it again and again. In a virtual environment, this could mean designing a system in the IT lab and then deploying it to virtual servers at a hosting company or to remote systems. But how do you ensure that the system you built in one location actually works in another?

The key here is to create "transportable images". A transportable image is one that can be designed and tested on one hardware platform, then deployed on another, regardless of the hardware configuration. The benefits of transportable images are reduced configuration time, reduced deployment time, and the ability to deploy without knowing the target hardware configuration at the outset.

The drawback: not every deployment tool supports transportable images. The key is to select an imaging program that will not only allow you to move from a physical to a virtual machine, but also from virtual to virtual, virtual back to physical and physical to physical — think of it as going full circle.
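The full-circle idea — physical to virtual, virtual to virtual, virtual back to physical — ultimately comes down to converting a disk image from one format to another. As a rough illustration (assuming the freely available qemu-img tool, not any specific vendor's product, and made-up file names), a migration step can be sketched like this:

```python
# Hedged sketch: assumes the open-source qemu-img utility is installed.
# Format names and the environment-to-format mapping are illustrative.
import shutil
import subprocess

FORMATS = {
    "physical": "raw",    # raw disk image, restorable to bare metal
    "vmware": "vmdk",     # VMware virtual disk
    "microsoft": "vpc",   # Virtual PC / Virtual Server VHD
    "generic": "qcow2",   # QEMU/Xen-friendly format
}

def build_convert_cmd(src_image, src_fmt, target, dst_image):
    """Build the qemu-img command line for one migration step."""
    out_fmt = FORMATS[target]
    return ["qemu-img", "convert", "-f", src_fmt, "-O", out_fmt,
            src_image, dst_image]

def migrate(src_image, src_fmt, target, dst_image):
    """Run the conversion if qemu-img is present; otherwise report the plan."""
    cmd = build_convert_cmd(src_image, src_fmt, target, dst_image)
    if shutil.which("qemu-img"):
        subprocess.run(cmd, check=True)
    else:
        print("would run:", " ".join(cmd))

if __name__ == "__main__":
    # Physical-to-virtual: a raw disk image becomes a VMware VMDK.
    migrate("server01.img", "raw", "vmware", "server01.vmdk")
```

Going the other way — converting the virtual disk back to raw for a bare-metal restore — is the same call with source and target formats swapped, which is exactly the "full circle" a good imaging tool should support.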


Hardware and Software Support

This is fairly straightforward, but certainly worth mentioning. Many IT departments are purchasing hardware and software that support 64-bit technology. While you might not be using 64-bit hardware or applications today, make sure your servers are capable of moving to that platform in the future. You don't want to have to redevelop all of your servers again in three to five years.

Additionally, make sure you have tools that allow you to move from one virtual operating system to another. With mergers, acquisitions and new applications, you don't want to be locked in to a single platform. Having the right tools to move from one virtual OS to another is imperative.


Customizing the Migration Process

There likely will be occasions when you will need to migrate just part of a virtual server to another system. To ensure that you can successfully move a group of folders or files, you need to have a tool that is able to drag and drop the files from one system to another. It sounds easy, but again, not every migration tool can perform this function.
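For readers who prefer to see what "migrating just part of a server" amounts to in practice, here is a minimal sketch: copy only the files matching chosen patterns from a source tree to a destination (say, a share mounted from the target server), preserving the folder layout. All paths and patterns are invented for illustration; this is the general technique, not any particular product's feature.

```python
# Sketch of file-level (rather than whole-image) migration.
import shutil
from pathlib import Path

def migrate_subset(src_root, dst_root, patterns):
    """Copy files matching the glob patterns, preserving relative layout.
    Returns the list of destination paths written."""
    src, dst = Path(src_root), Path(dst_root)
    copied = []
    for pattern in patterns:
        for f in src.rglob(pattern):
            if f.is_file():
                target = dst / f.relative_to(src)
                target.parent.mkdir(parents=True, exist_ok=True)
                shutil.copy2(f, target)   # copy data and metadata
                copied.append(str(target))
    return copied

# Example: carry over only configuration and database files.
# migrate_subset("/srv/app", "/mnt/newserver/app", ["*.conf", "*.db"])
```

A dedicated migration tool adds the parts this sketch omits — open-file handling, permissions across platforms, and verification — which is why not every tool can do it well.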


Working with Live Data

Most migration tools do an excellent job of moving data that is at rest — data that is not currently being used. In fact, much of the time you will be dealing with data that is off-line. However, when you are backing up a server that must be up 24x7 or restoring a transactional server, you're dealing with live data.

In such cases, you'll want to be able to image the live data when making the backup so that the backup operation will not impact the server or your users. Select a disk imaging application that can take a snapshot of the server and then perform the backup operation in the background.
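The mechanism behind such snapshots is usually copy-on-write: once the snapshot is taken, the original contents of any block that is subsequently overwritten are preserved, so the background backup reads a consistent point-in-time view while the live volume keeps changing. The toy model below illustrates the concept only — real products do this at the disk-driver level, and the class here is purely hypothetical.

```python
# Toy copy-on-write snapshot model (conceptual illustration only).
class SnapshotVolume:
    def __init__(self, blocks):
        self.blocks = list(blocks)      # the live volume
        self._preserved = None          # block index -> original data

    def take_snapshot(self):
        self._preserved = {}            # start preserving overwritten blocks

    def write(self, index, data):
        if self._preserved is not None and index not in self._preserved:
            self._preserved[index] = self.blocks[index]  # copy-on-write
        self.blocks[index] = data       # the live write proceeds immediately

    def read_snapshot(self, index):
        """What the backup process sees: the volume as of snapshot time."""
        if self._preserved is not None and index in self._preserved:
            return self._preserved[index]
        return self.blocks[index]

vol = SnapshotVolume(["A", "B", "C"])
vol.take_snapshot()
vol.write(1, "B2")                      # live data changes mid-backup
print(vol.read_snapshot(1))             # backup still sees "B"
print(vol.blocks[1])                    # users see "B2"
```

Because the backup reads only the frozen view, users never wait on it — which is exactly the property to look for in an imaging product.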

Imaging products fall into two camps: those that force you to take a server down to back it up, which can cause massive interruptions to your business processes and productivity, and those that back up live data completely in the background on a running Windows system with open system files. Only programs that can image open Windows files will allow you to save the state of that machine; when you restore that image, the system will be back working in a known, good condition.

Here's an important caution: If your program does not back up open Windows files, you will not be able to restore the image to a bare-metal drive effectively. You will first have to reinstall the operating system, then all applications, drivers and such. Your backup will be file-based only.


Disaster Recovery

Disasters come in all sizes. Companies need to plan for the recovery of systems first by prioritizing resources and creating backup schedules to match the maximum allowable downtime for any given server. Remember that creating a system backup is not the end of the task; it's the beginning. You must be able to restore the backup — and, to be useful, it must be restorable to any hardware platform, not just the system from which it was created.
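Prioritizing by maximum allowable downtime can be made concrete with a small planning sketch. The server names, downtime figures, and the back-up-twice-per-window rule of thumb below are all assumptions for illustration, not a standard:

```python
# Sketch: turn "maximum allowable downtime" figures into a backup order
# and frequency. All names and numbers are invented for illustration.
def plan_backups(servers):
    """servers: {name: max_allowable_downtime_hours}.
    Returns (name, backup_interval_hours) pairs, most critical first.
    Rule of thumb (an assumption): back up at least twice within the
    allowable-downtime window, but never more than hourly."""
    plan = []
    for name, downtime in sorted(servers.items(), key=lambda kv: kv[1]):
        plan.append((name, max(1, downtime // 2)))
    return plan

print(plan_backups({"file-server": 24, "mail": 4, "orders-db": 2}))
# prints [('orders-db', 1), ('mail', 2), ('file-server', 12)]
```

Whatever rule you adopt, the point stands: the schedule should fall out of the downtime each server can tolerate, not the other way around.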

Disk imaging offers the best choice for disaster recovery because it can return a system to a known, good working state. However, as noted earlier, make sure that you select an imaging product that can work entirely in the background or you will find yourself with potentially damaging productivity issues.

File-based backups require that you reinstall the operating system, applications, drivers and such, then reconfigure the system to meet your needs.

There are a number of quality disk imaging products available today. You can test each one by creating a live image from one system and restoring it to another hardware platform. (To make things comparable, be sure to use x86 platforms running a version of Windows.) It does not matter if the systems are Intel- or AMD-based; in fact, going from one architecture to the other is a good way to test the software. If you cannot restore an image from one hardware platform to another, chances are you don't want that product; it likely will cause more grief than it's worth.

If an SMB's data protection solution does not address the complete lifecycle management of data, the company risks unacceptable exposure of its data that can easily result in the loss of data and costly downtime. Policies, procedures and having the right tools to do the job sometimes just aren't enough. SMBs, just like their corporate competitors, need to test their disaster recovery plan to ensure that they know how to recover. Having the right IT products is just the first step; practice and experience round out a fully functional disaster recovery plan.

Walter Scott is the CEO of Acronis, a technology company producing file system-related software tools, including data backup and restore, partitioning, boot manager, privacy, data migration, and other storage management products for enterprises, corporations and home users.
