SERVERS: HOW TO DO MORE WITH LESS
(PC Quest, September 2005)

Server virtualization and consolidation can help reduce the number of physical servers in your data center, server room, or even across multiple locations. But is it worth the trouble?

The two buzzwords making the rounds in the industry today are server virtualization and consolidation. Both are aimed at reducing the clutter of servers in your setup so that just a few servers can do the job of many. While virtualization aims to squeeze out every ounce of computing power from your servers, consolidation aims at helping you centralize your server infrastructure. In either case, you're reducing the number of physical servers in your setup.

The end results of doing this are many. Fewer physical servers mean less floor space, and less cooling, ventilation, and cabling. It also means less running around for troubleshooting, server upgrades and routine maintenance. Imagine being able to commission a fresh server in just a few minutes for that new application you have to deploy.

Or better still, you don't need to depute engineers to manage your remote server locations anymore, because all those servers have been consolidated into one large box. Everything is managed centrally. Imagine the cost savings that would result from having this kind of an environment. While these could be significant, one mustn't forget to look at the other side of things. How much would it cost to achieve this?

Let's take virtualization first. Using this technology, you can run multiple operating systems or multiple instances of the same operating system on a single physical server. Each would have its own hardware requirements. On top of each OS, you'll also install an application, which will further increase the hardware requirements. In all, you'll need to put in more RAM, more processing power, and more storage (in case you're not using a separate storage network for the last). Moreover, if your existing servers can't take the extra load despite the hardware upgrade, you'll have to replace them with new and more powerful servers. All this comes at a cost, which needs to be taken into account.

Cost of migration comes next, which also applies to server consolidation. You have to move everything from multiple physical servers onto a single server. This takes quite a bit of time and expertise, both of which come at a cost. To take one example, if you're running your own customized applications on physical servers, how easy would it be to re-deploy them on new hardware, and that too in a virtualized environment? If the hardware is significantly different, it might require extra coding. So add the software development cost here.

Up next is the cost of downtime. This is what gives most network managers and CIOs sleepless nights. If you had individual applications running on their own physical servers, you'd only have to worry about a single application going down. But in a virtualized environment, all applications would go down together. To protect against this, you would have to set up a redundant or fail-over server. So most of the costs we just talked of would almost double. In case of consolidation, you're likely to use a multi-CPU (4-way or 8-way) server for the job, which doesn't come cheap. So here again, you'll need another box for fail over.

These are just a few costs to give you an idea. Don't forget to add the direct cost of the virtualization software itself, if you're using a commercial package. In case of free virtualization software, you'll need to acquire skilled manpower or train existing people to deploy it. In case of consolidation, you'll need skills to do the migration from one platform to another.

Once you've taken all the costs for doing server virtualization and consolidation into account, work out the cost savings that would result from it. This would again vary from case to case, and would be a mix of direct and indirect costs. In case of consolidation, for instance, you'd save on the licensing costs for applications, a direct cost saving.

In virtualization, you'll be using all your servers much more efficiently, which is an indirect cost saving. Server provisioning becomes easier, as you could deploy new applications quickly. That's another indirect cost saving. Cost of downtime can be taken in a positive sense here as well. If you're doing an application upgrade, you'd need to first check it in a test environment before rolling it out on the production server. With virtualization, you could quickly clone your production application and upgrade the clone to see whether everything works fine. If not, you can simply bring it down.
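The kind of cost comparison described above can be sketched as a simple back-of-the-envelope calculation. All figures below are hypothetical placeholders, not from the article; the point is only that one-time migration costs must be weighed against recurring per-server savings:

```python
# Hypothetical model for weighing virtualization/consolidation costs
# against savings. Every number here is an illustrative assumption.

def net_savings(hardware_upgrade, software_license, migration, training,
                saved_servers, cost_per_server_per_year, years=3):
    """Projected net savings over the period: recurring savings minus one-time costs."""
    one_time_costs = hardware_upgrade + software_license + migration + training
    recurring_savings = saved_servers * cost_per_server_per_year * years
    return recurring_savings - one_time_costs

# Example: consolidating 10 servers down to 2 frees 8 servers' running costs.
print(net_savings(hardware_upgrade=8000, software_license=5000,
                  migration=6000, training=3000,
                  saved_servers=8, cost_per_server_per_year=2000))  # 26000
```

Only if the result stays clearly positive under pessimistic assumptions does the exercise justify itself.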

Only after you find significant cost savings after taking the investments into account should you venture into the actual deployment. In this story, we've discussed both concepts in detail: how they work, their benefits and limitations. Plus, we've gone a step further and evaluated a number of server virtualization software, both free and commercial. Lastly, we've also spoken to a number of organizations that have already done their deployments and are benefiting from them. Their inputs have also been included.

UNDERSTANDING SERVER VIRTUALIZATION
The dictionary meaning of the word 'virtual' is 'being such in essence or effect though not in actual fact'. How is that possible in an IT infrastructure, you may ask? The infrastructure is definitely there, and users do access it. So how can it be virtual? The answer is simple: present a logical view instead of a physical one, for any kind of computing device, be it your storage, server or your network. This makes the presentation simple for users and administration easier for network administrators. In case of server virtualization, while a user might see separate servers for various applications, each running on a different OS, they may actually all be running from the same physical machine. It's easy to visualize services running from the same machine, because any network OS today is capable of doing that. What sounds unbelievable is the ability to run multiple OSs from the same machine, with each of them running a different application. That, in essence, is what virtualization allows you to do.
Server virtualization technology is primarily of use to enterprises, ISPs and software development houses. There is lots of software for doing server virtualization, which we've covered elsewhere in this story. Here we'll concentrate on virtualization technology and the benefits your business can derive from it.
Types of server virtualization
Virtualization, as a topic, is pretty broad in itself, but when it comes to server virtualization in particular, there are two methods that are commonly followed. One is hardware emulation and the other is OS partitioning. In hardware emulation, there's a base OS running on a server with virtualization software. This software creates a full emulation of the entire hardware for as many OSs as you want to run on it. The hardware includes everything right from the BIOS to the hard drive, video card, networking and all other I/O devices. This capability allows you to install any OS you want within this environment, while the underlying hardware remains the same. So hardware emulation abstracts the software from the hardware.
In OS partitioning, you're running multiple instances of the same OS as the base OS. So if you have Linux as the base OS, OS partitioning will allow you to run multiple Linux kernels on the same machine. The advantage here is that, unlike hardware emulation, the guest OSs (Linux kernels) in OS partitioning share the same hardware resources as the host. The virtualization software doesn't create multiple instances of the BIOS, hard drives and networking. It therefore utilizes the hardware resources more effectively than the hardware emulation approach.

Benefits and limitations
As long as your organization had just a few servers, life was easy. You could run every application on a separate hardware server. But what happens when the number of servers increases to unmanageable limits, a problem typically known as server proliferation? One major problem is that of wasted capacity. It's not possible that all servers will always run at 100% of their rated capacity. In fact, chances are that they'll not even run at a fraction of that most of the time. So their capacity and potential is wasted most of the time. What if you could somehow make use of this potential? That's where server virtualization comes to the rescue. Instead of investing in a new physical server for a new application you want to deploy, can you use one of the existing servers to deploy it using virtualization technology? You could also shift your existing applications from their physical servers onto a single server using virtualization. This way, you'll be using the server more effectively. So extracting every bit of unused computing power from a server is one advantage of virtualization.
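The wasted-capacity argument is easy to make concrete. A minimal sketch, with entirely made-up utilization figures, shows how the idle fractions of several under-used servers add up to whole servers' worth of capacity:

```python
# Illustrative only: average CPU utilization per server (hypothetical names
# and figures). The idle fractions summed across machines show how much
# capacity is effectively wasted when each app has its own box.

servers = {
    "web1": 0.15,
    "db1": 0.40,
    "mail1": 0.10,
    "file1": 0.05,
}

idle = sum(1.0 - u for u in servers.values())
print(f"Idle capacity equivalent to {idle:.1f} whole servers out of {len(servers)}")
```

In this toy example, four lightly loaded servers waste the equivalent of more than three machines, which is exactly the headroom virtualization tries to reclaim.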

CASE STUDY
Learnings from an ISP: Server virtualization at Net4India
Net4India uses several server virtualization techniques for its business. One of them is application-level virtualization, wherein it offers a corporate e-mail solution using its custom-built application BizMail+. This application is able to run up to 1,000 mailboxes on one dual-Xeon server with 4 GB of memory. Using this, Net4India can combine up to 200 mailboxes and offer them as a solution to customers. While everything runs on a single server, the customer only sees a customized mailing solution complete with his company's colors, logo, etc. The virtualization technology keeps each customer's mailbox separate from others. All servers are kept in a failover cluster for failsafe operation, so the customer never knows if a server goes down. All virtualized environments are constantly monitored for server resource utilization to prevent any instance from hogging all server resources.
Net4India also does OS-level virtualization for its Linux-based Web hosting services. For this, the company uses Red Hat Linux with a virtualization component. This component allows the ISP to create multiple Linux kernels on top of a single Linux server. Each kernel can then be offered to a customer. Once again, the customer sees a full-fledged Linux server running, while actually it's a virtual server running at the backend.
One interesting fact that emerged from our discussions with Net4India was that using virtualization technology, ISPs are able to offer services at very competitive rates. It's because they no longer have to put up dedicated physical servers for each customer, saving on hardware costs. So the next time your ISP offers you a dedicated server at a very attractive price, check whether it's a dedicated physical server being offered or a virtual one.
With inputs from Jasjit Sawhney, CEO, and Desi Valli, Net4India

While this may sound lucrative, there's a flip side to it as well. How does running multiple servers on the same piece of hardware help? Isn't it like putting all your eggs in one basket? If yes, then there's tremendous risk in doing that because if the physical server crashes, it actually brings down multiple services. This scenario can't be neglected and there are ways of taking care of it. Large enterprises typically deploy fail over clusters for their mission-critical applications. The same can be done for virtualized environments as well. It does of course become trickier, as setting up a fail over node for a server running multiple OSs is no bed of roses.

Another benefit of server virtualization is flexible resource allocation. In case you find that one of the servers in the virtual environment doesn't need so many resources, you can take them away and reallocate them to another server that really needs them. This could be RAM, storage space, network interfaces or a number of other things. All this can happen dynamically without bringing down the physical server. The challenge here is to keep an eye on server resources. If one of the applications hogs too many resources, it could affect the performance of other applications. Therefore, proper server management is a must in server virtualization. You need to set up SNMP alerts to warn yourself every time any resource gets over-utilized.
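The watchdog idea above can be sketched in a few lines. In a real deployment the readings would come from SNMP or a monitoring agent; here they are hard-coded, and all thresholds and names are illustrative assumptions:

```python
# Hedged sketch of a resource watchdog for virtual instances: compare
# sampled utilization figures against thresholds and emit alerts.
# Thresholds, instance names and readings are all hypothetical.

THRESHOLDS = {"cpu": 85.0, "ram": 90.0, "disk": 95.0}  # percent

def check_instance(name, readings):
    """Return an alert string for every over-utilized resource."""
    return [f"ALERT {name}: {res} at {val:.0f}% (limit {THRESHOLDS[res]:.0f}%)"
            for res, val in readings.items() if val > THRESHOLDS[res]]

alerts = check_instance("vm-mail", {"cpu": 92.0, "ram": 71.0, "disk": 60.0})
for a in alerts:
    print(a)
```

A real setup would wire such checks to SNMP traps or e-mail rather than print statements, but the threshold logic is the same.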

Moving on, agility is another benefit of virtualization. You can create multiple clones or backup copies of every OS that you install, so that if any OS gets corrupted, you can replace it with a fresh one. This kind of a virtualization approach is ideal for software developers who need multiple OSs for testing their programs on different platforms.

Plus, it gives you the flexibility to have any OS up and running within minutes for testing. You can even set up fail-over servers within the virtualized environment itself. The limitation here, of course, is that the more the number of virtual servers, the greater the resource requirements for the server. For instance, while reviewing a particular software called WebTrends, we were surprised by the hardware recommended to run it. The software requires more than 3 GB of memory and several hundred MB of hard drive space. One such application could really hog the resources on your server. So before going for virtualization, you need to find out the recommended hardware resources for the applications you intend to run, and then ensure that the server specifications can handle such a load.
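That sizing check is simple arithmetic: sum the recommended resources of the guests, add headroom for the host OS, and compare against the server. A minimal sketch, with RAM as the example resource and all numbers invented for illustration:

```python
# Illustrative capacity check before virtualizing. Only RAM is modeled;
# a real check would cover CPU, disk and I/O too. Figures are hypothetical.

def fits(host_ram_gb, guests_ram_gb, host_overhead_gb=1.0):
    """True if the host has enough RAM for all guests plus its own OS."""
    return sum(guests_ram_gb) + host_overhead_gb <= host_ram_gb

# A 16 GB server hosting guests that want 4, 4, 3 and 3 GB respectively:
print(fits(16, [4, 4, 3, 3]))   # True: 14 GB for guests + 1 GB overhead
print(fits(16, [4, 4, 4, 4]))   # False: 16 GB + overhead exceeds the host
```

The same check, repeated for each resource type, tells you quickly whether a planned mix of virtual servers is realistic on a given box.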

Reliability of your applications also increases with virtualization, as you can create a fail-over server on the same machine as the primary application. Since both are on the same machine, in case the primary application fails, fail over to the backup will be much faster.

Better fault tolerance is yet another benefit of server virtualization. Instead of running multiple applications on the same OS, you're running each one on its own OS. This leads to fewer software faults, and even if a fault occurs in an application, it doesn't end up affecting other applications. This provides for better fault isolation.

Migration of legacy applications also becomes easier. Your current server hardware has reached end of life, but your legacy application hasn't. It needs to run on a new server, but on an older OS. Dedicating a new piece of powerful server hardware to an older OS and a legacy application can be overkill. Instead, run it alongside other applications using server virtualization.

SERVER CONSOLIDATION

Most of our resources either lie redundant or are wasted due to their non-judicious distribution. The capacity assigned to each remains unused most of the time, as peak loads don't last continuously. Even when the workload is at its maximum, all the resources are seldom occupied, because systems and processes are designed not to overshoot their maximum capacity. The same happens in our server rooms and data centers. Organizations end up with servers of different configurations, some of them completely out of date, running a variety of OSs and applications for different departments. Since these are individual servers, resources such as CPU power, disk space and RAM built into each remain idle to an appreciable extent. Just imagine: these unused resources, if added up, could be sufficient to run another such set of processes. But because they sit on discrete systems, they are not usable by any systems other than those they are allocated to.
To add to our woes, today's IT environment is unique. Budgets remain flat, business units now hold IT departments accountable for the services provided, and businesses demand less downtime and increased productivity. In short, the expectation is to do more with less. This has driven the need for IT consolidation as a way to streamline the IT infrastructure and help IT departments achieve this goal.

Server Consolidation at NSE
NSE had a collection of servers from various vendors about two years ago, spread across an area of 1,200 sq feet. With so many servers across such a large area, keeping track of their utilization was quite a challenge. Then NSE decided to go for server consolidation. It was done for its clearing and settlement and offline applications. These applications are a part of the trading system, but not a part of the trading engine itself. They handle the listings, memberships, etc.
Before consolidation, NSE had 15 servers from various vendors in its setup at Mumbai. The consolidation brought this down to 11, from HP and IBM. This may not seem like a major consolidation on the face of it, but in reality, NSE derived huge benefits from it. For one, they were initially running Unix-based servers with 15U form factors. So imagine the space saving when four such servers are removed from the racks. That's exactly what happened, and NSE was able to put in more powerful servers in less than half the space. There was a dramatic reduction in the AMC, so much so that the cost saving alone justified the consolidation. The consolidated machines were far more powerful than the previous ones, and NSE gained a 30% improvement in performance. It also improved their overall system availability. Finally, the real benefit NSE claims to have received from this consolidation was scalability. The new servers are not only capable of running NSE's current applications, but would be able to handle future applications as well.
Lastly, such benefits don't come without hardships, and NSE had its share of them during this exercise. The biggest challenge was the migration of everything from the existing servers to the new ones. Secondly, since most of the applications were home-grown, they had to be ported to the new platform. As you can well imagine, this is not an easy exercise, because code has to be lifted from one machine and made to run on a new platform. Consolidation didn't come easy for NSE, and it took almost a year to complete.
With inputs from Ashish Dandekar
Senior Manager Enterprise Management, NSE, IT

Unleash the power of server consolidation. In simple terms, combine a few servers and be able to work on any OS or application you like. But what lies in the world of consolidated servers for an organization? And why should one take the pain to consolidate at all? After all, it costs money. While a server consolidation effort provides IT departments many operational and strategic advantages, often the most important aspect of a successful consolidation is the financial value.

Talking of the costs involved, Mahindra & Mahindra, an organization that has very recently upgraded to consolidated servers on a SAP R/3 platform (refer to the case study), puts it very elegantly: "The cost of no change is more than the cost of change."

So what is server consolidation? At its core, server consolidation is an enabling technology encompassing not just hardware, but software, services and, most importantly, the systems management disciplines and 'best practices' to tie it all together.

Mahindra & Mahindra: Project Sankraman
Scenario: M&M had a distributed implementation of SAP R/3 since 1997 at different locations. At that time SAP R/3 did not support Indian taxation requirements and auto industry-specific processes. Workaround solutions called 'Mahindra Add-ons' or z-programs were developed by MCL (Mahindra Consulting Ltd). Though these met the requirements, the performance of the system was affected. Having realized the benefits from ERP, M&M moved on to supply chain planning and execution, business intelligence, and connecting partners. The new-dimension products (APO, BW) implemented to meet the changed requirements necessitated an upgrade of the R/3 system to version 4.6B. This was done in 2001. The latest upgrade, until June 2004, was R/3 Enterprise version 4.7. It had better functionality and catered to Indian taxation and auto industry-specific processes (as it used the z-programs). It was then that M&M decided to take advantage of the new and improved technology making its way into India. They wanted to do away with the z-programs, as the maintenance costs were high and data consolidation was cumbersome.

The distributed environment had inherent problems: two servers communicating with each other over a gigabit LAN was not possible. So came the idea of consolidating the system on a single server, made feasible by a marked improvement in the communication infrastructure. Data center consolidation had already been done at Kandivli, with all M&M servers at Worli and all R/3 servers from Nashik, Igatpuri, Zaheerabad and Nagpur co-located at the Kandivli data center.

M&M decided to opt for server consolidation with a fresh implementation of SAP R/3 Enterprise version 4.7 in order to take maximum advantage of the improved functionality, rectify the insufficiencies of the earlier implementation, avoid working with sub-optimal workaround solutions created in-house, do away with delays in consolidation of information and data transfer, and stop incurring the cost of maintaining around 55 servers (20 database servers and 33 application/DRS servers) with operating difficulties in server-wise backup, patch upgrades and version upgrades. This would put R/3 Enterprise on a single consolidated server with ease of maintenance on an enhanced technology platform, reducing the inefficiency due to z-programs to a minimum. In addition, no version upgrade would be needed till 2009 for core R/3. This would enable them to use the enhanced functionalities on an improved platform with ease of operation and effective data transfer between systems.


Value propositions:
SAP R/3 Enterprise 4.7 would provide an advanced ERP base for advanced versions of new-dimension SAP products like IS-Auto, SRM, CRM, etc within the organization. With this, M&M would be able to exploit the new technology platform provided by SAP and effectively meet business requirements like vehicle sequencing, vehicle tracking, warranty management, etc. Functionality would improve, especially with respect to India-specific and auto industry-specific requirements, rectifying the insufficiencies of the earlier implementation. As the number of servers reduces and all of them are centrally located, the effort required to maintain them reduces proportionately. Facilities management teams can ensure better support in authorization, application of support patches, version upgrades and data backup. The time and effort currently spent clearing stuck IDocs from diverse systems would be saved. Consolidation of information and data transfer would happen on a single server without delay, with reduced time and effort in periodic consolidation of information and data flow. Currently, on account of separate servers, a lot of effort is invested in consolidation of information and data flow through ALE, data uploads, etc. This would get eliminated, and integrated data would be available on one server. This is expected to improve data visibility and facilitate decision-making (inventory control, production planning, funds planning, etc across the plants/company). Company-wide analytics can now be generated from a single consolidated server. This would also reduce the time for period closing of accounts on a monthly, quarterly as well as annual basis. The benefits that server consolidation brings in by upgrading to R/3 Enterprise include:

• Latest application platform, i.e. Web Application Server 6.20
• Flexible upgrade strategies
• Integration of CIN with standard SAP
• Supports future developments (collaborative infrastructure)

After studying various options, the corporate IT team decided to go in for UNIX-based servers for its R/3 database. IBM p5 with AIX was chosen after in-depth evaluation. M&M also decided to deploy the existing Intel-based servers as application servers with Win 2003. The state-of-the-art data center at Kandivli, which has won the best NCPI (Network Critical Physical Infrastructure) award, provided for redundancy of power, network and air-conditioning.

Cisco routers and switches are used all across the landscape. Leased lines and MPLS connectivity connect all the manufacturing plant locations. Spare parts depots, various area sales offices and branch offices are connected by MPLS technology provided by BSNL. While the expected life of the project is 5 years, it took less than a month to get completed successfully.
With inputs from Krishna H Nabar,
Head - Business Solutions, Corporate IT. Mahindra & Mahindra Ltd


The goal is to optimize and simplify your existing IT infrastructure, not just the servers, but the entire end-to-end infrastructure. The objective is to provide a stable foundation for new solution deployment: e-business, enterprise resource planning, supply chain management and business intelligence.

Server consolidation is not only the physical movement from a distributed architecture to a centralized one; it also comprises collocation, hardware/data layer integration, application integration and Web host layer consolidation. Let's briefly look at what each of these signifies.

Collocation: An important part of server consolidation, hardware relocation brings immediate cost savings on server management and operation. You also get better physical security, availability and system-usage capabilities.

Hardware/data layer integration: This means reducing the number of servers and centralizing storage. It also lowers operating costs while improving performance and maximizing the availability of applications and data.

Application integration: This means shifting the IT environment from multiple applications accessing multiple databases to solutions running on fewer servers that integrate databases and applications. This improves performance while reducing the TCO (total cost of ownership).

Web layer hosting: This consolidates the applications that run on a Web server onto a smaller number of servers to regain data center space and reduce expenditure. You can do this by clustering, using virtual machines, etc.

Ways of server consolidation
Server consolidation is an important part of IT consolidation. Today's servers consistently deliver increased reliability and processing power. The technological capabilities of servers present new options for IT managers. Large servers with multiple processors for mission-critical applications and smaller servers designed to utilize space more efficiently can help IT managers streamline their infrastructure.

A successful server consolidation initiative will result in tangible financial, operational and strategic benefits while making the IT environment more efficient and easier to manage. This can be done in the following ways:

• Centralization: You can collocate the servers and/or storage into fewer locations or one central location
• Physical: This means consolidating servers or storage systems with the same application types or platforms onto fewer or larger systems with the same application type or platform
• Data integration: Here you combine data with different formats into a common format or platform
• Application: This consolidates the workloads that servers or storage systems support onto fewer or larger systems, letting you manage them at the application level
• Storage: This means unifying storage onto fewer or larger storage systems independent of the server type, OS or application
Application consolidation is the hottest topic of discussion among IT professionals. You can consolidate your applications using either heterogeneous or homogeneous consolidation. While heterogeneous application consolidation combines several different application types on the same server, homogeneous consolidation combines several instances of the same application on a single server.
Both approaches can reduce the number of servers required to run applications and maintain the IT infrastructure, so use them according to your enterprise-specific needs.
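At its heart, either flavour of application consolidation is a packing problem: fit application loads onto as few servers as possible without exceeding any server's capacity. A minimal first-fit sketch, with hypothetical load figures, illustrates the idea:

```python
# Minimal first-fit sketch of consolidation as bin packing. Loads and
# the per-server capacity are hypothetical units, not real measurements.

def first_fit(loads, capacity):
    """Greedy first-fit packing; returns a list of per-server load lists."""
    servers = []
    for load in loads:
        for s in servers:
            if sum(s) + load <= capacity:
                s.append(load)     # fits on an existing server
                break
        else:
            servers.append([load])  # no existing server fits: add one
    return servers

# Eight app instances that used to occupy eight separate servers:
print(first_fit([30, 20, 50, 10, 40, 25, 15, 35], capacity=100))
```

Here eight applications pack onto three servers. Real consolidation planning must also account for peak (not just average) loads and for fault-isolation requirements, but the arithmetic is the same.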

Why consolidate?
There are different drivers that influence an organization's decision on whether or not to consolidate its servers. Accordingly, strategies are defined and the extent of consolidation within an IT environment is decided. The driving forces behind a decision to consolidate could be financial, operational or strategic. In most organizations, the number of servers increases as they grow. But not all of them have the financial strength to buy, implement, support and maintain these servers. So you may want to consolidate.

Otherwise, on the operations front, your organization may be facing higher downtime and end-user frustration due to peaking server capacities or overloading. Hence the need to upgrade to an integrated environment where you can allocate resources properly and manage them efficiently as well.

On the other hand, there are some visionaries who want a better and more stable IT environment and processes in place. They know that with server consolidation in place, they would get better functionality and availability. One objective might be to obtain, say, 99.99% uptime. Not only would you reduce the costs associated with managing your IT infrastructure, but you would also benefit from a less complex infrastructure and increased agility.

From the customer's point of view, they would be looking to reduce the number of servers, data center space, the TCO (total cost of ownership) and operating costs, while simplifying the IT environment. While a server consolidation initiative might reduce the number of servers in your environment, it is also likely that the scalability of the server environment will be greatly increased, allowing you to put resources to use exactly when and where needed.

VIRTUAL SERVERS COMPARED
Today, if you are looking for server virtualization solutions, there are quite a few available in the market that let you do server/OS partitioning as well as consolidation. So we decided to check out some of the most popular ones, including VMware, MS Virtual Server and Xen.
We took a look at four of them, tested them in our labs and compared their features and performance against each other. To test these products, we used an IBM xSeries 225 server with dual 2.4 GHz Xeon processors, 2 GB RAM and a SCSI (10k RPM) hard drive. We connected the server and its clients on a Gigabit network.

To test the performance of the virtual systems, we created two virtual machine instances: one running Windows XP with one dedicated processor (2.4 GHz), 512 MB RAM and 10 GB of HDD space, and the other running PCQ Linux 2005 with exactly the same specs. We then ran Business Winstone 2003 on the Windows XP instance, and on Linux we timed a kernel compilation run. We also ran the same tests while maxing out the resources of the host OS, to check whether that affected the performance of the virtual instance. We have also taken into consideration the usability and manageability of each product, and mention them in detail here.

HDFC Bank's Migration to IBM p570
Scenario: HDFC Bank runs its corporate banking business on FlexCube, sourced from i-flex Solutions, while it uses Cash Tech Solutions' flagship product 'CashIn' to manage its cash business. These systems not only run the processes smoothly, but also interface with various other systems in the bank. The existing Alpha server infrastructure was not geared to meet the bank's aggressive expansion plans and a growing customer base. With expanding operations and scaling up of the corporate banking and cash management systems, the bank required additional infrastructural support that would optimize the systems and future-proof the bank's IT investments. They needed a solution that would provide greater infrastructural flexibility, better manageability and handling of peak workloads, while at the same time offering significant performance improvements.

IBM proposed a solution built around the advanced POWER5 processors, and the scalability of the IBM system and its virtualization capability fit the requirements perfectly. Based on IBM POWER5 processors with simultaneous multi-threading and a unique scalable, building-block packaging, the p570 that HDFC used is well suited for server consolidation projects, database management, etc. The solution was designed to consolidate databases over highly available servers, with separate partitions housing different instances; offer totally redundant automatic failover, configured to support both Oracle RAC and non-RAC environments; and handle scalability, expansion and increases in workload. A best-of-breed technology and server consolidation approach was used, offering the best system performance on standard benchmarks (TPC-C, SAPS, etc.).

The project took less than a year to complete. It was initiated in August 2004, the solution was finalized within a month, and it went live in March 2005. The lifetime of the project is expected to be 4-5 years, after which it would be reviewed again.

Architecture: IBM's p570 servers are configured in an HACMP cluster environment. Two such servers are placed at the primary site and one at the DR site. All the servers are connected to SAN switches, which in turn are connected to Hitachi storage for the production/DR environment and IBM storage for the UAT environment. The solution has been configured with redundant paths, leaving no single point of failure in the connectivity to the storage and the network. A Veritas NetBackup solution has been implemented for efficiently taking backup copies of databases on LTO Gen3 tapes. CA-Unicenter agents are configured on the servers for online real-time monitoring, and proper escalations for proactively managing these servers are defined.

The FlexCube migration involved a complex qualification of Oracle 9i on the IBM AIX 5.3 platform: migration of Forms 6 to 6i, migration of Business Objects to the latest supported version, a whole lot of load-testing scripts using the Rational tool, and so on. A similar exercise was carried out to certify CashIn on the new platform.
Value propositions:

• The IBM solution offered a clear roadmap for p5 technology. The new processor and hardware were well suited for scaling along with the bank's growing business.
• With the superior architecture of IBM POWER5 and Oracle 9i, a 2-3x performance improvement was achieved with fewer CPUs. Software licensing costs reduced to a sizable extent, while offering a low-cost path for future growth.
• With IBM's dynamic reallocation of resources and on-demand computing, the problem of tackling peak loads has been addressed. The IBM solution also offered the technological superiority of virtualization, allowing the bank to create pseudo-servers within servers by logically partitioning them and dynamically assigning resources. This lets the bank automatically manage peak-hour loads by spreading them across unutilized resources from other partitions.

VMware ESX Server 2
VMware ESX Server is aimed at the server partitioning and consolidation needs of enterprises. The solution efficiently allocates hardware resources such as processor, RAM, storage and NICs to the virtual machines, and lets you utilize every bit of your server's performance. The server comes as a single bootable Linux CD with all the prerequisites and the actual software in place, so to install it you just need a single barebones machine (without any OS installed). The CD carries an older version of Red Hat Linux, and its installation is a piece of cake. After the installation you get a terminal-based Linux box, but all the management can be done from a remote machine over a Web browser. Coming to features, the product has SAN support and can boot the ESX host server from the SAN directly. It can also be used to create a replica or take a backup of a virtual machine, and it has VMware Virtual SMP support for multi-processor virtual machines. ESX has built-in support for NIC teaming and creating VLANs. On the resource allocation front, the software supports Intel HT processors and lets you allocate multiple processors to different virtual machines. For storage, you can host virtual disks on a SAN, and you can even emulate a SCSI or IDE drive interface as needed. The NICs of the physical server can be shared evenly among the virtual machines, or you can allocate them in dedicated mode to a particular virtual machine.
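Under the hood, each of these virtual machines is described by a plain-text .vmx configuration file. A minimal sketch is below; the key names follow VMware's .vmx format, but the file names and values are illustrative assumptions matching our Windows XP test VM, not details from VMware's documentation:

```
displayName = "winxp-test"
guestOS = "winXPPro"
memsize = "512"
scsi0:0.present = "TRUE"
scsi0:0.fileName = "winxp-test.vmdk"
ethernet0.present = "TRUE"
```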

To manage ESX, you get a Web interface that lets you do everything, from configuring hardware to running the virtual machines remotely. If you want more features such as proactive management and rapid OS deployment, you can buy VMware's VirtualCenter management software. This is a virtual infrastructure management tool that lets you manage all of VMware's virtual servers from one place. It lets you do proactive tasks like sending alerts and resetting virtual machines if it smells something fishy on them. To rapidly deploy OSs, you just need to create a template of a base deployment, and then the same template can be used to deploy further virtual machines. An 'Instant Provisioning Deployment Wizard' lets you do this in less than 10 minutes. You can even make a clone of a virtual machine and restore it later in case of failure. The software also gives you migration capability, using which you can migrate virtual machines between VMware's virtual servers in your infrastructure.

Moving to security, VMware ESX Server authenticates all remote users who connect to the server using the Web interface or the remote console. Network traffic between the server and the client is secured using SSH.

Finally, on the performance front, VMware ESX Server gave a decent score of 25.7 in Business Winstone on the Windows XP VM. The kernel compilation on PCQ Linux with the same specs clocked 10 minutes 47 secs.

Overall, this product gave us the best performance among all the products we looked at, though it could not beat Xen in the kernel compilation test.

VMware GSX Server 3
This is a variant of the ESX Server that installs on top of an existing Windows or Linux OS. It is basically aimed at enterprises looking for a virtualization solution that is easy to set up on existing infrastructure. Feature-wise, VMware GSX is a scaled-down version of the ESX Server and supports fewer resources. This software does not have SAN or Virtual SMP support; also missing is support for creating VLANs and NIC teaming. Since this software is installed over an existing OS with its own services and applications, its performance is comparatively lower than that of ESX.

Coming to the management part, you can configure the virtual machines remotely through its Web interface, or you can use the VMware virtual machine console. It also supports the VirtualCenter management tool discussed earlier, which means you can set up proactive alerts and do rapid OS deployment on the GSX Server as well. Like ESX, it also has failsafe clustering support, but only within a particular host server, whereas ESX allows both intra-host and cross-host clustering of VMs. On the security front, GSX authenticates users connecting to the server, and the traffic between the host and clients is transferred over an SSH tunnel.

Performance-wise, GSX showed a marginal drop over VMware ESX and a drastic drop when compared with Xen. It gave a score of 19 in Business Winstone, and the Linux Build test took around 12 minutes 15 secs.

MS Virtual Server 2005
MS Virtual Server 2005 is the virtual machine solution for Windows 2003 systems. Like the other virtual servers, it emulates and runs multiple OSs concurrently on a single physical server. The solution is aimed at organizations looking for efficiency in software testing, development and server consolidation scenarios.

The product features robust storage and networking capabilities and provides an easy-to-use Web interface. Its storage (virtual disk) is a file hosted on any medium the host server can use. However, to boot a virtual machine from an ISO image over the network, you first have to copy the ISO to the host machine; this looks like a bug in this version. We did not see this problem with either VMware product.

On the management front, the software allows you to manage and configure the virtual machines remotely from your Web browser. You can even see and work on a virtual machine in the Web browser itself, seamlessly. The same interface also allows you to monitor the health of the virtual server. If you have multiple deployments of MS Virtual Server 2005 on your network, you can use a single Web interface to manage all of them. This is a good feature that we did not see in the other offerings, although the same capability can be had in ESX/GSX using the VMware VirtualCenter management software. In addition, you can set up proactive alerts using MOM (Microsoft Operations Manager) with the Virtual Server Management Pack. Still, compared against VMware VirtualCenter, we found MSVS 2005 lacking in features and in detailed reports on the health of your virtual server.

Coming to security, it can authenticate users from an Active Directory domain, and the remote connection from clients to the host is established via SSH. On the performance front, MS Virtual Server disappointed us. It took longer to install both Windows XP and PCQ Linux. In Business Winstone it gave a score of 8.8, drastically lower than everything else, and kernel compilation took around 17 mins 20 secs, the slowest of the lot.

Xen
In our hunt for the right server virtualization software for your enterprise, we came across this nice piece of software that is becoming very popular among Linux Web hosting companies nowadays. It works in a different manner from the other contenders in the lot. As we have seen, VMware and MS VS do system virtualization, whereas Xen does OS virtualization. That is, the guest OS is ported to run on top of the host OS with a specialized virtualization kernel. This technique increases the performance of the virtual machines, and we saw that reflected in our test results. The reason for the performance gain is that there is no virtual machine layer: the guest OS is installed directly on top of the host OS, which removes the overhead that layer creates.

But at the same time, Xen is not a user-friendly thing, and you have to be more or less a Linux guru to make it run properly. The installation itself is quite easy, though, and took just around half an hour. We installed it on a PCQ Linux 2005 machine and got just two dependency errors, one for Python and another for Twisted. Both were sorted out with a single yum update, and we were able to boot the machine with the customized Xen kernel (for more details on how to install it, go through the article at http://www.pcquest.com/content/linux/2005/I05041204.asp).
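Once Xen is booted, each guest (domain) is described by a small configuration file, conventionally kept under /etc/xen. The sketch below is illustrative and matches our test VM's specs; the kernel path, domain name and disk mapping are assumptions for the example, not details from our setup:

```
# /etc/xen/pcqlinux - illustrative Xen 2.x guest configuration
kernel = "/boot/vmlinuz-2.6-xenU"    # Xen-aware guest kernel
name   = "pcqlinux"                  # domain name shown by 'xm list'
memory = 512                         # MB, matching our test spec
disk   = ['phy:hda7,hda1,w']         # physical hda7 becomes the guest's hda1
root   = "/dev/hda1 ro"
```

The domain is then booted with 'xm create -c pcqlinux', which starts it and attaches its console.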

From then on, we needed to get our hands dirty. First, we encountered a weird problem that Xen faces with FC3-based distros: they don't have the /dev/console file, which Xen requires to boot, and this caused the virtual machine to reboot continuously. After sorting that out, we figured out that there was a similar problem with the 'udev' service. We managed to solve it by first deleting the /dev/null file and recreating it with the command 'mknod /dev/null c 1 3'. Now our Xen was up and running.
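The two device-node fixes above can be scripted. This is a minimal sketch using the standard device numbers (console is c 5 1, null is c 1 3); it builds the replacement node before swapping it in, a slightly safer variant of the delete-and-recreate step we describe:

```shell
#!/bin/sh
# Device-node fixes for Xen guests on FC3-based distros.
# Needs root; exits quietly otherwise, so it is safe to dry-run.
if [ "$(id -u)" -ne 0 ]; then
    echo "not root; nothing done"
    exit 0
fi
# Xen needs /dev/console to boot the guest OS
[ -e /dev/console ] || mknod /dev/console c 5 1
# udev workaround: rebuild /dev/null (create the new node first, then swap)
mknod /dev/null.tmp c 1 3 && chmod 666 /dev/null.tmp && mv /dev/null.tmp /dev/null
echo "device nodes ready"
```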

We allotted this machine the same specs that we gave all the other contenders for benchmarking, ran the Linux Build test and found that it took the lowest time of the lot. Not only this, we also tested the guest OS while fully consuming the resources of the host machine, and found the difference to be negligible (a mere 3 seconds), and that was only because we had both file systems (of the host and the guest) on the same disk. So we can safely say the performance of the VM hardly gets affected by resource constraints on the host OS. The disappointing part of the software, though, is that it can only run Linux and BSD clones as the guest OS. So, no Windows on Xen. Another disappointment is that it can still only emulate 32-bit x86 machines; while you can install Xen on a 64-bit machine, it will not use the capabilities of the 64-bit processor. The next version of Xen, however, is supposed to add SPARC and 64-bit support.

Now comes the manageability part. After using Xen for a couple of days, we decided this thing is still rough around the edges and needs a good polish. The configuration is mostly command-line based. There is a Web-based interface available, but it refused to run on our PCQ Linux 2005 setup; to use it, you could try the Xen Live distro, since it has all the components of Xen pre-installed and configured. But even this Web-based configuration tool cannot stand up to the VMware and MS VS management consoles. Overall, if your organization needs performance more than easy manageability, and you can leverage Linux expertise, this could be the one for you. And you obviously should not need to run Windows.
