Understanding a server replacement cycle

Waiting too long increases the risk


The rapid expansion in the volume of data handled by businesses – often estimated at about 40% per year – coupled with regulatory pressures is pushing IT managers towards important decisions on the replacement of ageing servers.

It can of course be tempting to push a server's replacement past the usual two-to-three-year lifespan, but research conducted by IDC indicates that the failure rate for servers not replaced within their manufacturer's recommended lifespan increases by 85%, creating a serious risk for mission-critical servers.

There is also an issue with the installed software. Upgrade cycles for software often run ahead of the servers it runs on, so applications end up poorly optimised for the hardware and create a maintenance headache for IT managers and system administrators.

Making choices

As the workloads on servers continue to increase, replacing ageing hardware becomes a commercial imperative. Indeed, IT managers can expect servers that are more than five years old to have about a third more downtime than new hardware.

It is possible to make a financial case for a server refresh by benchmarking existing hardware against current workloads; servers that show poor performance are ideal candidates for replacement.
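One way to frame that financial case is to put a cost on the extra downtime of ageing hardware. The sketch below is a back-of-the-envelope illustration only: the availability figures and the hourly cost of downtime are hypothetical placeholders, to be replaced with a business's own benchmark and cost data.

```python
# Illustrative downtime-cost comparison for an ageing server versus a
# refreshed one. All figures are hypothetical placeholders.

HOURS_PER_YEAR = 24 * 365

def downtime_cost(availability, cost_per_hour):
    """Annual cost of downtime, given availability (0-1) and hourly cost."""
    return (1 - availability) * HOURS_PER_YEAR * cost_per_hour

# Assumed figures: 99.5% availability for the ageing server, 99.9% for
# new hardware, and £500 per hour of lost service.
old = downtime_cost(availability=0.995, cost_per_hour=500)
new = downtime_cost(availability=0.999, cost_per_hour=500)

print(f"Ageing server downtime cost:    £{old:,.0f}/year")
print(f"Refreshed server downtime cost: £{new:,.0f}/year")
print(f"Annual saving:                  £{old - new:,.0f}")
```

Even a modest availability gap compounds over a year of continuous operation, which is what makes the refresh case easier to quantify than it first appears.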

Of course, in an age of virtualisation, replacing servers can be a hard sell to CIOs. With the vast majority of servers using only about 10% of their capacity at any one time, the virtualisation route is worth investigating, but it should not blind IT managers to the risk of ageing servers failing.
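The 10% utilisation figure also hints at why virtualisation is so attractive: dividing a target host utilisation by the average per-server load gives a rough consolidation ratio. The target headroom below is an assumption for illustration, not a vendor recommendation.

```python
# Rough consolidation estimate from the article's ~10% utilisation figure.
avg_utilisation = 0.10     # typical load on an individual server
target_utilisation = 0.70  # assumed safe ceiling on a virtualised host

consolidation_ratio = target_utilisation / avg_utilisation
print(f"Approximate consolidation ratio: {consolidation_ratio:.0f}:1")
```

On these assumptions, around seven lightly loaded servers could be absorbed by a single virtualised host, which is why the business case for new hardware has to weigh consolidation savings against the failure risk of sweating old machines.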

In a white paper on the issue, IDC says: "Many organisations had previously enacted standard and routine refresh cycles that corresponded with vendors' refresh cycles - typically, about every 18-to-24 months. As companies now deal with their capital constraints, coupled with new demands, they keep their IT infrastructure in operation far beyond what was normally considered useful lifespan.

"A number of IDC studies show a significant decline in the availability and reliability of most x86 servers once they have been in operation for about three to 3.5 years. A regular refresh at about 3.2 years would be appropriate.

"However, many companies are pushing their servers to four or five or more years."

The cloud clearly offers IT managers a number of options when new servers are considered, and greater virtualisation is clearly part of the equation. However, with BYOD presenting new IT challenges and a trend towards the use of private clouds, a hybrid approach to the server replacement cycle is now needed.

Leveraging the resources that a business already has is clearly the first step, but ultimately, the purchase of new servers will be the right course of action for many businesses.