The link between the terrorist attacks in New York on 11 September 2001 and UK construction in 2006 is not immediately obvious.
But the tragedy appears to have set in motion a new approach to IT that is fuelling a boom in the building of data centres on this side of the Atlantic. The 11 September attacks highlighted the danger of locating back-up systems at the same site as the company headquarters. Several major banks found that when their Manhattan HQ was hit, so were the systems that should have saved critical data and speeded their return to normal service.
Now, financial institutions across the world are moving data centres out of capital cities. In the UK the data centre market has been buzzing for around two years and experts believe it has another two years of intensive building activity to go.
The data centre market is showing a significant revival. Growth is being fuelled by the demand to house IT equipment in resilient environments with apparently ever-increasing power consumption. Clients are also asking for whole systems to be duplicated and for buildings that will resist potential terrorist threats. As a result, construction costs are rising sharply.
In developing data centres, companies are seeking to improve performance. In other words, they want to boost revenue through increased processing power with high availability. The aim is to reduce the downtime resulting from IT outages.
The space required for all this has been cut by the development of high compaction equipment, often referred to as blade technology. However, a huge increase in the use of applications has led to an exponential rise in processing demand. Unfortunately, the energy consumption of high compaction equipment has not fallen at the same rate as its processing capability has risen. As a result, power consumption for each unit of space has grown dramatically.
To put this into perspective, a single cabinet loaded with blade servers may demand in excess of 15kW. This compares to perhaps as little as 1kW for conventionally loaded cabinets and a likely limit of 8kW for air-cooled cabinets. That’s the equivalent of placing two three-bar electric fires in each square metre of data hall space.
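As a rough check on that comparison, the short calculation below reproduces the figure; the 2.5 m² cabinet footprint is an assumed value chosen to include aisle and access space, and a three-bar electric fire is taken as roughly 3kW.

    # Back-of-envelope check of the heat density claim (assumed footprint)
    blade_cabinet_kw = 15.0    # blade-loaded cabinet demand from the article
    footprint_m2 = 2.5         # assumed data hall area per cabinet, incl. aisles
    fire_kw = 3.0              # a three-bar electric fire is roughly 3kW

    density_kw_per_m2 = blade_cabinet_kw / footprint_m2
    fires_per_m2 = density_kw_per_m2 / fire_kw
    print(f"{density_kw_per_m2:.0f}kW/m2, or {fires_per_m2:.0f} three-bar fires per m2")
    # prints: 6kW/m2, or 2 three-bar fires per m2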
The Uptime Institute, an organisation specialising in improving uptime management in data centre facilities, has developed a series of ‘Tier’ classifications that have become a common benchmarking standard for high specification data centres, covering the equipment critical to maintaining uninterrupted information availability. The highest, Tier 4, was introduced in 1994 and is often the baseline against which high specification data centre design is undertaken. At its heart is the provision of a space that is fully fault tolerant (capable of withstanding a major system failure without disruption to operations) and concurrently maintainable (capable of allowing system maintenance without disrupting the end user). In practice, however, achieving all the requirements of a Tier 4 system is often compromised by site factors, such as the feasibility of sourcing fully diverse power supplies from the local electricity supplier. This has given rise to additional categories such as ‘Tier 4 unwound’ and ‘Tier 3 plus’.
What all this means is that nearly every element of current data centre development has increased in cost and complexity. Enhanced processing power brings with it increased power supply requirements, electrical infrastructure distribution and cooling capacity. Add to this the duplication of whole systems to achieve resilience, additional system units to provide redundancy and construction elements to resist potential terrorist threats, and it is no surprise that construction costs have risen so steeply.
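To make the resilience arithmetic concrete, the sketch below shows how common redundancy topologies multiply plant counts; the load and unit sizes are illustrative assumptions, not figures from any particular scheme.

    import math

    # Illustrative only: how redundancy topology multiplies plant counts.
    # 'N' is the number of units needed simply to carry the load.
    it_load_kw = 1_800      # assumed IT load
    unit_kw = 500           # assumed capacity of one plant unit (e.g. a UPS module)
    n = math.ceil(it_load_kw / unit_kw)

    print("N  :", n)         # 4 units, no redundancy
    print("N+1:", n + 1)     # 5 units, one spare for maintenance or failure
    print("2N :", 2 * n)     # 8 units, whole-system duplication for fault tolerance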
Table 1 illustrates the progressive increase in costs that has occurred in the last six years alone. Although low resilience schemes have shown relatively steady growth, a sharp rise in cost has occurred in high end schemes, driven by escalating average load densities – typically ranging from 1,000 to 1,500 W/m² of Nett Data Hall Area (NDHA) – and enhanced levels of resilience.
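To see what those densities imply, the short calculation below converts them into total IT load for a hypothetical data hall; the 2,000 m² NDHA figure is assumed purely for illustration.

    # Illustrative only: total IT load implied by the Table 1 density range
    ndha_m2 = 2_000                          # assumed NDHA for illustration
    for density_w_per_m2 in (1_000, 1_500):
        load_mw = ndha_m2 * density_w_per_m2 / 1e6
        print(f"{density_w_per_m2} W/m2 over {ndha_m2} m2 gives {load_mw:.1f}MW of IT load")
    # prints 2.0MW and 3.0MW respectively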
These trends do not reflect a potential further hike where cabinet loadings above 8kW are considered, at which point air cooling is unlikely to be adequate and alternative cooling techniques, such as water or carbon dioxide, are required.
The trends shown in Table 1 assume that all available data hall space is completed in one operation. However, many site acquisitions deliver a greater overall space capacity than initial requirements demand, in order to make future expansion possible. Given the sensitivity of these developments, the cost of delivering future data hall space while minimising disruption to the existing space has a heavy bearing on initial costs. For example, pipework and electrical infrastructure may be installed from ‘Day One’ to serve future ‘Day Ultimate’ expansion. Table 2 demonstrates the possible increase in cost, relative to completed NDHA space, that may occur where progressive areas of data hall space are left incomplete.
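A simple way to see the Table 2 effect is to spread an assumed ‘Day One’ infrastructure cost over varying amounts of completed space; all figures below are hypothetical placeholders, not benchmark rates.

    # Hypothetical sketch: shared 'Day One' infrastructure sized for 'Day
    # Ultimate' capacity inflates the cost per completed m2 of NDHA.
    ultimate_ndha_m2 = 4_000       # assumed ultimate data hall area
    shared_infra_cost = 20e6       # assumed Day One pipework/electrical spend
    fit_out_per_m2 = 8_000         # assumed fit-out rate per completed m2

    for fraction_complete in (1.0, 0.5, 0.25):
        completed_m2 = ultimate_ndha_m2 * fraction_complete
        cost_per_m2 = shared_infra_cost / completed_m2 + fit_out_per_m2
        print(f"{fraction_complete:.0%} complete: £{cost_per_m2:,.0f} per completed m2")
    # 100%: £13,000; 50%: £18,000; 25%: £28,000 per completed m2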
Finally, it is fundamental that costs are prepared with an understanding not only of engineering benchmark costs but also of system structure. To address this, at Sense we have developed a cost model that can provide a detailed component-by-component breakdown derived, if necessary, from little more than Nett Data Hall Area and agreed resilience levels. The principle of the model is to follow the logical path of each system to derive unit quantities and then apply appropriate benchmarked unit rates. Table 3 demonstrates a sequence showing how some system component quantities and costs are derived.
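A minimal sketch of that principle is shown below. It is not Sense’s actual model, and the unit sizes and rates are invented placeholders, but it shows how quantities fall out of NDHA, load density and a resilience multiplier before benchmarked rates are applied.

    import math

    BENCHMARK_RATES = {        # assumed unit rates in £, placeholders only
        "ups_module": 250_000,
        "crac_unit": 45_000,
        "generator": 400_000,
    }

    def cost_model(ndha_m2, load_w_per_m2, resilience_factor):
        """Follow each system's logical path: load, then unit count, then cost."""
        it_load_kw = ndha_m2 * load_w_per_m2 / 1_000
        quantities = {
            # e.g. one 500kW UPS module per 500kW of IT load
            "ups_module": resilience_factor * math.ceil(it_load_kw / 500),
            # cooling sized to reject the IT load at ~90kW per CRAC unit
            "crac_unit": resilience_factor * math.ceil(it_load_kw / 90),
            # standby generation covering IT plus mechanical load, ~1,250kW per set
            "generator": resilience_factor * math.ceil(1.4 * it_load_kw / 1_250),
        }
        return {k: (q, q * BENCHMARK_RATES[k]) for k, q in quantities.items()}

    # e.g. 1,500 m2 of NDHA at 1,200 W/m2 with whole systems duplicated (2N)
    for system, (qty, cost) in cost_model(1_500, 1_200, 2).items():
        print(f"{system}: {qty} units, £{cost:,.0f}")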
These principles can be applied to many elements of a data centre design. They are vital to forming an early, detailed basis of cost for engineering services, which is an essential start for good data centre design.