Data Center Designing and Planning

Posted by Vijay Gupta, 12/05/2020

Data Center design

Data Center design involves planning the facility's computing resources as well as its HVAC requirements. The design also considers the construction of the building and its surrounding areas. There is now a growing awareness about designing sustainable Data Centers in green zones, using eco-friendly materials. The design also factors in waste recycling and the conservative use of energy and water.

 

Data Center architecture

It is a layout of Data Center resources and equipment which acts as a blueprint for the construction of a Data Center facility. The architecture specifies how different resources and devices are connected and how process workflows are managed in the Data Center. The architecture can be specific to network, computing, security, or information based on which physical or virtual layouts are constructed.

 

Building Management System (BMS)

A BMS is installed in a Data Center building for monitoring and controlling equipment in the building through a computer-based system. This would include management of fire systems, power systems, cooling, lighting, ventilation, and security.

 

Power Usage Effectiveness (PUE)

PUE is a ratio used to determine the energy efficiency of a Data Center: the total power entering the facility divided by the power consumed by the IT equipment. The Green Grid has proposed PUE benchmarks that can guide IT professionals in understanding energy efficiency in Data Centers. Different PUE values have different impacts on infrastructure efficiency. A typical industry average PUE is 2.0, which corresponds to a Data Center infrastructure Efficiency (DCiE) of 50%. Companies ideally target a PUE of around 1.2, which reflects high energy efficiency and corresponds to an infrastructure efficiency of about 83%.

 

Data Center Infrastructure Efficiency (DCiE)

The term is used to assess the energy efficiency of a Data Center. It is calculated as the percentage of the total power delivered to the facility that actually reaches the IT equipment; in other words, it is the reciprocal of PUE expressed as a percentage. DCiE values typically vary between 33% and 83%, the former being the least efficient and the latter the most energy-efficient Data Center.
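As a quick illustration, the sketch below computes PUE and DCiE from two hypothetical power readings; the 1,200 kW and 1,000 kW figures are illustrative, not measurements from a real facility.

```typescript
// PUE = total facility power / IT equipment power.
// DCiE = IT equipment power / total facility power, as a percentage.

function pue(totalFacilityKw: number, itEquipmentKw: number): number {
  return totalFacilityKw / itEquipmentKw;
}

function dcie(totalFacilityKw: number, itEquipmentKw: number): number {
  return (itEquipmentKw / totalFacilityKw) * 100; // reciprocal of PUE, in %
}

// A facility drawing 1,200 kW in total to run 1,000 kW of IT load:
console.log(pue(1200, 1000).toFixed(2));  // "1.20"
console.log(dcie(1200, 1000).toFixed(0)); // "83" (%), matching the target above
```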

 

Disaster Recovery (DR)

DR is the process of protecting organizations from natural disasters and emergencies. DR allows an organization to quickly resume mission-critical operations after a disaster, with minimal disruption to the business. It involves procedures for recovering lost data and keeping important functions running. To achieve this, a secondary site, called the DR site, is maintained as a close copy of all aspects of the primary site. If the primary Data Center goes down, traffic is switched over to the secondary site, often automatically and within seconds. The primary and secondary sites need to be periodically synchronized so that each remains a close copy of the other. A simplified failover check is sketched below.
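The sketch shows the basic switchover logic in miniature, assuming a hypothetical healthCheck() probe and a hypothetical routeTrafficTo() switch (in practice, a DNS or load-balancer update); real DR orchestration also has to handle replication lag, split-brain scenarios, and failback.

```typescript
type Site = "primary" | "secondary";

// Placeholder probe; a real check might be an HTTP request or heartbeat.
// Here the primary is hard-coded as unreachable to simulate an outage.
async function healthCheck(site: Site): Promise<boolean> {
  return site !== "primary";
}

// Placeholder switch; a real implementation would update DNS or a load balancer.
function routeTrafficTo(site: Site): void {
  console.log(`Routing traffic to the ${site} site`);
}

async function failover(): Promise<void> {
  if (await healthCheck("primary")) {
    routeTrafficTo("primary");
  } else if (await healthCheck("secondary")) {
    routeTrafficTo("secondary"); // automatic switchover to the DR site
  } else {
    console.error("Both sites unreachable; paging operators");
  }
}

failover();
```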

 

Green Data Center

A green Data Center is designed to maximize energy efficiency and minimize the negative environmental impact of its systems. These Data Centers utilize renewable forms of energy such as solar power and use reusable and recyclable materials. They work on green principles that reduce wastage and CO2 emissions, thereby minimizing the adverse impact of technology on the environment. Green Data Centers also recycle water for non-potable uses and employ passive cooling techniques to save on air-conditioning power. Green Data Center buildings are built in green zones, and their architecture and construction consider sustainability measures at various levels.

 

Business Continuity

Companies need Data Centers to keep running 24 hours a day, 365 days a year. However, unexpected problems such as power cuts or physical damage from a natural calamity can occur. Business Continuity Planning (BCP) is the proactive contingency planning process for such emergencies; it helps ensure that the Data Center continues to operate in these situations, thereby ensuring maximum uptime.

 

Data Center Interconnection

Data Center interconnection links two or more Data Centers (from the same or different service providers) to achieve a common goal. Interconnected Data Centers pool resources so that individual Data Centers can easily meet demands for scalability by utilizing the resources shared between them. Some organizations opt for interconnected Data Centers from different service providers at a global level, to expand their global footprint and serve customers in different geographies. This is a strategic decision for strengthening their global presence.

 

Power Redundancy

Power redundancy is achieved using additional power units that can power the infrastructure if the main power supply fails. High-end computing units have two redundant power supply units embedded: only one power supply is used at a time, while the other is held in reserve for emergencies. This is called an N+1 configuration, where 'N' denotes the components needed to carry the load and '+1' denotes one extra component for backup purposes, as the sketch after this entry shows.
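The sketch below makes the N+1 and 2N arithmetic concrete; the unit counts, loads, and capacities are illustrative.

```typescript
// N is the number of units needed to carry the load by themselves.
function unitsNeeded(loadKw: number, unitCapacityKw: number): number {
  return Math.ceil(loadKw / unitCapacityKw);
}

// Classify an installed configuration as 2N, N+1, or non-redundant.
function redundancyLabel(installed: number, loadKw: number, unitCapacityKw: number): string {
  const n = unitsNeeded(loadKw, unitCapacityKw);
  if (installed >= 2 * n) return "2N (fully redundant)";
  if (installed >= n + 1) return "N+1 (one spare)";
  return "no redundancy";
}

// A 9 kW load on 10 kW supplies: N = 1, so a dual-PSU server is N+1
// (and, with N = 1, incidentally also 2N).
console.log(redundancyLabel(2, 9, 10));  // "2N (fully redundant)"
console.log(redundancyLabel(4, 25, 10)); // N = 3, so 4 units give "N+1 (one spare)"
```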

 

Uptime

The tier-based uptime classification was devised by the Uptime Institute. It is crucial to have minimum downtime in the Data Center, and this is gauged in terms of the uptime percentage: the closer that figure is to 100%, the higher the availability of resources in that Data Center. On this basis, Data Centers are classified into four tiers.

 

Data Center Tiers

Data Centers are classified into four tiers on the basis of their uptime: Tier-1: 99.671%; Tier-2: 99.741%; Tier-3: 99.982%; Tier-4: 99.995% (converted into annual downtime in the sketch after this section).

A Tier-1 Data Center is the most cost-effective and provides no redundancy. A Tier-2 facility has redundant components for power and cooling. A Tier-3 Data Center adds multiple power and cooling distribution paths. Tier-4 Data Centers are the most complex and offer the highest levels of redundancy for power and cooling components. The increasing levels of component redundancy are described by terms like N+1 and 2N.

A Tier-4 Data Center is a fault-tolerant facility that comes with full redundancy (2N): double the amount of resources, to ensure redundancy for almost every single system and component in the Data Center.
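The sketch below converts the tier uptime percentages listed above into maximum annual downtime, using 8,766 hours (365.25 days) as an approximate year.

```typescript
const HOURS_PER_YEAR = 24 * 365.25; // 8,766 hours

function annualDowntimeHours(uptimePercent: number): number {
  return HOURS_PER_YEAR * (1 - uptimePercent / 100);
}

const tiers: [string, number][] = [
  ["Tier-1", 99.671],
  ["Tier-2", 99.741],
  ["Tier-3", 99.982],
  ["Tier-4", 99.995],
];

for (const [tier, uptime] of tiers) {
  console.log(`${tier}: ~${annualDowntimeHours(uptime).toFixed(1)} h/year`);
}
// Tier-1: ~28.8, Tier-2: ~22.7, Tier-3: ~1.6, Tier-4: ~0.4 hours of downtime per year
```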

 

Meet-Me-Room (MMR)

This is a room within a Data Center facility where telecommunication companies can physically connect to one another and exchange data without having to pay local loop fees. An MMR provides a safe production environment where the carrier handover equipment can be expected to run on a 24/7 basis with minimal risk of interruption.

 

Colocation Center (Colo)

A colocation facility provides physical space, power systems, cooling systems, security services, networking, and other resources to multiple organizations on a shared basis. Customers can have their own servers, storage, and applications installed in the ‘colo’ space and use the additional services (like physical security) provided in the facility.

 

Carrier-neutral Data Center

A carrier-neutral Data Center allows interconnections to be made between different colocation services and telecom carriers. Carrier neutrality in colocation Data Centers provides many benefits, such as cost efficiency, greater scalability, a lower risk of data loss, flexibility, and local redundancy.

 

Express Routing

Routing defines how an application's endpoints respond to client requests. Express provides a Router, a mini application onto which handlers for specific paths can be mounted; for example, a login route can be created, and parameter values can be read directly from the URL.
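A minimal sketch, assuming Node.js with the express package installed (npm install express); the route names and paths are illustrative.

```typescript
import express from "express";

const app = express();
const router = express.Router(); // a "mini application" for a group of routes

// A login route; `:user` is a route parameter read from the URL.
router.get("/login/:user", (req, res) => {
  res.send(`Login page for ${req.params.user}`);
});

app.use("/auth", router); // mount the router under a path prefix
app.listen(3000); // GET /auth/login/alice -> "Login page for alice"
```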

 

Close-Coupled Cooling (CCC)

This is a system of cooling that brings heat transfer as close as possible to its source, such as the equipment rack, so that exhaust air is captured immediately and cooling is delivered more effectively. By shortening the path between the heat source and the cooling unit, a CCC system can provide large energy savings in a Data Center.

 

Data Center Infrastructure Management (DCIM)

DCIM converges building facilities and IT facilities for an organization so that administrators can have a holistic view of the whole facility including energy performance, floor space utilization, and equipment performance. Administrators can combine, store as well as analyze data from different infrastructure components.

 

Cloud Data Center

When the physical infrastructure of a Data Center is replicated in a virtual environment, the core benefit is scalability. In a cloud Data Center, a third-party service provider leases this space to organizations, with the virtual resources shared between multiple companies. Each organization pays for these resources on a pay-as-you-consume model.

 

Data Center migration

Data Center migration involves the transfer of data from one operating environment to another. An organization may be changing its Data Center location, which requires migrating data from the previous facility to the new one. Data migration is also required when an organization wants to move from traditional physical infrastructure to a cloud system. A migration from one site to another may also require establishing new layouts for power, HVAC, cabling, and electrical work, adjusted to the needs of the new Data Center environment.

 

Virtual Data Center

A virtual Data Center is a cloud-based Data Center that pools together the resources available on the cloud infrastructure, such as RAM, CPU, storage space, and bandwidth, to fulfill enterprise requirements. Unlike a cloud Data Center, in which machines are used on demand for the storage of company data, in a virtual Data Center companies purchase computing resources, much like hardware components, and can thus have their own set-ups.

 

Computer Room Air Conditioning (CRAC) unit

CRAC is an air conditioning system used for monitoring and maintaining temperature, humidity, and airflow in a Data Center. Traditional air conditioners only cool, while CRAC units also provide climate control. A typical CRAC system contains a chilled water unit, a compressor, and condensers, and can provide up to 100 tons of cooling.

 

Edge Data Center

Edge Data Centers are small facilities that extend the network edge to deliver cloud computing resources. They cache and stream content close to end users in order to serve them fast, which makes them ideal for latency-sensitive applications. In IoT networks, edge Data Centers are often used as clearing houses for additional data processing. Edge Data Centers may provide their own services, but at the back end they connect to a main Data Center, which could be cloud-based infrastructure.

 

Data Center cooling

Data Center cooling is done using cooling units that supply cool air to IT systems; cooling can be air-based or liquid-based, and some designs use containerized cooling infrastructure placed in or near the facility. Cooling is an important part of Data Center management, because an efficient cooling design can significantly reduce the facility's energy consumption. Common cooling provisions in modern Data Centers include refrigeration, free cooling, chilled water, evaporative cooling, containment, and environmental monitoring.

 

Data Center consolidation

Data Center consolidation involves merging multiple facilities to reduce overall operating costs. Consolidation uses highly efficient technologies to reduce the volume of IT infrastructure and to enable business continuity. A consolidation approach reduces capital expenditure by increasing hardware utilization; it also saves space and reduces the carbon footprint of Data Centers.

 

Data Center components

A Data Center has an IT infrastructure that includes storage, networking equipment, and computing units. Components that are most commonly available in a Data Center environment include racks, power supply, cables, fire protection system, network operations center, meet-me room, environment control, and physical security.

 

Data Center Network (DCN)

A Data Center contains a set of resources including storage, networking, and computational resources that are connected through a communication network. Data Center networking is the process that establishes this interconnection between resources within a Data Center. A DCN plays a critical role in creating private and hybrid architectures.

 

Data Center rack (U)

Racks, also called cabinets, are used for housing electronic equipment and protecting it from physical and environmental damage. Racks are built so that the temperature of the systems inside is controlled, protecting them from damage due to cold or heat. A fully enclosed rack is normally 42U or 48U in height ('U' or 'RU' denotes a rack unit, and 1U = 1¾ inches or 44.45 mm). 42U and 48U full-height racks measure 6.125 feet and 7 feet respectively; half-height racks are 18U, 22U, or 27U.
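The rack heights quoted above follow directly from the 1U = 1.75-inch definition, as this small sketch shows.

```typescript
const INCHES_PER_U = 1.75; // 1U = 1.75 inches = 44.45 mm

function rackHeightFeet(units: number): number {
  return (units * INCHES_PER_U) / 12; // 12 inches per foot
}

console.log(rackHeightFeet(42)); // 6.125 ft, as stated for a 42U rack
console.log(rackHeightFeet(48)); // 7 ft, as stated for a 48U rack
```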

 

Data Center services

Basic Data Center services include the provision of infrastructure components that are required to store and manage data through facilities like storage, processing, distribution, and management. Data Center services include all facilities that are needed to manage systems including hardware, software, processes, and people.

 

Data Center solutions

Solutions include the Data Center products and services that are required for creating and maintaining a Data Center. Typical solutions include cloud services, virtualization, and data analytics. The facilities available as Data Center solutions differ from vendor to vendor.

 

Power Distribution Unit (PDU)

A PDU is a device used in Data Centers for distributing power to servers, storage units, networking equipment, and other components in the racks. PDUs can be basic or metered, and can be installed vertically or horizontally inside racks. Today's Data Centers use intelligent PDUs that offer power metering, power management, monitoring, and environmental control.
