Technological Data Center Solutions

Since the first data centers were built, computing systems have become far more advanced and powerful. That is what we will talk about today: the technological trends developing in the field of data management.

One driver of data center improvement is saving money. Companies and startups are trying to build more efficient systems, optimizing the floor space they occupy and the cost of maintaining them. For example, to reduce cooling costs, some organizations build data centers in colder regions of the world or underground, and Microsoft has gone even further by sinking a data center to the bottom of the sea. Such facilities may well solve the problem of where to place the equipment.

Also, in an attempt to shrink the floor area occupied by racks, along with the associated operational and capital costs, Vapor IO entered the game: in March 2015 it announced a new approach to designing data centers, the modular Vapor Chamber.

According to Vapor IO CEO Cole Crawford, the team managed to enclose a data center in a convenient housing that is easy to ship and deploy. One such cylinder, three meters in diameter, can accommodate six 42U mounting racks with a total IT load of up to 150 kW, making it well suited for deployment in dense urban environments.

The Vapor IO website states that 36 Vapor Chamber units fit in the same space required to house 120 standard mounting racks arranged with "hot" and "cold" aisles, while the Vapor Chamber also significantly reduces the facility's power usage effectiveness (PUE).
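
As a reminder, PUE is the ratio of total facility power to the power consumed by the IT equipment alone, so values closer to 1.0 are better. Here is a minimal sketch of the calculation; the 150 kW IT load matches the Vapor Chamber figure above, while the 30 kW of overhead is a made-up number for illustration:

```python
def pue(total_facility_kw: float, it_load_kw: float) -> float:
    """Power Usage Effectiveness: total facility power / IT equipment power.

    1.0 is the theoretical ideal (every watt goes to IT gear);
    older facilities often land closer to 2.0.
    """
    return total_facility_kw / it_load_kw

# Hypothetical example: 150 kW of IT load (one fully loaded Vapor Chamber)
# plus an assumed 30 kW of cooling and power-distribution overhead.
print(pue(total_facility_kw=180.0, it_load_kw=150.0))  # -> 1.2
```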

Among other things, Vapor IO has developed its own software called Vapor CORE (Core Operating Runtime Environment). The software lets data center operators gauge the performance of IT equipment using various metrics, whether the number of URLs processed or transactions completed.

This approach is similar to the one eBay used when creating its Digital Service Efficiency concept, presented in March 2013, though Vapor IO's development is far more versatile.
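
The idea shared by both is to express useful work per unit of energy rather than raw utilization. A minimal sketch of such a metric, in the spirit of eBay's dashboard; the function name and the traffic/energy figures are illustrative, not taken from either product:

```python
def service_efficiency(transactions: int, energy_kwh: float) -> float:
    """Useful work delivered per kilowatt-hour: e.g. URLs served or
    purchases completed per kWh consumed by the serving infrastructure."""
    return transactions / energy_kwh

# Hypothetical day of traffic: 90 million requests on 1,200 kWh of energy.
print(f"{service_efficiency(90_000_000, 1_200.0):,.0f} transactions/kWh")
```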

The Vapor Chamber is complemented by OpenDCRE (Open Data Center Runtime Environment), Vapor IO's open API, which enables applications to integrate with any operating data center. The information it provides in real time lets operators correctly identify resource needs and distribute power.
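
OpenDCRE exposes sensor and power data over a versioned REST API. The sketch below polls a reading with the `requests` library; the host, port, API version, URL path, and response field are assumptions for illustration, so consult the actual OpenDCRE documentation for the exact routes:

```python
import requests

# Assumed values for illustration; real deployments expose OpenDCRE on
# their own host/port, and route layouts differ between API versions.
BASE = "http://localhost:5000/opendcre/1.3"
RACK, BOARD, DEVICE = "rack_1", "00000001", "0001"

def read_temperature(rack: str, board: str, device: str) -> float:
    """Fetch one temperature reading from the (assumed) read endpoint."""
    url = f"{BASE}/read/temperature/{rack}/{board}/{device}"
    resp = requests.get(url, timeout=5)
    resp.raise_for_status()
    return resp.json()["temperature_c"]  # assumed response field name

if __name__ == "__main__":
    print(f"{read_temperature(RACK, BOARD, DEVICE)} C")
```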

Open Technological Data Center Solutions - The future of the data center

It seems that more and more global companies are embracing open-source technologies. In 2015 Microsoft embraced Linux, Apple opened the source code of Swift, its newest and most popular programming language, and cloud services simply could not function without Linux. This big wave also lifted Facebook, which launched the Open Compute Project.

Facebook works on open-source hardware and immediately introduces the new technologies into its own data centers. The company uses all the advanced developments available - SSDs, GPUs, NVM, and JBOF - as part of its new vision of building a network of powerful data centers.

"In the next 10 years, we have Let us closely artificial intelligence technology and virtual reality, says Mark Zuckerberg. This will require much more computing power than we have today".

To support this, Facebook completely reworked its infrastructure. Instead of the usual dual-processor server, it moved to a system-on-chip (SoC) based on the Intel Xeon-D, with lower power consumption.

"In the development of the new processor we are working closely with Intel. In parallel, the process of alteration of the server infrastructure for the system to meet our needs and was scalable", wrote the company.

This single-socket server, with its lower power draw, copes with web loads better than the dual-processor option. At the same time, the server infrastructure was rebuilt so that the number of processors per rack level doubled.

"Working the numbers of the new processor is fully in line with our expectations - report Facebook engineers. Moreover, the socket server has less stringent requirements on the heat sink".

All this has allowed Facebook to build a server infrastructure into which far more compute capacity can be packed while staying within 11 kW per rack.
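
The arithmetic behind that claim is simple rack-budget math. In the sketch below, the 11 kW ceiling comes from the article, while the per-node wattages and the fixed overhead allowance are assumptions for illustration:

```python
RACK_BUDGET_W = 11_000   # per-rack power ceiling cited by Facebook

def servers_per_rack(server_draw_w: float, overhead_w: float = 500.0) -> int:
    """How many nodes fit under the rack budget, leaving a fixed
    allowance for switches, fans, and power distribution."""
    return int((RACK_BUDGET_W - overhead_w) // server_draw_w)

# Hypothetical figures: a dual-socket node at ~400 W vs. a single-socket
# Xeon-D node at ~200 W; halving per-node draw roughly doubles density.
print(servers_per_rack(400.0))  # -> 26
print(servers_per_rack(200.0))  # -> 52
```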

Facebook has also shared its new approach to using GPUs, which have attracted increasing attention in recent years. Initially GPUs were used to speed up desktop graphics, but today they are actively built into supercomputers to solve far more complex problems.

The company uses GPU horsepower in its artificial intelligence and machine learning systems. A dedicated laboratory inside the company develops neural networks to solve specific problems, which obviously demands a completely different level of performance.

The Big Sur system, for example, uses Nvidia's Tesla accelerated computing platform with eight high-performance graphics processors at 300 watts each. Facebook's engineers optimized the power and thermal behavior of the new servers, allowing them to run in the company's existing data centers alongside classic servers. As a result, neural network training time was reduced significantly.
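
Putting the numbers together, the GPU complement alone eats a large share of a rack's power budget. A quick sketch; the 300 W per GPU and the 11 kW per rack come from the article, while the per-node host overhead is an assumption:

```python
GPUS_PER_NODE = 8
GPU_TDP_W = 300            # Tesla accelerator wattage cited above
HOST_OVERHEAD_W = 600      # assumption: CPUs, drives, fans per node
RACK_BUDGET_W = 11_000     # per-rack ceiling cited above

node_draw = GPUS_PER_NODE * GPU_TDP_W + HOST_OVERHEAD_W
print(f"GPU draw per Big Sur node: {GPUS_PER_NODE * GPU_TDP_W} W")  # 2400 W
print(f"Estimated node draw:       {node_draw} W")                  # 3000 W
print(f"Nodes per 11 kW rack:      {RACK_BUDGET_W // node_draw}")   # 3
```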

Another area where Facebook has planted its flag is memory. The company has used flash technology to speed up boot drives and caches for many years. Its engineers have replaced hard drives with solid-state drives, transforming storage from JBOD (Just a Bunch of Disks) into JBOF (Just a Bunch of Flash).

Facebook's new JBOF module, developed jointly with Intel, is named Lightning. By using the NVM Express (NVMe) protocol and the PCI Express interface, both optimized for SSDs, it achieves high speeds. However, as the company says, even that is still not enough.
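
From the host's point of view, an NVMe JBOF simply shows up as additional NVMe controllers on the PCIe fabric. A minimal Linux-only sketch, assuming the kernel nvme driver and the standard /sys/class/nvme layout, that lists whatever controllers are visible:

```python
import os

# Enumerate NVMe controllers the kernel exposes under /sys/class/nvme,
# the same device class a JBOF like Lightning presents over PCIe/NVMe.
SYS_NVME = "/sys/class/nvme"

for ctrl in sorted(os.listdir(SYS_NVME)):
    model_path = os.path.join(SYS_NVME, ctrl, "model")
    with open(model_path) as f:
        model = f.read().strip()
    print(f"{ctrl}: {model}")
```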

The company sees an answer in 3D XPoint, a technology developed by Intel and Micron. It is based on phase-change memory cells and ovonic (Ovshinsky) threshold switches, yielding memory with greater density than DRAM and speed beyond what flash memory can offer.

This technology could give rise to a new category of non-volatile memory: fast enough to sit on a DRAM bus, yet capacious enough to store large amounts of data.

Data center architects are already experimenting with these emerging storage technologies. There are known attempts, for example, to build a cluster of flash memory chips governed by a chip that emulates a disk controller.

In the future, the structure of data storage will change dramatically: behind the DRAM will sit a vast array of high-performance flash memory that "bypasses" the operating system and hypervisor, working with server DIMMs in direct-access mode. This would significantly reduce latency and allow data stores to be placed right next to the servers.
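
An early form of this "direct access" idea already exists on Linux as DAX: a file on a DAX-mounted persistent-memory filesystem can be memory-mapped so that loads and stores reach the media directly, with no page cache in between. A minimal sketch; the mount point and file name are assumptions, and the file must already exist at the mapped size:

```python
import mmap
import os

# Assumed path: a pre-created, >= 4096-byte file on a DAX-capable mount
# (e.g. ext4 mounted with -o dax), where mmap-ed reads and writes go
# straight to persistent memory, bypassing the OS page cache.
PMEM_PATH = "/mnt/pmem/example.bin"

fd = os.open(PMEM_PATH, os.O_RDWR)
try:
    buf = mmap.mmap(fd, 4096)  # map one page of the persistent region
    buf[0:5] = b"hello"        # a plain store; no read()/write() syscalls
    print(buf[0:5])
    buf.close()
finally:
    os.close(fd)
```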

Open Compute is changing the data center and cloud computing market. The joint efforts of major IT players are standardizing server development and driving its cost down. The main purpose of such open-source projects is to create maximally efficient and scalable server systems with low maintenance costs and energy consumption, and so far it is safe to say things are moving in that direction.
