
Super-Sized Server Farms


Server Farm 

A server farm is a network of multiple computers integrated into a cluster, offering functions that a single computer cannot provide; such farms are extremely expensive to construct and operate. One common application is web hosting, in which case the cluster is called a web farm. Because cost is a key design parameter, their performance is often less than optimal. (Mitrani, 2013)
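To make the idea of a web farm concrete, here is a minimal sketch of how incoming requests might be spread across a cluster. The server names and the simple round-robin policy are illustrative assumptions, not a description of any particular product.

```python
from itertools import cycle

# Hypothetical nodes in a small web farm.
servers = ["web-01", "web-02", "web-03"]
next_server = cycle(servers)  # simple round-robin rotation

def route(request_id: int) -> str:
    """Assign an incoming request to the next server in rotation."""
    return f"request {request_id} -> {next(next_server)}"

if __name__ == "__main__":
    for i in range(6):
        print(route(i))  # requests alternate across web-01..web-03
```

Real web farms use far more sophisticated load balancers, but the principle is the same: many machines collectively handle traffic that no single machine could.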

Building hyper-scale server farms is a tedious and time-consuming undertaking; bringing the next 100 farms online is expected to take well over two years.

Super-Sized Server Farms

As the world moves toward cloud technology, server farms are becoming super-sized to keep pace with the growing demand for computing power. Data center capacity is increasing, and this is reshaping the global server topography.

Cloud builders such as Microsoft, Apple, Equinix, Google, and Facebook are investing large amounts of money in these farms to establish superiority over one another. Beyond this rivalry, several factors support the viability of such server farms. First, as the economy goes digital, every sector maintains an online presence, which has increased the need for large-scale data storage. Second, there is now a consensus among companies that workloads will shift from in-house servers to cloud service providers.

Establishing big data centers lowers the relative cost of infrastructure. The key parameter for these farms is the cost per megawatt, since power is a major driver of incurred costs. Super-sized server farms also allow greater centralization, which increases profitability.
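A back-of-the-envelope calculation shows why cost per megawatt favors scale. All the figures below are hypothetical, chosen only to illustrate how a larger facility can spread its capital cost over more provisioned power.

```python
# Illustration of the cost-per-megawatt metric (all numbers are assumptions).

def cost_per_megawatt(build_cost_usd: float, capacity_mw: float) -> float:
    """Capital cost normalized by provisioned power capacity."""
    return build_cost_usd / capacity_mw

small_farm = cost_per_megawatt(build_cost_usd=60e6, capacity_mw=5)     # $12.0M per MW
hyper_farm = cost_per_megawatt(build_cost_usd=800e6, capacity_mw=100)  # $8.0M per MW

print(f"small farm: ${small_farm / 1e6:.1f}M per MW")
print(f"hyper farm: ${hyper_farm / 1e6:.1f}M per MW")
```

Under these assumed figures, the hyper-scale facility delivers each megawatt of capacity at two-thirds the cost of the small one, which is the economy of scale the metric is designed to capture.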

Hyper-Scale Server Farms

Hyperscale refers to the ability of a computer architecture to respond to increases in demand. Computers rely on a set of nodes that contain resources, and scaling the architecture means improving performance, capacity, infrastructure, and/or storage until the system fulfills its requirements. The prefix "hyper" indicates that the scaling performed in these networks is massive and swift. Traditionally, the trend was to scale up: improving the performance and efficiency of individual machines to meet demand. Currently, however, the method followed by many companies is to scale out, which means adding more computers to the integrated network, thereby improving net computational capability; a toy comparison of the two strategies follows below. Hyper-scaling is a move toward a future in which data is transferred, acquired, and processed more quickly than ever before. Because this scaling is expensive and must be very efficient, the main controlling parameter in these systems is the present limitation of automation. (Kidd, 2018)
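The following toy model contrasts the two strategies described above. The node counts, per-node throughput, and coordination-efficiency factor are assumptions made purely for illustration, not measurements from any real system.

```python
# Toy comparison of scale-up vs. scale-out (all figures are hypothetical).

def scale_up(base_throughput: float, speedup: float) -> float:
    """Scaling up: keep one node, make it faster."""
    return base_throughput * speedup

def scale_out(base_throughput: float, nodes: int, efficiency: float = 0.9) -> float:
    """Scaling out: add nodes; 'efficiency' models coordination overhead."""
    return base_throughput * nodes * efficiency

base = 1_000.0  # requests/second for a single node (assumed)

print(f"scale up to a 4x faster node: {scale_up(base, 4):,.0f} req/s")
print(f"scale out to 16 nodes:        {scale_out(base, 16):,.0f} req/s")
```

The sketch also hints at why automation is the limiting factor: scaling out multiplies the number of machines to provision, monitor, and repair, so the overhead term grows with the fleet unless operations are heavily automated.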

The Consequence

There are now more than 500 hyper-scale farms in the world. While the definition of a hyper-scale farm is ambiguous, Synergy applies its own criteria, classifying companies by the scale of their operations in cloud technology, e-commerce, social networking, and computation. The facilities themselves are measured in thousands or tens of thousands of computers.

The statistical data show a total of 504 hyper-scale farms, about 40% of which are located in the United States. However, the greatest development and growth has taken place in Europe and the Asia-Pacific region; China, Japan, the UK, and Australia collectively account for 32% of the total.

The technological giants that own these humongous facilities follow one common practice: they lease more than 70% of this capacity from commercial data center operators. Data center facilities are expected to keep multiplying for the foreseeable future. (Sverdlik, 2019)

These super-sized and hyper-scale data centers are a way forward for humanity, but they have an underlying downside: a huge carbon footprint that must be addressed before it is too late.

The way forward is to make operations as efficient as possible. To this end, data companies are employing newer, more efficient cooling systems, introducing automation wherever possible, and so on. These practices have borne fruit in the form of relatively constant power consumption over the past decade. On the upside, as more companies outsource their data handling, their own power consumption certainly diminishes.

However, there will be an optimum point, beyond which every additional company that outsources its operations will require additional power, which means more carbon. There have been attempts at, and claims of, decarbonizing the energy production for these data centers, but none of the big three companies has completely ditched fossil fuels.
