A server is computer hardware, a software program, or a device that provides information or computing resources to another computing unit, known as a client. When hundreds or thousands of such servers are connected via network switches and routers within the same physical location, they form a cluster, also known as a server farm. A server farm is intended to provide surplus processing power and storage capacity for machines and applications that need extensive computing resources.
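To make the client–server relationship concrete, here is a minimal sketch in Python. The address, port, and time-of-day reply are illustrative assumptions, not details from any particular product: a toy server answers each client that connects, and a server farm scales the same idea across many machines instead of many threads.

```python
import socket
import threading
from datetime import datetime, timezone

HOST, PORT = "127.0.0.1", 9090  # illustrative address, not from the article

def handle_client(conn: socket.socket, addr) -> None:
    """Serve one client: send the current UTC time, then close."""
    with conn:
        reply = datetime.now(timezone.utc).isoformat().encode()
        conn.sendall(reply)

def serve() -> None:
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind((HOST, PORT))
        srv.listen()
        while True:
            conn, addr = srv.accept()
            # One thread per client; a farm applies the same pattern
            # across many physical servers behind switches and routers.
            threading.Thread(target=handle_client, args=(conn, addr),
                             daemon=True).start()

if __name__ == "__main__":
    serve()
```

A client here is anything that opens a connection to the host and port above and reads the reply.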
The first web server was launched at CERN in 1990; in 1991, the Stanford Linear Accelerator Center in California hosted the first web server outside Europe. The number of web servers reached 10,000 by the end of 1994. The same year, the World Wide Web Consortium (W3C) was founded to manage and standardize advancements in web technologies.
Today, the Apache HTTP Server is the most widely used web server in web development, handling nearly 70% of all accessible sites (Thota et al., 2017).
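As a side illustration (not part of the cited study), the software a web server runs can often be read from the Server header of an HTTP response. The sketch below uses only Python's standard library; note that many sites hide or rewrite this header, so the result is a hint rather than a guarantee.

```python
from urllib.request import Request, urlopen

def server_software(url: str) -> str:
    """Return the Server header advertised by a site, if any."""
    req = Request(url, method="HEAD")  # HEAD: headers only, no body
    with urlopen(req, timeout=10) as resp:
        return resp.headers.get("Server", "<not disclosed>")

if __name__ == "__main__":
    # Example target; any public URL works the same way.
    print(server_software("https://httpd.apache.org"))
```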
In 2001, RLX Technologies launched the first commercial blade server. Blade servers were developed by Christopher Hipp and David Kirkeby for an industry project, but their efficiency led to commercialization.
A blade server usually comprises a chassis, or box-like enclosure, containing numerous thin circuit boards known as blades. Each blade runs its own software, with data stored in a memory device.
Blade servers offer accessibility and usability advantages, as each blade typically focuses on a single application, and they consume less power and storage. A company can dedicate a single blade entirely to an operation-critical function that is crucial for the organization.
As server architectures advanced, more attention was paid to the maintenance and management of servers. Server management comprises the operations and functions carried out to maintain a server's ideal performance. Its main objectives include observing the programs running on a server and monitoring their issues, fixing errors and failures, upgrading server software, setting up services, and planning the allocation of resources.
The advent of remote management made it possible to maintain server farms virtually, without physical access to the machines.
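To illustrate the monitoring objective described above, here is a minimal remote health-check sketch. The host list and the /health endpoint are invented for the example; a real deployment would draw them from its inventory or monitoring system.

```python
from urllib.request import urlopen
from urllib.error import URLError

# Hypothetical farm members; real deployments would load these
# from inventory or service discovery.
SERVERS = [
    "http://10.0.0.11:8080/health",
    "http://10.0.0.12:8080/health",
    "http://10.0.0.13:8080/health",
]

def check(url: str, timeout: float = 5.0) -> bool:
    """Return True if the server answers its health endpoint with HTTP 200."""
    try:
        with urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except (URLError, OSError):
        return False

def report(servers: list[str]) -> None:
    """Poll every server and flag the ones that need attention."""
    for url in servers:
        status = "OK" if check(url) else "NEEDS ATTENTION"
        print(f"{url}: {status}")

if __name__ == "__main__":
    report(SERVERS)
```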
As the number of server farms increases, they are expected to meet high standards of computing power, scalability, and efficiency. Dedicated hardware is likely to become obsolete as virtualization takes over; cloud computing has already led organizations to convert their systems into virtual environments. Mobile computing has also made it easy to access data, software, and other computing resources through lightweight, easily accessible clients. In the next decade, technologies such as virtual desktops, digital libraries, global internet provider services, and open-source migration will completely change the landscape of server farms.
As the world moves toward cloud technology, server farms are getting super-sized to keep up with the growing demand for computation power.
Cloud builders such as Microsoft, Apple, Equinix, Google, and Facebook are investing huge amounts of money in building these farms to establish superiority over each other and to meet the need of the hour. Beyond competition, several other factors make such server farms viable. First, the economy is going digital (Xu et al., 2018): every sector now has an online presence, and this is expected to grow, calling for ever greater amounts of data to be stored. Second, cloud computing has gained general acceptance among the majority of companies, and workloads are expected to shift from in-house servers to cloud service providers.
Statistics show a total of 504 hyperscale farms, 40% of which are located in the United States. However, the greatest development and growth has taken place in Europe, the UK, Japan, China, the Asia-Pacific region, and Australia, with Australia accounting for 32% of the total (Tang et al., 2016).
The technological giants that own these humongous facilities follow one common practice: they lease more than 70% of this capacity to commercial data center operators. Data center facilities are expected to keep increasing for the foreseeable future.
These super-sized and hyperscale data centers are a way forward for humanity, but they have an underlying downside: a huge carbon footprint that needs to be addressed before it is too late. The carbon footprint of these technological giants is a cause for concern.
The way forward is to make operations as efficient as possible. To this end, data companies are deploying newer, more efficient cooling systems and introducing automation wherever possible. These practices have borne fruit in the form of relatively constant power consumption over the past decade. On the upside, as more companies outsource their data handling to these providers, overall power consumption falls, since hyperscale facilities run more efficiently than typical in-house server rooms. However, there will be an optimum point, after which every additional company that outsources its operations will require additional power, which means more carbon.
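The trade-off described above can be sketched with back-of-the-envelope arithmetic using Power Usage Effectiveness (PUE), the ratio of total facility power to IT load. The figures below are illustrative assumptions, not measured data; the point is only that consolidation saves power until spare capacity runs out, after which each new workload adds to the total draw.

```python
def facility_power(it_load_kw: float, pue: float) -> float:
    """Total facility power = IT load x PUE (Power Usage Effectiveness)."""
    return it_load_kw * pue

# Illustrative assumptions, not measured figures:
IN_HOUSE_PUE = 2.0      # assumed value for a small server room
HYPERSCALE_PUE = 1.2    # assumed value for a modern efficient facility
WORKLOAD_KW = 100.0     # hypothetical IT load of one company's workload

in_house = facility_power(WORKLOAD_KW, IN_HOUSE_PUE)
outsourced = facility_power(WORKLOAD_KW, HYPERSCALE_PUE)

print(f"In-house:   {in_house:.0f} kW")     # 200 kW
print(f"Hyperscale: {outsourced:.0f} kW")   # 120 kW
print(f"Saved by consolidating: {in_house - outsourced:.0f} kW")

# Past the 'optimum point' the farm has no spare capacity, so every
# additional outsourced workload adds its full hyperscale draw:
extra = facility_power(WORKLOAD_KW, HYPERSCALE_PUE)
print(f"Each workload beyond capacity adds {extra:.0f} kW")
```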