A discipline you need to master!

The backbone of all major corporations is a large and complex IT infrastructure

Digitalization opens up many opportunities. Slowly but surely, it is transforming every kind of company into an IT company. The backbone of production companies, as well as retail, logistics, and insurance companies, financial institutions, banks, and many more, is and will increasingly be a large and complex IT infrastructure, often comprising a mix of technologies: on-site servers, outsourced servers, servers in private and public clouds, and sometimes mainframes.

The business owns the needs, the IT department bears the responsibility

In the distributed platforms area, adding new virtual or cloud servers has become so easy and relatively cheap that many companies delegate the ability to spin up new servers to the business. Whether server purchase decisions are centralized or decentralized, in most cases the IT department owns the budget. The head of IT is responsible for explaining increasing IT budgets and for ensuring that the resources spent are delivering value to the business at all times.

Having the total overview is crucial

The reality is that many servers were purchased with specific projects in mind but never taken into use because of changing priorities. Others do no useful work but have been forgotten and never decommissioned (zombie servers). Still others have far more capacity than they need to run their workloads smoothly.

Having the total overview and constantly monitoring utilization patterns and trends is vital to securing a lean and healthy server infrastructure. Time spent regularly monitoring the utilization and capacity of the servers is time well spent, saving companies significant waste in hardware, software, and manpower.
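As a minimal illustration of what such a regular check might look like, the triage can be sketched as a simple classification over utilization data. The thresholds and the Server structure below are illustrative assumptions only, not SMT Data's actual methodology:

```python
# Illustrative sketch: triaging servers by CPU utilization.
# The thresholds (2% and 30%) are assumed values for the example,
# not recommendations from any specific tool or vendor.
from dataclasses import dataclass

@dataclass
class Server:
    name: str
    avg_cpu_pct: float   # average CPU utilization over the monitoring period
    peak_cpu_pct: float  # peak CPU utilization over the monitoring period

def classify(server: Server) -> str:
    """Rough triage of a server based on its utilization pattern."""
    if server.peak_cpu_pct < 2:    # essentially idle: possible zombie server
        return "zombie-candidate"
    if server.peak_cpu_pct < 30:   # never comes close to its capacity
        return "over-configured"
    return "healthy"

fleet = [
    Server("app-01", avg_cpu_pct=0.5, peak_cpu_pct=1.2),
    Server("db-02",  avg_cpu_pct=8.0, peak_cpu_pct=22.0),
    Server("web-03", avg_cpu_pct=35.0, peak_cpu_pct=85.0),
]

for s in fleet:
    print(s.name, classify(s))
```

In a real infrastructure the same idea would be applied across thousands of servers and several metrics (memory, I/O, network), which is exactly where manual approaches break down.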

Home-grown solutions have their limits

If a distributed platform has a relatively limited number of servers (typically up to a couple of hundred), most IT departments can maintain a good overview of server utilization with home-grown, Excel-based solutions and internal resources.

But when it comes to installations with thousands of servers, the infrastructural complexity increases, and it becomes difficult if not impossible to track the situation and its development manually. As distributed platforms grow, home-grown solutions become less effective and more costly, often involving many hours of detailed detective work by IT staff whose precious time could be better spent on more qualified tasks. Home-grown solutions are also person-dependent and therefore represent a liability for the company. In installations with over 1,000 servers, automation and formal procedures become indispensable.

Most monitoring tools do not give the total overview

A wide range of monitoring tools ships with different operating systems and solutions, offering valuable, deep insights into what is happening within an individual server or a defined group of servers of the same type.

What very few tools can do is offer a total overview across different platforms and operating systems, identifying general utilization patterns and overarching trends. This kind of check should be conducted on a regular basis, 2-4 times a year in larger infrastructures, as general “hygiene”. The objective of the exercise is a lean and agile IT infrastructure, better reporting to the business (and thereby better IT-related decisions), and a better ROI on IT investments.

In any case, a larger Server Rightsizing exercise should be conducted every 1-2 years in any larger IT department. SMT Data’s experience shows that such an exercise typically secures yearly cost savings of around 10-30% of server capacity-related costs, equating to millions of euros for large IT installations.

Reducing the capacity of over-configured servers, or stopping servers that were launched and then forgotten, requires a concentrated effort based on an understanding of where there is excess or idle capacity. Keeping a constant eye on the development of the server infrastructure is an important discipline that needs a prominent place among the good habits of IT departments. This has always been true, but the trend towards outsourcing and the move into the cloud make it even more crucial.

SMT Data helps companies with large and complex IT infrastructures get the needed overview and conduct regular checks and Server Rightsizing projects with the support of its specialized SaaS tool, ITBI for Servers. Contact us to hear more about the solution and our related services.

Jan Vilstrup, CCO

Jan Vilstrup joined the SMT Data management team in 2008. With a past at Sun Microsystems, StorageTek, Silicon Graphics (SGI), Kodak, and MicroAge, he has solid experience in international sales and management.