
Imagine that you are hired to inspect, categorize, and stack inventory. You are trained, then tested, and you are capable of processing up to eight items per hour under ideal conditions. You are expected to work eight hours a day, and at the end of each day you are to have processed 48 items. With the goal of processing six items per hour, you begin your new occupation.
You do your job day after day, and the organization is happy with the results as you consistently process six items per hour. Then one day the person who hired you leaves the organization and a new manager is brought in. The new manager never reviews your capabilities, your training, or the original expectations of the role.
As the business grows, your manager increases the number of items you are to process each day from 48 to 56. You adjust and are now processing seven items per hour. The organization is happy with the results, but the occasional mistake is made. You work an extra half hour per day to correct these mistakes.
The business continues to grow, and your manager increases the number of items you are supposed to process each day to 64. You are now staying late for an extra two hours per day to correct mistakes, and the organization is no longer happy with the results.
Your new manager increases the quota once again, this time to 72 items per day. You cannot keep up, and the mistakes start piling up. You continue to work late, and one night you overhear your boss on the phone discussing your work. The comment leaves no room for doubt:
“The person currently processing inventory is the problem. We need to hire someone who is better.”
This is how too many organizations treat their infrastructure. The hardware is the problem. The system is too slow. The software is too buggy. Everything would be better, the thinking goes, if whatever is being blamed were simply replaced.
Technology always marches forward, and all infrastructure eventually must be replaced. That is no excuse for not knowing what your infrastructure is capable of handling before problems occur. Monitoring a system is a requirement for putting it into production use. Configuring automatic alerts that notify administrators when thresholds are in danger of being exceeded is a necessity.
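To make the alerting point concrete, here is a minimal sketch of a threshold check written in Python. The 80 percent warning threshold and the notify() placeholder are illustrative assumptions, not the API of any particular monitoring product; the point is that the alert fires while there is still headroom, not after the limit has been exceeded.

```python
#!/usr/bin/env python3
"""Minimal threshold-alert sketch: warn while headroom remains,
not after the limit has already been exceeded."""

import shutil

# Illustrative assumption: warn at 80% disk usage, well before 100%.
WARN_THRESHOLD = 0.80


def notify(message: str) -> None:
    # Placeholder notification hook. In practice this would page an
    # administrator via email, a chat webhook, a ticketing system, etc.
    print(f"ALERT: {message}")


def check_disk(path: str = "/") -> None:
    # shutil.disk_usage() reports total, used, and free bytes for the
    # filesystem containing `path`.
    usage = shutil.disk_usage(path)
    used_fraction = usage.used / usage.total
    if used_fraction >= WARN_THRESHOLD:
        notify(f"{path} is {used_fraction:.0%} full "
               f"(warning threshold is {WARN_THRESHOLD:.0%})")


if __name__ == "__main__":
    check_disk()
```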
Just as important, though, is knowing how an additional workload will impact the infrastructure before it is placed into production. You cannot do this without metrics, research, and the attention to detail that comes from understanding your organization’s needs.
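As a sketch of that kind of assessment, the snippet below reuses the parable’s numbers: a measured capacity of eight items per hour over an eight-hour day, compared against each proposed daily quota. The 85 percent “comfortable utilization” cutoff is an illustrative assumption; the idea is simply to check headroom with real metrics before a new workload goes into production.

```python
def assess_workload(capacity_per_hour: float, hours_per_day: float,
                    proposed_daily_items: int) -> str:
    """Compare a proposed daily workload against measured capacity."""
    daily_capacity = capacity_per_hour * hours_per_day
    utilization = proposed_daily_items / daily_capacity
    headroom = daily_capacity - proposed_daily_items
    # Illustrative assumption: above 85% utilization, expect trouble.
    if utilization <= 0.85:
        status = "OK"
    elif utilization <= 1.0:
        status = "AT RISK"
    else:
        status = "OVER CAPACITY"
    return (f"{proposed_daily_items:>3} items/day -> "
            f"{utilization:6.1%} of capacity, "
            f"headroom {headroom:+5.0f} items ({status})")


# The parable's numbers: eight items/hour sustainable, eight-hour day.
for quota in (48, 56, 64, 72):
    print(assess_workload(capacity_per_hour=8, hours_per_day=8,
                          proposed_daily_items=quota))
```

Run against the story’s quotas, this reports 75 percent utilization at 48 items per day, 87.5 percent at 56, 100 percent at 64, and 112.5 percent at 72: the same progression from comfortable to overloaded that plays out in the narrative.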
As an IT professional, do not fall back on the excuse of “It is the infrastructure that is the problem. We need to replace it with something better.” Instead, learn how to monitor and assess your existing environment so that you can prevent a bottleneck before it emerges. Whether you do the monitoring yourself or use a service provider, your role requires that you be aware of the infrastructure’s limits before they are reached. That way, instead of blaming the infrastructure for the organization’s problems, you will be protecting it, and by protecting the infrastructure you will keep those problems from ever emerging.
Besides, it is a lot better to be known as the IT professional who is able to say:
“It is the infrastructure that is keeping our organization moving forward. We need to continue to invest in the infrastructure if we want our organization to be better.”