September 29, 2016
Mainframe Cost Optimization
Krishna Sasidharan

In today's competitive IT world, the mainframe still retains its legacy strength: most business-critical applications run under its coverage, and it remains one of the most secure and reliable platforms for storing sensitive client data. Despite that strength, operating an application on the mainframe landscape is daunting, since it consumes nearly 20-40% of a company's IT budget, which is very high compared to distributed platforms. In addition, the introduction of new technologies, tighter operating margins, and increasing competition pushes mainframe business owners to reduce their operating costs in order to stay competitive. With that in mind, and for the benefit of mainframe professionals targeting cost optimization, here are a few valuable tips that can be plugged into any mainframe application.

License Cost Reduction

1. Hardware Licensing Cost

This includes all costs associated with acquiring and maintaining the hardware, servers, and other infrastructure components necessary to deploy, sustain, and manage the mainframe landscape. Hardware licensing consumes almost 20% of total mainframe cost, so the focus should be on cost-effective hardware that can reduce the present licensing spend.

2. Software Licensing Cost

This is the total cost incurred on the software, products, tools, and other utilities supporting mainframe application support and maintenance. Based on current market studies, software licensing consumes almost 40% of IT operating cost. To reduce it, organizations should periodically review their software, products, tools, and utilities; identify and eliminate obsolete or unused ones; and replace the remainder with lower-cost equivalents where available. A simple inventory review like the sketch below is a good starting point.
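Here is a minimal sketch of such a review. The inventory file name and its columns (product, annual_cost, last_used) are illustrative assumptions; a real review would pull from the site's asset-management records or product-usage data.

```python
# Minimal sketch: flag software products not used in the last 12 months
# from a hypothetical inventory export (product, annual_cost, last_used).
import csv
from datetime import datetime, timedelta

CUTOFF = datetime.now() - timedelta(days=365)  # "unused" threshold: 1 year

def review_inventory(path):
    candidates = []
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            last_used = datetime.strptime(row["last_used"], "%Y-%m-%d")
            if last_used < CUTOFF:
                candidates.append((row["product"], float(row["annual_cost"])))
    candidates.sort(key=lambda item: item[1], reverse=True)  # biggest savings first
    return candidates

if __name__ == "__main__":
    for product, cost in review_inventory("software_inventory.csv"):
        print(f"Review candidate: {product} (annual cost ${cost:,.2f})")
```

Sorting by annual cost first keeps the review focused on the licenses whose elimination actually moves the budget.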

Mainframe Workload Offloading

Mainframe CPU resources are always costlier than their distributed counterparts, so offloading critical and time-consuming workloads from the mainframe to distributed systems such as UNIX, Linux, or the cloud can yield huge savings for mainframe clients. A major portion of mainframe work is spent in the development and test regions, and offloading those from the mainframe frees up MIPS for the production box. The rough calculation below illustrates the idea.
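A back-of-the-envelope sketch of the savings case. Every figure here (installed MIPS, Dev/Test share, per-MIPS cost, distributed running cost) is an assumed placeholder, not vendor pricing:

```python
# Illustrative offloading economics; all numbers are assumptions.
total_mips = 2000                        # assumed installed capacity
dev_test_share = 0.30                    # assumed share consumed by Dev/Test
cost_per_mips_year = 3000.0              # assumed annual cost per MIPS (USD)
distributed_cost_year = 900_000.0        # assumed annual cost of offload target

offloaded_mips = total_mips * dev_test_share
gross_saving = offloaded_mips * cost_per_mips_year
net_saving = gross_saving - distributed_cost_year

print(f"MIPS freed for PROD: {offloaded_mips:.0f}")
print(f"Gross mainframe saving: ${gross_saving:,.0f}/year")
print(f"Net saving after distributed costs: ${net_saving:,.0f}/year")
```

The point of the exercise is the comparison, not the exact figures: if the distributed running cost eats most of the gross saving, the offload is not worth the migration risk.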

Storage Cost Reduction

With regard to storage cost reduction, good storage management techniques, future-growth studies, and a periodic review process will always yield good results, thereby reducing mainframe TCO. From a mainframe perspective, usage of expensive DASD (Direct Access Storage Devices) should be kept to a minimum, with data moved wherever possible to tape or other open-system disks. In addition, compressing data stored on disk and tape, moving test/dev region datasets to tape, and deleting obsolete datasets across all regions prove to be huge cost savers from a storage perspective. A periodic check for obsolete or deferred datasets will always reclaim a large amount of storage space; a review sketch follows.
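A minimal sketch of such a periodic review, assuming a catalog extract with hypothetical fields (dataset name, region, size, last-referenced date). The thresholds are illustrative policy choices, not standards:

```python
# Sketch of a periodic dataset review: delete the obsolete, migrate the
# infrequently used off costly DASD, keep only hot data on DASD.
from datetime import date

def classify(ds, today):
    age_days = (today - ds["last_referenced"]).days
    if age_days > 730:
        return "DELETE"            # obsolete: not touched in 2+ years
    if ds["region"] in ("TEST", "DEV") or age_days > 90:
        return "MIGRATE_TO_TAPE"   # move off expensive DASD
    return "KEEP_ON_DASD"          # frequently referenced production data

datasets = [  # invented examples
    {"dataset_name": "PROD.CUST.MASTER", "region": "PROD",
     "size_mb": 5120, "last_referenced": date(2016, 9, 20)},
    {"dataset_name": "TEST.CUST.BKP", "region": "TEST",
     "size_mb": 2048, "last_referenced": date(2014, 1, 5)},
]
review_date = date(2016, 9, 29)
for ds in datasets:
    print(ds["dataset_name"], "->", classify(ds, review_date))
```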

MIPS Optimization

MIPS stands for millions of instructions per second, a measure of a system's speed and power. MIPS consumption on the mainframe is steadily rising, and IT industry analysts estimate that most large organizations running mainframes should expect their CPU resource consumption to increase by 15-20% annually; the compounding effect of that growth is illustrated below.
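A quick compounding check, with an assumed starting consumption, shows why even the low end of that range matters: at 15% annual growth, MIPS consumption roughly doubles in five years.

```python
# Compounding at the low end of the 15-20% estimate; 1.15 ** 5 ≈ 2.01,
# so consumption (and the bill tied to it) doubles in about five years.
mips = 1000.0  # assumed current consumption
for year in range(1, 6):
    mips *= 1.15
    print(f"Year {year}: {mips:,.0f} MIPS")
```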

How does a company reduce the cost of its mainframe environment in a way that delivers immediate impact at very low risk? The answer, incidentally, lies within the crux of the problem: MIPS. Organizations can significantly reduce their MIPS consumption by acting on the critical points listed below:

  1. Identify the top CPU-consuming jobs and transactions through detailed analysis of SMF/RMF reports (see the sketch after this list).
  2. Identify the business-critical jobs and transactions and validate whether their scheduled times are really required; if not, reschedule them to non-business hours.
  3. Identify frequently failing jobs and fix them.
  4. Avoid repeated I/O calls for the same data; hold it in memory instead.
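For the first action point, here is a minimal sketch of ranking the top CPU consumers. It assumes the SMF job-end records have already been extracted to a CSV with hypothetical columns (jobname, cpu_seconds); real SMF data would normally pass through a reporting tool first.

```python
# Sketch: rank top CPU-consuming jobs from a hypothetical SMF extract
# converted to CSV. Column names and the file name are assumptions.
import csv
from collections import defaultdict

def top_cpu_jobs(path, n=10):
    totals = defaultdict(float)
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            totals[row["jobname"]] += float(row["cpu_seconds"])
    # Highest cumulative CPU first: these are the tuning candidates.
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)[:n]

if __name__ == "__main__":
    for job, cpu in top_cpu_jobs("smf_extract.csv"):
        print(f"{job:<8} {cpu:>12.2f} CPU-seconds")
```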

Renovate and Restructure

Renovation typically drives better efficiency and cost savings when legacy applications are retained. Jobs on the critical path take an enormous amount of time, and restructuring the mainframe batch jobs helps reduce that heavy workload. Deferring certain batch workloads, such as backups and reporting, to quiet periods of the day reduces the load during peak hours. Enable parallel execution without file contention. The batch schedule and its dependencies can be analyzed for mandatory dependencies so that the overall schedule can be restructured, which in turn may shrink the batch window.

The objective is to break unnecessary dependencies and achieve maximum parallel execution within permissible limits. Parallelism should be designed so that it does not add to peak usage; the sketch below shows the basic idea.
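A small sketch of that dependency analysis: jobs are grouped into "waves" that can run in parallel once all their mandatory predecessors finish. The job names and dependency map are invented for illustration.

```python
# Group batch jobs into parallel waves from a dependency map
# (job -> set of mandatory predecessor jobs).
deps = {                      # illustrative schedule, not a real one
    "EXTRACT": set(),
    "CLEANSE": {"EXTRACT"},
    "LOAD_A":  {"CLEANSE"},
    "LOAD_B":  {"CLEANSE"},
    "REPORT":  {"LOAD_A", "LOAD_B"},
    "BACKUP":  set(),         # no mandatory predecessor: defer to a quiet window
}

def waves(deps):
    remaining, done, result = dict(deps), set(), []
    while remaining:
        ready = [j for j, pre in remaining.items() if pre <= done]
        if not ready:
            raise ValueError("cyclic dependency in schedule")
        result.append(sorted(ready))
        done.update(ready)
        for j in ready:
            del remaining[j]
    return result

for i, wave in enumerate(waves(deps), 1):
    print(f"Wave {i}: {', '.join(wave)}")
```

The number of waves approximates the batch window's length; every non-mandatory dependency removed from the map tends to merge waves and shorten it.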

Code Optimization

Source code is the backbone of every application, and it is always advisable to optimize it for CPU usage, memory-swapping frequency, paging behavior, I/O frequency, and database calls. Since the application source code is the basic entity driving the cost of running an application, refining or fine-tuning each executable will definitely help reduce operational cost. One common win, sketched below, is caching repeated reads of the same data in memory.
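A minimal sketch of that caching idea, where the hypothetical fetch_rate stands in for any expensive file or database read:

```python
# Avoid repeated I/O for the same key by caching lookups in memory.
from functools import lru_cache

@lru_cache(maxsize=None)
def fetch_rate(currency: str) -> float:
    # Imagine a VSAM read or database SELECT here; thanks to the
    # cache it now runs once per distinct key, not once per call.
    print(f"expensive I/O for {currency}")
    return {"USD": 1.0, "EUR": 1.11}[currency]

total = sum(100 * fetch_rate("EUR") for _ in range(1000))  # 1 I/O, not 1000
print(f"{total:,.0f}")
```

The same principle applies in COBOL batch programs: read reference data into a working-storage table once, rather than issuing a lookup per transaction record.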

Database Fine Tuning

Database fine tuning is one of the most important techniques for making a database operate more efficiently in terms of storage and I/O response time. Based on a detailed study of the existing database structure, current storage volume, database hits, and future growth projections, DBA experts should propose changes for restructuring the existing database. Under normal circumstances, the changes involve introducing indexes on tables, removing unused indexes, introducing composite indexes, merging two tables into one, dividing a table into multiple tables, or redefining the relationships among tables. The main goal of any database tuning is to decrease the response time of database I/O operations and to use memory efficiently. The toy example below shows the effect of a single index.
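A toy illustration of the index effect, using SQLite purely as a stand-in for a mainframe database such as DB2: adding an index on a frequently filtered column turns a full-table scan into an index lookup.

```python
# Demonstrate a query plan change from adding an index (SQLite as a
# stand-in; table, column, and data are invented for illustration).
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE orders (id INTEGER, cust_id INTEGER, amount REAL)")
con.executemany("INSERT INTO orders VALUES (?, ?, ?)",
                [(i, i % 1000, i * 1.5) for i in range(100_000)])

query = "EXPLAIN QUERY PLAN SELECT * FROM orders WHERE cust_id = 42"
print("before:", con.execute(query).fetchone())  # plan shows a full scan

con.execute("CREATE INDEX idx_orders_cust ON orders(cust_id)")
print("after: ", con.execute(query).fetchone())  # plan now uses the index
```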

Automation Driven

Automation is one of the most important driving factors in the current IT world and consistently proves to be a differentiator in a competitive market. Whenever automation is introduced into an existing setup, the manual effort spent drops, which affects TCO positively. With that benefit and their long-term business in mind, most business owners now prioritize automation and ask what automation-driven service offerings their vendors can provide. Automation also makes operations far more robotic in nature, accomplishing mundane and repetitive tasks with minimal human involvement. As a result, current delivery models mandate plugging in more and more automation throughout the entire SDLC, so that the manual effort saved can be redirected toward developing new ideas and solutions for business growth and continuity.

As an ending note, if any of the above optimization techniques is plugged into a mainframe application, there is a strong likelihood of realizing actual cost savings that positively affect TCO. Selecting among these techniques is a critical call that should be taken either by the business or, proactively, by the application support team, which in turn sets that team apart from the routine crowd. Being an application support person who wants to be unique, I strictly adhere to these proactive measures in my day-to-day IT life and would request the same from you. Moreover, for the last decade we have been hearing that this legacy mainframe giant will die soon, yet it still rules the competitive IT world. It is clear that most mission-critical systems currently running on the mainframe will continue to do so, and organizations will not take the risk of moving that business off the platform.

So let us all optimize our current mainframe systems to the maximum extent and keep our end clients delighted. By doing so, we become part of the select crowd that wants this golden age of the mainframe to flourish and shine for decades to come.
