The mainframe is arguably the most cost-effective platform for the larger, faster, ‘always available’ demands of IT. Yet despite its central role in large enterprises, many IT leaders are disinvesting in mainframe performance management, believing that the platform’s extraordinary performance and efficiency need little attention.
On the contrary, it is high time organizations re-invested in mainframe performance management - especially since today’s consumers never sleep, are always connected, and demand instant response times.
Three persuasive reasons bear out this need. The first is the growth of mobile applications. It is estimated that most new customer-facing apps and enterprise applications will touch a mainframe at some point, so even a small improvement in mainframe application performance can have a significant impact on the customer’s mobile and web experience.
The second is the staggering increase in customer and transactional data generated by digital business. If not properly managed, this data can significantly degrade application performance. Third, enterprises are adopting an agile development culture on their mainframes to remain competitively nimble. This demands that IT validate and re-tune performance with every new code release so that updates do not adversely impact customer experience.
Enhancing performance through job-efficient source code
Well-written code with optimized job parameters supports both speed and CPU efficiency. Making source code job-efficient calls for attention to a few simple yet important practices:
- Specify BLOCK CONTAINS 0 RECORDS for output files – this gains the benefit of system-determined blocking
- Use a binary search (SEARCH ALL) to search a large element array – lookups take logarithmic rather than linear time, so this is dramatically faster than a sequential search (SEARCH); note that SEARCH ALL requires the table to be in key order
- Use EVALUATE instead of nested IF statements, and eliminate GO TO statements and internal SORT procedures – this enhances readability and keeps the code structured
- Use an odd number of digits for PACKED DECIMAL (COMP-3) fields – packed decimal stores the sign in the last half-byte, so an odd digit count fills whole bytes and is processed faster than an even count
- Avoid using COMP, COMP-3 or DISPLAY format variables as table subscripts – subscripting with them heavily slows the operation compared to using indexes.
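Several of these points can be combined in one small sketch - an indexed, key-ordered table searched with SEARCH ALL, using an odd-digit COMP-3 field. This is an illustrative fragment only; all data names are hypothetical.

```cobol
      * Illustrative fragment; data names are hypothetical.
       01  WS-RATE-TABLE.
           05  WS-RATE-ENTRY OCCURS 1000 TIMES
               ASCENDING KEY IS WS-RATE-CODE
               INDEXED BY RATE-IDX.
               10  WS-RATE-CODE   PIC X(04).
      *        Odd number of digits (9) for the packed field.
               10  WS-RATE-AMT    PIC S9(7)V99 COMP-3.

      * Binary search on the indexed, key-ordered table.
           SEARCH ALL WS-RATE-ENTRY
               AT END
                   MOVE ZERO TO WS-FOUND-AMT
               WHEN WS-RATE-CODE (RATE-IDX) = WS-SEARCH-CODE
                   MOVE WS-RATE-AMT (RATE-IDX) TO WS-FOUND-AMT
           END-SEARCH
```

Because the table declares ASCENDING KEY and an index, the compiler can generate a true binary search; subscripting the same table with a COMP or DISPLAY variable would force slower address arithmetic on every reference.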
Improving job performance
Here are a few key actions that boost mainframe job performance:
- Allocate sufficient primary space to hold the entire contents of the file – this reduces secondary extents. Also code the RLSE option on the SPACE parameter to release unused space.
- Specify a reasonable amount of secondary space.
- Place BLKSIZE=0 in the DCB of all sequential files – this lets the system determine the optimal block size.
- Code UNIT=(CART,2) in the JCL to eliminate multi-volume tape mount delays within a job.
- Define temporary datasets for intermediate operations within a job instead of cataloging them.
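The JCL practices above can be sketched as follows. This is an illustrative fragment only; the program, dataset and unit names are hypothetical, and space quantities would be sized to the actual file.

```jcl
//* Illustrative JCL; program, dataset and unit names are hypothetical.
//STEP010  EXEC PGM=MYPROG
//* Primary sized for the whole file; RLSE frees unused space.
//OUTFILE  DD DSN=PROD.DAILY.EXTRACT,
//            DISP=(NEW,CATLG,DELETE),
//            SPACE=(CYL,(50,5),RLSE),
//            DCB=(RECFM=FB,LRECL=80,BLKSIZE=0)
//* Temporary dataset (&&name) - passed, never cataloged.
//TEMPWK   DD DSN=&&WORK1,DISP=(NEW,PASS),
//            SPACE=(CYL,(10,2),RLSE)
```

BLKSIZE=0 hands block-size selection to the system, which typically picks the largest block that fits the device, and the &&WORK1 temporary dataset disappears at end of job without any catalog cleanup.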
With respect to file performance, the following considerations are important:
- Avoid using multiple alternate indexes – every update must be applied to the primary path and to each alternate path, which slows down performance
- Reorganize VSAM datasets on a weekly basis – this restores the defined free space (FSPC) within each control interval (CISZ), eliminating accumulated CI/CA splits
- Address repeated space failures for a very large physical sequential file – ask the DASD team for a volume hosting fewer datasets and specify it in the VOL parameter of the JCL. This eliminates space abends.
- Eliminate the ERASE option in the VSAM definition – ERASE causes a tremendous amount of I/O by writing binary zeros over the entire allocated file space each time the file is deleted using IDCAMS
- Define files greater than 7 tracks in size in cylinders – smaller files with significant I/O activity should also be defined in cylinders.
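An IDCAMS definition reflecting these VSAM points might look like the following. This is an illustrative sketch only; the cluster name, key length, record size and space quantities are hypothetical.

```jcl
//DEFVSAM  EXEC PGM=IDCAMS
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
  DEFINE CLUSTER (NAME(PROD.CUSTOMER.KSDS)  -
          INDEXED                           -
          CYLINDERS(50 5)                   -
          FREESPACE(10 20)                  -
          NOERASE                           -
          KEYS(9 0)                         -
          RECORDSIZE(200 200))
/*
```

CYLINDERS rather than TRACKS allocates the file on cylinder boundaries, FREESPACE leaves room in each CI and CA to absorb inserts between reorganizations, and NOERASE (the default) avoids the heavy I/O of overwriting the space with binary zeros on deletion.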
The following should be noted regarding database access performance:
- Avoid using a wildcard (%) at the beginning of a LIKE predicate – a leading wildcard prevents index use and triggers a full table scan
- Specify the needed columns in the SELECT clause instead of using SELECT * – unwanted columns place extra load on the database and slow down the whole system, since each returned column carries an overhead that is multiplied by the number of rows.
- Execute the EXPLAIN statement to understand the query performance and the access path that DB2 chooses for each query.
- Use indexed columns of the DB2 table for WHERE clause conditions – this helps locate records quickly.
- Code SQL JOINs instead of separate SQL statements whenever possible – a single SQL JOIN is almost always faster than issuing two SQL statements and comparing and filtering the result sets within the application program.
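These SQL points can be illustrated in one query. The table and column names below are hypothetical; the shape of the statement is what matters.

```sql
-- Illustrative only; table and column names are hypothetical.
-- Avoid: a leading wildcard defeats the index, and SELECT *
-- returns columns the program never uses:
--   SELECT * FROM CUSTOMER WHERE CUST_NAME LIKE '%SON';

-- Prefer: explicit columns, an index-friendly predicate
-- (trailing wildcard only), and one JOIN instead of two
-- separate statements filtered in application code.
SELECT C.CUST_ID, C.CUST_NAME, O.ORDER_DT, O.ORDER_AMT
FROM   CUSTOMER C
JOIN   ORDERS   O ON O.CUST_ID = C.CUST_ID
WHERE  C.CUST_NAME LIKE 'JOHN%'
  AND  O.ORDER_DT >= CURRENT DATE - 30 DAYS;
```

Running EXPLAIN on both forms would show the difference directly: the first forces a table-space scan, while the second can use an index on CUST_NAME (assuming one exists) and lets DB2 choose an efficient join method.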
Performance tuning is a must-have process for new system implementations and whenever changes to existing systems go live. Everyone involved in developing and maintaining mainframe applications must be constantly on the alert for prominent bottlenecks, redundant processes and unnecessary steps. Remember: continuous tuning is a journey, not a destination!