Performance-Related APARs

The following is a list of performance-related APARs. It does not replace the normal service process. See The z/VM Service Page for additional service information.


  1. Selected APARs that affect VM performance
  2. Only the base APARs are listed
    • Pre-reqs, post-reqs (PEs), co-reqs, and if-reqs may not be listed
    • Research the APARs required for your system!
  3. Most performance APARs have been tested in production
    • They may not be bug free
    • PEs may or may not apply in your environment
  4. APARs may not be on an RSU

APARs may be OPEN, in Fix Test, or Correctively Available

z/VM 6.2.0 APARs

Improves the performance of Diagnose X'10' processing as used by the Linux cpuplugd memory unplug function. It removes the need to put the virtual machine in console function mode for serialization. This APAR also applies to z/VM 6.1.0 and z/VM 5.4.0, and is on RSU 1201.
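For context, cpuplugd drives memory unplug through CMM based on rules in its configuration file. A minimal memory section of /etc/cpuplugd.conf might look roughly like the following; the keywords and rule expressions shown are assumptions based on typical cpuplugd configurations, so verify them against the cpuplugd documentation for your distribution:

```
# Illustrative cpuplugd memory (CMM) settings -- not recommendations.
UPDATE="60"                            # evaluate rules every 60 seconds

CMM_MIN="0"                            # never unplug below this many pages
CMM_MAX="131072"                       # cap on pages removed via CMM

MEMPLUG="swaprate > freemem + 10"      # rule: give memory back to the guest
MEMUNPLUG="swaprate > freemem + 10000" # rule: take memory from the guest
```

Each memory unplug step the daemon triggers ends up in the Diagnose X'10' page-release path that this APAR speeds up.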
Corrects VM64943; in combination, the two APARs avoid abends and other problems when the *MONITOR system service is used on a system where Global Performance Data control has been disabled. This APAR also applies to z/VM 6.1.0 and z/VM 5.4.0.

z/VM 6.1.0 APARs

CP use of Diagnose X'44' can lead to high LPAR overhead. This APAR decreases the LPAR overhead created by spin lock contention and the associated Diagnose X'44' instructions issued by CP and Diagnose X'9C' instructions issued by the guest. It also introduces additional monitor fields instrumenting spin locks in CP. This is on RSU 1101.
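The mechanism here is the classic "spin briefly, then give up the processor" pattern: rather than burning cycles on a contended lock, the spinner tells the scheduler (here, the hypervisor, via Diagnose X'44' or X'9C') to run someone else. A minimal sketch of the pattern, with os.sched_yield standing in for the diagnose instruction and all names hypothetical:

```python
import os
import threading

class YieldingSpinLock:
    """Spin a bounded number of times, then yield the CPU.

    Analogous to CP spinning on a lock word and issuing Diagnose X'44'
    (or a guest issuing Diagnose X'9C') to give up the processor
    instead of spinning indefinitely.
    """
    SPIN_LIMIT = 100  # spins before yielding (illustrative value)

    def __init__(self):
        self._flag = threading.Lock()  # stands in for the lock word

    def acquire(self):
        spins = 0
        while not self._flag.acquire(blocking=False):
            spins += 1
            if spins >= self.SPIN_LIMIT:
                os.sched_yield()  # ask the scheduler to run someone else
                spins = 0

    def release(self):
        self._flag.release()

# Usage: two threads increment a shared counter under the lock.
counter = 0
lock = YieldingSpinLock()

def worker():
    global counter
    for _ in range(10000):
        lock.acquire()
        counter += 1
        lock.release()

threads = [threading.Thread(target=worker) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 20000
```

The overhead the APAR reduces comes from exactly these yield operations becoming frequent under heavy lock contention.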
Checkpoint processing is slow on SCSI disk. The checkpoint phase of shutdown processing takes significantly longer if the checkpoint volume resides on SCSI DASD using N_Port ID Virtualization (NPIV). During this phase, the HCPCKSEP entry point is called each time a page must be written to SCSI DASD, and each call initializes QDIO queues to the checkpoint volume and then resets them. This overhead is negligible unless NPIV is in use; initialization takes much longer with NPIV activated because of additional required delays. Also applies to z/VM 5.4.0. For 6.1.0, it is on RSU 1102.
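The cost pattern described above is expensive setup and teardown repeated inside a per-page loop. A generic before/after sketch of that pattern (hypothetical names; this illustrates the shape of the problem, not CP's actual fix):

```python
init_count = 0

def init_queues():
    """Stand-in for QDIO queue initialization (expensive under NPIV)."""
    global init_count
    init_count += 1

def write_page(page):
    pass  # stand-in for the actual page write

# Before: queues initialized and reset around every single page write.
def checkpoint_slow(pages):
    for page in pages:
        init_queues()
        write_page(page)
        # queues reset here

# After: initialize once, write all pages, reset once.
def checkpoint_fast(pages):
    init_queues()
    for page in pages:
        write_page(page)
    # queues reset here

checkpoint_slow(range(1000))
slow_inits = init_count
init_count = 0
checkpoint_fast(range(1000))
print(slow_inits, init_count)  # 1000 1
```

With NPIV each initialization carries mandatory delays, which is why the repeated-setup version dominates checkpoint time.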
Erratic system performance or brief system hangs. When there is heavy contention for CPU time and the PLDVs overflow, special processing is needed to ensure that all ready users are eventually dispatched. An error in the PLDV reshuffle code delayed dispatching them, which can result in erratic system performance or brief system hangs. This APAR also applies to z/VM 5.4.0. It is on RSU 1102 for both releases.
Controlled failover on VSwitches causes performance problems. Also applies to z/VM 5.4.0.
PGMBK Steal can incorrectly skip PGMBKs, resulting in sub-optimal performance in memory constrained environments. This fixes PE VM64225. Also applies to z/VM 5.4.0 and z/VM 5.3.0. On RSU 1002 for all three releases.
Required updates to z/VM are made to support the z196 processor. This includes compatibility with new facilities and storage management performance improvements using non-quiescing and conditional SSKE. This APAR alone is not sufficient for z196 support; please see the service bucket for full details. This APAR also applies to z/VM 5.4.0. It is on RSU 1003 for both releases.
SET/QUERY REORDER command. See this article for more information. Also applies to z/VM 5.4.0. On RSU 1003 for both releases.
Long CPEBK chains for user SYSTEMMP are not processed in a timely fashion, which can lead to VAP002 abends or other problems due to the delays. Also applies to z/VM 5.4.0 and z/VM 5.3.0.
Corrects monitor records that include share setting by processor type. Also applies to z/VM 5.4.0.
Page release serialization impacts performance. Release processing was changed to do as much work as possible while holding the PTIL shared (PTIL-s), switching to holding it exclusive (PTIL-x) only when necessary. The changed modules in release processing are HCPHPC, HCPHPH, and HCPHPK; some callers of the release entry points also required minor follow-on changes. The impact is more significant for environments running DB2 for z/VM and VSE. Also applies to z/VM 5.4.0.
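The PTIL-s/PTIL-x change follows the common shared/exclusive (readers-writer) locking pattern: do the bulk of the work under the shared lock, and take the exclusive lock only for the brief window that actually mutates shared state. A sketch under that assumption (all names hypothetical; Python's standard library has no readers-writer lock, so a minimal one is included):

```python
import threading

class SharedExclusiveLock:
    """Minimal readers-writer lock: many shared holders, or one exclusive."""
    def __init__(self):
        self._cond = threading.Condition()
        self._readers = 0
        self._writer = False

    def acquire_shared(self):
        with self._cond:
            while self._writer:
                self._cond.wait()
            self._readers += 1

    def release_shared(self):
        with self._cond:
            self._readers -= 1
            if self._readers == 0:
                self._cond.notify_all()

    def acquire_exclusive(self):
        with self._cond:
            while self._writer or self._readers:
                self._cond.wait()
            self._writer = True

    def release_exclusive(self):
        with self._cond:
            self._writer = False
            self._cond.notify_all()

ptil = SharedExclusiveLock()  # hypothetical stand-in for the PTIL

def release_pages(pages):
    ptil.acquire_shared()        # bulk of the scan runs under "PTIL-s"
    try:
        to_release = [p for p in pages if p % 2 == 0]  # illustrative filter
    finally:
        ptil.release_shared()
    ptil.acquire_exclusive()     # brief mutation window under "PTIL-x"
    try:
        return len(to_release)
    finally:
        ptil.release_exclusive()

print(release_pages(range(10)))  # 5
```

Holding the lock shared for the long scan lets concurrent releasers proceed in parallel, which is where the performance gain in this kind of change comes from.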
Corrects an imbalance in managing control blocks associated with dedicated FCP and QDIO devices that, left uncorrected, can result in MCW002 abends. Also applies to z/VM 5.4.0 and z/VM 5.3.0.

z/VM 5.4.0 APARs

When storage above 2G is constrained, CP erroneously uses only one short-term subpooled block within a 4K page of storage that is backed below 2G, resulting in inefficient use of available storage.
CCW Translation changed to improve performance for CMS format. See also VM64602. Also applies to z/VM 5.3.0 and z/VM 5.2.0.
Improves performance of CMS Format command. Also applies to z/VM 5.3.0
Improves NFS performance when performing OPENVM PUTBFS operations. Also applies to z/VM 5.2.0 and z/VM 5.3.0.
Improves SFS performance when erasing large files. Without this APAR, ERASE can take hours to erase large SFS files (>5GB). Also applies to z/VM 5.3.0.
Improves MDC utilization by correcting a problem introduced with VM64082. Also applies to z/VM 5.3.0.
Introduces the DUMPLD2 utility, a replacement for DUMPLOAD that allows a dump file to be segmented into multiple files when space is limited and the dump needs to span multiple disks. Also applies to z/VM 5.3.0.
This introduces a slight CMS performance hit in some scenarios. It corrects a functional problem where control is not returned to the program after an FSWRITE call. The fix removes the fastpath processing for WRBUF (FSWRITE). Also applies to z/VM 5.2.0 and z/VM 5.3.0.

Linux Guest

OSA-Express QDIO Performance Enhancements Requirements
This document lists enhancements made to software and microcode for those customers interested in running Linux guests that use OSA-Express QDIO to communicate.
Queued I/O Assist Requirements
This document lists the requirements for all components involved in this enhancement made in z/VM 4.4.0.
QDIO Enhanced Buffer State Management Requirements
This document lists requirements for all components involved in this enhancement made in z/VM 5.2.0.


Connections traversing HiperSockets experience performance degradation, ranging from slow data transfers to connection termination. The problem may not occur immediately after an IPL, but may take several days to surface. This affects only connections that use zIIP-assisted IQDIOMULTIWRITE, configured on the GLOBALCONFIG statement of the TCPIP profile.
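For reference, the affected configuration is enabled in the TCPIP profile roughly as follows; the exact syntax is from the z/OS Communications Server GLOBALCONFIG statement as best recalled here, so verify it against the IP Configuration Reference for your release:

```
; Enable zIIP-assisted multiple-write for HiperSockets (IQD) traffic
GLOBALCONFIG ZIIP IQDIOMULTIWRITE
```

Installations that have not coded this parameter are not exposed to the degradation described above.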
Return to the Performance Tips Page