
Performance Considerations

These items warrant consideration because they can have a negative impact on performance.

  • Increased CPU Usage
  • Performance APARs
  • Expanded Storage Size
  • Large VM Systems

Increased CPU Usage

The constraint relief provided in z/VM 5.2.0 allows for much better use of large real storage. However, the many structural changes in CP resulted in some unavoidable increases in CP CPU usage. The resulting increase in total system CPU usage is in the 2% to 11% range for most workloads, but the impact can be higher in unfavorable cases. See CP Regression Measurements for further information.

Performance APARs

There are a number of z/VM 5.2.0 APARs that correct problems with performance or performance management data. Review these to see if any apply to your system environment.

Expanded Storage Size

The 2G-line constraint relief provided by z/VM 5.2.0 can affect the expanded storage size that is best for performance. The "bottom line" z/VM guidelines provided in Configuring Processor Storage continue to apply. Those guidelines suggest that a good starting point is to configure 25% of total storage as expanded storage, up to a maximum of 2 GB. Some current systems have been configured with a higher percentage of expanded storage in order to mitigate a 2G-line constraint. Once such a system has been migrated to z/VM 5.2.0, consider reducing expanded storage back to the guideline amount.
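
As a rough illustration of that starting point, the following Python sketch applies the 25%-of-total, 2 GB-maximum rule to a few sample configurations. The function name and the example storage sizes are illustrative only and are not taken from this report.

    # Illustrative sketch of the "25% of total storage, capped at 2 GB"
    # expanded storage starting point. All values are in gigabytes.

    def recommended_expanded_storage_gb(total_storage_gb):
        """Return a starting-point expanded storage size per the guideline."""
        return min(0.25 * total_storage_gb, 2.0)

    for total in (4, 8, 16, 64):
        xstore = recommended_expanded_storage_gb(total)
        central = total - xstore
        print(f"{total:3d} GB total -> {xstore:.1f} GB expanded, {central:.1f} GB central")

For example, a 16 GB system would start at the 2 GB cap, leaving 14 GB of central storage; measurement and tuning from that starting point may still be appropriate.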

Large VM Systems

With z/VM 5.2.0, it becomes practical to configure VM systems that use large amounts of real storage. When that is done, however, we recommend a gradual, staged approach with careful monitoring of system performance to guard against the system encountering other limiting factors. Here are some specific considerations:

  • Most CP control blocks and data areas can now reside above the 2G line. The most important exception is the page management blocks (PGMBKs). Each PGMBK is 8 KB in size and is pageable. When resident, a PGMBK is backed by two contiguous real storage frames below the 2G line. There is one resident PGMBK for every in-use one-megabyte segment of virtual storage, where "in-use" means that it contains at least one page that is backed by a real storage frame. The fact that PGMBKs must still reside below 2G sets an absolute maximum of 256 GB on the amount of in-use virtual storage that z/VM 5.2.0 can support. (The actual maximum is somewhat less because of other data that still must reside below 2G.) Most z/VM systems are far below this limit, but it is more likely to become a factor as systems become larger.

    Performance Toolkit for VM can be used to check PGMBK usage. Go to the FCX134 screen (Shared Data Spaces Paging Activity - DSPACESH) and look at the data for the PTRM0000 data space. "Resid" is the number of frames being used for PGMBKs, and the number of PGMBKs is half that amount. (Note: In z/VM 5.1.0, this count is in error and is, in effect, a count of PGMBKs rather than PGMBK frames.) The "Enhanced Large Real Storage Exploitation" section has a table that includes this data. A minimal sketch of this arithmetic follows the list.

  • Since the System Execution Space (SXS) is limited to a maximum of 2 GB, it represents another resource that can become constrained. We don't expect this to become a problem on z/VM 5.2.0. You can use the QUERY SXSPAGES command to check SXS utilization at any given moment. SXS utilization is 100 times "Total SXS Pages in Use" divided by "Total SXS Pages". Performance Toolkit for VM can be used to see SXS utilization over time. Go to the new "System Execution Space Utilization" report (SXSUTIL; FCX264) and use the "Total Pages Used" and "Total Pages" values. The "Enhanced Large Real Storage Exploitation" section has a table that includes this data. SXS storage management is designed to keep aliases in place until SXS pages are needed for some other purpose. Consequently, the "Total Pages Used" count tends to overstate the true requirements. A sketch of this calculation also follows the list.

  • On larger systems, it becomes more important to follow the minidisk cache (MDC) tuning recommendations. In particular, large, randomly accessed databases are usually poor candidates for minidisk caching, so MDC should be turned off for the underlying minidisks or real devices.

  • The time and space required to process CP dumps can become a factor on very large z/VM systems.

  • If real processors are added, look for MP locking as a potential constraint. See 24-Way Support for discussion and measurement results.
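
The 256 GB ceiling and the FCX134 interpretation described in the first item above reduce to simple arithmetic. The following Python sketch reproduces it; the variable names and the example Resid value are illustrative and are not taken from the report.

    # PGMBK arithmetic from the first consideration above.
    PGMBK_SIZE_KB = 8                # each PGMBK is 8 KB (two 4 KB frames below 2G)
    SEGMENT_MB = 1                   # one resident PGMBK per in-use 1 MB segment
    BELOW_2G_KB = 2 * 1024 * 1024    # 2 GB below the 2G line, expressed in KB

    # Absolute ceiling: even if every below-2G frame held a PGMBK.
    max_pgmbks = BELOW_2G_KB // PGMBK_SIZE_KB               # 262,144 PGMBKs
    max_inuse_virtual_gb = max_pgmbks * SEGMENT_MB // 1024  # 256 GB
    print(f"Absolute maximum in-use virtual storage: {max_inuse_virtual_gb} GB")

    # Mapping the FCX134 (DSPACESH) Resid count for PTRM0000 to in-use virtual storage.
    resid_frames = 50_000            # example value; read the actual count from FCX134
    pgmbks = resid_frames // 2       # each resident PGMBK occupies two frames
    inuse_virtual_gb = pgmbks * SEGMENT_MB / 1024
    print(f"{resid_frames} Resid frames -> {pgmbks} PGMBKs -> "
          f"about {inuse_virtual_gb:.0f} GB of in-use virtual storage")

In practice the usable ceiling is lower than 256 GB, as noted above, because other data must also reside below 2G.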
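
Similarly, the SXS utilization figure mentioned in the second item is a simple percentage of the QUERY SXSPAGES (or FCX264 SXSUTIL) counts. A minimal sketch, with placeholder values rather than output from a real system:

    # SXS utilization from the QUERY SXSPAGES counts (values are placeholders).
    total_sxs_pages_in_use = 120_000   # "Total SXS Pages in Use"
    total_sxs_pages = 524_288          # "Total SXS Pages" (2 GB of 4 KB pages)

    sxs_utilization_pct = 100 * total_sxs_pages_in_use / total_sxs_pages
    print(f"SXS utilization: {sxs_utilization_pct:.1f}%")   # about 22.9% here

Because aliases are kept in place until the pages are needed for some other purpose, a high percentage computed this way overstates the true requirement, as noted above.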
