VM/ESA 1.2.1 Performance Changes


Performance Improvements

Virtual Disk in Storage

VM/ESA 1.2.1 supports a new type of temporary minidisk known as a virtual disk in storage. To a virtual machine, a virtual disk in storage appears similar to a temporary disk (T-disk). However, virtual disks in storage can be shared. A virtual disk in storage is allocated in an ESA/370 address space in system storage instead of on DASD.

Virtual disks in storage enable guests, servers, and other applications to use system storage for temporary data that traditionally resides on minidisks, with no recoding for a different architecture. Virtual reserve/release is supported for virtual disks in storage, which allows data sharing among multiple guests, such as VSE, that also support reserve/release CCWs.

Virtual disks in storage can provide better performance than traditional minidisks located on actual DASD. Although there is some decrease in processor usage, the improvement comes mainly from reduced elapsed time, which results from a reduction in real I/Os. The only real I/Os caused by the use of virtual disks in storage are page read and write operations, which arise because the virtual disk in storage data contends for real and expanded storage.

An excellent use for a virtual disk in storage is a small amount of frequently used data. A good example is a VSE lock file, which reflects the locking status of resources shared by multiple VSE guests. In this situation, the virtual disk in storage data causes a negligible increase in storage contention, remains resident in real storage, and therefore generates a negligible amount of real I/O.

Virtual disks in storage holding large amounts of data should be used more cautiously. Overall performance can still improve relative to traditional minidisks, but only if the benefit of avoiding real minidisk I/O outweighs the performance impact of increased storage usage and paging.
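
For illustration, a virtual disk in storage can be created dynamically with the CP DEFINE command or defined in the CP directory with an MDISK statement. The device number, size, and access mode shown are arbitrary; the MWV access mode on the directory form permits the virtual reserve/release sharing described above:

    DEFINE VFB-512 AS 0293 BLK 128000

    MDISK 0293 FB-512 V-DISK 128000 MWV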

Load Wait State PSW

CP's handling of situations in which a guest virtual machine loads a wait state PSW has been improved. This benefits servers that do asynchronous processing (such as VTAM and SFS) and guest operating systems that are run on non-dedicated processors. The benefits are in terms of reduced processing requirements. The amount of benefit is proportional to the system-wide frequency of this operation, which is displayed as the GUESTWT variable on the RTM SYSTEM screen. The largest benefits have been observed for V=R, MODE=370 VSE guests running an I/O-intensive workload.

IUCV Processor Usage

IUCV and APPC/VM processor usage was reduced substantially in VM/ESA 1.1.1. Processor usage has been further reduced in VM/ESA 1.2.1. Both VTAM and SFS use IUCV or APPC/VM and benefit from this improvement.

Enhanced Fast Path CCW Translation

Fast path CCW translation was extended to include support for V=R guests that do I/O to full-pack minidisks. This can result in substantial improvements in system responsiveness and processor capacity for I/O-intensive workloads.

Shared Segment Serialization

Changes have been made to the way CP handles system data files containing saved segments and named saved systems (NSSs). These changes allow functions that previously had to run one at a time to run concurrently. The following performance improvements are provided:

  • Multiple saved segment loads, finds, and purges (using DIAGNOSE code X'64' or functions that call it) can run at the same time. They can also run while a SAVESEG or SAVESYS command is being processed. Previously, only one saved segment or NSS operation could occur at any given time.

  • Users can IPL NSSs and load saved segments into their virtual machines while other users are doing the same.

  • Except in rare cases, saved segments and NSSs can be loaded, saved, or purged while the SPTAPE command is being used to dump or load a saved segment or NSS.

  • A SAVESEG or SAVESYS operation can be interrupted with the PURGE NSS command. Previously, the purge had to wait for the save to complete.

Support for Data Compression

If VM/ESA is installed on an ES9000 processor that supports hardware-assisted data compression, any program that can use data compression when running native can also use data compression when running as a guest of VM/ESA. The potential benefits of such data compression include reduced auxiliary storage requirements and reduced data transmission time.

EXECUTE XEDIT Macro

The processing requirements of the EXECUTE XEDIT macro have been significantly reduced. The EXECUTE macro is used by FILELIST, DIRLIST, RDRLIST, MACLIST, CSLLIST, and CSLMAP whenever a command is issued that applies to a line of information on the full-screen display these facilities provide.

Minidisk File Cache Size

The minidisk file cache reduces the frequency of minidisk file I/Os. When a file is read sequentially, CMS reads as many blocks at a time as will fit into the minidisk file cache. When a file is written sequentially, completed blocks are accumulated until they fill the file cache and are then written together.

In the past, the size of this cache was fixed at 8KB. This remains the default, but now a different size can be specified when the CMS nucleus is built. That size then applies to all users of that CMS nucleus. A minidisk file cache of up to 96KB can be specified. The actual size is, however, subject to a limit of 24 CMS data blocks. That means, for example, that the maximum effective file cache size is 48KB for a minidisk that is blocked at 2KB.
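
In other words, the effective cache size is the smaller of the size specified at nucleus build time and 24 times the minidisk block size. The following small REXX sketch (with illustrative variable names) shows the calculation:

    /* Effective minidisk file cache size, in bytes                 */
    arg specified blocksize .          /* e.g. 98304 2048           */
    effective = min(specified, 24 * blocksize)
    say 'Effective file cache size:' effective 'bytes'

For a 2KB-blocked minidisk and a specified size of 96KB, this yields 49152 bytes (48KB), as noted above.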

As the file cache size increases, minidisk I/Os decrease but system real storage requirements and paging may increase. The 8KB default is suitable for storage-constrained systems, while a larger size is likely to result in an overall performance improvement for systems with more storage.

The specified size only applies to minidisk I/O. There is (and has been) a separate file cache size value that applies to SFS files. That size defaults to 12KB and is also specified when the CMS nucleus is built.

CMS Multitasking Performance

The performance of the CMS multitasking kernel has been improved, reducing the processing requirements of CMS multitasking applications that make use of multiple virtual processors.

CMS Pipeline for Retrieving Accounting Data

The performance of the RETRIEVE utility for collecting CP accounting data is a recognized concern, largely because RETRIEVE closes the output file after writing each accounting record. The new STARSYS CMS Pipelines stage can be used to create a replacement for the RETRIEVE utility. The STARSYS documentation shows an example exec that uses the *ACCOUNT system service. The exec accepts a list of "precious" records and closes the output file after writing each "precious" record; otherwise, the file is closed only when the internal buffers fill. For "precious" records, the performance of the example exec is similar to that of RETRIEVE; for all other records it is significantly faster.
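
A minimal sketch of the idea follows. It assumes the STARSYS stage takes the system service name as its operand, and it omits the reply handling and the selective file closing for "precious" records that the documented example exec provides:

    /* REXX sketch: collect CP accounting records via STARSYS       */
    'PIPE starsys *ACCOUNT | >> ACCOUNT DATA A'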

SFS Control Data Backup

In the past, if only one block of a 512-block cell had data in it, all of the blocks in that cell were backed up. SFS now backs up only those blocks that are in use. This can reduce backup time and the amount of space required to hold the backup. It also means that the amount of space required to hold the backup can now be estimated more accurately. The output from QUERY FILEPOOL MINIDISK can be used to do this.

SFS Thread Blocking I/O

In VM/ESA 1.2.0, applications using the CMS multitasking interfaces could perform SFS requests asynchronously only by polling to determine when the asynchronous request had completed. A thread can now issue an asynchronous SFS CSL request, do other processing, and then wait for completion of that SFS request by using the EventWait multitasking service. This makes it practical for CMS multitasking applications to use the SFS functions asynchronously.

VMSES/E

The performance of the VMFBLD and VMFSIM functions has been improved. This will generally decrease the elapsed time required to build a nucleus.

The automation of more service processing in VMSES/E R2.1 eliminates many manual tasks. Therefore, the overall time required to do these tasks will decrease. The following automation functions have been added to VMSES/E:

  • Local Service Support

    The VMFSIM CHKLVL, VMFASM, VMFHASM, VMFHLASM, and VMFNLS commands provide a LOGMOD option to automatically update the Software Inventory when local service is applied.

  • Automation of Library Generation

    The VMFBLD function automates the generation of callable services libraries and CMS/DOS libraries.

  • Support for Generated Objects

    The VMFBLD function automates the building of generated objects, such as text decks.


Performance Considerations

TRSAVE Enhancements

With the DEFERIO option of the TRSAVE command (privilege classes A and C), the user can now request that trace buffers not be scheduled for I/O until tracing is stopped. This function will typically be used with a large number of in-storage trace buffers to avoid wrapping the data. If the number of requested buffers is too large, use of this facility can adversely affect system performance.

3390 Model 9 DASD Support

Because three logical tracks are mapped onto one physical track, the rotational speed is one-third that of a 3390 model 3, which makes the latency three times larger. The 3390 model 9 is intended as a mass-storage device for applications that require faster access times than tape or optical drives provide, but that do not require the high performance of traditional DASD. These devices should not be used for system data or for data used by applications that require high-performance DASD.


Performance Management

Monitor Enhancements

A number of new monitor records and fields have been added. Some of the more significant changes are summarized below. For a complete list of changes, see the MONITOR LIST1403 file for VM/ESA 1.2.1. For information about the content and format of the monitor records file, see the VM/ESA: Performance book.

  • Monitor Enhancements for Virtual Disks in Storage

    The monitor has been enhanced to include data on the virtual disk in storage feature. Information is included for overall system usage, usage by virtual disk in storage, and usage by individual user.

    • System limit values, number of virtual disks in storage, and current space allocated for virtual disks in storage are included in domain 0 record 7 (D0/R7).

    • A new record called Virtual Disk in Storage Information (D3/R17) was added. This sample record is created for each existing virtual disk in storage and includes address space name, owner, size, links, and I/Os.

    • The count of virtual I/Os to a virtual disk in storage for a given user is included in User domain records (D4/R2, D4/R3, D4/R9). This value is a subset of the existing virtual DASD I/Os found in those same records.

    • Additional information can be obtained from the related storage domain records (D3/R12, D3/R13, D3/R14) by using the address space name associated with the virtual disk in storage.

  • I/O Assist Monitor Enhancements

    Prior to VM/ESA 1.2.1, there was potential for anomalies in the monitor data associated with I/O assist (also known as SIE assist) in D6/R3. Each device eligible for the assist can be in one of three states: in assist, out of assist, or leaving assist. CP keeps counters of the transitions into each state and of the time spent in each state. These counters are sampled and included in the D6/R3 monitor record. The anomalies arose because the time-spent counter is not updated until a device leaves a state, so the monitor data did not include the time spent in the current state.

    The enhancement adds a flag indicating the current state of the device and a time stamp indicating when the device entered that state. Using this new information and the monitor record time stamp, one can compute the amount of time not yet included in the time-spent counters and determine which state that time belongs to (a small worked sketch follows this list).

  • Improved Processor High Frequency Monitor Sampling

    High frequency state sampling is used to collect some of the data found in record D5/R3. One item is the number of VMDBKs on the Processor Local Dispatch Vector (PLDV). Prior to VM/ESA 1.2.1, this value was often artificially inflated for the master processor: the monitor itself runs on the master processor while collecting other high-frequency data, and this sudden burst of activity skewed the PLDV values. VM/ESA 1.2.1 corrects this by sampling the processor data before the user data.

  • Other Monitor Enhancements

    Two event records (D3/R15 and D3/R16) have been added. These records are generated when a named saved system, discontiguous saved segment, or segment space is loaded into storage (accessed by the first user) or removed from storage (released by the last user).
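
Returning to the I/O assist enhancement described above, the missing time for a device is the difference between the monitor record time stamp and the time stamp of entry into the current state, attributed to the state indicated by the new flag. A small REXX sketch follows (the names are illustrative, and both times are assumed to have been converted to a common unit):

    /* REXX sketch: time in the current assist state that the        */
    /* D6/R3 time-spent counters do not yet include                  */
    arg record_time state_entry_time .
    missing = record_time - state_entry_time
    say 'Uncounted time in current state:' missing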

INDICATE Command

If INDICATE USER (privilege class E and G) is issued with the new EXPANDED option, the output is organized into groups (I/O, storage, CPU, and so forth) and additional information is provided for each of the private and shared address spaces associated with that user. A number of the data fields have been enlarged so as to make data wrapping much less frequent.

When INDICATE USER is issued without the EXPANDED option, a user's counts for the IO, RDR, PRT, and PCH fields are reset to zero whenever an ACNT command is executed for that user ID. With the EXPANDED option, these values are not reset. All reported time and count values are ever-increasing accumulators since logon.
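
For example, an expanded report for a particular virtual machine can be requested with a command of the form (the user ID is illustrative):

    INDICATE USER MAINT EXPANDED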

INDICATE NSS (class E) is a new command that displays information about named saved systems and saved segments that are loaded in the system and are in use by at least one user. For each NSS or saved segment, the output includes paging statistics and the number of pages residing in main storage, expanded storage, and on DASD. INDICATE NSS makes available information that, on certain earlier VM releases, was obtained by specifying INDICATE USER nss_name.

INDICATE SPACES (class E and G) is a new command that provides information about one or all of the address spaces owned by the specified user ID. For each address space, the output includes paging statistics and the number of pages residing in main storage, expanded storage, and on DASD.

QUERY VDISK Command

The QUERY VDISK command (privilege class B) displays a list of all existing virtual disks in storage. It can also display information about the system or user limits on the amount of virtual storage available for virtual disks in storage. Those limits are set using the SET VDISK command.
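
Illustrative invocations follow; the limit-related operand spellings are assumptions and should be verified against the CP Command Reference for this release:

    QUERY VDISK                 (list all virtual disks in storage)
    QUERY VDISK SYSLIM          (display the system limit)
    QUERY VDISK USERLIM userid  (display a user's limit)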

STARMONITOR

The STARMONITOR pipeline stage uses IUCV to connect to the CP *MONITOR system service and writes the monitor data it receives as logical records, each beginning with the 20-byte prefix defined for monitor records. Monitor sample data, event data, or both can be chosen, and records from one or more domains can be suppressed. A STARMONITOR stage can be used only as the first stage of a pipeline.
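
A minimal sketch follows; it assumes the stage can be run without operands to take the default data selection:

    /* REXX sketch: capture raw monitor records in a CMS file        */
    'PIPE starmonitor | > MONITOR DATA A'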

STARSYS

Use the STARSYS stage command to write lines from and send replies to a CP system service. STARSYS uses IUCV to connect to a two-way system service (such as *ACCOUNT, *LOGREC, or *SYMPTOM). A STARSYS stage can be used only as the first stage of a pipeline.

Effects on Accounting Data

The following list describes fields in the virtual machine resource usage accounting record (type 01) that may be affected by performance changes in VM/ESA 1.2.1. The columns where the field is located are shown in parentheses.

Milliseconds of processor time used (33-36)
This is the total processor time charged to a user and includes both CP and emulation time. For most workloads, this should not change much as a result of the changes made in VM/ESA 1.2.1. CMS-intensive workloads may experience a slight decrease.
Milliseconds of Virtual processor time (37-40)
This is the virtual time charged to a user. The processing time to read pages from XSTORE used to be included in this field. This time is now only included in total processor time (33-36). This change will somewhat decrease the virtual processor time reported for V=V guest virtual machines on systems that use XSTORE for paging.
Requested Virtual nonspooled I/O Starts (49-52)
This is a total count of requests; not all requests necessarily complete. The value of this field will tend to decrease if a minidisk file cache size larger than 8KB is selected. See "Minidisk File Cache Size" for details. Virtual disk in storage I/Os are counted in this total.
Completed Virtual nonspooled I/O Starts (73-76)
This is a total count of completed requests. The value of this field will tend to decrease if a minidisk file cache size larger than 8KB is selected. Virtual disk in storage I/Os are counted in this total.
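
For reference, the following REXX sketch extracts these fields from a type 01 record. It assumes the record has already been read into the variable 'record' and that the 4-byte count fields are binary integers:

    /* REXX sketch: fields of interest in a type 01 accounting record */
    cputime  = c2d(substr(record, 33, 4))    /* total CPU time (ms)    */
    virttime = c2d(substr(record, 37, 4))    /* virtual CPU time (ms)  */
    reqio    = c2d(substr(record, 49, 4))    /* requested virtual I/Os */
    compio   = c2d(substr(record, 73, 4))    /* completed virtual I/Os */
    say 'CP overhead time (ms):' cputime - virttime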
