VM/ESA 2.4.0 Performance Changes
- Performance Improvements
- Performance Management
The pathlengths associated with the thread creation, event signalling, and POSIX signal handling CMS multitasking functions have been significantly reduced in VM/ESA 2.4.0. This improvement is especially applicable to situations that involve access to byte file system (BFS) files. Examples: Java initialization processor usage was reduced by 10%. NFS server processor usage for reading a large BFS file decreased by 27%.
The CMS Pipelines DATECONVERT stage was originally implemented in interpreted REXX. VM/ESA 2.3.0 APAR VM61673 provided this stage in compiled form for improved performance. That improvement has now been integrated into VM/ESA 2.4.0.
PEEK is often used to view the beginning of a large reader file (PEEK defaults to reading the first 200 records). With VM/ESA 2.3.0, PEEK would read the entire contents of a reader file into memory if it was in NETDATA (used by SENDFILE) or disk dump format. With VM/ESA 2.4.0, PEEK has been changed so that only the requested records are kept in memory. For large reader files in NETDATA or disk dump format, this can result in greatly improved responsiveness and reduced requirements for virtual storage and processor time.
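The buffering change can be sketched in Python. This is a hypothetical illustration of the strategy only, not the actual PEEK code (which processes CMS reader spool files); `peek_records` and `reader_file` are illustrative names:

```python
from itertools import islice

def peek_records(record_iter, count=200):
    """Keep only the first `count` records in memory, mirroring the
    VM/ESA 2.4.0 behavior, instead of materializing the whole file
    as the 2.3.0 NETDATA/disk-dump path did."""
    return list(islice(record_iter, count))

# Simulated large reader file delivered as a record stream.
def reader_file():
    for i in range(1_000_000):
        yield "record %d" % i

first_screen = peek_records(reader_file())   # only 200 records retained
```

Because the generator is consumed lazily, memory and processor cost are proportional to the 200 requested records rather than to the size of the reader file.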
The SORT Xedit macro, formerly written in EXEC2, has been rewritten in REXX, compiled, and added to the CMSINST shared segment. This reduces processor, I/O, and real storage usage. The measured processor usage improvements were usually in the 2% to 4% range.
The SDIR Xedit macro has been rewritten for improved serviceability and performance, resulting in a measured 38% processor usage improvement. SDIR is used by the FILELIST function to sort by SFS subdirectory.
Processor usage of SENDFILE when using the UFTSYNC option has been significantly reduced.
The RECEIVE EXEC was added to the CMSINST saved segment, eliminating loading time and reducing real storage requirements for CMS environments that use this function.
Three SFS performance APARs have been incorporated into VM/ESA 2.4.0. They all address scenarios where other SFS users appear to be locked out while a particular task for another user is in progress. VM61547 and VM62008 deal with the deletion of large files; the impact is proportional to file size and is not really noticeable for files under 512 Kb. VM62086 addresses a different scenario, where a large number of file changes are made without a commit being issued.
Elapsed time improvements of up to 83% and processor usage improvements of up to 87% have been measured for NFS workloads. These improvements result from the combined effects of performance enhancements that were added to NFS through service and in TCP/IP Function Level 320, support for the NFS version 3 protocol, and the CMS multitasking improvements that are included in VM/ESA 2.4.0. NFS access to BFS and SFS files improved the most but NFS access to minidisk files also improved in some cases.
Two improvements were made to the TCP layer of the protocol stack that serve to reduce the mainline pathlength for handling incoming messages. The first one, sometimes referred to as "TCP header prediction", optimizes the implementation for the normal case where the incoming segments are all present and are received in the order sent. The other improvement is the creation of a lookaside buffer for fast access to the TCP control block. Primarily due to these improvements, processor usage in the TCPIP stack virtual machine decreased by 0.5% for the Telnet measurement and by an average of 4% for the 6 FTP measurements. Large data transfers tend to experience the largest percentage improvements.
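The two ideas can be sketched as follows. This is a simplified Python illustration using conventional TCP implementation names such as `TCB` and `rcv_nxt`; the actual stack code is not written this way:

```python
# Sketch of the two fast-path ideas: a one-entry lookaside buffer for
# the TCP control block (TCB), and "header prediction" for in-order
# segments. Hypothetical structures, not the TCP/IP VM source.

class TCB:
    def __init__(self, conn_id, rcv_nxt):
        self.conn_id = conn_id   # (src_addr, src_port, dst_addr, dst_port)
        self.rcv_nxt = rcv_nxt   # next in-order sequence number expected

tcb_table = {}        # full control-block lookup structure
last_tcb = None       # one-entry lookaside buffer

def find_tcb(conn_id):
    global last_tcb
    if last_tcb is not None and last_tcb.conn_id == conn_id:
        return last_tcb            # lookaside hit: no table search needed
    tcb = tcb_table[conn_id]       # slower full lookup
    last_tcb = tcb
    return tcb

def receive_segment(conn_id, seq, payload):
    tcb = find_tcb(conn_id)
    if seq == tcb.rcv_nxt:         # header prediction: the in-order case
        tcb.rcv_nxt += len(payload)
        return "fast path"
    return "slow path"             # out-of-order: full protocol processing
```

Both optimizations exploit the common case: consecutive segments usually belong to the same connection, and segments usually arrive in the order sent, so the mainline path avoids both the table search and the general reassembly logic.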
APAR PQ18391 to TCP/IP Function Level 310 is now included in TCP/IP FL 320. Previously, there was a hardcoded limit of 20 segments that could be outstanding (unacknowledged) at any given time. This was a constraint whenever the window size exceeded 20 times the maximum segment size. This 20 segment limit has now been removed, allowing the full benefits of TCP window scaling (RFC 1323) to be realized.
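A small worked example, assuming a typical 1460-byte maximum segment size, shows why the limit mattered:

```python
MSS = 1460                 # assumed typical maximum segment size, in bytes
OLD_SEGMENT_LIMIT = 20     # hardcoded limit removed by APAR PQ18391

def max_outstanding(window_bytes, segment_limit=None):
    """Bytes that may be unacknowledged at any given time."""
    if segment_limit is not None:
        return min(window_bytes, segment_limit * MSS)
    return window_bytes

# With a 256 Kb window scaled per RFC 1323, the old segment limit,
# not the window, was the constraint:
old = max_outstanding(262144, OLD_SEGMENT_LIMIT)   # capped at 20 * MSS
new = max_outstanding(262144)                      # full scaled window
```

With the limit removed, throughput on high bandwidth-delay paths is bounded by the scaled window rather than by 20 times the MSS.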
The maximum large envelope size supported by TCP/IP VM has been increased from 32768 bytes (32 Kb) to 65535 bytes (64 Kb - 1). The TCP/IP stack machine uses large envelopes to hold UDP datagrams and to hold IP datagram fragments during reassembly when they do not fit into small (2048-byte) envelopes. Large envelope size is specified on the LARGEENVELOPEPOOLSIZE statement in the node_name TCPIP (or PROFILE TCPIP) configuration file.
A large envelope size greater than 32 Kb may improve performance in certain cases. One example is when a key application can send datagrams that exceed 32 Kb. Another example is when it is advantageous to be able to specify an MTU size that exceeds 32 Kb as might be the case, for example, for CTC devices. TCP/IP VM requires that the MTU size (specified on the GATEWAY statement) not exceed the large envelope size.
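The envelope-selection and MTU rules described above can be summarized in a short sketch. The helper names are hypothetical; the sizes come from the text:

```python
SMALL_ENVELOPE = 2048        # small envelope size, in bytes

def envelope_for(datagram_len, large_envelope=65535):
    """Pick the envelope class that holds a UDP datagram or IP fragment."""
    if datagram_len <= SMALL_ENVELOPE:
        return "small"
    if datagram_len <= large_envelope:
        return "large"
    raise ValueError("datagram exceeds the configured large envelope size")

def validate_mtu(mtu, large_envelope):
    """TCP/IP VM requires that the MTU size (GATEWAY statement) not
    exceed the large envelope size (LARGEENVELOPEPOOLSIZE)."""
    if mtu > large_envelope:
        raise ValueError("MTU must not exceed the large envelope size")
    return True
```

For example, a 40000-byte datagram needs a large envelope bigger than the old 32 Kb maximum, which is why raising the ceiling to 65535 bytes helps applications with large datagrams or large MTUs.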
A change has been made to FTP MODULE that reduces pathlength in the VM client virtual machine for the case of binary file transfer from a foreign host to the VM system. This change was observed to decrease processor usage in the client virtual machine by 9% for a binary get of a 2 Mb file and by 31% for a binary get of a 24 Kb file. This change is implemented by TCP/IP FL 320 APAR PQ28148.
The LIMITHARD option on the SET SHARE command can be used to limit the percentage of total system processing capacity that can be consumed by a given virtual machine. In prior releases, the observed percentage would tend to be lower than the requested percentage, especially at low total system processor utilizations. With VM/ESA 2.4.0, the LIMITHARD implementation has been changed so that the actual percentage tracks more closely to the requested percentage.
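Conceptually, a hard limit gates dispatching on the virtual machine's consumption within a time interval. The following schematic sketch illustrates the idea only; it is not the CP scheduler algorithm:

```python
def dispatchable(cpu_seconds_used, interval_seconds, limit_pct):
    """Hard limit (illustrative): the guest may be dispatched only while
    its consumption over the current interval remains below limit_pct
    of total system capacity."""
    return cpu_seconds_used < interval_seconds * limit_pct / 100.0

# A guest limited to 5% of a 10-second interval may consume
# at most 0.5 CPU-seconds before being held back.
```

The VM/ESA 2.4.0 change makes the percentage actually consumed track the requested `limit_pct` more closely, where prior releases tended to undershoot it, especially at low total system utilization.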
A number of new monitor records and fields have been added. Some of the more significant changes are summarized below. For a complete list of changes, see the MONITOR LIST1403 file (on MAINT's 194 disk) for VM/ESA 2.4.0. The major changes in the CP monitor data were extended channel measurement support, inclusion of data for new DASD, and some miscellaneous changes.
With the addition of support for FICON channels, some processors have also added the Extended Channel Path Measurement Facility. The original Channel Path Measurement Facility is still supported and is used if the extended facility is not available. A separate record, domain 0 record 20, is created for each channel when the extended facility is available. For the new FICON channels, this record includes channel utilization and the number of bytes read and written over the channel, both for the physical channel and from the logical view when VM/ESA is running in a logical partition. Comments were changed in other channel utilization records to reference this new record where appropriate.
Changes were also made in support of features provided by the IBM Enterprise Storage Server (ESS). The ESS DASD subsystem allows for parallel access volumes, where a single physical volume can appear as multiple volumes to a S/390 system. This allows multiple I/Os to be started to the same device in parallel. The Device Configuration record (domain 1 record 6) and Vary On Device record (domain 6 record 1) were changed to indicate whether the specified volume is a parallel access volume, and if so, whether it is the base volume or an alias volume. Parallel access volumes are typically configured prior to system IPL. However, the State Change event record (domain 6 record 20) was added to record the dynamic creation of an alias volume.
The System Configuration Data (domain 1 record 4) was enhanced to indicate the source for Subchannel Device Active Only time (as reported in domain 6 record 3). On some processors this data is generated by the hardware, while on others software generates it. Also, an indication of whether the Channel Path Measurement Facility or Extended Channel Path Measurement Facility is available on the processor has been added to the System Configuration Data record.
The APPLDATA domain data contributed by the TCP/IP stack virtual machine has also been enhanced. The changes include the following:
- In order to record larger data transfer amounts, new fields were added to the TCB Close (type 2) and UCB Close (type 7) event records. This change was introduced via APAR PQ16942 to FL310 and rolled into the base for FL320.
- Support was added in FL320 to handle certain denial of service attacks. Discarded packets associated with these attacks are counted in the MIB record (type 0).
- The IP address for a connection was added to the TCB Open (type 1) and Close (type 2) records.
- Window scaling information was added to the TCB Close (type 2) record.
- The FL320 stack was optimized to predict TCP headers. A counter was added to the TCB Close (type 2) record to indicate the number of times headers were predicted correctly.
- The TCP/IP Pool Limit Record (type 3) now includes additional information for the new dynamic pools and for virtual storage usage in general for the stack machine. Similarly, the TCP/IP Pool Size Record (type 4) now includes the new segment acknowledge pool.
None of the VM/ESA 2.4.0 performance changes are expected to have a significant effect on the values reported in the virtual machine resource usage accounting record.
This section contains information on the support for VM/ESA 2.4.0 provided by VMPRF, RTM/ESA, FCON/ESA, and VMPAF.
VM Performance Reporting Facility 1.2.1 (VMPRF) will run on VM/ESA 2.4.0 with the same support as VM/ESA 2.3.0. The latest service is recommended.
Realtime Monitor VM/ESA 1.5.2 (RTM/ESA) will run on VM/ESA 2.4.0 with the same support as VM/ESA 2.3.0. RTM must be recompiled using the VM/ESA 2.4.0 libraries. Follow the "REBUILD" section of the RTM Program Directory.
FCON/ESA Version 3.1.xx will run on VM/ESA 2.4.0 with the same support as for VM/ESA 2.3.0. The next release, FCON/ESA V.3.2.00, planned to become available in the third quarter of 1999, will also include support for FICON channels and some of the additional TCP/IP FL 320 data, in addition to other enhancements that are not VM/ESA release dependent.
Performance Analysis Facility/VM 1.1.3 (VMPAF) will run on VM/ESA 2.4.0 with the same support as VM/ESA 2.3.0.