VM/ESA 2.3.0 Performance Changes


Performance Improvements

Reduced Segment Table Storage

For each virtual address space, CP must maintain a contiguous, page-aligned segment table in fixed storage, which the hardware uses to translate virtual addresses into real addresses. In prior releases, the storage allocated for any given primary address space segment table was always one of three possible sizes. Segment tables for address spaces of 32MB or less were allocated from space reserved at the beginning of that virtual machine's VMDBK and therefore took no additional storage. Primary address spaces greater than 32MB but not exceeding 1GB had their segment table allocated from the beginning of a separate 4K page. Finally, primary address spaces greater than 1GB and up to the architected maximum of 2GB had their segment table allocated from the beginning of two contiguous 4K pages. In both of the latter two cases, any remaining space not needed for the segment table was unused.
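
As a rough illustration of why the old scheme wasted space, each segment table entry is 4 bytes and maps one 1MB segment:

  32MB address space:    32 entries x 4 bytes =  128 bytes (held within the VMDBK)
  1GB address space:   1024 entries x 4 bytes = 4096 bytes (one 4K page)
  2GB address space:   2048 entries x 4 bytes = 8192 bytes (two 4K pages)

A 64MB address space, for example, needs only a 256-byte segment table, yet under the old scheme it occupied a full 4K page, leaving 3840 bytes unused.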

In VM/ESA 2.3.0, this leftover space is made available for satisfying other user free storage requests. As a result, fixed storage requirements increase much more gradually as segment table size grows. Although it is still best for performance to keep segment tables small, it is no longer important to try to keep virtual machine size below 32MB or, failing that, below 1GB. The net effect of this change is to simplify shared segment and virtual machine size management, and to improve performance in cases where many virtual machine address spaces exceed 32MB or 1GB.

Installations that currently define their shared segments top-down starting at 1GB should consider relocating them to lower virtual address ranges in order to benefit from this change.

Record Level Minidisk Cache

This support, first available on VM/ESA 2.1.0 and VM/ESA 2.2.0 as APAR VM61045, has now been integrated into VM/ESA 2.3.0. It is intended for unusual cases where the default full track minidisk caching is not appropriate. This typically occurs when a large database (hundreds of cylinders) is implemented as a single CMS file and the application does large numbers of random accesses to small amounts of data in that file. In such cases, using record level minidisk cache for that file's minidisk is likely to improve performance.

This support is limited to:

  • 4KB-formatted CMS minidisks on non-FBA DASD

  • I/O done by diagnose X'18', diagnose X'A4', diagnose X'250', or the *BLOCKIO CP system service

Record level caching can be enabled by using the RECORDMDC option on the SET MDCACHE MDISK command or the MINIOPT directory control statement. The current setting can be determined by using the QUERY MDCACHE MDISK command.
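
The exact operand syntax is documented in the CP Command Reference; as a hedged illustration only, enabling and then checking record level caching for the minidisk at virtual device 0200 might look like:

  SET MDCACHE MDISK 0200 RECORDMDC
  QUERY MDCACHE MDISK 0200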

See the Performance manual for minidisk cache tuning guidelines.

Improved Pacing for Secondary User Console Output

The pacing algorithm for console output directed to a secondary user (established using the CP SET SECUSER command) has been improved. In prior releases, the pacing algorithm limited output to 22 lines per second. Once this limit was reached, the virtual machine generating the output was suspended for a second. If that virtual machine was a server doing tracing or otherwise generating a high rate of console I/O, the server's performance could be degraded. In VM/ESA 2.3.0, this problem has been reduced because the limit has been raised to 255 lines per second.
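
The following REXX fragment sketches the pacing idea (a hedged illustration only, not CP's actual implementation; the DELIVER routine is a hypothetical stand-in for CP's console output path):

  /* Hedged sketch of per-second output pacing; not actual CP code. */
  limit = 255                    /* lines per second (was 22)       */
  sent  = 0
  do forever
     parse pull line             /* next console line to deliver    */
     if line = '' then leave     /* stop when no more lines queued  */
     call deliver line
     sent = sent + 1
     if sent >= limit then do    /* per-second budget exhausted     */
        address command 'CP SLEEP 1 SEC'  /* suspend the sender     */
        sent = 0
     end
  end
  exit

  deliver: procedure             /* hypothetical stand-in routine   */
     parse arg text
     say text                    /* write the line to this console  */
     return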

Execute Macro Rewrite

The EXECUTE Xedit macro was rewritten for improved performance and maintainability. This resulted in a 6% CPU usage reduction for EXECUTE when used by FILELIST and a 13% CPU usage reduction for EXECUTE when used by RDRLIST. EXECUTE macro CPU usage is essentially unchanged when used by DIRLIST, CSLLIST, or MACLIST.

CMSINST Shared Segment Additions

The following additional CMS system files have been moved into CMSINST:

  ALL      XEDIT  (ALL Xedit command)
  APILOAD  EXEC   (add REXX copy files for multitasking)
  DEFAULTS EXEC   (setup for productivity aids)
  EXECUPDT EXEC   (apply updates to an executable)
  HELP     XEDIT  (HELP in Xedit)
  JOIN     XEDIT  (JOIN in Xedit)
  MOREHELP EXEC   (HELP for more information)
  OPENVM   EXEC   (front end to OPENVM)
  PREFIXX  XEDIT  (X prefix command)
  PRFSHIFT XEDIT  (> prefix command)
  PRFSHOW  XEDIT  (S prefix command)
  PROFTMPL XEDIT  (profile for templates)
  RECEIVE  XEDIT  (RECEIVE from PEEK)
  RGTLEFT  XEDIT  (PF10 in default Xedit)
  SPLTJOIN XEDIT  (PF11 in default Xedit)
  X$EUPD$X XEDIT  (EXECUPDT with NOCOMMENT option)
  X$EXCM$X EXEC   (EXECMAP)
  X$LKED$X XEDIT  (LKED)
  X$TMPL$X XEDIT  (TEMPLATE option with CSLLIST)

When these functions are used, this change decreases per-user real storage requirements and eliminates the processor time and I/Os that were previously required to load them from the S-disk into storage.

PEEK Improvements

A number of changes were made to the PEEK command for improved function and performance. The performance results depend upon the format of the file being viewed. Measurements showed a 17% CPU usage reduction for print files, a 15% reduction for punch files, and a 25% reduction for disk dump files. There was no significant change in the performance of PEEK for NETDATA files.

Reduced TCP/IP Processor Usage

Processor usage by the TCPIP virtual machine has been reduced by approximately 2%. An additional processor usage improvement (up to 1%) can be realized on processors that support the checksum (CKSM) instruction.

Reduced TCP/IP Real Storage Usage

In prior TCP/IP releases, an increase in the TCP/IP buffer pool sizes resulted in an increase in TCPIP virtual machine real storage usage, even when the additional buffers were never used. With TCP/IP Function Level 310, this has been corrected so that excess buffers have no appreciable effect on TCPIP real storage requirements. This means that you can now provide extra buffers without the risk of undesirable performance effects.
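
The buffer pools are sized with POOLSIZE statements in the PROFILE TCPIP file. The statement names below are from TCP/IP for VM, but the values are purely illustrative:

  DATABUFFERPOOLSIZE    160
  ENVELOPEPOOLSIZE      750
  LARGEENVELOPEPOOLSIZE  50

With Function Level 310, generously sized pools such as these no longer inflate the TCPIP virtual machine's real storage requirements.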

TCP/IP RFC 1323

TCP/IP Function Level 310 implements RFC 1323, a TCP protocol extension that allows window sizes larger than 64KB to be negotiated. This extension benefits high-bandwidth, high-latency connections. In such cases, it can increase maximum throughput by increasing the amount of data that can be transmitted before an acknowledgement from the receiving system is required.
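
The benefit follows from the bandwidth-delay product: maximum throughput on one TCP connection is limited to the window size divided by the round-trip time. For example, with a 60 millisecond round-trip time:

  64KB window:   65536 bytes / 0.060 seconds = about 1.1MB per second
  256KB window: 262144 bytes / 0.060 seconds = about 4.4MB per second

so a larger negotiated window raises the throughput ceiling regardless of link speed.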

Increased SMTP Capacity

The SMTP server has been redesigned so that it does fewer minidisk I/Os and reads spool files asynchronously (using the *SPL CP system service). These changes have resulted in substantial improvements to the maximum throughput that one SMTP server virtual machine can deliver. Example measurements show 1.3-fold to 3.4-fold throughput increases relative to TCP/IP 2.4. The actual degree of improvement on a given system configuration will depend primarily on where the SMTP log file is written (spool or minidisk), the average access time to the DASD volume containing the SMTP A-disk, the average access time to the spool volumes, processor availability, and processor speed. The largest relative improvements will be seen on systems where the log file is written to spool and that have long DASD access times, high processor availability, and high processor speed.


Performance Considerations

DASD I/O Queue Ordering Algorithm Removed

In prior releases, CP ordered DASD I/O requests in an attempt to minimize seek time. Newer DASD technologies have made this ordering obsolete and, in some cases, even counterproductive. In addition, in certain unusual cases, the algorithm could result in very long service delays for a given user. Because of these considerations, the algorithm has been removed.

For the great majority of cases, this change is expected either to have no discernible effect or to improve system characteristics. However, DASD I/O access times might increase in some situations. This is most likely on systems with substantial DASD I/O queueing, older DASD, and non-cached control units.

Potential Shared Segment Overlaps

The portion of the CMS saved system that resides above the 16MB line has been extended by one megabyte and now ends at location X'13FFFFF'. While installing VM/ESA 2.3.0, check to make sure that this has not caused any overlaps with other shared segments in your system. In addition, if you have any non-relocatable modules that were generated to load between X'1300000' and X'13FFFFF', they will need to be regenerated to run in another location.
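
Note that X'1400000' is 20MB, so the CMS saved system now extends to just below the 20MB line. One way to check for overlaps is to display the defined ranges of all saved segments, for example with the privileged QUERY NSS command:

  QUERY NSS ALL MAP

and verify that no other segment falls within X'1300000'-X'13FFFFF'.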

NFS Performance

The performance of NFS when used with Shared File System or Byte File System directories is significantly lower than the performance of NFS when used with CMS minidisks.


Performance Management

Monitor Enhancements

A number of new monitor records and fields have been added. Some of the more significant changes are summarized below. For a complete list of changes, see the MONITOR LIST1403 file (on MAINT's 194 disk) for VM/ESA 2.3.0.

  • System Configuration Data

    Various fields were added to the system configuration data record (domain 1 record 4). These include the volume serial numbers for the checkpoint and warm start areas and the system identifier name.

  • IUCV Connection Information

    Two new fields were added to the user activity data record (domain 4 record 3). These fields are the maximum number of IUCV connections allowed and the current number of IUCV connections in use. The maximum number of connections comes either from the system default or from the MAXCONN setting in the user's directory entry.
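
    For example, a server's limit can be raised with an OPTION statement in its user directory entry (256 is purely an illustrative value):

       OPTION MAXCONN 256

    When no MAXCONN setting is present, the system default applies.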

  • Improved MDC Counter

    The MDC read request count in the expanded storage data record (domain 0 record 14) is now more accurate. In previous releases, it was possible for this number to be inaccurate in either direction. This field is often used to compute MDC hit ratios.
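
    A typical calculation is:

       MDC hit ratio = MDC read hits / MDC read requests

    so, for example, 900 hits against 1000 read requests gives a 90% hit ratio, and any error in the request count distorts the ratio correspondingly.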

  • APPLDATA (domain 10) Interface

    Enhancements were made to diagnose X'DC', which is the interface applications can use to contribute data to the monitor. Applications can now easily contribute event type and configuration type data to the APPLDATA domain.

    APPLDATA monitor records are now contributed by two TCP/IP servers, TCPIP and TFTPD, to allow for improved monitoring of their performance.

  • TCPIP

    The TCP/IP Function Level 310 stack machine can now generate APPLDATA event and sample records. The layouts of these records are provided in the Performance manual. The following types of records are produced:

       TYPE    Rec  Description
     
       Sample   00  TCP/IP MIB Record
       Event    01  TCP/IP TCB Open Record
       Event    02  TCP/IP TCB Close Record
       Config   03  TCP/IP Pool Limit Record
       Sample   04  TCP/IP Pool Size Record
       Sample   05  TCP/IP LCB Record
       Event    06  TCP/IP UCB Open Record
       Event    07  TCP/IP UCB Close Record
       Config   08  TCP/IP Link Definition Record
       Sample   09  TCP/IP ACB Record
       Sample   0A  TCP/IP CPU Record
       Event    0B  TCP/IP CCB Record
       Sample   0C  TCP/IP Tree Size Record
       Config   0D  TCP/IP Home Record
    

    These records are provided if the APPLMON option is specified for the TCPIP virtual machine, subject to the new MONITORRECORDS statement in the PROFILE TCPIP configuration file:

       NORECORDS      no monitor records (default)
       MOSTRECORDS    all records except for the ACB records
       ALLRECORDS     all monitor records
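
    As a hedged illustration (verify placement and exact syntax in the TCP/IP and CP documentation), the stack's records could be enabled with the APPLMON directory option, a MONITORRECORDS statement, and the CP commands that enable the APPLDATA domain:

       OPTION APPLMON                        (TCPIP directory entry)
       MONITORRECORDS MOSTRECORDS            (PROFILE TCPIP file)
       MONITOR SAMPLE ENABLE APPLDATA ALL    (CP command)
       MONITOR EVENT ENABLE APPLDATA ALL     (CP command)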
    

  • TFTPD

    The TFTPD server was first introduced as an APAR to TCP/IP 2.4 as part of VM/ESA's support for the IBM Network Station. TFTPD contributes APPLDATA sample records to CP monitor. These records are always provided if the APPLMON option is specified for the TFTPD virtual machine. They can be used to get a more detailed understanding of the work that is being performed by the TFTPD server. The information provided includes:

    • files read by client
    • bytes read
    • files read from cache
    • elapsed time spent reading files
    • files written to the TFTPD server
    • bytes written
    • elapsed time spent writing files
    • information on failed transactions
    • timeouts waiting for an acknowledgement

Improved MONVIEW Performance

The MDATPEEK stage, which is part of the MONVIEW tool shipped on the 3B2 samples disk, was rewritten from REXX to assembler. This greatly improves MONVIEW performance for certain cases. MONVIEW can be used to view CP monitor data. For more information on MONVIEW, see the MONVIEW SAMPLIST file on the samples disk.

NETSTAT Enhancements

NETSTAT has been improved to be more selective, limiting the amount of output to the data you actually need. In addition, the NETSTAT INTERVAL command now has a fullscreen interface in which the data can be scrolled and sorted by field.

Effects on Accounting Data

None of the VM/ESA 2.3.0 performance changes are expected to have a significant effect on the values reported in the virtual machine resource usage accounting record.

VM Performance Products

This section contains information on the support for VM/ESA 2.3.0 provided by VMPRF, RTM/ESA, FCON/ESA, and VMPAF.

VM Performance Reporting Facility 1.2.1 (VMPRF) will run on VM/ESA 2.3.0 with the same support as VM/ESA 2.2.0. The latest service is recommended.

Realtime Monitor VM/ESA 1.5.2 (RTM/ESA) requires APAR GC05430 (PTF UG03868) to run on VM/ESA 2.3.0. RTM/ESA has been updated to use a field new to VM/ESA 2.3.0 when calculating the minidisk hit ratio. With this new field, the calculation corresponds more closely to the value returned from the CP INDICATE command. RTM/ESA will continue to do the old calculation when running on earlier VM/ESA releases.

FCON/ESA Versions 2.3.02 and 3.1.00 will run on VM/ESA 2.3.0 with the same support as VM/ESA 2.2.0.

Performance Analysis Facility/VM 1.1.3 (VMPAF) will run on VM/ESA 2.3.0 with the same support as VM/ESA 2.2.0.

Last updated 19 March 1998
