Contents | Previous | Next

Dynamic Memory Upgrade

Abstract

z/VM 5.4 lets real storage be increased without an IPL by bringing designated amounts of standby storage online. Further, guests supporting the dynamic storage reconfiguration architecture can increase or decrease their real storage sizes without taking a guest IPL.

On system configurations with identical storage sizes, workload behaviors are nearly identical whether all the storage was available at IPL or some of it was brought online dynamically. When storage is added to a z/VM system that is paging, transitions in the paging subsystem are apparent in the CP monitor data and Performance Toolkit data and match the expected workload characteristics.

Introduction

This article provides general observations about performance results achieved when storage (that is, memory) is brought online dynamically. The primary result is that on system configurations with identical storage sizes, results are nearly identical whether all the storage was available at IPL or some of it was brought online by CP commands. Further, when storage is added to a paging workload, transitions in the paging subsystem are apparent in the CP monitor data and Performance Toolkit data and match the workload characteristics.

The SET STORAGE command allows a designated amount of standby storage to be added to the configuration. Storage to be dynamically added must be reserved during LPAR activation but does not need to exist at activation time. Storage added by the SET STORAGE command will be initialized only when storage is needed to satisfy demand or the system enters a wait state.
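
For illustration, here is a representative invocation that adds 1G of standby storage; the response layout shown is representative rather than exact, and the increment size varies by machine:

  set storage +1G
  STORAGE = 2G CONFIGURED = 2G INC = 256M STANDBY = 1G RESERVED = 0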

The QUERY STORAGE command now shows the amounts of standby and reserved storage. Reserved storage that exists is shown as standby storage, while reserved storage that does not exist is shown as reserved. The standby and reserved values can change when real storage is added, when LPARs are activated or deactivated, or when storage is dynamically added by operating systems running in other LPARs.
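
As an illustration of that distinction, consider an LPAR activated with 2G of initial storage and 2G of reserved storage. If the reserved 2G physically exists on the machine, QUERY STORAGE reports it as standby; if it does not yet exist, it is reported as reserved (response layouts are representative):

  query storage
  STORAGE = 2G CONFIGURED = 2G INC = 256M STANDBY = 2G RESERVED = 0

  query storage
  STORAGE = 2G CONFIGURED = 2G INC = 256M STANDBY = 0 RESERVED = 2G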

The DEFINE STORAGE command was enhanced to accept STANDBY and RESERVED values for virtual machines, and these values are shown in the output of the QUERY VIRTUAL STORAGE command.
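
For example, a guest's storage configuration might be redefined along these lines; note that DEFINE STORAGE resets the virtual machine, and the response layouts shown are representative:

  define storage 2G standby 1G reserved 1G
  STORAGE = 2G
  Storage cleared - system reset.
  query virtual storage
  STORAGE = 2G STANDBY = 1G RESERVED = 1G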

The maximum values for MDCACHE and VDISK are not updated automatically when storage is dynamically increased. After increasing real storage, the system administrator might want to evaluate and increase any storage settings established for SET SRM STORBUF, SET RESERVED, SET MDCACHE STORAGE, or the SET VDISK system limit, as sketched below.
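
The current settings can be reviewed with the corresponding QUERY commands and then adjusted; the STORBUF values below are placeholders for illustration, not recommendations:

  query srm
  query mdcache
  query vdisk syslim
  set srm storbuf 300 250 200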

CP monitor data and Performance Toolkit for VM provide information about standby and reserved storage. The new monitor data is described in z/VM 5.4 Performance Management. Storage added by the SET STORAGE command will not be reflected in CP monitor data and Performance Toolkit for VM counters until the storage has been initialized.

Method

Dynamic memory upgrade was evaluated using transition workloads and steady state workloads. Transition workloads were used to verify that workload characteristics change as expected when storage is added dynamically. Steady state workloads were used to verify that performance results are similar whether the storage was all available at IPL or was achieved by a series of SET STORAGE commands.

Virtual Storage Exerciser was used to create the transition and steady state workloads for this evaluation.

Here are the parameters for the two workloads used in this evaluation.

VIRSTOEX Users and Workload Parameters

Workload          2G    16G
Users              8      8
End Addr         4GB   18GB
Increment         56     32
Requested Time   720    720
CPUs               3      3
Requested Loops    0      0
Fixedwait          0      0
Randomwait         0      0

For transition evaluations, the workload was started in a storage size that would require z/VM to page. Storage was then dynamically added, in an amount that should eliminate z/VM paging and allow the workload to achieve 100% processor utilization. After that, additional storage was added to show that dynamically added storage is not initialized until it is needed or until the system enters a wait condition.

For steady state evaluation, a workload was measured in a specific storage configuration that was available at IPL. The measurement was repeated in a storage configuration that was IPLed with only a portion of the desired storage and the remainder dynamically added with SET STORAGE commands.
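
In terms of the 2G workload, for example, the dynamically created 3G configuration was reached by a sequence of this shape (the increments shown are illustrative):

  (IPL with 1G of real storage)
  set storage +1G
  set storage +1G
  (start the workload and collect CP monitor data)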

Guest support was evaluated by using z/VM 5.4 as a guest of z/VM 5.4, running the same workloads used for first-level z/VM.
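
A second-level configuration of that kind might be set up along these lines (illustrative; the sizes correspond to the 16G workload):

  define storage 12G standby 18G    issued for the guest virtual machine
  (IPL the z/VM 5.4 guest)
  set storage +18G                  issued in the second-level z/VM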

Results and Discussion

2G Transition Workload

The system was IPLed with 1G of storage and a workload started that required about 2G. This workload starts with heavy paging and less than 100% processor utilization.

Three minutes into the run, 1G of storage was added via the SET STORAGE command. This new storage was initialized immediately: paging was eliminated, processor utilization increased to 100%, and the monitor counters correctly reported the new storage values (DPA, SXS, available list). The SXS is extended as storage is dynamically increased until it reaches its maximum size of 2G (524,288 4KB pages), a limit reached once real storage grows just slightly beyond 2G.

Six minutes into the run, another 1G of storage was added via the SET STORAGE command. Because processor utilization was 100% and no paging was in progress, as expected this storage remained uninitialized for the next six minutes of steady-state workload.

Twelve minutes into the run, the workload ended, causing processor utilization to drop below 100%, the remaining storage to be initialized, and the counters to be updated (DPA, available list).

All of the aforementioned results and observations match expectations.

Here is an example (run ST630E01) containing excerpts from four separate Performance Toolkit screens showing values changing by the expected amount at the expected time. Specific data is extracted from these screens:

  • Performance Toolkit SYSSUMLG screen (FCX225)
  • Performance Toolkit PAGELOG screen (FCX143)
  • Performance Toolkit AVAILLOG screen (FCX254)
  • Performance Toolkit SXSUTIL screen (FCX264)
------------------------------------------------
          FCX225   FCX143        FCX254   FCX264
------------------------------------------------
                      DPA                    SXS
 Interval    Pct  Pagable   <Available>    Total
 End Time   Busy   Frames    <2GB  >2GB    Pages
------------------------------------------------  Start with 1G
 10:50:25   29.2   251481     875     0   258176
 10:50:55   24.3   251482    1183     0   258176
 10:51:25   34.9   251482     404     0   258176  Workload paging
 10:51:55   25.9   251482    1105     0   258176  <100% cpu
 10:52:25   30.4   251483      51     0   258176
------------------------------------------------  1G to 2G
 10:52:55   55.5   504575    176k     0   515840
 10:53:25  100.0   504575    170k     0   515840
 10:53:55  100.0   504575    170k     0   515840  No workload paging
 10:54:25  100.0   504575    170k     0   515840  100% cpu
 10:54:55  100.0   504575    170k     0   515840
 10:55:25  100.0   504577    170k     0   515840
------------------------------------------------  2G to 3G
 10:55:55  100.0   503755    169k     0   524287
 10:56:25  100.0   499240    165k     0   524287
 10:56:55  100.0   498475    164k     0   524287
 10:57:25  100.0   499252    164k     0   524287  Storage not being
 10:57:55  100.0   498483    164k     0   524287  initialized due to
 10:58:25  100.0   499253    164k     0   524287  100% cpu
 10:58:55  100.0   498476    164k     0   524287
 10:59:25  100.0   499249    164k     0   524287
 10:59:55  100.0   498446    164k     0   524287
 11:00:25  100.0   499255    164k     0   524287
 11:00:55  100.0   499252    164k     0   524287
 11:01:25  100.0   499253    164k     0   524287
------------------------------------------------  Workload end, init starts
 11:01:55   15.4   763566    169k  260k   524287
 11:02:25    2.6   765402    170k  260k   524287
------------------------------------------------  Initialization completes

2G Steady State Workload

Results for steady state measurements of the 2G workload in 3G of real storage were nearly identical whether the storage configuration was available at IPL or the storage configuration was dynamically created with SET STORAGE commands. Because they were nearly identical, no specific results are included here.

16G Transition Workload

The system was IPLed with 12G of storage and a workload started that required about 16G, so the workload starts with heavy paging and less than 100% processor utilization.

Three minutes into the run, 18G of storage was added via the SET STORAGE command. Enough of this new storage was immediately initialized to eliminate paging and to allow processor utilization to reach 100%. The remainder of the storage was not initialized until the workload ended and processor utilization dropped below 100%. The monitor counters then correctly reported the new storage (DPA, available list).

All of the aforementioned results and observations match expectations.

Here is an example (run ST630E04) containing excerpts from four separate Performance Toolkit screens showing values changing by the expected amount at the expected time. Specific data is extracted from these screens:

  • Performance Toolkit SYSSUMLG screen (FCX225)
  • Performance Toolkit PAGELOG screen (FCX143)
  • Performance Toolkit AVAILLOG screen (FCX254)
  • Performance Toolkit SXSUTIL screen (FCX264)
-------------------------------------------------
          FCX225    FCX143       FCX254    FCX264
-------------------------------------------------
                       DPA                    SXS
Interval    Pct    Pagable  <Available>     Total
End Time   Busy     Frames   <2GB  >2GB     Pages
------------------------------------------------- Start with 12G
12:35:04   54.9    3107978    780  1451    524287 Workload paging
12:35:34   73.8    3107980    299   139    524287 <100% cpu
12:36:03   81.6    3107981    425  2860    524287
12:36:34   79.6    3107985     97  1073    524287
12:37:03   79.0    3107986     29    28    524287
12:37:34   75.9    3107986    390   163    524287
12:38:03   78.7    3107990    191    50    524287
------------------------------------------------- Add 18G
12:38:33   93.2    6933542     66 2476k    524287 Immediate
12:39:03  100.1    6933542     73 2475k    524287 Initialization
12:39:33   99.9    6933542     76 2474k    524287 satisfies the
12:40:03   99.9    6933542     88 2474k    524287 workload need
12:40:33   99.9    6933542     96 2474k    524287
12:41:03   99.9    6933542     99 2474k    524287
12:41:33   99.9    6933542    100 2474k    524287
12:42:03   99.9    6933542    100 2474k    524287
12:42:33   99.9    6933542    100 2474k    524287
12:43:03   99.9    6933542    100 2474k    524287
12:43:33   99.9    6933542    101 2474k    524287
12:44:03   99.9    6933541    100 2474k    524287
12:44:33   99.9    6933541    109 2474k    524287
12:45:03   99.9    6933541    109 2474k    524287
12:45:33   99.9    6933541    109 2474k    524287
12:46:03   99.9    6933541    109 2474k    524287
------------------------------------------------- Workload End
12:46:33   17.0    7789646    109 3330k    524287 Init resumes
12:47:03     .0    7789646    109 3330k    524287
------------------------------------------------- Init completes

16G Steady State Workload

Results for steady state measurements of the 16G workload in 30G of real storage were nearly identical whether the storage configuration was available at IPL or the storage configuration was dynamically created with SET STORAGE commands. Because they were nearly identical, no specific results are included here.

z/VM 5.4 Guest Evaluation

The four separate z/VM 5.4 guest evaluations produced results consistent with the results described for z/VM 5.4 in an LPAR, so no specific results are included here.

Elapsed Time Needed to Process a SET STORAGE Command

Although no formal data was collected, the time to execute a SET STORAGE command is affected by the total amount of standby storage and by the percentage of that standby storage being added. The longest elapsed time generally occurs on the first SET STORAGE command issued when there is a large amount of standby storage and only a small percentage of it is added.

Summary and Conclusions

On system configurations with identical storage sizes, results are nearly identical whether the storage was all available at IPL or was achieved by a series of SET STORAGE commands.

When storage is added to a paging workload, the paging subsystem transitions matched expectations based on the workload characteristics and the updated storage configuration.

CP monitor data and Performance Toolkit for VM provided the expected information about the standby and reserved storage transitions.

The QUERY STORAGE and QUERY VIRTUAL STORAGE commands provided the expected information about standby and reserved storage.

Storage added by the SET STORAGE command will be initialized only when storage is needed to satisfy demand or the system enters a wait state.

A z/VM 5.4 guest of z/VM 5.4 reacted as expected to dynamic memory changes.

Contents | Previous | Next