
Minidisk Cache with Large Real Storage

The minidisk cache (MDC) facility includes tuning parameters on the SET MDCACHE command that control the sizes of the real storage minidisk cache and the expanded storage minidisk cache by placing various kinds of constraints on the real storage arbiter and the expanded storage arbiter. For either kind of storage, you can bias the arbiter in favor of or against using that storage for minidisk caching (rather than paging), set a minimum size, set a maximum size, or set a fixed size.
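For example, each kind of constraint can be expressed on SET MDCACHE along the following lines (a sketch only; operand syntax is abbreviated here, so consult the CP command reference for the exact form):

    SET MDCACHE STORAGE BIAS 0.1       (bias the real storage arbiter against MDC)
    SET MDCACHE STORAGE 64M 512M       (minimum and maximum real MDC size)
    SET MDCACHE STORAGE 200M 200M      (minimum = maximum yields a fixed size)
    SET MDCACHE XSTORE BIAS 0.1        (the same controls apply to expanded storage)

A bias below 1 makes the arbiter favor paging over minidisk caching for that storage; the default bias of 1 applies no constraint.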

It is not clear how well the MDC tuning rules of thumb that have worked in the past apply to configurations having more than 2G of real storage. Accordingly, we have done a series of measurements to explore this question, the results of which are presented and discussed in this section.

The approach taken was to focus on the 6G/2G and 10G/2G configurations presented in Real Storage Sizes above 2G. Of the real/expanded storage configurations measured for the 8G total storage case, the 6G/2G configuration resulted in the best performance. Likewise, for the 12G total storage case, the 10G/2G configuration performed best. For both of these storage configurations, we did a series of measurements using various MDC settings.

All measurements were obtained on the 2064-1C8 8-way configuration described on page , but with the 6G/2G and 10G/2G storage configurations. There were 10,800 CMS1 users, driven by internal TPNS, resulting in an average processor utilization of about 90%. Hardware instrumentation, CP monitor, and TPNS throughput data were collected for each measurement. RTM and TPNS response time data were not collected.

MDC Tuning Variations: 6G/2G Configuration

Measurements were obtained with default settings (no constraints on the MDC arbiters, bias=1), with bias 0.1, and with various fixed MDC sizes. The results are summarized in Table 1 and Table 2. Table 1 shows the absolute results, while Table 2 shows the results as ratios relative to E0104864 (third data column), the run that was used for the 8G total storage case in section Real Storage Sizes above 2G.


Table 1. MDC Tuning Variations: 6G/2G Configuration

MDC Real              default    bias 0.1   202M       200M       400M
MDC Xstor             default    bias 0.1   476M       200M       0M
Run ID                E0104862   E0104863   E0104864   E0104868   E0104869

Response Time
  TRIV INT            0.99       0.05       0.04       0.04       0.05
  NONTRIV INT         3.38       0.48       0.48       0.49       0.52
  TOT INT             1.67       0.11       0.11       0.11       0.12

Throughput
  ETR (T)             506.35     1091.04    1091.84    1090.10    1089.95
  ITR (H)             967.79     1214.25    1235.03    1238.83    1212.05

Proc. Usage
  PBT/CMD (H)         8.266      6.588      6.478      6.458      6.600
  CP/CMD (H)          3.418      1.773      1.656      1.655      1.799
  EMUL/CMD (H)        4.849      4.816      4.821      4.803      4.802

Processor Util.
  TOTAL (H)           418.56     718.82     707.25     703.95     719.41
  UTIL/PROC (H)       52.32      89.85      88.41      87.99      89.93
  TOTAL EMUL (H)      245.51     525.39     526.42     523.54     523.38
  TOTAL EMUL          247.20     538.40     539.20     536.80     535.20
  TVR(H)              1.70       1.37       1.34       1.34       1.37
  TVR                 1.37       1.28       1.26       1.26       1.29

Paging
  READS/SEC           1256       48         33         19         27
  WRITES/SEC          1210       135        125        111        132
  PAGE/CMD            4.87       0.17       0.14       0.12       0.15
  PAGE IO RATE        336.30     7.90       5.60       7.20       11.80
  PAGE IO/CMD         0.66       0.01       0.01       0.01       0.01
  XSTOR IN/SEC        542        699        526        490        552
  XSTOR OUT/SEC       737        861        661        605        692
  XSTOR/CMD           2.53       1.43       1.09       1.00       1.14

I/O
  RIO RATE            2515       4695       4622       4690       4814
  RIO/CMD             4.97       4.30       4.23       4.30       4.42
  NONPAGE RIO/CMD     4.30       4.30       4.23       4.30       4.41
  DASD RESP TIME      14.1       6.1        5.9        6.0        6.6
  MDC REAL SIZE (MB)  2495       476        200        198        398
  MDC XSTOR SIZE (MB) 1150       202        476        200        0
  MDC TOTAL SIZE (MB) 3645       678        676        398        398
  MDC HIT RATIO       95.5       95.7       96.3       95.7       94.5

PRIVOPs
  PRIVOP/CMD          57.12      59.15      59.51      59.27      58.81
  DIAG/CMD            75.18      77.85      77.85      77.75      77.55

Note: 2064-1C8, 8 processors, 10800 users, internal TPNS, 6G real storage, 2G expanded storage; T=TPNS, H=Hardware Monitor, Unmarked=VMPRF


Table 2. MDC Tuning Variations: 6G/2G Configuration - Ratios

MDC Real              default    bias 0.1   202M       200M       400M
MDC Xstor             default    bias 0.1   476M       200M       0M
Run ID                E0104862   E0104863   E0104864   E0104868   E0104869

Response Time
  TRIV INT            23.116     1.047      1.000      1.000      1.070
  NONTRIV INT         7.122      1.015      1.000      1.025      1.103
  TOT INT             15.549     1.025      1.000      1.016      1.093

Throughput
  ETR (T)             0.464      0.999      1.000      0.998      0.998
  ITR (H)             0.784      0.983      1.000      1.003      0.981

Proc. Usage
  PBT/CMD (H)         1.276      1.017      1.000      0.997      1.019
  CP/CMD (H)          2.063      1.070      1.000      0.999      1.086
  EMUL/CMD (H)        1.006      0.999      1.000      0.996      0.996

Processor Util.
  TOTAL (H)           0.592      1.016      1.000      0.995      1.017
  UTIL/PROC (H)       0.592      1.016      1.000      0.995      1.017
  TOTAL EMUL (H)      0.466      0.998      1.000      0.995      0.994
  TOTAL EMUL          0.458      0.999      1.000      0.996      0.993
  TVR(H)              1.269      1.018      1.000      1.001      1.023
  TVR                 1.087      1.016      1.000      1.000      1.024

Paging
  READS/SEC           38.061     1.455      1.000      0.576      0.818
  WRITES/SEC          9.680      1.080      1.000      0.888      1.056
  PAGE/CMD            33.655     1.159      1.000      0.824      1.008
  PAGE IO RATE        60.054     1.411      1.000      1.286      2.107
  PAGE IO/CMD         129.494    1.412      1.000      1.288      2.111
  XSTOR IN/SEC        1.030      1.329      1.000      0.932      1.049
  XSTOR OUT/SEC       1.115      1.303      1.000      0.915      1.047
  XSTOR/CMD           2.323      1.315      1.000      0.924      1.050

I/O
  RIO RATE            0.544      1.016      1.000      1.015      1.042
  RIO/CMD             1.173      1.017      1.000      1.016      1.043
  NONPAGE RIO/CMD     1.018      1.016      1.000      1.016      1.042
  DASD RESP TIME      2.390      1.034      1.000      1.017      1.119
  MDC REAL SIZE (MB)  12.475     2.382      1.000      0.990      1.989
  MDC XSTOR SIZE (MB) 2.417      0.424      1.000      0.420      0.000
  MDC TOTAL SIZE (MB) 5.392      1.003      1.000      0.589      0.589
  MDC HIT RATIO       0.992      0.994      1.000      0.994      0.981

PRIVOPs
  PRIVOP/CMD          0.960      0.994      1.000      0.996      0.988
  DIAG/CMD            0.966      1.000      1.000      0.999      0.996

Note: 2064-1C8, 8 processors, 10800 users, internal TPNS, 6G real storage, 2G expanded storage; T=TPNS, H=Hardware Monitor, Unmarked=VMPRF

The first measurement shows that, with default tuning, the arbiters produced very large minidisk cache sizes (3645M in total), leaving too little storage for paging and resulting in poor performance.

One way to reduce these sizes is to bias against the use of storage for MDC. The second measurement shows that setting bias to 0.1 for both real storage MDC (real MDC) and expanded storage MDC (xstor MDC) produced much more suitable MDC sizes, resulting in greatly improved performance. Additional MDC tuning variations (see measurements 3 and 4, described below) resulted in only slightly better performance than using bias 0.1 for both real and expanded storage.

For the third measurement, we used fixed MDC sizes and reversed the real and xstor MDC sizes. That is, instead of the 476M real MDC and 202M of xstor MDC that resulted from the bias 0.1 settings, we ran with 202M of real MDC and 476M of xstor MDC. The third measurement (with 202M real MDC) showed somewhat better performance, suggesting that it may be better to place much of the MDC in expanded storage.

The fourth and fifth measurements were done with a total MDC size of 400M to see if a smaller size would be better. The fourth measurement (200M real MDC, 200M xstor MDC) showed performance that was essentially equivalent to the third measurement (202M real MDC, 476M xstor MDC). The MDC hit ratio dropped only slightly. The fifth measurement (400M real MDC, no xstor MDC) was slightly degraded. This finding is consistent with the conclusion drawn from comparing measurements 2 and 3 that it is beneficial to have some of the MDC reside in expanded storage.
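The fixed sizes used in these measurements are obtained by setting the minimum and maximum cache sizes equal. For example, the 200M/200M case of the fourth measurement corresponds to settings along these lines (a sketch; check the CP command reference for exact operand syntax):

    SET MDCACHE STORAGE 200M 200M
    SET MDCACHE XSTORE 200M 200M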

MDC Tuning Variations: 10G/2G Configuration

Measurements were obtained with various fixed MDC sizes and with various bias settings. The results are summarized in Table 3 and Table 4. Table 3 shows the absolute results, while Table 4 shows the results as ratios relative to E01048A6 (first data column), the run that was used for the 12G total storage case in Real Storage Sizes above 2G.


Table 3. MDC Tuning Variations: 10G/2G Configuration

MDC Real              400M       200M       0M         bias 0.1   bias 0.05
MDC Xstor             0M         200M       400M       bias 0.1   bias 0.1
Run ID                E01048A6   E01048AD   E01048AE   E01048AC   E01048AF

Response Time
  TRIV INT            0.04       0.04       0.04       0.04       0.08
  NONTRIV INT         0.46       0.47       0.49       0.46       0.72
  TOT INT             0.10       0.11       0.11       0.10       0.17

Throughput
  ETR (T)             1093.19    1092.65    1091.77    1093.09    1093.12
  ITR (H)             1264.59    1260.90    1239.19    1261.24    1267.19

Proc. Usage
  PBT/CMD (H)         6.326      6.345      6.456      6.343      6.313
  CP/CMD (H)          1.521      1.540      1.643      1.541      1.551
  EMUL/CMD (H)        4.805      4.805      4.813      4.802      4.762

Processor Util.
  TOTAL (H)           691.57     693.25     704.82     693.34     690.10
  UTIL/PROC (H)       86.45      86.66      88.10      86.67      86.26
  TOTAL EMUL (H)      525.26     524.99     525.44     524.86     520.52
  TOTAL EMUL          538.40     538.40     537.60     537.60     532.80
  TVR(H)              1.32       1.32       1.34       1.32       1.33
  TVR                 1.24       1.25       1.26       1.24       1.24

Paging
  READS/SEC           0          0          0          0          0
  WRITES/SEC          80         79         80         80         80
  PAGE/CMD            0.07       0.07       0.07       0.07       0.07
  PAGE IO RATE        0.00       0.00       0.00       0.00       0.00
  PAGE IO/CMD         0.00       0.00       0.00       0.00       0.00
  XSTOR IN/SEC        29         103        238        198        203
  XSTOR OUT/SEC       34         108        262        209        216
  XSTOR/CMD           0.06       0.19       0.46       0.37       0.38

I/O
  RIO RATE            4595       4607       4746       4603       4559
  RIO/CMD             4.20       4.22       4.35       4.21       4.17
  NONPAGE RIO/CMD     4.20       4.22       4.35       4.21       4.17
  DASD RESP TIME      5.8        5.8        5.9        5.6        5.9
  MDC REAL SIZE (MB)  398        197        0          925        462
  MDC XSTOR SIZE (MB) 0          200        400        205        205
  MDC TOTAL SIZE (MB) 398        397        400        1130       667
  MDC HIT RATIO       96.5       96.4       95.3       96.6       96.7

PRIVOPs
  PRIVOP/CMD          59.63      59.71      59.31      59.64      58.67
  DIAG/CMD            77.79      77.84      77.81      77.87      77.36

Note: 2064-1C8, 8 processors, 10800 users, internal TPNS, 10G real storage, 2G expanded storage; T=TPNS, H=Hardware Monitor, Unmarked=VMPRF


Table 4. MDC Tuning Variations: 10G/2G Configuration - Ratios

MDC Real              400M       200M       0M         bias 0.1   bias 0.05
MDC Xstor             0M         200M       400M       bias 0.1   bias 0.1
Run ID                E01048A6   E01048AD   E01048AE   E01048AC   E01048AF

Response Time
  TRIV INT            1.000      1.024      1.049      1.000      1.829
  NONTRIV INT         1.000      1.017      1.074      0.998      1.574
  TOT INT             1.000      1.020      1.066      0.999      1.682

Throughput
  ETR (T)             1.000      1.000      0.999      1.000      1.000
  ITR (H)             1.000      0.997      0.980      0.997      1.002

Proc. Usage
  PBT/CMD (H)         1.000      1.003      1.020      1.003      0.998
  CP/CMD (H)          1.000      1.012      1.080      1.013      1.020
  EMUL/CMD (H)        1.000      1.000      1.002      0.999      0.991

Processor Util.
  TOTAL (H)           1.000      1.002      1.019      1.003      0.998
  UTIL/PROC (H)       1.000      1.002      1.019      1.003      0.998
  TOTAL EMUL (H)      1.000      0.999      1.000      0.999      0.991
  TOTAL EMUL          1.000      1.000      0.999      0.999      0.990
  TVR(H)              1.000      1.003      1.019      1.003      1.007
  TVR                 1.000      1.008      1.016      1.000      1.000

Paging
  WRITES/SEC          1.000      0.988      1.000      1.000      1.000
  PAGE/CMD            1.000      0.988      1.001      1.000      1.000
  XSTOR IN/SEC        1.000      3.552      8.207      6.828      7.000
  XSTOR OUT/SEC       1.000      3.176      7.706      6.147      6.353
  XSTOR/CMD           1.000      3.351      7.947      6.461      6.651

I/O
  RIO RATE            1.000      1.003      1.033      1.002      0.992
  RIO/CMD             1.000      1.003      1.034      1.002      0.992
  NONPAGE RIO/CMD     1.000      1.003      1.034      1.002      0.992
  DASD RESP TIME      1.000      1.000      1.017      0.966      1.017
  MDC REAL SIZE (MB)  1.000      0.496      0.000      2.325      1.160
  MDC TOTAL SIZE (MB) 1.000      0.997      1.005      2.839      1.676
  MDC HIT RATIO       1.000      0.999      0.988      1.001      1.002

PRIVOPs
  PRIVOP/CMD          1.000      1.001      0.995      1.000      0.984
  DIAG/CMD            1.000      1.001      1.000      1.001      0.994

Note: 2064-1C8, 8 processors, 10800 users, internal TPNS, 10G real storage, 2G expanded storage; T=TPNS, H=Hardware Monitor, Unmarked=VMPRF

The first three measurements were done with the total MDC size held constant at 400M and the real/xstor MDC apportionment set to 400M/0M, 200M/200M, and 0M/400M respectively. The first two measurements performed about the same, while the 0M/400M measurement showed somewhat degraded performance. The lower performance of the 0M/400M measurement is consistent with what we have seen in the past for storage configurations where the real MDC size is set too small. Setting the real MDC size to 0 is not recommended and, in some environments, could result in worse performance than is shown here.

The fourth measurement was run with bias 0.1 for both real and xstor MDC. This resulted in good performance even though the real MDC size (925M) was larger than it needed to be. A fifth measurement was done with MDC real bias reduced to 0.05. This reduced the real MDC to 462M but overall performance was essentially equivalent to the fourth measurement.

This 10G/2G configuration appears to be less sensitive to how MDC is tuned than the 6G/2G configuration shown earlier. This makes sense because the larger memory size is better able to withstand tuning settings that apportion that memory less than optimally between MDC and demand paging. It also means that it is advisable to draw MDC tuning conclusions from the 6G/2G results, where sensitivity to the settings is greater.

MDC Tuning Recommendations

These results suggest the following general MDC tuning recommendations for systems with large real storage sizes:

  1. It is important to override the defaults and constrain both the real MDC arbiter and the expanded storage MDC arbiter.

    In the past, we have recommended constraining the MDC expanded arbiter in some way and we have done most of our measurements with bias 0.1 for expanded storage MDC, while staying with default tuning (no constraints) for real storage MDC. However, these results indicate that it is now important to constrain both MDC arbiters. The best way to do this (bias settings, minimum sizes, maximum sizes, or some combination of these) will vary depending on the nature of the system workload and configuration. These tuning actions are implemented using the SET MDCACHE CP command.

    How much constraint is enough? Watch two things: the MDC hit ratio and the paging rate. For example, if an increased constraint reduces paging substantially without appreciably reducing the MDC hit ratio, that is a good indication that the system has benefited from constraining MDC and may benefit further from additional constraint.
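    The resulting MDC sizes can be checked directly from the console, while the MDC hit ratio and paging detail are available from monitor-based reports (VMPRF or a similar performance product). As a sketch (output formats vary by release):

        QUERY MDCACHE        (current real and expanded storage MDC sizes)
        INDICATE LOAD        (includes the current system paging rate)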

  2. Use a combination of real storage MDC and expanded storage MDC.

    The exact balance is probably not too important but the results suggest that it is good to avoid the extreme cases of either no xstor MDC or no real MDC. The results for this workload further suggest that it is better to have much of the minidisk cache, perhaps something like 80%, in expanded storage.
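    As an illustration only (the sizes here are hypothetical, not a recommendation for any particular system), an 80/20 expanded/real split of a 500M total cache could be fixed with settings along these lines:

        SET MDCACHE XSTORE 400M 400M
        SET MDCACHE STORAGE 100M 100M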
