Minidisk Cache with Large Real Storage
The minidisk cache facility comes with tuning parameters on the SET MDCACHE command that control the size of the real storage minidisk cache and the expanded storage minidisk cache by placing constraints on the real storage arbiter and the expanded storage arbiter. For either kind of storage, you can bias the arbiter in favor of or against the use of that storage for minidisk caching (rather than paging), set a minimum size, set a maximum size, or set a fixed size.
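For example, the following command forms illustrate these controls (the operand syntax shown here should be verified against the CP Command Reference for the VM level in use):

```
SET MDCACHE STORAGE BIAS 0.1
SET MDCACHE XSTORE BIAS 0.1
SET MDCACHE STORAGE 200M 200M
SET MDCACHE XSTORE 0M 400M
```

The first two commands bias the real storage and expanded storage arbiters against using storage for MDC; the third fixes the real MDC size by making the minimum and maximum equal; the fourth allows the expanded storage MDC to vary between 0M and 400M.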
It is not clear how well the MDC tuning rules of thumb that have worked in the past apply to configurations having more than 2G of real storage. Accordingly, we have done a series of measurements to explore this question, the results of which are presented and discussed in this section.
The approach taken was to focus on the 6G/2G and 10G/2G configurations presented in Real Storage Sizes above 2G. Of the real/expanded storage configurations measured for the 8G total storage case, the 6G/2G configuration resulted in the best performance. Likewise, for the 12G total storage case, the 10G/2G configuration performed best. For both of these storage configurations, we did a series of measurements using various MDC settings.
All measurements were obtained on the 2064-1C8 8-way configuration described on page , but with the 6G/2G and 10G/2G storage configurations. There were 10,800 CMS1 users, driven by internal TPNS, resulting in an average processor utilization of about 90%. Hardware instrumentation, CP monitor, and TPNS throughput data were collected for each measurement. RTM and TPNS response time data were not collected.
MDC Tuning Variations: 6G/2G Configuration
Measurements were obtained with default settings (no constraints on the MDC arbiters, bias=1), with bias 0.1, and with various fixed MDC sizes. The results are summarized in Table 1 and Table 2. Table 1 shows the absolute results, while Table 2 shows the results as ratios relative to E0104864 (third data column) -- the run that was used for the 8G total storage case in section Real Storage Sizes above 2G.
Table 1. MDC Tuning Variations: 6G/2G Configuration

| Run ID | E0104862 | E0104863 | E0104864 | E0104868 | E0104869 |
|---|---|---|---|---|---|
| MDC Real | default | bias 0.1 | 202M | 200M | 400M |
| MDC Xstor | default | bias 0.1 | 476M | 200M | 0M |
| Response Time: | | | | | |
| TRIV INT | | | | | |
| NONTRIV INT | | | | | |
| TOT INT | | | | | |
| Throughput: | | | | | |
| ETR (T) | | | | | |
| ITR (H) | | | | | |
| Proc. Usage: | | | | | |
| PBT/CMD (H) | | | | | |
| CP/CMD (H) | | | | | |
| EMUL/CMD (H) | | | | | |
| Processor Util.: | | | | | |
| TOTAL (H) | | | | | |
| UTIL/PROC (H) | | | | | |
| TOTAL EMUL (H) | | | | | |
| TOTAL EMUL | | | | | |
| TVR(H) | | | | | |
| TVR | | | | | |
| Paging: | | | | | |
| READS/SEC | | | | | |
| WRITES/SEC | | | | | |
| PAGE/CMD | | | | | |
| PAGE IO RATE | | | | | |
| PAGE IO/CMD | | | | | |
| XSTOR IN/SEC | | | | | |
| XSTOR OUT/SEC | | | | | |
| XSTOR/CMD | | | | | |
| I/O: | | | | | |
| RIO RATE | | | | | |
| RIO/CMD | | | | | |
| NONPAGE RIO/CMD | | | | | |
| DASD RESP TIME | | | | | |
| MDC REAL SIZE (MB) | | | | | |
| MDC XSTOR SIZE (MB) | | | | | |
| MDC TOTAL SIZE (MB) | | | | | |
| MDC HIT RATIO | | | | | |
| PRIVOPs: | | | | | |
| PRIVOP/CMD | | | | | |
| DIAG/CMD | | | | | |

Note: 2064-1C8, 8 processors, 10800 users, internal TPNS, 6G real storage, 2G expanded storage; T=TPNS, H=Hardware Monitor, Unmarked=VMPRF
Table 2. MDC Tuning Variations: 6G/2G Configuration - Ratios

| Run ID | E0104862 | E0104863 | E0104864 | E0104868 | E0104869 |
|---|---|---|---|---|---|
| MDC Real | default | bias 0.1 | 202M | 200M | 400M |
| MDC Xstor | default | bias 0.1 | 476M | 200M | 0M |
| Response Time: | | | | | |
| TRIV INT | | | | | |
| NONTRIV INT | | | | | |
| TOT INT | | | | | |
| Throughput: | | | | | |
| ETR (T) | | | | | |
| ITR (H) | | | | | |
| Proc. Usage: | | | | | |
| PBT/CMD (H) | | | | | |
| CP/CMD (H) | | | | | |
| EMUL/CMD (H) | | | | | |
| Processor Util.: | | | | | |
| TOTAL (H) | | | | | |
| UTIL/PROC (H) | | | | | |
| TOTAL EMUL (H) | | | | | |
| TOTAL EMUL | | | | | |
| TVR(H) | | | | | |
| TVR | | | | | |
| Paging: | | | | | |
| READS/SEC | | | | | |
| WRITES/SEC | | | | | |
| PAGE/CMD | | | | | |
| PAGE IO RATE | | | | | |
| PAGE IO/CMD | | | | | |
| XSTOR IN/SEC | | | | | |
| XSTOR OUT/SEC | | | | | |
| XSTOR/CMD | | | | | |
| I/O: | | | | | |
| RIO RATE | | | | | |
| RIO/CMD | | | | | |
| NONPAGE RIO/CMD | | | | | |
| DASD RESP TIME | | | | | |
| MDC REAL SIZE (MB) | | | | | |
| MDC XSTOR SIZE (MB) | | | | | |
| MDC TOTAL SIZE (MB) | | | | | |
| MDC HIT RATIO | | | | | |
| PRIVOPs: | | | | | |
| PRIVOP/CMD | | | | | |
| DIAG/CMD | | | | | |

Note: 2064-1C8, 8 processors, 10800 users, internal TPNS, 6G real storage, 2G expanded storage; T=TPNS, H=Hardware Monitor, Unmarked=VMPRF
The first measurement shows that default tuning produced very large minidisk cache sizes, resulting in poor performance.
One way to reduce these sizes is to bias against the use of storage for MDC. The second measurement shows that setting bias to 0.1 for both real storage MDC (real MDC) and expanded storage MDC (xstor MDC) produced much more suitable MDC sizes, resulting in greatly improved performance. Additional MDC tuning variations (see measurements 3 and 4, described below) resulted in only slightly better performance than using bias 0.1 for both real and expanded storage.
For the third measurement, we used fixed MDC sizes and reversed the real and xstor MDC sizes. That is, instead of the 476M real MDC and 202M of xstor MDC that resulted from the bias 0.1 settings, we ran with 202M of real MDC and 476M of xstor MDC. This third measurement (with 202M real MDC) showed somewhat better performance than the bias 0.1 measurement, suggesting that it may be better to place much of the MDC in expanded storage.
The fourth and fifth measurements were done with a total MDC size of 400M to see if a smaller size would be better. The fourth measurement (200M real MDC, 200M xstor MDC) showed performance that was essentially equivalent to the third measurement (202M real MDC, 476M xstor MDC). The MDC hit ratio dropped only slightly. The fifth measurement (400M real MDC, no xstor MDC) was slightly degraded. This finding is consistent with the conclusion drawn from comparing measurements 2 and 3 that it is beneficial to have some of the MDC reside in expanded storage.
MDC Tuning Variations: 10G/2G Configuration
Measurements were obtained with various fixed MDC sizes and with various bias settings. The results are summarized in Table 3 and Table 4. Table 3 shows the absolute results, while Table 4 shows the results as ratios relative to E01048A6 (first data column), the run that was used for the 12G total storage case in Real Storage Sizes above 2G.
Table 3. MDC Tuning Variations: 10G/2G Configuration

| Run ID | E01048A6 | E01048AD | E01048AE | E01048AC | E01048AF |
|---|---|---|---|---|---|
| MDC Real | 400M | 200M | 0M | bias 0.1 | bias 0.05 |
| MDC Xstor | 0M | 200M | 400M | bias 0.1 | bias 0.1 |
| Response Time: | | | | | |
| TRIV INT | | | | | |
| NONTRIV INT | | | | | |
| TOT INT | | | | | |
| Throughput: | | | | | |
| ETR (T) | | | | | |
| ITR (H) | | | | | |
| Proc. Usage: | | | | | |
| PBT/CMD (H) | | | | | |
| CP/CMD (H) | | | | | |
| EMUL/CMD (H) | | | | | |
| Processor Util.: | | | | | |
| TOTAL (H) | | | | | |
| UTIL/PROC (H) | | | | | |
| TOTAL EMUL (H) | | | | | |
| TOTAL EMUL | | | | | |
| TVR(H) | | | | | |
| TVR | | | | | |
| Paging: | | | | | |
| READS/SEC | | | | | |
| WRITES/SEC | | | | | |
| PAGE/CMD | | | | | |
| PAGE IO RATE | | | | | |
| PAGE IO/CMD | | | | | |
| XSTOR IN/SEC | | | | | |
| XSTOR OUT/SEC | | | | | |
| XSTOR/CMD | | | | | |
| I/O: | | | | | |
| RIO RATE | | | | | |
| RIO/CMD | | | | | |
| NONPAGE RIO/CMD | | | | | |
| DASD RESP TIME | | | | | |
| MDC REAL SIZE (MB) | | | | | |
| MDC XSTOR SIZE (MB) | | | | | |
| MDC TOTAL SIZE (MB) | | | | | |
| MDC HIT RATIO | | | | | |
| PRIVOPs: | | | | | |
| PRIVOP/CMD | | | | | |
| DIAG/CMD | | | | | |

Note: 2064-1C8, 8 processors, 10800 users, internal TPNS, 10G real storage, 2G expanded storage; T=TPNS, H=Hardware Monitor, Unmarked=VMPRF
Table 4. MDC Tuning Variations: 10G/2G Configuration - Ratios

| Run ID | E01048A6 | E01048AD | E01048AE | E01048AC | E01048AF |
|---|---|---|---|---|---|
| MDC Real | 400M | 200M | 0M | bias 0.1 | bias 0.05 |
| MDC Xstor | 0M | 200M | 400M | bias 0.1 | bias 0.1 |
| Response Time: | | | | | |
| TRIV INT | | | | | |
| NONTRIV INT | | | | | |
| TOT INT | | | | | |
| Throughput: | | | | | |
| ETR (T) | | | | | |
| ITR (H) | | | | | |
| Proc. Usage: | | | | | |
| PBT/CMD (H) | | | | | |
| CP/CMD (H) | | | | | |
| EMUL/CMD (H) | | | | | |
| Processor Util.: | | | | | |
| TOTAL (H) | | | | | |
| UTIL/PROC (H) | | | | | |
| TOTAL EMUL (H) | | | | | |
| TOTAL EMUL | | | | | |
| TVR(H) | | | | | |
| TVR | | | | | |
| Paging: | | | | | |
| WRITES/SEC | | | | | |
| PAGE/CMD | | | | | |
| XSTOR IN/SEC | | | | | |
| XSTOR OUT/SEC | | | | | |
| XSTOR/CMD | | | | | |
| I/O: | | | | | |
| RIO RATE | | | | | |
| RIO/CMD | | | | | |
| NONPAGE RIO/CMD | | | | | |
| DASD RESP TIME | | | | | |
| MDC REAL SIZE (MB) | | | | | |
| MDC TOTAL SIZE (MB) | | | | | |
| MDC HIT RATIO | | | | | |
| PRIVOPs: | | | | | |
| PRIVOP/CMD | | | | | |
| DIAG/CMD | | | | | |

Note: 2064-1C8, 8 processors, 10800 users, internal TPNS, 10G real storage, 2G expanded storage; T=TPNS, H=Hardware Monitor, Unmarked=VMPRF
The first 3 measurements were done with total MDC size held constant at 400M and the real/xstor MDC apportionments set to 400M/0M, 200M/200M, and 0M/400M respectively. The first 2 measurements performed about the same, while the 0M/400M measurement showed somewhat degraded performance. The lower performance of the 0M/400M measurement is consistent with what we have seen in the past for storage configurations when the real MDC size is set too small. Setting the real MDC size to 0 is not recommended and, in some environments, could result in worse performance than is shown here.
The fourth measurement was run with bias 0.1 for both real and xstor MDC. This resulted in good performance even though the real MDC size (925M) was larger than it needed to be. A fifth measurement was done with MDC real bias reduced to 0.05. This reduced the real MDC to 462M but overall performance was essentially equivalent to the fourth measurement.
This 10G/2G configuration appears to be less sensitive to how MDC is tuned than the 6G/2G configuration shown earlier. This makes sense because the larger memory size is better able to withstand tuning settings that apportion that memory less than optimally between MDC and demand paging. This also means that it is advisable to draw MDC tuning conclusions from the 6G/2G results.
MDC Tuning Recommendations
These results suggest the following general MDC tuning recommendations when running in large real memory sizes:
- It is important to override the defaults and constrain both the real MDC arbiter and the expanded storage MDC arbiter.
In the past, we have recommended constraining the expanded storage MDC arbiter in some way, and we have done most of our measurements with bias 0.1 for expanded storage MDC while staying with default tuning (no constraints) for real storage MDC. However, these results indicate that it is now important to constrain both MDC arbiters. The best way to do this (bias settings, minimum sizes, maximum sizes, or some combination of these) will vary with the nature of the system workload and configuration. These tuning actions are implemented with the SET MDCACHE CP command.
How much constraint is enough? Watch two things: the MDC hit ratio and the paging rate. For example, if an increased constraint does not appreciably reduce the MDC hit ratio but does substantially reduce paging, that is a good indication that the system has benefited from MDC being constrained and may benefit further from additional constraint.
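The rule of thumb above can be expressed as a simple before/after check. The function, metric names, and thresholds below are illustrative only and are not part of any VM tooling:

```python
def constraint_helped(before, after,
                      hit_drop_tol=0.02, paging_gain=0.10):
    """Heuristic from the text: an increased MDC constraint is judged
    beneficial if the MDC hit ratio fell only slightly (within
    hit_drop_tol) while the paging rate fell substantially (by at
    least the paging_gain fraction). Thresholds are illustrative."""
    hit_drop = before["mdc_hit_ratio"] - after["mdc_hit_ratio"]
    paging_cut = (before["page_rate"] - after["page_rate"]) / before["page_rate"]
    return hit_drop <= hit_drop_tol and paging_cut >= paging_gain

# Example: hit ratio barely moved while paging fell 30%, so the
# tighter constraint helped and further constraint may be worthwhile.
before = {"mdc_hit_ratio": 0.91, "page_rate": 1200.0}
after = {"mdc_hit_ratio": 0.90, "page_rate": 840.0}
print(constraint_helped(before, after))  # True
```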
- Use a combination of real storage MDC and expanded storage MDC.
The exact balance is probably not too important but the results suggest that it is good to avoid the extreme cases of either no xstor MDC or no real MDC. The results for this workload further suggest that it is better to have much of the minidisk cache, perhaps something like 80%, in expanded storage.
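For example, a 400M total minidisk cache apportioned roughly 80/20 toward expanded storage could be fixed as follows (sizes are illustrative; verify the operand syntax against the CP Command Reference):

```
SET MDCACHE STORAGE 80M 80M
SET MDCACHE XSTORE 320M 320M
```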