
The 2G Line

One of the restrictions of CP's 64-bit support is that most data referenced by CP must reside below the 2G line. 1 In addition to CP's own code and control blocks, this includes most data that CP needs to reference in the virtual machine address spaces, such as I/O buffers, IUCV parameter lists, and the like. When CP needs to reference a page that resides in a real storage frame above the 2G line, CP dynamically relocates that page to a frame below the 2G line.
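
As a rough conceptual sketch of this relocation step (not CP's actual implementation; the helper names and the explicit 2G test are illustrative assumptions):

    # Conceptual sketch only: before CP touches guest data, the backing frame
    # must be below the 2G line (2**31 bytes). Helper names are illustrative.
    TWO_G = 2 ** 31          # the 2G boundary, in bytes
    FRAME_SIZE = 4096        # 4KB page frames

    def ensure_below_2g(frame_addr, allocate_low_frame, copy_frame):
        """Return a frame address below 2G that holds the page contents."""
        if frame_addr < TWO_G:
            return frame_addr                 # already addressable by CP as-is
        low = allocate_low_frame()            # claim a frame below the 2G line
        copy_frame(src=frame_addr, dst=low)   # move the page contents down
        return low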

This process normally has little effect on performance: the pages that need to be below the line are quickly relocated there and tend to remain there. However, if demand for frames below the line is high enough, pages that were moved below the line must later be paged out to make room for other pages that require below-the-line frames, and are then paged back in, often above the 2G line, when they are next referenced. This repeated movement of pages can degrade performance. The most likely scenario for this problem is when a large percentage of the frames below the 2G line is taken up by a large V=R area.

Measurements were obtained in environments with progressively fewer frames available below the 2G line in order to better understand CP performance as this thrashing situation is approached and to provide some guidance on how many below-the-line frames tend to be required per CMS user.

The measured system was a 2064-109 LPAR configured with 2 dedicated processors, 3G of real storage, and 1G of expanded storage. See page for I/O subsystem and virtual machine configuration details. The amount of the 3G of real storage actually used by CP was controlled by means of the STORE=nnnnM IPL parameter. Through appropriately chosen STORE sizes and V=R area sizes, five measurement configurations were created in which the total amount of available real storage was held constant at 1G while the amount of available real storage residing below the 2G line was 1G, 0.5G, 0.25G, 0.2G, and 0.1G, respectively.
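
As an illustration of the arithmetic involved (the specific STORE= and V=R sizes shown are hypothetical, chosen only to satisfy the constraints described above, and CP's own storage usage is ignored):

    # Illustrative arithmetic only; the report does not list the actual STORE=
    # and V=R sizes used. Assumes the V=R area occupies contiguous real storage
    # below the 2G line.
    G = 1024  # MB per GB

    def available_storage(store_mb, vr_mb):
        """Total and below-2G available storage (MB) for given STORE= and V=R sizes."""
        total = store_mb - vr_mb
        below_2g = min(store_mb, 2 * G) - vr_mb
        return total, below_2g

    # Example: holding total available storage at 1G while shrinking the
    # below-2G portion to 0.25G (hypothetical sizes chosen to fit the formula):
    print(available_storage(store_mb=2816, vr_mb=1792))   # -> (1024, 256)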

Each measurement was made with 3420 CMS1 users. The real storage minidisk cache size and the expanded storage minidisk cache size were each fixed at 100M to eliminate minidisk cache size as a variable affecting the results. Measurements were successfully completed for the first four configurations. The 0.1G configuration was too small: the system was not able to log on all of the users (it stayed up but entered a "soft hang" state due to severely degraded performance). The results are summarized in the following two tables. Table 1 shows the absolute results, while Table 2 shows the results relative to the 1G below-the-line base case.


Table 1. 2G Line Constraint Experiment

Available Storage < 2G    1G         0.5G       0.25G      0.2G
Run ID                    72CC3422   72CC3423   72CC3424   72CC3425

Response Time
TRIV INT                  0.07       0.07       0.07       0.08
NONTRIV INT               0.65       0.65       0.66       0.67
TOT INT                   0.18       0.18       0.19       0.19
TOT INT ADJ               0.14       0.14       0.15       0.15
AVG FIRST (T)             0.11       0.11       0.13       0.14
AVG LAST (T)              0.18       0.18       0.20       0.21

Throughput
AVG THINK (T)             9.11       9.11       9.12       9.12
ETR                       268.10     267.60     269.73     271.52
ETR (T)                   341.08     341.03     340.26     340.11
ETR RATIO                 0.786      0.785      0.793      0.798
ITR (H)                   377.15     378.93     372.26     368.38
ITR                       148.43     148.88     147.73     147.28
EMUL ITR                  186.37     187.19     187.68     188.08
ITRR (H)                  1.000      1.005      0.987      0.977
ITRR                      1.000      1.003      0.995      0.992

Proc. Usage
PBT/CMD (H)               5.303      5.278      5.373      5.429
PBT/CMD                   5.307      5.278      5.378      5.439
CP/CMD (H)                1.212      1.212      1.276      1.314
CP/CMD                    1.085      1.085      1.146      1.205
EMUL/CMD (H)              4.091      4.066      4.097      4.115
EMUL/CMD                  4.222      4.193      4.232      4.234

Processor Util.
TOTAL (H)                 180.87     179.99     182.81     184.65
TOTAL                     181.00     180.00     183.00     185.00
UTIL/PROC (H)             90.44      90.00      91.41      92.33
UTIL/PROC                 90.50      90.00      91.50      92.50
TOTAL EMUL (H)            139.54     138.67     139.39     139.97
TOTAL EMUL                144.00     143.00     144.00     144.00
MASTER TOTAL              92.00      91.00      93.00      94.00
MASTER EMUL               66.00      66.00      66.00      66.00
TVR(H)                    1.30       1.30       1.31       1.32
TVR                       1.26       1.26       1.27       1.28

Storage
NUCLEUS SIZE (V)          2644KB     2644KB     2644KB     2644KB
TRACE TABLE (V)           350KB      350KB      350KB      350KB
WKSET (V)                 71         77         74         80
PGBLPGS                   234K       234K       234K       234K
PGBLPGS/USER              68.4       68.4       68.4       68.4
TOT PAGES/USER (V)        202        201        204        204
FREEPGS                   11226      11226      11198      11104
FREE UTIL                 0.98       0.96       0.98       0.97
SHRPGS                    1174       1201       1186       1196
2GPAGES/USER              76.7       38.3       19.2       15.3
2GMOVES/SEC               0          659        786        764

Paging
READS/SEC                 490        488        507        502
WRITES/SEC                157        166        187        201
PAGE/CMD                  1.90       1.92       2.04       2.07
PAGE IO RATE (V)          43.10      49.10      52.50      54.20
PAGE IO/CMD (V)           0.13       0.14       0.15       0.16
XSTOR IN/SEC              1751       1364       1526       1214
XSTOR OUT/SEC             1995       1612       1839       1555
XSTOR/CMD                 10.98      8.73       9.89       8.14
FAST CLR/CMD              20.73      20.28      20.32      20.39

Queues
DISPATCH LIST             91.0       99.8       96.8       96.1
ELIGIBLE LIST             0.0        0.0        0.0        0.0

I/O
VIO RATE                  6449       6455       6429       6406
VIO/CMD                   18.91      18.93      18.89      18.84
RIO RATE (V)              1922       1931       1914       1898
RIO/CMD (V)               5.63       5.66       5.63       5.58
NONPAGE RIO/CMD (V)       5.51       5.52       5.47       5.42
DASD RESP TIME (V)        10.6       10.8       10.8       10.9
MDC REAL SIZE (MB)        100        95         99         95
MDC XSTOR SIZE (MB)       100        100        100        100
MDC READS (I/Os)          2990       2991       2987       2981
MDC WRITES (I/Os)         592        590        593        590
MDC AVOID                 2885       2885       2881       2876
MDC HIT RATIO             96.5       96.4       96.5       96.5

PRIVOPs
PRIVOP/CMD                1.67       1.67       1.67       1.67
DIAG/CMD                  78.97      79.17      78.84      78.81
DIAG 04/CMD               0.786      0.786      0.787      0.788
DIAG 08/CMD               1.298      1.300      1.300      1.298
DIAG 0C/CMD               0.131      0.132      0.132      0.131
DIAG 14/CMD               0.069      0.069      0.069      0.069
DIAG 58/CMD               0.983      0.984      0.984      0.984
DIAG 98/CMD               1.289      1.298      1.248      1.212
DIAG A4/CMD               11.246     11.242     11.251     11.246
DIAG A8/CMD               3.835      3.850      3.843      3.846
DIAG 214/CMD              40.955     41.101     40.868     40.909
DIAG 270/CMD              1.287      1.287      1.288      1.287
SIE/CMD                   82.091     82.105     82.289     79.386
SIE INTCPT/CMD            50.896     51.726     51.842     50.013
FREE TOTL/CMD             58.636     55.714     52.900     52.924

TCPIP Machine
WKSET (V)                 7236       7549       7637       7515
TOT CPU/CMD (V)           0.3140     0.3110     0.3130     0.3170
CP CPU/CMD (V)            0.1330     0.1330     0.1330     0.1360
VIRT CPU/CMD (V)          0.1810     0.1780     0.1800     0.1810
DIAG 98/CMD (V)           1.289      1.300      1.248      1.212

Note: 2064-109; LPAR; 2 dedicated processors; LPAR storage: 3G central, 1G expanded; available real storage: 1G; 3420 CMS1 users; External TPNS; T=TPNS, V=VMPRF, H=Hardware Monitor, Unmarked=RTM


Table 2. 2G Line Constraint Experiment - Ratios

Available Storage < 2G    1G         0.5G       0.25G      0.2G
Run ID                    72CC3422   72CC3423   72CC3424   72CC3425

Response Time
TRIV INT                  1.000      0.971      1.043      1.086
NONTRIV INT               1.000      0.995      1.011      1.018
TOT INT                   1.000      0.989      1.034      1.045
TOT INT ADJ               1.000      0.987      1.042      1.061
AVG FIRST (T)             1.000      0.982      1.136      1.215
AVG LAST (T)              1.000      0.987      1.094      1.149

Throughput
AVG THINK (T)             1.000      1.000      1.001      1.001
ETR                       1.000      0.998      1.006      1.013
ETR (T)                   1.000      1.000      0.998      0.997
ETR RATIO                 1.000      0.998      1.009      1.016
ITR (H)                   1.000      1.005      0.987      0.977
ITR                       1.000      1.003      0.995      0.992
EMUL ITR                  1.000      1.004      1.007      1.009

Proc. Usage
PBT/CMD (H)               1.000      0.995      1.013      1.024
PBT/CMD                   1.000      0.995      1.013      1.025
CP/CMD (H)                1.000      1.000      1.053      1.084
CP/CMD                    1.000      1.000      1.057      1.111
EMUL/CMD (H)              1.000      0.994      1.001      1.006
EMUL/CMD                  1.000      0.993      1.002      1.003

Processor Util.
TOTAL (H)                 1.000      0.995      1.011      1.021
TOTAL                     1.000      0.994      1.011      1.022
UTIL/PROC (H)             1.000      0.995      1.011      1.021
UTIL/PROC                 1.000      0.994      1.011      1.022
TOTAL EMUL (H)            1.000      0.994      0.999      1.003
TOTAL EMUL                1.000      0.993      1.000      1.000
MASTER TOTAL              1.000      0.989      1.011      1.022
MASTER EMUL               1.000      1.000      1.000      1.000
TVR(H)                    1.000      1.001      1.012      1.018
TVR                       1.000      1.001      1.011      1.022

Storage
WKSET (V)                 1.000      1.085      1.042      1.127
PGBLPGS/USER              1.000      1.000      1.000      1.000
TOT PAGES/USER (V)        1.000      0.995      1.010      1.010
FREEPGS                   1.000      1.000      0.998      0.989
FREE UTIL                 1.000      0.977      1.003      0.987
SHRPGS                    1.000      1.023      1.010      1.019
2GPAGES/USER              1.000      0.500      0.250      0.200

Paging
READS/SEC                 1.000      0.996      1.035      1.024
WRITES/SEC                1.000      1.057      1.191      1.280
PAGE/CMD                  1.000      1.011      1.075      1.090
PAGE IO RATE (V)          1.000      1.139      1.218      1.258
PAGE IO/CMD (V)           1.000      1.139      1.221      1.261
XSTOR IN/SEC              1.000      0.779      0.872      0.693
XSTOR OUT/SEC             1.000      0.808      0.922      0.779
XSTOR/CMD                 1.000      0.795      0.900      0.741
FAST CLR/CMD              1.000      0.979      0.981      0.984

Queues
DISPATCH LIST             1.000      1.096      1.063      1.056

I/O
VIO RATE                  1.000      1.001      0.997      0.993
VIO/CMD                   1.000      1.001      0.999      0.996
RIO RATE (V)              1.000      1.005      0.996      0.988
RIO/CMD (V)               1.000      1.005      0.998      0.990
NONPAGE RIO/CMD (V)       1.000      1.002      0.993      0.984
DASD RESP TIME (V)        1.000      1.019      1.019      1.028
MDC REAL SIZE (MB)        1.000      0.956      0.994      0.953
MDC XSTOR SIZE (MB)       1.000      1.003      1.002      1.004
MDC READS (I/Os)          1.000      1.000      0.999      0.997
MDC WRITES (I/Os)         1.000      0.997      1.002      0.997
MDC AVOID                 1.000      1.000      0.999      0.997
MDC HIT RATIO             1.000      1.000      1.000      1.000

PRIVOPs
PRIVOP/CMD                1.000      1.000      1.001      1.000
DIAG/CMD                  1.000      1.002      0.998      0.998
DIAG 04/CMD               1.000      1.000      1.002      1.003
DIAG 08/CMD               1.000      1.001      1.001      1.000
DIAG 0C/CMD               1.000      1.001      1.002      1.001
DIAG 14/CMD               1.000      1.001      1.002      1.000
DIAG 58/CMD               1.000      1.000      1.001      1.000
DIAG 98/CMD               1.000      1.007      0.968      0.941
DIAG A4/CMD               1.000      1.000      1.000      1.000
DIAG A8/CMD               1.000      1.004      1.002      1.003
DIAG 214/CMD              1.000      1.004      0.998      0.999
DIAG 270/CMD              1.000      1.000      1.001      1.000
SIE/CMD                   1.000      1.000      1.002      0.967
SIE INTCPT/CMD            1.000      1.016      1.019      0.983
FREE TOTL/CMD             1.000      0.950      0.902      0.903

TCPIP Machine
WKSET (V)                 1.000      1.043      1.055      1.039
TOT CPU/CMD (V)           1.000      0.990      0.997      1.010
CP CPU/CMD (V)            1.000      1.000      1.000      1.023
VIRT CPU/CMD (V)          1.000      0.983      0.994      1.000
DIAG 98/CMD (V)           1.000      1.009      0.968      0.940

Note: 2064-109; LPAR; 2 dedicated processors; LPAR storage: 3G central, 1G expanded; available real storage: 1G; 3420 CMS1 users; External TPNS; T=TPNS, V=VMPRF, H=Hardware Monitor, Unmarked=RTM

The results show increasing CP overhead (CP/CMD (H)) as the amount of storage below the 2G line is decreased from 1G to 0.2G, but the effect is relatively small. Relative to the 1G base case, CP/CMD (H) increased by 8.4% in the most constrained environment, resulting in a 2.4% decrease in internal throughput (ITR (H)). This workload could not run with only 0.1G below the 2G line, so these results indicate that the adverse performance effects remain small until the amount of available storage below 2G gets close to the system thrashing point.

A count of pages moved below the line has been added to the CP monitor data in z/VM 3.1.0. This is field SYTRSP_PLSMVB2G, located in domain 0 record 4. This count, expressed as pages moved per second, is shown in the Storage section of Table 1 as 2GMOVES/SEC. It is interesting to note that 2GMOVES/SEC is 0 with 1G available below the 2G line, increases to 659 per second with 0.5G, but does not increase substantially after that. This is analogous to the curve of paging as a function of decreasing storage size. With enough storage, everything fits into memory and there is no paging. This is followed by a transition in which an ever larger percentage of each user's working set has to be paged back in when that user becomes active again after think time, and then by a plateau in which all of a user's pages have been paged out by the time that user becomes active again and must be paged back in. When available storage becomes sufficiently small, the paging rate rises very steeply as the system starts thrashing the pages required by the active users. The existence of this plateau, where 2GMOVES/SEC is not very sensitive to decreasing frames below the 2G line, limits your ability to use this value to predict how close the system is operating to the thrashing point. On the other hand, if the page move rate is near zero, you know that the system is nowhere close to the thrashing point.
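
As a sketch of how such a count is typically turned into the per-second rate reported here (assuming the field is sampled as a cumulative count at monitor interval boundaries; the function and its inputs are illustrative):

    # Minimal sketch of deriving the 2GMOVES/SEC value from the monitor count,
    # assuming SYTRSP_PLSMVB2G is a cumulative counter sampled at two points
    # in time (the sampling details here are an assumption).
    def moves_per_second(count_prev, count_curr, seconds_between_samples):
        """Pages moved below the 2G line per second over the sample interval."""
        return (count_curr - count_prev) / seconds_between_samples

    # Example: 39,540 moves accumulated over a hypothetical 60-second interval
    print(moves_per_second(0, 39_540, 60))   # -> 659.0  (cf. the 0.5G column)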

Another value, 2GPAGES/USER, has also been added to the Storage section of the results tables. It is calculated as the total number of available page frames below the 2G line divided by the number of CMS1 users (3420). Using this number, we can see that, for this workload, somewhere between 38 and 77 frames per user are needed below the 2G line in order to avoid all page move processing. This can be reduced to 19 frames per user without much effect on system performance, but the system hits the thrashing point somewhere between 15 and 19 frames per user.
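
For example, the 2GPAGES/USER values in Table 1 can be reproduced directly from this definition (assuming 4KB page frames):

    # Reproducing the 2GPAGES/USER values in Table 1 (4KB frames, 3420 users);
    # minor rounding aside, this matches the reported figures.
    USERS = 3420
    FRAME_KB = 4

    def frames_per_user(below_2g_gb):
        """Available 4KB frames below the 2G line per CMS1 user."""
        frames = below_2g_gb * 1024 * 1024 / FRAME_KB   # GB -> KB -> 4KB frames
        return frames / USERS

    for gb in (1, 0.5, 0.25, 0.2):
        print(round(frames_per_user(gb), 1))
    # -> 76.7, 38.3, 19.2, 15.3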


Footnotes:

1. Also sometimes referred to as the 2G bar.
