
VSE/ESA Guest

This section examines the performance effects of migrating a VSE/ESA guest from VM/ESA 2.4.0 to the z/VM 3.1.0 31-bit CP and to the z/VM 3.1.0 64-bit CP. All measurements were made on a 2064-109 using the DYNAPACE workload, a batch workload characterized by heavy I/O. See VSE Guest (DYNAPACE) for a description of this workload.

Measurements were obtained with the VSE/ESA system running as a V=R guest and as a V=V guest. The V=R guest environment used dedicated DASD with I/O assist. The V=V guest environment was configured with full-pack minidisk DASD and with minidisk caching (MDC) active.
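For concreteness, these two environments correspond to CP definitions and commands along the following lines. This is a minimal, hypothetical sketch rather than the measured configuration: the user IDs match the virtual machine table below, but the device numbers, volume serial, and passwords are invented for illustration.

    * Hypothetical directory fragment for the V=V guest. An MDISK
    * statement that starts at cylinder 0 with an extent of END
    * defines a full-pack minidisk, which is eligible for MDC.
    USER VSEVV password 96M 96M G
       MACHINE ESA
       MDISK 0200 3390 0000 END VSEPK1 MW

    * Hypothetical fragment for the V=R guest. The VSE volumes are
    * dedicated to the guest rather than defined as minidisks.
    USER VSEVR password 96M 96M G
       MACHINE ESA
       DEDICATE 0200 0200

For the V=R guest, I/O assist and CCW translation were then controlled with the CP SET IOASSIST ON and SET CCWTRAN OFF commands, as reflected in the Other Options column of the virtual machine table below.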

Workload: DYNAPACE 

Hardware Configuration 

Processor model:
    V=R case:               2064-109 in basic mode, 2 processors online
    V=V case:               2064-109 LPAR with 2 dedicated processors
Storage:
    Real:                   1GB
    Expanded:               2GB
DASD:


Type of   Control   Number      ------------- Number of Volumes -------------
DASD      Unit      of Paths    PAGE   SPOOL   TDSK   VSAM   VSE Sys.   VM Sys.
3390-3    RAMAC 1   4                                 20     2
3390-3    RAMAC 2   4                                                   1

Note: Each set of RAMAC 1 volumes is behind a 3990-6 control unit with 1024M cache. RAMAC 2 refers to the RAMAC 2 Array Subsystem with 256M cache and drawers in 3390-3 format.

Software Configuration 

VSE version:  2.1.0 (using the standard dispatcher)
 
Virtual Machines:


Virtual                           Machine
Machine    Number   Type          Size/Mode   SHARE   RESERVED   Other Options
VSEVR      1        VSE V=R       96MB/ESA    100                IOASSIST ON
                                                                 CCWTRANS OFF
   or
VSEVV      1        VSE V=V       96MB/ESA    100                IOASSIST OFF
WRITER     1        CP monitor    2MB/XA      100

Measurement Discussion

The V=R and V=V results are summarized in Table 1 and Table 2, respectively. In each table, the absolute results are shown first, followed by the same results expressed as ratios relative to the VM/ESA 2.4.0 base case.
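As a worked example, the CP/CMD (H) ratio shown for the z/VM 3.1.0 31-bit run in Table 1 is its measured value divided by the base case value:

    22.11 / 23.12 = 0.956

For the per-command time metrics, a ratio below 1.000 therefore means less processor time was used per command than in the base case.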

The V=R results for all three cases are equivalent within run variability, and likewise the V=V results are equivalent within run variability. However, the apparent increases in CP/CMD (H) do suggest some increase in CP overhead as a result of the 64-bit support. These increases, if present, are much smaller than the 5.2% to 7.1% increases in CP/CMD (H) that were observed for the z/VM 3.1.0 64-bit case in the CMS-intensive environments (see CMS-Intensive).
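The per-command processor time metrics in the tables are related by a simple identity: PBT/CMD (H), total processor busy time per command, is the sum of its CP and emulation components. This provides a quick consistency check on each column. For the VM/ESA 2.4.0 base case in Table 1:

    PBT/CMD (H) = CP/CMD (H) + EMUL/CMD (H)
    430.05 = 23.12 + 406.93

The same identity holds, within rounding, for every column of both tables.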


Table 1. VSE V=R Guest Migration from VM/ESA 2.4.0

Release                  2.4.0        3.1.0        3.1.0
CP                       31-bit       31-bit       64-bit
Runid                    P3R240G0     P3R12280     P6R12280

UTIL/PROC (H)            5.54         5.48         5.50
PBT/CMD (H)              430.05       424.93       426.86
CP/CMD (H)               23.12        22.11        23.19
EMUL/CMD (H)             406.93       402.82       403.67
PERCENT CP (H)           5.4          5.2          5.4
RIO/CMD                  855          854          855
VIO/CMD                  855          854          855
PRIVOP/CMD               295          287          291
DIAG/CMD                 268          268          271

Ratios relative to the VM/ESA 2.4.0 base case:

UTIL/PROC (H)            1.000        0.989        0.993
PBT/CMD (H)              1.000        0.988        0.993
CP/CMD (H)               1.000        0.956        1.003
EMUL/CMD (H)             1.000        0.990        0.992
PERCENT CP (H)           1.000        0.963        1.000
RIO/CMD                  1.000        0.999        1.000
VIO/CMD                  1.000        0.999        1.000
PRIVOP/CMD               1.000        0.973        0.986
DIAG/CMD                 1.000        1.000        1.011

Note: 2064-109; basic mode; 2 online processors; 1G/2G central/expanded storage; dedicated DASD; IOASSIST ON; DYNAPACE workload; H=Hardware Monitor, Unmarked=VMPRF
 
 
 
 


Table 2. VSE V=V Guest Migration from VM/ESA 2.4.0

Release                  2.4.0        3.1.0        3.1.0
CP                       31-bit       31-bit       64-bit
Runid                    P3V240G4     P3V12101     P6V12101

UTIL/PROC (H)            8.78         8.76         8.71
PBT/CMD (H)              492.79       491.33       488.64
CP/CMD (H)               81.74        82.30        83.52
EMUL/CMD (H)             411.05       409.03       405.13
PERCENT CP (H)           16.6         16.8         17.1
MDC REAL SIZE (MB)       279          290          299
MDC XSTOR SIZE (MB)      9            9            8
MDC HIT RATIO            95.3         94.7         95.3
RIO/CMD                  510          515          510
VIO/CMD                  1170         1171         1171
PRIVOP/CMD               9820         9826         9828
DIAG/CMD                 269          272          272

Ratios relative to the VM/ESA 2.4.0 base case:

UTIL/PROC (H)            1.000        0.998        0.992
PBT/CMD (H)              1.000        0.997        0.992
CP/CMD (H)               1.000        1.007        1.022
EMUL/CMD (H)             1.000        0.995        0.986
PERCENT CP (H)           1.000        1.012        1.030
MDC REAL SIZE (MB)       1.000        1.039        1.072
MDC XSTOR SIZE (MB)      1.000        1.000        0.889
MDC HIT RATIO            1.000        0.994        1.000
RIO/CMD                  1.000        1.010        1.000
VIO/CMD                  1.000        1.001        1.001
PRIVOP/CMD               1.000        1.001        1.001
DIAG/CMD                 1.000        1.011        1.011

Note: 2064-109; LPAR; 2 dedicated processors; 1G/2G central/expanded storage; VSE DASD attached as full-pack minidisks; MDC stor: default; MDC xstor: bias 0.1; DYNAPACE workload; H=Hardware Monitor, Unmarked=VMPRF
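The MDC expanded storage bias of 0.1 noted above biases the arbiter against giving expanded storage to minidisk caching, which is consistent with the small MDC XSTOR SIZE values in Table 2. As a hedged illustration only, assuming the BIAS operand form of the CP SET MDCACHE command (verify the exact operands in the CP Commands and Utilities Reference for your release), such a bias would be set with something like:

    CP SET MDCACHE XSTORE BIAS 0.1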
