ISFC Improvements

Abstract

In z/VM 6.2 IBM shipped improvements to the Inter-System Facility for Communication (ISFC). These improvements prepared ISFC to serve as the data conveyance for relocations of running guests.

Measurements of ISFC's capabilities for guest relocation traffic studied its ability to fill a FICON chpid's fiber with data and its ability to ramp up as the hardware configuration of the logical link expanded. These measurements generally showed that ISFC uses FICON chpids fully and scales correctly with increasing logical link capacity.

Because ISFC is also the data conveyance for APPC/VM, IBM also studied z/VM 6.2's handling of APPC/VM traffic compared back to z/VM 6.1, on logical link configurations z/VM 6.1 can support. This regression study showed that z/VM 6.2 experiences data rate changes in the range of -6% to +78%, with most cases showing substantial improvement. CPU utilization per message moved changed little.

Though IBM did little in z/VM 6.2 to let APPC/VM traffic exploit multi-CTC logical links, APPC/VM workloads did show modest gains in such configurations.

Introduction

In z/VM 6.2 IBM extended the Inter-System Facility for Communication (ISFC) so that it would have the data carrying capacity needed to support guest relocations. The most visible enhancement is that a logical link can now be composed of multiple CTCs. IBM also made many internal improvements to ISFC, to let it scale to the capacities required by guest relocations.

Though performing and measuring actual guest relocations is the ultimate test, we found it appropriate also to devise experiments to measure ISFC alone. Such experiments would let us assess certain basic ISFC success criteria, such as whether ISFC could fully use a maximally configured logical link, without wondering whether execution traits of guest relocations were partially responsible for the results observed. A second and more practical concern was that devising means to measure ISFC alone let us run experiments more flexibly, more simply, and with more precise control than we could have if we had had only guest relocations at our disposal as a measurement tool.

Because ISFC was so heavily revised, we also found it appropriate to run measurements to check performance for APPC/VM workloads. Our main experiment for APPC/VM was to check that a one-CTC logical link could carry as much traffic as the previous z/VM release. Our second experiment was to study the scaling behavior of APPC/VM traffic as we added hardware to the logical link. Because we made very few changes in the APPC/VM-specific portions of ISFC, and because we had no requirement to improve APPC/VM performance in z/VM 6.2, we ran this second experiment mostly out of curiosity.

This report chapter describes the findings of all of these measurements. The chapter also offers some insight into the inner workings of ISFC and provides some guidance on ISFC logical link capacity estimation.

Background

Early in the development of z/VM 6.2, IBM did some very simple measurements to help us understand the characteristics of FICON CTC devices. These experiments' results guided the ISFC design and taught us about the configuration and capacity of multi-CTC ISFC logical links. This section does not cite these simple measurements' specific results. Rather, it merely summarizes their teachings.

Placement of CTCs onto FICON Chpids

When we think about the relationship between FICON CTC devices and FICON CTC chpids, we realize there are several different ways we could place a set of CTCs onto a set of chpids. For example, we could place sixteen CTCs onto sixteen chpids, one CTC on each chpid. Or, we could place sixteen CTCs all onto one chpid.

In very early measurements of multi-CTC ISFC logical links, IBM tried various experiments to determine how many CTCs to put onto a chpid before performance on that chpid no longer improved, for data exchange patterns that imitated what tended to happen during guest relocations. Generally we found that for the FICON Express2 and FICON Express4 chpids we tried, putting more than four to five CTC devices onto a FICON chpid did not result in any more data moving through the logical link. In fact, with high numbers of CTCs on a chpid, performance rolled off.

Though we do not cite the measurement data here, our recommendation is that customers generally run no more than four CTCs on each chpid. This provides good utilization of the fiber capacity and stays well away from problematic configurations.

For this reason, for our own measurements we used no more than four CTCs per FICON chpid.

Traffic Scheduling and Collision Avoidance

A CTC device is a point-to-point communication link connecting two systems. Data can move in either direction, but in only one direction at a time: either side A writes and side B then hears an attention interrupt and reads, or vice-versa. A write collision is what happens when two systems both try to write into a CTC device at the same instant. Neither side's write succeeds. Both sides must recover from the I/O error and try again to write the transmission package. These collisions degrade logical link performance.

When the logical link consists of more than one CTC, ISFC uses a write scheduling algorithm designed to push data over the logical link in a fashion that balances the need to use as many CTCs as possible with the need to stay out of the way of a partner who is trying to accomplish the very same thing.

To achieve this, the two systems agree on a common enumeration scheme for the CTCs comprising the link. The agreed-upon scheme is for both sides to number the CTCs according to the real device numbers in use on the system whose name comes first in the alphabet. For example, if systems ALPHA and BETA are connected by three CTCs, both systems would order the CTCs by ALPHA's device numbers, because ALPHA comes before BETA in the alphabet.

The write scheduling scheme uses the agreed-upon ordering to avoid collisions. When ALPHA needs a CTC for writing, it scans the logical link's device list lowest to highest, looking for one on which an I/O is not in progress. System BETA does similarly, but it scans highest to lowest. When there are enough CTCs to handle the traffic, this scheme will generally avoid collisions. Further, when traffic is asymmetric, this scheme allows the heavily transmitting partner to take control of the majority of the CTCs.

The write scheduling technique also ensures that one side can never take complete control of all of the CTCs. Rather, each side's scan always stops short of covering the whole device list, as follows:

Number of CTCs in Logical Link Write Scheduling Stop-Short Behavior
1-8 CTCs Scan stops 1 short
9-16 CTCs Scan stops 2 short

The stop-short provision guarantees each side that the first one or two devices in its scan will never incur a write collision.

The figure below illustrates the write scheduling scheme for the case of two systems named ATLANTA and BOSTON connected by eight CTCs. The device numbers for ATLANTA are the relevant ones, because ATLANTA alphabetizes ahead of BOSTON. The ATLANTA side scans lowest to highest, while the BOSTON side scans highest to lowest. Each side stops one short.

Figure 620isfc2 not displayed.
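
To make the scan concrete, here is a minimal sketch of the device selection logic in Python. This is illustrative pseudocode, not actual CP code; the device numbers, the busy map, and the helper names are ours, not ISFC's.

def stop_short(total_ctcs):
    # How many devices at the far end of the scan are never used for writing.
    return 1 if total_ctcs <= 8 else 2

def pick_write_device(ctcs, busy, i_am_alphabetically_first):
    # ctcs is the CTC list ordered by the device numbers of the system whose
    # name sorts first; busy maps device -> True while an I/O is in progress.
    depth = len(ctcs) - stop_short(len(ctcs))
    if i_am_alphabetically_first:
        scan = ctcs[:depth]                    # scan lowest to highest
    else:
        scan = list(reversed(ctcs))[:depth]    # scan highest to lowest
    for dev in scan:
        if not busy.get(dev, False):
            return dev
    return None                                # all eligible CTCs busy; queue the data

# Eight CTCs, as in the ATLANTA/BOSTON figure: each side stops one short.
ctcs = ["1000", "1001", "1002", "1003", "1004", "1005", "1006", "1007"]
print(pick_write_device(ctcs, {"1000": True}, True))    # ATLANTA picks 1001
print(pick_write_device(ctcs, {}, False))               # BOSTON picks 1007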

Understanding the write scheduling scheme is important to understanding CTC device utilization statistics. For example, in a heavily asymmetric workload running over a sixteen-CTC link, we would expect to see only fourteen of the devices really busy, because the last two aren't scanned. Further, in going from eight to ten RDEVs, each side's scan gains only one in depth, because for 10 RDEVs we stop two short instead of one.

Understanding the write scheduling scheme is also important if one must build up a logical link out of an assortment of FICON chpid speeds. Generally, customers will want the logical link to exhibit symmetric performance, that is, the link works as well relocating guests from ALPHA to BETA as it does from BETA to ALPHA. Achieving this means paying close attention to how the CTC device numbers are placed onto chpids on the ALPHA side. When there are a number of fast chpids and one or two slow ones, placing the faster chpids on the extremes of ALPHA's list and the slower chpids in the middle of ALPHA's list will give best results. This arrangement gives both ALPHA and BETA a chance to use fast chpids first and then resort to the slower chpids only when the fast CTCs are all busy. For similar reasons, if there is only one fast chpid and the rest are slow ones, put the fast chpid into the middle of ALPHA's device number sequence.

Because understanding the write scheduling scheme is so important, and because the write scheduling scheme is intimately related to device numbers, the QUERY ISLINK command shows the device numbers in use on both the issuer's side and on the partner's side. Here is an example; notice that for each CTC, the Remote link device clause tells what the device number is on the other end of the link:

q islink
Node 2NDI
  Link device: CC30   Type: FCTC   Node: 2NDI
    Bytes Sent: 36592          State: Up
    Bytes Received: 0          Status: Idle
    Remote link device: 33C0
  Link device: CC31   Type: FCTC   Node: 2NDI
    Bytes Sent: 0              State: Up
    Bytes Received: 0          Status: Idle
    Remote link device: 33C1
  Link device: CC32   Type: FCTC   Node: 2NDI
    Bytes Sent: 0              State: Up
    Bytes Received: 0          Status: Idle
    Remote link device: 33C2
  Link device: CC33   Type: FCTC   Node: 2NDI
    Bytes Sent: 0              State: Up
    Bytes Received: 1458       Status: Idle
    Remote link device: 33C3

Once again, remember that the only device numbers that are important in understanding write scheduling are the device numbers in use on the system whose name comes first in the alphabet.

Estimating the Capacity of an ISFC Logical Link

When we do capacity planning for an ISFC logical link, we usually want to estimate how well the link will service guest relocations. Guest relocation workloads' data exchange habits are very asymmetric, that is, they heavily stream data from source system to destination system and have a very light acknowledgement stream flowing in the other direction. Thus it makes sense to talk about estimating the one-way capacity of the logical link.

Roughly speaking, our early experiments revealed a good rule of thumb for estimating the maximum one-way data rate achievable on a FICON ExpressN CTC chpid for messages of the sizes that tend to be exchanged in a guest relocation: take the chpid fiber speed in megabits (Mb) per second, divide by 10, and then multiply by about 0.85. The resultant number is in units of megabytes per second, or MB/sec. For example, a FICON Express4 chpid's fiber runs at 4 gigabits per second, or 4 Gb/s. The chpid's estimated maximum one-way data carrying capacity will tend to be about (4096 / 10 * 0.85), or roughly 350 MB/sec. Using this rough estimating technique, we can build the following table:

FICON Adapter Generation Rough Estimate of One-Way Capacity in Guest Relocation Workloads
FICON Express 87 MB/sec
FICON Express2 175 MB/sec
FICON Express4 350 MB/sec
FICON Express8 700 MB/sec

To estimate the maximum one-way capacity of an ISFC logical link, we just add up the capacities of the chpids, prorating downward for chpids using fewer than four CTCs.

As we form our estimate of the link's one-way capacity, we must also keep in mind the stop-short property of the write scheduling algorithm. For example, a logical link composed of twelve CTC devices spread evenly over three equal-speed chpids will really have only ten CTCs or about 2-1/2 chpids' worth of capacity available to it for streaming a relocation to the other side. Estimates of logical link capacity must take this into account.
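
Here is a minimal sketch of that estimating arithmetic in Python. The per-generation capacities come from the rule-of-thumb table above; the chpid layout passed in, and the assumption that four CTCs use a chpid fully, are ours.

# Rough one-way capacity per chpid generation, in MB/sec, from the table above.
CHPID_CAPACITY = {"Express": 87, "Express2": 175, "Express4": 350, "Express8": 700}

def estimate_one_way_capacity(chpids):
    # chpids: list of (generation, ctcs_on_this_chpid), ordered the way the
    # streaming side scans them.  Returns an estimated one-way rate in MB/sec.
    total_ctcs = sum(n for _, n in chpids)
    usable = total_ctcs - (1 if total_ctcs <= 8 else 2)   # stop-short adjustment
    capacity = 0.0
    for generation, n in chpids:
        used_here = min(n, usable)
        capacity += CHPID_CAPACITY[generation] * used_here / 4.0   # prorate by CTCs used
        usable -= used_here
        if usable <= 0:
            break
    return capacity

# The twelve-CTC row of the table below: two Express2 chpids and one Express4
# chpid, four CTCs each.  Ten CTCs are usable, so 8/4 Express2 + 2/4 Express4.
print(estimate_one_way_capacity([("Express2", 4), ("Express2", 4), ("Express4", 4)]))   # 525.0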

For our particular measurement configuration, this basic approach gives us the following table of estimated one-way capacities for this measurement suite's ISFC logical link hardware:

Our CTC RDEVs                               Total   Max CTCs Used  Distribution                              Estimated One-Way
                                            CTCs    in Streaming   over Chpids                               Capacity (MB/sec)
6000                                           1    Less than 1    Less than 1/4 of an Express2                             20
6000-6003                                      4    3              3/4 of an Express2                                      131
6000-6003, 6020-6023                           8    7              7/4 of an Express2                                      306
6000-6003, 6020-6023, 6040-6041               10    8              8/4 of an Express2                                      350
6000-6003, 6020-6023, 6040-6043               12    10             8/4 of an Express2 + 2/4 of an Express4                 525
6000-6003, 6020-6023, 6040-6043, 6060-6061    14    12             8/4 of an Express2 + 4/4 of an Express4                 700
6000-6003, 6020-6023, 6040-6043, 6060-6063    16    14             8/4 of an Express2 + 6/4 of an Express4                 875

Customers using other logical link configurations will be able to use this basic technique to build their own estimation tables.

It was our experience that our actual measurements tended to do better than these estimates.

Of course, a set of FICON CTCs acting together will be able to service workloads moving appreciable data in both directions. However, because LGR workloads are not particularly symmetric, we did not comprehensively study the behavior of an ISFC logical link when each of the two systems tries to put appreciable transmit load onto the link. We did run one set of workloads that evaluated a moderate intensity, symmetric data exchange scenario. We did this mostly to check that the two systems could exchange data without significantly interfering with one another.

Method

To measure ISFC's behavior, we used the ISFC workloads described in the appendix of this report. The appendix describes the CECs used, the partition configurations, and the FICON chpids used to connect the partitions. This basic hardware setup remained constant through all measurements.

The appendix also describes the choices we made for the numbers of concurrent connections, the sizes of messages exchanged, and the numbers of CTCs comprising the logical link. We varied these choices through their respective spectra as described in the appendix.

A given measurement consisted of a selected number of connections, exchanging messages of a selected size, using an ISFC logical link of a selected configuration. For example, an APPC/VM measurement might consist of 50 client-server pairs running the CDU/CDR tool, using server reply size of 5000 bytes, running over an ISFC logical link that used only the first four CTC devices of our configuration.

We ran each experiment for five minutes, with CP Monitor set to one-minute sample intervals. We collected MONWRITE data on each side.

When all measurements were complete, we reduced the MONWRITE data with a combination of Performance Toolkit for VM and some homegrown Rexx execs that analyzed Monitor records directly.

Metrics of primary interest were data rate, CPU time per unit of data moved, and CTC device-busy percentage.

For multi-CTC logical links, we were also interested in whether ISFC succeeded in avoiding simultaneous writes into the two ends of a given CTC device. This phenomenon, called a write collision, can debilitate the logical link. ISFC contains logic to schedule CTC writes in such a way that the two systems will avoid these collisions almost all of the time. We looked at measurements' collision data to make sure the write scheduling logic worked properly.

Results and Discussion

ISFC Transport Traffic

For convenience of presentation, we organized the result tables by message size, one table per message size. The set of runs done for a specific message size is called a suite. Each suite's table presents its results. The row indices are the number of CTCs in the logical link. The column indices are the number of concurrent conversations.

Within a given suite we expected, and generally found, the following traits:

  • With a given number of client-server pairs, adding CTCs would generally increase data rate until it peaked, and beyond that, adding even more CTCs would not appreciably harm the workload in data rate nor in CPU utilization per message.
  • With a given number of CTCs, adding client-server pairs would generally increase the data rate until a peak, and beyond that, adding even more client-server pairs would not increase data rate.
  • With one CTC in the logical link, write-collision rates would be high.
  • With multiple CTCs in the logical link, write-collision rates would be much reduced.

We also expected to see that the larger the messages, the better ISFC would do at filling the pipe. We expected this because we knew that in making its ISFC design choices, IBM tended to use schemes, algorithms, and data structures that would favor high-volume traffic consisting of fairly large messages.

For the largest messages, we expected and generally found that ISFC would keep the write CTCs nearly 100% busy and would fill the logical link to fiber capacity.

Small Messages

HS: 512/8192, 20%, 2048 (2011-10-14)

                                        Pairs
RDEVs  Metrics    1          5          10         50         100
1      Run        H001616C   H001617C   H001618C   H001619C   H001620C
       MB/sec     14.45      19.87      21.15      22.33      21.31
       %CPU/msg   0.00108    0.00082    0.00084    0.00087    0.00087
       RDEV util  56         47         24         25         25
       Coll/sec   833.6      764.5      1113.1     1061.4     1064.5
4      Run        H001636C   H001637C   H001638C   H001639C   H001640C
       MB/sec     21.28      46.00      54.00      96.00      117.00
       %CPU/msg   0.00082    0.00061    0.00064    0.00062    0.00059
       RDEV util  78         99         112        181        307
       Coll/sec   0.0        0.0        0.0        0.0        55.3
8      Run        H001656C   H001657C   H001658C   H001659C   H001660C
       MB/sec     25.26      63.00      75.00      158.00     230.00
       %CPU/msg   0.00084    0.00060    0.00057    0.00060    0.00059
       RDEV util  84         127        124        202        436
       Coll/sec   0.0        0.0        0.0        0.0        0.7
10     Run        H001676C   H001677C   H001678C   H001679C   H001680C
       MB/sec     26.61      69.00      84.00      189.00     297.00
       %CPU/msg   0.00085    0.00061    0.00063    0.00059    0.00059
       RDEV util  90         128        130        261        529
       Coll/sec   0.0        0.0        0.0        0.0        0.0
12     Run        H001696C   H001697C   H001698C   H001699C   H001700C
       MB/sec     26.59      70.00      84.00      189.00     297.00
       %CPU/msg   0.00085    0.00060    0.00059    0.00060    0.00060
       RDEV util  90         127        122        263        529
       Coll/sec   0.0        0.0        0.0        0.0        0.0
14     Run        H001716C   H001717C   H001718C   H001719C   H001720C
       MB/sec     26.61      69.00      82.00      185.00     297.00
       %CPU/msg   0.00085    0.00068    0.00065    0.00062    0.00062
       RDEV util  90         116        129        288        528
       Coll/sec   0.0        0.0        0.0        0.0        0.0
16     Run        H001736C   H001737C   H001738C   H001739C   H001740C
       MB/sec     26.62      69.00      81.00      186.00     297.00
       %CPU/msg   0.00084    0.00063    0.00058    0.00067    0.00060
       RDEV util  90         124        141        243        529
       Coll/sec   0.0        0.0        0.0        0.0        0.0
Notes: z10, 2097-E56, mci 754, two dedicated partitions, client 3-way, server 12-way, both 43G/2G. Mixed FICON: 6000-6003 2 Gb/s, 6020-6023 2 Gb/s, 6040-6043 4 Gb/s, 6060-6063 4 Gb/s. LGC/LGS workload driver. W0A13 of 2011-10-13, stand-in for W0GOLDEN. Data rate includes traffic in both directions.

Medium Messages

HM: 8192/32768, 20%, 4096 (2011-10-14)

                                        Pairs
RDEVs  Metrics    1          5          10         50         100
1      Run        H001621C   H001622C   H001623C   H001624C   H001625C
       MB/sec     41.43      40.29      41.44      46.03      44.88
       %CPU/msg   0.00211    0.00239    0.00210    0.00200    0.00215
       RDEV util  49         45         45         47         46
       Coll/sec   695.3      999.0      982.9      896.0      919.5
4      Run        H001641C   H001642C   H001643C   H001644C   H001645C
       MB/sec     67.89      99.00      138.00     198.00     209.00
       %CPU/msg   0.00197    0.00185    0.00199    0.00271    0.00286
       RDEV util  88         98         144        285        358
       Coll/sec   0.0        0.0        0.0        0.0        32.4
8      Run        H001661C   H001662C   H001663C   H001664C   H001665C
       MB/sec     86.00      146.00     194.00     324.00     387.00
       %CPU/msg   0.00203    0.00185    0.00215    0.00273    0.00285
       RDEV util  112        145        159        483        687
       Coll/sec   0.0        0.0        0.0        0.0        75.0
10     Run        H001681C   H001682C   H001683C   H001684C   H001685C
       MB/sec     91.00      162.00     211.00     449.00     449.00
       %CPU/msg   0.00202    0.00187    0.00219    0.00261    0.00264
       RDEV util  118        161        194        861        861
       Coll/sec   0.0        0.0        0.0        0.0        0.0
12     Run        H001701C   H001702C   H001703C   H001704C   H001705C
       MB/sec     90.00      161.00     212.00     459.00     620.00
       %CPU/msg   0.00205    0.00233    0.00219    0.00281    0.00304
       RDEV util  118        149        202        627        973
       Coll/sec   0.0        0.0        0.0        0.0        0.0
14     Run        H001721C   H001722C   H001723C   H001724C   H001725C
       MB/sec     90.00      160.00     211.00     597.00     848.00
       %CPU/msg   0.00213    0.00186    0.00220    0.00271    0.00281
       RDEV util  118        156        194        844        1293
       Coll/sec   0.0        0.0        0.0        0.0        0.1
16     Run        H001741C   H001742C   H001743C   H001744C   H001745C
       MB/sec     90.00      162.00     211.00     522.00     797.00
       %CPU/msg   0.00214    0.00195    0.00216    0.00269    0.00280
       RDEV util  118        156        192        693        1099
       Coll/sec   0.0        0.0        0.0        0.0        0.0
Notes: z10, 2097-E56, mci 754, two dedicated partitions, client 3-way, server 12-way, both 43G/2G. Mixed FICON: 6000-6003 2 Gb/s, 6020-6023 2 Gb/s, 6040-6043 4 Gb/s, 6060-6063 4 Gb/s. LGC/LGS workload driver. W0A13 of 2011-10-13, stand-in for W0GOLDEN. Data rate includes traffic in both directions.

Large (LGR-sized) Messages

HL: 98304/122880, 20%, 8192 (2011-10-14)

                                        Pairs
RDEVs  Metrics    1          5          10         50         100
1      Run        H001626C   H001627C   H001628C   H001629C   H001630C
       MB/sec     83.52      84.59      84.60      85.60      84.59
       %CPU/msg   0.00540    0.00533    0.00532    0.00553    0.00597
       RDEV util  50         57         57         57         58
       Coll/sec   692.3      681.8      682.0      681.9      678.6
4      Run        H001646C   H001647C   H001648C   H001649C   H001650C
       MB/sec     157.00     187.00     208.00     209.00     209.00
       %CPU/msg   0.00473    0.00656    0.00516    0.00573    0.00590
       RDEV util  146        257        325        355        355
       Coll/sec   0.0        0.0        0.0        0.0        0.0
8      Run        H001666C   H001667C   H001668C   H001669C   H001670C
       MB/sec     179.00     275.00     314.00     416.00     416.00
       %CPU/msg   0.00485    0.00758    0.00986    0.00983    0.00983
       RDEV util  132        363        500        764        763
       Coll/sec   0.0        0.0        0.0        0.1        0.0
10     Run        H001686C   H001687C   H001688C   H001689C   H001690C
       MB/sec     181.00     308.00     418.00     418.00     418.00
       %CPU/msg   0.00491    0.00705    0.00541    0.00594    0.00619
       RDEV util  134        448        860        859        859
       Coll/sec   0.0        0.0        0.0        0.0        0.0
12     Run        H001706C   H001707C   H001708C   H001709C   H001710C
       MB/sec     194.00     306.00     477.00     706.00     708.00
       %CPU/msg   0.00523    0.00793    0.00900    0.01067    0.01112
       RDEV util  132        446        670        1026       1032
       Coll/sec   0.0        0.0        0.0        0.0        0.0
14     Run        H001726C   H001727C   H001728C   H001729C   H001730C
       MB/sec     179.00     307.00     537.00     796.00     793.00
       %CPU/msg   0.00505    0.00707    0.00829    0.01003    0.01178
       RDEV util  131        446        779        1228       1214
       Coll/sec   0.0        0.0        0.0        0.0        0.0
16     Run        H001746C   H001747C   H001748C   H001749C   H001750C
       MB/sec     182.00     307.00     538.00     911.00     1031.00
       %CPU/msg   0.00509    0.00732    0.00844    0.01033    0.01295
       RDEV util  133        447        751        1199       1410
       Coll/sec   0.0        0.0        0.0        0.0        0.2
Notes: z10, 2097-E56, mci 754, two dedicated partitions, client 3-way, server 12-way, both 43G/2G. Mixed FICON: 6000-6003 2 Gb/s, 6020-6023 2 Gb/s, 6040-6043 4 Gb/s, 6060-6063 4 Gb/s. LGC/LGS workload driver. W0A13 of 2011-10-13, stand-in for W0GOLDEN. Data rate includes traffic in both directions.

Symmetric 32 KB Traffic

HY: 32768/32768, 20%, 32768 (2011-10-14)

                                        Pairs
RDEVs  Metrics    1          5          10         50         100
1      Run        H001631C   H001632C   H001633C   H001634C   H001635C
       MB/sec     50.00      52.00      60.00      68.00      64.00
       %CPU/msg   0.00321    0.00322    0.00334    0.00376    0.00380
       RDEV util  70         40         51         59         61
       Coll/sec   394.9      650.5      609.9      561.0      576.0
4      Run        H001651C   H001652C   H001653C   H001654C   H001655C
       MB/sec     70.00      118.00     158.00     254.00     250.00
       %CPU/msg   0.00296    0.00287    0.00412    0.00531    0.00552
       RDEV util  94         105        147        322        338
       Coll/sec   0.0        0.0        0.0        41.2       91.1
8      Run        H001671C   H001672C   H001673C   H001674C   H001675C
       MB/sec     82.00      152.00     214.00     336.00     386.00
       %CPU/msg   0.00297    0.00287    0.00415    0.00555    0.00563
       RDEV util  82         125        137        352        719
       Coll/sec   0.0        0.0        0.0        0.2        220.1
10     Run        H001691C   H001692C   H001693C   H001694C   H001695C
       MB/sec     94.00      188.00     266.00     564.00     702.00
       %CPU/msg   0.00293    0.00296    0.00446    0.00529    0.00612
       RDEV util  92         147        188        559        915
       Coll/sec   0.0        0.0        0.0        0.0        130.3
12     Run        H001711C   H001712C   H001713C   H001714C   H001715C
       MB/sec     94.00      188.00     272.00     558.00     740.00
       %CPU/msg   0.00304    0.00281    0.00432    0.00555    0.00589
       RDEV util  92         130        178        544        1070
       Coll/sec   0.0        0.0        0.0        0.0        308.6
14     Run        H001731C   H001732C   H001733C   H001734C   H001735C
       MB/sec     92.00      186.00     268.00     564.00     990.00
       %CPU/msg   0.00309    0.00275    0.00424    0.00526    0.00638
       RDEV util  90         128        213        554        1222
       Coll/sec   0.0        0.0        0.0        0.0        236.3
16     Run        H001751C   H001752C   H001753C   H001754C   H001755C
       MB/sec     92.00      184.00     274.00     562.00     948.00
       %CPU/msg   0.00297    0.00281    0.00403    0.00532    0.00628
       RDEV util  91         131        164        554        1118
       Coll/sec   0.0        0.0        0.0        0.0        20.7
Notes: z10, 2097-E56, mci 754, two dedicated partitions, client 3-way, server 12-way, both 43G/2G. Mixed FICON: 6000-6003 2 Gb/s, 6020-6023 2 Gb/s, 6040-6043 4 Gb/s, 6060-6063 4 Gb/s. LGC/LGS workload driver. W0A13 of 2011-10-13, stand-in for W0GOLDEN. Data rate includes traffic in both directions.

One Measurement In Detail

To illustrate what really happens on an ISFC logical link, let's look at one experiment in more detail.

The experiment we will choose is H001709C. This is a large-message experiment using 50 concurrent conversations and 12 CTCs in the logical link. Devices 6000-6003 and 6020-6023 are FICON Express2. Devices 6040-6043 are FICON Express4. Devices 6060-6063 are in the IOCDS but are unused in this particular experiment.

Here's a massaged excerpt from the Performance Toolkit FCX108 DEVICE report, showing the device utilization on the client side. It's evident that the client is keeping ten CTCs busy in pushing the traffic to the server. This is consistent with the CTC write scheduling algorithm. It's also evident that devices 6040-6043 are on a faster chpid. Finally, we see that the server is using device 6043 to send the comparatively light acknowledgement traffic back to the client. Device 6042 is very seldom used, and from our knowledge of the write scheduling algorithm, we know it carries only acknowledgement traffic.

From H001709C PERFKIT B M-HL C-50 R-12

<-- Device  Pa-  <-Rate/s->  <------- Time (msec) ------->  Req.  <Percent>
Addr Type   ths   I/O  Avoid  Pend  Disc  Conn  Serv  Resp  CUWt  Qued  Busy READ
6000 CTCA     1  61.8    ...    .5   1.9  13.4  15.8  15.8    .0    .0    98   ..
6001 CTCA     1  61.8    ...    .5   1.9  13.4  15.8  15.8    .0    .0    98   ..
6002 CTCA     1  61.6    ...    .5   1.8  13.5  15.8  15.8    .0    .0    97   ..
6003 CTCA     1  61.6    ...    .5   1.9  13.4  15.8  15.8    .0    .0    97   ..
6020 CTCA     1  61.5    ...    .5   1.8  13.6  15.9  15.9    .0    .0    98   ..
6021 CTCA     1  61.6    ...    .5   1.8  13.5  15.8  15.8    .0    .0    97   ..
6022 CTCA     1  61.4    ...    .5   1.9  13.5  15.9  15.9    .0    .0    98   ..
6023 CTCA     1  61.2    ...    .5   1.9  13.5  15.9  15.9    .0    .0    97   ..
6040 CTCA     1   171    ...    .4   2.0   3.1   5.5   5.5    .0    .0    94   ..
6041 CTCA     1   170    ...    .4   2.0   3.1   5.5   5.5    .0    .0    94   ..
6042 CTCA     1    .7    ...    .3    .2   1.1   1.6   1.6    .0    .0     0   ..
6043 CTCA     1   472    ...    .2    .1    .9   1.2   1.2    .0    .0    57   ..
6060 CTCA     1    .0    ...   ...   ...   ...   ...   ...   ...     0    ..   ..
6061 CTCA     1    .0    ...   ...   ...   ...   ...   ...   ...     0    ..   ..
6062 CTCA     1    .0    ...   ...   ...   ...   ...   ...   ...     0    ..   ..
6063 CTCA     1    .0    ...   ...   ...   ...   ...   ...   ...     0    ..   ..

A homegrown tool predating Performance Toolkit for VM's ISFLACT report shows us a good view of the logical link performance from the client side. The tool uses the D9 R4 MRISFNOD logical link activity records to report on logical link statistics. The tool output excerpt below shows all of the following:

  • The client's transmission-pending queue is not severely deep.
  • No write collisions are happening.
  • The link is carrying about 660 MB/sec to the server.
  • Each client-to-server link package, written with one SSCH, carries about seven API messages totalling about 829,000 bytes.
  • The link is carrying about 46 MB/sec to the client.
  • Each server-to-client link package, also written with one SSCH, carries about 25 API messages totalling about 205,000 bytes.
Run H001709C talking over link GDLBOFVM, config HL, P=50, R=12

____ISO-UTC________ _TXPENDCT_ _WCol/sec_
2011-10-14 04:53:43        6.0        0.0
2011-10-14 04:54:43        6.0        0.0
2011-10-14 04:55:43        4.0        0.0

____ISO-UTC________ _WMB/sec__ _WMsg/sec_ _WPkg/sec_ _WByt/pkg_ _WMsg/pkg_
2011-10-14 04:53:43      660.0     5849.4      834.2   829631.1        7.0
2011-10-14 04:54:43      657.3     5825.2      830.7   829679.3        7.0
2011-10-14 04:55:43      661.0     5857.3      835.4   829597.7        7.0

____ISO-UTC________ _RMB/sec__ _RMsg/sec_ _RPkg/sec_ _RByt/pkg_ _RMsg/pkg_
2011-10-14 04:53:43       46.3     5847.2      236.7   205140.3       24.7
2011-10-14 04:54:43       46.1     5825.5      235.3   205590.7       24.8
2011-10-14 04:55:43       46.4     5856.8      236.6   205612.1       24.8
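
The per-package columns in the excerpt are simply ratios of the interval's byte, message, and package (SSCH) counts. As a rough illustration of the arithmetic, assuming MB here means 2**20 bytes, the first write-side interval works out as follows:

# Write-side rates from the first interval of the excerpt above.
wmb_per_sec  = 660.0     # data written to the server, MB/sec
wmsg_per_sec = 5849.4    # API messages written per second
wpkg_per_sec = 834.2     # link packages (one SSCH each) written per second

bytes_per_pkg = wmb_per_sec * 2**20 / wpkg_per_sec
msgs_per_pkg  = wmsg_per_sec / wpkg_per_sec

print(round(bytes_per_pkg))    # ~829609, within rounding of the 829631.1 WByt/pkg column
print(round(msgs_per_pkg, 1))  # 7.0, matching the WMsg/pkg column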

APPC/VM Regression

For these measurements our objective was to compare z/VM 6.2 to z/VM 6.1, using an ISFC logical link of one CTC, with a variety of message sizes and client-server pairs. We measured server replies per second and CPU utilization per reply. The tables below show the results.

Generally z/VM 6.2 showed substantial improvements in server replies per second, though a few anomalies were observed. z/VM 6.2 also generally showed small percentage increases in CPU consumption per message moved. These increases are not alarming, because the CDU/CDR suite spends almost no CPU time in the guests; in customer environments, guest CPU time will be substantial, so small changes in CP CPU time will likely be negligible.

Runs 60* are z/VM 6.1 driver 61TOP908, which is z/VM 6.1 plus all corrective service as of September 8, 2010.

Runs W0* are z/VM 6.2 driver W0A13, which is z/VM 6.2 as of October 13, 2011.
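
In each table below, the Delta and PctDelta rows are simple arithmetic on the z/VM 6.1 and z/VM 6.2 runs above them. A minimal sketch, using the first run pair of the reply size 1 table as the worked example:

def compare(base, new):
    # Returns (delta, percent delta) of a z/VM 6.2 value against its z/VM 6.1 base.
    delta = new - base
    return delta, 100.0 * delta / base

# Runs 6000899S (z/VM 6.1) and W000934S (z/VM 6.2), one client-server pair.
d, pct = compare(1113.84, 1421.68)   # Msgs/sec
print(round(d, 2), round(pct, 2))    # 307.84 27.64

d, pct = compare(0.0040, 0.0039)     # %CPU/Msg
print(round(d, 4), round(pct, 2))    # -0.0001 -2.5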

Reply size 1
Pairs Run name Msgs/sec %CPU/Msg
1 6000899S 1113.84 0.0040
1 W000934S 1421.68 0.0039
- Delta 307.84 -0.0001
- PctDelta 27.64 -2.50
5 6000906S 2347.28 0.0029
5 W000941S 3342.56 0.0031
- Delta 995.28 0.0002
- PctDelta 42.40 6.90
10 6000913S 4305.60 0.0026
10 W000948S 6066.32 0.0027
- Delta 1760.72 0.0001
- PctDelta 40.89 3.85
50 6000920S 8557.12 0.0024
50 W000955S 11596.0 0.0025
- Delta 3038.88 0.0001
- PctDelta 35.51 4.17
100 6000927S 8344.96 0.0024
100 W000962S 12771.2 0.0025
- Delta 4426.24 0.0001
- PctDelta 53.04 4.17
Notes: z10, 2097-E56, mci 754, two partitions, one 3-way and one 12-way, both 43G/2G. One FICON CTC on a 2 Gb/s chpid. CDR/CDU workload driver. 61TOP908 of 2010-09-08. W0A13 of 2011-10-13, stand-in for W0GOLDEN. Message rate is replies sent from server to client. Statistics collected on server side.

Reply size 1000
Pairs Run name Msgs/sec %CPU/Msg
1 6000900S 1110.72 0.0040
1 W000935S 1398.80 0.0040
- Delta 288.08 0
- PctDelta 25.94 0.00
5 6000907S 2322.32 0.0029
5 W000942S 3149.12 0.0032
- Delta 826.80 0.0003
- PctDelta 35.60 10.34
10 6000914S 4308.72 0.0026
10 W000949S 5676.32 0.0028
- Delta 1367.60 0.0002
- PctDelta 31.74 7.69
50 6000921S 8426.08 0.0024
50 W000956S 8410.48 0.0026
- Delta -15.60 0.0002
- PctDelta -0.19 8.33
100 6000928S 8309.60 0.0024
100 W000963S 8712.08 0.0026
- Delta 402.48 0.0002
- PctDelta 4.84 8.33
Notes: z10, 2097-E56, mci 754, two partitions, one 3-way and one 12-way, both 43G/2G. One FICON CTC on a 2 Gb/s chpid. CDR/CDU workload driver. 61TOP908 of 2010-09-08. W0A13 of 2011-10-13, stand-in for W0GOLDEN. Message rate is replies sent from server to client. Statistics collected on server side.

Reply size 5000
Pairs Run name Msgs/sec %CPU/Msg
1 6000901S 1025.232 0.0043
1 W000936S 1294.80 0.0043
- Delta 269.568 0
- PctDelta 26.29 0.00
5 6000908S 2110.16 0.0032
5 W000943S 2422.16 0.0035
- Delta 312.00 0.0003
- PctDelta 14.79 9.38
10 6000915S 3546.40 0.0029
10 W000950S 4291.04 0.0031
- Delta 744.64 0.0002
- PctDelta 21.00 6.90
50 6000922S 5315.44 0.0028
50 W000957S 6494.80 0.0029
- Delta 1179.36 0.0001
- PctDelta 22.19 3.57
100 6000929S 5441.28 0.0028
100 W000964S 5150.08 0.0032
- Delta -291.20 0.0004
- PctDelta -5.35 14.29
Notes: z10, 2097-E56, mci 754, two partitions, one 3-way and one 12-way, both 43G/2G. One FICON CTC on a 2 Gb/s chpid. CDR/CDU workload driver. 61TOP908 of 2010-09-08. W0A13 of 2011-10-13, stand-in for W0GOLDEN. Statistics collected on server side.

Reply size 10000
Pairs Run name Msgs/sec %CPU/Msg
1 6000902S 973.128 0.0045
1 W000937S 1214.72 0.0049
- Delta 241.592 0.0004
- PctDelta 24.83 8.89
5 6000909S 1901.12 0.0036
5 W000944S 2280.72 0.0039
- Delta 379.60 0.0003
- PctDelta 19.97 8.33
10 6000916S 3176.16 0.0033
10 W000951S 3895.84 0.0033
- Delta 719.68 0
- PctDelta 22.66 0.00
50 6000923S 2714.40 0.0034
50 W000958S 3255.20 0.0034
- Delta 540.80 0
- PctDelta 19.92 0.00
100 6000930S 3548.48 0.0042
100 W000965S 3947.84 0.0042
- Delta 399.36 0
- PctDelta 11.25 0.00
Notes: z10, 2097-E56, mci 754, two partitions, one 3-way and one 12-way, both 43G/2G. One FICON CTC on a 2 Gb/s chpid. CDR/CDU workload driver. 61TOP908 of 2010-09-08. W0A13 of 2011-10-13, stand-in for W0GOLDEN. Message rate is replies sent from server to client. Statistics collected on server side.

Reply size 50000
Pairs Run name Msgs/sec %CPU/Msg
1 6000903S 559.936 0.0086
1 W000938S 524.576 0.0092
- Delta -35.360 0.0006
- PctDelta -6.32 6.98
5 6000910S 843.232 0.0076
5 W000945S 1023.464 0.0070
- Delta 180.232 -0.0006
- PctDelta 21.37 -7.89
10 6000917S 996.216 0.0072
10 W000952S 1491.36 0.0064
- Delta 495.144 -0.0008
- PctDelta 49.70 -11.11
50 6000924S 694.304 0.0127
50 W000959S 1219.92 0.0125
- Delta 525.616 -0.0002
- PctDelta 75.70 -1.57
100 6000931S 908.856 0.0141
100 W000966S 1536.08 0.0133
- Delta 627.224 -0.0008
- PctDelta 69.01 -5.67
Notes: z10, 2097-E56, mci 754, two partitions, one 3-way and one 12-way, both 43G/2G. One FICON CTC on a 2 Gb/s chpid. CDR/CDU workload driver. W0A13 of 2011-10-13, stand-in for W0GOLDEN. Message rate is replies sent from server to client. Statistics collected on server side.

Reply size 100000
Pairs Run name Msgs/sec %CPU/Msg
1 6000904S 343.408 0.0140
1 W000939S 342.784 0.0140
- Delta -0.624 0
- PctDelta -0.18 0.00
5 6000911S 495.768 0.0121
5 W000946S 674.336 0.0113
- Delta 178.568 -0.0008
- PctDelta 36.02 -6.61
10 6000918S 308.880 0.0142
10 W000953S 386.360 0.0145
- Delta 77.480 0.0003
- PctDelta 25.08 2.11
50 6000925S 402.480 0.0239
50 W000960S 678.184 0.0230
- Delta 275.704 -0.0009
- PctDelta 68.50 -3.77
100 6000932S 497.328 0.0265
100 W000967S 862.056 0.0251
- Delta 364.728 -0.0014
- PctDelta 73.34 -5.28
Notes: z10, 2097-E56, mci 754, two partitions, one 3-way and one 12-way, both 43G/2G. One FICON CTC on a 2 Gb/s chpid. CDR/CDU workload driver. 61TOP908 of 2010-09-08. W0A13 of 2011-10-13, stand-in for W0GOLDEN. Message rate is replies sent from server to client. Statistics collected on server side.

Reply size 1000000
Pairs Run name Msgs/sec %CPU/Msg
1 6000905S 39.936 0.1202
1 W000940S 40.248 0.1193
- Delta 0.312 -0.0009
- PctDelta 0.78 -0.75
5 6000912S 55.328 0.1084
5 W000947S 84.344 0.0948
- Delta 29.016 -0.0136
- PctDelta 52.44 -12.55
10 6000919S 28.184 0.1703
10 W000954S 42.848 0.1494
- Delta 14.664 -0.0209
- PctDelta 52.03 -12.27
50 6000926S 46.904 0.2558
50 W000961S 77.584 0.2526
- Delta 30.680 -0.0032
- PctDelta 65.41 -1.25
100 6000933S 55.744 0.2727
100 W000968S 99.216 0.2661
- Delta 43.472 -0.0066
- PctDelta 77.99 -2.42
Notes: z10, 2097-E56, mci 754, two partitions, one 3-way and one 12-way, both 43G/2G. One FICON CTC on a 2 Gb/s chpid. CDR/CDU workload driver. 61TOP908 of 2010-09-08. W0A13 of 2011-10-13, stand-in for W0GOLDEN. Message rate is replies sent from server to client. Statistics collected on server side.

APPC/VM Scaling

When IBM improved ISFC for z/VM 6.2, its objective was to create a data transport service suitable for use in relocating guests. Low-level ISFC drivers were rewritten to pack messages well, to use multiple CTCs, and the like. Further, new higher layers of ISFC were created so as to offer a new data exchange API to other parts of the Control Program.

As part of the ISFC effort, IBM made little to no effort to improve APPC/VM performance per se. For example, locking and serialization limits known to exist in the APPC/VM-specific portions of ISFC were not relieved.

Because of this, IBM expected some APPC/VM scaling for multi-CTC logical links, but the behavior was expected to be modest at best. Mostly out of curiosity, we ran the CDU/CDR workloads on a variety of multi-CTC logical links, to see what would happen.

We found APPC/VM traffic did scale, but not as well as ISFC Transport traffic did. For example, the largest-configuration APPC/VM measurement, W001054C, achieved (1000000 x 199.78) / (1024 x 1024), or about 190 MB/sec, in reply messages from server to client over our sixteen-RDEV setup. By contrast, the largest-configuration ISFC Transport measurement, H001750C, achieved 964 MB/sec client-to-server (not tabulated) on the very same logical link.
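
That data rate figure is just the reply rate times the reply size, converted to MB/sec (with MB taken as 2**20 bytes); a one-line check:

reply_bytes = 1000000        # server reply size for run W001054C
replies_per_sec = 199.78     # reply rate quoted above
print(reply_bytes * replies_per_sec / 2**20)   # ~190.5 MB/sec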

The tables below capture the results.

Reply size 1

                            Pairs
RDEVs  Metrics    1          50         100
1      Run        W000995C   W000999C   W001003C
       Msgs/sec   1414.0     11356.2    12615.2
       %CPU/msg   0.0040     0.0021     0.0022
       RDEV util  60.00      48.00      46.00
4      Run        W001007C   W001011C   W001015C
       Msgs/sec   1412.0     28890.0    45905.6
       %CPU/msg   0.0037     0.0019     0.0019
       RDEV util  60.00      127.00     158.00
8      Run        W001019C   W001023C   W001027C
       Msgs/sec   1413.0     29160.0    45697.6
       %CPU/msg   0.0037     0.0019     0.0019
       RDEV util  45.00      114.00     122.00
12     Run        W001031C   W001035C   W001039C
       Msgs/sec   1458.5     30655.8    42130.4
       %CPU/msg   0.0038     0.0019     0.0019
       RDEV util  46.00      120.00     121.00
16     Run        W001043C   W001047C   W001051C
       Msgs/sec   1435.0     30078.0    41787.2
       %CPU/msg   0.0045     0.0021     0.0019
       RDEV util  45.00      118.00     119.00
Notes: z10, 2097-E56, mci 754, two partitions, one 3-way and one 12-way, both 43G/2G. FICON CTCs as in appendix. CDR/CDU workload driver. W0A13 of 2011-10-13, stand-in for W0GOLDEN. Message rate is replies sent from server to client. Statistics from client side.

Reply size 100000

                            Pairs
RDEVs  Metrics    1          50         100
1      Run        W000997C   W001001C   W001005C
       Msgs/sec   342.70     698.76     860.184
       %CPU/msg   0.0163     0.0166     0.0167
       RDEV util  49.00      47.00      62.00
4      Run        W001009C   W001013C   W001017C
       Msgs/sec   342.75     957.42     1466.40
       %CPU/msg   0.0163     0.0188     0.0199
       RDEV util  66.00      108.00     195.00
8      Run        W001021C   W001025C   W001029C
       Msgs/sec   348.50     971.46     1607.84
       %CPU/msg   0.0161     0.0202     0.0214
       RDEV util  50.00      122.00     219.00
12     Run        W001033C   W001037C   W001041C
       Msgs/sec   386.80     1084.32    1806.48
       %CPU/msg   0.0165     0.0196     0.0210
       RDEV util  54.00      77.00      146.00
16     Run        W001045C   W001049C   W001053C
       Msgs/sec   379.65     1085.94    1799.20
       %CPU/msg   0.0179     0.0195     0.0205
       RDEV util  54.00      76.00      142.00
Notes: z10, 2097-E56, mci 754, two partitions, one 3-way and one 12-way, both 43G/2G. FICON CTCs as in appendix. CDR/CDU workload driver. W0A13 of 2011-10-13, stand-in for W0GOLDEN. Message rate is replies sent from server to client. Statistics from client side.

Reply size 1000000

                            Pairs
RDEVs  Metrics    1          50         100
1      Run        W000998C   W001002C   W001006C
       Msgs/sec   40.485     74.790     97.760
       %CPU/msg   0.1383     0.1872     0.1800
       RDEV util  47.00      46.00      55.00
4      Run        W001010C   W001014C   W001018C
       Msgs/sec   40.285     92.178     156.416
       %CPU/msg   0.1390     0.2300     0.2276
       RDEV util  66.00      109.00     204.00
8      Run        W001022C   W001026C   W001030C
       Msgs/sec   40.985     91.638     162.968
       %CPU/msg   0.1464     0.2357     0.2454
       RDEV util  67.00      120.00     216.00
12     Run        W001034C   W001038C   W001042C
       Msgs/sec   46.185     119.286    195.000
       %CPU/msg   0.1386     0.2247     0.2338
       RDEV util  54.00      84.00      156.00
16     Run        W001046C   W001050C   W001054C
       Msgs/sec   45.565     121.608    199.784
       %CPU/msg   0.1492     0.2270     0.2483
       RDEV util  54.00      83.00      160.00
Notes: z10, 2097-E56, mci 754, two partitions, one 3-way and one 12-way, both 43G/2G. FICON CTCs as in appendix. CDR/CDU workload driver. W0A13 of 2011-10-13, stand-in for W0GOLDEN. Message rate is replies sent from server to client. Statistics from client side.

Why APPC/VM Traffic Doesn't Scale

The reason APPC/VM traffic achieves only modest gains on a multi-CTC logical link is fairly easy to see if we look at an FCX108 DEVICE excerpt from the server side. Run W001054C was the largest APPC/VM scaling measurement we tried: server replies 1000000 bytes long, 100 client-server pairs, and a sixteen-CTC logical link. Here is the FCX108 DEVICE excerpt from the server's MONWRITE data. The server, the sender of the large messages in this experiment, is later in the alphabet, so it starts its CTC scan from the bottom of the list and works upward.

Run W001054C, server side, 1000000 bytes, 100 pairs, 16 RDEVs

<-- Device  Pa-  <-Rate/s->  <------- Time (msec) ------->  Req.  <Percent>
Addr Type   ths   I/O  Avoid  Pend  Disc  Conn  Serv  Resp  CUWt  Qued  Busy READ
6000 CTCA     1   430    ...    .1    .1    .0    .2    .2    .0    .0     9   ..
6001 CTCA     1    .0    ...    .4    .3    .0    .7    .7    .0    .0     0   ..
6002 CTCA     1    .0    ...   ...   ...   ...   ...   ...   ...     0    ..   ..
6003 CTCA     1    .0    ...   ...   ...   ...   ...   ...   ...     0    ..   ..
6020 CTCA     1    .0    ...   ...   ...   ...   ...   ...   ...     0    ..   ..
6021 CTCA     1    .0    ...   ...   ...   ...   ...   ...   ...     0    ..   ..
6022 CTCA     1    .0    ...   ...   ...   ...   ...   ...   ...     0    ..   ..
6023 CTCA     1    .0    ...   ...   ...   ...   ...   ...   ...     0    ..   ..
6040 CTCA     1    .0    ...   ...   ...   ...   ...   ...   ...     0    ..   ..
6041 CTCA     1    .0    ...   ...   ...   ...   ...   ...   ...     0    ..   ..
6042 CTCA     1    .1    ...    .2    .4    .9   1.5   1.5    .0    .0     0   ..
6043 CTCA     1  35.1    ...    .1    .4    .8   1.3   1.3    .0    .0     5   ..
6060 CTCA     1  76.8    ...    .3    .8   1.9   3.0   3.0    .0    .0    23   ..
6061 CTCA     1   128    ...    .3    .8   1.6   2.7   2.7    .0    .0    34   ..
6062 CTCA     1   163    ...    .2    .7   1.5   2.4   2.4    .0    .0    39   ..
6063 CTCA     1   269    ...    .2    .5   1.0   1.7   1.7    .0    .0    46   ..

The server is making use of more than one CTC, but it is not coming close to using all of them. This is not much of a surprise. The APPC/VM protocol layer of ISFC is known to be heavily serialized.

Contrast the APPC/VM device utilization picture with the one from large ISFC Transport workload H001750C. Remember that in the ISFC Transport workload, the client, the sender of the large messages, is earlier in the alphabet, so it starts its scan from the top and works downward.

From H001750C PERFKIT B M-HL C-100 R-16

<-- Device  Pa-  <-Rate/s->  <------- Time (msec) ------->  Req.  <Percent>
Addr Type   ths   I/O  Avoid  Pend  Disc  Conn  Serv  Resp  CUWt  Qued  Busy READ
6000 CTCA     1  61.7    ...    .5   2.9  12.1  15.5  15.5    .0    .0    96   ..
6001 CTCA     1  61.6    ...    .5   2.9  12.1  15.5  15.5    .0    .0    95   ..
6002 CTCA     1  61.4    ...    .5   2.9  12.2  15.6  15.6    .0    .0    96   ..
6003 CTCA     1  61.3    ...    .5   2.9  12.2  15.6  15.6    .0    .0    96   ..
6020 CTCA     1  61.4    ...    .5   2.9  12.2  15.6  15.6    .0    .0    96   ..
6021 CTCA     1  61.3    ...    .5   2.9  12.2  15.6  15.6    .0    .0    96   ..
6022 CTCA     1  61.3    ...    .5   2.9  12.2  15.6  15.6    .0    .0    96   ..
6023 CTCA     1  60.9    ...    .5   3.0  12.2  15.7  15.7    .0    .0    96   ..
6040 CTCA     1   108    ...    .4   3.2   5.0   8.6   8.6    .0    .0    93   ..
6041 CTCA     1   107    ...    .4   3.2   5.0   8.6   8.6    .0    .0    92   ..
6042 CTCA     1   107    ...    .4   3.2   5.0   8.6   8.6    .0    .0    92   ..
6043 CTCA     1   106    ...    .4   3.2   5.0   8.6   8.6    .0    .0    91   ..
6060 CTCA     1   142    ...    .4   3.1   2.7   6.2   6.2    .0    .0    88   ..
6061 CTCA     1   141    ...    .4   3.1   2.7   6.2   6.2    .0    .0    88   ..
6062 CTCA     1   214    ...    .3    .2   1.1   1.6   1.6    .0    .0    34   ..
6063 CTCA     1   282    ...    .2    .2   1.0   1.4   1.4    .0    .0    40   ..

This comparison clearly shows the payoff in having built the ISFC Transport API not to serialize. The client side is doing a good job of keeping its fourteen transmit CTCs significantly busy.

Summary and Conclusions

For traffic using the new ISFC Transport API with message sizes approximating those used in relocations, ISFC fully uses FICON fiber capacity and scales correctly as FICON chpids are added to the logical link.

For APPC/VM regression traffic, z/VM 6.2 offers improvement in data rate compared to z/VM 6.1. Message rate increases as high as 78% were observed.

APPC/VM traffic can flow over a multi-CTC logical link, but rates compared to a single-CTC link are only modestly better.
