Using CPU Measurement Facility Host Counters

With VM64961 applied to z/VM 5.4, or with later z/VM releases, z/VM can now collect and record the System z CPU Measurement Facility host counters. These counters record the hardware performance experience of the logical PUs of the z/VM partition.

z/VM's CP Monitor facility logs out the counters in a new Monitor record, D5 R13 MRPRCMFC. The MONWRITE utility journals the monitor records to disk.

In this article we describe what the counters portray, how to collect and reduce them, what the calculated metrics mean, and how to use those metrics to gain insight into the behavior of the z/VM partition and its logical PUs.


What the Counters Portray

The System z CPU Measurement Facility offers means by which a System z CPU records its internal performance experience for later extraction by software. In this book IBM describes the System z CPU Measurement Facility.

The host counters component of CPU MF counts internal CPU events such as instructions completed, clock cycles used, and cache misses experienced; in other words, the counters record the behavioral experience of the System z CPU as it runs the partition's work. In this related book IBM describes the specific meanings of some of the more detailed counter sets.


How to Collect the Counters

To make use of the counters, one must first set up to collect them. In our z/VM 6.2 performance considerations article we describe how to set up to record the counters in MONWRITE data. Following those instructions yields a MONWRITE file containing the D5 R13 MRPRCMFC records.


How to Reduce the Counters

In his CPU MF presentation John Burg of IBM describes the calculations needed to derive interesting metrics from the raw counter values. Each CEC type (z10, z196, zEC12) emits raw counters of different meaning and layout, so the calculations are specific to machine type. The output of the calculations is a set of values useful in understanding machine behavior.
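To give a sense of the shape of these calculations, here is a minimal sketch in Python. The raw-counter field names, the interval bookkeeping, and the arithmetic shown are illustrative assumptions only; they are not the actual D5 R13 record layout and not Burg's machine-specific formulas.

```python
# Illustrative sketch only: hypothetical raw-counter names and simplified
# arithmetic, standing in for the machine-specific formulas in the Burg
# presentation.  Real reductions differ by CEC type (z10, z196, zEC12).

def reduce_interval(start, end, clock_ghz, interval_seconds):
    """Derive a few headline metrics for one logical PU over one CP Monitor
    interval, from raw counter values sampled at interval start and end."""
    cycles       = end["cycles"]       - start["cycles"]
    instructions = end["instructions"] - start["instructions"]
    l1_misses    = end["l1_misses"]    - start["l1_misses"]

    cpi     = cycles / instructions                                   # cycles per instruction
    l1mp    = 100.0 * l1_misses / instructions                        # L1 misses per 100 instructions
    lparcpu = 100.0 * cycles / (clock_ghz * 1e9 * interval_seconds)   # percent busy

    return {"CPI": round(cpi, 2), "L1MP": round(l1mp, 2), "LPARCPU": round(lparcpu, 2)}

# Made-up counter values for a 60-second interval on a 5.2 GHz CP:
start = {"cycles": 0, "instructions": 0, "l1_misses": 0}
end   = {"cycles": 2.8e11, "instructions": 4.0e10, "l1_misses": 1.2e9}
print(reduce_interval(start, end, clock_ghz=5.2, interval_seconds=60))
# {'CPI': 7.0, 'L1MP': 3.0, 'LPARCPU': 89.74}
```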

z/VM Performance Toolkit contains no support for analyzing the raw counter values. In other words, Perfkit has not been updated to do the calculations prescribed by Burg.

On this web site we have posted a reduction tool one can use to do the Burg calculations. This package contains these items:

  • A first exec, CPUMFINT, that extracts the raw counters and other data from a MONWRITE file, writing the extracted data to an intermediary CMS file we call the interim file.
  • A second exec, CPUMFLOG, that reads the interim file, applies the Burg formulas, and produces a formatted, time-indexed log report as output.
  • Ancillary or support execs used by CPUMFINT or CPUMFLOG.

The process of reducing the counters, then, amounts to this:

  1. Start with a MONWRITE file that contains D5 R13 records.
  2. Use the CPUMFINT tool to extract counter data from the MONWRITE file. CPUMFINT takes the MONWRITE file as input and produces the interim file as output. The interim file will have CMS filetype CPUMFINT.
  3. Use the CPUMFLOG tool to process the interim file. The CPUMFLOG tool applies the Burg formulas, does the appropriate calculations, and writes a report. The report file will have CMS filetype $CPUMFLG.

Specific invocation instructions are included in the downloadable package.

The CPUMFLOG tool uses only the basic counters and the extended counters in its calculations. The interim file does, however, also contain the problem-state counters and the crypto counters, provided the administrator enabled those counter sets for the partition on the SE. Those interested in analyzing the crypto or problem-state counters can do so by applying the formulas and techniques described in the Burg presentation.


Appearance of The CPUMFLOG Report

Metrics calculated from the CPU MF counters describe the performance experience of each logical PU in the partition over time. For each CP Monitor sample interval, for each logical PU, CPUMFLOG writes a report row calculated from the counter values for that interval. The resulting tabular report bears a vague resemblance to a Perfkit xxxxLOG report.

The columns of the report vary slightly according to CEC type. The various models have different cache structures and therefore warrant different sets of columns in their reports.

Here is an excerpt of a z196 report. The report is very wide; on this web page, for page rendering purposes, we have broken the columns into groups. The workload here was entirely contrived for internal lab purposes; the values in the report mean absolutely nothing as far as customer workload expectations are concerned.

_IntEnd_ LPU Typ __EGHZ__ LPARCPU_ PrbState
>>Mean>>   0  CP     5.21    89.89     0.00
>>Mean>>   1  CP     5.21    67.00     0.00
>>Mean>>   2  CP     5.21    56.79     0.00
>>Mean>>   3  CP     5.21    46.40     0.00
>>Mean>>   4  CP     5.21    36.87     0.00
>>Mean>>   5  CP     5.21    25.99     0.00
>>Mean>>   6  CP     5.21    16.63     0.00
>>Mean>>   7  CP     5.21    88.07     0.00
>>Mean>>   8  CP     5.21    84.79     0.00
>>Mean>>   9  CP     5.21    81.31     0.00
>>Mean>>  10  CP     5.21    83.04     0.00
>>Mean>>  11  CP     5.21    78.03     0.00
>>Mean>>  12  CP     5.21    79.28     0.00
>>Mean>>  13  CP     5.21    70.62     0.00
>>Mean>>  14  CP     5.21    59.50     0.00
>>Mean>>  15  CP     5.21    43.30     0.00
>>Mean>>  16  CP     5.21    14.50     0.00
>>Mean>>  17  CP     5.21     7.39     0.00
>>Mean>>  18  CP     5.21     3.06     0.00
>>Mean>>  19  CP     5.21     2.91     0.00
>>Mean>>  20  CP     5.21     2.58     0.00
>>Mean>>  21  CP     5.21     3.49     0.00
>>Mean>>  22  CP     5.21     3.14     0.00
>>Mean>>  23  CP     5.21     3.43     0.00
>>MofM>>             5.21    43.67     0.00
>>AllP>>                   1047.99

(continued)

__CPI___ _EICPI__ _EFCPI__ ESCPL1M_ __RNI___
    7.00     1.35     5.65   187.74     1.75
    6.50     1.55     4.94   196.38     1.62
    6.57     1.55     5.01   195.56     1.61
    6.49     1.61     4.89   197.02     1.59
    6.33     1.62     4.71   198.45     1.63
    6.17     1.71     4.47   198.97     1.58
    6.03     1.80     4.23   200.07     1.53
    7.10     1.30     5.80   187.70     1.79
    6.98     1.35     5.62   189.88     1.76
    7.01     1.35     5.66   188.33     1.75
    7.05     1.33     5.72   187.51     1.75
    6.91     1.39     5.53   189.32     1.72
    6.34     1.85     4.49   185.72     1.60
    6.27     1.87     4.39   188.83     1.58
    6.38     1.86     4.52   189.60     1.57
    6.16     1.94     4.22   201.24     1.54
    5.44     1.46     3.98   319.15     2.31
    5.43     1.49     3.94   314.91     2.24
    5.77     1.40     4.37   309.55     2.22
    4.94     1.61     3.33   424.61     2.37
    4.80     1.62     3.18   467.69     2.49
    4.84     1.63     3.21   467.30     2.49
    4.67     1.65     3.01   515.52     2.53
    4.63     1.64     3.00   489.29     2.51
    6.08     1.58     4.49   266.26     1.90

(continued)

_T1MSEC_ _T1CPU__ T1CYPTM_ PTEPT1M_
 7302.09    14.85    95.54    70.83
 4316.08    12.18    98.92    69.73
 3691.22    12.27    98.78    69.24
 2891.70    11.80    99.11    69.11
 2237.29    11.16    96.21    68.28
 1477.79    10.45    96.05    66.81
  891.43     9.72    94.56    64.19
 7478.71    15.59    95.91    70.42
 6910.61    15.13    96.99    71.03
 6678.17    15.26    97.09    70.90
 6789.97    14.94    95.47    71.05
 6165.82    14.47    95.74    70.95
 4954.78    13.98   117.22    41.62
 4217.75    13.48   118.34    42.21
 3597.14    13.61   117.97    42.93
 2316.35    12.13   119.05    42.96
  350.21     4.08    78.20    50.63
  165.27     4.25    78.84    49.91
   65.56     5.07    77.41    48.51
   29.18     2.28    63.94    40.29
   21.01     1.50    55.24    32.19
   28.63     1.53    51.08    32.19
   25.07     0.94    46.12    25.95
   25.86     1.17    50.50    28.45
 3026.15     9.66    88.93    54.60

(continued)

__L1MP__ __L2P___ __L3P___ __L4LP__ __L4RP__ __MEMP__
    3.01    32.31    32.33    26.03     0.77     8.57
    2.52    33.73    21.53    37.32     0.89     6.53
    2.56    33.72    22.02    36.93     0.85     6.48
    2.48    33.66    22.52    36.68     0.79     6.36
    2.37    33.91    18.24    40.64     0.81     6.40
    2.24    34.21    19.24    39.71     0.78     6.05
    2.11    34.32    19.87    39.30     0.84     5.67
    3.09    30.99    32.80    26.88     0.61     8.73
    2.96    30.90    33.41    26.52     0.59     8.58
    3.01    30.95    33.43    26.54     0.60     8.47
    3.05    32.19    32.71    25.77     0.77     8.56
    2.92    32.06    33.39    25.44     0.75     8.35
    2.42    37.80    10.71    44.58     0.92     6.00
    2.33    38.03    11.61    43.60     0.91     5.86
    2.38    37.88    11.90    43.43     0.97     5.84
    2.10    37.48    13.80    42.09     0.98     5.66
    1.27    34.55    10.07    11.20    40.79     3.40
    1.30    34.77    11.84    11.47    38.47     3.45
    1.51    34.52    14.07    11.44    36.10     3.87
    0.84    30.98     4.02    18.52    44.42     2.06
    0.73    28.66     2.85    17.38    49.54     1.57
    0.74    29.47     3.67    15.76    49.33     1.78
    0.62    27.46     1.99    17.81    51.46     1.29
    0.64    27.80     3.07    17.49    50.12     1.52
    2.05    33.01    17.54    28.44    15.54     5.46
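Because the report is a plain, time-indexed text log, it also lends itself to further post-processing with ordinary scripting. Here is a minimal sketch; the column order and field indexes are assumptions taken from the definition table that follows, so check them against your own report before relying on them.

```python
# Minimal post-processing sketch.  Assumes each $CPUMFLG data row carries the
# columns in the order the definition table describes (IntEnd, LPU, Typ, EGHZ,
# LPARCPU, PrbState, CPI, ...), separated by whitespace.  The real layout may
# differ slightly by CEC type, so treat the field indexes as assumptions.
from collections import defaultdict

def read_cpumflg(path):
    """Return {lpu: [(interval_end, lparcpu, cpi), ...]} for per-LPU trending."""
    series = defaultdict(list)
    with open(path) as f:
        for line in f:
            fields = line.split()
            # Keep only per-interval data rows, which start with an hh:mm:ss
            # stamp; skip headers and the >>Mean>>/>>MofM>>/>>AllP>> rows.
            if not fields or ":" not in fields[0]:
                continue
            interval_end, lpu = fields[0], int(fields[1])
            lparcpu, cpi = float(fields[4]), float(fields[6])
            series[lpu].append((interval_end, lparcpu, cpi))
    return series
```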

The table below gives definitions for each of the columns in the report.

Column Meaning
Basic LPU Statistics
IntEnd The hh:mm:ss of the CP Monitor interval-end time, in UTC.

The first flock of rows is marked ">>Mean>>" to indicate that the rows are the mean experience of each logical PU over the whole time range recorded in the MONWRITE file.

The special row ">>MofM>>", mean of means, is the average experience of the average logical PU over the whole time range of the MONWRITE file.

The special row ">>AllP>>", all processors, merely states the sum of the LPARCPU column, described later.

LPU The processor address, aka logical PU number, of the PU this row describes.
Typ The type of processor: CP, IFL, etc.
EGHZ Effective clock rate of the CEC, in GHz.
LPARCPU Percent busy of this logical PU as portrayed by the counters.
PrbState The percentage of instructions that were problem-state instructions.
Basic CPI Statistics
CPI Cycles per instruction. The average number of clock cycles that transpire between completion of instructions.
EICPI Estimated instruction complexity CPI, sometimes also known as "infinite CPI". This is the number of clock cycles instructions would take if they never, ever incurred an L1 miss. The word "infinite" comes from the wish, "If we but had infinite L1, this is how long the instructions would have taken."
EFCPI Estimated cache miss CPI, sometimes also known as "finite CPI". This is the number of clock cycles instructions are being delayed because of L1 misses. The word "finite" comes from the lament, "Because our L1 is finite, this is how much our CPI is elongating." If we had infinite L1, this number would be zero.
ESCPL1M Estimated sourcing cycles per L1 miss. When an L1 miss happens, this is how many clock cycles it takes to make things right.
RNI Relative nest intensity. A scalar that expresses how hard the caches are working to keep up with the demands of the CPUs. Higher numbers indicate higher intensity. Each CEC type's RNI formula is weighted in such a way that RNI values are comparable across CEC types.
Basic TLB Statistics
T1MSEC Miss rate of the Translation Lookaside Buffer (TLB), in misses per millisecond.
T1CPU Percent of total CPU utilization that is attributable to TLB misses.
T1CYPTM Number of cycles a TLB miss tends to cost.
PTEPT1M The PTE (page table entry) percent of all TLB misses.
Memory Cache (L1, etc.) Behavior
L1MP L1 miss percentage. This is the percent of memory references that incur an L1 miss.
LxxP Percent of L1 misses sourced from cache level xx.

On z10, the levels are L1.5 ("15"), L2 on this book ("2L"), or L2 on some other book ("2R").

On z196 or zEC12, the levels are L2 ("2"), L3 ("3"), L4 on this book ("4L"), or L4 on some other book ("4R").

MEMP Percent of L1 misses sourced from main memory.
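A few of these columns are arithmetically related, which is handy for sanity-checking a report row. Here is a minimal sketch in Python using the first >>Mean>> row (LPU 0) of the sample report above; these are consistency relationships implied by the definitions, not the Burg formulas themselves.

```python
# Values taken from the first >>Mean>> row (LPU 0) of the sample z196 report.
cpi, eicpi, efcpi = 7.00, 1.35, 5.65                            # CPI, EICPI, EFCPI
l2p, l3p, l4lp, l4rp, memp = 32.31, 32.33, 26.03, 0.77, 8.57    # sourcing columns

# Total CPI splits into the "infinite L1" portion and the L1-miss penalty portion.
assert abs(cpi - (eicpi + efcpi)) < 0.01

# Every L1 miss is sourced from somewhere, so the sourcing percentages add up
# to roughly 100 (within rounding).
assert abs((l2p + l3p + l4lp + l4rp + memp) - 100.0) < 0.1

print("sample row is internally consistent")
```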


What To Do With The Information

CPU MF counter data isn't like ordinary performance data, in that there is no z/VM or System z "knob" one can turn directly to affect the achieved values. For example, there's no "give me more L1" knob we could turn to increase the amount of L1 on the CEC if we felt something were lacking about our L1 performance.

For this reason, the CPU MF report is at risk of being labelled "tourist information" or "gee-whiz information". Some analysts might ask why we would bother looking at it at all if there isn't much that can be done to influence it.

It turns out there are some very useful things we can do with CPU MF information, even though we don't have cache adjusting knobs at our immediate disposal. In the rest of this article, we briefly explore some of them.

Probably the most useful thing to do with the CPU MF report is to use it as your workload's characterization index into the IBM Large Systems Performance Report (LSPR). The L1 miss percent L1MP and the RNI value together constitute the "LSPR hint", which in turn reveals which portion of the LSPR to consult when projecting your own workload's scaling or migration characteristics. For more information, see IBM's LSPR page.

One thing we can do to affect cache performance is to remember that all of the partitions running on the CEC are competing for the CEC's cache. Steps we take to keep the partitions' peak times from overlapping will help matters. If our workload is scheduled so that all partitions heat up at 9 AM and cool off at 6 PM, we might consider staggering the company's work so that the partitions heat up at different times. As an extension, if we had put all of the Europe partitions on one CEC, all of the North America partitions on a second, and all of the Asia partitions on a third, we might instead consider a placement less aligned with time zones, so that no CEC has all of its partitions hot at the same time.

If our CEC is hosting a mix of z/OS partitions and other partitions, we can affect cache performance by turning on z/OS HiperDispatch in the z/OS partitions. Doing this helps PR/SM and z/OS to shrink those partitions' cache influence, because z/OS HiperDispatch switches the z/OS partitions to something called vertical mode. For more information about vertical mode partitions, consult z/OS documentation.

Another thing we can do to affect cache performance is to tune the system's configurations of logical CPUs and virtual CPUs so that those two choices are right-sized for the workload. If a z/VM partition is a logical 16-way but is running only 425% busy on average with peaks at 715%, set it to be an 8-way instead of a 16-way. The same thing applies to virtual servers. If that big Linux guest runs only 115% busy on average with peaks of 280%, it probably should not be configured as a virtual 12-way. Set it to be a virtual 3-way or 4-way instead.
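A rough rule of thumb consistent with the numbers above is to size the N-way just above the observed peak rather than at the configured maximum. Here is a hedged sketch; the 10% headroom figure is an illustrative assumption, not a z/VM recommendation.

```python
import math

def suggested_nway(peak_busy_pct, headroom_pct=10.0):
    """Suggest a logical or virtual N-way from observed peak utilization,
    expressed in percent of one CPU (so a 16-way peaking at 715% busy passes
    peak_busy_pct=715).  Sizes to the peak plus a little headroom; this is a
    starting point for tuning, not a definitive recommendation."""
    return max(1, math.ceil(peak_busy_pct * (1 + headroom_pct / 100.0) / 100.0))

print(suggested_nway(715))   # 8  -- the logical 16-way example above
print(suggested_nway(280))   # 4  -- the virtual 12-way Linux guest example
```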

Speaking of tuning Linux virtual machines, customers report varying degrees of success with using the cpuplugd daemon to shut off unneeded virtual CPUs during off-peak times. If you have large N-way Linux guests, consider trying cpuplugd in a test environment, and if the tests work out for you, consider putting it into production.

Just as CPU counts can be right-sized, memory can also be right-sized. Take another look at those UPAGELOG reports for your virtual servers and the I/O rates to your virtual servers' swap extents. If your virtual servers are ignoring their swap extents, you can probably afford to decrease their memory sizes.