
Accounting for Virtualized Network Devices

z/VM 4.3.0 added accounting to IUCV connections, VM guest LAN connections, and virtual CTC connections. The accounting logic accrues the number of bytes moved. For a VM guest LAN, the accrual is separated according to whether the data flow to a router virtual machine or to a non-router virtual machine, a distinction drawn from entries in the CP directory.
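As a rough illustration only (the actual CP code and data structures are not shown in this report, and the names below are invented for this sketch), the accrual amounts to keeping per-connection byte counters and incrementing the appropriate one on every data transfer:

    /* Illustrative sketch only -- not actual CP code.  Byte accounting
     * for a virtualized network connection, with VM guest LAN traffic
     * split by whether the peer is a router virtual machine (a property
     * that would come from the peer's CP directory entry). */
    #include <stdint.h>

    struct vnet_acct {
        uint64_t bytes_to_router;      /* bytes moved to router guests     */
        uint64_t bytes_to_nonrouter;   /* bytes moved to non-router guests */
    };

    /* Accrue one data transfer of 'bytes' bytes. */
    static void vnet_acct_accrue(struct vnet_acct *acct,
                                 uint64_t bytes, int peer_is_router)
    {
        if (peer_is_router)
            acct->bytes_to_router += bytes;
        else
            acct->bytes_to_nonrouter += bytes;
    }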

In this experiment we sought to determine whether accruing this accounting information had an impact on the performance of the communication link. We measured link throughput in transactions per second and link resource consumption in CPU time per transaction.

We ran the experiment only for a VM guest LAN in HiperSockets mode. Based on our review of the code involved, we would expect a similar performance effect for VM guest LANs in QDIO mode, virtual CTC connections, and IUCV connections.

We found that collecting accounting data did not significantly affect networking performance.

Hardware

2064-109 LPAR with 2 dedicated CPUs, 1 GB of real storage, and 2 GB of XSTORE.

Software

z/VM 4.3.0. A 2.4.7-level internal Linux development driver. This was the same Linux used for the z/VM 4.2.0 Linux networking performance experiments. To produce the network loads, we used an IBM internal tool that can induce networking workloads for selected periods of time. The tool is able to record the transaction rates and counts it experiences during the run.

Configuration

Two Linux guests connected to one another through a VM guest LAN. Each Linux guest was a 512 MB virtual uniprocessor configured with no swap partition. The MTU size was 56 KB for all experiments.

Experiment

We ran the following workloads (1) on this configuration, each with accounting turned off and then with accounting turned on; a sketch of the request-response traffic pattern follows the list:

  • Request-response, 200/1000, 50 concurrent connections

  • Connect-request-response, 64/8192, 50 concurrent connections

  • Streaming get, 20/20M, 50 concurrent connections
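The following sketch illustrates the request-response (RR) traffic pattern named in the first item; it is not the IBM internal load tool, and the function name and constants are invented here from the 200/1000 workload description:

    /* Sketch of one request-response (RR) connection: send a 200-byte
     * request, read a 1000-byte reply, and count that as one transaction.
     * Illustration only -- not the internal load tool used for these runs. */
    #include <string.h>
    #include <unistd.h>

    #define REQ_SIZE  200
    #define RESP_SIZE 1000

    /* Run up to 'count' RR transactions on an already-connected socket;
     * returns the number of transactions actually completed. */
    long rr_client(int sock, long count)
    {
        char req[REQ_SIZE], resp[RESP_SIZE];
        long done = 0;

        memset(req, 'x', sizeof(req));
        while (done < count) {
            if (write(sock, req, sizeof(req)) != (ssize_t)sizeof(req))
                break;
            size_t got = 0;
            while (got < sizeof(resp)) {   /* the reply may arrive in pieces */
                ssize_t n = read(sock, resp + got, sizeof(resp) - got);
                if (n <= 0)
                    return done;
                got += (size_t)n;
            }
            done++;
        }
        return done;
    }

The connect-request-response (CRR) workload follows the same pattern except that each transaction sets up and tears down its own connection, and the streaming get pulls back a large response (20 MB, as the 20/20M notation indicates) for each small request.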

For each run we collected wall clock duration and CPU consumption. We also collected transaction rate information from the output of the network load inducer.
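For reference, the figures reported below reduce to simple ratios of these raw measurements; the helper names in this sketch are ours, not the measurement tooling's:

    /* How the derived metrics in the tables can be formed from the raw
     * measurements (illustrative helpers; names are invented here). */

    /* Throughput: transactions completed per elapsed second. */
    double tx_per_sec(double transactions, double elapsed_sec)
    {
        return transactions / elapsed_sec;
    }

    /* Cost: CPU milliseconds consumed per transaction. */
    double cpu_msec_per_tx(double cpu_sec, double transactions)
    {
        return cpu_sec * 1000.0 / transactions;
    }

    /* Percent change from the accounting-off run to the accounting-on run. */
    double pct_change(double off_value, double on_value)
    {
        return (on_value - off_value) / off_value * 100.0;
    }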

Results

The following tables compare, for each workload, the results of the accounting-off and accounting-on runs.

Table 1. Transactions Per Second

Accounting        RR   % change       CRR   % change     STRG   % change
OFF         15998.35               3578.88                 8.40
ON          15844.28       -1.0    3532.52       -1.3      8.38       -0.2
Note: 2064-109, LPAR with 2 dedicated CPUs, 1 GB real, 2 GB XSTORE, LPAR dedicated to these runs. RAMAC-1 behind 3990-6. z/VM 4.3.0. Linux 2.4.7, 31-bit, internal lab driver. 512 MB Linux virtual machine, no swap partition, Linux DASD is DEDICATEd volume.

Table 2. CPU Per Transaction (msec)

Accounting        RR   % change       CRR   % change     STRG   % change
OFF             0.12                  0.54                180.91
ON              0.12        0.0       0.55        1.9     181.26        0.2
Note: 2064-109, LPAR with 2 dedicated CPUs, 1 GB real, 2 GB XSTORE, LPAR dedicated to these runs. RAMAC-1 behind 3990-6. z/VM 4.3.0. Linux 2.4.7, 31-bit, internal lab driver. 512 MB Linux virtual machine, no swap partition, Linux DASD is DEDICATEd volume.

Conclusion

This enhancement does not appreciably degrade the performance of VM guest LAN connections in the configurations we measured.


Footnotes:

(1) For more description and explanation of these workloads, see the "Linux Networking Performance" section of this report.
