
IBM HyperPAV Support on z/VM

On October 31, 2006, IBM announced the plan to offer enhancements for Parallel Access Volumes (PAV) with support for HyperPAV on the IBM System Storage DS8000 series (M/T 2107). The HyperPAV capability was offered on z/OS 1.6 and later releases in November 2006.
Announcement letter: US ENUS106-811

We had been asked about this support for z/VM and understood your interest in it. On February 6, 2007, IBM announced that z/VM V5.3 supports the IBM Hyper Parallel Access Volume (HyperPAV) function optionally provided by the IBM System Storage DS8000 disk storage systems.

For the announcement letter and information about z/VM V5.3, see z/VM V5.3 resources.


IBM HyperPAV Support Overview

HyperPAV support complements the existing basic PAV support introduced in z/VM V5.2, on disk storage systems that support the function. HyperPAV can reduce the number of alias device addresses needed for parallel I/O operations, because HyperPAV aliases are bound to a base device dynamically for each I/O operation instead of statically, as basic PAV aliases are.

z/VM provides support of HyperPAV volumes as linkable minidisks for guest operating systems, such as z/OS, that exploit the HyperPAV architecture. This support is also designed to transparently provide the potential benefits of HyperPAV volumes for minidisks owned or shared by guests that do not specifically exploit HyperPAV volumes, such as Linux and CMS.

Using IBM HyperPAVs

z/VM provides support for the IBM Hyper Parallel Access Volume (HyperPAV) feature of IBM DASD subsystems. IBM DASD HyperPAV volumes must be defined to z/VM as 3390 Model 2, 3, or 9 DASD on a 3990 Model 3 or 6, 2105, or 2107 storage controller. 3380 track-compatibility mode for the 3390 Model 2 or 3 DASD is also supported.

Traditional PAV support operates by statically assigning one or more PAV alias subchannels to a specific PAV base device. The DASD administrator can manually reassign PAV aliases from one PAV base to another using the DASD subsystem's configuration menus, and certain software can dynamically "reassign" PAV aliases as well. When there are many PAV bases and aliases, the supply of available subchannels can begin to run out. This potential for subchannel exhaustion, along with the desire for simpler system operation, led to the creation of HyperPAV.

A Logical Subsystem (LSS) can operate in one of three modes: Non-PAV, PAV, or HyperPAV. When an LSS is in HyperPAV mode, there is one pool of HyperPAV bases and aliases that are shared within the LSS. Any HyperPAV alias in a pool can service I/O requests for any HyperPAV base in the same pool. Thus, the PAV concept of an alias being assigned to a particular base no longer exists; instead, the HyperPAV base-alias association exists only for the duration of each I/O operation on a HyperPAV alias. This pooling of HyperPAV bases and aliases can greatly reduce the number of aliases required: with HyperPAV, only as many aliases as are needed to meet the desired I/O performance objective for the LSS must be defined. Performance tuning thus moves from the volume level to the LSS level.
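The per-I/O binding behavior described above can be sketched in a few lines of Python. This is a conceptual model only, not z/VM code, and the device numbers are invented for illustration:

```python
# Conceptual sketch of a HyperPAV pool: any free alias in the LSS pool can
# serve an I/O for any base in the same pool; the base-alias association
# exists only for the duration of one I/O operation.

class HyperPavPool:
    def __init__(self, aliases):
        self.free_aliases = set(aliases)   # shared pool for the whole LSS
        self.bound = {}                    # alias -> base, only while an I/O is active

    def start_io(self, base):
        """Bind any free alias to `base` for the duration of one I/O."""
        if not self.free_aliases:
            return None                    # caller queues the I/O until an alias frees up
        alias = self.free_aliases.pop()
        self.bound[alias] = base
        return alias

    def end_io(self, alias):
        """I/O complete: the alias returns to the pool with no base association."""
        del self.bound[alias]
        self.free_aliases.add(alias)

pool = HyperPavPool(["4C81", "4C82"])      # invented alias device numbers
a1 = pool.start_io("4C00")                 # one base grabs an alias
a2 = pool.start_io("4C01")                 # a different base uses the other alias concurrently
pool.end_io(a1)
a3 = pool.start_io("4C02")                 # the freed alias is immediately reusable by a third base
```

The point of the sketch is that, unlike static PAV, no alias "belongs" to a base between I/O operations.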

Within the HyperPAV context, the concept of a "volume" becomes a bit more natural, in the sense that a HyperPAV alias subchannel no longer has a fixed association with a particular volume. With PAV, an alias subchannel was associated with a particular volume, which led to some confusion; in the HyperPAV world, there is no such association.

HyperPAV devices are defined within a Storage Controller when the proper Licensed Internal Codes (LICs) are installed and enabled. The LSS is configured as a PAV environment, and when the HyperPAV feature is enabled by z/VM, the static PAV aliases are converted to HyperPAV aliases and they are joined together to form the pool for the LSS. z/VM can be configured to operate each LSS in Non-PAV, PAV, or HyperPAV mode by using the new CU DASD statement in its configuration file and/or the new SET CU command.

PAV base subchannels are defined in IOCP as UNIT=3990, 2105, or 2107 on the CNTLUNIT statement and UNIT=3390B (or 3380B) on the IODEVICE statement. Alias subchannels are defined as UNIT=3990, 2105, or 2107 on the CNTLUNIT statement and UNIT=3390A (or 3380A) on the IODEVICE statement. Each base or alias subchannel can be assigned any available z/VM real device number. Use the IBM DASD subsystem configuration console to initially define which subchannels are base subchannels, which subchannels are alias subchannels, and which alias subchannels are associated with each base volume. Use the CP QUERY PAV command to view the current allocation of base and alias subchannels.
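Using the UNIT values named above, an IOCP definition for one LSS might look like the following abbreviated sketch. All device numbers, the control-unit number, and the CHPID paths are illustrative assumptions, and operands a complete IOCP requires (such as UNITADD and LINK) are omitted:

```
CU4C     CNTLUNIT CUNUMBR=4C00,PATH=(50,51),UNIT=2107
BASES    IODEVICE ADDRESS=(4C00,080),CUNUMBR=4C00,UNIT=3390B
ALIASES  IODEVICE ADDRESS=(4C80,008),CUNUMBR=4C00,UNIT=3390A
```

Here the sketch defines 128 base subchannels at 4C00-4C7F and 8 alias subchannels at 4C80-4C87 on the same control unit.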

Base and alias subchannels provide nearly identical functions for a volume. One exception is that "volume-wide" commands such as the Reserve and Release channel commands can be issued only to a base subchannel, but the resulting status applies to the associated alias subchannels as well.

Certain virtual HyperPAV operations, such as execution of the Read Configuration Data command, require the consistent use of the same real base or alias subchannel. To facilitate this, each virtual HyperPAV base and alias has an "assigned" real device subchannel, which can be displayed with the QUERY VIRTUAL vdev DETAILS and QUERY VIRTUAL PAV commands. The assignment is automatic and cannot be changed. The scheduling of I/O to an assigned device is handled automatically by z/VM during its analysis of the virtual channel program. Because each virtual HyperPAV base or alias must have a uniquely assigned real HyperPAV base or alias subchannel, you cannot have more virtual HyperPAV aliases than real HyperPAV aliases for an LSS.
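The unique-assignment constraint can be expressed as a tiny sketch. This is an illustration of the rule only, not z/VM's actual algorithm, and the device numbers are invented:

```python
# Sketch: each virtual HyperPAV base/alias gets a uniquely assigned real
# subchannel, so virtual aliases can never outnumber real aliases in an LSS.
def assign_real_subchannels(virtual_devs, real_devs):
    if len(virtual_devs) > len(real_devs):
        raise ValueError("more virtual HyperPAV devices than real subchannels in the LSS")
    # One-to-one, fixed assignment (z/VM makes it automatically; it cannot be changed).
    return dict(zip(virtual_devs, real_devs))

assignment = assign_real_subchannels(["E101", "E102"], ["4C81", "4C82", "4C83"])
```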

The Define Extent channel command specifies if a channel program can read and/or write data from or to a volume. For each volume, read operations are permitted concurrently over multiple base or alias subchannels. However, write operations are serialized on the volume when the cylinder ranges specified in the Define Extent channel command overlap with another active CCW chain in any other subchannel for the volume.
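The read/write serialization rule above can be modeled with a short sketch. This is a simplified model of the behavior described in the text, not channel-subsystem code:

```python
# Sketch of the Define Extent serialization rule: reads may run concurrently
# over multiple subchannels, but a write is serialized against any active
# channel program whose cylinder extent overlaps its own.

def extents_overlap(a, b):
    """Each extent is an inclusive (first_cylinder, last_cylinder) pair."""
    return a[0] <= b[1] and b[0] <= a[1]

def must_serialize(new_op, active_ops):
    """new_op and each entry of active_ops is (kind, extent), kind 'read' or 'write'.
    Concurrent reads are always allowed; a write conflicts with any overlapping
    active operation, and any active write conflicts with an overlapping newcomer."""
    kind, extent = new_op
    for other_kind, other_extent in active_ops:
        if (kind == "write" or other_kind == "write") and extents_overlap(extent, other_extent):
            return True
    return False
```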

A dedicated HyperPAV base volume or alias can be assigned to only one guest. I/O operations initiated through a HyperPAV alias can be directed only to base volumes that are ATTACHed or LINKed to the issuing virtual machine.

HyperPAV Pools

New with HyperPAV support is the concept of a pool. A pool consists of a collection of HyperPAV bases and the alias subchannels that can refer to them. A typical HyperPAV configuration can be depicted as follows:

Figure 1: Example DASD Logical Subsystems and Pools

A pool can contain up to 254 HyperPAV alias devices, and there is a limit of 16,000 pools in a z/VM configuration. Currently, there is a one-to-one correspondence of pools and LSSs. Base disks are assigned to a specific pool and aliases within the same pool can be used to access the base.
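The stated limits can be captured in a small validation sketch. The limit values come from the text above; the data structure is an assumption made for illustration:

```python
MAX_ALIASES_PER_POOL = 254   # maximum HyperPAV alias devices per pool
MAX_POOLS = 16000            # maximum pools in a z/VM configuration

def validate_pools(pools):
    """pools maps an LSS id to its set of alias device numbers
    (one pool per LSS, matching the one-to-one correspondence above)."""
    if len(pools) > MAX_POOLS:
        raise ValueError("too many pools for one z/VM configuration")
    for lss, aliases in pools.items():
        if len(aliases) > MAX_ALIASES_PER_POOL:
            raise ValueError(f"pool for LSS {lss} exceeds {MAX_ALIASES_PER_POOL} aliases")

validate_pools({0x4C: {"4C81", "4C82", "4C83"}})   # a valid, tiny configuration
```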

Using HyperPAV Dedicated DASD

Dedicated HyperPAV bases operate in the traditional z/VM manner. To use dedicated HyperPAV alias devices, the guest operating system must include support for managing and serializing the volume's data across the subchannels; z/VM acts only as the "pipe" between the guest and the hardware. Once the necessary base and associated alias subchannels are attached to the guest, the guest must manage their use. z/VM restricts dedicated HyperPAV alias I/O operations to accessing only HyperPAV bases that are attached to the guest.

In a dedicated environment, the performance benefits of HyperPAV are entirely up to the operating system running in the virtual machine. z/VM will not make any attempt to optimize or alter the I/O flowing through the base and alias subchannels.

Using HyperPAV Minidisks

In the context of HyperPAV, the real I/O scheduling algorithms for full-pack and non-full-pack minidisks behave in the same manner.

A guest virtual machine can define one or more minidisk volumes that exist on a real HyperPAV volume. The real HyperPAV volume has a real base and is associated with a pool that also contains zero or more HyperPAV aliases. All HyperPAV I/O operations that are directed to a minidisk volume are optimized by z/VM's automatic selection of an appropriate real HyperPAV base or alias subchannel for the underlying real volume. In other words, the scheduling of I/O to a virtual device will be dynamically scheduled and multiplexed on any real HyperPAV base or alias subchannel that is defined in the hardware. This gives z/VM the flexibility to choose a real HyperPAV base or alias subchannel that is not in use at the time. For example, if users GUEST1 and GUEST2 simultaneously issue an I/O request to two different minidisk volumes that are defined on the same real underlying volume, via their respective virtual subchannels, the result would be that one I/O would be executed on the real base subchannel and the other, simultaneously, would be executed on a real alias subchannel.

A real HyperPAV alias must be attached to the VM SYSTEM to have minidisk I/O issued to it. Use the SYSTEM_Alias configuration file statement to accomplish this at VM IPL (e.g., SYSTEM_Alias 4C81-4C87).

Using HyperPAV Minidisks with Exploiting Operating Systems

An exploiting operating system is one that is capable of controlling the HyperPAV architecture and is configured to control the features of HyperPAV. Such an operating system understands how to control and utilize virtual HyperPAV aliases. Examples include z/OS and z/VM itself.

Virtual HyperPAV base devices are those defined as full-pack minidisks on real HyperPAV base devices. Associated virtual HyperPAV alias devices can be subsequently defined using the DEFINE HYPERPAVALIAS command (either in the user directory or after the user is logged on). Virtual HyperPAV devices can be displayed using the QUERY VIRTUAL PAV command. For each LSS, the number of virtual HyperPAV aliases for a guest cannot exceed the number of real HyperPAV aliases defined in the hardware for the underlying real LSS. Virtual HyperPAV devices cannot be defined for non-full-pack minidisks.

The SET MDCACHE command is not valid for an alias HyperPAV minidisk volume. Cache settings are applicable only for base HyperPAV minidisk volumes.

To define a full-pack minidisk for an exploiting operating system at virtual E100 with virtual aliases at E101, E102, and E103, you can code statements like the following in the user directory:

MDISK E100 3390 0 END PAK001
COMMAND DEFINE HYPERPAVALIAS E101 FOR BASE E100
COMMAND DEFINE HYPERPAVALIAS E102 FOR BASE E100
COMMAND DEFINE HYPERPAVALIAS E103 FOR BASE E100

The following is a typical example of several virtual machines that exploit HyperPAV volumes. Both real volumes are full-pack minidisks shared among the five guests (virtual E100 on PAK001 and virtual E200 on PAK002). Note that there are more HyperPAV minidisk volumes (5 MDISKs) than real volumes (2). z/VM will multiplex I/O operations on the real base and alias subchannels for each:

Figure 2: Example HyperPAV Minidisk Configuration for Exploiting Guests

Note that in the above example, there is no reference to pool numbers. The purpose of the "FOR BASE nnnn" option on the DEFINE HYPERPAVALIAS command is to ensure that the virtual alias is assigned to an appropriate real alias in the same pool as the base. If, in the above example, PAK001 and PAK002 are in the same real LSS, then all the devices are assigned to the same pool, and the E1xx and E2xx aliases can be used to issue I/O requests to PAK001 and/or PAK002. If PAK001 and PAK002 are in different LSSs, then the E1xx aliases are in a unique pool and can access only PAK001; likewise, the E2xx aliases can access only PAK002.

Using HyperPAV Minidisks with Non-Exploiting Operating Systems

A non-exploiting operating system is one that is not configured to control the features of HyperPAV or has no knowledge of the HyperPAV architecture. Although the guest operating system will not use its minidisk volumes in HyperPAV mode, z/VM will still provide HyperPAV performance optimization across multiple non-exploiting guests. Performance gains can be realized only when full-pack minidisks are shared among guests with multiple LINK statements or when multiple non-full-pack minidisk volumes reside on a real HyperPAV volume. Performance gains are achieved by transparently multiplexing the I/O operations requested on each guest minidisk volume over the appropriate real HyperPAV base and alias subchannels. z/VM V4.4, V5.1, V5.2, VSE, TPF, and CMS are examples of non-exploiting operating systems. z/OS can also be considered non-exploiting, depending on how it is configured.

To define a full-pack minidisk for a non-exploiting operating system at virtual E100, you can code the following statements in the user directory:

MDISK E100 3390 0 END PAK001

To define a non-full-pack minidisk for a non-exploiting operating system at virtual E100, you can code the following MDISK statement in the owning guest's directory entry (here, GUEST1's); other guests can then share the minidisk with a LINK statement in their own entries:

MDISK E100 3390 100 200 PAK002
LINK GUEST1 E100 3100 MW

The following is a typical example of several non-exploiting guest virtual machines using HyperPAV for enhanced performance:

Figure 3: Example HyperPAV Minidisk Configuration for Non-Exploiting Guests

Note that in the above configuration, there are five links to real volume PAK001. For these five virtual HyperPAV base subchannels, there is one real HyperPAV base and six real HyperPAV alias subchannels that will be used to perform the I/O. z/VM will concurrently multiplex the I/O from the GUEST1-GUEST5 E100 virtual bases onto the real 4580, 4581, 4582, 4583, 4585, 4586, and 4587 subchannels as they are available. Using this strategy, it is possible to have many guest virtual machines sharing a real DASD volume with z/VM dynamically handling the selection of real HyperPAV base and alias subchannels.

I/O operations to the minidisks defined on PAK002 will likewise be optimized by z/VM's dynamic selection of real HyperPAV base and alias subchannels 4584, 4581, 4582, 4583, 4585, 4586, and 4587.

Due to their dynamic nature, HyperPAV aliases have additional CCW controls that are not present for PAV aliases. For example, under CMS it is possible (but not recommended) to use the DDR program to access a PAV alias, because there is a fixed association of the PAV alias with a specific PAV base. Since DDR is non-exploiting (it does not understand how to control a HyperPAV alias), any attempt to access a HyperPAV alias will result in an I/O error, because it is unclear which base is intended.

Note: CMS is a non-exploiting operating system, and therefore the use of the class G DEFINE HYPERPAVALIAS command is not recommended. CMS itself is not HyperPAV-aware. When multiple CMS volumes are defined on a real HyperPAV volume, I/O operations by CMS can be concurrently scheduled on any real HyperPAV base or alias subchannel by z/VM. The CMS user does not need to take any action for this to occur.

z/VM Restrictions on Using HyperPAV
  1. A virtual alias subchannel cannot be IPLed.
  2. You should not use HyperPAV alias volumes as z/VM installation volumes (for example, do not use for the 520RES volume).
  3. z/VM Paging and SPOOLing operations do not take advantage of HyperPAV. It is recommended that PAGE and SPOOL areas be placed on DASD devices dedicated to this purpose.
  4. Virtual HyperPAV devices can be defined for only full-pack minidisks. They cannot be defined for non-full-pack minidisks.
  5. For a HyperPAV exploiting guest, it is recommended to avoid defining a mixture of dedicated HyperPAV alias devices and full-pack HyperPAV alias devices for the same underlying real LSS. Dedicated HyperPAV aliases can be associated only with dedicated HyperPAV bases, and full-pack HyperPAV aliases can be associated only with full-pack HyperPAV bases. If a mixture is defined for an LSS, the full complement of alias devices cannot be exploited for each base device.
  6. CMS does not support virtual HyperPAV alias subchannels.
  7. Diagnose codes X'18', X'20', X'A4', X'250', and the *BLOCKIO System Service do not support HyperPAV alias devices. I/O issued to a HyperPAV alias via one of these interfaces will be rejected because a means for specification of the associated base device is not provided.

Reference Documents
  • Creating and Updating a User Directory chapter in z/VM CP Planning and Administration: provides information about the user directory statements referenced on this page
  • DASD Sharing chapter in z/VM CP Planning and Administration: the source of this HyperPAV web page
  • CP Commands chapter of z/VM CP Commands and Utilities Reference: documents the CP commands referenced on this page
