

Exploiting virtual machine technology for the development and test of z/OS system solutions

Updated: 15 Feb. 2011

This page provides reference information and links to resources you may find useful when running z/OS as a guest of VM.

z/OS News

Publications, Redbooks, and Redpapers

  • z/VM V6.1: Running Guest Operating Systems PDF
    z/VM V5.4: Running Guest Operating Systems PDF
    This book is intended to help you to plan for and to operate guest operating systems under z/VM®. It also includes sample execs to help you automate certain tasks.
  • Using z/VM for Test and Development Environments: A Roundup
    This IBM Redbook shows the strengths of z/VM and how you can use them to create a highly flexible test and production environment. Among the capabilities covered: you can run Linux on z/VM, run a sysplex under z/VM, develop code under z/VM for z/TPF, and provision Linux guests under z/VM. A virtual switch (VSWITCH) allows you to connect all of your guests (all operating systems that run under z/VM) easily to the network, and you can simulate your production environment on a sysplex.
  • Redpaper about z/OS test environments on z/VM
    Multiple z/OS Virtual Machines on z/VM
    This IBM Redpaper describes some of the possible ways to configure a z/VM system and a set of z/OS virtual machines for use in testing z/OS-based tools and products. (April 2009)

Parallel Sysplex

z/VM Storage Management

VM FAQs

Related links:

Support considerations for running z/OS on z/VM
This section contains:



  • Maximum of 24,576 virtual devices

    z/VM limits an individual guest to having 24,576 (24K) virtual devices. The architecture imposes a limit of 65,536 (64K) devices in a subchannel set and allows up to four subchannel sets.

    The limit of 24,576 is an arbitrary one that was established during the development of VM/ESA. Each virtual device has an associated control block (a VDEV) that consumes real memory resources. At present, one page can hold 13 VDEVs, so 24,576 VDEVs would consume 1890.5 pages or 7.4 megabytes of memory.

    Most guests never even approach this limit. However, if you wanted to create a duplicate of a large production z/OS system under z/VM, you might require more virtual devices than the limit allows, preventing you from creating a system with all the required devices. To IBM's knowledge, this limit has not been a problem for customers; if it does become an issue, submitting a requirement with business justification for removing it would be appropriate.

    Circumventing this limitation would be relatively trivial. The remedy would probably take the form of a guest-specific virtual device limit, designated via the User Directory, and perhaps a default limit established in the System Configuration File. However, in the absence of a customer requirement, there is no reason to pursue these enhancements.
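The memory cost quoted above can be reproduced with a little arithmetic. The sketch below takes the 13-VDEVs-per-page figure and the 4 KB page size from the text; it is illustrative only:

```python
import math

MAX_VDEVS = 24576      # per-guest virtual device limit
VDEVS_PER_PAGE = 13    # VDEV control blocks that fit in one page
PAGE_BYTES = 4096      # size of one page of real memory

# Whole pages needed to hold the VDEVs for a guest at the limit
pages = math.ceil(MAX_VDEVS / VDEVS_PER_PAGE)

# Real memory consumed, in megabytes
megabytes = pages * PAGE_BYTES / (1024 * 1024)

print(pages, round(megabytes, 1))  # 1891 7.4
```

Rounding up to whole pages gives 1891 pages, or about 7.4 megabytes, matching the figure above.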
     

  • Dynamic I/O Configuration devices

    Dynamic I/O configuration is a mechanism that allows an operating system to dynamically add, change, or remove I/O resources from its I/O configuration. Channel paths, control units, and devices can be managed in this way. In the z/OS environment, this facility is provided by the Hardware Configuration Definition (HCD) tool. A graphical user interface, Hardware Configuration Management (HCM), runs on a workstation and simplifies the use of HCD. HCD is also supported in the z/VM environment and can be accessed via HCM. In addition, z/VM has built-in commands to enable I/O configuration management without HCD and HCM.

    z/VM does not allow a guest to use the Dynamic I/O Configuration interfaces to manage its virtual I/O configuration. Adding this support would not significantly extend the capabilities of a guest z/OS system, because dynamic changes to the I/O configuration are not used very extensively: they are usually made in response to physical changes to the configuration, which occur over relatively long periods rather than minute to minute. In addition, other z/VM facilities provided through its command interface allow devices to be added, changed, and removed.

    Aside from testing HCD and HCM, there would be little value in supporting dynamic I/O configuration. As such, this should not present an issue to customers.

    There are no plans to virtualize dynamic I/O configuration capabilities and no intentions to reconsider this decision unless there is a significant change in the breadth of the technology's applicability.
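As an illustration of the command-interface alternative, a device can be added to or removed from a running guest's virtual configuration with CP commands along these lines (the device numbers and the user ID ZOSGST1 are hypothetical):

```
* Give real device 1234 to guest ZOSGST1 as virtual device 0234
ATTACH 1234 TO ZOSGST1 AS 0234
* Later, remove virtual device 0234 from the guest
DETACH 0234 FROM ZOSGST1
```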

  • Dynamic Storage Reconfiguration

    Dynamic Storage Reconfiguration (DSR) is a mechanism that allows an operating system to dynamically add memory resources to and remove them from its configuration. Central and expanded storage can each be defined with initial and reserved allocations. After a manual action (e.g., deactivating another logical partition), some or all of the reserved memory amounts may become available and can be varied on-line programmatically. Subsequently, these storage areas may be varied off-line.

    z/VM V5.4 introduced support to allow a guest to use the Dynamic Storage Reconfiguration interfaces to manage its central virtual memory configuration. Expanded storage can be dedicated to a guest, but its size cannot be adjusted dynamically. As well, z/VM can exploit central storage DSR for its own use.

    In an environment that cannot sustain an outage in order to resize the z/VM logical partition, the ability to add central storage resources dynamically can offer significant value. z/VM's central storage size can be increased using the SET STORAGE command. Removing storage from z/VM itself is not supported: the restrictions that would have to be placed on the use of dynamically added storage, so that it could later feasibly be given up, would have made the support insufficiently useful. Enabling dynamic storage reconfiguration for z/VM requires some advance planning to define a quantity of reserved storage in its LPAR activation profile.

    Guest DSR support enables guests, including those running z/OS, to both add and remove central storage dynamically. The z/VM DEFINE STORAGE command permits the amounts of guest reserved and standby storage to be established for DSR purposes.
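For example, a guest that should be able to grow its central storage via DSR might be prepared and the z/VM system itself resized with commands like the following (the sizes are illustrative; see the CP Commands and Utilities Reference of your release for the exact operand forms):

```
* Guest setup: 2G of initial central storage, with a further
* 1G of standby storage the guest can bring online via DSR
DEFINE STORAGE 2G STANDBY 1G
* Host side: grow z/VM's own central storage, drawing on
* reserved storage defined in the LPAR activation profile
SET STORAGE 8G
```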
     

  • Parallel Access Volume (PAV) Support

    PAV support provides a mechanism for issuing multiple I/O operations concurrently to a DASD volume, enabling I/O throughput to be increased. In a DASD subsystem, one or more alias volumes can be associated with a base volume to enable multiple I/O operations to be started concurrently. While certain operations require the use of the base volume, most can be started using either the base or one of its aliases.

    VM has supported dedicated use of PAV for many years; this support is explained in detail on the Initial PAV Support page. However, PAV was not supported for minidisks, and the guest operating system had to be PAV-aware in order to obtain the benefits of increased throughput. In May 2006, z/VM PAV support was extended to eliminate these shortcomings. With the availability of the PTF for APAR VM63952, PAV is now also supported for minidisks, allowing multiple z/OS guests to have virtual alias devices associated with virtual base volumes. In addition, PAV is exploited by z/VM for some of its own I/O operations and for those of its guests, whether or not those guests are PAV-aware.

    A comprehensive description of what is supported is provided on the z/VM PAV Support page.

    z/VM does not support dynamic activation of PAVs in response to I/O demand. Rather, it requires a fixed assignment of aliases to bases. This may introduce some operational complexity and may require additional effort to ensure that there are sufficient aliases assigned to each base to support the anticipated I/O load. In some situations, this might not be possible. Removing the limitations of enhanced PAV support is planned for delivery in z/VM 5.3 with exploitation of HyperPAV.
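As a sketch of the fixed base-to-alias assignment, a full-pack minidisk with PAV aliases might be defined in the guest's User Directory entry roughly as follows (the volume serial, device numbers, and alias range are hypothetical; consult the z/VM PAV Support page for the exact directory syntax):

```
* Full-pack minidisk base at virtual device 0200
MDISK 0200 3390 0 END VMPK01 MW
* Fixed assignment of aliases, drawn from real alias
* devices 0201-0203 on the same DASD subsystem
DASDOPT PAVALIAS 0201-0203
```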

     

  • Specialty Engine Support

    System z specialty engines provide support for designated workloads and are less expensive than general-purpose CPs. Integrated Facility for Linux (IFL) engines are intended to run Linux and z/VM systems with Linux guests. IBM System z Application Assist Processors (zAAPs) are designed to provide a cost-effective execution environment for Java applications under the control of the IBM Java Virtual Machine (JVM) on z/OS. IBM System z9 Integrated Information Processors (zIIPs) are the latest specialty processors, designed to help improve resource optimization and lower the cost for eligible workloads. This includes certain DB2 processing, enhancing the role of the mainframe as the data hub of the enterprise. z/OS and z/OS.e exploit zIIPs to offload software system overhead from standard Central Processors.

    z/VM V5.3 and later provide guest support for specialty engines. Virtual zIIPs, zAAPs, and IFLs may be defined as part of guest configurations and are implemented using one of the following methods:

    • Simulation, where z/VM dispatches virtual specialty engines for guest virtual machines on real Central Processors (CPs).
    • Virtualization, where z/VM dispatches virtual specialty engines for guest virtual machines on real specialty engines of the matching type, if such engines are available in the real z/VM processor configuration.

    The virtual specialty engine types of zIIP, zAAP, and IFL are externalized by several z/VM commands, including DEFINE CPU, QUERY VIRTUAL CPUS, and INDICATE USER.

    Guest support for specialty engines enables testing the associated Java and database applications that exploit them and evaluating the amount of benefit to be expected from installing these engines in the real environment. Virtualization support for these engines also enables additional resources to be used by z/VM guests without increasing software license charges. While modeling tools can be used to estimate benefits, guest support provides significant additional value for z/OS customers by allowing the actual workloads to be executed and the offload benefits to be measured.

    As the use of specialty engines continues to expand, z/VM guest support is poised to provide a platform on which customers can explore their benefits. IBM recently announced an enhancement to the z/OS Communications Server to move a portion of its IPSec processing to zIIPs. IBM also issued a statement of direction to enable the XML System Service element of z/OS to take advantage of zIIPs and zAAPs. z/VM will support these changes for z/OS guests as soon as they are available.

    Details about the z/VM support for specialty engines are available in the publication z/VM: Running Guest Operating Systems.
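For instance, a virtual zIIP could be added to a virtual machine and then displayed with the commands named above (CPU address 01 is an arbitrary choice):

```
* Add a virtual zIIP at CPU address 01 to the virtual machine
DEFINE CPU 01 TYPE ZIIP
* Display the virtual CPUs, including their types
QUERY VIRTUAL CPUS
```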